Dataset schema (each field with its dtype and the observed range of values or string lengths):

| field | dtype | range |
|---|---|---|
| hexsha | stringlengths | 40–40 |
| size | int64 | 6–14.9M |
| ext | stringclasses | 1 value |
| lang | stringclasses | 1 value |
| max_stars_repo_path | stringlengths | 6–260 |
| max_stars_repo_name | stringlengths | 6–119 |
| max_stars_repo_head_hexsha | stringlengths | 40–41 |
| max_stars_repo_licenses | sequence | |
| max_stars_count | int64 | 1–191k |
| max_stars_repo_stars_event_min_datetime | stringlengths | 24–24 |
| max_stars_repo_stars_event_max_datetime | stringlengths | 24–24 |
| max_issues_repo_path | stringlengths | 6–260 |
| max_issues_repo_name | stringlengths | 6–119 |
| max_issues_repo_head_hexsha | stringlengths | 40–41 |
| max_issues_repo_licenses | sequence | |
| max_issues_count | int64 | 1–67k |
| max_issues_repo_issues_event_min_datetime | stringlengths | 24–24 |
| max_issues_repo_issues_event_max_datetime | stringlengths | 24–24 |
| max_forks_repo_path | stringlengths | 6–260 |
| max_forks_repo_name | stringlengths | 6–119 |
| max_forks_repo_head_hexsha | stringlengths | 40–41 |
| max_forks_repo_licenses | sequence | |
| max_forks_count | int64 | 1–105k |
| max_forks_repo_forks_event_min_datetime | stringlengths | 24–24 |
| max_forks_repo_forks_event_max_datetime | stringlengths | 24–24 |
| avg_line_length | float64 | 2–1.04M |
| max_line_length | int64 | 2–11.2M |
| alphanum_fraction | float64 | 0–1 |
| cells | sequence | |
| cell_types | sequence | |
| cell_type_groups | sequence | |
Example row 1:
hexsha: d0171a5ca639f23790a7feacbf6d5416212bcb5a
size: 278,278
ext: ipynb
lang: Jupyter Notebook
max_stars_repo_path: day2_visualization.ipynb
max_stars_repo_name: wudzitsu/dw_matrix_car
max_stars_repo_head_hexsha: d8be5236684057b96dd6e54302da69a96f05fa9e
max_stars_repo_licenses: [ "MIT" ]
max_stars_count: null
max_stars_repo_stars_event_min_datetime: null
max_stars_repo_stars_event_max_datetime: null
max_issues_repo_path: day2_visualization.ipynb
max_issues_repo_name: wudzitsu/dw_matrix_car
max_issues_repo_head_hexsha: d8be5236684057b96dd6e54302da69a96f05fa9e
max_issues_repo_licenses: [ "MIT" ]
max_issues_count: null
max_issues_repo_issues_event_min_datetime: null
max_issues_repo_issues_event_max_datetime: null
max_forks_repo_path: day2_visualization.ipynb
max_forks_repo_name: wudzitsu/dw_matrix_car
max_forks_repo_head_hexsha: d8be5236684057b96dd6e54302da69a96f05fa9e
max_forks_repo_licenses: [ "MIT" ]
max_forks_count: null
max_forks_repo_forks_event_min_datetime: null
max_forks_repo_forks_event_max_datetime: null
avg_line_length: 278,278
max_line_length: 278,278
alphanum_fraction: 0.938687
cells:
[ [ [ "import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns", "_____no_output_____" ], [ "#!pip install --upgrade tables", "_____no_output_____" ], [ "cd \"/content/drive/My Drive/Colab Notebooks/dw_matrix/matrix_two/dw_matrix_car\"", "/content/drive/My Drive/Colab Notebooks/dw_matrix/matrix_two/dw_matrix_car\n" ], [ "df = pd.read_hdf('data/car.h5')", "_____no_output_____" ], [ "df.shape", "_____no_output_____" ], [ "df.columns.values", "_____no_output_____" ] ], [ [ "## Wizualizacja danych", "_____no_output_____" ] ], [ [ "df.price_value.hist(bins=100);", "_____no_output_____" ], [ "df.price_value.max()", "_____no_output_____" ], [ "df.price_value.describe()", "_____no_output_____" ], [ "df.groupby(['param_marka-pojazdu'])['price_value'].mean()", "_____no_output_____" ], [ "(\n df\n .groupby(['param_marka-pojazdu'])['price_value']\n .agg(np.mean)\n .sort_values(ascending=False)\n .head(50)\n \n).plot(kind='bar', figsize=(20,5))", "_____no_output_____" ], [ "(\n df\n .groupby(['param_marka-pojazdu'])['price_value']\n .agg((np.mean, np.median, np.size))\n .sort_values(by='size', ascending=False)\n .head(50)\n \n).plot(kind='bar', figsize=(15,5), subplots=True)", "_____no_output_____" ], [ "def plotter(feat_groupby, feat_agg='price_value', agg_funcs=[np.mean, np.median, np.size], feat_sort='mean', top=50, subplots=True):\n return (\n df\n .groupby(feat_groupby)[feat_agg]\n .agg(agg_funcs)\n .sort_values(by=feat_sort, ascending=False)\n .head(top)\n ).plot(kind='bar', figsize=(15,5), subplots=subplots)", "_____no_output_____" ], [ "plotter('param_marka-pojazdu')", "_____no_output_____" ], [ "plotter('param_model-pojazdu', feat_sort='size')", "_____no_output_____" ], [ "plotter('param_model', feat_sort='size')", "_____no_output_____" ], [ "plotter('param_kraj-pochodzenia')", "_____no_output_____" ], [ "plotter('param_color')", "_____no_output_____" ], [ "", "_____no_output_____" ] ] ]
[ "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
Example row 2:
hexsha: d01725c6ac36cc081d6fd5a6831022de9f6cdaae
size: 20,367
ext: ipynb
lang: Jupyter Notebook
max_stars_repo_path: HW2/HW2_CAM_Adversarial.ipynb
max_stars_repo_name: Hmkhalla/notebooks
max_stars_repo_head_hexsha: 63c8a6cb84558d8b0fb552272e2838cc8da20498
max_stars_repo_licenses: [ "Apache-2.0" ]
max_stars_count: null
max_stars_repo_stars_event_min_datetime: null
max_stars_repo_stars_event_max_datetime: null
max_issues_repo_path: HW2/HW2_CAM_Adversarial.ipynb
max_issues_repo_name: Hmkhalla/notebooks
max_issues_repo_head_hexsha: 63c8a6cb84558d8b0fb552272e2838cc8da20498
max_issues_repo_licenses: [ "Apache-2.0" ]
max_issues_count: null
max_issues_repo_issues_event_min_datetime: null
max_issues_repo_issues_event_max_datetime: null
max_forks_repo_path: HW2/HW2_CAM_Adversarial.ipynb
max_forks_repo_name: Hmkhalla/notebooks
max_forks_repo_head_hexsha: 63c8a6cb84558d8b0fb552272e2838cc8da20498
max_forks_repo_licenses: [ "Apache-2.0" ]
max_forks_count: null
max_forks_repo_forks_event_min_datetime: null
max_forks_repo_forks_event_max_datetime: null
avg_line_length: 30.218101
max_line_length: 805
alphanum_fraction: 0.580007
cells:
[ [ [ "# 1- Class Activation Map with convolutions\n\nIn this firt part, we will code class activation map as described in the paper [Learning Deep Features for Discriminative Localization](http://cnnlocalization.csail.mit.edu/)\n\nThere is a GitHub repo associated with the paper:\nhttps://github.com/zhoubolei/CAM\n\nAnd even a demo in PyTorch:\nhttps://github.com/zhoubolei/CAM/blob/master/pytorch_CAM.py\n\nThe code below is adapted from this demo but we will not use hooks only convolutions...", "_____no_output_____" ] ], [ [ "import io\nimport requests\nfrom PIL import Image\nimport torch\nimport torch.nn as nn\nfrom torchvision import models, transforms\nfrom torch.nn import functional as F\nimport torch.optim as optim\nimport numpy as np\nimport cv2\nimport pdb\nfrom matplotlib.pyplot import imshow\n\n\n# input image\nLABELS_URL = 'https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json'\nIMG_URL = 'http://media.mlive.com/news_impact/photo/9933031-large.jpg'", "_____no_output_____" ] ], [ [ "As in the demo, we will use the Resnet18 architecture. In order to get CAM, we need to transform this network in a fully convolutional network: at all layers, we need to deal with images, i.e. with a shape $\\text{Number of channels} \\times W\\times H$ . In particular, we are interested in the last images as shown here:\n![](https://camo.githubusercontent.com/fb9a2d0813e5d530f49fa074c378cf83959346f7/687474703a2f2f636e6e6c6f63616c697a6174696f6e2e637361696c2e6d69742e6564752f6672616d65776f726b2e6a7067)\n\nAs we deal with a Resnet18 architecture, the image obtained before applying the `AdaptiveAvgPool2d` has size $512\\times 7 \\times 7$ if the input has size $3\\times 224\\times 224 $:\n![resnet_Archi](https://pytorch.org/assets/images/resnet.png)\n\nA- The first thing, you will need to do is 'removing' the last layers of the resnet18 model which are called `(avgpool)` and `(fc)`. Check that for an original image of size $3\\times 224\\times 224 $, you obtain an image of size $512\\times 7\\times 7$.\n\nB- Then you need to retrieve the weights (and bias) of the `fc` layer, i.e. a matrix of size $1000\\times 512$ transforming a vector of size 512 into a vector of size 1000 to make the prediction. Then you need to use these weights and bias to apply it pixelwise in order to transform your $512\\times 7\\times 7$ image into a $1000\\times 7\\times 7$ output (Hint: use a convolution). You can interpret this output as follows: `output[i,j,k]` is the logit for 'pixel' `[j,k]` for being of class `i`.\n\nC- From this $1000\\times 7\\times 7$ output, check that you can retrieve the original output given by the `resnet18` by using an `AdaptiveAvgPool2d`. Can you understand why this is true?\n\nD- In addition, you can construct the Class Activation Map. Draw the activation map for the class mountain bike, for the class lakeside.", "_____no_output_____" ], [ "## Validation:\n1. make sure that when running your notebook, you display both CAM for the class mountain bike and for the class lakeside.\n2. for question B above, what convolution did you use? Your answer, i.e. the name of the Pytorch layer with the correct parameters (in_channel,kernel...) here:\n\n<span style=\"color:red\">Replace by your answer</span>\n3. your short explanation of why your network gives the same predicition as the original `resnet18`:\n\n<span style=\"color:red\">Replace by your answer</span>\n4. Is your network working on an image which is not of size $224\\times 224$, i.e. without resizing? 
and what about `resnet18`? Explain why?\n\n<span style=\"color:red\">Replace by your answer</span>", "_____no_output_____" ] ], [ [ "net = models.resnet18(pretrained=True)", "_____no_output_____" ], [ "net.eval()", "_____no_output_____" ], [ "x = torch.randn(5, 3, 224, 224)\ny = net(x)\ny.shape", "_____no_output_____" ], [ "n_mean = [0.485, 0.456, 0.406]\nn_std = [0.229, 0.224, 0.225]\n\nnormalize = transforms.Normalize(\n mean=n_mean,\n std=n_std\n)\npreprocess = transforms.Compose([\n transforms.Resize((224,224)),\n transforms.ToTensor(),\n normalize\n])\n\n# Display the image we will use.\nresponse = requests.get(IMG_URL)\nimg_pil = Image.open(io.BytesIO(response.content))\nimshow(img_pil);", "_____no_output_____" ], [ "img_tensor = preprocess(img_pil)\nnet = net.eval()\nlogit = net(img_tensor.unsqueeze(0))", "_____no_output_____" ], [ "logit.shape", "_____no_output_____" ], [ "img_tensor.shape", "_____no_output_____" ], [ "# download the imagenet category list\nclasses = {int(key):value for (key, value)\n in requests.get(LABELS_URL).json().items()}\n\n\ndef print_preds(logit):\n # print the predicitions with their 'probabilities' from the logit\n h_x = F.softmax(logit, dim=1).data.squeeze()\n probs, idx = h_x.sort(0, True)\n probs = probs.numpy()\n idx = idx.numpy()\n # output the prediction\n for i in range(0, 5):\n print('{:.3f} -> {}'.format(probs[i], classes[idx[i]]))\n return idx", "_____no_output_____" ], [ "idx = print_preds(logit)", "_____no_output_____" ], [ "def returnCAM(feature_conv, idx):\n # input: tensor feature_conv of dim 1000*W*H and idx between 0 and 999\n # output: image W*H with entries rescaled between 0 and 255 for the display\n cam = feature_conv[idx].detach().numpy()\n cam = cam - np.min(cam)\n cam_img = cam / np.max(cam)\n cam_img = np.uint8(255 * cam_img)\n return cam_img", "_____no_output_____" ], [ "#some utilities\ndef pil_2_np(img_pil):\n # transform a PIL image in a numpy array\n return np.asarray(img_pil)\n\ndef display_np(img_np):\n imshow(Image.fromarray(np.uint8(img_np)))\n \ndef plot_CAM(img_np, CAM):\n height, width, _ = img_np.shape\n heatmap = cv2.applyColorMap(cv2.resize(CAM,(width, height)), cv2.COLORMAP_JET)\n result = heatmap * 0.3 + img_np * 0.5\n display_np(result)", "_____no_output_____" ], [ "# here is a fake example to see how things work\nimg_np = pil_2_np(img_pil)\ndiag_CAM = returnCAM(torch.eye(7).unsqueeze(0),0)\nplot_CAM(img_np,diag_CAM)", "_____no_output_____" ], [ "# your code here for your new network\nnet_conv = \n# do not forget:\nnet_conv = net_conv.eval()", "_____no_output_____" ], [ "# to test things are right\nx = torch.randn(5, 3, 224, 224)\ny = net_conv(x)\ny.shape", "_____no_output_____" ], [ "logit_conv = net_conv(img_tensor.unsqueeze(0))", "_____no_output_____" ], [ "logit_conv.shape", "_____no_output_____" ], [ "# transfor this to a [1,1000] tensor with AdaptiveAvgPool2d\nlogit_new = ", "_____no_output_____" ], [ "idx = print_preds(logit_new)", "_____no_output_____" ], [ "i = #index of lakeside\nCAM1 = returnCAM(logit_conv.squeeze(),idx[i])\nplot_CAM(img_np,CAM1)", "_____no_output_____" ], [ "i = #index of mountain bike\nCAM2 = returnCAM(logit_conv.squeeze(),idx[i])\nplot_CAM(img_np,CAM2)", "_____no_output_____" ] ], [ [ "# 2- Adversarial examples", "_____no_output_____" ], [ "In this second part, we will look at [adversarial examples](https://arxiv.org/abs/1607.02533): \"An adversarial example is a sample of input data which has been modified very slightly in a way that is intended to cause a machine learning 
classifier to misclassify it. In many cases, these modifications can be so subtle that a human observer does not even notice the modification at all, yet the classifier still makes a mistake. Adversarial examples pose security concerns because they could be used to perform an attack on machine learning systems...\"\n\nRules of the game:\n- the attacker cannot modify the classifier, i.e. the neural net with the preprocessing done on the image before being fed to the network. \n- even if the attacker cannot modifiy the classifier, we assume that the attacker knows the architecture of the classifier. Here, we will still work with `resnet18` and the standard Imagenet normalization. \n- the attacker can only modify the physical image fed into the network.\n- the attacker should fool the classifier, i.e. the label obtained on the corrupted image should not be the same as the label predicted on the original image.\n\nFirst, you will implement *Fast gradient sign method (FGSM)* wich is described in Section 2.1 of [Adversarial examples in the physical world](https://arxiv.org/abs/1607.02533). The idea is simple, suppose you have an image $\\mathbf{x}$ and when you pass it through the network, you get the 'true' label $y$. You know that your network has been trained by minimizing the loss $J(\\mathbf{\\theta}, \\mathbf{x}, y)$ with respect to the parameters of the network $\\theta$. Now, $\\theta$ is fixed as you cannot modify the classifier so you need to modify $\\mathbf{x}$. In order to do so, you can compute the gradient of the loss with respect to $\\mathbf{x}$ i.e. $\\nabla_{\\mathbf{x}} J(\\mathbf{\\theta}, \\mathbf{x}, y)$ and use it as follows to get the modified image $\\tilde{\\mathbf{x}}$:\n$$\n\\tilde{\\mathbf{x}} = \\text{Clamp}\\left(\\mathbf{x} + \\epsilon *\n\\text{sign}(\\nabla_{\\mathbf{x}} J(\\mathbf{\\theta}, \\mathbf{x}, y)),0,1\\right),\n$$\nwhere $\\text{Clamp}(\\cdot, 0,1)$ ensures that $\\tilde{\\mathbf{x}}$ is a proper image.\nNote that if instead of sign, you take the full gradient, you are now following the gradient i.e. increasing the loss $J(\\mathbf{\\theta}, \\mathbf{x}, y)$ so that $y$ becomes less likely to be the predicited label.", "_____no_output_____" ], [ "## Validation:\n1. Implement this attack. Make sure to display the corrupted image.\n\n2. For what value of epsilon is your attack successful? What is the predicited class then?\n\n<span style=\"color:red\">Replace by your answer</span>\n\n3. plot the sign of the gradient and pass this image through the network. What prediction do you obtain? 
Compare to [Explaining and Harnessing Adversarial Examples](https://arxiv.org/abs/1412.6572) \n\n<span style=\"color:red\">Replace by your answer</span>", "_____no_output_____" ] ], [ [ "# Image under attack!\nurl_car = 'https://cdn130.picsart.com/263132982003202.jpg?type=webp&to=min&r=640'\nresponse = requests.get(url_car)\nimg_pil = Image.open(io.BytesIO(response.content))\nimshow(img_pil);", "_____no_output_____" ], [ "# same as above\npreprocess = transforms.Compose([\n transforms.Resize((224,224)),\n transforms.ToTensor(),\n normalize\n])\n\nfor p in net.parameters():\n p.requires_grad = False\n \nx = preprocess(img_pil).clone().unsqueeze(0)\nlogit = net(x)", "_____no_output_____" ], [ "_ = print_preds(logit)", "_____no_output_____" ], [ "t_std = torch.from_numpy(np.array(n_std, dtype=np.float32)).view(-1, 1, 1)\nt_mean = torch.from_numpy(np.array(n_mean, dtype=np.float32)).view(-1, 1, 1)\n\ndef plot_img_tensor(img):\n imshow(np.transpose(img.detach().numpy(), [1,2,0]))\n\ndef plot_untransform(x_t): \n x_np = (x_t * t_std + t_mean).detach().numpy()\n x_np = np.transpose(x_np, [1, 2, 0])\n imshow(x_np)", "_____no_output_____" ], [ "# here we display an image given as a tensor\nx_img = (x * t_std + t_mean).squeeze(0)\nplot_img_tensor(x_img)", "_____no_output_____" ], [ "# your implementation of the attack\ndef fgsm_attack(image, epsilon, data_grad):\n # Collect the element-wise sign of the data gradient\n \n # Create the perturbed image by adjusting each pixel of the input image\n \n # Adding clipping to maintain [0,1] range\n \n # Return the perturbed image\n return perturbed_image", "_____no_output_____" ], [ "idx = 656 #minivan\ncriterion = nn.CrossEntropyLoss()\nx_img.requires_grad = True\nlogit = net(normalize(x_img).unsqueeze(0))\ntarget = torch.tensor([idx])\n\n #TODO: compute the loss to backpropagate\n\n_ = print_preds(logit)", "_____no_output_____" ], [ "# your attack here\nepsilon = 0\nx_att = fgsm_attack(x_img,epsilon,?)", "_____no_output_____" ], [ "# the new prediction for the corrupted image\nlogit = net(normalize(x_att).unsqueeze(0))\n_ = print_preds(logit)", "_____no_output_____" ], [ "# can you see the difference?\nplot_img_tensor(x_att)", "_____no_output_____" ], [ "# do not forget to plot the sign of the gradient\ngradient = \nplot_img_tensor((1+gradient)/2)", "_____no_output_____" ], [ "# what is the prediction for the gradient? \nlogit = net(normalize(gradient).unsqueeze(0))\n_ = print_preds(logit)", "_____no_output_____" ] ], [ [ "# 3- Transforming a car into a cat\n\nWe now implement the *Iterative Target Class Method (ITCM)* as defined by equation (4) in [Adversarial Attacks and Defences Competition](https://arxiv.org/abs/1804.00097)\n\nTo test it, we will transform the car (labeled minivan by our `resnet18`) into a [Tabby cat](https://en.wikipedia.org/wiki/Tabby_cat) (classe 281 in Imagenet). But you can try with any other target.", "_____no_output_____" ], [ "## Validation:\n1. Implement the ITCM and make sure to display the resulting image. 
", "_____no_output_____" ] ], [ [ "x = preprocess(img_pil).clone()\nxd = preprocess(img_pil).clone()\nxd.requires_grad = True", "_____no_output_____" ], [ "idx = 281 #tabby\noptimizer = optim.SGD([xd], lr=0.01)\n\nfor i in range(200):\n #TODO: your code here\n \n optimizer.zero_grad()\n loss.backward()\n optimizer.step()\n print(loss.item())\n \n _ = print_preds(output)\n print(i,'-----------------')\n \n # TODO: break the loop once we are satisfied \n if ?:\n break", "_____no_output_____" ], [ "_ = print_preds(output)", "_____no_output_____" ], [ "# plot the corrupted image\n", "_____no_output_____" ] ], [ [ "# 4- Where is the cat hidden?\n\nLast, we use CAM to understand where the network see a cat in the image.", "_____no_output_____" ], [ "## Validation:\n1. display the CAM for the class tabby\n\n2. display the CAM for the class minivan\n\n3. where is the cat?", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ] ]
Example row 3:
hexsha: d01730a2ca4ad74382459015d384f84a1d20e230
size: 6,671
ext: ipynb
lang: Jupyter Notebook
max_stars_repo_path: cifar10/centralized.ipynb
max_stars_repo_name: kampmichael/FederatedLearningViaCoTraining
max_stars_repo_head_hexsha: ac3aaa82677e1158fc08fc10060220412a995fcb
max_stars_repo_licenses: [ "Apache-2.0" ]
max_stars_count: 1
max_stars_repo_stars_event_min_datetime: 2021-02-18T13:57:41.000Z
max_stars_repo_stars_event_max_datetime: 2021-02-18T13:57:41.000Z
max_issues_repo_path: cifar10/centralized.ipynb
max_issues_repo_name: kampmichael/FederatedLearningViaCoTraining
max_issues_repo_head_hexsha: ac3aaa82677e1158fc08fc10060220412a995fcb
max_issues_repo_licenses: [ "Apache-2.0" ]
max_issues_count: null
max_issues_repo_issues_event_min_datetime: null
max_issues_repo_issues_event_max_datetime: null
max_forks_repo_path: cifar10/centralized.ipynb
max_forks_repo_name: kampmichael/FederatedLearningViaCoTraining
max_forks_repo_head_hexsha: ac3aaa82677e1158fc08fc10060220412a995fcb
max_forks_repo_licenses: [ "Apache-2.0" ]
max_forks_count: 1
max_forks_repo_forks_event_min_datetime: 2021-01-22T14:36:25.000Z
max_forks_repo_forks_event_max_datetime: 2021-01-22T14:36:25.000Z
avg_line_length: 26.058594
max_line_length: 122
alphanum_fraction: 0.526308
cells:
[ [ [ "from resnet import ResNet18\nimport torch\nimport torch.nn as nn\nimport torch.optim as optim\nimport torch.nn.functional as F\nimport torch.backends.cudnn as cudnn\n\nimport torchvision\nimport torchvision.transforms as transforms\n\nimport os\nimport numpy as np\nimport pickle", "_____no_output_____" ], [ "savedir = \"./saved_models/run6\"", "_____no_output_____" ], [ "device = 'cuda' if torch.cuda.is_available() else 'cpu'", "_____no_output_____" ], [ "def train(net, trainloader, optimizer, epoch):\n net.train()\n train_loss = 0\n for batch_idx, (inputs, targets) in enumerate(trainloader):\n inputs, targets = inputs.to(device), targets.to(device)\n optimizer.zero_grad()\n outputs = net(inputs)\n loss = criterion(outputs, targets)\n loss.backward()\n optimizer.step()\n train_loss += loss.item()\n\n if epoch%10==0:\n print('Epoch%d, Loss: %.3f' % (epoch, train_loss/(batch_idx+1)))", "_____no_output_____" ], [ "transform_test = transforms.Compose([\n transforms.ToTensor(),\n transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),\n])\n\ntestset = torchvision.datasets.CIFAR10(\n root='./data', train=False, download=True, transform=transform_test)\ntestloader = torch.utils.data.DataLoader(\n testset, batch_size=100, shuffle=False, num_workers=2)", "Files already downloaded and verified\n" ], [ "def test(net):\n net.eval()\n test_loss = 0\n correct = 0\n total = 0\n with torch.no_grad():\n for batch_idx, (inputs, targets) in enumerate(testloader):\n inputs, targets = inputs.to(device), targets.to(device)\n outputs = net(inputs)\n loss = criterion(outputs, targets)\n\n test_loss += loss.item()\n _, predicted = outputs.max(1)\n total += targets.size(0)\n correct += predicted.eq(targets).sum().item()\n\n print('Loss: %.3f | Acc: %.3f%% (%d/%d)' % (test_loss/(batch_idx+1), 100.*correct/total, correct, total))\n return 1.0*correct/total", "_____no_output_____" ], [ "def train_learner(dataset, epochs, net=None):\n if net is None:\n net = ResNet18()\n net = net.to(device)\n optimizer = optim.SGD(net.parameters(), lr=0.005, momentum=0.9)\n trainloader = torch.utils.data.DataLoader(dataset, batch_size=64, shuffle=True, num_workers=2)\n for epoch in range(epochs):\n train(net, trainloader, optimizer, epoch)\n return net", "_____no_output_____" ], [ "local_trainset = pickle.load(open(os.path.join(savedir, \"local_trainset\"), \"rb\"))", "_____no_output_____" ], [ "criterion = nn.CrossEntropyLoss()\nnet = train_learner(local_trainset, epochs=200)", "Epoch0, Loss: 1.836\nEpoch10, Loss: 0.646\nEpoch20, Loss: 0.350\nEpoch30, Loss: 0.214\nEpoch40, Loss: 0.141\nEpoch50, Loss: 0.071\nEpoch60, Loss: 0.041\nEpoch70, Loss: 0.038\nEpoch80, Loss: 0.020\nEpoch90, Loss: 0.021\nEpoch100, Loss: 0.026\nEpoch110, Loss: 0.016\nEpoch120, Loss: 0.006\nEpoch130, Loss: 0.007\nEpoch140, Loss: 0.002\nEpoch150, Loss: 0.007\nEpoch160, Loss: 0.003\nEpoch170, Loss: 0.016\nEpoch180, Loss: 0.010\nEpoch190, Loss: 0.006\n" ], [ "test(net)", "Loss: 1.076 | Acc: 84.230% (8423/10000)\n" ], [ "torch.save(net, os.path.join(savedir, \"localdata_centralized\"))", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
Example row 4:
hexsha: d01730b5fa25645d6f37a84c426c406bf39cdda7
size: 5,573
ext: ipynb
lang: Jupyter Notebook
max_stars_repo_path: 9-MyJupyterNotebooks/13-SystemsOfLinearEquations/.ipynb_checkpoints/SystemsOfLinearEquations-checkpoint.ipynb
max_stars_repo_name: dustykat/engr-1330-psuedo-course
max_stars_repo_head_hexsha: 3e7e31a32a1896fcb1fd82b573daa5248e465a36
max_stars_repo_licenses: [ "CC0-1.0" ]
max_stars_count: null
max_stars_repo_stars_event_min_datetime: null
max_stars_repo_stars_event_max_datetime: null
max_issues_repo_path: 9-MyJupyterNotebooks/13-SystemsOfLinearEquations/.ipynb_checkpoints/SystemsOfLinearEquations-checkpoint.ipynb
max_issues_repo_name: dustykat/engr-1330-psuedo-course
max_issues_repo_head_hexsha: 3e7e31a32a1896fcb1fd82b573daa5248e465a36
max_issues_repo_licenses: [ "CC0-1.0" ]
max_issues_count: null
max_issues_repo_issues_event_min_datetime: null
max_issues_repo_issues_event_max_datetime: null
max_forks_repo_path: 9-MyJupyterNotebooks/13-SystemsOfLinearEquations/.ipynb_checkpoints/SystemsOfLinearEquations-checkpoint.ipynb
max_forks_repo_name: dustykat/engr-1330-psuedo-course
max_forks_repo_head_hexsha: 3e7e31a32a1896fcb1fd82b573daa5248e465a36
max_forks_repo_licenses: [ "CC0-1.0" ]
max_forks_count: null
max_forks_repo_forks_event_min_datetime: null
max_forks_repo_forks_event_max_datetime: null
avg_line_length: 27.589109
max_line_length: 112
alphanum_fraction: 0.478019
cells:
[ [ [ "# linear equations", "_____no_output_____" ], [ "# SolveLinearSystem.py\n# Code to read A and b\n# Then solve Ax = b for x by Gaussian elimination with back substitution", "_____no_output_____" ], [ "# linearsolver with pivoting adapted from \n# https://stackoverflow.com/questions/31957096/gaussian-elimination-with-pivoting-in-python/31959226\ndef linearsolver(A,b):\n n = len(A)\n M = A\n\n i = 0\n for x in M:\n x.append(b[i])\n i += 1\n# row reduction with pivots\n for k in range(n):\n for i in range(k,n):\n if abs(M[i][k]) > abs(M[k][k]):\n M[k], M[i] = M[i],M[k]\n else:\n pass\n\n for j in range(k+1,n):\n q = float(M[j][k]) / M[k][k]\n for m in range(k, n+1):\n M[j][m] -= q * M[k][m]\n# allocate space for result\n x = [0 for i in range(n)]\n# back-substitution\n x[n-1] =float(M[n-1][n])/M[n-1][n-1]\n for i in range (n-1,-1,-1):\n z = 0\n for j in range(i+1,n):\n z = z + float(M[i][j])*x[j]\n x[i] = float(M[i][n] - z)/M[i][i]\n# return result\n return(x)\n#######", "_____no_output_____" ], [ "#", "_____no_output_____" ], [ "# Code to read A and b\namatrix = [] # null list to store matrix read\nbvector = [] # null list to store vector read\nrowNumA = 0\ncolNumA = 0\nrowNumB = 0\nafile = open(\"A.txt\",\"r\") # connect and read file for MATRIX A\nfor line in afile:\n amatrix.append([float(n) for n in line.strip().split()])\n rowNumA += 1\nafile.close() # Disconnect the file\ncolNumA = len(amatrix[0])\nafile = open(\"B.txt\",\"r\") # connect and read file for VECTOR b\nfor line in afile:\n bvector.append(float(line)) # vector read different -- just float the line\n rowNumB += 1\nafile.close() # Disconnect the file", "_____no_output_____" ], [ "#", "_____no_output_____" ], [ "# check the arrays\nif rowNumA != rowNumB:\n print (\"row ranks not same -- aborting now\")\n quit()\nelse:\n print (\"row ranks same -- solve for x in Ax=b \\n\")\n# print all columns each row\ncmatrix = [[0 for j in range(colNumA)]for i in range(rowNumA)]\ndmatrix = [[0 for j in range(colNumA)]for i in range(rowNumA)]\nxvector = [0 for i in range(rowNumA)]\ndvector = [0 for i in range(rowNumA)]\n\n# copy amatrix into cmatrix to preserve original structure\ncmatrix = [[amatrix[i][j] for j in range(colNumA)]for i in range(rowNumA)]\ndmatrix = [[amatrix[i][j] for j in range(colNumA)]for i in range(rowNumA)]\ndvector = [bvector[i] for i in range(rowNumA)]\n\ndvector = linearsolver(amatrix,bvector) #Solve the linear system\n\nprint (\"[A]*[x] = b \\n\")\nfor i in range(0,rowNumA,1):\n print ( (cmatrix[i][0:colNumA]), \"* [\",\"%6.3f\"% (dvector[i]),\"] = \", \"%6.3f\"% (bvector[i]))\n#print (\"-----------------------------\")\n#for i in range(0,rowNumA,1):\n# print (\"%6.3f\"% (dvector[i]))\n#print (\"-----------------------------\")", "row ranks same -- solve for x in Ax=b \n\n[A]*[x] = b \n\n[4.0, 1.5, 0.7, 1.2, 0.5] * [ 0.595 ] = 5.000\n[0.0, 5.625, 0.7250000000000001, 1.0999999999999999, 0.575] * [ 0.508 ] = 6.000\n[0.0, 0.0, 3.707777777777778, 2.8911111111111114, 0.7544444444444445] * [ 0.832 ] = 7.000\n[0.0, 0.0, 0.0, 7.128360803116572, 1.6951333533113575] * [ 0.630 ] = 8.000\n[0.0, 0.0, 0.0, 0.0, 4.231527930403315] * [ 1.037 ] = 9.000\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code" ] ]
Example row 5:
hexsha: d0173869d13005e8f1d95e2b4054313b3dced6b5
size: 16,934
ext: ipynb
lang: Jupyter Notebook
max_stars_repo_path: intro-to-pytorch/Part 1 - Tensors in PyTorch (Exercises).ipynb
max_stars_repo_name: Yasel-Garces/deep-learning-v2-pytorch
max_stars_repo_head_hexsha: 95283d87062fcba594d11a881a0cbf2bfe835b4b
max_stars_repo_licenses: [ "MIT" ]
max_stars_count: null
max_stars_repo_stars_event_min_datetime: null
max_stars_repo_stars_event_max_datetime: null
max_issues_repo_path: intro-to-pytorch/Part 1 - Tensors in PyTorch (Exercises).ipynb
max_issues_repo_name: Yasel-Garces/deep-learning-v2-pytorch
max_issues_repo_head_hexsha: 95283d87062fcba594d11a881a0cbf2bfe835b4b
max_issues_repo_licenses: [ "MIT" ]
max_issues_count: null
max_issues_repo_issues_event_min_datetime: null
max_issues_repo_issues_event_max_datetime: null
max_forks_repo_path: intro-to-pytorch/Part 1 - Tensors in PyTorch (Exercises).ipynb
max_forks_repo_name: Yasel-Garces/deep-learning-v2-pytorch
max_forks_repo_head_hexsha: 95283d87062fcba594d11a881a0cbf2bfe835b4b
max_forks_repo_licenses: [ "MIT" ]
max_forks_count: null
max_forks_repo_forks_event_min_datetime: null
max_forks_repo_forks_event_max_datetime: null
avg_line_length: 38.6621
max_line_length: 674
alphanum_fraction: 0.599681
cells:
[ [ [ "# Introduction to Deep Learning with PyTorch\n\nIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.\n\n", "_____no_output_____" ], [ "## Neural Networks\n\nDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply \"neurons.\" Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.\n\n<img src=\"assets/simple_neuron.png\" width=400px>\n\nMathematically this looks like: \n\n$$\n\\begin{align}\ny &= f(w_1 x_1 + w_2 x_2 + b) \\\\\ny &= f\\left(\\sum_i w_i x_i +b \\right)\n\\end{align}\n$$\n\nWith vectors this is the dot/inner product of two vectors:\n\n$$\nh = \\begin{bmatrix}\nx_1 \\, x_2 \\cdots x_n\n\\end{bmatrix}\n\\cdot \n\\begin{bmatrix}\n w_1 \\\\\n w_2 \\\\\n \\vdots \\\\\n w_n\n\\end{bmatrix}\n$$", "_____no_output_____" ], [ "## Tensors\n\nIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.\n\n<img src=\"assets/tensor_examples.svg\" width=600px>\n\nWith the basics covered, it's time to explore how we can use PyTorch to build a simple neural network.", "_____no_output_____" ] ], [ [ "# First, import PyTorch\nimport torch", "_____no_output_____" ], [ "def activation(x):\n \"\"\" Sigmoid activation function \n \n Arguments\n ---------\n x: torch.Tensor\n \"\"\"\n return 1/(1+torch.exp(-x))", "_____no_output_____" ], [ "### Generate some data\ntorch.manual_seed(7) # Set the random seed so things are predictable\n\n# Features are 3 random normal variables\nfeatures = torch.randn((1, 5))\n# True weights for our data, random normal variables again\nweights = torch.randn_like(features)\n# and a true bias term\nbias = torch.randn((1, 1))", "_____no_output_____" ] ], [ [ "Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:\n\n`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. 
\n\n`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.\n\nFinally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.\n\nPyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. \n> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function.", "_____no_output_____" ] ], [ [ "## Calculate the output of this network using the weights and bias tensors\nactivation(torch.sum(weights*features) + bias)", "_____no_output_____" ] ], [ [ "You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.\n\nHere, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error\n\n```python\n>> torch.mm(features, weights)\n\n---------------------------------------------------------------------------\nRuntimeError Traceback (most recent call last)\n<ipython-input-13-15d592eb5279> in <module>()\n----> 1 torch.mm(features, weights)\n\nRuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033\n```\n\nAs you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal to the number of rows in the second column. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.\n\n**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.\n\nThere are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).\n\n* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.\n* `weights.resize_(a, b)` returns the same tensor with a different shape. 
However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.\n* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.\n\nI usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.\n\n> **Exercise**: Calculate the output of our little network using matrix multiplication.", "_____no_output_____" ] ], [ [ "## Calculate the output of this network using matrix multiplication\ntorch.matmul(features,weights.reshape(5,1))+bias", "_____no_output_____" ] ], [ [ "### Stack them up!\n\nThat's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.\n\n<img src='assets/multilayer_diagram_weights.png' width=450px>\n\nThe first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated \n\n$$\n\\vec{h} = [h_1 \\, h_2] = \n\\begin{bmatrix}\nx_1 \\, x_2 \\cdots \\, x_n\n\\end{bmatrix}\n\\cdot \n\\begin{bmatrix}\n w_{11} & w_{12} \\\\\n w_{21} &w_{22} \\\\\n \\vdots &\\vdots \\\\\n w_{n1} &w_{n2}\n\\end{bmatrix}\n$$\n\nThe output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply\n\n$$\ny = f_2 \\! \\left(\\, f_1 \\! \\left(\\vec{x} \\, \\mathbf{W_1}\\right) \\mathbf{W_2} \\right)\n$$", "_____no_output_____" ] ], [ [ "### Generate some data\ntorch.manual_seed(7) # Set the random seed so things are predictable\n\n# Features are 3 random normal variables\nfeatures = torch.randn((1, 3))\n\n# Define the size of each layer in our network\nn_input = features.shape[1] # Number of input units, must match number of input features\nn_hidden = 2 # Number of hidden units \nn_output = 1 # Number of output units\n\n# Weights for inputs to hidden layer\nW1 = torch.randn(n_input, n_hidden)\n# Weights for hidden layer to output layer\nW2 = torch.randn(n_hidden, n_output)\n\n# and bias terms for hidden and output layers\nB1 = torch.randn((1, n_hidden))\nB2 = torch.randn((1, n_output))", "_____no_output_____" ] ], [ [ "> **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. 
", "_____no_output_____" ] ], [ [ "## Your solution here\nactivation(torch.matmul(activation(torch.matmul(features,W1) + B1),W2) + B2)", "_____no_output_____" ] ], [ [ "If you did this correctly, you should see the output `tensor([[ 0.3171]])`.\n\nThe number of hidden units a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions.", "_____no_output_____" ], [ "## Numpy to Torch and back\n\nSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.", "_____no_output_____" ] ], [ [ "import numpy as np\na = np.random.rand(4,3)\na", "_____no_output_____" ], [ "b = torch.from_numpy(a)\nb", "_____no_output_____" ], [ "b.numpy()", "_____no_output_____" ] ], [ [ "The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.", "_____no_output_____" ] ], [ [ "# Multiply PyTorch Tensor by 2, in place\nb.mul_(2)", "_____no_output_____" ], [ "# Numpy array matches new values from Tensor\na", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
Example row 6:
hexsha: d0173e87644261c0b2d4f6198af97cee09331b51
size: 131,116
ext: ipynb
lang: Jupyter Notebook
max_stars_repo_path: notebook/Bases de datos.ipynb
max_stars_repo_name: seppo0010/sysarmy-sueldos-2020.1
max_stars_repo_head_hexsha: d9a7c959a033429f669c3a98ef6c278bec192f23
max_stars_repo_licenses: [ "BSD-3-Clause" ]
max_stars_count: 13
max_stars_repo_stars_event_min_datetime: 2020-06-27T19:29:16.000Z
max_stars_repo_stars_event_max_datetime: 2021-05-20T23:37:56.000Z
max_issues_repo_path: notebook/Bases de datos.ipynb
max_issues_repo_name: seppo0010/sysarmy-sueldos-2020.1
max_issues_repo_head_hexsha: d9a7c959a033429f669c3a98ef6c278bec192f23
max_issues_repo_licenses: [ "BSD-3-Clause" ]
max_issues_count: null
max_issues_repo_issues_event_min_datetime: null
max_issues_repo_issues_event_max_datetime: null
max_forks_repo_path: notebook/Bases de datos.ipynb
max_forks_repo_name: seppo0010/sysarmy-sueldos-2020.1
max_forks_repo_head_hexsha: d9a7c959a033429f669c3a98ef6c278bec192f23
max_forks_repo_licenses: [ "BSD-3-Clause" ]
max_forks_count: 5
max_forks_repo_forks_event_min_datetime: 2020-07-04T00:16:33.000Z
max_forks_repo_forks_event_max_datetime: 2022-03-06T19:00:43.000Z
avg_line_length: 103.403785
max_line_length: 38,228
alphanum_fraction: 0.749481
cells:
[ [ [ "import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport wikipedia\nimport xml.etree.ElementTree as ET\nimport re\nfrom sklearn.manifold import TSNE\nfrom sklearn.decomposition import PCA\nfrom sklearn.model_selection import cross_val_score\nimport xgboost as xgb\nfrom sklearn.metrics import r2_score\n\n%matplotlib inline", "_____no_output_____" ], [ "df = pd.read_csv('2020.1 - sysarmy - Encuesta de remuneración salarial Argentina - Argentina.csv', skiprows=9)\ndf = df[df['Salario mensual BRUTO (en tu moneda local)'] < 1_000_000]\ndf = df[df['Años en la empresa actual'] < 40]\ndf = df[(df['Salario mensual BRUTO (en tu moneda local)'] >= 10_000) & (df['Salario mensual BRUTO (en tu moneda local)'] <= 1_000_000)]\ndf.head()\ndf['Bases de datos']", "_____no_output_____" ], [ "df_databases_cols = df['Bases de datos'].fillna('').apply(lambda pls: pd.Series([v.lower().strip() for v in pls.replace('Microsoft Azure (Tables, CosmosDB, SQL, etc)', 'Microsoft Azure(TablesCosmosDBSQLetc)').split(',') if v.lower().strip() not in ('', 'ninguno')], dtype=str))\ncount_databases = pd.concat((df_databases_cols[i] for i in range(df_databases_cols.shape[1]))).value_counts()\ncount_databases", "_____no_output_____" ], [ "count_databases = count_databases[count_databases > 10]\ncount_databases", "_____no_output_____" ], [ "count_databases = count_databases.drop(['proxysql', 'percona xtrabackup'])", "_____no_output_____" ], [ "def find_categories(database):\n database = {\n 'oracle': 'Oracle Database',\n 'microsoft azure(tablescosmosdbsqletc)': 'Cosmos DB',\n 'amazon rds/aurora': 'Amazon Aurora',\n 'amazon dynamodb': 'Amazon DynamoDB',\n 'google cloud storage': 'Google Storage',\n 'ibm db2': 'Db2 Database',\n 'hana': 'SAP HANA',\n 'amazon redshift': 'Amazon Redshift',\n 'apache hive': 'Apache Hive',\n 'apache hbase': 'Apache HBase',\n 'percona server': 'Percona Server for MySQL',\n 'sql server': 'Microsoft SQL Server',\n }.get(database, database)\n # autosuggest redirects linux to line (why?)\n return wikipedia.page(database, auto_suggest=False).categories\ndatabase_categories = {p: find_categories(p) for p in count_databases.index}\ndatabase_categories", "_____no_output_____" ], [ "catcount = {}\nfor categories in database_categories.values():\n for cat in categories:\n catcount[cat] = catcount.get(cat, 0) + 1\ncatcount = pd.Series(catcount)\ncatcount = catcount[catcount > 1]\ncatcount", "_____no_output_____" ], [ "df_databases = pd.DataFrame({plat: {cat: cat in cats for cat in catcount.index} for plat, cats in database_categories.items()}).T\ndf_databases.head()", "_____no_output_____" ], [ "_, ax = plt.subplots(1, 1, figsize=(10, 10))\ndf_embedded = PCA(n_components=2).fit_transform(df_databases)\nax.scatter(df_embedded[:, 0], df_embedded[:, 1])\nfor lang, (x, y) in zip(df_databases.index, df_embedded):\n ax.annotate(lang, (x, y))\nax.set_xticks([]);\nax.set_yticks([]);", "_____no_output_____" ], [ "from sklearn.cluster import SpectralClustering\nclustering = SpectralClustering(n_clusters=8, assign_labels=\"discretize\", random_state=0).fit(df_embedded)\n_, ax = plt.subplots(1, 1, figsize=(10, 10))\nax.scatter(df_embedded[:, 0], df_embedded[:, 1], c=clustering.labels_, cmap='Accent')\nfor plat, (x, y) in zip(df_databases.index, df_embedded):\n ax.annotate(plat, (x, y))\nax.set_xticks([]);\nax.set_yticks([]);", "_____no_output_____" ], [ "best = {'colsample_bytree': 0.7000000000000001, 'gamma': 0.8500000000000001, 'learning_rate': 0.025, 'max_depth': 16, 'min_child_weight': 15.0, 
'n_estimators': 175, 'subsample': 0.8099576733552297}\nregions_map = {\n 'Ciudad Autónoma de Buenos Aires': 'AMBA',\n 'GBA': 'AMBA',\n 'Catamarca': 'NOA',\n 'Chaco': 'NEA',\n 'Chubut': 'Patagonia',\n 'Corrientes': 'NEA',\n 'Entre Ríos': 'NEA',\n 'Formosa': 'NEA',\n 'Jujuy': 'NOA',\n 'La Pampa': 'Pampa',\n 'La Rioja': 'NOA',\n 'Mendoza': 'Cuyo',\n 'Misiones': 'NEA',\n 'Neuquén': 'Patagonia',\n 'Río Negro': 'Patagonia',\n 'Salta': 'NOA',\n 'San Juan': 'Cuyo',\n 'San Luis': 'Cuyo',\n 'Santa Cruz': 'Patagonia',\n 'Santa Fe': 'Pampa',\n 'Santiago del Estero': 'NOA',\n 'Tucumán': 'NOA',\n 'Córdoba': 'Pampa',\n 'Provincia de Buenos Aires': 'Pampa',\n 'Tierra del Fuego': 'Patagonia',\n}\nclass BaseModel:\n def __init__(self, **params):\n self.regressor_ = xgb.XGBRegressor(**params)\n\n def get_params(self, deep=True):\n return self.regressor_.get_params(deep=deep)\n\n def set_params(self, **params):\n return self.regressor_.set_params(**params)\n \n def clean_words(self, field, value):\n value = value.replace('Microsoft Azure (Tables, CosmosDB, SQL, etc)', 'Microsoft Azure(TablesCosmosDBSQLetc)')\n value = value.replace('Snacks, golosinas, bebidas', 'snacks')\n value = value.replace('Descuentos varios (Clarín 365, Club La Nación, etc)', 'descuentos varios')\n value = value.replace('Sí, de forma particular', 'de forma particular')\n value = value.replace('Sí, los pagó un empleador', 'los pagó un empleador')\n value = value.replace('Sí, activa', 'activa')\n value = value.replace('Sí, pasiva', 'pasiva')\n return [self.clean_word(field, v) for v in value.split(',') if self.clean_word(field, v)]\n\n def clean_word(self, field, word):\n val = str(word).lower().strip().replace(\".\", \"\")\n if val in ('ninguno', 'ninguna', 'no', '0', 'etc)', 'nan'):\n return ''\n if field == 'Lenguajes de programación' and val == 'Microsoft Azure(TablesCosmosDBSQLetc)':\n return 'Microsoft Azure (Tables, CosmosDB, SQL, etc)'\n if field == '¿A qué eventos de tecnología asististe en el último año?' and val in ('pycon', 'pyconar'):\n return 'pyconar'\n if field == '¿A qué eventos de tecnología asististe en el último año?' and val in ('nodeconf', 'nodeconfar'):\n return 'nodeconfar'\n if field == '¿A qué eventos de tecnología asististe en el último año?' 
and val in ('meetup', 'meetups'):\n return 'meetups'\n if field == '¿A qué eventos de tecnología asististe en el último año?':\n return val.replace(' ', '')\n if field == 'Beneficios extra' and val == 'snacks':\n return 'snacks, golosinas, bebidas'\n if field == 'Beneficios extra' and val == 'descuentos varios':\n return 'descuentos varios (clarín 365, club la nación, etc)'\n return val\n\n def row_to_words(self, row):\n return [\n f'{key}={row.fillna(\"\")[key]}'\n for key\n in (\n 'Me identifico',\n 'Nivel de estudios alcanzado',\n 'Universidad',\n 'Estado',\n 'Carrera',\n '¿Contribuís a proyectos open source?',\n '¿Programás como hobbie?',\n 'Trabajo de',\n '¿Qué SO usás en tu laptop/PC para trabajar?',\n '¿Y en tu celular?',\n 'Tipo de contrato',\n 'Orientación sexual',\n 'Cantidad de empleados',\n 'Actividad principal',\n )\n ] + [\n f'{k}={v}' for k in (\n '¿Tenés guardias?',\n 'Realizaste cursos de especialización',\n '¿A qué eventos de tecnología asististe en el último año?',\n 'Beneficios extra',\n 'Plataformas',\n 'Lenguajes de programación',\n 'Frameworks, herramientas y librerías',\n 'Bases de datos',\n 'QA / Testing',\n 'IDEs',\n 'Lenguajes de programación'\n ) for v in self.clean_words(k, row.fillna('')[k])\n ] + [\n f'region={regions_map[row[\"Dónde estás trabajando\"]]}'\n ]\n\n def encode_row(self, row):\n ws = self.row_to_words(row)\n return pd.Series([w in ws for w in self.valid_words_] + [\n row['¿Gente a cargo?'],\n row['Años de experiencia'],\n row['Tengo'],\n ])\n\n def fit(self, X, y, **params):\n counts = {}\n for i in range(X.shape[0]):\n for word in self.row_to_words(X.iloc[i]):\n counts[word] = counts.get(word, 0) + 1\n self.valid_words_ = [word for word, c in counts.items() if c > 0.01*X.shape[0]]\n self.regressor_.fit(X.apply(self.encode_row, axis=1).astype(float), y, **params)\n return self\n \n def predict(self, X):\n return self.regressor_.predict(X.apply(self.encode_row, axis=1).astype(float))\n \n def score(self, X, y):\n return r2_score(y, self.predict(X))\ncross_val_score(BaseModel(), df, df['Salario mensual BRUTO (en tu moneda local)'])", "_____no_output_____" ], [ "database_embeddings = {l: [] for l in clustering.labels_}\nfor database, label in zip(df_databases.index, clustering.labels_):\n database_embeddings[label].append(database)\ndatabase_embeddings", "_____no_output_____" ], [ "class ModelPCA:\n def __init__(self, **params):\n self.regressor_ = xgb.XGBRegressor(**params)\n\n def get_params(self, deep=True):\n return self.regressor_.get_params(deep=deep)\n\n def set_params(self, **params):\n return self.regressor_.set_params(**params)\n \n def clean_words(self, field, value):\n value = value.replace('Microsoft Azure (Tables, CosmosDB, SQL, etc)', 'Microsoft Azure(TablesCosmosDBSQLetc)')\n value = value.replace('Snacks, golosinas, bebidas', 'snacks')\n value = value.replace('Descuentos varios (Clarín 365, Club La Nación, etc)', 'descuentos varios')\n value = value.replace('Sí, de forma particular', 'de forma particular')\n value = value.replace('Sí, los pagó un empleador', 'los pagó un empleador')\n value = value.replace('Sí, activa', 'activa')\n value = value.replace('Sí, pasiva', 'pasiva')\n return [self.clean_word(field, v) for v in value.split(',') if self.clean_word(field, v)]\n\n def clean_word(self, field, word):\n val = str(word).lower().strip().replace(\".\", \"\")\n if val in ('ninguno', 'ninguna', 'no', '0', 'etc)', 'nan'):\n return ''\n if field == 'Lenguajes de programación' and val == 'Microsoft Azure(TablesCosmosDBSQLetc)':\n 
return 'Microsoft Azure (Tables, CosmosDB, SQL, etc)'\n if field == '¿A qué eventos de tecnología asististe en el último año?' and val in ('pycon', 'pyconar'):\n return 'pyconar'\n if field == '¿A qué eventos de tecnología asististe en el último año?' and val in ('nodeconf', 'nodeconfar'):\n return 'nodeconfar'\n if field == '¿A qué eventos de tecnología asististe en el último año?' and val in ('meetup', 'meetups'):\n return 'meetups'\n if field == '¿A qué eventos de tecnología asististe en el último año?':\n return val.replace(' ', '')\n if field == 'Beneficios extra' and val == 'snacks':\n return 'snacks, golosinas, bebidas'\n if field == 'Beneficios extra' and val == 'descuentos varios':\n return 'descuentos varios (clarín 365, club la nación, etc)'\n return val\n\n def contains_database(self, row, databases):\n k = 'Bases de datos'\n for v in self.clean_words(k, row.fillna('')[k]):\n if v in databases:\n return True\n return False\n\n def row_to_words(self, row):\n return [\n f'{key}={row.fillna(\"\")[key]}'\n for key\n in (\n 'Me identifico',\n 'Nivel de estudios alcanzado',\n 'Universidad',\n 'Estado',\n 'Carrera',\n '¿Contribuís a proyectos open source?',\n '¿Programás como hobbie?',\n 'Trabajo de',\n '¿Qué SO usás en tu laptop/PC para trabajar?',\n '¿Y en tu celular?',\n 'Tipo de contrato',\n 'Orientación sexual',\n 'Cantidad de empleados',\n 'Actividad principal',\n )\n ] + [\n f'{k}={v}' for k in (\n '¿Tenés guardias?',\n 'Realizaste cursos de especialización',\n '¿A qué eventos de tecnología asististe en el último año?',\n 'Beneficios extra',\n 'Plataformas',\n 'Frameworks, herramientas y librerías',\n 'Bases de datos',\n 'QA / Testing',\n 'IDEs',\n 'Lenguajes de programación'\n ) for v in self.clean_words(k, row.fillna('')[k])\n ] + [\n f'region={regions_map[row[\"Dónde estás trabajando\"]]}'\n ] + [\n f'database_type={i}'\n for i, databases in database_embeddings.items()\n if self.contains_database(row, databases)\n ]\n\n def encode_row(self, row):\n ws = self.row_to_words(row)\n return pd.Series([w in ws for w in self.valid_words_] + [\n row['¿Gente a cargo?'],\n row['Años de experiencia'],\n row['Tengo'],\n ])\n\n def fit(self, X, y, **params):\n counts = {}\n for i in range(X.shape[0]):\n for word in self.row_to_words(X.iloc[i]):\n counts[word] = counts.get(word, 0) + 1\n self.valid_words_ = [word for word, c in counts.items() if c > 0.01*X.shape[0]]\n self.regressor_.fit(X.apply(self.encode_row, axis=1).astype(float), y, **params)\n return self\n \n def predict(self, X):\n return self.regressor_.predict(X.apply(self.encode_row, axis=1).astype(float))\n \n def score(self, X, y):\n return r2_score(y, self.predict(X))\ncross_val_score(ModelPCA(), df, df['Salario mensual BRUTO (en tu moneda local)'])", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
Example row 7:
hexsha: d017408ea4c810023c164310d4ed82e0200149f4
size: 13,788
ext: ipynb
lang: Jupyter Notebook
max_stars_repo_path: notebooks/09 evaluation.ipynb
max_stars_repo_name: jonasspinner/weighted-f-free-edge-editing
max_stars_repo_head_hexsha: 5db2590615db7ef6a05d2187a54fc09edd201ada
max_stars_repo_licenses: [ "MIT" ]
max_stars_count: 1
max_stars_repo_stars_event_min_datetime: 2021-02-18T13:57:41.000Z
max_stars_repo_stars_event_max_datetime: 2021-02-18T13:57:41.000Z
max_issues_repo_path: notebooks/09 evaluation.ipynb
max_issues_repo_name: jonasspinner/weighted-f-free-edge-editing
max_issues_repo_head_hexsha: 5db2590615db7ef6a05d2187a54fc09edd201ada
max_issues_repo_licenses: [ "MIT" ]
max_issues_count: null
max_issues_repo_issues_event_min_datetime: null
max_issues_repo_issues_event_max_datetime: null
max_forks_repo_path: notebooks/09 evaluation.ipynb
max_forks_repo_name: jonasspinner/weighted-f-free-edge-editing
max_forks_repo_head_hexsha: 5db2590615db7ef6a05d2187a54fc09edd201ada
max_forks_repo_licenses: [ "MIT" ]
max_forks_count: 1
max_forks_repo_forks_event_min_datetime: 2021-01-22T14:36:25.000Z
max_forks_repo_forks_event_max_datetime: 2021-01-22T14:36:25.000Z
avg_line_length: 36.379947
max_line_length: 156
alphanum_fraction: 0.495648
cells:
[ [ [ "import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport yaml\nfrom pathlib import Path\nfrom collections import defaultdict\nfrom pandas.api.types import CategoricalDtype\n", "_____no_output_____" ], [ "EXPERIMENTS_PATH = Path.home() / \"ba\" / \"experiments\"\nbenchmarks_paths = list((EXPERIMENTS_PATH / \"C4P4\").glob(\"lb.*/*.benchmarks.yaml\"))", "_____no_output_____" ], [ "benchmarks_paths", "_____no_output_____" ], [ "DEFAULT_CATEGORY = lambda: \"category\"\nCATEGORIES = defaultdict(DEFAULT_CATEGORY,\n forbidden_subgraphs=CategoricalDtype([\n \"P3\", \"P4\", \"P5\", \"P6\", \"C4P4\", \"C5P5\", \"C6P6\", \", C4_C5_2K2\", \"C4_C5_P5_Bowtie_Necktie\"]),\n lower_bound_algorithm=CategoricalDtype([\n \"Trivial\", \"Greedy\", \"SortedGreedy\", \"LocalSearch\", \"LPRelaxation\", \"NPS_MWIS_Solver\",\n \"LSSWZ_MWIS_Solver\", \"fpt-editing-LocalSearch\", \"GreedyWeightedPacking\"]),\n dataset=CategoricalDtype([\n \"barabasi-albert\", \"bio\", \"bio-C4P4-subset\", \"bio-subset-A\", \"duplication-divergence\",\n \"misc\", \"powerlaw-cluster\", \"bio-subset-B\", \"bio-unweighted\"])\n )\n\ndef load_raw_df(paths):\n docs = []\n for path in paths:\n with path.open() as file:\n docs += list(yaml.safe_load_all(file))\n return pd.DataFrame(docs)\n\ndef load_data_unweighted_fpt_editing(paths):\n df = load_raw_df(paths)\n df[[\"dataset\", \"instance\"]] = df[\"instance\"].str.split(\"/\", expand=True)[[1, 2]]\n df[\"lower_bound_algorithm\"] = \"fpt-editing-LocalSearch\"\n return df\n\ndef load_data_weighted_fpt_editing(paths):\n df = load_raw_df(paths)\n df[\"value\"] = df[\"values\"].str[0]\n df.rename(columns={\"lower_bound_name\": \"lower_bound_algorithm\"}, inplace=True)\n\n df[[\"dataset\", \"instance\"]] = df[\"instance\"].str.split(\"/\", expand=True)[[1, 2]]\n\n return df\n\ndef load_data(paths):\n columns = [\"forbidden_subgraphs\", \"dataset\", \"instance\", \"lower_bound_algorithm\", \"value\"]\n df1 = load_data_weighted_fpt_editing([p for p in paths if \"fpt-editing\" not in p.parent.name])\n df2 = load_data_unweighted_fpt_editing([p for p in paths if \"fpt-editing\" in p.parent.name])\n \n df1 = df1[columns]\n df2 = df2[columns]\n \n df = pd.concat([df1, df2], ignore_index=True)\n \n df = df.astype({k: CATEGORIES[k] for k in\n [\"forbidden_subgraphs\", \"lower_bound_algorithm\", \"dataset\"]})\n df.loc[df[\"value\"] < 0, \"value\"] = np.nan\n \n \n m = df[\"lower_bound_algorithm\"] == \"fpt-editing-LocalSearch\"\n df.loc[m, \"value\"] = df.loc[m, \"value\"] / 100\n return df\n\ndf = load_data(benchmarks_paths)\ndf.head()", "_____no_output_____" ], [ "for lb, df_lb in df.groupby([\"lower_bound_algorithm\", \"dataset\"]):\n print(lb, len(df_lb))", "_____no_output_____" ], [ "# df = df[df[\"dataset\"] == \"bio\"]", "_____no_output_____" ], [ "def plot_line_scatter(x, y, xlabel, ylabel, path=None):\n fig, ax = plt.subplots(figsize=(6, 6))\n ax.set_aspect(\"equal\")\n ax.scatter(x, y, alpha=0.2)\n ax.plot([0, 5e5], [0, 5e5])\n ax.set_yscale(\"log\"); ax.set_xscale(\"log\")\n ax.set_ylim([1e-1, 5e5]); ax.set_xlim([1e-1, 5e5])\n ax.set_ylabel(ylabel); ax.set_xlabel(xlabel)\n \n if path is not None:\n plt.savefig(path)\n plt.show()", "_____no_output_____" ], [ "def plot_ratio_scatter(x, y, xlabel, ylabel):\n\n ratio = x / y\n ratio[x == y] = 1\n\n fig, ax = plt.subplots(figsize=(6, 4))\n ax.scatter(x, ratio, alpha=0.2)\n ax.set_xscale(\"log\")\n ax.set_xlim((1e0, 5e5))\n ax.set_xlabel(xlabel); ax.set_ylabel(f\"{xlabel} / {ylabel}\")\n plt.show()", 
"_____no_output_____" ], [ "def plot_ratio(x, y, xlabel, ylabel, path=None):\n ratio = x / y\n ratio[x == y] = 1\n\n print(\"-\" * 10)\n print(f\"path: {path}\")\n print(f\"{((x==0) & (y==0)).sum()} or {100*((x==0) & (y==0)).mean():.4}% where x = y = 0\")\n print(f\"{(ratio == 1).sum()} / {ratio.shape[0]} or {100*(ratio == 1).mean():.4}% where ratio = 1\")\n print(f\"{ratio.isnull().sum()} / {ratio.shape[0]} where ratio = NaN\")\n\n # TODO: print quantiles\n q = np.array([0, 0.05, 0.1, 0.5, 0.9, 0.95, 1])\n x = np.quantile(ratio[~ratio.isnull()], q)\n # print(f\"{x}\")\n for q_i, x_i in zip(q, x):\n print(f\"{100*q_i:>6.2f}% {ylabel} / {xlabel} > {100 / x_i:>7.2f}%\")\n \n q_line = \" & \".join([f\"{q_i:.2f}\\\\%\" for q_i in q])\n x_line = \" & \".join([f\"{100 / x_i:.2f}\\\\%\" for x_i in x])\n print(f\"\"\"\\\\begin{{table}}[h]\n\t\\\\begin{{tabular}}{{lllllll}}\n\t\t{q_line} \\\\\\\\ \\\\hline\n\t\t{x_line}\n\t\\\\end{{tabular}}\n\\\\end{{table}}\"\"\")\n \n fig, ax = plt.subplots(figsize=(6, 4))\n ax.hist(ratio[ratio != 1], bins=np.linspace(min([0, ratio.min()]), max([0, ratio.max()]), 31))\n ax.set_xlabel(f\"{xlabel} / {ylabel}\"); ax.set_ylabel(\"count\")\n \n if path is not None:\n plt.savefig(path)\n plt.show()", "_____no_output_____" ], [ "def draw_plots(df, dataset=\"\"):\n a = df[(df[\"lower_bound_algorithm\"] == \"SortedGreedy\")].reset_index()\n b = df[(df[\"lower_bound_algorithm\"] == \"LPRelaxation\")].reset_index()\n c = df[(df[\"lower_bound_algorithm\"] == \"NPS_MWIS_Solver\")].reset_index()\n d = df[(df[\"lower_bound_algorithm\"] == \"LocalSearch\")].reset_index()\n e = df[(df[\"lower_bound_algorithm\"] == \"fpt-editing-LocalSearch\")].reset_index()\n b.loc[b[\"value\"] < 0, \"value\"] = np.nan\n\n # plot_line_scatter(a[\"value\"], b[\"value\"], \"SortedGreedy\", \"LPRelaxation\")\n\n # plot_ratio_scatter(a[\"value\"], b[\"value\"], \"SortedGreedy\", \"LPRelaxation\")\n # plot_ratio_scatter(a[\"value\"], c[\"value\"], \"SortedGreedy\", \"NPS_MWIS_Solver\")\n\n# plot_ratio(a[\"value\"], b[\"value\"], \"SortedGreedy\", \"LPRelaxation\",\n# path=f\"ratio-histogram-SortedGreedy-LPRelaxation-{dataset}.pdf\")\n# plot_ratio(a[\"value\"], c[\"value\"], \"SortedGreedy\", \"NPS_MWIS_Solver\",\n# path=f\"ratio-histogram-SortedGreedy-NPS_MWIS_Solver-{dataset}.pdf\")\n# plot_ratio(c[\"value\"], b[\"value\"], \"NPS_MWIS_Solver\", \"LPRelaxation\",\n# path=f\"ratio-histogram-NPS_MWIS_Solver-LPRelaxation-{dataset}.pdf\")\n \n plot_ratio(d[\"value\"], b[\"value\"], \"LocalSearch\", \"LPRelaxation\",\n path=f\"ratio-histogram-LocalSearch-LPRelaxation-{dataset}.pdf\")\n plot_ratio(a[\"value\"], d[\"value\"], \"SortedGreedy\", \"LocalSearch\",\n path=f\"ratio-histogram-SortedGreedy-LocalSearch-{dataset}.pdf\")\n #if len(e) > 0:\n # plot_ratio(e[\"value\"], b[\"value\"], \"fpt-editing-LocalSearch\", \"LPRelaxation\")\n # plot_ratio(d[\"value\"], e[\"value\"], \"LocalSearch\", \"fpt-editing-LocalSearch\")\n\n\n#draw_plots(df[df[\"dataset\"] == \"bio\"], dataset=\"bio\")\n#draw_plots(df[df[\"dataset\"] == \"bio-unweighted\"], dataset=\"bio-unweighted\")", "_____no_output_____" ], [ "X_unweighted = [(g[0], df.reset_index()[\"value\"]) for (g, df) in df.groupby([\"lower_bound_algorithm\", \"dataset\"]) if g[1] == \"bio-unweighted\"]", "_____no_output_____" ], [ "X_weighted = [(g[0], df.reset_index()[\"value\"]) for (g, df) in df.groupby([\"lower_bound_algorithm\", \"dataset\"]) if g[1] == \"bio\"]", "_____no_output_____" ], [ "def plot_matrix_histogram(X, ignore_zero_lb=False, 
ignore_equality=False, xmin=0, xmax=None, path=None):\n n = len(X)\n fig, axes = plt.subplots(nrows=n, ncols=n, figsize=(2*n, 2*n), sharex=True, sharey=True)\n\n for i, (lb_i, x_i) in enumerate(X):\n axes[i, 0].set_ylabel(lb_i)\n axes[-1, i].set_xlabel(lb_i)\n\n for j, (lb_j, x_j) in enumerate(X):\n if i != j:\n r = x_i / x_j\n\n if not ignore_zero_lb:\n r[(x_i == 0) & (x_j == 0)] = 1 # assignment, not comparison: treat 0/0 as a ratio of 1\n if ignore_equality:\n r[r == 1] = np.nan\n \n if xmax is None:\n xmax = r.max()\n\n axes[i, j].axvline(1, c=\"k\", ls=\"--\", alpha=0.5)\n axes[i, j].hist(r, bins=np.linspace(xmin, xmax, 25))\n #axes[i, j].set_title(\" \".join([\n # f\"{100*x:.2f}%\" for x in np.quantile(r[~np.isnan(r)], [0.05, 0.5, 0.95])]), fontdict=dict(fontsize=10))\n\n fig.tight_layout()\n if path is not None:\n plt.savefig(path)\n plt.show()\n\nplot_matrix_histogram(X_unweighted, xmax=2, path=\"lb-ratio-bio-unweighted.pdf\")\nplot_matrix_histogram(X_weighted, xmax=5, path=\"lb-ratio-bio.pdf\")\nplot_matrix_histogram(X_unweighted, xmax=2, ignore_equality=True, ignore_zero_lb=True, path=\"lb-ratio-bio-unweighted-filtered.pdf\")\nplot_matrix_histogram(X_weighted, xmax=5, ignore_equality=True, ignore_zero_lb=True, path=\"lb-ratio-bio-filtered.pdf\")", "_____no_output_____" ], [ "def plot_matrix_scatter(X, ignore_zero_lb=False, ignore_equality=False, xmin=0, xmax=None, path=None):\n n = len(X)\n fig, axes = plt.subplots(nrows=n, ncols=n, figsize=(2*n, 2*n))\n \n for ax in axes.flatten():\n ax.set_aspect(\"equal\")\n\n for i, (lb_i, x_i) in enumerate(X):\n axes[i, 0].set_ylabel(lb_i)\n axes[-1, i].set_xlabel(lb_i)\n\n for j, (lb_j, x_j) in enumerate(X):\n if i != j:\n m = ~np.isnan(x_i) & ~np.isnan(x_j)\n l, u = min([x_i[m].min(), x_j[m].min()]), max([x_i[m].max(), x_j[m].max()])\n axes[i, j].plot([l, u], [l, u], c=\"k\", ls=\"--\", alpha=0.5)\n axes[i, j].scatter(x_i, x_j)\n #axes[i, j].set_title(\" \".join([\n # f\"{100*x:.2f}%\" for x in np.quantile(r[~np.isnan(r)], [0.05, 0.5, 0.95])]), fontdict=dict(fontsize=10))\n\n fig.tight_layout()\n if path is not None:\n plt.savefig(path)\n plt.show()\n\nplot_matrix_scatter(X_weighted)", "_____no_output_____" ], [ "# leftover scratch cell; plt.scatter() needs x and y arguments, so the bare call is commented out\n# plt.scatter()", "_____no_output_____" ], [ "X_weighted[1]", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
d0174b6402fea712a110c4efcf632f9618281c64
51,918
ipynb
Jupyter Notebook
examples/language_modeling.ipynb
vblagoje/notebooks
85e5a0df81a1684e9930647627ff148a860aed78
[ "Apache-2.0" ]
null
null
null
examples/language_modeling.ipynb
vblagoje/notebooks
85e5a0df81a1684e9930647627ff148a860aed78
[ "Apache-2.0" ]
null
null
null
examples/language_modeling.ipynb
vblagoje/notebooks
85e5a0df81a1684e9930647627ff148a860aed78
[ "Apache-2.0" ]
null
null
null
39.875576
1,662
0.637005
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
d01750f6d101686bb13d5aaa307120c2ee1aafcf
1,868
ipynb
Jupyter Notebook
Identify and Remove Duplicate Rows.ipynb
PacktPublishing/Data-Cleansing-Master-Class-in-Python
47e04c258ec31e8011e62d081beb45434fd3948f
[ "MIT" ]
3
2021-11-08T22:25:35.000Z
2022-01-05T16:33:53.000Z
Identify and Remove Duplicate Rows.ipynb
PacktPublishing/Data-Cleansing-Master-Class-in-Python
47e04c258ec31e8011e62d081beb45434fd3948f
[ "MIT" ]
null
null
null
Identify and Remove Duplicate Rows.ipynb
PacktPublishing/Data-Cleansing-Master-Class-in-Python
47e04c258ec31e8011e62d081beb45434fd3948f
[ "MIT" ]
4
2021-12-21T17:42:41.000Z
2022-01-16T23:17:12.000Z
21.227273
57
0.498394
[ [ [ "# locate rows of duplicate data\nfrom pandas import read_csv\n# load the dataset\ndf = read_csv('iris.csv', header=None)\n# calculate duplicates\ndups = df.duplicated()\n# report if there are any duplicates\nprint(dups.any())\n# list all duplicate rows\nprint(df[dups])\n", "True\n 0 1 2 3 4\n34 4.9 3.1 1.5 0.1 Iris-setosa\n37 4.9 3.1 1.5 0.1 Iris-setosa\n142 5.8 2.7 5.1 1.9 Iris-virginica\n" ], [ "# delete rows of duplicate data from the dataset\nfrom pandas import read_csv\n# load the dataset\ndf = read_csv('iris.csv', header=None)\nprint(df.shape)\n# delete duplicate rows\ndf.drop_duplicates(inplace=True)\nprint(df.shape)", "(150, 5)\n(147, 5)\n" ] ] ]
[ "code" ]
[ [ "code", "code" ] ]
d01752c3caecb7c55d17bc6d88534541a5436f95
147,475
ipynb
Jupyter Notebook
unsupervised_crypto.ipynb
Vilma011/Unsupervised-learning
b14ee77351e817cc1386704a5866425efa6cec9c
[ "ADSL" ]
null
null
null
unsupervised_crypto.ipynb
Vilma011/Unsupervised-learning
b14ee77351e817cc1386704a5866425efa6cec9c
[ "ADSL" ]
null
null
null
unsupervised_crypto.ipynb
Vilma011/Unsupervised-learning
b14ee77351e817cc1386704a5866425efa6cec9c
[ "ADSL" ]
null
null
null
87.108683
37,400
0.753267
[ [ [ "import pandas as pd\nfrom os import getcwd\n\nimport numpy as np\nfrom sklearn.manifold import TSNE\nfrom sklearn.decomposition import PCA\nfrom sklearn.preprocessing import StandardScaler\n\nfrom sklearn.cluster import KMeans\n\n\nimport matplotlib.pyplot as plt \n\ngetcwd()", "_____no_output_____" ], [ "infile_01 = 'crypto_data.csv'\n\ndf = pd.read_csv(infile_01,index_col=0)\ndf.head()", "_____no_output_____" ], [ "# observe values columns\n\ndf.describe()", "_____no_output_____" ], [ "# data types of each column\ndf.info()", "<class 'pandas.core.frame.DataFrame'>\nIndex: 1252 entries, 42 to PUNK\nData columns (total 6 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 CoinName 1252 non-null object \n 1 Algorithm 1252 non-null object \n 2 IsTrading 1252 non-null bool \n 3 ProofType 1252 non-null object \n 4 TotalCoinsMined 744 non-null float64\n 5 TotalCoinSupply 1252 non-null object \ndtypes: bool(1), float64(1), object(4)\nmemory usage: 59.9+ KB\n" ], [ "# data best dtypes\ndf = df.convert_dtypes()", "_____no_output_____" ], [ "df.info()", "<class 'pandas.core.frame.DataFrame'>\nIndex: 1252 entries, 42 to PUNK\nData columns (total 6 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 CoinName 1252 non-null string \n 1 Algorithm 1252 non-null string \n 2 IsTrading 1252 non-null boolean\n 3 ProofType 1252 non-null string \n 4 TotalCoinsMined 744 non-null Float64\n 5 TotalCoinSupply 1252 non-null string \ndtypes: Float64(1), boolean(1), string(4)\nmemory usage: 62.4+ KB\n" ], [ "# remove unneeded commas,whitespace,and periods\nstrip_list = []\nold_list = df['TotalCoinSupply'].to_list()\nfor i in range(len(old_list)):\n entry=old_list[i].replace('.','').replace(' ','').replace(',','')\n \n strip_list.append(entry)", "_____no_output_____" ], [ "df['TotalCoinSupply'] = strip_list", "_____no_output_____" ], [ "# convert strings to int64 \ndf['TotalCoinSupply']=pd.to_numeric(df['TotalCoinSupply'],downcast='float')", "_____no_output_____" ], [ "df.info()", "<class 'pandas.core.frame.DataFrame'>\nIndex: 1252 entries, 42 to PUNK\nData columns (total 6 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 CoinName 1252 non-null string \n 1 Algorithm 1252 non-null string \n 2 IsTrading 1252 non-null boolean\n 3 ProofType 1252 non-null string \n 4 TotalCoinsMined 744 non-null Float64\n 5 TotalCoinSupply 1252 non-null float32\ndtypes: Float64(1), boolean(1), float32(1), string(3)\nmemory usage: 57.5+ KB\n" ], [ "# duplicated columns\ndupes = df.duplicated()\ndupes.value_counts()", "_____no_output_____" ], [ "# null values\nfor column in df.columns:\n print(f\"Column - {column} has {df[column].isnull().sum()} null values\")", "Column - CoinName has 0 null values\nColumn - Algorithm has 0 null values\nColumn - IsTrading has 0 null values\nColumn - ProofType has 0 null values\nColumn - TotalCoinsMined has 508 null values\nColumn - TotalCoinSupply has 0 null values\n" ], [ "# drop currencies not being traded\ndf_trading = df.loc[df['IsTrading'] == True]\ndf_mined = df_trading.loc[df_trading['TotalCoinsMined']>0]\ndf_clean = df_mined.drop(columns=['IsTrading','CoinName'],axis=1)", "_____no_output_____" ], [ "# drop all NaN\ndf_clean = df_clean.dropna(how='any')", "_____no_output_____" ], [ "# show data loss as percentage\nprint(f'Rows in initial DF -> {len(df_trading.index)}')\nprint(f'Rows with No NaN DF -> {len(df_clean.index)}')\nprint(f'{round((len(df_clean.index)) / (len(df_trading.index)) * 100,2)}% 
of rows retained; the rest contained NaN')", "Rows in initial DF -> 1144\nRows with No NaN DF -> 532\n46.5% of rows retained; the rest contained NaN\n" ], [ "df_clean", "_____no_output_____" ], [ "# process the string data into dummy columns for model\nX_dummies = pd.get_dummies(data = df_clean, columns = ['Algorithm','ProofType'])\nX_dummies.shape", "_____no_output_____" ], [ "X_dummies", "_____no_output_____" ], [ "# scale the data \nscaler = StandardScaler()\nX_scaled = scaler.fit_transform(X_dummies)\nX_scaled.shape", "_____no_output_____" ], [ "# dimensionality reduction using PCA\npca = PCA(n_components=.9)\ncomponents = pca.fit_transform(X_scaled)\ncomponents.shape", "_____no_output_____" ], [ "# dimensionality reduction using t-SNE\nX_embedded = TSNE(perplexity=30).fit_transform(components)\nX_embedded.shape", "C:\\Users\\vicky\\Anaconda-2021.5\\lib\\site-packages\\sklearn\\manifold\\_t_sne.py:780: FutureWarning: The default initialization in TSNE will change from 'random' to 'pca' in 1.2.\n warnings.warn(\nC:\\Users\\vicky\\Anaconda-2021.5\\lib\\site-packages\\sklearn\\manifold\\_t_sne.py:790: FutureWarning: The default learning rate in TSNE will change from 200.0 to 'auto' in 1.2.\n warnings.warn(\n" ], [ "fig = plt.figure(figsize = (10.20,10.80))\n\nplt.scatter(X_embedded[:,0],X_embedded[:,1])\nplt.grid()\nplt.show()", "_____no_output_____" ], [ "inertia = []\nk = list(range(1, 11))\n\n# Calculate the inertia for the range of k values\nfor i in k:\n km = KMeans(n_clusters=i, random_state=0)\n km.fit(X_embedded)\n inertia.append(km.inertia_)\n\n# Create the Elbow Curve using hvPlot\nelbow_data = {\"k\": k, \"inertia\": inertia}\ndf_elbow = pd.DataFrame(elbow_data)\ndf_elbow", "C:\\Users\\vicky\\Anaconda-2021.5\\lib\\site-packages\\sklearn\\cluster\\_kmeans.py:1036: UserWarning: KMeans is known to have a memory leak on Windows with MKL, when there are less chunks than available threads. You can avoid it by setting the environment variable OMP_NUM_THREADS=3.\n warnings.warn(\n" ], [ "# Plot the elbow curve to find the best candidate(s) for k\n\nfig = plt.figure(figsize = (10.20,10.80))\n\nplt.plot(df_elbow['k'], df_elbow['inertia'])\nplt.xticks(range(1,11))\nplt.xlabel('Number of clusters')\nplt.ylabel('Inertia')\nplt.title('Elbow curve')\nplt.grid()\nplt.show()", "_____no_output_____" ], [ "def get_clusters(k, data):\n # Initialize the K-Means model\n model = KMeans(n_clusters=k, random_state=0)\n\n # Train the model\n model.fit(data)\n\n # Predict clusters\n predictions = model.predict(data)\n\n # Create return DataFrame with predicted clusters\n data[\"class\"] = model.labels_\n\n return data", "_____no_output_____" ], [ "# transform embedded array into df for clustering purposes\ncluster_df = pd.DataFrame(X_embedded, columns=['col_1','col_2'])", "_____no_output_____" ], [ "# display the cluster df\ncluster_df", "_____no_output_____" ], [ "# after plotting the inertia of the K-means cluster data, 4 clusters were determined to be the best\nclusters = get_clusters(4, cluster_df) ", "_____no_output_____" ], [ "# cluster_df with defined classes\nclusters", "_____no_output_____" ], [ "def show_clusters(df):\n fig = plt.figure(figsize = (10.20,10.80))\n\n plt.scatter(df['col_1'], df['col_2'], c=df['class'])\n plt.xlabel('col_1')\n plt.ylabel('col_2')\n plt.grid()\n plt.show()", "_____no_output_____" ], [ "show_clusters(clusters)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
d0175355081daee19bc157c1c6ae81d56da8bb9b
85,285
ipynb
Jupyter Notebook
Passive_Membrane_tutorial.ipynb
zbpvarun/Neuron
3d9a797cf5f86e767346116a2da5e0123d35a6a2
[ "MIT" ]
null
null
null
Passive_Membrane_tutorial.ipynb
zbpvarun/Neuron
3d9a797cf5f86e767346116a2da5e0123d35a6a2
[ "MIT" ]
null
null
null
Passive_Membrane_tutorial.ipynb
zbpvarun/Neuron
3d9a797cf5f86e767346116a2da5e0123d35a6a2
[ "MIT" ]
null
null
null
84.945219
45,451
0.740423
[ [ [ "# Passive Membrane Tutorial", "_____no_output_____" ], [ "This is a tutorial which is designed to allow users to explore the passive responses of neuron membrane potentials and how it changes under various conditions such as current injection, ion concentration (both inside and outside the cell), change in membrane capacitance and passive conductances.\n\nWritten by Varun Saravanan; February 2018\n\nAll units are in SI units.\n", "_____no_output_____" ], [ "## Parameters:", "_____no_output_____" ] ], [ [ "dt = 1e-4 #Integration time step. Reduce if you encounter NaN errors.\nt_sim = 0.5 #Total time plotted. Increase as desired.\n\nNa_in = 13 #Sodium ion concentration inside the cell. Default = 13 (in mM)\nNa_out = 120 #Sodium ion concentration outside the cell. Default = 120 (in mM)\nK_in = 140 #Potassium ion concentration inside the cell. Default = 140 (in mM)\nK_out = 8 #Potassium ion concentration outside the cell. Default = 8 (in mM)\n\nCm = 1e-7 #Membrane capacitance. Default = 0.1 microF.\ngNa = 5e-7 #Passive sodium conductance. Default = 0.5 microS.\ngK = 1e-5 #Passive potassium conductance. Default = 10 microS.\n", "_____no_output_____" ] ], [ [ "Nernst Potential Equations:", "_____no_output_____" ] ], [ [ "import math as ma\nEna = -0.058*ma.log10(Na_in/Na_out);\nEk = -0.058*ma.log10(K_in/K_out);", "_____no_output_____" ] ], [ [ "#If you wish to use pre-determined ENa and EK values, set them here and convert this cell into code from Markdown:\nEna = ??;\nEk = ??;", "_____no_output_____" ] ], [ [ "import numpy as np\nniter = int(t_sim//dt) #Total number of integration steps (constant).\n#Output variables:\nVm = np.zeros(niter)\nIe = np.zeros(niter)", "_____no_output_____" ], [ "#Starting values: You can change the initial conditions of each simulation here:\nVm[0] = -0.070;", "_____no_output_____" ] ], [ [ "## Current Injection", "_____no_output_____" ] ], [ [ "I_inj =-5e-8 #Current amplitude. Default = 50 nA.\nt_start = 0.150 #Start time of current injection.\nt_end = 0.350 #End time of current injection.\n\nIe[int(t_start//dt):int(t_end//dt)] = I_inj", "_____no_output_____" ] ], [ [ "### Calculation - do the actual computation here:", "_____no_output_____" ] ], [ [ "#Integration steps - do not change:\nfor i in np.arange(niter-1): \n Vm[i+1] = Vm[i] + dt/Cm*(Ie[i] - gNa*(Vm[i] - Ena) - gK*(Vm[i] - Ek));\n \n", "_____no_output_____" ] ], [ [ "## Plot results", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\n%matplotlib notebook\nplt.figure()\nt = np.arange(niter)*dt;\nplt.plot(t,Vm);\nplt.xlabel('Time in s')\nplt.ylabel('Membrane Voltage in V')", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
d01753ab43b1b5fa0eefd826477a073e086226f1
325,632
ipynb
Jupyter Notebook
doc/source/notebooks/circuit_visualize.ipynb
forest1040/scikit-qulacs
3d7153fb966189486196a491aed2a65436d992bf
[ "MIT" ]
null
null
null
doc/source/notebooks/circuit_visualize.ipynb
forest1040/scikit-qulacs
3d7153fb966189486196a491aed2a65436d992bf
[ "MIT" ]
null
null
null
doc/source/notebooks/circuit_visualize.ipynb
forest1040/scikit-qulacs
3d7153fb966189486196a491aed2a65436d992bf
[ "MIT" ]
null
null
null
902.027701
110,086
0.950039
[ [ [ "# Circuit visualize\n\nこのドキュメントでは scikit-qulacs に用意されている量子回路を可視化します。\nscikitqulacsには現在、以下のような量子回路を用意しています。\n- create_qcl_ansatz(n_qubit: int, c_depth: int, time_step: float, seed=None): [arXiv:1803.00745](https://arxiv.org/abs/1803.00745)\n- create_farhi_neven_ansatz(n_qubit: int, c_depth: int, seed: Optional[int] = None): [arXiv:1802.06002](https://arxiv.org/pdf/1802.06002)\n- create_ibm_embedding_circuit(n_qubit: int): [arXiv:1804.11326](https://arxiv.org/abs/1804.11326) \n- create_shirai_ansatz(n_qubit: int, c_depth: int = 5, seed: int = 0): [arXiv:2111.02951](http://arxiv.org/abs/2111.02951)\n 注:微妙に細部が異なる可能性あり\n- create_npqcd_ansatz(n_qubit: int, c_depth: int, c: float = 0.1): [arXiv:2108.01039](https://arxiv.org/abs/2108.01039)\n- create_yzcx_ansatz(n_qubit: int, c_depth: int = 4, c: float = 0.1, seed: int = 9):[arXiv:2108.01039](https://arxiv.org/abs/2108.01039)\ncreate_qcnn_ansatz(n_qubit: int, seed: int = 0):Creates circuit used in https://www.tensorflow.org/quantum/tutorials/qcnn?hl=en, Section 1.\n\n回路を見やすくするために、パラメータの値を通常より小さくしています。", "_____no_output_____" ], [ "量子回路の可視化には[qulacs-visualizer](https://github.com/Qulacs-Osaka/qulacs-visualizer)を使用しています。\nqulacs-visualizerはpipを使ってインストールできます。\n```bash\npip install qulacsvis\n```", "_____no_output_____" ], [ "## qcl_ansatz\ncreate_qcl_ansatz(\n n_qubit: int, c_depth: int, time_step: float = 0.5, seed: Optional[int] = None\n)\n\n[arXiv:1803.00745](https://arxiv.org/abs/1803.00745)", "_____no_output_____" ] ], [ [ "from skqulacs.circuit.pre_defined import create_qcl_ansatz\nfrom qulacsvis import circuit_drawer\n\nn_qubit = 4\nc_depth = 2\ntime_step = 1.\nansatz = create_qcl_ansatz(n_qubit, c_depth, time_step)\ncircuit_drawer(ansatz._circuit,\"latex\")", "_____no_output_____" ] ], [ [ "## farhi_neven_ansatz\ncreate_farhi_neven_ansatz(\n n_qubit: int, c_depth: int, seed: Optional[int] = None\n)\n\n[arXiv:1802.06002](https://arxiv.org/abs/1802.06002)", "_____no_output_____" ] ], [ [ "from skqulacs.circuit.pre_defined import create_farhi_neven_ansatz\n\nn_qubit = 4\nc_depth = 2\nansatz = create_farhi_neven_ansatz(n_qubit, c_depth)\ncircuit_drawer(ansatz._circuit,\"latex\")", "_____no_output_____" ] ], [ [ "## farhi_neven_watle_ansatz\nfarhi_neven_ansatzを @WATLE さんが改良したもの\n\ncreate_farhi_neven_watle_ansatz(\n n_qubit: int, c_depth: int, seed: Optional[int] = None\n)", "_____no_output_____" ] ], [ [ "from skqulacs.circuit.pre_defined import create_farhi_neven_watle_ansatz\n\nn_qubit = 4\nc_depth = 2\nansatz = create_farhi_neven_watle_ansatz(n_qubit, c_depth)\ncircuit_drawer(ansatz._circuit,\"latex\")", "_____no_output_____" ] ], [ [ "## ibm_embedding_circuit\ncreate_ibm_embedding_circuit(n_qubit: int)\n\n[arXiv:1802.06002](https://arxiv.org/abs/1802.06002)", "_____no_output_____" ] ], [ [ "from skqulacs.circuit.pre_defined import create_ibm_embedding_circuit\n\nn_qubit = 4\ncircuit = create_ibm_embedding_circuit(n_qubit)\ncircuit_drawer(circuit._circuit,\"latex\")", "_____no_output_____" ] ], [ [ "## shirai_ansatz\ncreate_shirai_ansatz(\n n_qubit: int, c_depth: int = 5, seed: int = 0\n)\n\n[arXiv:2111.02951](https://arxiv.org/abs/2111.02951)", "_____no_output_____" ] ], [ [ "from skqulacs.circuit.pre_defined import create_shirai_ansatz\n\nn_qubit = 4\nc_depth = 2\nansatz = create_shirai_ansatz(n_qubit, c_depth)\ncircuit_drawer(ansatz._circuit,\"latex\")", "_____no_output_____" ] ], [ [ "## npqcd_ansatz\ncreate_npqcd_ansatz(\n n_qubit: int, c_depth: int, c: float = 0.1\n)\n\n[arXiv:2108.01039](https://arxiv.org/abs/2108.01039)", 
"_____no_output_____" ] ], [ [ "from skqulacs.circuit.pre_defined import create_npqc_ansatz\n\nn_qubit = 4\nc_depth = 2\nansatz = create_npqc_ansatz(n_qubit, c_depth)\ncircuit_drawer(ansatz._circuit,\"latex\")", "_____no_output_____" ] ], [ [ "## yzcx_ansatz\ncreate_yzcx_ansatz(\n n_qubit: int, c_depth: int = 4, c: float = 0.1, seed: int = 9\n)\n\n[arXiv:2108.01039](https://arxiv.org/abs/2108.01039)", "_____no_output_____" ] ], [ [ "from skqulacs.circuit.pre_defined import create_yzcx_ansatz\n\nn_qubit = 4\nc_depth = 2\nansatz = create_yzcx_ansatz(n_qubit, c_depth)\ncircuit_drawer(ansatz._circuit,\"latex\")", "_____no_output_____" ] ], [ [ "## qcnn_ansatz\ncreate_qcnn_ansatz(n_qubit: int, seed: int = 0)\n\nCreates circuit used in https://www.tensorflow.org/quantum/tutorials/qcnn?hl=en, Section 1.", "_____no_output_____" ] ], [ [ "from skqulacs.circuit.pre_defined import create_qcnn_ansatz\n\nn_qubit = 8\nansatz = create_qcnn_ansatz(n_qubit)\ncircuit_drawer(ansatz._circuit,\"latex\")", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
d0175f34066798b425fd1290ce35380a978ef96e
2,607
ipynb
Jupyter Notebook
Chapter06/Exercise6.03/.ipynb_checkpoints/test_exercise6_03-checkpoint.ipynb
ibmdev/The-Machine-Learning-Workshop
9c6e3c978b09b8a6ff1d95f0a6fd2001de96d8b4
[ "MIT" ]
21
2020-03-17T17:22:44.000Z
2022-03-08T04:38:23.000Z
Chapter06/Exercise6.03/.ipynb_checkpoints/test_exercise6_03-checkpoint.ipynb
ibmdev/The-Machine-Learning-Workshop
9c6e3c978b09b8a6ff1d95f0a6fd2001de96d8b4
[ "MIT" ]
null
null
null
Chapter06/Exercise6.03/.ipynb_checkpoints/test_exercise6_03-checkpoint.ipynb
ibmdev/The-Machine-Learning-Workshop
9c6e3c978b09b8a6ff1d95f0a6fd2001de96d8b4
[ "MIT" ]
41
2020-03-05T13:25:28.000Z
2022-01-31T17:13:20.000Z
29.292135
239
0.504411
[ [ [ "import unittest\nimport numpy as np\nimport pandas as pd\nimport numpy.testing as np_testing\nimport pandas.testing as pd_testing\nimport os,sys\nimport import_ipynb\n\nclass Test(unittest.TestCase):\n\n def _dirname_if_file(self, filename):\n if os.path.isdir(filename):\n return filename\n else:\n return os.path.dirname(os.path.abspath(filename))\n\n def setUp(self):\n import Exercise6_03\n self.exercise = Exercise6_03\n\n self.a = 1\n self.b = 0.56\n self.c = 1\n self.d = 1\n self.e = 1\n self.f = 0\n self.g = 1\n self.h = -1\n self.i = 0.63\n\n self.model = self.exercise.NN_Model()\n \n def test_model(self):\n self.pred = self.pred = self.model.predict(season = self.a, age = self.b, childish = self.c, trauma = self.d, \\\n surgical = self.e, fevers = self.f, alcohol = self.g, smoking = self.h, sitting = self.i)\n\n self.assertEqual(self.pred, self.exercise.pred)\n\nif __name__ == '__main__':\n unittest.main(argv=['first-arg-is-ignored'], exit=False)", "/home/subhash/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:33: ResourceWarning: unclosed file <_io.BufferedReader name='/home/subhash/packt/The-Machine-Learning-Workshop/Chapter06/Exercise6.03/model_exercise.pkl'>\nResourceWarning: Enable tracemalloc to get the object allocation traceback\n.\n----------------------------------------------------------------------\nRan 1 test in 0.003s\n\nOK\n" ] ] ]
[ "code" ]
[ [ "code" ] ]
d0176ba453a31251023fac9df1aef324f57c2ac0
32,563
ipynb
Jupyter Notebook
Defect_check.ipynb
franchukpetro/steel_defect_detection
99c5d2cbc51572a880fa8ee7b9bf18c456387fde
[ "MIT" ]
null
null
null
Defect_check.ipynb
franchukpetro/steel_defect_detection
99c5d2cbc51572a880fa8ee7b9bf18c456387fde
[ "MIT" ]
null
null
null
Defect_check.ipynb
franchukpetro/steel_defect_detection
99c5d2cbc51572a880fa8ee7b9bf18c456387fde
[ "MIT" ]
null
null
null
34.312961
1,686
0.531831
[ [ [ "import pandas as pd\nimport numpy as np", "_____no_output_____" ] ], [ [ "## **Downloading data from Google Drive**", "_____no_output_____" ] ], [ [ "!pip install -U -q PyDrive\nimport os\nfrom pydrive.auth import GoogleAuth\nfrom pydrive.drive import GoogleDrive\nfrom google.colab import auth\nfrom oauth2client.client import GoogleCredentials\nimport zipfile\nfrom google.colab import drive\n\n# 1. Authenticate and create the PyDrive client.\nauth.authenticate_user()\ngauth = GoogleAuth()\ngauth.credentials = GoogleCredentials.get_application_default()\ndrive = GoogleDrive(gauth)\n\n# choose a local (colab) directory to store the data.\nlocal_download_path = os.path.expanduser('content/data')\ntry:\n os.makedirs(local_download_path)\nexcept: pass\n\n# 2. Auto-iterate using the query syntax\n# https://developers.google.com/drive/v2/web/search-parameters\n# list of files in Google Drive folder\nfile_list = drive.ListFile(\n {'q': \"'1MsgfnmWPV-Nod0s1ZejYfsvbIwRMKZg_' in parents\"}).GetList()\n\n# find data in .zip format and save it\nfor f in file_list:\n if f['title'] == \"severstal-steel-defect-detection.zip\":\n fname = os.path.join(local_download_path, f['title'])\n f_ = drive.CreateFile({'id': f['id']})\n f_.GetContentFile(fname)\n\n# extract files from zip to \"extracted/\" directory, this directory will be \n# used for further data modelling\nzip_ref = zipfile.ZipFile(fname, 'r')\nzip_ref.extractall(os.path.join(local_download_path, \"extracted\"))\nzip_ref.close()\n", "_____no_output_____" ] ], [ [ "Define working directories", "_____no_output_____" ] ], [ [ "working_dir = os.path.join(local_download_path, \"extracted\")\n\n# defining working folders and labels\ntrain_images_folder = os.path.join(working_dir, \"train_images\")\ntrain_labels_file = os.path.join(working_dir, \"train.csv\")\n\ntest_images_folder = os.path.join(working_dir, \"test_images\")\ntest_labels_file = os.path.join(working_dir, \"sample_submission.csv\")", "_____no_output_____" ], [ "train_labels = pd.read_csv(train_labels_file)\ntest_labels = pd.read_csv(test_labels_file)", "_____no_output_____" ] ], [ [ "# **Data preprocessing**", "_____no_output_____" ], [ "Drop duplicates", "_____no_output_____" ] ], [ [ "train_labels.drop_duplicates(\"ImageId\", keep=\"last\", inplace=True)", "_____no_output_____" ] ], [ [ "Add to the train dataframe all non-defective images, setting None as value of EncodedPixels column", "_____no_output_____" ] ], [ [ "images = os.listdir(train_images_folder)\npresent_rows = train_labels.ImageId.tolist()\nfor img in images:\n if img not in present_rows:\n train_labels = train_labels.append({\"ImageId\" : img, \"ClassId\" : 1, \"EncodedPixels\" : None}, \n ignore_index=True)\n", "_____no_output_____" ] ], [ [ "Change EncodedPixels column, by setting 1 if images is defected and 0 otherwise", "_____no_output_____" ] ], [ [ "for index, row in train_labels.iterrows():\n train_labels.at[index, \"EncodedPixels\"] = int(train_labels.at[index, \"EncodedPixels\"] is not None)", "_____no_output_____" ] ], [ [ "In total we got 12,568 training samples", "_____no_output_____" ] ], [ [ "train_labels", "_____no_output_____" ] ], [ [ "Create data flow using ImageDataGenerator, see example here: https://medium.com/@vijayabhaskar96/tutorial-on-keras-flow-from-dataframe-1fd4493d237c", "_____no_output_____" ] ], [ [ "from keras_preprocessing.image import ImageDataGenerator\n\ndef create_datagen():\n return ImageDataGenerator(\n fill_mode='constant',\n cval=0.,\n rotation_range=10,\n 
height_shift_range=0.1,\n width_shift_range=0.1,\n vertical_flip=True,\n rescale=1./255,\n zoom_range=0.1,\n horizontal_flip=True,\n validation_split=0.15\n )\n\ndef create_test_gen():\n return ImageDataGenerator(rescale=1/255.).flow_from_dataframe(\n dataframe=test_labels,\n directory=test_images_folder,\n x_col='ImageId',\n class_mode=None,\n target_size=(256, 512),\n batch_size=1,\n shuffle=False\n )\n\ndef create_flow(datagen, subset_name):\n return datagen.flow_from_dataframe(\n dataframe=train_labels,\n directory=train_images_folder,\n x_col='ImageId',\n y_col='EncodedPixels',\n class_mode='other',\n target_size=(256, 512),\n batch_size=32,\n subset=subset_name\n )", "_____no_output_____" ], [ "data_generator = create_datagen()\ntrain_gen = create_flow(data_generator, 'training')\nval_gen = create_flow(data_generator, 'validation')\ntest_gen = create_test_gen()", "Found 10683 validated image filenames.\nFound 1885 validated image filenames.\nFound 5506 validated image filenames.\n" ] ], [ [ "# **Building and fiting model**", "_____no_output_____" ] ], [ [ "from keras.applications import InceptionResNetV2\nfrom keras.models import Model\nfrom keras.layers.core import Dense \nfrom keras.layers.pooling import GlobalAveragePooling2D\nfrom keras import optimizers", "_____no_output_____" ], [ "model = InceptionResNetV2(weights='imagenet', input_shape=(256,512,3), include_top=False)\n#model.load_weights('/kaggle/input/inceptionresnetv2/inception_resent_v2_weights_tf_dim_ordering_tf_kernels_notop.h5')\nmodel.trainable=False\n\nx=model.output\nx=GlobalAveragePooling2D()(x)\nx=Dense(128,activation='relu')(x)\nx=Dense(64,activation='relu')(x) \nout=Dense(1,activation='sigmoid')(x) #final layer binary classifier\n\nmodel_binary=Model(inputs=model.input,outputs=out) ", "WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:66: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.\n\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:541: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.\n\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:4432: The name tf.random_uniform is deprecated. Please use tf.random.uniform instead.\n\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:190: The name tf.get_default_session is deprecated. Please use tf.compat.v1.get_default_session instead.\n\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:197: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.\n\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:203: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.\n\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:207: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead.\n\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:216: The name tf.is_variable_initialized is deprecated. Please use tf.compat.v1.is_variable_initialized instead.\n\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:223: The name tf.variables_initializer is deprecated. 
Please use tf.compat.v1.variables_initializer instead.\n\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:2041: The name tf.nn.fused_batch_norm is deprecated. Please use tf.compat.v1.nn.fused_batch_norm instead.\n\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:148: The name tf.placeholder_with_default is deprecated. Please use tf.compat.v1.placeholder_with_default instead.\n\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:4267: The name tf.nn.max_pool is deprecated. Please use tf.nn.max_pool2d instead.\n\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:4271: The name tf.nn.avg_pool is deprecated. Please use tf.nn.avg_pool2d instead.\n\nDownloading data from https://github.com/fchollet/deep-learning-models/releases/download/v0.7/inception_resnet_v2_weights_tf_dim_ordering_tf_kernels_notop.h5\n219062272/219055592 [==============================] - 2s 0us/step\n" ], [ "model_binary.compile(\n loss='binary_crossentropy',\n optimizer='adam',\n metrics=['accuracy']\n )", "WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/optimizers.py:793: The name tf.train.Optimizer is deprecated. Please use tf.compat.v1.train.Optimizer instead.\n\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:3657: The name tf.log is deprecated. Please use tf.math.log instead.\n\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/nn_impl.py:183: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse tf.where in 2.0, which has the same broadcast rule as np.where\n" ] ], [ [ "Fittting the data", "_____no_output_____" ] ], [ [ "STEP_SIZE_TRAIN=train_gen.n//train_gen.batch_size\nSTEP_SIZE_VALID=val_gen.n//val_gen.batch_size\nSTEP_SIZE_TEST=test_gen.n//test_gen.batch_size\n\nmodel_binary.fit_generator(generator=train_gen,\n steps_per_epoch=STEP_SIZE_TRAIN,\n validation_data=val_gen,\n validation_steps=STEP_SIZE_VALID,\n epochs=15\n )", "Epoch 1/15\n333/333 [==============================] - 637s 2s/step - loss: 0.5724 - acc: 0.7208 - val_loss: 1.1674 - val_acc: 0.3987\nEpoch 2/15\n333/333 [==============================] - 632s 2s/step - loss: 0.3274 - acc: 0.8580 - val_loss: 0.6656 - val_acc: 0.7275\nEpoch 3/15\n333/333 [==============================] - 621s 2s/step - loss: 0.2728 - acc: 0.8835 - val_loss: 0.6790 - val_acc: 0.7636\nEpoch 4/15\n333/333 [==============================] - 621s 2s/step - loss: 0.2439 - acc: 0.8963 - val_loss: 0.2292 - val_acc: 0.9007\nEpoch 5/15\n333/333 [==============================] - 621s 2s/step - loss: 0.2275 - acc: 0.9085 - val_loss: 0.3075 - val_acc: 0.8732\nEpoch 6/15\n333/333 [==============================] - 618s 2s/step - loss: 0.2094 - acc: 0.9168 - val_loss: 0.3808 - val_acc: 0.8381\nEpoch 7/15\n333/333 [==============================] - 645s 2s/step - loss: 0.2031 - acc: 0.9174 - val_loss: 0.1383 - val_acc: 0.9369\nEpoch 8/15\n333/333 [==============================] - 644s 2s/step - loss: 0.1876 - acc: 0.9245 - val_loss: 0.3507 - val_acc: 0.8392\nEpoch 9/15\n333/333 [==============================] - 644s 2s/step - loss: 0.1842 - acc: 0.9241 - val_loss: 0.5051 - val_acc: 0.7922\nEpoch 10/15\n333/333 [==============================] - 635s 2s/step - loss: 0.1767 - acc: 0.9278 - val_loss: 0.2712 
- val_acc: 0.8931\nEpoch 11/15\n333/333 [==============================] - 634s 2s/step - loss: 0.1626 - acc: 0.9380 - val_loss: 0.5116 - val_acc: 0.8365\nEpoch 12/15\n333/333 [==============================] - 634s 2s/step - loss: 0.1593 - acc: 0.9355 - val_loss: 0.2529 - val_acc: 0.9045\nEpoch 13/15\n333/333 [==============================] - 630s 2s/step - loss: 0.1588 - acc: 0.9359 - val_loss: 0.4838 - val_acc: 0.7820\nEpoch 14/15\n333/333 [==============================] - 630s 2s/step - loss: 0.1444 - acc: 0.9439 - val_loss: 0.0859 - val_acc: 0.9628\nEpoch 15/15\n333/333 [==============================] - 621s 2s/step - loss: 0.1493 - acc: 0.9434 - val_loss: 0.1346 - val_acc: 0.9487\n" ] ], [ [ "Predicting test labels", "_____no_output_____" ] ], [ [ "test_gen.reset()\npred=model_binary.predict_generator(test_gen,\nsteps=STEP_SIZE_TEST,\nverbose=1)", "5506/5506 [==============================] - 211s 38ms/step\n" ] ], [ [ "# **Saving results**", "_____no_output_____" ], [ "Create a dataframe with probabilities of having defects for each image", "_____no_output_____" ] ], [ [ "ids = np.array(test_labels.ImageId)\npred = np.array([p[0] for p in pred])\nprobabilities_df = pd.DataFrame({'ImageId': ids, 'Probability': pred}, columns=['ImageId', 'Probability'])\n", "_____no_output_____" ], [ "probabilities_df", "_____no_output_____" ], [ "from google.colab import files\nprobabilities_df.to_csv('defect_present_probabilities.csv')\nfiles.download('defect_present_probabilities.csv')\ndrive.mount('/content/gdrive')\n", "Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3aietf%3awg%3aoauth%3a2.0%3aoob&response_type=code&scope=email%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdocs.test%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive.photos.readonly%20https%3a%2f%2fwww.googleapis.com%2fauth%2fpeopleapi.readonly\n\nEnter your authorization code:\n··········\nMounted at /content/gdrive\n" ], [ "!cp /content/defect_present_probabilities.csv gdrive/My\\ Drive", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ] ]
d01774c9d3eb222d8d8436da404d3d03ed956567
12,544
ipynb
Jupyter Notebook
Loop_Statement.ipynb
cocolleen/CPEN-21A-CPE1-2
e178b066f9dde56421cbadfbbe524ebc12f5a7b3
[ "Apache-2.0" ]
null
null
null
Loop_Statement.ipynb
cocolleen/CPEN-21A-CPE1-2
e178b066f9dde56421cbadfbbe524ebc12f5a7b3
[ "Apache-2.0" ]
null
null
null
Loop_Statement.ipynb
cocolleen/CPEN-21A-CPE1-2
e178b066f9dde56421cbadfbbe524ebc12f5a7b3
[ "Apache-2.0" ]
null
null
null
21.702422
236
0.35499
[ [ [ "<a href=\"https://colab.research.google.com/github/cocolleen/CPEN-21A-CPE1-2/blob/main/Loop_Statement.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "##For Loop\n\n", "_____no_output_____" ] ], [ [ "week = [\"Sunday\", \"Monday\", \"Tuesday\", \"Wednesday\", \"Thursday\",\"Friday\", \"Saturday\"]\n\nfor x in week:\n print (x)", "Sunday\nMonday\nTuesday\nWednesday\nThursday\nFriday\nSaturday\n" ] ], [ [ "#The Break Statement", "_____no_output_____" ] ], [ [ "\nfor x in week:\n print (x)\n if x == \"Thursday\":\n break", "Sunday\nMonday\nTuesday\nWednesday\nThursday\n" ], [ "for x in week:\n if x == \"Thursday\":\n break\n print (x)", "Sunday\nMonday\nTuesday\nWednesday\n" ] ], [ [ "#Looping through string", "_____no_output_____" ] ], [ [ "for x in \"Programmming with python\":\n print (x)", "P\nr\no\ng\nr\na\nm\nm\nm\ni\nn\ng\n \nw\ni\nt\nh\n \np\ny\nt\nh\no\nn\n" ] ], [ [ "#The range function", "_____no_output_____" ] ], [ [ " for x in range(10):\n print (x)", "0\n1\n2\n3\n4\n5\n6\n7\n8\n9\n" ] ], [ [ "#Nested Loops", "_____no_output_____" ] ], [ [ "adjective = [\"red\", \"big\", \"tasty\"]\nfruits = [\"apple\",\"banana\", \"cherry\"]\n\nfor x in adjective:\n for y in fruits:\n print (x, y)", "red apple\nred banana\nred cherry\nbig apple\nbig banana\nbig cherry\ntasty apple\ntasty banana\ntasty cherry\n" ] ], [ [ "##While loop", "_____no_output_____" ] ], [ [ "i = 10\nwhile i > 6:\n print(i)\n i -= 1 #Assignment operator for subtraction i = 1 - i", "10\n9\n8\n7\n" ] ], [ [ "#The break statement", "_____no_output_____" ] ], [ [ "i = 10\nwhile i > 6:\n print (i)\n if i == 8: \n break\n i-=1", "10\n9\n8\n" ] ], [ [ "#The continue statement", "_____no_output_____" ] ], [ [ "i = 10\nwhile i>6:\n i = i - 1\n if i == 8:\n continue\n print (i)\n", "9\n7\n6\n" ] ], [ [ "#The else statement", "_____no_output_____" ] ], [ [ "i = 10\nwhile i>6:\n i = i - 1\n print (i)\nelse:\n print (\"i is no longer greater than 6\")", "9\n8\n7\n6\ni is no longer greater than 6\n" ] ], [ [ "###Aplication 1", "_____no_output_____" ] ], [ [ "#WHILE LOOP\nx = 0\nwhile x <= 10:\n print (\"Value\", x)\n x+=1\n", "Value 0\nValue 1\nValue 2\nValue 3\nValue 4\nValue 5\nValue 6\nValue 7\nValue 8\nValue 9\nValue 10\n" ], [ "#FOR LOOPS\nvalue = [\"Value 1\", \"Value 2\", \"Value 3\", \"Value 4\", \"Value 5\",\"Value 6\", \"Value 7\", \"Value 8\", \"Value 9\", \"Value 10\"]\n\nfor x in value:\n print (x)", "Value 1\nValue 2\nValue 3\nValue 4\nValue 5\nValue 6\nValue 7\nValue 8\nValue 9\nValue 10\n" ] ], [ [ "###Application 2 ", "_____no_output_____" ] ], [ [ "i = 20\nwhile i>4:\n i -= 1\n print (i)\n\nelse:\n print ('i is no longer greater than 3')", "19\n18\n17\n16\n15\n14\n13\n12\n11\n10\n9\n8\n7\n6\n5\n4\ni is no longer greater than 3\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ] ]
d01781c4a653c70dc146f2e9387bafda9b1243e8
173,796
ipynb
Jupyter Notebook
.ipynb_checkpoints/Fuzzy - Copy-checkpoint.ipynb
evanezcent/Fuzzing
ad68b6e61f09c25c3dd04d777087b9320ba7d0ca
[ "MIT" ]
null
null
null
.ipynb_checkpoints/Fuzzy - Copy-checkpoint.ipynb
evanezcent/Fuzzing
ad68b6e61f09c25c3dd04d777087b9320ba7d0ca
[ "MIT" ]
null
null
null
.ipynb_checkpoints/Fuzzy - Copy-checkpoint.ipynb
evanezcent/Fuzzing
ad68b6e61f09c25c3dd04d777087b9320ba7d0ca
[ "MIT" ]
null
null
null
125.393939
21,900
0.834496
[ [ [ "import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt", "_____no_output_____" ], [ "class Datafuzzy():\n def __init__(self, score, decission):\n self.score = score\n self.decission = decission", "_____no_output_____" ], [ "markFollower = [0, 15000, 33000, 51000, 79000, 100000]\nmarkEngagement = [0, 0.6, 1.7, 5, 7, 8, 10]\nlingFollower = ['NANO', 'MICRO', 'MEDIUM']\nlingEngagement = ['NANO', 'MICRO', 'MEDIUM', 'MEGA']", "_____no_output_____" ] ], [ [ "### PLOT FOR FOLLOWER", "_____no_output_____" ] ], [ [ "# THE FOLLOWERS'S VALUE AND NAME \n\nplt.plot(markFollower[:3], [1, 1, 0])\nplt.suptitle(\"FOLLOWER - NANO\")\nplt.show()\n\nplt.plot(markFollower[1:5], [0, 1, 1,0])\nplt.suptitle(\"FOLLOWER - MICRO\")\nplt.show()\n\nplt.plot(markFollower[3:], [0, 1, 1])\nplt.suptitle(\"FOLLOWER - MEDIUM\")\nplt.show()\n\nplt.plot(markFollower[:3], [1, 1, 0], label=\"NANO\")\nplt.plot(markFollower[1:5], [0, 1, 1,0], label=\"MICRO\")\nplt.plot(markFollower[3:], [0, 1, 1], label=\"MEDIUM\")\nplt.suptitle(\"FOLLOWER\")\nplt.show()", "_____no_output_____" ] ], [ [ "### PLOT FOR LINGUSITIC", "_____no_output_____" ] ], [ [ "# THE LINGUISTIC'S VALUE AND NAME \nmarkEngagement = [0, 0.6, 1.7, 4.7, 6.9, 8, 10]\nplt.plot(markEngagement[:3], [1, 1, 0])\nplt.suptitle(\"ENGAGEMENT - NANO\")\nplt.show()\n\nplt.plot(markEngagement[1:4], [0, 1, 0])\nplt.suptitle(\"ENGAGEMENT - MICRO\")\nplt.show()\n\nplt.plot(markEngagement[2:6], [0, 1, 1, 0])\nplt.suptitle(\"ENGAGEMENT - MEDIUM\")\nplt.show()\n\nplt.plot(markEngagement[4:], [0, 1, 1])\nplt.suptitle(\"ENGAGEMENT - MEGA\")\nplt.show()\n\nplt.plot(markEngagement[:3], [1, 1, 0], label=\"NANO\")\nplt.plot(markEngagement[1:4], [0, 1, 0], label=\"MICRO\")\nplt.plot(markEngagement[2:6], [0, 1, 1, 0], label=\"MEDIUM\")\nplt.plot(markEngagement[4:], [0, 1, 1], label=\"MEGA\")\n\nplt.suptitle(\"ENGAGEMENT\")\nplt.show()", "_____no_output_____" ] ], [ [ "## Fuzzification", "_____no_output_____" ] ], [ [ "# FOLLOWER=========================================\n# membership function\n\ndef fuzzyFollower(countFol):\n follower = []\n \n # STABLE GRAPH\n if (markFollower[0] <= countFol and countFol < markFollower[1]):\n scoreFuzzy = 1\n follower.append(Datafuzzy(scoreFuzzy, lingFollower[0]))\n\n # GRAPH DOWN\n elif (markFollower[1] <= countFol and countFol <= markFollower[2]):\n scoreFuzzy = np.absolute((markFollower[2] - countFol) / (markFollower[2] - markFollower[1]))\n follower.append(Datafuzzy(scoreFuzzy, lingFollower[0]))\n \n # MICRO\n # GRAPH UP\n if (markFollower[1] <= countFol and countFol <= markFollower[2]):\n scoreFuzzy = 1 - np.absolute((markFollower[2] - countFol) / (markFollower[2] - markFollower[1])) \n follower.append(Datafuzzy(scoreFuzzy, lingFollower[1]))\n \n # STABLE GRAPH\n elif (markFollower[2] < countFol and countFol < markFollower[3]):\n scoreFuzzy = 1\n follower.append(Datafuzzy(scoreFuzzy, lingFollower[1]))\n \n # GRAPH DOWN\n elif (markFollower[3] <= countFol and countFol <= markFollower[4]):\n scoreFuzzy = np.absolute((markFollower[4] - countFol) / (markFollower[4] - markFollower[3]))\n follower.append(Datafuzzy(scoreFuzzy, lingFollower[1]))\n\n # MEDIUM\n # GRAPH UP\n if (markFollower[3] <= countFol and countFol <= markFollower[4]):\n scoreFuzzy = 1 - scoreFuzzy\n follower.append(Datafuzzy(scoreFuzzy, lingFollower[2]))\n \n # STABLE GRAPH\n elif (countFol > markFollower[4]):\n scoreFuzzy = 1\n follower.append(Datafuzzy(scoreFuzzy, lingFollower[2]))\n \n return follower\n \n# ENGAGEMENT RATE =========================================\n# 
membership function\ndef fuzzyEngagement(countEng):\n engagement = []\n # STABLE GRAPH\n if (markEngagement[0] < countEng and countEng < markEngagement[1]):\n scoreFuzzy = 1\n engagement.append(Datafuzzy(scoreFuzzy, lingEngagement[0]))\n \n # GRAPH DOWN\n elif (markEngagement[1] <= countEng and countEng < markEngagement[2]):\n scoreFuzzy = np.absolute((markEngagement[2] - countEng) / (markEngagement[2] - markEngagement[1]))\n engagement.append(Datafuzzy(scoreFuzzy, lingEngagement[0]))\n \n # MICRO\n # THE GRAPH GOES UP\n if (markEngagement[1] <= countEng and countEng < markEngagement[2]):\n scoreFuzzy = 1 - scoreFuzzy\n engagement.append(Datafuzzy(scoreFuzzy, lingEngagement[1]))\n\n # GRAPH DOWN\n elif (markEngagement[2] <= countEng and countEng < markEngagement[3]):\n scoreFuzzy = np.absolute((markEngagement[3] - countEng) / (markEngagement[3] - markEngagement[2]))\n engagement.append(Datafuzzy(scoreFuzzy, lingEngagement[1]))\n \n #MEDIUM\n # THE GRAPH GOES UP\n if (markEngagement[2] <= countEng and countEng < markEngagement[3]):\n scoreFuzzy = 1 - scoreFuzzy\n engagement.append(Datafuzzy(scoreFuzzy, lingEngagement[2]))\n \n # STABLE GRAPH\n elif (markEngagement[3] <= countEng and countEng < markEngagement[4]):\n scoreFuzzy = 1\n engagement.append(Datafuzzy(scoreFuzzy, lingEngagement[2]))\n\n # GRAPH DOWN\n elif (markEngagement[4] <= countEng and countEng < markEngagement[5]):\n scoreFuzzy = np.absolute((markEngagement[5] - countEng) / (markEngagement[5] - markEngagement[4]))\n engagement.append(Datafuzzy(scoreFuzzy, lingEngagement[2]))\n\n # MEGA\n # THE GRAPH GOES UP\n if (markEngagement[4] <= countEng and countEng < markEngagement[5]):\n scoreFuzzy = 1 - scoreFuzzy \n engagement.append(Datafuzzy(scoreFuzzy, lingEngagement[3]))\n \n # STABLE GRAPH\n elif (countEng > markEngagement[5]):\n scoreFuzzy = 1\n engagement.append(Datafuzzy(scoreFuzzy, lingEngagement[3]))\n \n return engagement", "_____no_output_____" ] ], [ [ "## Inference", "_____no_output_____" ] ], [ [ "def cekDecission(follower, engagement):\n temp_yes = []\n temp_no = []\n if (follower.decission == \"NANO\"):\n # Get minimal score fuzzy every decision NO or YES\n temp_yes.append(min(follower.score,engagement[0].score))\n\n # if get 2 data fuzzy Engagement\n if (len(engagement) > 1):\n temp_yes.append(min(follower.score,engagement[1].score))\n\n elif (follower.decission == \"MICRO\"):\n if (engagement[0].decission == \"NANO\"):\n temp_no.append(min(follower.score, engagement[0].score))\n else:\n temp_yes.append(min(follower.score, engagement[0].score))\n\n if (len(engagement) > 1):\n if (engagement[1].decission == \"NANO\"):\n temp_no.append(min(follower.score, engagement[1].score))\n else:\n temp_yes.append(min(follower.score, engagement[1].score))\n\n else:\n if (engagement[0].decission == \"NANO\" or engagement[0].decission == \"MICRO\"):\n temp_no.append(min(follower.score, engagement[0].score))\n else:\n temp_yes.append(min(follower.score, engagement[0].score))\n\n # if get 2 data fuzzy engagement \n if (len(engagement) > 1):\n if (engagement[1].decission == \"NANO\" or engagement[1].decission == \"MICRO\"):\n temp_no.append(min(follower.score, engagement[1].score))\n else:\n temp_yes.append(min(follower.score, engagement[1].score))\n \n return temp_yes, temp_no", "_____no_output_____" ], [ "# Fuzzy Rules\ndef fuzzyRules(follower, engagement):\n temp_yes = []\n temp_no = []\n temp_y = []\n temp_n = []\n \n temp_yes, temp_no = cekDecission(follower[0], engagement)\n \n # if get 2 data fuzzy Follower \n if (len(follower) 
> 1):\n temp_y, temp_n = cekDecission(follower[1], engagement)\n \n temp_yes += temp_y \n temp_no += temp_n\n \n return temp_yes, temp_no", "_____no_output_____" ] ], [ [ "### Result", "_____no_output_____" ] ], [ [ "# Result\ndef getResult(resultYes, resultNo):\n yes = 0\n no = 0\n\n if(resultNo):\n no = max(resultNo)\n if(resultYes):\n yes = max(resultYes)\n \n return yes, no", "_____no_output_____" ] ], [ [ "### Defuzzification", "_____no_output_____" ] ], [ [ "def finalDecission(yes, no):\n mamdani = (((10 + 20 + 30 + 40 + 50 + 60 + 70) * no) + ((80 + 90 + 100) * yes)) / ((7 * no) + (yes * 3))\n return mamdani", "_____no_output_____" ] ], [ [ "### Main Function", "_____no_output_____" ] ], [ [ "def mainFunction(followerCount, engagementRate):\n follower = fuzzyFollower(followerCount)\n engagement = fuzzyEngagement(engagementRate)\n resultYes, resultNo = fuzzyRules(follower, engagement)\n yes, no = getResult(resultYes, resultNo)\n\n return finalDecission(yes, no)", "_____no_output_____" ], [ "data = pd.read_csv('influencers.csv')\ndata", "_____no_output_____" ], [ "hasil = []\nresult = []\nidd = []\n\nfor i in range (len(data)):\n # Insert ID and the score into the list\n hasil.append([data.loc[i, 'id'], mainFunction(data.loc[i, 'followerCount'], data.loc[i, 'engagementRate'])])\n result.append([data.loc[i, 'id'], (data.loc[i, 'followerCount'] * data.loc[i, 'engagementRate'] / 100)])\n \n# Sorted list of hasil by fuzzy score DECREMENT\nhasil.sort(key=lambda x:x[1], reverse=True)\nresult.sort(key=lambda x:x[1], reverse=True)", "_____no_output_____" ], [ "result = result[:20]\nhasil = hasil[:20]\nidd = [row[0] for row in result]", "_____no_output_____" ], [ "hasil", "_____no_output_____" ], [ "idd", "_____no_output_____" ], [ "def cekAkurasi(hasil, result):\n count = 0\n for i in range(len(hasil)):\n if (hasil[i][0] in idd):\n count += 1\n return count", "_____no_output_____" ], [ "print(\"AKURASI : \", cekAkurasi(hasil, result)/20*100, \" %\")", "AKURASI : 30.0 %\n" ], [ "chosen = pd.DataFrame(hasil[:20], columns=['ID', 'Score'])", "_____no_output_____" ], [ "chosen", "_____no_output_____" ], [ "chosen.to_csv('choosen.csv')", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
d01782a2a2a80ab19dfb2c61e4933cde96979466
15,172
ipynb
Jupyter Notebook
trt-Jetbot-RoadFollowing_with_CollisionRESNet_TRT.ipynb
tomMEM/Jetbot-Project
d84cb09a8a51208437734280e4a5c927a8b034a1
[ "MIT" ]
8
2020-09-24T06:43:54.000Z
2022-01-23T20:52:43.000Z
trt-Jetbot-RoadFollowing_with_CollisionRESNet_TRT.ipynb
tomMEM/Jetbot-Project
d84cb09a8a51208437734280e4a5c927a8b034a1
[ "MIT" ]
4
2021-02-24T22:07:02.000Z
2021-09-09T03:24:43.000Z
trt-Jetbot-RoadFollowing_with_CollisionRESNet_TRT.ipynb
tomMEM/Jetbot-Project
d84cb09a8a51208437734280e4a5c927a8b034a1
[ "MIT" ]
6
2020-09-14T00:22:35.000Z
2021-09-25T09:26:44.000Z
34.639269
288
0.587332
[ [ [ "### Road Following - Live demo (TensorRT) with collision avoidance\n### Added collision avoidance ResNet18 TRT\n### threshold between free and blocked is the controller - action: just a pause as long the object is in front or by time\n### increase in speed_gain requires some small increase in steer_gain (once a slider is blue (mouse click), arrow keys left/right can be used)\n### 10/11/2020", "_____no_output_____" ], [ "# TensorRT", "_____no_output_____" ] ], [ [ "import torch\ndevice = torch.device('cuda')", "_____no_output_____" ] ], [ [ "Load the TRT optimized models by executing the cell below", "_____no_output_____" ] ], [ [ "import torch\nfrom torch2trt import TRTModule\n\nmodel_trt = TRTModule()\nmodel_trt.load_state_dict(torch.load('best_steering_model_xy_trt.pth')) # well trained road following model\n\n\nmodel_trt_collision = TRTModule()\nmodel_trt_collision.load_state_dict(torch.load('best_model_trt.pth')) # anti collision model trained for one object to block and street signals (ground, strips) as free", "_____no_output_____" ] ], [ [ "### Creating the Pre-Processing Function", "_____no_output_____" ], [ "We have now loaded our model, but there's a slight issue. The format that we trained our model doesnt exactly match the format of the camera. To do that, we need to do some preprocessing. This involves the following steps:\n\n1. Convert from HWC layout to CHW layout\n2. Normalize using same parameters as we did during training (our camera provides values in [0, 255] range and training loaded images in [0, 1] range so we need to scale by 255.0\n3. Transfer the data from CPU memory to GPU memory\n4. Add a batch dimension", "_____no_output_____" ] ], [ [ "import torchvision.transforms as transforms\nimport torch.nn.functional as F\nimport cv2\nimport PIL.Image\nimport numpy as np\n\nmean = torch.Tensor([0.485, 0.456, 0.406]).cuda().half()\nstd = torch.Tensor([0.229, 0.224, 0.225]).cuda().half()\n\ndef preprocess(image):\n image = PIL.Image.fromarray(image)\n image = transforms.functional.to_tensor(image).to(device).half()\n image.sub_(mean[:, None, None]).div_(std[:, None, None])\n return image[None, ...]", "_____no_output_____" ] ], [ [ "Awesome! We've now defined our pre-processing function which can convert images from the camera format to the neural network input format.\n\nNow, let's start and display our camera. You should be pretty familiar with this by now. ", "_____no_output_____" ] ], [ [ "from IPython.display import display\nimport ipywidgets\nimport traitlets\nfrom jetbot import Camera, bgr8_to_jpeg\n\ncamera = Camera()", "_____no_output_____" ], [ "import IPython\n\nimage_widget = ipywidgets.Image()\n\ntraitlets.dlink((camera, 'value'), (image_widget, 'value'), transform=bgr8_to_jpeg)", "_____no_output_____" ] ], [ [ "We'll also create our robot instance which we'll need to drive the motors.", "_____no_output_____" ] ], [ [ "from jetbot import Robot\n\nrobot = Robot()", "_____no_output_____" ] ], [ [ "Now, we will define sliders to control JetBot\n> Note: We have initialize the slider values for best known configurations, however these might not work for your dataset, therefore please increase or decrease the sliders according to your setup and environment\n\n1. Speed Control (speed_gain_slider): To start your JetBot increase ``speed_gain_slider`` \n2. Steering Gain Control (steering_gain_sloder): If you see JetBot is woblling, you need to reduce ``steering_gain_slider`` till it is smooth\n3. 
Steering Bias control (steering_bias_slider): If you see the JetBot biased toward the extreme right or left side of the track, adjust this slider until the JetBot follows the line or track in the center. This accounts for motor biases as well as camera offsets\n\n> Note: You should play around with the sliders above at a lower speed to get smooth JetBot road-following behavior.", "_____no_output_____" ] ], [ [ "speed_gain_slider = ipywidgets.FloatSlider(min=0.0, max=1.0, step=0.01, description='speed gain')\nsteering_gain_slider = ipywidgets.FloatSlider(min=0.0, max=1.0, step=0.01, value=0.10, description='steering gain')\nsteering_dgain_slider = ipywidgets.FloatSlider(min=0.0, max=0.5, step=0.001, value=0.23, description='steering kd')\nsteering_bias_slider = ipywidgets.FloatSlider(min=-0.3, max=0.3, step=0.01, value=0.0, description='steering bias')\n\ndisplay(speed_gain_slider, steering_gain_slider, steering_dgain_slider, steering_bias_slider)\n\n#anti collision ---------------------------------------------------------------------------------------------------\nblocked_slider = ipywidgets.FloatSlider(description='blocked', min=0.0, max=1.0, orientation='horizontal')\nstopduration_slider= ipywidgets.IntSlider(min=1, max=1000, step=1, value=10, description='Manu. time stop') #anti-collision stop time\n#set value according to the common threshold, e.g. 0.8\nblock_threshold= ipywidgets.FloatSlider(min=0, max=1.2, step=0.01, value=0.8, description='Manu. bl threshold') #anti-collision block probability\n\ndisplay(image_widget)\nd2 = IPython.display.display(\"\", display_id=2)\n\ndisplay(ipywidgets.HBox([blocked_slider, block_threshold, stopduration_slider]))\n\n# TIME STOP slider manually selects the stop duration when an object has been detected\n", "_____no_output_____" ], [ "#x_slider = ipywidgets.FloatSlider(min=-1.0, max=1.0, description='x')\n#y_slider = ipywidgets.FloatSlider(min=0, max=1.0, orientation='vertical', description='y')\n#steering_slider = ipywidgets.FloatSlider(min=-1.0, max=1.0, description='steering')\n#speed_slider = ipywidgets.FloatSlider(min=0, max=1.0, orientation='vertical', description='speed')\n\n#display(ipywidgets.HBox([y_slider, speed_slider,x_slider, steering_slider])) #sliders take time, reducing FPS by a couple of frames per second\n\n#observation sliders only", "_____no_output_____" ], [ "from threading import Thread \n\ndef display_class_probability(prob_blocked):\n global blocked_slider\n blocked_slider.value = prob_blocked\n \n return\n\ndef model_new(image_preproc):\n global model_trt_collision,angle_last\n xy = model_trt(image_preproc).detach().float().cpu().numpy().flatten()\n x = xy[0] \n y = (0.5 - xy[1]) / 2.0\n angle=math.atan2(x, y) \n pid =angle * steer_gain + (angle - angle_last) * steer_dgain\n steer_val = pid + steer_bias \n \n angle_last = angle\n \n robot.left_motor.value = max(min(speed_value + steer_val, 1.0), 0.0)\n robot.right_motor.value = max(min(speed_value - steer_val, 1.0), 0.0)\n return", "_____no_output_____" ] ], [ [ "Next, we'll create a function that will get called whenever the camera's value changes. This function will do the following steps:\n\n1. Pre-process the camera image\n2. Execute the neural network\n3. Compute the approximate steering value\n4. 
Control the motors using proportional/derivative (PD) control", "_____no_output_____" ] ], [ [ "import time\nimport os\nimport math\n\nangle = 0.0\nangle_last = 0.0\nangle_last_block = 0\n\ncount_stops = 0\ngo_on = 1\nstop_time = 20 # number of frames to remain stopped\nx = 0.0\ny = 0.0\nspeed_value = speed_gain_slider.value\nt1 = 0\nroad_following = 1\nspeed_value_block = 0\n\n\ndef execute(change):\n    global angle, angle_last, angle_last_block, blocked_slider, robot, count_stops, stop_time, go_on, x, y, block_threshold\n    global speed_value, steer_gain, steer_dgain, steer_bias, t1, model_trt, model_trt_collision, road_following, speed_value_block\n    \n    steer_gain = steering_gain_slider.value\n    steer_dgain = steering_dgain_slider.value\n    steer_bias = steering_bias_slider.value\n    \n    image_preproc = preprocess(change['new']).to(device)\n    # anti-collision model\n    prob_blocked = float(F.softmax(model_trt_collision(image_preproc), dim=1).flatten()[0])\n    \n    # blocked_slider.value = prob_blocked  # non-threaded alternative\n    # display the detection probability value in a separate thread\n    t = Thread(target=display_class_probability, args=(prob_blocked,), daemon=False)\n    t.start()\n    \n    stop_time = stopduration_slider.value\n    \n    if go_on == 1:\n        if prob_blocked > block_threshold.value: # threshold should be above 0.5\n            # start of collision avoidance\n            count_stops += 1\n            go_on = 2\n            road_following = 2\n            x = 0.0 # set steering to zero\n            y = 0 # set steering to zero\n            speed_value_block = 0 # set speed to zero (or negative, or turn)\n            # end of collision avoidance\n        else:\n            # start of road following\n            go_on = 1\n            count_stops = 0\n            speed_value = speed_gain_slider.value\n            t = Thread(target=model_new, args=(image_preproc,), daemon=True)\n            t.start()\n            \n            road_following = 1\n    else:\n        count_stops += 1\n        if count_stops < stop_time:\n            go_on = 2\n        else:\n            go_on = 1\n            count_stops = 0\n            road_following = 1\n    \n    #x_slider.value = x # takes time, costs a few FPS\n    #y_slider.value = y # y_speed\n    \n    if road_following > 1:\n        angle_block = math.atan2(x, y)\n        pid = angle_block * steer_gain + (angle_block - angle_last_block) * steer_dgain\n        steer_val_block = pid + steer_bias\n        angle_last_block = angle_block\n        \n        robot.left_motor.value = max(min(speed_value_block + steer_val_block, 1.0), 0.0)\n        robot.right_motor.value = max(min(speed_value_block - steer_val_block, 1.0), 0.0)\n    \n    t2 = time.time()\n    s = f\"\"\"{int(1/(t2-t1))} FPS\"\"\"\n    d2.update(IPython.display.HTML(s))\n    t1 = time.time()\n    \n    \nexecute({'new': camera.value})", "_____no_output_____" ] ], [ [ "Cool! We've created our neural network execution function, but now we need to attach it to the camera for processing.\n\nWe accomplish that with the observe function.", "_____no_output_____" ], [ ">WARNING: This code will move the robot!! Please make sure your robot has clearance and that it is on the Lego or track you have collected data on. The road follower should work, but the neural network is only as good as the data it's trained on!", "_____no_output_____" ] ], [ [ "camera.observe(execute, names='value')", "_____no_output_____" ] ], [ [ "Awesome! If your robot is plugged in, it should now be generating new commands with each new camera frame. 
\n\nYou can now place JetBot on the Lego or track you have collected data on and see whether it can follow the track.\n\nIf you want to stop this behavior, you can detach this callback by executing the code below.", "_____no_output_____" ] ], [ [ "import time\n\ncamera.unobserve(execute, names='value')\n\ntime.sleep(0.1)  # add a small sleep to make sure frames have finished processing\n\nrobot.stop()", "_____no_output_____" ], [ "camera.stop()", "_____no_output_____" ] ], [ [ "### Conclusion\nThat's it for this live demo! Hopefully you had some fun seeing your JetBot moving smoothly on the track, following the road!!!\n\nIf your JetBot wasn't following the road very well, try to spot where it fails. The beauty is that we can collect more data for these failure scenarios and the JetBot should get even better :)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ] ]
d01789ac2f115b552e92fd354094dbeaa570fa1a
4,269
ipynb
Jupyter Notebook
Day 5 Assignment2.ipynb
puttaraju-Kv/letsupgrade
9393464081199f2d4a1c8380d0c4826ead32e9a4
[ "Apache-2.0" ]
null
null
null
Day 5 Assignment2.ipynb
puttaraju-Kv/letsupgrade
9393464081199f2d4a1c8380d0c4826ead32e9a4
[ "Apache-2.0" ]
null
null
null
Day 5 Assignment2.ipynb
puttaraju-Kv/letsupgrade
9393464081199f2d4a1c8380d0c4826ead32e9a4
[ "Apache-2.0" ]
null
null
null
67.761905
3,197
0.599204
[ [ [ "# for prime numbers using filter function ", "_____no_output_____" ], [ "n = range(1, 2500)\nfor i in range(2, 8):\n n= list(filter(lambda n: n == i or n%i, n))\n\nprint(n)\n", "[1, 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97, 101, 103, 107, 109, 113, 121, 127, 131, 137, 139, 143, 149, 151, 157, 163, 167, 169, 173, 179, 181, 187, 191, 193, 197, 199, 209, 211, 221, 223, 227, 229, 233, 239, 241, 247, 251, 253, 257, 263, 269, 271, 277, 281, 283, 289, 293, 299, 307, 311, 313, 317, 319, 323, 331, 337, 341, 347, 349, 353, 359, 361, 367, 373, 377, 379, 383, 389, 391, 397, 401, 403, 407, 409, 419, 421, 431, 433, 437, 439, 443, 449, 451, 457, 461, 463, 467, 473, 479, 481, 487, 491, 493, 499, 503, 509, 517, 521, 523, 527, 529, 533, 541, 547, 551, 557, 559, 563, 569, 571, 577, 583, 587, 589, 593, 599, 601, 607, 611, 613, 617, 619, 629, 631, 641, 643, 647, 649, 653, 659, 661, 667, 671, 673, 677, 683, 689, 691, 697, 701, 703, 709, 713, 719, 727, 731, 733, 737, 739, 743, 751, 757, 761, 767, 769, 773, 779, 781, 787, 793, 797, 799, 803, 809, 811, 817, 821, 823, 827, 829, 839, 841, 851, 853, 857, 859, 863, 869, 871, 877, 881, 883, 887, 893, 899, 901, 907, 911, 913, 919, 923, 929, 937, 941, 943, 947, 949, 953, 961, 967, 971, 977, 979, 983, 989, 991, 997, 1003, 1007, 1009, 1013, 1019, 1021, 1027, 1031, 1033, 1037, 1039, 1049, 1051, 1061, 1063, 1067, 1069, 1073, 1079, 1081, 1087, 1091, 1093, 1097, 1103, 1109, 1111, 1117, 1121, 1123, 1129, 1133, 1139, 1147, 1151, 1153, 1157, 1159, 1163, 1171, 1177, 1181, 1187, 1189, 1193, 1199, 1201, 1207, 1213, 1217, 1219, 1223, 1229, 1231, 1237, 1241, 1243, 1247, 1249, 1259, 1261, 1271, 1273, 1277, 1279, 1283, 1289, 1291, 1297, 1301, 1303, 1307, 1313, 1319, 1321, 1327, 1331, 1333, 1339, 1343, 1349, 1357, 1361, 1363, 1367, 1369, 1373, 1381, 1387, 1391, 1397, 1399, 1403, 1409, 1411, 1417, 1423, 1427, 1429, 1433, 1439, 1441, 1447, 1451, 1453, 1457, 1459, 1469, 1471, 1481, 1483, 1487, 1489, 1493, 1499, 1501, 1507, 1511, 1513, 1517, 1523, 1529, 1531, 1537, 1541, 1543, 1549, 1553, 1559, 1567, 1571, 1573, 1577, 1579, 1583, 1591, 1597, 1601, 1607, 1609, 1613, 1619, 1621, 1627, 1633, 1637, 1639, 1643, 1649, 1651, 1657, 1661, 1663, 1667, 1669, 1679, 1681, 1691, 1693, 1697, 1699, 1703, 1709, 1711, 1717, 1721, 1723, 1727, 1733, 1739, 1741, 1747, 1751, 1753, 1759, 1763, 1769, 1777, 1781, 1783, 1787, 1789, 1793, 1801, 1807, 1811, 1817, 1819, 1823, 1829, 1831, 1837, 1843, 1847, 1849, 1853, 1859, 1861, 1867, 1871, 1873, 1877, 1879, 1889, 1891, 1901, 1903, 1907, 1909, 1913, 1919, 1921, 1927, 1931, 1933, 1937, 1943, 1949, 1951, 1957, 1961, 1963, 1969, 1973, 1979, 1987, 1991, 1993, 1997, 1999, 2003, 2011, 2017, 2021, 2027, 2029, 2033, 2039, 2041, 2047, 2053, 2057, 2059, 2063, 2069, 2071, 2077, 2081, 2083, 2087, 2089, 2099, 2101, 2111, 2113, 2117, 2119, 2123, 2129, 2131, 2137, 2141, 2143, 2147, 2153, 2159, 2161, 2167, 2171, 2173, 2179, 2183, 2189, 2197, 2201, 2203, 2207, 2209, 2213, 2221, 2227, 2231, 2237, 2239, 2243, 2249, 2251, 2257, 2263, 2267, 2269, 2273, 2279, 2281, 2287, 2291, 2293, 2297, 2299, 2309, 2311, 2321, 2323, 2327, 2329, 2333, 2339, 2341, 2347, 2351, 2353, 2357, 2363, 2369, 2371, 2377, 2381, 2383, 2389, 2393, 2399, 2407, 2411, 2413, 2417, 2419, 2423, 2431, 2437, 2441, 2447, 2449, 2453, 2459, 2461, 2467, 2473, 2477, 2479, 2483, 2489, 2491, 2497]\n" ] ] ]
[ "code" ]
[ [ "code", "code" ] ]
d0178c939eed1b6614c82ee82d865e4971e80bd4
9,219
ipynb
Jupyter Notebook
elasticsearch_install.ipynb
xSakix/AI_colan_notebooks
b7a40384811e77bb5ff12689596362a9f0356c83
[ "MIT" ]
2
2020-02-21T09:53:28.000Z
2020-07-20T20:24:14.000Z
elasticsearch_install.ipynb
xSakix/AI_colab_notebooks
b7a40384811e77bb5ff12689596362a9f0356c83
[ "MIT" ]
null
null
null
elasticsearch_install.ipynb
xSakix/AI_colab_notebooks
b7a40384811e77bb5ff12689596362a9f0356c83
[ "MIT" ]
null
null
null
33.645985
1,216
0.519796
[ [ [ "<a href=\"https://colab.research.google.com/github/xSakix/AI_colab_notebooks/blob/master/elasticsearch_install.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "#Elastic search in Collab\n\nHad to install elastic search in colab for 'reasons' and this is the way it worked for me. Might be usefull for someone else also.\n\nWorks with 7.9.2. Probably could be run also with 7.14.0, but didn't have time to debug the issues. If you want, you can try and just run the instance under the 'elasticsearch' user to get the proper error log.", "_____no_output_____" ] ], [ [ "#7.9.1 works with ES 7.9.2\n!pip install -Iv elasticsearch==7.9.1", "_____no_output_____" ], [ "#download ES 7.92 and extract\n%%bash\n\nwget -q https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-oss-7.9.2-linux-x86_64.tar.gz\nwget -q https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-oss-7.9.2-linux-x86_64.tar.gz.sha512\nshasum -a 512 -c elasticsearch-oss-7.9.2-linux-x86_64.tar.gz.sha512 \ntar -xzf elasticsearch-oss-7.9.2-linux-x86_64.tar.gz", "elasticsearch-oss-7.9.2-linux-x86_64.tar.gz: OK\n" ], [ "# create user elasticsearch and group elasticsearch, under which will the ES instance be running\n# ES can't run under root\n!sudo useradd elasticsearch\n!sudo grep elasticsearch /etc/passwd \n!sudo groupadd elasticsearch\n!sudo usermod -a -G elasticsearch elasticsearch\n!grep elasticsearch /etc/group", "elasticsearch:x:1000:1000::/home/elasticsearch:/bin/sh\ngroupadd: group 'elasticsearch' already exists\nelasticsearch:x:1000:elasticsearch\n" ], [ "# change the directory rights to user:group \n!sudo chown elasticsearch:elasticsearch -R elasticsearch-7.9.2", "_____no_output_____" ], [ "#run ES instance as a daemon\n%%bash --bg\nsudo -H -u elasticsearch elasticsearch-7.9.2/bin/elasticsearch", "Starting job # 0 in a separate thread.\n" ], [ "# give time to start up\nimport time\ntime.sleep(20)", "_____no_output_____" ], [ "#print the process\n%%bash\n\nps -ef | grep elastic", "root 151 149 0 09:37 ? 00:00:00 sudo -H -u elasticsearch elasticsearch-7.9.2/bin/elasticsearch\nelastic+ 152 151 99 09:37 ? 00:00:20 /content/elasticsearch-7.9.2/jdk/bin/java -Xshare:auto -Des.networkaddress.cache.ttl=60 -Des.networkaddress.cache.negative.ttl=10 -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -XX:+ShowCodeDetailsInExceptionMessages -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dio.netty.allocator.numDirectArenas=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.locale.providers=SPI,COMPAT -Xms1g -Xmx1g -XX:+UseG1GC -XX:G1ReservePercent=25 -XX:InitiatingHeapOccupancyPercent=30 -Djava.io.tmpdir=/tmp/elasticsearch-13828270847204549773 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=data -XX:ErrorFile=logs/hs_err_pid%p.log -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m -XX:MaxDirectMemorySize=536870912 -Des.path.home=/content/elasticsearch-7.9.2 -Des.path.conf=/content/elasticsearch-7.9.2/config -Des.distribution.flavor=oss -Des.distribution.type=tar -Des.bundled_jdk=true -cp /content/elasticsearch-7.9.2/lib/* org.elasticsearch.bootstrap.Elasticsearch\nroot 386 384 0 09:37 ? 
00:00:00 grep elastic\n" ], [ "#test the instance\n%%bash\n\ncurl -sX GET \"localhost:9200/\"", "{\n \"name\" : \"80196037624b\",\n \"cluster_name\" : \"elasticsearch\",\n \"cluster_uuid\" : \"LQGhp_Y6TNGzHsw0lt4JEA\",\n \"version\" : {\n \"number\" : \"7.9.2\",\n \"build_flavor\" : \"oss\",\n \"build_type\" : \"tar\",\n \"build_hash\" : \"d34da0ea4a966c4e49417f2da2f244e3e97b4e6e\",\n \"build_date\" : \"2020-09-23T00:45:33.626720Z\",\n \"build_snapshot\" : false,\n \"lucene_version\" : \"8.6.2\",\n \"minimum_wire_compatibility_version\" : \"6.8.0\",\n \"minimum_index_compatibility_version\" : \"6.0.0-beta1\"\n },\n \"tagline\" : \"You Know, for Search\"\n}\n" ], [ "# test the python client/lib\nfrom elasticsearch import Elasticsearch\nes = Elasticsearch()\nes.ping()", "/usr/local/lib/python3.7/dist-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.26.6) or chardet (3.0.4) doesn't match a supported version!\n RequestsDependencyWarning)\n" ] ] ]
[ "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
d017ab6391760c564b39a153da30c1eb22c3e649
5,597
ipynb
Jupyter Notebook
APIs/SentinelSat/dec_working/stacking.ipynb
SumanjaliDamarla/remote-sensing
73e7cbc10932370a6c26fc0d940060aadd786834
[ "MIT" ]
1
2020-04-11T23:16:13.000Z
2020-04-11T23:16:13.000Z
APIs/SentinelSat/dec_working/stacking.ipynb
SumanjaliDamarla/semiautomated-vegetation-analysis
73e7cbc10932370a6c26fc0d940060aadd786834
[ "MIT" ]
5
2020-04-18T00:12:10.000Z
2020-04-26T00:11:07.000Z
APIs/SentinelSat/dec_working/stacking.ipynb
SumanjaliDamarla/remote-sensing
73e7cbc10932370a6c26fc0d940060aadd786834
[ "MIT" ]
2
2021-01-25T13:53:35.000Z
2021-08-21T18:59:25.000Z
33.118343
124
0.512775
[ [ [ "import os\nimport shutil\nimport rasterio\nfrom glob import glob", "_____no_output_____" ], [ "data = os.getcwd() + \"\\\\data\"\nmonths_list = glob(data+\"\\\\*_data\")\nmonths_list", "_____no_output_____" ], [ "if not os.path.exists(os.getcwd()+\"\\\\stacked_images\"):\n stack_folder = os.makedirs(os.getcwd()+\"\\\\stacked_images\")\nfor month in months_list:\n dst = month.split('\\\\')[-1]\n if not os.path.exists(os.getcwd()+\"\\\\stacked_images\\\\\"+dst):\n os.makedirs(os.getcwd()+\"\\\\stacked_images\\\\\"+dst)", "_____no_output_____" ], [ "#stacking\nfor month in months_list:\n print(month)\n data_list = glob(month+\"\\\\*\")\n num_scenes = len(data_list)\n i = 0\n for scene in data_list:\n i = i + 1\n path = scene + \"\\\\GRANULE\"\n data_folder = glob(path + \"\\\\*\")\n data_folder[0] = data_folder[0] + \"\\\\IMG_DATA\\\\R10m\"\n tifFiles = []\n tifFiles.append(glob(data_folder[0] + \"\\\\*B08_10m.tif\")[0])\n tifFiles.append(glob(data_folder[0] + \"\\\\*B04_10m.tif\")[0])\n tifFiles.append(glob(data_folder[0] + \"\\\\*B03_10m.tif\")[0])\n tifFiles.append(glob(data_folder[0] + \"\\\\*B02_10m.tif\")[0])\n with rasterio.open(tifFiles[0]) as src0:\n meta = src0.meta\n meta.update(count = len(tifFiles))\n dest = tifFiles[0].split('\\\\')[-1].split('.')[0].split('_')[:2]\n dest_name = dest[0] + \"_\" + dest[1] + \".tif\"\n dest = data_folder[0] +\"\\\\\"+ dest_name\n if not os.path.exists(dest):\n with rasterio.open(dest, 'w', **meta) as dst:\n for id, layer in enumerate(tifFiles, start=1):\n with rasterio.open(layer) as src1:\n dst.write_band(id, src1.read(1))\n print(\"(\" + str(i) + \"/\" + str(num_scenes) + \") Stacked \" + dest_name + \"...\")\n stack_dest_name = scene.split(\"\\\\\")[-1].split('.')[0] + \".tif\"\n stack_dest = os.getcwd() + \"\\\\stacked_images\\\\\"+ month.split('\\\\')[-1] + \"\\\\\" + stack_dest_name\n if not os.path.exists(stack_dest):\n shutil.copy2(dest, stack_dest)\n print(\"(\" + str(i) + \"/\" + str(num_scenes) + \") Copied to stacked_data ...\")", "C:\\Users\\Hello\\Documents\\remote-sensing\\APIs\\SentinelSat\\other-data\\data\\jan_data\n(1/5) Stacked T44PLC_20190101T050211.tif...\n(1/5) Copied to stacked_data ...\n(2/5) Stacked T44PMC_20190101T050211.tif...\n(2/5) Copied to stacked_data ...\n(3/5) Stacked T44PNC_20190101T050211.tif...\n(3/5) Copied to stacked_data ...\n(4/5) Stacked T44QMD_20190101T050211.tif...\n(4/5) Copied to stacked_data ...\n(5/5) Stacked T44QND_20190101T050211.tif...\n(5/5) Copied to stacked_data ...\n" ], [ "#cleanup data folder\n\nfor month in months_list:\n print(month)\n data_list = glob(month+\"\\\\*\")\n for scene in data_list:\n shutil.rmtree(scene)\n print(\"Deleted scene \" + scene.split('\\\\')[-1].split('.')[0] + \"...\")", "C:\\Users\\Hello\\Documents\\remote-sensing\\APIs\\SentinelSat\\other-data\\data\\jan_data\nDeleted scene S2A_MSIL2A_20190101T050211_N0211_R119_T44PLC_20190101T085212...\nDeleted scene S2A_MSIL2A_20190101T050211_N0211_R119_T44PMC_20190101T085212...\nDeleted scene S2A_MSIL2A_20190101T050211_N0211_R119_T44PNC_20190101T085212...\nDeleted scene S2A_MSIL2A_20190101T050211_N0211_R119_T44QMD_20190101T085212...\nDeleted scene S2A_MSIL2A_20190101T050211_N0211_R119_T44QND_20190101T085212...\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code" ] ]
d017afcf2d089d68392721275f1402d7931b38bd
24,703
ipynb
Jupyter Notebook
distribution_files/python/examples/pancreas/hovorka.ipynb
clarissa-albanese/MoonLight
164e00e940e39b932cad125fcaa9786956f10d17
[ "Apache-2.0" ]
5
2020-04-08T08:52:05.000Z
2021-11-22T21:02:13.000Z
distribution_files/python/examples/pancreas/hovorka.ipynb
clarissa-albanese/MoonLight
164e00e940e39b932cad125fcaa9786956f10d17
[ "Apache-2.0" ]
9
2020-03-04T12:15:40.000Z
2020-03-28T09:38:47.000Z
distribution_files/python/examples/pancreas/hovorka.ipynb
clarissa-albanese/MoonLight
164e00e940e39b932cad125fcaa9786956f10d17
[ "Apache-2.0" ]
1
2021-11-17T16:46:04.000Z
2021-11-17T16:46:04.000Z
117.075829
19,000
0.829616
[ [ [ "# %load hovorka.py\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom scipy.integrate import odeint\n\n\ndef model(x, t, t_offset=None):\n w = 100\n ka1 = 0.006 #\n ka2 = 0.06 #\n ka3 = 0.03 #\n kb1 = 0.0034 #\n kb2 = 0.056 #\n kb3 = 0.024 #\n u_b = 0.0555\n tmaxI = 55 #\n VI = 0.12 * w #\n ke = 0.138 #\n k12 = 0.066 #\n VG = 0.16 * w #\n # G = x[0] / VG\n F01 = 0.0097 * w #\n FR = 0\n EGP0 = 0.0161 * w #\n AG = 0.8 #\n Gmolar = 180.1559\n tmaxG = 40 #\n sp = 110 * VG / 18\n l = (x[14] * x[10] + x[13] * x[11] + x[12] * (-(\n - F01 - x[5] * x[0] + k12 * x[1] - FR + EGP0 * (1 - x[7]) + (x[9] * AG * 1000 / Gmolar) * x[8] * np.exp(\n -x[8] / tmaxG) / (tmaxG ** 2)))) + u_b - x[2] / tmaxI,\n\n dxdt = [\n - F01 - x[5] * x[0] + k12 * x[1] - FR + EGP0 * (1 - x[7]) + (x[9] * AG * 1000 / Gmolar) * x[8] * np.exp(\n -x[8] / tmaxG) / (tmaxG ** 2),\n x[5] * x[0] - (k12 + x[6]) * x[1],\n ((x[14] * x[10] + x[13] * x[11] + x[12] * (-(\n - F01 - x[5] * x[0] + k12 * x[1] - FR + EGP0 * (1 - x[7]) + (x[9] * AG * 1000 / Gmolar) * x[8] * np.exp(\n -x[8] / tmaxG) / (tmaxG ** 2)))) + u_b - x[2] / tmaxI) + u_b - x[2] / tmaxI,\n (x[2] - x[3]) / tmaxI,\n x[3] / (tmaxI * VI) - ke * x[4],\n - ka1 * x[5] + kb1 * x[4],\n - ka2 * x[6] + kb2 * x[4],\n - ka3 * x[7] + kb3 * x[4],\n 1,\n 0,\n 0 - (- F01 - x[5] * x[0] + k12 * x[1] - FR + EGP0 * (1 - x[7]) + (x[9] * AG * 1000 / Gmolar) * x[8] * np.exp(\n -x[8] / tmaxG) / (tmaxG ** 2)),\n sp - x[0],\n 0,\n 0,\n 0,\n (sp - x[0])**2,\n (x[8] + t_offset)**2 * (sp - x[0])**2\n ]\n return dxdt", "_____no_output_____" ], [ "w=100\nVG = 0.16 * w\nsp = 110 * VG / 18\n# initial condition\n\nKd = [0, -0.0602, -0.0573, -0.06002, -0.0624]\nKi = [0, -3.53e-07, -3e-07, -1.17e-07, -7.55e-07]\nKp = [0, -6.17e-04, -6.39e-04, -6.76e-04, -5.42e-04]\n\ni=1\ndg1 = np.random.normal(40,10)\ndg2 = np.random.normal(90,10)\ndg3 = np.random.normal(60,10)\n\n# dg1 = 40\n# dg2 = 90\n# dg3 = 60\n\nx0 = [97.77, 19.08024, 3.0525, 3.0525, 0.033551, 0.01899, 0.03128, 0.02681, 0.0, dg1, 0.0, 0.0, Kd[i], Ki[i], Kp[i], 0, 0];\n\n\n# time points\nt_offset=0\nt_sleep = 540\nt_meal = 300\nt = np.arange(0,t_meal,0.2)\n\ny = odeint(model,x0,t,args=(t_offset,))\nytot = y\nttot = t\nystart = y[-1,:]\nystart[8] = 0\nystart[9] = dg2\ny = odeint(model,ystart,t,args=(t_offset,))\nytot = np.vstack([ytot,y])\nttot = np.hstack([ttot,t+ttot[-1]])\nystart = y[-1,:]\nystart[8] = 0\nystart[9] = dg3\nt = np.arange(0,t_meal+t_sleep,0.2)\ny = odeint(model,ystart,t,args=(t_offset,))\nytot = np.vstack([ytot,y])\nttot = np.hstack([ttot,t+ttot[-1]])", "_____no_output_____" ], [ "# plot results\n\nplt.fill_between([ttot[0],ttot[-1]], [4,4],[16,16],alpha=0.5)\nplt.plot(ttot,ytot[:,0]/VG,'r-',linewidth=2)\nplt.axhline(y=sp/VG, color='k', linestyle='-')\nplt.xlabel('time')\nplt.ylabel('y(t)')\nplt.legend()\nplt.xlabel('Time (min)')\nplt.ylabel('BG (mmol/L)')\nplt.show()", "No handles with labels found to put in legend.\n" ], [ "ttot,ytot[:,0]/VG", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code" ] ]
d017b12b8706daf33d48730c81f7fe2355c50405
20,773
ipynb
Jupyter Notebook
examples/pp-example.ipynb
maxpkatz/pynucastro
556372eba20b64482bad862b2f6bd128bdf7f676
[ "BSD-3-Clause" ]
null
null
null
examples/pp-example.ipynb
maxpkatz/pynucastro
556372eba20b64482bad862b2f6bd128bdf7f676
[ "BSD-3-Clause" ]
null
null
null
examples/pp-example.ipynb
maxpkatz/pynucastro
556372eba20b64482bad862b2f6bd128bdf7f676
[ "BSD-3-Clause" ]
null
null
null
78.685606
14,132
0.79069
[ [ [ "import pynucastro", "_____no_output_____" ], [ "rates = [\"p-p-d-ec\",\n \"d-pg-he3-de04\",\n \"he3-he3pp-he4-nacr\"]", "_____no_output_____" ], [ "net = pynucastro.RateCollection(rates)", "_____no_output_____" ], [ "print(net.network_overview())", "p\n consumed by:\n p + p --> d\n p + p --> d\n d + p --> he3\n produced by:\n he3 + he3 --> p + p + he4\n\nd\n consumed by:\n d + p --> he3\n produced by:\n p + p --> d\n p + p --> d\n\nhe3\n consumed by:\n he3 + he3 --> p + p + he4\n produced by:\n d + p --> he3\n\nhe4\n consumed by:\n produced by:\n he3 + he3 --> p + p + he4\n\n\n" ], [ "net.plot()", "_____no_output_____" ], [ "pynet = pynucastro.PythonNetwork(rates)", "_____no_output_____" ], [ "pynet.write_network()", "import numpy as np\nfrom pynucastro.rates import Tfactors\nimport numba\n\nip = 0\nid = 1\nihe3 = 2\nihe4 = 3\nnnuc = 4\n\nA = np.zeros((nnuc), dtype=np.int32)\n\nA[ip] = 1\nA[id] = 2\nA[ihe3] = 3\nA[ihe4] = 4\n\nZ = np.zeros((nnuc), dtype=np.int32)\n\nZ[ip] = 1\nZ[id] = 1\nZ[ihe3] = 2\nZ[ihe4] = 2\n\n@numba.njit()\ndef ye(Y):\n return np.sum(Z * Y)/np.sum(A * Y)\n\n@numba.njit()\ndef p_p__d__weak__bet_pos_(tf):\n # p + p --> d\n rate = 0.0\n \n # bet+w\n rate += np.exp( -34.7863 + -3.51193*tf.T913i + 3.10086*tf.T913\n + -0.198314*tf.T9 + 0.0126251*tf.T953 + -1.02517*tf.lnT9)\n \n return rate\n\n@numba.njit()\ndef p_p__d__weak__electron_capture(tf):\n # p + p --> d\n rate = 0.0\n \n # ecw\n rate += np.exp( -43.6499 + -0.00246064*tf.T9i + -2.7507*tf.T913i + -0.424877*tf.T913\n + 0.015987*tf.T9 + -0.000690875*tf.T953 + -0.207625*tf.lnT9)\n \n return rate\n\n@numba.njit()\ndef p_d__he3(tf):\n # d + p --> he3\n rate = 0.0\n \n # de04n\n rate += np.exp( 7.52898 + -3.7208*tf.T913i + 0.871782*tf.T913\n + -0.666667*tf.lnT9)\n # de04 \n rate += np.exp( 8.93525 + -3.7208*tf.T913i + 0.198654*tf.T913\n + 0.333333*tf.lnT9)\n \n return rate\n\n@numba.njit()\ndef he3_he3__p_p_he4(tf):\n # he3 + he3 --> p + p + he4\n rate = 0.0\n \n # nacrn\n rate += np.exp( 24.7788 + -12.277*tf.T913i + -0.103699*tf.T913\n + -0.0649967*tf.T9 + 0.0168191*tf.T953 + -0.666667*tf.lnT9)\n \n return rate\n\ndef rhs(t, Y, rho, T):\n return rhs_eq(t, Y, rho, T)\n\n@numba.njit()\ndef rhs_eq(t, Y, rho, T):\n\n ip = 0\n id = 1\n ihe3 = 2\n ihe4 = 3\n nnuc = 4\n\n tf = Tfactors(T)\n\n lambda_p_p__d__weak__bet_pos_ = p_p__d__weak__bet_pos_(tf)\n lambda_p_p__d__weak__electron_capture = p_p__d__weak__electron_capture(tf)\n lambda_p_d__he3 = p_d__he3(tf)\n lambda_he3_he3__p_p_he4 = he3_he3__p_p_he4(tf)\n\n dYdt = np.zeros((nnuc), dtype=np.float64)\n\n dYdt[ip] = (\n -2*5.00000000000000e-01*rho*Y[ip]**2*lambda_p_p__d__weak__bet_pos_\n -2*5.00000000000000e-01*rho**2*ye(Y)*Y[ip]**2*lambda_p_p__d__weak__electron_capture\n -rho*Y[ip]*Y[id]*lambda_p_d__he3\n +2*5.00000000000000e-01*rho*Y[ihe3]**2*lambda_he3_he3__p_p_he4\n )\n\n dYdt[id] = (\n -rho*Y[ip]*Y[id]*lambda_p_d__he3\n +5.00000000000000e-01*rho*Y[ip]**2*lambda_p_p__d__weak__bet_pos_\n +5.00000000000000e-01*rho**2*ye(Y)*Y[ip]**2*lambda_p_p__d__weak__electron_capture\n )\n\n dYdt[ihe3] = (\n -2*5.00000000000000e-01*rho*Y[ihe3]**2*lambda_he3_he3__p_p_he4\n +rho*Y[ip]*Y[id]*lambda_p_d__he3\n )\n\n dYdt[ihe4] = (\n +5.00000000000000e-01*rho*Y[ihe3]**2*lambda_he3_he3__p_p_he4\n )\n\n return dYdt\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code" ] ]
d017b46d4c0d7f9c9fd7660b0f42e37aeabac3e5
48,466
ipynb
Jupyter Notebook
notebooks/bayesian-network-building.ipynb
meiyi1986/tutorials
0db7b28fcc62338858104192d3bdbf7b08edbb94
[ "Apache-2.0" ]
null
null
null
notebooks/bayesian-network-building.ipynb
meiyi1986/tutorials
0db7b28fcc62338858104192d3bdbf7b08edbb94
[ "Apache-2.0" ]
null
null
null
notebooks/bayesian-network-building.ipynb
meiyi1986/tutorials
0db7b28fcc62338858104192d3bdbf7b08edbb94
[ "Apache-2.0" ]
null
null
null
37.834504
562
0.583419
[ [ [ "# Building a Bayesian Network\n\n---\n\nIn this tutorial, we introduce how to build a **Bayesian (belief) network** based on domain knowledge of the problem.\n\nIf we build the Bayesian network in different ways, the built network can have different graphs and sizes, which can greatly affect the memory requirement and inference efficience. To represent the size of the Bayesian network, we first introduce the **number of free parameters**.", "_____no_output_____" ], [ "## Number of Free Parameters <a name=\"freepara\"></a>\n\n---\n\nThe size of a Bayesian network includes the size of the graph and the probability tables of each node. Obviously, the probability tables dominate the graph, thus we focus on the size of the probability tables.\n\nFor the sake of convenience, we only consider **discrete** variables in the network, and the continuous variables will be discretised. Then, for each variable $X$ in the network, we have the following notations.\n\n- $\\Omega(X)$: the domain (set of possible values) of $X$\n- $|\\Omega(X)|$: the number of possible values of $X$\n- $parents(X)$: the parents (direct causes) of $X$ in the network\n\nFor each variable $X$, the probability table contains the probabilities for $P(X\\ |\\ parents(X))$ for all possible $X$ values and $parent(X)$ values. Let's consider the following situations:\n\n1. $X$ does not have any parent. In this case, the table stores $P(X)$. There are $|\\Omega(X)|$ probabilities, each for a possible value of $X$. However, due to the [normalisation rule](https://github.com/meiyi1986/tutorials/blob/master/notebooks/reasoning-under-uncertainty-basics.ipynb), all the probabilities add up to 1. Thus, we need to store only $|\\Omega(X)|-1$ probabilities, and the last probability can be calculated by ($1-$the sum of the stored probabilities). Therefore, the probability table contains $|\\Omega(X)|-1$ rows/probabilities.\n2. $X$ has one parent $Y$. In this case, for each condition $y \\in \\Omega(Y)$, we need to store the conditional probabilities $P(X\\ |\\ Y = y)$. Again, we need to store $|\\Omega(X)|-1$ conditional probabilities for $P(X\\ |\\ Y = y)$, and can calculate the last conditional probability by the normalisation rule. Therefore, the probability table contains $(|\\Omega(X)|-1)*|\\Omega(Y)|$ rows/probabilities.\n3. $X$ has multiple parents $Y_1, \\dots, Y_m$. In this case, there are $|\\Omega(Y_1)|*\\dots * |\\Omega(Y_m)|$ possible conditions $[Y_1 = y_1, \\dots, Y_m = y_m]$. For each condition, we need to store $|\\Omega(X)|-1$ conditional probabilities for $P(X\\ |\\ Y_1 = y_1, \\dots, Y_m = y_m)$. Therefore, the probability table contains $(|\\Omega(X)|-1)*|\\Omega(Y_1)|*\\dots * |\\Omega(Y_m)|$ rows/probabilities.\n\nAs shown in the above alarm network, all the variables are binary, i.e. $|\\Omega(X)| = 2$. Therefore, $B$ and $E$ have only 1 row in their probability tables, since they have no parent. 
$A$ has $1 \\times 2 \\times 2 = 4$ rows in its probability tables, since it has two binary parents $B$ and $E$, leading to four possible conditions.\n\n> **DEFINITION**: The **number of free parameters** of a Bayesian network is the number of probabilities we need to estimate (can NOT be derived/calculated) in the probability tables.", "_____no_output_____" ], [ "Consider a Bayesian network with the factorisation\n\n$$\n\\begin{aligned}\n& P(X_1, \\dots, X_n) \\\\\n& = P(X_1\\ |\\ parents(X_1)) \\dots * P(X_n\\ |\\ parents(X_n)),\n\\end{aligned}\n$$\n\nthe number of free parameters is\n\n$$\n\\begin{aligned}\nP(X_1, \\dots, X_n) & = (|\\Omega(X_1)|-1)*\\prod_{Y \\in parents(X_1)}|\\Omega(Y)| \\\\\n& + (|\\Omega(X_2)|-1)*\\prod_{Y \\in parents(X_2)}|\\Omega(Y)| \\\\\n& + \\dots \\\\\n& + (|\\Omega(X_n)|-1)*\\prod_{Y \\in parents(X_n)}|\\Omega(Y)|. \\\\\n\\end{aligned}\n$$", "_____no_output_____" ], [ "Let's calculate the number of free parameters of the following simple networks, assuming that all the variables are binary.\n\n<img src=\"img/cause-effect.png\" width=550></img>\n\n- **Direct cause**: $P(A)$ has 1 free parameter, $P(B\\ |\\ A)$ has 2 free parameters. The network has $1+2 = 3$ free parameters.\n- **Indirect cause**: $P(A)$ has 1 free parameter, $P(B\\ |\\ A)$ and $P(C\\ |\\ B)$ have 2 free parameters. The network has $1+2+2 = 5$ free parameters.\n- **Common cause**: $P(A)$ has 1 free parameter, $P(B\\ |\\ A)$ and $P(C\\ |\\ A)$ have 2 free parameters. The network has $1+2+2 = 5$ free parameters.\n- **Common effect**: $P(A)$ and $P(B)$ have 1 free parameter, $P(C\\ |\\ A, B)$ has $2\\times 2 = 4$ free parameters. The network has $1+1+4 = 6$ free parameters.\n\n> **NOTE**: We can see that the common effect dependency causes the most free parameters required for the network. Therefore, when building a Bayesian network, we should try to reduce the number of such dependencies to reduce the number of free parameters of the network.", "_____no_output_____" ], [ "## Building Bayesian Network from Domain Knowledge<a name=\"building\"></a>\n\n---\n\nBuilding a Bayesian network mainly consists of the following three steps:\n\n1. Identify a set of **random variables** that describe the problem, using domain knowledge.\n2. Build the **directed acyclic graph**, i.e., the **directed links** between the random variables based on domain knowledge about the causal relationships between the variables.\n3. Build the **conditional probability table** for each variable, by estimating the necessary probabilities using domain knowledge or historical data.\n\nHere, we introduce the Pearl's network construction algorithm, which is a way to build the network based on **node ordering**.", "_____no_output_____" ], [ "```Python\n# Step 1: identify variables\nIdentify the random variables that describe the world of reasoning\n# Step 2: build the graph, add the links\nSort the random variables by some order\nSet bn = []\nfor var in sorted_vars:\n Find the minimum subset of variables in bn so that P(var | bn) = P(var | subset)\n \n Add var into bn\n for bn_var in subset:\n Add a direct link [bn_var, var]\n # Step 3: estimate the conditional probability table\n Estimate the conditional probabilities P(var | subset)\n```", "_____no_output_____" ], [ "In this algorithm, the **node ordering** is critical to determine the number of links between the nodes, and thus the size of the conditional probability tables. 
\n\nWe show how the links are added into the network under different node orders, using the alarm network as an example.\n\n----------\n\n#### Order 1: $B \rightarrow E \rightarrow A \rightarrow J \rightarrow M$\n\n- **Step 1**: The node $B$ is added into the network. No edge is added, since there is only one node in the network.\n- **Step 2**: The node $E$ is added into the network. No edge from $B$ to $E$ is added, since $B$ and $E$ are <span style=\"color: blue;\">independent</span>.\n- **Step 3**: The node $A$ is added into the network. Two edges $[B, A]$ and $[E, A]$ are added. This is because $B$ and $E$ are both direct causes of $A$, and thus $A$ is <span style=\"color: red;\">dependent</span> on $B$ and $E$. \n- **Step 4**: The node $J$ is added into the network. The minimum subset $\{A\} \subseteq \{B, E, A\}$ in the network is found to be the parent of $J$, since $J$ is <span style=\"color: blue;\">conditionally independent</span> from $B$ and $E$ given $A$, i.e., $P(J\ |\ B, E, A) = P(J\ |\ A)$. An edge $[A, J]$ is added into the network.\n- **Step 5**: The node $M$ is added into the network. The minimum subset $\{A\} \subseteq \{B, E, A, J\}$ in the network is found to be the parent of $M$, since $M$ is <span style=\"color: blue;\">conditionally independent</span> from $B$, $E$ and $J$ given $A$, i.e., $P(M\ |\ B, E, A, J) = P(M\ |\ A)$. An edge $[A, M]$ is added into the network.\n\nThe built network is shown as follows. The number of free parameters in this network is $1 + 1 + 4 + 2 + 2 = 10$.\n\n<img src=\"img/alarm-dag.png\" width=150></img>\n\n----------\n\n#### Order 2: $J \rightarrow M \rightarrow A \rightarrow B \rightarrow E$\n\n- **Step 1**: The node $J$ is added into the network. No edge is added, since there is only one node in the network.\n- **Step 2**: The node $M$ is added into the network. $M$ and $J$ are <span style=\"color: red;\">dependent</span> (_note that the common cause $A$ has not been given yet at this step_), i.e., $P(M\ |\ J) \neq P(M)$. Therefore, an edge $[J, M]$ is added into the network.\n- **Step 3**: The node $A$ is added into the network. Two edges $[J, A]$ and $[M, A]$ are added, since $J$ and $M$ are both <span style=\"color: red;\">dependent</span> on $A$.\n- **Step 4**: The node $B$ is added into the network. The minimum subset $\{A\} \subseteq \{J, M, A\}$ in the network is found to be the parent of $B$, since $B$ is <span style=\"color: blue;\">conditionally independent</span> from $J$ and $M$ given $A$, i.e., $P(B\ |\ J, M, A) = P(B\ |\ A)$. An edge $[A, B]$ is added into the network.\n- **Step 5**: The node $E$ is added into the network. The minimum subset $\{A, B\} \subseteq \{J, M, A, B\}$ in the network is found to be the parents of $E$, since $E$ is <span style=\"color: blue;\">conditionally independent</span> from $J$ and $M$ given $A$ and $B$, i.e., $P(E\ |\ J, M, A, B) = P(E\ |\ A, B)$ (_note that $B$ and $E$ have the common effect $A$, thus when $A$ is given, $B$ and $E$ are <span style=\"color: red;\">conditionally dependent</span>_). Two edges $[A, E]$ and $[B, E]$ are added into the network.\n\nThe built network is shown as follows. The number of free parameters in this network is $1 + 2 + 4 + 2 + 4 = 13$.\n\n<img src=\"img/alarm-dag2.png\" width=150></img>\n\n----------\n\n#### Order 3: $J \rightarrow M \rightarrow B \rightarrow E \rightarrow A$\n\n- **Step 1**: The node $J$ is added into the network. 
No edge is added, since there is only one node in the network.\n- **Step 2**: The node $M$ is added into the network. $M$ and $J$ are <span style=\"color: red;\">dependent</span> (note that the common cause $A$ is not given at this step), i.e., $P(M\ |\ J) \neq P(M)$. Therefore, an edge $[J, M]$ is added into the network.\n- **Step 3**: The node $B$ is added into the network. Two edges $[J, B]$ and $[M, B]$ are added, since $J$ and $M$ are both <span style=\"color: red;\">dependent</span> on $B$ (through $A$, which has not been added yet).\n- **Step 4**: The node $E$ is added into the network. There is NO conditional independence found among $\{J, M, B, E\}$ without giving $A$. Therefore, three edges $[J, E]$, $[M, E]$, $[B, E]$ are added into the network.\n- **Step 5**: The node $A$ is added into the network. First, two edges $[J, A]$ and $[M, A]$ are added, since $J$ and $M$ are both <span style=\"color: red;\">dependent</span> on $A$. Then, another two edges $[B, A]$ and $[E, A]$ are also added, since $B$ and $E$ are both direct causes of $A$.\n\nThe built network is shown as follows. The number of free parameters in this network is $1 + 2 + 4 + 8 + 16 = 31$.\n\n<img src=\"img/alarm-dag3.png\" width=200></img>\n\n---------", "_____no_output_____" ], [ "We can see that different node orders can lead to greatly different graphs and numbers of free parameters. Therefore, we should find the **optimal node order** that leads to the most **compact** network (with the fewest free parameters).\n\n> **QUESTION**: How to find the optimal node order that leads to the most compact Bayesian network?\n\nThe node order is mainly determined based on our **domain knowledge** about **cause and effect**. At first, we add the nodes with no cause (i.e., the root causes) into the ordered list. Then, at each step, we find the remaining nodes whose direct causes are all in the current ordered list (i.e., all their direct causes are given) and append them to the end of the ordered list. This way, we only need to add direct links from their direct causes to them.\n\nThe pseudocode of the node ordering is shown as follows.", "_____no_output_____" ], [ "```Python\ndef node_ordering(all_nodes):\n    Set ordered_nodes = [], remaining_nodes = all_nodes\n    while remaining_nodes is not empty:\n        Select the nodes whose direct causes are all in ordered_nodes\n        Append the selected nodes into ordered_nodes\n        Remove the selected nodes from remaining_nodes\n    return ordered_nodes\n```", "_____no_output_____" ], [ "For the alarm network, first we add two nodes $\{B, E\}$ into the ordered list, since they are the root causes, and have no direct cause. Then, we add $A$ into the ordered list, since it has two direct causes $B$ and $E$, both of which are already in the ordered list. Finally, we add $J$ and $M$ into the list, since their direct cause $A$ is already in the ordered list.", "_____no_output_____" ], [ "## Building Alarm Network through `pgmpy` <a name=\"pgmpy\"></a>\n\n---\n\nHere, we show how to build the alarm network through the Python [pgmpy](https://pgmpy.org) library. 
The alarm network is displayed again below.\n\n<img src=\"img/alarm-bn.png\" width=500></img>\n\nFirst, we install the library using `pip`.", "_____no_output_____" ] ], [ [ "pip install pgmpy", "Requirement already satisfied: pgmpy in /Users/yimei/miniforge3/lib/python3.9/site-packages (0.1.17)\nRequirement already satisfied: torch in /Users/yimei/miniforge3/lib/python3.9/site-packages (from pgmpy) (1.11.0)\nRequirement already satisfied: statsmodels in /Users/yimei/miniforge3/lib/python3.9/site-packages (from pgmpy) (0.13.2)\nRequirement already satisfied: pyparsing in /Users/yimei/miniforge3/lib/python3.9/site-packages (from pgmpy) (3.0.6)\nRequirement already satisfied: networkx in /Users/yimei/miniforge3/lib/python3.9/site-packages (from pgmpy) (2.7.1)\nRequirement already satisfied: scipy in /Users/yimei/miniforge3/lib/python3.9/site-packages (from pgmpy) (1.8.0)\nRequirement already satisfied: scikit-learn in /Users/yimei/miniforge3/lib/python3.9/site-packages (from pgmpy) (1.0.2)\nRequirement already satisfied: tqdm in /Users/yimei/miniforge3/lib/python3.9/site-packages (from pgmpy) (4.62.1)\nRequirement already satisfied: pandas in /Users/yimei/miniforge3/lib/python3.9/site-packages (from pgmpy) (1.3.2)\nRequirement already satisfied: joblib in /Users/yimei/miniforge3/lib/python3.9/site-packages (from pgmpy) (1.1.0)\nRequirement already satisfied: numpy in /Users/yimei/miniforge3/lib/python3.9/site-packages (from pgmpy) (1.21.2)\nRequirement already satisfied: pytz>=2017.3 in /Users/yimei/miniforge3/lib/python3.9/site-packages (from pandas->pgmpy) (2021.1)\nRequirement already satisfied: python-dateutil>=2.7.3 in /Users/yimei/miniforge3/lib/python3.9/site-packages (from pandas->pgmpy) (2.8.2)\nRequirement already satisfied: six>=1.5 in /Users/yimei/miniforge3/lib/python3.9/site-packages (from python-dateutil>=2.7.3->pandas->pgmpy) (1.16.0)\nRequirement already satisfied: threadpoolctl>=2.0.0 in /Users/yimei/miniforge3/lib/python3.9/site-packages (from scikit-learn->pgmpy) (3.1.0)\nRequirement already satisfied: patsy>=0.5.2 in /Users/yimei/miniforge3/lib/python3.9/site-packages (from statsmodels->pgmpy) (0.5.2)\nRequirement already satisfied: packaging>=21.3 in /Users/yimei/miniforge3/lib/python3.9/site-packages (from statsmodels->pgmpy) (21.3)\nRequirement already satisfied: typing-extensions in /Users/yimei/miniforge3/lib/python3.9/site-packages (from torch->pgmpy) (4.1.1)\nNote: you may need to restart the kernel to use updated packages.\n" ] ], [ [ "Then, we import the necessary modules for the Bayesian network as follows.", "_____no_output_____" ] ], [ [ "from pgmpy.models import BayesianNetwork\nfrom pgmpy.factors.discrete import TabularCPD", "_____no_output_____" ] ], [ [ "Now, we build the alarm Bayesian network as follows.\n\n1. We define the network structure by specifying the four links.\n2. 
We define (estimate) the discrete conditional probability tables, represented by the `TabularCPD` class.", "_____no_output_____" ] ], [ [ "# Define the network structure\nalarm_model = BayesianNetwork(\n    [\n        (\"Burglary\", \"Alarm\"),\n        (\"Earthquake\", \"Alarm\"),\n        (\"Alarm\", \"JohnCall\"),\n        (\"Alarm\", \"MaryCall\"),\n    ]\n)\n\n# Define the probability tables by TabularCPD\ncpd_burglary = TabularCPD(\n    variable=\"Burglary\", variable_card=2, values=[[0.999], [0.001]]\n)\n\ncpd_earthquake = TabularCPD(\n    variable=\"Earthquake\", variable_card=2, values=[[0.998], [0.002]]\n)\n\ncpd_alarm = TabularCPD(\n    variable=\"Alarm\",\n    variable_card=2,\n    values=[[0.999, 0.71, 0.06, 0.05], [0.001, 0.29, 0.94, 0.95]],\n    evidence=[\"Burglary\", \"Earthquake\"],\n    evidence_card=[2, 2],\n)\n\ncpd_johncall = TabularCPD(\n    variable=\"JohnCall\",\n    variable_card=2,\n    values=[[0.95, 0.1], [0.05, 0.9]],\n    evidence=[\"Alarm\"],\n    evidence_card=[2],\n)\n\ncpd_marycall = TabularCPD(\n    variable=\"MaryCall\",\n    variable_card=2,\n    values=[[0.99, 0.3], [0.01, 0.7]],\n    evidence=[\"Alarm\"],\n    evidence_card=[2],\n)\n\n# Associating the probability tables with the model structure\nalarm_model.add_cpds(\n    cpd_burglary, cpd_earthquake, cpd_alarm, cpd_johncall, cpd_marycall\n)", "_____no_output_____" ] ], [ [ "We can view the nodes of the alarm network.", "_____no_output_____" ] ], [ [ "# Viewing nodes of the model\nalarm_model.nodes()", "_____no_output_____" ] ], [ [ "We can also view the edges of the alarm network.", "_____no_output_____" ] ], [ [ "# Viewing edges of the model\nalarm_model.edges()", "_____no_output_____" ] ], [ [ "We can show the probability tables using the `print()` method. \n\n> **NOTE**: the `pgmpy` library stores ALL the probabilities (including the last probability). This requires a bit more memory, but saves the time of calculating the last probability by the normalisation rule.\n\nLet's print the probability tables for **Alarm** and **MaryCall**. 
For each variable, the value (0) stands for `False`, while the value (1) is `True`.", "_____no_output_____" ] ], [ [ "# Print the probability table of the Alarm node\nprint(cpd_alarm)\n\n# Print the probability table of the MaryCalls node\nprint(cpd_marycall)", "+------------+---------------+---------------+---------------+---------------+\n| Burglary | Burglary(0) | Burglary(0) | Burglary(1) | Burglary(1) |\n+------------+---------------+---------------+---------------+---------------+\n| Earthquake | Earthquake(0) | Earthquake(1) | Earthquake(0) | Earthquake(1) |\n+------------+---------------+---------------+---------------+---------------+\n| Alarm(0) | 0.999 | 0.71 | 0.06 | 0.05 |\n+------------+---------------+---------------+---------------+---------------+\n| Alarm(1) | 0.001 | 0.29 | 0.94 | 0.95 |\n+------------+---------------+---------------+---------------+---------------+\n+-------------+----------+----------+\n| Alarm | Alarm(0) | Alarm(1) |\n+-------------+----------+----------+\n| MaryCall(0) | 0.99 | 0.3 |\n+-------------+----------+----------+\n| MaryCall(1) | 0.01 | 0.7 |\n+-------------+----------+----------+\n" ] ], [ [ "We can find all the **(conditional) independencies** between the nodes in the network.", "_____no_output_____" ] ], [ [ "alarm_model.get_independencies()", "_____no_output_____" ] ], [ [ "We can also find the **local (conditional) independencies of a specific node** in the network as follows.", "_____no_output_____" ] ], [ [ "# Checking independcies of a node\nalarm_model.local_independencies(\"JohnCall\")", "_____no_output_____" ] ], [ [ "---\n\n- More tutorials can be found [here](https://github.com/meiyi1986/tutorials).\n- [Yi Mei's homepage](https://meiyi1986.github.io/)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
d017c22fa8412d2246fb20ef3d45c6ddee2d7ab6
63,494
ipynb
Jupyter Notebook
notebooks/ch01_Introduction.ipynb
wenbos3109/PRML
143b9601ee8e5b9da2d064d30aa1c06209025696
[ "MIT" ]
null
null
null
notebooks/ch01_Introduction.ipynb
wenbos3109/PRML
143b9601ee8e5b9da2d064d30aa1c06209025696
[ "MIT" ]
null
null
null
notebooks/ch01_Introduction.ipynb
wenbos3109/PRML
143b9601ee8e5b9da2d064d30aa1c06209025696
[ "MIT" ]
null
null
null
250.964427
38,060
0.916591
[ [ [ "# 1. Introduction", "_____no_output_____" ] ], [ [ "import os\nimport sys\nmodule_path = os.path.abspath(os.path.join('..'))\nif module_path not in sys.path:\n sys.path.append(module_path)", "_____no_output_____" ], [ "import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline", "_____no_output_____" ], [ "from prml.linear import (\n LinearRegression,\n RidgeRegression,\n BayesianRegression\n)\nfrom prml.preprocess.polynomial import PolynomialFeature", "_____no_output_____" ] ], [ [ "## 1.1. Example: Polynomial Curve Fitting", "_____no_output_____" ] ], [ [ "def create_toy_data(func, sample_size, std):\n x = np.linspace(0, 1, sample_size)\n t = func(x) + np.random.normal(scale=std, size=x.shape)\n return x, t\n\ndef func(x):\n return np.sin(2 * np.pi * x)\n\nx_train, y_train = create_toy_data(func, 10, 0.25)\nx_test = np.linspace(0, 1, 100)\ny_test = func(x_test)\n\nplt.scatter(x_train, y_train, facecolor=\"none\", edgecolor=\"b\", s=50, label=\"training data\")\nplt.plot(x_test, y_test, c=\"g\", label=\"$\\sin(2\\pi x)$\")\nplt.legend()\nplt.show()", "_____no_output_____" ], [ "for i, degree in enumerate([0, 1, 3, 9]):\n plt.subplot(2, 2, i + 1)\n feature = PolynomialFeature(degree)\n X_train = feature.transform(x_train)\n X_test = feature.transform(x_test)\n\n model = LinearRegression()\n model.fit(X_train, y_train)\n y = model.predict(X_test)\n\n plt.scatter(x_train, y_train, facecolor=\"none\", edgecolor=\"b\", s=50, label=\"training data\")\n plt.plot(x_test, y_test, c=\"g\", label=\"$\\sin(2\\pi x)$\")\n plt.plot(x_test, y, c=\"r\", label=\"fitting\")\n plt.ylim(-1.5, 1.5)\n plt.annotate(\"M={}\".format(degree), xy=(-0.15, 1))\nplt.legend(bbox_to_anchor=(1.05, 0.64), loc=2, borderaxespad=0.)\nplt.show()", "_____no_output_____" ], [ "def rmse(a, b):\n return np.sqrt(np.mean(np.square(a - b)))\n\ntraining_errors = []\ntest_errors = []\n\nfor i in range(10):\n feature = PolynomialFeature(i)\n X_train = feature.transform(x_train)\n X_test = feature.transform(x_test)\n\n model = LinearRegression()\n model.fit(X_train, y_train)\n y = model.predict(X_test)\n training_errors.append(rmse(model.predict(X_train), y_train))\n test_errors.append(rmse(model.predict(X_test), y_test + np.random.normal(scale=0.25, size=len(y_test))))\n\nplt.plot(training_errors, 'o-', mfc=\"none\", mec=\"b\", ms=10, c=\"b\", label=\"Training\")\nplt.plot(test_errors, 'o-', mfc=\"none\", mec=\"r\", ms=10, c=\"r\", label=\"Test\")\nplt.legend()\nplt.xlabel(\"degree\")\nplt.ylabel(\"RMSE\")\nplt.show()", "_____no_output_____" ] ], [ [ "#### Regularization", "_____no_output_____" ] ], [ [ "feature = PolynomialFeature(9)\nX_train = feature.transform(x_train)\nX_test = feature.transform(x_test)\n\nmodel = RidgeRegression(alpha=1e-3)\nmodel.fit(X_train, y_train)\ny = model.predict(X_test)\n\ny = model.predict(X_test)\nplt.scatter(x_train, y_train, facecolor=\"none\", edgecolor=\"b\", s=50, label=\"training data\")\nplt.plot(x_test, y_test, c=\"g\", label=\"$\\sin(2\\pi x)$\")\nplt.plot(x_test, y, c=\"r\", label=\"fitting\")\nplt.ylim(-1.5, 1.5)\nplt.legend()\nplt.annotate(\"M=9\", xy=(-0.15, 1))\nplt.show()", "_____no_output_____" ] ], [ [ "### 1.2.6 Bayesian curve fitting", "_____no_output_____" ] ], [ [ "model = BayesianRegression(alpha=2e-3, beta=2)\nmodel.fit(X_train, y_train)\n\ny, y_err = model.predict(X_test, return_std=True)\nplt.scatter(x_train, y_train, facecolor=\"none\", edgecolor=\"b\", s=50, label=\"training data\")\nplt.plot(x_test, y_test, c=\"g\", label=\"$\\sin(2\\pi 
x)$\")\nplt.plot(x_test, y, c=\"r\", label=\"mean\")\nplt.fill_between(x_test, y - y_err, y + y_err, color=\"pink\", label=\"std.\", alpha=0.5)\nplt.xlim(-0.1, 1.1)\nplt.ylim(-1.5, 1.5)\nplt.annotate(\"M=9\", xy=(0.8, 1))\nplt.legend(bbox_to_anchor=(1.05, 1.), loc=2, borderaxespad=0.)\nplt.show()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
d017c32bebc6ba4ff6ad6e00a2cc77f57b486648
10,064
ipynb
Jupyter Notebook
notebook/Prototyping Network Part 1.ipynb
kwierman/discriminate_agkistrodon
f6eac7d3f4898ad5362ac840de95647282d24f23
[ "MIT" ]
null
null
null
notebook/Prototyping Network Part 1.ipynb
kwierman/discriminate_agkistrodon
f6eac7d3f4898ad5362ac840de95647282d24f23
[ "MIT" ]
null
null
null
notebook/Prototyping Network Part 1.ipynb
kwierman/discriminate_agkistrodon
f6eac7d3f4898ad5362ac840de95647282d24f23
[ "MIT" ]
null
null
null
59.2
1,129
0.547993
[ [ [ "from keras.layers import Input, Dropout, Dense, Flatten, concatenate\nfrom keras.layers.convolutional import MaxPooling3D, Conv3D, Conv3DTranspose\nfrom keras.models import Model", "Using TensorFlow backend.\n" ], [ "_input = Input(shape=(1, 3, 9600, 3600))", "_____no_output_____" ], [ "conv1 = Conv3D(32, (1, 2, 2), strides=(1, 2, 2),\n activation='relu', padding='same',\n data_format='channels_first',\n name='block1_conv1')(_input)\npool1 = MaxPooling3D((1, 2, 2), strides=(1, 2, 2),\n data_format='channels_first',\n name='block1_pool')(conv1)\n\n# Block 2\nconv2 = Conv3D(64, (1, 2, 2), strides=(1, 2, 2),\n activation='relu', padding='same',\n data_format='channels_first',\n name='block2_conv1')(pool1)\npool2 = MaxPooling3D((1, 2, 2), strides=(1, 2, 2),\n data_format='channels_first',\n name='block2_pool')(conv2)\n\n# Block 3\nconv3 = Conv3D(128, (3, 2, 2), strides=(3, 2, 2),\n activation='relu', padding='same',\n data_format='channels_first',\n name='block3_conv1')(pool2)\npool3 = MaxPooling3D((1, 2, 2), strides=(1, 2, 2),\n data_format='channels_first',\n name='block3_pool')(conv3)\n\n# Block 4\nconv4 = Conv3D(256, (1, 2, 2), strides=(1, 2, 2),\n activation='relu', padding='same',\n data_format='channels_first',\n name='block4_conv1')(pool3)\npool4 = MaxPooling3D((1, 2, 2), strides=(1, 2, 2), name='block4_pool',\n data_format='channels_first')(conv4)\n\n# Block 5\nconv5 = Conv3D(512, (1, 2, 2), strides=(1, 2, 2), activation='relu',\n padding='same', data_format='channels_first')(pool4)\n\n# Block 6^T\nup6 = concatenate([Conv3DTranspose(256, (1, 4, 4),\n strides=(1, 4, 4), padding='same',\n data_format='channels_first')(conv5),\n conv4], axis=1)\nconv6 = Conv3D(256, (1, 2, 2), strides=(1, 2, 2), activation='relu',\n padding='same', data_format='channels_first')(up6)\n\n# Block 7^T\nup7 = concatenate([Conv3DTranspose(128, (1, 4, 4),\n strides=(1, 4, 4), padding='same',\n data_format='channels_first')(conv6),\n conv3], axis=1)\nconv7 = Conv3D(128, (1, 2, 2), strides=(1, 2, 2), activation='relu', \n padding='same', data_format='channels_first')(up7)\n\n# Block 8^T\nup8 = concatenate([Conv3DTranspose(64, (3, 4, 4),\n strides=(3, 4, 7), padding='same',\n data_format='channels_first')(conv7),\n conv2], axis=1)\nconv8 = Conv3D(64, (1, 3, 6), activation='relu', padding='same', \n data_format='channels_first')(up8)\n\n# Block 9^T\nup9 = concatenate([Conv3DTranspose(32, (3, 3, 6),\n strides=(3, 3, 6), padding='same',\n data_format='channels_first')(up8),\n conv1], axis=1)\nconv9 = Conv3D(32, (1, 3, 6), activation='relu', padding='same', \n data_format='channels_first')(up9)\n\nmodel = Model(_input, conv9)\nmodel.compile(loss='categorical_crossentropy', optimizer='sgd',\n metrics=['accuracy'])", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code" ] ]
d017def96b4fe51608379836e28346acd53dfb26
320,536
ipynb
Jupyter Notebook
10_introduction_to_artificial_neural_networks.ipynb
leoluyi/handson-ml
fca201055dbd669d4ffc0f65fe74d593754cdac4
[ "Apache-2.0" ]
7
2018-12-13T04:59:21.000Z
2019-03-12T10:18:38.000Z
10_introduction_to_artificial_neural_networks.ipynb
leoluyi/handson-ml
fca201055dbd669d4ffc0f65fe74d593754cdac4
[ "Apache-2.0" ]
115
2020-01-28T22:21:35.000Z
2022-03-11T23:42:46.000Z
10_introduction_to_artificial_neural_networks.ipynb
leoluyi/handson-ml
fca201055dbd669d4ffc0f65fe74d593754cdac4
[ "Apache-2.0" ]
6
2018-07-27T06:18:20.000Z
2020-02-09T17:12:43.000Z
143.288333
67,413
0.694752
[ [ [ "**Chapter 10 – Introduction to Artificial Neural Networks**", "_____no_output_____" ], [ "_This notebook contains all the sample code and solutions to the exercises in chapter 10._", "_____no_output_____" ], [ "# Setup", "_____no_output_____" ], [ "First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures:", "_____no_output_____" ] ], [ [ "# To support both python 2 and python 3\nfrom __future__ import division, print_function, unicode_literals\n\n# Common imports\nimport numpy as np\nimport os\n\n# to make this notebook's output stable across runs\ndef reset_graph(seed=42):\n tf.reset_default_graph()\n tf.set_random_seed(seed)\n np.random.seed(seed)\n\n# To plot pretty figures\n%matplotlib inline\nimport matplotlib\nimport matplotlib.pyplot as plt\nplt.rcParams['axes.labelsize'] = 14\nplt.rcParams['xtick.labelsize'] = 12\nplt.rcParams['ytick.labelsize'] = 12\n\n# Where to save the figures\nPROJECT_ROOT_DIR = \".\"\nCHAPTER_ID = \"ann\"\n\ndef save_fig(fig_id, tight_layout=True):\n path = os.path.join(PROJECT_ROOT_DIR, \"images\", CHAPTER_ID, fig_id + \".png\")\n print(\"Saving figure\", fig_id)\n if tight_layout:\n plt.tight_layout()\n plt.savefig(path, format='png', dpi=300)", "_____no_output_____" ] ], [ [ "# Perceptrons", "_____no_output_____" ] ], [ [ "import numpy as np\nfrom sklearn.datasets import load_iris\nfrom sklearn.linear_model import Perceptron\n\niris = load_iris()\nX = iris.data[:, (2, 3)] # petal length, petal width\ny = (iris.target == 0).astype(np.int)\n\nper_clf = Perceptron(max_iter=100, random_state=42)\nper_clf.fit(X, y)\n\ny_pred = per_clf.predict([[2, 0.5]])", "_____no_output_____" ], [ "y_pred", "_____no_output_____" ], [ "a = -per_clf.coef_[0][0] / per_clf.coef_[0][1]\nb = -per_clf.intercept_ / per_clf.coef_[0][1]\n\naxes = [0, 5, 0, 2]\n\nx0, x1 = np.meshgrid(\n np.linspace(axes[0], axes[1], 500).reshape(-1, 1),\n np.linspace(axes[2], axes[3], 200).reshape(-1, 1),\n )\nX_new = np.c_[x0.ravel(), x1.ravel()]\ny_predict = per_clf.predict(X_new)\nzz = y_predict.reshape(x0.shape)\n\nplt.figure(figsize=(10, 4))\nplt.plot(X[y==0, 0], X[y==0, 1], \"bs\", label=\"Not Iris-Setosa\")\nplt.plot(X[y==1, 0], X[y==1, 1], \"yo\", label=\"Iris-Setosa\")\n\nplt.plot([axes[0], axes[1]], [a * axes[0] + b, a * axes[1] + b], \"k-\", linewidth=3)\nfrom matplotlib.colors import ListedColormap\ncustom_cmap = ListedColormap(['#9898ff', '#fafab0'])\n\nplt.contourf(x0, x1, zz, cmap=custom_cmap)\nplt.xlabel(\"Petal length\", fontsize=14)\nplt.ylabel(\"Petal width\", fontsize=14)\nplt.legend(loc=\"lower right\", fontsize=14)\nplt.axis(axes)\n\nsave_fig(\"perceptron_iris_plot\")\nplt.show()", "Saving figure perceptron_iris_plot\n" ] ], [ [ "# Activation functions", "_____no_output_____" ] ], [ [ "def logit(z):\n return 1 / (1 + np.exp(-z))\n\ndef relu(z):\n return np.maximum(0, z)\n\ndef derivative(f, z, eps=0.000001):\n return (f(z + eps) - f(z - eps))/(2 * eps)", "_____no_output_____" ], [ "z = np.linspace(-5, 5, 200)\n\nplt.figure(figsize=(11,4))\n\nplt.subplot(121)\nplt.plot(z, np.sign(z), \"r-\", linewidth=2, label=\"Step\")\nplt.plot(z, logit(z), \"g--\", linewidth=2, label=\"Logit\")\nplt.plot(z, np.tanh(z), \"b-\", linewidth=2, label=\"Tanh\")\nplt.plot(z, relu(z), \"m-.\", linewidth=2, label=\"ReLU\")\nplt.grid(True)\nplt.legend(loc=\"center right\", fontsize=14)\nplt.title(\"Activation functions\", fontsize=14)\nplt.axis([-5, 5, -1.2, 
1.2])\n\nplt.subplot(122)\nplt.plot(z, derivative(np.sign, z), \"r-\", linewidth=2, label=\"Step\")\nplt.plot(0, 0, \"ro\", markersize=5)\nplt.plot(0, 0, \"rx\", markersize=10)\nplt.plot(z, derivative(logit, z), \"g--\", linewidth=2, label=\"Logit\")\nplt.plot(z, derivative(np.tanh, z), \"b-\", linewidth=2, label=\"Tanh\")\nplt.plot(z, derivative(relu, z), \"m-.\", linewidth=2, label=\"ReLU\")\nplt.grid(True)\n#plt.legend(loc=\"center right\", fontsize=14)\nplt.title(\"Derivatives\", fontsize=14)\nplt.axis([-5, 5, -0.2, 1.2])\n\nsave_fig(\"activation_functions_plot\")\nplt.show()", "Saving figure activation_functions_plot\n" ], [ "def heaviside(z):\n return (z >= 0).astype(z.dtype)\n\ndef sigmoid(z):\n return 1/(1+np.exp(-z))\n\ndef mlp_xor(x1, x2, activation=heaviside):\n return activation(-activation(x1 + x2 - 1.5) + activation(x1 + x2 - 0.5) - 0.5)", "_____no_output_____" ], [ "x1s = np.linspace(-0.2, 1.2, 100)\nx2s = np.linspace(-0.2, 1.2, 100)\nx1, x2 = np.meshgrid(x1s, x2s)\n\nz1 = mlp_xor(x1, x2, activation=heaviside)\nz2 = mlp_xor(x1, x2, activation=sigmoid)\n\nplt.figure(figsize=(10,4))\n\nplt.subplot(121)\nplt.contourf(x1, x2, z1)\nplt.plot([0, 1], [0, 1], \"gs\", markersize=20)\nplt.plot([0, 1], [1, 0], \"y^\", markersize=20)\nplt.title(\"Activation function: heaviside\", fontsize=14)\nplt.grid(True)\n\nplt.subplot(122)\nplt.contourf(x1, x2, z2)\nplt.plot([0, 1], [0, 1], \"gs\", markersize=20)\nplt.plot([0, 1], [1, 0], \"y^\", markersize=20)\nplt.title(\"Activation function: sigmoid\", fontsize=14)\nplt.grid(True)", "_____no_output_____" ] ], [ [ "# FNN for MNIST", "_____no_output_____" ], [ "## Using the Estimator API (formerly `tf.contrib.learn`)", "_____no_output_____" ] ], [ [ "import tensorflow as tf", "_____no_output_____" ] ], [ [ "**Warning**: `tf.examples.tutorials.mnist` is deprecated. We will use `tf.keras.datasets.mnist` instead. Moreover, the `tf.contrib.learn` API was promoted to `tf.estimators` and `tf.feature_columns`, and it has changed considerably. 
In particular, there is no `infer_real_valued_columns_from_input()` function or `SKCompat` class.", "_____no_output_____" ] ], [ [ "(X_train, y_train), (X_test, y_test) = tf.keras.datasets.mnist.load_data()\nX_train = X_train.astype(np.float32).reshape(-1, 28*28) / 255.0\nX_test = X_test.astype(np.float32).reshape(-1, 28*28) / 255.0\ny_train = y_train.astype(np.int32)\ny_test = y_test.astype(np.int32)\nX_valid, X_train = X_train[:5000], X_train[5000:]\ny_valid, y_train = y_train[:5000], y_train[5000:]", "_____no_output_____" ], [ "feature_cols = [tf.feature_column.numeric_column(\"X\", shape=[28 * 28])]\ndnn_clf = tf.estimator.DNNClassifier(hidden_units=[300,100], n_classes=10,\n feature_columns=feature_cols)\n\ninput_fn = tf.estimator.inputs.numpy_input_fn(\n x={\"X\": X_train}, y=y_train, num_epochs=40, batch_size=50, shuffle=True)\ndnn_clf.train(input_fn=input_fn)", "INFO:tensorflow:Using default config.\nWARNING:tensorflow:Using temporary folder as model directory: /tmp/tmpuflzeb_h\nINFO:tensorflow:Using config: {'_evaluation_master': '', '_session_config': None, '_model_dir': '/tmp/tmpuflzeb_h', '_task_type': 'worker', '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7f4fcb4e15c0>, '_save_summary_steps': 100, '_is_chief': True, '_save_checkpoints_steps': None, '_log_step_count_steps': 100, '_master': '', '_service': None, '_keep_checkpoint_every_n_hours': 10000, '_task_id': 0, '_tf_random_seed': None, '_num_ps_replicas': 0, '_global_id_in_cluster': 0, '_train_distribute': None, '_num_worker_replicas': 1, '_save_checkpoints_secs': 600, '_keep_checkpoint_max': 5}\nINFO:tensorflow:Calling model_fn.\nINFO:tensorflow:Done calling model_fn.\nINFO:tensorflow:Create CheckpointSaverHook.\nINFO:tensorflow:Graph was finalized.\nINFO:tensorflow:Running local_init_op.\nINFO:tensorflow:Done running local_init_op.\nINFO:tensorflow:Saving checkpoints for 1 into /tmp/tmpuflzeb_h/model.ckpt.\nINFO:tensorflow:loss = 122.883514, step = 0\nINFO:tensorflow:global_step/sec: 480.267\nINFO:tensorflow:loss = 9.599711, step = 100 (0.209 sec)\nINFO:tensorflow:global_step/sec: 599.191\nINFO:tensorflow:loss = 19.580772, step = 200 (0.167 sec)\nINFO:tensorflow:global_step/sec: 640.184\nINFO:tensorflow:loss = 2.1866307, step = 300 (0.157 sec)\nINFO:tensorflow:global_step/sec: 716.395\nINFO:tensorflow:loss = 11.493204, step = 400 (0.138 sec)\nINFO:tensorflow:global_step/sec: 713.653\nINFO:tensorflow:loss = 4.0078278, step = 500 (0.140 sec)\nINFO:tensorflow:global_step/sec: 722.021\nINFO:tensorflow:loss = 10.612131, step = 600 (0.139 sec)\nINFO:tensorflow:global_step/sec: 669.446\nINFO:tensorflow:loss = 6.692636, step = 700 (0.149 sec)\nINFO:tensorflow:global_step/sec: 720.49\nINFO:tensorflow:loss = 4.2058306, step = 800 (0.139 sec)\nINFO:tensorflow:global_step/sec: 766.548\nINFO:tensorflow:loss = 9.13055, step = 900 (0.130 sec)\nINFO:tensorflow:global_step/sec: 773.506\nINFO:tensorflow:loss = 4.1445055, step = 1000 (0.129 sec)\nINFO:tensorflow:global_step/sec: 755.713\nINFO:tensorflow:loss = 8.442559, step = 1100 (0.132 sec)\nINFO:tensorflow:global_step/sec: 762.721\nINFO:tensorflow:loss = 1.4401194, step = 1200 (0.131 sec)\nINFO:tensorflow:global_step/sec: 659.992\nINFO:tensorflow:loss = 13.526959, step = 1300 (0.152 sec)\nINFO:tensorflow:global_step/sec: 701.683\nINFO:tensorflow:loss = 5.039109, step = 1400 (0.143 sec)\nINFO:tensorflow:global_step/sec: 751.167\nINFO:tensorflow:loss = 1.8074234, step = 1500 (0.133 sec)\nINFO:tensorflow:global_step/sec: 700.915\nINFO:tensorflow:loss = 
6.4867635, step = 1600 (0.142 sec)\nINFO:tensorflow:global_step/sec: 733.041\nINFO:tensorflow:loss = 0.5804969, step = 1700 (0.136 sec)\nINFO:tensorflow:global_step/sec: 764.306\nINFO:tensorflow:loss = 1.5091155, step = 1800 (0.131 sec)\nINFO:tensorflow:global_step/sec: 664.096\nINFO:tensorflow:loss = 3.6764488, step = 1900 (0.151 sec)\nINFO:tensorflow:global_step/sec: 687.762\nINFO:tensorflow:loss = 1.4820085, step = 2000 (0.145 sec)\nINFO:tensorflow:global_step/sec: 667.013\nINFO:tensorflow:loss = 1.2964534, step = 2100 (0.150 sec)\nINFO:tensorflow:global_step/sec: 704.008\nINFO:tensorflow:loss = 1.0711427, step = 2200 (0.142 sec)\nINFO:tensorflow:global_step/sec: 664.444\nINFO:tensorflow:loss = 2.6691673, step = 2300 (0.151 sec)\nINFO:tensorflow:global_step/sec: 691.387\nINFO:tensorflow:loss = 0.9668397, step = 2400 (0.144 sec)\nINFO:tensorflow:global_step/sec: 749.867\nINFO:tensorflow:loss = 1.4468323, step = 2500 (0.133 sec)\nINFO:tensorflow:global_step/sec: 725.923\nINFO:tensorflow:loss = 1.8073778, step = 2600 (0.138 sec)\nINFO:tensorflow:global_step/sec: 732.432\nINFO:tensorflow:loss = 3.8904514, step = 2700 (0.136 sec)\nINFO:tensorflow:global_step/sec: 710.886\nINFO:tensorflow:loss = 2.3015192, step = 2800 (0.141 sec)\nINFO:tensorflow:global_step/sec: 713.338\nINFO:tensorflow:loss = 4.671579, step = 2900 (0.140 sec)\nINFO:tensorflow:global_step/sec: 708.305\nINFO:tensorflow:loss = 1.3551085, step = 3000 (0.141 sec)\nINFO:tensorflow:global_step/sec: 647.185\nINFO:tensorflow:loss = 1.9145899, step = 3100 (0.155 sec)\nINFO:tensorflow:global_step/sec: 673.601\nINFO:tensorflow:loss = 1.0741266, step = 3200 (0.148 sec)\nINFO:tensorflow:global_step/sec: 721.531\nINFO:tensorflow:loss = 1.0713774, step = 3300 (0.138 sec)\nINFO:tensorflow:global_step/sec: 713.421\nINFO:tensorflow:loss = 1.3074391, step = 3400 (0.140 sec)\nINFO:tensorflow:global_step/sec: 633.857\nINFO:tensorflow:loss = 2.0073137, step = 3500 (0.158 sec)\nINFO:tensorflow:global_step/sec: 672.287\nINFO:tensorflow:loss = 13.952677, step = 3600 (0.149 sec)\nINFO:tensorflow:global_step/sec: 639.094\nINFO:tensorflow:loss = 1.6767453, step = 3700 (0.157 sec)\nINFO:tensorflow:global_step/sec: 731.891\nINFO:tensorflow:loss = 0.27798674, step = 3800 (0.137 sec)\nINFO:tensorflow:global_step/sec: 728.154\nINFO:tensorflow:loss = 3.5524733, step = 3900 (0.137 sec)\nINFO:tensorflow:global_step/sec: 714.217\nINFO:tensorflow:loss = 0.6761815, step = 4000 (0.140 sec)\nINFO:tensorflow:global_step/sec: 744.721\nINFO:tensorflow:loss = 0.79083383, step = 4100 (0.134 sec)\nINFO:tensorflow:global_step/sec: 685.419\nINFO:tensorflow:loss = 1.3305103, step = 4200 (0.146 sec)\nINFO:tensorflow:global_step/sec: 632.625\nINFO:tensorflow:loss = 0.14447726, step = 4300 (0.158 sec)\nINFO:tensorflow:global_step/sec: 679.755\nINFO:tensorflow:loss = 1.8386902, step = 4400 (0.148 sec)\nINFO:tensorflow:global_step/sec: 751.428\nINFO:tensorflow:loss = 0.94889283, step = 4500 (0.132 sec)\nINFO:tensorflow:global_step/sec: 750.376\nINFO:tensorflow:loss = 0.28424773, step = 4600 (0.134 sec)\nINFO:tensorflow:global_step/sec: 735.964\nINFO:tensorflow:loss = 3.266353, step = 4700 (0.136 sec)\nINFO:tensorflow:global_step/sec: 742.054\nINFO:tensorflow:loss = 3.171119, step = 4800 (0.134 sec)\nINFO:tensorflow:global_step/sec: 779.657\nINFO:tensorflow:loss = 1.12006, step = 4900 (0.128 sec)\nINFO:tensorflow:global_step/sec: 730.952\nINFO:tensorflow:loss = 0.5669488, step = 5000 (0.137 sec)\nINFO:tensorflow:global_step/sec: 715.897\nINFO:tensorflow:loss = 0.3067366, step = 
5100 (0.140 sec)\nINFO:tensorflow:global_step/sec: 631.438\nINFO:tensorflow:loss = 0.5437011, step = 5200 (0.159 sec)\nINFO:tensorflow:global_step/sec: 654.717\nINFO:tensorflow:loss = 0.25085437, step = 5300 (0.153 sec)\nINFO:tensorflow:global_step/sec: 658.829\nINFO:tensorflow:loss = 0.30891788, step = 5400 (0.151 sec)\nINFO:tensorflow:global_step/sec: 672.908\nINFO:tensorflow:loss = 0.8258436, step = 5500 (0.149 sec)\nINFO:tensorflow:global_step/sec: 574.211\nINFO:tensorflow:loss = 0.19280735, step = 5600 (0.173 sec)\nINFO:tensorflow:global_step/sec: 653.783\nINFO:tensorflow:loss = 0.17635345, step = 5700 (0.155 sec)\nINFO:tensorflow:global_step/sec: 635.343\nINFO:tensorflow:loss = 2.1531484, step = 5800 (0.155 sec)\nINFO:tensorflow:global_step/sec: 628.505\nINFO:tensorflow:loss = 1.3385924, step = 5900 (0.159 sec)\nINFO:tensorflow:global_step/sec: 743.577\nINFO:tensorflow:loss = 1.7493693, step = 6000 (0.134 sec)\nINFO:tensorflow:global_step/sec: 759.487\nINFO:tensorflow:loss = 1.3990214, step = 6100 (0.132 sec)\nINFO:tensorflow:global_step/sec: 715.001\nINFO:tensorflow:loss = 0.08636901, step = 6200 (0.140 sec)\nINFO:tensorflow:global_step/sec: 682.812\nINFO:tensorflow:loss = 1.2878852, step = 6300 (0.147 sec)\nINFO:tensorflow:global_step/sec: 668.255\nINFO:tensorflow:loss = 2.8647041, step = 6400 (0.149 sec)\nINFO:tensorflow:global_step/sec: 702.609\nINFO:tensorflow:loss = 0.38349468, step = 6500 (0.143 sec)\nINFO:tensorflow:global_step/sec: 717.91\nINFO:tensorflow:loss = 0.71950877, step = 6600 (0.139 sec)\nINFO:tensorflow:global_step/sec: 719.603\nINFO:tensorflow:loss = 0.8812942, step = 6700 (0.139 sec)\nINFO:tensorflow:global_step/sec: 697.269\nINFO:tensorflow:loss = 0.5575855, step = 6800 (0.143 sec)\nINFO:tensorflow:global_step/sec: 733.281\nINFO:tensorflow:loss = 0.97547567, step = 6900 (0.136 sec)\nINFO:tensorflow:global_step/sec: 731.343\nINFO:tensorflow:loss = 0.10276175, step = 7000 (0.137 sec)\nINFO:tensorflow:global_step/sec: 739.462\nINFO:tensorflow:loss = 1.1644993, step = 7100 (0.135 sec)\nINFO:tensorflow:global_step/sec: 674.078\nINFO:tensorflow:loss = 0.87335706, step = 7200 (0.149 sec)\n" ], [ "test_input_fn = tf.estimator.inputs.numpy_input_fn(\n x={\"X\": X_test}, y=y_test, shuffle=False)\neval_results = dnn_clf.evaluate(input_fn=test_input_fn)", "INFO:tensorflow:Calling model_fn.\nINFO:tensorflow:Done calling model_fn.\nINFO:tensorflow:Starting evaluation at 2018-05-18-19:12:49\nINFO:tensorflow:Graph was finalized.\nINFO:tensorflow:Restoring parameters from /tmp/tmpuflzeb_h/model.ckpt-44000\nINFO:tensorflow:Running local_init_op.\nINFO:tensorflow:Done running local_init_op.\nINFO:tensorflow:Finished evaluation at 2018-05-18-19:12:50\nINFO:tensorflow:Saving dict for global step 44000: accuracy = 0.9798, average_loss = 0.10096103, global_step = 44000, loss = 12.779877\n" ], [ "eval_results", "_____no_output_____" ], [ "y_pred_iter = dnn_clf.predict(input_fn=test_input_fn)\ny_pred = list(y_pred_iter)\ny_pred[0]", "INFO:tensorflow:Calling model_fn.\nINFO:tensorflow:Done calling model_fn.\nINFO:tensorflow:Graph was finalized.\nINFO:tensorflow:Restoring parameters from /tmp/tmpuflzeb_h/model.ckpt-44000\nINFO:tensorflow:Running local_init_op.\nINFO:tensorflow:Done running local_init_op.\n" ] ], [ [ "## Using plain TensorFlow", "_____no_output_____" ] ], [ [ "import tensorflow as tf\n\nn_inputs = 28*28 # MNIST\nn_hidden1 = 300\nn_hidden2 = 100\nn_outputs = 10", "_____no_output_____" ], [ "reset_graph()\n\nX = tf.placeholder(tf.float32, shape=(None, n_inputs), 
name=\"X\")\ny = tf.placeholder(tf.int32, shape=(None), name=\"y\")", "_____no_output_____" ], [ "def neuron_layer(X, n_neurons, name, activation=None):\n with tf.name_scope(name):\n n_inputs = int(X.get_shape()[1])\n stddev = 2 / np.sqrt(n_inputs)\n init = tf.truncated_normal((n_inputs, n_neurons), stddev=stddev)\n W = tf.Variable(init, name=\"kernel\")\n b = tf.Variable(tf.zeros([n_neurons]), name=\"bias\")\n Z = tf.matmul(X, W) + b\n if activation is not None:\n return activation(Z)\n else:\n return Z", "_____no_output_____" ], [ "with tf.name_scope(\"dnn\"):\n hidden1 = neuron_layer(X, n_hidden1, name=\"hidden1\",\n activation=tf.nn.relu)\n hidden2 = neuron_layer(hidden1, n_hidden2, name=\"hidden2\",\n activation=tf.nn.relu)\n logits = neuron_layer(hidden2, n_outputs, name=\"outputs\")", "_____no_output_____" ], [ "with tf.name_scope(\"loss\"):\n xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y,\n logits=logits)\n loss = tf.reduce_mean(xentropy, name=\"loss\")", "_____no_output_____" ], [ "learning_rate = 0.01\n\nwith tf.name_scope(\"train\"):\n optimizer = tf.train.GradientDescentOptimizer(learning_rate)\n training_op = optimizer.minimize(loss)", "_____no_output_____" ], [ "with tf.name_scope(\"eval\"):\n correct = tf.nn.in_top_k(logits, y, 1)\n accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))", "_____no_output_____" ], [ "init = tf.global_variables_initializer()\nsaver = tf.train.Saver()", "_____no_output_____" ], [ "n_epochs = 40\nbatch_size = 50", "_____no_output_____" ], [ "def shuffle_batch(X, y, batch_size):\n rnd_idx = np.random.permutation(len(X))\n n_batches = len(X) // batch_size\n for batch_idx in np.array_split(rnd_idx, n_batches):\n X_batch, y_batch = X[batch_idx], y[batch_idx]\n yield X_batch, y_batch", "_____no_output_____" ], [ "with tf.Session() as sess:\n init.run()\n for epoch in range(n_epochs):\n for X_batch, y_batch in shuffle_batch(X_train, y_train, batch_size):\n sess.run(training_op, feed_dict={X: X_batch, y: y_batch})\n acc_batch = accuracy.eval(feed_dict={X: X_batch, y: y_batch})\n acc_val = accuracy.eval(feed_dict={X: X_valid, y: y_valid})\n print(epoch, \"Batch accuracy:\", acc_batch, \"Val accuracy:\", acc_val)\n\n save_path = saver.save(sess, \"./my_model_final.ckpt\")", "0 Batch accuracy: 0.9 Val accuracy: 0.9146\n1 Batch accuracy: 0.92 Val accuracy: 0.936\n2 Batch accuracy: 0.96 Val accuracy: 0.945\n3 Batch accuracy: 0.92 Val accuracy: 0.9512\n4 Batch accuracy: 0.98 Val accuracy: 0.9558\n5 Batch accuracy: 0.96 Val accuracy: 0.9566\n6 Batch accuracy: 1.0 Val accuracy: 0.9612\n7 Batch accuracy: 0.94 Val accuracy: 0.963\n8 Batch accuracy: 0.98 Val accuracy: 0.9652\n9 Batch accuracy: 0.96 Val accuracy: 0.966\n10 Batch accuracy: 0.92 Val accuracy: 0.9688\n11 Batch accuracy: 0.98 Val accuracy: 0.969\n12 Batch accuracy: 0.98 Val accuracy: 0.967\n13 Batch accuracy: 0.98 Val accuracy: 0.9706\n14 Batch accuracy: 1.0 Val accuracy: 0.9714\n15 Batch accuracy: 0.94 Val accuracy: 0.9732\n16 Batch accuracy: 1.0 Val accuracy: 0.9736\n17 Batch accuracy: 1.0 Val accuracy: 0.9742\n18 Batch accuracy: 1.0 Val accuracy: 0.9746\n19 Batch accuracy: 0.98 Val accuracy: 0.9748\n20 Batch accuracy: 1.0 Val accuracy: 0.9752\n21 Batch accuracy: 1.0 Val accuracy: 0.9752\n22 Batch accuracy: 0.98 Val accuracy: 0.9764\n23 Batch accuracy: 0.98 Val accuracy: 0.9752\n24 Batch accuracy: 0.98 Val accuracy: 0.9772\n25 Batch accuracy: 1.0 Val accuracy: 0.977\n26 Batch accuracy: 0.98 Val accuracy: 0.9778\n27 Batch accuracy: 1.0 Val accuracy: 0.9774\n28 Batch accuracy: 
0.96 Val accuracy: 0.9754\n29 Batch accuracy: 0.98 Val accuracy: 0.9776\n30 Batch accuracy: 1.0 Val accuracy: 0.9756\n31 Batch accuracy: 0.98 Val accuracy: 0.9772\n32 Batch accuracy: 0.98 Val accuracy: 0.9772\n33 Batch accuracy: 0.98 Val accuracy: 0.979\n34 Batch accuracy: 1.0 Val accuracy: 0.9784\n35 Batch accuracy: 1.0 Val accuracy: 0.9778\n36 Batch accuracy: 0.98 Val accuracy: 0.978\n37 Batch accuracy: 1.0 Val accuracy: 0.9776\n38 Batch accuracy: 1.0 Val accuracy: 0.9792\n39 Batch accuracy: 1.0 Val accuracy: 0.9776\n" ], [ "with tf.Session() as sess:\n saver.restore(sess, \"./my_model_final.ckpt\") # or better, use save_path\n X_new_scaled = X_test[:20]\n Z = logits.eval(feed_dict={X: X_new_scaled})\n y_pred = np.argmax(Z, axis=1)", "INFO:tensorflow:Restoring parameters from ./my_model_final.ckpt\n" ], [ "print(\"Predicted classes:\", y_pred)\nprint(\"Actual classes: \", y_test[:20])", "Predicted classes: [7 2 1 0 4 1 4 9 5 9 0 6 9 0 1 5 9 7 3 4]\nActual classes: [7 2 1 0 4 1 4 9 5 9 0 6 9 0 1 5 9 7 3 4]\n" ], [ "from tensorflow_graph_in_jupyter import show_graph", "_____no_output_____" ], [ "show_graph(tf.get_default_graph())", "_____no_output_____" ] ], [ [ "## Using `dense()` instead of `neuron_layer()`", "_____no_output_____" ], [ "Note: previous releases of the book used `tensorflow.contrib.layers.fully_connected()` rather than `tf.layers.dense()` (which did not exist when this chapter was written). It is now preferable to use `tf.layers.dense()`, because anything in the contrib module may change or be deleted without notice. The `dense()` function is almost identical to the `fully_connected()` function, except for a few minor differences:\n* several parameters are renamed: `scope` becomes `name`, `activation_fn` becomes `activation` (and similarly the `_fn` suffix is removed from other parameters such as `normalizer_fn`), `weights_initializer` becomes `kernel_initializer`, etc.\n* the default `activation` is now `None` rather than `tf.nn.relu`.\n* a few more differences are presented in chapter 11.", "_____no_output_____" ] ], [ [ "n_inputs = 28*28 # MNIST\nn_hidden1 = 300\nn_hidden2 = 100\nn_outputs = 10", "_____no_output_____" ], [ "reset_graph()\n\nX = tf.placeholder(tf.float32, shape=(None, n_inputs), name=\"X\")\ny = tf.placeholder(tf.int32, shape=(None), name=\"y\") ", "_____no_output_____" ], [ "with tf.name_scope(\"dnn\"):\n hidden1 = tf.layers.dense(X, n_hidden1, name=\"hidden1\",\n activation=tf.nn.relu)\n hidden2 = tf.layers.dense(hidden1, n_hidden2, name=\"hidden2\",\n activation=tf.nn.relu)\n logits = tf.layers.dense(hidden2, n_outputs, name=\"outputs\")\n y_proba = tf.nn.softmax(logits)", "_____no_output_____" ], [ "with tf.name_scope(\"loss\"):\n xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)\n loss = tf.reduce_mean(xentropy, name=\"loss\")", "_____no_output_____" ], [ "learning_rate = 0.01\n\nwith tf.name_scope(\"train\"):\n optimizer = tf.train.GradientDescentOptimizer(learning_rate)\n training_op = optimizer.minimize(loss)", "_____no_output_____" ], [ "with tf.name_scope(\"eval\"):\n correct = tf.nn.in_top_k(logits, y, 1)\n accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))", "_____no_output_____" ], [ "init = tf.global_variables_initializer()\nsaver = tf.train.Saver()", "_____no_output_____" ], [ "n_epochs = 20\nn_batches = 50\n\nwith tf.Session() as sess:\n init.run()\n for epoch in range(n_epochs):\n for X_batch, y_batch in shuffle_batch(X_train, y_train, batch_size):\n sess.run(training_op, feed_dict={X: X_batch, y: 
y_batch})\n acc_batch = accuracy.eval(feed_dict={X: X_batch, y: y_batch})\n acc_valid = accuracy.eval(feed_dict={X: X_valid, y: y_valid})\n print(epoch, \"Batch accuracy:\", acc_batch, \"Validation accuracy:\", acc_valid)\n\n save_path = saver.save(sess, \"./my_model_final.ckpt\")", "0 Batch accuracy: 0.9 Validation accuracy: 0.9024\n1 Batch accuracy: 0.92 Validation accuracy: 0.9254\n2 Batch accuracy: 0.94 Validation accuracy: 0.9372\n3 Batch accuracy: 0.9 Validation accuracy: 0.9416\n4 Batch accuracy: 0.94 Validation accuracy: 0.9472\n5 Batch accuracy: 0.94 Validation accuracy: 0.9512\n6 Batch accuracy: 1.0 Validation accuracy: 0.9548\n7 Batch accuracy: 0.94 Validation accuracy: 0.961\n8 Batch accuracy: 0.96 Validation accuracy: 0.962\n9 Batch accuracy: 0.94 Validation accuracy: 0.9648\n10 Batch accuracy: 0.92 Validation accuracy: 0.9656\n11 Batch accuracy: 0.98 Validation accuracy: 0.9668\n12 Batch accuracy: 0.98 Validation accuracy: 0.9684\n13 Batch accuracy: 0.98 Validation accuracy: 0.9702\n14 Batch accuracy: 1.0 Validation accuracy: 0.9696\n15 Batch accuracy: 0.94 Validation accuracy: 0.9718\n16 Batch accuracy: 0.98 Validation accuracy: 0.9728\n17 Batch accuracy: 1.0 Validation accuracy: 0.973\n18 Batch accuracy: 0.98 Validation accuracy: 0.9748\n19 Batch accuracy: 0.98 Validation accuracy: 0.9756\n" ], [ "show_graph(tf.get_default_graph())", "_____no_output_____" ] ], [ [ "# Exercise solutions", "_____no_output_____" ], [ "## 1. to 8.", "_____no_output_____" ], [ "See appendix A.", "_____no_output_____" ], [ "## 9.", "_____no_output_____" ], [ "_Train a deep MLP on the MNIST dataset and see if you can get over 98% precision. Just like in the last exercise of chapter 9, try adding all the bells and whistles (i.e., save checkpoints, restore the last checkpoint in case of an interruption, add summaries, plot learning curves using TensorBoard, and so on)._", "_____no_output_____" ], [ "First let's create the deep net. 
It's exactly the same as earlier, with just one addition: we add a `tf.summary.scalar()` to track the loss and the accuracy during training, so we can view nice learning curves using TensorBoard.", "_____no_output_____" ] ], [ [ "n_inputs = 28*28 # MNIST\nn_hidden1 = 300\nn_hidden2 = 100\nn_outputs = 10", "_____no_output_____" ], [ "reset_graph()\n\nX = tf.placeholder(tf.float32, shape=(None, n_inputs), name=\"X\")\ny = tf.placeholder(tf.int32, shape=(None), name=\"y\") ", "_____no_output_____" ], [ "with tf.name_scope(\"dnn\"):\n hidden1 = tf.layers.dense(X, n_hidden1, name=\"hidden1\",\n activation=tf.nn.relu)\n hidden2 = tf.layers.dense(hidden1, n_hidden2, name=\"hidden2\",\n activation=tf.nn.relu)\n logits = tf.layers.dense(hidden2, n_outputs, name=\"outputs\")", "_____no_output_____" ], [ "with tf.name_scope(\"loss\"):\n xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)\n loss = tf.reduce_mean(xentropy, name=\"loss\")\n loss_summary = tf.summary.scalar('log_loss', loss)", "_____no_output_____" ], [ "learning_rate = 0.01\n\nwith tf.name_scope(\"train\"):\n optimizer = tf.train.GradientDescentOptimizer(learning_rate)\n training_op = optimizer.minimize(loss)", "_____no_output_____" ], [ "with tf.name_scope(\"eval\"):\n correct = tf.nn.in_top_k(logits, y, 1)\n accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))\n accuracy_summary = tf.summary.scalar('accuracy', accuracy)", "_____no_output_____" ], [ "init = tf.global_variables_initializer()\nsaver = tf.train.Saver()", "_____no_output_____" ] ], [ [ "Now we need to define the directory to write the TensorBoard logs to:", "_____no_output_____" ] ], [ [ "from datetime import datetime\n\ndef log_dir(prefix=\"\"):\n now = datetime.utcnow().strftime(\"%Y%m%d%H%M%S\")\n root_logdir = \"tf_logs\"\n if prefix:\n prefix += \"-\"\n name = prefix + \"run-\" + now\n return \"{}/{}/\".format(root_logdir, name)", "_____no_output_____" ], [ "logdir = log_dir(\"mnist_dnn\")", "_____no_output_____" ] ], [ [ "Now we can create the `FileWriter` that we will use to write the TensorBoard logs:", "_____no_output_____" ] ], [ [ "file_writer = tf.summary.FileWriter(logdir, tf.get_default_graph())", "_____no_output_____" ] ], [ [ "Hey! Why don't we implement early stopping? For this, we are going to need to use the validation set.", "_____no_output_____" ] ], [ [ "m, n = X_train.shape", "_____no_output_____" ], [ "n_epochs = 10001\nbatch_size = 50\nn_batches = int(np.ceil(m / batch_size))\n\ncheckpoint_path = \"/tmp/my_deep_mnist_model.ckpt\"\ncheckpoint_epoch_path = checkpoint_path + \".epoch\"\nfinal_model_path = \"./my_deep_mnist_model\"\n\nbest_loss = np.infty\nepochs_without_progress = 0\nmax_epochs_without_progress = 50\n\nwith tf.Session() as sess:\n if os.path.isfile(checkpoint_epoch_path):\n # if the checkpoint file exists, restore the model and load the epoch number\n with open(checkpoint_epoch_path, \"rb\") as f:\n start_epoch = int(f.read())\n print(\"Training was interrupted. 
Continuing at epoch\", start_epoch)\n saver.restore(sess, checkpoint_path)\n else:\n start_epoch = 0\n sess.run(init)\n\n for epoch in range(start_epoch, n_epochs):\n for X_batch, y_batch in shuffle_batch(X_train, y_train, batch_size):\n sess.run(training_op, feed_dict={X: X_batch, y: y_batch})\n accuracy_val, loss_val, accuracy_summary_str, loss_summary_str = sess.run([accuracy, loss, accuracy_summary, loss_summary], feed_dict={X: X_valid, y: y_valid})\n file_writer.add_summary(accuracy_summary_str, epoch)\n file_writer.add_summary(loss_summary_str, epoch)\n if epoch % 5 == 0:\n print(\"Epoch:\", epoch,\n \"\\tValidation accuracy: {:.3f}%\".format(accuracy_val * 100),\n \"\\tLoss: {:.5f}\".format(loss_val))\n saver.save(sess, checkpoint_path)\n with open(checkpoint_epoch_path, \"wb\") as f:\n f.write(b\"%d\" % (epoch + 1))\n if loss_val < best_loss:\n saver.save(sess, final_model_path)\n best_loss = loss_val\n else:\n epochs_without_progress += 5\n if epochs_without_progress > max_epochs_without_progress:\n print(\"Early stopping\")\n break", "Epoch: 0 \tValidation accuracy: 92.180% \tLoss: 0.30208\nEpoch: 5 \tValidation accuracy: 95.980% \tLoss: 0.15037\nEpoch: 10 \tValidation accuracy: 97.100% \tLoss: 0.11160\nEpoch: 15 \tValidation accuracy: 97.700% \tLoss: 0.09562\nEpoch: 20 \tValidation accuracy: 97.840% \tLoss: 0.08309\nEpoch: 25 \tValidation accuracy: 98.040% \tLoss: 0.07706\nEpoch: 30 \tValidation accuracy: 98.140% \tLoss: 0.07287\nEpoch: 35 \tValidation accuracy: 98.280% \tLoss: 0.07133\nEpoch: 40 \tValidation accuracy: 98.220% \tLoss: 0.06968\nEpoch: 45 \tValidation accuracy: 98.220% \tLoss: 0.06993\nEpoch: 50 \tValidation accuracy: 98.160% \tLoss: 0.07093\nEpoch: 55 \tValidation accuracy: 98.280% \tLoss: 0.06994\nEpoch: 60 \tValidation accuracy: 98.200% \tLoss: 0.06894\nEpoch: 65 \tValidation accuracy: 98.260% \tLoss: 0.06906\nEpoch: 70 \tValidation accuracy: 98.220% \tLoss: 0.07057\nEpoch: 75 \tValidation accuracy: 98.280% \tLoss: 0.06963\nEpoch: 80 \tValidation accuracy: 98.320% \tLoss: 0.07264\nEpoch: 85 \tValidation accuracy: 98.200% \tLoss: 0.07403\nEpoch: 90 \tValidation accuracy: 98.300% \tLoss: 0.07332\nEpoch: 95 \tValidation accuracy: 98.180% \tLoss: 0.07535\nEpoch: 100 \tValidation accuracy: 98.260% \tLoss: 0.07542\nEarly stopping\n" ], [ "os.remove(checkpoint_epoch_path)", "_____no_output_____" ], [ "with tf.Session() as sess:\n saver.restore(sess, final_model_path)\n accuracy_val = accuracy.eval(feed_dict={X: X_test, y: y_test})", "INFO:tensorflow:Restoring parameters from ./my_deep_mnist_model\n" ], [ "accuracy_val", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ] ]
d017e0f120db36201231f7bc2264fcd3cc761f0c
17,312
ipynb
Jupyter Notebook
BasicGates/BasicGates.ipynb
kant/QuantumKatas
1ce273d5871ac4f1c88680766597f3f47cafa6b0
[ "MIT" ]
1
2020-10-23T10:11:56.000Z
2020-10-23T10:11:56.000Z
BasicGates/BasicGates.ipynb
kant/QuantumKatas
1ce273d5871ac4f1c88680766597f3f47cafa6b0
[ "MIT" ]
null
null
null
BasicGates/BasicGates.ipynb
kant/QuantumKatas
1ce273d5871ac4f1c88680766597f3f47cafa6b0
[ "MIT" ]
null
null
null
33.038168
360
0.539683
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
d017fc37fb47689fd38e8d8b8bad53120acf1b08
6,416
ipynb
Jupyter Notebook
astr-119-session-7/bisection_search_demo.ipynb
spaceghst007/astro-119
bb9aa0c27781774ffa9dfbeefcd5267934eaaece
[ "MIT" ]
null
null
null
astr-119-session-7/bisection_search_demo.ipynb
spaceghst007/astro-119
bb9aa0c27781774ffa9dfbeefcd5267934eaaece
[ "MIT" ]
9
2021-09-23T18:54:54.000Z
2021-12-09T19:56:08.000Z
astr-119-session-7/bisection_search_demo.ipynb
spaceghst007/astro-119
bb9aa0c27781774ffa9dfbeefcd5267934eaaece
[ "MIT" ]
null
null
null
28.7713
94
0.467893
[ [ [ "%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt", "_____no_output_____" ] ], [ [ "## Define a function for which we'd like to find the roots", "_____no_output_____" ] ], [ [ "def function_for_roots(x):\n a = 1.01\n b = -3.04\n c = 2.07\n return a*x**2 + b*x + c #get the roots of ax^2 + bx + c", "_____no_output_____" ] ], [ [ "## We need a function to check whether our initial values are valid", "_____no_output_____" ] ], [ [ "def check_initial_values(f, x_min, x_max, tol):\n \n #check our initial guesses\n y_min = f(x_min)\n y_max = f(x_max)\n \n #check that x_min and x_max contain a zero crossing\n if(y_min*y_max>=0.0):\n print(\"No zero crossing found in the range = \",x_min,x_max)\n s = \"f(%f) = %f, f(%f) = %f\" % (x_min,y_min,x_max,y_max)\n print(s)\n return 0\n \n #if x_min is a root, then return flag == 1\n if(np.fabs(y_min)<tol):\n return 1\n \n #if x_max is a root, then return flag == 2\n if(np.fabs(y_max)<tol):\n return 2\n \n #if we reach this point, the bracket is valid\n #and we will return 3\n return 3", "_____no_output_____" ] ], [ [ "## Now we will define the main work function that actually performs the iterative search", "_____no_output_____" ] ], [ [ "def bisection_root_finding(f, x_min_start, x_max_start, tol):\n \n #this function uses bisection search to find a root\n \n x_min = x_min_start #minimum x in bracket\n x_max = x_max_start #maximum x in bracket\n x_mid = 0.0 #mid point\n \n y_min = f(x_min) #function value at x_min\n y_max = f(x_max) #function value at x_max\n y_mid = 0.0 #function value at mid point\n \n imax = 10000 #set a maximum number of iterations\n i = 0 #iteration counter\n \n #check the initial values\n flag = check_initial_values(f,x_min,x_max,tol)\n if(flag==0):\n print(\"Error in bisection_root_finding().\")\n raise ValueError('Intial values invalid',x_min,x_max)\n elif(flag==1):\n #lucky guess\n return x_min\n elif(flag==2):\n #another lucky guess\n return x_max\n \n #if we reach here, then we need to conduct the search\n \n #set a flag\n flag = 1\n \n #enter a while loop\n while(flag):\n x_mid = 0.5*(x_min+x_max) #mid point\n y_mid = f(x_mid) #function value at x_mid\n \n #check if x_mid is a root\n if(np.fabs(y_mid)<tol):\n flag = 0\n else:\n #x_mid is not a root\n \n #if the product of the functio at the midpoint\n #and at one of the end points is greater than\n #zero, replace this end point\n if(f(x_min)*f(x_mid)>0):\n #replace x_min with x_mid \n x_min = x_mid\n else:\n #repalce x_max with x_mid\n x_max = x_mid\n \n #print out the iteration\n print(x_min,f(x_min),x_max,f(x_max))\n \n #count the iteration\n i += 1\n \n #if we have exceeded the max number\n #of iterations, exit\n if(i>=imax):\n print(\"Exceeded max number of iterations = \",i)\n s = \"Min bracket f(%f) = %f\" % (x_min,f(x_min))\n print(s)\n s = \"Max bracket f(%f) = %f\" % (x_max,f(x_max))\n print(s)\n s = \"Mid bracket f(%f) = %f\" % (x_mid,f(x_mid))\n print(s)\n raise StopIteration('Stopping iterations after ',i)\n \n #we are done!\n return x_mid", "_____no_output_____" ], [ "x_min = 0.0\nx_max = 1.5\ntolerance = 1.0e-6\n\n#print the initial guess\nprint(x_min,function_for_roots(x_min))\nprint(x_max,function_for_roots(x_max))\n\nx_root = bisection_root_finding(function_for_roots,x_min,x_max,tolerance)\ny_root = function_for_roots(x_root)\n\ns = \"Root found with y(%f) = %f\" % (x_root,y_root)\nprint(s)", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
d018064ba4946017bdc7dd4454465353af089481
124,825
ipynb
Jupyter Notebook
matplotlib/04.05-Histograms-and-Binnings.ipynb
purushothamgowthu/data-science-ipython-notebooks
fdd2cf59ec589f952718e63ff96c04effffb3144
[ "Apache-2.0" ]
24,753
2015-06-01T10:56:36.000Z
2022-03-31T19:19:58.000Z
matplotlib/04.05-Histograms-and-Binnings.ipynb
purushothamgowthu/data-science-ipython-notebooks
fdd2cf59ec589f952718e63ff96c04effffb3144
[ "Apache-2.0" ]
52
2015-06-16T11:09:33.000Z
2021-09-09T09:19:03.000Z
matplotlib/04.05-Histograms-and-Binnings.ipynb
purushothamgowthu/data-science-ipython-notebooks
fdd2cf59ec589f952718e63ff96c04effffb3144
[ "Apache-2.0" ]
7,653
2015-06-06T23:19:20.000Z
2022-03-31T06:57:39.000Z
316.814721
37,706
0.922259
[ [ [ "<!--BOOK_INFORMATION-->\n<img align=\"left\" style=\"padding-right:10px;\" src=\"figures/PDSH-cover-small.png\">\n*This notebook contains an excerpt from the [Python Data Science Handbook](http://shop.oreilly.com/product/0636920034919.do) by Jake VanderPlas; the content is available [on GitHub](https://github.com/jakevdp/PythonDataScienceHandbook).*\n\n*The text is released under the [CC-BY-NC-ND license](https://creativecommons.org/licenses/by-nc-nd/3.0/us/legalcode), and code is released under the [MIT license](https://opensource.org/licenses/MIT). If you find this content useful, please consider supporting the work by [buying the book](http://shop.oreilly.com/product/0636920034919.do)!*\n\n*No changes were made to the contents of this notebook from the original.*", "_____no_output_____" ], [ "<!--NAVIGATION-->\n< [Density and Contour Plots](04.04-Density-and-Contour-Plots.ipynb) | [Contents](Index.ipynb) | [Customizing Plot Legends](04.06-Customizing-Legends.ipynb) >", "_____no_output_____" ], [ "# Histograms, Binnings, and Density", "_____no_output_____" ], [ "A simple histogram can be a great first step in understanding a dataset.\nEarlier, we saw a preview of Matplotlib's histogram function (see [Comparisons, Masks, and Boolean Logic](02.06-Boolean-Arrays-and-Masks.ipynb)), which creates a basic histogram in one line, once the normal boiler-plate imports are done:", "_____no_output_____" ] ], [ [ "%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nplt.style.use('seaborn-white')\n\ndata = np.random.randn(1000)", "_____no_output_____" ], [ "plt.hist(data);", "_____no_output_____" ] ], [ [ "The ``hist()`` function has many options to tune both the calculation and the display; \nhere's an example of a more customized histogram:", "_____no_output_____" ] ], [ [ "plt.hist(data, bins=30, normed=True, alpha=0.5,\n histtype='stepfilled', color='steelblue',\n edgecolor='none');", "_____no_output_____" ] ], [ [ "The ``plt.hist`` docstring has more information on other customization options available.\nI find this combination of ``histtype='stepfilled'`` along with some transparency ``alpha`` to be very useful when comparing histograms of several distributions:", "_____no_output_____" ] ], [ [ "x1 = np.random.normal(0, 0.8, 1000)\nx2 = np.random.normal(-2, 1, 1000)\nx3 = np.random.normal(3, 2, 1000)\n\nkwargs = dict(histtype='stepfilled', alpha=0.3, normed=True, bins=40)\n\nplt.hist(x1, **kwargs)\nplt.hist(x2, **kwargs)\nplt.hist(x3, **kwargs);", "_____no_output_____" ] ], [ [ "If you would like to simply compute the histogram (that is, count the number of points in a given bin) and not display it, the ``np.histogram()`` function is available:", "_____no_output_____" ] ], [ [ "counts, bin_edges = np.histogram(data, bins=5)\nprint(counts)", "[ 12 190 468 301 29]\n" ] ], [ [ "## Two-Dimensional Histograms and Binnings\n\nJust as we create histograms in one dimension by dividing the number-line into bins, we can also create histograms in two-dimensions by dividing points among two-dimensional bins.\nWe'll take a brief look at several ways to do this here.\nWe'll start by defining some data—an ``x`` and ``y`` array drawn from a multivariate Gaussian distribution:", "_____no_output_____" ] ], [ [ "mean = [0, 0]\ncov = [[1, 1], [1, 2]]\nx, y = np.random.multivariate_normal(mean, cov, 10000).T", "_____no_output_____" ] ], [ [ "### ``plt.hist2d``: Two-dimensional histogram\n\nOne straightforward way to plot a two-dimensional histogram is to use Matplotlib's ``plt.hist2d`` 
function:", "_____no_output_____" ] ], [ [ "plt.hist2d(x, y, bins=30, cmap='Blues')\ncb = plt.colorbar()\ncb.set_label('counts in bin')", "_____no_output_____" ] ], [ [ "Just as with ``plt.hist``, ``plt.hist2d`` has a number of extra options to fine-tune the plot and the binning, which are nicely outlined in the function docstring.\nFurther, just as ``plt.hist`` has a counterpart in ``np.histogram``, ``plt.hist2d`` has a counterpart in ``np.histogram2d``, which can be used as follows:", "_____no_output_____" ] ], [ [ "counts, xedges, yedges = np.histogram2d(x, y, bins=30)", "_____no_output_____" ] ], [ [ "For the generalization of this histogram binning in dimensions higher than two, see the ``np.histogramdd`` function.", "_____no_output_____" ], [ "### ``plt.hexbin``: Hexagonal binnings\n\nThe two-dimensional histogram creates a tesselation of squares across the axes.\nAnother natural shape for such a tesselation is the regular hexagon.\nFor this purpose, Matplotlib provides the ``plt.hexbin`` routine, which will represents a two-dimensional dataset binned within a grid of hexagons:", "_____no_output_____" ] ], [ [ "plt.hexbin(x, y, gridsize=30, cmap='Blues')\ncb = plt.colorbar(label='count in bin')", "_____no_output_____" ] ], [ [ "``plt.hexbin`` has a number of interesting options, including the ability to specify weights for each point, and to change the output in each bin to any NumPy aggregate (mean of weights, standard deviation of weights, etc.).", "_____no_output_____" ], [ "### Kernel density estimation\n\nAnother common method of evaluating densities in multiple dimensions is *kernel density estimation* (KDE).\nThis will be discussed more fully in [In-Depth: Kernel Density Estimation](05.13-Kernel-Density-Estimation.ipynb), but for now we'll simply mention that KDE can be thought of as a way to \"smear out\" the points in space and add up the result to obtain a smooth function.\nOne extremely quick and simple KDE implementation exists in the ``scipy.stats`` package.\nHere is a quick example of using the KDE on this data:", "_____no_output_____" ] ], [ [ "from scipy.stats import gaussian_kde\n\n# fit an array of size [Ndim, Nsamples]\ndata = np.vstack([x, y])\nkde = gaussian_kde(data)\n\n# evaluate on a regular grid\nxgrid = np.linspace(-3.5, 3.5, 40)\nygrid = np.linspace(-6, 6, 40)\nXgrid, Ygrid = np.meshgrid(xgrid, ygrid)\nZ = kde.evaluate(np.vstack([Xgrid.ravel(), Ygrid.ravel()]))\n\n# Plot the result as an image\nplt.imshow(Z.reshape(Xgrid.shape),\n origin='lower', aspect='auto',\n extent=[-3.5, 3.5, -6, 6],\n cmap='Blues')\ncb = plt.colorbar()\ncb.set_label(\"density\")", "_____no_output_____" ] ], [ [ "KDE has a smoothing length that effectively slides the knob between detail and smoothness (one example of the ubiquitous bias–variance trade-off).\nThe literature on choosing an appropriate smoothing length is vast: ``gaussian_kde`` uses a rule-of-thumb to attempt to find a nearly optimal smoothing length for the input data.\n\nOther KDE implementations are available within the SciPy ecosystem, each with its own strengths and weaknesses; see, for example, ``sklearn.neighbors.KernelDensity`` and ``statsmodels.nonparametric.kernel_density.KDEMultivariate``.\nFor visualizations based on KDE, using Matplotlib tends to be overly verbose.\nThe Seaborn library, discussed in [Visualization With Seaborn](04.14-Visualization-With-Seaborn.ipynb), provides a much more terse API for creating KDE-based visualizations.", "_____no_output_____" ], [ "<!--NAVIGATION-->\n< [Density and Contour 
Plots](04.04-Density-and-Contour-Plots.ipynb) | [Contents](Index.ipynb) | [Customizing Plot Legends](04.06-Customizing-Legends.ipynb) >", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ] ]
d01809d776cb16cc1d9e40cae4f125977b97b515
119,272
ipynb
Jupyter Notebook
notebooks/benchmark_vih.ipynb
victorfica/Master-thesis
5390d8d2df50300639d860a8d17ccd54445cf3a3
[ "MIT" ]
1
2021-03-20T04:56:56.000Z
2021-03-20T04:56:56.000Z
notebooks/benchmark_vih.ipynb
victorfica/Master-thesis
5390d8d2df50300639d860a8d17ccd54445cf3a3
[ "MIT" ]
null
null
null
notebooks/benchmark_vih.ipynb
victorfica/Master-thesis
5390d8d2df50300639d860a8d17ccd54445cf3a3
[ "MIT" ]
null
null
null
45.247344
24,848
0.510598
[ [ [ "import pandas as pd\nimport numpy as np\nimport os\nimport prody\nimport math\n\nfrom pathlib import Path\nimport pickle\nimport sys\nfrom sklearn.externals import joblib\nfrom sklearn.metrics import r2_score,mean_squared_error\n\n\nfrom abpred.Pipeline import PreparePredictions", "_____no_output_____" ], [ "def Kd_2_dG(Kd):\n if Kd == 0:\n \n deltaG = np.log(Kd+1)*(8.314/4184)*(298.15)\n else:\n deltaG = np.log(Kd)*(8.314/4184)*(298.15)\n \n return deltaG\n\ndef deltaG_to_Kd(delg):\n Kd_value = math.exp((delg)/((8.314/4184)*298.15))\n return Kd_value", "_____no_output_____" ] ], [ [ "The effect of a given mutation on antibody binding was represented by apparent affinity (avidity) relative to those for wild-type (WT) gp120, calculated with the formula ([(EC50_WT/EC50_mutant)/(EC50_WT for 2G12/EC50_mutant for 2G12)] × 100)", "_____no_output_____" ] ], [ [ "# Test data\nVIH_final = pd.read_csv('../data/VIH_Test15.csv',index_col=0)\n\n# original info data\nvih_data = pd.read_csv(\"../data/HIV_escape_mutations.csv\",sep=\"\\t\")", "_____no_output_____" ], [ "\n#vih_data[\"pred_ddg2EC50\"] = vih_data[\"mCSM-AB_Pred\"].apply(deltaG_to_Kd)*100", "_____no_output_____" ], [ "vih_original = vih_data.loc[vih_data[\"Mutation_type\"]==\"ORIGINAL\"].copy()\nvih_reverse = vih_data.loc[vih_data[\"Mutation_type\"]==\"REVERSE\"]\n\n#sort values to appedn to prediction data table\nvih_original.loc[:,\"mut_code\"] = (vih_reverse[\"Chain\"]+vih_reverse[\"Mutation\"].str[1:]).values\nvih_original.sort_values(by='mut_code',inplace=True)\n", "_____no_output_____" ], [ "vih_original[\"Mutation_original\"] = vih_original[\"Mutation\"].str[-1]+vih_original[\"Mutation\"].str[1:-1]+vih_original[\"Mutation\"].str[0]", "_____no_output_____" ], [ "vih_original.loc[(vih_original['Exptal'] <= 33 ),\"mutation-effect\"] = \"decreased\"\nvih_original.loc[(vih_original['Exptal'] > 300 ),\"mutation-effect\"] = \"increased\"\nvih_original.loc[(vih_original['Exptal'] < 300 )&(vih_original['Exptal'] > 33 ),\"mutation-effect\"] = \"neutral\"\n\n\nvih_reverse.loc[(vih_reverse['Exptal'] <= 33 ),\"mutation-effect\"] = \"decreased\"\nvih_reverse.loc[(vih_reverse['Exptal'] > 300 ),\"mutation-effect\"] = \"increased\"\nvih_reverse.loc[(vih_reverse['Exptal'] < 300 )&(vih_reverse['Exptal'] > 33 ),\"mutation-effect\"] = \"neutral\"\n", "/home/vilion/miniconda3/envs/bioinf/lib/python3.7/site-packages/pandas/core/indexing.py:494: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n self.obj[item] = s\n" ], [ "#\n#xgbr = XGBRegressor()\n#xgbr.load_model(fname='xgb_final_400F_smote_032019.sav')\n\n#xgbr_borderline = XGBRegressor()\n#xgbr_borderline.load_model(fname='xgb_final_400F_borderlinesmote_032019.sav')\n", "_____no_output_____" ], [ "# X and y data transformed to delta G\nX = VIH_final.drop(\"Exptal\",axis=1)\ny_energy = (VIH_final[\"Exptal\"]/1000).apply(Kd_2_dG)\ny_binding = VIH_final[\"Exptal\"].values", "_____no_output_____" ], [ "PreparePredictions(X).run()", "_____no_output_____" ], [ "X.ddg.sort_values().head(10)", "_____no_output_____" ], [ "vih_original.loc[vih_original[\"mutation-effect\"]==\"increased\"]", "_____no_output_____" ], [ "461", "_____no_output_____" ], [ "197", "_____no_output_____" ], [ "#ridge_model = joblib.load('ridgeLinear_train15skempiAB_FINAL.pkl')\nlasso_model = 
joblib.load('Lasso_train15skempiAB_FINAL.pkl')\nelasticnet_model = joblib.load('elasticNet_train15skempiAB_FINAL.pkl')\nsvr_model = joblib.load('rbfSVRmodel_train15skempiAB_FINAL.pkl')\npoly_model = joblib.load(\"poly2SVRmodel_train15skempiAB_FINAL.pkl\")\n#rf_model = joblib.load('RFmodel_train15skempiAB_FINAL.pkl')\ngbt_model = joblib.load('GBTmodel_train15skempiAB_FINAL.overf.pkl')\n#xgb_model = joblib.load('XGBmodel_train15skempiAB_FINAL.pkl')", "_____no_output_____" ], [ "#ridge_pred = ridge_model.predict(X)\nlasso_pred = lasso_model.predict(X)\nelasticnet_pred = elasticnet_model.predict(X)\nsvr_pred = svr_model.predict(X)\npoly_pred = poly_model.predict(X)\n#rf_pred = rf_model.predict(X)\ngbt_pred = gbt_model.predict(X)\n#xgb_pred = xgb_model.predict(X)", "_____no_output_____" ], [ "pred_stack = np.hstack([vih_original[[\"mutation-effect\",\"mCSM-AB_Pred\",\"Exptal\"]].values,\n                       lasso_pred.reshape((-1,1)),gbt_pred.reshape((-1,1)),svr_pred.reshape((-1,1)),poly_pred.reshape((-1,1))])\npred_data = pd.DataFrame(pred_stack,columns=[\"mutation-effect\",\"mCSM-AB_Pred\",\"Exptal\",\"Lasso_pred\",\"gbt_pred\",\"svr_pred\",\"poly_pred\"])\n# transform prediction score to relative Kd, as referred to in the paper\n#pred_data_binding = pred_data.applymap(deltaG_to_Kd)*100", "_____no_output_____" ], [ "pred_data[\"mean-pred\"] = pred_data.loc[:,[\"Lasso_pred\",\"gbt_pred\",\"svr_pred\"]].mean(axis=1)", "_____no_output_____" ], [ "pred_data", "_____no_output_____" ], [ "pred_data.loc[pred_data[\"mutation-effect\"]==\"increased\"]", "_____no_output_____" ], [ "pred_data.loc[(pred_data[\"mean-pred\"].abs() > 0.1)]", "_____no_output_____" ], [ "pred_data[\"True\"] = y_energy.values\npred_data_binding[\"True\"] = y_binding", "_____no_output_____" ], [ "#pred_data_converted.corr()\npred_data_binding.corr()", "_____no_output_____" ], [ "average_pred_binding = pred_data_binding.drop(\"True\",axis=1).loc[:,[\"gbt_pred\",\"elasticnet_pred\"]].mean(axis=1)\naverage_pred_energy = pred_data.drop(\"True\",axis=1).loc[:,[\"gbt_pred\",\"elasticnet_pred\"]].mean(axis=1)", "_____no_output_____" ], [ "r2score = r2_score(y_energy,average_pred_energy)\nrmse = mean_squared_error(y_energy,average_pred_energy)\n\nprint(\"R2 score:\", r2score)\nprint(\"RMSE score:\", np.sqrt(rmse))", "_____no_output_____" ], [ "np.corrcoef(y_binding,average_pred_binding)", "_____no_output_____" ], [ "# Corr mCSM-AB with converted mCSM AB data\nnp.corrcoef(y_binding,vih_reverse[\"pred_ddg2EC50\"])", "_____no_output_____" ], [ "# Corr mCSM-AB with converted VIH paper data\nnp.corrcoef(y_energy,vih_reverse[\"mCSM-AB_Pred\"])", "_____no_output_____" ], [ "# Corr FoldX feature alone\nnp.corrcoef(y_binding,VIH_final[\"dg_change\"].apply(deltaG_to_Kd)*100)\n", "_____no_output_____" ], [ "import seaborn as sns\nimport matplotlib.pyplot as plt", "_____no_output_____" ], [ "#rmse_test = np.round(np.sqrt(mean_squared_error(y_test, y_pred_test)), 3)\ndf_pred = pd.DataFrame({\"Predicted ddG(kcal/mol)\": pred_data[\"gbt_pred\"], \"Actual ddG(kcal/mol)\": y_energy.values})\npearsonr_test = round(df_pred.corr().iloc[0,1],3)\n\ng = sns.regplot(x=\"Actual ddG(kcal/mol)\", y=\"Predicted ddG(kcal/mol)\",data=df_pred)\nplt.title(\"Predicted vs Experimental ddG (Independent set: 123 complexes)\")\n\nplt.text(-2,3,\"pearsonr = %s\" %pearsonr_test)\n#plt.text(4.5,-0.5,\"RMSE = %s\" %rmse_test)\n\n#plt.savefig(\"RFmodel_300_testfit.png\",dpi=600)", "_____no_output_____" ], [ "PredictionError?", "_____no_output_____" ] ] ]
[ "code", "markdown", "code" ]
[ [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
d018153fce227adf0eb331d05d8b8a0bf258d633
3,305
ipynb
Jupyter Notebook
jupyter_notebook/sparse_tensor_hash_bucket.ipynb
LianShuaiLong/Codebook
fd67440d2de80b48aa90b9f7ea5d459baee0a6d8
[ "MIT" ]
null
null
null
jupyter_notebook/sparse_tensor_hash_bucket.ipynb
LianShuaiLong/Codebook
fd67440d2de80b48aa90b9f7ea5d459baee0a6d8
[ "MIT" ]
null
null
null
jupyter_notebook/sparse_tensor_hash_bucket.ipynb
LianShuaiLong/Codebook
fd67440d2de80b48aa90b9f7ea5d459baee0a6d8
[ "MIT" ]
null
null
null
25.620155
144
0.539183
[ [ [ "import tensorflow.compat.v1 as tf", "_____no_output_____" ], [ "xulie = '2624589463;76523725;287554729;276107047;4203937162;4078814928;1805417079;313935447;1171899098;2047572414'\ntmp = tf.strings.split([xulie],';')\ntmpValues = tf.string_to_number(tmp.values,out_type=tf.int64)\nv_new = tf.SparseTensor(tmp.indices, tmpValues, tmp.dense_shape)", "_____no_output_____" ], [ "sess=tf.Session()\nsess.run(v_new)\n#indices:非零元素的位置 values:非零元素的值 dense_shape:sparse_tensor的形状", "_____no_output_____" ], [ "sess = tf.Session()\ntf.compat.v1.disable_eager_execution()\nfeatures = {\n 'xulie': [[2624589463,76523725,287554729,276107047,4203937162,4078814928,1805417079,313935447,1171899098,2047572414]]#[[]]multi-hot\n #'xulie': [2624589463,76523725,287554729,276107047,4203937162,4078814928,1805417079,313935447,1171899098,2047572414]#[] one-hot\n}\ndepartment = tf.feature_column.categorical_column_with_hash_bucket('xulie', 4, dtype=tf.int64)\ndepartment = tf.feature_column.indicator_column(department)\n#组合特征列\ncolumns = [\n department\n]\ninputs = tf.feature_column.input_layer(features, columns)\n\n#初始化并运行\ninit = tf.global_variables_initializer()\nsess.run(tf.tables_initializer())\nsess.run(init)\n\nv=sess.run(inputs)\nprint(v)", "[[2. 5. 1. 2.]]\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code" ] ]
d0181c8bea885f49b6a0be6b3708205e7679909f
22,376
ipynb
Jupyter Notebook
notebooks/feature_extraction_with_datetime_index.ipynb
hoesler/tsfresh
cfee7a1e988da8cec155382cc16d12311c101c24
[ "MIT" ]
6,596
2016-10-26T13:05:43.000Z
2022-03-31T04:12:38.000Z
notebooks/feature_extraction_with_datetime_index.ipynb
hoesler/tsfresh
cfee7a1e988da8cec155382cc16d12311c101c24
[ "MIT" ]
849
2016-10-26T13:52:09.000Z
2022-03-11T14:34:12.000Z
notebooks/feature_extraction_with_datetime_index.ipynb
hoesler/tsfresh
cfee7a1e988da8cec155382cc16d12311c101c24
[ "MIT" ]
1,113
2016-10-27T19:23:54.000Z
2022-03-31T15:59:49.000Z
36.863262
340
0.402083
[ [ [ "# Example of extracting features from dataframes with Datetime indices\n\nAssuming that time-varying measurements are taken at regular intervals can be sufficient for many situations. However, for a large number of tasks it is important to take into account **when** a measurement is made. An example can be healthcare, where the interval between measurements of vital signs contains crucial information. \n\nTsfresh now supports calculator functions that use the index of the timeseries container in order to calculate the features. The only requirements for these function is that the index of the input dataframe is of type `pd.DatetimeIndex`. These functions are contained in the new class TimeBasedFCParameters.\n\nNote that the behaviour of all other functions is unaffected. The settings parameter of `extract_features()` can contain both index-dependent functions and 'regular' functions.", "_____no_output_____" ] ], [ [ "import pandas as pd\nfrom tsfresh.feature_extraction import extract_features\n# TimeBasedFCParameters contains all functions that use the Datetime index of the timeseries container\nfrom tsfresh.feature_extraction.settings import TimeBasedFCParameters ", "_____no_output_____" ] ], [ [ "# Build a time series container with Datetime indices\n\nLet's build a dataframe with a datetime index. The format must be with a `value` and a `kind` column, since each measurement has its own timestamp - i.e. measurements are not assumed to be simultaneous.", "_____no_output_____" ] ], [ [ "df = pd.DataFrame({\"id\": [\"a\", \"a\", \"a\", \"a\", \"b\", \"b\", \"b\", \"b\"], \n \"value\": [1, 2, 3, 1, 3, 1, 0, 8],\n \"kind\": [\"temperature\", \"temperature\", \"pressure\", \"pressure\",\n \"temperature\", \"temperature\", \"pressure\", \"pressure\"]},\n index=pd.DatetimeIndex(\n ['2019-03-01 10:04:00', '2019-03-01 10:50:00', '2019-03-02 00:00:00', '2019-03-02 09:04:59',\n '2019-03-02 23:54:12', '2019-03-03 08:13:04', '2019-03-04 08:00:00', '2019-03-04 08:01:00']\n ))\ndf = df.sort_index()\ndf", "_____no_output_____" ] ], [ [ "Right now `TimeBasedFCParameters` only contains `linear_trend_timewise`, which performs a calculation of a linear trend, but using the time difference in hours between measurements in order to perform the linear regression. As always, you can add your own functions in `tsfresh/feature_extraction/feature_calculators.py`.", "_____no_output_____" ] ], [ [ "settings_time = TimeBasedFCParameters()\nsettings_time", "_____no_output_____" ] ], [ [ "We extract the features as usual, specifying the column value, kind, and id.", "_____no_output_____" ] ], [ [ "X_tsfresh = extract_features(df, column_id=\"id\", column_value='value', column_kind='kind',\n default_fc_parameters=settings_time)\nX_tsfresh.head()", "Feature Extraction: 100%|██████████| 4/4 [00:00<00:00, 591.10it/s]\n" ] ], [ [ "The output looks exactly, like usual. 
If we compare it with the 'regular' `linear_trend` feature calculator, we can see that the intercept, p and R values are the same, as we'd expect – only the slope is now different.", "_____no_output_____" ] ], [ [ "settings_regular = {'linear_trend': [\n {'attr': 'pvalue'},\n {'attr': 'rvalue'},\n {'attr': 'intercept'},\n {'attr': 'slope'},\n {'attr': 'stderr'}\n]}", "_____no_output_____" ], [ "X_tsfresh = extract_features(df, column_id=\"id\", column_value='value', column_kind='kind',\n default_fc_parameters=settings_regular)\nX_tsfresh.head()", "Feature Extraction: 100%|██████████| 4/4 [00:00<00:00, 2517.59it/s]\n" ] ], [ [ "# Writing your own time-based feature calculators\n\nWriting your own time-based feature calculators is no different from usual. Only two new properties must be set using the `@set_property` decorator:\n\n1) `@set_property(\"input\", \"pd.Series\")` tells the function that the input of the function is a `pd.Series` rather than a numpy array. This allows the index to be used.\n2) `@set_property(\"index_type\", pd.DatetimeIndex)` tells the function that the input is a DatetimeIndex, allowing it to perform calculations based on time datatypes.\n\nFor example, if we want to write a function that calculates the time between the first and last measurement, it could look something like this:\n\n```python\n@set_property(\"input\", \"pd.Series\")\n@set_property(\"index_type\", pd.DatetimeIndex)\ndef timespan(x, param):\n ix = x.index\n\n # Get differences between the last timestamp and the first timestamp in seconds, then convert to hours.\n times_seconds = (ix[-1] - ix[0]).total_seconds()\n return times_seconds / float(3600)\n```", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ] ]
d01824008bce5be57f1cc5c978205902cdcf92a5
58,160
ipynb
Jupyter Notebook
Amenities_Niyati/Plots/Amazon_nearby_Amenities_Fitness_Ranking.ipynb
gvo34/BC_Project1
324d84aac1cc147f68382922c1ab8b73ac2c2070
[ "MIT" ]
1
2018-03-24T17:42:15.000Z
2018-03-24T17:42:15.000Z
Amenities_Niyati/Plots/Amazon_nearby_Amenities_Fitness_Ranking.ipynb
gvo34/BC_Project1
324d84aac1cc147f68382922c1ab8b73ac2c2070
[ "MIT" ]
15
2018-03-24T21:13:14.000Z
2022-03-11T23:18:33.000Z
Amenities_Niyati/Plots/Amazon_nearby_Amenities_Fitness_Ranking.ipynb
indranik/BC_Project1
0766a7fddebf0f7c0c19415a62990c9f06200169
[ "MIT" ]
null
null
null
124.806867
43,928
0.815509
[ [ [ "# Dependencies\nimport json\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns", "_____no_output_____" ] ], [ [ "### Gymnasium", "_____no_output_____" ] ], [ [ "gym = pd.read_csv('../Results/Gym_Rating.csv')\ndel gym['Unnamed: 0']\ngym.replace('NAN', value=0, inplace=True)\ngym = gym.rename(columns={'gym Total Count':'Total Count', 'Facility gym':'Gymnasium Facility'})\ngym['Rating']=gym['Rating'].astype(float)\ngym['Total Count']=gym['Total Count'].astype(int)\ngym.head()", "_____no_output_____" ], [ "new_gym = gym.groupby(['City Name', 'Site Name'])\ngym_count_df = pd.DataFrame(new_gym['Site Name'].value_counts())\ngym_count_df = gym_count_df.rename(columns={'Site Name': 'Total Count'})\ngym_count_df = gym_count_df.reset_index(level=1)\ngym_count_df = gym_count_df.reset_index(level=0)\ngym_count_df = gym_count_df.reset_index(drop=True)\ngym_count_df.head()", "_____no_output_____" ], [ "gym_count_final = gym_count_df.groupby(['City Name'])\ngym_count_final_df = pd.DataFrame(gym_count_final['Total Count'].median())\ngym_count_final_df = gym_count_final_df.sort_values(['Total Count'])[::-1]\ngym_count_final_df = gym_count_final_df.reset_index()\ngym_count_final_df['Type']='Gymnasium'\ngym_count_final_df = gym_count_final_df.drop([6])\ngym_count_final_df = gym_count_final_df.reset_index(drop=True)\ngym_count_final_df", "_____no_output_____" ], [ "print(\"========================================\")\nprint(\"==================TEST====================\")\n\nsns.factorplot(kind='bar',x='Type',y='Total Count',data=gym_count_final_df,\n hue='City Name', size=5, aspect=2.5)\n\ntotal_count = gym_count_final_df.groupby(['City Name'])['Total Count'].median().sort_values()[::-1].reset_index()\ntotal_count_df = pd.DataFrame(total_count)\nprint(total_count_df)\nranks_dict = {}\ny=1\nfor name in total_count_df['City Name']:\n ranks_dict[name] = y\n y=y+1\nprint(ranks_dict)\n\nplt.title('City Nearby Fitness Ranking', fontsize=20, fontweight='bold')\n\nplt.xlabel(' ', fontsize=15)\nplt.ylabel('Median Count', fontsize=15)\n\nplt.xticks(fontsize=12)\nplt.yticks(fontsize=12)\n\nnew_labels = ['#1 New York', '#2 Chicago', '#3 Boston', '#4 Washington DC', '#5 Los Angeles', '#6 Austin',\n '#7 Raleigh', '#8 Atlanta']\nplt.legend(new_labels, frameon=False, title='Rank',\n bbox_to_anchor=(.85, 1), loc=1, borderaxespad=0.)\n\n\nprint(\"========================================\")\nprint(\"==================END====================\")\n\nplt.savefig('Save_Figs/Fitness.png', bbox_inches='tight')\n\nplt.show()", "========================================\n==================TEST====================\n City Name Total Count\n0 New York 20.0\n1 Chicago 20.0\n2 Boston 17.0\n3 Washington DC 13.5\n4 Los Angeles 13.0\n5 Austin 7.5\n6 Raleigh 5.0\n7 Atlanta 5.0\n{'New York': 1, 'Chicago': 2, 'Boston': 3, 'Washington DC': 4, 'Los Angeles': 5, 'Austin': 6, 'Raleigh': 7, 'Atlanta': 8}\n========================================\n==================END====================\n" ] ] ]
[ "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
d01831124f94e536f9e03053df19e3292a73c42a
264,574
ipynb
Jupyter Notebook
project/code/yfinance-lstm.ipynb
cybertraining-dsc/su21-reu-361
defa6e635cbc957b391660842fe56775275332ba
[ "Apache-2.0" ]
1
2021-06-18T16:48:26.000Z
2021-06-18T16:48:26.000Z
project/code/yfinance-lstm.ipynb
cybertraining-dsc/su21-reu-361
defa6e635cbc957b391660842fe56775275332ba
[ "Apache-2.0" ]
2
2021-06-19T01:55:56.000Z
2021-06-19T21:54:28.000Z
project/code/yfinance-lstm.ipynb
cybertraining-dsc/su21-reu-361
defa6e635cbc957b391660842fe56775275332ba
[ "Apache-2.0" ]
9
2021-06-17T17:47:17.000Z
2022-03-19T00:24:57.000Z
409.557276
181,911
0.910876
[ [ [ "import yfinance as yf\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nfrom cloudmesh.common.StopWatch import StopWatch\nfrom tensorflow import keras\nfrom pandas.plotting import register_matplotlib_converters\nfrom sklearn.metrics import mean_squared_error\nimport pathlib\nfrom pathlib import Path", "_____no_output_____" ], [ "cryptoName = input('Please enter the name of the crypto to predict.\\nExamples include \"EOS-USD\", \"DOGE-USD\",\\n\"ETH-USD\", and \"BTC-USD\" without double quotes')\nprint(cryptoName+' selected')\n", "EOS-USD selected\n" ], [ "StopWatch.start(\"Overall time\")\n\n# Creating desktop path to save figures to the desktop\ndesktop = pathlib.Path.home() / 'Desktop'\ndesktop2 = str(Path(desktop))\nfullpath = desktop2 + \"\\\\\"+cryptoName+\"-prediction-model.png\"\nfullpath2 = desktop2 + \"\\\\\"+cryptoName+\"-prediction-model-zoomed.png\"\nfullpath3 = desktop2 + \"\\\\\"+cryptoName+\"-price.png\"\nfullpath4 = desktop2 + \"\\\\\"+cryptoName+\"-training-loss.png\"\npdfpath = desktop2 + \"\\\\\"+cryptoName+\"-prediction-model.pdf\"\npdfpath2 = desktop2 + \"\\\\\"+cryptoName+\"-prediction-model-zoomed.pdf\"\npdfpath3 = desktop2 + \"\\\\\"+cryptoName+\"-price.pdf\"\npdfpath4 = desktop2 + \"\\\\\"+cryptoName+\"-training-loss.pdf\"\n\n\nregister_matplotlib_converters()", "_____no_output_____" ], [ "ticker = yf.Ticker(cryptoName)\ndata = ticker.history(period = \"max\", interval = \"1d\")\n#print(data)\n# Sort the dataframe according to the date\ndata.sort_values('Date', inplace=True, ascending=True)\n\n# Print the dataframe top\ndata.head()\n", "_____no_output_____" ], [ "# Visualization of data. Plotting the price close.\nplt.figure(num=None, figsize=(7, 4), dpi=300, facecolor='w', edgecolor='k')\ndata['Close'].plot()\nplt.tight_layout()\nplt.grid()\nplt.ylabel('Close Price in USD')\nplt.xlabel('Date')\nplt.tight_layout()\n#plt.savefig(fullpath3, dpi=300, facecolor=\"#FFFFFF\")\nplt.savefig(pdfpath3, dpi=300)\nplt.show()", "_____no_output_____" ], [ "print(data.index[0])\nfirstDate = data.index[0]\nfirstDateFormatted = pd.to_datetime(data.index[0], utc=False)\nprint(firstDateFormatted)\ndate_time_obj = firstDateFormatted.to_pydatetime()\ntrueFirstDate = date_time_obj.strftime('%m/%d/%Y')\nprint(trueFirstDate)\n", "2017-11-09 00:00:00\n2017-11-09 00:00:00\n11/09/2017\n" ], [ "print(data.head())\n", " Open High Low Close Volume \\\nDate \n2017-11-09 308.644989 329.451996 307.056000 320.884003 893249984 \n2017-11-10 320.670990 324.717987 294.541992 299.252991 885985984 \n2017-11-11 298.585999 319.453003 298.191986 314.681000 842300992 \n2017-11-12 314.690002 319.153015 298.513000 307.907990 1613479936 \n2017-11-13 307.024994 328.415009 307.024994 316.716003 1041889984 \n\n Dividends Stock Splits \nDate \n2017-11-09 0 0 \n2017-11-10 0 0 \n2017-11-11 0 0 \n2017-11-12 0 0 \n2017-11-13 0 0 \n" ], [ "# Get Close data\ndf = data[['Close']].copy()\n# Split data into train and test\ntrain, test = df.iloc[0:-200], df.iloc[-200:len(df)]\n\nprint(len(train), len(test))", "1328 200\n" ], [ "train_max = train.max()\ntrain_min = train.min()\n\n# Normalize the dataframes\ntrain = (train - train_min)/(train_max - train_min)\ntest = (test - train_min)/(train_max - train_min)", "_____no_output_____" ], [ "def create_dataset(X, y, time_steps=1):\n Xs, ys = [], []\n for i in range(len(X) - time_steps):\n v = X.iloc[i:(i + time_steps)].values\n Xs.append(v)\n ys.append(y.iloc[i + time_steps])\n return np.array(Xs), np.array(ys)\n\n\ntime_steps = 10\n\nX_train, 
y_train = create_dataset(train, train.Close, time_steps)\nX_test, y_test = create_dataset(test, test.Close, time_steps)", "_____no_output_____" ], [ "StopWatch.start(\"Training time\")\n\nmodel = keras.Sequential()\nmodel.add(keras.layers.LSTM(250, input_shape=(X_train.shape[1], X_train.shape[2])))\nmodel.add(keras.layers.Dropout(0.2))\nmodel.add(keras.layers.Dense(1))\nmodel.compile(loss='mae', optimizer='adam')\nmodel.summary()\n\nhistory = model.fit(\n X_train, y_train,\n epochs=50,\n batch_size=32,\n shuffle=False\n)\n\nStopWatch.stop(\"Training time\")", "Model: \"sequential_4\"\n_________________________________________________________________\n Layer (type) Output Shape Param # \n=================================================================\n lstm_4 (LSTM) (None, 250) 252000 \n \n dropout_4 (Dropout) (None, 250) 0 \n \n dense_4 (Dense) (None, 1) 251 \n \n=================================================================\nTotal params: 252,251\nTrainable params: 252,251\nNon-trainable params: 0\n_________________________________________________________________\nEpoch 1/50\n42/42 [==============================] - 3s 19ms/step - loss: 0.0541\nEpoch 2/50\n42/42 [==============================] - 1s 17ms/step - loss: 0.0505\nEpoch 3/50\n42/42 [==============================] - 1s 17ms/step - loss: 0.0699\nEpoch 4/50\n42/42 [==============================] - 1s 17ms/step - loss: 0.0504\nEpoch 5/50\n42/42 [==============================] - 1s 17ms/step - loss: 0.0421\nEpoch 6/50\n42/42 [==============================] - 1s 17ms/step - loss: 0.0398\nEpoch 7/50\n42/42 [==============================] - 1s 17ms/step - loss: 0.0311\nEpoch 8/50\n42/42 [==============================] - 1s 17ms/step - loss: 0.0269\nEpoch 9/50\n42/42 [==============================] - 1s 17ms/step - loss: 0.0280\nEpoch 10/50\n42/42 [==============================] - 1s 17ms/step - loss: 0.0364\nEpoch 11/50\n42/42 [==============================] - 1s 16ms/step - loss: 0.0272\nEpoch 12/50\n42/42 [==============================] - 1s 17ms/step - loss: 0.0261\nEpoch 13/50\n42/42 [==============================] - 1s 17ms/step - loss: 0.0251\nEpoch 14/50\n42/42 [==============================] - 1s 18ms/step - loss: 0.0297\nEpoch 15/50\n42/42 [==============================] - 1s 17ms/step - loss: 0.0275\nEpoch 16/50\n42/42 [==============================] - 1s 17ms/step - loss: 0.0289\nEpoch 17/50\n42/42 [==============================] - 1s 18ms/step - loss: 0.0242\nEpoch 18/50\n42/42 [==============================] - 1s 18ms/step - loss: 0.0206\nEpoch 19/50\n42/42 [==============================] - 1s 17ms/step - loss: 0.0211\nEpoch 20/50\n42/42 [==============================] - 1s 17ms/step - loss: 0.0242\nEpoch 21/50\n42/42 [==============================] - 1s 17ms/step - loss: 0.0215\nEpoch 22/50\n42/42 [==============================] - 1s 17ms/step - loss: 0.0203\nEpoch 23/50\n42/42 [==============================] - 1s 17ms/step - loss: 0.0213\nEpoch 24/50\n42/42 [==============================] - 1s 17ms/step - loss: 0.0216\nEpoch 25/50\n42/42 [==============================] - 1s 17ms/step - loss: 0.0207\nEpoch 26/50\n42/42 [==============================] - 1s 17ms/step - loss: 0.0206\nEpoch 27/50\n42/42 [==============================] - 1s 17ms/step - loss: 0.0209\nEpoch 28/50\n42/42 [==============================] - 1s 17ms/step - loss: 0.0197\nEpoch 29/50\n42/42 [==============================] - 1s 17ms/step - loss: 0.0191\nEpoch 30/50\n42/42 [==============================] - 1s 17ms/step - 
loss: 0.0183\nEpoch 31/50\n42/42 [==============================] - 1s 18ms/step - loss: 0.0163\nEpoch 32/50\n42/42 [==============================] - 1s 18ms/step - loss: 0.0164\nEpoch 33/50\n42/42 [==============================] - 1s 18ms/step - loss: 0.0151\nEpoch 34/50\n42/42 [==============================] - 1s 17ms/step - loss: 0.0166\nEpoch 35/50\n42/42 [==============================] - 1s 18ms/step - loss: 0.0160\nEpoch 36/50\n42/42 [==============================] - 1s 18ms/step - loss: 0.0157\nEpoch 37/50\n42/42 [==============================] - 1s 17ms/step - loss: 0.0162\nEpoch 38/50\n42/42 [==============================] - 1s 17ms/step - loss: 0.0167\nEpoch 39/50\n42/42 [==============================] - 1s 17ms/step - loss: 0.0178\nEpoch 40/50\n42/42 [==============================] - 1s 17ms/step - loss: 0.0162\nEpoch 41/50\n42/42 [==============================] - 1s 17ms/step - loss: 0.0192\nEpoch 42/50\n42/42 [==============================] - 1s 17ms/step - loss: 0.0178\nEpoch 43/50\n42/42 [==============================] - 1s 17ms/step - loss: 0.0181\nEpoch 44/50\n42/42 [==============================] - 1s 17ms/step - loss: 0.0170\nEpoch 45/50\n42/42 [==============================] - 1s 17ms/step - loss: 0.0154\nEpoch 46/50\n42/42 [==============================] - 1s 18ms/step - loss: 0.0161\nEpoch 47/50\n42/42 [==============================] - 1s 17ms/step - loss: 0.0163\nEpoch 48/50\n42/42 [==============================] - 1s 17ms/step - loss: 0.0168\nEpoch 49/50\n42/42 [==============================] - 1s 17ms/step - loss: 0.0167\nEpoch 50/50\n42/42 [==============================] - 1s 17ms/step - loss: 0.0154\n" ], [ "# Plotting the loss\nplt.plot(history.history['loss'], label='train')\nplt.legend();\nplt.ylabel('Model Loss')\nplt.xlabel('Number of Epochs')\nplt.savefig(pdfpath4, dpi=300)\nplt.show()", "_____no_output_____" ], [ "StopWatch.start(\"Prediction time\")\n\ny_pred = model.predict(X_test)\n\nStopWatch.stop(\"Prediction time\")\n\n# Rescale the data back to the original scale\ny_test = y_test*(train_max[0] - train_min[0]) + train_min[0]\ny_pred = y_pred*(train_max[0] - train_min[0]) + train_min[0]\ny_train = y_train*(train_max[0] - train_min[0]) + train_min[0]\n", "_____no_output_____" ], [ "# Plotting the results\nplt.plot(np.arange(len(y_train), len(y_train) + len(y_test)), y_test.flatten(), marker='.', markersize=1, label=\"true\")\nplt.plot(np.arange(len(y_train), len(y_train) + len(y_test)), y_pred.flatten(), 'r', marker='.', markersize=1, label=\"prediction\")\nplt.plot(np.arange(0, len(y_train)), y_train.flatten(), 'g', marker='.', markersize=1, label=\"history\")\nplt.ylabel('Close Price in USD')\nplt.xlabel('Days Since '+trueFirstDate)\nleg = plt.legend()\nleg_lines = leg.get_lines()\nleg_texts = leg.get_texts()\nplt.setp(leg_lines, linewidth=1)\nplt.setp(leg_texts, fontsize='x-large')\nplt.savefig(pdfpath, dpi=300)\n#doge plt.axis([1350, 1450, 0.14, 0.35])\n#btc plt.axis([2490, 2650, 34000, 73000])\n#eth plt.axis([1370, 1490, 2200, 5800])\nplt.axis([1370, 1490, 2200, 5800])\nplt.savefig(pdfpath2, dpi=300)\nplt.show()\n", "_____no_output_____" ], [ "print(y_test.shape)\nprint(y_pred.shape)\n", "(190,)\n(190, 1)\n" ], [ "## Outputs error in United States Dollars\nmean_squared_error(y_test, y_pred)\n\n## Create a table of the error against the number of epochs", "_____no_output_____" ], [ "StopWatch.stop(\"Overall time\")\nStopWatch.benchmark()", 
"\n+------------------+--------------------------------------------------------------------------------+\n| Attribute | Value |\n|------------------+--------------------------------------------------------------------------------|\n| cpu | |\n| cpu_cores | 6 |\n| cpu_count | 12 |\n| cpu_threads | 12 |\n| frequency | scpufreq(current=3600.0, min=0.0, max=3600.0) |\n| mem.available | 7.4 GiB |\n| mem.free | 7.4 GiB |\n| mem.percent | 53.5 % |\n| mem.total | 16.0 GiB |\n| mem.used | 8.5 GiB |\n| platform.version | ('10', '10.0.19043', 'SP0', 'Multiprocessor Free') |\n| python | 3.9.5 (tags/v3.9.5:0a7dcbd, May 3 2021, 17:27:52) [MSC v.1928 64 bit (AMD64)] |\n| python.pip | 21.1.3 |\n| python.version | 3.9.5 |\n| sys.platform | win32 |\n| uname.machine | AMD64 |\n| uname.node | Sledgehammer |\n| uname.processor | AMD64 Family 23 Model 113 Stepping 0, AuthenticAMD |\n| uname.release | 10 |\n| uname.system | Windows |\n| uname.version | 10.0.19043 |\n| user | Sledgehammer |\n+------------------+--------------------------------------------------------------------------------+\n\n+-----------------+----------+--------+---------+---------------------+-------+-------+--------------+--------------+---------+----------------------------------------------------+\n| Name | Status | Time | Sum | Start | tag | msg | Node | User | OS | Version |\n|-----------------+----------+--------+---------+---------------------+-------+-------+--------------+--------------+---------+----------------------------------------------------|\n| Overall time | ok | 26.97 | 350.063 | 2021-07-26 19:24:54 | | | Sledgehammer | Sledgehammer | Windows | ('10', '10.0.19043', 'SP0', 'Multiprocessor Free') |\n| Training time | ok | 25.498 | 325.43 | 2021-07-26 19:24:55 | | | Sledgehammer | Sledgehammer | Windows | ('10', '10.0.19043', 'SP0', 'Multiprocessor Free') |\n| Prediction time | ok | 0.227 | 3.67 | 2021-07-26 19:25:21 | | | Sledgehammer | Sledgehammer | Windows | ('10', '10.0.19043', 'SP0', 'Multiprocessor Free') |\n+-----------------+----------+--------+---------+---------------------+-------+-------+--------------+--------------+---------+----------------------------------------------------+\n\n# csv,timer,status,time,sum,start,tag,msg,uname.node,user,uname.system,platform.version\n# csv,Overall time,ok,26.97,350.063,2021-07-26 19:24:54,,None,Sledgehammer,Sledgehammer,Windows,('10', '10.0.19043', 'SP0', 'Multiprocessor Free')\n# csv,Training time,ok,25.498,325.43,2021-07-26 19:24:55,,None,Sledgehammer,Sledgehammer,Windows,('10', '10.0.19043', 'SP0', 'Multiprocessor Free')\n# csv,Prediction time,ok,0.227,3.67,2021-07-26 19:25:21,,None,Sledgehammer,Sledgehammer,Windows,('10', '10.0.19043', 'SP0', 'Multiprocessor Free')\n\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
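Editor's note (an illustrative sketch, not part of the records): in the LSTM record above, the last metric cell is labelled "Outputs error in United States Dollars", but mean_squared_error is actually in squared dollars. A small helper that reports the error back in price units - assuming y_test and y_pred as produced in that record - could be:

    import numpy as np
    from sklearn.metrics import mean_squared_error

    def rmse_usd(y_true, y_hat):
        # root mean squared error, back on the original USD scale
        return float(np.sqrt(mean_squared_error(y_true, y_hat)))

    print(rmse_usd([3.0, 1.0], [2.5, 1.5]))  # both errors are 0.5 -> RMSE 0.5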
d018401f5193b7c6439701d9e807b4a5e174ca4e
268,760
ipynb
Jupyter Notebook
mnist2.ipynb
howardlin02/My-first-repo
12a798cef1c821aedf0139f8f11ec287a99664d5
[ "MIT" ]
null
null
null
mnist2.ipynb
howardlin02/My-first-repo
12a798cef1c821aedf0139f8f11ec287a99664d5
[ "MIT" ]
null
null
null
mnist2.ipynb
howardlin02/My-first-repo
12a798cef1c821aedf0139f8f11ec287a99664d5
[ "MIT" ]
null
null
null
423.244094
17,642
0.923787
[ [ [ "%pylab inline", "Populating the interactive namespace from numpy and matplotlib\n" ], [ "import os\nimport urllib\ndataset = 'mnist.pkl.gz'\ndef reporthook(a,b,c):\n print \"\\rdownloading: %5.1f%%\"%(a*b*100.0/c),\n \nif not os.path.isfile(dataset):\n origin = \"https://github.com/mnielsen/neural-networks-and-deep-learning/raw/master/data/mnist.pkl.gz\"\n print('Downloading data from %s' % origin)\n urllib.urlretrieve(origin, dataset, reporthook=reporthook)", "Downloading data from https://github.com/mnielsen/neural-networks-and-deep-learning/raw/master/data/mnist.pkl.gz\ndownloading: 100.0%\n" ], [ "import gzip\nimport pickle\nwith gzip.open(dataset, 'rb') as f:\n train_set, valid_set, test_set = pickle.load(f)", "_____no_output_____" ], [ "print \"train_set\", train_set[0].shape, train_set[1].shape\nprint \"valid_set\", valid_set[0].shape, valid_set[1].shape\nprint \"test_set\", test_set[0].shape, test_set[1].shape", "train_set (50000L, 784L) (50000L,)\nvalid_set (10000L, 784L) (10000L,)\ntest_set (10000L, 784L) (10000L,)\n" ], [ "imshow(train_set[0][0].reshape((28, 28)), cmap=\"gray\")", "_____no_output_____" ], [ "def show(x, i=[0]):\n plt.figure(i[0])\n imshow(x.reshape((28,28)), cmap=\"gray\")\n i[0]+=1\nfor i in range(5):\n print train_set[1][i]\n show(train_set[0][i])", "5\n0\n4\n1\n9\n" ], [ "W = np.random.uniform(low=-1, high=1, size=(28*28,10))\nb = np.random.uniform(low=-1, high=1, size=10)", "_____no_output_____" ], [ "x = train_set[0][0]\ny = train_set[1][0]", "_____no_output_____" ], [ "show(x)\ny", "_____no_output_____" ], [ "Pr = exp(dot(x, W)+b)\nPr.shape", "_____no_output_____" ], [ "Pr = Pr/Pr.sum()\nprint Pr", "[ 8.58575109e-03 5.37878981e-04 1.02362391e-04 2.41962903e-04\n 6.40651239e-04 8.74474388e-01 4.59673412e-02 5.40416346e-02\n 1.48726624e-02 5.35367136e-04]\n" ], [ "Pr.argmax()", "_____no_output_____" ], [ "loss = -log(Pr[y])\nloss", "_____no_output_____" ], [ "gradb = Pr.copy()\ngradb[y] -= 1\nprint gradb", "[ 8.58575109e-03 5.37878981e-04 1.02362391e-04 2.41962903e-04\n 6.40651239e-04 -1.25525612e-01 4.59673412e-02 5.40416346e-02\n 1.48726624e-02 5.35367136e-04]\n" ], [ "print Pr.shape, x.shape, W.shape\ngradW = dot(x.reshape(784,1), Pr.reshape(1,10), )\ngradW[:, y] -= x", "(10L,) (784L,) (784L, 10L)\n" ], [ "W -= 0.1 * gradW\nb -= 0.1 * gradb", "_____no_output_____" ], [ "def compute_Pr(x):\n Pr = exp(dot(x, W)+b)\n return Pr/Pr.sum(axis=1, keepdims=True)\ndef compute_accuracy(Pr, y):\n return mean(Pr.argmax(axis=1)==y)", "_____no_output_____" ], [ "W = np.random.uniform(low=-1, high=1, size=(28*28,10))\nb = np.random.uniform(low=-1, high=1, size=10)\nscore = 0\nN=50000*20\nd = 0.001\nlearning_rate = 1e-2\nfor i in xrange(N):\n if i%50000==0:\n print i, \"%5.3f%%\"%(score*100)\n x = train_set[0][i%50000]\n y = train_set[1][i%50000]\n Pr = exp(dot(x, W)+b)\n Pr = Pr/Pr.sum()\n loss = -log(Pr[y])\n score *=(1-d)\n if Pr.argmax() == y:\n score += d\n gradb = Pr.copy()\n gradb[y] -= 1\n gradW = dot(x.reshape(784,1), Pr.reshape(1,10), )\n gradW[:, y] -= x\n W -= learning_rate * gradW\n b -= learning_rate * gradb", "0 0.000%\n50000 87.886%\n100000 89.619%\n150000 90.321%\n200000 90.828%\n250000 91.035%\n300000 91.382%\n350000 91.712%\n400000 91.801%\n450000 91.821%\n500000 91.854%\n550000 91.922%\n600000 92.013%\n650000 92.168%\n700000 92.267%\n750000 92.299%\n800000 92.334%\n850000 92.341%\n900000 92.359%\n950000 92.307%\n" ], [ "x = test_set[0][:10]\ny = test_set[1][:10]\nPr = compute_Pr(x)\nprint Pr.argmax(axis=1)\nprint y\nfor i in range(10):\n 
show(x[i])", "[7 2 1 0 4 1 4 9 6 9]\n[7 2 1 0 4 1 4 9 5 9]\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
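Editor's note (an illustrative sketch, not part of the records): the MNIST record above updates W and b one example at a time, in Python 2. The same softmax-regression gradient can be written as a vectorized Python 3 mini-batch step; the names below are illustrative, not the record author's code:

    import numpy as np

    def softmax_grad_step(W, b, X, y, lr=0.01):
        # X: (n, 784) batch of flattened images, y: (n,) integer labels in 0..9
        n = X.shape[0]
        logits = X @ W + b                            # (n, 10)
        logits -= logits.max(axis=1, keepdims=True)   # numerical stability
        Pr = np.exp(logits)
        Pr /= Pr.sum(axis=1, keepdims=True)           # row-wise softmax
        Pr[np.arange(n), y] -= 1.0                    # dL/dlogits = Pr - one_hot(y)
        W -= lr * (X.T @ Pr) / n                      # average gradient over the batch
        b -= lr * Pr.sum(axis=0) / n
        return W, b

This reproduces the record's per-example updates (gradb = Pr with 1 subtracted at the true label; gradW = x * Pr with x subtracted in column y), just averaged over a batch.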
d018584c3893613fcf5beb14bfa444e72d397415
344,765
ipynb
Jupyter Notebook
TalkingData+Click+Fraud+.ipynb
gyanadata/TalkingData-Fraudulent-Click-Prediction
a8126c21b7d7c277c8c771002622e2dab9693c08
[ "MIT" ]
null
null
null
TalkingData+Click+Fraud+.ipynb
gyanadata/TalkingData-Fraudulent-Click-Prediction
a8126c21b7d7c277c8c771002622e2dab9693c08
[ "MIT" ]
null
null
null
TalkingData+Click+Fraud+.ipynb
gyanadata/TalkingData-Fraudulent-Click-Prediction
a8126c21b7d7c277c8c771002622e2dab9693c08
[ "MIT" ]
null
null
null
79.42064
44,632
0.750708
[ [ [ "# TalkingData: Fraudulent Click Prediction\n\n\n\n\n", "_____no_output_____" ], [ "In this notebook, we will apply various boosting algorithms to solve an interesting classification problem from the domain of 'digital fraud'.\n\nThe analysis is divided into the following sections:\n- Understanding the business problem\n- Understanding and exploring the data\n- Feature engineering: Creating new features\n- Model building and evaluation: AdaBoost\n- Model building and evaluation: Gradient Boosting\n- Model building and evaluation: XGBoost\n", "_____no_output_____" ], [ "## Understanding the Business Problem\n\n<a href=\"https://www.talkingdata.com/\">TalkingData</a> is a Chinese big data company, and one of their areas of expertise is mobile advertisements.\n\nIn mobile advertisements, **click fraud** is a major source of losses. Click fraud is the practice of repeatedly clicking on an advertisement hosted on a website with the intention of generating revenue for the host website or draining revenue from the advertiser.\n\nIn this case, TalkingData happens to be serving the advertisers (their clients). TalkingData covers a whopping **approx. 70% of the active mobile devices in China**, and about 90% of the clicks it sees are potentially fraudulent (i.e. the user is actually not going to download the app after clicking).\n\nYou can imagine the amount of money they can help clients save if they are able to predict whether a given click is fraudulent (or equivalently, whether a given click will result in a download). \n\nTheir current approach to solving this problem is a blacklist of IP addresses - those IPs which produce lots of clicks, but never install any apps. Now, they want to try some advanced techniques to predict the probability of a click being genuine or fraudulent.\n\nIn this problem, we will use the features associated with clicks, such as IP address, operating system, device type, time of click etc. to predict the probability of a click being fraudulent.\n\nThey have released <a href=\"https://www.kaggle.com/c/talkingdata-adtracking-fraud-detection\">the problem on Kaggle here</a>.", "_____no_output_____" ], [ "## Understanding and Exploring the Data\n\nThe data contains observations of about 240 million clicks, and whether a given click resulted in a download or not (1/0). \n\nOn Kaggle, the data is split into train.csv and train_sample.csv (100,000 observations). 
We'll use the smaller train_sample.csv in this notebook for speed, though while training the model for Kaggle submissions, the full training data will obviously produce better results.\n\nThe detailed data dictionary is mentioned here:\n- ```ip```: IP address of the click.\n- ```app```: app id for marketing.\n- ```device```: device type id of user mobile phone (e.g., iphone 6 plus, iphone 7, huawei mate 7, etc.)\n- ```os```: os version id of user mobile phone\n- ```channel```: channel id of mobile ad publisher\n- ```click_time```: timestamp of click (UTC)\n- ```attributed_time```: if the user downloads the app after clicking an ad, this is the time of the app download\n- ```is_attributed```: the target that is to be predicted, indicating the app was downloaded\n\nLet's try finding some useful trends in the data.", "_____no_output_____" ] ], [ [ "import numpy as np \nimport pandas as pd \nimport sklearn\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.model_selection import KFold\nfrom sklearn.model_selection import GridSearchCV\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.preprocessing import LabelEncoder\nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn.ensemble import AdaBoostClassifier\nfrom sklearn.ensemble import GradientBoostingClassifier\nfrom sklearn import metrics\n\nimport xgboost as xgb\nfrom xgboost import XGBClassifier\nfrom xgboost import plot_importance\nimport gc # for deleting unused variables\n%matplotlib inline\n\nimport os\nimport warnings\nwarnings.filterwarnings('ignore')", "_____no_output_____" ] ], [ [ "#### Reading the Data 

\nThe code below reads the train_sample.csv file if you set testing = True, else reads the full train.csv file. You can read the sample while tuning the model etc., and then run the model on the full data once done.\n\n#### Important Note: Save memory when the data is huge\n\nSince the training data is huge, the program will be quite slow if you don't consciously follow some best practices to save memory. This notebook demonstrates some of those practices. 
", "_____no_output_____" ] ], [ [ "# reading training data\n\n# specify column dtypes to save memory (by default pandas reads some columns as floats)\ndtypes = {\n 'ip' : 'uint16',\n 'app' : 'uint16',\n 'device' : 'uint16',\n 'os' : 'uint16',\n 'channel' : 'uint16',\n 'is_attributed' : 'uint8',\n 'click_id' : 'uint32' # note that click_id is only in test data, not training data\n }\n\n# read training_sample.csv for quick testing/debug, else read the full train.csv\ntesting = True\nif testing:\n train_path = \"train_sample.csv\"\n skiprows = None\n nrows = None\n colnames=['ip','app','device','os', 'channel', 'click_time', 'is_attributed']\nelse:\n train_path = \"train.csv\"\n skiprows = range(1, 144903891)\n nrows = 10000000\n colnames=['ip','app','device','os', 'channel', 'click_time', 'is_attributed']\n\n# read training data\ntrain_sample = pd.read_csv(train_path, skiprows=skiprows, nrows=nrows, dtype=dtypes, usecols=colnames)\n", "_____no_output_____" ], [ "# length of training data\nlen(train_sample.index)", "_____no_output_____" ], [ "# Display memory consumed by each column\nprint(train_sample.memory_usage())", "Index 128\nip 200000\napp 200000\ndevice 200000\nos 200000\nchannel 200000\nclick_time 800000\nis_attributed 100000\ndtype: int64\n" ], [ "# space used by training data\nprint('Training dataset uses {0} MB'.format(train_sample.memory_usage().sum()/1024**2))", "Training dataset uses 1.812103271484375 MB\n" ], [ "# training data top rows\ntrain_sample.head()", "_____no_output_____" ] ], [ [ "### Exploring the Data - Univariate Analysis\n", "_____no_output_____" ], [ "Let's now understand and explore the data. Let's start with the size and data types of the train_sample data.", "_____no_output_____" ] ], [ [ "# look at non-null values, number of entries etc.\n# there are no missing values\ntrain_sample.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 100000 entries, 0 to 99999\nData columns (total 7 columns):\nip 100000 non-null uint16\napp 100000 non-null uint16\ndevice 100000 non-null uint16\nos 100000 non-null uint16\nchannel 100000 non-null uint16\nclick_time 100000 non-null object\nis_attributed 100000 non-null uint8\ndtypes: object(1), uint16(5), uint8(1)\nmemory usage: 1.8+ MB\n" ], [ "# Basic exploratory analysis 

\n\n# Number of unique values in each column\ndef fraction_unique(x):\n return len(train_sample[x].unique())\n\nnumber_unique_vals = {x: fraction_unique(x) for x in train_sample.columns}\nnumber_unique_vals", "_____no_output_____" ], [ "# All columns apart from click time are originally int type, \n# though note that they are all actually categorical \ntrain_sample.dtypes", "_____no_output_____" ] ], [ [ "There are certain 'apps' which have a very high number of instances/rows (each row is a click). The plot below shows this. 
", "_____no_output_____" ] ], [ [ "# # distribution of 'app' \n# # some 'apps' have a disproportionately high number of clicks (>15k), and some are very rare (3-4)\nplt.figure(figsize=(14, 8))\nsns.countplot(x=\"app\", data=train_sample)", "_____no_output_____" ], [ "# # distribution of 'device' \n# # this is expected because a few popular devices are used heavily\nplt.figure(figsize=(14, 8))\nsns.countplot(x=\"device\", data=train_sample)", "_____no_output_____" ], [ "# # channel: various channels get clicks in comparable quantities\nplt.figure(figsize=(14, 8))\nsns.countplot(x=\"channel\", data=train_sample)", "_____no_output_____" ], [ "# # os: there are a couple of common OSes (android and ios?), though some are rare and can indicate suspicion \nplt.figure(figsize=(14, 8))\nsns.countplot(x=\"os\", data=train_sample)", "_____no_output_____" ] ], [ [ "Let's now look at the distribution of the target variable 'is_attributed'.", "_____no_output_____" ] ], [ [ "# # target variable distribution\n100*(train_sample['is_attributed'].astype('object').value_counts()/len(train_sample.index))", "_____no_output_____" ] ], [ [ "Only **about 0.2% of clicks result in a download** (the rest are potentially fraudulent), which is expected in a fraud detection problem. Such high class imbalance is probably going to be the toughest challenge of this problem.", "_____no_output_____" ], [ "### Exploring the Data - Segmented Univariate Analysis\n\nLet's now look at how the target variable varies with the various predictors.", "_____no_output_____" ] ], [ [ "# plot the average of 'is_attributed', or 'download rate'\n# with app (clearly this is non-readable)\napp_target = train_sample.groupby('app').is_attributed.agg(['mean', 'count'])\napp_target", "_____no_output_____" ] ], [ [ "This is clearly non-readable, so let's first get rid of all the apps that are very rare (say, those whose click counts fall below the 80th percentile) and plot the rest.", "_____no_output_____" ] ], [ [ "frequent_apps = train_sample.groupby('app').size().reset_index(name='count')\nfrequent_apps = frequent_apps[frequent_apps['count']>frequent_apps['count'].quantile(0.80)]\nfrequent_apps = frequent_apps.merge(train_sample, on='app', how='inner')\nfrequent_apps.head()", "_____no_output_____" ], [ "plt.figure(figsize=(10,10))\nsns.countplot(y=\"app\", hue=\"is_attributed\", data=frequent_apps);", "_____no_output_____" ] ], [ [ "You can do lots of other interesting analyses with the existing features. For now, let's create some new features which will probably improve the model.", "_____no_output_____" ], [ "## Feature Engineering", "_____no_output_____" ], [ "Let's now derive some new features from the existing ones. 
There are a number of features one can extract from ```click_time``` itself, and by grouping combinations of IP with other features.", "_____no_output_____" ], [ "### Datetime Based Features\n", "_____no_output_____" ] ], [ [ "# Creating datetime variables\n# takes in a df, adds date/time based columns to it, and returns the modified df\ndef timeFeatures(df):\n # Derive new features using the click_time column\n df['datetime'] = pd.to_datetime(df['click_time'])\n df['day_of_week'] = df['datetime'].dt.dayofweek\n df[\"day_of_year\"] = df[\"datetime\"].dt.dayofyear\n df[\"month\"] = df[\"datetime\"].dt.month\n df[\"hour\"] = df[\"datetime\"].dt.hour\n return df", "_____no_output_____" ], [ "# creating new datetime variables and dropping the old ones\ntrain_sample = timeFeatures(train_sample)\ntrain_sample.drop(['click_time', 'datetime'], axis=1, inplace=True)\ntrain_sample.head()", "_____no_output_____" ], [ "# datatypes\n# note that by default the new datetime variables are int64\ntrain_sample.dtypes", "_____no_output_____" ], [ "# memory used by training data\nprint('Training dataset uses {0} MB'.format(train_sample.memory_usage().sum()/1024**2))", "Training dataset uses 4.100921630859375 MB\n" ], [ "# let's convert the variables back to a lower dtype again\nint_vars = ['app', 'device', 'os', 'channel', 'day_of_week','day_of_year', 'month', 'hour']\ntrain_sample[int_vars] = train_sample[int_vars].astype('uint16')", "_____no_output_____" ], [ "train_sample.dtypes", "_____no_output_____" ], [ "# space used by training data\nprint('Training dataset uses {0} MB'.format(train_sample.memory_usage().sum()/1024**2))", "Training dataset uses 1.812103271484375 MB\n" ] ], [ [ "### IP Grouping Based Features", "_____no_output_____" ], [ "Let's now create some important features by grouping IP addresses with features such as os, channel, hour, day etc. The count of each IP address will also be a feature.\n\nNote that though we are deriving new features by grouping IP addresses, using the IP address itself as a feature is not a good idea. This is because (in the test data) if a new IP address is seen, the model will see a new 'category' and will not be able to make predictions (IP is a categorical variable, it has just been encoded with numbers).", "_____no_output_____" ] ], [ [ "# number of clicks per IP address\n# note that we are explicitly asking pandas to re-encode the aggregated features \n# as 'int16' to save memory\nip_count = train_sample.groupby('ip').size().reset_index(name='ip_count').astype('int16')\nip_count.head()", "_____no_output_____" ] ], [ [ "We can now merge this dataframe with the original training df. Similarly, we can create combinations of various features such as ip_day_hour (count of ip-day-hour combinations), ip_hour_channel, ip_hour_app, etc. 
\n\nThe following function takes in a dataframe and creates these features.", "_____no_output_____" ] ], [ [ "# creates groupings of IP addresses with other features and appends the new features to the df\ndef grouped_features(df):\n # ip_count\n ip_count = df.groupby('ip').size().reset_index(name='ip_count').astype('uint16')\n ip_day_hour = df.groupby(['ip', 'day_of_week', 'hour']).size().reset_index(name='ip_day_hour').astype('uint16')\n ip_hour_channel = df[['ip', 'hour', 'channel']].groupby(['ip', 'hour', 'channel']).size().reset_index(name='ip_hour_channel').astype('uint16')\n ip_hour_os = df.groupby(['ip', 'hour', 'os']).channel.count().reset_index(name='ip_hour_os').astype('uint16')\n ip_hour_app = df.groupby(['ip', 'hour', 'app']).channel.count().reset_index(name='ip_hour_app').astype('uint16')\n ip_hour_device = df.groupby(['ip', 'hour', 'device']).channel.count().reset_index(name='ip_hour_device').astype('uint16')\n \n # merge the new aggregated features with the df\n df = pd.merge(df, ip_count, on='ip', how='left')\n del ip_count\n df = pd.merge(df, ip_day_hour, on=['ip', 'day_of_week', 'hour'], how='left')\n del ip_day_hour\n df = pd.merge(df, ip_hour_channel, on=['ip', 'hour', 'channel'], how='left')\n del ip_hour_channel\n df = pd.merge(df, ip_hour_os, on=['ip', 'hour', 'os'], how='left')\n del ip_hour_os\n df = pd.merge(df, ip_hour_app, on=['ip', 'hour', 'app'], how='left')\n del ip_hour_app\n df = pd.merge(df, ip_hour_device, on=['ip', 'hour', 'device'], how='left')\n del ip_hour_device\n \n return df", "_____no_output_____" ], [ "train_sample = grouped_features(train_sample)", "_____no_output_____" ], [ "train_sample.head()", "_____no_output_____" ], [ "print('Training dataset uses {0} MB'.format(train_sample.memory_usage().sum()/1024**2))", "Training dataset uses 3.719329833984375 MB\n" ], [ "# garbage collect (unused) object\ngc.collect()", "_____no_output_____" ] ], [ [ "## Modelling\n\nLet's now build models to predict the variable ```is_attributed``` (downloaded). 
We'll try several variants of boosting (AdaBoost, gradient boosting and XGBoost), tune the hyperparameters of each model and choose the one which gives the best performance.\n\nIn the original Kaggle competition, the metric for model evaluation is **area under the ROC curve**.\n", "_____no_output_____" ] ], [ [ "# create x and y train\nX = train_sample.drop('is_attributed', axis=1)\ny = train_sample[['is_attributed']]\n\n# split data into train and test/validation sets\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=101)\nprint(X_train.shape)\nprint(y_train.shape)\nprint(X_test.shape)\nprint(y_test.shape)", "(80000, 15)\n(80000, 1)\n(20000, 15)\n(20000, 1)\n" ], [ "# check the average download rates in train and test data, should be comparable\nprint(y_train.mean())\nprint(y_test.mean())", "is_attributed 0.002275\ndtype: float64\nis_attributed 0.00225\ndtype: float64\n" ] ], [ [ "### AdaBoost", "_____no_output_____" ] ], [ [ "# adaboost classifier with max 600 decision trees of depth=2\n# learning_rate/shrinkage=1.5\n\n# base estimator\ntree = DecisionTreeClassifier(max_depth=2)\n\n# adaboost with the tree as base estimator\nadaboost_model_1 = AdaBoostClassifier(\n base_estimator=tree,\n n_estimators=600,\n learning_rate=1.5,\n algorithm=\"SAMME\")", "_____no_output_____" ], [ "# fit\nadaboost_model_1.fit(X_train, y_train)", "_____no_output_____" ], [ "# predictions\n# the second column represents the probability of a click resulting in a download\npredictions = adaboost_model_1.predict_proba(X_test)\npredictions[:10]", "_____no_output_____" ], [ "# metrics: AUC\nmetrics.roc_auc_score(y_test, predictions[:,1])", "_____no_output_____" ] ], [ [ "### AdaBoost - Hyperparameter Tuning\n\nLet's now tune the hyperparameters of the AdaBoost classifier. In this case, we have two types of hyperparameters - those of the component trees (max_depth etc.) and those of the ensemble (n_estimators, learning_rate etc.). 
\n\n\nWe can tune both using the following technique - the keys of the form ```base_estimator__parameter_name``` belong to the trees (base estimator), and the rest belong to the ensemble.", "_____no_output_____" ] ], [ [ "# parameter grid\nparam_grid = {\"base_estimator__max_depth\" : [2, 5],\n \"n_estimators\": [200, 400, 600]\n }", "_____no_output_____" ], [ "# base estimator\ntree = DecisionTreeClassifier()\n\n# adaboost with the tree as base estimator\n# learning rate is arbitrarily set to 0.6, we'll discuss learning_rate below\nABC = AdaBoostClassifier(\n base_estimator=tree,\n learning_rate=0.6,\n algorithm=\"SAMME\")", "_____no_output_____" ], [ "# run grid search\nfolds = 3\ngrid_search_ABC = GridSearchCV(ABC, \n cv = folds,\n param_grid=param_grid, \n scoring = 'roc_auc', \n return_train_score=True, \n verbose = 1)\n", "_____no_output_____" ], [ "# fit \ngrid_search_ABC.fit(X_train, y_train)", "Fitting 3 folds for each of 6 candidates, totalling 18 fits\n" ], [ "# cv results\ncv_results = pd.DataFrame(grid_search_ABC.cv_results_)\ncv_results", "_____no_output_____" ], [ "# plotting AUC with hyperparameter combinations\n\nplt.figure(figsize=(16,6))\nfor n, depth in enumerate(param_grid['base_estimator__max_depth']):\n \n\n # subplot 1/n\n plt.subplot(1,3, n+1)\n depth_df = cv_results[cv_results['param_base_estimator__max_depth']==depth]\n\n plt.plot(depth_df[\"param_n_estimators\"], depth_df[\"mean_test_score\"])\n plt.plot(depth_df[\"param_n_estimators\"], depth_df[\"mean_train_score\"])\n plt.xlabel('n_estimators')\n plt.ylabel('AUC')\n plt.title(\"max_depth={0}\".format(depth))\n plt.ylim([0.60, 1])\n plt.legend(['test score', 'train score'], loc='upper left')\n plt.xscale('log')\n\n \n", "_____no_output_____" ] ], [ [ "The results above show that:\n- The ensemble with max_depth=5 is clearly overfitting (training auc is almost 1, while the test score is much lower)\n- At max_depth=2, the model performs slightly better (approx 95% AUC) with a higher test score \n\nThus, we should go ahead with ```max_depth=2``` and ```n_estimators=200```.\n\nNote that we haven't experimented with many other important hyperparameters till now, such as ```learning rate```, ```subsample``` etc., and the results might be considerably improved by tuning them. We'll next experiment with these hyperparameters.", "_____no_output_____" ] ], [ [ "# model performance on test data with chosen hyperparameters\n\n# base estimator\ntree = DecisionTreeClassifier(max_depth=2)\n\n# adaboost with the tree as base estimator\n# learning rate kept at 0.6, as used during the grid search\nABC = AdaBoostClassifier(\n base_estimator=tree,\n learning_rate=0.6,\n n_estimators=200,\n algorithm=\"SAMME\")\n\nABC.fit(X_train, y_train)", "_____no_output_____" ], [ "# predict on test data\npredictions = ABC.predict_proba(X_test)\npredictions[:10]", "_____no_output_____" ], [ "# roc auc\nmetrics.roc_auc_score(y_test, predictions[:, 1])", "_____no_output_____" ] ], [ [ "### Gradient Boosting Classifier\n\nLet's now try the gradient boosting classifier. We'll experiment with two main hyperparameters now - ```learning_rate``` (shrinkage) and ```subsample```. \n\nBy adjusting the learning rate to less than 1, we can regularize the model. 
A model with higher learning_rate learns fast, but is prone to overfitting; one with a lower learning rate learns slowly, but avoids overfitting.\n\nAlso, there's a trade-off between ```learning_rate``` and ```n_estimators``` - the higher the learning rate, the fewer trees the model needs (and thus we usually tune only one of them).\n\nAlso, by subsampling (setting ```subsample``` to less than 1), we can have the individual models built on random subsamples (each a fraction ```subsample``` of the training data). That way, each tree will be trained on a different subset, which reduces the model's variance.", "_____no_output_____" ] ], [ [ "# parameter grid\nparam_grid = {\"learning_rate\": [0.2, 0.6, 0.9],\n \"subsample\": [0.3, 0.6, 0.9]\n }", "_____no_output_____" ], [ "# gradient boosting classifier with trees as base estimators\nGBC = GradientBoostingClassifier(max_depth=2, n_estimators=200)", "_____no_output_____" ], [ "# run grid search\nfolds = 3\ngrid_search_GBC = GridSearchCV(GBC, \n cv = folds,\n param_grid=param_grid, \n scoring = 'roc_auc', \n return_train_score=True, \n verbose = 1)\n\ngrid_search_GBC.fit(X_train, y_train)", "Fitting 3 folds for each of 9 candidates, totalling 27 fits\n" ], [ "cv_results = pd.DataFrame(grid_search_GBC.cv_results_)\ncv_results.head()", "_____no_output_____" ], [ "# # plotting\nplt.figure(figsize=(16,6))\n\n\nfor n, subsample in enumerate(param_grid['subsample']):\n \n\n # subplot 1/n\n plt.subplot(1,len(param_grid['subsample']), n+1)\n df = cv_results[cv_results['param_subsample']==subsample]\n\n plt.plot(df[\"param_learning_rate\"], df[\"mean_test_score\"])\n plt.plot(df[\"param_learning_rate\"], df[\"mean_train_score\"])\n plt.xlabel('learning_rate')\n plt.ylabel('AUC')\n plt.title(\"subsample={0}\".format(subsample))\n plt.ylim([0.60, 1])\n plt.legend(['test score', 'train score'], loc='upper left')\n plt.xscale('log')\n", "_____no_output_____" ] ], [ [ "It is clear from the plot above that the model with a lower subsample ratio performs better, while those with higher subsamples tend to overfit. \n\nAlso, a lower learning rate results in less overfitting.", "_____no_output_____" ], [ "### XGBoost\n\nLet's finally try XGBoost. The hyperparameters are the same, some important ones being ```subsample```, ```learning_rate```, ```max_depth``` etc.\n", "_____no_output_____" ] ], [ [ "# fit model on training data with default hyperparameters\nmodel = XGBClassifier()\nmodel.fit(X_train, y_train)", "_____no_output_____" ], [ "# make predictions for test data\n# use predict_proba since we need probabilities to compute auc\ny_pred = model.predict_proba(X_test)\ny_pred[:10]", "_____no_output_____" ], [ "# evaluate predictions\nroc = metrics.roc_auc_score(y_test, y_pred[:, 1])\nprint(\"AUC: %.2f%%\" % (roc * 100.0))", "AUC: 94.85%\n" ] ], [ [ "The roc_auc in this case is about 0.95 (i.e. about 95%) with default hyperparameters. Let's try changing the hyperparameters - an exhaustive list of XGBoost hyperparameters is here: http://xgboost.readthedocs.io/en/latest/parameter.html\n", "_____no_output_____" ], [ "Let's now try tuning the hyperparameters using k-fold CV. 
We'll then use grid search CV to find the optimal values of hyperparameters.", "_____no_output_____" ] ], [ [ "# hyperparameter tuning with XGBoost\n\n# number of cross-validation folds \nfolds = 3\n\n# specify range of hyperparameters\nparam_grid = {'learning_rate': [0.2, 0.6], \n 'subsample': [0.3, 0.6, 0.9]} 

\n\n\n# specify model\nxgb_model = XGBClassifier(max_depth=2, n_estimators=200)\n\n# set up GridSearchCV()\nmodel_cv = GridSearchCV(estimator = xgb_model, \n param_grid = param_grid, \n scoring= 'roc_auc', \n cv = folds, \n verbose = 1,\n return_train_score=True) 

\n\n", "_____no_output_____" ], [ "# fit the model\nmodel_cv.fit(X_train, y_train) ", "Fitting 3 folds for each of 6 candidates, totalling 18 fits\n" ], [ "# cv results\ncv_results = pd.DataFrame(model_cv.cv_results_)\ncv_results", "_____no_output_____" ], [ "# convert parameters to int for plotting on x-axis\n#cv_results['param_learning_rate'] = cv_results['param_learning_rate'].astype('float')\n#cv_results['param_max_depth'] = cv_results['param_max_depth'].astype('float')\ncv_results.head()", "_____no_output_____" ], [ "# # plotting\nplt.figure(figsize=(16,6))\n\nparam_grid = {'learning_rate': [0.2, 0.6], \n 'subsample': [0.3, 0.6, 0.9]} 

\n\n\nfor n, subsample in enumerate(param_grid['subsample']):\n \n\n # subplot 1/n\n plt.subplot(1,len(param_grid['subsample']), n+1)\n df = cv_results[cv_results['param_subsample']==subsample]\n\n plt.plot(df[\"param_learning_rate\"], df[\"mean_test_score\"])\n plt.plot(df[\"param_learning_rate\"], df[\"mean_train_score\"])\n plt.xlabel('learning_rate')\n plt.ylabel('AUC')\n plt.title(\"subsample={0}\".format(subsample))\n plt.ylim([0.60, 1])\n plt.legend(['test score', 'train score'], loc='upper left')\n plt.xscale('log')", "_____no_output_____" ] ], [ [ "The results show that a subsample size of 0.6 and a learning_rate of about 0.2 seem optimal. \nAlso, XGBoost has resulted in the highest ROC AUC obtained (across various hyperparameters). \n\n\nLet's build a final model with the chosen hyperparameters.", "_____no_output_____" ] ], [ [ "# chosen hyperparameters\n# 'objective':'binary:logistic' outputs probability rather than label, which we need for auc\nparams = {'learning_rate': 0.2,\n 'max_depth': 2, \n 'n_estimators':200,\n 'subsample':0.6,\n 'objective':'binary:logistic'}\n\n# fit model on training data\n# (unpack the dict with ** so that each entry is passed as a keyword argument)\nmodel = XGBClassifier(**params)\nmodel.fit(X_train, y_train)", "_____no_output_____" ], [ "# predict\ny_pred = model.predict_proba(X_test)\ny_pred[:10]", "_____no_output_____" ] ], [ [ "The first column in y_pred is P(0), i.e. the probability that the click does not result in a download, and the second column is P(1), i.e. the probability of a download.", "_____no_output_____" ] ], [ [ "# roc_auc\nauc = sklearn.metrics.roc_auc_score(y_test, y_pred[:, 1])\nauc", "_____no_output_____" ] ], [ [ "Finally, let's also look at the feature importances.", "_____no_output_____" ] ], [ [ "# feature importance\nimportance = dict(zip(X_train.columns, model.feature_importances_))\nimportance", "_____no_output_____" ], [ "# plot\nplt.bar(range(len(model.feature_importances_)), model.feature_importances_)\nplt.show()", "_____no_output_____" ] ], [ [ "## Predictions on Test Data\n\nSince this problem is hosted on Kaggle, you can choose to make predictions on the test data and submit your results. Please note the following points and recommendations if you go ahead with Kaggle:\n\nRecommendations for training:\n- We have used only a fraction of the training set (train_sample, 100k rows); the full training data on Kaggle (train.csv) has about 180 million rows. 
You'll get good results only if you train the model on a significant portion of the training dataset. \n- Because of the size, you'll need to use Kaggle kernels to train the model on full training data. Kaggle kernels provide powerful computation capacities on cloud (for free). \n- Even on the kernel, you may need to use a portion of the training dataset (try using the last 20-30 million rows).\n- Make sure you save memory by following some tricks and best practices, else you won't be able to train the model at all on a large dataset.\n\n", "_____no_output_____" ] ], [ [ "# # read submission file\n#sample_sub = pd.read_csv(path+'sample_submission.csv')\n#sample_sub.head()", "_____no_output_____" ], [ "# # predict probability of test data\n# test_final = pd.read_csv(path+'test.csv')\n# test_final.head()", "_____no_output_____" ], [ "# # predictions on test data\n# test_final = timeFeatures(test_final)\n# test_final.head()", "_____no_output_____" ], [ "# test_final.drop(['click_time', 'datetime'], axis=1, inplace=True)", "_____no_output_____" ], [ "# test_final.head()", "_____no_output_____" ], [ "# test_final[categorical_cols]=test_final[categorical_cols].apply(lambda x: le.fit_transform(x))", "_____no_output_____" ], [ "# test_final.info()", "_____no_output_____" ], [ "# # number of clicks by IP\n# ip_count = test_final.groupby('ip')['channel'].count().reset_index()\n# ip_count.columns = ['ip', 'count_by_ip']\n# ip_count.head()", "_____no_output_____" ], [ "# merge this with the training data\n# test_final = pd.merge(test_final, ip_count, on='ip', how='left')", "_____no_output_____" ], [ "# del ip_count", "_____no_output_____" ], [ "# test_final.info()", "_____no_output_____" ], [ "# # predict on test data\n# y_pred_test = model.predict_proba(test_final.drop('click_id', axis=1))\n# y_pred_test[:10]", "_____no_output_____" ], [ "# # # create submission file\n# sub = pd.DataFrame()\n# sub['click_id'] = test_final['click_id']\n# sub['is_attributed'] = y_pred_test[:, 1]\n# sub.head()", "_____no_output_____" ], [ "# sub.to_csv('kshitij_sub_03.csv', float_format='%.8f', index=False)", "_____no_output_____" ], [ "# # model\n\n# dtrain = xgb.DMatrix(X_train, y_train)\n# del X_train, y_train\n# gc.collect()\n\n# watchlist = [(dtrain, 'train')]\n# model = xgb.train(params, dtrain, 30, watchlist, maximize=True, verbose_eval=1)", "_____no_output_____" ], [ "# del dtrain\n# gc.collect()", "_____no_output_____" ], [ "# # Plot the feature importance from xgboost\n# plot_importance(model)\n# plt.gcf().savefig('feature_importance_xgb.png')\n", "_____no_output_____" ], [ "# # Load the test for predict \n# test = pd.read_csv(path+\"test.csv\")", "_____no_output_____" ], [ "# test.head()", "_____no_output_____" ], [ "# # number of clicks by IP\n# ip_count = train_sample.groupby('ip')['channel'].count().reset_index()\n# ip_count.columns = ['ip', 'count_by_ip']\n# ip_count.head()", "_____no_output_____" ], [ "# test = pd.merge(test, ip_count, on='ip', how='left', sort=False)\n# gc.collect()", "_____no_output_____" ], [ "# test = timeFeatures(test)\n# test.drop(['click_time', 'datetime'], axis=1, inplace=True)\n# test.head()", "_____no_output_____" ], [ "# print(test.columns)\n# print(train_sample.columns)", "_____no_output_____" ], [ "# test = test[['click_id','ip', 'app', 'device', 'os', 'channel', 'day_of_week',\n# 'day_of_year', 'month', 'hour', 'count_by_ip']]", "_____no_output_____" ], [ "# dtest = xgb.DMatrix(test.drop('click_id', axis=1))", "_____no_output_____" ], [ "# # Save the predictions\n# sub = 
pd.DataFrame()\n# sub['click_id'] = test['click_id']\n\n# sub['is_attributed'] = model.predict(dtest, ntree_limit=model.best_ntree_limit)\n# sub.to_csv('xgb_sub.csv', float_format='%.8f', index=False)", "_____no_output_____" ], [ "# sub.shape", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
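Editor's note (an illustrative sketch, not part of the records): the TalkingData record flags the roughly 0.2% positive rate as the toughest part of the problem but never adjusts for it. One standard option (a sketch, not the record author's method) is XGBoost's scale_pos_weight parameter:

    from xgboost import XGBClassifier

    # approximate class counts implied by the 100k-row sample in the record
    n_pos, n_neg = 227, 99773
    model = XGBClassifier(
        max_depth=2,
        n_estimators=200,
        scale_pos_weight=n_neg / n_pos,  # upweight the rare positive class
    )
    # model.fit(X_train, y_train) would then proceed exactly as in the record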
d0185e73a20bbf08e95fd0ea92740a6eec65ab58
29,121
ipynb
Jupyter Notebook
exercise_notebooks_my_solutions/2. Neural Networks/1. Introduction to Neural Networks.ipynb
Yixuan-Lee/udacity-deep-learning-nanodegree
bbdb8cff14bb5f6726ab36112b17e040bcc3baa9
[ "MIT" ]
null
null
null
exercise_notebooks_my_solutions/2. Neural Networks/1. Introduction to Neural Networks.ipynb
Yixuan-Lee/udacity-deep-learning-nanodegree
bbdb8cff14bb5f6726ab36112b17e040bcc3baa9
[ "MIT" ]
null
null
null
exercise_notebooks_my_solutions/2. Neural Networks/1. Introduction to Neural Networks.ipynb
Yixuan-Lee/udacity-deep-learning-nanodegree
bbdb8cff14bb5f6726ab36112b17e040bcc3baa9
[ "MIT" ]
1
2022-02-10T03:23:47.000Z
2022-02-10T03:23:47.000Z
29.267337
142
0.471756
[ [ [ "# Topic 2: Neural network\n\n## Lesson 1: Introduction to Neural Networks\n", "_____no_output_____" ], [ "### 1. AND perceptron\n\nComplete the cell below:\n", "_____no_output_____" ] ], [ [ "import pandas as pd\n\n# TODO: Set weight1, weight2, and bias\nweight1 = 0.0\nweight2 = 0.0\nbias = 0.0\n\n\n# DON'T CHANGE ANYTHING BELOW\n# Inputs and outputs\ntest_inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]\ncorrect_outputs = [False, False, False, True]\noutputs = []\n\n# Generate and check output\nfor test_input, correct_output in zip(test_inputs, correct_outputs):\n linear_combination = weight1 * test_input[0] + weight2 * test_input[1] + bias\n output = int(linear_combination >= 0)\n is_correct_string = 'Yes' if output == correct_output else 'No'\n outputs.append([test_input[0], test_input[1], linear_combination, output, is_correct_string])\n\n# Print output\nnum_wrong = len([output[4] for output in outputs if output[4] == 'No'])\noutput_frame = pd.DataFrame(outputs, columns=['Input 1', ' Input 2', ' Linear Combination', ' Activation Output', ' Is Correct'])\nif not num_wrong:\n print('Nice! You got it all correct.\\n')\nelse:\n print('You got {} wrong. Keep trying!\\n'.format(num_wrong))\nprint(output_frame.to_string(index=False))\n", "_____no_output_____" ] ], [ [ "My answer:", "_____no_output_____" ] ], [ [ "import pandas as pd\n\n# TODO: Set weight1, weight2, and bias\nk = 100\nweight1 = k * 1.0\nweight2 = k * 1.0\nbias = k * (-2.0)\n\n\n# DON'T CHANGE ANYTHING BELOW\n# Inputs and outputs\ntest_inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]\ncorrect_outputs = [False, False, False, True]\noutputs = []\n\n# Generate and check output\nfor test_input, correct_output in zip(test_inputs, correct_outputs):\n linear_combination = weight1 * test_input[0] + weight2 * test_input[1] + bias\n output = int(linear_combination >= 0)\n is_correct_string = 'Yes' if output == correct_output else 'No'\n outputs.append([test_input[0], test_input[1], linear_combination, output, is_correct_string])\n\n# Print output\nnum_wrong = len([output[4] for output in outputs if output[4] == 'No'])\noutput_frame = pd.DataFrame(outputs, columns=['Input 1', ' Input 2', ' Linear Combination', ' Activation Output', ' Is Correct'])\nif not num_wrong:\n print('Nice! You got it all correct.\\n')\nelse:\n print('You got {} wrong. Keep trying!\\n'.format(num_wrong))\nprint(output_frame.to_string(index=False))\n", "Nice! You got it all correct.\n\n Input 1 Input 2 Linear Combination Activation Output Is Correct\n 0 0 -200.0 0 Yes\n 0 1 -100.0 0 Yes\n 1 0 -100.0 0 Yes\n 1 1 0.0 1 Yes\n" ] ], [ [ "### 2. 
OR Perceptron\n\nComplete the cell below:\n", "_____no_output_____" ] ], [ [ "import pandas as pd\n\n# TODO: Set weight1, weight2, and bias\nweight1 = 0.0\nweight2 = 0.0\nbias = 0.0\n\n\n# DON'T CHANGE ANYTHING BELOW\n# Inputs and outputs\ntest_inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]\ncorrect_outputs = [False, True, True, True]\noutputs = []\n\n# Generate and check output\nfor test_input, correct_output in zip(test_inputs, correct_outputs):\n linear_combination = weight1 * test_input[0] + weight2 * test_input[1] + bias\n output = int(linear_combination >= 0)\n is_correct_string = 'Yes' if output == correct_output else 'No'\n outputs.append([test_input[0], test_input[1], linear_combination, output, is_correct_string])\n\n# Print output\nnum_wrong = len([output[4] for output in outputs if output[4] == 'No'])\noutput_frame = pd.DataFrame(outputs, columns=['Input 1', ' Input 2', ' Linear Combination', ' Activation Output', ' Is Correct'])\nif not num_wrong:\n print('Nice! You got it all correct.\\n')\nelse:\n print('You got {} wrong. Keep trying!\\n'.format(num_wrong))\nprint(output_frame.to_string(index=False))\n", "_____no_output_____" ] ], [ [ "My answer:\n", "_____no_output_____" ] ], [ [ "import pandas as pd\n\n# TODO: Set weight1, weight2, and bias\nk = 100\nweight1 = k * 1.0\nweight2 = k * 1.0\nbias = k * (-1.0)\n\n\n# DON'T CHANGE ANYTHING BELOW\n# Inputs and outputs\ntest_inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]\ncorrect_outputs = [False, True, True, True]\noutputs = []\n\n# Generate and check output\nfor test_input, correct_output in zip(test_inputs, correct_outputs):\n linear_combination = weight1 * test_input[0] + weight2 * test_input[1] + bias\n output = int(linear_combination >= 0)\n is_correct_string = 'Yes' if output == correct_output else 'No'\n outputs.append([test_input[0], test_input[1], linear_combination, output, is_correct_string])\n\n# Print output\nnum_wrong = len([output[4] for output in outputs if output[4] == 'No'])\noutput_frame = pd.DataFrame(outputs, columns=['Input 1', ' Input 2', ' Linear Combination', ' Activation Output', ' Is Correct'])\nif not num_wrong:\n print('Nice! You got it all correct.\\n')\nelse:\n print('You got {} wrong. Keep trying!\\n'.format(num_wrong))\nprint(output_frame.to_string(index=False))\n", "Nice! You got it all correct.\n\n Input 1 Input 2 Linear Combination Activation Output Is Correct\n 0 0 -100.0 0 Yes\n 0 1 0.0 1 Yes\n 1 0 0.0 1 Yes\n 1 1 100.0 1 Yes\n" ] ], [ [ "2 ways to transform AND perceptron to OR perceptron:\n\n* Increase the weights $w$\n* Decrease the magnitude of the bias $|b|$", "_____no_output_____" ], [ "### 3. 
NOT Perceptron\n\nComplete the code below:\n\nOnly consider the second number in ```test_inputs``` as the input; ignore the first number.", "_____no_output_____" ] ], [ [ "import pandas as pd\n\n# TODO: Set weight1, weight2, and bias\nweight1 = 0.0\nweight2 = 0.0\nbias = 0.0\n\n\n# DON'T CHANGE ANYTHING BELOW\n# Inputs and outputs\ntest_inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]\ncorrect_outputs = [True, False, True, False]\noutputs = []\n\n# Generate and check output\nfor test_input, correct_output in zip(test_inputs, correct_outputs):\n linear_combination = weight1 * test_input[0] + weight2 * test_input[1] + bias\n output = int(linear_combination >= 0)\n is_correct_string = 'Yes' if output == correct_output else 'No'\n outputs.append([test_input[0], test_input[1], linear_combination, output, is_correct_string])\n\n# Print output\nnum_wrong = len([output[4] for output in outputs if output[4] == 'No'])\noutput_frame = pd.DataFrame(outputs, columns=['Input 1', ' Input 2', ' Linear Combination', ' Activation Output', ' Is Correct'])\nif not num_wrong:\n print('Nice! You got it all correct.\\n')\nelse:\n print('You got {} wrong. Keep trying!\\n'.format(num_wrong))\nprint(output_frame.to_string(index=False))", "_____no_output_____" ] ], [ [ "My answer:", "_____no_output_____" ] ], [ [ "import pandas as pd\n\n# TODO: Set weight1, weight2, and bias\nk = 100\nweight1 = 0.0\nweight2 = k * (-1.0)\nbias = 0.0\n\n\n# DON'T CHANGE ANYTHING BELOW\n# Inputs and outputs\ntest_inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]\ncorrect_outputs = [True, False, True, False]\noutputs = []\n\n# Generate and check output\nfor test_input, correct_output in zip(test_inputs, correct_outputs):\n linear_combination = weight1 * test_input[0] + weight2 * test_input[1] + bias\n output = int(linear_combination >= 0)\n is_correct_string = 'Yes' if output == correct_output else 'No'\n outputs.append([test_input[0], test_input[1], linear_combination, output, is_correct_string])\n\n# Print output\nnum_wrong = len([output[4] for output in outputs if output[4] == 'No'])\noutput_frame = pd.DataFrame(outputs, columns=['Input 1', ' Input 2', ' Linear Combination', ' Activation Output', ' Is Correct'])\nif not num_wrong:\n print('Nice! You got it all correct.\\n')\nelse:\n print('You got {} wrong. Keep trying!\\n'.format(num_wrong))\nprint(output_frame.to_string(index=False))", "Nice! You got it all correct.\n\n Input 1 Input 2 Linear Combination Activation Output Is Correct\n 0 0 0.0 1 Yes\n 0 1 -100.0 0 Yes\n 1 0 0.0 1 Yes\n 1 1 -100.0 0 Yes\n" ] ], [ [ "### 4. XOR Perceptron\n\nAn XOR Perceptron can be built from an AND Perceptron, an OR Perceptron and a NOT Perceptron.\n\n<img src=\"../../imgs/xor.png\" width=\"50%\">\n\n(image source: Udacity)\n\n```NAND``` consists of an AND perceptron and a NOT perceptron.
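\n\nA minimal sketch of this composition (my own illustration, not part of the course materials; the ```perceptron``` helper and the unscaled weights are assumptions), reusing the weight and bias patterns found in the sections above with a step activation that fires at >= 0:\n\n```\ndef perceptron(x1, x2, w1, w2, b):\n return int(w1 * x1 + w2 * x2 + b >= 0)\n\ndef AND(x1, x2): return perceptron(x1, x2, 1.0, 1.0, -2.0)\ndef OR(x1, x2): return perceptron(x1, x2, 1.0, 1.0, -1.0)\ndef NOT(x): return perceptron(0, x, 0.0, -1.0, 0.0)\n\n# NOT(AND(...)) plays the role of the NAND branch in the figure\ndef XOR(x1, x2): return AND(OR(x1, x2), NOT(AND(x1, x2)))\n\nassert [XOR(0, 0), XOR(0, 1), XOR(1, 0), XOR(1, 1)] == [0, 1, 1, 0]\n```\n", "_____no_output_____" ], [ "### 5. 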
Perceptron algorithm\n\nComplete the cell below:\n", "_____no_output_____" ] ], [ [ "import numpy as np\n# Setting the random seed, feel free to change it and see different solutions.\nnp.random.seed(42)\n\ndef stepFunction(t):\n if t >= 0:\n return 1\n return 0\n\ndef prediction(X, W, b):\n return stepFunction((np.matmul(X,W)+b)[0])\n\n# TODO: Fill in the code below to implement the perceptron trick.\n# The function should receive as inputs the data X, the labels y,\n# the weights W (as an array), and the bias b,\n# update the weights and bias W, b, according to the perceptron algorithm,\n# and return W and b.\ndef perceptronStep(X, y, W, b, learn_rate = 0.01):\n # Fill in code\n return W, b\n \n# This function runs the perceptron algorithm repeatedly on the dataset,\n# and returns a few of the boundary lines obtained in the iterations,\n# for plotting purposes.\n# Feel free to play with the learning rate and the num_epochs,\n# and see your results plotted below.\ndef trainPerceptronAlgorithm(X, y, learn_rate = 0.01, num_epochs = 25):\n x_min, x_max = min(X.T[0]), max(X.T[0])\n y_min, y_max = min(X.T[1]), max(X.T[1])\n W = np.array(np.random.rand(2,1))\n b = np.random.rand(1)[0] + x_max\n # These are the solution lines that get plotted below.\n boundary_lines = []\n for i in range(num_epochs):\n # In each epoch, we apply the perceptron step.\n W, b = perceptronStep(X, y, W, b, learn_rate)\n boundary_lines.append((-W[0]/W[1], -b/W[1]))\n return boundary_lines\n", "_____no_output_____" ] ], [ [ "This is data.csv:", "_____no_output_____" ], [ "```\n0.78051,-0.063669,1\n0.28774,0.29139,1\n0.40714,0.17878,1\n0.2923,0.4217,1\n0.50922,0.35256,1\n0.27785,0.10802,1\n0.27527,0.33223,1\n0.43999,0.31245,1\n0.33557,0.42984,1\n0.23448,0.24986,1\n0.0084492,0.13658,1\n0.12419,0.33595,1\n0.25644,0.42624,1\n0.4591,0.40426,1\n0.44547,0.45117,1\n0.42218,0.20118,1\n0.49563,0.21445,1\n0.30848,0.24306,1\n0.39707,0.44438,1\n0.32945,0.39217,1\n0.40739,0.40271,1\n0.3106,0.50702,1\n0.49638,0.45384,1\n0.10073,0.32053,1\n0.69907,0.37307,1\n0.29767,0.69648,1\n0.15099,0.57341,1\n0.16427,0.27759,1\n0.33259,0.055964,1\n0.53741,0.28637,1\n0.19503,0.36879,1\n0.40278,0.035148,1\n0.21296,0.55169,1\n0.48447,0.56991,1\n0.25476,0.34596,1\n0.21726,0.28641,1\n0.67078,0.46538,1\n0.3815,0.4622,1\n0.53838,0.32774,1\n0.4849,0.26071,1\n0.37095,0.38809,1\n0.54527,0.63911,1\n0.32149,0.12007,1\n0.42216,0.61666,1\n0.10194,0.060408,1\n0.15254,0.2168,1\n0.45558,0.43769,1\n0.28488,0.52142,1\n0.27633,0.21264,1\n0.39748,0.31902,1\n0.5533,1,0\n0.44274,0.59205,0\n0.85176,0.6612,0\n0.60436,0.86605,0\n0.68243,0.48301,0\n1,0.76815,0\n0.72989,0.8107,0\n0.67377,0.77975,0\n0.78761,0.58177,0\n0.71442,0.7668,0\n0.49379,0.54226,0\n0.78974,0.74233,0\n0.67905,0.60921,0\n0.6642,0.72519,0\n0.79396,0.56789,0\n0.70758,0.76022,0\n0.59421,0.61857,0\n0.49364,0.56224,0\n0.77707,0.35025,0\n0.79785,0.76921,0\n0.70876,0.96764,0\n0.69176,0.60865,0\n0.66408,0.92075,0\n0.65973,0.66666,0\n0.64574,0.56845,0\n0.89639,0.7085,0\n0.85476,0.63167,0\n0.62091,0.80424,0\n0.79057,0.56108,0\n0.58935,0.71582,0\n0.56846,0.7406,0\n0.65912,0.71548,0\n0.70938,0.74041,0\n0.59154,0.62927,0\n0.45829,0.4641,0\n0.79982,0.74847,0\n0.60974,0.54757,0\n0.68127,0.86985,0\n0.76694,0.64736,0\n0.69048,0.83058,0\n0.68122,0.96541,0\n0.73229,0.64245,0\n0.76145,0.60138,0\n0.58985,0.86955,0\n0.73145,0.74516,0\n0.77029,0.7014,0\n0.73156,0.71782,0\n0.44556,0.57991,0\n0.85275,0.85987,0\n0.51912,0.62359,0\n```", "_____no_output_____" ], [ "My answer:\n", "_____no_output_____" ] ], [ [ "import numpy as 
np\n\nX = np.array([\n [0.78051,-0.063669],\n [0.28774,0.29139],\n [0.40714,0.17878],\n [0.2923,0.4217],\n [0.50922,0.35256],\n [0.27785,0.10802],\n [0.27527,0.33223],\n [0.43999,0.31245],\n [0.33557,0.42984],\n [0.23448,0.24986],\n [0.0084492,0.13658],\n [0.12419,0.33595],\n [0.25644,0.42624],\n [0.4591,0.40426],\n [0.44547,0.45117],\n [0.42218,0.20118],\n [0.49563,0.21445],\n [0.30848,0.24306],\n [0.39707,0.44438],\n [0.32945,0.39217],\n [0.40739,0.40271],\n [0.3106,0.50702],\n [0.49638,0.45384],\n [0.10073,0.32053],\n [0.69907,0.37307],\n [0.29767,0.69648],\n [0.15099,0.57341],\n [0.16427,0.27759],\n [0.33259,0.055964],\n [0.53741,0.28637],\n [0.19503,0.36879],\n [0.40278,0.035148],\n [0.21296,0.55169],\n [0.48447,0.56991],\n [0.25476,0.34596],\n [0.21726,0.28641],\n [0.67078,0.46538],\n [0.3815,0.4622],\n [0.53838,0.32774],\n [0.4849,0.26071],\n [0.37095,0.38809],\n [0.54527,0.63911],\n [0.32149,0.12007],\n [0.42216,0.61666],\n [0.10194,0.060408],\n [0.15254,0.2168],\n [0.45558,0.43769],\n [0.28488,0.52142],\n [0.27633,0.21264],\n [0.39748,0.31902],\n [0.5533,1],\n [0.44274,0.59205],\n [0.85176,0.6612],\n [0.60436,0.86605],\n [0.68243,0.48301],\n [1,0.76815],\n [0.72989,0.8107],\n [0.67377,0.77975],\n [0.78761,0.58177],\n [0.71442,0.7668],\n [0.49379,0.54226],\n [0.78974,0.74233],\n [0.67905,0.60921],\n [0.6642,0.72519],\n [0.79396,0.56789],\n [0.70758,0.76022],\n [0.59421,0.61857],\n [0.49364,0.56224],\n [0.77707,0.35025],\n [0.79785,0.76921],\n [0.70876,0.96764],\n [0.69176,0.60865],\n [0.66408,0.92075],\n [0.65973,0.66666],\n [0.64574,0.56845],\n [0.89639,0.7085],\n [0.85476,0.63167],\n [0.62091,0.80424],\n [0.79057,0.56108],\n [0.58935,0.71582],\n [0.56846,0.7406],\n [0.65912,0.71548],\n [0.70938,0.74041],\n [0.59154,0.62927],\n [0.45829,0.4641],\n [0.79982,0.74847],\n [0.60974,0.54757],\n [0.68127,0.86985],\n [0.76694,0.64736],\n [0.69048,0.83058],\n [0.68122,0.96541],\n [0.73229,0.64245],\n [0.76145,0.60138],\n [0.58985,0.86955],\n [0.73145,0.74516],\n [0.77029,0.7014],\n [0.73156,0.71782],\n [0.44556,0.57991],\n [0.85275,0.85987],\n [0.51912,0.62359]\n])\n\ny = np.array([\n [1],\n [1],\n [1],\n [1],\n [1],\n [1],\n [1],\n [1],\n [1],\n [1],\n [1],\n [1],\n [1],\n [1],\n [1],\n [1],\n [1],\n [1],\n [1],\n [1],\n [1],\n [1],\n [1],\n [1],\n [1],\n [1],\n [1],\n [1],\n [1],\n [1],\n [1],\n [1],\n [1],\n [1],\n [1],\n [1],\n [1],\n [1],\n [1],\n [1],\n [1],\n [1],\n [1],\n [1],\n [1],\n [1],\n [1],\n [1],\n [1],\n [1],\n [0],\n [0],\n [0],\n [0],\n [0],\n [0],\n [0],\n [0],\n [0],\n [0],\n [0],\n [0],\n [0],\n [0],\n [0],\n [0],\n [0],\n [0],\n [0],\n [0],\n [0],\n [0],\n [0],\n [0],\n [0],\n [0],\n [0],\n [0],\n [0],\n [0],\n [0],\n [0],\n [0],\n [0],\n [0],\n [0],\n [0],\n [0],\n [0],\n [0],\n [0],\n [0],\n [0],\n [0],\n [0],\n [0],\n [0],\n [0],\n [0],\n [0]\n])\n\nprint(X.shape)\nprint(y.shape)\n", "(100, 2)\n(100, 1)\n" ], [ "import numpy as np\n# Setting the random seed, feel free to change it and see different solutions.\nnp.random.seed(42)\n\ndef stepFunction(t):\n if t >= 0:\n return 1\n return 0\n\ndef prediction(X, W, b):\n return stepFunction((np.matmul(X,W)+b)[0])\n\n# TODO: Fill in the code below to implement the perceptron trick.\n# The function should receive as inputs the data X, the labels y,\n# the weights W (as an array), and the bias b,\n# update the weights and bias W, b, according to the perceptron algorithm,\n# and return W and b.\ndef perceptronStep(X, y, W, b, learn_rate = 0.01):\n # Fill in code\n for i in range(len(y)):\n true_label = y[i]\n pred = 
prediction(X[i], W, b)\n \n if true_label == pred:\n continue\n else:\n if pred == 1 and true_label == 0:\n # the point is classified positive, but it has a negative label\n W -= learn_rate * X[i].reshape(-1, 1)\n b -= learn_rate\n elif pred == 0 and true_label == 1:\n # the point is classified negative, but it has a positive label\n W += learn_rate * X[i].reshape(-1, 1)\n b += learn_rate\n return W, b\n \n# This function runs the perceptron algorithm repeatedly on the dataset,\n# and returns a few of the boundary lines obtained in the iterations,\n# for plotting purposes.\n# Feel free to play with the learning rate and the num_epochs,\n# and see your results plotted below.\ndef trainPerceptronAlgorithm(X, y, learn_rate = 0.01, num_epochs = 25):\n x_min, x_max = min(X.T[0]), max(X.T[0])\n y_min, y_max = min(X.T[1]), max(X.T[1])\n W = np.array(np.random.rand(2,1))\n b = np.random.rand(1)[0] + x_max\n # These are the solution lines that get plotted below.\n boundary_lines = []\n for i in range(num_epochs):\n # In each epoch, we apply the perceptron step.\n W, b = perceptronStep(X, y, W, b, learn_rate)\n boundary_lines.append((-W[0]/W[1], -b/W[1]))\n return boundary_lines", "_____no_output_____" ] ], [ [ "Solution:\n```\ndef perceptronStep(X, y, W, b, learn_rate = 0.01):\n for i in range(len(X)):\n y_hat = prediction(X[i],W,b)\n if y[i]-y_hat == 1:\n W[0] += X[i][0]*learn_rate\n W[1] += X[i][1]*learn_rate\n b += learn_rate\n elif y[i]-y_hat == -1:\n W[0] -= X[i][0]*learn_rate\n W[1] -= X[i][1]*learn_rate\n b -= learn_rate\n return W, b\n\n```", "_____no_output_____" ], [ "### 6. Softmax\n\nComplete the code below. Recall that the softmax of a list $L$ maps each element $L_i$ to $e^{L_i} / \\sum_j e^{L_j}$:\n", "_____no_output_____" ] ], [ [ "import numpy as np\n\n# Write a function that takes as input a list of numbers, and returns\n# the list of values given by the softmax function.\ndef softmax(L):\n pass", "_____no_output_____" ] ], [ [ "My answer:\n", "_____no_output_____" ] ], [ [ "import numpy as np\n\n# Write a function that takes as input a list of numbers, and returns\n# the list of values given by the softmax function.\ndef softmax(L):\n return [(np.exp(L[i]) / np.sum(np.exp(L))) for i in range(len(L))]\n\nL = [0, 2, 1]\nsoftmax(L)", "_____no_output_____" ] ], [ [ "### 7. Cross-Entropy\n\nFormula:\n\n$$\n\\text{Cross-Entropy} = - \\sum_{i=1}^{|X|} \\left[ y_i \\log(p_i) + (1 - y_i) \\log(1 - p_i) \\right]\n$$\n\nwhere\n* $y_i$ is the true label for the $i^{th}$ instance\n* $p_i$ is the probability that the $i^{th}$ instance is positive.\n\nFor example, with $Y = [1, 0]$ and $P = [0.8, 0.2]$, the cross-entropy is $-(\\log 0.8 + \\log 0.8) \\approx 0.446$.\n\nComplete the code below:", "_____no_output_____" ] ], [ [ "import numpy as np\n\n# Write a function that takes as input two lists Y, P,\n# and returns the float corresponding to their cross-entropy.\ndef cross_entropy(Y, P):\n pass", "_____no_output_____" ] ], [ [ "My answer:\n", "_____no_output_____" ] ], [ [ "import numpy as np\n\n# Write a function that takes as input two lists Y, P,\n# and returns the float corresponding to their cross-entropy.\ndef cross_entropy(Y, P):\n return -np.sum([Y[i] * np.log(P[i]) + (1 - Y[i]) * np.log(1 - P[i]) for i in range(len(Y))])", "_____no_output_____" ], [ "Y = np.array([1, 0, 1, 1])\nP = np.array([0.4, 0.6, 0.1, 0.5])\n\nassert float(format(cross_entropy(Y, P), '.10f')) == 4.8283137373", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
d01862040bdd488b25d2b2187568fd345fce21ed
12,647
ipynb
Jupyter Notebook
MIT-AI/lab1/lab1.ipynb
ricsinaruto/Python-projects
26924aaca973051181f0e7ab544e8dae5ffb4eb1
[ "MIT" ]
1
2017-05-01T10:07:02.000Z
2017-05-01T10:07:02.000Z
MIT-AI/lab1/lab1.ipynb
ricsinaruto/Python-projects
26924aaca973051181f0e7ab544e8dae5ffb4eb1
[ "MIT" ]
null
null
null
MIT-AI/lab1/lab1.ipynb
ricsinaruto/Python-projects
26924aaca973051181f0e7ab544e8dae5ffb4eb1
[ "MIT" ]
1
2018-08-28T16:14:00.000Z
2018-08-28T16:14:00.000Z
47.01487
998
0.533802
[ [ [ "# lab1.py \n\n#You should start here when providing the answers to Problem Set 1.\n#Follow along in the problem set, which is at:\n#http://ai6034.mit.edu/fall12/index.php?title=Lab_1\n\n# Import helper objects that provide the logical operations\n# discussed in class.\nfrom production import IF, AND, OR, NOT, THEN, forward_chain\n\n## Section 1: Forward chaining ##\n\n# Problem 1.2: Multiple choice\n\n# Which part of a rule may change the data?\n# 1. the antecedent\n# 2. the consequent\n# 3. both\n\nANSWER_1 = 'your answer here'\n\n# A rule-based system about Monty Python's \"Dead Parrot\" sketch\n# uses the following rules:\n#\n# rule1 = IF( AND( '(?x) is a Norwegian Blue parrot',\n# '(?x) is motionless' ),\n# THEN( '(?x) is not dead' ) )\n#\n# rule2 = IF( NOT( '(?x) is dead' ),\n# THEN( '(?x) is pining for the fjords' ) )\n#\n# and the following initial data:\n#\n# ( 'Polly is a Norwegian Blue parrot',\n# 'Polly is motionless' )\n#\n\n# Will this system produce the datum 'Polly is pining for the\n# fjords'? Answer 'yes' or 'no'.\nANSWER_2 = 'your answer here'\n\n# Which rule contains a programming error? Answer '1' or '2'.\nANSWER_3 = 'your answer here'\n\n# If you're uncertain of these answers, look in tests.py for an\n# explanation.\n\n\n# In a completely different scenario, suppose we have the\n# following rules list:\n#\n# ( IF( AND( '(?x) has feathers', # rule 1\n# '(?x) has a beak' ),\n# THEN( '(?x) is a bird' ),\n# IF( AND( '(?y) is a bird', # rule 2\n# '(?y) cannot fly',\n# '(?y) can swim' ),\n# THEN( '(?y) is a penguin' ) ) )\n#\n# and the following list of initial data:\n#\n# ( 'Pendergast is a penguin',\n# 'Pendergast has feathers',\n# 'Pendergast has a beak',\n# 'Pendergast cannot fly',\n# 'Pendergast can swim' )\n#\n# In the following questions, answer '0' if neither rule does\n# what is asked. After we start the system running, which rule\n# fires first?\n\nANSWER_4 = 'your answer here'\n\n# Which rule fires second?\n\nANSWER_5 = 'your answer here'\n\n\n# Problem 1.3.1: Poker hands\n\n# You're given this data about poker hands:\npoker_data = ( 'two-pair beats pair',\n 'three-of-a-kind beats two-pair',\n 'straight beats three-of-a-kind',\n 'flush beats straight',\n 'full-house beats flush',\n 'straight-flush beats full-house' )\n\n# Fill in this rule so that it finds all other combinations of\n# which poker hands beat which, transitively. For example, it\n# should be able to deduce that a three-of-a-kind beats a pair,\n# because a three-of-a-kind beats two-pair, which beats a pair.\ntransitive_rule = IF( AND('(?x) beats (?y)','(?y) beats (?z)'), \n THEN('(?x) beats (?z)') )\n\n# You can test your rule like this:\n# print forward_chain([transitive_rule], poker_data)\n\n# Here's some other data sets for the rule. The tester uses\n# these, so don't change them.\nTEST_RESULTS_TRANS1 = forward_chain([transitive_rule],\n [ 'a beats b', 'b beats c' ])\nTEST_RESULTS_TRANS2 = forward_chain([transitive_rule],\n [ 'rock beats scissors', \n 'scissors beats paper', \n 'paper beats rock' ])\n\n\n# Problem 1.3.2: Family relations\n\n# First, define all your rules here individually. That is, give\n# them names by assigning them to variables. 
This way, you'll be\n# able to refer to the rules by name and easily rearrange them if\n# you need to.\n\n# Then, put them together into a list in order, and call it\n# family_rules.\nfamily_rules = [ ] # fill me in\n\n# Some examples to try it on:\n# Note: These are used for testing, so DO NOT CHANGE\nsimpsons_data = (\"male bart\",\n \"female lisa\",\n \"female maggie\",\n \"female marge\",\n \"male homer\",\n \"male abe\",\n \"parent marge bart\",\n \"parent marge lisa\",\n \"parent marge maggie\",\n \"parent homer bart\",\n \"parent homer lisa\",\n \"parent homer maggie\",\n \"parent abe homer\")\nTEST_RESULTS_6 = forward_chain(family_rules,\n simpsons_data,verbose=False)\n# You can test your results by uncommenting this line:\n# print forward_chain(family_rules, simpsons_data, verbose=True)\n\nblack_data = (\"male sirius\",\n \"male regulus\",\n \"female walburga\",\n \"male alphard\",\n \"male cygnus\",\n \"male pollux\",\n \"female bellatrix\",\n \"female andromeda\",\n \"female narcissa\",\n \"female nymphadora\",\n \"male draco\",\n \"parent walburga sirius\",\n \"parent walburga regulus\",\n \"parent pollux walburga\",\n \"parent pollux alphard\",\n \"parent pollux cygnus\",\n \"parent cygnus bellatrix\",\n \"parent cygnus andromeda\",\n \"parent cygnus narcissa\",\n \"parent andromeda nymphadora\",\n \"parent narcissa draco\")\n\n# This should generate 14 cousin relationships, representing\n# 7 pairs of people who are cousins:\n\nblack_family_cousins = [ \n x for x in \n forward_chain(family_rules, black_data, verbose=False) \n if \"cousin\" in x ]\n\n# To see if you found them all, uncomment this line:\n# print black_family_cousins\n\n# To debug what happened in your rules, you can set verbose=True\n# in the function call above.\n\n# Some other data sets to try it on. The tester uses these\n# results, so don't comment them out.\n\nTEST_DATA_1 = [ 'female alice',\n 'male bob',\n 'male chuck',\n 'parent chuck alice',\n 'parent chuck bob' ]\nTEST_RESULTS_1 = forward_chain(family_rules, \n TEST_DATA_1, verbose=False)\n\nTEST_DATA_2 = [ 'female a1', 'female b1', 'female b2', \n 'female c1', 'female c2', 'female c3', \n 'female c4', 'female d1', 'female d2', \n 'female d3', 'female d4',\n 'parent a1 b1',\n 'parent a1 b2',\n 'parent b1 c1',\n 'parent b1 c2',\n 'parent b2 c3',\n 'parent b2 c4',\n 'parent c1 d1',\n 'parent c2 d2',\n 'parent c3 d3',\n 'parent c4 d4' ]\n\nTEST_RESULTS_2 = forward_chain(family_rules, \n TEST_DATA_2, verbose=False)\n\nTEST_RESULTS_6 = forward_chain(family_rules,\n simpsons_data,verbose=False)\n\n## Section 2: Goal trees and backward chaining ##\n\n# Problem 2 is found in backchain.py.\n\nfrom backchain import backchain_to_goal_tree\n\n## Section 3: Survey ##\n# Please answer these questions inside the double quotes.\n\nHOW_MANY_HOURS_THIS_PSET_TOOK = ''\nWHAT_I_FOUND_INTERESTING = ''\nWHAT_I_FOUND_BORING = ''", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code" ] ]
d018711bd4c2560b79da2f81dce4f710f2b024d2
73,070
ipynb
Jupyter Notebook
verde-examples/lodging.ipynb
markbneal/api-examples
5749bea1ef5b1bde6b1d8e161a6c72b3844ebbfe
[ "MIT" ]
null
null
null
verde-examples/lodging.ipynb
markbneal/api-examples
5749bea1ef5b1bde6b1d8e161a6c72b3844ebbfe
[ "MIT" ]
null
null
null
verde-examples/lodging.ipynb
markbneal/api-examples
5749bea1ef5b1bde6b1d8e161a6c72b3844ebbfe
[ "MIT" ]
null
null
null
73.511066
20,788
0.761612
[ [ [ "# AHDB wheat lodging risk and recommendations\nThis example notebook was inspired by the [AHDB lodging practical guidelines](https://ahdb.org.uk/knowledge-library/lodging): we evaluate the lodging risk for a field and output practical recommendations. We then adjust the estimated risk according to the Leaf Area Index (LAI) and Green Cover Fraction (GCF) obtained using the Agrimetrics GraphQL API.\n\n## AHDB lodging resistance score\nAHDB's guidelines show how a lodging resistance score can be calculated based on:\n- the crop variety's natural resistance to lodging without Plant Growth Regulators (PGR)\n- the soil Nitrogen Suply (SNS) index, a higher supply increases lodging risk\n- the sowing date, an earlier sowing increases lodging risk\n- the sowing density, higher plant density increases lodging risk\n\nThe overall lodging resistance score is the sum of the individual scores. AHDB practical advice on reducing the risk of lodging is given for 4 resistance score categories:\n\n| Lodging resistance category | Lodging risk |\n|---|---|\n| below 5 | very high |\n| 5-6.8 | high |\n| 7-8.8 | medium |\n| 9-10 | low |\n| over 10 | very low |\n", "_____no_output_____" ], [ "[Table image](img/lodging/ahdb_risk_categories.png)", "_____no_output_____" ] ], [ [ "# Input AHDB factors for evaluating lodging risks\ndef sns_index_score(sns_index):\n return 3 - 6 * sns_index / 4\n\n# Sowing dates and associated lodging resistance score\nsowing_date_scores = {'Mid Sept': -2, 'End Sept': -1, 'Mid Oct': 0, 'End Oct': 1, 'Nov onwards': 2}\n\n# Density ranges and associated lodging resistance score\nsowing_density_scores = {'<150': 1.5, '200-150': +0.75, '300-200': 0, '400-300': -1, '>400': -1.75}\n\n# AHDB resistance score categories\ndef score_category(score):\n if score < 5:\n return 'below 5'\n if score < 7:\n return '5-6.8'\n if score < 9:\n return '7-8.8' \n if score < 10:\n return '9-10'\n return 'over 10'\n\n# Combine individual factor scores\ndef lodging_resistance_category(resistance_score, sns_index, sowing_date, sowing_density):\n score = resistance_score + sns_index_score(sns_index) + sowing_date_scores[sowing_date] + sowing_density_scores[sowing_density]\n return score_category(score)", "_____no_output_____" ] ], [ [ "## AHDB practical advice\nAHDB provides practical advice for managing the risk of stem and root lodging. This advice depends on the resistance score calculated specifically for a field. AHDB recommends fertilizer and PGR actions for managing stem lodging risk. 
For root lodging, AHDB also advises whether the crop needs to be rolled (before the crop has reached stage \"GS30\").", "_____no_output_____" ] ], [ [ "# Nitrogen fertiliser advice for stem risk\nstem_risk_N_advice = {\n 'below 5': 'Delay & reduce N',\n '5-6.8': 'Delay & reduce N',\n '7-8.8': 'Delay N',\n}\n\n# PGR advice for stem risk\nstem_risk_PGR_advice = {\n 'below 5': 'Full PGR',\n '5-6.8': 'Full PGR',\n '7-8.8': 'Single PGR',\n '9-10': 'PGR if high yield forecast'\n}\n\n# Nitrogen fertiliser advice for root risk\nroot_risk_N_advice = {\n 'below 5': 'Reduce N',\n '5-6.8': 'Reduce N',\n}\n\n# PGR advice for root risk\nroot_risk_PGR_advice = {\n 'below 5': 'Full PGR',\n '5-6.8': 'Full PGR',\n '7-8.8': 'Single PGR',\n '9-10': 'PGR if high yield forecast'\n}\n\n# Spring rolling advice for root risk\nroot_risk_Roll_advice = {\n 'below 5': 'Roll',\n '5-6.8': 'Roll',\n '7-8.8': 'Roll',\n}", "_____no_output_____" ] ], [ [ "## AHDB standard lodging risk management recommendations\nUsing the definitions above, we can calculate the AHDB recommendation according to individual factors:", "_____no_output_____" ] ], [ [ "import pandas as pd\nfrom ipywidgets import widgets\nfrom pandas.plotting import register_matplotlib_converters\nregister_matplotlib_converters()\n\nstyle = {'description_width': 'initial'}\n\ndef ahdb_lodging_recommendation(resistance_score, sns_index, sowing_date, sowing_density):\n category = lodging_resistance_category(resistance_score, sns_index, sowing_date, sowing_density)\n return pd.DataFrame(index=['Fertiliser nitrogen', 'Plant growth regulators', 'Spring rolling'], data={\n 'Stem lodging': [stem_risk_N_advice.get(category, ''), stem_risk_PGR_advice.get(category, ''), '' ],\n 'Root lodging': [root_risk_N_advice.get(category, ''), root_risk_PGR_advice.get(category, ''), root_risk_Roll_advice.get(category, '')]\n })\n\nwidgets.interact(ahdb_lodging_recommendation,\n resistance_score = widgets.IntSlider(description='Resistance score without PGR', min=1, max=9, style=style),\n sns_index = widgets.IntSlider(description='SNS index', min=0, max=4, style=style),\n sowing_date = widgets.SelectionSlider(description='Sowing date', options=sowing_date_scores.keys(), style=style),\n sowing_density = widgets.SelectionSlider(description='Sowing density', options=sowing_density_scores.keys(), style=style),\n)", "_____no_output_____" ] ], [ [ "[Widget image](img/lodging/recommendations_slider.png)", "_____no_output_____" ], [ "## Adjusting recommendations based on remote sensing information\nThe same practical guidelines from AHDB explain that crop conditions in spring can indicate future lodging risk. In particular, a Green Area Index (GAI) greater than 2 or a Ground Cover Fraction (GCF) above 60% is indicative of increased stem lodging risk. To adjust our practical advice, we will retrieve LAI and GCF from the Agrimetrics GraphQL API.\n\n### Using Agrimetrics GraphQL API\nAn Agrimetrics API key must be provided with each GraphQL API request, in the custom request header `Ocp-Apim-Subscription-Key`. For more information about how to obtain and use an Agrimetrics API key, please consult the [Developer portal](https://developer.agrimetrics.co.uk). 
To get started with GraphQL, see the [Agrimetrics Graph Explorer](https://app.agrimetrics.co.uk/#/graph-explorer) tool.", "_____no_output_____" ] ], [ [ "import os\nimport requests\n\nGRAPHQL_ENDPOINT = \"https://api.agrimetrics.co.uk/graphql/v1/\"\n\nif \"API_KEY\" in os.environ:\n API_KEY = os.environ[\"API_KEY\"]\nelse:\n API_KEY = input(\"Query API Subscription Key: \").strip()", "_____no_output_____" ] ], [ [ "We will also need a short function to help catch and report errors from making GraphQL queries.", "_____no_output_____" ] ], [ [ "def check_results(result):\n if result.status_code != 200:\n raise Exception(f\"Request failed with code {result.status_code}.\\n{result.text}\")\n errors = result.json().get(\"errors\", [])\n if errors:\n for err in errors:\n print(f\"{err['message']}:\")\n print( \" at\", \" and \".join([f\"line {loc['line']}, col {loc['column']}\" for loc in err['locations']]))\n print( \" path\", \".\".join(err['path']))\n print(f\" {err['extensions']}\")\n raise Exception(f\"GraphQL reported {len(errors)} errors\")", "_____no_output_____" ] ], [ [ "A GraphQL query is posted to the GraphQL endpoint in a JSON body. With our first query, we retrieve the Agrimetrics field id at a given location.", "_____no_output_____" ] ], [ [ "graphql_url = 'https://api.agrimetrics.co.uk/graphql'\nheaders = {\n 'Ocp-Apim-Subscription-Key': API_KEY,\n 'Content-Type': \"application/json\",\n 'Accept-Encoding': \"gzip, deflate, br\",\n}\n\ncentroid = (-0.929365345, 51.408374978)\nresponse = requests.post(graphql_url, headers=headers, json={\n 'query': '''\n query getFieldAtLocation($centroid: CoordinateScalar!) {\n fields(geoFilter: {location: {type: Point, coordinates: $centroid}, distance: {LE: 10}}) {\n id\n }\n }\n ''',\n 'variables': {\n 'centroid': centroid\n }\n})\ncheck_results(response)\nfield_id = response.json()['data']['fields'][0]['id']\nprint('Agrimetrics field id:', field_id)", "Agrimetrics field id: https://data.agrimetrics.co.uk/fields/BZwCrEVaXO62NTX_Jfl1yw\n" ] ], [ [ "The GraphQL API supports filtering by object ids. Here, we retrieve the sown crop information associated with the field id obtained in our first query.", "_____no_output_____" ] ], [ [ "# Verify field was a wheat crop in 2018\nresponse = requests.post(graphql_url, headers=headers, json={\n 'query': '''\n query getSownCrop($fieldId: [ID!]!) {\n fields(where: {id: {EQ: $fieldId}}) {\n sownCrop {\n cropType\n harvestYear\n }\n }\n }\n ''',\n 'variables': {\n 'fieldId': field_id\n }\n})\ncheck_results(response)\nprint(response.json()['data']['fields'][0]['sownCrop'])", "[{'cropType': 'WHEAT', 'harvestYear': 2016}, {'cropType': 'MAIZE', 'harvestYear': 2017}, {'cropType': 'WHEAT', 'harvestYear': 2018}]\n" ] ], [ [ "It is necessary to register to access Verde crop observations on our field of interest. LAI is a crop-specific attribute, so it is necessary to provide `cropType` when registering.", "_____no_output_____" ] ], [ [ "# Register for CROP_SPECIFIC verde data on our field\nresponse = requests.post(graphql_url, headers=headers, json={\n 'query': '''\n mutation registerCropObservations($fieldId: ID!) 
{\n account {\n premiumData {\n addCropObservationRegistrations(registrations: {fieldId: $fieldId, layerType: CROP_SPECIFIC, cropType: WHEAT, season: SEP2017TOSEP2018}) {\n id\n }\n }\n }\n }\n ''',\n 'variables': {'fieldId': field_id}\n})\ncheck_results(response)", "_____no_output_____" ] ], [ [ "GCF is not crop-specific, so we also need to register to access non-crop-specific attributes.", "_____no_output_____" ] ], [ [ "# Register for NON_CROP_SPECIFIC verde data on our field\nresponse = requests.post(graphql_url, headers=headers, json={\n 'query': '''\n mutation registerCropObservations($fieldId: ID!) {\n account {\n premiumData {\n addCropObservationRegistrations(registrations: {fieldId: $fieldId, layerType: NON_CROP_SPECIFIC, season: SEP2017TOSEP2018}) {\n id \n }\n }\n }\n }\n ''',\n 'variables': {'fieldId': field_id}\n})\ncheck_results(response)", "_____no_output_____" ] ], [ [ "Once Verde data for this field is available, we can easily retrieve it, for instance:", "_____no_output_____" ] ], [ [ "response = requests.post(graphql_url, headers=headers, json={\n 'query': '''\n query getCropObservations($fieldId: [ID!]!) {\n fields(where: {id: {EQ: $fieldId}}) {\n cropObservations {\n leafAreaIndex { dateTime mean }\n }\n }\n }\n ''',\n 'variables': {'fieldId': field_id}\n})\ncheck_results(response)", "_____no_output_____" ] ], [ [ "The data can be loaded as a pandas DataFrame:", "_____no_output_____" ] ], [ [ "results = response.json()\nleafAreaIndex = pd.io.json.json_normalize(\n results['data']['fields'],\n record_path=['cropObservations', 'leafAreaIndex'],\n)\nleafAreaIndex['date_time'] = pd.to_datetime(leafAreaIndex['dateTime'])\nleafAreaIndex['value'] = leafAreaIndex['mean']\nleafAreaIndex = leafAreaIndex[['date_time', 'value']]\nleafAreaIndex.head()", "_____no_output_____" ] ], [ [ "[Table image](img/lodging/lai_for_field.png)", "_____no_output_____" ], [ "We proceed with a second, similar query to obtain the green vegetation cover fraction:", "_____no_output_____" ] ], [ [ "response = requests.post(graphql_url, headers=headers, json={\n 'query': '''\n query getCropObservations($fieldId: [ID!]!) {\n fields(where: {id: {EQ: $fieldId}}) {\n cropObservations {\n greenVegetationCoverFraction { dateTime mean }\n }\n }\n }\n ''',\n 'variables': {'fieldId': field_id}\n})\ncheck_results(response)\nresults = response.json()\ngreenCoverFraction = pd.io.json.json_normalize(\n results['data']['fields'],\n record_path=['cropObservations', 'greenVegetationCoverFraction'],\n)\ngreenCoverFraction['date_time'] = pd.to_datetime(greenCoverFraction['dateTime'])\ngreenCoverFraction['value'] = greenCoverFraction['mean']\ngreenCoverFraction = greenCoverFraction[['date_time', 'value']]", "_____no_output_____" ] ], [ [ "A year of observations was retrieved:", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\nplt.plot(leafAreaIndex['date_time'], leafAreaIndex['value'], label='LAI')\nplt.plot(greenCoverFraction['date_time'], greenCoverFraction['value'], label='GCF')\nplt.legend()\nplt.show()", "_____no_output_____" ] ], [ [ "[Graph image](img/lodging/lai_gfc.png)", "_____no_output_____" ], [ "## Adjusting the recommendation\nGS31 marks the beginning of stem elongation and generally occurs around mid-April. 
Let's filter our LAI and GCF around this time of year:", "_____no_output_____" ] ], [ [ "from datetime import datetime, timezone\nfrom_date = datetime(2018, 4, 7, tzinfo=timezone.utc)\nto_date = datetime(2018, 4, 21, tzinfo=timezone.utc)\nleafAreaIndex_mid_april = leafAreaIndex[(leafAreaIndex['date_time'] > from_date) & (leafAreaIndex['date_time'] < to_date)]\ngreenCoverFraction_mid_april = greenCoverFraction[(greenCoverFraction['date_time'] > from_date) & (greenCoverFraction['date_time'] < to_date)]", "_____no_output_____" ] ], [ [ "Check whether either LAI or GCF is above its threshold:", "_____no_output_____" ] ], [ [ "(leafAreaIndex_mid_april['value'] > 2).any() | (greenCoverFraction_mid_april['value'] > 0.6).any()", "_____no_output_____" ] ], [ [ "Our field has an LAI below 2 in the two weeks around mid-April and no GCF reading close enough to be taken into account. But we now have the basis for adjusting our recommendation using Agrimetrics Verde crop observations. Let's broaden our evaluation to nearby Agrimetrics fields with a wheat crop in 2018.", "_____no_output_____" ] ], [ [ "response = requests.post(graphql_url, headers=headers, json={\n 'query': '''\n query getFieldsWithinRadius($centroid: CoordinateScalar!, $distance: Float!) {\n fields(geoFilter: {location: {type: Point, coordinates: $centroid}, distance: {LE: $distance}}) {\n id\n sownCrop {\n cropType\n harvestYear\n }\n }\n }\n ''',\n 'variables': { 'centroid': centroid, 'distance': 2000 } # distance in m\n})\ncheck_results(response)\nresults = response.json()\nnearby_fields = pd.io.json.json_normalize(\n results['data']['fields'],\n record_path=['sownCrop'],\n meta=['id'],\n)\nnearby_wheat_fields = nearby_fields[(nearby_fields['cropType'] == 'WHEAT') \n & (nearby_fields['harvestYear'] == 2018)]\navailable_fields = nearby_wheat_fields['id']\navailable_fields.head()", "_____no_output_____" ] ], [ [ "Using the same approach as above, we implement the retrieval of Verde LAI and GCF for the selected fields:", "_____no_output_____" ] ], [ [ "def register(field_id):\n # Register for CROP_SPECIFIC verde data on our field\n response = requests.post(graphql_url, headers=headers, json={\n 'query': '''\n mutation registerCropObservations($fieldId: ID!) {\n account {\n premiumData {\n addCropObservationRegistrations(registrations: {\n fieldId: $fieldId, layerType: CROP_SPECIFIC, season: SEP2017TOSEP2018, cropType: WHEAT\n }) {\n id\n }\n }\n }\n }\n ''',\n 'variables': {'fieldId': field_id}\n })\n check_results(response)\n # Register for NON_CROP_SPECIFIC verde data on our field\n response = requests.post(graphql_url, headers=headers, json={\n 'query': '''\n mutation registerCropObservations($fieldId: ID!) {\n account {\n premiumData {\n addCropObservationRegistrations(registrations: {\n fieldId: $fieldId, layerType: NON_CROP_SPECIFIC, season: SEP2017TOSEP2018\n }) {\n id\n }\n }\n }\n }\n ''',\n 'variables': {'fieldId': field_id}\n })\n check_results(response)\n\ndef crop_observations(field_id, attribute):\n response = requests.post(graphql_url, headers=headers, json={\n 'query': '''\n query getCropObservations($fieldId: [ID!]!) 
{{\n fields(where: {{id: {{EQ: $fieldId}}}}) {{\n cropObservations {{\n {attribute} {{ mean dateTime }}\n }}\n }}\n }}\n '''.format(attribute=attribute),\n 'variables': {'fieldId': field_id}\n })\n check_results(response)\n results = response.json()\n data = pd.io.json.json_normalize(\n results['data']['fields'],\n record_path=['cropObservations', attribute],\n )\n data['date_time'] = pd.to_datetime(data['dateTime'])\n data['value'] = data['mean']\n return data[['date_time', 'value']]\n\ndef has_high_LAI(field_id, leafAreaIndex):\n if not leafAreaIndex.empty:\n leafAreaIndex_mid_april = leafAreaIndex[(leafAreaIndex['date_time'] > from_date) & (leafAreaIndex['date_time'] < to_date)]\n return (leafAreaIndex_mid_april['value'] > 2).any()\n return False\n\ndef has_high_GCF(field_id, greenCoverFraction):\n if not greenCoverFraction.empty:\n greenCoverFraction_mid_april = greenCoverFraction[(greenCoverFraction['date_time'] > from_date) & (greenCoverFraction['date_time'] < to_date)]\n return (greenCoverFraction_mid_april['value'] > 0.6).any()\n return False\n", "_____no_output_____" ] ], [ [ "We then revisit the recommendation algorithm:", "_____no_output_____" ] ], [ [ "def adjusted_lodging_recommendation(field_id, resistance_score, sns_index, sowing_date, sowing_density):\n register(field_id)\n leafAreaIndex = crop_observations(field_id, 'leafAreaIndex')\n greenCoverFraction = crop_observations(field_id, 'greenVegetationCoverFraction')\n \n high_LAI = has_high_LAI(field_id, leafAreaIndex)\n high_GCF = has_high_GCF(field_id, greenCoverFraction)\n \n plt.plot(leafAreaIndex['date_time'], leafAreaIndex['value'], label='LAI')\n plt.plot(greenCoverFraction['date_time'], greenCoverFraction['value'], label='GCF')\n plt.legend()\n plt.show()\n \n if high_LAI and high_GCF:\n print('High LAI and GCF were observed around GS31 for this crop, please consider adjusting the recommendation')\n elif high_LAI:\n print('High LAI was observed around GS31 for this crop, please consider adjusting the recommendation')\n elif high_GCF:\n print('High GCF was observed around GS31 for this crop, please consider adjusting the recommendation')\n else:\n print('High LAI and GCF were not observed around GS31 for this crop')\n\n return ahdb_lodging_recommendation(resistance_score, sns_index, sowing_date, sowing_density)\n\nwidgets.interact(adjusted_lodging_recommendation,\n field_id=widgets.Dropdown(description='Agrimetrics field id', options=available_fields, style=style), \n resistance_score=widgets.IntSlider(description='Resistance score without PGR', min=1, max=9, style=style),\n sns_index=widgets.IntSlider(description='SNS index', min=0, max=4, style=style),\n sowing_date=widgets.SelectionSlider(description='Sowing date', options=sowing_date_scores.keys(), style=style),\n sowing_density=widgets.SelectionSlider(description='Sowing density', options=sowing_density_scores.keys(), style=style),\n)", "_____no_output_____" ] ], [ [ "[Widget image: Low LAI](img/lodging/output_1.png) [Widget image: High LAI](img/lodging/output_2_high_lai.png)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
d0187261e02c03e36805813e510b825a77c94ac2
60,441
ipynb
Jupyter Notebook
self/pandas_basic_2.ipynb
Karmantez/Tensorflow_Practice
fa4ced813a494e93ab58aa1c04aec10c6ca740ae
[ "MIT" ]
null
null
null
self/pandas_basic_2.ipynb
Karmantez/Tensorflow_Practice
fa4ced813a494e93ab58aa1c04aec10c6ca740ae
[ "MIT" ]
null
null
null
self/pandas_basic_2.ipynb
Karmantez/Tensorflow_Practice
fa4ced813a494e93ab58aa1c04aec10c6ca740ae
[ "MIT" ]
null
null
null
45.858118
2,031
0.485482
[ [ [ "import numpy as np\nimport pandas as pd", "_____no_output_____" ], [ "titanic_df = pd.read_csv('titanic_train.csv')\nprint('단일 컬럼 데이터 추출:\\n', titanic_df['Pclass'].head(3))\nprint('\\n여러 컬럼들의 데이터 추출:\\n', titanic_df[['Survived', 'Pclass']].head(3))\n\n# 아래처럼 코딩하는건 좋지 않다.\n# 차라리 Boolean Indexing으로 사용하는게 좋다.\nprint('[ ] 안에 숫자 index는 KeyError 오류 발생:\\n', titanic_df[0])", "단일 컬럼 데이터 추출:\n 0 3\n1 1\n2 3\nName: Pclass, dtype: int64\n\n여러 컬럼들의 데이터 추출:\n Survived Pclass\n0 0 3\n1 1 1\n2 1 3\n" ], [ "data = {'Name': ['Chulmin', 'Eunkyung','Jinwoong','Soobeom'],\n 'Year': [2011, 2016, 2015, 2015],\n 'Gender': ['Male', 'Female', 'Male', 'Male']\n }\ndata_df = pd.DataFrame(data, index=['one','two','three','four'])\ndata_df", "_____no_output_____" ], [ "print(\"\\n iloc[0]\", data_df.iloc[0])\nprint(\"\\n loc['one']\", data_df.loc['one'])", "\n iloc[0] Name Chulmin\nYear 2011\nGender Male\nName: one, dtype: object\n\n loc['one'] Name Chulmin\nYear 2011\nGender Male\nName: one, dtype: object\n" ], [ "# data_df 를 reset_index() 로 새로운 숫자형 인덱스를 생성\ndata_df_reset = data_df.reset_index()\ndata_df_reset = data_df_reset.rename(columns={'index':'old_index'})\n\n# index 값에 1을 더해서 1부터 시작하는 새로운 index값 생성\ndata_df_reset.index = data_df_reset.index+1\ndata_df_reset", "_____no_output_____" ] ], [ [ "### iloc (위치기반)", "_____no_output_____" ] ], [ [ "data_df.head()", "_____no_output_____" ], [ "data_df.iloc[0, 0]", "_____no_output_____" ], [ "# 아래 코드는 오류를 발생시킴\ndata_df.iloc['Name', 0]", "_____no_output_____" ], [ "data_df.reset_index()", "_____no_output_____" ] ], [ [ "### loc (명칭기반)", "_____no_output_____" ] ], [ [ "data_df", "_____no_output_____" ], [ "data_df.loc['one', 'Name']", "_____no_output_____" ], [ "data_df_reset.loc[1, 'Name']", "_____no_output_____" ], [ "data_df_reset.loc[0, 'Name']", "_____no_output_____" ] ], [ [ "### 불린 인덱싱(Boolean Indexing)", "_____no_output_____" ] ], [ [ "titanic_df = pd.read_csv('titanic_train.csv')", "_____no_output_____" ], [ "titanic_boolean = titanic_df[titanic_df['Age'] > 60]", "_____no_output_____" ], [ "titanic_boolean", "_____no_output_____" ], [ "var1 = titanic_df['Age'] > 60\nprint('결과:\\n', var1)\nprint(type(var1))", "결과:\n 0 False\n1 False\n2 False\n3 False\n4 False\n ... \n886 False\n887 False\n888 False\n889 False\n890 False\nName: Age, Length: 891, dtype: bool\n<class 'pandas.core.series.Series'>\n" ], [ "titanic_df[titanic_df['Age'] > 60][['Name', 'Age']].head(3)", "_____no_output_____" ], [ "titanic_df[['Name', 'Age']][titanic_df['Age'] > 60].head(3)", "_____no_output_____" ], [ "titanic_df['Age_cat'] = titanic_df['Age'].apply(lambda x : 'Child' if x<=15 else ('Adult' if x <= 60 else \n 'Elderly'))\ntitanic_df['Age_cat'].value_counts()", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ] ]
d018726c3da8762d18a6ea0627ed2ef78654bf48
166,178
ipynb
Jupyter Notebook
model construction.ipynb
chao05/Predicting-the-Presence-of-Breast-Cancer
a990bac4bb27bfad7688d772f682bc8d54694c42
[ "MIT" ]
null
null
null
model construction.ipynb
chao05/Predicting-the-Presence-of-Breast-Cancer
a990bac4bb27bfad7688d772f682bc8d54694c42
[ "MIT" ]
null
null
null
model construction.ipynb
chao05/Predicting-the-Presence-of-Breast-Cancer
a990bac4bb27bfad7688d772f682bc8d54694c42
[ "MIT" ]
null
null
null
199.254197
91,280
0.877553
[ [ [ "import pandas as pd\nimport numpy as np\nfrom scipy.io import arff\nfrom scipy.stats import iqr\n\nimport os\nimport math\n\nimport matplotlib.pyplot as plt\nimport matplotlib.colors as mcolors\nimport seaborn as sns\n\nimport datetime\nimport calendar\n\nfrom numpy import mean\nfrom numpy import std\n\nfrom sklearn.preprocessing import normalize\nfrom sklearn.preprocessing import scale\nfrom sklearn.feature_selection import f_regression\nfrom sklearn.feature_selection import f_classif\nfrom sklearn.feature_selection import mutual_info_classif\nfrom sklearn.feature_selection import mutual_info_regression\nfrom sklearn.feature_selection import RFE\nfrom sklearn.discriminant_analysis import LinearDiscriminantAnalysis\nfrom sklearn.decomposition import PCA\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.svm import SVC\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.model_selection import learning_curve\n\nimport joblib", "_____no_output_____" ], [ "cancer = pd.read_csv('dataR2.csv')\nprint(cancer.shape)\ncancer.head(2)", "(116, 10)\n" ], [ "def print_unique(df):\n for col in df.columns:\n print(col, '\\n', df[col].sort_values().unique(), '\\n')\n \nprint_unique(cancer)", "Age \n [24 25 28 29 32 34 35 36 38 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54\n 55 57 58 59 60 61 62 64 65 66 67 68 69 71 72 73 74 75 76 77 78 81 82 83\n 85 86 89] \n\nBMI \n [18.37 18.67 19.13265306 19.56 20.26 20.69049454\n 20.76 20.82999519 20.83 20.9566075 21.08281329 21.11111111\n 21.30394858 21.35991456 21.36752137 21.47 21.51385851 22.\n 22.03 22.21 22.22222222 22.4996371 22.65625 22.7\n 22.83287935 22.85445769 22.86 22.89281998 23. 
23.01\n 23.12467037 23.14049587 23.34 23.5 23.62 23.8\n 24.21875 24.24242424 24.74 25.3 25.51020408 25.59\n 25.7 25.9 26.34929208 26.5625 26.6 26.66666667\n 26.6727633 26.84 26.85 27.1 27.18 27.2\n 27.3 27.63605442 27.68877813 27.7 27.88761707 27.91551882\n 28.125 28.44444444 28.57667585 28.65013774 28.67262608 29.13631634\n 29.15451895 29.2184076 29.296875 29.38475666 29.4 29.60676726\n 29.666548 29.77777778 30.27681661 30.3 30.48 30.48315806\n 30.8012487 30.83653053 30.91557669 31.21748179 31.23140988 31.2385898\n 31.25 31.44654088 31.64036818 31.97501487 32.03895937 32.05\n 32.27078777 32.46191136 32.5 33.18 34.17489 34.42217362\n 34.5297228 34.83814777 35.09270153 35.2507611 35.56 35.58792924\n 35.85581466 36.05 36.21227888 36.51263743 36.7901662 37.03560819\n 37.109375 38.57875854] \n\nGlucose \n [ 60 70 74 75 76 77 78 79 80 82 83 84 85 86 87 88 89 90\n 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 108 110\n 112 114 116 118 119 128 131 134 138 139 152 196 199 201] \n\nInsulin \n [ 2.432 2.54 2.64 2.707 2.74 2.82 2.869 2.999 3.012 3.115\n 3.188 3.226 3.33 3.35 3.42 3.44 3.469 3.482 3.508 3.549\n 3.73 3.855 3.881 4.09 4.172 4.181 4.328 4.345 4.364 4.376\n 4.42 4.427 4.462 4.498 4.53 4.56 4.58 4.69 4.713 4.902\n 4.952 5.138 5.197 5.261 5.376 5.43 5.537 5.636 5.646 5.663\n 5.7 5.73 5.75 5.782 5.81 5.819 6.03 6.042 6.107 6.2\n 6.47 6.524 6.59 6.683 6.703 6.76 6.817 6.862 7.01 7.553\n 8.079 8.15 8.34 8.396 8.576 8.808 9.208 9.245 9.669 10.175\n 10.395 10.491 10.555 10.704 10.949 11.91 12.162 12.305 12.548 13.852\n 14.026 14.07 14.649 15.533 15.89 16.582 16.635 18.077 18.2 19.91\n 21.699 22.033 23.194 24.887 26.211 28.677 30.13 30.212 36.94 41.611\n 41.894 51.814 58.46 ] \n\nHOMA \n [ 0.46740867 0.507936 0.519184 0.56388 0.570392 0.59\n 0.60550747 0.61272493 0.61789013 0.6538048 0.6674356 0.6879706\n 0.69614267 0.70689733 0.72755813 0.73208693 0.732193 0.742368\n 0.755688 0.78065067 0.79018187 0.79125733 0.80154333 0.8053864\n 0.82727067 0.832352 0.84567693 0.89078733 0.9067072 0.92171933\n 0.96027333 0.972138 1.0011016 1.00851147 1.00965107 1.01383947\n 1.03739367 1.046286 1.0566016 1.06967 1.08963767 1.09960053\n 1.1006464 1.1174 1.13392913 1.14478 1.14543613 1.203832\n 1.229214 1.23282767 1.245642 1.30042667 1.30486667 1.30539453\n 1.33 1.341324 1.370998 1.37788 1.38399733 1.4026256\n 1.4066068 1.43223547 1.44970933 1.513374 1.55992 1.56177\n 1.6 1.65877413 1.75261107 1.8404096 1.84629013 1.86288587\n 1.8732508 1.88320133 2.05239 2.098344 2.24162527 2.2485936\n 2.3464512 2.3498848 2.38502 2.5101466 2.53493167 2.62828267\n 2.62960233 2.63353667 2.85311933 2.871792 2.94041467 3.0099796\n 3.071407 3.262364 3.4851632 3.495982 3.775036 3.79014433\n 3.86978807 4.45899333 4.468268 4.66890667 4.9242264 5.09185613\n 5.27176247 5.68541507 5.9699204 6.4834952 6.777364 7.0029234\n 7.111918 7.83620533 8.22598307 9.73600733 13.22733227 15.28534133\n 20.6307338 25.05034187] \n\nLeptin \n [ 4.311 4.47 6.3339 6.633 6.6994 6.8317 6.964 7.6476 7.65\n 7.7529 7.85 8.0163 8.438 8.6874 8.8071 8.8438 8.88 9.62\n 9.6994 9.8 9.8648 9.8827 10.16 10.2809 10.39 11.0816 11.2406\n 12.1905 12.2617 12.331 12.45 12.6757 12.87 13.08 13.74 14.09\n 14.3224 14.57 14.7485 14.9037 14.9084 15.1248 15.145 15.26 15.5325\n 16.2247 16.7353 17.022 17.127 17.87 17.9393 17.9973 18.1314 18.16\n 18.69 19.0653 19.0826 20.092 20.45 21.2117 21.778 21.78 21.7863\n 21.9033 22.8884 23.8479 24.2998 24.846 24.96 25.7816 26.5166 26.65\n 26.8081 27.1841 28.562 28.7502 29.2739 30.7729 31.0385 31.1233 31.2128\n 31.6453 32.58 33.1612 35.59 
35.891 37.2234 37.843 38.8066 39.2134\n 39.9802 41.4064 42.3914 44.0217 44.7059 45.272 45.6196 45.9624 46.076\n 46.6401 47.647 49.3727 50.53 50.6094 51.3387 53.4997 54.68 56.502\n 61.48 65.926 68.5102 70.8824 74.7069 83.4821 89.27 90.28 ] \n\nAdiponectin \n [ 1.65602 2.19428 2.36495 2.78491 3.19209 3.335665 3.70523\n 3.71009 3.74122 3.886145 4.104105 4.138025 4.230105 4.267105\n 4.294705 4.617125 4.667645 4.77192 4.783985 4.7942 4.81924\n 4.935635 5.065915 5.1 5.288025 5.357135 5.429285 5.46262\n 5.47817 5.4861 5.589865 5.80762 6.160995 6.209635 6.26854\n 6.420295 6.644245 6.695585 6.78387 6.796985 6.966895 7.16956\n 7.28287 7.36996 7.53955 7.64276 7.652055 7.65222 7.780255\n 7.901685 7.9317 8.01 8.12555 8.13 8.237405 8.2863\n 8.300955 8.40443 8.412175 8.42996 8.462915 8.574655 8.6\n 9.000805 9.048185 9.16 9.34663 9.349775 9.7024 9.73138\n 9.75326 9.76 9.92365 10.06 10.22231 10.26266 10.35526\n 10.358725 10.567295 10.636525 10.73174 10.79394 11.018455 11.236235\n 11.57899 11.78796 11.9 12.1 12.71896 12.76 13.11\n 13.25132 13.494865 13.67975 14.11 16.1 16.44048 16.67\n 17.86 17.95 18.55 20.03 20.32 20.37 21.056625\n 21.42 21.57 21.823745 22.43204 22.54 23.67 26.72\n 33.75 36.06 38.04 ] \n\nResistin \n [ 3.21 3.27 3.29175 3.32 4.06405 4.19 4.2075 4.2989\n 4.35 4.49685 4.53 4.58 4.62 4.6638 4.82 5.06\n 5.1042 5.14 5.2633 5.31 5.57055 5.62592 5.68 5.768\n 6.28445 6.70188 6.7052 6.71026 6.85 6.89235 6.92 7.0913\n 7.16514 7.32 7.5767 7.64 7.84 7.99585 8.04375 8.2049\n 8.4156 8.49395 8.70448 8.89 9.1539 9.27715 9.35 9.6135\n 9.9542 10.15726 10.19299 10.26309 10.3176 10.33 10.34455 10.37518\n 10.57635 10.69548 10.96 11.50005 11.55492 11.73 11.774 11.78388\n 11.78796 12.06534 12.766 12.9361 13.56 13.68392 13.74244 13.91245\n 13.97399 14.76966 14.91922 15.55625 15.69876 15.72187 15.73606 16.1\n 16.11032 16.43706 16.48508 17.10223 17.2615 17.37615 17.55503 18.35574\n 19.46324 19.94687 20.2535 20.4685 20.76801 21.44366 22.03703 22.32024\n 22.94254 23.03306 23.03408 23.1177 23.3819 24.24591 24.3701 24.6033\n 26.0136 27.8325 28.0323 29.5583 31.6904 38.6531 42.7447 49.24184\n 53.6308 53.6717 55.2153 82.1 ] \n\nMCP.1 \n [ 45.843 63.61 90.09 90.6 99.45 136.855 165.02 174.8\n 191.72 193.87 195.94 198.4 199.055 200.976 206.802 209.19\n 209.749 215.769 218.28 220.66 225.88 232.006 232.018 244.75\n 252.449 256.001 263.499 268.23 269.487 270.142 280.694 293.123\n 301.21 312. 
313.73 314.05 318.302 321.919 330.16 335.393\n 353.568 354.6 355.31 358.624 359.232 377.227 378.996 382.955\n 392.46 395.976 396.021 396.648 407.206 417.114 426.175 444.395\n 448.799 468.786 473.859 481.949 483.377 488.829 513.66 518.586\n 530.41 534.224 552.444 554.697 572.401 572.783 573.63 581.313\n 585.307 586.173 588.46 602.486 618.272 621.273 632.22 634.602\n 635.049 638.261 655.834 656.393 667.928 695.754 698.789 703.973\n 713.239 733.797 737.672 738.034 764.667 773.92 775.322 783.796\n 788.902 799.898 806.724 864.968 887.16 904.981 910.489 923.886\n 928.22 960.246 994.316 1041.843 1078.359 1102.11 1227.91 1256.083\n 1698.44 ] \n\nClassification \n [1 2] \n\n" ], [ "def snapshot(df):\n n_missing = pd.DataFrame(df.isnull().sum(), columns = ['n_missing'])\n pct_missing = pd.DataFrame(round(df.isnull().sum() / df.shape[0], 2), columns = ['pct_missing'])\n dtype = pd.DataFrame(df.dtypes, columns = ['dtype'])\n n_unique = []\n for col in df.columns:\n n_unique.append(df[col].nunique()) \n return pd.DataFrame(n_unique, index = df.columns, columns = ['n_unique']).join(dtype).join(n_missing).join(pct_missing)\n\nsnapshot = snapshot(cancer)\nsnapshot", "_____no_output_____" ], [ "np.sort(snapshot['n_unique'].unique())", "_____no_output_____" ], [ "features = cancer.columns.drop('Classification')", "_____no_output_____" ], [ "def plot_single_categorical(df, col):\n plt.figure(figsize = (4, 4))\n df[col].value_counts().plot.bar(color = mcolors.TABLEAU_COLORS)\n sns.despine(top = True)\n \n n_level = df[col].nunique()\n for x_coor in range(n_level):\n plt.annotate(df[col].value_counts().iloc[x_coor], \n xy = (x_coor, \n df[col].value_counts().iloc[x_coor] + df[col].value_counts().iloc[0]/50))\n \n plt.xticks(rotation = 0)\n plt.grid()\n plt.title(col)\n plt.show()", "_____no_output_____" ], [ "plot_single_categorical(cancer, 'Classification')", "_____no_output_____" ], [ "def feat_significance(X, y, n_feat_data_type, features):\n mi_df = pd.DataFrame(mutual_info_classif(X, y, random_state = 42), index = X.columns, columns = ['score'])\n mi_df = mi_df.sort_values(by = 'score', ascending = False)\n \n def color_cell(s): \n background = []\n for i in range(len(s.index)):\n if s.index[i] in features:\n background.append('background-color: yellow')\n else:\n background.append('')\n return background\n \n if n_feat_data_type == 1:\n return mi_df\n else:\n return mi_df.style.apply(color_cell, axis = 0)", "_____no_output_____" ], [ "feat_score = feat_significance(cancer[features], cancer['Classification'], 1, '')\nfeat_score", "_____no_output_____" ], [ "X_scaled = pd.DataFrame(scale(cancer[features]), columns = features)\ny = cancer['Classification']", "_____no_output_____" ], [ "lr = LogisticRegression(random_state = 42)\nknn = KNeighborsClassifier()\nsvc = SVC(random_state = 42)\ntree = DecisionTreeClassifier(max_features = 'auto', random_state = 42)\nalg_dict = {lr: 'lr', svc: 'svc', knn: 'knn', tree: 'tree'}", "_____no_output_____" ], [ "def num_feat_perform(algorithm, feat_ordered, X_ordered, y, metric):\n scores = []\n for i in range(1, len(feat_ordered)+1):\n pred_data = X_ordered.iloc[:, 0:i]\n score = mean(cross_val_score(algorithm, pred_data, y, scoring = metric, cv = 5))\n scores.append(score)\n\n n_features = len(feat_ordered)\n plt.plot(np.arange(n_features), scores, marker = 'x')\n plt.xticks(np.arange(n_features), np.arange(1, n_features + 1))\n for i in range(n_features):\n plt.text(i, scores[i], s = round(scores[i], 2))\n plt.grid()\n plt.xlabel('no. 
of features')\n plt.ylabel('score')\n \ndef num_feat_multi_alg(alg_dict, feat_ordered, X_ordered, y, metric):\n n_algorithm = len(alg_dict)\n algorithms = list(alg_dict.keys())\n alg_names = list(alg_dict.values())\n if n_algorithm <= 2:\n nrows = 1\n ncols = n_algorithm\n fig = plt.figure(figsize = (ncols * 6, 4))\n else:\n nrows = math.ceil(n_algorithm / 2)\n ncols = 2\n fig = plt.figure(figsize = (12, nrows * 4))\n\n for n in range(n_algorithm):\n ax = fig.add_subplot(nrows, ncols, n + 1)\n ax = num_feat_perform(algorithms[n], feat_ordered, X_ordered, y, metric)\n plt.title(f\"'{alg_names[n]}' performance by '{metric}'\")\n \n plt.tight_layout()\n plt.show()", "_____no_output_____" ], [ "num_feat_multi_alg(alg_dict, feat_score.index, X_scaled[feat_score.index], y, 'f1')", "_____no_output_____" ], [ "def plot_learning_curve(train_scores, test_scores, train_sizes):\n train_scores = pd.DataFrame(train_scores, index = train_sizes, columns = ['split1', 'split2', 'split3', 'split4', 'split5'])\n train_scores = train_scores.join(pd.Series(train_scores.mean(axis = 1), name = 'mean'))\n\n test_scores = pd.DataFrame(test_scores, index = train_sizes, columns = ['split1', 'split2', 'split3', 'split4', 'split5'])\n test_scores = test_scores.join(pd.Series(test_scores.mean(axis = 1), name = 'mean'))\n\n plt.plot(train_scores['mean'], label = 'train_scores')\n plt.plot(test_scores['mean'], label = 'test_scores')\n plt.legend()\n plt.grid()\n plt.xlabel('no. of training samples')\n \ndef two_metric_graph(algorithm, X, y):\n train_sizes = np.linspace(start = 20, stop = X.shape[0] * 0.8, num = 6, dtype = int)\n fig = plt.figure(figsize = (10, 4))\n\n for i, metric in enumerate(['f1', 'balanced_accuracy']):\n train_sizes_abs, train_scores, test_scores = learning_curve(algorithm, X, y, train_sizes = train_sizes, \n scoring = metric, cv = 5, shuffle = True, \n random_state = 42)\n ax = fig.add_subplot(1, 2, i + 1)\n ax = plot_learning_curve(train_scores, test_scores, train_sizes)\n plt.title(f\"'performance by '{metric}'\")\n\n plt.tight_layout()\n plt.show()", "_____no_output_____" ], [ "two_metric_graph(svc, X_scaled[feat_score.index[0:3]], y)", "_____no_output_____" ], [ "svc.fit(X_scaled[feat_score.index[0:3]], y)\njoblib.dump(svc, 'svc.joblib')", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
d01874d3796ff050d72cdcae9bb69ee6825a88b3
28,692
ipynb
Jupyter Notebook
notebooks/losses_evaluation/Dstripes/basic/ellwlb/convolutional/VAE/DstripesVAE_Convolutional_reconst_1ellwlb_1psnr.ipynb
Fidan13/Generative_Models
2c700da53210a16f75c468ba521061106afa6982
[ "MIT" ]
null
null
null
notebooks/losses_evaluation/Dstripes/basic/ellwlb/convolutional/VAE/DstripesVAE_Convolutional_reconst_1ellwlb_1psnr.ipynb
Fidan13/Generative_Models
2c700da53210a16f75c468ba521061106afa6982
[ "MIT" ]
null
null
null
notebooks/losses_evaluation/Dstripes/basic/ellwlb/convolutional/VAE/DstripesVAE_Convolutional_reconst_1ellwlb_1psnr.ipynb
Fidan13/Generative_Models
2c700da53210a16f75c468ba521061106afa6982
[ "MIT" ]
null
null
null
24.523077
180
0.54259
[ [ [ "# Settings", "_____no_output_____" ] ], [ [ "%load_ext autoreload\n%autoreload 2", "_____no_output_____" ], [ "%env TF_KERAS = 1\nimport os\nsep_local = os.path.sep\n\nimport sys\nsys.path.append('..'+sep_local+'..')\nprint(sep_local)", "env: TF_KERAS=1\n\\\n" ], [ "os.chdir('..'+sep_local+'..'+sep_local+'..'+sep_local+'..'+sep_local+'..')\nprint(os.getcwd())", "C:\\Users\\Khalid\\Documents\\projects\\GM\\Generative_Models\n" ], [ "import tensorflow as tf\nprint(tf.__version__)", "2.1.0\n" ] ], [ [ "# Dataset loading", "_____no_output_____" ] ], [ [ "dataset_name='Dstripes'", "_____no_output_____" ], [ "images_dir = 'C:\\\\Users\\\\Khalid\\\\Documents\\projects\\\\Dstripes\\DS06\\\\'\nvalidation_percentage = 20\nvalid_format = 'png'", "_____no_output_____" ], [ "from training.generators.file_image_generator import create_image_lists, get_generators", "Using TensorFlow backend.\n" ], [ "imgs_list = create_image_lists(\n image_dir=images_dir, \n validation_pct=validation_percentage, \n valid_imgae_formats=valid_format\n)", "\n" ], [ "inputs_shape= image_size=(200, 200, 3)\nbatch_size = 32\nlatents_dim = 32\nintermediate_dim = 50", "_____no_output_____" ], [ "training_generator, testing_generator = get_generators(\n images_list=imgs_list, \n image_dir=images_dir, \n image_size=image_size, \n batch_size=batch_size, \n class_mode=None\n)", "\n" ], [ "import tensorflow as tf", "_____no_output_____" ], [ "train_ds = tf.data.Dataset.from_generator(\n lambda: training_generator, \n output_types=tf.float32 ,\n output_shapes=tf.TensorShape((batch_size, ) + image_size)\n)\n\ntest_ds = tf.data.Dataset.from_generator(\n lambda: testing_generator, \n output_types=tf.float32 ,\n output_shapes=tf.TensorShape((batch_size, ) + image_size)\n)", "_____no_output_____" ], [ "_instance_scale=1.0\nfor data in train_ds:\n _instance_scale = float(data[0].numpy().max())\n break", "_____no_output_____" ], [ "_instance_scale", "_____no_output_____" ], [ "import numpy as np\nfrom collections.abc import Iterable", "_____no_output_____" ], [ "if isinstance(inputs_shape, Iterable):\n _outputs_shape = np.prod(inputs_shape)", "_____no_output_____" ], [ "_outputs_shape", "_____no_output_____" ] ], [ [ "# Model's Layers definition", "_____no_output_____" ] ], [ [ "units=20\nc=50\nmenc_lays = [\n tf.keras.layers.Conv2D(filters=units//2, kernel_size=3, strides=(2, 2), activation='relu'),\n tf.keras.layers.Conv2D(filters=units*9//2, kernel_size=3, strides=(2, 2), activation='relu'),\n tf.keras.layers.Flatten(),\n # No activation\n tf.keras.layers.Dense(latents_dim)\n]\n\nvenc_lays = [\n tf.keras.layers.Conv2D(filters=units//2, kernel_size=3, strides=(2, 2), activation='relu'),\n tf.keras.layers.Conv2D(filters=units*9//2, kernel_size=3, strides=(2, 2), activation='relu'),\n tf.keras.layers.Flatten(),\n # No activation\n tf.keras.layers.Dense(latents_dim)\n]\n\ndec_lays = [\n tf.keras.layers.Dense(units=units*c*c, activation=tf.nn.relu),\n tf.keras.layers.Reshape(target_shape=(c , c, units)),\n tf.keras.layers.Conv2DTranspose(filters=units, kernel_size=3, strides=(2, 2), padding=\"SAME\", activation='relu'),\n tf.keras.layers.Conv2DTranspose(filters=units*3, kernel_size=3, strides=(2, 2), padding=\"SAME\", activation='relu'),\n \n # No activation\n tf.keras.layers.Conv2DTranspose(filters=3, kernel_size=3, strides=(1, 1), padding=\"SAME\")\n]", "_____no_output_____" ] ], [ [ "# Model definition", "_____no_output_____" ] ], [ [ "model_name = 
dataset_name+'VAE_Convolutional_reconst_1ell_1psnr'\nexperiments_dir='experiments'+sep_local+model_name", "_____no_output_____" ], [ "from training.autoencoding_basic.autoencoders.VAE import VAE as AE", "_____no_output_____" ], [ "inputs_shape=image_size", "_____no_output_____" ], [ "variables_params = \\\n[\n    {\n        'name': 'inference_mean', \n        'inputs_shape':inputs_shape,\n        'outputs_shape':latents_dim,\n        'layers': menc_lays\n    }\n\n    ,\n\n    {\n        'name': 'inference_logvariance', \n        'inputs_shape':inputs_shape,\n        'outputs_shape':latents_dim,\n        'layers': venc_lays\n    }\n\n    ,\n\n    {\n        'name': 'generative', \n        'inputs_shape':latents_dim,\n        'outputs_shape':inputs_shape,\n        'layers':dec_lays\n    }\n]", "_____no_output_____" ], [ "from utils.data_and_files.file_utils import create_if_not_exist", "_____no_output_____" ], [ "_restore = os.path.join(experiments_dir, 'var_save_dir')", "_____no_output_____" ], [ "create_if_not_exist(_restore)\n_restore", "_____no_output_____" ], [ "# to restore a trained model, set filepath=_restore", "_____no_output_____" ], [ "ae = AE( \n    name=model_name,\n    latents_dim=latents_dim,\n    batch_size=batch_size,\n    variables_params=variables_params, \n    filepath=None\n)", "Model: \"inference\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninference_inputs (InputLayer [(None, 200, 200, 3)] 0 \n_________________________________________________________________\ndense (Dense) (None, 200, 200, 32) 128 \n_________________________________________________________________\ndense_1 (Dense) (None, 200, 200, 32) 1056 \n_________________________________________________________________\nflatten (Flatten) (None, 1280000) 0 \n_________________________________________________________________\ndense_2 (Dense) (None, 32) 40960032 \n_________________________________________________________________\nbatch_normalization (BatchNo (None, 32) 128 \n_________________________________________________________________\ndropout (Dropout) (None, 32) 0 \n_________________________________________________________________\nactivity_regularization (Act (None, 32) 0 \n_________________________________________________________________\ninference_outputs (Activatio (None, 32) 0 \n=================================================================\nTotal params: 40,961,344\nTrainable params: 40,961,280\nNon-trainable params: 64\n_________________________________________________________________\n\n" ], [ "from evaluation.quantitive_metrics.peak_signal_to_noise_ratio import prepare_psnr\nfrom statistical.losses_utilities import similarty_to_distance\nfrom statistical.ae_losses import expected_loglikelihood_with_lower_bound as ellwlb", "_____no_output_____" ], [ "ae.compile(loss={'x_logits': lambda x_true, x_logits: ellwlb(x_true, x_logits)+similarty_to_distance(prepare_psnr([ae.batch_size]+ae.get_inputs_shape()))(x_true, x_logits)})", "Model: \"pokemonAE_Dense_reconst_1ell_1ssmi\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninference_inputs (InputLayer [(None, 200, 200, 3)] 0 \n_________________________________________________________________\ninference (Model) (None, 32) 40961344 \n_________________________________________________________________\ngenerative (Model) (None, 200, 200, 3) 3962124 
\n_________________________________________________________________\ntf_op_layer_x_logits (Tensor [(None, 200, 200, 3)] 0 \n=================================================================\nTotal params: 44,923,468\nTrainable params: 44,923,398\nNon-trainable params: 70\n_________________________________________________________________\nNone\n" ] ], [ [ "# Callbacks", "_____no_output_____" ] ], [ [ "\nfrom training.callbacks.sample_generation import SampleGeneration\nfrom training.callbacks.save_model import ModelSaver", "_____no_output_____" ], [ "", "_____no_output_____" ], [ "es = tf.keras.callbacks.EarlyStopping(\n    monitor='loss', \n    min_delta=1e-12, \n    patience=12, \n    verbose=1, \n    restore_best_weights=False\n)", "_____no_output_____" ], [ "ms = ModelSaver(filepath=_restore)", "_____no_output_____" ], [ "csv_dir = os.path.join(experiments_dir, 'csv_dir')\ncreate_if_not_exist(csv_dir)\ncsv_dir = os.path.join(csv_dir, ae.name+'.csv')\ncsv_log = tf.keras.callbacks.CSVLogger(csv_dir, append=True)\ncsv_dir", "_____no_output_____" ], [ "image_gen_dir = os.path.join(experiments_dir, 'image_gen_dir')\ncreate_if_not_exist(image_gen_dir)", "_____no_output_____" ], [ "sg = SampleGeneration(latents_shape=latents_dim, filepath=image_gen_dir, gen_freq=5, save_img=True, gray_plot=False)", "_____no_output_____" ] ], [ [ "# Model Training", "_____no_output_____" ] ], [ [ "ae.fit(\n    x=train_ds,\n    input_kw=None,\n    steps_per_epoch=int(1e4),\n    epochs=int(1e6), \n    verbose=2,\n    callbacks=[es, ms, csv_log, sg],\n    workers=-1,\n    use_multiprocessing=True,\n    validation_data=test_ds,\n    validation_steps=int(1e4)\n)", "_____no_output_____" ] ], [ [ "# Model Evaluation", "_____no_output_____", "## inception_score", "_____no_output_____" ] ], [ [ "from evaluation.generativity_metrics.inception_metrics import inception_score", "_____no_output_____" ], [ "is_mean, is_sigma = inception_score(ae, tolerance_threshold=1e-6, max_iteration=200)\nprint(f'inception_score mean: {is_mean}, sigma: {is_sigma}')", "_____no_output_____" ] ], [ [ "## Frechet_inception_distance", "_____no_output_____" ] ], [ [ "from evaluation.generativity_metrics.inception_metrics import frechet_inception_distance", "_____no_output_____" ], [ "fis_score = frechet_inception_distance(ae, training_generator, tolerance_threshold=1e-6, max_iteration=10, batch_size=32)\nprint(f'frechet inception distance: {fis_score}')", "_____no_output_____" ] ], [ [ "## perceptual_path_length_score", "_____no_output_____" ] ], [ [ "from evaluation.generativity_metrics.perceptual_path_length import perceptual_path_length_score", "_____no_output_____" ], [ "ppl_mean_score = perceptual_path_length_score(ae, training_generator, tolerance_threshold=1e-6, max_iteration=200, batch_size=32)\nprint(f'perceptual path length score: {ppl_mean_score}')", "_____no_output_____" ] ], [ [ "## precision score", "_____no_output_____" ] ], [ [ "from evaluation.generativity_metrics.precision_recall import precision_score", "_____no_output_____" ], [ "_precision_score = precision_score(ae, training_generator, tolerance_threshold=1e-6, max_iteration=200)\nprint(f'precision score: {_precision_score}')", "_____no_output_____" ] ], [ [ "## recall score", "_____no_output_____" ] ], [ [ "from evaluation.generativity_metrics.precision_recall import recall_score", "_____no_output_____" ], [ "_recall_score = recall_score(ae, training_generator, tolerance_threshold=1e-6, max_iteration=200)\nprint(f'recall score: {_recall_score}')", "_____no_output_____" ] ], [ [ "# Image 
Generation", "_____no_output_____" ], [ "## image reconstruction", "_____no_output_____" ], [ "### Training dataset", "_____no_output_____" ] ], [ [ "%load_ext autoreload\n%autoreload 2", "_____no_output_____" ], [ "from training.generators.image_generation_testing import reconstruct_from_a_batch", "_____no_output_____" ], [ "from utils.data_and_files.file_utils import create_if_not_exist\nsave_dir = os.path.join(experiments_dir, 'reconstruct_training_images_like_a_batch_dir')\ncreate_if_not_exist(save_dir)\n\nreconstruct_from_a_batch(ae, training_generator, save_dir)", "_____no_output_____" ], [ "from utils.data_and_files.file_utils import create_if_not_exist\nsave_dir = os.path.join(experiments_dir, 'reconstruct_testing_images_like_a_batch_dir')\ncreate_if_not_exist(save_dir)\n\nreconstruct_from_a_batch(ae, testing_generator, save_dir)", "_____no_output_____" ] ], [ [ "## with Randomness", "_____no_output_____" ] ], [ [ "from training.generators.image_generation_testing import generate_images_like_a_batch", "_____no_output_____" ], [ "from utils.data_and_files.file_utils import create_if_not_exist\nsave_dir = os.path.join(experiments_dir, 'generate_training_images_like_a_batch_dir')\ncreate_if_not_exist(save_dir)\n\ngenerate_images_like_a_batch(ae, training_generator, save_dir)", "_____no_output_____" ], [ "from utils.data_and_files.file_utils import create_if_not_exist\nsave_dir = os.path.join(experiments_dir, 'generate_testing_images_like_a_batch_dir')\ncreate_if_not_exist(save_dir)\n\ngenerate_images_like_a_batch(ae, testing_generator, save_dir)", "_____no_output_____" ] ], [ [ "### Complete Randomness", "_____no_output_____" ] ], [ [ "from training.generators.image_generation_testing import generate_images_randomly", "_____no_output_____" ], [ "from utils.data_and_files.file_utils import create_if_not_exist\nsave_dir = os.path.join(experiments_dir, 'random_synthetic_dir')\ncreate_if_not_exist(save_dir)\n\ngenerate_images_randomly(ae, save_dir)", "_____no_output_____" ], [ "from training.generators.image_generation_testing import interpolate_a_batch", "_____no_output_____" ], [ "from utils.data_and_files.file_utils import create_if_not_exist\nsave_dir = os.path.join(experiments_dir, 'interpolate_dir')\ncreate_if_not_exist(save_dir)\n\ninterpolate_a_batch(ae, testing_generator, save_dir)", "100%|██████████| 15/15 [00:00<00:00, 19.90it/s]\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
d018788324a4546170c73e075678c315a3e141d4
71,306
ipynb
Jupyter Notebook
car.ipynb
karvaroz/CarEvaluation
84563e5dda75dab29992a27a1ca415912baada82
[ "MIT" ]
null
null
null
car.ipynb
karvaroz/CarEvaluation
84563e5dda75dab29992a27a1ca415912baada82
[ "MIT" ]
null
null
null
car.ipynb
karvaroz/CarEvaluation
84563e5dda75dab29992a27a1ca415912baada82
[ "MIT" ]
null
null
null
53.73474
17,820
0.665077
[ [ [ "import pandas as pd\nimport matplotlib.pyplot as plt", "_____no_output_____" ], [ "data = pd.read_csv(\"car.data\", header = None)", "_____no_output_____" ], [ "data.columns =[\"Price\", \"Maintenance Cost\", \"Number of Doors\", \"Capacity\", \"Size of Luggage Boot\", \"Safety\", \"Decision\"]", "_____no_output_____" ], [ "data.head(5)", "_____no_output_____" ], [ "data.sample(5)", "_____no_output_____" ], [ "data.tail(5)", "_____no_output_____" ], [ "data.shape", "_____no_output_____" ], [ "data.size", "_____no_output_____" ], [ "data[\"Price\"].sample(5)", "_____no_output_____" ], [ "data[\"Price\"][:5]", "_____no_output_____" ], [ "data[ [\"Price\", \"Safety\", \"Decision\"] ].tail(5)", "_____no_output_____" ], [ "decision = data[\"Decision\"].value_counts()\nprint(decision)", "unacc 1210\nacc 384\ngood 69\nvgood 65\nName: Decision, dtype: int64\n" ], [ "data[\"Decision\"].value_counts().sort_index(ascending = False)", "_____no_output_____" ], [ "decision.plot(kind = \"bar\", xlabel = \"Class Values\", ylabel = \"Counts\", legend = True, title=\"Counts type of decisions\")", "_____no_output_____" ], [ "data[\"Price\"].unique()", "_____no_output_____" ], [ "data[\"Price\"].replace(('vhigh', 'high', 'med', 'low'), (4, 3, 2, 1), inplace = True)", "_____no_output_____" ], [ "data[\"Price\"].unique()", "_____no_output_____" ], [ "price = data[\"Price\"].value_counts()", "_____no_output_____" ], [ "colors = [ 'b', 'g', 'r', 'c' ]\nprice.plot(kind = \"bar\", color = colors)\nplt.xlabel(\"Price\")\nplt.ylabel(\"Cars\")\nplt.title(\"Cars prices\")", "_____no_output_____" ], [ "data[\"Safety\"].unique()", "_____no_output_____" ], [ "data[\"Safety\"].value_counts()", "_____no_output_____" ], [ "labels = [\"Low\", \" Medium\", \"High\"]\nsize = [576, 576, 576]\ncolors =[\"cyan\", \"gray\", \"orange\"]\nexplode = [0.1, 0, 0]", "_____no_output_____" ], [ "plt.pie(size, labels = labels, colors = colors, explode = explode, shadow = True, autopct = \"%.2f%%\")\nplt.title(\"Safety Level\", fontsize = 10)\nplt.axis(\"off\")\nplt.legend(loc = \"best\")\nplt.show()", "_____no_output_____" ] ], [ [ "# Arbol de decision\n", "_____no_output_____" ] ], [ [ "data.columns = [\"price\", \"maintenance\", \"n_doors\", \"capacity\", \"size_lug\", \"safety\", \"class\"]", "_____no_output_____" ], [ "data.sample(10)", "_____no_output_____" ], [ "data.price.replace((\"vhigh\", \"high\", \"med\", \"low\"), (4, 3, 2, 1), inplace = True)\ndata.maintenance.replace((\"vhigh\", \"high\", \"med\", \"low\"), (4, 3, 2, 1), inplace = True)\ndata.n_doors.replace((\"2\", \"3\", \"4\", \"5more\"), (1, 2, 3, 4), inplace = True)\ndata.capacity.replace((\"2\", \"4\", \"more\"), (1, 2, 3), inplace = True)\ndata.size_lug.replace((\"small\", \"med\", \"big\"), (1, 2, 3), inplace = True)\ndata.safety.replace((\"low\", \"med\", \"high\"), (1, 2, 3), inplace = True)\n", "_____no_output_____" ], [ "data[\"class\"].replace((\"unacc\", \"acc\", \"good\", \"vgood\"), (1, 2, 3, 4), inplace = True)", "_____no_output_____" ], [ "data.head(5)", "_____no_output_____" ], [ "import numpy as np\n", "_____no_output_____" ], [ "dataset = data.values\nX = dataset[:, 0:6]\nY = np.asarray(dataset[:,6], dtype = \"S6\")", "_____no_output_____" ], [ "from sklearn import tree\nfrom sklearn.model_selection import train_test_split, cross_val_score\nfrom sklearn import metrics", "_____no_output_____" ], [ " X_Train, X_Test, Y_Train, Y_Test = train_test_split(X, Y, test_size=0.2, random_state=0)", "_____no_output_____" ], [ "tr = 
tree.DecisionTreeClassifier(max_depth = 10)", "_____no_output_____" ], [ "tr.fit(X_Train, Y_Train)", "_____no_output_____" ], [ "y_pred = tr.predict(X_Test)", "_____no_output_____" ], [ "y_pred", "_____no_output_____" ], [ "score = tr.score(X_Test, Y_Test)\nprint(\"Accuracy: %0.4f\" % (score))", "Accuracy: 0.9682\n" ] ] ]
[ "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
d0187b0a7c94eeb0ad54d6876f700fd84632ccfa
31,309
ipynb
Jupyter Notebook
examples/folium_examples.ipynb
Censio/folium
fb0ab7730e9e4f8019f5f7bf3f0f315ba12adec9
[ "MIT" ]
6
2015-09-03T16:14:28.000Z
2017-07-01T07:20:13.000Z
examples/folium_examples.ipynb
5y/folium
f7194ad976bbcccf82c258b2f37b53f1d4ed22c9
[ "MIT" ]
null
null
null
examples/folium_examples.ipynb
5y/folium
f7194ad976bbcccf82c258b2f37b53f1d4ed22c9
[ "MIT" ]
3
2016-09-28T20:04:30.000Z
2020-01-03T21:17:20.000Z
34.405495
317
0.446453
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
d018877cfcb7e19219b4eeb4c9d124a31ce02674
138,767
ipynb
Jupyter Notebook
soln/oem_soln.ipynb
pmalo46/ModSimPy
dc5ef44757b59b38215aead6fc4c0d486526c1e5
[ "MIT" ]
2
2019-04-27T22:43:12.000Z
2019-11-11T15:12:23.000Z
soln/oem_soln.ipynb
pmalo46/ModSimPy
dc5ef44757b59b38215aead6fc4c0d486526c1e5
[ "MIT" ]
33
2019-10-09T18:50:22.000Z
2022-03-21T01:39:48.000Z
soln/oem_soln.ipynb
pmalo46/ModSimPy
dc5ef44757b59b38215aead6fc4c0d486526c1e5
[ "MIT" ]
null
null
null
68.425542
36,148
0.779623
[ [ [ "# Modeling and Simulation in Python\n\nCase study.\n\nCopyright 2017 Allen Downey\n\nLicense: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)\n", "_____no_output_____" ] ], [ [ "# Configure Jupyter so figures appear in the notebook\n%matplotlib inline\n\n# Configure Jupyter to display the assigned value after an assignment\n%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'\n\n# import functions from the modsim.py module\nfrom modsim import *", "_____no_output_____" ] ], [ [ "### Electric car", "_____no_output_____" ], [ "[Olin Electric Motorsports](https://www.olinelectricmotorsports.com/) is a club at Olin College that designs and builds electric cars, and participates in the [Formula SAE Electric](https://www.sae.org/attend/student-events/formula-sae-electric) competition.\n\nThe goal of this case study is to use simulation to guide the design of a car intended to accelerate from standing to 100 kph as quickly as possible. The [world record for this event](https://www.youtube.com/watch?annotation_id=annotation_2297602723&feature=iv&src_vid=I-NCH8ct24U&v=n2XiCYA3C9s), using a car that meets the competition requirements, is 1.513 seconds.\n\nWe'll start with a simple model that takes into account the characteristics of the motor and vehicle:\n\n* The motor is an [Emrax 228 high voltage axial flux synchronous permanent magnet motor](http://emrax.com/products/emrax-228/); according to the [data sheet](http://emrax.com/wp-content/uploads/2017/01/emrax_228_technical_data_4.5.pdf), its maximum torque is 240 Nm, at 0 rpm. But maximum torque decreases with motor speed; at 5000 rpm, maximum torque is 216 Nm.\n\n* The motor is connected to the drive axle with a chain drive with speed ratio 13:60 or 1:4.6; that is, the axle rotates once for each 4.6 rotations of the motor.\n\n* The radius of the tires is 0.26 meters.\n\n* The weight of the vehicle, including driver, is 300 kg.\n\nTo start, we will assume no slipping between the tires and the road surface, no air resistance, and no rolling resistance. 
Then we will relax these assumptions one at a time.\n\n* First we'll add drag, assuming that the frontal area of the vehicle is 0.6 square meters, with coefficient of drag 0.5.\n\n* Next we'll add rolling resistance, assuming a coefficient of 0.2.\n\n* Finally we'll compute the peak acceleration to see if the \"no slip\" assumption is credible.\n\nWe'll use this model to estimate the potential benefit of possible design improvements, including decreasing drag and rolling resistance, or increasing the speed ratio.\n\nI'll start by loading the units we need.", "_____no_output_____" ] ], [ [ "radian = UNITS.radian\nm = UNITS.meter\ns = UNITS.second\nminute = UNITS.minute\nhour = UNITS.hour\nkm = UNITS.kilometer\nkg = UNITS.kilogram\nN = UNITS.newton\nrpm = UNITS.rpm", "_____no_output_____" ] ], [ [ "And store the parameters in a `Params` object.", "_____no_output_____" ] ], [ [ "params = Params(r_wheel=0.26 * m,\n                speed_ratio=13/60,\n                C_rr=0.2,\n                C_d=0.5,\n                area=0.6*m**2,\n                rho=1.2*kg/m**3,\n                mass=300*kg)", "_____no_output_____" ] ], [ [ "`make_system` creates the initial state, `init`, and constructs an `interp1d` object that represents torque as a function of motor speed.", "_____no_output_____" ] ], [ [ "def make_system(params):\n    \"\"\"Make a system object.\n    \n    params: Params object\n    \n    returns: System object\n    \"\"\"\n    init = State(x=0*m, v=0*m/s)\n    \n    rpms = [0, 2000, 5000]\n    torques = [240, 240, 216]\n    interpolate_torque = interpolate(Series(torques, rpms))\n    \n    return System(params, init=init,\n                  interpolate_torque=interpolate_torque,\n                  t_end=3*s)", "_____no_output_____" ] ], [ [ "Testing `make_system`", "_____no_output_____" ] ], [ [ "system = make_system(params)", "_____no_output_____" ], [ "system.init", "_____no_output_____" ] ], [ [ "### Torque and speed\n\nThe relationship between torque and motor speed is taken from the [Emrax 228 data sheet](http://emrax.com/wp-content/uploads/2017/01/emrax_228_technical_data_4.5.pdf). 
The following functions reproduce the red dotted line that represents peak torque, which can only be sustained for a few seconds before the motor overheats.", "_____no_output_____" ] ], [ [ "def compute_torque(omega, system):\n    \"\"\"Maximum peak torque as a function of motor speed.\n    \n    omega: motor speed in radian/s\n    system: System object\n    \n    returns: torque in Nm\n    \"\"\"\n    factor = (1 * radian / s).to(rpm)\n    x = magnitude(omega * factor)\n    return system.interpolate_torque(x) * N * m", "_____no_output_____" ], [ "compute_torque(0*radian/s, system)", "_____no_output_____" ], [ "omega = (5000 * rpm).to(radian/s)\ncompute_torque(omega, system)", "_____no_output_____" ] ], [ [ "Plot the whole curve.", "_____no_output_____" ] ], [ [ "xs = linspace(0, 525, 21) * radian / s\ntaus = [compute_torque(x, system) for x in xs]\nplot(xs, taus)\ndecorate(xlabel='Motor speed (radian/s)',\n         ylabel='Available torque (N m)')", "_____no_output_____" ] ], [ [ "### Simulation\n\nHere's the slope function that computes the maximum possible acceleration of the car as a function of its current speed.", "_____no_output_____" ] ], [ [ "def slope_func(state, t, system):\n    \"\"\"Computes the derivatives of the state variables.\n    \n    state: State object\n    t: time\n    system: System object \n    \n    returns: sequence of derivatives\n    \"\"\"\n    x, v = state\n    r_wheel, speed_ratio = system.r_wheel, system.speed_ratio\n    mass = system.mass\n    \n    # use velocity, v, to compute angular velocity of the wheel\n    omega2 = v / r_wheel\n    \n    # use the speed ratio to compute motor speed\n    omega1 = omega2 / speed_ratio\n    \n    # look up motor speed to get maximum torque at the motor\n    tau1 = compute_torque(omega1, system)\n    \n    # compute the corresponding torque at the axle\n    tau2 = tau1 / speed_ratio\n    \n    # compute the force of the wheel on the ground\n    F = tau2 / r_wheel\n    \n    # compute acceleration\n    a = F/mass\n\n    return v, a ", "_____no_output_____" ] ], [ [ "Testing `slope_func` at linear velocity 10 m/s.", "_____no_output_____" ] ], [ [ "test_state = State(x=0*m, v=10*m/s)", "_____no_output_____" ], [ "slope_func(test_state, 0*s, system)", "_____no_output_____" ] ], [ [ "Now we can run the simulation.", "_____no_output_____" ] ], [ [ "results, details = run_ode_solver(system, slope_func)\ndetails", "_____no_output_____" ] ], [ [ "And look at the results.", "_____no_output_____" ] ], [ [ "results.tail()", "_____no_output_____" ] ], [ [ "After 3 seconds, the vehicle could be at 40 meters per second, in theory, which is 144 kph.", "_____no_output_____" ] ], [ [ "v_final = get_last_value(results.v)", "_____no_output_____" ], [ "v_final.to(km/hour)", "_____no_output_____" ] ], [ [ "Plotting `x`", "_____no_output_____" ] ], [ [ "def plot_position(results):\n    plot(results.x, label='x')\n    decorate(xlabel='Time (s)',\n             ylabel='Position (m)')\n    \nplot_position(results)", "_____no_output_____" ] ], [ [ "Plotting `v`", "_____no_output_____" ] ], [ [ "def plot_velocity(results):\n    plot(results.v, label='v')\n    decorate(xlabel='Time (s)',\n             ylabel='Velocity (m/s)')\n    \nplot_velocity(results)", "_____no_output_____" ] ], [ [ "### Stopping at 100 kph\n\nWe'll use an event function to stop the simulation when we reach 100 kph.", "_____no_output_____" ] ], [ [ "def event_func(state, t, system):\n    \"\"\"Stops when we get to 100 km/hour.\n    \n    state: State object\n    t: time\n    system: System object \n    \n    returns: difference from 100 km/hour\n    \"\"\"\n    x, v = state\n    \n    # convert to km/hour\n    factor = (1 * m/s).to(km/hour)\n    v = magnitude(v * factor)\n    \n    return v - 
100 ", "_____no_output_____" ], [ "results, details = run_ode_solver(system, slope_func, events=event_func)\ndetails", "_____no_output_____" ] ], [ [ "Here's what the results look like.", "_____no_output_____" ] ], [ [ "subplot(2, 1, 1)\nplot_position(results)\n\nsubplot(2, 1, 2)\nplot_velocity(results)\n\nsavefig('figs/chap11-fig02.pdf')", "Saving figure to file figs/chap11-fig02.pdf\n" ] ], [ [ "According to this model, we should be able to make this run in just over 2 seconds.", "_____no_output_____" ] ], [ [ "t_final = get_last_label(results) * s", "_____no_output_____" ] ], [ [ "At the end of the run, the car has gone about 28 meters.", "_____no_output_____" ] ], [ [ "state = results.last_row()", "_____no_output_____" ] ], [ [ "If we send the final state back to the slope function, we can see that the final acceleration is about 13 $m/s^2$, which is about 1.3 times the acceleration of gravity.", "_____no_output_____" ] ], [ [ "v, a = slope_func(state, 0, system)\nv.to(km/hour)", "_____no_output_____" ], [ "a", "_____no_output_____" ], [ "g = 9.8 * m/s**2\n(a / g).to(UNITS.dimensionless)", "_____no_output_____" ] ], [ [ "It's not easy for a vehicle to accelerate faster than `g`, because that implies a coefficient of friction between the wheels and the road surface that's greater than 1. But racing tires on dry asphalt can do that; the OEM team at Olin has tested their tires and found a peak coefficient near 1.5.\n\nSo it's possible that our no slip assumption is valid, but only under ideal conditions, where weight is distributed equally on four tires, and all tires are driving.", "_____no_output_____" ], [ "**Exercise:** How much time do we lose because maximum torque decreases as motor speed increases? Run the model again with no drop off in torque and see how much time it saves.", "_____no_output_____" ], [ "### Drag", "_____no_output_____" ], [ "In this section we'll see how much effect drag has on the results.\n\nHere's a function to compute drag force, as we saw in Chapter 21.", "_____no_output_____" ] ], [ [ "def drag_force(v, system):\n \"\"\"Computes drag force in the opposite direction of `v`.\n \n v: velocity\n system: System object\n \n returns: drag force\n \"\"\"\n rho, C_d, area = system.rho, system.C_d, system.area\n \n f_drag = -np.sign(v) * rho * v**2 * C_d * area / 2\n return f_drag", "_____no_output_____" ] ], [ [ "We can test it with a velocity of 20 m/s.", "_____no_output_____" ] ], [ [ "drag_force(20 * m/s, system)", "_____no_output_____" ] ], [ [ "Here's the resulting acceleration of the vehicle due to drag.\n", "_____no_output_____" ] ], [ [ "drag_force(20 * m/s, system) / system.mass", "_____no_output_____" ] ], [ [ "We can see that the effect of drag is not huge, compared to the acceleration we computed in the previous section, but it is not negligible.\n\nHere's a modified slope function that takes drag into account.", "_____no_output_____" ] ], [ [ "def slope_func2(state, t, system):\n \"\"\"Computes the derivatives of the state variables.\n \n state: State object\n t: time\n system: System object \n \n returns: sequence of derivatives\n \"\"\"\n x, v = state\n r_wheel, speed_ratio = system.r_wheel, system.speed_ratio\n mass = system.mass\n \n omega2 = v / r_wheel * radian\n omega1 = omega2 / speed_ratio\n tau1 = compute_torque(omega1, system)\n tau2 = tau1 / speed_ratio\n F = tau2 / r_wheel\n a_motor = F / mass\n a_drag = drag_force(v, system) / mass\n \n a = a_motor + a_drag\n return v, a ", "_____no_output_____" ] ], [ [ "And here's the next run.", 
"_____no_output_____" ] ], [ [ "results2, details = run_ode_solver(system, slope_func2, events=event_func)\ndetails", "_____no_output_____" ] ], [ [ "The time to reach 100 kph is a bit higher.", "_____no_output_____" ] ], [ [ "t_final2 = get_last_label(results2) * s", "_____no_output_____" ] ], [ [ "But the total effect of drag is only about 2/100 seconds.", "_____no_output_____" ] ], [ [ "t_final2 - t_final", "_____no_output_____" ] ], [ [ "That's not huge, which suggests we might not be able to save much time by decreasing the frontal area, or coefficient of drag, of the car.", "_____no_output_____" ], [ "### Rolling resistance", "_____no_output_____" ], [ "Next we'll consider [rolling resistance](https://en.wikipedia.org/wiki/Rolling_resistance), which the force that resists the motion of the car as it rolls on tires. The cofficient of rolling resistance, `C_rr`, is the ratio of rolling resistance to the normal force between the car and the ground (in that way it is similar to a coefficient of friction).\n\nThe following function computes rolling resistance.", "_____no_output_____" ] ], [ [ "system.set(unit_rr = 1 * N / kg)", "_____no_output_____" ], [ "def rolling_resistance(system):\n \"\"\"Computes force due to rolling resistance.\n \n system: System object\n \n returns: force\n \"\"\"\n return -system.C_rr * system.mass * system.unit_rr", "_____no_output_____" ] ], [ [ "The acceleration due to rolling resistance is 0.2 (it is not a coincidence that it equals `C_rr`).", "_____no_output_____" ] ], [ [ "rolling_resistance(system)", "_____no_output_____" ], [ "rolling_resistance(system) / system.mass", "_____no_output_____" ] ], [ [ "Here's a modified slope function that includes drag and rolling resistance.", "_____no_output_____" ] ], [ [ "def slope_func3(state, t, system):\n \"\"\"Computes the derivatives of the state variables.\n \n state: State object\n t: time\n system: System object \n \n returns: sequence of derivatives\n \"\"\"\n x, v = state\n r_wheel, speed_ratio = system.r_wheel, system.speed_ratio\n mass = system.mass\n \n omega2 = v / r_wheel * radian\n omega1 = omega2 / speed_ratio\n tau1 = compute_torque(omega1, system)\n tau2 = tau1 / speed_ratio\n F = tau2 / r_wheel\n a_motor = F / mass\n a_drag = drag_force(v, system) / mass\n a_roll = rolling_resistance(system) / mass\n \n a = a_motor + a_drag + a_roll\n return v, a ", "_____no_output_____" ] ], [ [ "And here's the run.", "_____no_output_____" ] ], [ [ "results3, details = run_ode_solver(system, slope_func3, events=event_func)\ndetails", "_____no_output_____" ] ], [ [ "The final time is a little higher, but the total cost of rolling resistance is only 3/100 seconds.", "_____no_output_____" ] ], [ [ "t_final3 = get_last_label(results3) * s", "_____no_output_____" ], [ "t_final3 - t_final2", "_____no_output_____" ] ], [ [ "So, again, there is probably not much to be gained by decreasing rolling resistance.\n\nIn fact, it is hard to decrease rolling resistance without also decreasing traction, so that might not help at all.", "_____no_output_____" ], [ "### Optimal gear ratio", "_____no_output_____" ], [ "The gear ratio 13:60 is intended to maximize the acceleration of the car without causing the tires to slip. 
In this section, we'll consider other gear ratios and estimate their effects on acceleration and time to reach 100 kph.\n\nHere's a function that takes a speed ratio as a parameter and returns time to reach 100 kph.", "_____no_output_____" ] ], [ [ "def time_to_speed(speed_ratio, params):\n \"\"\"Computes times to reach 100 kph.\n \n speed_ratio: ratio of wheel speed to motor speed\n params: Params object\n \n returns: time to reach 100 kph, in seconds\n \"\"\"\n params = Params(params, speed_ratio=speed_ratio)\n system = make_system(params)\n system.set(unit_rr = 1 * N / kg)\n \n results, details = run_ode_solver(system, slope_func3, events=event_func)\n t_final = get_last_label(results)\n a_initial = slope_func(system.init, 0, system)\n return t_final", "_____no_output_____" ] ], [ [ "We can test it with the default ratio:", "_____no_output_____" ] ], [ [ "time_to_speed(13/60, params)", "_____no_output_____" ] ], [ [ "Now we can try it with different numbers of teeth on the motor gear (assuming that the axle gear has 60 teeth):", "_____no_output_____" ] ], [ [ "for teeth in linrange(8, 18):\n print(teeth, time_to_speed(teeth/60, params))", "8 1.3230554808694261\n9 1.4683740716590767\n10 1.6154033363003908\n11 1.763893473709603\n12 1.913673186217739\n13 2.0646544476416953\n14 2.216761311453768\n15 2.369962929121199\n16 2.5242340753735495\n17 2.6795453467447845\n" ] ], [ [ "Wow! The speed ratio has a big effect on the results. At first glance, it looks like we could break the world record (1.513 seconds) just by decreasing the number of teeth.\n\nBut before we try it, let's see what effect that has on peak acceleration.", "_____no_output_____" ] ], [ [ "def initial_acceleration(speed_ratio, params):\n \"\"\"Maximum acceleration as a function of speed ratio.\n \n speed_ratio: ratio of wheel speed to motor speed\n params: Params object\n \n returns: peak acceleration, in m/s^2\n \"\"\"\n params = Params(params, speed_ratio=speed_ratio)\n system = make_system(params)\n a_initial = slope_func(system.init, 0, system)[1] * m/s**2\n return a_initial", "_____no_output_____" ] ], [ [ "Here are the results:", "_____no_output_____" ] ], [ [ "for teeth in linrange(8, 18):\n print(teeth, initial_acceleration(teeth/60, params))", "8 23.076923076923077 meter * newton / kilogram / second ** 2\n9 20.51282051282051 meter * newton / kilogram / second ** 2\n10 18.46153846153846 meter * newton / kilogram / second ** 2\n11 16.783216783216787 meter * newton / kilogram / second ** 2\n12 15.384615384615385 meter * newton / kilogram / second ** 2\n13 14.201183431952662 meter * newton / kilogram / second ** 2\n14 13.186813186813184 meter * newton / kilogram / second ** 2\n15 12.307692307692308 meter * newton / kilogram / second ** 2\n16 11.538461538461538 meter * newton / kilogram / second ** 2\n17 10.85972850678733 meter * newton / kilogram / second ** 2\n" ] ], [ [ "As we decrease the speed ratio, the peak acceleration increases. With 8 teeth on the motor gear, we could break the world record, but only if we can accelerate at 2.3 times the acceleration of gravity, which is impossible without very sticky ties and a vehicle that generates a lot of downforce.", "_____no_output_____" ] ], [ [ "23.07 / 9.8", "_____no_output_____" ] ], [ [ "These results suggest that the most promising way to improve the performance of the car (for this event) would be to improve traction.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
d018a01568ff94f4c2ddfc9e06463f844f900ac2
23,750
ipynb
Jupyter Notebook
Python_Hackerrank.ipynb
VirajVShetty/Python-Hackerrank
43568c1c4324bbe23975ee16dff77b7f05260bf3
[ "MIT" ]
null
null
null
Python_Hackerrank.ipynb
VirajVShetty/Python-Hackerrank
43568c1c4324bbe23975ee16dff77b7f05260bf3
[ "MIT" ]
null
null
null
Python_Hackerrank.ipynb
VirajVShetty/Python-Hackerrank
43568c1c4324bbe23975ee16dff77b7f05260bf3
[ "MIT" ]
null
null
null
22.727273
111
0.432884
[ [ [ "# Python Solution for Hackerrank By Viraj Shetty", "_____no_output_____" ], [ "## Hello World", "_____no_output_____" ] ], [ [ "print(\"Hello, World!\")", "_____no_output_____" ] ], [ [ "## Python If-Else", "_____no_output_____" ] ], [ [ "if __name__ == '__main__':\n n = int(input().strip())\n if(n%2==1):\n print(\"Weird\")\n if(n%2==0):\n if (n in range(2,5)):\n print(\"Not Weird\")\n if (n in range(6,21)):\n print(\"Weird\")\n if (n>20):\n print(\"Not Weird\")", "_____no_output_____" ] ], [ [ "## Print Function", "_____no_output_____" ] ], [ [ "if __name__ == '__main__':\n n = int(input())se\n x = \"\"\n for i in range (1,n+1):\n x += str(i)\n print(x)", "_____no_output_____" ] ], [ [ "## Leap Year Function", "_____no_output_____" ] ], [ [ "def is_leap(year):\n leap = False\n if year % 4 == 0 and year % 100 != 0:\n leap = True\n elif year % 400 ==0:\n leap = True\n elif year % 100 == 0:\n leap = False\n else:\n leap = False\n return leap\nyear = int(input())\nprint(is_leap(year))", "_____no_output_____" ] ], [ [ "## String Validators", "_____no_output_____" ] ], [ [ "if __name__ == '__main__':\n s = input()\n an = a = d = l = u = 0\n for c in s:\n if(c.isalnum() == True):\n an += 1\n if(c.isalpha() == True):\n a += 1\n if(c.isdigit() == True):\n d += 1\n if(c.islower() == True):\n l += 1\n if(c.isupper() == True):\n u += 1\n if(an !=0):\n print(\"True\")\n else:\n print(\"False\")\n if(a !=0):\n print(\"True\")\n else:\n print(\"False\")\n if(d !=0):\n print(\"True\")\n else:\n print(\"False\")\n if(l !=0):\n print(\"True\")\n else:\n print(\"False\")\n if(u !=0):\n print(\"True\")\n else:\n print(\"False\")", "_____no_output_____" ] ], [ [ "## Runner Up", "_____no_output_____" ] ], [ [ "if __name__ == '__main__':\n n = int(input())\n arr = map(int, input().split())\n def dup(dupl):\n fl = []\n for num in dupl:\n if num not in fl:\n fl.append(num)\n return fl\n arr1 = dup(arr)\n arr1.sort()\n print(arr1[-2])", "_____no_output_____" ] ], [ [ "## What’s your Name", "_____no_output_____" ] ], [ [ "def print_full_name(a, b):\n print(\"Hello \"+a+\" \"+b+\"! 
You just delved into python.\" )\nif __name__ == '__main__':\n    first_name = input()\n    last_name = input()\n    print_full_name(first_name, last_name)", "_____no_output_____" ] ], [ [ "## String Split and Join", "_____no_output_____" ] ], [ [ "def split_and_join(line):\n    line = line.split(\" \")\n    line = \"-\".join(line)\n    return line\nif __name__ == '__main__':\n    line = input()\n    result = split_and_join(line)\n    print(result)", "_____no_output_____" ] ], [ [ "## Project Euler #173", "_____no_output_____" ] ], [ [ "import math\ncount = 0\nn = int(input())\nfor i in range(2,int(math.sqrt(n)),2):\n    b = int(((n/i) - i)/2)\n    if b > 0:\n        count+=b\nprint(count)", "_____no_output_____" ] ], [ [ "## List Comprehension", "_____no_output_____" ] ], [ [ "x, y, z, n = (int(input()) for _ in range(4))\nprint ([[a,b,c] for a in range(0,x+1) for b in range(0,y+1) for c in range(0,z+1) if a + b + c != n ])", "_____no_output_____" ] ], [ [ "## Lists", "_____no_output_____" ] ], [ [ "n_of_commands = int(input())\nlist_of_commands = []\nfor command in range(n_of_commands):\n    x = input()\n    list_of_commands.append(x)\nlist_elements = []\nfor command in list_of_commands:\n    if command == \"print\":\n        print(list_elements)\n    elif command[:3]==\"rem\":\n        x = command.split()\n        remove_elem = int(x[1])\n        list_elements.remove(remove_elem)\n    elif command[:3]==\"rev\":\n        list_elements.reverse()\n    elif command == \"pop\":\n        list_elements.pop()\n    elif command[:3]==\"app\":\n        x = command.split()\n        append_elem = int(x[1])\n        list_elements.append(append_elem)\n    elif command == \"sort\":\n        list_elements.sort()\n    elif command[:3]==\"ins\":\n        x = command.split()\n        index = int(x[1])\n        insert_elem = int(x[2])\n        list_elements.insert(index,insert_elem)\n    else:\n        break", "_____no_output_____" ] ], [ [ "## Solve Me First!", "_____no_output_____" ] ], [ [ "def solveMeFirst(a,b):\n    m = a+b\n    return m\nnum1 = int(input())\nnum2 = int(input())\nsum = solveMeFirst(num1,num2)\nprint(sum)", "_____no_output_____" ] ], [ [ "## Simple Array Sum", "_____no_output_____" ] ], [ [ "import os\nimport sys\ndef simpleArraySum(ar):\n    Sum = sum(ar)\n    return Sum\nif __name__ == '__main__':\n    fptr = open(os.environ['OUTPUT_PATH'], 'w')\n    ar_count = int(input())\n    ar = list(map(int, input().rstrip().split()))\n    result = simpleArraySum(ar)\n    fptr.write(str(result) + '\\n')\n    fptr.close()", "_____no_output_____" ] ], [ [ "## Compare The Triplets", "_____no_output_____" ] ], [ [ "import math\nimport os\nimport random\nimport re\nimport sys\ndef compareTriplets(a, b):\n    counta = 0\n    countb = 0\n    for i in range (0,3):\n        if(a[i]>b[i]):\n            counta += 1\n        if(a[i]<b[i]):\n            countb += 1\n    return counta,countb\n\nif __name__ == '__main__':\n    fptr = open(os.environ['OUTPUT_PATH'], 'w')\n    a = list(map(int, input().rstrip().split()))\n    b = list(map(int, input().rstrip().split()))\n    result = compareTriplets(a, b)\n    fptr.write(' '.join(map(str, result)))\n    fptr.write('\\n')\n    fptr.close()", "_____no_output_____" ] ], [ [ "## A Very Big Sum", "_____no_output_____" ] ], [ [ "import math\nimport os\nimport random\nimport re\nimport sys\n# function is the same since python deals with big integers natively\ndef aVeryBigSum(ar):\n    Sum = sum(ar)\n    return Sum\n \nif __name__ == '__main__':\n    fptr = open(os.environ['OUTPUT_PATH'], 'w')\n    ar_count = int(input())\n    ar = list(map(int, input().rstrip().split()))\n    result = aVeryBigSum(ar)\n    fptr.write(str(result) + '\\n')\n    fptr.close()", "_____no_output_____" ] ], [ [ "## Find the Point (Maths Based Problems)", "_____no_output_____" ] ], [ [ "import os\ndef 
findPoint(px, py, qx, qy):\n    rx = (qx-px) + qx\n    ry = (qy-py) + qy\n    return (rx,ry)\nif __name__ == '__main__':\n    fptr = open(os.environ['OUTPUT_PATH'], 'w')\n    n = int(input())\n    for n_itr in range(n):\n        pxPyQxQy = input().split()\n        px = int(pxPyQxQy[0])\n        py = int(pxPyQxQy[1])\n        qx = int(pxPyQxQy[2])\n        qy = int(pxPyQxQy[3])\n        result = findPoint(px, py, qx, qy)\n        fptr.write(' '.join(map(str, result)))\n        fptr.write('\\n')\n    fptr.close()", "_____no_output_____" ] ], [ [ "## Power of A to B and mod C", "_____no_output_____" ] ], [ [ "if __name__ == '__main__':\n    c = []\n    while True:\n        try:\n            line = input()\n        except EOFError:\n            break\n        c.append(line)\n    a = int(c[0])\n    b = int(c[1])\n    m = int(c[2])\n    x = pow(a, b)\n    c = pow(a, b, m)\n    print(x)\n    print(c)", "_____no_output_____" ] ], [ [ "## Map and Lambda", "_____no_output_____" ] ], [ [ "cube = lambda x: x**3\na = [] \ndef fibonacci(n):\n    first = 0\n    second = 1\n    for i in range(n):\n        a.append(first)\n        t = first + second\n        first = second\n        second = t\n    return a\nif __name__ == '__main__':\n    n = int(input())\n    print(list(map(cube, fibonacci(n))))", "_____no_output_____" ] ], [ [ "## Company Logo", "_____no_output_____" ] ], [ [ "from collections import Counter\n \nfor letter, counts in sorted(Counter(input()).most_common(),key = lambda x:(-x[1],x[0]))[:3]:\n    print(letter, counts)", "_____no_output_____" ] ], [ [ "## Merge the Tools!", "_____no_output_____" ] ], [ [ "def merge_the_tools(string,k):\n    num_subsegments = int(len(string)/k)\n    for index in range(num_subsegments):\n        t = string[index * k : (index + 1) * k]\n        u = \"\"\n        for c in t:\n            if c not in u:\n                u += c\n        print(u)\nif __name__ == '__main__':\n    string, k = input(), int(input())\n    merge_the_tools(string, k)", "_____no_output_____" ] ], [ [ "## Check Strict Superset", "_____no_output_____" ] ], [ [ "main_set = set(map(int,input().split()))\nn = int(input())\noutput = []\nfor i in range(n):\n    x = set(map(int,input().split()))\n    if main_set > x:\n        output.append(True)\n    else:\n        output.append(False)\nprint(all(output))", "_____no_output_____" ] ], [ [ "## Check Subset", "_____no_output_____" ] ], [ [ "def common (A,B):\n    a_set = set(A)\n    b_set = set(B)\n    if a_set.issubset(b_set):\n        answer.append(\"True\")\n    else:\n        answer.append(\"False\")\nn = int(input())\nanswer = []\nfor i in range(0,n):\n    alen = int(input())\n    A = list(map(int,input().split()))\n    blen = int(input())\n    B = list(map(int,input().split()))\n    common(A,B)\nfor i in answer:\n    print(i)", "_____no_output_____" ] ], [ [ "## ginortS (Formatted Sorting)", "_____no_output_____" ] ], [ [ "l = []\nu = []\no = []\ne = []\n \ns = input()\nall_list = list(s)\nfor i in all_list:\n    if i.islower():\n        l.append(i)\n    if i.isupper():\n        u.append(i)\n    if i.isnumeric():\n        if (int(i)%2==0):\n            e.append(i)\n        else:\n            o.append(i)\n \nlower = sorted(l)\nupper = sorted(u)\nodd = sorted(o)\neven = sorted(e)\n \ntempr = lower+upper\ntempr1 = tempr + odd\nlast = tempr1 + even\ns = \"\".join(last)\nprint(s)", "_____no_output_____" ] ], [ [ "## Exceptions", "_____no_output_____" ] ], [ [ "import re\nn = int(input())\nfor i in range(n):\n    x = input()\n    try:\n        if re.compile(x):\n            value = True\n    except:\n        value = False\n    print(value)", "_____no_output_____" ] ], [ [ "## Iterables and Iterators", "_____no_output_____" ] ], [ [ "from itertools import combinations \n \nN = int(input())\nS = input().split(' ')\nK = int(input())\n \nnum = 0\nden = 0\n \nfor c in combinations(S,K):\n    den+=1\n    num+='a' in c\nprint(num/den)", 
"_____no_output_____" ] ], [ [ "## Day of Any MM/DD/YYYY", "_____no_output_____" ] ], [ [ "import calendar as c\nd = list(map(int,input().split()))\nans = c.weekday(d[2],d[0],d[1])\nif (ans == 0):\n print(\"MONDAY\")\nelif (ans == 1):\n print(\"TUESDAY\")\nelif (ans == 2):\n print(\"WEDNESDAY\")\nelif (ans == 3):\n print(\"THURSDAY\")\nelif (ans == 4):\n print(\"FRIDAY\")\nelif (ans == 5)\n print(\"SATURDAY\")\nelse:\n print(\"SUNDAY\")", "_____no_output_____" ] ], [ [ "## No idea!", "_____no_output_____" ] ], [ [ "main_set = set(map(int,input().split()))\nn = int(input())\noutput = []\nfor i in range(n):\n x = set(map(int,input().split()))\n if main_set.issuperset(x):\n output.append(True)\n else:\n output.append(False)\nprint(all(output))", "_____no_output_____" ] ], [ [ "## Collections.Counter()", "_____no_output_____" ] ], [ [ "n = int(input())\narr = list(map(int, input().split()))\nl = int(input())\nx=0\nfor i in range(l):\n size,price = map(int,input().split())\n if (size in arr):\n x += price\n arr.remove(size)\nprint(x)", "_____no_output_____" ] ], [ [ "## sWAP cASE", "_____no_output_____" ] ], [ [ "def swap_case(s):\n for i in s:\n if (i.islower()):\n a.append(i.upper())\n elif(i.isupper()):\n a.append(i.lower())\n else:\n a.append(i)\n b = ''.join(a)\n return b\na = []\nif __name__ == '__main__':\n s = input()\n result = swap_case(s)\n print(result)", "_____no_output_____" ] ], [ [ "## Set discard and pop", "_____no_output_____" ] ], [ [ "n = int(input())\nlist_of_int = list(map(int,input().split()))\nn_of_commands = int(input())\nlist_of_commands = []\nfor command in range(n_of_commands):\n x = input()\n list_of_commands.append(x)\nset1 = set(list_of_int)\nfor command in list_of_commands:\n if command == \"pop\":\n set1.pop()\n elif command.startswith('d'):\n discard_num = int(command[-1])\n set1.discard(discard_num)\n else:\n remove_num = int(command[-1])\n set1.remove(remove_num)\nprint(sum(set1))", "_____no_output_____" ] ], [ [ "## Find a String", "_____no_output_____" ] ], [ [ "def count_substring(string, sub_string):\n c=0\n for i in range(len(string)):\n if string[i:].startswith(sub_string):\n c +=1\n return c\n \nif __name__ == '__main__':\n string = input().strip()\n sub_string = input().strip()\n \n count = count_substring(string, sub_string)\n print(count)", "_____no_output_____" ] ], [ [ "## Introduction to Sets", "_____no_output_____" ] ], [ [ "def average(arr):\n for i in arr:\n if i not in a:\n a.append(i)\n x = float(sum(a)/len(a))\n return x\na = []\nif __name__ == '__main__':\n n = int(input())\n arr = list(map(int, input().split()))\n result = average(arr)\n print(result)", "_____no_output_____" ] ], [ [ "### Set .symmetric_difference : Symmetric Difference can be changed to difference, union and intersection", "_____no_output_____" ] ], [ [ "n = int(input())\ne = list(map(int,input().split()))\nm = int(input())\nf = list(map(int,input().split()))\n \na = set(e)\nb = set(f)\nc = 0\nres = a.symmetric_difference(b)\nfor i in res:\n c += 1\nprint(c)", "_____no_output_____" ] ], [ [ "## Div-mod", "_____no_output_____" ] ], [ [ "a = int(input())\nb = int(input())\nprint(a//b)\nprint(a%b)\nprint(divmod(a,b))", "_____no_output_____" ] ], [ [ "## Symmetric Difference", "_____no_output_____" ] ], [ [ "n = int(input())\nlist1 = list(map(int,input().split()))\nn1 = int(input())\nlist2 = list(map(int,input().split()))\n[print(i) for i in sorted(set(list1).difference(set(list2)).union(set(list2).difference(set(list1))))]", "_____no_output_____" ] ], [ [ "## 
Collections.deque", "_____no_output_____" ] ], [ [ "from collections import deque\nn = int(input())\nd = deque()\nlist_of_commands = []\nfor i in range(n):\n x = input()\n list_of_commands.append(x)\nfor command in list_of_commands:\n print(command[:7])\n if command[:7]==\"append\":\n x = command.split()\n d.append(int(x[1]))\n print(d)\n elif command[:7]==\"appendl\":\n x = command.split()\n d.appendleft(int(x[1]))\n print(d)\n elif command[:4]==\"pop\":\n d.pop()\n print(d)\n else:\n d.popleft()\n print(d)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
d018a5e6e966a29308d9ae30d509dd63a5028394
2,803
ipynb
Jupyter Notebook
monte_carlo/notebooks/stock_walk.ipynb
oscar6echo/xtensor-finance
753f0166d89ce05df59ca939ad91361b6a910ea9
[ "BSD-3-Clause" ]
null
null
null
monte_carlo/notebooks/stock_walk.ipynb
oscar6echo/xtensor-finance
753f0166d89ce05df59ca939ad91361b6a910ea9
[ "BSD-3-Clause" ]
null
null
null
monte_carlo/notebooks/stock_walk.ipynb
oscar6echo/xtensor-finance
753f0166d89ce05df59ca939ad91361b6a910ea9
[ "BSD-3-Clause" ]
null
null
null
23.358333
124
0.534784
[ [ [ "# Stock walk\n\nThis notebook shows how a Python class can inherit from an interface of an extension module (that is, a class in C++).", "_____no_output_____" ] ], [ [ "import xtensor_monte_carlo as xmc\nimport numpy as np\nfrom bqplot import (LinearScale, Lines, Axis, Figure)", "_____no_output_____" ], [ "# Definition of a constant diffusion model\nclass ConstantDiffusionModel(xmc.diffusion_model):\n def __init__(self, drift, vol):\n xmc.diffusion_model.__init__(self)\n self.drift = drift\n self.volatility = vol\n \n def get_drift(self, time, spot, drift):\n drift.fill(self.drift)\n \n def get_volatility(self, time, spot, vol):\n vol.fill(self.volatility)\n", "_____no_output_____" ], [ "drift = 0.0016\nvol = 0.0888\nmaturity = 1.\nmodel = ConstantDiffusionModel(drift, vol)\nengine = xmc.mc_engine(model)", "_____no_output_____" ], [ "engine.run_simulation(1., maturity, 10)", "_____no_output_____" ], [ "res = engine.get_path()\ntime = np.arange(0, int(maturity * 365) + 1)", "_____no_output_____" ], [ "sc_x = LinearScale(max=365)\nsc_y = LinearScale()\nax_x = Axis(scale=sc_x, label='time')\nax_y = Axis(scale=sc_y, orientation='vertical', label='price')\nlines = [Lines(x=time, y=res[i], scales={'x': sc_x, 'y': sc_y}) for i in range(0, res.shape[0])]\nfigure = Figure(marks=lines, axes=[ax_x, ax_y], title='Stock walk')\nfigure", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ] ]
d018b989fd0e3fa2c4db4c68b71554a62ee290c4
201,626
ipynb
Jupyter Notebook
archive/NASA_data/archive/geojson_for_tableau.ipynb
ACE-P/ev_temp_map
ddae569c8b7e0c28a5803699cdf5dfbd0f8c3240
[ "MIT" ]
2
2020-04-06T02:40:58.000Z
2020-06-24T18:33:11.000Z
archive/NASA_data/archive/geojson_for_tableau.ipynb
ACE-P/ev_temp_map
ddae569c8b7e0c28a5803699cdf5dfbd0f8c3240
[ "MIT" ]
4
2020-06-17T18:32:19.000Z
2020-06-24T17:03:29.000Z
archive/NASA_data/archive/geojson_for_tableau.ipynb
ACE-P/ev_temp_map
ddae569c8b7e0c28a5803699cdf5dfbd0f8c3240
[ "MIT" ]
2
2020-04-22T18:11:16.000Z
2020-10-22T22:26:25.000Z
1,317.816993
101,416
0.960243
[ [ [ "Before running: `pip install geojsoncontour`", "_____no_output_____" ] ], [ [ "import os\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport geojsoncontour", "_____no_output_____" ], [ "# levels to draw contour lines at\nlevels = [-70, -60, -50, -40, -30, -20, 10, 20, 30, 40]", "_____no_output_____" ] ], [ [ "The colors of the map does't really matter because Tableau does't recongize color information anyway.", "_____no_output_____" ] ], [ [ "# read lon and lat coordinates. FILES ARE IN OUR GOOGLE DRIVE\nlon = pd.read_csv('./processed_min/lon.csv', index_col=0)\nlat = pd.read_csv('./processed_min/lat.csv', index_col=0)\n\n# mesh x and y (lon and lat coordinates)\nx_mesh, y_mesh = np.meshgrid(lon, lat)\n\n# z_mesh\nz_mesh = pd.read_csv(\"./processed_min/0101.csv\", index_col=0)\n\n# create the contour plot\ncontourf = plt.contourf(x_mesh, y_mesh, z_mesh, linestyles='None', levels=levels)", "_____no_output_____" ], [ "# convert matplotlib contourf to geojson file\nos.makedirs(\"./geojson_files\", exist_ok=True)\ngeojsoncontour.contourf_to_geojson(contourf, geojson_filepath=\"./geojson_files/0101.geojson\")", "_____no_output_____" ] ], [ [ "In Tableau, click \"Spatial file\" and open whatever geojson file you just saved", "_____no_output_____" ], [ "![image.png](attachment:image.png)", "_____no_output_____" ], [ "First drag \"Geometery\" to \"Detail\" and then drag \"Fill\" to \"Color\"", "_____no_output_____" ], [ "![image.png](attachment:image.png)", "_____no_output_____" ], [ "After this you can do whatever you want with the map", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ] ]
d018d01052075adc9d8790ae84070f41ad288268
442,886
ipynb
Jupyter Notebook
Mandelbrot.ipynb
MateoCh137/Julia-Modeling-the-World
9e524de22700c24c7446828ebe0834dfd15b7f7f
[ "MIT" ]
17
2016-03-08T01:30:44.000Z
2021-11-15T02:17:55.000Z
Mandelbrot.ipynb
MateoCh137/Julia-Modeling-the-World
9e524de22700c24c7446828ebe0834dfd15b7f7f
[ "MIT" ]
2
2016-02-15T17:55:21.000Z
2018-03-03T15:07:56.000Z
Mandelbrot.ipynb
MateoCh137/Julia-Modeling-the-World
9e524de22700c24c7446828ebe0834dfd15b7f7f
[ "MIT" ]
12
2016-02-14T00:04:50.000Z
2021-12-02T19:55:01.000Z
1,466.509934
346,570
0.959644
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
d018d19c5c978548791f517af4920d5f1160bf77
3,144
ipynb
Jupyter Notebook
notebooks/validation.ipynb
MichoelSnow/data_science
7f6c054624268308ec4126a601c9fa8bc5de157c
[ "MIT" ]
null
null
null
notebooks/validation.ipynb
MichoelSnow/data_science
7f6c054624268308ec4126a601c9fa8bc5de157c
[ "MIT" ]
8
2020-03-24T15:29:05.000Z
2022-02-10T00:14:06.000Z
notebooks/validation.ipynb
MichoelSnow/data_science
7f6c054624268308ec4126a601c9fa8bc5de157c
[ "MIT" ]
null
null
null
17.965714
215
0.514631
[ [ [ "## Imports and Paths", "_____no_output_____" ], [ "# RF OOB", "_____no_output_____" ], [ "## Method", "_____no_output_____" ], [ "## Downsides", "_____no_output_____" ], [ "### Time series data", "_____no_output_____" ], [ "1. You are trying to predict the future and random sampling only helps you predict the past\n1. Data leakage\n 1. You assume that there is some temporal relationship to the data, so if you randommly sample you will be using points suroounding the validation samples to predict the validation samples", "_____no_output_____" ], [ "# Cross validation", "_____no_output_____" ], [ "## Method", "_____no_output_____" ], [ "Randomize your observations and then split your data into $n$ equal parts. Set 1 part aside as your validation set and join together the other $n-1$ pieces as your training set. Repeat this process $n$ times", "_____no_output_____" ], [ "## Advantages", "_____no_output_____" ], [ "Great when you only have small sample sizes", "_____no_output_____" ], [ "## Downsides", "_____no_output_____" ], [ "You need run $n$ models on your data set", "_____no_output_____" ], [ "### Time series data", "_____no_output_____" ], [ "Same issue as with RF OOB", "_____no_output_____" ] ] ]
[ "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ] ]
d018de589321cf877539b2631dd404af5755588d
219,304
ipynb
Jupyter Notebook
examples/notebooks/4_menyanthes_file.ipynb
pgraafstra/pastas
c065059e1df5b6c8e4afeb5278de2ef70fdf726c
[ "MIT" ]
null
null
null
examples/notebooks/4_menyanthes_file.ipynb
pgraafstra/pastas
c065059e1df5b6c8e4afeb5278de2ef70fdf726c
[ "MIT" ]
null
null
null
examples/notebooks/4_menyanthes_file.ipynb
pgraafstra/pastas
c065059e1df5b6c8e4afeb5278de2ef70fdf726c
[ "MIT" ]
null
null
null
818.298507
144,508
0.949312
[ [ [ "<figure>\n <IMG SRC=\"https://raw.githubusercontent.com/pastas/pastas/master/doc/_static/Art_logo.jpg\" WIDTH=250 ALIGN=\"right\">\n</figure>\n\n# Menyanthes File\n*Developed by Ruben Caljé*", "_____no_output_____" ], [ "Menyanthes is timeseries analysis software used by many people in the Netherlands. In this example a Menyanthes-file with one observation-series is imported, and simulated. There are several stresses in the Menyanthes-file, among which are three groundwater extractions with a significant influence on groundwater head.", "_____no_output_____" ] ], [ [ "# First perform the necessary imports\nimport matplotlib.pyplot as plt\nimport pastas as ps\n\n%matplotlib notebook", "_____no_output_____" ] ], [ [ "## 1. Importing the Menyanthes-file\nImport the Menyanthes-file with observations and stresses. Then plot the observations, together with the diferent stresses in the Menyanthes file.", "_____no_output_____" ] ], [ [ "# how to use it?\nfname = '../data/MenyanthesTest.men'\nmeny = ps.read.MenyData(fname)\n\n# plot some series\nf1, axarr = plt.subplots(len(meny.IN)+1, sharex=True)\noseries = meny.H['Obsevation well'][\"values\"]\noseries.plot(ax=axarr[0])\naxarr[0].set_title(meny.H['Obsevation well'][\"Name\"])\nfor i, val in enumerate(meny.IN.items()):\n name, data = val\n data[\"values\"].plot(ax=axarr[i+1])\n axarr[i+1].set_title(name)\nplt.tight_layout(pad=0)\nplt.show()", "_____no_output_____" ] ], [ [ "## 2. Run a model\nMake a model with precipitation, evaporation and three groundwater extractions.", "_____no_output_____" ] ], [ [ "# Create the time series model\nml = ps.Model(oseries)\n\n# Add precipitation\nIN = meny.IN['Precipitation']['values']\nIN.index = IN.index.round(\"D\")\nIN2 = meny.IN['Evaporation']['values']\nIN2.index = IN2.index.round(\"D\")\nts = ps.StressModel2([IN, IN2], ps.Gamma, 'Recharge')\nml.add_stressmodel(ts)\n\n# Add well extraction 1\n# IN = meny.IN['Extraction 1']\n# # extraction amount counts for the previous month\n# ts = ps.StressModel(IN['values'], ps.Hantush, 'Extraction_1', up=False,\n# settings=\"well\")\n# ml.add_stressmodel(ts)\n\n# Add well extraction 2\nIN = meny.IN['Extraction 2']\n# extraction amount counts for the previous month\nts = ps.StressModel(IN['values'], ps.Hantush, 'Extraction_2', up=False,\n settings=\"well\")\nml.add_stressmodel(ts)\n\n# Add well extraction 3\nIN = meny.IN['Extraction 3']\n# extraction amount counts for the previous month\nts = ps.StressModel(IN['values'], ps.Hantush, 'Extraction_3', up=False,\n settings=\"well\")\nml.add_stressmodel(ts)\n\n# Solve the model (can take around 20 seconds..)\nml.solve()", "INFO: Cannot determine frequency of series None\nINFO: Inferred frequency from time series None: freq=D \nINFO: Inferred frequency from time series None: freq=D \nINFO: Cannot determine frequency of series None\nINFO: Time Series None: values of stress were transformedto daily values (frequency not altered) with: divide\nINFO: Time Series None: values of stress were transformedto daily values (frequency not altered) with: divide\nINFO: Time Series None was sampled down to freq D with method timestep_weighted_resample\nINFO: Cannot determine frequency of series None\nINFO: Time Series None: values of stress were transformedto daily values (frequency not altered) with: divide\nINFO: Time Series None: values of stress were transformedto daily values (frequency not altered) with: divide\nINFO: Time Series None was sampled down to freq D with method timestep_weighted_resample\nINFO: Time Series None: 
values of stress were transformedto daily values (frequency not altered) with: divide\nINFO: Time Series None was sampled down to freq D with method timestep_weighted_resample\nINFO: Time Series None: values of stress were transformedto daily values (frequency not altered) with: divide\nINFO: Time Series None was sampled down to freq D with method timestep_weighted_resample\nINFO: There are observations between the simulation timesteps. Linear interpolation between simulated values is used.\n/Users/Raoul/Projects/pastas/pastas/pastas/solver.py:118: RuntimeWarning: invalid value encountered in double_scalars\n pcor[i, j] = pcov[i, j] / np.sqrt(pcov[i, i] * pcov[j, j])\n" ] ], [ [ "## 3. Plot the decomposition\nShow the decomposition of the groundwater head, by plotting the influence on groundwater head of each of the stresses.", "_____no_output_____" ] ], [ [ "ax = ml.plots.decomposition(ytick_base=1.)\nax[0].set_title('Observations vs simulation')\nax[0].legend()\nax[0].figure.tight_layout(pad=0)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
d018de5d09bcc258b5be46472ab879f3d7972b81
27,881
ipynb
Jupyter Notebook
Data Analysis Projects/Wine Qulaity Prediction/refactor_wine_quality_.ipynb
nirmalya8/HackPython-21
75661ea3c1350e778ade702d58028474dcff0349
[ "MIT" ]
1
2021-11-12T10:51:19.000Z
2021-11-12T10:51:19.000Z
Data Analysis Projects/Wine Qulaity Prediction/refactor_wine_quality_.ipynb
Rishikesh-kumar-7258/HackFest21
0e9ee30cc5400092cad36d45ddd1473c2cdb0cf1
[ "MIT" ]
null
null
null
Data Analysis Projects/Wine Qulaity Prediction/refactor_wine_quality_.ipynb
Rishikesh-kumar-7258/HackFest21
0e9ee30cc5400092cad36d45ddd1473c2cdb0cf1
[ "MIT" ]
null
null
null
35.929124
378
0.305656
[ [ [ "# Refactor: Wine Quality Analysis\nIn this exercise, you'll refactor code that analyzes a wine quality dataset taken from the UCI Machine Learning Repository [here](https://archive.ics.uci.edu/ml/datasets/wine+quality). Each row contains data on a wine sample, including several physicochemical properties gathered from tests, as well as a quality rating evaluated by wine experts.\n\nThe code in this notebook first renames the columns of the dataset and then calculates some statistics on how some features may be related to quality ratings. Can you refactor this code to make it more clean and modular?", "_____no_output_____" ] ], [ [ "import pandas as pd\ndf = pd.read_csv('winequality-red.csv', sep=';')\ndf.head(10)", "_____no_output_____" ] ], [ [ "### Renaming Columns\nYou want to replace the spaces in the column labels with underscores to be able to reference columns with dot notation. Here's one way you could've done it.", "_____no_output_____" ] ], [ [ "new_df = df.rename(columns={'fixed acidity': 'fixed_acidity',\n 'volatile acidity': 'volatile_acidity',\n 'citric acid': 'citric_acid',\n 'residual sugar': 'residual_sugar',\n 'free sulfur dioxide': 'free_sulfur_dioxide',\n 'total sulfur dioxide': 'total_sulfur_dioxide'\n })\nnew_df.head()", "_____no_output_____" ] ], [ [ "And here's a slightly better way you could do it. You can avoid making naming errors due to typos caused by manual typing. However, this looks a little repetitive. Can you make it better?", "_____no_output_____" ] ], [ [ "labels = list(df.columns)\nlabels[0] = labels[0].replace(' ', '_')\nlabels[1] = labels[1].replace(' ', '_')\nlabels[2] = labels[2].replace(' ', '_')\nlabels[3] = labels[3].replace(' ', '_')\nlabels[5] = labels[5].replace(' ', '_')\nlabels[6] = labels[6].replace(' ', '_')\ndf.columns = labels\n\ndf.head()", "_____no_output_____" ] ], [ [ "### Analyzing Features\nNow that your columns are ready, you want to see how different features of this dataset relate to the quality rating of the wine. A very simple way you could do this is by observing the mean quality rating for the top and bottom half of each feature. The code below does this for four features. It looks pretty repetitive right now. Can you make this more concise? \n\nYou might challenge yourself to figure out how to make this code more efficient! But you don't need to worry too much about efficiency right now - we will cover that more in the next section.", "_____no_output_____" ] ], [ [ "median_alcohol = df.alcohol.median()\nfor i, alcohol in enumerate(df.alcohol):\n if alcohol >= median_alcohol:\n df.loc[i, 'alcohol'] = 'high'\n else:\n df.loc[i, 'alcohol'] = 'low'\ndf.groupby('alcohol').quality.mean()", "_____no_output_____" ], [ "median_pH = df.pH.median()\nfor i, pH in enumerate(df.pH):\n if pH >= median_pH:\n df.loc[i, 'pH'] = 'high'\n else:\n df.loc[i, 'pH'] = 'low'\ndf.groupby('pH').quality.mean()", "_____no_output_____" ], [ "median_sugar = df.residual_sugar.median()\nfor i, sugar in enumerate(df.residual_sugar):\n if sugar >= median_sugar:\n df.loc[i, 'residual_sugar'] = 'high'\n else:\n df.loc[i, 'residual_sugar'] = 'low'\ndf.groupby('residual_sugar').quality.mean()", "_____no_output_____" ], [ "median_citric_acid = df.citric_acid.median()\nfor i, citric_acid in enumerate(df.citric_acid):\n if citric_acid >= median_citric_acid:\n df.loc[i, 'citric_acid'] = 'high'\n else:\n df.loc[i, 'citric_acid'] = 'low'\ndf.groupby('citric_acid').quality.mean()", "_____no_output_____" ], [ "", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ] ]
d018eb53da42041e0452ce4723f0632d482d9526
1,562
ipynb
Jupyter Notebook
zzz-Quickfacts.ipynb
rdhyee/diversity-census-calc
e823392c50bc15d3abc395649b7c05b1e441af02
[ "Apache-2.0" ]
2
2016-08-13T19:49:54.000Z
2021-06-21T23:23:52.000Z
zzz-Quickfacts.ipynb
rdhyee/diversity-census-calc
e823392c50bc15d3abc395649b7c05b1e441af02
[ "Apache-2.0" ]
null
null
null
zzz-Quickfacts.ipynb
rdhyee/diversity-census-calc
e823392c50bc15d3abc395649b7c05b1e441af02
[ "Apache-2.0" ]
1
2015-08-21T20:31:09.000Z
2015-08-21T20:31:09.000Z
34.711111
303
0.649808
[ [ [ "Consider the population of California. If you do a Google search...you might end up at [California QuickFacts from the US Census Bureau](http://quickfacts.census.gov/qfd/states/06000.html). Compare to the [quickfacts about Alameda County](http://quickfacts.census.gov/qfd/states/06/06001.html).\n\n\nToday we [download the data for the USA, states, and counties](http://quickfacts.census.gov/qfd/download_data.html):\n\n> The entire State and County QuickFacts dataset, with U.S., state, and county data is available for download. Downloadable data files for cities may be issued later. The current downloadable data set may include items not displayed on QuickFacts tables.\n\nDownload 3 files into a directory....perhaps where you launched iPython:\n\n 1. http://quickfacts.census.gov/qfd/download/DataSet.txt\n 2. http://quickfacts.census.gov/qfd/download/DataDict.txt\n 3. http://quickfacts.census.gov/qfd/download/FIPS_CountyName.txt\n\n", "_____no_output_____" ] ] ]
[ "markdown" ]
[ [ "markdown" ] ]
d01907662fd3c6e7b955a8974f4aac5a5631ad70
72,057
ipynb
Jupyter Notebook
.ipynb_checkpoints/whale.py-checkpoint.ipynb
charbelnehme/pandas-homework
7e1f3a51aafc2e0f58d0f8998620f836c2575b16
[ "ADSL" ]
null
null
null
.ipynb_checkpoints/whale.py-checkpoint.ipynb
charbelnehme/pandas-homework
7e1f3a51aafc2e0f58d0f8998620f836c2575b16
[ "ADSL" ]
null
null
null
.ipynb_checkpoints/whale.py-checkpoint.ipynb
charbelnehme/pandas-homework
7e1f3a51aafc2e0f58d0f8998620f836c2575b16
[ "ADSL" ]
null
null
null
28.788254
1,196
0.398948
[ [ [ " # A Whale off the Port(folio)\n ---\n\n In this assignment, you'll get to use what you've learned this week to evaluate the performance among various algorithmic, hedge, and mutual fund portfolios and compare them against the S&P TSX 60 Index.", "_____no_output_____" ] ], [ [ "# Initial imports\nimport pandas as pd\nimport numpy as np\nimport datetime as dt\nfrom pathlib import Path\n\n%matplotlib inline", "_____no_output_____" ] ], [ [ "# Data Cleaning\n\nIn this section, you will need to read the CSV files into DataFrames and perform any necessary data cleaning steps. After cleaning, combine all DataFrames into a single DataFrame.\n\nFiles:\n\n* `whale_returns.csv`: Contains returns of some famous \"whale\" investors' portfolios.\n\n* `algo_returns.csv`: Contains returns from the in-house trading algorithms from Harold's company.\n\n* `sp_tsx_history.csv`: Contains historical closing prices of the S&P TSX 60 Index.", "_____no_output_____" ], [ "## Whale Returns\n\nRead the Whale Portfolio daily returns and clean the data.", "_____no_output_____" ] ], [ [ "# Set file path for CSV\nfile_path = Path(\"Resources/whale_returns.csv\")", "_____no_output_____" ], [ "# Read in the CSV into a DataFrame\nwhale_returns_csv = pd.read_csv(file_path)\nwhale_returns_csv.head()", "_____no_output_____" ], [ "# Inspect the first 10 rows of the DataFrame\nwhale_returns_csv.head(10)", "_____no_output_____" ], [ "# Inspect the last 10 rows of the DataFrame\nwhale_returns_csv.tail(10)", "_____no_output_____" ], [ "# View column data types by using the 'dtypes' attribute to list the column data types\nwhale_returns_csv.dtypes", "_____no_output_____" ], [ "# Identify data quality issues\n# Identify the number of rows\nwhale_returns_csv.count()", "_____no_output_____" ], [ "# Count nulls\nwhale_returns_csv.isnull()", "_____no_output_____" ], [ "# Determine the number of nulls \nwhale_returns_csv.isnull().sum()", "_____no_output_____" ], [ "# Determine the percentage of nulls for each column\nwhale_returns_csv.isnull().sum() / len(whale_returns_csv) * 100", "_____no_output_____" ], [ "# Drop nulls\nwhale_returns_csv.dropna()", "_____no_output_____" ], [ "# Check for duplicated rows\nwhale_returns_csv.duplicated()", "_____no_output_____" ], [ "# Use the dropna function to drop the whole records that have at least one null value\nwhale_returns_csv.dropna(inplace=True)", "_____no_output_____" ] ], [ [ "## Algorithmic Daily Returns\n\nRead the algorithmic daily returns and clean the data.", "_____no_output_____" ] ], [ [ "#Calculate and plot daily return\n\n", "_____no_output_____" ], [ "# Calculate and plot cumulative return\n\n", "_____no_output_____" ], [ "# Confirm null values have been dropped 1\nwhale_returns_csv.isnull()", "_____no_output_____" ], [ "# Confirm null values have been dropped 2\nwhale_returns_csv.isnull().sum()", "_____no_output_____" ], [ "# Reading algorithmic returns\n", "_____no_output_____" ], [ "# Count nulls\n", "_____no_output_____" ], [ "# Drop nulls\n", "_____no_output_____" ] ], [ [ "## S&P TSX 60 Returns\n\nRead the S&P TSX 60 historic closing prices and create a new daily returns DataFrame from the data. 
", "_____no_output_____" ] ], [ [ "# Reading S&P TSX 60 Closing Prices\nsp_tsx_path = Path(\"Resources/sp_tsx_history.csv\")", "_____no_output_____" ], [ "# Check Data Types\nsp_tsx_df = pd.read_csv(sp_tsx_path)\nsp_tsx_df.head()", "_____no_output_____" ], [ "sp_tsx_df.tail()", "_____no_output_____" ], [ "# Use the 'dtypes' attribute to list the column data types\nsp_tsx_df.dtypes", "_____no_output_____" ], [ "# Use the 'info' attribute to list additional infor about the column data types\nsp_tsx_df.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 1818 entries, 0 to 1817\nData columns (total 2 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 Date 1818 non-null object\n 1 Close 1818 non-null object\ndtypes: object(2)\nmemory usage: 28.5+ KB\n" ], [ "# Use the 'as_type' function to convert 'Date' from 'object' to 'datetime64'\nsp_tsx_df['Date'] = sp_tsx_df['Date'].astype('datetime64')\nsp_tsx_df", "_____no_output_____" ], [ "# Sort datetime index in ascending order (past to present)\nsp_tsx_df.sort_index(inplace = True)\nsp_tsx_df.head()", "_____no_output_____" ], [ "# Confirm datetime64 conversion was proccesed correctly\nsp_tsx_df.dtypes", "_____no_output_____" ], [ "# Set the date as the index to the Dataframe\nsp_tsx_df.set_index(pd.to_datetime(sp_tsx_df['Date'], infer_datetime_format=True), inplace=True)\nsp_tsx_df.head()", "_____no_output_____" ], [ "# Drop the extra date column \nsp_tsx_df.drop(columns=['Date'], inplace=True)\nsp_tsx_df.head()", "_____no_output_____" ], [ "sp_tsx_df.dtypes", "_____no_output_____" ], [ "sp_tsx_df['Close'] = sp_tsx_df.to_numeric('Close')\nsp_tsx_df", "_____no_output_____" ], [ "daily_returns = sp_tsx_df.pct_change()\nsp_tsx_df()", "_____no_output_____" ], [ "# Plot daily close\nsp_tsx_df.plot()", "_____no_output_____" ], [ "# Calculate Daily Returns\n", "_____no_output_____" ], [ "# Drop nulls\n", "_____no_output_____" ], [ "# Rename `Close` Column to be specific to this portfolio.\n", "_____no_output_____" ] ], [ [ "## Combine Whale, Algorithmic, and S&P TSX 60 Returns", "_____no_output_____" ] ], [ [ "# Join Whale Returns, Algorithmic Returns, and the S&P TSX 60 Returns into a single DataFrame with columns for each portfolio's returns.\n", "_____no_output_____" ] ], [ [ "---", "_____no_output_____" ], [ "# Conduct Quantitative Analysis\n\nIn this section, you will calculate and visualize performance and risk metrics for the portfolios.", "_____no_output_____" ], [ "## Performance Anlysis\n\n#### Calculate and Plot the daily returns.", "_____no_output_____" ] ], [ [ "# Plot daily returns of all portfolios\n", "_____no_output_____" ] ], [ [ "#### Calculate and Plot cumulative returns.", "_____no_output_____" ] ], [ [ "# Calculate cumulative returns of all portfolios\n\n# Plot cumulative returns\n", "_____no_output_____" ] ], [ [ "---", "_____no_output_____" ], [ "## Risk Analysis\n\nDetermine the _risk_ of each portfolio:\n\n1. Create a box plot for each portfolio. \n2. Calculate the standard deviation for all portfolios.\n4. Determine which portfolios are riskier than the S&P TSX 60.\n5. 
Calculate the Annualized Standard Deviation.", "_____no_output_____" ], [ "### Create a box plot for each portfolio\n", "_____no_output_____" ] ], [ [ "# Box plot to visually show risk\n", "_____no_output_____" ] ], [ [ "### Calculate Standard Deviations", "_____no_output_____" ] ], [ [ "# Calculate the daily standard deviations of all portfolios\n", "_____no_output_____" ] ], [ [ "### Determine which portfolios are riskier than the S&P TSX 60", "_____no_output_____" ] ], [ [ "# Calculate the daily standard deviation of S&P TSX 60\n\n# Determine which portfolios are riskier than the S&P TSX 60\n", "_____no_output_____" ] ], [ [ "### Calculate the Annualized Standard Deviation", "_____no_output_____" ] ], [ [ "# Calculate the annualized standard deviation (252 trading days)\n", "_____no_output_____" ] ], [ [ "---", "_____no_output_____" ], [ "## Rolling Statistics\n\nRisk changes over time. Analyze the rolling statistics for Risk and Beta. \n\n1. Calculate and plot the rolling standard deviation for all portfolios using a 21-day window.\n2. Calculate the correlation between each stock to determine which portfolios may mimick the S&P TSX 60.\n3. Choose one portfolio, then calculate and plot the 60-day rolling beta for it and the S&P TSX 60.", "_____no_output_____" ], [ "### Calculate and plot rolling `std` for all portfolios with 21-day window", "_____no_output_____" ] ], [ [ "# Calculate the rolling standard deviation for all portfolios using a 21-day window\n\n# Plot the rolling standard deviation\n", "_____no_output_____" ] ], [ [ "### Calculate and plot the correlation", "_____no_output_____" ] ], [ [ "# Calculate the correlation\n\n# Display de correlation matrix\n", "_____no_output_____" ] ], [ [ "### Calculate and Plot Beta for a chosen portfolio and the S&P 60 TSX", "_____no_output_____" ] ], [ [ "# Calculate covariance of a single portfolio\n\n# Calculate variance of S&P TSX\n\n# Computing beta\n\n# Plot beta trend\n", "_____no_output_____" ] ], [ [ "## Rolling Statistics Challenge: Exponentially Weighted Average \n\nAn alternative way to calculate a rolling window is to take the exponentially weighted moving average. This is like a moving window average, but it assigns greater importance to more recent observations. Try calculating the [`ewm`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.ewm.html) with a 21-day half life for each portfolio, using standard deviation (`std`) as the metric of interest.", "_____no_output_____" ] ], [ [ "# Use `ewm` to calculate the rolling window\n", "_____no_output_____" ] ], [ [ "---", "_____no_output_____" ], [ "# Sharpe Ratios\nIn reality, investment managers and thier institutional investors look at the ratio of return-to-risk, and not just returns alone. 
After all, if you could invest in one of two portfolios, and each offered the same 10% return, yet one offered lower risk, you'd take that one, right?\n\n### Using the daily returns, calculate and visualize the Sharpe ratios using a bar plot", "_____no_output_____" ] ], [ [ "# Annualized Sharpe Ratios\n", "_____no_output_____" ], [ "# Visualize the sharpe ratios as a bar plot\n", "_____no_output_____" ] ], [ [ "### Determine whether the algorithmic strategies outperform both the market (S&P TSX 60) and the whales portfolios.\n\nWrite your answer here!", "_____no_output_____" ], [ "---", "_____no_output_____" ], [ "# Create Custom Portfolio\n\nIn this section, you will build your own portfolio of stocks, calculate the returns, and compare the results to the Whale Portfolios and the S&P TSX 60. \n\n1. Choose 3-5 custom stocks with at last 1 year's worth of historic prices and create a DataFrame of the closing prices and dates for each stock.\n2. Calculate the weighted returns for the portfolio assuming an equal number of shares for each stock.\n3. Join your portfolio returns to the DataFrame that contains all of the portfolio returns.\n4. Re-run the performance and risk analysis with your portfolio to see how it compares to the others.\n5. Include correlation analysis to determine which stocks (if any) are correlated.", "_____no_output_____" ], [ "## Choose 3-5 custom stocks with at last 1 year's worth of historic prices and create a DataFrame of the closing prices and dates for each stock.", "_____no_output_____" ] ], [ [ "# Reading data from 1st stock\n", "_____no_output_____" ], [ "# Reading data from 2nd stock\n", "_____no_output_____" ], [ "# Reading data from 3rd stock\n", "_____no_output_____" ], [ "# Combine all stocks in a single DataFrame\n", "_____no_output_____" ], [ "# Reset Date index\n", "_____no_output_____" ], [ "# Reorganize portfolio data by having a column per symbol\n", "_____no_output_____" ], [ "# Calculate daily returns\n\n# Drop NAs\n\n# Display sample data\n", "_____no_output_____" ] ], [ [ "## Calculate the weighted returns for the portfolio assuming an equal number of shares for each stock", "_____no_output_____" ] ], [ [ "# Set weights\nweights = [1/3, 1/3, 1/3]\n\n# Calculate portfolio return\n\n# Display sample data\n", "_____no_output_____" ] ], [ [ "## Join your portfolio returns to the DataFrame that contains all of the portfolio returns", "_____no_output_____" ] ], [ [ "# Join your returns DataFrame to the original returns DataFrame\n", "_____no_output_____" ], [ "# Only compare dates where return data exists for all the stocks (drop NaNs)\n", "_____no_output_____" ] ], [ [ "## Re-run the risk analysis with your portfolio to see how it compares to the others", "_____no_output_____" ], [ "### Calculate the Annualized Standard Deviation", "_____no_output_____" ] ], [ [ "# Calculate the annualized `std`\n", "_____no_output_____" ] ], [ [ "### Calculate and plot rolling `std` with 21-day window", "_____no_output_____" ] ], [ [ "# Calculate rolling standard deviation\n\n# Plot rolling standard deviation\n", "_____no_output_____" ] ], [ [ "### Calculate and plot the correlation", "_____no_output_____" ] ], [ [ "# Calculate and plot the correlation\n", "_____no_output_____" ] ], [ [ "### Calculate and Plot the 60-day Rolling Beta for Your Portfolio compared to the S&P 60 TSX", "_____no_output_____" ] ], [ [ "# Calculate and plot Beta\n", "_____no_output_____" ] ], [ [ "### Using the daily returns, calculate and visualize the Sharpe ratios using a bar plot", 
"_____no_output_____" ] ], [ [ "# Calculate Annualized Sharpe Ratios\n", "_____no_output_____" ], [ "# Visualize the sharpe ratios as a bar plot\n", "_____no_output_____" ] ], [ [ "### How does your portfolio do?\n\nWrite your answer here!", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ] ]
d019238e45dcf53bc033ad719f8b265ae4845ce8
17,529
ipynb
Jupyter Notebook
ch09fileformats/11ControlledVocabularies.ipynb
jack89roberts/rsd-engineeringcourse
d0d90be254674f2be46dda7aefc987a238e4e97c
[ "CC-BY-3.0" ]
null
null
null
ch09fileformats/11ControlledVocabularies.ipynb
jack89roberts/rsd-engineeringcourse
d0d90be254674f2be46dda7aefc987a238e4e97c
[ "CC-BY-3.0" ]
null
null
null
ch09fileformats/11ControlledVocabularies.ipynb
jack89roberts/rsd-engineeringcourse
d0d90be254674f2be46dda7aefc987a238e4e97c
[ "CC-BY-3.0" ]
null
null
null
30.015411
390
0.532945
[ [ [ "# Saying the same thing multiple ways", "_____no_output_____" ], [ "What happens when someone comes across a file in our file format? How do they know what it means?", "_____no_output_____" ], [ "If we can make the tag names in our model globally unique, then the meaning of the file can be made understandable\nnot just to us, but to people and computers all over the world.\n\nTwo file formats which give the same information, in different ways, are *syntactically* distinct,\nbut so long as they are **semantically** compatible, I can convert from one to the other.", "_____no_output_____" ], [ "This is the goal of the technologies introduced this lecture.", "_____no_output_____" ], [ "## The URI", "_____no_output_____" ], [ "The key concept that underpins these tools is the URI: uniform resource **indicator**.\n \nThese look like URLs:\n \n`www.turing.ac.uk/rsd-engineering/schema/reaction/element`\n\nBut, if I load that as a web address, there's nothing there!\n\nThat's fine.\n\nA UR**N** indicates a **name** for an entity, and, by using organisational web addresses as a prefix,\nis likely to be unambiguously unique.\n\nA URI might be a URL or a URN, or both.", "_____no_output_____" ], [ "## XML Namespaces", "_____no_output_____" ], [ "It's cumbersome to use a full URI every time we want to put a tag in our XML file.\nXML defines *namespaces* to resolve this:", "_____no_output_____" ] ], [ [ "%%writefile system.xml\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<system xmlns=\"http://www.turing.ac.uk/rsd-engineering/schema/reaction\">\n <reaction>\n <reactants>\n <molecule stoichiometry=\"2\">\n <atom symbol=\"H\" number=\"2\"/>\n </molecule>\n <molecule stoichiometry=\"1\">\n <atom symbol=\"O\" number=\"2\"/>\n </molecule>\n </reactants>\n <products>\n <molecule stoichiometry=\"2\">\n <atom symbol=\"H\" number=\"2\"/>\n <atom symbol=\"O\" number=\"1\"/>\n </molecule>\n </products>\n </reaction>\n</system>", "Overwriting system.xml\n" ], [ "from lxml import etree\n\nwith open(\"system.xml\") as xmlfile:\n tree = etree.parse(xmlfile)", "_____no_output_____" ], [ "print(etree.tostring(tree, pretty_print=True, encoding=str))", "<system xmlns=\"http://www.turing.ac.uk/rsd-engineering/schema/reaction\">\n <reaction>\n <reactants>\n <molecule stoichiometry=\"2\">\n <atom symbol=\"H\" number=\"2\"/>\n </molecule>\n <molecule stoichiometry=\"1\">\n <atom symbol=\"O\" number=\"2\"/>\n </molecule>\n </reactants>\n <products>\n <molecule stoichiometry=\"2\">\n <atom symbol=\"H\" number=\"2\"/>\n <atom symbol=\"O\" number=\"1\"/>\n </molecule>\n </products>\n </reaction>\n</system>\n\n" ] ], [ [ "Note that our previous XPath query no longer finds anything.", "_____no_output_____" ] ], [ [ "tree.xpath(\"//molecule/atom[@number='1']/@symbol\")", "_____no_output_____" ], [ "namespaces = {\"r\": \"http://www.turing.ac.uk/rsd-engineering/schema/reaction\"}", "_____no_output_____" ], [ "tree.xpath(\"//r:molecule/r:atom[@number='1']/@symbol\", namespaces=namespaces)", "_____no_output_____" ] ], [ [ "Note the prefix `r` used to bind the namespace in the query: any string will do - it's just a dummy variable.", "_____no_output_____" ], [ "The above file specified our namespace as a default namespace: this is like doing `from numpy import *` in python.\n \nIt's often better to bind the namespace to a prefix: ", "_____no_output_____" ] ], [ [ "%%writefile system.xml\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<r:system xmlns:r=\"http://www.turing.ac.uk/rsd-engineering/schema/reaction\">\n <r:reaction>\n 
<r:reactants>\n <r:molecule stoichiometry=\"2\">\n <r:atom symbol=\"H\" number=\"2\"/>\n </r:molecule>\n <r:molecule stoichiometry=\"1\">\n <r:atom symbol=\"O\" number=\"2\"/>\n </r:molecule>\n </r:reactants>\n <r:products>\n <r:molecule stoichiometry=\"2\">\n <r:atom symbol=\"H\" number=\"2\"/>\n <r:atom symbol=\"O\" number=\"1\"/>\n </r:molecule>\n </r:products>\n </r:reaction>\n</r:system>", "Overwriting system.xml\n" ] ], [ [ "## Namespaces and Schema", "_____no_output_____" ], [ "It's a good idea to serve the schema itself from the URI of the namespace treated as a URL, but it's *not a requirement*: it's a URN not necessarily a URL!\n", "_____no_output_____" ] ], [ [ "%%writefile reactions.xsd\n\n<xs:schema xmlns:xs=\"http://www.w3.org/2001/XMLSchema\"\n targetNamespace=\"http://www.turing.ac.uk/rsd-engineering/schema/reaction\"\n xmlns:r=\"http://www.turing.ac.uk/rsd-engineering/schema/reaction\">\n\n<xs:element name=\"atom\">\n <xs:complexType>\n <xs:attribute name=\"symbol\" type=\"xs:string\"/>\n <xs:attribute name=\"number\" type=\"xs:integer\"/>\n </xs:complexType>\n</xs:element>\n \n<xs:element name=\"molecule\">\n <xs:complexType>\n <xs:sequence>\n <xs:element ref=\"r:atom\" maxOccurs=\"unbounded\"/>\n </xs:sequence>\n <xs:attribute name=\"stoichiometry\" type=\"xs:integer\"/>\n </xs:complexType>\n</xs:element>\n \n<xs:element name=\"reactants\">\n <xs:complexType>\n <xs:sequence>\n <xs:element ref=\"r:molecule\" maxOccurs=\"unbounded\"/>\n </xs:sequence>\n </xs:complexType>\n</xs:element>\n \n<xs:element name=\"products\">\n <xs:complexType>\n <xs:sequence>\n <xs:element ref=\"r:molecule\" maxOccurs=\"unbounded\"/>\n </xs:sequence>\n </xs:complexType>\n</xs:element> \n \n<xs:element name=\"reaction\">\n <xs:complexType>\n <xs:sequence>\n <xs:element ref=\"r:reactants\"/>\n <xs:element ref=\"r:products\"/>\n </xs:sequence>\n </xs:complexType>\n</xs:element>\n\n<xs:element name=\"system\">\n <xs:complexType>\n <xs:sequence>\n <xs:element ref=\"r:reaction\" maxOccurs=\"unbounded\"/>\n </xs:sequence>\n </xs:complexType>\n</xs:element> \n \n</xs:schema>", "Overwriting reactions.xsd\n" ] ], [ [ "Note we're now defining the target namespace for our schema.", "_____no_output_____" ] ], [ [ "with open(\"reactions.xsd\") as xsdfile:\n schema_xsd = xsdfile.read()\nschema = etree.XMLSchema(etree.XML(schema_xsd)) ", "_____no_output_____" ], [ "parser = etree.XMLParser(schema=schema)", "_____no_output_____" ], [ "with open(\"system.xml\") as xmlfile:\n tree = etree.parse(xmlfile, parser)\n print(tree)", "<lxml.etree._ElementTree object at 0x106978960>\n" ] ], [ [ "Note the power of binding namespaces when using XML files addressing more than one namespace.\nHere, we can clearly see which variables are part of the schema defining XML schema itself (bound to `xs`)\nand the schema for our file format (bound to `r`)", "_____no_output_____" ], [ "## Using standard vocabularies", "_____no_output_____" ], [ "The work we've done so far will enable someone who comes across our file format to track down something about its significance, by following the URI in the namespace. But it's still somewhat ambiguous. The word \"element\" means (at least) two things: an element tag in an XML document, and a chemical element. 
(It also means a heating element in a toaster, and lots of other things.)", "_____no_output_____" ], [ "To make it easier to not make mistakes as to the meaning of **found data**, it is helpful to use\nstandardised namespaces that already exist for the concepts our file format refers to.\n\nSo that when somebody else picks up one of our data files, the meaning of the stuff it describes is obvious. In this example, it would be hard to get it wrong, of course, but in general, defining file formats so that they are meaningful as found data should be desirable.", "_____no_output_____" ], [ "For example, the concepts in our file format are already part of the \"DBPedia ontology\",\namong others. So, we could redesign our file format to exploit this, by referencing for example [https://dbpedia.org/ontology/ChemicalCompound](https://dbpedia.org/ontology/ChemicalCompound):", "_____no_output_____" ] ], [ [ "%%writefile chemistry_template3.mko\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<system xmlns=\"https://www.turing.ac.uk/rsd-engineering/schema/reaction\"\n xmlns:dbo=\"https://dbpedia.org/ontology/\">\n%for reaction in reactions:\n <reaction>\n <reactants>\n %for molecule in reaction.reactants.molecules:\n <dbo:ChemicalCompound stoichiometry=\"${reaction.reactants.molecules[molecule]}\">\n %for element in molecule.elements:\n <dbo:ChemicalElement symbol=\"${element.symbol}\"\n number=\"${molecule.elements[element]}\"/>\n %endfor\n </dbo:ChemicalCompound>\n %endfor\n </reactants>\n <products>\n %for molecule in reaction.products.molecules:\n <dbo:ChemicalCompound stoichiometry=\"${reaction.products.molecules[molecule]}\">\n %for element in molecule.elements:\n <dbo:ChemicalElement symbol=\"${element.symbol}\"\n number=\"${molecule.elements[element]}\"/>\n %endfor\n </dbo:ChemicalCompound>\n %endfor\n </products>\n </reaction>\n%endfor\n</system>", "Overwriting chemistry_template3.mko\n" ] ], [ [ "However, this won't work properly, because it's not up to us to define the XML schema for somebody\nelse's entity type: and an XML schema can only target one target namespace.\n\nOf course we should use somebody else's file format for chemical reaction networks: compare [SBML](http://sbml.org) for example. We already know not to reinvent the wheel - and this whole lecture series is just reinventing the wheel for pedagogical purposes. But what if we've already got a bunch of data in our own format. How can we lock down the meaning of our terms?\n\nSo, we instead need to declare that our `r:element` *represents the same concept* as `dbo:ChemicalElement`. 
To do this formally we will need the concepts from the next lecture, specifically `rdf:sameAs`, but first, let's understand the idea of an ontology.", "_____no_output_____" ], [ "## Taxonomies and ontologies", "_____no_output_____" ], [ "An Ontology (in computer science terms) is two things: a **controlled vocabulary** of entities (a set of URIs in a namespace), the definitions thereof, and the relationships between them.", "_____no_output_____" ], [ "People often casually use the word to mean any formalised taxonomy, but the relation of terms in the ontology to the concepts they represent, and the relationships between them, are also critical.", "_____no_output_____" ], [ "Have a look at another example: [https://dublincore.org/documents/dcmi-terms/](https://dublincore.org/documents/dcmi-terms/#terms-creator)", "_____no_output_____" ], [ "Note each concept is a URI, but some of these are also stated to be subclasses or superclasses of the others.", "_____no_output_____" ], [ "Some are properties of other things, and the domain and range of these verbs are also stated.", "_____no_output_____" ], [ "Why is this useful for us in discussing file formats?", "_____no_output_____" ], [ "One of the goals of the **semantic web** is to create a way to make file formats which are universally meaningful\nas found data: if I have a file format defined using any formalised ontology, then by tracing statements\nthrough *rdf:sameAs* relationships, I should be able to reconstruct the information I need.\n \nThat will be the goal of the next lecture.\n", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ] ]
d0192e99c96f920797f0bebb63e8263537e92b02
13,089
ipynb
Jupyter Notebook
docs/contents/.ipynb_checkpoints/Explorer-checkpoint.ipynb
uibcdf/OpenMembrane
c9705cb32706b882bdb3d75d19539ca323e6b741
[ "MIT" ]
null
null
null
docs/contents/.ipynb_checkpoints/Explorer-checkpoint.ipynb
uibcdf/OpenMembrane
c9705cb32706b882bdb3d75d19539ca323e6b741
[ "MIT" ]
null
null
null
docs/contents/.ipynb_checkpoints/Explorer-checkpoint.ipynb
uibcdf/OpenMembrane
c9705cb32706b882bdb3d75d19539ca323e6b741
[ "MIT" ]
null
null
null
20.809221
106
0.525327
[ [ [ "%load_ext autoreload\n%autoreload 2", "_____no_output_____" ], [ "import molsysmt as msm\nimport openexplorer as oe\nimport numpy as np\nfrom simtk import unit\nfrom simtk.openmm import app", "_____no_output_____" ] ], [ [ "# Explorer", "_____no_output_____" ] ], [ [ "modeller = msm.convert('alanine_dipeptide.pdb', to_form='openmm.Modeller')\n\ntopology = modeller.topology\npositions = modeller.positions\n\nforcefield = app.ForceField('amber10.xml', 'amber10_obc.xml')\nsystem = forcefield.createSystem(topology, constraints=app.HBonds, nonbondedMethod=app.NoCutoff)", "_____no_output_____" ], [ "explorer = oe.Explorer(topology, system, platform='CUDA')", "_____no_output_____" ], [ "explorer.set_coordinates(positions)", "_____no_output_____" ], [ "explorer.get_potential_energy()", "_____no_output_____" ], [ "explorer.get_potential_energy_gradient()", "_____no_output_____" ], [ "explorer.get_potential_energy_hessian()", "_____no_output_____" ], [ "coordinates = explorer.get_coordinates()", "_____no_output_____" ], [ "explorer_2 = explorer.replicate()", "_____no_output_____" ], [ "explorer_2", "_____no_output_____" ] ], [ [ "## Quenching", "_____no_output_____" ] ], [ [ "explorer.set_coordinates(positions)\nexplorer.quench.l_bfgs()\nexplorer.get_potential_energy()", "_____no_output_____" ], [ "explorer.set_coordinates(positions)\nexplorer.quench.fire()\nexplorer.get_potential_energy()", "_____no_output_____" ], [ "explorer.set_coordinates(positions)\nexplorer.quench.gradient_descent()\nexplorer.get_potential_energy()", "_____no_output_____" ] ], [ [ "## Moves", "_____no_output_____" ] ], [ [ "explorer.set_coordinates(positions)\nexplorer.move.random_atoms_shifts()\nexplorer.get_potential_energy()", "_____no_output_____" ], [ "explorer.set_coordinates(positions)\nexplorer.move.random_atoms_max_shifts()\nexplorer.get_potential_energy()", "_____no_output_____" ], [ "explorer.set_coordinates(positions)\nexplorer.move.random_atoms_rsmd()\nexplorer.get_potential_energy()", "_____no_output_____" ], [ "explorer.set_coordinates(positions)\nexplorer.move.random_atoms_max_rsmd()\nexplorer.get_potential_energy()", "_____no_output_____" ], [ "explorer.set_coordinates(positions)\nexplorer.move.random_dihedral_shifts()\nexplorer.get_potential_energy()", "_____no_output_____" ], [ "explorer.set_coordinates(positions)\nexplorer.move.random_dihedral_max_shifts()\nexplorer.get_potential_energy()", "_____no_output_____" ], [ "explorer.set_coordinates(positions)\nexplorer.move.random_dihedral_rmsd()\nexplorer.get_potential_energy()", "_____no_output_____" ], [ "explorer.set_coordinates(positions)\nexplorer.move.random_dihedral_max_rmsd()\nexplorer.get_potential_energy()", "_____no_output_____" ] ], [ [ "## Dynamics", "_____no_output_____" ] ], [ [ "explorer.set_coordinates(positions)\nexplorer.md.langevin(500)\nexplorer.get_potential_energy()", "_____no_output_____" ] ], [ [ "## Distance", "_____no_output_____" ] ], [ [ "explorer.set_coordinates(coordinates)\nexplorer.md.langevin(500)", "_____no_output_____" ], [ "explorer.distance.rmsd(positions)", "_____no_output_____" ], [ "explorer.distance.least_rmsd(positions)", "_____no_output_____" ], [ "explorer.set_coordinates(positions)", "_____no_output_____" ], [ "explorer_2 = explorer.replicate()", "_____no_output_____" ], [ "explorer.md.langevin(500)", "_____no_output_____" ], [ "explorer.distance.rmsd(explorer_2)", "_____no_output_____" ], [ "explorer.distance.least_rmsd(explorer_2)", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ] ]
d0193d0b882be0be1d0bd054830abe82357317fb
123,271
ipynb
Jupyter Notebook
lectures/notebooks/Lecture 01 - Python for Data Science.ipynb
materialsvirtuallab/nano281
d527c5049aab3da99237cbff0cc749640b2c9c06
[ "BSD-3-Clause" ]
38
2019-12-23T13:14:53.000Z
2022-01-25T23:59:33.000Z
lectures/notebooks/Lecture 01 - Python for Data Science.ipynb
materialsvirtuallab/nano281
d527c5049aab3da99237cbff0cc749640b2c9c06
[ "BSD-3-Clause" ]
null
null
null
lectures/notebooks/Lecture 01 - Python for Data Science.ipynb
materialsvirtuallab/nano281
d527c5049aab3da99237cbff0cc749640b2c9c06
[ "BSD-3-Clause" ]
18
2020-02-10T20:43:39.000Z
2022-01-21T13:45:36.000Z
75.257021
24,736
0.70938
[ [ [ "# Basic Python\n\nIntroduction to some basic python data types.", "_____no_output_____" ] ], [ [ "x = 1\ny = 2.0\ns = \"hello\"\nl = [1, 2, 3, \"a\"]\nd = {\"a\": 1, \"b\": 2, \"c\": 3}", "_____no_output_____" ] ], [ [ "Operations behave as per what you would expect.", "_____no_output_____" ] ], [ [ "z = x * y\nprint(z)", "2.0\n" ], [ "# Getting item at index 3 - note that Python uses zero-based indexing.\nprint(l[3])\n\n# Getting the index of an element\nprint(l.index(2))\n\n# Concatenating lists is just using the '+' operator.\nprint(l + l)", "a\n1\n[1, 2, 3, 'a', 1, 2, 3, 'a']\n" ] ], [ [ "Dictionaries are essentially key-value pairs", "_____no_output_____" ] ], [ [ "print(d[\"c\"]) # Getting the value associated with \"c\"", "3\n" ] ], [ [ "# Numpy and scipy", "_____no_output_____" ], [ "By convention, numpy is import as np and scipy is imported as sp.", "_____no_output_____" ] ], [ [ "import numpy as np\nimport scipy as sp", "_____no_output_____" ] ], [ [ "An array is essentially a tensor. It can be an arbitrary number of dimensions. For simplicity, we will stick to basic 1D vectors and 2D matrices for now.", "_____no_output_____" ] ], [ [ "x = np.array([[1, 2, 3],\n [4, 7, 6],\n [9, 4, 2]])\ny = np.array([1.5, 0.5, 3])\nprint(x)\nprint(y)", "[[1 2 3]\n [4 7 6]\n [9 4 2]]\n[1.5 0.5 3. ]\n" ] ], [ [ "By default, operations are element-wise.", "_____no_output_____" ] ], [ [ "print(x + x)\nprint(x * x)\nprint(y * y)", "[[ 2 4 6]\n [ 8 14 12]\n [18 8 4]]\n[[ 1 4 9]\n [16 49 36]\n [81 16 4]]\n[2.25 0.25 9. ]\n" ], [ "print(np.dot(x, x))", "[[36 28 21]\n [86 81 66]\n [43 54 55]]\n" ], [ "print(np.dot(x, y))", "[11.5 27.5 21.5]\n" ] ], [ [ "Or you can use the @ operator that is available in Python 3.7 onwards.", "_____no_output_____" ] ], [ [ "print(x @ x)\nprint(x @ y)", "[[36 28 21]\n [86 81 66]\n [43 54 55]]\n[11.5 27.5 21.5]\n" ] ], [ [ "Numpy also comes with standard linear algebra operations, such as getting the inverse.", "_____no_output_____" ] ], [ [ "print(np.linalg.inv(x))", "[[ 0.16949153 -0.13559322 0.15254237]\n [-0.77966102 0.42372881 -0.10169492]\n [ 0.79661017 -0.23728814 0.01694915]]\n" ] ], [ [ "Eigen values and vectors", "_____no_output_____" ] ], [ [ "print(np.linalg.eig(x))", "(array([12.50205135, -3.75787445, 1.2558231 ]), array([[-0.27909662, -0.40149786, 0.3019769 ],\n [-0.79317124, -0.32770088, -0.78112084],\n [-0.5412804 , 0.85522605, 0.54649811]]))\n" ] ], [ [ "Use of numpy vectorization is key to efficient coding. Here we use the Jupyter %time magic function to demonstrate the relative speeds to two methods of calculation the L2 norm of a very long vector.", "_____no_output_____" ] ], [ [ "r = np.random.rand(10000, 1)", "_____no_output_____" ], [ "%time sum([i**2 for i in r])**0.5\n%time np.sqrt(np.sum(r**2))\n%time np.linalg.norm(r)", "CPU times: user 17.7 ms, sys: 1.14 ms, total: 18.8 ms\nWall time: 18.6 ms\nCPU times: user 86 µs, sys: 30 µs, total: 116 µs\nWall time: 93.9 µs\nCPU times: user 1.33 ms, sys: 347 µs, total: 1.67 ms\nWall time: 723 µs\n" ] ], [ [ "Scipy has all the linear algebra functions as numpy and more. 
Moreover, scipy is always compiled with fast BLAS and LAPACK.", "_____no_output_____" ] ], [ [ "import scipy.linalg as linalg\nlinalg.inv(x)", "_____no_output_____" ], [ "import scipy.constants as const\nprint(const.e)\nprint(const.h)", "1.602176634e-19\n6.62607015e-34\n" ], [ "import scipy.stats as stats", "_____no_output_____" ], [ "dist = stats.norm(0, 1) # Gaussian distribution\ndist.cdf(1.96)", "_____no_output_____" ] ], [ [ "# Pandas\n\npandas is one of the most useful packages that you will be using extensively during this course. You should become very familiar with the Series and DataFrame objects in pandas. Here, we will read in a csv (comma-separated value) file downloaded from figshare. While you can certainly manually download the csv and just called pd.read_csv(filename), we will just use the request method to directly grab the file and read it in using a StringIO stream.", "_____no_output_____" ] ], [ [ "import pandas as pd\nfrom io import StringIO\nimport requests\nfrom IPython.display import display\n\n# Get the raw text of the data directly from the figshare url.\nurl = \"https://ndownloader.figshare.com/files/13007075\"\nraw = requests.get(url).text\n# Then reads in the data as a pandas DataFrame.\ndata = pd.read_csv(StringIO(raw))\ndisplay(data)", "_____no_output_____" ] ], [ [ "Here, we will get one column from the DataFrame - this is a Pandas Series object.", "_____no_output_____" ] ], [ [ "print(data[\"Enorm (eV)\"])", "0 0.000000\n1 -0.090142\n2 0.259139\n3 -0.022200\n4 0.317672\n ... \n403 -0.067020\n404 0.153850\n405 0.248110\n406 0.204140\n407 0.248040\nName: Enorm (eV), Length: 408, dtype: float64\n" ], [ "df = data[data[\"Enorm (eV)\"] >= 0]\ndf.describe()", "_____no_output_____" ] ], [ [ "Pandas dataframes come with some conveience functions for quick visualization.", "_____no_output_____" ] ], [ [ "df.plot(x=\"Enorm (eV)\", y=\"E_raw (eV)\", kind=\"scatter\");", "_____no_output_____" ] ], [ [ "# Seaborn\n\nHere we demonstrate some basic statistical data visualization using the seaborn package. A helpful resource is the [seaborn gallery](https://seaborn.pydata.org/examples/index.html) which has many useful examples with source code.", "_____no_output_____" ] ], [ [ "import seaborn as sns\n%matplotlib inline", "_____no_output_____" ], [ "sns.distplot(df[\"Enorm (eV)\"], norm_hist=False);", "_____no_output_____" ], [ "sns.scatterplot(x=\"Enorm (eV)\", y=\"E_raw (eV)\", data=df);", "_____no_output_____" ] ], [ [ "# Materials API using pymatgen", "_____no_output_____" ], [ "The MPRester.query method allows you to perform direct queries to the Materials Project to obtain data. What is returned is a list of dict of properties.", "_____no_output_____" ] ], [ [ "from pymatgen.ext.matproj import MPRester\nmpr = MPRester()\ndata = mpr.query(criteria=\"*-O\", properties=[\"pretty_formula\", \"final_energy\", \"band_gap\", \"elasticity.K_VRH\"])\n# What is returned is a list of dict. Let's just see what the first item in the list looks out. \nimport pprint\npprint.pprint(data[0])", "_____no_output_____" ] ], [ [ "The above is not very friendly for manipulation and visualization. Thankfully, we can easily convert this to a pandas DataFrame since the DataFrame constructor takes in lists of dicts as well.", "_____no_output_____" ] ], [ [ "df = pd.DataFrame(data)\ndisplay(df)", "_____no_output_____" ] ], [ [ "Oftentimes, you only want the subset of data with valid values. In the above data, it is clear that some of the entries do not have elasticity.K_VRH data. 
So we will use the dropna method of the pandas DataFrame to get a new DataFrame with just valid data. Note that a lot of Pandas methods return a new DataFrame. This ensures that you always have the original object to compare to. If you want to perform the operation in place, you can usually supply `inplace=True` to the method.", "_____no_output_____" ] ], [ [ "valid_data = df.dropna()\nprint(valid_data)", " pretty_formula final_energy band_gap elasticity.K_VRH\n1 BaO2 -16.991508 2.1206 28.0\n2 BaO -23.550004 2.3711 67.0\n5 Bi2O3 -28.415230 1.1772 117.0\n6 CeO2 -49.897720 0.6980 148.0\n7 CeO2 -51.753294 1.9556 132.0\n... ... ... ... ...\n2234 ZnO -105.067224 0.5298 167.0\n2251 WO3 -151.123549 0.0000 89.0\n2253 WO3 -120.135441 1.8967 37.0\n2261 WO3 -120.093040 1.6755 50.0\n2267 WO2 -87.834801 0.0000 116.0\n\n[387 rows x 4 columns]\n" ] ], [ [ "Seaborn works very well with Pandas DataFrames...", "_____no_output_____" ] ], [ [ "sns.scatterplot(x=\"band_gap\", y=\"elasticity.K_VRH\", data=valid_data);", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
d01951b178800fed00e52c4fc1b4b1ff70c1b48d
325,054
ipynb
Jupyter Notebook
1_2_Convolutional_Filters_Edge_Detection/5. Canny Edge Detection.ipynb
Abdulrahman-Adel/CVND-Exercises
ec8618e1651b5302c37788b2383620d143fdd8e3
[ "MIT" ]
null
null
null
1_2_Convolutional_Filters_Edge_Detection/5. Canny Edge Detection.ipynb
Abdulrahman-Adel/CVND-Exercises
ec8618e1651b5302c37788b2383620d143fdd8e3
[ "MIT" ]
null
null
null
1_2_Convolutional_Filters_Edge_Detection/5. Canny Edge Detection.ipynb
Abdulrahman-Adel/CVND-Exercises
ec8618e1651b5302c37788b2383620d143fdd8e3
[ "MIT" ]
null
null
null
1,269.742188
112,676
0.96056
[ [ [ "import numpy as np\nimport matplotlib.pyplot as plt\nimport cv2\n\n%matplotlib inline\n\n# Read in the image\nimage = cv2.imread('images/brain_MR.jpg')\n\n# Change color to RGB (from BGR)\nimage = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)\n\nplt.imshow(image)", "_____no_output_____" ], [ "# Convert the image to grayscale for processing\ngray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)\n\nplt.imshow(gray, cmap='gray')", "_____no_output_____" ] ], [ [ "### Implement Canny edge detection", "_____no_output_____" ] ], [ [ "# Try Canny using \"wide\" and \"tight\" thresholds\n\nwide = cv2.Canny(gray, 30, 100)\ntight = cv2.Canny(gray, 200, 240)\n \n \n# Display the images\nf, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,10))\n\nax1.set_title('wide')\nax1.imshow(wide, cmap='gray')\n\nax2.set_title('tight')\nax2.imshow(tight, cmap='gray')", "_____no_output_____" ] ], [ [ "### TODO: Try to find the edges of this flower\n\nSet a small enough threshold to isolate the boundary of the flower.", "_____no_output_____" ] ], [ [ "# Read in the image\nimage = cv2.imread('images/sunflower.jpg')\n\n# Change color to RGB (from BGR)\nimage = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)\n\nplt.imshow(image)", "_____no_output_____" ], [ "# Convert the image to grayscale\ngray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)\n\n## TODO: Define lower and upper thresholds for hysteresis\n# right now the threshold is so small and low that it will pick up a lot of noise\nlower = 70\nupper = 210\n\nedges = cv2.Canny(gray, lower, upper)\n\nplt.figure(figsize=(20,10))\nplt.imshow(edges, cmap='gray')", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
d01952cbf6942bfc64f25993b8315e6a75b71ead
229,056
ipynb
Jupyter Notebook
notebooks/VisualizeRotation.ipynb
stevenygd/latent_3d_points
cf8c0888f4489690fa5b692cbd44638f8db2d0ba
[ "MIT" ]
null
null
null
notebooks/VisualizeRotation.ipynb
stevenygd/latent_3d_points
cf8c0888f4489690fa5b692cbd44638f8db2d0ba
[ "MIT" ]
null
null
null
notebooks/VisualizeRotation.ipynb
stevenygd/latent_3d_points
cf8c0888f4489690fa5b692cbd44638f8db2d0ba
[ "MIT" ]
1
2020-10-12T04:48:43.000Z
2020-10-12T04:48:43.000Z
1,145.28
100,252
0.958914
[ [ [ "%matplotlib inline\n%load_ext autoreload\n%autoreload 2\nimport sys\nsys.path.insert(0, \"/home/gy46/\")", "_____no_output_____" ], [ "import numpy as np\nimport os.path as osp\n\nfrom latent_3d_points.src.evaluation_metrics import minimum_mathing_distance, \\\njsd_between_point_cloud_sets, coverage\n\nfrom latent_3d_points.src.in_out import snc_category_to_synth_id,\\\n load_all_point_clouds_under_folder", "_____no_output_____" ] ], [ [ "Load some point-clouds and make two sets (sample_pcs, ref_pcs) from them. The ref_pcs is considered as the __ground-truth__ data while the sample_pcs corresponds to a set that is matched against it, e.g. comes from a generative model.", "_____no_output_____" ] ], [ [ "# top_in_dir = '../data/shape_net_core_uniform_samples_2048/' # Top-dir of where point-clouds are stored.\n# top_in_dir = '../data/ShapeNetV1PCOutput/' # Top-dir of where point-clouds are stored.\ntop_in_dir = '../data/ShapeNetCore.v2.PC15k/'\nclass_name = 'chair'\nsyn_id = snc_category_to_synth_id()[class_name]\nclass_dir = osp.join(top_in_dir , syn_id, 'val')\n# all_pc_data = load_all_point_clouds_under_folder(class_dir, n_threads=8, file_ending='.ply', verbose=True)\nall_pc_data = load_all_point_clouds_under_folder(\n class_dir, n_threads=8, file_ending='.npy', verbose=True, normalize=True, rotation_axis=1)", "Give me the class name (e.g. \"chair\"): chair\n662 pclouds were loaded. They belong in 1 shape-classes.\n" ], [ "from mpl_toolkits.mplot3d import Axes3D\nimport matplotlib.pyplot as plt\nimport numpy as np\n\ndef plot_3d(pcl, axis=[0,1,2]):\n fig = plt.figure()\n ax = fig.add_subplot(111, projection='3d')\n ax.scatter(pcl[:,axis[0]], pcl[:,axis[1]], pcl[:,axis[2]], s=1)\n\n ax.set_xlabel('axis-0')\n ax.set_ylabel('axis-1')\n ax.set_zlabel('axis-2')\n plt.show()", "_____no_output_____" ], [ "from random import choice\npcls, _, _ = all_pc_data.next_batch(100)\nplot_3d(pcls[choice(range(pcls.shape[0]))], axis=[0,2,1])", "_____no_output_____" ], [ "\ntop_in_dir = '../data/ModelNet40.PC15k/'\n# class_name = raw_input('Give me the class name (e.g. \"chair\"): ').lower()\nclass_name = \"chair\"\nclass_dir = osp.join(top_in_dir , class_name, 'test')\n# all_pc_data = load_all_point_clouds_under_folder(class_dir, n_threads=8, file_ending='.ply', verbose=True)\nall_pc_data = load_all_point_clouds_under_folder(\n class_dir, n_threads=8, file_ending='.npy', verbose=True, normalize=True, rotation_axis=1)", "_____no_output_____" ], [ "from random import choice\npcls, _, _ = all_pc_data.next_batch(100)\nplot_3d(pcls[choice(range(pcls.shape[0]))], axis=[0,2,1])", "_____no_output_____" ] ] ]
[ "code", "markdown", "code" ]
[ [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ] ]
d0195815dbbeeb559ce6f2091c86b1fd0e41a2b0
203,770
ipynb
Jupyter Notebook
Python for beginners.ipynb
avkch/Python-for-beginners
d74a9b638dc316c5656d4d63c3c157beee6a34ea
[ "MIT" ]
null
null
null
Python for beginners.ipynb
avkch/Python-for-beginners
d74a9b638dc316c5656d4d63c3c157beee6a34ea
[ "MIT" ]
null
null
null
Python for beginners.ipynb
avkch/Python-for-beginners
d74a9b638dc316c5656d4d63c3c157beee6a34ea
[ "MIT" ]
null
null
null
29.180868
33,092
0.530157
[ [ [ "# Python programming for beginners\nanton.kichev@clarivate.com", "_____no_output_____" ], [ "## Agenda\n1. Background, why Python, [installation](#installation), IDE, setup\n2. Variables, Boolean, None, numbers (integers, floating point), check type\n3. List, Set, Dictionary, Tuple\n4. Text and regular expressions\n5. Conditions, loops\n6. Objects and Functions\n7. Special functions (range, enumerate, zip), Iterators\n8. I/O working with files, working directory, projects\n9. Packages, pip, selected packages (xmltodict, biopython, xlwings, pyautogui, sqlalchemy, cx_Oracle, pandas)\n10. Errors and debugging (try, except)\n11. virtual environments", "_____no_output_____" ], [ "## What is a Programming language?\n![images/image1.png](images/image1.png)", "_____no_output_____" ], [ "## Why Python\n\n### Advantages:\n* Opensource/ free - explanation\n* Easy to learn\n* Old\n* Popular\n* All purpose\n* Simple syntaxis\n* High level\n* Scripting\n* Dynamically typed\n\n### Disadvantages:\n* Old\n* Dynamically typed\n* Inconsistent development", "_____no_output_____" ], [ "<a id=\"installation\"></a>\n## Installation\n\n[Python](http://python.org/)\n\n[Anaconda](https://www.anaconda.com/products/individual)", "_____no_output_____" ], [ "## Integrated Development Environment (IDE)\n* IDLE – comes with Python\n* [Jupiter notebook](https://jupyter.org/install)\n* [google colab](https://colab.research.google.com/notebooks/basic_features_overview.ipynb#scrollTo=KR921S_OQSHG)\n* Spyder – comes with Anaconda\n* [Visual Studio Code](https://code.visualstudio.com/)\n* [PyCharm community](https://www.jetbrains.com/toolbox-app/)", "_____no_output_____" ], [ "## Python files\nPython files are text files with .py extension\n### Comments\nComments are pieces of code that are not going to be executed. In python everything after hashtag (#) on the same line is comment.\nComments are used to describe code: what is this particular piece of code doing and why you have created it.", "_____no_output_____" ] ], [ [ "# this is a comment it will be ignored when running the python file", "_____no_output_____" ] ], [ [ "## Variables\nAssigning value to a variable\n", "_____no_output_____" ] ], [ [ "my_variable = 3\nprint(my_variable)", "3\n" ] ], [ [ "### Naming variables\nVariable names cannot start with the number, cannot contain special characters or space except _\n\nShould not be name of python function.\n* variable1 -> <font color=green>this is OK</font>\n* 1variable -> <font color=red>this is not OK</font>\n* Important-variable! -> <font color=red>this is not OK</font>\n* myVariable -> <font color=green>this is OK</font>\n* my_variable -> <font color=green>this is OK</font>", "_____no_output_____" ], [ "## Data types\n### Numbers\n#### 1. integers (whole numbers)", "_____no_output_____" ] ], [ [ "var2 = 2\nmy_variable + var2\nprint(my_variable + 4)\nmy_variable = 6\nprint(my_variable +4)", "7\n10\n" ] ], [ [ "we can assign the result to another variable", "_____no_output_____" ] ], [ [ "result = my_variable + var2\nprint(result)", "8\n" ] ], [ [ "#### 2. 
Doubles (floating point number)", "_____no_output_____" ] ], [ [ "double = 2.05\nprint(double)", "2.05\n" ] ], [ [ " #### Mathematical operations\n<font color= #00B19C>- Additon and substraction</font>", "_____no_output_____" ] ], [ [ "2 + 3", "_____no_output_____" ], [ "5 - 2", "_____no_output_____" ] ], [ [ "<font color= #00B19C>- Multiplication and division</font>", "_____no_output_____" ] ], [ [ "2 * 3", "_____no_output_____" ], [ "6 / 2", "_____no_output_____" ] ], [ [ "<font color=red>Note: the result of division is float not int!</font>", "_____no_output_____" ], [ "<font color= #00B19C>- Exponential</font>", "_____no_output_____" ] ], [ [ "2 ** 4\n# 2**4 is equal to 2*2*2*2", "_____no_output_____" ] ], [ [ "<font color= #00B19C>- Floor division</font>", "_____no_output_____" ] ], [ [ "7 // 3", "_____no_output_____" ] ], [ [ "7/3 is 2.3333 the floor division is giving the whole number 2 (how many times you can fit 3 in 7)", "_____no_output_____" ], [ "<font color= #00B19C>- Modulo</font>", "_____no_output_____" ] ], [ [ "7.0 % 2", "_____no_output_____" ] ], [ [ "7//3 is 2, modulo is giving the remainder of the operation (what is left when you fit 2 times 3 in 7 ; 7 =2*3 + 1)\n<font color= red>Note: Floor division and modulo results are inegers if integers are used as arguments and float if one of the arguments is float</font>", "_____no_output_____" ], [ "### Special variables\n#### 1. None\nNone means variable without data type, nothing", "_____no_output_____" ] ], [ [ "var = None\nprint(var)", "None\n" ] ], [ [ "#### 2. Bolean\n<font color= red>Note: Bolean is type of integer that can take only values of 0 or 1</font>", "_____no_output_____" ] ], [ [ "var = True # or 1\nvar2 = False # or 0\nprint(var)\nprint(var+1)", "True\n2\n" ] ], [ [ "### Check variable type\n#### 1. type() function", "_____no_output_____" ] ], [ [ "print(type(True))\nprint(type(1))\nprint(type(my_variable))", "<class 'bool'>\n<class 'int'>\n<class 'int'>\n" ] ], [ [ "#### 2. isinstance() function", "_____no_output_____" ] ], [ [ "print(isinstance(True, bool))\nprint(isinstance(False, int))\nprint(isinstance(1, int))", "True\nTrue\nTrue\n" ] ], [ [ "## Comparing variables", "_____no_output_____" ] ], [ [ "print(1 == 1)\nprint(1 == 2)\nprint(1 != 2)\nprint(1 < 2)\nprint(1 > 2)", "True\nFalse\nTrue\nTrue\nFalse\n" ], [ "my_variable = None\nprint(my_variable == None)\nprint(my_variable is None)\nmy_variable = 1.5\nprint(my_variable == 1.5)\nprint(my_variable is 1.5)\nprint(my_variable is not None)", "True\nTrue\nTrue\nFalse\nTrue\n" ] ], [ [ "<font color= red>Note as a general rule of thumb use \"is\" \"is not\" when checking if variable is **None**, **True** or **False** in all other cases use \"==\"</format>", "_____no_output_____" ], [ "### Converting Int to Float and vs versa\n#### 1. float() function", "_____no_output_____" ] ], [ [ "float(3)", "_____no_output_____" ] ], [ [ "#### 2. 
int() function\n<font color= red>Note the int() conversion is taking in to account only the whole number int(2.9) = 2!</font>", "_____no_output_____" ] ], [ [ "int(2.9)", "_____no_output_____" ] ], [ [ "## Tuple\ntuple is a collection which is ordered and unchangeable.", "_____no_output_____" ] ], [ [ "my_tuple = (3, 8, 5, 7, 5) ", "_____no_output_____" ] ], [ [ "access tuple items by index \n<font color= red>Note Python is 0 indensing language = it starts to count from 0!</font>", "_____no_output_____" ], [ "![images/image3.png](images/image3.png)", "_____no_output_____" ] ], [ [ "print(my_tuple[0])\nprint(my_tuple[2:4])\nprint(my_tuple[2:])\nprint(my_tuple[:2])\nprint(my_tuple[-1])\nprint(my_tuple[::3])\nprint(my_tuple[1::2])\nprint(my_tuple[::-1])\nprint(my_tuple[-2::]) ", "3\n(5, 7)\n(5, 7, 5)\n(3, 8)\n5\n(3, 7)\n(8, 7)\n(5, 7, 5, 8, 3)\n(7, 5)\n" ] ], [ [ "### Tuple methods\nMethods are functions inside an object (every variable in Python is an object)\n#### 1.count() method - Counts number of occurrences of item in a tuple", "_____no_output_____" ] ], [ [ "my_tuple.count(6)", "_____no_output_____" ] ], [ [ "#### 2.index() method - Returns the index of first occurence of an item in a tuple", "_____no_output_____" ] ], [ [ "my_tuple.index(5)", "_____no_output_____" ] ], [ [ "#### Other operations with tuples\nAdding tuples", "_____no_output_____" ] ], [ [ "my_tuple + (7, 2, 1)", "_____no_output_____" ] ], [ [ "Nested tuples = tuples containing tuples", "_____no_output_____" ] ], [ [ "tuple_of_tuples = ((1,2,3),(3,4,5))", "_____no_output_____" ], [ "print(tuple_of_tuples)\nprint(tuple_of_tuples[0])\nprint(tuple_of_tuples[1][2])", "((1, 2, 3), (3, 4, 5))\n(1, 2, 3)\n5\n" ] ], [ [ "## List\nList is a collection which is ordered and changeable.", "_____no_output_____" ] ], [ [ "my_list = [3, 8, 5, 7, 5]", "_____no_output_____" ] ], [ [ "Accesing list members is exactly the same as accesing tuple members, .count() and .index() methods work the same way with lists.\n\nThe difference is that list members can be changed", "_____no_output_____" ] ], [ [ "my_list[1] = 9\nprint(my_list)", "[3, 9, 5, 7, 5]\n" ], [ "my_tuple[1] = 9", "_____no_output_____" ] ], [ [ "Lists are having more methods than tuples", "_____no_output_____" ], [ "#### 1.count() method \nsame as with tuple\n#### 2.index() method\nsame as with tuple\n#### 3.reverse() method\ninverting the list same as my_list[::-1]", "_____no_output_____" ] ], [ [ "my_list.reverse()\nprint(my_list)\nmy_list = my_list[::-1]\nprint(my_list)", "[5, 7, 5, 9, 3]\n[3, 9, 5, 7, 5]\n" ] ], [ [ "#### 4.sort() method\nsorting the list from smallest to largest or alphabetically in case of text", "_____no_output_____" ] ], [ [ "my_list.sort()\nprint(my_list)\nmy_list.sort(reverse=True)\nprint(my_list)", "[3, 5, 5, 7, 9]\n[9, 7, 5, 5, 3]\n" ] ], [ [ "#### 5.clear() method\nremoving everything from a list, equal to my_list = []", "_____no_output_____" ] ], [ [ "my_list.clear()\nprint(my_list)", "[]\n" ] ], [ [ "#### 6.remove() method\nRemoves the first item with the specified value", "_____no_output_____" ] ], [ [ "my_list = [3, 8, 5, 7, 5]\nmy_list.remove(7)\nprint(my_list)", "[3, 8, 5, 5]\n" ] ], [ [ "#### 7.pop() method\nRemoves the element at the specified position", "_____no_output_____" ] ], [ [ "my_list.pop(0)\nprint(my_list)", "[8, 5, 5]\n" ] ], [ [ "#### 8.copy() method\nReturns a copy of the list", "_____no_output_____" ] ], [ [ "my_list_copy = my_list.copy()\nprint(my_list_copy)", "[3, 8, 5, 7, 5]\n" ] ], [ [ "what is the problem with 
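For example, a few of these methods in action (a short sketch using two small sets):\n\n```python\nset_a = {1, 2, 3, 4, 5}\nset_b = {4, 5, 6, 7, 8}\nprint(set_a.difference(set_b))             # {1, 2, 3}\nprint(set_a.symmetric_difference(set_b))   # {1, 2, 3, 6, 7, 8}\nprint(set_a.issubset({1, 2, 3, 4, 5, 6}))  # True\n```", "_____no_output_____" ] ], 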
[ [ "set_a = {1,2,3,4,5}\nset_b = {4,5,6,7,8}\nprint(set_a.union(set_b))\nprint(set_a.intersection(set_b))", "{1, 2, 3, 4, 5, 6, 7, 8}\n{4, 5}\n" ] ], [ [ "## Converting tuple to list to set \nwe can convert any tuple or set to a list with the **list()** function\n\nwe can convert any list or set to a tuple with the **tuple()** function\n\nwe can convert any tuple or list to a set with the **set()** function", "_____no_output_____" ] ], [ [ "my_list = [3, 8, 5, 7, 5]\nprint(my_list)\nmy_tuple = tuple(my_list)\nprint(my_tuple)\nmy_set =set(my_list)\nprint(my_set)\nmy_list2 = list(my_set)\nprint(my_list2)", "[3, 8, 5, 7, 5]\n(3, 8, 5, 7, 5)\n{8, 3, 5, 7}\n[8, 3, 5, 7]\n" ] ], [ [ "these functions can be nested", "_____no_output_____" ] ], [ [ "my_unique_list = list(set(my_list))\nprint(my_unique_list)", "[8, 3, 5, 7]\n" ] ], [ [ "### Checking if something is in a list, set, tuple", "_____no_output_____" ] ], [ [ "print(3 in my_set)\nprint(9 in my_set)\nprint(3 in my_list)\nprint(9 in my_tuple)", "True\nFalse\nTrue\nFalse\n" ] ], [ [ "## Dictionary\nA dictionary is a collection of key-value pairs, which is changeable and indexed by key (since Python 3.7 dictionaries also preserve insertion order)", "_____no_output_____" ] ], [ [ "my_dict = {1: 2.3, \n 2: 8.6}\nprint(my_dict[2])", "8.6\n" ], [ "print(my_dict[3])", "_____no_output_____" ], [ "print(my_dict.keys())\nprint(my_dict.values())\nprint(1 in my_dict.keys())\nprint(2.3 in my_dict.values())\nprint(my_dict.items())", "dict_keys([1, 2])\ndict_values([2.3, 8.6])\nTrue\nTrue\ndict_items([(1, 2.3), (2, 8.6)])\n" ] ], [ [ "## Strings\nStrings are ordered sequences of characters; strings are unchangeable (immutable)", "_____no_output_____" ] ], [ [ "print(my_dict.get(2))", "8.6\n" ], [ "my_string = 'this is string'\nother_string = \"this is string as well\"\nmultilane_string = '''this is\na multi lane \nstring'''\nprint(my_string)\nprint(other_string)\nprint(multilane_string)", "this is string\nthis is string as well\nthis is\na multi lane \nstring\n" ], [ "my_string = 'this \"word\" is in quotes'\nmy_other_string = \"This is Maria's book\"\nprint(my_string)\nprint(my_other_string)", "this \"word\" is in quotes\nThis is Maria's book\n" ], [ "my_string = \"this \\\"word\\\" is in quotes\"\nmy_other_string = 'This is Maria\\'s book'\nprint(my_string)\nprint(my_other_string)", "this \"word\" is in quotes\nThis is Maria's book\n" ], [ "my_number = 9\nmy_string = '9'\nprint(my_number+1)\nprint(my_string+1)", "10\n" ], [ "print(my_string+'1')\nprint(int(my_string)+1)\nprint(my_number+int('1'))", "91\n10\n10\n" ] ], [ [ "Accessing individual characters in a string works exactly the same as indexing lists and tuples", "_____no_output_____" ] ], [ [ "print(other_string)\nprint(other_string[0])\nprint(other_string[::-1])", "this is string as well\nt\nllew sa gnirts si siht\n" ] ], [ [ "## String methods\n\n.capitalize()\t- Converts the first character to upper case\n\n.casefold()\t- Converts string into lower case\n\n.center()\t- Returns a centered string\n\n.count()\t- Returns the number of times a specified value occurs in a string\n\n.encode()\t- Returns an encoded version of the string\n\n.endswith()\t- Returns true if the string ends with the specified value\n\n.expandtabs()\t- Sets the tab size of the string\n\n.find()\t- Searches the string for a specified value and returns the position of where it was found\n\n.format()\t- Formats specified values in a string\n\n.format_map()\t- Formats specified values in a string\n\n.index()\t- Searches the string for a specified value and returns the position of where it was found\n\n.isalnum()\t- Returns True if all characters in the string are alphanumeric\n\n.isalpha()\t- Returns True if all characters in the string are in the alphabet\n\n.isdecimal()\t- Returns True if all characters in the string are decimals\n\n.isdigit()\t- Returns True if all characters in the string are digits\n\n.isidentifier()\t- Returns True if the string is an identifier\n\n.islower()\t- Returns True if all characters in the string are lower case\n\n.isnumeric()\t- Returns True if all characters in the string are numeric\n\n.isprintable()\t- Returns True if all characters in the string are printable\n\n.isspace()\t- Returns True if all characters in the string are whitespaces\n\n.istitle()\t- Returns True if the string follows the rules of a title\n\n.isupper()\t- Returns True if all characters in the string are upper case\n\n.join()\t- Joins the elements of an iterable to the end of the string\n\n.ljust()\t- Returns a left justified version of the string\n\n.lower()\t- Converts a string into lower case\n\n.lstrip()\t- Returns a left trim version of the string\n\n.maketrans()\t- Returns a translation table to be used in translations\n\n.partition()\t- Returns a tuple where the string is parted into three parts\n\n.replace()\t- Returns a string where a specified value is replaced with a specified value\n\n.rfind()\t- Searches the string for a specified value and returns the last position of where it was found\n\n.rindex()\t- Searches the string for a specified value and returns the last position of where it was found\n\n.rjust()\t- Returns a right justified version of the string\n\n.rpartition()\t- Returns a tuple where the string is parted into three parts\n\n.rsplit()\t- Splits the string at the specified separator, and returns a list\n\n.rstrip()\t- Returns a right trim version of the string\n\n.split()\t- Splits the string at the specified separator, and returns a list\n\n.splitlines()\t- Splits the string at line breaks and returns a list\n\n.startswith()\t- Returns true if the string starts with the specified value\n\n.strip()\t- Returns a trimmed version of the string\n\n.swapcase()\t- Swaps cases, lower case becomes upper case and vice versa\n\n.title()\t- Converts the first character of each word to upper case\n\n.translate()\t- Returns a translated string\n\n.upper()\t- Converts a string into upper case\n\n.zfill()\t- Fills the string with a specified number of 0 values at the beginning\n\n
[ [ "my_string = ' string with spaces '\nprint(my_string)\nmy_stripped_string = my_string.strip()\nprint(my_stripped_string)", " string with spaces \nstring with spaces\n" ], [ "print('ABC' == 'ABC')\nprint('ABC' == ' ABC ')", "True\nFalse\n" ], [ "list_of_words = my_string.split()\nprint(list_of_words)", "['string', 'with', 'spaces']\n" ], [ "text = 'id1, id2, id3, id4'\nids_list = text.split(', ')\nprint(ids_list)", "['id1', 'id2', 'id3', 'id4']\n" ], [ "new_text = ' / '.join(ids_list)\nprint(new_text)", "id1 / id2 / id3 / id4\n" ], [ "xml_text = 'this is <body>text</body> with xml tags'\nxml_text.find('<body>')", "_____no_output_____" ], [ "xml_body = xml_text[xml_text.find('<body>')+len('<body>'):xml_text.find('</body>')]\nprint(xml_body)", "text\n" ] ], [ [ "### Other operations with strings\n\nCombining (adding) strings", "_____no_output_____" ] ], [ [ "text = 'text1'+'text2'\nprint(text)", "text1text2\n" ], [ "text = 'text1'*4\nprint(text)", "text1text1text1text1\n" ] ], [ [ "Raw and formatted strings", "_____no_output_____" ] ], [ [ "file_location = 'C:\\Users\\U6047694\\Documents\\job\\Python_Projects\\file.txt'", "_____no_output_____" ], [ "file_location = r'C:\\Users\\U6047694\\Documents\\job\\Python_Projects\\file.txt'\nprint(file_location)", "C:\\Users\\U6047694\\Documents\\job\\Python_Projects\\file.txt\n" ], [ "var1 = 5\nvar2 = 6\nprint(f'Var1 is: {var1}, var2 is: {var2} and the sum is: {var1+var2}')\n# this is the same as \nprint('Var1 is: '+str(var1)+', var2 is: '+str(var2)+' and the sum is: '+str(var1+var2))", "Var1 is: 5, var2 is: 6 and the sum is: 11\nVar1 is: 5, var2 is: 6 and the sum is: 11\n" ] ], [ [ "## Regular expressions in Python\nThe regular expressions in Python live in the separate package **re**; this package should be imported in order to access its functionality (methods).\n### Methods in the re package\n* re.search()\t- Check if a given pattern is present anywhere in the input string. Output is a re.Match object, usable in conditional expressions\n* re.fullmatch()\t- ensures the pattern matches the entire input string\n* re.compile()\t- Compile a pattern for reuse, outputs a re.Pattern object\n* re.sub()\t- search and replace\n* re.escape()\t- automatically escape all metacharacters\n* re.split()\t- split a string based on a RE; text matched by the groups will be part of the output\n* re.findall()\t- returns all the matches as a list\n* re.finditer()\t- iterator with a re.Match object for each match\n* re.subn()\t- gives a tuple of the modified string and the number of substitutions\n\n### re characters\n\n'.' - Match any character except newline\n\n'^' - Match the start of the string\n\n'$' - Match the end of the string\n\n'*' - Match 0 or more repetitions\n\n'+' - Match 1 or more repetitions\n\n'?' - Match 0 or 1 repetitions\n\n### re sets of characters\n\n'[]' - Match a set of characters\n\n'[a-z]' - Match any lowercase ASCII letter\n\n'[lower-upper]' - Match a set of characters from lower to upper\n\n'[^]' - Match characters NOT in a set\n\n<a href=\"https://cheatography.com/davechild/cheat-sheets/regular-expressions/\" >Cheat Sheet</a>\n\n<a href=\"https://docs.python.org/3/library/re.html\">re reference</a>\n\n
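For instance, a quick sketch of re.search and the re.Match object it returns:\n\n```python\nimport re\n\nmatch = re.search('t[a-z]*', 'this is a sample text')\nif match is not None:\n    print(match.group())  # this\n    print(match.span())   # (0, 4)\n```", "_____no_output_____" ] ], 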
[ [ "import re\n\ntext = 'this is a sample text for re testing'\nt_words = re.findall('t[a-z]* ', text)\nprint(t_words)\nnew_text = re.sub('t[a-z]* ', 'replace ', text)\nprint(new_text)", "['this ', 'text ']\nreplace is a sample replace for re testing\n" ] ], [ [ "## Conditions\n### IF, ELIF, ELSE conditions\nif condition syntax:", "_____no_output_____" ] ], [ [ "a = 3\n\nif a == 2:\n print('a is 2')", "_____no_output_____" ], [ "if a == 3:\n print('a is 3')\nelse:\n print('a is not 2')", "a is 3\n" ], [ "if a == 2:\n print('a is 2')\nelif a == 3:\n print('a is 3')\nelse:\n print('a is not 2 or 3')", "a is 3\n" ], [ "if a == 2:\n print('a is 2')\nif a == 3:\n print('a is 3')\nelse:\n print('a is not 2 or 3')", "a is 3\n" ], [ "if a > 2:\n print('a is bigger than 2')\nif a < 4:\n print('a is smaller than 4')\nelse: \n print('a is something else')", "a is bigger than 2\na is smaller than 4\n" ], [ "if a > 2:\n print('a is bigger than 2')\nelif a < 4:\n print('a is smaller than 4')\nelse: \n print('a is something else')", "a is bigger than 2\n" ] ], [ [ "#### OR / AND in conditional statements", "_____no_output_____" ] ], [ [ "b = 4\nif a > 2 or b < 2:\n print(f'a is: {a} b is: {b}.')", "a is: 3 b is: 4.\n" ] ], [ [ "#### Nested conditional statements", "_____no_output_____" ] ], [ [ "a = 2\nif a == 2:\n if b > a:\n print('b is bigger than a')\n else:\n print('b is not bigger than a')\nelse:\n print(f'a is {a}')", "b is bigger than a\n" ] ], [ [ "## Loops\n### FOR loop", "_____no_output_____" ] ], [ [ "my_list = [1, 3, 5]\nfor item in my_list:\n print(item)", "1\n3\n5\n" ] ], [ [ "### WHILE loop", "_____no_output_____" ] ], [ [ "a = 0\nwhile a < 5:\n a = a + 1 # or alternatively a += 1 \n print(a)", "1\n2\n3\n4\n5\n" ] ], [ [ "You can put an else statement in the while loop as well", "_____no_output_____" ] ], [ [ "a = 3\nwhile a < 5:\n a = a + 1 # or alternatively a += 1 \n print(a)\nelse:\n print('This is the end!')", "4\n5\nThis is the end!\n" ] ], [ [ "Loops can be nested as well", "_____no_output_____" ] ], [ [ "columns = ['A', 'B', 'C']\nrows = [1, 2, 3]\nfor column in columns:\n print(column)\n \n for row in rows:\n print(row)", "A\n1\n2\n3\nB\n1\n2\n3\nC\n1\n2\n3\n" ] ], [ [ "Break and continue: break stops the loop, continue skips to the next item in the loop", "_____no_output_____" ] ], [ [ "for column in columns:\n print(column)\n if column == 'B':\n break", "A\nB\n" ] ], [ [ "If we have nested loops, break will stop only the loop in which it is used", "_____no_output_____" ] ], [ [ "columns = ['A', 'B', 'C']\nrows = [1, 2, 3]\nfor column in columns:\n print(column)\n \n for row in rows:\n print(row)\n if row == 2:\n break", "A\n1\n2\nB\n1\n2\nC\n1\n2\n" ], [ "i = 0\nwhile i < 6:\n i += 1\n if i == 3:\n continue\n print(i)", "1\n2\n4\n5\n6\n" ] ], [ [ "## Objects\nEverything in Python is an object\n![images/image4.png](images/image4.png)", "_____no_output_____" ] ], [ [ "class Player:\n def __init__(self, name):\n self.name = name\n print(f'{self.name} is a Player')\n\n def run(self):\n return f'{self.name} is running'", "_____no_output_____" ], [ "player1 = Player('Messi')\nplayer1.run()", "Messi is a Player\n" ] ], [ [ "### Inheritance\nInheritance allows us to define a class that inherits all the methods and properties from another class.\n\nThe parent class is the class being inherited from, also called the base class.\n\nThe child class is the class that inherits from another class, also called the derived class.", "_____no_output_____" ] ], [ [ "class Futbol_player(Player):\n def kick_ball(self):\n return f'{self.name} is kicking the ball'\n\nclass Basketball_player(Player):\n def catch_ball(self):\n return f'{self.name} is catching the ball'", "_____no_output_____" ], [ "player2 = Futbol_player('Leo Messi')\nplayer2.kick_ball()", "Leo Messi is a Player\n" ], [ "player2.run()", "_____no_output_____" ], [ "player3 = Basketball_player('Pau Gasol')\nplayer3.catch_ball()", "Pau Gasol is a Player\n" ], [ "player3.kick_ball()", "_____no_output_____" ], [ "class a_list(list):\n def get_3_element(self):\n return self[3]\n\nmy_list = ['a', 'b', 'c', 'd']\nmy_a_list = a_list(['a', 'b', 'c', 'd'])\nmy_a_list.get_3_element()", "_____no_output_____" ], [ "my_list.get_3_element()", "_____no_output_____" ], [ "my_a_list.count('a')", "_____no_output_____" ] ], [ [ "## Functions\nA function is a block of code which only runs when it is called.\n\nYou can pass data, known as arguments or parameters, into a function.\n\nA function can return data as a result, or not.", "_____no_output_____" ] ], [ [ "def my_func(n):\n '''this is power function'''\n result = n*n\n return result", "_____no_output_____" ] ], [ [ "You can assign the result of a function to another variable", "_____no_output_____" ] ], [ [ "power5 = my_func(5)\nprint(power5)", "25\n" ] ], [ [ "A multiline string (docstring) can be used to describe a function; it can be accessed via the \\__doc\\__ attribute", "_____no_output_____" ] ], [ [ "print(my_func.__doc__)", "this is power function\n" ] ], [ [ "One function can return more than one value", "_____no_output_____" ] ], [ [ "def my_function(a):\n x = a*2\n y = a+2\n return x, y \n\nvariable1, variable2 = my_function(5)\nprint(variable1)\nprint(variable2)", "10\n7\n" ] ], [ [ "One function can have between 0 and many arguments ", "_____no_output_____" ] ], [ [ "def my_formula(a, b, c):\n y = (a*b) + c\n return y ", "_____no_output_____" ] ], [ [ "#### Positional arguments", "_____no_output_____" ] ], [ [ "my_formula(2,3,4)", "_____no_output_____" ] ], [ [ "#### Keyword arguments", "_____no_output_____" ] ], [ [ "my_formula(c=4, a=2, b=3)", "_____no_output_____" ] ], [ [ "You can pass both positional and keyword arguments to a function, but the positional ones should always come first", "_____no_output_____" ] ], [ [ "my_formula(4, c=4, b=3)", "_____no_output_____" ] ], [ [ "#### Default arguments\nThese are arguments that are assigned when declaring the function; if not specified in the call, they take the default value", "_____no_output_____" ] ], [ [ "def my_formula(a, b, c=3):\n y = (a*b) + c\n return y \n\nmy_formula(2, 3, c=6)", "_____no_output_____" ] ], [ [ "#### Arbitrary Arguments, \\*args:\n\nIf you do not know how many arguments will be passed into your function, add a * before the argument name in the function definition.\n\nThe function will receive a tuple of arguments, and they can be accessed accordingly:\n\n", "_____no_output_____" ] ], [ [ "def greeting(*args):\n greeting = f'Hi to {\", \".join(args[:-1])} and {args[-1]}'\n print(greeting)\ngreeting('Joe', 'Ben', 'Bobby')", "Hi to Joe, Ben and Bobby\n" ] ], [ [ "#### Arbitrary Keyword Arguments, \\**kwargs\n\nIf you do not know how many keyword arguments will be passed into your function, add two asterisks: ** before the parameter name in the function definition.\n\nThis way the function will receive a dictionary of arguments, and can access the items accordingly", "_____no_output_____" ] ], [ [ "def list_names(**kwargs):\n for key, value in kwargs.items():\n print(f'{key} is: {value}')\n \nlist_names(first_name='Jonny', family_name='Walker')", "first_name is: Jonny\nfamily_name is: Walker\n" ], [ "list_names(primer_nombre='Jose', segundo_nombre='Maria', primer_apellido='Peréz', segundo_apellido='García')", "primer_nombre is: Jose\nsegundo_nombre is: Maria\nprimer_apellido is: Peréz\nsegundo_apellido is: García\n" ] ], [ [ "### Scope of the function\nThe scope of a function is what the function can see and use.\n\nA function can use all global variables if there is no local variable with the same name assigned", "_____no_output_____" ] ], [ [ "a = 'Hello'\ndef my_function():\n print(a)\n \nmy_function()", "Hello\n" ] ], [ [ "If we have a local variable with the same name, the function will use the local one.", "_____no_output_____" ] ], [ [ "a = 'Hello'\ndef my_function():\n a = 'Hi'\n print(a)\n \nmy_function()", "Hi\n" ], [ "a = 'Hello'\ndef my_function():\n print(a)\n a = 'Hi'\n \nmy_function()", "_____no_output_____" ] ], [ [ "This is important, as it prevents us from accidentally changing global variables inside a function.\n\n
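If you really do need to rebind a global name inside a function, Python requires the `global` keyword (a small sketch):\n\n```python\ncounter = 0\n\ndef increment():\n    global counter  # declare that we mean the module-level name\n    counter = counter + 1\n\nincrement()\nprint(counter)  # 1\n```\n\nWithout such a declaration, the assignment below raises an error:", "_____no_output_____" ] ], 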
], [ [ "my_formula(4, c=4, b=3)", "_____no_output_____" ] ], [ [ "#### Default arguments\nThis are arguments that are assigned when declaring the function and if not specified will take the default data", "_____no_output_____" ] ], [ [ "def my_formula(a, b, c=3):\n y = (a*b) + c\n return y \n\nmy_formula(2, 3, c=6)", "_____no_output_____" ] ], [ [ "#### Arbitrary Arguments, \\*args:\n\nIf you do not know how many arguments that will be passed into your function, add a * before the argument name in the function definition.\n\nThe function will receive a tuple of arguments, and they can be access accordingly:\n\n", "_____no_output_____" ] ], [ [ "def greeting(*args):\n greeting = f'Hi to {\", \".join(args[:-1])} and {args[-1]}'\n print(greeting)\ngreeting('Joe', 'Ben', 'Bobby')", "Hi to Joe, Ben and Bobby\n" ] ], [ [ "#### Arbitrary Keyword Arguments, \\**kwargs\n\nIf you do not know how many keyword arguments that will be passed into your function, add two asterisk: ** before the parameter name in the function definition.\n\nThis way the function will receive a dictionary of arguments, and can access the items accordingly", "_____no_output_____" ] ], [ [ "def list_names(**kwargs):\n for key, value in kwargs.items():\n print(f'{key} is: {value}')\n \nlist_names(first_name='Jonny', family_name='Walker')", "first_name is: Jonny\nfamily_name is: Walker\n" ], [ "list_names(primer_nombre='Jose', segundo_nombre='Maria', primer_apellido='Peréz', segundo_apellido='García')", "primer_nombre is: Jose\nsegundo_nombre is: Maria\nprimer_apellido is: Peréz\nsegundo_apellido is: García\n" ] ], [ [ "### Scope of the function\nScope of the function is what a function can see and use.\n\nThe function can use all global variables if there is no local assigned", "_____no_output_____" ] ], [ [ "a = 'Hello'\ndef my_function():\n print(a)\n \nmy_function()", "Hello\n" ] ], [ [ "If we have local variable with the same name the function will use the local.", "_____no_output_____" ] ], [ [ "a = 'Hello'\ndef my_function():\n a = 'Hi'\n print(a)\n \nmy_function()", "Hi\n" ], [ "a = 'Hello'\ndef my_function():\n print(a)\n a = 'Hi'\n \nmy_function()", "_____no_output_____" ] ], [ [ "This is important as this is preventing us from changing global variables inside function", "_____no_output_____" ] ], [ [ "a = 'Hello'\ndef change_a():\n a = a + 'Hi'\n\nchange_a()\nprint(a)", "_____no_output_____" ] ], [ [ "A function cannot access local variables from another function.", "_____no_output_____" ] ], [ [ "def my_function():\n b = 'Hi'\n print(a)\n \ndef my_other_function():\n print(b)\n \nmy_other_function()", "_____no_output_____" ] ], [ [ "Local variables cannot be accessed from global environment", "_____no_output_____" ] ], [ [ "print(b)", "_____no_output_____" ] ], [ [ "Similar to variables you can use functions from the global environment or define them inside a parent function", "_____no_output_____" ] ], [ [ "def add_function(a, b):\n result = a + b\n return result\n\ndef formula_function(a, b, c):\n result = add_function(a, b) * c\n return result\nprint(formula_function(2,3,4))", "20\n" ] ], [ [ "We can use the result from one function as argument for another", "_____no_output_____" ] ], [ [ "print(formula_function(add_function(4,5), 3, 2))", "24\n" ] ], [ [ "We can use function as argument for another function or return function from another function, we have Anonymous/Lambda Function in Python as well.", "_____no_output_____" ], [ "#### Recursive functions\nRecursive function is function that is using (calling) 
itself", "_____no_output_____" ] ], [ [ "def factorial(x):\n \"\"\"This is a recursive function\n to find the factorial of an integer (factorial(4) = 4*3*2*1)\"\"\"\n\n if x == 1:\n return 1\n else:\n result = x * factorial(x-1)\n return result\nfactorial(5)\n\ndef extract('http..'):\n result = request('http..')\n if request = None:\n time.sleep(360)\n result = extract()", "_____no_output_____" ] ], [ [ "## Special functions (range, enumerate, zip)\n### range() function - is creating sequence", "_____no_output_____" ] ], [ [ "my_range = range(5)\nprint(my_range)\nmy_list = list(range(2, 10, 2))\nmy_list", "range(0, 5)\n" ], [ "my_list = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h']\nfor i in range(3, len(my_list), 2):\n print(my_list[i])", "d\nf\nh\n" ], [ "range_list = list(range(10))\nprint(range_list)", "[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n" ] ], [ [ "### enumerate() function is creating index for iterables ", "_____no_output_____" ] ], [ [ "import time\n\nmy_list = list(range(10))\nmy_second_list = []\n\nfor index, value in enumerate(my_list):\n time.sleep(1)\n my_second_list.append(value+2)\n print(f'{index+1} from {len(my_list)}')\nprint(my_second_list)", "1 from 10\n2 from 10\n3 from 10\n4 from 10\n5 from 10\n6 from 10\n7 from 10\n8 from 10\n9 from 10\n10 from 10\n[2, 3, 4, 5, 6, 7, 8, 9, 10, 11]\n" ], [ "print(my_second_list)", "[2, 3, 4, 5, 6, 7, 8, 9, 10, 11]\n" ] ], [ [ "### zip() function is aggregating items into tuples", "_____no_output_____" ] ], [ [ "list1 = [2, 4, 6, 7, 8]\nlist2 = ['a', 'b', 'c', 'd', 'e']\nfor item1, item2 in zip(list1, list2):\n print(f'item1 is:{item1} and item2 is: {item2}')", "item1 is:2 and item2 is: a\nitem1 is:4 and item2 is: b\nitem1 is:6 and item2 is: c\nitem1 is:7 and item2 is: d\nitem1 is:8 and item2 is: e\n" ] ], [ [ "### Iterator objects", "_____no_output_____" ] ], [ [ "string = 'abc'\nit = iter(string)\nit", "_____no_output_____" ], [ "next(it)", "_____no_output_____" ] ], [ [ "## I/O working with files, working directory, projects\nI/O = Input / Output. Loading data to python, getting data out of python", "_____no_output_____" ], [ "### Keyboard input\n#### input() function", "_____no_output_____" ] ], [ [ "str = input(\"Enter your input: \")\nprint(\"Received input is : \"+ str)", "Enter your input: Hi!\nReceived input is : Hi!\n" ] ], [ [ "### Console output\n#### print() function", "_____no_output_____" ] ], [ [ "print('Console output')", "Console output\n" ] ], [ [ "### Working with text files\n#### open() function\nopen(file_name [, access_mode][, buffering])\n\nfile_name = string with format 'C:/temp/my_file.txt'\n\naccess_mode = string with format: 'r', 'rb', 'w' etc\n\n1. r = Opens a file for reading only. The file pointer is placed at the beginning of the file. This is the default mode.\n2. rb = Opens a file for reading only in binary format. \n3. r+ = Opens a file for both reading and writing.\n4. rb = Opens a file for both reading and writing in binary format. \n5. w = Opens a file for writing only. Overwrites the file if the file exists. If the file does not exist, creates a new file for writing.\n6. wb = Opens a file for writing only in binary format. Overwrites the file if the file exists. If the file does not exist, creates a new file for writing.\n7. w+ = Opens a file for both writing and reading. Overwrites the existing file if the file exists. If the file does not exist, creates a new file for reading and writing.\n8. wb+ = Opens a file for both writing and reading in binary format. 
[ [ "txt_file = open('C:/temp/python test/txt_file.txt', 'w')", "_____no_output_____" ], [ "txt_file.write('some text')\n\ntxt_file.close()", "_____no_output_____" ], [ "txt_file = open('C:/temp/python test/txt_file.txt', 'r')\ntext = txt_file.read()\ntxt_file.close()\nprint(text)", "some text\n" ], [ "txt_file = open('C:/temp/python test/txt_file.txt', 'a')\ntxt_file.write('\\nsome more text')\ntxt_file.close()\n", "_____no_output_____" ], [ "txt_file = open('C:/temp/python test/txt_file.txt', 'r') \ntxt_lines = txt_file.readlines() \nprint(type(txt_lines))\ntxt_file.close()\nprint(txt_lines)", "<class 'list'>\n['some text\\n', 'some more text']\n" ], [ "txt_file = open('C:/temp/python test/txt_file.txt', 'r') \ntxt_line = txt_file.readline() \nprint(txt_line)\ntxt_line2 = txt_file.readline()\nprint(txt_line2)", "some text\n\nsome more text\n" ] ], [ [ "### Deleting files\nThis requires the os library; it is part of Python but is not loaded by default, so to use it we should import it", "_____no_output_____" ] ], [ [ "import os\nos.remove('C:/temp/python test/txt_file.txt')", "_____no_output_____" ], [ "if os.path.exists('C:/temp/python test/txt_file.txt'):\n os.remove('C:/temp/python test/txt_file.txt')\nelse:\n print('The file does not exist')", "The file does not exist\n" ] ], [ [ "### Removing directories with os.rmdir()\nTo delete a directory with os.rmdir() the directory should be empty; we can check what is inside the directory with os.listdir() or os.walk()", "_____no_output_____" ] ], [ [ "os.listdir('C:/temp/python test/')", "_____no_output_____" ], [ "os.walk('C:/temp/python test/')", "_____no_output_____" ], [ "for item in os.walk('C:/temp/python test/'):\n print(item[0])\n print(item[1])\n print(item[2])", "C:/temp/python test/\n['test dir']\n['test file.txt', 'txt_file.txt']\nC:/temp/python test/test dir\n[]\n[]\n" ] ], [ [ "### Rename file or directory", "_____no_output_____" ] ], [ [ "os.rename('C:/temp/python test/test file.txt', 'C:/temp/python test/test file renamed.txt')\nos.listdir('C:/temp/python test/')", "_____no_output_____" ] ], [ [ "### Open folder or file in Windows with the associated program", "_____no_output_____" ] ], [ [ "os.startfile('C:/temp/python test/test file renamed.txt')", "_____no_output_____" ] ], [ [ "## Working directory", "_____no_output_____" ] ], [ [ "import os\nos.getcwd()", "_____no_output_____" ], [ "os.chdir('C:/temp/python test/')\nos.getcwd()", "_____no_output_____" ], [ "os.listdir()", "_____no_output_____" ] ], [ [ "### Projects\nA project is a folder organising your files; the top level is your working directory.\nGood practices for organising your projects:\n1. Create a separate folder for your python (.py) files; name this folder without spaces (eg. py_files or python_files)\n2. Add in your py_files folder a file called \\_\\_init\\_\\_.py; this is an empty python file that will allow you to import all files in this folder as packages.\n3. It is a good idea to make your project folder a git repository so you can track your changes.\n4. Put all your source files and result files in your project directory.", "_____no_output_____" ], [ "## Packages\nPackages (or libraries) are python files with objects and functions that you can use; some of them are installed with Python and are part of the programming language, others should be installed.\n\n### Package managers\nPackage managers help you to install, update and uninstall packages.\n#### pip package manager\nThis is the default python package manager\n* pip install package_name==version - installing a package\n* pip freeze - get the list of installed packages\n* pip freeze > requirements.txt - saves the list of installed packages as a requirements.txt file\n* pip install -r requirements.txt - install all packages from a requirements.txt file\n\n#### conda package manager\nThis is used by anaconda distributions of python\n\n### The Python Standard Library - packages included in python\n\n[Full list](https://docs.python.org/3/library/)\n* os - Miscellaneous operating system interfaces\n* time — Time access and conversions\n* datetime — Basic date and time types\n* math — Mathematical functions\n* random — Generate pseudo-random numbers\n* statistics — Mathematical statistics functions\n* shutil — High-level file operations\n* pickle — Python object serialization\n* logging — Logging facility for Python\n* tkinter — Python interface to Tcl/Tk (creating UI)\n* venv — Creation of virtual environments\n* re - Regular expression operations", "_____no_output_____" ], [ "#### time package examples", "_____no_output_____" ] ], [ [ "import time\nprint('start')\ntime.sleep(3)\nprint('stop')", "start\nstop\n" ], [ "time_now = time.localtime()\nprint(time_now)", "time.struct_time(tm_year=2020, tm_mon=10, tm_mday=6, tm_hour=9, tm_min=45, tm_sec=26, tm_wday=1, tm_yday=280, tm_isdst=1)\n" ] ], [ [ "convert time to a string with the form dd-mm-yyyy", "_____no_output_____" ] ], [ [ "date = time.strftime('%d-%m-%Y', time_now)\nprint(date)\nmonth = time.strftime('%B', time_now)\nprint(f'month is {month}')", "06-10-2020\nmonth is October\n" ] ], [ [ "convert a string to time", "_____no_output_____" ] ], [ [ "as_time = time.strptime(\"30 Nov 2020\", \"%d %b %Y\")\nprint(as_time)", "time.struct_time(tm_year=2020, tm_mon=11, tm_mday=30, tm_hour=0, tm_min=0, tm_sec=0, tm_wday=0, tm_yday=335, tm_isdst=-1)\n" ] ], [ [ "#### datetime package examples", "_____no_output_____" ] ], [ [ "import datetime\ntoday = datetime.date.today()\nprint(today)\nprint(type(today))\nweek_ago = today - datetime.timedelta(days=7)\nprint(week_ago)\ntoday_string = today.strftime('%Y/%m/%d')\nprint(today_string)\nprint(type(today_string))", "2020-10-06\n<class 'datetime.date'>\n2020-09-29\n2020/10/06\n<class 'str'>\n" ] ], [ [ "#### shutil package examples\nFunctions for file copying and removal:\n\n* shutil.copy(src, dst)\n* shutil.copytree(src, dst)\n* shutil.rmtree(path)\n* shutil.move(src, dst)\n\n
[ "### How to import packages and functions from packages\n* Import the whole package - in this case you can use all the functions of the package, including the functions in the modules of the package; you can rename the package when importing", "_____no_output_____" ] ], [ [ "import datetime\ntoday = datetime.date.today()\nprint(today)\n\nimport datetime as dt\ntoday = dt.date.today()\nprint(today)", "2020-10-06\n2020-10-06\n" ] ], [ [ "* import individual modules or individual functions - in this case you can use the functions directly, as if they were defined in your script. <font color=red>Important: be aware of function shadowing - when you import functions with the same name from different packages or you have defined a function with the same name!</font>", "_____no_output_____" ] ], [ [ "from datetime import date # importing date class\ntoday = date.today()\nprint(today)\n\n# Warning this is replacing the date class with a string!!!\ndate = '25/06/2012'\ntoday = date.today()\nprint(today)", "2020-10-06\n" ] ], [ [ "When importing individual functions or classes from the same package you can import them together", "_____no_output_____" ] ], [ [ "from datetime import date, time, timedelta", "_____no_output_____" ] ], [ [ "## Selected external packages\nIf you are using the pip package manager, all the available packages are installed from [PyPI](https://pypi.org/)", "_____no_output_____" ], [ "* [Biopython](https://biopython.org/) - contains parsers for various Bioinformatics file formats (BLAST, Clustalw, FASTA, Genbank,...), access to online services (NCBI, Expasy,...) and more\n* [SQLAlchemy](https://docs.sqlalchemy.org/en/13/) - connect to a SQL database and query the database\n* [cx_Oracle](https://oracle.github.io/python-cx_Oracle/) - connect to an Oracle database\n* [xmltodict](https://github.com/martinblech/xmltodict) - convert xml to a Python dictionary with xml tags as keys and the information inside the tags as values", "_____no_output_____" ] ], [ [ "import xmltodict\n\nxml = \"\"\"\n<root xmlns=\"http://defaultns.com/\"\nxmlns:a=\"http://a.com/\"\nxmlns:b=\"http://b.com/\">\n<x>1</x>\n<a:y>2</a:y>\n<b:z>3</b:z>\n</root>\"\"\"\n\nxml_dict = xmltodict.parse(xml)\nprint(xml_dict.keys())\nprint(xml_dict['root'].keys())\nprint(xml_dict['root'].values())", "odict_keys(['root'])\nodict_keys(['@xmlns', '@xmlns:a', '@xmlns:b', 'x', 'a:y', 'b:z'])\nodict_values(['http://defaultns.com/', 'http://a.com/', 'http://b.com/', '1', '2', '3'])\n" ] ], [ [ "### Pyautogui\n[PyAutoGUI](https://pyautogui.readthedocs.io/en/latest/index.html) lets your Python scripts control the mouse and keyboard to automate interactions with other applications.", "_____no_output_____" ] ], [ [ "import pyautogui as pa\n\nscreen_width, screen_height = pa.size() # Get the size of the primary monitor.\nprint(f'screen size is {screen_width} x {screen_height}')\nmouse_x, mouse_y = pa.position() # Get the XY position of the mouse.\nprint(f'mouse position is: {mouse_x}, {mouse_y}')\npa.moveTo(600, 500, duration=5) # Move the mouse to XY coordinates.\n", "screen size is 1920 x 1080\nmouse position is: 457, 278\n" ], [ "import time\ntime.sleep(3)\npa.moveTo(600, 500)\npa.click()\npa.write('Hello world!', interval=0.25)\npa.alert('Script finished!') ", "_____no_output_____" ], [ "pa.screenshot('C:/temp/python test/my_screenshot.png', region=(0,0, 300, 400))", "_____no_output_____" ], [ "location = pa.locateOnScreen('C:/temp/python test/python.PNG')\nprint(location)\nimage_center = pa.center(location)\nprint(image_center)\npa.moveTo(image_center, duration=3)", "Box(left=1669, top=131, width=59, height=54)\nPoint(x=1698, y=158)\n" ] ], [ [ "### Pandas\n[Pandas](https://pandas.pydata.org/docs/user_guide/index.html) - provides high-performance, easy-to-use data structures and data analysis tools for Python\n\n[Pandas cheat sheet](https://pandas.pydata.org/Pandas_Cheat_Sheet.pdf)\n\nIt provides two new data structures for Python:\n1. Series - a one-dimensional labeled (indexed) array capable of holding any data type\n2. DataFrame - a 2-dimensional labeled data structure with columns of potentially different types. You can think of it like a spreadsheet or SQL table", "_____no_output_____" ] ], [ [ "import pandas as pd\n\nd = {'b': 1, 'a': 0, 'c': 2}\nmy_serie = pd.Series(d)\nprint(my_serie['a'])\nprint(type(my_serie))", "0\n<class 'pandas.core.series.Series'>\n" ], [ "list1 = [1, 2, 3]\nlist2 = [5, 6, 8]\nlist3 = [10, 12, 13]\n\ndf = pd.DataFrame({'b': list1, 'a': list2, 'c': list3})\ndf", "_____no_output_____" ], [ "print(df.index)\nprint(df.columns)\nprint(df.shape)", "RangeIndex(start=0, stop=3, step=1)\nIndex(['b', 'a', 'c'], dtype='object')\n(3, 3)\n" ], [ "df.columns = ['column1', 'column2', 'column3']\n# alternative: df.rename(columns={'a': 'column1'}) in case you don't want to rename all the columns\ndf", "_____no_output_____" ], [ "df.index = ['a', 'b', 'c']\ndf", "_____no_output_____" ] ], [ [ "#### Selecting values from a dataframe\n* select a column", "_____no_output_____" ] ], [ [ "df['column1']", "_____no_output_____" ] ], [ [ "* select multiple columns", "_____no_output_____" ] ], [ [ "df[['column3', 'column2']]", "_____no_output_____" ] ], [ [ "* selecting a row", "_____no_output_____" ] ], [ [ "row1 = df.iloc[1]\nrow1", "_____no_output_____" ], [ "df.loc['a']", "_____no_output_____" ], [ "df.loc[['a', 'c']]", "_____no_output_____" ] ], [ [ "* selecting values from a single cell", "_____no_output_____" ] ], [ [ "df['column1'][2]", "_____no_output_____" ], [ "df.iloc[1:2, 0:2]", "_____no_output_____" ] ], [ [ "* selecting by column only rows meeting a criterion (filtering the table)", "_____no_output_____" ] ], [ [ "df[df['column1'] > 1]", "_____no_output_____" ] ], [ [ "* select random rows by number (n) or as a fraction (frac)", "_____no_output_____" ] ], [ [ "df.sample(n=2)", "_____no_output_____" ] ], [ [ "#### Adding new data to a Data Frame\n* add a new column ", "_____no_output_____" ] ], [ [ "df['column4'] = [24, 12, 16]\ndf", "_____no_output_____" ], [ "df['column5'] = df['column1'] + df['column2']\ndf", "_____no_output_____" ], [ "df['column6'] = 7\ndf", "_____no_output_____" ] ], [ [ "* add a new row", "_____no_output_____" ] ], [ [ "df = df.append({'column1':4, 'column2': 8, 'column3': 5, 'column4': 7, 'column5': 8, 'column6': 11}, ignore_index=True)\ndf", "_____no_output_____" ] ], [ [ "* add a new dataframe at the bottom (columns should have the same names in both dataframes)", "_____no_output_____" ] ], [ [ "new_df = df.append(df, ignore_index=True)\nnew_df", "_____no_output_____" ] ], [ [ "* merging data frames (similar to joins in SQL); the default join type is 'inner'", "_____no_output_____" ] ], [ [ "df2 = pd.DataFrame({'c1':[2, 3, 4, 5], 'c2': [4, 7, 11, 3]})\ndf2", "_____no_output_____" ], [ "merged_df = df.merge(df2, left_on='column1', right_on='c1', how='left')\nmerged_df", "_____no_output_____" ], [ "merged_df = pd.merge(df, df2, left_on='column1', right_on='c1')\nmerged_df", "_____no_output_____" ] ], [ [ "* copy data frames - this is important to prevent warnings and artefacts", "_____no_output_____" ] ], [ [ "df1 = pd.DataFrame({'a':[1,2,3,4,5], 'b':[6,7,8,9,10]})\n\ndf2 = df1[df1['a'] > 2].copy()\n\ndf2.iloc[0, 0] = 56\ndf2", "_____no_output_____" ] ], [ [ "* change the data type in a column", "_____no_output_____" ] ], [ [ "print(type(df1['a'][0]))\ndf1['a'] = df1['a'].astype('str')\nprint(type(df1['a'][0]))\ndf1", "<class 'str'>\n<class 'str'>\n" ] ], [ [ "* value_counts - counts the number of appearances of each value in a column", "_____no_output_____" ] ], [ [ "df1.iloc[0, 0] = '5'\ndf1\ndf1['a'].value_counts()", "_____no_output_____" ] ], [ [ "* drop_duplicates - removes duplicated rows in a data frame", "_____no_output_____" ] ], [ [ "df1.iloc[0, 1] = 10\ndf1", "_____no_output_____" ], [ "df1.drop_duplicates(inplace=True)\ndf1", "_____no_output_____" ] ], [ [ "#### Pandas I/O\n* from / to an excel file", "_____no_output_____" ] ], [ [ "excel_sheet = pd.read_excel('C:/temp/python test/example.xlsx', sheet_name='Sheet1')", "_____no_output_____" ], [ "excel_sheet.head()", "_____no_output_____" ], [ "print(excel_sheet.shape)\nprint(excel_sheet['issue'][0])\nexcel_sheet = excel_sheet[~excel_sheet['keywords'].isna()]\nprint(excel_sheet.shape)", "(39, 10)\nnan\n(35, 10)\n" ], [ "excel_sheet.to_excel('C:/temp/python test/example_1.xlsx', index=False)", "_____no_output_____" ] ], [ [ "To create an excel file with multiple sheets, the pandas ExcelWriter method should be used and sheets assigned to it", "_____no_output_____" ] ], [ [ "writer = pd.ExcelWriter('C:/temp/python test/example_2.xlsx')\ndf1.to_excel(writer, 'Sheet1', index = False)\nexcel_sheet.to_excel(writer, 'Sheet2', index = False)\nwriter.save()", "_____no_output_____" ] ], [ [ "* from an html page\n\nthe pandas read_html method reads the whole page and creates a list of dataframes, one for every html table in the webpage", "_____no_output_____" ] ], [ [ "codons = pd.read_html('https://en.wikipedia.org/wiki/DNA_codon_table')", "_____no_output_____" ], [ "codons[2]", "_____no_output_____" ] ], [ [ "* from a SQL database", "_____no_output_____" ] ], [ [ "# connection is an open database connection, e.g. a SQLAlchemy engine or a cx_Oracle connection\nmy_data = pd.read_sql('select column1, column2 from table1', connection)", "_____no_output_____" ] ], [ [ "* from a CSV file", "_____no_output_____" ] ], [ [ "my_data = pd.read_csv('data.csv')", "_____no_output_____" ] ], [ [ "### XLWings\nWorking with excel files\n\n[Documentation](https://docs.xlwings.org/en/stable/)", "_____no_output_____" ] ], [ [ "import xlwings as xw\nworkbook = xw.Book()\n", "_____no_output_____" ], [ "new_sht = workbook.sheets.add('new_sheet')", "_____no_output_____" ], [ "new_sht.range('A1').value = 'Hi from Python'\nnew_sht.range('A1').column_width = 30\nnew_sht.range('A1').color = (0,255,255)", "_____no_output_____" ], [ "a2_value = new_sht.range('A2').value\nprint(a2_value)", "56.0\n" ], [ "workbook.save('C:/temp/python test/new_file.xlsx')\nworkbook.close()", "_____no_output_____" ] ], [ [ "## Errors and debugging\n### Escaping errors in Python with try: except:", "_____no_output_____" ] ], [ [ "a = 7/0", "_____no_output_____" ], [ "import sys\n\ntry:\n a = 7/0\nexcept:\n print(f'a cannot be calculated, {sys.exc_info()[0]}!')\n a = None", "a cannot be calculated, <class 'ZeroDivisionError'>!\n" ], [ "try:\n 'something'\nexcept:\n try:\n 'something else'\n except:\n 'and another try'\nfinally:\n print('Nothing is working :(')", "Nothing is working :(\n" ] ], [ [ "### Debugging in PyCharm", "_____no_output_____" ], [ "## Virtual environments\nYou can create a new virtual environment for every Python project; a virtual environment is an independent installation of Python, and you can install packages independently of your system Python.
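\n\nA minimal sketch using the standard library venv module (on the command line the equivalent is: python -m venv my_env; the path here is just an example):\n\n```python\nimport venv\n\n# create a new, isolated environment with pip available in it\nvenv.create('C:/temp/python test/my_env', with_pip=True)\n```", 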
"_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ 
"markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ] ]
d0195add1db8f8a12b7c2809262d5bb28eecd04f
4,513
ipynb
Jupyter Notebook
PDF Encrypt Decrypt/PDF_Encrypt_Decrypt.ipynb
MohapatraShibu/Python-Codes
4ba7590399c6a149e6c5a99f250f655abd5a6612
[ "MIT" ]
null
null
null
PDF Encrypt Decrypt/PDF_Encrypt_Decrypt.ipynb
MohapatraShibu/Python-Codes
4ba7590399c6a149e6c5a99f250f655abd5a6612
[ "MIT" ]
null
null
null
PDF Encrypt Decrypt/PDF_Encrypt_Decrypt.ipynb
MohapatraShibu/Python-Codes
4ba7590399c6a149e6c5a99f250f655abd5a6612
[ "MIT" ]
null
null
null
21.287736
157
0.520053
[ [ [ "#PDF ENCRYPTION\n!pip install PyPDF2", "Collecting PyPDF2\n Using cached PyPDF2-1.26.0.tar.gz (77 kB)\nUsing legacy setup.py install for PyPDF2, since package 'wheel' is not installed.\nInstalling collected packages: PyPDF2\n Running setup.py install for PyPDF2: started\n Running setup.py install for PyPDF2: finished with status 'done'\nSuccessfully installed PyPDF2-1.26.0\n" ], [ "from PyPDF2 import PdfFileReader, PdfFileWriter", "_____no_output_____" ], [ "file_pdf=PdfFileReader(\"a1.pdf\")\nout_pdf=PdfFileWriter()", "_____no_output_____" ], [ "file_pdf", "_____no_output_____" ], [ "# download and upload the file whenever we will execute\n\nfor i in range(file_pdf.numPages):\n page_details=file_pdf.getPage(i)\n out_pdf.addPage(page_details)", "_____no_output_____" ], [ "password=\"shibu@456\"", "_____no_output_____" ], [ "out_pdf.encrypt(password)", "_____no_output_____" ], [ "with open(\"a1.pdf\", \"wb\")as filename:\n out_pdf.write(filename)", "_____no_output_____" ], [ "#PDF DECRYPTION\n\nfrom PyPDF2 import PdfFileWriter, PdfFileReader\n\nout = PdfFileWriter()", "_____no_output_____" ], [ "file = PdfFileReader(\"a1.pdf\")", "_____no_output_____" ], [ "password = \"shibu@456\"", "_____no_output_____" ], [ "if file.isEncrypted:\n file.decrypt(password)\n\n for i in range(file.numPages):\n page = file.getPage(i)\n out.addPage(page)\n \n with open(\"a1.pdf\", \"wb\") as f:\n out.write(f)\n print(\"File decrypted Successfully.\")\n\nelse:\n print(\"File already decrypted.\")\n", "File decrypted Successfully.\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
d0197877bfe85c901aa426caaf745c9f2daad1b8
275,598
ipynb
Jupyter Notebook
amath515/hw2/515Hw2_Coding.ipynb
interesting-courses/UW_coursework
987e336e70482622c5d03428b5532349483f87f4
[ "MIT" ]
2
2020-08-19T01:59:25.000Z
2021-12-31T12:32:59.000Z
amath515/hw2/515Hw2_Coding.ipynb
interesting-courses/UW_coursework
987e336e70482622c5d03428b5532349483f87f4
[ "MIT" ]
null
null
null
amath515/hw2/515Hw2_Coding.ipynb
interesting-courses/UW_coursework
987e336e70482622c5d03428b5532349483f87f4
[ "MIT" ]
3
2021-03-31T22:23:46.000Z
2022-01-29T22:13:01.000Z
452.541872
47,742
0.93222
[ [ [ "# AMATH 515 Homework 2\n\n**Due Date: 02/08/2019**\n\n* Name: Tyler Chen\n* Student Number: \n\n*Homework Instruction*: Please follow order of this notebook and fill in the codes where commented as `TODO`.", "_____no_output_____" ] ], [ [ "import numpy as np\nimport scipy.io as sio\nimport matplotlib.pyplot as plt", "_____no_output_____" ] ], [ [ "## Please complete the solvers in `solver.py`", "_____no_output_____" ] ], [ [ "import sys\nsys.path.append('./')\nfrom solvers import *", "_____no_output_____" ] ], [ [ "## Problem 3: Compressive Sensing\n\nConsier the optimization problem,\n\n$$\n\\min_x~~\\frac{1}{2}\\|Ax - b\\|^2 + \\lambda\\|x\\|_1\n$$\n\nIn the following, please specify the $f$ and $g$ and use the proximal gradient descent solver to obtain the solution.", "_____no_output_____" ] ], [ [ "# create the data\nnp.random.seed(123)\nm = 100 # number of measurements\nn = 500 # number of variables\nk = 10 # number of nonzero variables\ns = 0.05 # measurements noise level\n#\nA_cs = np.random.randn(m, n)\nx_cs = np.zeros(n)\nx_cs[np.random.choice(range(n), k, replace=False)] = np.random.choice([-1.0, 1.0], k)\nb_cs = A_cs.dot(x_cs) + s*np.random.randn(m)\n#\nlam_cs = 0.1*norm(A_cs.T.dot(b_cs), np.inf)", "_____no_output_____" ], [ "# define the function, prox and the beta constant\ndef func_f_cs(x):\n # TODO: complete the function\n return norm(A_cs@x-b_cs)**2/2\n\ndef func_g_cs(x):\n # TODO: complete the gradient\n return lam_cs*norm(x,ord=1)\n\ndef grad_f_cs(x):\n # TODO: complete the function\n return A_cs.T@(A_cs@x-b_cs)\n\ndef prox_g_cs(x, t):\n # TODO: complete the prox of 1 norm\n leq = x <= -lam_cs*t # boolean array of coordinates where x_i <= -lam_cs * t\n geq = x >= lam_cs*t # boolean array of coordinates where x_i >= lam_cs * t\n # (leq + geq) gives components where x not in [-1,1]*lam_cs*t\n return (leq+geq) * x + leq * lam_cs*t - geq * lam_cs*t\n\n# TODO: what is the beta value for the smooth part\nbeta_f_cs = norm(A_cs,ord=2)**2", "_____no_output_____" ] ], [ [ "### Proximal gradient descent on compressive sensing", "_____no_output_____" ] ], [ [ "# apply the proximal gradient descent solver\nx0_cs_pgd = np.zeros(x_cs.size)\nx_cs_pgd, obj_his_cs_pgd, err_his_cs_pgd, exit_flag_cs_pgd = \\\n optimizeWithPGD(x0_cs_pgd, func_f_cs, func_g_cs, grad_f_cs, prox_g_cs, beta_f_cs)", "_____no_output_____" ], [ "# plot signal result\nplt.plot(x_cs)\nplt.plot(x_cs_pgd, '.')\nplt.legend(['true signal', 'recovered'])\nplt.title('Compressive Sensing Signal')\nplt.show()", "_____no_output_____" ], [ "# plot result\nfig, ax = plt.subplots(1, 2, figsize=(12,5))\nax[0].plot(obj_his_cs_pgd)\nax[0].set_title('function value')\nax[1].semilogy(err_his_cs_pgd)\nax[1].set_title('optimality condition')\nfig.suptitle('Proximal Gradient Descent on Compressive Sensing')\nplt.show()", "_____no_output_____" ], [ "# plot result\nfig, ax = plt.subplots(1, 3, figsize=(18,5))\nax[0].plot(x_cs)\nax[0].plot(x_cs_pgd, '.')\nax[0].legend(['true signal', 'recovered'])\nax[0].set_title('Compressive Sensing Signal')\nax[1].plot(obj_his_cs_pgd)\nax[1].set_title('function value')\nax[2].semilogy(err_his_cs_pgd)\nax[2].set_title('optimality condition')\n#fig.suptitle('Proximal Gradient Descent on Compressive Sensing')\nplt.savefig('img/cs_pgd.pdf',bbox_inches=\"tight\")", "_____no_output_____" ] ], [ [ "### Accelerate proximal gradient descent on compressive sensing", "_____no_output_____" ] ], [ [ "# apply the proximal gradient descent solver\nx0_cs_apgd = np.zeros(x_cs.size)\nx_cs_apgd, obj_his_cs_apgd, 
[ [ "### Proximal gradient descent on compressive sensing", "_____no_output_____" ] ], [ [ "# apply the proximal gradient descent solver\nx0_cs_pgd = np.zeros(x_cs.size)\nx_cs_pgd, obj_his_cs_pgd, err_his_cs_pgd, exit_flag_cs_pgd = \\\n optimizeWithPGD(x0_cs_pgd, func_f_cs, func_g_cs, grad_f_cs, prox_g_cs, beta_f_cs)", "_____no_output_____" ], [ "# plot signal result\nplt.plot(x_cs)\nplt.plot(x_cs_pgd, '.')\nplt.legend(['true signal', 'recovered'])\nplt.title('Compressive Sensing Signal')\nplt.show()", "_____no_output_____" ], [ "# plot result\nfig, ax = plt.subplots(1, 2, figsize=(12,5))\nax[0].plot(obj_his_cs_pgd)\nax[0].set_title('function value')\nax[1].semilogy(err_his_cs_pgd)\nax[1].set_title('optimality condition')\nfig.suptitle('Proximal Gradient Descent on Compressive Sensing')\nplt.show()", "_____no_output_____" ], [ "# plot result\nfig, ax = plt.subplots(1, 3, figsize=(18,5))\nax[0].plot(x_cs)\nax[0].plot(x_cs_pgd, '.')\nax[0].legend(['true signal', 'recovered'])\nax[0].set_title('Compressive Sensing Signal')\nax[1].plot(obj_his_cs_pgd)\nax[1].set_title('function value')\nax[2].semilogy(err_his_cs_pgd)\nax[2].set_title('optimality condition')\n#fig.suptitle('Proximal Gradient Descent on Compressive Sensing')\nplt.savefig('img/cs_pgd.pdf',bbox_inches=\"tight\")", "_____no_output_____" ] ] ], [ [ "### Accelerated proximal gradient descent on compressive sensing", "_____no_output_____" ] ], [ [ "# apply the accelerated proximal gradient descent solver\nx0_cs_apgd = np.zeros(x_cs.size)\nx_cs_apgd, obj_his_cs_apgd, err_his_cs_apgd, exit_flag_cs_apgd = \\\n optimizeWithAPGD(x0_cs_apgd, func_f_cs, func_g_cs, grad_f_cs, prox_g_cs, beta_f_cs)", "9.9157469287981e-07 1e-06\n" ], [ "# plot signal result\nplt.plot(x_cs)\nplt.plot(x_cs_apgd, '.')\nplt.legend(['true signal', 'recovered'])\nplt.title('Compressive Sensing Signal')\nplt.show()", "_____no_output_____" ], [ "# plot result\nfig, ax = plt.subplots(1, 2, figsize=(12,5))\nax[0].plot(obj_his_cs_apgd)\nax[0].set_title('function value')\nax[1].semilogy(err_his_cs_apgd)\nax[1].set_title('optimality condition')\nfig.suptitle('Accelerated Proximal Gradient Descent on Compressive Sensing')\nplt.show()", "_____no_output_____" ], [ "# plot result\nfig, ax = plt.subplots(1, 3, figsize=(18,5))\nax[0].plot(x_cs)\nax[0].plot(x_cs_apgd, '.')\nax[0].legend(['true signal', 'recovered'])\nax[0].set_title('Compressive Sensing Signal')\nax[1].plot(obj_his_cs_apgd)\nax[1].set_title('function value')\nax[2].semilogy(err_his_cs_apgd)\nax[2].set_title('optimality condition')\n#fig.suptitle('Proximal Gradient Descent on Compressive Sensing')\nplt.savefig('img/cs_apgd.pdf',bbox_inches=\"tight\")", "_____no_output_____" ] ], [ [ "## Problem 4: Logistic Regression on MNIST Data\n\nNow let's play with some real data, recall the logistic regression problem,\n\n$$\n\\min_x~~\\sum_{i=1}^m\\left\\{\\log(1 + \\exp(\\langle a_i,x \\rangle)) - b_i\\langle a_i,x \\rangle\\right\\} + \\frac{\\lambda}{2}\\|x\\|^2.\n$$\n\nHere, for each data pair $\\{a_i, b_i\\}$, $a_i$ is the image and $b_i$ is the label.\nIn this homework problem, let's consider the binary classification problem, where $b_i \\in \\{0, 1\\}$.", "_____no_output_____" ] ], [ [ "# import data\nmnist_data = np.load('mnist01.npy')\n#\nA_lgt = mnist_data[0]\nb_lgt = mnist_data[1]\nA_lgt_test = mnist_data[2]\nb_lgt_test = mnist_data[3]\n#\n# set regularizer parameter\nlam_lgt = 0.1\n#\n# beta constant of the function\nbeta_lgt = 0.25*norm(A_lgt, 2)**2 + lam_lgt", "_____no_output_____" ], [ "# plot the images\nfig, ax = plt.subplots(1, 2)\nax[0].imshow(A_lgt[0].reshape(28,28))\nax[1].imshow(A_lgt[7].reshape(28,28))\nplt.show()", "_____no_output_____" ], [ "# define function, gradient and Hessian\ndef lgt_func(x):\n # TODO: complete the function of logistic regression\n return np.sum(np.log(1+np.exp(A_lgt@x))) - b_lgt@A_lgt@x + lam_lgt*x@x/2\n#\ndef lgt_grad(x):\n # TODO: complete the gradient of logistic regression\n return A_lgt.T@ ((np.exp(A_lgt@x)/(1+np.exp(A_lgt@x))) - b_lgt) + lam_lgt*x\n#\ndef lgt_hess(x):\n # TODO: complete the hessian of logistic regression\n return A_lgt.T @ np.diag( np.exp(A_lgt@x)/(1+np.exp(A_lgt@x))**2 ) @ A_lgt + lam_lgt * np.eye(len(x))", "_____no_output_____" ] ],
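[ [ "Before running the solvers, a cheap sanity check on the hand-coded gradient can save debugging time (an added check, not part of the assignment): compare `lgt_grad` against a centered finite difference along a random direction. It is evaluated at $x = 0$ to avoid overflow in the exponentials for large inner products.", "_____no_output_____" ] ], [ [ "# finite-difference check of lgt_grad (added sanity check)\nx0 = np.zeros(A_lgt.shape[1])\nv = np.random.randn(A_lgt.shape[1])\neps = 1e-6\nfd = (lgt_func(x0 + eps*v) - lgt_func(x0 - eps*v))/(2.0*eps)\nprint(abs(fd - lgt_grad(x0).dot(v))) # should be close to 0", "_____no_output_____" ] ],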
[ [ "### Gradient descent on logistic regression", "_____no_output_____" ] ], [ [ "# apply the gradient descent\nx0_lgt_gd = np.zeros(A_lgt.shape[1])\nx_lgt_gd, obj_his_lgt_gd, err_his_lgt_gd, exit_flag_lgt_gd = \\\n optimizeWithGD(x0_lgt_gd, lgt_func, lgt_grad, beta_lgt)", "Gradient descent reach maximum number of iteration.\n" ], [ "# plot result\nfig, ax = plt.subplots(1, 2, figsize=(12,5))\nax[0].plot(obj_his_lgt_gd)\nax[0].set_title('function value')\nax[1].semilogy(err_his_lgt_gd)\nax[1].set_title('optimality condition')\nfig.suptitle('Gradient Descent on Logistic Regression')\nplt.savefig('img/lr_gd.pdf',bbox_inches=\"tight\")", "_____no_output_____" ] ], [ [ "### Accelerated gradient descent on logistic regression", "_____no_output_____" ] ], [ [ "# apply the accelerated gradient descent\nx0_lgt_agd = np.zeros(A_lgt.shape[1])\nx_lgt_agd, obj_his_lgt_agd, err_his_lgt_agd, exit_flag_lgt_agd = \\\n optimizeWithAGD(x0_lgt_agd, lgt_func, lgt_grad, beta_lgt)", "Proximal gradient descent reach maximum of iteration\n" ], [ "# plot result\nfig, ax = plt.subplots(1, 2, figsize=(12,5))\nax[0].plot(obj_his_lgt_agd)\nax[0].set_title('function value')\nax[1].semilogy(err_his_lgt_agd)\nax[1].set_title('optimality condition')\nfig.suptitle('Accelerated Gradient Descent on Logistic Regression')\nplt.savefig('img/lr_agd.pdf',bbox_inches=\"tight\")\nplt.show()", "_____no_output_____" ] ], [ [ "### Newton's method on logistic regression", "_____no_output_____" ] ], [ [ "# apply Newton's method\nx0_lgt_nt = np.zeros(A_lgt.shape[1])\nx_lgt_nt, obj_his_lgt_nt, err_his_lgt_nt, exit_flag_lgt_nt = \\\n optimizeWithNT(x0_lgt_nt, lgt_func, lgt_grad, lgt_hess)", "_____no_output_____" ], [ "# plot result\nfig, ax = plt.subplots(1, 2, figsize=(12,5))\nax[0].plot(obj_his_lgt_nt)\nax[0].set_title('function value')\nax[1].semilogy(err_his_lgt_nt)\nax[1].set_title('optimality condition')\nfig.suptitle('Newton\\'s Method on Logistic Regression')\nplt.savefig('img/lr_nm.pdf',bbox_inches=\"tight\")\nplt.show()", "_____no_output_____" ] ], [ [ "### Test Logistic Regression", "_____no_output_____" ] ], [ [ "# define accuracy function\ndef accuracy(x, A_test, b_test):\n r = A_test.dot(x)\n b_test[b_test == 0.0] = -1.0\n correct_count = np.sum((r*b_test) > 0.0)\n return correct_count/b_test.size", "_____no_output_____" ], [ "print('accuracy of the result is %0.3f' % accuracy(x_lgt_nt, A_lgt_test, b_lgt_test))", "accuracy of the result is 1.000\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
d0199f8c743f528131b7c4645de21be57ed2f5bd
430,084
ipynb
Jupyter Notebook
start-to-solve-your-first-problem-in-ml.ipynb
sanjayatb/Kaggle
e7e8f650f8c622d997f8778e21994515ff06e9dc
[ "Apache-2.0" ]
1
2020-06-22T15:25:53.000Z
2020-06-22T15:25:53.000Z
start-to-solve-your-first-problem-in-ml.ipynb
sanjayatb/Kaggle
e7e8f650f8c622d997f8778e21994515ff06e9dc
[ "Apache-2.0" ]
null
null
null
start-to-solve-your-first-problem-in-ml.ipynb
sanjayatb/Kaggle
e7e8f650f8c622d997f8778e21994515ff06e9dc
[ "Apache-2.0" ]
null
null
null
430,084
430,084
0.917316
[ [ [ "# Start with simplest problem\n\nI feel like clasification is the easiest problem catogory to start with.\nWe will start with simple clasification problem to predict survivals of titanic https://www.kaggle.com/c/titanic", "_____no_output_____" ], [ "# Contents\n1. [Basic pipeline for a predictive modeling problem](#1)\n1. [Exploratory Data Analysis (EDA)](#2)\n * [Overall survival stats](#2_1)\n * [Analysis features](#2_2)\n 1. [Sex](#2_2_1)\n 1. [Pclass](#2_2_2)\n 1. [Age](#2_2_3)\n 1. [Embarked](#2_2_4)\n 1. [SibSip & Parch](#2_2_5)\n 1. [Fare](#2_2_6) \n * [Observations Summary](#2_3)\n * [Correlation Between The Features](#2_4)\n1. [Feature Engineering and Data Cleaning](#4)\n * [Converting String Values into Numeric](#4_1)\n * [Convert Age into a categorical feature by binning](#4_2)\n * [Convert Fare into a categorical feature by binning](#4_3)\n * [Dropping Unwanted Features](#4_4)\n1. [Predictive Modeling](#5)\n * [Cross Validation](#5_1)\n * [Confusion Matrix](#5_2)\n * [Hyper-Parameters Tuning](#5_3)\n * [Ensembling](#5_4)\n * [Prediction](#5_5)\n1. [Feature Importance](#6)\n", "_____no_output_____" ], [ "## **Basic Pipeline for predictive modeling problem**[^](#1)<a id=\"1\" ></a><br>\n\n**<left><span style=\"color:blue\">Exploratory Data Analysis</span> -> <span style=\"color:blue\">Feature Engineering and Data Preparation</span> -> <span style=\"color:blue\">Predictive Modeling</span></left>.**\n\n1. First we need to see what the data can tell us: We call this **<span style=\"color:blue\">Exploratory Data Analysis(EDA)</span>**. Here we look at data which is hidden in rows and column format and try to visualize, summarize and interprete it looking for information.\n1. Next we can **leverage domain knowledge** to boost machine learning model performance. We call this step, **<span style=\"color:blue\">Feature Engineering and Data Cleaning</span>**. In this step we might add few features, Remove redundant features, Converting features into suitable form for modeling.\n1. Then we can move on to the **<span style=\"color:blue\">Predictive Modeling</span>**. Here we try basic ML algorthms, cross validate, ensemble and Important feature Extraction.", "_____no_output_____" ], [ "---\n\n## Exploratory Data Analysis (EDA)[^](#2)<a id=\"2\" ></a><br>\n\nWith the objective in mind that this kernal aims to explain the workflow of a predictive modelling problem for begginers, I will try to use simple easy to understand visualizations in the EDA section. Kernals with more advanced EDA sections will be mentioned at the end for you to learn more.", "_____no_output_____" ] ], [ [ "# Python 3 environment comes with many helpful analytics libraries installed\n# For example, here's several helpful packages to load in \nimport numpy as np # linear algebra\nimport pandas as pd # data processing, CSV file I/O (e.g. 
pd.read_csv)\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport os", "_____no_output_____" ], [ "# Read data to a pandas data frame\ndata=pd.read_csv('../input/train.csv')\n# lets have a look on first few rows\ndisplay(data.head())\n# Checking shape of our data set\nprint('Shape of Data : ',data.shape)", "_____no_output_____" ] ], [ [ "* We have 891 data points (rows); each data point has 12 columns.", "_____no_output_____" ] ], [ [ "#checking for null value counts in each column\ndata.isnull().sum()", "_____no_output_____" ] ], [ [ "* The Age, Cabin and Embarked have null values.", "_____no_output_____" ], [ "### Lets look at overall survival stats[^](#2_1)<a id=\"2_1\" ></a><br>", "_____no_output_____" ] ], [ [ "f,ax=plt.subplots(1,2,figsize=(13,5))\ndata['Survived'].value_counts().plot.pie(explode=[0,0.05],autopct='%1.1f%%',ax=ax[0],shadow=True)\nax[0].set_title('Survived')\nax[0].set_ylabel('')\nsns.countplot('Survived',data=data,ax=ax[1])\nax[1].set_title('Survived')\nplt.show()", "_____no_output_____" ] ], [ [ "* Sad Story! Only 38% have survived. That is roughly 340 out of 891. ", "_____no_output_____" ], [ "---\n### Analyse features[^](#2_2)<a id=\"2_2\" ></a><br>", "_____no_output_____" ], [ "#### Feature: Sex[^](#3_2_1)<a id=\"2_2_1\" ></a><br>", "_____no_output_____" ] ], [ [ "f,ax=plt.subplots(1,3,figsize=(18,5))\ndata[['Sex','Survived']].groupby(['Sex']).mean().plot.bar(ax=ax[0])\nax[0].set_title('Fraction of Survival with respect to Sex')\nsns.countplot('Sex',hue='Survived',data=data,ax=ax[1])\nax[1].set_title('Survived vs Dead counts with respect to Sex')\nsns.barplot(x=\"Sex\", y=\"Survived\", data=data,ax=ax[2])\nax[2].set_title('Survival by Gender')\nplt.show()", "_____no_output_____" ] ], [ [ "* While survival rate for female is around 75%, same for men is about 20%.\n* It looks like they have given priority to female passengers in the rescue.\n* **Looks like Sex is a good predictor on the survival.**", "_____no_output_____" ], [ "---\n#### Feature: Pclass[^](#2_2_2)<a id=\"2_2_2\" ></a><br>\n**Meaning :** Ticket class : 1 = 1st, 2 = 2nd, 3 = 3rd", "_____no_output_____" ] ], [ [ "f,ax=plt.subplots(1,3,figsize=(18,5))\ndata['Pclass'].value_counts().plot.bar(color=['#BC8F8F','#F4A460','#DAA520'],ax=ax[0])\nax[0].set_title('Number Of Passengers with respect to Pclass')\nax[0].set_ylabel('Count')\nsns.countplot('Pclass',hue='Survived',data=data,ax=ax[1])\nax[1].set_title('Survived vs Dead counts with respect to Pclass')\nsns.barplot(x=\"Pclass\", y=\"Survived\", data=data,ax=ax[2])\nax[2].set_title('Survival by Pclass')\nplt.show()", "_____no_output_____" ] ], [ [ "* For Pclass 1 %survived is around 63%, for Pclass2 is around 48% and for Pclass2 is around 25%.\n* **So its clear that higher classes had higher priority while rescue.**\n* **Looks like Pclass is also an important feature.**", "_____no_output_____" ], [ "---\n#### Feature: Age[^](#2_2_3)<a id=\"2_2_3\" ></a><br>\n**Meaning :** Age in years", "_____no_output_____" ] ], [ [ "# Plot\nplt.figure(figsize=(25,6))\nsns.barplot(data['Age'],data['Survived'], ci=None)\nplt.xticks(rotation=90);", "_____no_output_____" ] ], [ [ "* Survival rate for passenegers below Age 14(i.e children) looks to be good than others.\n* So Age seems an important feature too.\n* Rememer we had 177 null values in the Age feature. How are we gonna fill them?.", "_____no_output_____" ], [ "#### Filling Age NaN\n\nWell there are many ways to do this. One can use the mean value or median .. etc.. But can we do better?. Seems yes. 
[EDA To Prediction(DieTanic)](https://www.kaggle.com/ash316/eda-to-prediction-dietanic#EDA-To-Prediction-(DieTanic)) uses a wonderful method which I will use here too. There is a name feature. First let's extract the initials.\n", "_____no_output_____" ] ], [ [ "data['Initial']=0\nfor i in data:\n data['Initial']=data.Name.str.extract('([A-Za-z]+)\\.') # let's extract the Salutations\n\npd.crosstab(data.Initial,data.Sex).T.style.background_gradient(cmap='summer_r') #Checking the Initials with the Sex", "_____no_output_____" ] ], [ [ "Okay, so there are some misspelled Initials like Mlle or Mme that stand for Miss. Let's replace them.", "_____no_output_____" ] ], [ [ "data['Initial'].replace(['Mlle','Mme','Ms','Dr','Major','Lady','Countess','Jonkheer','Col','Rev','Capt','Sir','Don'],['Miss','Miss','Miss','Mr','Mr','Mrs','Mrs','Other','Other','Other','Mr','Mr','Mr'],inplace=True)", "_____no_output_____" ], [ "data.groupby('Initial')['Age'].mean() # let's check the average age by Initials", "_____no_output_____" ], [ "## Assigning the NaN Values with the Ceil values of the mean ages\ndata.loc[(data.Age.isnull())&(data.Initial=='Mr'),'Age']=33\ndata.loc[(data.Age.isnull())&(data.Initial=='Mrs'),'Age']=36\ndata.loc[(data.Age.isnull())&(data.Initial=='Master'),'Age']=5\ndata.loc[(data.Age.isnull())&(data.Initial=='Miss'),'Age']=22\ndata.loc[(data.Age.isnull())&(data.Initial=='Other'),'Age']=46", "_____no_output_____" ], [ "data.Age.isnull().any() #So no null values left finally ", "_____no_output_____" ] ], [ [ "---\n#### Feature: Embarked[^](#2_2_4)<a id=\"2_2_4\" ></a><br>\n**Meaning :** Port of Embarkation. C = Cherbourg, Q = Queenstown, S = Southampton", "_____no_output_____" ] ], [ [ "f,ax=plt.subplots(1,2,figsize=(12,5))\nsns.countplot('Embarked',data=data,ax=ax[0])\nax[0].set_title('No. Of Passengers Boarded')\nsns.countplot('Embarked',hue='Survived',data=data,ax=ax[1])\nax[1].set_title('Embarked vs Survived')\nplt.subplots_adjust(wspace=0.2,hspace=0.5)\nplt.show()", "_____no_output_____" ] ], [ [ "* The majority of passengers boarded from Southampton\n* Survival counts look better at C. Why? Could there be an influence from the sex and pclass features we already studied? Let's find out ", "_____no_output_____" ] ], [ [ "f,ax=plt.subplots(1,2,figsize=(12,5))\nsns.countplot('Embarked',hue='Sex',data=data,ax=ax[0])\nax[0].set_title('Male-Female Split for Embarked')\nsns.countplot('Embarked',hue='Pclass',data=data,ax=ax[1])\nax[1].set_title('Embarked vs Pclass')\nplt.subplots_adjust(wspace=0.2,hspace=0.5)\nplt.show()", "_____no_output_____" ] ], [ [ "* We guessed correctly. 
A higher % of 1st class passengers boarding from C might be the reason.", "_____no_output_____", "#### Filling Embarked NaN", "_____no_output_____" ] ], [ [ "f,ax=plt.subplots(1,1,figsize=(6,5))\ndata['Embarked'].value_counts().plot.pie(explode=[0,0,0],autopct='%1.1f%%',ax=ax)\nplt.show()", "_____no_output_____" ] ], [ [ "* Since 72.5% of passengers are from Southampton, let's fill the 2 missing values using S (Southampton)", "_____no_output_____" ] ], [ [ "data['Embarked'].fillna('S',inplace=True)", "_____no_output_____" ], [ "data.Embarked.isnull().any()", "_____no_output_____" ] ], [ [ "---\n#### Features: SibSp & Parch[^](#2_2_5)<a id=\"2_2_5\" ></a><br>\n**Meaning :** \nSibSp -> Number of siblings / spouses aboard the Titanic\n\nParch -> Number of parents / children aboard the Titanic\n\nSibSp + Parch -> Family Size ", "_____no_output_____" ] ], [ [ "f,ax=plt.subplots(2,2,figsize=(15,10))\nsns.countplot('SibSp',hue='Survived',data=data,ax=ax[0,0])\nax[0,0].set_title('SibSp vs Survived')\nsns.barplot('SibSp','Survived',data=data,ax=ax[0,1])\nax[0,1].set_title('SibSp vs Survived')\n\nsns.countplot('Parch',hue='Survived',data=data,ax=ax[1,0])\nax[1,0].set_title('Parch vs Survived')\nsns.barplot('Parch','Survived',data=data,ax=ax[1,1])\nax[1,1].set_title('Parch vs Survived')\n\nplt.subplots_adjust(wspace=0.2,hspace=0.5)\nplt.show()", "_____no_output_____" ] ], [ [ "* The bar plots show that if a passenger is alone onboard with no siblings, he has a 34.5% survival rate. The rate roughly decreases as the number of siblings increases.", "_____no_output_____", "Let's combine the above and analyse family size. ", "_____no_output_____" ] ], [ [ "data['FamilySize'] = data['Parch'] + data['SibSp']\nf,ax=plt.subplots(1,2,figsize=(15,4.5))\nsns.countplot('FamilySize',hue='Survived',data=data,ax=ax[0])\nax[0].set_title('FamilySize vs Survived')\nsns.barplot('FamilySize','Survived',data=data,ax=ax[1])\nax[1].set_title('FamilySize vs Survived')\nplt.subplots_adjust(wspace=0.2,hspace=0.5)\nplt.show()", "_____no_output_____" ] ], [ [ "* This looks interesting! It looks like family sizes of 1-3 have better survival rates than others.", "_____no_output_____", "---\n#### Fare[^](#2_2_6)<a id=\"2_2_6\" ></a><br>\n**Meaning :** Passenger fare", "_____no_output_____" ] ], [ [ "f,ax=plt.subplots(1,1,figsize=(20,5))\nsns.distplot(data.Fare,ax=ax)\nax.set_title('Distribution of Fares')\nplt.show()", "_____no_output_____" ], [ "print('Highest Fare:',data['Fare'].max(),' Lowest Fare:',data['Fare'].min(),' Average Fare:',data['Fare'].mean())\ndata['Fare_Bin']=pd.qcut(data['Fare'],6)\ndata.groupby(['Fare_Bin'])['Survived'].mean().to_frame().style.background_gradient(cmap='summer_r')", "Highest Fare: 512.3292 Lowest Fare: 0.0 Average Fare: 32.2042079685746\n" ] ], [ [ "* It is clear that as Fare bins increase, the chances of survival increase too.", "_____no_output_____", "#### Observations Summary[^](#2_3)<a id=\"2_3\" ></a><br>", "_____no_output_____", "**Sex:** The survival chance for females is better than that for males.\n\n**Pclass:** Being a 1st class passenger gives you better chances of survival.\n\n**Age:** The age range 5-10 years has a high chance of survival.\n\n**Embarked:** The majority of passengers boarded from Southampton. The chances of survival at C look better even though the majority of Pclass1 passengers embarked at S. 
Almost all passengers at Q were from Pclass3.\n\n**Family Size:** It looks like family sizes of 1-3 have better survival rates than others.\n\n**Fare:** As Fare bins increase, the chances of survival increase\n\n", "_____no_output_____", "#### Correlation Between The Features[^](#2_4)<a id=\"2_4\" ></a><br>", "_____no_output_____" ] ], [ [ "sns.heatmap(data.corr(),annot=True,cmap='RdYlGn',linewidths=0.2) #data.corr()-->correlation matrix\nfig=plt.gcf()\nfig.set_size_inches(10,8)\nplt.show()", "_____no_output_____" ] ], [ [ "---\n## Feature Engineering and Data Cleaning[^](#4)<a id=\"4\" ></a><br>\nNow what is Feature Engineering? Feature engineering is the process of using domain knowledge of the data to create features that make machine learning algorithms work.\n\nIn this section we will be doing,\n1. Converting String Values into Numeric\n1. Convert Age into a categorical feature by binning\n1. Convert Fare into a categorical feature by binning\n1. Dropping Unwanted Features\n", "_____no_output_____", "#### Converting String Values into Numeric[^](#4_1)<a id=\"4_1\" ></a><br>\nSince we cannot pass strings to a machine learning model, we need to convert features Sex, Embarked, etc. into numeric values.", "_____no_output_____" ] ], [ [ "data['Sex'].replace(['male','female'],[0,1],inplace=True)\ndata['Embarked'].replace(['S','C','Q'],[0,1,2],inplace=True)\ndata['Initial'].replace(['Mr','Mrs','Miss','Master','Other'],[0,1,2,3,4],inplace=True)", "_____no_output_____" ] ], [ [ "#### Convert Age into a categorical feature by binning[^](#4_2)<a id=\"4_2\" ></a><br>", "_____no_output_____" ] ], [ [ "print('Highest Age:',data['Age'].max(),' Lowest Age:',data['Age'].min())", "Highest Age: 80.0 Lowest Age: 0.42\n" ], [ "data['Age_cat']=0\ndata.loc[data['Age']<=16,'Age_cat']=0\ndata.loc[(data['Age']>16)&(data['Age']<=32),'Age_cat']=1\ndata.loc[(data['Age']>32)&(data['Age']<=48),'Age_cat']=2\ndata.loc[(data['Age']>48)&(data['Age']<=64),'Age_cat']=3\ndata.loc[data['Age']>64,'Age_cat']=4", "_____no_output_____" ] ], [ [ "#### Convert Fare into a categorical feature by binning[^](#4_3)<a id=\"4_3\" ></a><br>", "_____no_output_____" ] ], [ [ "data['Fare_cat']=0\ndata.loc[data['Fare']<=7.775,'Fare_cat']=0\ndata.loc[(data['Fare']>7.775)&(data['Fare']<=8.662),'Fare_cat']=1\ndata.loc[(data['Fare']>8.662)&(data['Fare']<=14.454),'Fare_cat']=2\ndata.loc[(data['Fare']>14.454)&(data['Fare']<=26.0),'Fare_cat']=3\ndata.loc[(data['Fare']>26.0)&(data['Fare']<=52.369),'Fare_cat']=4\ndata.loc[data['Fare']>52.369,'Fare_cat']=5", "_____no_output_____" ] ], [ [ "#### Dropping Unwanted Features[^](#4_4)<a id=\"4_4\" ></a><br>\n\nName--> We don't need the name feature as it cannot be converted into any categorical value.\n\nAge--> We have the Age_cat feature, so no need of this.\n\nTicket--> It is any random string that cannot be categorised.\n\nFare--> We have the Fare_cat feature, so unneeded\n\nCabin--> A lot of NaN values and also many passengers have multiple cabins. 
So this is a useless feature.\n\nFare_Bin--> We have the fare_cat feature.\n\nPassengerId--> Cannot be categorised.\n\nSibSp & Parch --> We have the FamilySize feature\n", "_____no_output_____" ] ], [ [ "#data.drop(['Name','Age','Ticket','Fare','Cabin','Fare_Range','PassengerId'],axis=1,inplace=True)\ndata.drop(['Name','Age','Fare','Ticket','Cabin','Fare_Bin','SibSp','Parch','PassengerId'],axis=1,inplace=True)", "_____no_output_____" ], [ "data.head(2)", "_____no_output_____" ], [ "sns.heatmap(data.corr(),annot=True,cmap='RdYlGn',linewidths=0.2) #data.corr()-->correlation matrix\nfig=plt.gcf()\nfig.set_size_inches(10,8)\nplt.show()", "_____no_output_____" ] ], [ [ "---\n## Predictive Modeling[^](#5)<a id=\"5\" ></a><br>\n", "_____no_output_____", "Now, after data cleaning and feature engineering, we are ready to train some classification algorithms that will make predictions for unseen data. We will first train a few classification algorithms and see how they perform. Then we can look at how an ensemble of classification algorithms performs on this data set.\nThe following machine learning algorithms will be used in this kernel.\n\n* Logistic Regression Classifier\n* Naive Bayes Classifier\n* Decision Tree Classifier\n* Random Forest Classifier\n", "_____no_output_____" ] ], [ [ "#importing all the required ML packages\nfrom sklearn.linear_model import LogisticRegression #logistic regression\nfrom sklearn.ensemble import RandomForestClassifier #Random Forest\nfrom sklearn.naive_bayes import GaussianNB #Naive bayes\nfrom sklearn.tree import DecisionTreeClassifier #Decision Tree\nfrom sklearn.model_selection import train_test_split #training and testing data split\nfrom sklearn import metrics #accuracy measure\nfrom sklearn.metrics import confusion_matrix #for confusion matrix", "_____no_output_____" ], [ "# Let's prepare data sets for training. 
\ntrain,test=train_test_split(data,test_size=0.3,random_state=0,stratify=data['Survived'])\ntrain_X=train[train.columns[1:]]\ntrain_Y=train[train.columns[:1]]\ntest_X=test[test.columns[1:]]\ntest_Y=test[test.columns[:1]]\nX=data[data.columns[1:]]\nY=data['Survived']", "_____no_output_____" ], [ "data.head(2)", "_____no_output_____" ], [ "# Logistic Regression\nmodel = LogisticRegression(C=0.05,solver='liblinear')\nmodel.fit(train_X,train_Y.values.ravel())\nLR_prediction=model.predict(test_X)\nprint('The accuracy of the Logistic Regression model is \\t',metrics.accuracy_score(LR_prediction,test_Y))\n\n# Naive Bayes\nmodel=GaussianNB()\nmodel.fit(train_X,train_Y.values.ravel())\nNB_prediction=model.predict(test_X)\nprint('The accuracy of the NaiveBayes model is\\t\\t\\t',metrics.accuracy_score(NB_prediction,test_Y))\n\n# Decision Tree\nmodel=DecisionTreeClassifier()\nmodel.fit(train_X,train_Y)\nDT_prediction=model.predict(test_X)\nprint('The accuracy of the Decision Tree is \\t\\t\\t',metrics.accuracy_score(DT_prediction,test_Y))\n\n# Random Forest\nmodel=RandomForestClassifier(n_estimators=100)\nmodel.fit(train_X,train_Y.values.ravel())\nRF_prediction=model.predict(test_X)\nprint('The accuracy of the Random Forests model is \\t\\t',metrics.accuracy_score(RF_prediction,test_Y))", "The accuracy of the Logistic Regression model is \t 0.8134328358208955\nThe accuracy of the NaiveBayes model is\t\t\t 0.8134328358208955\nThe accuracy of the Decision Tree is \t\t\t 0.8134328358208955\nThe accuracy of the Random Forests model is \t\t 0.8171641791044776\n" ] ], [ [ "### Cross Validation[^](#5_1)<a id=\"5_1\" ></a><br>\n\nThe accuracy we get here highly depends on the train & test data split of the original data set. We can use cross validation to avoid such problems arising from dataset splitting.\nI am using K-fold cross validation here. Watch this short [video](https://www.youtube.com/watch?v=TIgfjmp-4BA) to understand what it is.\n", "_____no_output_____" ] ], [ [ "from sklearn.model_selection import KFold #for K-fold cross validation\nfrom sklearn.model_selection import cross_val_score #score evaluation\nfrom sklearn.model_selection import cross_val_predict #prediction\nkfold = KFold(n_splits=10, random_state=22) # k=10, split the data into 10 equal parts\nxyz=[]\naccuracy=[]\nstd=[]\nclassifiers=['Logistic Regression','Decision Tree','Naive Bayes','Random Forest']\nmodels=[LogisticRegression(solver='liblinear'),DecisionTreeClassifier(),GaussianNB(),RandomForestClassifier(n_estimators=100)]\nfor i in models:\n model = i\n cv_result = cross_val_score(model,X,Y, cv = kfold,scoring = \"accuracy\")\n xyz.append(cv_result.mean())\n std.append(cv_result.std())\n accuracy.append(cv_result)\nnew_models_dataframe2=pd.DataFrame({'CV Mean':xyz,'Std':std},index=classifiers) \nnew_models_dataframe2", "_____no_output_____" ] ], [ [ "Now we have looked at cross validation accuracies to get an idea of how those models work. There is more we can do to understand the performances of the models we tried; let's have a look at the confusion matrix for each model.", "_____no_output_____", "### Confusion Matrix[^](#5_2)<a id=\"5_2\" ></a><br>", "_____no_output_____", "A confusion matrix is a table that is often used to describe the performance of a classification model; read more [here](https://www.dataschool.io/simple-guide-to-confusion-matrix-terminology/).", "_____no_output_____" ] ],
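[ [ "For orientation when reading the heatmaps below (an added note; the snippet is illustrative): with scikit-learn's convention, rows are the actual classes and columns the predicted ones, so for a binary problem the four counts can be unpacked as\n\n```python\nfrom sklearn.metrics import confusion_matrix\ntn, fp, fn, tp = confusion_matrix(Y, y_pred).ravel()\n```\n\nand the off-diagonal cells are the two kinds of mistakes compared below.", "_____no_output_____" ] ],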
[ [ "f,ax=plt.subplots(2,2,figsize=(10,8))\ny_pred = cross_val_predict(LogisticRegression(C=0.05,solver='liblinear'),X,Y,cv=10)\nsns.heatmap(confusion_matrix(Y,y_pred),ax=ax[0,0],annot=True,fmt='2.0f')\nax[0,0].set_title('Matrix for Logistic Regression')\ny_pred = cross_val_predict(DecisionTreeClassifier(),X,Y,cv=10)\nsns.heatmap(confusion_matrix(Y,y_pred),ax=ax[0,1],annot=True,fmt='2.0f')\nax[0,1].set_title('Matrix for Decision Tree')\ny_pred = cross_val_predict(GaussianNB(),X,Y,cv=10)\nsns.heatmap(confusion_matrix(Y,y_pred),ax=ax[1,0],annot=True,fmt='2.0f')\nax[1,0].set_title('Matrix for Naive Bayes')\ny_pred = cross_val_predict(RandomForestClassifier(n_estimators=100),X,Y,cv=10)\nsns.heatmap(confusion_matrix(Y,y_pred),ax=ax[1,1],annot=True,fmt='2.0f')\nax[1,1].set_title('Matrix for Random-Forests')\nplt.subplots_adjust(hspace=0.2,wspace=0.2)\nplt.show()", "_____no_output_____" ] ], [ [ "* By looking at the above matrices we can say that, if we are more concerned about making fewer mistakes by predicting survived as dead, then the Naive Bayes model does better.\n* If we are more concerned about making fewer mistakes by predicting dead as survived, then the Decision Tree model does better.", "_____no_output_____", "### Hyper-Parameters Tuning[^](#5_3)<a id=\"5_3\" ></a><br>\n\nYou might have noticed there are a few parameters for each model which define how the model learns. We call these hyperparameters. These hyperparameters can be tuned to improve performance. Let's try this for the Random Forest classifier.", "_____no_output_____" ] ], [ [ "from sklearn.model_selection import GridSearchCV\nn_estimators=range(100,1000,100)\nhyper={'n_estimators':n_estimators}\ngd=GridSearchCV(estimator=RandomForestClassifier(random_state=0),param_grid=hyper,verbose=True,cv=10)\ngd.fit(X,Y)\nprint(gd.best_score_)\nprint(gd.best_estimator_)", "Fitting 10 folds for each of 9 candidates, totalling 90 fits\n" ] ], [ [ "* The best score for Random Forest is with n_estimators=100", "_____no_output_____", "### Ensembling[^](#5_4)<a id=\"5_4\" ></a><br>\n\nEnsembling is a way to increase the performance of a model by combining several simple models to create a single powerful model.\nRead more about ensembling [here](https://www.analyticsvidhya.com/blog/2018/06/comprehensive-guide-for-ensemble-models/).\nEnsembling can be done in ways like: Voting Classifier, Bagging, Boosting.\n\nI will use the voting method in this kernel", "_____no_output_____" ] ], [ [ "from sklearn.ensemble import VotingClassifier\nestimators=[('RFor',RandomForestClassifier(n_estimators=100,random_state=0)),\n ('LR',LogisticRegression(C=0.05,solver='liblinear')),\n ('DT',DecisionTreeClassifier()),\n ('NB',GaussianNB())]\nensemble=VotingClassifier(estimators=estimators,voting='soft')\nensemble.fit(train_X,train_Y.values.ravel())\nprint('The accuracy for ensembled model is:',ensemble.score(test_X,test_Y))\ncross=cross_val_score(ensemble,X,Y, cv = 10,scoring = \"accuracy\")\nprint('The cross validated score is',cross.mean())", "The accuracy for ensembled model is: 0.8059701492537313\nThe cross validated score is 0.803603166496425\n" ] ],
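[ [ "A brief aside on the voting='soft' choice above (an added remark, not from the original): soft voting averages the class probabilities predicted by the base models, while voting='hard' takes a majority vote over predicted labels; soft voting tends to help when the base models output reasonably calibrated probabilities.", "_____no_output_____" ] ],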
[ [ "### Prediction[^](#5_5)<a id=\"5_5\" ></a><br>\n\nWe can see that the ensemble model does better than the individual models. Let's use it for predictions.", "_____no_output_____" ] ], [ [ "Ensemble_Model_For_Prediction=VotingClassifier(estimators=[\n ('RFor',RandomForestClassifier(n_estimators=200,random_state=0)),\n ('LR',LogisticRegression(C=0.05,solver='liblinear')),\n ('DT',DecisionTreeClassifier(random_state=0)),\n ('NB',GaussianNB())\n ], \n voting='soft')\nEnsemble_Model_For_Prediction.fit(X,Y)", "_____no_output_____" ] ], [ [ "We need to do some preprocessing on this test data set before we can feed it to the trained model.", "_____no_output_____" ] ], [ [ "test=pd.read_csv('../input/test.csv')\nIDtest = test[\"PassengerId\"]\ntest.head(2)", "_____no_output_____" ], [ "test.isnull().sum()", "_____no_output_____" ], [ "# Prepare Test Data set for feeding\n\n# Construct feature Initial\ntest['Initial']=0\nfor i in test:\n test['Initial']=test.Name.str.extract('([A-Za-z]+)\\.') # let's extract the Salutations\n \ntest['Initial'].replace(['Mlle','Mme','Ms','Dr','Major','Lady','Countess','Jonkheer','Col','Rev','Capt','Sir','Don','Dona'],['Miss','Miss','Miss','Mr','Mr','Mrs','Mrs','Other','Other','Other','Mr','Mr','Mr','Other'],inplace=True)\n\n# Fill Null values in Age Column\ntest.loc[(test.Age.isnull())&(test.Initial=='Mr'),'Age']=33\ntest.loc[(test.Age.isnull())&(test.Initial=='Mrs'),'Age']=36\ntest.loc[(test.Age.isnull())&(test.Initial=='Master'),'Age']=5\ntest.loc[(test.Age.isnull())&(test.Initial=='Miss'),'Age']=22\ntest.loc[(test.Age.isnull())&(test.Initial=='Other'),'Age']=46\n\n# Fill Null values in Fare Column\ntest.loc[(test.Fare.isnull()) & (test['Pclass']==3),'Fare'] = 12.45\n\n# Construct feature Age_cat\ntest['Age_cat']=0\ntest.loc[test['Age']<=16,'Age_cat']=0\ntest.loc[(test['Age']>16)&(test['Age']<=32),'Age_cat']=1\ntest.loc[(test['Age']>32)&(test['Age']<=48),'Age_cat']=2\ntest.loc[(test['Age']>48)&(test['Age']<=64),'Age_cat']=3\ntest.loc[test['Age']>64,'Age_cat']=4\n\n# Construct feature Fare_cat\ntest['Fare_cat']=0\ntest.loc[test['Fare']<=7.775,'Fare_cat']=0\ntest.loc[(test['Fare']>7.775)&(test['Fare']<=8.662),'Fare_cat']=1\ntest.loc[(test['Fare']>8.662)&(test['Fare']<=14.454),'Fare_cat']=2\ntest.loc[(test['Fare']>14.454)&(test['Fare']<=26.0),'Fare_cat']=3\ntest.loc[(test['Fare']>26.0)&(test['Fare']<=52.369),'Fare_cat']=4\ntest.loc[test['Fare']>52.369,'Fare_cat']=5\n\n# Construct feature FamilySize\ntest['FamilySize'] = test['Parch'] + test['SibSp']\n\n# Drop unwanted features\ntest.drop(['Name','Age','Ticket','Cabin','SibSp','Parch','Fare','PassengerId'],axis=1,inplace=True)\n\n# Converting String Values into Numeric \ntest['Sex'].replace(['male','female'],[0,1],inplace=True)\ntest['Embarked'].replace(['S','C','Q'],[0,1,2],inplace=True)\ntest['Initial'].replace(['Mr','Mrs','Miss','Master','Other'],[0,1,2,3,4],inplace=True)\n\ntest.head(2)", "_____no_output_____" ], [ "# Predict\ntest_Survived = pd.Series(ensemble.predict(test), name=\"Survived\")\nresults = pd.concat([IDtest,test_Survived],axis=1)\nresults.to_csv(\"predictions.csv\",index=False)", "/opt/conda/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint8 = np.dtype([(\"qint8\", np.int8, 1)])\n/opt/conda/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n 
_np_quint8 = np.dtype([(\"quint8\", np.uint8, 1)])\n/opt/conda/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint16 = np.dtype([(\"qint16\", np.int16, 1)])\n/opt/conda/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint16 = np.dtype([(\"quint16\", np.uint16, 1)])\n/opt/conda/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint32 = np.dtype([(\"qint32\", np.int32, 1)])\n/opt/conda/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n np_resource = np.dtype([(\"resource\", np.ubyte, 1)])\n/opt/conda/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint8 = np.dtype([(\"qint8\", np.int8, 1)])\n/opt/conda/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint8 = np.dtype([(\"quint8\", np.uint8, 1)])\n/opt/conda/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint16 = np.dtype([(\"qint16\", np.int16, 1)])\n/opt/conda/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint16 = np.dtype([(\"quint16\", np.uint16, 1)])\n/opt/conda/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint32 = np.dtype([(\"qint32\", np.int32, 1)])\n/opt/conda/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n np_resource = np.dtype([(\"resource\", np.ubyte, 1)])\n" ] ], [ [ "## Feature Importance[^](#6)<a id=\"6\" ></a><br>\n\nWell, after we have trained a model to make predictions for us, we feel curious about how it works. Which features does the model weight more when making a prediction? As humans we seek to understand how it works. Looking at the feature importances of a trained model is one way we could explain the decisions it makes. 
Let's visualize the feature importances of the Random Forest model we used inside the ensemble above.", "_____no_output_____" ] ], [ [ "f,ax=plt.subplots(1,1,figsize=(6,6))\nmodel=RandomForestClassifier(n_estimators=500,random_state=0)\nmodel.fit(X,Y)\npd.Series(model.feature_importances_,X.columns).sort_values(ascending=True).plot.barh(width=0.8,ax=ax)\nax.set_title('Feature Importance in Random Forests')\nplt.show()", "_____no_output_____" ] ], [ [ "**If you like the notebook and think that it helped you, please upvote; it keeps me motivated**", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
d019bfcac27fba471ca4a026a4274eddd2bc4831
17,487
ipynb
Jupyter Notebook
notebooks/Chapter03/math_numpy.ipynb
tagomaru/ai_security
7e66839f86384c2b93158e2a21c9495996913454
[ "MIT" ]
6
2021-01-10T22:08:23.000Z
2021-09-18T02:25:52.000Z
notebooks/Chapter03/math_numpy.ipynb
tagomaru/ai_security
7e66839f86384c2b93158e2a21c9495996913454
[ "MIT" ]
null
null
null
notebooks/Chapter03/math_numpy.ipynb
tagomaru/ai_security
7e66839f86384c2b93158e2a21c9495996913454
[ "MIT" ]
3
2021-02-20T02:50:04.000Z
2022-03-20T04:16:08.000Z
20.310105
971
0.441414
[ [ [ "# ディープラーニングに必要な数学と NumPy の操作", "_____no_output_____" ], [ "# 1. NumPy の基本", "_____no_output_____" ], [ "## NumPy のインポート", "_____no_output_____" ] ], [ [ "import numpy as np", "_____no_output_____" ] ], [ [ "## ndarray による1次元配列の例", "_____no_output_____" ] ], [ [ "a1 = np.array([1, 2, 3]) # 1次元配列を生成\nprint('変数の型:',type(a1))\nprint('データの型 (dtype):', a1.dtype)\nprint('要素の数 (size):', a1.size)\nprint('形状 (shape):', a1.shape)\nprint('次元の数 (ndim):', a1.ndim)\nprint('中身:', a1)", "変数の型: <class 'numpy.ndarray'>\nデータの型 (dtype): int64\n要素の数 (size): 3\n形状 (shape): (3,)\n次元の数 (ndim): 1\n中身: [1 2 3]\n" ] ], [ [ "## ndarray による1次元配列の例", "_____no_output_____" ] ], [ [ "a2 = np.array([[1, 2, 3],[4, 5, 6]], dtype='float32') # データ型 float32 の2次元配列を生成\nprint('データの型 (dtype):', a2.dtype)\nprint('要素の数 (size):', a2.size)\nprint('形状 (shape):', a2.shape)\nprint('次元の数 (ndim):', a2.ndim)\nprint('中身:', a2)", "データの型 (dtype): float32\n要素の数 (size): 6\n形状 (shape): (2, 3)\n次元の数 (ndim): 2\n中身: [[1. 2. 3.]\n [4. 5. 6.]]\n" ] ], [ [ "# 2. ベクトル(1次元配列)", "_____no_output_____" ], [ "## ベクトル a の生成(1次元配列の生成)", "_____no_output_____" ] ], [ [ "a = np.array([4, 1])", "_____no_output_____" ] ], [ [ "## ベクトルのスカラー倍", "_____no_output_____" ] ], [ [ "for k in (2, 0.5, -1):\n print(k * a)", "[8 2]\n[2. 0.5]\n[-4 -1]\n" ] ], [ [ "## ベクトルの和と差", "_____no_output_____" ] ], [ [ "b = np.array([1, 2]) # ベクトル b の生成\nprint('a + b =', a + b) # ベクトル a とベクトル b の和\nprint('a - b =', a - b) # ベクトル a とベクトル b の差", "a + b = [5 3]\na - b = [ 3 -1]\n" ] ], [ [ "# 3. 行列(2次元配列)", "_____no_output_____" ], [ "## 行列を2次元配列で生成", "_____no_output_____" ] ], [ [ "A = np.array([[1, 2], [3 ,4], [5, 6]])\nB = np.array([[5, 6], [7 ,8]])\nprint('A:\\n', A)\nprint('A.shape:', A.shape )\nprint()\nprint('B:\\n', B)\nprint('B.shape:', B.shape )", "A:\n [[1 2]\n [3 4]\n [5 6]]\nA.shape: (3, 2)\n\nB:\n [[5 6]\n [7 8]]\nB.shape: (2, 2)\n" ] ], [ [ "## 行列Aの i = 3, j = 2 にアクセス", "_____no_output_____" ] ], [ [ "print(A[2][1])", "6\n" ] ], [ [ "## A の転置行列", "_____no_output_____" ] ], [ [ "print(A.T)", "[[1 3 5]\n [2 4 6]]\n" ] ], [ [ "## 行列のスカラー倍", "_____no_output_____" ] ], [ [ "print(2 * A)", "[[ 2 4]\n [ 6 8]\n [10 12]]\n" ] ], [ [ "## 行列の和と差", "_____no_output_____" ] ], [ [ "print('A + A:\\n', A + A) # 行列 A と行列 A の和\nprint()\nprint('A - A:\\n', A - A) # 行列 A と行列 A の差", "A + A:\n [[ 2 4]\n [ 6 8]\n [10 12]]\n\nA - A:\n [[0 0]\n [0 0]\n [0 0]]\n" ] ], [ [ "## 行列 A と行列 B の和", "_____no_output_____" ] ], [ [ "print(A + B)", "_____no_output_____" ] ], [ [ "## 行列の積", "_____no_output_____" ] ], [ [ "print(np.dot(A, B))", "[[19 22]\n [43 50]\n [67 78]]\n" ] ], [ [ "## 積 BA", "_____no_output_____" ] ], [ [ "print(np.dot(B, A))", "_____no_output_____" ] ], [ [ "## アダマール積 A $\\circ$ A", "_____no_output_____" ] ], [ [ "print(A * A)", "[[ 1 4]\n [ 9 16]\n [25 36]]\n" ] ], [ [ "## 行列 X と行ベクトル a の積", "_____no_output_____" ] ], [ [ "X = np.array([[0, 1, 2, 3, 4],\n [5, 6, 7, 8, 9]])\na = np.array([[1, 2, 3, 4, 5]])\nprint('X.shape:', X.shape)\nprint('a.shape:', a.shape)\nprint(np.dot(X, a))", "X.shape: (2, 5)\na.shape: (1, 5)\n" ] ], [ [ "## 行列 X と列ベクトル a の積", "_____no_output_____" ] ], [ [ "X = np.array([[0, 1, 2, 3, 4],\n [5, 6, 7, 8, 9]])\na = np.array([[1], \n [2],\n [3],\n [4],\n [5]])\nprint('X.shape:', X.shape)\nprint('a.shape:', a.shape)\nXa = np.dot(X, a)\nprint('Xa.shape:', Xa.shape)\nprint('Xa:\\n', Xa)", "X.shape: (2, 5)\na.shape: (5, 1)\nXa.shape: (2, 1)\nXa:\n [[ 40]\n [115]]\n" ] ], [ [ "## NumPy による行列 X と1次元配列の積", "_____no_output_____" ] ], [ [ "X = np.array([[0, 1, 2, 3, 4],\n 
[ [ "## Example of using the np.max function", "_____no_output_____" ] ], [ [ "Y_hat = np.array([[3, 4], [6, 5], [7, 8]]) # create a 2-D array\nprint(np.max(Y_hat)) # no axis specified\nprint(np.max(Y_hat, axis=1)) # with axis=1", "8\n[4 6 8]\n" ] ], [ [ "## Example of using the argmax function", "_____no_output_____" ] ], [ [ "print(np.argmax(Y_hat)) # no axis specified\nprint(np.argmax(Y_hat, axis=1)) # with axis=1", "5\n[1 0 1]\n" ] ], [ [ "# 5. Arrays with three or more dimensions", "_____no_output_____" ], [ "## Creating an array holding four copies of matrix A", "_____no_output_____" ] ], [ [ "A_arr = np.array([A, A, A, A])\nprint(A_arr.shape)", "(4, 3, 2)\n" ] ], [ [ "## Computing the sum of A_arr", "_____no_output_____" ] ], [ [ "np.sum(A_arr)", "_____no_output_____" ] ], [ [ "## Computing the sum of A_arr with axis = 0", "_____no_output_____" ] ], [ [ "print(np.sum(A_arr, axis=0).shape)\nprint(np.sum(A_arr, axis=0))", "(3, 2)\n[[ 4 8]\n [12 16]\n [20 24]]\n" ] ], [ [ "## Computing the sum of A_arr with axis = (1, 2)", "_____no_output_____" ] ], [ [ "print(np.sum(A_arr, axis=(1, 2)))", "[21 21 21 21]\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
d019c01ff5151c0a1ecec3563e5383a4c2206048
112,476
ipynb
Jupyter Notebook
ManipulatingRegressionSlopes.ipynb
ShashwatVv/naiveDL
8cc6089f3e1f70719d18b41b9768ac6054a17777
[ "MIT" ]
null
null
null
ManipulatingRegressionSlopes.ipynb
ShashwatVv/naiveDL
8cc6089f3e1f70719d18b41b9768ac6054a17777
[ "MIT" ]
null
null
null
ManipulatingRegressionSlopes.ipynb
ShashwatVv/naiveDL
8cc6089f3e1f70719d18b41b9768ac6054a17777
[ "MIT" ]
null
null
null
556.811881
106,969
0.612317
[ [ [ "<a href=\"https://colab.research.google.com/github/ShashwatVv/naiveDL/blob/main/ManipulatingRegressionSlopes.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "Simple Linear Regression using ANN.\ncontinuing from [\"here\"](https://colab.research.google.com/drive/1zTy_7Z5rfKHPKTTCWyou5EemqL8yBqih)", "_____no_output_____" ] ], [ [ "#importing libraries\nimport numpy as np\nimport torch\nimport torch.nn as nn\nimport matplotlib.pyplot as plt\n\nfrom IPython import display\ndisplay.set_matplotlib_formats('svg')\nprint('modules imported')", "modules imported\n" ], [ "def build_and_train(x, y, learning_rate, n_epochs):\n \n ## building\n model = nn.Sequential(\n nn.Linear(1,1),\n nn.ReLU(),\n nn.Linear(1,1)\n )\n \n ## optimizer --> stochastic gradient descent\n ## loss--> Mean Squared Error\n\n loss_fun = nn.MSELoss()\n optimizer = torch.optim.SGD(model.parameters(), lr = learning_rate)\n \n losses = torch.zeros(n_epochs)\n\n for i in range(n_epochs):\n y_hat = model(x)\n ##forward prop has been done\n\n loss = loss_fun(y_hat, y)\n losses[i] = loss\n ##loss has been computed\n\n optimizer.zero_grad()\n loss.backward()\n optimizer.step()\n ##back prop has been done\n\n ## end for loop\n\n predictions = model(x)\n\n return predictions, losses\n ", "_____no_output_____" ], [ "## we have to create data with a generic function such that the slope could be varied\n\ndef create_data(slope, N, scale=2):\n \n x = torch.randn(N, 1)\n y = slope*x + torch.randn(N,1)/scale\n\n return x, y", "_____no_output_____" ], [ "x, y = create_data(0.75, 40)\nyhat, losses = build_and_train(x, y, 0.5, 500)\n\nfig, ax = plt.subplots(1,2, figsize=(10,4))\n\ncorr = np.corrcoef(y.T, yhat.detach().T)[0,1]\n\nax[0].plot(losses.detach(), 'o', markerfacecolor='w', linewidth=.15)\nax[0].set_xlabel('Epoch')\nax[0].set_title('Loss')\n\nax[1].plot(x, y, 'go', label='Actual Data')\nax[1].plot(x, yhat.detach(), 'rs', label='Predicted Data')\nax[1].set_xlabel('x')\nax[1].set_ylabel('y')\nax[1].set_title(f'Prediction-data-correlation {corr: .2f}')", "_____no_output_____" ], [ "", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ] ]
d019c74258407642a59c3c02905324a156c8da81
501,487
ipynb
Jupyter Notebook
Natural Language Processing in TensorFlow/Week 3 Sequence models/NLP_Course_Week_3_Exercise_Question Exploring overfitting in NLP - Glove Embedding.ipynb
mohameddhameem/TensorflowCertification
0d1fb48eda48496105d08d1151fb0272f809aa61
[ "Apache-2.0" ]
3
2021-06-07T14:01:33.000Z
2021-06-20T01:56:40.000Z
Natural Language Processing in TensorFlow/Week 3 Sequence models/NLP_Course_Week_3_Exercise_Question Exploring overfitting in NLP - Glove Embedding.ipynb
mohameddhameem/TensorflowCertification
0d1fb48eda48496105d08d1151fb0272f809aa61
[ "Apache-2.0" ]
null
null
null
Natural Language Processing in TensorFlow/Week 3 Sequence models/NLP_Course_Week_3_Exercise_Question Exploring overfitting in NLP - Glove Embedding.ipynb
mohameddhameem/TensorflowCertification
0d1fb48eda48496105d08d1151fb0272f809aa61
[ "Apache-2.0" ]
null
null
null
747.372578
432,016
0.39596
[ [ [ "<a href=\"https://colab.research.google.com/github/mohameddhameem/TensorflowCertification/blob/main/Natural%20Language%20Processing%20in%20TensorFlow/Lesson%203/NLP_Course_Week_3_Exercise_Question.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ] ], [ [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "_____no_output_____" ], [ "import json\nimport tensorflow as tf\nimport csv\nimport random\nimport numpy as np\n\nfrom tensorflow.keras.preprocessing.text import Tokenizer\nfrom tensorflow.keras.preprocessing.sequence import pad_sequences\nfrom tensorflow.keras.utils import to_categorical\nfrom tensorflow.keras import regularizers\n\n\nembedding_dim = 100\nmax_length = 16\ntrunc_type='post'\npadding_type='post'\noov_tok = \"<OOV>\"\ntraining_size= 160000#Your dataset size here. Experiment using smaller values (i.e. 16000), but don't forget to train on at least 160000 to see the best effects\ntest_portion=.1\n\ncorpus = []\n", "_____no_output_____" ], [ "# Note that I cleaned the Stanford dataset to remove LATIN1 encoding to make it easier for Python CSV reader\n# You can do that yourself with:\n# iconv -f LATIN1 -t UTF8 training.1600000.processed.noemoticon.csv -o training_cleaned.csv\n# I then hosted it on my site to make it easier to use in this notebook\n\n!wget --no-check-certificate \\\n https://storage.googleapis.com/laurencemoroney-blog.appspot.com/training_cleaned.csv \\\n -O /tmp/training_cleaned.csv\n\nnum_sentences = 0\n\nwith open(\"/tmp/training_cleaned.csv\") as csvfile:\n reader = csv.reader(csvfile, delimiter=',')\n for row in reader:\n # Your Code here. Create list items where the first item is the text, found in row[5], and the second is the label. Note that the label is a '0' or a '4' in the text. When it's the former, make\n # your label to be 0, otherwise 1. Keep a count of the number of sentences in num_sentences\n list_item=[]\n list_item.append(row[5])\n this_label=row[0]\n if this_label == '0':\n list_item.append(0)\n else:\n list_item.append(1)\n\n # YOUR CODE HERE\n num_sentences = num_sentences + 1\n corpus.append(list_item)\n", "--2021-05-09 14:06:54-- https://storage.googleapis.com/laurencemoroney-blog.appspot.com/training_cleaned.csv\nResolving storage.googleapis.com (storage.googleapis.com)... 74.125.203.128, 74.125.204.128, 64.233.189.128, ...\nConnecting to storage.googleapis.com (storage.googleapis.com)|74.125.203.128|:443... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 238942690 (228M) [application/octet-stream]\nSaving to: ‘/tmp/training_cleaned.csv’\n\n/tmp/training_clean 100%[===================>] 227.87M 181MB/s in 1.3s \n\n2021-05-09 14:06:56 (181 MB/s) - ‘/tmp/training_cleaned.csv’ saved [238942690/238942690]\n\n" ], [ "print(num_sentences)\nprint(len(corpus))\nprint(corpus[1])\n\n# Expected Output:\n# 1600000\n# 1600000\n# [\"is upset that he can't update his Facebook by texting it... 
and might cry as a result School today also. Blah!\", 0]", "1600000\n1600000\n[\"is upset that he can't update his Facebook by texting it... and might cry as a result School today also. Blah!\", 0]\n" ], [ "sentences=[]\nlabels=[]\nrandom.shuffle(corpus)\nfor x in range(training_size):\n sentences.append(corpus[x][0])\n labels.append(corpus[x][1])\n\n\ntokenizer = Tokenizer(oov_token=oov_tok)\ntokenizer.fit_on_texts(sentences)# YOUR CODE HERE\n\nword_index = tokenizer.word_index\nvocab_size=len(word_index)\n\nsequences = tokenizer.texts_to_sequences(sentences)# YOUR CODE HERE\npadded = pad_sequences(sequences,maxlen=max_length, padding=padding_type,truncating=trunc_type)# YOUR CODE HERE\n\nsplit = int(test_portion * training_size)\nprint(split)\ntest_sequences = padded[0:split]\ntraining_sequences = padded[split:training_size]\ntest_labels = labels[0:split]\ntraining_labels = labels[split:training_size]", "16000\n" ], [ "print(vocab_size)\nprint(word_index['i'])\n# Expected Output\n# 138858\n# 1", "138329\n2\n" ], [ "!wget http://nlp.stanford.edu/data/glove.6B.zip", "--2021-05-09 13:07:52-- http://nlp.stanford.edu/data/glove.6B.zip\nResolving nlp.stanford.edu (nlp.stanford.edu)... 171.64.67.140\nConnecting to nlp.stanford.edu (nlp.stanford.edu)|171.64.67.140|:80... connected.\nHTTP request sent, awaiting response... 302 Found\nLocation: https://nlp.stanford.edu/data/glove.6B.zip [following]\n--2021-05-09 13:07:52-- https://nlp.stanford.edu/data/glove.6B.zip\nConnecting to nlp.stanford.edu (nlp.stanford.edu)|171.64.67.140|:443... connected.\nHTTP request sent, awaiting response... 301 Moved Permanently\nLocation: http://downloads.cs.stanford.edu/nlp/data/glove.6B.zip [following]\n--2021-05-09 13:07:53-- http://downloads.cs.stanford.edu/nlp/data/glove.6B.zip\nResolving downloads.cs.stanford.edu (downloads.cs.stanford.edu)... 171.64.64.22\nConnecting to downloads.cs.stanford.edu (downloads.cs.stanford.edu)|171.64.64.22|:80... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 862182613 (822M) [application/zip]\nSaving to: ‘glove.6B.zip.1’\n\nglove.6B.zip.1 100%[===================>] 822.24M 5.14MB/s in 2m 45s \n\n2021-05-09 13:10:38 (4.97 MB/s) - ‘glove.6B.zip.1’ saved [862182613/862182613]\n\n" ], [ "!unzip /content/glove.6B.zip", "Archive: /content/glove.6B.zip\n inflating: glove.6B.50d.txt \n inflating: glove.6B.100d.txt \n inflating: glove.6B.200d.txt \n inflating: glove.6B.300d.txt \n" ], [ "# Note this is the 100 dimension version of GloVe from Stanford\n# I unzipped and hosted it on my site to make this notebook easier\n#### NOTE - Below link is not working. 
So download and zip on your own\n#!wget --no-check-certificate \\\n# https://storage.googleapis.com/laurencemoroney-blog.appspot.com/glove.6B.100d.txt \\\n# -O /tmp/glove.6B.100d.txt\nembeddings_index = {};\nwith open('/content/glove.6B.100d.txt') as f:\n for line in f:\n values = line.split();\n word = values[0];\n coefs = np.asarray(values[1:], dtype='float32');\n embeddings_index[word] = coefs;\n\nembeddings_matrix = np.zeros((vocab_size+1, embedding_dim));\nfor word, i in word_index.items():\n embedding_vector = embeddings_index.get(word);\n if embedding_vector is not None:\n embeddings_matrix[i] = embedding_vector;", "_____no_output_____" ], [ "print(len(embeddings_matrix))\n# Expected Output\n# 138859", "138330\n" ], [ "training_padded = np.asarray(training_sequences)\ntraining_labels_np = np.asarray(training_labels)\ntesting_padded = np.asarray(test_sequences)\ntesting_labels_np = np.asarray(test_labels)\nprint(training_labels)", "[0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 
1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 
1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 
1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 
1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 
0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 
0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 
0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 
0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 
1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 
1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 
1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 
1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 
1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 
0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 
1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 
0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 
1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 
1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 
1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 
0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 
1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 
0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 
0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 
0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 
0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 
1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 
1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 
1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 
0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 
0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 
0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 
0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 
1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 
0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 
0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 
0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 
1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 
0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 
1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 
0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 
0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 
1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 
0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 
0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 
0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 
1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 
1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 
0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 
0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 
1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 
1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 
1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 
0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 
..., 1, 0, 1, 1] (output truncated: the remaining several thousand 0/1 label values are omitted)\n" ], [ "model = tf.keras.Sequential([\n    tf.keras.layers.Embedding(vocab_size+1, embedding_dim, input_length=max_length, weights=[embeddings_matrix], trainable=False),\n    # YOUR CODE HERE - experiment with combining different types, such as convolutions and LSTMs\n    tf.keras.layers.Dropout(0.2),\n    tf.keras.layers.Conv1D(64, 5, activation='relu'),\n    tf.keras.layers.MaxPooling1D(pool_size=4),\n    #tf.keras.layers.LSTM(64),\n    tf.keras.layers.Bidirectional(tf.keras.layers.GRU(32)),\n    tf.keras.layers.Dense(1, activation='sigmoid')\n])\nmodel.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])# YOUR CODE HERE\nmodel.summary()\n\nnum_epochs = 50\nhistory = model.fit(training_padded, training_labels_np, epochs=num_epochs, 
validation_data=(testing_padded, testing_labels_np), verbose=2)\n\nprint(\"Training Complete\")\n", "Model: \"sequential_7\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nembedding_7 (Embedding) (None, 16, 100) 13833000 \n_________________________________________________________________\ndropout_5 (Dropout) (None, 16, 100) 0 \n_________________________________________________________________\nconv1d_7 (Conv1D) (None, 12, 64) 32064 \n_________________________________________________________________\nmax_pooling1d_5 (MaxPooling1 (None, 3, 64) 0 \n_________________________________________________________________\nbidirectional (Bidirectional (None, 64) 18816 \n_________________________________________________________________\ndense_9 (Dense) (None, 1) 65 \n=================================================================\nTotal params: 13,883,945\nTrainable params: 50,945\nNon-trainable params: 13,833,000\n_________________________________________________________________\nEpoch 1/50\n4500/4500 - 27s - loss: 0.5673 - accuracy: 0.6983 - val_loss: 0.5389 - val_accuracy: 0.7249\nEpoch 2/50\n4500/4500 - 23s - loss: 0.5263 - accuracy: 0.7325 - val_loss: 0.5172 - val_accuracy: 0.7400\nEpoch 3/50\n4500/4500 - 23s - loss: 0.5093 - accuracy: 0.7456 - val_loss: 0.5127 - val_accuracy: 0.7403\nEpoch 4/50\n4500/4500 - 23s - loss: 0.4995 - accuracy: 0.7521 - val_loss: 0.5138 - val_accuracy: 0.7417\nEpoch 5/50\n4500/4500 - 23s - loss: 0.4903 - accuracy: 0.7578 - val_loss: 0.5094 - val_accuracy: 0.7465\nEpoch 6/50\n4500/4500 - 23s - loss: 0.4842 - accuracy: 0.7623 - val_loss: 0.5115 - val_accuracy: 0.7414\nEpoch 7/50\n4500/4500 - 23s - loss: 0.4774 - accuracy: 0.7676 - val_loss: 0.5103 - val_accuracy: 0.7452\nEpoch 8/50\n4500/4500 - 23s - loss: 0.4740 - accuracy: 0.7688 - val_loss: 0.5092 - val_accuracy: 0.7479\nEpoch 9/50\n4500/4500 - 23s - loss: 0.4703 - accuracy: 0.7711 - val_loss: 0.5095 - val_accuracy: 0.7491\nEpoch 10/50\n4500/4500 - 23s - loss: 0.4661 - accuracy: 0.7742 - val_loss: 0.5084 - val_accuracy: 0.7491\nEpoch 11/50\n4500/4500 - 23s - loss: 0.4641 - accuracy: 0.7756 - val_loss: 0.5109 - val_accuracy: 0.7471\nEpoch 12/50\n4500/4500 - 23s - loss: 0.4616 - accuracy: 0.7773 - val_loss: 0.5129 - val_accuracy: 0.7449\nEpoch 13/50\n4500/4500 - 23s - loss: 0.4585 - accuracy: 0.7799 - val_loss: 0.5179 - val_accuracy: 0.7444\nEpoch 14/50\n4500/4500 - 23s - loss: 0.4568 - accuracy: 0.7805 - val_loss: 0.5127 - val_accuracy: 0.7446\nEpoch 15/50\n4500/4500 - 23s - loss: 0.4553 - accuracy: 0.7817 - val_loss: 0.5188 - val_accuracy: 0.7437\nEpoch 16/50\n4500/4500 - 23s - loss: 0.4528 - accuracy: 0.7824 - val_loss: 0.5167 - val_accuracy: 0.7418\nEpoch 17/50\n4500/4500 - 23s - loss: 0.4514 - accuracy: 0.7830 - val_loss: 0.5167 - val_accuracy: 0.7451\nEpoch 18/50\n4500/4500 - 23s - loss: 0.4509 - accuracy: 0.7838 - val_loss: 0.5120 - val_accuracy: 0.7471\nEpoch 19/50\n4500/4500 - 23s - loss: 0.4486 - accuracy: 0.7851 - val_loss: 0.5203 - val_accuracy: 0.7452\nEpoch 20/50\n4500/4500 - 23s - loss: 0.4481 - accuracy: 0.7863 - val_loss: 0.5164 - val_accuracy: 0.7449\nEpoch 21/50\n4500/4500 - 23s - loss: 0.4478 - accuracy: 0.7851 - val_loss: 0.5142 - val_accuracy: 0.7476\nEpoch 22/50\n4500/4500 - 23s - loss: 0.4454 - accuracy: 0.7868 - val_loss: 0.5172 - val_accuracy: 0.7442\nEpoch 23/50\n4500/4500 - 23s - loss: 0.4456 - accuracy: 0.7877 - val_loss: 0.5159 - val_accuracy: 0.7433\nEpoch 
24/50\n4500/4500 - 23s - loss: 0.4446 - accuracy: 0.7883 - val_loss: 0.5185 - val_accuracy: 0.7466\nEpoch 25/50\n4500/4500 - 23s - loss: 0.4429 - accuracy: 0.7888 - val_loss: 0.5186 - val_accuracy: 0.7439\nEpoch 26/50\n4500/4500 - 23s - loss: 0.4443 - accuracy: 0.7875 - val_loss: 0.5196 - val_accuracy: 0.7459\nEpoch 27/50\n4500/4500 - 23s - loss: 0.4427 - accuracy: 0.7888 - val_loss: 0.5174 - val_accuracy: 0.7448\nEpoch 28/50\n4500/4500 - 23s - loss: 0.4418 - accuracy: 0.7897 - val_loss: 0.5196 - val_accuracy: 0.7422\nEpoch 29/50\n4500/4500 - 23s - loss: 0.4416 - accuracy: 0.7895 - val_loss: 0.5214 - val_accuracy: 0.7429\nEpoch 30/50\n4500/4500 - 23s - loss: 0.4410 - accuracy: 0.7897 - val_loss: 0.5202 - val_accuracy: 0.7426\nEpoch 31/50\n4500/4500 - 23s - loss: 0.4407 - accuracy: 0.7901 - val_loss: 0.5269 - val_accuracy: 0.7428\nEpoch 32/50\n4500/4500 - 23s - loss: 0.4397 - accuracy: 0.7908 - val_loss: 0.5184 - val_accuracy: 0.7439\nEpoch 33/50\n4500/4500 - 23s - loss: 0.4406 - accuracy: 0.7893 - val_loss: 0.5172 - val_accuracy: 0.7459\nEpoch 34/50\n4500/4500 - 23s - loss: 0.4394 - accuracy: 0.7914 - val_loss: 0.5228 - val_accuracy: 0.7442\nEpoch 35/50\n4500/4500 - 23s - loss: 0.4391 - accuracy: 0.7906 - val_loss: 0.5267 - val_accuracy: 0.7421\nEpoch 36/50\n4500/4500 - 23s - loss: 0.4374 - accuracy: 0.7928 - val_loss: 0.5225 - val_accuracy: 0.7458\nEpoch 37/50\n4500/4500 - 23s - loss: 0.4381 - accuracy: 0.7922 - val_loss: 0.5189 - val_accuracy: 0.7450\nEpoch 38/50\n4500/4500 - 23s - loss: 0.4378 - accuracy: 0.7913 - val_loss: 0.5201 - val_accuracy: 0.7454\nEpoch 39/50\n4500/4500 - 23s - loss: 0.4357 - accuracy: 0.7926 - val_loss: 0.5183 - val_accuracy: 0.7477\nEpoch 40/50\n4500/4500 - 23s - loss: 0.4387 - accuracy: 0.7920 - val_loss: 0.5266 - val_accuracy: 0.7388\nEpoch 41/50\n4500/4500 - 23s - loss: 0.4363 - accuracy: 0.7927 - val_loss: 0.5240 - val_accuracy: 0.7412\nEpoch 42/50\n4500/4500 - 23s - loss: 0.4357 - accuracy: 0.7933 - val_loss: 0.5212 - val_accuracy: 0.7462\nEpoch 43/50\n4500/4500 - 23s - loss: 0.4359 - accuracy: 0.7933 - val_loss: 0.5246 - val_accuracy: 0.7446\nEpoch 44/50\n4500/4500 - 23s - loss: 0.4357 - accuracy: 0.7923 - val_loss: 0.5268 - val_accuracy: 0.7458\nEpoch 45/50\n4500/4500 - 23s - loss: 0.4345 - accuracy: 0.7937 - val_loss: 0.5247 - val_accuracy: 0.7457\nEpoch 46/50\n4500/4500 - 23s - loss: 0.4336 - accuracy: 0.7947 - val_loss: 0.5258 - val_accuracy: 0.7439\nEpoch 47/50\n4500/4500 - 23s - loss: 0.4345 - accuracy: 0.7941 - val_loss: 0.5255 - val_accuracy: 0.7460\nEpoch 48/50\n4500/4500 - 23s - loss: 0.4347 - accuracy: 0.7945 - val_loss: 0.5224 - val_accuracy: 0.7489\nEpoch 49/50\n4500/4500 - 23s - loss: 0.4361 - accuracy: 0.7926 - val_loss: 0.5235 - val_accuracy: 0.7432\nEpoch 50/50\n4500/4500 - 23s - loss: 0.4372 - accuracy: 0.7928 - val_loss: 0.5250 - val_accuracy: 0.7459\nTraining Complete\n" ], [ "import matplotlib.image as mpimg\nimport matplotlib.pyplot as plt\n\n#-----------------------------------------------------------\n# Retrieve a list of list results on training and test data\n# sets for each training epoch\n#-----------------------------------------------------------\nacc=history.history['accuracy']\nval_acc=history.history['val_accuracy']\nloss=history.history['loss']\nval_loss=history.history['val_loss']\n\nepochs=range(len(acc)) # Get number of epochs\n\n#------------------------------------------------\n# Plot training and validation accuracy per epoch\n#------------------------------------------------\nplt.plot(epochs, acc, 
'r')\nplt.plot(epochs, val_acc, 'b')\nplt.title('Training and validation accuracy')\nplt.xlabel(\"Epochs\")\nplt.ylabel(\"Accuracy\")\nplt.legend([\"Accuracy\", \"Validation Accuracy\"])\n\nplt.figure()\n\n#------------------------------------------------\n# Plot training and validation loss per epoch\n#------------------------------------------------\nplt.plot(epochs, loss, 'r')\nplt.plot(epochs, val_loss, 'b')\nplt.title('Training and validation loss')\nplt.xlabel(\"Epochs\")\nplt.ylabel(\"Loss\")\nplt.legend([\"Loss\", \"Validation Loss\"])\n\n# Expected Output\n# A chart where the validation loss does not increase sharply!", "_____no_output_____" ], [ "", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
d019ce2e757fbfdbca81a1d1bafc7aa09bdd87f3
5,362
ipynb
Jupyter Notebook
notebooks/01.2_scattering_compute_speed.ipynb
sgaut023/Chronic-Liver-Classification
98523a467eed3b51c20a73ed5ddbc53a1bf7a8d6
[ "RSA-MD" ]
1
2022-03-09T11:34:00.000Z
2022-03-09T11:34:00.000Z
notebooks/01.2_scattering_compute_speed.ipynb
sgaut023/Chronic-Liver-Classification
98523a467eed3b51c20a73ed5ddbc53a1bf7a8d6
[ "RSA-MD" ]
null
null
null
notebooks/01.2_scattering_compute_speed.ipynb
sgaut023/Chronic-Liver-Classification
98523a467eed3b51c20a73ed5ddbc53a1bf7a8d6
[ "RSA-MD" ]
null
null
null
28.983784
159
0.499814
[ [ [ "# 01.2 Scattering Compute Speed\n\n**NOT COMPLETED**\n\nIn this notebook, the time needed to extract scattering coefficients is measured.", "_____no_output_____" ] ], [ [ "import sys\nimport random\nimport os\nsys.path.append('../src')\nimport warnings\nwarnings.filterwarnings(\"ignore\")\nimport torch\nfrom tqdm import tqdm\nfrom kymatio.torch import Scattering2D\nimport time\nimport numpy as np\nimport kymatio.scattering2d.backend as backend\n\n###############################################################################\n# Finally, we import the `Scattering2D` class that computes the scattering\n# transform.\n\nfrom kymatio import Scattering2D\n", "_____no_output_____" ] ], [ [ "# 3. Scattering Speed Test", "_____no_output_____" ] ], [ [ "# From: https://github.com/kymatio/kymatio/blob/0.1.X/examples/2d/compute_speed.py\n# Benchmark setup\n# --------------------\n# NOTE: M, N (input image height and width), batch_size and the dataset\n# object holding the 'img' arrays are assumed to be defined in earlier cells;\n# this notebook is marked NOT COMPLETED and does not define them here.\nJ = 3\nL = 8\ntimes = 10\ndevices = ['cpu', 'gpu']\nscattering = Scattering2D(J, shape=(M, N), L=L, backend='torch_skcuda')\ndata = np.concatenate(dataset['img'], axis=0)\ndata = torch.from_numpy(data)\nx = data[0:batch_size]", "_____no_output_____" ], [ "%%time\n#mlflow.set_experiment('compute_speed_scattering')\nfor device in devices:\n    #with mlflow.start_run():\n    fmt_str = '==> Testing Float32 with {} backend, on {}, forward'\n    print(fmt_str.format('torch', device.upper()))\n\n    if device == 'gpu':\n        scattering.cuda()\n        x = x.cuda()\n    else:\n        scattering.cpu()\n        x = x.cpu()\n\n    scattering.forward(x)\n\n    if device == 'gpu':\n        torch.cuda.synchronize()\n\n    t_start = time.time()\n    for _ in range(times):\n        scattering.forward(x)\n\n    if device == 'gpu':\n        torch.cuda.synchronize()\n\n    t_elapsed = time.time() - t_start\n\n    fmt_str = 'Elapsed time: {:2f} [s / {:d} evals], avg: {:.2f} (s/batch)'\n    print(fmt_str.format(t_elapsed, times, t_elapsed/times))\n#     mlflow.log_param('M',M)\n#     mlflow.log_param('N',N)\n#     mlflow.log_param('Backend', device.upper())\n#     mlflow.log_param('J', J)\n#     mlflow.log_param('L', L)\n#     mlflow.log_param('Batch Size', batch_size)\n#     mlflow.log_param('Times', times)\n#     mlflow.log_metric('Elapsed Time', t_elapsed)\n#     mlflow.log_metric('Average Time', times)\n\n    ###############################################################################\n    # The resulting output should be something like\n    #\n    # .. code-block:: text\n    #\n    #   ==> Testing Float32 with torch backend, on CPU, forward\n    #   Elapsed time: 624.910853 [s / 10 evals], avg: 62.49 (s/batch)\n    #   ==> Testing Float32 with torch backend, on GPU, forward\n", "==> Testing Float32 with torch backend, on CPU, forward\nElapsed time: 523.081820 [s / 10 evals], avg: 52.31 (s/batch)\n==> Testing Float32 with torch backend, on GPU, forward\nElapsed time: 16.777041 [s / 10 evals], avg: 1.68 (s/batch)\nCPU times: user 53min 2s, sys: 4min 47s, total: 57min 50s\nWall time: 9min 54s\n" ] ] ]
[ "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
d019d55c1b469b33533f36111740c2d99a9e3db7
87,140
ipynb
Jupyter Notebook
docs/02.04-Working-with-Data-and-Figures.ipynb
jckantor/nbcollection
37d75ddfb16b8cb4958ae963a6973aa428f5feee
[ "MIT" ]
1
2020-09-13T05:36:33.000Z
2020-09-13T05:36:33.000Z
docs/02.04-Working-with-Data-and-Figures.ipynb
jckantor/nbcollection
37d75ddfb16b8cb4958ae963a6973aa428f5feee
[ "MIT" ]
61
2020-05-20T17:35:40.000Z
2022-01-04T00:13:01.000Z
docs/02.04-Working-with-Data-and-Figures.ipynb
jckantor/nbcollection
37d75ddfb16b8cb4958ae963a6973aa428f5feee
[ "MIT" ]
2
2020-06-15T15:57:58.000Z
2021-12-11T20:39:21.000Z
312.329749
77,616
0.918155
[ [ [ "<!--NOTEBOOK_HEADER-->\n*This notebook contains material from [nbpages](https://jckantor.github.io/nbpages) by Jeffrey Kantor (jeff at nd.edu). The text is released under the\n[CC-BY-NC-ND-4.0 license](https://creativecommons.org/licenses/by-nc-nd/4.0/legalcode).\nThe code is released under the [MIT license](https://opensource.org/licenses/MIT).*", "_____no_output_____" ], [ "<!--NAVIGATION-->\n< [2.3 Heirarchical Tagging](https://jckantor.github.io/nbpages/02.03-Heirarchical-Tagging.html) | [Contents](toc.html) | [Tag Index](tag_index.html) | [2.5 Lint](https://jckantor.github.io/nbpages/02.05-Lint.html) ><p><a href=\"https://colab.research.google.com/github/jckantor/nbpages/blob/master/docs/02.04-Working-with-Data-and-Figures.ipynb\"> <img align=\"left\" src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open in Colab\" title=\"Open in Google Colaboratory\"></a><p><a href=\"https://jckantor.github.io/nbpages/02.04-Working-with-Data-and-Figures.ipynb\"> <img align=\"left\" src=\"https://img.shields.io/badge/Github-Download-blue.svg\" alt=\"Download\" title=\"Download Notebook\"></a>", "_____no_output_____" ] ], [ [ "# IMPORT DATA FILES USED BY THIS NOTEBOOK\nimport os, requests\n\nfile_links = [(\"data/Stock_Data.csv\", \"https://jckantor.github.io/nbpages/data/Stock_Data.csv\")]\n\n# This cell has been added by nbpages. Run this cell to download data files required for this notebook.\n\nfor filepath, fileurl in file_links:\n stem, filename = os.path.split(filepath)\n if stem:\n if not os.path.exists(stem):\n os.mkdir(stem)\n if not os.path.isfile(filepath):\n with open(filepath, 'wb') as f:\n response = requests.get(fileurl)\n f.write(response.content)\n", "_____no_output_____" ] ], [ [ "# 2.4 Working with Data and Figures", "_____no_output_____" ], [ "## 2.4.1 Importing data\n\nThe following cell reads the data file `Stock_Data.csv` from the `data` subdirectory. The name of this file will appear in the data index.", "_____no_output_____" ] ], [ [ "import pandas as pd\n\ndf = pd.read_csv(\"data/Stock_Data.csv\")\ndf.head()", "_____no_output_____" ] ], [ [ "## 2.4.2 Creating and saving figures\n\nThe following cell creates a figure `Stock_Data.png` in the `figures` subdirectory. 
The name of this file will appear in the figures index.", "_____no_output_____" ] ], [ [ "%matplotlib inline\nimport os\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nsns.set_style(\"darkgrid\")\nfig, ax = plt.subplots(2, 1, figsize=(8, 5))\n(df/df.iloc[0]).drop('VIX', axis=1).plot(ax=ax[0])\ndf['VIX'].plot(ax=ax[1])\nax[0].set_title('Normalized Indices')\nax[1].set_title('Volatility VIX')\nax[1].set_xlabel('Days')\nfig.tight_layout()\n\nif not os.path.exists(\"figures\"):\n os.mkdir(\"figures\")\nplt.savefig(\"figures/Stock_Data.png\")", "_____no_output_____" ] ], [ [ "<!--NAVIGATION-->\n< [2.3 Heirarchical Tagging](https://jckantor.github.io/nbpages/02.03-Heirarchical-Tagging.html) | [Contents](toc.html) | [Tag Index](tag_index.html) | [2.5 Lint](https://jckantor.github.io/nbpages/02.05-Lint.html) ><p><a href=\"https://colab.research.google.com/github/jckantor/nbpages/blob/master/docs/02.04-Working-with-Data-and-Figures.ipynb\"> <img align=\"left\" src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open in Colab\" title=\"Open in Google Colaboratory\"></a><p><a href=\"https://jckantor.github.io/nbpages/02.04-Working-with-Data-and-Figures.ipynb\"> <img align=\"left\" src=\"https://img.shields.io/badge/Github-Download-blue.svg\" alt=\"Download\" title=\"Download Notebook\"></a>", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
d019df1dc4e7ede43c002af6ef0c96410f3b182a
20,629
ipynb
Jupyter Notebook
homeworkdata/Homework_4_Paolo_Rivas_Legua.ipynb
paolorivas/homeworkfoundations
1d92575bfe9213562b84ab66a44d892c7dbb855a
[ "MIT" ]
null
null
null
homeworkdata/Homework_4_Paolo_Rivas_Legua.ipynb
paolorivas/homeworkfoundations
1d92575bfe9213562b84ab66a44d892c7dbb855a
[ "MIT" ]
null
null
null
homeworkdata/Homework_4_Paolo_Rivas_Legua.ipynb
paolorivas/homeworkfoundations
1d92575bfe9213562b84ab66a44d892c7dbb855a
[ "MIT" ]
null
null
null
28.892157
416
0.542634
[ [ [ "# Homework #4\n\nThese problem sets focus on list comprehensions, string operations and regular expressions.\n\n## Problem set #1: List slices and list comprehensions\n\nLet's start with some data. The following cell contains a string with comma-separated integers, assigned to a variable called `numbers_str`:", "_____no_output_____" ] ], [ [ "numbers_str = '496,258,332,550,506,699,7,985,171,581,436,804,736,528,65,855,68,279,721,120'", "_____no_output_____" ] ], [ [ "In the following cell, complete the code with an expression that evaluates to a list of integers derived from the raw numbers in `numbers_str`, assigning the value of this expression to a variable `numbers`. If you do everything correctly, executing the cell should produce the output `985` (*not* `'985'`).", "_____no_output_____" ] ], [ [ "values = numbers_str.split(\",\")\nnumbers = [int(i) for i in values]\n# numbers\nmax(numbers)", "_____no_output_____" ] ], [ [ "Great! We'll be using the `numbers` list you created above in the next few problems.\n\nIn the cell below, fill in the square brackets so that the expression evaluates to a list of the ten largest values in `numbers`. Expected output:\n\n [506, 528, 550, 581, 699, 721, 736, 804, 855, 985]\n \n(Hint: use a slice.)", "_____no_output_____" ] ], [ [ "#test\nprint(sorted(numbers))\n", "[7, 65, 68, 120, 171, 258, 279, 332, 436, 496, 506, 528, 550, 581, 699, 721, 736, 804, 855, 985]\n" ], [ "sorted(numbers)[10:]", "_____no_output_____" ] ], [ [ "In the cell below, write an expression that evaluates to a list of the integers from `numbers` that are evenly divisible by three, *sorted in numerical order*. Expected output:\n\n [120, 171, 258, 279, 528, 699, 804, 855]", "_____no_output_____" ] ], [ [ "[i for i in sorted(numbers) if i%3 == 0]", "_____no_output_____" ] ], [ [ "Okay. You're doing great. Now, in the cell below, write an expression that evaluates to a list of the square roots of all the integers in `numbers` that are less than 100. In order to do this, you'll need to use the `sqrt` function from the `math` module, which I've already imported for you. Expected output:\n\n [2.6457513110645907, 8.06225774829855, 8.246211251235321]\n \n(These outputs might vary slightly depending on your platform.)", "_____no_output_____" ] ], [ [ "import math\nfrom math import sqrt", "_____no_output_____" ], [ "[math.sqrt(i) for i in sorted(numbers) if i < 100]", "_____no_output_____" ] ], [ [ "## Problem set #2: Still more list comprehensions\n\nStill looking good. Let's do a few more with some different data. In the cell below, I've defined a data structure and assigned it to a variable `planets`. It's a list of dictionaries, with each dictionary describing the characteristics of a planet in the solar system. 
Make sure to run the cell before you proceed.", "_____no_output_____" ] ], [ [ "planets = [\n {'diameter': 0.382,\n 'mass': 0.06,\n 'moons': 0,\n 'name': 'Mercury',\n 'orbital_period': 0.24,\n 'rings': 'no',\n 'type': 'terrestrial'},\n {'diameter': 0.949,\n 'mass': 0.82,\n 'moons': 0,\n 'name': 'Venus',\n 'orbital_period': 0.62,\n 'rings': 'no',\n 'type': 'terrestrial'},\n {'diameter': 1.00,\n 'mass': 1.00,\n 'moons': 1,\n 'name': 'Earth',\n 'orbital_period': 1.00,\n 'rings': 'no',\n 'type': 'terrestrial'},\n {'diameter': 0.532,\n 'mass': 0.11,\n 'moons': 2,\n 'name': 'Mars',\n 'orbital_period': 1.88,\n 'rings': 'no',\n 'type': 'terrestrial'},\n {'diameter': 11.209,\n 'mass': 317.8,\n 'moons': 67,\n 'name': 'Jupiter',\n 'orbital_period': 11.86,\n 'rings': 'yes',\n 'type': 'gas giant'},\n {'diameter': 9.449,\n 'mass': 95.2,\n 'moons': 62,\n 'name': 'Saturn',\n 'orbital_period': 29.46,\n 'rings': 'yes',\n 'type': 'gas giant'},\n {'diameter': 4.007,\n 'mass': 14.6,\n 'moons': 27,\n 'name': 'Uranus',\n 'orbital_period': 84.01,\n 'rings': 'yes',\n 'type': 'ice giant'},\n {'diameter': 3.883,\n 'mass': 17.2,\n 'moons': 14,\n 'name': 'Neptune',\n 'orbital_period': 164.8,\n 'rings': 'yes',\n 'type': 'ice giant'}]", "_____no_output_____" ] ], [ [ "Now, in the cell below, write a list comprehension that evaluates to a list of names of the planets that have a radius greater than four earth radii. Expected output:\n\n ['Jupiter', 'Saturn', 'Uranus']", "_____no_output_____" ] ], [ [ "earth_diameter = planets[2]['diameter']", "_____no_output_____" ], [ "#earth radius is = half diameter. In a multiplication equation the diameter value can be use as a parameter.\n[i['name'] for i in planets if i['diameter'] >= earth_diameter*4]", "_____no_output_____" ] ], [ [ "In the cell below, write a single expression that evaluates to the sum of the mass of all planets in the solar system. Expected output: `446.79`", "_____no_output_____" ] ], [ [ "mass_list = []\nfor planet in planets:\n outcome = planet['mass']\n mass_list.append(outcome)\ntotal = sum(mass_list)\ntotal", "_____no_output_____" ] ], [ [ "Good work. Last one with the planets. Write an expression that evaluates to the names of the planets that have the word `giant` anywhere in the value for their `type` key. Expected output:\n\n ['Jupiter', 'Saturn', 'Uranus', 'Neptune']", "_____no_output_____" ] ], [ [ "[i['name'] for i in planets if 'giant' in i['type']]", "_____no_output_____" ] ], [ [ "*EXTREME BONUS ROUND*: Write an expression below that evaluates to a list of the names of the planets in ascending order by their number of moons. (The easiest way to do this involves using the [`key` parameter of the `sorted` function](https://docs.python.org/3.5/library/functions.html#sorted), which we haven't yet discussed in class! That's why this is an EXTREME BONUS question.) Expected output:\n\n ['Mercury', 'Venus', 'Earth', 'Mars', 'Neptune', 'Uranus', 'Saturn', 'Jupiter']", "_____no_output_____" ] ], [ [ "#Done in class", "_____no_output_____" ] ], [ [ "## Problem set #3: Regular expressions", "_____no_output_____" ], [ "In the following section, we're going to do a bit of digital humanities. (I guess this could also be journalism if you were... writing an investigative piece about... early 20th century American poetry?) We'll be working with the following text, Robert Frost's *The Road Not Taken*. 
Make sure to run the following cell before you proceed.", "_____no_output_____" ] ], [ [ "import re\npoem_lines = ['Two roads diverged in a yellow wood,',\n 'And sorry I could not travel both',\n 'And be one traveler, long I stood',\n 'And looked down one as far as I could',\n 'To where it bent in the undergrowth;',\n '',\n 'Then took the other, as just as fair,',\n 'And having perhaps the better claim,',\n 'Because it was grassy and wanted wear;',\n 'Though as for that the passing there',\n 'Had worn them really about the same,',\n '',\n 'And both that morning equally lay',\n 'In leaves no step had trodden black.',\n 'Oh, I kept the first for another day!',\n 'Yet knowing how way leads on to way,',\n 'I doubted if I should ever come back.',\n '',\n 'I shall be telling this with a sigh',\n 'Somewhere ages and ages hence:',\n 'Two roads diverged in a wood, and I---',\n 'I took the one less travelled by,',\n 'And that has made all the difference.']", "_____no_output_____" ] ], [ [ "In the cell above, I defined a variable `poem_lines` which has a list of lines in the poem, and `import`ed the `re` library.\n\nIn the cell below, write a list comprehension (using `re.search()`) that evaluates to a list of lines that contain two words next to each other (separated by a space) that have exactly four characters. (Hint: use the `\\b` anchor. Don't overthink the \"two words in a row\" requirement.)\n\nExpected result:\n\n```\n['Then took the other, as just as fair,',\n 'Had worn them really about the same,',\n 'And both that morning equally lay',\n 'I doubted if I should ever come back.',\n 'I shall be telling this with a sigh']\n```", "_____no_output_____" ] ], [ [ "[line for line in poem_lines if re.search(r\"\\b\\w{4}\\b\\s\\b\\w{4}\\b\", line)]", "_____no_output_____" ] ], [ [ "Good! Now, in the following cell, write a list comprehension that evaluates to a list of lines in the poem that end with a five-letter word, regardless of whether or not there is punctuation following the word at the end of the line. (Hint: Try using the `?` quantifier. Is there an existing character class, or a way to *write* a character class, that matches non-alphanumeric characters?) Expected output:\n\n```\n['And be one traveler, long I stood',\n 'And looked down one as far as I could',\n 'And having perhaps the better claim,',\n 'Though as for that the passing there',\n 'In leaves no step had trodden black.',\n 'Somewhere ages and ages hence:']\n```", "_____no_output_____" ] ], [ [ "[line for line in poem_lines if re.search(r\"(?:\\s\\w{5}\\b$|\\s\\w{5}\\b[.:;,]$)\", line)]", "_____no_output_____" ] ], [ [ "Okay, now a slightly trickier one. In the cell below, I've created a string `all_lines` which evaluates to the entire text of the poem in one string. Execute this cell.", "_____no_output_____" ] ], [ [ "all_lines = \" \".join(poem_lines)", "_____no_output_____" ] ], [ [ "Now, write an expression that evaluates to all of the words in the poem that follow the word 'I'. (The strings in the resulting list should *not* include the `I`.) Hint: Use `re.findall()` and grouping! Expected output:\n\n ['could', 'stood', 'could', 'kept', 'doubted', 'should', 'shall', 'took']", "_____no_output_____" ] ], [ [ "[item[2:] for item in (re.findall(r\"\\bI\\b\\s\\b[a-z]{1,}\", all_lines))]", "_____no_output_____" ] ], [ [ "Finally, something super tricky. Here's a list of strings that contains a restaurant menu. 
Your job is to wrangle this plain text, slightly-structured data into a list of dictionaries.", "_____no_output_____" ] ], [ [ "entrees = [\n    \"Yam, Rosemary and Chicken Bowl with Hot Sauce $10.95\",\n    \"Lavender and Pepperoni Sandwich $8.49\",\n    \"Water Chestnuts and Peas Power Lunch (with mayonnaise) $12.95 - v\",\n    \"Artichoke, Mustard Green and Arugula with Sesame Oil over noodles $9.95 - v\",\n    \"Flank Steak with Lentils And Tabasco Pepper With Sweet Chilli Sauce $19.95\",\n    \"Rutabaga And Cucumber Wrap $8.49 - v\"\n]", "_____no_output_____" ] ], [ [ "You'll need to pull out the name of the dish and the price of the dish. The `v` after the hyphen indicates that the dish is vegetarian---you'll need to include that information in your dictionary as well. I've included the basic framework; you just need to fill in the contents of the `for` loop.\n\nExpected output:\n\n```\n[{'name': 'Yam, Rosemary and Chicken Bowl with Hot Sauce ',\n  'price': 10.95,\n  'vegetarian': False},\n {'name': 'Lavender and Pepperoni Sandwich ',\n  'price': 8.49,\n  'vegetarian': False},\n {'name': 'Water Chestnuts and Peas Power Lunch (with mayonnaise) ',\n  'price': 12.95,\n  'vegetarian': True},\n {'name': 'Artichoke, Mustard Green and Arugula with Sesame Oil over noodles ',\n  'price': 9.95,\n  'vegetarian': True},\n {'name': 'Flank Steak with Lentils And Tabasco Pepper With Sweet Chilli Sauce ',\n  'price': 19.95,\n  'vegetarian': False},\n {'name': 'Rutabaga And Cucumber Wrap ', 'price': 8.49, 'vegetarian': True}]\n```", "_____no_output_____" ] ], [ [ "menu = []\nfor dish in entrees:\n    match = re.search(r\"^(.*) \\$(.*)\", dish) \n    vegetarian = re.search(r\"v$\", match.group(2))\n    price = re.search(r\"(?:\\d\\.\\d\\d|\\d\\d\\.\\d\\d)\", dish)\n    # an identity check is the idiomatic way to test for None\n    if vegetarian is None:\n        vegetarian = False\n    else:\n        vegetarian = True\n    if match:\n        dish = {\n            # convert the matched price string to a float so the result\n            # matches the expected output above\n            'name': match.group(1), 'price': float(price.group()), 'vegetarian': vegetarian\n        }\n        menu.append(dish)\nmenu", "_____no_output_____" ] ], [ [ "Great work! You are done. Go cavort in the sun, or whatever it is you students do when you're done with your homework.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
d019f6ff15f3d8162ba6446a1acfb2e3814ec143
440,448
ipynb
Jupyter Notebook
examples/prep_demo.ipynb
NeuroDataDesign/pyprep
f97e7ec54acb5b5c80dec89d0d37e005877a8258
[ "MIT" ]
1
2019-12-13T00:51:40.000Z
2019-12-13T00:51:40.000Z
examples/prep_demo.ipynb
NeuroDataDesign/pyprep
f97e7ec54acb5b5c80dec89d0d37e005877a8258
[ "MIT" ]
3
2019-11-26T14:46:25.000Z
2020-03-21T05:59:33.000Z
examples/prep_demo.ipynb
NeuroDataDesign/pyprep
f97e7ec54acb5b5c80dec89d0d37e005877a8258
[ "MIT" ]
null
null
null
1,262.028653
420,358
0.941301
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
d019fe270b86f1379833e192ec59a8b984199ef5
9,716
ipynb
Jupyter Notebook
census/catboost/gcp_ai_platform/notebooks/catboost_census_notebook.ipynb
jared-burns/machine_learning_examples
5ae0a5ba8e0395250fb4d40f77a5f03b5390c0bd
[ "MIT" ]
12
2020-10-12T15:57:29.000Z
2022-02-06T08:09:20.000Z
census/catboost/gcp_ai_platform/notebooks/catboost_census_notebook.ipynb
jared-burns/machine_learning_examples
5ae0a5ba8e0395250fb4d40f77a5f03b5390c0bd
[ "MIT" ]
1
2021-05-21T14:43:09.000Z
2021-05-21T14:43:09.000Z
census/catboost/gcp_ai_platform/notebooks/catboost_census_notebook.ipynb
jared-burns/machine_learning_examples
5ae0a5ba8e0395250fb4d40f77a5f03b5390c0bd
[ "MIT" ]
4
2020-11-20T08:12:20.000Z
2021-01-26T08:12:21.000Z
29.353474
165
0.583985
[ [ [ "Used https://github.com/GoogleCloudPlatform/cloudml-samples/blob/master/xgboost/notebooks/census_training/train.py as a starting point and adjusted to CatBoost", "_____no_output_____" ] ], [ [ "#Google Cloud Libraries\nfrom google.cloud import storage\n\n\n#System Libraries\nimport datetime\nimport subprocess\n\n#Data Libraries\nimport pandas as pd\nimport numpy as np\n\n#ML Libraries\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import accuracy_score\nfrom sklearn.preprocessing import LabelEncoder\nfrom sklearn.model_selection import train_test_split\nimport xgboost as xgb\nfrom catboost import CatBoostClassifier, Pool, cv\nfrom catboost import CatBoost, Pool\n\n", "_____no_output_____" ], [ "from catboost.utils import get_gpu_device_count\nprint('I see %i GPU devices' % get_gpu_device_count())", "I see 1 GPU devices\n" ], [ "# Fill in your Cloud Storage bucket name\nBUCKET_ID = \"mchrestkha-demo-env-ml-examples\"\n\ncensus_data_filename = 'adult.data.csv'\n\n# Public bucket holding the census data\nbucket = storage.Client().bucket('cloud-samples-data')\n\n# Path to the data inside the public bucket\ndata_dir = 'ai-platform/census/data/'\n\n# Download the data\nblob = bucket.blob(''.join([data_dir, census_data_filename]))\nblob.download_to_filename(census_data_filename)\n", "_____no_output_____" ], [ "# these are the column labels from the census data files\nCOLUMNS = (\n 'age',\n 'workclass',\n 'fnlwgt',\n 'education',\n 'education-num',\n 'marital-status',\n 'occupation',\n 'relationship',\n 'race',\n 'sex',\n 'capital-gain',\n 'capital-loss',\n 'hours-per-week',\n 'native-country',\n 'income-level'\n)\n# categorical columns contain data that need to be turned into numerical values before being used by XGBoost\nCATEGORICAL_COLUMNS = (\n 'workclass',\n 'education',\n 'marital-status',\n 'occupation',\n 'relationship',\n 'race',\n 'sex',\n 'native-country'\n)\n\n# Load the training census dataset\nwith open(census_data_filename, 'r') as train_data:\n raw_training_data = pd.read_csv(train_data, header=None, names=COLUMNS)\n# remove column we are trying to predict ('income-level') from features list\nX = raw_training_data.drop('income-level', axis=1)\n# create training labels list\n#train_labels = (raw_training_data['income-level'] == ' >50K')\ny = raw_training_data['income-level']", "_____no_output_____" ], [ "# Since the census data set has categorical features, we need to convert\n# them to numerical values.\n# convert data in categorical columns to numerical values\nX_enc=X\nencoders = {col:LabelEncoder() for col in CATEGORICAL_COLUMNS}\nfor col in CATEGORICAL_COLUMNS:\n X_enc[col] = encoders[col].fit_transform(X[col])\n ", "_____no_output_____" ], [ "y_enc=LabelEncoder().fit_transform(y)", "_____no_output_____" ], [ "X_train, X_validation, y_train, y_validation = train_test_split(X_enc, y_enc, train_size=0.75, random_state=42)", "_____no_output_____" ], [ "print(type(y))\nprint(type(y_enc))", "_____no_output_____" ], [ "%%time\n\n#model = CatBoost({'iterations':50})\nmodel=CatBoostClassifier(\n od_type='Iter'\n#iterations=5000,\n#custom_loss=['Accuracy']\n)\nmodel.fit(\n X_train,y_train,eval_set=(X_validation, y_validation),\n\n verbose=50)\n\n# # load data into DMatrix object\n# dtrain = xgb.DMatrix(train_features, train_labels)\n# # train model\n# bst = xgb.train({}, dtrain, 20)", "Learning rate set to 0.069772\n0:\tlearn: 0.6282687\ttest: 0.6273059\tbest: 0.6273059 (0)\ttotal: 11.3ms\tremaining: 11.2s\n50:\tlearn: 0.3021165\ttest: 
0.3008721\tbest: 0.3008721 (50)\ttotal: 530ms\tremaining: 9.87s\n100:\tlearn: 0.2857407\ttest: 0.2886646\tbest: 0.2886646 (100)\ttotal: 1.03s\tremaining: 9.14s\n150:\tlearn: 0.2748276\ttest: 0.2825841\tbest: 0.2825841 (150)\ttotal: 1.53s\tremaining: 8.59s\n200:\tlearn: 0.2660846\ttest: 0.2787806\tbest: 0.2787806 (200)\ttotal: 2.02s\tremaining: 8.04s\n250:\tlearn: 0.2594067\ttest: 0.2771832\tbest: 0.2771832 (250)\ttotal: 2.52s\tremaining: 7.52s\nStopped by overfitting detector (20 iterations wait)\n\nbestTest = 0.2770424728\nbestIteration = 257\n\nShrink model to first 258 iterations.\nCPU times: user 9.63 s, sys: 788 ms, total: 10.4 s\nWall time: 2.85 s\n" ], [ "# Export the model to a file\nfname = 'catboost_census_model.onnx'\nmodel.save_model(fname, format='onnx')\n\n# Upload the model to GCS\nbucket = storage.Client().bucket(BUCKET_ID)\nblob = bucket.blob('{}/{}'.format(\n datetime.datetime.now().strftime('census/catboost_model_dir/catboost_census_%Y%m%d_%H%M%S'),\n fname))\nblob.upload_from_filename(fname)", "_____no_output_____" ], [ "!gsutil ls gs://$BUCKET_ID/census/*", "gs://mchrestkha-demo-env-ml-examples/census/catboost_census_20200525_212707/:\ngs://mchrestkha-demo-env-ml-examples/census/catboost_census_20200525_212707/<catboost.core.CatBoostClassifier object at 0x7fdb929aa6d0>\n\ngs://mchrestkha-demo-env-ml-examples/census/catboost_census_20200525_212852/:\ngs://mchrestkha-demo-env-ml-examples/census/catboost_census_20200525_212852/<catboost.core.CatBoostClassifier object at 0x7fdb929aa6d0>\n\ngs://mchrestkha-demo-env-ml-examples/census/catboost_census_20200525_213004/:\ngs://mchrestkha-demo-env-ml-examples/census/catboost_census_20200525_213004/<catboost.core.CatBoostClassifier object at 0x7fdb929aa6d0>\n\ngs://mchrestkha-demo-env-ml-examples/census/xgboost_census_20200525_020526/:\ngs://mchrestkha-demo-env-ml-examples/census/xgboost_census_20200525_020526/model.bst\n\ngs://mchrestkha-demo-env-ml-examples/census/xgboost_census_20200525_021023/:\ngs://mchrestkha-demo-env-ml-examples/census/xgboost_census_20200525_021023/model.bst\n\ngs://mchrestkha-demo-env-ml-examples/census/xgboost_census_20200525_023122/:\ngs://mchrestkha-demo-env-ml-examples/census/xgboost_census_20200525_023122/model.bst\n\ngs://mchrestkha-demo-env-ml-examples/census/xgboost_job_dir/:\ngs://mchrestkha-demo-env-ml-examples/census/xgboost_job_dir/packages/\n" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
d01a0688fb2b0a175473c5c835b234bcd12ce6dd
122,784
ipynb
Jupyter Notebook
week-4/Sentiment Analysis & Popularity Score.ipynb
Egnite-git/ds-python-du-ankithsavio
dfae33c92e44877b1a4e57aa029b2e76d204a624
[ "MIT" ]
null
null
null
week-4/Sentiment Analysis & Popularity Score.ipynb
Egnite-git/ds-python-du-ankithsavio
dfae33c92e44877b1a4e57aa029b2e76d204a624
[ "MIT" ]
10
2021-11-15T15:05:33.000Z
2022-01-17T13:49:43.000Z
week-4/Sentiment Analysis & Popularity Score.ipynb
Egnite-git/ds-python-du-ankithsavio
dfae33c92e44877b1a4e57aa029b2e76d204a624
[ "MIT" ]
1
2021-11-15T16:46:14.000Z
2021-11-15T16:46:14.000Z
120.1409
88,268
0.82356
[ [ [ "import requests\nimport csv\nimport pandas as pd\nimport feedparser\nimport re", "_____no_output_____" ], [ "file = open(\"newfeed3.csv\",\"w\",encoding=\"utf-8\")\nwriter = csv.writer(file)\nwriter.writerow([\"Title\",\"Description\",\"Link\",\"Year\",\"Month\"])\nfeed = open(\"FinalUrl.txt\",\"r\")\nurls = feed.read()\nurls = urls.split(\"\\n\")\ndf = pd.DataFrame(columns=[\"Title\",\"Description\",\"Link\",\"Year\",\"Month\"])\nitem_dicts = {}\nfor url in urls:\n\n try: \n f = feedparser.parse(url)\n except Exception as e:\n print('Could not parse the xml: ', url)\n print(e)\n for item in f.entries:\n r = re.compile(r\"<[^>]*>\")\n try:\n items_dicts = {'Title':item.title,'Description':r.sub(r\"\",item.summary),'Link':item.link,'Year':item.published_parsed[0],'Month':item.published_parsed[1]}\n except:\n pass\n \n f = csv.DictWriter(file, items_dicts.keys())\n f.writerow(items_dicts)", "_____no_output_____" ], [ "import nltk\nfrom nltk.tokenize import sent_tokenize, word_tokenize\nfrom nltk.corpus import stopwords\nfrom nltk.stem.porter import PorterStemmer\nfrom nltk.stem.wordnet import WordNetLemmatizer\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nimport yake", "_____no_output_____" ], [ "df = pd.read_csv(\"newfeed3.csv\")", "_____no_output_____" ], [ "df.dropna(inplace=True)", "_____no_output_____" ], [ "df.isna().sum()", "_____no_output_____" ], [ "df", "_____no_output_____" ], [ "desc_1 = []\nfor text in df[\"Description\"]:\n desc_1.append(re.sub(\"\\s+\",\" \",text).lower())", "_____no_output_____" ], [ "desc_2 = []\nfor text in desc_1:\n desc_2.append(re.sub(\"\\[.+\\]\",\"\",text))", "_____no_output_____" ], [ "desc_3 = []\nfor text in desc_2:\n desc_3.append(re.sub(\"&.+;\",\"\",text))", "_____no_output_____" ], [ "desc_4 = []\nfor text in desc_3:\n desc_4.append(re.sub(r'http\\S+', '',text))", "_____no_output_____" ], [ "clean_desc = []\nfor text in desc_4:\n clean_desc.append(re.sub(r'[^\\w\\s]',\"\",text))", "_____no_output_____" ], [ "stop_words=set(stopwords.words(\"english\"))\nwnet = WordNetLemmatizer()\nport = PorterStemmer()", "_____no_output_____" ], [ "stop_words_2 = []\ncondition = ['not','nor','no']\nfor words in stop_words:\n if words not in condition:\n stop_words_2.append(words)", "_____no_output_____" ], [ "def lemmatize_text(text):\n words = word_tokenize(text)\n words_2 = []\n lemm_2 = \"\"\n for word in words:\n if word not in stop_words_2:\n words_2.append(word)\n for word in words_2:\n lemm = wnet.lemmatize(word)\n lemm_2+=lemm+\" \"\n return lemm_2", "_____no_output_____" ], [ "#lemm_desc = []\nlemm_desc = \"\"\nfor text in clean_desc:\n #lemm_desc.append(lemmatize_text(text))\n lemm_desc+=lemmatize_text(text)+\" \"\n", "_____no_output_____" ], [ "language = \"en\"\nmax_ngram_size = 2\ndeduplication_thresold = 0.9\ndeduplication_algo = 'seqm'\nwindowSize = 1\nnumOfKeywords = 100\n\ncustom_kw_extractor = yake.KeywordExtractor(lan=language, n=max_ngram_size, dedupLim=deduplication_thresold, dedupFunc=deduplication_algo, windowsSize=windowSize, top=numOfKeywords, features=None)\nkeywords = custom_kw_extractor.extract_keywords(lemm_desc)\n\nfor kw in keywords:\n print(kw)", "('data science', 2.9713755192469733e-07)\n('hugging face', 4.625689608042039e-07)\n('amazon sagemaker', 4.694923997463719e-07)\n('face transformer', 6.904301471841336e-07)\n('transformer model', 7.956300432632581e-07)\n('machine learning', 1.1174496048151354e-06)\n('sagemaker appeared', 1.263711690829689e-06)\n('part data', 1.5616335023303966e-06)\n('data 
blogger', 1.684777286791755e-06)\n('technology amazon', 2.5750457483443353e-06)\n('sagemaker train', 2.870652417167545e-06)\n('transformer introduction', 2.8842013683046305e-06)\n('introduction hugging', 2.89850314697253e-06)\n('face popular', 2.9275702921950886e-06)\n('sagemaker offer', 3.042394194038166e-06)\n('cloud hugging', 3.070706958849251e-06)\n('science blogathon', 3.0906445028987167e-06)\n('deploy hugging', 3.4137728703296698e-06)\n('huggingface transformer', 3.4700599790720027e-06)\n('offer post', 3.7492012391375e-06)\n('data scientist', 3.799616412194829e-06)\n('model prerequisite', 3.8761129958551525e-06)\n('analytics vidhya', 4.111872191998404e-06)\n('article published', 4.212685664443665e-06)\n('post huggingface', 4.297454249871627e-06)\n('published part', 4.372229402927656e-06)\n('vidhya article', 4.765332160791916e-06)\n('source company', 4.806265374052609e-06)\n('open source', 4.975187695082899e-06)\n('nlp technology', 5.122993555996601e-06)\n('objective learn', 5.3772452496109395e-06)\n('neural network', 5.42230182779257e-06)\n('post data', 5.47677223080647e-06)\n('big data', 5.7009144808474115e-06)\n('company providing', 5.75768224209091e-06)\n('popular open', 6.080070929254688e-06)\n('blog post', 6.237941341599078e-06)\n('basic knowledge', 6.3889920933564545e-06)\n('stateoftheart nlp', 6.713184357545781e-06)\n('train deploy', 6.910478987636064e-06)\n('prerequisite basic', 6.996614299781859e-06)\n('deep learning', 7.117136848833436e-06)\n('aws cloud', 7.30883382029901e-06)\n('data data', 7.338752121451262e-06)\n('knowledge aws', 7.471820765843747e-06)\n('blogathon objective', 7.767996959177206e-06)\n('science data', 8.171282677929175e-06)\n('providing stateoftheart', 8.179517983085382e-06)\n('data', 8.740476756816503e-06)\n('appeared john', 8.9459066991494e-06)\n('continue reading', 9.794306931785167e-06)\n('data post', 1.0268947932762132e-05)\n('learning model', 1.0420656477000236e-05)\n('data analytics', 1.2409197175388656e-05)\n('learning data', 1.305629037558085e-05)\n('analytics world', 1.3886872833415145e-05)\n('computer science', 1.4674358918602455e-05)\n('science blog', 1.5883460530600248e-05)\n('center data', 1.68037715756688e-05)\n('learn company', 1.7085983905606233e-05)\n('data appeared', 1.9800271152495334e-05)\n('predictive analytics', 2.067357418967666e-05)\n('science machine', 2.1135838512423138e-05)\n('data engineering', 2.232970243221154e-05)\n('learning algorithm', 2.4138373866751717e-05)\n('data technology', 2.437602386752916e-05)\n('blog data', 2.4963557119345284e-05)\n('science appeared', 2.5196026169463295e-05)\n('data management', 2.581466422219447e-05)\n('post', 2.6048253586356437e-05)\n('appeared', 2.708296413566856e-05)\n('data analysis', 3.0383921945123614e-05)\n('science', 3.26887756765241e-05)\n('data source', 3.348726601313772e-05)\n('introduction data', 3.406028195599293e-05)\n('post python', 3.4574481185569944e-05)\n('post face', 3.477117836914777e-05)\n('blogger blog', 3.479963285312422e-05)\n('post learn', 3.4956150935180236e-05)\n('learning problem', 3.550145498468347e-05)\n('case data', 3.667045767827049e-05)\n('youtube data', 3.747467187389407e-05)\n('report learn', 3.7979956093768775e-05)\n('predictive model', 3.826098309788852e-05)\n('data exchange', 3.934603801910769e-05)\n('model', 3.9424690532918004e-05)\n('learning', 4.0033670351118115e-05)\n('model post', 4.0069337793754786e-05)\n('training data', 4.1273220578404286e-05)\n('world data', 4.131547667221566e-05)\n('data mining', 4.138105900387551e-05)\n('learning post', 
4.1759231024652036e-05)\n('john cook', 4.1786415220674964e-05)\n('data warehouse', 4.313470119535097e-05)\n('science research', 4.326625546224153e-05)\n('research statement', 4.349414150720075e-05)\n('face', 4.439256245845208e-05)\n('learning rate', 4.521413380121897e-05)\n('science project', 4.5286663307445984e-05)\n('learning appeared', 4.5291324250854564e-05)\n" ], [ "kw = pd.DataFrame(keywords,columns=['keywords','tf idf'])", "_____no_output_____" ], [ "kw", "_____no_output_____" ], [ "import matplotlib.pyplot as plt\n%matplotlib inline", "_____no_output_____" ], [ "fig ,ax = plt.subplots(figsize=(20,10))\nax.bar(kw['keywords'],kw['tf idf'])\nplt.xticks(rotation='vertical')\nplt.xlabel('keywords')\nplt.ylabel('tf idf');", "_____no_output_____" ], [ "from nltk.sentiment.vader import SentimentIntensityAnalyzer", "_____no_output_____" ], [ "nltk.download('vader_lexicon')", "[nltk_data] Downloading package vader_lexicon to\n[nltk_data] C:\\Users\\ankit\\AppData\\Roaming\\nltk_data...\n[nltk_data] Package vader_lexicon is already up-to-date!\n" ], [ "def sentiment_analyse(text):\n score = SentimentIntensityAnalyzer().polarity_scores(text)\n pos = 1000 * score['pos']\n return pos", "_____no_output_____" ], [ "lemm_desc2 = []\nfor text in clean_desc:\n lemm_desc2.append(lemmatize_text(text))", "_____no_output_____" ], [ "p_score = []\nfor text in lemm_desc2:\n score = sentiment_analyse(text)\n p_score.append(score)", "_____no_output_____" ], [ "df[\"Popularity Score\"] = p_score", "_____no_output_____" ], [ "df", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
d01a0a04e785f44595f02a845fe149796f994a2c
22,326
ipynb
Jupyter Notebook
MS-malware-suspectibility-detection/6-final-model/FinalModel.ipynb
Semiu/malware-detector
3701c4bb7b4275a03f6d1c48dfab7303422b8d97
[ "MIT" ]
2
2021-09-06T10:04:22.000Z
2021-09-06T17:49:45.000Z
MS-malware-suspectibility-detection/6-final-model/FinalModel.ipynb
Semiu/malware-detector
3701c4bb7b4275a03f6d1c48dfab7303422b8d97
[ "MIT" ]
null
null
null
MS-malware-suspectibility-detection/6-final-model/FinalModel.ipynb
Semiu/malware-detector
3701c4bb7b4275a03f6d1c48dfab7303422b8d97
[ "MIT" ]
null
null
null
33.725076
195
0.496282
[ [ [ "Final models with hyperparameters tuned for Logistics Regression and XGBoost with selected features.", "_____no_output_____" ] ], [ [ "#Import the libraries \nimport pandas as pd\nimport numpy as np\nfrom tqdm import tqdm\nfrom sklearn import linear_model, metrics, preprocessing, model_selection\nfrom sklearn.preprocessing import StandardScaler\nimport xgboost as xgb", "_____no_output_____" ], [ "#Load the data\nmodeling_dataset = pd.read_csv('/content/drive/MyDrive/prediction/frac_cleaned_fod_data.csv', low_memory = False)", "_____no_output_____" ], [ "#All columns - except 'HasDetections', 'kfold', and 'MachineIdentifier'\ntrain_features = [tf for tf in modeling_dataset.columns if tf not in ('HasDetections', 'kfold', 'MachineIdentifier')]", "_____no_output_____" ], [ "#The features selected based on the feature selection method earlier employed\ntrain_features_after_selection = ['AVProductStatesIdentifier', 'Processor','AvSigVersion', 'Census_TotalPhysicalRAM', 'Census_InternalPrimaryDiagonalDisplaySizeInInches', \n 'Census_IsVirtualDevice', 'Census_PrimaryDiskTotalCapacity', 'Wdft_IsGamer', 'Census_IsAlwaysOnAlwaysConnectedCapable', 'EngineVersion',\n 'Census_ProcessorCoreCount', 'Census_OSEdition', 'Census_OSInstallTypeName', 'Census_OSSkuName', 'AppVersion', 'OsBuildLab', 'OsSuite',\n 'Firewall', 'IsProtected', 'Census_IsTouchEnabled', 'Census_ActivationChannel', 'LocaleEnglishNameIdentifier','Census_SystemVolumeTotalCapacity',\n 'Census_InternalPrimaryDisplayResolutionHorizontal','Census_HasOpticalDiskDrive', 'OsBuild', 'Census_InternalPrimaryDisplayResolutionVertical',\n 'CountryIdentifier', 'Census_MDC2FormFactor', 'GeoNameIdentifier', 'Census_PowerPlatformRoleName', 'Census_OSWUAutoUpdateOptionsName', 'SkuEdition',\n 'Census_OSVersion', 'Census_GenuineStateName', 'Census_OSBuildRevision', 'Platform', 'Census_ChassisTypeName', 'Census_FlightRing', \n 'Census_PrimaryDiskTypeName', 'Census_OSBranch', 'Census_IsSecureBootEnabled', 'OsPlatformSubRelease']", "_____no_output_____" ], [ "#Define the categorical features of the data\ncategorical_features = ['ProductName',\n 'EngineVersion',\n 'AppVersion',\n 'AvSigVersion',\n 'Platform',\n 'Processor',\n 'OsVer',\n 'OsPlatformSubRelease',\n 'OsBuildLab',\n 'SkuEdition',\n 'Census_MDC2FormFactor',\n 'Census_DeviceFamily',\n 'Census_PrimaryDiskTypeName',\n 'Census_ChassisTypeName',\n 'Census_PowerPlatformRoleName',\n 'Census_OSVersion',\n 'Census_OSArchitecture',\n 'Census_OSBranch',\n 'Census_OSEdition',\n 'Census_OSSkuName',\n 'Census_OSInstallTypeName',\n 'Census_OSWUAutoUpdateOptionsName',\n 'Census_GenuineStateName',\n 'Census_ActivationChannel',\n 'Census_FlightRing']", "_____no_output_____" ], [ "#XGBoost\n\"\"\"\nBest parameters set:\n\talpha: 1.0\n\tcolsample_bytree: 0.6\n\teta: 0.05\n\tgamma: 0.1\n\tlamda: 1.0\n\tmax_depth: 9\n\tmin_child_weight: 5\n\tsubsample: 0.7\n\"\"\"", "_____no_output_____" ], [ "#XGBoost \ndef opt_run_xgboost(fold):\n for col in train_features:\n if col in categorical_features:\n #Initialize the Label Encoder\n lbl = preprocessing.LabelEncoder()\n #Fit on the categorical features\n lbl.fit(modeling_dataset[col])\n #Transform \n modeling_dataset.loc[:,col] = lbl.transform(modeling_dataset[col])\n \n #Get training and validation data using folds\n modeling_datasets_train = modeling_dataset[modeling_dataset.kfold != fold].reset_index(drop=True)\n modeling_datasets_valid = modeling_dataset[modeling_dataset.kfold == fold].reset_index(drop=True)\n \n #Get train data\n X_train = 
modeling_datasets_train[train_features_after_selection].values\n #Get validation data\n X_valid = modeling_datasets_valid[train_features_after_selection].values\n\n #Initialize XGboost model\n xgb_model = xgb.XGBClassifier(\n \talpha= 1.0,\n colsample_bytree= 0.6,\n eta= 0.05,\n gamma= 0.1,\n lamda= 1.0,\n max_depth= 9,\n min_child_weight= 5,\n subsample= 0.7,\n n_jobs=-1)\n \n #Fit the model on training data\n xgb_model.fit(X_train, modeling_datasets_train.HasDetections.values)\n\n #Predict on validation\n valid_preds = xgb_model.predict_proba(X_valid)[:,1]\n valid_preds_pc = xgb_model.predict(X_valid)\n\n #Get the ROC AUC score\n auc = metrics.roc_auc_score(modeling_datasets_valid.HasDetections.values, valid_preds)\n\n #Get the precision score\n pre = metrics.precision_score(modeling_datasets_valid.HasDetections.values, valid_preds_pc, average='binary')\n\n #Get the Recall score\n rc = metrics.recall_score(modeling_datasets_valid.HasDetections.values, valid_preds_pc, average='binary')\n\n return auc, pre, rc", "_____no_output_____" ], [ "#LR\n\"\"\"\n'penalty': 'l2', \n'C': 49.71967742639108, \n'solver': 'lbfgs'\nmax_iter: 300\n\"\"\"", "_____no_output_____" ], [ "#Function for Logistic Regression Classification\ndef opt_run_lr(fold):\n #Get training and validation data using folds\n cleaned_fold_datasets_train = modeling_dataset[modeling_dataset.kfold != fold].reset_index(drop=True)\n cleaned_fold_datasets_valid = modeling_dataset[modeling_dataset.kfold == fold].reset_index(drop=True)\n \n #Initialize OneHotEncoder from scikit-learn, and fit it on training and validation features\n ohe = preprocessing.OneHotEncoder()\n full_data = pd.concat(\n [cleaned_fold_datasets_train[train_features_after_selection],cleaned_fold_datasets_valid[train_features_after_selection]],\n axis = 0\n )\n ohe.fit(full_data[train_features_after_selection])\n \n #transform the training and validation data\n x_train = ohe.transform(cleaned_fold_datasets_train[train_features_after_selection])\n x_valid = ohe.transform(cleaned_fold_datasets_valid[train_features_after_selection])\n\n #Initialize the Logistic Regression Model\n lr_model = linear_model.LogisticRegression(\n penalty= 'l2',\n C = 49.71967742639108,\n solver= 'lbfgs',\n max_iter= 300,\n n_jobs=-1\n )\n\n #Fit model on training data\n lr_model.fit(x_train, cleaned_fold_datasets_train.HasDetections.values)\n\n #Predict on the validation data using the probability for the AUC\n valid_preds = lr_model.predict_proba(x_valid)[:, 1]\n \n #For precision and Recall\n valid_preds_pc = lr_model.predict(x_valid)\n\n #Get the ROC AUC score\n auc = metrics.roc_auc_score(cleaned_fold_datasets_valid.HasDetections.values, valid_preds)\n\n #Get the precision score\n pre = metrics.precision_score(cleaned_fold_datasets_valid.HasDetections.values, valid_preds_pc, average='binary')\n\n #Get the Recall score \n rc = metrics.recall_score(cleaned_fold_datasets_valid.HasDetections.values, valid_preds_pc, average='binary')\n\n return auc, pre, rc", "_____no_output_____" ], [ "#A list to hold the values of the XGB performance metrics\nxg = []\nfor fold in tqdm(range(10)):\n xg.append(opt_run_xgboost(fold))", "100%|██████████| 10/10 [40:54<00:00, 245.49s/it]\n" ], [ "#Run the Logistic regression model for all folds and hold their values\nlr = []\nfor fold in tqdm(range(10)):\n lr.append(opt_run_lr(fold))", "100%|██████████| 10/10 [14:20<00:00, 86.10s/it]\n" ], [ "xgb_auc = []\nxgb_pre = []\nxgb_rc = []\n\nlr_auc = []\nlr_pre = []\nlr_rc = []", "_____no_output_____" ], [ "#Loop to get 
each of the performance metrics for average computation\nfor i in lr:\n    lr_auc.append(i[0])\n    lr_pre.append(i[1])\n    lr_rc.append(i[2])", "_____no_output_____" ], [ "for j in xg:\n    # fix: use 'j' (the current triple) rather than the leftover 'i' from the LR\n    # loop above, which silently repeated the last LR fold for every XGBoost entry\n    # (see the zip-based sketch after this record)\n    xgb_auc.append(j[0])\n    xgb_pre.append(j[1])\n    xgb_rc.append(j[2])", "_____no_output_____" ], [ "#Dictionary to hold the basic model performance data\nfinal_model_performance = {\"logistic_regression\": {\"auc\":\"\", \"precision\":\"\", \"recall\":\"\"},\n                           \"xgb\": {\"auc\":\"\",\"precision\":\"\",\"recall\":\"\"}\n                          }", "_____no_output_____" ], [ "#Calculate average of each of the lists of performance metrics and update the dictionary\nfinal_model_performance['logistic_regression'].update({'auc':sum(lr_auc)/len(lr_auc)})\nfinal_model_performance['xgb'].update({'auc':sum(xgb_auc)/len(xgb_auc)})\n\nfinal_model_performance['logistic_regression'].update({'precision':sum(lr_pre)/len(lr_pre)})\nfinal_model_performance['xgb'].update({'precision':sum(xgb_pre)/len(xgb_pre)})\n\nfinal_model_performance['logistic_regression'].update({'recall':sum(lr_rc)/len(lr_rc)})\nfinal_model_performance['xgb'].update({'recall':sum(xgb_rc)/len(xgb_rc)})\n", "_____no_output_____" ], [ "final_model_performance", "_____no_output_____" ], [ "#LR\n\"\"\"\n'penalty': 'l2', \n'C': 49.71967742639108, \n'solver': 'lbfgs'\nmax_iter: 100\n\"\"\"", "_____no_output_____" ], [ "#Function for Logistic Regression Classification - max_iter = 100\ndef opt_run_lr100(fold):\n    #Get training and validation data using folds\n    cleaned_fold_datasets_train = modeling_dataset[modeling_dataset.kfold != fold].reset_index(drop=True)\n    cleaned_fold_datasets_valid = modeling_dataset[modeling_dataset.kfold == fold].reset_index(drop=True)\n    \n    #Initialize OneHotEncoder from scikit-learn, and fit it on training and validation features\n    ohe = preprocessing.OneHotEncoder()\n    full_data = pd.concat(\n        [cleaned_fold_datasets_train[train_features_after_selection],cleaned_fold_datasets_valid[train_features_after_selection]],\n        axis = 0\n    )\n    ohe.fit(full_data[train_features_after_selection])\n    \n    #transform the training and validation data\n    x_train = ohe.transform(cleaned_fold_datasets_train[train_features_after_selection])\n    x_valid = ohe.transform(cleaned_fold_datasets_valid[train_features_after_selection])\n\n    #Initialize the Logistic Regression Model\n    lr_model = linear_model.LogisticRegression(\n        penalty= 'l2',\n        C = 49.71967742639108,\n        solver= 'lbfgs',\n        max_iter= 100,\n        n_jobs=-1\n    )\n\n    #Fit model on training data\n    lr_model.fit(x_train, cleaned_fold_datasets_train.HasDetections.values)\n\n    #Predict on the validation data using the probability for the AUC\n    valid_preds = lr_model.predict_proba(x_valid)[:, 1]\n    \n    #For precision and Recall\n    valid_preds_pc = lr_model.predict(x_valid)\n\n    #Get the ROC AUC score\n    auc = metrics.roc_auc_score(cleaned_fold_datasets_valid.HasDetections.values, valid_preds)\n\n    #Get the precision score\n    pre = metrics.precision_score(cleaned_fold_datasets_valid.HasDetections.values, valid_preds_pc, average='binary')\n\n    #Get the Recall score \n    rc = metrics.recall_score(cleaned_fold_datasets_valid.HasDetections.values, valid_preds_pc, average='binary')\n\n    return auc, pre, rc", "_____no_output_____" ], [ "#Run the Logistic regression model for all folds and hold their values\nlr100 = []\nfor fold in tqdm(range(10)):\n    lr100.append(opt_run_lr100(fold))", "100%|██████████| 10/10 [06:46<00:00, 40.68s/it]\n" ], [ "lr100_auc = []\nlr100_pre = []\nlr100_rc = []\n\nfor k in lr100:\n    lr100_auc.append(k[0])\n    lr100_pre.append(k[1])\n    
lr100_rc.append(k[2])", "_____no_output_____" ], [ "sum(lr100_auc)/len(lr100_auc) ", "_____no_output_____" ], [ "sum(lr100_pre)/len(lr100_pre)", "_____no_output_____" ], [ "sum(lr100_rc)/len(lr100_rc)", "_____no_output_____" ], [ "\"\"\"\n{'logistic_regression': {'auc': 0.660819451656712,\n 'precision': 0.6069858170181643,\n 'recall': 0.6646704904969867},\n 'xgb': {'auc': 0.6583717792973377,\n 'precision': 0.6042291042291044,\n 'recall': 0.6542422535211267}}\n\"\"\"", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
d01a20f5200fbcf8360c0ca639d1266a0e82188a
10,038
ipynb
Jupyter Notebook
notebooks/deal_with_errors.ipynb
anoukvlug/tutorials
d9c24b573f7fac5b7e407f0b5c5bad4a7c224183
[ "BSD-3-Clause" ]
null
null
null
notebooks/deal_with_errors.ipynb
anoukvlug/tutorials
d9c24b573f7fac5b7e407f0b5c5bad4a7c224183
[ "BSD-3-Clause" ]
null
null
null
notebooks/deal_with_errors.ipynb
anoukvlug/tutorials
d9c24b573f7fac5b7e407f0b5c5bad4a7c224183
[ "BSD-3-Clause" ]
null
null
null
31.968153
496
0.613867
[ [ [ "# Dealing with errors after a run", "_____no_output_____" ], [ "In this example, we run the model on a list of three glaciers:\ntwo of them will end with errors: one because it already failed at\npreprocessing (i.e. prior to this run), and one during the run. We show how to analyze theses erros and solve (some) of them, as described in the OGGM documentation under [troubleshooting](https://docs.oggm.org/en/latest/faq.html?highlight=border#troubleshooting).", "_____no_output_____" ], [ "## Run with `cfg.PARAMS['continue_on_error'] = True`", "_____no_output_____" ] ], [ [ "# Locals\nimport oggm.cfg as cfg\nfrom oggm import utils, workflow, tasks\n\n# Libs\nimport os\nimport xarray as xr\nimport pandas as pd\n\n# Initialize OGGM and set up the default run parameters\ncfg.initialize(logging_level='WARNING')\n\n# Here we override some of the default parameters\n# How many grid points around the glacier?\n# We make it small because we want the model to error because\n# of flowing out of the domain\ncfg.PARAMS['border'] = 80\n\n# This is useful since we have three glaciers\ncfg.PARAMS['use_multiprocessing'] = True\n\n# This is the important bit!\n# We tell OGGM to continue despite of errors\ncfg.PARAMS['continue_on_error'] = True\n\n# Local working directory (where OGGM will write its output)\nWORKING_DIR = utils.gettempdir('OGGM_Errors')\nutils.mkdir(WORKING_DIR, reset=True)\ncfg.PATHS['working_dir'] = WORKING_DIR\n\nrgi_ids = ['RGI60-11.00897', 'RGI60-11.01450', 'RGI60-11.03295']\n\n# Go - get the pre-processed glacier directories\ngdirs = workflow.init_glacier_directories(rgi_ids, from_prepro_level=4)\n\n# We can step directly to the experiment!\n# Random climate representative for the recent climate (1985-2015)\n# with a negative bias added to the random temperature series\nworkflow.execute_entity_task(tasks.run_random_climate, gdirs,\n nyears=150, seed=0,\n temperature_bias=-1)", "_____no_output_____" ] ], [ [ "## Error diagnostics", "_____no_output_____" ] ], [ [ "# Write the compiled output\nutils.compile_glacier_statistics(gdirs); # saved as glacier_statistics.csv in the WORKING_DIR folder\nutils.compile_run_output(gdirs); # saved as run_output.nc in the WORKING_DIR folder", "_____no_output_____" ], [ "# Read it\nwith xr.open_dataset(os.path.join(WORKING_DIR, 'run_output.nc')) as ds:\n ds = ds.load()\ndf_stats = pd.read_csv(os.path.join(WORKING_DIR, 'glacier_statistics.csv'), index_col=0)", "_____no_output_____" ], [ "# all possible statistics about the glaciers\ndf_stats", "_____no_output_____" ] ], [ [ "- in the column *error_task*, we can see whether an error occurred, and if yes during which task\n- *error_msg* describes the actual error message ", "_____no_output_____" ] ], [ [ "df_stats[['error_task', 'error_msg']]", "_____no_output_____" ] ], [ [ "We can also check which glacier failed at which task by using [compile_task_log]('https://docs.oggm.org/en/latest/generated/oggm.utils.compile_task_log.html#oggm.utils.compile_task_log').", "_____no_output_____" ] ], [ [ "# also saved as task_log.csv in the WORKING_DIR folder - \"append=False\" replaces the existing one\nutils.compile_task_log(gdirs, task_names=['glacier_masks', 'compute_centerlines', 'flowline_model_run'], append=False)", "_____no_output_____" ] ], [ [ "## Error solving", "_____no_output_____" ], [ "### RuntimeError: `Glacier exceeds domain boundaries, at year: 98.08333333333333`", "_____no_output_____" ], [ "To remove this error just increase the domain boundary **before** running `init_glacier_directories` ! 
Attention, this means that more data has to be downloaded and the run takes more time. The available options for `cfg.PARAMS['border']` are **10, 40, 80 or 160** at the moment; the unit is number of grid points outside the glacier boundaries. More about that in the OGGM documentation under [preprocessed files](https://docs.oggm.org/en/latest/input-data.html#pre-processed-directories).", "_____no_output_____" ] ], [ [ "# reset to recompute statistics\nutils.mkdir(WORKING_DIR, reset=True)\n\n# increase the amount of gridpoints outside the glacier\ncfg.PARAMS['border'] = 160\ngdirs = workflow.init_glacier_directories(rgi_ids, from_prepro_level=4)\nworkflow.execute_entity_task(tasks.run_random_climate, gdirs,\n nyears=150, seed=0,\n temperature_bias=-1);\n\n# recompute the output\n# we can also get the run output directly from the methods\ndf_stats = utils.compile_glacier_statistics(gdirs)\nds = utils.compile_run_output(gdirs)", "_____no_output_____" ], [ "# check again\ndf_stats[['error_task', 'error_msg']]", "_____no_output_____" ] ], [ [ "Now `RGI60-11.00897` runs without errors!", "_____no_output_____" ], [ "### Error: `Need a valid model_flowlines file.`", "_____no_output_____" ], [ "This error message in the log is misleading: it does not really describe the source of the error, which happened earlier in the processing chain. Therefore we can look instead into the glacier_statistics via [compile_glacier_statistics](https://docs.oggm.org/en/latest/generated/oggm.utils.compile_glacier_statistics.html) or into the log output via [compile_task_log](https://docs.oggm.org/en/latest/generated/oggm.utils.compile_task_log.html#oggm.utils.compile_task_log):", "_____no_output_____" ] ], [ [ "print('error_task: {}, error_msg: {}'.format(df_stats.loc['RGI60-11.03295']['error_task'],\n df_stats.loc['RGI60-11.03295']['error_msg']))", "_____no_output_____" ] ], [ [ "Now we have a better understanding of the error: \n- OGGM can not work with this geometry of this glacier and could therefore not make a gridded mask of the glacier outlines. \n- there is no way to prevent this except you find a better way to pre-process the geometry of this glacier\n- these glaciers have to be ignored! Less than 0.5% of glacier area globally have errors during the geometry processing or failures in computing certain topographical properties by e.g. invalid DEM, see [Sect. 4.2 Invalid Glaciers of the OGGM paper (Maussion et al., 2019)](https://gmd.copernicus.org/articles/12/909/2019/#section4) and [this tutorial](preprocessing_errors.ipynb) for more up-to-date numbers", "_____no_output_____" ], [ "## Ignoring those glaciers with errors that we can't solve", "_____no_output_____" ], [ "In the run_output, you can for example just use `*.dropna` to remove these. For other applications (e.g. quantitative mass change evaluation), more will be needed (not available yet in the OGGM codebase):", "_____no_output_____" ] ], [ [ "ds.dropna(dim='rgi_id') # here we can e.g. find the volume evolution", "_____no_output_____" ] ], [ [ "## What's next?\n\n- read about [preprocessing errors](preprocessing_errors.ipynb)\n- return to the [OGGM documentation](https://docs.oggm.org)\n- back to the [table of contents](welcome.ipynb)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ] ]
d01a25e99f844a5b683d7acc3017147e206d7094
232,714
ipynb
Jupyter Notebook
study_roadmaps/2_transfer_learning_roadmap/6_freeze_base_network/2.2) Understand the effect of freezing base model in transfer learning - 2 - pytorch.ipynb
take2rohit/monk_v1
9c567bf2c8b571021b120d879ba9edf7751b9f92
[ "Apache-2.0" ]
542
2019-11-10T12:09:31.000Z
2022-03-28T11:39:07.000Z
study_roadmaps/2_transfer_learning_roadmap/6_freeze_base_network/2.2) Understand the effect of freezing base model in transfer learning - 2 - pytorch.ipynb
take2rohit/monk_v1
9c567bf2c8b571021b120d879ba9edf7751b9f92
[ "Apache-2.0" ]
117
2019-11-12T09:39:24.000Z
2022-03-12T00:20:41.000Z
study_roadmaps/2_transfer_learning_roadmap/6_freeze_base_network/2.2) Understand the effect of freezing base model in transfer learning - 2 - pytorch.ipynb
take2rohit/monk_v1
9c567bf2c8b571021b120d879ba9edf7751b9f92
[ "Apache-2.0" ]
246
2019-11-09T21:53:24.000Z
2022-03-29T00:57:07.000Z
138.850835
53,324
0.867838
[ [ [ "<a href=\"https://colab.research.google.com/github/Tessellate-Imaging/monk_v1/blob/master/study_roadmaps/2_transfer_learning_roadmap/6_freeze_base_network/2.2)%20Understand%20the%20effect%20of%20freezing%20base%20model%20in%20transfer%20learning%20-%202%20-%20pytorch.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "# Goals\n\n\n### In the previous tutorial you studied the role of freezing models on a small dataset. \n\n\n### Understand the role of freezing models in transfer learning on a fairly large dataset\n\n\n### Why freeze/unfreeze base models in transfer learning\n\n\n### Use comparison feature to appropriately set this parameter on custom dataset\n\n\n### You will be using lego bricks dataset to train the classifiers", "_____no_output_____" ], [ "# What is freezing base network\n\n\n - To recap you have two parts in your network\n - One that already existed, the pretrained one, the base network\n - The new sub-network or a single layer you added\n\n\n -The hyper-parameter we can see here: Freeze base network\n - Freezing base network makes the base network untrainable\n - The base network now acts as a feature extractor and only the next half is trained\n - If you do not freeze the base network the entire network is trained", "_____no_output_____" ], [ "# Table of Contents\n\n\n## [Install](#0)\n\n\n## [Freeze Base network in densenet121 and train a classifier](#1)\n\n\n## [Unfreeze base network in densenet121 and train another classifier](#2)\n\n\n## [Compare both the experiment](#3)", "_____no_output_____" ], [ "<a id='0'></a>\n# Install Monk", "_____no_output_____" ], [ "## Using pip (Recommended)\n\n - colab (gpu) \n - All bakcends: `pip install -U monk-colab`\n \n\n - kaggle (gpu) \n - All backends: `pip install -U monk-kaggle`\n \n\n - cuda 10.2\t\n - All backends: `pip install -U monk-cuda102`\n - Gluon bakcned: `pip install -U monk-gluon-cuda102`\n\t - Pytorch backend: `pip install -U monk-pytorch-cuda102`\n - Keras backend: `pip install -U monk-keras-cuda102`\n \n\n - cuda 10.1\t\n - All backend: `pip install -U monk-cuda101`\n\t - Gluon bakcned: `pip install -U monk-gluon-cuda101`\n\t - Pytorch backend: `pip install -U monk-pytorch-cuda101`\n\t - Keras backend: `pip install -U monk-keras-cuda101`\n \n\n - cuda 10.0\t\n - All backend: `pip install -U monk-cuda100`\n\t - Gluon bakcned: `pip install -U monk-gluon-cuda100`\n\t - Pytorch backend: `pip install -U monk-pytorch-cuda100`\n\t - Keras backend: `pip install -U monk-keras-cuda100`\n \n\n - cuda 9.2\t\n - All backend: `pip install -U monk-cuda92`\n\t - Gluon bakcned: `pip install -U monk-gluon-cuda92`\n\t - Pytorch backend: `pip install -U monk-pytorch-cuda92`\n\t - Keras backend: `pip install -U monk-keras-cuda92`\n \n\n - cuda 9.0\t\n - All backend: `pip install -U monk-cuda90`\n\t - Gluon bakcned: `pip install -U monk-gluon-cuda90`\n\t - Pytorch backend: `pip install -U monk-pytorch-cuda90`\n\t - Keras backend: `pip install -U monk-keras-cuda90`\n \n\n - cpu \t\t\n - All backend: `pip install -U monk-cpu`\n\t - Gluon bakcned: `pip install -U monk-gluon-cpu`\n\t - Pytorch backend: `pip install -U monk-pytorch-cpu`\n\t - Keras backend: `pip install -U monk-keras-cpu`", "_____no_output_____" ], [ "## Install Monk Manually (Not recommended)\n \n### Step 1: Clone the library\n - git clone https://github.com/Tessellate-Imaging/monk_v1.git\n \n \n \n \n### Step 2: Install requirements \n - Linux\n - Cuda 9.0\n - 
`cd monk_v1/installation/Linux && pip install -r requirements_cu90.txt`\n - Cuda 9.2\n - `cd monk_v1/installation/Linux && pip install -r requirements_cu92.txt`\n - Cuda 10.0\n - `cd monk_v1/installation/Linux && pip install -r requirements_cu100.txt`\n - Cuda 10.1\n - `cd monk_v1/installation/Linux && pip install -r requirements_cu101.txt`\n - Cuda 10.2\n - `cd monk_v1/installation/Linux && pip install -r requirements_cu102.txt`\n - CPU (Non gpu system)\n - `cd monk_v1/installation/Linux && pip install -r requirements_cpu.txt`\n \n \n - Windows\n - Cuda 9.0 (Experimental support)\n - `cd monk_v1/installation/Windows && pip install -r requirements_cu90.txt`\n - Cuda 9.2 (Experimental support)\n - `cd monk_v1/installation/Windows && pip install -r requirements_cu92.txt`\n - Cuda 10.0 (Experimental support)\n - `cd monk_v1/installation/Windows && pip install -r requirements_cu100.txt`\n - Cuda 10.1 (Experimental support)\n - `cd monk_v1/installation/Windows && pip install -r requirements_cu101.txt`\n - Cuda 10.2 (Experimental support)\n - `cd monk_v1/installation/Windows && pip install -r requirements_cu102.txt`\n - CPU (Non gpu system)\n - `cd monk_v1/installation/Windows && pip install -r requirements_cpu.txt`\n \n \n - Mac\n - CPU (Non gpu system)\n - `cd monk_v1/installation/Mac && pip install -r requirements_cpu.txt`\n \n \n - Misc\n - Colab (GPU)\n - `cd monk_v1/installation/Misc && pip install -r requirements_colab.txt`\n - Kaggle (GPU)\n - `cd monk_v1/installation/Misc && pip install -r requirements_kaggle.txt`\n \n \n \n### Step 3: Add to system path (Required for every terminal or kernel run)\n - `import sys`\n - `sys.path.append(\"monk_v1/\");`", "_____no_output_____" ], [ "## Dataset - LEGO Classification\n - https://www.kaggle.com/joosthazelzet/lego-brick-images/", "_____no_output_____" ] ], [ [ "! wget --load-cookies /tmp/cookies.txt \"https://docs.google.com/uc?export=download&confirm=$(wget --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1MRC58-oCdR1agFTWreDFqevjEOIWDnYZ' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\\1\\n/p')&id=1MRC58-oCdR1agFTWreDFqevjEOIWDnYZ\" -O skin_cancer_mnist_dataset.zip && rm -rf /tmp/cookies.txt", "_____no_output_____" ], [ "! 
unzip -qq skin_cancer_mnist_dataset.zip", "_____no_output_____" ] ], [ [ "# Imports", "_____no_output_____" ] ], [ [ "#Using pytorch backend \n\n# When installed using pip\nfrom monk.pytorch_prototype import prototype\n\n\n# When installed manually (Uncomment the following)\n#import os\n#import sys\n#sys.path.append(\"monk_v1/\");\n#sys.path.append(\"monk_v1/monk/\");\n#from monk.pytorch_prototype import prototype", "_____no_output_____" ] ], [ [ "<a id='1'></a>\n# Freeze Base network in densenet121 and train a classifier", "_____no_output_____" ], [ "## Creating and managing experiments\n - Provide project name\n - Provide experiment name\n - For a specific dataset, create a single project\n - Inside each project, multiple experiments can be created\n - Every experiment can have different hyper-parameters attached to it", "_____no_output_____" ] ], [ [ "gtf = prototype(verbose=1);\ngtf.Prototype(\"Project\", \"Freeze_Base_Network\");", "Pytorch Version: 1.2.0\n\nExperiment Details\n    Project: Project\n    Experiment: Freeze_Base_Network\n    Dir: /home/abhi/Desktop/Work/tess_tool/gui/v0.3/finetune_models/Organization/development/v5.0_blocks/study_roadmap/change_post_num_layers/5_transfer_learning_params/2_freezing_base_network/workspace/Project/Freeze_Base_Network/\n\n" ] ], [ [ "### This creates files and directories as per the following structure\n    \n    \n    workspace\n        |\n        |--------Project\n        |\n        |\n        |-----Freeze_Base_Network\n            |\n            |-----experiment-state.json\n            |\n            |-----output\n                |\n                |------logs (All training logs and graphs saved here)\n                |\n                |------models (all trained models saved here)\n    ", "_____no_output_____" ], [ "## Set dataset and select the model", "_____no_output_____" ], [ "## Quick mode training\n\n - Using Default Function\n - dataset_path\n - model_name\n - freeze_base_network\n - num_epochs\n \n \n## Sample Dataset folder structure\n\n    parent_directory\n        |\n        |\n        |------cats\n                |\n                |------img1.jpg\n                |------img2.jpg\n                |------.... (and so on)\n        |------dogs\n                |\n                |------img1.jpg\n                |------img2.jpg\n                |------.... (and so on) ", "_____no_output_____" ], [ "## Modifiable params \n - dataset_path: path to data\n - model_name: which pretrained model to use\n - freeze_base_network: whether to freeze the pretrained base network (True) or retrain it (False)\n - num_epochs: Number of epochs to train for", "_____no_output_____" ] ], [ [ "gtf.Default(dataset_path=\"skin_cancer_mnist_dataset/images\",\n            path_to_csv=\"skin_cancer_mnist_dataset/train_labels.csv\",\n            model_name=\"densenet121\", \n            \n            \n            \n            \n            freeze_base_network=True,  # Set this param as true\n            \n            \n            \n            num_epochs=5);\n\n#Read the summary generated once you run this cell. 
", "Dataset Details\n Train path: skin_cancer_mnist_dataset/images\n Val path: None\n CSV train path: skin_cancer_mnist_dataset/train_labels.csv\n CSV val path: None\n\nDataset Params\n Input Size: 224\n Batch Size: 4\n Data Shuffle: True\n Processors: 4\n Train-val split: 0.7\n Delimiter: ,\n\nPre-Composed Train Transforms\n[{'RandomHorizontalFlip': {'p': 0.8}}, {'Normalize': {'mean': [0.485, 0.456, 0.406], 'std': [0.229, 0.224, 0.225]}}]\n\nPre-Composed Val Transforms\n[{'RandomHorizontalFlip': {'p': 0.8}}, {'Normalize': {'mean': [0.485, 0.456, 0.406], 'std': [0.229, 0.224, 0.225]}}]\n\nDataset Numbers\n Num train images: 7010\n Num val images: 3005\n Num classes: 7\n\nModel Params\n Model name: densenet121\n Use Gpu: True\n Use pretrained: True\n Freeze base network: True\n\nModel Details\n Loading pretrained model\n Model Loaded on device\n Model name: densenet121\n Num layers in model: 242\n Num trainable layers: 1\n\nOptimizer\n Name: sgd\n Learning rate: 0.01\n Params: {'lr': 0.01, 'momentum': 0, 'weight_decay': 0.0001, 'momentum_dampening_rate': 0, 'clipnorm': 0.0, 'clipvalue': 0.0}\n\n\n\nLearning rate scheduler\n Name: multisteplr\n Params: {'milestones': [2, 3], 'gamma': 0.1, 'last_epoch': -1}\n\nLoss\n Name: softmaxcrossentropy\n Params: {'weight': None, 'batch_axis': 0, 'axis_to_sum_over': -1, 'label_as_categories': True, 'label_smoothing': False}\n\nTraining params\n Num Epochs: 5\n\nDisplay params\n Display progress: True\n Display progress realtime: True\n Save Training logs: True\n Save Intermediate models: True\n Intermediate model prefix: intermediate_model_\n\n" ] ], [ [ "## From the summary above\n\n - Model Params\n Model name: densenet121\n Use Gpu: True\n Use pretrained: True\n \n \n Freeze base network: True", "_____no_output_____" ], [ "## Another thing to notice from summary\n\n Model Details\n Loading pretrained model\n Model Loaded on device\n Model name: densenet121\n Num of potentially trainable layers: 242\n Num of actual trainable layers: 1\n \n\n### There are a total of 242 layers\n\n### Since we have freezed base network only 1 is trainable, the final layer", "_____no_output_____" ], [ "## Train the classifier", "_____no_output_____" ] ], [ [ "#Start Training\ngtf.Train();\n\n#Read the training summary generated once you run the cell and training is completed", "Training Start\n Epoch 1/5\n ----------\n" ] ], [ [ "### Best validation Accuracy achieved - 74.77 %\n(You may get a different result)", "_____no_output_____" ], [ "<a id='2'></a>\n# Unfreeze Base network in densenet121 and train a classifier", "_____no_output_____" ], [ "## Creating and managing experiments\n - Provide project name\n - Provide experiment name\n - For a specific data create a single project\n - Inside each project multiple experiments can be created\n - Every experiment can be have diferent hyper-parameters attached to it", "_____no_output_____" ] ], [ [ "gtf = prototype(verbose=1);\ngtf.Prototype(\"Project\", \"Unfreeze_Base_Network\");", "Pytorch Version: 1.2.0\n\nExperiment Details\n Project: Project\n Experiment: Unfreeze_Base_Network\n Dir: /home/abhi/Desktop/Work/tess_tool/gui/v0.3/finetune_models/Organization/development/v5.0_blocks/study_roadmap/change_post_num_layers/5_transfer_learning_params/2_freezing_base_network/workspace/Project/Unfreeze_Base_Network/\n\n" ] ], [ [ "### This creates files and directories as per the following structure\n \n \n workspace\n |\n |--------Project\n |\n |\n |-----Freeze_Base_Network (Previously created)\n |\n |-----experiment-state.json\n 
|\n            |-----output\n                |\n                |------logs (All training logs and graphs saved here)\n                |\n                |------models (all trained models saved here)\n        |\n        |\n        |-----Unfreeze_Base_Network (Created Now)\n            |\n            |-----experiment-state.json\n            |\n            |-----output\n                |\n                |------logs (All training logs and graphs saved here)\n                |\n                |------models (all trained models saved here)", "_____no_output_____" ], [ "## Set dataset and select the model", "_____no_output_____" ], [ "## Quick mode training\n\n - Using Default Function\n - dataset_path\n - model_name\n - freeze_base_network\n - num_epochs\n \n \n## Sample Dataset folder structure\n\n    parent_directory\n        |\n        |\n        |------cats\n                |\n                |------img1.jpg\n                |------img2.jpg\n                |------.... (and so on)\n        |------dogs\n                |\n                |------img1.jpg\n                |------img2.jpg\n                |------.... (and so on)", "_____no_output_____" ], [ "## Modifiable params \n - dataset_path: path to data\n - model_name: which pretrained model to use\n - freeze_base_network: whether to freeze the pretrained base network (True) or retrain it (False)\n - num_epochs: Number of epochs to train for", "_____no_output_____" ] ], [ [ "gtf.Default(dataset_path=\"skin_cancer_mnist_dataset/images\",\n            path_to_csv=\"skin_cancer_mnist_dataset/train_labels.csv\",\n            model_name=\"densenet121\",\n            \n            \n            \n            freeze_base_network=False,  # Set this param as false\n            \n            \n            \n            num_epochs=5);\n\n#Read the summary generated once you run this cell. ", "Dataset Details\n    Train path:     skin_cancer_mnist_dataset/images\n    Val path:       None\n    CSV train path: skin_cancer_mnist_dataset/train_labels.csv\n    CSV val path:   None\n\nDataset Params\n    Input Size:   224\n    Batch Size:   4\n    Data Shuffle: True\n    Processors:   4\n    Train-val split:   0.7\n    Delimiter:         ,\n\nPre-Composed Train Transforms\n[{'RandomHorizontalFlip': {'p': 0.8}}, {'Normalize': {'mean': [0.485, 0.456, 0.406], 'std': [0.229, 0.224, 0.225]}}]\n\nPre-Composed Val Transforms\n[{'RandomHorizontalFlip': {'p': 0.8}}, {'Normalize': {'mean': [0.485, 0.456, 0.406], 'std': [0.229, 0.224, 0.225]}}]\n\nDataset Numbers\n    Num train images: 7010\n    Num val images:   3005\n    Num classes:      7\n\nModel Params\n    Model name:           densenet121\n    Use Gpu:              True\n    Use pretrained:       True\n    Freeze base network:  False\n\nModel Details\n    Loading pretrained model\n    Model Loaded on device\n    Model name:           densenet121\n    Num layers in model:  242\n    Num trainable layers: 242\n\nOptimizer\n    Name:          sgd\n    Learning rate: 0.01\n    Params:        {'lr': 0.01, 'momentum': 0, 'weight_decay': 0.0001, 'momentum_dampening_rate': 0, 'clipnorm': 0.0, 'clipvalue': 0.0}\n\n\n\nLearning rate scheduler\n    Name:   multisteplr\n    Params: {'milestones': [2, 3], 'gamma': 0.1, 'last_epoch': -1}\n\nLoss\n    Name:   softmaxcrossentropy\n    Params: {'weight': None, 'batch_axis': 0, 'axis_to_sum_over': -1, 'label_as_categories': True, 'label_smoothing': False}\n\nTraining params\n    Num Epochs: 5\n\nDisplay params\n    Display progress:          True\n    Display progress realtime: True\n    Save Training logs:        True\n    Save Intermediate models:  True\n    Intermediate model prefix: intermediate_model_\n\n" ] ], [ [ "## From the summary above\n\n - Model Params\n    Model name:           densenet121\n    Use Gpu:              True\n    Use pretrained:       True\n    \n    \n    Freeze base network:  False", "_____no_output_____" ], [ "## Another thing to notice from summary\n\n    Model Details\n    Loading pretrained model\n    Model Loaded on device\n    Model name:  densenet121\n    Num of potentially trainable layers: 242\n    Num of actual trainable layers: 242\n    \n\n### There are a total of 242 layers\n\n### Since we have unfrozen the base network, all 242 layers are trainable, including the final layer", "_____no_output_____" ], [ 
"## Train the classifier", "_____no_output_____" ] ], [ [ "#Start Training\ngtf.Train();\n\n#Read the training summary generated once you run the cell and training is completed", "Training Start\n Epoch 1/5\n ----------\n" ] ], [ [ "### Best Val Accuracy achieved - 81.33 %\n(You may get a different result)", "_____no_output_____" ], [ "<a id='3'></a>\n# Compare both the experiment", "_____no_output_____" ] ], [ [ "# Invoke the comparison class\nfrom monk.compare_prototype import compare", "_____no_output_____" ] ], [ [ "### Creating and managing comparison experiments\n - Provide project name", "_____no_output_____" ] ], [ [ "# Create a project \ngtf = compare(verbose=1);\ngtf.Comparison(\"Compare-effect-of-freezing\");", "Comparison: - Compare-effect-of-freezing\n" ] ], [ [ "### This creates files and directories as per the following structure\n \n workspace\n |\n |--------comparison\n |\n |\n |-----Compare-effect-of-freezing\n |\n |------stats_best_val_acc.png\n |------stats_max_gpu_usage.png\n |------stats_training_time.png\n |------train_accuracy.png\n |------train_loss.png\n |------val_accuracy.png\n |------val_loss.png\n \n |\n |-----comparison.csv (Contains necessary details of all experiments)", "_____no_output_____" ], [ "### Add the experiments\n - First argument - Project name\n - Second argument - Experiment name", "_____no_output_____" ] ], [ [ "gtf.Add_Experiment(\"Project\", \"Freeze_Base_Network\");\ngtf.Add_Experiment(\"Project\", \"Unfreeze_Base_Network\");", "Project - Project, Experiment - Freeze_Base_Network added\nProject - Project, Experiment - Unfreeze_Base_Network added\n" ] ], [ [ "### Run Analysis", "_____no_output_____" ] ], [ [ "gtf.Generate_Statistics();", "Generating statistics...\nGenerated\n\n" ] ], [ [ "## Visualize and study comparison metrics", "_____no_output_____" ], [ "### Training Accuracy Curves", "_____no_output_____" ] ], [ [ "from IPython.display import Image\nImage(filename=\"workspace/comparison/Compare-effect-of-freezing/train_accuracy.png\") ", "_____no_output_____" ] ], [ [ "### Training Loss Curves", "_____no_output_____" ] ], [ [ "from IPython.display import Image\nImage(filename=\"workspace/comparison/Compare-effect-of-freezing/train_loss.png\") ", "_____no_output_____" ] ], [ [ "### Validation Accuracy Curves", "_____no_output_____" ] ], [ [ "from IPython.display import Image\nImage(filename=\"workspace/comparison/Compare-effect-of-freezing/val_accuracy.png\") ", "_____no_output_____" ] ], [ [ "### Validation loss curves", "_____no_output_____" ] ], [ [ "from IPython.display import Image\nImage(filename=\"workspace/comparison/Compare-effect-of-freezing/val_loss.png\") ", "_____no_output_____" ] ], [ [ "## Accuracies achieved on validation dataset\n\n### With freezing base network - 74.77 %\n### Without freezing base network - 81.33 %\n\n#### For this classifier, keeping the base network trainable seems to be a good option. Thus for other data it may result in overfitting the training data\n\n(You may get a different result)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
d01a350a5e178563189a383bb1f02b8fc40b66d9
4,433
ipynb
Jupyter Notebook
ipynb/Caesar Cipher.ipynb
davzoku/pyground
983f3670915346a1a8c27fb563ac91bdb5b45cf9
[ "MIT" ]
null
null
null
ipynb/Caesar Cipher.ipynb
davzoku/pyground
983f3670915346a1a8c27fb563ac91bdb5b45cf9
[ "MIT" ]
null
null
null
ipynb/Caesar Cipher.ipynb
davzoku/pyground
983f3670915346a1a8c27fb563ac91bdb5b45cf9
[ "MIT" ]
null
null
null
25.045198
249
0.502369
[ [ [ "## Caesar Cipher\n\nA Caesar cipher, also known as shift cipher is one of the simplest and most widely known encryption techniques. \nIt is a type of substitution cipher in which each letter in the plaintext is replaced by a letter some fixed number of positions down the alphabet. For example, with a left shift of 3, D would be replaced by A, E would become B, and so on. \n\n![Cipher Table](https://microbit-challenges.readthedocs.io/en/latest/_images/shift.png)\n", "_____no_output_____" ], [ "Insert message to encrypt and shift (0<= S <=26) below.\n\nBy default, Caesar Cipher does a left shift of 3", "_____no_output_____" ] ], [ [ "msg = \"The quick brown fox jumps over the lazy dog 123 !@#\"\nshift = 3\n", "_____no_output_____" ], [ "def getmsg():\n processedmsg = ''\n for x in msg:\n if x.isalpha():\n num = ord(x)\n num += shift\n \n if x.isupper():\n if num > ord('Z'):\n num -= 26\n elif num < ord('A'):\n num += 26\n elif x.islower():\n if num > ord('z'):\n num -= 26\n elif num < ord('a'):\n num += 26\n \n processedmsg += chr(num)\n else:\n processedmsg += x\n return processedmsg", "_____no_output_____" ] ], [ [ "The for loop above inspects each letter in the message.\n\nchr(), character function takes an integer ordinal and returns a character. ie. chr(65) outputs 'A' based on the ASCII table\n\nord(), ordinal does the reverse. ie ord('A') gives 65.\n\nBased on the ASCII Table, 'Z' with a shift of 3 will give us ']', which is undesirable.\n\nThus, we need the if-else statements to perform a \"wraparound\". If num has a value large than the ordinal value of 'Z', subtract 26.\n\nIf num is less than 'a', add 26.\n\n**\"else:\n processedmsg += x'**\n \n concatenates any spaces, numbers etc that are not encrypted or decrypted.", "_____no_output_____" ] ], [ [ "encrypted=getmsg()\nprint(encrypted)\n", "Wkh txlfn eurzq ira mxpsv ryhu wkh odcb grj 123 !@#\n" ] ], [ [ "Note that only alphabets are encrypted.\n\nTo decrypt, the algorithm is very similar.", "_____no_output_____" ] ], [ [ "shift=-shift\nmsg=encrypted\n\ndecrypted= getmsg()\nprint(decrypted)", "The quick brown fox jumps over the lazy dog 123 !@#\n" ] ], [ [ "By reversing the polarity of the shift key we can get back the plain text.\n\n## References\n[Invent with Python](http://inventwithpython.com/hacking/)\n\n\n", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
d01a396c0be70303e8b36ed8df972d695a0f2c77
277,854
ipynb
Jupyter Notebook
code/first_step_with_tensorflow.ipynb
kevinleeex/notes-for-mlcc
05ff5f502a0aaccc171b8edf5bc463ed848326b0
[ "CC-BY-3.0", "Apache-2.0" ]
3
2018-11-26T03:17:05.000Z
2021-12-07T16:08:33.000Z
code/first_step_with_tensorflow.ipynb
kevinleeex/notes-for-mlcc
05ff5f502a0aaccc171b8edf5bc463ed848326b0
[ "CC-BY-3.0", "Apache-2.0" ]
null
null
null
code/first_step_with_tensorflow.ipynb
kevinleeex/notes-for-mlcc
05ff5f502a0aaccc171b8edf5bc463ed848326b0
[ "CC-BY-3.0", "Apache-2.0" ]
null
null
null
252.365123
113,556
0.902816
[ [ [ "## 使用TensorFlow的基本步骤\n以使用LinearRegression来预测房价为例。\n- 使用RMSE(均方根误差)评估模型预测的准确率\n- 通过调整超参数来提高模型的预测准确率", "_____no_output_____" ] ], [ [ "from __future__ import print_function\n\nimport math\n\nfrom IPython import display\nfrom matplotlib import cm\nfrom matplotlib import gridspec\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nfrom sklearn import metrics\nimport tensorflow as tf\nfrom tensorflow.python.data import Dataset\n\ntf.logging.set_verbosity(tf.logging.ERROR)\npd.options.display.max_rows = 10\npd.options.display.float_format = '{:.1f}'.format", "/Users/kevin/.virtualenvs/py36/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: compiletime version 3.5 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.6\n return f(*args, **kwds)\n" ], [ "# 加载数据集\ncalifornia_housing_df = pd.read_csv(\"https://download.mlcc.google.cn/mledu-datasets/california_housing_train.csv\", sep=\",\")", "_____no_output_____" ], [ "# 将数据打乱\ncalifornia_housing_df = california_housing_df.reindex(np.random.permutation(california_housing_df.index))\n# 替换房价的单位为k\ncalifornia_housing_df['median_house_value'] /=1000.0\nprint(\"california house dataframe: \\n\", california_housing_df) # 根据pd设置,只显示10条数据,以及保留小数点后一位", "california house dataframe: \n longitude latitude housing_median_age total_rooms total_bedrooms \\\n840 -117.1 32.7 29.0 1429.0 293.0 \n15761 -122.4 37.8 52.0 3260.0 1535.0 \n2964 -117.8 34.1 23.0 7079.0 1381.0 \n5005 -118.1 33.8 36.0 1074.0 188.0 \n9816 -119.7 36.5 29.0 1702.0 301.0 \n... ... ... ... ... ... \n1864 -117.3 34.7 28.0 1932.0 421.0 \n6257 -118.2 34.1 11.0 1281.0 418.0 \n4690 -118.1 34.1 52.0 1282.0 189.0 \n6409 -118.3 33.9 44.0 1103.0 265.0 \n11082 -121.0 38.7 5.0 5743.0 1074.0 \n\n population households median_income median_house_value \n840 1091.0 317.0 3.5 118.0 \n15761 3260.0 1457.0 0.9 500.0 \n2964 3205.0 1327.0 3.1 212.3 \n5005 496.0 196.0 4.6 217.4 \n9816 914.0 280.0 2.8 79.2 \n... ... ... ... ... 
\n1864       1156.0       404.0            1.9                55.6  \n6257       1584.0       330.0            2.9               153.1  \n4690        431.0       187.0            6.1               470.8  \n6409        760.0       247.0            1.7                99.6  \n11082      2651.0       962.0            4.1               172.5  \n\n[17000 rows x 9 columns]\n" ] ], [ [ "### Examine the data", "_____no_output_____" ] ], [ [ "# Use pandas' describe method to get summary statistics\ncalifornia_housing_df.describe()", "_____no_output_____" ] ], [ [ "### Build the model\nIn this example we will predict the median house value, using it as the label we learn, with the total number of rooms as the input feature.", "_____no_output_____" ], [ "#### Step 1: Define features and configure feature columns\nTo import data into TensorFlow, we need to specify the data type each feature contains. We mainly use the following two types:\n- Categorical data: textual data.\n- Numerical data: numeric data (integers or floats), or data we wish to treat as numeric.\n\nIn TF we use the **feature column** construct to represent a feature's data type. Feature columns store only a description of the feature data; they do not contain the feature data itself.", "_____no_output_____" ] ], [ [ "# Define the input feature\nkl_feature = california_housing_df[['total_rooms']]\n\n# Configure total_rooms as a numeric feature column\nfeature_columns = [tf.feature_column.numeric_column('total_rooms')]", "[_NumericColumn(key='total_rooms', shape=(1,), default_value=None, dtype=tf.float32, normalizer_fn=None)]\n" ] ], [ [ "#### Step 2: Define the target", "_____no_output_____" ] ], [ [ "# Define the target label\ntargets = california_housing_df['median_house_value']", "_____no_output_____" ] ], [ [ "**Gradient clipping** caps gradient values before they are applied; it helps ensure numerical stability and prevents exploding gradients. (A tiny numeric illustration of norm clipping follows this notebook's cells.)", "_____no_output_____" ], [ "#### Step 3: Configure the linear regressor", "_____no_output_____" ] ], [ [ "# Configure the linear regression model with LinearRegressor, trained with the GradientDescentOptimizer\nkl_optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.0000001)\n# Clip our optimizer's gradients with clip_gradients_by_norm; clipping keeps gradient magnitudes from growing too large during training, which would make gradient descent fail.\nkl_optimizer = tf.contrib.estimator.clip_gradients_by_norm(kl_optimizer, 5.0)\n\n# Configure the linear regression model with our feature columns and optimizer\nhouse_linear_regressor = tf.estimator.LinearRegressor(feature_columns=feature_columns, optimizer=kl_optimizer)", "_____no_output_____" ] ], [ [ "#### Step 4: Define the input function\nTo import data into the LinearRegressor, we need to define an input function that tells TF how to preprocess the data, and how to batch, shuffle, and repeat it during model training.\nFirst we convert the pandas feature data into a dict of NumPy arrays, then use the Dataset API to build a Dataset object, split the data into batches of batch_size, and repeat it for the specified number of epochs (num_epochs). **Note:** if the default num_epochs=None is passed to repeat(), the input data is repeated indefinitely.\nshuffle: Bool, whether to shuffle the data\nbuffer_size: the size of the dataset from which shuffle samples randomly", "_____no_output_____" ] ], [ [ "def kl_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None):\n    \"\"\"Trains a house-price prediction model on a single feature\n    Args:\n      features: feature DataFrame\n      targets: target DataFrame\n      batch_size: batch size\n      shuffle: Bool. 
whether to shuffle the data\n    Return:\n      a (features, labels) tuple for the next data batch\n    \"\"\"\n    # Convert the pandas data into a dict of np.arrays\n    features = {key: np.array(value) for key, value in dict(features).items()}\n    \n    # Build the dataset and configure batching and repetition\n    ds = Dataset.from_tensor_slices((features, targets))  # note: 2GB limit on data size\n    ds = ds.batch(batch_size).repeat(num_epochs)\n    \n    # Shuffle the data\n    if shuffle:\n      ds = ds.shuffle(buffer_size=10000)  # buffer_size is the size of the dataset sampled from randomly\n    \n    # Return the next batch of data\n    features, labels = ds.make_one_shot_iterator().get_next()\n    return features, labels", "_____no_output_____" ] ], [ [ "**Note:** for more detail on input functions and the Dataset API, see the [TF Developer's Guide](https://www.tensorflow.org/programmers_guide/datasets)", "_____no_output_____" ], [ "#### Step 5: Train the model\nCall train() on the linear_regressor to train the model", "_____no_output_____" ] ], [ [ "_ = house_linear_regressor.train(input_fn=lambda: kl_input_fn(kl_feature, targets), steps=100)", "_____no_output_____" ] ], [ [ "#### Step 6: Evaluate the model\n**Note:** training error measures how well the trained model fits the training data, but it does **not** measure how well the model generalizes to new data; we need to split the data to evaluate the model's ability to generalize. (A small worked RMSE example follows this notebook's cells.)", "_____no_output_____" ] ], [ [ "# We only predict once, so set num_epochs to 1 and turn off shuffling\nprediction_input_fn = lambda: kl_input_fn(kl_feature, targets, num_epochs=1, shuffle=False)\n\n# Call predict to make predictions\npredictions = house_linear_regressor.predict(input_fn=prediction_input_fn)\n\n# Convert the predictions to a NumPy array\npredictions = np.array([item['predictions'][0] for item in predictions])\n\n# Print the MSE and RMSE\nmean_squared_error = metrics.mean_squared_error(predictions, targets)\nroot_mean_squared_error = math.sqrt(mean_squared_error)\n\nprint(\"Mean squared error: %0.3f\" % mean_squared_error)\nprint(\"Root mean squared error: %0.3f\" % root_mean_squared_error)\n", "Mean squared error: 56367.025\nRoot mean squared error: 237.417\n" ], [ "min_house_value = california_housing_df['median_house_value'].min()\nmax_house_value = california_housing_df['median_house_value'].max()\nmin_max_diff = max_house_value - min_house_value\n\nprint(\"Min median house value: %0.3f\" % min_house_value)\nprint(\"Max median house value: %0.3f\" % max_house_value)\nprint(\"Difference between max and min: %0.3f\" % min_max_diff)\nprint(\"Root mean squared error: %0.3f\" % root_mean_squared_error)", "Min median house value: 14.999\nMax median house value: 500.001\nDifference between max and min: 485.002\nRoot mean squared error: 237.417\n" ] ], [ [ "From these results we can see that the model does not perform well; we can apply some basic strategies to reduce the error.", "_____no_output_____" ] ], [ [ "calibration_data = pd.DataFrame()\ncalibration_data[\"predictions\"] = pd.Series(predictions)\ncalibration_data[\"targets\"] = pd.Series(targets)\ncalibration_data.describe()", "_____no_output_____" ], [ "# We can visualize the data and the line we learned\nsample = california_housing_df.sample(n=300)  # take a uniformly sampled 300-row DataFrame", "_____no_output_____" ], [ "# Get the min and max of total_rooms\nx_0 = sample[\"total_rooms\"].min()\nx_1 = sample[\"total_rooms\"].max()\n\n# Get the final trained weight and bias\nweight = house_linear_regressor.get_variable_value('linear/linear_model/total_rooms/weights')[0]\nbias = house_linear_regressor.get_variable_value('linear/linear_model/bias_weights')\n\n# Compute the house values (label) at the min and max room counts (feature)\ny_0 = weight * x_0 + bias\ny_1 = weight * x_1 + bias\n\n# Plot the fitted line\nplt.plot([x_0,x_1], [y_0,y_1],c='r')\nplt.ylabel('median_house_value')\nplt.xlabel('total_rooms')\n\n# Draw the scatter plot\nplt.scatter(sample[\"total_rooms\"], sample[\"median_house_value\"])\n\nplt.show()", "_____no_output_____" ] ], [ [ "### Model tuning\nBelow, the code above is wrapped into a single function for tuning", "_____no_output_____" ] ], [ [ "def train_model(learning_rate, steps, batch_size, input_feature=\"total_rooms\"):\n  \"\"\"Trains a linear regression model of one feature.\n  \n  Args:\n    learning_rate: A `float`, the learning rate.\n    steps: A non-zero `int`, the total number of training steps. 
A training step\n consists of a forward and backward pass using a single batch.\n batch_size: A non-zero `int`, the batch size.\n input_feature: A `string` specifying a column from `california_housing_df`\n to use as input feature.\n \"\"\"\n \n periods = 10\n steps_per_period = steps / periods\n\n my_feature = input_feature\n my_feature_data = california_housing_df[[my_feature]]\n my_label = \"median_house_value\"\n targets = california_housing_df[my_label]\n\n # Create feature columns.\n feature_columns = [tf.feature_column.numeric_column(my_feature)]\n \n # Create input functions.\n training_input_fn = lambda:kl_input_fn(my_feature_data, targets, batch_size=batch_size)\n prediction_input_fn = lambda: kl_input_fn(my_feature_data, targets, num_epochs=1, shuffle=False)\n \n # Create a linear regressor object.\n my_optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)\n my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)\n linear_regressor = tf.estimator.LinearRegressor(\n feature_columns=feature_columns,\n optimizer=my_optimizer\n )\n\n # Set up to plot the state of our model's line each period.\n plt.figure(figsize=(15, 6))\n plt.subplot(1, 2, 1)\n plt.title(\"Learned Line by Period\")\n plt.ylabel(my_label)\n plt.xlabel(my_feature)\n sample = california_housing_df.sample(n=300)\n plt.scatter(sample[my_feature], sample[my_label])\n colors = [cm.coolwarm(x) for x in np.linspace(-1, 1, periods)]\n\n # Train the model, but do so inside a loop so that we can periodically assess\n # loss metrics.\n print(\"Training model...\")\n print(\"RMSE (on training data):\")\n root_mean_squared_errors = []\n for period in range (0, periods):\n # Train the model, starting from the prior state.\n linear_regressor.train(\n input_fn=training_input_fn,\n steps=steps_per_period\n )\n # Take a break and compute predictions.\n predictions = linear_regressor.predict(input_fn=prediction_input_fn)\n predictions = np.array([item['predictions'][0] for item in predictions])\n \n # Compute loss.\n root_mean_squared_error = math.sqrt(\n metrics.mean_squared_error(predictions, targets))\n # Occasionally print the current loss.\n print(\" period %02d : %0.2f\" % (period, root_mean_squared_error))\n # Add the loss metrics from this period to our list.\n root_mean_squared_errors.append(root_mean_squared_error)\n # Finally, track the weights and biases over time.\n # Apply some math to ensure that the data and line are plotted neatly.\n y_extents = np.array([0, sample[my_label].max()])\n \n weight = linear_regressor.get_variable_value('linear/linear_model/%s/weights' % input_feature)[0]\n bias = linear_regressor.get_variable_value('linear/linear_model/bias_weights')\n\n x_extents = (y_extents - bias) / weight\n x_extents = np.maximum(np.minimum(x_extents,\n sample[my_feature].max()),\n sample[my_feature].min())\n y_extents = weight * x_extents + bias\n plt.plot(x_extents, y_extents, color=colors[period]) \n print(\"Model training finished.\")\n\n # Output a graph of loss metrics over periods.\n plt.subplot(1, 2, 2)\n plt.ylabel('RMSE')\n plt.xlabel('Periods')\n plt.title(\"Root Mean Squared Error vs. 
Periods\")\n plt.tight_layout()\n plt.plot(root_mean_squared_errors)\n\n # Output a table with calibration data.\n calibration_data = pd.DataFrame()\n calibration_data[\"predictions\"] = pd.Series(predictions)\n calibration_data[\"targets\"] = pd.Series(targets)\n display.display(calibration_data.describe())\n\n print(\"Final RMSE (on training data): %0.2f\" % root_mean_squared_error)", "_____no_output_____" ] ], [ [ "**练习1: 使RMSE不超过180**", "_____no_output_____" ] ], [ [ "train_model(learning_rate=0.00002, steps=500, batch_size=5)", "Training model...\nRMSE (on training data):\n period 00 : 225.63\n period 01 : 214.42\n period 02 : 204.04\n period 03 : 194.97\n period 04 : 186.60\n period 05 : 180.80\n period 06 : 175.66\n period 07 : 171.74\n period 08 : 168.96\n period 09 : 167.23\nModel training finished.\n" ] ], [ [ "### 模型调参的启发法\n> 不要死循规则\n\n- 训练误差应该稳步减小,刚开始是急剧减小,最终应随着训练收敛达到平稳状态。\n- 如果训练尚未收敛,尝试运行更长的时间。\n- 如果训练误差减小速度过慢,则提高学习速率也许有助于加快其减小速度。\n- 但有时如果学习速率过高,训练误差的减小速度反而会变慢。\n- 如果训练误差变化很大,尝试降低学习速率。\n- 较低的学习速率和较大的步数/较大的批量大小通常是不错的组合。\n- 批量大小过小也会导致不稳定情况。不妨先尝试 100 或 1000 等较大的值,然后逐渐减小值的大小,直到出现性能降低的情况。", "_____no_output_____" ], [ "**练习2:尝试其他特征**\n我们使用population特征替代。", "_____no_output_____" ] ], [ [ "train_model(learning_rate=0.00005, steps=500, batch_size=5, input_feature=\"population\")", "Training model...\nRMSE (on training data):\n period 00 : 222.79\n period 01 : 209.51\n period 02 : 198.00\n period 03 : 189.59\n period 04 : 182.78\n period 05 : 179.35\n period 06 : 177.30\n period 07 : 176.11\n period 08 : 175.97\n period 09 : 176.51\nModel training finished.\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ] ]
d01a3bd9a87d591626b7607d11441e73cbe041df
526,613
ipynb
Jupyter Notebook
8 semester/CV/lab2.ipynb
vladtsap/study
87bc1aae4db67fdc18d5203f4e2af1dee1220ec5
[ "MIT" ]
1
2021-07-13T14:35:21.000Z
2021-07-13T14:35:21.000Z
8 semester/CV/lab2.ipynb
vladtsap/study
87bc1aae4db67fdc18d5203f4e2af1dee1220ec5
[ "MIT" ]
null
null
null
8 semester/CV/lab2.ipynb
vladtsap/study
87bc1aae4db67fdc18d5203f4e2af1dee1220ec5
[ "MIT" ]
null
null
null
526,613
526,613
0.962654
[ [ [ "!wget -c https://i.imgur.com/K74Rsq2.jpg -O painting.jpg\n!wget -c https://i.imgur.com/HnwPrgi.jpg -O painting_in_life.jpg", "--2021-03-17 09:46:35-- https://i.imgur.com/K74Rsq2.jpg\nResolving i.imgur.com (i.imgur.com)... 199.232.64.193\nConnecting to i.imgur.com (i.imgur.com)|199.232.64.193|:443... connected.\nHTTP request sent, awaiting response... 416 Range Not Satisfiable\n\n The file is already fully retrieved; nothing to do.\n\n--2021-03-17 09:46:35-- https://i.imgur.com/HnwPrgi.jpg\nResolving i.imgur.com (i.imgur.com)... 199.232.64.193\nConnecting to i.imgur.com (i.imgur.com)|199.232.64.193|:443... connected.\nHTTP request sent, awaiting response... 416 Range Not Satisfiable\n\n The file is already fully retrieved; nothing to do.\n\n" ], [ "import cv2 as cv\nimport matplotlib.pyplot as plt", "_____no_output_____" ], [ "img1 = cv.imread('painting.jpg', cv.IMREAD_GRAYSCALE) # queryImage\nimg2 = cv.imread('painting_in_life.jpg', cv.IMREAD_GRAYSCALE) # trainImage", "_____no_output_____" ], [ "# Initiate ORB detector\norb = cv.ORB_create()\n\n# find the keypoints and descriptors with ORB\nkp1, des1 = orb.detectAndCompute(img1, None)\nkp2, des2 = orb.detectAndCompute(img2, None)", "_____no_output_____" ], [ "# create BFMatcher object\n# bf = cv.BFMatcher(cv.NORM_HAMMING, crossCheck=True)\n# Match descriptors.\n# matches = bf.match(des1, des2)", "_____no_output_____" ], [ "def hamdist(x, y):\n diffs = 0\n\n if len(x) != len(y):\n return max(len(x), len(y))\n\n for ch1, ch2 in zip(x, y):\n if ch1 != ch2:\n diffs += 1\n\n return diffs\n\n\nmatches = []\nfor i, k1 in enumerate(des1):\n for j, k2 in enumerate(des2):\n matches.append(cv.DMatch(_distance=hamdist(k1, k2), _imgIdx=0, _queryIdx=i, _trainIdx=j))", "_____no_output_____" ], [ "# Sort them in the order of their distance.\nmatches = sorted(matches, key=lambda x: x.distance)\n\n# Draw first 10 matches.\nimg3 = cv.drawMatches(img1, kp1, img2, kp2, matches[:10], None, flags=cv.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)\n\nplt.rcParams['figure.figsize'] = [20, 16]\nplt.imshow(img3), plt.show()", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code" ] ]
d01a43ff7ffd3c40bc38f05c6a62c0ecf9f63182
36,711
ipynb
Jupyter Notebook
ait_repository/test/tests/eval_metamorphic_test_tf1.13_0.1.ipynb
ads-ad-itcenter/qunomon.forked
48d532692d353fe2d3946f62b227f834f9349034
[ "Apache-2.0" ]
16
2020-11-18T05:43:55.000Z
2021-11-27T14:43:26.000Z
ait_repository/test/tests/eval_metamorphic_test_tf1.13_0.1.ipynb
aistairc/qunomon
d4e9c5cb569b16addfbe6c33c73812065065a1df
[ "Apache-2.0" ]
1
2022-03-23T07:55:54.000Z
2022-03-23T13:24:11.000Z
ait_repository/test/tests/eval_metamorphic_test_tf1.13_0.1.ipynb
ads-ad-itcenter/qunomon.forked
48d532692d353fe2d3946f62b227f834f9349034
[ "Apache-2.0" ]
3
2021-02-12T01:56:31.000Z
2022-03-23T02:45:02.000Z
48.431398
746
0.491052
[ [ [ "# test note\n\n\n* jupyterはコンテナ起動すること\n* テストベッド一式起動済みであること\n", "_____no_output_____" ] ], [ [ "!pip install --upgrade pip\n!pip install --force-reinstall ../lib/ait_sdk-0.1.7-py3-none-any.whl", "Requirement already satisfied: pip in /usr/local/lib/python3.6/dist-packages (21.0.1)\nCollecting pip\n Downloading pip-21.1.1-py3-none-any.whl (1.5 MB)\n\u001b[K |████████████████████████████████| 1.5 MB 4.0 MB/s eta 0:00:01\n\u001b[?25hInstalling collected packages: pip\n Attempting uninstall: pip\n Found existing installation: pip 21.0.1\n Uninstalling pip-21.0.1:\n Successfully uninstalled pip-21.0.1\nSuccessfully installed pip-21.1.1\nProcessing /workdir/root/lib/ait_sdk-0.1.7-py3-none-any.whl\nCollecting numpy<=1.19.3\n Downloading numpy-1.19.3-cp36-cp36m-manylinux2010_x86_64.whl (14.9 MB)\n\u001b[K |████████████████████████████████| 14.9 MB 2.8 MB/s eta 0:00:01 |███████▉ | 3.6 MB 3.7 MB/s eta 0:00:04 |███████████▎ | 5.2 MB 3.7 MB/s eta 0:00:03 |████████████████▌ | 7.6 MB 3.7 MB/s eta 0:00:02 |██████████████████▏ | 8.4 MB 3.7 MB/s eta 0:00:02 |███████████████████████████▍ | 12.7 MB 2.8 MB/s eta 0:00:01\n\u001b[?25hCollecting py-cpuinfo<=7.0.0\n Downloading py-cpuinfo-7.0.0.tar.gz (95 kB)\n\u001b[K |████████████████████████████████| 95 kB 4.1 MB/s eta 0:00:011\n\u001b[?25hCollecting keras<=2.4.3\n Downloading Keras-2.4.3-py2.py3-none-any.whl (36 kB)\nCollecting nbformat<=5.0.8\n Downloading nbformat-5.0.8-py3-none-any.whl (172 kB)\n\u001b[K |████████████████████████████████| 172 kB 10.6 MB/s eta 0:00:01\n\u001b[?25hCollecting psutil<=5.7.3\n Downloading psutil-5.7.3.tar.gz (465 kB)\n\u001b[K |████████████████████████████████| 465 kB 8.4 MB/s eta 0:00:01 |█████████████████████████▍ | 368 kB 8.4 MB/s eta 0:00:01\n\u001b[?25hCollecting nbconvert<=6.0.7\n Using cached nbconvert-6.0.7-py3-none-any.whl (552 kB)\nCollecting scipy>=0.14\n Downloading scipy-1.5.4-cp36-cp36m-manylinux1_x86_64.whl (25.9 MB)\n\u001b[K |████████████████████████████████| 25.9 MB 6.4 MB/s eta 0:00:01 |██▊ | 2.2 MB 6.3 MB/s eta 0:00:04 |█████▏ | 4.2 MB 6.3 MB/s eta 0:00:04 |███████████████▊ | 12.7 MB 6.5 MB/s eta 0:00:03 |██████████████████████▎ | 18.0 MB 5.8 MB/s eta 0:00:02 |█████████████████████████████▋ | 24.0 MB 6.4 MB/s eta 0:00:01\n\u001b[?25hCollecting h5py\n Downloading h5py-3.1.0-cp36-cp36m-manylinux1_x86_64.whl (4.0 MB)\n\u001b[K |████████████████████████████████| 4.0 MB 8.1 MB/s eta 0:00:01\n\u001b[?25hCollecting pyyaml\n Downloading PyYAML-5.4.1-cp36-cp36m-manylinux1_x86_64.whl (640 kB)\n\u001b[K |████████████████████████████████| 640 kB 6.3 MB/s eta 0:00:01\n\u001b[?25hCollecting testpath\n Using cached testpath-0.4.4-py2.py3-none-any.whl (163 kB)\nCollecting jupyterlab-pygments\n Using cached jupyterlab_pygments-0.1.2-py2.py3-none-any.whl (4.6 kB)\nCollecting mistune<2,>=0.8.1\n Using cached mistune-0.8.4-py2.py3-none-any.whl (16 kB)\nCollecting jinja2>=2.4\n Using cached Jinja2-2.11.3-py2.py3-none-any.whl (125 kB)\nCollecting jupyter-core\n Using cached jupyter_core-4.7.1-py3-none-any.whl (82 kB)\nCollecting pandocfilters>=1.4.1\n Using cached pandocfilters-1.4.3-py3-none-any.whl\nCollecting entrypoints>=0.2.2\n Using cached entrypoints-0.3-py2.py3-none-any.whl (11 kB)\nCollecting defusedxml\n Using cached defusedxml-0.7.1-py2.py3-none-any.whl (25 kB)\nCollecting traitlets>=4.2\n Using cached traitlets-4.3.3-py2.py3-none-any.whl (75 kB)\nCollecting bleach\n Using cached bleach-3.3.0-py2.py3-none-any.whl (283 kB)\nCollecting nbclient<0.6.0,>=0.5.0\n Using cached nbclient-0.5.3-py3-none-any.whl (82 
kB)\nCollecting pygments>=2.4.1\n Downloading Pygments-2.9.0-py3-none-any.whl (1.0 MB)\n\u001b[K |████████████████████████████████| 1.0 MB 11.8 MB/s eta 0:00:01 |████████▌ | 266 kB 11.8 MB/s eta 0:00:01\n\u001b[?25hCollecting MarkupSafe>=0.23\n Using cached MarkupSafe-1.1.1-cp36-cp36m-manylinux2010_x86_64.whl (32 kB)\nCollecting jupyter-client>=6.1.5\n Using cached jupyter_client-6.1.12-py3-none-any.whl (112 kB)\nCollecting async-generator\n Using cached async_generator-1.10-py3-none-any.whl (18 kB)\nCollecting nest-asyncio\n Using cached nest_asyncio-1.5.1-py3-none-any.whl (5.0 kB)\nCollecting tornado>=4.1\n Using cached tornado-6.1-cp36-cp36m-manylinux2010_x86_64.whl (427 kB)\nCollecting python-dateutil>=2.1\n Using cached python_dateutil-2.8.1-py2.py3-none-any.whl (227 kB)\nCollecting pyzmq>=13\n Using cached pyzmq-22.0.3-cp36-cp36m-manylinux1_x86_64.whl (1.1 MB)\nCollecting ipython-genutils\n Using cached ipython_genutils-0.2.0-py2.py3-none-any.whl (26 kB)\nCollecting jsonschema!=2.5.0,>=2.4\n Using cached jsonschema-3.2.0-py2.py3-none-any.whl (56 kB)\nCollecting pyrsistent>=0.14.0\n Using cached pyrsistent-0.17.3-cp36-cp36m-linux_x86_64.whl\nCollecting importlib-metadata\n Downloading importlib_metadata-4.0.1-py3-none-any.whl (16 kB)\nCollecting attrs>=17.4.0\n Downloading attrs-21.2.0-py2.py3-none-any.whl (53 kB)\n\u001b[K |████████████████████████████████| 53 kB 2.0 MB/s eta 0:00:011\n\u001b[?25hCollecting setuptools\n Downloading setuptools-56.2.0-py3-none-any.whl (785 kB)\n\u001b[K |████████████████████████████████| 785 kB 5.5 MB/s eta 0:00:01 |███████████████████████████████▎| 768 kB 5.5 MB/s eta 0:00:01\n\u001b[?25hCollecting six>=1.11.0\n Downloading six-1.16.0-py2.py3-none-any.whl (11 kB)\nCollecting decorator\n Downloading decorator-5.0.7-py3-none-any.whl (8.8 kB)\nCollecting packaging\n Using cached packaging-20.9-py2.py3-none-any.whl (40 kB)\nCollecting webencodings\n Using cached webencodings-0.5.1-py2.py3-none-any.whl (11 kB)\nCollecting cached-property\n Downloading cached_property-1.5.2-py2.py3-none-any.whl (7.6 kB)\nCollecting typing-extensions>=3.6.4\n Downloading typing_extensions-3.10.0.0-py3-none-any.whl (26 kB)\nCollecting zipp>=0.5\n Downloading zipp-3.4.1-py3-none-any.whl (5.2 kB)\nCollecting pyparsing>=2.0.2\n Using cached pyparsing-2.4.7-py2.py3-none-any.whl (67 kB)\nBuilding wheels for collected packages: psutil, py-cpuinfo\n Building wheel for psutil (setup.py) ... \u001b[?25ldone\n\u001b[?25h Created wheel for psutil: filename=psutil-5.7.3-cp36-cp36m-linux_x86_64.whl size=288610 sha256=add7bf93ebb9ecbd8650a6cf9469361f154e3e577a660725a014c35bae9e2b35\n Stored in directory: /root/.cache/pip/wheels/fa/ad/67/90bbaacdcfe970960dd5158397f23a6579b51d853720d7856d\n Building wheel for py-cpuinfo (setup.py) ... 
\u001b[?25ldone\n\u001b[?25h Created wheel for py-cpuinfo: filename=py_cpuinfo-7.0.0-py3-none-any.whl size=20299 sha256=b2ec8e860f6c76a428e7e43a1be32903a0b2061998a1606cd0dd1d40219c59a1\n Stored in directory: /root/.cache/pip/wheels/46/6d/cc/73a126dc2e09fe56fcec0a7386d255762611fbed1c86d3bbcc\nSuccessfully built psutil py-cpuinfo\nInstalling collected packages: zipp, typing-extensions, six, ipython-genutils, decorator, traitlets, setuptools, pyrsistent, importlib-metadata, attrs, tornado, pyzmq, python-dateutil, pyparsing, jupyter-core, jsonschema, webencodings, pygments, packaging, numpy, nest-asyncio, nbformat, MarkupSafe, jupyter-client, cached-property, async-generator, testpath, scipy, pyyaml, pandocfilters, nbclient, mistune, jupyterlab-pygments, jinja2, h5py, entrypoints, defusedxml, bleach, py-cpuinfo, psutil, nbconvert, keras, ait-sdk\n Attempting uninstall: zipp\n Found existing installation: zipp 3.4.0\n Uninstalling zipp-3.4.0:\n Successfully uninstalled zipp-3.4.0\n Attempting uninstall: six\n Found existing installation: six 1.15.0\n Uninstalling six-1.15.0:\n Successfully uninstalled six-1.15.0\n Attempting uninstall: ipython-genutils\n Found existing installation: ipython-genutils 0.2.0\n Uninstalling ipython-genutils-0.2.0:\n Successfully uninstalled ipython-genutils-0.2.0\n Attempting uninstall: decorator\n Found existing installation: decorator 4.4.2\n Uninstalling decorator-4.4.2:\n Successfully uninstalled decorator-4.4.2\n Attempting uninstall: traitlets\n Found existing installation: traitlets 4.3.3\n Uninstalling traitlets-4.3.3:\n Successfully uninstalled traitlets-4.3.3\n Attempting uninstall: setuptools\n Found existing installation: setuptools 54.1.2\n Uninstalling setuptools-54.1.2:\n Successfully uninstalled setuptools-54.1.2\n Attempting uninstall: pyrsistent\n Found existing installation: pyrsistent 0.17.3\n Uninstalling pyrsistent-0.17.3:\n Successfully uninstalled pyrsistent-0.17.3\n Attempting uninstall: importlib-metadata\n Found existing installation: importlib-metadata 3.1.1\n Uninstalling importlib-metadata-3.1.1:\n Successfully uninstalled importlib-metadata-3.1.1\n Attempting uninstall: attrs\n Found existing installation: attrs 20.3.0\n Uninstalling attrs-20.3.0:\n Successfully uninstalled attrs-20.3.0\n Attempting uninstall: tornado\n Found existing installation: tornado 6.1\n Uninstalling tornado-6.1:\n Successfully uninstalled tornado-6.1\n Attempting uninstall: pyzmq\n Found existing installation: pyzmq 22.0.3\n Uninstalling pyzmq-22.0.3:\n Successfully uninstalled pyzmq-22.0.3\n Attempting uninstall: python-dateutil\n Found existing installation: python-dateutil 2.8.1\n Uninstalling python-dateutil-2.8.1:\n Successfully uninstalled python-dateutil-2.8.1\n Attempting uninstall: pyparsing\n Found existing installation: pyparsing 2.4.7\n Uninstalling pyparsing-2.4.7:\n Successfully uninstalled pyparsing-2.4.7\n Attempting uninstall: jupyter-core\n Found existing installation: jupyter-core 4.7.1\n Uninstalling jupyter-core-4.7.1:\n Successfully uninstalled jupyter-core-4.7.1\n Attempting uninstall: jsonschema\n Found existing installation: jsonschema 3.2.0\n Uninstalling jsonschema-3.2.0:\n Successfully uninstalled jsonschema-3.2.0\n Attempting uninstall: webencodings\n Found existing installation: webencodings 0.5.1\n Uninstalling webencodings-0.5.1:\n Successfully uninstalled webencodings-0.5.1\n Attempting uninstall: pygments\n Found existing installation: Pygments 2.8.1\n Uninstalling Pygments-2.8.1:\n Successfully uninstalled Pygments-2.8.1\n 
Attempting uninstall: packaging\n Found existing installation: packaging 20.9\n Uninstalling packaging-20.9:\n Successfully uninstalled packaging-20.9\n Attempting uninstall: numpy\n Found existing installation: numpy 1.18.5\n Uninstalling numpy-1.18.5:\n Successfully uninstalled numpy-1.18.5\n Attempting uninstall: nest-asyncio\n Found existing installation: nest-asyncio 1.5.1\n Uninstalling nest-asyncio-1.5.1:\n Successfully uninstalled nest-asyncio-1.5.1\n Attempting uninstall: nbformat\n Found existing installation: nbformat 5.1.2\n Uninstalling nbformat-5.1.2:\n Successfully uninstalled nbformat-5.1.2\n Attempting uninstall: MarkupSafe\n Found existing installation: MarkupSafe 1.1.1\n Uninstalling MarkupSafe-1.1.1:\n Successfully uninstalled MarkupSafe-1.1.1\n Attempting uninstall: jupyter-client\n Found existing installation: jupyter-client 6.1.12\n Uninstalling jupyter-client-6.1.12:\n Successfully uninstalled jupyter-client-6.1.12\n Attempting uninstall: async-generator\n Found existing installation: async-generator 1.10\n Uninstalling async-generator-1.10:\n Successfully uninstalled async-generator-1.10\n Attempting uninstall: testpath\n Found existing installation: testpath 0.4.4\n Uninstalling testpath-0.4.4:\n Successfully uninstalled testpath-0.4.4\n Attempting uninstall: scipy\n Found existing installation: scipy 1.4.1\n Uninstalling scipy-1.4.1:\n Successfully uninstalled scipy-1.4.1\n Attempting uninstall: pandocfilters\n Found existing installation: pandocfilters 1.4.3\n Uninstalling pandocfilters-1.4.3:\n Successfully uninstalled pandocfilters-1.4.3\n Attempting uninstall: nbclient\n Found existing installation: nbclient 0.5.3\n Uninstalling nbclient-0.5.3:\n Successfully uninstalled nbclient-0.5.3\n Attempting uninstall: mistune\n Found existing installation: mistune 0.8.4\n Uninstalling mistune-0.8.4:\n Successfully uninstalled mistune-0.8.4\n Attempting uninstall: jupyterlab-pygments\n Found existing installation: jupyterlab-pygments 0.1.2\n Uninstalling jupyterlab-pygments-0.1.2:\n Successfully uninstalled jupyterlab-pygments-0.1.2\n Attempting uninstall: jinja2\n Found existing installation: Jinja2 2.11.3\n Uninstalling Jinja2-2.11.3:\n Successfully uninstalled Jinja2-2.11.3\n Attempting uninstall: h5py\n Found existing installation: h5py 2.10.0\n Uninstalling h5py-2.10.0:\n Successfully uninstalled h5py-2.10.0\n Attempting uninstall: entrypoints\n Found existing installation: entrypoints 0.3\n Uninstalling entrypoints-0.3:\n Successfully uninstalled entrypoints-0.3\n Attempting uninstall: defusedxml\n Found existing installation: defusedxml 0.7.1\n Uninstalling defusedxml-0.7.1:\n Successfully uninstalled defusedxml-0.7.1\n Attempting uninstall: bleach\n Found existing installation: bleach 3.3.0\n Uninstalling bleach-3.3.0:\n Successfully uninstalled bleach-3.3.0\n Attempting uninstall: nbconvert\n Found existing installation: nbconvert 6.0.7\n Uninstalling nbconvert-6.0.7:\n Successfully uninstalled nbconvert-6.0.7\n\u001b[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. 
This behaviour is the source of the following dependency conflicts.\ntensorflow 2.3.0 requires h5py<2.11.0,>=2.10.0, but you have h5py 3.1.0 which is incompatible.\ntensorflow 2.3.0 requires numpy<1.19.0,>=1.16.0, but you have numpy 1.19.3 which is incompatible.\ntensorflow 2.3.0 requires scipy==1.4.1, but you have scipy 1.5.4 which is incompatible.\u001b[0m\nSuccessfully installed MarkupSafe-1.1.1 ait-sdk-0.1.7 async-generator-1.10 attrs-21.2.0 bleach-3.3.0 cached-property-1.5.2 decorator-5.0.7 defusedxml-0.7.1 entrypoints-0.3 h5py-3.1.0 importlib-metadata-4.0.1 ipython-genutils-0.2.0 jinja2-2.11.3 jsonschema-3.2.0 jupyter-client-6.1.12 jupyter-core-4.7.1 jupyterlab-pygments-0.1.2 keras-2.4.3 mistune-0.8.4 nbclient-0.5.3 nbconvert-6.0.7 nbformat-5.0.8 nest-asyncio-1.5.1 numpy-1.19.3 packaging-20.9 pandocfilters-1.4.3 psutil-5.7.3 py-cpuinfo-7.0.0 pygments-2.9.0 pyparsing-2.4.7 pyrsistent-0.17.3 python-dateutil-2.8.1 pyyaml-5.4.1 pyzmq-22.0.3 scipy-1.5.4 setuptools-56.2.0 six-1.16.0 testpath-0.4.4 tornado-6.1 traitlets-4.3.3 typing-extensions-3.10.0.0 webencodings-0.5.1 zipp-3.4.1\n\u001b[33mWARNING: Running pip as root will break packages and permissions. You should install packages reliably by using venv: https://pip.pypa.io/warnings/venv\u001b[0m\n" ], [ "from pathlib import Path\nimport pprint\nfrom ait_sdk.test.hepler import Helper\nimport json", "_____no_output_____" ], [ "# settings cell\n\n# mounted dir\nroot_dir = Path('/workdir/root/ait')\n\nait_name='eval_metamorphic_test_tf1.13'\nait_version='0.1'\n\nait_full_name=f'{ait_name}_{ait_version}'\nait_dir = root_dir / ait_full_name\n\ntd_name=f'{ait_name}_test'\n\n# (dockerホスト側の)インベントリ登録用アセット格納ルートフォルダ\ncurrent_dir = %pwd\nwith open(f'{current_dir}/config.json', encoding='utf-8') as f:\n json_ = json.load(f)\n root_dir = json_['host_ait_root_dir']\n is_container = json_['is_container']\ninvenotory_root_dir = f'{root_dir}\\\\ait\\\\{ait_full_name}\\\\local_qai\\\\inventory'\n\n# entry point address\n# コンテナ起動かどうかでポート番号が変わるため、切り替える\nif is_container:\n backend_entry_point = 'http://host.docker.internal:8888/qai-testbed/api/0.0.1'\n ip_entry_point = 'http://host.docker.internal:8888/qai-ip/api/0.0.1'\nelse:\n backend_entry_point = 'http://host.docker.internal:5000/qai-testbed/api/0.0.1'\n ip_entry_point = 'http://host.docker.internal:6000/qai-ip/api/0.0.1'\n\n# aitのデプロイフラグ\n# 一度実施すれば、それ以降は実施しなくてOK\nis_init_ait = True\n\n# インベントリの登録フラグ\n# 一度実施すれば、それ以降は実施しなくてOK\nis_init_inventory = True\n", "_____no_output_____" ], [ "helper = Helper(backend_entry_point=backend_entry_point, \n ip_entry_point=ip_entry_point,\n ait_dir=ait_dir,\n ait_full_name=ait_full_name)", "_____no_output_____" ], [ "# health check\n\nhelper.get_bk('/health-check')\nhelper.get_ip('/health-check')", "<Response [200]>\n{'Code': 0, 'Message': 'alive.'}\n<Response [200]>\n{'Code': 0, 'Message': 'alive.'}\n" ], [ "# create ml-component\nres = helper.post_ml_component(name=f'MLComponent_{ait_full_name}', description=f'Description of {ait_full_name}', problem_domain=f'ProbremDomain of {ait_full_name}')\nhelper.set_ml_component_id(res['MLComponentId'])", "<Response [200]>\n{'MLComponentId': 13,\n 'Result': {'Code': 'P22000', 'Message': 'add ml-component success.'}}\n" ], [ "# deploy AIT\nif is_init_ait:\n helper.deploy_ait_non_build()\nelse:\n print('skip deploy AIT')", "<Response [400]>\n{'Code': 'T54000',\n 'Message': 'already exist ait = eval_metamorphic_test_tf1.13-0.1'}\n<Response [200]>\n{'Code': 'D00001', 'Message': 'Deploy success'}\n" ], [ "res = 
helper.get_data_types()\nmodel_data_type_id = [d for d in res['DataTypes'] if d['Name'] == 'model'][0]['Id']\ndataset_data_type_id = [d for d in res['DataTypes'] if d['Name'] == 'dataset'][0]['Id']\nres = helper.get_file_systems()\nunix_file_system_id = [f for f in res['FileSystems'] if f['Name'] == 'UNIX_FILE_SYSTEM'][0]['Id']\nwindows_file_system_id = [f for f in res['FileSystems'] if f['Name'] == 'WINDOWS_FILE'][0]['Id']", "_____no_output_____" ], [ "# add inventories\n\nif is_init_inventory:\n inv1_name = helper.post_inventory('train_image', dataset_data_type_id, windows_file_system_id, \n f'{invenotory_root_dir}\\\\mnist_dataset\\\\mnist_dataset.zip',\n 'MNIST_dataset are train image, train label, test image, test label', ['zip'])\n inv2_name = helper.post_inventory('mnist_model', dataset_data_type_id, windows_file_system_id, \n f'{invenotory_root_dir}\\\\mnist_model\\\\model_mnist.zip',\n 'MNIST_model', ['zip'])\n\nelse:\n print('skip add inventories')", "<Response [200]>\n{'result': {'Code': 'I22000', 'Message': 'append Inventory success.'}}\n<Response [200]>\n{'result': {'Code': 'I22000', 'Message': 'append Inventory success.'}}\n" ], [ "# get ait_json and inventory_jsons\n\nres_json = helper.get_bk('/QualityMeasurements/RelationalOperators', is_print_json=False).json()\neq_id = int([r['Id'] for r in res_json['RelationalOperator'] if r['Expression'] == '=='][0])\nnq_id = int([r['Id'] for r in res_json['RelationalOperator'] if r['Expression'] == '!='][0])\ngt_id = int([r['Id'] for r in res_json['RelationalOperator'] if r['Expression'] == '>'][0])\nge_id = int([r['Id'] for r in res_json['RelationalOperator'] if r['Expression'] == '>='][0])\nlt_id = int([r['Id'] for r in res_json['RelationalOperator'] if r['Expression'] == '<'][0])\nle_id = int([r['Id'] for r in res_json['RelationalOperator'] if r['Expression'] == '<='][0])\n\nres_json = helper.get_bk('/testRunners', is_print_json=False).json()\nait_json = [j for j in res_json['TestRunners'] if j['Name'] == ait_name][-1]\n\ninv_1_json = helper.get_inventory(inv1_name)\ninv_2_json = helper.get_inventory(inv2_name)", "<Response [200]>\n<Response [200]>\n<Response [200]>\n<Response [200]>\n" ], [ "# add teast_descriptions\n\nhelper.post_td(td_name, ait_json['QualityDimensionId'],\n quality_measurements=[\n {\"Id\":ait_json['Report']['Measures'][0]['Id'], \"Value\":\"0.25\", \"RelationalOperatorId\":lt_id, \"Enable\":True}\n ],\n target_inventories=[\n {\"Id\":1, \"InventoryId\": inv_1_json['Id'], \"TemplateInventoryId\": ait_json['TargetInventories'][0]['Id']},\n {\"Id\":2, \"InventoryId\": inv_2_json['Id'], \"TemplateInventoryId\": ait_json['TargetInventories'][1]['Id']}\n ],\n test_runner={\n \"Id\":ait_json['Id'],\n \"Params\":[\n {\"TestRunnerParamTemplateId\":ait_json['ParamTemplates'][0]['Id'], \"Value\":\"10\"},\n {\"TestRunnerParamTemplateId\":ait_json['ParamTemplates'][1]['Id'], \"Value\":\"500\"},\n {\"TestRunnerParamTemplateId\":ait_json['ParamTemplates'][2]['Id'], \"Value\":\"train\"}\n ]\n })", "<Response [200]>\n{'Result': {'Code': 'T22000', 'Message': 'append test description success.'}}\n" ], [ "# get test_description_jsons\ntd_1_json = helper.get_td(td_name)", "<Response [200]>\n" ], [ "# run test_descriptions\nhelper.post_run_and_wait(td_1_json['Id'])", "<Response [200]>\n{'Job': {'Id': '13', 'StartDateTime': '2021-05-10 14:07:31.784737+09:00'},\n 'Result': {'Code': 'R12000', 'Message': 'job launch success.'}}\n[{'Id': 13,\n 'Result': 'OK',\n 'ResultDetail': 'average : OK.\\n',\n 'Status': 'DONE',\n 'TestDescriptionID': 
14}]\n" ], [ "res_json = helper.get_td_detail(td_1_json['Id'])\npprint.pprint(res_json)", "<Response [200]>\n{'Result': {'Code': 'T32000', 'Message': 'get detail success.'},\n 'TestDescriptionDetail': {'Id': 14,\n 'Name': 'eval_metamorphic_test_tf1.13_test',\n 'Opinion': '',\n 'QualityDimension': {'Id': 6,\n 'Name': 'Robustness_of_trained_model'},\n 'QualityMeasurements': [{'Description': 'Average '\n 'number of '\n 'NG output',\n 'Enable': True,\n 'Id': 31,\n 'Name': 'average',\n 'RelationalOperatorId': 4,\n 'Structure': 'single',\n 'Value': '0.25'}],\n 'Star': False,\n 'TargetInventories': [{'DataType': {'Id': 1,\n 'Name': 'dataset'},\n 'Description': 'MNIST_dataset '\n 'are train '\n 'image, train '\n 'label, test '\n 'image, test '\n 'label',\n 'Id': 29,\n 'Name': 'eval_metamorphic_test_tf1.13_0.1_train_image',\n 'TemplateInventoryId': 18},\n {'DataType': {'Id': 1,\n 'Name': 'dataset'},\n 'Description': 'MNIST_model',\n 'Id': 30,\n 'Name': 'eval_metamorphic_test_tf1.13_0.1_mnist_model',\n 'TemplateInventoryId': 19}],\n 'TestDescriptionResult': {'Detail': 'average : '\n 'OK.\\n',\n 'Downloads': [{'Description': 'deep_saucer_log',\n 'DownloadURL': 'http://127.0.0.1:8888/qai-testbed/api/0.0.1/download/466',\n 'FileName': 'deep.log',\n 'Id': 31,\n 'Name': 'DeepLog'},\n {'Description': 'AIT_log',\n 'DownloadURL': 'http://127.0.0.1:8888/qai-testbed/api/0.0.1/download/467',\n 'FileName': 'ait.log',\n 'Id': 32,\n 'Name': 'Log'}],\n 'Graphs': [{'Description': 'number '\n 'of '\n 'NG '\n 'output',\n 'FileName': 'result.csv',\n 'Graph': 'http://127.0.0.1:8888/qai-testbed/api/0.0.1/download/465',\n 'GraphType': 'table',\n 'Id': 413,\n 'Name': 'result',\n 'ReportIndex': 1,\n 'ReportName': 'result',\n 'ReportRequired': True}],\n 'LogFile': 'http://127.0.0.1:8888/qai-testbed/api/0.0.1/download/468',\n 'Summary': 'OK'},\n 'TestRunner': {'Author': 'AIST',\n 'Description': 'Metamorphic test.\\n'\n 'Make sure can be '\n 'classified in the '\n 'same result as the '\n 'original class be '\n 'added a little '\n 'processing to the '\n 'original data.',\n 'Email': '',\n 'Id': 9,\n 'LandingPage': '',\n 'Name': 'eval_metamorphic_test_tf1.13',\n 'Params': [{'Id': 48,\n 'Name': 'Lap',\n 'TestRunnerParamTemplateId': 36,\n 'Value': '10'},\n {'Id': 49,\n 'Name': 'NumTest',\n 'TestRunnerParamTemplateId': 37,\n 'Value': '500'},\n {'Id': 50,\n 'Name': 'mnist_type',\n 'TestRunnerParamTemplateId': 38,\n 'Value': 'train'}],\n 'Quality': 'https://airc.aist.go.jp/aiqm/quality/internal/Robustness_of_trained_model',\n 'Version': '0.1'}}}\n" ], [ "# generate report\nres = helper.post_report(td_1_json['Id'])", "<Response [200]>\n{'OutParams': {'ReportUrl': 'http://127.0.0.1:8888/qai-testbed/api/0.0.1/download/469'},\n 'Result': {'Code': 'D12000', 'Message': 'command invoke success.'}}\n" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
d01a5e576af2a8268b31b0c0ea08d2b7d501f03d
401,689
ipynb
Jupyter Notebook
notebooks/quick_start.ipynb
timgates42/prophet
20f590b7263b540eb5e7a116e03360066c58de4d
[ "MIT" ]
2
2020-11-13T16:48:44.000Z
2021-01-18T13:53:16.000Z
notebooks/quick_start.ipynb
timgates42/prophet
20f590b7263b540eb5e7a116e03360066c58de4d
[ "MIT" ]
2
2021-09-28T05:36:42.000Z
2022-02-26T10:01:12.000Z
notebooks/quick_start.ipynb
timgates42/prophet
20f590b7263b540eb5e7a116e03360066c58de4d
[ "MIT" ]
1
2021-06-08T07:27:52.000Z
2021-06-08T07:27:52.000Z
616.087423
137,892
0.940038
[ [ [ "%load_ext rpy2.ipython\n%matplotlib inline\nimport logging\nlogging.getLogger('fbprophet').setLevel(logging.ERROR)\nimport warnings\nwarnings.filterwarnings(\"ignore\")", "_____no_output_____" ] ], [ [ "## Python API\n\nProphet follows the `sklearn` model API. We create an instance of the `Prophet` class and then call its `fit` and `predict` methods. ", "_____no_output_____" ], [ "The input to Prophet is always a dataframe with two columns: `ds` and `y`. The `ds` (datestamp) column should be of a format expected by Pandas, ideally YYYY-MM-DD for a date or YYYY-MM-DD HH:MM:SS for a timestamp. The `y` column must be numeric, and represents the measurement we wish to forecast.\n\nAs an example, let's look at a time series of the log daily page views for the Wikipedia page for [Peyton Manning](https://en.wikipedia.org/wiki/Peyton_Manning). We scraped this data using the [Wikipediatrend](https://cran.r-project.org/package=wikipediatrend) package in R. Peyton Manning provides a nice example because it illustrates some of Prophet's features, like multiple seasonality, changing growth rates, and the ability to model special days (such as Manning's playoff and superbowl appearances). The CSV is available [here](https://github.com/facebook/prophet/blob/master/examples/example_wp_log_peyton_manning.csv).\n\nFirst we'll import the data:", "_____no_output_____" ] ], [ [ "import pandas as pd\nfrom fbprophet import Prophet", "_____no_output_____" ], [ "df = pd.read_csv('../examples/example_wp_log_peyton_manning.csv')\ndf.head()", "_____no_output_____" ] ], [ [ "We fit the model by instantiating a new `Prophet` object. Any settings to the forecasting procedure are passed into the constructor. Then you call its `fit` method and pass in the historical dataframe. Fitting should take 1-5 seconds.", "_____no_output_____" ] ], [ [ "m = Prophet()\nm.fit(df)", "INFO:fbprophet:Disabling daily seasonality. Run prophet with daily_seasonality=True to override this.\n" ] ], [ [ "Predictions are then made on a dataframe with a column `ds` containing the dates for which a prediction is to be made. You can get a suitable dataframe that extends into the future a specified number of days using the helper method `Prophet.make_future_dataframe`. By default it will also include the dates from the history, so we will see the model fit as well. ", "_____no_output_____" ] ], [ [ "future = m.make_future_dataframe(periods=365)\nfuture.tail()", "_____no_output_____" ] ], [ [ "The `predict` method will assign each row in `future` a predicted value which it names `yhat`. If you pass in historical dates, it will provide an in-sample fit. The `forecast` object here is a new dataframe that includes a column `yhat` with the forecast, as well as columns for components and uncertainty intervals.", "_____no_output_____" ] ], [ [ "forecast = m.predict(future)\nforecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail()", "_____no_output_____" ] ], [ [ "You can plot the forecast by calling the `Prophet.plot` method and passing in your forecast dataframe.", "_____no_output_____" ] ], [ [ "fig1 = m.plot(forecast)", "_____no_output_____" ] ], [ [ "If you want to see the forecast components, you can use the `Prophet.plot_components` method. By default you'll see the trend, yearly seasonality, and weekly seasonality of the time series. 
If you include holidays, you'll see those here, too.", "_____no_output_____" ] ], [ [ "fig2 = m.plot_components(forecast)", "_____no_output_____" ] ], [ [ "An interactive figure of the forecast and components can be created with plotly. You will need to install plotly 4.0 or above separately, as it will not by default be installed with fbprophet. You will also need to install the `notebook` and `ipywidgets` packages.", "_____no_output_____" ] ], [ [ "from fbprophet.plot import plot_plotly, plot_components_plotly\n\nplot_plotly(m, forecast)", "_____no_output_____" ], [ "plot_components_plotly(m, forecast)", "_____no_output_____" ] ], [ [ "More details about the options available for each method are available in the docstrings, for example, via `help(Prophet)` or `help(Prophet.fit)`. The [R reference manual](https://cran.r-project.org/web/packages/prophet/prophet.pdf) on CRAN provides a concise list of all of the available functions, each of which has a Python equivalent.", "_____no_output_____" ], [ "## R API\n\nIn R, we use the normal model fitting API. We provide a `prophet` function that performs fitting and returns a model object. You can then call `predict` and `plot` on this model object.", "_____no_output_____" ] ], [ [ "%%R\nlibrary(prophet)", "_____no_output_____" ] ], [ [ "First we read in the data and create the outcome variable. As in the Python API, this is a dataframe with columns `ds` and `y`, containing the date and numeric value respectively. The ds column should be YYYY-MM-DD for a date, or YYYY-MM-DD HH:MM:SS for a timestamp. As above, we use here the log number of views to Peyton Manning's Wikipedia page, available [here](https://github.com/facebook/prophet/blob/master/examples/example_wp_log_peyton_manning.csv).", "_____no_output_____" ] ], [ [ "%%R\ndf <- read.csv('../examples/example_wp_log_peyton_manning.csv')", "_____no_output_____" ] ], [ [ "We call the `prophet` function to fit the model. The first argument is the historical dataframe. Additional arguments control how Prophet fits the data and are described in later pages of this documentation.", "_____no_output_____" ] ], [ [ "%%R\nm <- prophet(df)", "_____no_output_____" ] ], [ [ "Predictions are made on a dataframe with a column `ds` containing the dates for which predictions are to be made. The `make_future_dataframe` function takes the model object and a number of periods to forecast and produces a suitable dataframe. By default it will also include the historical dates so we can evaluate in-sample fit.", "_____no_output_____" ] ], [ [ "%%R\nfuture <- make_future_dataframe(m, periods = 365)\ntail(future)", "_____no_output_____" ] ], [ [ "As with most modeling procedures in R, we use the generic `predict` function to get our forecast. The `forecast` object is a dataframe with a column `yhat` containing the forecast. 
It has additional columns for uncertainty intervals and seasonal components.", "_____no_output_____" ] ], [ [ "%%R\nforecast <- predict(m, future)\ntail(forecast[c('ds', 'yhat', 'yhat_lower', 'yhat_upper')])", "_____no_output_____" ] ], [ [ "You can use the generic `plot` function to plot the forecast, by passing in the model and the forecast dataframe.", "_____no_output_____" ] ], [ [ "%%R -w 10 -h 6 -u in\nplot(m, forecast)", "_____no_output_____" ] ], [ [ "You can use the `prophet_plot_components` function to see the forecast broken down into trend, weekly seasonality, and yearly seasonality.", "_____no_output_____" ] ], [ [ "%%R -w 9 -h 9 -u in\nprophet_plot_components(m, forecast)", "_____no_output_____" ] ], [ [ "An interactive plot of the forecast using Dygraphs can be made with the command `dyplot.prophet(m, forecast)`.\n\nMore details about the options available for each method are available in the docstrings, for example, via `?prophet` or `?fit.prophet`. This documentation is also available in the [reference manual](https://cran.r-project.org/web/packages/prophet/prophet.pdf) on CRAN.", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
d01a731eaa8fde6afb22cb2ef525ad2d7380c5a5
24,256
ipynb
Jupyter Notebook
example-notebooks/immutable-revival.ipynb
yutiansut/nteract
561072a381c3e131b7933d0a27b3b1ebebddd5d1
[ "BSD-3-Clause" ]
1
2017-09-07T00:48:06.000Z
2017-09-07T00:48:06.000Z
example-notebooks/immutable-revival.ipynb
yutiansut/nteract
561072a381c3e131b7933d0a27b3b1ebebddd5d1
[ "BSD-3-Clause" ]
null
null
null
example-notebooks/immutable-revival.ipynb
yutiansut/nteract
561072a381c3e131b7933d0a27b3b1ebebddd5d1
[ "BSD-3-Clause" ]
null
null
null
33.272977
692
0.513564
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
d01a7b0feb0c1f8337522bcb9eb25da910a9a28d
61,668
ipynb
Jupyter Notebook
k-mean.ipynb
pawel-krawczyk/machine_learning_basic
d77f6c8294ff99f04cee1590e2669664eecb93d0
[ "MIT" ]
1
2020-03-10T13:55:09.000Z
2020-03-10T13:55:09.000Z
k-mean.ipynb
pawel-krawczyk/machine_learning_basic
d77f6c8294ff99f04cee1590e2669664eecb93d0
[ "MIT" ]
null
null
null
k-mean.ipynb
pawel-krawczyk/machine_learning_basic
d77f6c8294ff99f04cee1590e2669664eecb93d0
[ "MIT" ]
null
null
null
61,668
61,668
0.857154
[ [ [ "#import libraries\n#data management\nimport pandas as pd\n\n#ML\nfrom sklearn.cluster import KMeans\nfrom sklearn.preprocessing import MinMaxScaler\n\n#visualisation\nfrom matplotlib import pyplot as plt\n%matplotlib inline", "_____no_output_____" ], [ "#Import data\ndf = pd.read_csv(\"https://raw.githubusercontent.com/codebasics/py/master/ML/13_kmeans/income.csv\")\ndf.head()", "_____no_output_____" ], [ "#crate scatter plot with Age on x axis and Income on y axis\nplt.scatter(df.Age,df['Income($)'])\n#add labels\nplt.xlabel('Age')\nplt.ylabel('Income($)')", "_____no_output_____" ], [ "#create k-means clustering object\nkm = KMeans(n_clusters=3)\n\n#train the model and predict values\ny_predicted = km.fit_predict(df[['Age','Income($)']])\ny_predicted", "_____no_output_____" ], [ "#add the values to the dataframe\ndf['cluster']=y_predicted\ndf.head()", "_____no_output_____" ], [ "#visualise the data\ndf1 = df[df.cluster==0]\ndf2 = df[df.cluster==1]\ndf3 = df[df.cluster==2]\nplt.scatter(df1.Age,df1['Income($)'],color='green')\nplt.scatter(df2.Age,df2['Income($)'],color='red')\nplt.scatter(df3.Age,df3['Income($)'],color='black')\nplt.scatter(km.cluster_centers_[:,0],km.cluster_centers_[:,1],color='purple',marker='*',label='centroid')\nplt.xlabel('Age')\nplt.ylabel('Income ($)')\nplt.legend()", "_____no_output_____" ], [ "#scale the data\nscaler = MinMaxScaler()\n\nscaler.fit(df[['Income($)']])\ndf['Income($)'] = scaler.transform(df[['Income($)']])\n\nscaler.fit(df[['Age']])\ndf['Age'] = scaler.transform(df[['Age']])\ndf.head()", "_____no_output_____" ], [ "#train again\nkm = KMeans(n_clusters=3)\ny_predicted = km.fit_predict(df[['Age','Income($)']])\ndf['cluster']=y_predicted\ndf.head()", "_____no_output_____" ], [ "df1 = df[df.cluster==0]\ndf2 = df[df.cluster==1]\ndf3 = df[df.cluster==2]\nplt.scatter(df1.Age,df1['Income($)'],color='green')\nplt.scatter(df2.Age,df2['Income($)'],color='red')\nplt.scatter(df3.Age,df3['Income($)'],color='black')\nplt.scatter(km.cluster_centers_[:,0],km.cluster_centers_[:,1],\n color='purple',marker='*',label='centroid')\nplt.legend()", "_____no_output_____" ] ], [ [ "### BONUS: Elbow plot - finding elbow in order to decide about the number of clusters", "_____no_output_____" ] ], [ [ "#find errir for 1-10 clusters\nsse = []\nk_rng = range(1,10)\nfor k in k_rng:\n km = KMeans(n_clusters=k)\n km.fit(df[['Age','Income($)']])\n sse.append(km.inertia_)", "_____no_output_____" ], [ "#plot errors and find \"elbow\"\nplt.xlabel('K')\nplt.ylabel('Sum of squared error')\nplt.plot(k_rng,sse)", "_____no_output_____" ], [ "", "Cloning into 'machine_learning_basic'...\nremote: Enumerating objects: 12, done.\u001b[K\nremote: Counting objects: 100% (12/12), done.\u001b[K\nremote: Compressing objects: 100% (11/11), done.\u001b[K\nremote: Total 12 (delta 2), reused 4 (delta 1), pack-reused 0\u001b[K\nUnpacking objects: 100% (12/12), done.\n" ], [ "", "_____no_output_____" ] ] ]
[ "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
d01a8cd09762764c8d33882c66c5021c9d4725d7
27,481
ipynb
Jupyter Notebook
.ipynb_checkpoints/Corona-checkpoint.ipynb
ayushman17/COVID-19-Detector
940b2f4ade2cde98f35b634e8861f9d5557c223b
[ "MIT" ]
2
2020-05-14T22:18:26.000Z
2020-05-20T13:04:35.000Z
.ipynb_checkpoints/Corona-checkpoint.ipynb
ayushman17/COVID-19-Detector
940b2f4ade2cde98f35b634e8861f9d5557c223b
[ "MIT" ]
null
null
null
.ipynb_checkpoints/Corona-checkpoint.ipynb
ayushman17/COVID-19-Detector
940b2f4ade2cde98f35b634e8861f9d5557c223b
[ "MIT" ]
null
null
null
27.842958
216
0.337943
[ [ [ "import pandas as pd", "_____no_output_____" ] ], [ [ "## Reading Data", "_____no_output_____" ] ], [ [ "df = pd.read_csv('data.csv')", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ], [ "df.tail()", "_____no_output_____" ], [ "df.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 2999 entries, 0 to 2998\nData columns (total 6 columns):\nfever 2999 non-null float64\nbodyPain 2999 non-null int64\nage 2999 non-null int64\nrunnyNose 2999 non-null int64\ndiffBreath 2999 non-null int64\ninfectionProb 2999 non-null int64\ndtypes: float64(1), int64(5)\nmemory usage: 140.7 KB\n" ], [ "df['fever'].value_counts()", "_____no_output_____" ], [ "df['diffBreath'].value_counts()", "_____no_output_____" ], [ "df.describe()", "_____no_output_____" ] ], [ [ "## Train Test Splitting", "_____no_output_____" ] ], [ [ "import numpy as np", "_____no_output_____" ], [ "def data_split(data, ratio):\n np.random.seed(42)\n shuffled = np.random.permutation(len(data))\n test_set_size = int(len(data) * ratio)\n test_indices = shuffled[:test_set_size]\n train_indices = shuffled[test_set_size:]\n return data.iloc[train_indices], data.iloc[test_indices]", "_____no_output_____" ], [ "np.random.permutation(7)", "_____no_output_____" ], [ "train, test = data_split(df, 0.2)", "_____no_output_____" ], [ "train", "_____no_output_____" ], [ "test", "_____no_output_____" ], [ "X_train = train[['fever', 'bodyPain', 'age', 'runnyNose', 'diffBreath']].to_numpy()\nX_test = test[['fever', 'bodyPain', 'age', 'runnyNose', 'diffBreath']].to_numpy()", "_____no_output_____" ], [ "Y_train = train[['infectionProb']].to_numpy().reshape(2400,)\nY_test = test[['infectionProb']].to_numpy().reshape(599,)", "_____no_output_____" ], [ "Y_train", "_____no_output_____" ], [ "from sklearn.linear_model import LogisticRegression", "_____no_output_____" ], [ "clf = LogisticRegression()\nclf.fit(X_train, Y_train)", "C:\\Users\\Ayushman singh\\Anaconda3\\lib\\site-packages\\sklearn\\linear_model\\logistic.py:432: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.\n FutureWarning)\n" ], [ "inputFeatures = [101, 1, 22, -1, 1]\ninfProb =clf.predict_proba([inputFeatures])[0][1]", "_____no_output_____" ], [ "infProb", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
d01a902d71ba16b97e8add1a9fef68b1b90c034a
49,715
ipynb
Jupyter Notebook
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
ccb7edf0cd9d1e77bd951bfaa48d14dc95ce2aca
[ "Apache-2.0" ]
1
2021-07-10T21:57:23.000Z
2021-07-10T21:57:23.000Z
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
ccb7edf0cd9d1e77bd951bfaa48d14dc95ce2aca
[ "Apache-2.0" ]
null
null
null
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
ccb7edf0cd9d1e77bd951bfaa48d14dc95ce2aca
[ "Apache-2.0" ]
1
2021-07-28T19:58:18.000Z
2021-07-28T19:58:18.000Z
47.802885
944
0.637373
[ [ [ "## TensorFlow 2 Complete Project Workflow in Amazon SageMaker\n### Data Preprocessing -> Code Prototyping -> Automatic Model Tuning -> Deployment\n \n1. [Introduction](#Introduction)\n2. [SageMaker Processing for dataset transformation](#SageMakerProcessing)\n3. [Local Mode training](#LocalModeTraining)\n4. [Local Mode endpoint](#LocalModeEndpoint)\n5. [SageMaker hosted training](#SageMakerHostedTraining)\n6. [Automatic Model Tuning](#AutomaticModelTuning)\n7. [SageMaker hosted endpoint](#SageMakerHostedEndpoint)\n8. [Workflow Automation with the Step Functions Data Science SDK](#WorkflowAutomation)\n 1. [Add an IAM policy to your SageMaker role](#IAMPolicy)\n 2. [Create an execution role for Step Functions](#CreateExecutionRole)\n 3. [Set up a TrainingPipeline](#TrainingPipeline)\n 4. [Visualizing the workflow](#VisualizingWorkflow)\n 5. [Creating and executing the pipeline](#CreatingExecutingPipeline)\n 6. [Cleanup](#Cleanup)\n9. [Extensions](#Extensions)\n\n\n### ***Prerequisite: To run the Local Mode sections of this example, use a SageMaker Notebook Instance; otherwise skip those sections (for example if you're using SageMaker Studio instead).***\n\n \n## Introduction <a class=\"anchor\" id=\"Introduction\">\n\nIf you are using TensorFlow 2, you can use the Amazon SageMaker prebuilt TensorFlow 2 container with training scripts similar to those you would use outside SageMaker. This feature is named Script Mode. Using Script Mode and other SageMaker features, you can build a complete workflow for a TensorFlow 2 project. This notebook presents such a workflow, including all key steps such as preprocessing data with SageMaker Processing, code prototyping with SageMaker Local Mode training and inference, and production-ready model training and deployment with SageMaker hosted training and inference. Automatic Model Tuning in SageMaker is used to tune the model's hyperparameters. Additionally, the [AWS Step Functions Data Science SDK](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/readmelink.html) is used to automate the main training and deployment steps for use in a production workflow outside notebooks. \n\nTo enable you to run this notebook within a reasonable time (typically less than an hour), this notebook's use case is a straightforward regression task: predicting house prices based on the well-known Boston Housing dataset. This public dataset contains 13 features regarding housing stock of towns in the Boston area. Features include average number of rooms, accessibility to radial highways, adjacency to the Charles River, etc. \n\nTo begin, we'll import some necessary packages and set up directories for local training and test data. We'll also set up a SageMaker Session to perform various operations, and specify an Amazon S3 bucket to hold input data and output. The default bucket used here is created by SageMaker if it doesn't already exist, and named in accordance with the AWS account ID and AWS Region. 
", "_____no_output_____" ] ], [ [ "import os\nimport sagemaker\nimport tensorflow as tf\n\nsess = sagemaker.Session()\nbucket = sess.default_bucket() \n\ndata_dir = os.path.join(os.getcwd(), 'data')\nos.makedirs(data_dir, exist_ok=True)\n\ntrain_dir = os.path.join(os.getcwd(), 'data/train')\nos.makedirs(train_dir, exist_ok=True)\n\ntest_dir = os.path.join(os.getcwd(), 'data/test')\nos.makedirs(test_dir, exist_ok=True)\n\nraw_dir = os.path.join(os.getcwd(), 'data/raw')\nos.makedirs(raw_dir, exist_ok=True)", "_____no_output_____" ] ], [ [ "# SageMaker Processing for dataset transformation <a class=\"anchor\" id=\"SageMakerProcessing\">\n\nNext, we'll import the dataset and transform it with SageMaker Processing, which can be used to process terabytes of data in a SageMaker-managed cluster separate from the instance running your notebook server. In a typical SageMaker workflow, notebooks are only used for prototyping and can be run on relatively inexpensive and less powerful instances, while processing, training and model hosting tasks are run on separate, more powerful SageMaker-managed instances. SageMaker Processing includes off-the-shelf support for Scikit-learn, as well as a Bring Your Own Container option, so it can be used with many different data transformation technologies and tasks. \n\nFirst we'll load the Boston Housing dataset, save the raw feature data and upload it to Amazon S3 for transformation by SageMaker Processing. We'll also save the labels for training and testing.", "_____no_output_____" ] ], [ [ "import numpy as np\nfrom tensorflow.python.keras.datasets import boston_housing\nfrom sklearn.preprocessing import StandardScaler\n\n(x_train, y_train), (x_test, y_test) = boston_housing.load_data()\n\nnp.save(os.path.join(raw_dir, 'x_train.npy'), x_train)\nnp.save(os.path.join(raw_dir, 'x_test.npy'), x_test)\nnp.save(os.path.join(train_dir, 'y_train.npy'), y_train)\nnp.save(os.path.join(test_dir, 'y_test.npy'), y_test)\ns3_prefix = 'tf-2-workflow'\nrawdata_s3_prefix = '{}/data/raw'.format(s3_prefix)\nraw_s3 = sess.upload_data(path='./data/raw/', key_prefix=rawdata_s3_prefix)\nprint(raw_s3)", "_____no_output_____" ] ], [ [ "To use SageMaker Processing, simply supply a Python data preprocessing script as shown below. For this example, we're using a SageMaker prebuilt Scikit-learn container, which includes many common functions for processing data. There are few limitations on what kinds of code and operations you can run, and only a minimal contract: input and output data must be placed in specified directories. 
If this is done, SageMaker Processing automatically loads the input data from S3 and uploads transformed data back to S3 when the job is complete.", "_____no_output_____" ] ], [ [ "%%writefile preprocessing.py\n\nimport glob\nimport numpy as np\nimport os\nfrom sklearn.preprocessing import StandardScaler\n\nif __name__=='__main__':\n \n input_files = glob.glob('{}/*.npy'.format('/opt/ml/processing/input'))\n print('\\nINPUT FILE LIST: \\n{}\\n'.format(input_files))\n scaler = StandardScaler()\n for file in input_files:\n raw = np.load(file)\n transformed = scaler.fit_transform(raw)\n if 'train' in file:\n output_path = os.path.join('/opt/ml/processing/train', 'x_train.npy')\n np.save(output_path, transformed)\n print('SAVED TRANSFORMED TRAINING DATA FILE\\n')\n else:\n output_path = os.path.join('/opt/ml/processing/test', 'x_test.npy')\n np.save(output_path, transformed)\n print('SAVED TRANSFORMED TEST DATA FILE\\n')", "_____no_output_____" ] ], [ [ "Before starting the SageMaker Processing job, we instantiate a `SKLearnProcessor` object. This object allows you to specify the instance type to use in the job, as well as how many instances. Although the Boston Housing dataset is quite small, we'll use two instances to showcase how easy it is to spin up a cluster for SageMaker Processing. ", "_____no_output_____" ] ], [ [ "from sagemaker import get_execution_role\nfrom sagemaker.sklearn.processing import SKLearnProcessor\n\nsklearn_processor = SKLearnProcessor(framework_version='0.20.0',\n role=get_execution_role(),\n instance_type='ml.m5.xlarge',\n instance_count=2)", "_____no_output_____" ] ], [ [ "We're now ready to run the Processing job. To enable distributing the data files equally among the instances, we specify the `ShardedByS3Key` distribution type in the `ProcessingInput` object. This ensures that if we have `n` instances, each instance will receive `1/n` files from the specified S3 bucket. It may take around 3 minutes for the following code cell to run, mainly to set up the cluster. At the end of the job, the cluster automatically will be torn down by SageMaker. ", "_____no_output_____" ] ], [ [ "from sagemaker.processing import ProcessingInput, ProcessingOutput\nfrom time import gmtime, strftime \n\nprocessing_job_name = \"tf-2-workflow-{}\".format(strftime(\"%d-%H-%M-%S\", gmtime()))\noutput_destination = 's3://{}/{}/data'.format(bucket, s3_prefix)\n\nsklearn_processor.run(code='preprocessing.py',\n job_name=processing_job_name,\n inputs=[ProcessingInput(\n source=raw_s3,\n destination='/opt/ml/processing/input',\n s3_data_distribution_type='ShardedByS3Key')],\n outputs=[ProcessingOutput(output_name='train',\n destination='{}/train'.format(output_destination),\n source='/opt/ml/processing/train'),\n ProcessingOutput(output_name='test',\n destination='{}/test'.format(output_destination),\n source='/opt/ml/processing/test')])\n\npreprocessing_job_description = sklearn_processor.jobs[-1].describe()", "_____no_output_____" ] ], [ [ "In the log output of the SageMaker Processing job above, you should be able to see logs in two different colors for the two different instances, and that each instance received different files. Without the `ShardedByS3Key` distribution type, each instance would have received a copy of **all** files. By spreading the data equally among `n` instances, you should receive a speedup by approximately a factor of `n` for most stateless data transformations. 
After saving the job results locally, we'll move on to prototyping training and inference code with Local Mode.", "_____no_output_____" ] ], [ [ "train_in_s3 = '{}/train/x_train.npy'.format(output_destination)\ntest_in_s3 = '{}/test/x_test.npy'.format(output_destination)\n!aws s3 cp {train_in_s3} ./data/train/x_train.npy\n!aws s3 cp {test_in_s3} ./data/test/x_test.npy", "_____no_output_____" ] ], [ [ "## Local Mode training <a class=\"anchor\" id=\"LocalModeTraining\">\n\nLocal Mode in Amazon SageMaker is a convenient way to make sure your code is working locally as expected before moving on to full scale, hosted training in a separate, more powerful SageMaker-managed cluster. To train in Local Mode, it is necessary to have docker-compose or nvidia-docker-compose (for GPU instances) installed. Running the following commands will install docker-compose or nvidia-docker-compose, and configure the notebook environment for you.", "_____no_output_____" ] ], [ [ "!wget -q https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-script-mode/master/local_mode_setup.sh\n!wget -q https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-script-mode/master/daemon.json \n!/bin/bash ./local_mode_setup.sh", "_____no_output_____" ] ], [ [ "Next, we'll set up a TensorFlow Estimator for Local Mode training. Key parameters for the Estimator include:\n\n- `train_instance_type`: the kind of hardware on which training will run. In the case of Local Mode, we simply set this parameter to `local` to invoke Local Mode training on the CPU, or to `local_gpu` if the instance has a GPU. \n- `git_config`: to make sure training scripts are source controlled for coordinated, shared use by a team, the Estimator can pull in the code from a Git repository rather than local directories. \n- Other parameters of note: the algorithm’s hyperparameters, which are passed in as a dictionary, and a Boolean parameter indicating that we are using Script Mode. \n\nRecall that we are using Local Mode here mainly to make sure our code is working. Accordingly, instead of performing a full cycle of training with many epochs (passes over the full dataset), we'll train only for a small number of epochs just to confirm the code is working properly and avoid wasting full-scale training time unnecessarily.", "_____no_output_____" ] ], [ [ "from sagemaker.tensorflow import TensorFlow\n\ngit_config = {'repo': 'https://github.com/aws-samples/amazon-sagemaker-script-mode', \n 'branch': 'master'}\n\nmodel_dir = '/opt/ml/model'\ntrain_instance_type = 'local'\nhyperparameters = {'epochs': 5, 'batch_size': 128, 'learning_rate': 0.01}\nlocal_estimator = TensorFlow(git_config=git_config,\n source_dir='tf-2-workflow/train_model',\n entry_point='train.py',\n model_dir=model_dir,\n instance_type=train_instance_type,\n instance_count=1,\n hyperparameters=hyperparameters,\n role=sagemaker.get_execution_role(),\n base_job_name='tf-2-workflow',\n framework_version='2.2',\n py_version='py37',\n script_mode=True)", "_____no_output_____" ] ], [ [ "The `fit` method call below starts the Local Mode training job. Metrics for training will be logged below the code, inside the notebook cell. 
You should observe the validation loss decrease substantially over the five epochs, with no training errors, which is a good indication that our training code is working as expected.", "_____no_output_____" ] ], [ [ "inputs = {'train': f'file://{train_dir}',\n 'test': f'file://{test_dir}'}\n\nlocal_estimator.fit(inputs)", "_____no_output_____" ] ], [ [ "## Local Mode endpoint <a class=\"anchor\" id=\"LocalModeEndpoint\">\n\nWhile Amazon SageMaker’s Local Mode training is very useful to make sure your training code is working before moving on to full scale training, it also would be useful to have a convenient way to test your model locally before incurring the time and expense of deploying it to production. One possibility is to fetch the TensorFlow SavedModel artifact or a model checkpoint saved in Amazon S3, and load it in your notebook for testing. However, an even easier way to do this is to use the SageMaker Python SDK to do this work for you by setting up a Local Mode endpoint.\n\nMore specifically, the Estimator object from the Local Mode training job can be used to deploy a model locally. With one exception, this code is the same as the code you would use to deploy to production. In particular, all you need to do is invoke the local Estimator's deploy method, and similarly to Local Mode training, specify the instance type as either `local_gpu` or `local` depending on whether your notebook is on a GPU instance or CPU instance. \n\nJust in case there are other inference containers running in Local Mode, we'll stop them to avoid conflict before deploying our new model locally.", "_____no_output_____" ] ], [ [ "!docker container stop $(docker container ls -aq) >/dev/null", "_____no_output_____" ] ], [ [ "The following single line of code deploys the model locally in the SageMaker TensorFlow Serving container: ", "_____no_output_____" ] ], [ [ "local_predictor = local_estimator.deploy(initial_instance_count=1, instance_type='local')", "_____no_output_____" ] ], [ [ "To get predictions from the Local Mode endpoint, simply invoke the Predictor's predict method.", "_____no_output_____" ] ], [ [ "local_results = local_predictor.predict(x_test[:10])['predictions']", "_____no_output_____" ] ], [ [ "As a sanity check, the predictions can be compared against the actual target values.", "_____no_output_____" ] ], [ [ "local_preds_flat_list = [float('%.1f'%(item)) for sublist in local_results for item in sublist]\nprint('predictions: \\t{}'.format(np.array(local_preds_flat_list)))\nprint('target values: \\t{}'.format(y_test[:10].round(decimals=1)))", "_____no_output_____" ] ], [ [ "We only trained the model for a few epochs and there is much room for improvement, but the predictions so far should at least appear reasonably within the ballpark. \n\nTo avoid having the SageMaker TensorFlow Serving container indefinitely running locally, simply gracefully shut it down by calling the `delete_endpoint` method of the Predictor object.", "_____no_output_____" ] ], [ [ "local_predictor.delete_endpoint()", "_____no_output_____" ] ], [ [ "## SageMaker hosted training <a class=\"anchor\" id=\"SageMakerHostedTraining\">\n\nNow that we've confirmed our code is working locally, we can move on to use SageMaker's hosted training functionality. Hosted training is preferred for doing actual training, especially large-scale, distributed training. Unlike Local Mode training, for hosted training the actual training itself occurs not on the notebook instance, but on a separate cluster of machines managed by SageMaker. 
Before starting hosted training, the data must be in S3, or an EFS or FSx for Lustre file system. We'll upload to S3 now, and confirm the upload was successful.", "_____no_output_____" ] ], [ [ "s3_prefix = 'tf-2-workflow'\n\ntraindata_s3_prefix = '{}/data/train'.format(s3_prefix)\ntestdata_s3_prefix = '{}/data/test'.format(s3_prefix)", "_____no_output_____" ], [ "train_s3 = sess.upload_data(path='./data/train/', key_prefix=traindata_s3_prefix)\ntest_s3 = sess.upload_data(path='./data/test/', key_prefix=testdata_s3_prefix)\n\ninputs = {'train':train_s3, 'test': test_s3}\n\nprint(inputs)", "_____no_output_____" ] ], [ [ "We're now ready to set up an Estimator object for hosted training. It is similar to the Local Mode Estimator, except the `train_instance_type` has been set to a SageMaker ML instance type instead of `local` for Local Mode. Also, since we know our code is working now, we'll train for a larger number of epochs with the expectation that model training will converge to an improved, lower validation loss.\n\nWith these two changes, we simply call `fit` to start the actual hosted training.", "_____no_output_____" ] ], [ [ "train_instance_type = 'ml.c5.xlarge'\nhyperparameters = {'epochs': 30, 'batch_size': 128, 'learning_rate': 0.01}\n\ngit_config = {'repo': 'https://github.com/aws-samples/amazon-sagemaker-script-mode', \n 'branch': 'master'}\n\nestimator = TensorFlow(git_config=git_config,\n source_dir='tf-2-workflow/train_model',\n entry_point='train.py',\n model_dir=model_dir,\n instance_type=train_instance_type,\n instance_count=1,\n hyperparameters=hyperparameters,\n role=sagemaker.get_execution_role(),\n base_job_name='tf-2-workflow',\n framework_version='2.2',\n py_version='py37',\n script_mode=True)", "_____no_output_____" ] ], [ [ "After starting the hosted training job with the `fit` method call below, you should observe the training converge over the longer number of epochs to a validation loss that is considerably lower than that which was achieved in the shorter Local Mode training job. Can we do better? We'll look into a way to do so in the **Automatic Model Tuning** section below. ", "_____no_output_____" ] ], [ [ "estimator.fit(inputs)", "_____no_output_____" ] ], [ [ "As with the Local Mode training, hosted training produces a model saved in S3 that we can retrieve. This is an example of the modularity of SageMaker: having trained the model in SageMaker, you can now take the model out of SageMaker and run it anywhere else. Alternatively, you can deploy the model into a production-ready environment using SageMaker's hosted endpoints functionality, as shown in the **SageMaker hosted endpoint** section below.\n\nRetrieving the model from S3 is very easy: the hosted training estimator you created above stores a reference to the model's location in S3. 
You simply copy the model from S3 using the estimator's `model_data` property and unzip it to inspect the contents.", "_____no_output_____" ] ], [ [ "!aws s3 cp {estimator.model_data} ./model/model.tar.gz", "_____no_output_____" ] ], [ [ "The unzipped archive should include the assets required by TensorFlow Serving to load the model and serve it, including a .pb file: ", "_____no_output_____" ] ], [ [ "!tar -xvzf ./model/model.tar.gz -C ./model", "_____no_output_____" ] ], [ [ "## Automatic Model Tuning <a class=\"anchor\" id=\"AutomaticModelTuning\">\n\nSo far we have simply run one Local Mode training job and one Hosted Training job without any real attempt to tune hyperparameters to produce a better model, other than increasing the number of epochs. Selecting the right hyperparameter values to train your model can be difficult, and typically is very time consuming if done manually. The right combination of hyperparameters is dependent on your data and algorithm; some algorithms have many different hyperparameters that can be tweaked; some are very sensitive to the hyperparameter values selected; and most have a non-linear relationship between model fit and hyperparameter values. SageMaker Automatic Model Tuning helps automate the hyperparameter tuning process: it runs multiple training jobs with different hyperparameter combinations to find the set with the best model performance.\n\nWe begin by specifying the hyperparameters we wish to tune, and the range of values over which to tune each one. We also must specify an objective metric to be optimized: in this use case, we'd like to minimize the validation loss.", "_____no_output_____" ] ], [ [ "from sagemaker.tuner import IntegerParameter, CategoricalParameter, ContinuousParameter, HyperparameterTuner\n\nhyperparameter_ranges = {\n 'learning_rate': ContinuousParameter(0.001, 0.2, scaling_type=\"Logarithmic\"),\n 'epochs': IntegerParameter(10, 50),\n 'batch_size': IntegerParameter(64, 256),\n}\n\nmetric_definitions = [{'Name': 'loss',\n 'Regex': ' loss: ([0-9\\\\.]+)'},\n {'Name': 'val_loss',\n 'Regex': ' val_loss: ([0-9\\\\.]+)'}]\n\nobjective_metric_name = 'val_loss'\nobjective_type = 'Minimize'", "_____no_output_____" ] ], [ [ "Next we specify a HyperparameterTuner object that takes the above definitions as parameters. Each tuning job must be given a budget: a maximum number of training jobs. A tuning job will complete after that many training jobs have been executed. \n\nWe also can specify how much parallelism to employ, in this case five jobs, meaning that the tuning job will complete after three series of five jobs in parallel have completed. For the default Bayesian Optimization tuning strategy used here, the tuning search is informed by the results of previous groups of training jobs, so we don't run all of the jobs in parallel, but rather divide the jobs into groups of parallel jobs. There is a trade-off: using more parallel jobs will finish tuning sooner, but likely will sacrifice tuning search accuracy. \n\nNow we can launch a hyperparameter tuning job by calling the `fit` method of the HyperparameterTuner object. The tuning job may take around 10 minutes to finish. While you're waiting, the status of the tuning job, including metadata and results for invidual training jobs within the tuning job, can be checked in the SageMaker console in the **Hyperparameter tuning jobs** panel. 
", "_____no_output_____" ] ], [ [ "tuner = HyperparameterTuner(estimator,\n objective_metric_name,\n hyperparameter_ranges,\n metric_definitions,\n max_jobs=15,\n max_parallel_jobs=5,\n objective_type=objective_type)\n\ntuning_job_name = \"tf-2-workflow-{}\".format(strftime(\"%d-%H-%M-%S\", gmtime()))\ntuner.fit(inputs, job_name=tuning_job_name)\ntuner.wait()", "_____no_output_____" ] ], [ [ "After the tuning job is finished, we can use the `HyperparameterTuningJobAnalytics` object from the SageMaker Python SDK to list the top 5 tuning jobs with the best performance. Although the results vary from tuning job to tuning job, the best validation loss from the tuning job (under the FinalObjectiveValue column) likely will be substantially lower than the validation loss from the hosted training job above, where we did not perform any tuning other than manually increasing the number of epochs once. ", "_____no_output_____" ] ], [ [ "tuner_metrics = sagemaker.HyperparameterTuningJobAnalytics(tuning_job_name)\ntuner_metrics.dataframe().sort_values(['FinalObjectiveValue'], ascending=True).head(5)", "_____no_output_____" ] ], [ [ "The total training time and training jobs status can be checked with the following lines of code. Because automatic early stopping is by default off, all the training jobs should be completed normally. For an example of a more in-depth analysis of a tuning job, see the SageMaker official sample [HPO_Analyze_TuningJob_Results.ipynb](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/hyperparameter_tuning/analyze_results/HPO_Analyze_TuningJob_Results.ipynb) notebook.", "_____no_output_____" ] ], [ [ "total_time = tuner_metrics.dataframe()['TrainingElapsedTimeSeconds'].sum() / 3600\nprint(\"The total training time is {:.2f} hours\".format(total_time))\ntuner_metrics.dataframe()['TrainingJobStatus'].value_counts()", "_____no_output_____" ] ], [ [ "## SageMaker hosted endpoint <a class=\"anchor\" id=\"SageMakerHostedEndpoint\">\n\nAssuming the best model from the tuning job is better than the model produced by the individual Hosted Training job above, we could now easily deploy that model to production. A convenient option is to use a SageMaker hosted endpoint, which serves real time predictions from the trained model (Batch Transform jobs also are available for asynchronous, offline predictions on large datasets). The endpoint will retrieve the TensorFlow SavedModel created during training and deploy it within a SageMaker TensorFlow Serving container. This all can be accomplished with one line of code. \n\nMore specifically, by calling the `deploy` method of the HyperparameterTuner object we instantiated above, we can directly deploy the best model from the tuning job to a SageMaker hosted endpoint. It will take several minutes longer to deploy the model to the hosted endpoint compared to the Local Mode endpoint, which is more useful for fast prototyping of inference code. 
", "_____no_output_____" ] ], [ [ "tuning_predictor = tuner.deploy(initial_instance_count=1, instance_type='ml.m5.xlarge')", "_____no_output_____" ] ], [ [ "We can compare the predictions generated by this endpoint with those generated locally by the Local Mode endpoint: ", "_____no_output_____" ] ], [ [ "results = tuning_predictor.predict(x_test[:10])['predictions'] \nflat_list = [float('%.1f'%(item)) for sublist in results for item in sublist]\nprint('predictions: \\t{}'.format(np.array(flat_list)))\nprint('target values: \\t{}'.format(y_test[:10].round(decimals=1)))", "_____no_output_____" ] ], [ [ "To avoid billing charges from stray resources, you can delete the prediction endpoint to release its associated instance(s).", "_____no_output_____" ] ], [ [ "sess.delete_endpoint(tuning_predictor.endpoint_name)", "_____no_output_____" ] ], [ [ "## Workflow Automation with the AWS Step Functions Data Science SDK <a class=\"anchor\" id=\"WorkflowAutomation\">\n\nIn the previous parts of this notebook, we prototyped various steps of a TensorFlow project within the notebook itself. Notebooks are great for prototyping, but generally are not used in production-ready machine learning pipelines. For example, a simple pipeline in SageMaker includes the following steps: \n\n1. Training the model.\n2. Creating a SageMaker Model object that wraps the model artifact for serving.\n3. Creating a SageMaker Endpoint Configuration specifying how the model should be served (e.g. hardware type and amount).\n4. Deploying the trained model to the configured SageMaker Endpoint. \n\nThe AWS Step Functions Data Science SDK automates the process of creating and running these kinds of workflows using AWS Step Functions and SageMaker. It does this by allowing you to create workflows using short, simple Python scripts that define workflow steps and chain them together. Under the hood, all the workflow steps are coordinated by AWS Step Functions without any need for you to manage the underlying infrastructure. \n\nTo begin, install the Step Functions Data Science SDK: ", "_____no_output_____" ] ], [ [ "import sys\n\n!{sys.executable} -m pip install --quiet --upgrade stepfunctions", "_____no_output_____" ] ], [ [ "### Add an IAM policy to your SageMaker role <a class=\"anchor\" id=\"IAMPolicy\">\n\n**If you are running this notebook on an Amazon SageMaker notebook instance**, the IAM role assumed by your notebook instance needs permission to create and run workflows in AWS Step Functions. To provide this permission to the role, do the following.\n\n1. Open the Amazon [SageMaker console](https://console.aws.amazon.com/sagemaker/). \n2. Select **Notebook instances** and choose the name of your notebook instance\n3. Under **Permissions and encryption** select the role ARN to view the role on the IAM console\n4. Choose **Attach policies** and search for `AWSStepFunctionsFullAccess`.\n5. Select the check box next to `AWSStepFunctionsFullAccess` and choose **Attach policy**\n\nIf you are running this notebook in a local environment, the SDK will use your configured AWS CLI configuration. For more information, see [Configuring the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html).\n\n\n### Create an execution role for Step Functions <a class=\"anchor\" id=\"CreateExecutionRole\">\n\nYou also need to create an execution role for Step Functions to enable that service to access SageMaker and other service functionality.\n\n1. Go to the [IAM console](https://console.aws.amazon.com/iam/)\n2. 
Select **Roles** and then **Create role**.\n3. Under **Choose the service that will use this role** select **Step Functions**\n4. Choose **Next** until you can enter a **Role name**\n5. Enter a name such as `StepFunctionsWorkflowExecutionRole` and then select **Create role**\n\n\nSelect your newly create role and attach a policy to it. The following steps attach a policy that provides full access to Step Functions, however as a good practice you should only provide access to the resources you need. \n\n1. Under the **Permissions** tab, click **Add inline policy**\n2. Enter the following in the **JSON** tab\n\n```json\n{\n \"Version\": \"2012-10-17\",\n \"Statement\": [\n {\n \"Effect\": \"Allow\",\n \"Action\": [\n \"sagemaker:CreateTransformJob\",\n \"sagemaker:DescribeTransformJob\",\n \"sagemaker:StopTransformJob\",\n \"sagemaker:CreateTrainingJob\",\n \"sagemaker:DescribeTrainingJob\",\n \"sagemaker:StopTrainingJob\",\n \"sagemaker:CreateHyperParameterTuningJob\",\n \"sagemaker:DescribeHyperParameterTuningJob\",\n \"sagemaker:StopHyperParameterTuningJob\",\n \"sagemaker:CreateModel\",\n \"sagemaker:CreateEndpointConfig\",\n \"sagemaker:CreateEndpoint\",\n \"sagemaker:DeleteEndpointConfig\",\n \"sagemaker:DeleteEndpoint\",\n \"sagemaker:UpdateEndpoint\",\n \"sagemaker:ListTags\",\n \"lambda:InvokeFunction\",\n \"sqs:SendMessage\",\n \"sns:Publish\",\n \"ecs:RunTask\",\n \"ecs:StopTask\",\n \"ecs:DescribeTasks\",\n \"dynamodb:GetItem\",\n \"dynamodb:PutItem\",\n \"dynamodb:UpdateItem\",\n \"dynamodb:DeleteItem\",\n \"batch:SubmitJob\",\n \"batch:DescribeJobs\",\n \"batch:TerminateJob\",\n \"glue:StartJobRun\",\n \"glue:GetJobRun\",\n \"glue:GetJobRuns\",\n \"glue:BatchStopJobRun\"\n ],\n \"Resource\": \"*\"\n },\n {\n \"Effect\": \"Allow\",\n \"Action\": [\n \"iam:PassRole\"\n ],\n \"Resource\": \"*\",\n \"Condition\": {\n \"StringEquals\": {\n \"iam:PassedToService\": \"sagemaker.amazonaws.com\"\n }\n }\n },\n {\n \"Effect\": \"Allow\",\n \"Action\": [\n \"events:PutTargets\",\n \"events:PutRule\",\n \"events:DescribeRule\"\n ],\n \"Resource\": [\n \"arn:aws:events:*:*:rule/StepFunctionsGetEventsForSageMakerTrainingJobsRule\",\n \"arn:aws:events:*:*:rule/StepFunctionsGetEventsForSageMakerTransformJobsRule\",\n \"arn:aws:events:*:*:rule/StepFunctionsGetEventsForSageMakerTuningJobsRule\",\n \"arn:aws:events:*:*:rule/StepFunctionsGetEventsForECSTaskRule\",\n \"arn:aws:events:*:*:rule/StepFunctionsGetEventsForBatchJobsRule\"\n ]\n }\n ]\n}\n```\n\n3. Choose **Review policy** and give the policy a name such as `StepFunctionsWorkflowExecutionPolicy`\n4. Choose **Create policy**. You will be redirected to the details page for the role.\n5. Copy the **Role ARN** at the top of the **Summary**", "_____no_output_____" ], [ "### Set up a TrainingPipeline <a class=\"anchor\" id=\"TrainingPipeline\">\n\nAlthough the AWS Step Functions Data Science SDK provides various primitives to build up pipelines from scratch, it also provides prebuilt templates for common workflows, including a [TrainingPipeline](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/pipelines.html#stepfunctions.template.pipeline.train.TrainingPipeline) object to simplify creation of a basic pipeline that includes model training and deployment. 
\n\nThe following code cell configures a `pipeline` object with the necessary parameters to define such a simple pipeline:", "_____no_output_____" ] ], [ [ "import stepfunctions\n\nfrom stepfunctions.template.pipeline import TrainingPipeline\n\n# paste the StepFunctionsWorkflowExecutionRole ARN from above\nworkflow_execution_role = \"<execution-role-arn>\"\n\npipeline = TrainingPipeline(\n estimator=estimator,\n role=workflow_execution_role,\n inputs=inputs,\n s3_bucket=bucket\n)", "_____no_output_____" ] ], [ [ "### Visualizing the workflow <a class=\"anchor\" id=\"VisualizingWorkflow\">\n\nYou can now view the workflow definition, and visualize it as a graph. This workflow and graph represent your training pipeline from starting a training job to deploying the model.", "_____no_output_____" ] ], [ [ "print(pipeline.workflow.definition.to_json(pretty=True))", "_____no_output_____" ], [ "pipeline.render_graph()", "_____no_output_____" ] ], [ [ "### Creating and executing the pipeline <a class=\"anchor\" id=\"CreatingExecutingPipeline\">\n\nBefore the workflow can be run for the first time, the pipeline must be created using the `create` method:", "_____no_output_____" ] ], [ [ "pipeline.create()", "_____no_output_____" ] ], [ [ "Now the workflow can be started by invoking the pipeline's `execute` method:", "_____no_output_____" ] ], [ [ "execution = pipeline.execute()", "_____no_output_____" ] ], [ [ "Use the `list_executions` method to list all executions for the workflow you created, including the one we just started. After a pipeline is created, it can be executed as many times as needed, for example on a schedule for retraining on new data. (For purposes of this notebook just execute the workflow one time to save resources.) The output will include a list you can click through to access a view of the execution in the AWS Step Functions console.", "_____no_output_____" ] ], [ [ "pipeline.workflow.list_executions(html=True)", "_____no_output_____" ] ], [ [ "While the workflow is running, you can check workflow progress inside this notebook with the `render_progress` method. This generates a snapshot of the current state of your workflow as it executes. This is a static image. Run the cell again to check progress while the workflow is running.", "_____no_output_____" ] ], [ [ "execution.render_progress()", "_____no_output_____" ] ], [ [ "#### BEFORE proceeding with the rest of the notebook:\n\nWait until the workflow completes with status **Succeeded**, which will take a few minutes. You can check status with `render_progress` above, or open in a new browser tab the **Inspect in AWS Step Functions** link in the cell output. \n\nTo view the details of the completed workflow execution, from model training through deployment, use the `list_events` method, which lists all events in the workflow execution.", "_____no_output_____" ] ], [ [ "execution.list_events(reverse_order=True, html=False)", "_____no_output_____" ] ], [ [ "From this list of events, we can extract the name of the endpoint that was set up by the workflow. ", "_____no_output_____" ] ], [ [ "import re\n\nendpoint_name_suffix = re.search('endpoint\\Wtraining\\Wpipeline\\W([a-zA-Z0-9\\W]+?)\"', str(execution.list_events())).group(1)\nprint(endpoint_name_suffix)", "_____no_output_____" ] ], [ [ "Once we have the endpoint name, we can use it to instantiate a TensorFlowPredictor object that wraps the endpoint. This TensorFlowPredictor can be used to make predictions, as shown in the following code cell. 
\n\n#### BEFORE running the following code cell:\n\nGo to the [SageMaker console](https://console.aws.amazon.com/sagemaker/), click **Endpoints** in the left panel, and make sure that the endpoint status is **InService**. If the status is **Creating**, wait until it changes, which may take several minutes.", "_____no_output_____" ] ], [ [ "from sagemaker.tensorflow import TensorFlowPredictor\n\nworkflow_predictor = TensorFlowPredictor('training-pipeline-' + endpoint_name_suffix)\n\nresults = workflow_predictor.predict(x_test[:10])['predictions'] \nflat_list = [float('%.1f'%(item)) for sublist in results for item in sublist]\nprint('predictions: \\t{}'.format(np.array(flat_list)))\nprint('target values: \\t{}'.format(y_test[:10].round(decimals=1)))", "_____no_output_____" ] ], [ [ "Using the AWS Step Functions Data Science SDK, there are many other workflows you can create to automate your machine learning tasks. For example, you could create a workflow to automate model retraining on a periodic basis. Such a workflow could include a test of model quality after training, with subsequent branches for failing (no model deployment) and passing the quality test (model is deployed). Other possible workflow steps include Automatic Model Tuning, data preprocessing with AWS Glue, and more. \n\nFor a detailed example of a retraining workflow, see the AWS ML Blog post [Automating model retraining and deployment using the AWS Step Functions Data Science SDK for Amazon SageMaker](https://aws.amazon.com/blogs/machine-learning/automating-model-retraining-and-deployment-using-the-aws-step-functions-data-science-sdk-for-amazon-sagemaker/).", "_____no_output_____" ], [ "### Cleanup <a class=\"anchor\" id=\"Cleanup\">\n\nThe workflow we created above deployed a model to an endpoint. To avoid billing charges for an unused endpoint, you can delete it using the SageMaker console. To do so, go to the [SageMaker console](https://console.aws.amazon.com/sagemaker/). Then click **Endpoints** in the left panel, and select and delete any unneeded endpoints in the list. ", "_____no_output_____" ], [ "## Extensions <a class=\"anchor\" id=\"Extensions\">\n\nWe've covered a lot of content in this notebook: SageMaker Processing for data transformation, Local Mode for prototyping training and inference code, Automatic Model Tuning, and SageMaker hosted training and inference. These are central elements for most deep learning workflows in SageMaker. Additionally, we examined how the AWS Step Functions Data Science SDK helps automate deep learning workflows after completion of the prototyping phase of a project.\n\nBesides all of the SageMaker features explored above, there are many other features that may be applicable to your project. For example, to handle common problems during deep learning model training such as vanishing or exploding gradients, **SageMaker Debugger** is useful. To manage common problems such as data drift after a model is in production, **SageMaker Model Monitor** can be applied.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ] ]
d01a95d1b53c8f9a174d732c7e137853534c9476
4,251
ipynb
Jupyter Notebook
plotly_widgets_compound_interest.ipynb
summiee/jupyter_demos
5a0c6c802a29d74cfda14fb4412b9df4aaebdfcf
[ "MIT" ]
null
null
null
plotly_widgets_compound_interest.ipynb
summiee/jupyter_demos
5a0c6c802a29d74cfda14fb4412b9df4aaebdfcf
[ "MIT" ]
null
null
null
plotly_widgets_compound_interest.ipynb
summiee/jupyter_demos
5a0c6c802a29d74cfda14fb4412b9df4aaebdfcf
[ "MIT" ]
1
2021-01-26T17:41:00.000Z
2021-01-26T17:41:00.000Z
31.723881
120
0.556104
[ [ [ "### Example: compound interest \n\n## $A = P (1 + \\frac{r}{n})^{nt}$\n\n+ A - amount\n+ P - principle\n+ r - interest rate\n+ n - number of times interest is compunded per unit 't'\n+ t - time\n", "_____no_output_____" ] ], [ [ "import numpy as np \nimport plotly.graph_objects as go\nfrom plotly.subplots import make_subplots\nimport ipywidgets as widgets\nfrom ipywidgets import interactive\n", "_____no_output_____" ], [ "def compound_interest_with_saving_rate(start_value, saving_per_month, interest_rate, duration_years):\n months = np.array(np.linspace(0, (12*duration_years), (12*duration_years)+1))\n balance = np.array([(start_value+i*saving_per_month)*(1+interest_rate/12)**(i) for i in months])\n principal = np.array([start_value + saving_per_month *i for i in months])\n return months, balance, principal\n\ndef visualize(start_value, saving_per_month, interest_rate, duration_years):\n months, balance, principle = compound_interest_with_saving_rate(start_value, saving_per_month,\n interest_rate, duration_years)\n print(months[-1], balance[-1], principle[-1])\n \n fig = go.Figure()\n fig.add_trace(go.Scatter(x=months/12, y=balance, name=\"balance\"))\n fig.add_trace(go.Scatter(x=months/12,y=principle,name=\"principle\"))\n fig.update_xaxes(title_text=\"<b>years</b>\")\n fig.update_yaxes(title_text=\"<b>balance</b>\")\n fig.show()\n \n fig = make_subplots(specs=[[{\"secondary_y\": True}]])\n fig.add_trace(go.Scatter(x=months/12,y=balance-principle,name=\"interest\"),secondary_y=False, )\n fig.add_trace(go.Scatter(x=months/12,y=principle/balance,name=\"ratio\"),secondary_y=True, )\n fig.update_xaxes(title_text=\"<b>years</b>\")\n fig.update_yaxes(title_text=\"<b>interest</b>\", secondary_y=False)\n fig.update_yaxes(title_text=\"<b>ratio</b>\", secondary_y=True)\n fig.show()\n", "_____no_output_____" ], [ "interactive_plot = interactive(visualize,\n start_value=widgets.IntSlider(min=0, max=10000,step=100, value=1000),\n saving_per_month=widgets.IntSlider(min=0, max=1000,step=10, value=500),\n interest_rate=widgets.FloatSlider(min=-0.5,max=0.5, step=0.01,value=0.05),\n duration_years=widgets.IntSlider(min=0, max=50,step=1, value=10))\ninteractive_plot", "_____no_output_____" ], [ "\n", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code" ] ]
d01a9c0928cb1a7f79acb26b73e9421e92717cf9
41,478
ipynb
Jupyter Notebook
Semana-18/Tensor Flow.ipynb
bel4212a/Curso-ciencia-de-datos
eef80e20b01817464a75c823c24bed26c5efa576
[ "MIT" ]
null
null
null
Semana-18/Tensor Flow.ipynb
bel4212a/Curso-ciencia-de-datos
eef80e20b01817464a75c823c24bed26c5efa576
[ "MIT" ]
null
null
null
Semana-18/Tensor Flow.ipynb
bel4212a/Curso-ciencia-de-datos
eef80e20b01817464a75c823c24bed26c5efa576
[ "MIT" ]
null
null
null
30.953731
735
0.564444
[ [ [ "# Paralelizacion de entrenamiento de redes neuronales con TensorFlow\n\nEn esta seccion dejaremos atras los rudimentos de las matematicas y nos centraremos en utilizar TensorFlow, la cual es una de las librerias mas populares de arpendizaje profundo y que realiza una implementacion mas eficaz de las redes neuronales que cualquier otra implementacion de Numpy.\n\nTensorFlow es una interfaz de programacion multiplataforma y escalable para implementar y ejecutar algortimos de aprendizaje automatico de una manera mas eficaz ya que permite usar tanto la CPU como la GPU, la cual suele tener muchos mas procesadores que la CPU, los cuales, combinando sus frecuencias, presentan un rendimiento mas potente. La API mas desarrollada de esta herramienta se presenta para Python, por lo cual muchos desarrolladores se ven atraidos a este lenguaje.\n\n## Primeros pasos con TensorFlow\n\nhttps://jakevdp.github.io/PythonDataScienceHandbook/02.01-understanding-data-types.html\n", "_____no_output_____" ] ], [ [ "# Creando tensores\n# =============================================\nimport tensorflow as tf\nimport numpy as np\nnp.set_printoptions(precision=3)\n\na = np.array([1, 2, 3], dtype=np.int32)\nb = [4, 5, 6]\n\nt_a = tf.convert_to_tensor(a)\nt_b = tf.convert_to_tensor(b)\n\nprint(t_a)\nprint(t_b)", "_____no_output_____" ], [ "# Obteniendo las dimensiones de un tensor\n# ===============================================\nt_ones = tf.ones((2, 3))\nprint(t_ones)\nt_ones.shape", "_____no_output_____" ], [ "# Obteniendo los valores del tensor como array\n# ===============================================\nt_ones.numpy()", "_____no_output_____" ], [ "# Creando un tensor de valores constantes\n# ================================================\nconst_tensor = tf.constant([1.2, 5, np.pi], dtype=tf.float32)\nprint(const_tensor)", "_____no_output_____" ], [ "matriz = np.array([[2, 3, 4, 5], [6, 7, 8, 8]], dtype = np.int32)\nmatriz", "_____no_output_____" ], [ "matriz_tf = tf.convert_to_tensor(matriz)\nprint(matriz_tf, end = '\\n'*2)\nprint(matriz_tf.numpy(), end = '\\n'*2)\nprint(matriz_tf.shape)", "_____no_output_____" ] ], [ [ "## Manipulando los tipos de datos y forma de un tensor", "_____no_output_____" ] ], [ [ "# Cambiando el tipo de datos del tensor\n# ==============================================\nprint(matriz_tf.dtype)\n\nmatriz_tf_n = tf.cast(matriz_tf, tf.int64)\n\nprint(matriz_tf_n.dtype)", "_____no_output_____" ], [ "# Transponiendo un tensor\n# =================================================\nt = tf.random.uniform(shape=(3, 5))\nprint(t, end = '\\n'*2)\n\nt_tr = tf.transpose(t)\nprint(t_tr, end = '\\n'*2)", "_____no_output_____" ], [ "# Redimensionando un vector\n# =====================================\nt = tf.zeros((30,))\nprint(t, end = '\\n'*2)\nprint(t.shape, end = '\\n'*3)\n\nt_reshape = tf.reshape(t, shape=(5, 6))\nprint(t_reshape, end = '\\n'*2)\nprint(t_reshape.shape)", "_____no_output_____" ], [ "# Removiendo las dimensiones innecesarias\n# =====================================================\nt = tf.zeros((1, 2, 1, 4, 1))\nprint(t, end = '\\n'*2)\nprint(t.shape, end = '\\n'*3)\n\nt_sqz = tf.squeeze(t, axis=(2, 4))\nprint(t_sqz, end = '\\n'*2)\nprint(t_sqz.shape, end = '\\n'*3)\nprint(t.shape, ' --> ', t_sqz.shape)", "_____no_output_____" ] ], [ [ "## Operaciones matematicas sobre tensores", "_____no_output_____" ] ], [ [ "# Inicializando dos tensores con numeros aleatorios\n# =============================================================\ntf.random.set_seed(1)\nt1 = 
tf.random.uniform(shape=(5, 2), minval=-1.0, maxval=1.0)\nt2 = tf.random.normal(shape=(5, 2), mean=0.0, stddev=1.0)\n\nprint(t1, '\\n'*2, t2)", "_____no_output_____" ], [ "# Producto tipo element-wise: elemento a elemento\n# =================================================\nt3 = tf.multiply(t1, t2).numpy()\nprint(t3)", "_____no_output_____" ], [ "# Promedio segun el eje\n# ================================================\nt4 = tf.math.reduce_mean(t1, axis=None)\nprint(t4, end = '\\n'*3)\n\nt4 = tf.math.reduce_mean(t1, axis=0)\nprint(t4, end = '\\n'*3)\n\nt4 = tf.math.reduce_mean(t1, axis=1)\nprint(t4, end = '\\n'*3)", "_____no_output_____" ], [ "# suma segun el eje\n# =================================================\nt4 = tf.math.reduce_sum(t1, axis=None)\nprint('Suma de todos los elementos:', t4, end = '\\n'*3)\n\nt4 = tf.math.reduce_sum(t1, axis=0)\nprint('Suma de los elementos por columnas:', t4, end = '\\n'*3)\n\nt4 = tf.math.reduce_sum(t1, axis=1)\nprint('Suma de los elementos por filas:', t4, end = '\\n'*3)", "_____no_output_____" ], [ "# Desviacion estandar segun el eje\n# =================================================\nt4 = tf.math.reduce_std(t1, axis=None)\nprint('Suma de todos los elementos:', t4, end = '\\n'*3)\n\nt4 = tf.math.reduce_std(t1, axis=0)\nprint('Suma de los elementos por columnas:', t4, end = '\\n'*3)\n\nt4 = tf.math.reduce_std(t1, axis=1)\nprint('Suma de los elementos por filas:', t4, end = '\\n'*3)", "_____no_output_____" ], [ "# Producto entre matrices\n# ===========================================\nt5 = tf.linalg.matmul(t1, t2, transpose_b=True)\nprint(t5.numpy(), end = '\\n'*2)", "_____no_output_____" ], [ "# Producto entre matrices\n# ===========================================\nt6 = tf.linalg.matmul(t1, t2, transpose_a=True)\nprint(t6.numpy())", "_____no_output_____" ], [ "# Calculando la norma de un vector\n# ==========================================\nnorm_t1 = tf.norm(t1, ord=2, axis=None).numpy()\nprint(norm_t1, end='\\n'*2)\n\nnorm_t1 = tf.norm(t1, ord=2, axis=0).numpy()\nprint(norm_t1, end='\\n'*2)\n\nnorm_t1 = tf.norm(t1, ord=2, axis=1).numpy()\nprint(norm_t1, end='\\n'*2)", "_____no_output_____" ] ], [ [ "## Partir, apilar y concatenar tensores\n\n", "_____no_output_____" ] ], [ [ "# Datos a trabajar\n# =======================================\ntf.random.set_seed(1)\nt = tf.random.uniform((6,))\nprint(t.numpy())", "_____no_output_____" ], [ "# Partiendo el tensor en un numero determinado de piezas\n# ======================================================\nt_splits = tf.split(t, num_or_size_splits = 3)\n[item.numpy() for item in t_splits]", "_____no_output_____" ], [ "# Partiendo el tensor segun los tamaños definidos\n# ======================================================\ntf.random.set_seed(1)\nt = tf.random.uniform((6,))\nprint(t.numpy())\nt_splits = tf.split(t, num_or_size_splits=[3, 3])\n[item.numpy() for item in t_splits]", "_____no_output_____" ], [ "print(matriz_tf.numpy())\n# m_splits = tf.split(t, num_or_size_splits = 0, axis = 1)\nmatriz_n = tf.reshape(matriz_tf, shape = (8,))\nprint(matriz_n.numpy())\nm_splits = tf.split(matriz_n, num_or_size_splits = 2)\n[item.numpy() for item in m_splits]", "_____no_output_____" ], [ "# Concatenando tensores\n# =========================================\nA = tf.ones((3,))\nprint(A, end ='\\n'*2)\n\nB = tf.zeros((2,))\nprint(B, end ='\\n'*2)\n\nC = tf.concat([A, B], axis=0)\nprint(C.numpy())", "_____no_output_____" ], [ "# Apilando tensores\n# =========================================\nA = 
tf.ones((3,))\nprint(A, end ='\\n'*2)\nB = tf.zeros((3,))\nprint(B, end ='\\n'*2)\nS = tf.stack([A, B], axis=1)\nprint(S.numpy())", "_____no_output_____" ] ], [ [ "Mas funciones y herramientas en:\n\nhttps://www.tensorflow.org/versions/r2.0/api_docs/python/tf.", "_____no_output_____" ], [ "<div class=\"burk\">\nEJERCICIOS</div><i class=\"fa fa-lightbulb-o \"></i>\n\n1. Cree dos tensores de dimensiones (4, 6), de numeros aleatorios provenientes de una distribucion normal estandar con promedio 0.0 y dsv 1.0. Imprimalos.\n2. Multiplique los anteriores tensores de las dos formas vistas, element-wise y producto matricial, realizando las dos transposiciones vistas. \n3. Calcule los promedios, desviaciones estandar y suma de sus elementos para los dos tensores.\n4. Redimensione los tensores para que sean ahora de rango 1.\n5. Calcule el coseno de los elementos de los tensores (revise la documentacion).\n6. Cree un tensor de rango 1 con 1001 elementos, empezando con el 0 y hasta el 30.\n7. Realice un for sobre los elementos del tensor e imprimalos.\n8. Realice el calculo de los factoriales de los numero del 1 al 30 usando el tensor del punto 6. Imprima el resultado como un DataFrame", "_____no_output_____" ], [ "# Creación de *pipelines* de entrada con tf.data: la API de conjunto de datos de TensorFlow\n\nCuando entrenamos un modelo NN profundo, generalmente entrenamos el modelo de forma incremental utilizando un algoritmo de optimización iterativo como el descenso de gradiente estocástico, como hemos visto en clases anteriores.\n\nLa API de Keras es un contenedor de TensorFlow para crear modelos NN. La API de Keras proporciona un método, `.fit ()`, para entrenar los modelos. En los casos en que el conjunto de datos de entrenamiento es bastante pequeño y se puede cargar como un tensor en la memoria, los modelos de TensorFlow (que se compilan con la API de Keras) pueden usar este tensor directamente a través de su método .fit () para el entrenamiento. Sin embargo, en casos de uso típicos, cuando el conjunto de datos es demasiado grande para caber en la memoria de la computadora, necesitaremos cargar los datos del dispositivo de almacenamiento principal (por ejemplo, el disco duro o la unidad de estado sólido) en trozos, es decir, lote por lote. \n\nAdemás, es posible que necesitemos construir un *pipeline* de procesamiento de datos para aplicar ciertas transformaciones y pasos de preprocesamiento a nuestros datos, como el centrado medio, el escalado o la adición de ruido para aumentar el procedimiento de entrenamiento y evitar el sobreajuste.\n\nAplicar las funciones de preprocesamiento manualmente cada vez puede resultar bastante engorroso. Afortunadamente, TensorFlow proporciona una clase especial para construir *pipelines* de preprocesamiento eficientes y convenientes. En esta parte, veremos una descripción general de los diferentes métodos para construir un conjunto de datos de TensorFlow, incluidas las transformaciones del conjunto de datos y los pasos de preprocesamiento comunes.\n\n## Creando un Dataset de TensorFlow desde tensores existentes\n\nSi los datos ya existen en forma de un objeto tensor, una lista de Python o una matriz NumPy, podemos crear fácilmente un conjunto de datos usando la función `tf.data.Dataset.from_tensor_slices()`. 
Esta función devuelve un objeto de la clase Dataset, que podemos usar para iterar a través de los elementos individuales en el conjunto de datos de entrada:\n", "_____no_output_____" ] ], [ [ "import tensorflow as tf\n# Ejemplo con listas\n# ======================================================\na = [1.2, 3.4, 7.5, 4.1, 5.0, 1.0]\nds = tf.data.Dataset.from_tensor_slices(a)\nprint(ds)", "<TensorSliceDataset shapes: (), types: tf.float32>\n" ], [ "for item in ds:\n print(item)\n \nfor i in ds:\n print(i.numpy())", "tf.Tensor(1.2, shape=(), dtype=float32)\ntf.Tensor(3.4, shape=(), dtype=float32)\ntf.Tensor(7.5, shape=(), dtype=float32)\ntf.Tensor(4.1, shape=(), dtype=float32)\ntf.Tensor(5.0, shape=(), dtype=float32)\ntf.Tensor(1.0, shape=(), dtype=float32)\n1.2\n3.4\n7.5\n4.1\n5.0\n1.0\n" ] ], [ [ "Si queremos crear lotes a partir de este conjunto de datos, con un tamaño de lote deseado de 3, podemos hacerlo de la siguiente manera:", "_____no_output_____" ] ], [ [ "# Creando lotes de 3 elementos cada uno\n# ===================================================\nds_batch = ds.batch(3)\nfor i, elem in enumerate(ds_batch, 1):\n print(f'batch {i}:', elem)", "batch 1: tf.Tensor([1.2 3.4 7.5], shape=(3,), dtype=float32)\nbatch 2: tf.Tensor([4.1 5. 1. ], shape=(3,), dtype=float32)\n" ] ], [ [ "Esto creará dos lotes a partir de este conjunto de datos, donde los primeros tres elementos van al lote n° 1 y los elementos restantes al lote n° 2. El método `.batch()` tiene un argumento opcional, `drop_remainder`, que es útil para los casos en los que el número de elementos en el tensor no es divisible por el tamaño de lote deseado. El valor predeterminado de `drop_remainder` es `False`.\n\n## Combinar dos tensores en un Dataset\n\nA menudo, podemos tener los datos en dos (o posiblemente más) tensores. Por ejemplo, podríamos tener un tensor para características y un tensor para etiquetas. En tales casos, necesitamos construir un conjunto de datos que combine estos tensores juntos, lo que nos permitirá recuperar los elementos de estos tensores en tuplas.\n\nSuponga que tenemos dos tensores, t_x y t_y. El tensor t_x contiene nuestros valores de características, cada uno de tamaño 3, y t_y almacena las etiquetas de clase. 
Para este ejemplo, primero creamos estos dos tensores de la siguiente manera:", "_____no_output_____" ] ], [ [ "# Datos de ejemplo\n# ============================================\ntf.random.set_seed(1)\nt_x = tf.random.uniform([4, 3], dtype=tf.float32)\nt_y = tf.range(4)\nprint(t_x)\nprint(t_y)", "tf.Tensor(\n[[0.16513085 0.9014813 0.6309742 ]\n [0.4345461 0.29193902 0.64250207]\n [0.9757855 0.43509948 0.6601019 ]\n [0.60489583 0.6366315 0.6144488 ]], shape=(4, 3), dtype=float32)\ntf.Tensor([0 1 2 3], shape=(4,), dtype=int32)\n" ], [ "# Uniendo los dos tensores en un Dataset\n# ============================================\nds_x = tf.data.Dataset.from_tensor_slices(t_x)\nds_y = tf.data.Dataset.from_tensor_slices(t_y)\n\nds_joint = tf.data.Dataset.zip((ds_x, ds_y))\n\nfor example in ds_joint:\n print('x:', example[0].numpy(),' y:', example[1].numpy())", "x: [0.16513085 0.9014813 0.6309742 ] y: 0\nx: [0.4345461 0.29193902 0.64250207] y: 1\nx: [0.9757855 0.43509948 0.6601019 ] y: 2\nx: [0.60489583 0.6366315 0.6144488 ] y: 3\n" ], [ "ds_joint = tf.data.Dataset.from_tensor_slices((t_x, t_y))\nfor example in ds_joint:\n #print(example)\n print('x:', example[0].numpy(), ' y:', example[1].numpy())\n\nds_joint", "x: [0.16513085 0.9014813 0.6309742 ] y: 0\nx: [0.4345461 0.29193902 0.64250207] y: 1\nx: [0.9757855 0.43509948 0.6601019 ] y: 2\nx: [0.60489583 0.6366315 0.6144488 ] y: 3\n" ], [ "# Operacion sobre el dataset generado\n# ====================================================\nds_trans = ds_joint.map(lambda x, y: (x*2-1.0, y))\nfor example in ds_trans: \n print(' x:', example[0].numpy(), ' y:', example[1].numpy())", " x: [-0.6697383 0.80296254 0.26194835] y: 0\n x: [-0.13090777 -0.41612196 0.28500414] y: 1\n x: [ 0.951571 -0.12980103 0.32020378] y: 2\n x: [0.20979166 0.27326298 0.22889757] y: 3\n" ] ], [ [ "## Mezclar, agrupar y repetir\n\nPara entrenar un modelo NN usando la optimización de descenso de gradiente estocástico, es importante alimentar los datos de entrenamiento como lotes mezclados aleatoriamente. Ya hemos visto arriba como crear lotes llamando al método `.batch()` de un objeto de conjunto de datos. Ahora, además de crear lotes, vamos a mezclar y reiterar sobre los conjuntos de datos:", "_____no_output_____" ] ], [ [ "# Mezclando los elementos de un tensor\n# ===================================================\ntf.random.set_seed(1)\nds = ds_joint.shuffle(buffer_size = len(t_x))\nfor example in ds:\n print(' x:', example[0].numpy(), ' y:', example[1].numpy())", "_____no_output_____" ] ], [ [ "donde las filas se barajan sin perder la correspondencia uno a uno entre las entradas en x e y. El método `.shuffle()` requiere un argumento llamado `buffer_size`, que determina cuántos elementos del conjunto de datos se agrupan antes de barajar. Los elementos del búfer se recuperan aleatoriamente y su lugar en el búfer se asigna a los siguientes elementos del conjunto de datos original (sin mezclar). Por lo tanto, si elegimos un tamaño de búfer pequeño, es posible que no mezclemos perfectamente el conjunto de datos.\n\nSi el conjunto de datos es pequeño, la elección de un tamaño de búfer relativamente pequeño puede afectar negativamente el rendimiento predictivo del NN, ya que es posible que el conjunto de datos no esté completamente aleatorizado. 
En la práctica, sin embargo, por lo general no tiene un efecto notable cuando se trabaja con conjuntos de datos relativamente grandes, lo cual es común en el aprendizaje profundo.\n\nAlternativamente, para asegurar una aleatorización completa durante cada época, simplemente podemos elegir un tamaño de búfer que sea igual al número de ejemplos de entrenamiento, como en el código anterior (`buffer_size = len(t_x)`).\n\n Ahora, creemos lotes a partir del conjunto de datos ds_joint:", "_____no_output_____" ] ], [ [ "ds = ds_joint.batch(batch_size = 3, drop_remainder = False)\nprint(ds)\nbatch_x, batch_y = next(iter(ds))\nprint('Batch-x:\\n', batch_x.numpy())", "_____no_output_____" ], [ "print('Batch-y: ', batch_y.numpy())", "_____no_output_____" ] ], [ [ "Además, al entrenar un modelo para múltiples épocas, necesitamos mezclar e iterar sobre el conjunto de datos por el número deseado de épocas. Entonces, repitamos el conjunto de datos por lotes dos veces:", "_____no_output_____" ] ], [ [ "ds = ds_joint.batch(3).repeat(count = 2)\nfor i,(batch_x, batch_y) in enumerate(ds):\n print(i, batch_x.numpy(), batch_y.numpy(), end = '\\n'*2)", "_____no_output_____" ] ], [ [ "Esto da como resultado dos copias de cada lote. Si cambiamos el orden de estas dos operaciones, es decir, primero lote y luego repetimos, los resultados serán diferentes:", "_____no_output_____" ] ], [ [ "ds = ds_joint.repeat(count=2).batch(3)\nfor i,(batch_x, batch_y) in enumerate(ds):\n print(i, batch_x.numpy(), batch_y.numpy(), end = '\\n'*2)", "_____no_output_____" ] ], [ [ "Finalmente, para comprender mejor cómo se comportan estas tres operaciones (batch, shuffle y repeat), experimentemos con ellas en diferentes órdenes. Primero, combinaremos las operaciones en el siguiente orden: (1) shuffle, (2) batch y (3) repeat:", "_____no_output_____" ] ], [ [ "# Orden 1: shuffle -> batch -> repeat\ntf.random.set_seed(1)\nds = ds_joint.shuffle(4).batch(2).repeat(3)\nfor i,(batch_x, batch_y) in enumerate(ds):\n print(i, batch_x, batch_y.numpy(), end = '\\n'*2)", "_____no_output_____" ], [ "# Orden 2: batch -> shuffle -> repeat\ntf.random.set_seed(1)\nds = ds_joint.batch(2).shuffle(4).repeat(3)\nfor i,(batch_x, batch_y) in enumerate(ds):\n print(i, batch_x, batch_y.numpy(), end = '\\n'*2)", "_____no_output_____" ], [ "# Orden 2: batch -> repeat-> shuffle\ntf.random.set_seed(1)\nds = ds_joint.batch(2).repeat(3).shuffle(4)\nfor i,(batch_x, batch_y) in enumerate(ds):\n print(i, batch_x, batch_y.numpy(), end = '\\n'*2)", "_____no_output_____" ] ], [ [ "## Obteniendo conjuntos de datos disponibles de la biblioteca tensorflow_datasets\n\nLa biblioteca tensorflow_datasets proporciona una buena colección de conjuntos de datos disponibles gratuitamente para entrenar o evaluar modelos de aprendizaje profundo. Los conjuntos de datos están bien formateados y vienen con descripciones informativas, incluido el formato de características y etiquetas y su tipo y dimensionalidad, así como la cita del documento original que introdujo el conjunto de datos en formato BibTeX. 
Otra ventaja es que todos estos conjuntos de datos están preparados y listos para usar como objetos tf.data.Dataset, por lo que todas las funciones que cubrimos se pueden usar directamente:", "_____no_output_____" ] ], [ [ "# pip install tensorflow-datasets", "_____no_output_____" ], [ "import tensorflow_datasets as tfds\nprint(len(tfds.list_builders()))\nprint(tfds.list_builders()[:5])", "_____no_output_____" ], [ "# Trabajando con el archivo mnist\n# ===============================================\nmnist, mnist_info = tfds.load('mnist', with_info=True, shuffle_files=False)", "_____no_output_____" ], [ "print(mnist_info)", "_____no_output_____" ], [ "print(mnist.keys())", "_____no_output_____" ], [ "ds_train = mnist['train']\nds_train = ds_train.map(lambda item:(item['image'], item['label']))\nds_train = ds_train.batch(10)\nbatch = next(iter(ds_train))\nprint(batch[0].shape, batch[1])", "_____no_output_____" ], [ "import matplotlib.pyplot as plt\n\nfig = plt.figure(figsize=(15, 6))\nfor i,(image,label) in enumerate(zip(batch[0], batch[1])):\n ax = fig.add_subplot(2, 5, i+1)\n ax.set_xticks([]); ax.set_yticks([])\n ax.imshow(image[:, :, 0], cmap='gray_r')\n ax.set_title('{}'.format(label), size=15)\nplt.show()", "_____no_output_____" ] ], [ [ "# Construyendo un modelo NN en TensorFlow\n\n## La API de TensorFlow Keras (tf.keras)\n\nKeras es una API NN de alto nivel y se desarrolló originalmente para ejecutarse sobre otras bibliotecas como TensorFlow y Theano. Keras proporciona una interfaz de programación modular y fácil de usar que permite la creación de prototipos y la construcción de modelos complejos en solo unas pocas líneas de código. Keras se puede instalar independientemente de PyPI y luego configurarse para usar TensorFlow como su motor de backend. Keras está estrechamente integrado en TensorFlow y se puede acceder a sus módulos a través de tf.keras.\n\nEn TensorFlow 2.0, tf.keras se ha convertido en el enfoque principal y recomendado para implementar modelos. Esto tiene la ventaja de que admite funcionalidades específicas de TensorFlow, como las canalizaciones de conjuntos de datos que usan tf.data.\n\nLa API de Keras (tf.keras) hace que la construcción de un modelo NN sea extremadamente fácil. El enfoque más utilizado para crear una NN en TensorFlow es a través de `tf.keras.Sequential()`, que permite apilar capas para formar una red. Se puede dar una pila de capas en una lista de Python a un modelo definido como tf.keras.Sequential(). Alternativamente, las capas se pueden agregar una por una usando el método .add().\n\nAdemás, tf.keras nos permite definir un modelo subclasificando tf.keras.Model.\n\nEsto nos da más control sobre la propagacion hacia adelante al definir el método call() para nuestra clase modelo para especificar la propagacion hacia adelante explicitamente. 
\n\nFinalmente, los modelos construidos usando la API tf.keras se pueden compilar y entrenar a través de los métodos .compile() y .fit().", "_____no_output_____" ], [ "## Construyendo un modelo de regresion lineal\n\n", "_____no_output_____" ] ], [ [ "X_train = np.arange(10).reshape((10, 1))\ny_train = np.array([1.0, 1.3, 3.1, 2.0, 5.0, 6.3, 6.6, 7.4, 8.0, 9.0])", "_____no_output_____" ], [ "X_train, y_train", "_____no_output_____" ], [ "import matplotlib.pyplot as plt\n\nfig, ax = plt.subplots()\nax.plot(X_train, y_train, 'o', markersize=10)\nax.set_xlabel('x')\nax.set_ylabel('y')", "_____no_output_____" ], [ "import tensorflow as tf\n\nX_train_norm = (X_train - np.mean(X_train))/np.std(X_train)\nds_train_orig = tf.data.Dataset.from_tensor_slices((tf.cast(X_train_norm, tf.float32),tf.cast(y_train, tf.float32)))\n\nfor i in ds_train_orig:\n print(i[0].numpy(), i[1].numpy())", "_____no_output_____" ] ], [ [ "Ahora, podemos definir nuestro modelo de regresión lineal como $𝑧 = 𝑤x + 𝑏$. Aquí, vamos a utilizar la API de Keras. `tf.keras` proporciona capas predefinidas para construir modelos NN complejos, pero para empezar, usaremos un modelo desde cero:", "_____no_output_____" ] ], [ [ "class MyModel(tf.keras.Model):\n def __init__(self):\n super(MyModel, self).__init__()\n self.w = tf.Variable(0.0, name='weight')\n self.b = tf.Variable(0.0, name='bias')\n\n def call(self, x):\n return self.w * x + self.b", "_____no_output_____" ], [ "model = MyModel()\nmodel.build(input_shape=(None, 1))\nmodel.summary()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
d01ab5b034af0bcf5675e1e13a316d1a409387d4
18,808
ipynb
Jupyter Notebook
crappyChat_setup.ipynb
rid-dim/pySafe
bcecb60b0bfbcdf7b778d45351432e3b8c264901
[ "MIT" ]
9
2018-03-30T21:40:21.000Z
2019-04-29T14:06:51.000Z
crappyChat_setup.ipynb
rid-dim/pySafe
bcecb60b0bfbcdf7b778d45351432e3b8c264901
[ "MIT" ]
null
null
null
crappyChat_setup.ipynb
rid-dim/pySafe
bcecb60b0bfbcdf7b778d45351432e3b8c264901
[ "MIT" ]
null
null
null
33.112676
1,986
0.619949
[ [ [ "import safenet\nsafenet.setup_logger(file_level=safenet.log_util.WARNING)\nmyApp = safenet.App()\nmyAuth_,addData=safenet.safe_utils.AuthReq(myApp.ffi_app.NULL,0,0,id=b'crappy_chat_reloaded',scope=b'noScope'\n ,name=b'i_love_it',vendor=b'no_vendor',app_container=True,ffi=myApp.ffi_app)", "_____no_output_____" ], [ "encodedAuth = myApp.encode_authentication(myAuth_)", "_____no_output_____" ], [ "encodedAuth", "_____no_output_____" ], [ "grantedAuth = myApp.sysUri.quickSetup(myAuth_,encodedAuth)", "_____no_output_____" ], [ "grantedAuth", "_____no_output_____" ], [ "grantedAuth='bAEAAAADIADW4EAAAAAAAAAAAAAQAAAAAAAAAAAEFNJ53ABPX5QW524YYAMEN7T4MJJVIYH656RYZ4FCSZ4TUT7DX3AQAAAAAAAAAAADZO24ITUIIFUWNIUPYODCATWPRBZIBHLD4B6DGFUJDNASIIFYX5MQAAAAAAAAAAAG7B6WQXKW3UPQET62ZWDRY3U7NEYKRWBPQHLYJHTOOYIPPGOWKFFAAAAAAAAAAAACGBOVXSSUKP2Z7YMG5JJDC7BNTUU3YD4SBOBYN3CWRJXGCXLOSFTPQ7LILVLN2HYCJ7NM3BY4N2PWSMFI3AXYDV4ETZXHMEHXTHLFCSIAAAAAAAAAAAAJDOR7QCDWE2VXANINUIE4NYFTIAT66JFQN7B7ALHOV3QYVIYSGQIAAAAAAAAAAABK6S5AF4FRXH4AOBERKM65IJZZNGEILVD3GSDMQBIV4GP2XE5JHQGIAAAAAAAAAAAIAAAAAAAAAAABRG44C4NRSFY3TMLRYHI2TIOBTCMAAAAAAAAAAAMJTHAXDMOBOGE4DKLRSGE4DUNJUHAZREAAAAAAAAAAAGEZTQLRWHAXDCOBRFY2TOORVGQ4DGEQAAAAAAAAAAAYTGOBOGY4C4MJYGEXDMMB2GU2DQMYSAAAAAAAAAAADCMZYFY3DQLRRHAYS4OBWHI2TIOBTCIAAAAAAAAAAAMJTHAXDMOBOGE4DCLRYG45DKNBYGMJQAAAAAAAAAABRGM4C4NRYFYYTQMJOGE3DQORVGQ4DGEYAAAAAAAAAAAYTGOBOGY4C4MJYGEXDCNZWHI2TIOBTCMAAAAAAAAAAAMJTHAXDMOBOGE4DCLRRG44TUNJUHAZRGAAAAAAAAAAAGEZTQLRWHAXDCOBRFYYTQMB2GU2DQMYTAAAAAAAAAAADCMZYFY3DQLRRHAYS4MJYGI5DKNBYGMJQAAAAAAAAAABRGM4C4NRYFYYTQMJOGI2DEORVGQ4DGEYAAAAAAAAAAAYTGOBOGY4C4MJYGEXDENBTHI2TIOBTCMAAAAAAAAAAAMJTHAXDMOBOGE4DCLRSGQ4TUNJUHAZREAAAAAAAAAAAGEZTQLRWHAXDCOBZFYYTIORVGQ4DGEQAAAAAAAAAAAYTGOBOGY4C4MJYHEXDCNJ2GU2DQMYSAAAAAAAAAAADCMZYFY3DQLRRHA4S4MJXHI2TIOBTCIAAAAAAAAAAAMJTHAXDMOBOGE4DSLRRHA5DKNBYGMJAAAAAAAAAAABRGM4C4NRYFYYTQOJOGE4TUNJUHAZREAAAAAAAAAAAGEZTQLRWHAXDCOBZFYZTCORVGQ4DGEQAAAAAAAAAAAYTGOBOGY4C4MJYHEXDGNB2GU2DQMYSAAAAAAAAAAADCMZYFY3DQLRRHA4S4MZWHI2TIOBTCIAAAAAAAAAAAMJTHAXDMOBOGE4DSLRTHA5DKNBYGMJAAAAAAAAAAABRGM4C4NRYFYYTQOJOGM4TUNJUHAZRCAAAAAAAAAAAGQ3C4MJQGEXDKLRRG44TUNJUHAZQC2YVAAAAAAAAAEDQAAAAAAAAAADBNRYGQYK7GIAOWVHBIXIX3YGQAZIQREUXG4475KAEQOJARMHK5Z3DWBIVRXPEAVMYHIAAAAAAAAABQAAAAAAAAAAAIDF2MO3P472PTSCK3IIOW43ZICJR4Q4P5ZR6UWABAAAAAAAAAAABIAAAAAAAAAAAMFYHA4ZPORSXG5CQOJXWO4TBNVHGC3LFO7DUGA44PHQPW2LQGIPOFH34XS3SO3V3X6S3LX7ETSBIRY3TCAHJQOQAAAAAAAAAAEQAAAAAAAAAAAEIJOL5UDCOQRO3N2G6CFLCDF4ACW3LH2ON27YBAOOC7G4YGV25S4MAAAAAAAAAAAGJ6FXG5Y7A2Z5GTAO7H5APZ2ALENSBY2J7T4QNKAAFAAAAAAAAAAAAAAAAAAAQAAAAAIAAAAADAAAAABAAAAAAA'", "_____no_output_____" ], [ "myApp.setup_app(myAuth_,grantedAuth)", "_____no_output_____" ], [ "signKey = myApp.get_pub_key_handle()", "_____no_output_____" ], [ "signKey", "_____no_output_____" ] ], [ [ "---", "_____no_output_____" ], [ "### now we have an app and can start doing stuff", "_____no_output_____" ], [ "---", "_____no_output_____" ], [ "### creating a mutable Object", "_____no_output_____" ] ], [ [ "myMutable = myApp.mData()", "_____no_output_____" ] ], [ [ "### define Entries and drop them onto Safe", "_____no_output_____" ] ], [ [ "import datetime", "_____no_output_____" ], [ "\nnow = datetime.datetime.utcnow().strftime('%Y-%m-%d - %H:%M:%S')\nmyName = 'Welcome to the SAFE Network'\ntext = 'free speech and free knowledge to the world!'\ntimeUser = f'{now} {myName}'\nentries={timeUser:text}", "_____no_output_____" ] ], [ [ "entries={'firstkey':'this is awesome',\n 'secondKey':'and soon it should be',\n 'thirdKey':'even easier to use safe with python',\n 'i love 
safe':'and this is just the start',\n 'thisWasUploaded at':datetime.datetime.utcnow().strftime('%Y-%m-%d - %H:%M:%S UTC'),\n 'additionalEntry':input('enter your custom value here: ')}", "_____no_output_____" ] ], [ [ "infoData = myMutable.new_random_public(777,signKey,entries)", "_____no_output_____" ], [ "print(safenet.safe_utils.getXorAddresOfMutable(infoData,myMutable.ffi_app))", "safe://f017016209759859bf90755eb2d8a246b7c022a71520116d57efd659f1c649fb8a598f05e:777\n" ], [ "additionalEntries={'this wasnt here':'before'}", "_____no_output_____" ], [ "additionalEntries={'baduff':'another entry'}", "_____no_output_____" ], [ "myMutable.insertEntries(infoData,additionalEntries)", "_____no_output_____" ], [ "with open('testfile','wb') as f:\n f.write(myMutable.ffi_app.buffer(infoData)[:])", "_____no_output_____" ], [ "with open('testfile','rb') as f:\n infoData= safenet.safe_utils.getffiMutable(f.read(),myMutable.ffi_app)", "_____no_output_____" ], [ "myMutable.ffi_app.buffer(infoData)[:]", "_____no_output_____" ], [ "mutableBytes = b'H\\x8f\\x08x}\\xc5D]U\\xeeW\\x08\\xe0\\xb4\\xaau\\x94\\xd4\\x8a\\x0bz\\x06h\\xe3{}\\xd1\\x06\\x843\\x01P[t\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x007\\xdbNV\\x00\\x00'", "_____no_output_____" ], [ "infoData= safenet.safe_utils.getffiMutable(mutableBytes,myMutable.ffi_app)", "_____no_output_____" ], [ "infoData", "_____no_output_____" ], [ "def getNewEntries(lastState,newState):\n newEntries = {}\n for additional in [item for item in newState if item not in lastState]:\n newEntries[additional]=newState[additional]\n return newEntries, newState", "_____no_output_____" ] ], [ [ "lastState={}", "_____no_output_____" ], [ "additionalEntries, lastState = getNewEntries(lastState,myMutable.getCurrentState(infoData))", "_____no_output_____" ], [ "additionalEntries", "_____no_output_____" ] ], [ [ "import queue\nimport time\nfrom threading import Thread\nimport datetime\nimport sys\nfrom PyQt5.QtWidgets import (QWidget, QPushButton, QTextBrowser,QLineEdit,\n QHBoxLayout, QVBoxLayout, QApplication)\n\n\nclass Example(QWidget):\n \n def __init__(self):\n super().__init__()\n \n self.lineedit1 = QLineEdit(\"anon\")\n self.browser = QTextBrowser()\n self.lineedit = QLineEdit(\"Type a message and press Enter\")\n self.lineedit.selectAll()\n self.setWindowTitle(\"crappychat_reloaded\")\n vbox = QVBoxLayout()\n vbox.addWidget(self.lineedit1)\n vbox.addWidget(self.browser)\n vbox.addWidget(self.lineedit)\n self.setLayout(vbox) \n \n self.setGeometry(300, 300, 900, 600) \n self.show()\n self.lineedit.setFocus()\n self.lineedit.returnPressed.connect(self.updateUi)\n \n self.messageQueue = queue.Queue()\n t = Thread(name='updateThread', target=self.updateBrowser)\n t.start()\n \n def updateUi(self):\n try:\n now = datetime.datetime.utcnow().strftime('%Y-%m-%d - %H:%M:%S')\n myName = self.lineedit1.text()\n text = self.lineedit.text()\n timeUser = f'{now} {myName}'\n additionalEntries={timeUser:text}\n 
self.messageQueue.put(additionalEntries)\n \n #self.browser.append(f\"<b>{timeUser}</b>: {text}\") \n self.lineedit.clear()\n \n except:\n self.browser.append(\"<font color=red>{0} is invalid!</font>\"\n .format(text)) \n \n def updateBrowser(self):\n lastState={}\n while True:\n try:\n if not self.messageQueue.empty():\n newEntries = self.messageQueue.get()\n myMutable.insertEntries(infoData,newEntries)\n additionalEntries, lastState = getNewEntries(lastState,myMutable.getCurrentState(infoData))\n for entry in additionalEntries:\n entry_string = entry.decode()\n value_string = additionalEntries[entry].decode()\n self.browser.append(f\"<b>{entry_string}</b>: {value_string}\")\n self.browser.ensureCursorVisible()\n except:\n pass\n time.sleep(2)\n \n \n \nif __name__ == '__main__':\n \n app = QApplication(sys.argv)\n ex = Example()\n sys.exit(app.exec_())", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ] ]
d01acb0db2c5bfc7c9fce7d06392f95f8d359533
139,765
ipynb
Jupyter Notebook
docs/ipynb/13-tutorial-skyrmion.ipynb
spinachslayer420/MSE598-SAF-Project
4719afdb6e90e9deb91268fe9a88e1cbf2b34a86
[ "BSD-3-Clause" ]
null
null
null
docs/ipynb/13-tutorial-skyrmion.ipynb
spinachslayer420/MSE598-SAF-Project
4719afdb6e90e9deb91268fe9a88e1cbf2b34a86
[ "BSD-3-Clause" ]
null
null
null
docs/ipynb/13-tutorial-skyrmion.ipynb
spinachslayer420/MSE598-SAF-Project
4719afdb6e90e9deb91268fe9a88e1cbf2b34a86
[ "BSD-3-Clause" ]
null
null
null
75.630411
46,412
0.809266
[ [ [ "# Tutorial 13: Skyrmion in a disk\n\n> Interactive online tutorial:\n> [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/ubermag/oommfc/master?filepath=docs%2Fipynb%2Findex.ipynb)", "_____no_output_____" ], [ "In this tutorial, we compute and relax a skyrmion in a interfacial-DMI material in a confined disk like geometry.", "_____no_output_____" ] ], [ [ "import oommfc as oc\nimport discretisedfield as df\nimport micromagneticmodel as mm", "_____no_output_____" ] ], [ [ "We define mesh in cuboid through corner points `p1` and `p2`, and discretisation cell size `cell`.", "_____no_output_____" ] ], [ [ "region = df.Region(p1=(-50e-9, -50e-9, 0), p2=(50e-9, 50e-9, 10e-9))\nmesh = df.Mesh(region=region, cell=(5e-9, 5e-9, 5e-9))", "_____no_output_____" ] ], [ [ "The mesh we defined is:", "_____no_output_____" ] ], [ [ "%matplotlib inline\nmesh.k3d()", "_____no_output_____" ] ], [ [ "Now, we can define the system object by first setting up the Hamiltonian:", "_____no_output_____" ] ], [ [ "system = mm.System(name='skyrmion')\n\nsystem.energy = (mm.Exchange(A=1.6e-11)\n + mm.DMI(D=4e-3, crystalclass='Cnv') \n + mm.UniaxialAnisotropy(K=0.51e6, u=(0, 0, 1)) \n + mm.Demag()\n + mm.Zeeman(H=(0, 0, 2e5)))", "_____no_output_____" ] ], [ [ "Disk geometry is set up be defining the saturation magnetisation (norm of the magnetisation field). For that, we define a function:", "_____no_output_____" ] ], [ [ "Ms = 1.1e6\n\ndef Ms_fun(pos):\n \"\"\"Function to set magnitude of magnetisation: zero outside cylindric shape, \n Ms inside cylinder.\n \n Cylinder radius is 50nm.\n \n \"\"\"\n x, y, z = pos\n if (x**2 + y**2)**0.5 < 50e-9:\n return Ms\n else:\n return 0", "_____no_output_____" ] ], [ [ "And the second function we need is the function to definr the initial magnetisation which is going to relax to skyrmion.", "_____no_output_____" ] ], [ [ "def m_init(pos):\n \"\"\"Function to set initial magnetisation direction: \n -z inside cylinder (r=10nm),\n +z outside cylinder.\n y-component to break symmetry.\n \n \"\"\"\n x, y, z = pos\n if (x**2 + y**2)**0.5 < 10e-9:\n return (0, 0, -1)\n else:\n return (0, 0, 1)\n \n\n# create system with above geometry and initial magnetisation\nsystem.m = df.Field(mesh, dim=3, value=m_init, norm=Ms_fun)", "_____no_output_____" ] ], [ [ "The geometry is now:", "_____no_output_____" ] ], [ [ "system.m.norm.k3d_nonzero()", "_____no_output_____" ] ], [ [ "and the initial magnetsation is:", "_____no_output_____" ] ], [ [ "system.m.plane('z').mpl()", "/Users/marijanbeg/miniconda3/envs/ubermag-dev/lib/python3.8/site-packages/matplotlib/quiver.py:715: RuntimeWarning: divide by zero encountered in double_scalars\n length = a * (widthu_per_lenu / (self.scale * self.width))\n/Users/marijanbeg/miniconda3/envs/ubermag-dev/lib/python3.8/site-packages/matplotlib/quiver.py:715: RuntimeWarning: invalid value encountered in multiply\n length = a * (widthu_per_lenu / (self.scale * self.width))\n/Users/marijanbeg/miniconda3/envs/ubermag-dev/lib/python3.8/site-packages/matplotlib/quiver.py:767: RuntimeWarning: invalid value encountered in less\n short = np.repeat(length < minsh, 8, axis=1)\n/Users/marijanbeg/miniconda3/envs/ubermag-dev/lib/python3.8/site-packages/matplotlib/quiver.py:780: RuntimeWarning: invalid value encountered in less\n tooshort = length < self.minlength\n" ] ], [ [ "Finally we can minimise the energy and plot the magnetisation.", "_____no_output_____" ] ], [ [ "# minimize the energy\nmd = oc.MinDriver()\nmd.drive(system)\n\n# Plot relaxed 
configuration: vectors in z-plane\nsystem.m.plane('z').mpl()", "Running OOMMF (ExeOOMMFRunner) [2020/06/12 00:57]... (1.9 s)\n" ], [ "# Plot z-component only:\nsystem.m.z.plane('z').mpl()", "_____no_output_____" ], [ "# 3d-plot of z-component\nsystem.m.z.k3d_scalar(filter_field=system.m.norm)", "_____no_output_____" ] ], [ [ "Finally we can sample and plot the magnetisation along the line:", "_____no_output_____" ] ], [ [ "system.m.z.line(p1=(-49e-9, 0, 0), p2=(49e-9, 0, 0), n=20).mpl()", "_____no_output_____" ] ], [ [ "## Other\n\nMore details on various functionality can be found in the [API Reference](https://oommfc.readthedocs.io/en/latest/).", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
d01ada551a88f5e51d6ec9a4bdcb0769feb72cb0
5,785
ipynb
Jupyter Notebook
tasks/reader/Deployment.ipynb
platiagro/tasks
a6103cb101eeed26381cdb170a11d0e1dc53d3ad
[ "MIT", "Apache-2.0", "BSD-3-Clause" ]
2
2021-02-16T12:39:57.000Z
2021-07-21T11:36:39.000Z
tasks/reader/Deployment.ipynb
platiagro/tasks
a6103cb101eeed26381cdb170a11d0e1dc53d3ad
[ "MIT", "Apache-2.0", "BSD-3-Clause" ]
20
2020-10-26T18:05:27.000Z
2021-11-30T19:05:22.000Z
tasks/reader/Deployment.ipynb
platiagro/tasks
a6103cb101eeed26381cdb170a11d0e1dc53d3ad
[ "MIT", "Apache-2.0", "BSD-3-Clause" ]
7
2020-10-13T18:12:22.000Z
2021-08-13T19:16:21.000Z
37.564935
379
0.562143
[ [ [ "# Reader - Implantação\n\nEste componente utiliza um modelo de QA pré-treinado em Português com o dataset SQuAD v1.1, é um modelo de domínio público disponível em [Hugging Face](https://huggingface.co/pierreguillou/bert-large-cased-squad-v1.1-portuguese).<br>\n\nSeu objetivo é encontrar a resposta de uma ou mais perguntas de acordo com uma lista de contextos distintos.\n\nA tabela de dados de entrada deve possuir uma coluna de contextos, em que cada linha representa um contexto diferente, e uma coluna de perguntas em que cada linha representa uma pergunta a ser realizada. Note que para cada pergunta serão utilizados todos os contextos fornecidos para realização da inferência, e portanto, podem haver bem mais contextos do que perguntas.\n\nObs: Este componente utiliza recursos da internet, portanto é importante estar conectado à rede para que este componente funcione corretamente.<br>\n### **Em caso de dúvidas, consulte os [tutoriais da PlatIAgro](https://platiagro.github.io/tutorials/).**", "_____no_output_____" ], [ "## Declaração de Classe para Predições em Tempo Real\n\nA tarefa de implantação cria um serviço REST para predições em tempo-real.<br>\nPara isso você deve criar uma classe `Model` que implementa o método `predict`.", "_____no_output_____" ] ], [ [ "%%writefile Model.py\n\nimport joblib\nimport numpy as np\nimport pandas as pd\nfrom reader import Reader\n\nclass Model:\n def __init__(self):\n self.loaded = False\n\n def load(self):\n \n # Load artifacts\n artifacts = joblib.load(\"/tmp/data/reader.joblib\")\n self.model_parameters = artifacts[\"model_parameters\"]\n self.inference_parameters = artifacts[\"inference_parameters\"]\n \n # Initialize reader\n self.reader = Reader(**self.model_parameters)\n \n # Set model loaded\n self.loaded = True\n print(\"Loaded model\")\n \n def class_names(self):\n column_names = list(self.inference_parameters['output_columns'])\n return column_names\n\n def predict(self, X, feature_names, meta=None):\n if not self.loaded:\n self.load()\n \n # Convert to dataframe\n if feature_names != []:\n df = pd.DataFrame(X, columns = feature_names)\n df = df[self.inference_parameters['input_columns']]\n else:\n df = pd.DataFrame(X, columns = self.inference_parameters['input_columns'])\n \n # Predict answers #\n\n # Iterate over dataset\n for idx, row in df.iterrows():\n\n # Get question\n question = row[self.inference_parameters['question_column_name']]\n\n # Get context\n context = row[self.inference_parameters['context_column_name']]\n\n # Make prediction\n answer, probability, _ = self.reader([question], [context])\n\n # Save to df\n df.at[idx, self.inference_parameters['answer_column_name']] = answer[0]\n df.at[idx, self.inference_parameters['proba_column_name']] = probability[0]\n\n # Retrieve Only Best Answer #\n\n # Initializate best df\n best_df = pd.DataFrame(columns=df.columns)\n\n # Get unique questions\n unique_questions = df[self.inference_parameters['question_column_name']].unique()\n\n # Iterate over each unique question\n for question in unique_questions:\n\n # Filter df\n question_df = df[df[self.inference_parameters['question_column_name']] == question]\n\n # Sort by score (descending)\n question_df = question_df.sort_values(by=self.inference_parameters['proba_column_name'], ascending=False).reset_index(drop=True)\n\n # Append best ansewer to output df\n best_df = pd.concat((best_df,pd.DataFrame(question_df.loc[0]).T)).reset_index(drop=True)\n \n if self.inference_parameters['keep_best'] == 'sim':\n return best_df.values\n 
else:\n return df.values", "Overwriting Model.py\n" ] ] ]
[ "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ] ]
d01adf518daccca71b47da8482f0e2946bc01cab
164,842
ipynb
Jupyter Notebook
analysis/simulation/estimator_validation.ipynb
yelabucsf/scrna-parameter-estimation
218ef38b87f8d777d5abcb04913212cbcb21ecb1
[ "MIT" ]
2
2021-03-17T20:31:54.000Z
2022-03-17T19:24:37.000Z
analysis/simulation/estimator_validation.ipynb
yelabucsf/scrna-parameter-estimation
218ef38b87f8d777d5abcb04913212cbcb21ecb1
[ "MIT" ]
1
2021-08-23T20:55:07.000Z
2021-08-23T20:55:07.000Z
analysis/simulation/estimator_validation.ipynb
yelabucsf/scrna-parameter-estimation
218ef38b87f8d777d5abcb04913212cbcb21ecb1
[ "MIT" ]
1
2020-04-06T05:43:31.000Z
2020-04-06T05:43:31.000Z
170.291322
32,536
0.884799
[ [ [ "# Estimator validation\n\nThis notebook contains code to generate Figure 2 of the paper. \n\nThis notebook also serves to compare the estimates of the re-implemented scmemo with sceb package from Vasilis. \n", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport matplotlib.pyplot as plt\nimport scanpy as sc\nimport scipy as sp\nimport itertools\nimport numpy as np\nimport scipy.stats as stats\nfrom scipy.integrate import dblquad\nimport seaborn as sns\nfrom statsmodels.stats.multitest import fdrcorrection\nimport imp\npd.options.display.max_rows = 999\npd.set_option('display.max_colwidth', -1)\nimport pickle as pkl\nimport time", "_____no_output_____" ], [ "import matplotlib as mpl\nmpl.rcParams['pdf.fonttype'] = 42\nmpl.rcParams['ps.fonttype'] = 42\n\nimport matplotlib.pylab as pylab\nparams = {'legend.fontsize': 'x-small',\n 'axes.labelsize': 'medium',\n 'axes.titlesize':'medium',\n 'figure.titlesize':'medium',\n 'xtick.labelsize':'xx-small',\n 'ytick.labelsize':'xx-small'}\npylab.rcParams.update(params)\n", "_____no_output_____" ], [ "import sys\nsys.path.append('/data/home/Github/scrna-parameter-estimation/dist/schypo-0.0.0-py3.7.egg')\nimport schypo\nimport schypo.simulate as simulate", "/data/home/anaconda3/envs/single_cell/lib/python3.7/site-packages/scanpy/api/__init__.py:6: FutureWarning: \n\nIn a future version of Scanpy, `scanpy.api` will be removed.\nSimply use `import scanpy as sc` and `import scanpy.external as sce` instead.\n\n FutureWarning,\n" ], [ "import sys\nsys.path.append('/data/home/Github/single_cell_eb/')\nsys.path.append('/data/home/Github/single_cell_eb/sceb/')\nimport scdd", "_____no_output_____" ], [ "data_path = '/data/parameter_estimation/'\nfig_path = '/data/home/Github/scrna-parameter-estimation/figures/fig3/'", "_____no_output_____" ] ], [ [ "### Check 1D estimates of `sceb` with `scmemo`\n\nUsing the Poisson model. The outputs should be identical, this is for checking the implementation. ", "_____no_output_____" ] ], [ [ "data = sp.sparse.csr_matrix(simulate.simulate_transcriptomes(100, 20))\nadata = sc.AnnData(data)\nsize_factors = scdd.dd_size_factor(adata)\nNr = data.sum(axis=1).mean()", "_____no_output_____" ], [ "_, M_dd = scdd.dd_1d_moment(adata, size_factor=scdd.dd_size_factor(adata), Nr=Nr)\nvar_scdd = scdd.M_to_var(M_dd)\nprint(var_scdd)", "_____no_output_____" ], [ "imp.reload(estimator)\nmean_scmemo, var_scmemo = estimator._poisson_1d(data, data.shape[0], estimator._estimate_size_factor(data))\nprint(var_scmemo)", "_____no_output_____" ], [ "df = pd.DataFrame()\ndf['size_factor'] = size_factors\ndf['inv_size_factor'] = 1/size_factors\ndf['inv_size_factor_sq'] = 1/size_factors**2\ndf['expr'] = data[:, 0].todense().A1\nprecomputed_size_factors = df.groupby('expr')['inv_size_factor'].mean(), df.groupby('expr')['inv_size_factor_sq'].mean()", "_____no_output_____" ], [ "imp.reload(estimator)\nexpr, count = np.unique(data[:, 0].todense().A1, return_counts=True)\nprint(estimator._poisson_1d((expr, count), data.shape[0], precomputed_size_factors))", "[0.5217290008068085, 0.9860336223993191]\n" ] ], [ [ "### Check 2D estimates of `sceb` and `scmemo`\n\nUsing the Poisson model. The outputs should be identical, this is for checking the implementation. 
", "_____no_output_____" ] ], [ [ "data = sp.sparse.csr_matrix(simulate.simulate_transcriptomes(1000, 4))\nadata = sc.AnnData(data)\nsize_factors = scdd.dd_size_factor(adata)", "_____no_output_____" ], [ "mean_scdd, cov_scdd, corr_scdd = scdd.dd_covariance(adata, size_factors)\nprint(cov_scdd)", "[[ 9.66801891 -1.45902975 -1.97166503 -10.13305759]\n [ -1.45902975 3.37530982 -0.83509601 -2.76389597]\n [ -1.97166503 -0.83509601 2.51976446 -2.9553916 ]\n [-10.13305759 -2.76389597 -2.9553916 1.48619472]]\n" ], [ "imp.reload(estimator)\ncov_scmemo = estimator._poisson_cov(data, data.shape[0], size_factors, idx1=[0, 1, 2], idx2=[1, 2, 3])\nprint(cov_scmemo)", "[[ -1.45902975 -1.97166503 -10.13305759]\n [ 3.37530982 -0.83509601 -2.76389597]\n [ -0.83509601 2.51976446 -2.9553916 ]]\n" ], [ "expr, count = np.unique(data[:, :2].toarray(), return_counts=True, axis=0)\n\ndf = pd.DataFrame()\ndf['size_factor'] = size_factors\ndf['inv_size_factor'] = 1/size_factors\ndf['inv_size_factor_sq'] = 1/size_factors**2\ndf['expr1'] = data[:, 0].todense().A1\ndf['expr2'] = data[:, 1].todense().A1\n\nprecomputed_size_factors = df.groupby(['expr1', 'expr2'])['inv_size_factor'].mean(), df.groupby(['expr1', 'expr2'])['inv_size_factor_sq'].mean()", "/home/ssm-user/anaconda3/envs/single_cell/lib/python3.7/site-packages/ipykernel_launcher.py:5: RuntimeWarning: divide by zero encountered in true_divide\n \"\"\"\n/home/ssm-user/anaconda3/envs/single_cell/lib/python3.7/site-packages/ipykernel_launcher.py:6: RuntimeWarning: divide by zero encountered in true_divide\n \n" ], [ "cov_scmemo = estimator._poisson_cov((expr[:, 0], expr[:, 1], count), data.shape[0], size_factor=precomputed_size_factors)\nprint(cov_scmemo)", "-1.4590297282462616\n" ] ], [ [ "### Extract parameters from interferon dataset", "_____no_output_____" ] ], [ [ "adata = sc.read(data_path + 'interferon_filtered.h5ad')\nadata = adata[adata.obs.cell_type == 'CD4 T cells - ctrl']\ndata = adata.X.copy()\nrelative_data = data.toarray()/data.sum(axis=1)", "_____no_output_____" ], [ "q = 0.07\nx_param, z_param, Nc, good_idx = schypo.simulate.extract_parameters(adata.X, q=q, min_mean=q)", "_____no_output_____" ], [ "imp.reload(simulate)\n\ntranscriptome = simulate.simulate_transcriptomes(\n n_cells=10000, \n means=z_param[0],\n variances=z_param[1],\n corr=x_param[2],\n Nc=Nc)\nrelative_transcriptome = transcriptome/transcriptome.sum(axis=1).reshape(-1, 1)\n\nqs, captured_data = simulate.capture_sampling(transcriptome, q=q, q_sq=q**2+1e-10)", "_____no_output_____" ], [ "def qqplot(x, y, s=1):\n \n plt.scatter(\n np.quantile(x, np.linspace(0, 1, 1000)),\n np.quantile(y, np.linspace(0, 1, 1000)),\n s=s)\n\n plt.plot(x, x, lw=1, color='m')", "_____no_output_____" ], [ "plt.figure(figsize=(8, 2));\nplt.subplots_adjust(wspace=0.2);\n\nplt.subplot(1, 3, 1);\nsns.distplot(np.log(captured_data.mean(axis=0)), hist=False, label='Simulated')\nsns.distplot(np.log(data[:, good_idx].toarray().var(axis=0)), hist=False, label='Real')\nplt.xlabel('Log(mean)')\n\nplt.subplot(1, 3, 2);\nsns.distplot(np.log(captured_data.var(axis=0)), hist=False)\nsns.distplot(np.log(data[:, good_idx].toarray().var(axis=0)), hist=False)\nplt.xlabel('Log(variance)')\n\nplt.subplot(1, 3, 3);\nsns.distplot(np.log(captured_data.sum(axis=1)), hist=False)\nsns.distplot(np.log(data.toarray().sum(axis=1)), hist=False)\nplt.xlabel('Log(total UMI count)')\n\nplt.savefig(figpath + 'simulation_stats.png', bbox_inches='tight')", "_____no_output_____" ] ], [ [ "### Compare datasets generated by Poisson and 
hypergeometric processes", "_____no_output_____" ] ], [ [ "_, poi_captured = simulate.capture_sampling(transcriptome, q=q, process='poisson')\n_, hyper_captured = simulate.capture_sampling(transcriptome, q=q, process='hyper')", "_____no_output_____" ], [ "q_list = [0.05, 0.1, 0.2, 0.3, 0.5]", "_____no_output_____" ], [ "plt.figure(figsize=(8, 2))\nplt.subplots_adjust(wspace=0.3)\nfor idx, q in enumerate(q_list):\n \n _, poi_captured = simulate.capture_sampling(transcriptome, q=q, process='poisson')\n _, hyper_captured = simulate.capture_sampling(transcriptome, q=q, process='hyper')\n relative_poi_captured = poi_captured/poi_captured.sum(axis=1).reshape(-1, 1)\n relative_hyper_captured = hyper_captured/hyper_captured.sum(axis=1).reshape(-1, 1)\n \n poi_corr = np.corrcoef(relative_poi_captured, rowvar=False)\n hyper_corr = np.corrcoef(relative_hyper_captured, rowvar=False)\n\n sample_idx = np.random.choice(poi_corr.ravel().shape[0], 100000)\n \n plt.subplot(1, len(q_list), idx+1)\n plt.scatter(poi_corr.ravel()[sample_idx], hyper_corr.ravel()[sample_idx], s=1, alpha=1)\n plt.plot([-1, 1], [-1, 1], 'm', lw=1)\n# plt.xlim([-0.3, 0.4])\n# plt.ylim([-0.3, 0.4])\n \n if idx != 0:\n plt.yticks([])\n plt.title('q={}'.format(q))\nplt.savefig(figpath + 'poi_vs_hyp_sim_corr.png', bbox_inches='tight')", "_____no_output_____" ] ], [ [ "### Compare Poisson vs HG estimators", "_____no_output_____" ] ], [ [ "def compare_esimators(q, plot=False, true_data=None, var_q=1e-10):\n \n q_sq = var_q + q**2\n \n true_data = schypo.simulate.simulate_transcriptomes(1000, 1000, correlated=True) if true_data is None else true_data\n true_relative_data = true_data / true_data.sum(axis=1).reshape(-1, 1)\n\n qs, captured_data = schypo.simulate.capture_sampling(true_data, q, q_sq)\n Nr = captured_data.sum(axis=1).mean()\n captured_relative_data = captured_data/captured_data.sum(axis=1).reshape(-1, 1)\n adata = sc.AnnData(sp.sparse.csr_matrix(captured_data))\n sf = schypo.estimator._estimate_size_factor(adata.X, 'hyper_relative', total=True)\n\n good_idx = (captured_data.mean(axis=0) > q)\n\n # True moments\n m_true, v_true, corr_true = true_relative_data.mean(axis=0), true_relative_data.var(axis=0), np.corrcoef(true_relative_data, rowvar=False)\n rv_true = v_true/m_true**2#schypo.estimator._residual_variance(m_true, v_true, schypo.estimator._fit_mv_regressor(m_true, v_true))\n \n # Compute 1D moments\n m_obs, v_obs = captured_relative_data.mean(axis=0), captured_relative_data.var(axis=0)\n rv_obs = v_obs/m_obs**2#schypo.estimator._residual_variance(m_obs, v_obs, schypo.estimator._fit_mv_regressor(m_obs, v_obs))\n m_poi, v_poi = schypo.estimator._poisson_1d_relative(adata.X, size_factor=sf, n_obs=true_data.shape[0])\n rv_poi = v_poi/m_poi**2#schypo.estimator._residual_variance(m_poi, v_poi, schypo.estimator._fit_mv_regressor(m_poi, v_poi))\n m_hyp, v_hyp = schypo.estimator._hyper_1d_relative(adata.X, size_factor=sf, n_obs=true_data.shape[0], q=q)\n rv_hyp = v_hyp/m_hyp**2#schypo.estimator._residual_variance(m_hyp, v_hyp, schypo.estimator._fit_mv_regressor(m_hyp, v_hyp))\n\n # Compute 2D moments\n corr_obs = np.corrcoef(captured_relative_data, rowvar=False)\n# corr_obs = corr_obs[np.triu_indices(corr_obs.shape[0])]\n \n idx1 = np.array([i for i,j in itertools.combinations(range(adata.shape[1]), 2) if good_idx[i] and good_idx[j]])\n idx2 = np.array([j for i,j in itertools.combinations(range(adata.shape[1]), 2) if good_idx[i] and good_idx[j]])\n sample_idx = np.random.choice(idx1.shape[0], 10000)\n \n idx1 = idx1[sample_idx]\n 
idx2 = idx2[sample_idx]\n\n corr_true = corr_true[(idx1, idx2)]\n corr_obs = corr_obs[(idx1, idx2)]\n \n cov_poi = schypo.estimator._poisson_cov_relative(adata.X, n_obs=adata.shape[0], size_factor=sf, idx1=idx1, idx2=idx2)\n cov_hyp = schypo.estimator._hyper_cov_relative(adata.X, n_obs=adata.shape[0], size_factor=sf, idx1=idx1, idx2=idx2, q=q)\n corr_poi = schypo.estimator._corr_from_cov(cov_poi, v_poi[idx1], v_poi[idx2])\n corr_hyp = schypo.estimator._corr_from_cov(cov_hyp, v_hyp[idx1], v_hyp[idx2])\n\n corr_poi[np.abs(corr_poi) > 1] = np.nan\n corr_hyp[np.abs(corr_hyp) > 1] = np.nan\n\n mean_list = [m_obs, m_poi, m_hyp]\n var_list = [rv_obs, rv_poi, rv_hyp]\n corr_list = [corr_obs, corr_poi, corr_hyp]\n estimated_list = [mean_list, var_list, corr_list]\n true_list = [m_true, rv_true, corr_true]\n\n if plot:\n count = 0\n for j in range(3):\n for i in range(3):\n\n plt.subplot(3, 3, count+1)\n\n\n if i != 2:\n plt.scatter(\n np.log(true_list[i][good_idx]),\n np.log(estimated_list[i][j][good_idx]),\n s=0.1)\n plt.plot(np.log(true_list[i][good_idx]), np.log(true_list[i][good_idx]), linestyle='--', color='m')\n plt.xlim(np.log(true_list[i][good_idx]).min(), np.log(true_list[i][good_idx]).max())\n plt.ylim(np.log(true_list[i][good_idx]).min(), np.log(true_list[i][good_idx]).max())\n\n else:\n\n x = true_list[i]\n y = estimated_list[i][j]\n \n print(x.shape, y.shape)\n\n plt.scatter(\n x,\n y,\n s=0.1)\n plt.plot([-1, 1], [-1, 1],linestyle='--', color='m')\n plt.xlim(-1, 1);\n plt.ylim(-1, 1);\n \n# if not (i == j):\n# plt.yticks([]);\n# plt.xticks([]);\n \n if i == 1 or i == 0:\n \n print((np.log(true_list[i][good_idx]) > np.log(estimated_list[i][j][good_idx])).mean())\n\n count += 1\n else:\n return qs, good_idx, estimated_list, true_list", "_____no_output_____" ], [ "import matplotlib.pylab as pylab\nparams = {'legend.fontsize': 'small',\n 'axes.labelsize': 'medium',\n 'axes.titlesize':'medium',\n 'figure.titlesize':'medium',\n 'xtick.labelsize':'xx-small',\n 'ytick.labelsize':'xx-small'}\npylab.rcParams.update(params)", "_____no_output_____" ] ], [ [ "imp.reload(simulate)\n\nq = 0.4\nplt.figure(figsize=(4, 4))\nplt.subplots_adjust(wspace=0.5, hspace=0.5)\ntrue_data = simulate.simulate_transcriptomes(n_cells=10000, means=z_param[0], variances=z_param[1],\n corr=x_param[2],\n Nc=Nc)\ncompare_esimators(q, plot=True, true_data=true_data)\nplt.savefig(fig_path + 'poi_vs_hyper_scatter_2.png', bbox_inches='tight')", "_____no_output_____" ] ], [ [ "true_data = schypo.simulate.simulate_transcriptomes(n_cells=10000, means=z_param[0], variances=z_param[1], Nc=Nc)", "_____no_output_____" ], [ "q = 0.025\nplt.figure(figsize=(4, 4))\nplt.subplots_adjust(wspace=0.5, hspace=0.5)\ncompare_esimators(q, plot=True, true_data=true_data)\nplt.savefig(fig_path + 'poi_vs_hyper_scatter_rv_2.5.png', bbox_inches='tight', dpi=1200)", "/data/home/anaconda3/envs/single_cell/lib/python3.7/site-packages/ipykernel_launcher.py:47: RuntimeWarning: invalid value encountered in greater\n/data/home/anaconda3/envs/single_cell/lib/python3.7/site-packages/ipykernel_launcher.py:48: RuntimeWarning: invalid value encountered in greater\n/data/home/anaconda3/envs/single_cell/lib/python3.7/site-packages/ipykernel_launcher.py:67: RuntimeWarning: invalid value encountered in log\n/data/home/anaconda3/envs/single_cell/lib/python3.7/site-packages/ipykernel_launcher.py:94: RuntimeWarning: invalid value encountered in log\n/data/home/anaconda3/envs/single_cell/lib/python3.7/site-packages/ipykernel_launcher.py:94: RuntimeWarning: invalid 
value encountered in greater\n" ], [ "q = 0.4\nplt.figure(figsize=(4, 4))\nplt.subplots_adjust(wspace=0.5, hspace=0.5)\ncompare_esimators(q, plot=True, true_data=true_data)\nplt.savefig(fig_path + 'poi_vs_hyper_scatter_rv_40.png', bbox_inches='tight', dpi=1200)", "/data/home/anaconda3/envs/single_cell/lib/python3.7/site-packages/ipykernel_launcher.py:47: RuntimeWarning: invalid value encountered in greater\n" ], [ "def compute_mse(x, y, log=True):\n \n if log:\n return np.nanmean(np.abs(np.log(x)-np.log(y)))\n else:\n return np.nanmean(np.abs(x-y))\n\ndef concordance(x, y, log=True):\n \n if log:\n a = np.log(x)\n b = np.log(y)\n else:\n a = x\n b = y\n cond = np.isfinite(a) & np.isfinite(b)\n a = a[cond]\n b = b[cond]\n cmat = np.cov(a, b)\n return 2*cmat[0,1]/(cmat[0,0] + cmat[1,1] + (a.mean()-b.mean())**2)\n \nm_mse_list, v_mse_list, c_mse_list = [], [], []\n# true_data = schypo.simulate.simulate_transcriptomes(n_cells=10000, means=z_param[0], variances=z_param[1],\n# Nc=Nc)\nq_list = [0.01, 0.025, 0.1, 0.15, 0.3, 0.5, 0.7, 0.99]\nqs_list = []\nfor q in q_list:\n qs, good_idx, est, true = compare_esimators(q, plot=False, true_data=true_data)\n qs_list.append(qs)\n m_mse_list.append([concordance(x[good_idx], true[0][good_idx]) for x in est[0]])\n v_mse_list.append([concordance(x[good_idx], true[1][good_idx]) for x in est[1]])\n c_mse_list.append([concordance(x, true[2], log=False) for x in est[2]])\n \nm_mse_list, v_mse_list, c_mse_list = np.array(m_mse_list), np.array(v_mse_list), np.array(c_mse_list)", "/data/home/anaconda3/envs/single_cell/lib/python3.7/site-packages/ipykernel_launcher.py:47: RuntimeWarning: invalid value encountered in greater\n/data/home/anaconda3/envs/single_cell/lib/python3.7/site-packages/ipykernel_launcher.py:48: RuntimeWarning: invalid value encountered in greater\n/data/home/anaconda3/envs/single_cell/lib/python3.7/site-packages/ipykernel_launcher.py:11: RuntimeWarning: invalid value encountered in log\n # This is added back by InteractiveShellApp.init_path()\n/data/home/anaconda3/envs/single_cell/lib/python3.7/site-packages/ipykernel_launcher.py:47: RuntimeWarning: invalid value encountered in greater\n/data/home/anaconda3/envs/single_cell/lib/python3.7/site-packages/ipykernel_launcher.py:48: RuntimeWarning: invalid value encountered in greater\n/data/home/anaconda3/envs/single_cell/lib/python3.7/site-packages/ipykernel_launcher.py:11: RuntimeWarning: invalid value encountered in log\n # This is added back by InteractiveShellApp.init_path()\n/data/home/anaconda3/envs/single_cell/lib/python3.7/site-packages/ipykernel_launcher.py:47: RuntimeWarning: invalid value encountered in greater\n/data/home/anaconda3/envs/single_cell/lib/python3.7/site-packages/ipykernel_launcher.py:11: RuntimeWarning: invalid value encountered in log\n # This is added back by InteractiveShellApp.init_path()\n/data/home/anaconda3/envs/single_cell/lib/python3.7/site-packages/ipykernel_launcher.py:47: RuntimeWarning: invalid value encountered in greater\n/data/home/anaconda3/envs/single_cell/lib/python3.7/site-packages/ipykernel_launcher.py:11: RuntimeWarning: invalid value encountered in log\n # This is added back by InteractiveShellApp.init_path()\n/data/home/anaconda3/envs/single_cell/lib/python3.7/site-packages/ipykernel_launcher.py:47: RuntimeWarning: invalid value encountered in greater\n/data/home/anaconda3/envs/single_cell/lib/python3.7/site-packages/ipykernel_launcher.py:11: RuntimeWarning: invalid value encountered in log\n # This is added back by 
InteractiveShellApp.init_path()\n/data/home/anaconda3/envs/single_cell/lib/python3.7/site-packages/ipykernel_launcher.py:47: RuntimeWarning: invalid value encountered in greater\n/data/home/anaconda3/envs/single_cell/lib/python3.7/site-packages/ipykernel_launcher.py:47: RuntimeWarning: invalid value encountered in greater\n/data/home/anaconda3/envs/single_cell/lib/python3.7/site-packages/ipykernel_launcher.py:47: RuntimeWarning: invalid value encountered in greater\n/data/home/anaconda3/envs/single_cell/lib/python3.7/site-packages/ipykernel_launcher.py:11: RuntimeWarning: invalid value encountered in log\n # This is added back by InteractiveShellApp.init_path()\n" ], [ "import matplotlib.pylab as pylab\nparams = {'legend.fontsize': 'small',\n 'axes.labelsize': 'medium',\n 'axes.titlesize':'medium',\n 'figure.titlesize':'medium',\n 'xtick.labelsize':'small',\n 'ytick.labelsize':'small'}\npylab.rcParams.update(params)\n\nplt.figure(figsize=(8, 3))\nplt.subplots_adjust(wspace=0.5)\n\nplt.subplot(1, 3, 1)\nplt.plot(q_list[1:], m_mse_list[:, 0][1:], '-o')\n# plt.legend(['Naive,\\nPoisson,\\nHG'])\nplt.ylabel('CCC log(mean)')\nplt.xlabel('overall UMI efficiency (q)')\n\nplt.subplot(1, 3, 2)\nplt.plot(q_list[2:], v_mse_list[:, 0][2:], '-o')\nplt.plot(q_list[2:], v_mse_list[:, 1][2:], '-o')\nplt.plot(q_list[2:], v_mse_list[:, 2][2:], '-o')\nplt.legend(['Naive', 'Poisson', 'HG'], ncol=3, loc='upper center', bbox_to_anchor=(0.4,1.15))\nplt.ylabel('CCC log(variance)')\nplt.xlabel('overall UMI efficiency (q)')\n\nplt.subplot(1, 3, 3)\nplt.plot(q_list[2:], c_mse_list[:, 0][2:], '-o')\nplt.plot(q_list[2:], c_mse_list[:, 1][2:], '-o')\nplt.plot(q_list[2:], c_mse_list[:, 2][2:], '-o')\n# plt.legend(['Naive', 'Poisson', 'HG'])\nplt.ylabel('CCC correlation')\nplt.xlabel('overall UMI efficiency (q)')\n\nplt.savefig(fig_path + 'poi_vs_hyper_rv_ccc.pdf', bbox_inches='tight')", "_____no_output_____" ], [ "plt.figure(figsize=(1, 1.3))\nplt.plot(q_list, v_mse_list[:, 0], '-o', ms=4)\nplt.plot(q_list, v_mse_list[:, 1], '-o', ms=4)\nplt.plot(q_list, v_mse_list[:, 2], '-o', ms=4)\n\nplt.savefig(fig_path + 'poi_vs_hyper_ccc_var_rv_inset.pdf', bbox_inches='tight')", "_____no_output_____" ], [ "plt.figure(figsize=(1, 1.3))\nplt.plot(q_list, c_mse_list[:, 0], '-o', ms=4)\nplt.plot(q_list, c_mse_list[:, 1], '-o', ms=4)\nplt.plot(q_list, c_mse_list[:, 2], '-o', ms=4)\nplt.savefig(fig_path + 'poi_vs_hyper_ccc_corr_inset.pdf', bbox_inches='tight')", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "raw", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "raw" ], [ "code", "code", "code", "code", "code", "code", "code" ] ]
d01ae92fb89aaa9b968a6b5d1950273fd2dffb3a
1,389
ipynb
Jupyter Notebook
homework/homework-for-week-14-regex_BLANK.ipynb
sandeepmj/fall21-students-practical-python
d808afc955ce07fb5d9593069f88db819c0f1c45
[ "MIT" ]
null
null
null
homework/homework-for-week-14-regex_BLANK.ipynb
sandeepmj/fall21-students-practical-python
d808afc955ce07fb5d9593069f88db819c0f1c45
[ "MIT" ]
null
null
null
homework/homework-for-week-14-regex_BLANK.ipynb
sandeepmj/fall21-students-practical-python
d808afc955ce07fb5d9593069f88db819c0f1c45
[ "MIT" ]
1
2021-11-01T01:41:39.000Z
2021-11-01T01:41:39.000Z
22.047619
133
0.558675
[ [ [ "## Find key data points from multiple documents\n\nDownload <a href=\"https://drive.google.com/file/d/1V6hmJhCqMyR65e4tal1Q70Lc_jvtZm0F/view?usp=sharing\">these documents</a>.\n\nThey all have an identical structure to them.\n\nUsing regex, capture and export as a CSV the following data points in all the documents:\n\n- The case number.\n- Whether the decision was to accept or reject the appeal.\n- The request date.\n- The decision date.\n- Source file name\n\n\n", "_____no_output_____" ] ] ]
[ "markdown" ]
[ [ "markdown" ] ]
d01afa40a330e5820d065066efaaaf0cb02c25a7
93,372
ipynb
Jupyter Notebook
notebooks/Dataset D - Contraceptive Method Choice/Synthetic data evaluation/Utility/TRTR and TSTR Results Comparison.ipynb
Vicomtech/STDG-evaluation-metrics
4662c2cc60f7941723a876a6032b411e40f5ec62
[ "MIT" ]
4
2021-08-20T18:21:09.000Z
2022-01-12T09:30:29.000Z
notebooks/Dataset D - Contraceptive Method Choice/Synthetic data evaluation/Utility/TRTR and TSTR Results Comparison.ipynb
Vicomtech/STDG-evaluation-metrics
4662c2cc60f7941723a876a6032b411e40f5ec62
[ "MIT" ]
null
null
null
notebooks/Dataset D - Contraceptive Method Choice/Synthetic data evaluation/Utility/TRTR and TSTR Results Comparison.ipynb
Vicomtech/STDG-evaluation-metrics
4662c2cc60f7941723a876a6032b411e40f5ec62
[ "MIT" ]
null
null
null
276.248521
66,872
0.905935
[ [ [ "# TRTR and TSTR Results Comparison", "_____no_output_____" ] ], [ [ "#import libraries\nimport warnings\nwarnings.filterwarnings(\"ignore\")\nimport numpy as np\nimport pandas as pd\nfrom matplotlib import pyplot as plt\n\npd.set_option('precision', 4)", "_____no_output_____" ] ], [ [ "## 1. Create empty dataset to save metrics differences", "_____no_output_____" ] ], [ [ "DATA_TYPES = ['Real','GM','SDV','CTGAN','WGANGP']\nSYNTHESIZERS = ['GM','SDV','CTGAN','WGANGP']\nml_models = ['RF','KNN','DT','SVM','MLP']", "_____no_output_____" ] ], [ [ "## 2. Read obtained results when TRTR and TSTR", "_____no_output_____" ] ], [ [ "FILEPATHS = {'Real' : 'RESULTS/models_results_real.csv',\n 'GM' : 'RESULTS/models_results_gm.csv',\n 'SDV' : 'RESULTS/models_results_sdv.csv',\n 'CTGAN' : 'RESULTS/models_results_ctgan.csv',\n 'WGANGP' : 'RESULTS/models_results_wgangp.csv'}", "_____no_output_____" ], [ "#iterate over all datasets filepaths and read each dataset\nresults_all = dict()\nfor name, path in FILEPATHS.items() :\n results_all[name] = pd.read_csv(path, index_col='model')\nresults_all", "_____no_output_____" ] ], [ [ "## 3. Calculate differences of models", "_____no_output_____" ] ], [ [ "metrics_diffs_all = dict()\nreal_metrics = results_all['Real']\ncolumns = ['data','accuracy_diff','precision_diff','recall_diff','f1_diff']\nmetrics = ['accuracy','precision','recall','f1']\n\nfor name in SYNTHESIZERS :\n syn_metrics = results_all[name]\n metrics_diffs_all[name] = pd.DataFrame(columns = columns)\n for model in ml_models :\n real_metrics_model = real_metrics.loc[model]\n syn_metrics_model = syn_metrics.loc[model]\n data = [model]\n for m in metrics :\n data.append(abs(real_metrics_model[m] - syn_metrics_model[m]))\n metrics_diffs_all[name] = metrics_diffs_all[name].append(pd.DataFrame([data], columns = columns))\nmetrics_diffs_all", "_____no_output_____" ] ], [ [ "## 4. Compare absolute differences", "_____no_output_____" ], [ "### 4.1. 
Barplots for each metric", "_____no_output_____" ] ], [ [ "metrics = ['accuracy', 'precision', 'recall', 'f1']\nmetrics_diff = ['accuracy_diff', 'precision_diff', 'recall_diff', 'f1_diff']\ncolors = ['tab:blue', 'tab:orange', 'tab:green', 'tab:red', 'tab:purple']\n\nbarwidth = 0.15\n\nfig, axs = plt.subplots(nrows=1, ncols=4, figsize=(15, 2.5))\naxs_idxs = range(4)\nidx = dict(zip(metrics + metrics_diff,axs_idxs))\n\nfor i in range(0,len(metrics)) :\n data = dict()\n y_pos = dict()\n y_pos[0] = np.arange(len(ml_models))\n ax = axs[idx[metrics[i]]]\n \n for k in range(0,len(DATA_TYPES)) :\n generator_data = results_all[DATA_TYPES[k]] \n data[k] = [0, 0, 0, 0, 0]\n \n for p in range(0,len(ml_models)) :\n data[k][p] = generator_data[metrics[i]].iloc[p]\n \n ax.bar(y_pos[k], data[k], color=colors[k], width=barwidth, edgecolor='white', label=DATA_TYPES[k])\n y_pos[k+1] = [x + barwidth for x in y_pos[k]]\n \n ax.set_xticks([r + barwidth*2 for r in range(len(ml_models))])\n ax.set_xticklabels([])\n ax.set_xticklabels(ml_models, fontsize=10)\n ax.set_title(metrics[i], fontsize=12)\n \nax.legend(DATA_TYPES, ncol=5, bbox_to_anchor=(-0.3, -0.2))\nfig.tight_layout()\n#fig.suptitle('Models performance comparison Barplots (TRTR and TSTR) \\n Dataset D - Contraceptive Method Choice', fontsize=18)\nfig.savefig('RESULTS/MODELS_METRICS_BARPLOTS.svg', bbox_inches='tight')", "_____no_output_____" ], [ "metrics = ['accuracy_diff', 'precision_diff', 'recall_diff', 'f1_diff']\ncolors = ['tab:orange', 'tab:green', 'tab:red', 'tab:purple']\n\nfig, axs = plt.subplots(nrows=1, ncols=4, figsize=(15,2.5))\naxs_idxs = range(4)\nidx = dict(zip(metrics,axs_idxs))\n\nfor i in range(0,len(metrics)) :\n data = dict()\n ax = axs[idx[metrics[i]]]\n \n for k in range(0,len(SYNTHESIZERS)) :\n generator_data = metrics_diffs_all[SYNTHESIZERS[k]] \n data[k] = [0, 0, 0, 0, 0]\n \n for p in range(0,len(ml_models)) :\n data[k][p] = generator_data[metrics[i]].iloc[p]\n \n ax.plot(data[k], 'o-', color=colors[k], label=SYNTHESIZERS[k])\n \n ax.set_xticks(np.arange(len(ml_models)))\n ax.set_xticklabels(ml_models, fontsize=10)\n ax.set_title(metrics[i], fontsize=12)\n ax.set_ylim(bottom=-0.01, top=0.28)\n ax.grid()\n \nax.legend(SYNTHESIZERS, ncol=5, bbox_to_anchor=(-0.4, -0.2))\nfig.tight_layout()\n#fig.suptitle('Models performance comparison line plots (TRTR and TSTR) \\n Dataset D - Contraceptive Method Choice', fontsize=18)\nfig.savefig('RESULTS/MODELS_METRICS_DIFFERENCES.svg', bbox_inches='tight')", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ] ]
d01b06191b0028e3ed7c2abbef73c36feb48fc99
720,959
ipynb
Jupyter Notebook
research_notebooks/generate_regression_sp.ipynb
carolinesadlerr/wiggum
e9b80fcdb8d50c6dd8ee3f75b9054e07b15a6163
[ "MIT" ]
5
2020-04-04T23:00:15.000Z
2021-09-05T21:47:43.000Z
research_notebooks/generate_regression_sp.ipynb
carolinesadlerr/wiggum
e9b80fcdb8d50c6dd8ee3f75b9054e07b15a6163
[ "MIT" ]
62
2019-12-02T19:08:35.000Z
2022-03-30T21:30:42.000Z
research_notebooks/generate_regression_sp.ipynb
carolinesadlerr/wiggum
e9b80fcdb8d50c6dd8ee3f75b9054e07b15a6163
[ "MIT" ]
3
2021-02-19T16:06:29.000Z
2022-03-06T22:25:58.000Z
523.952762
52,004
0.937944
[ [ [ "# Generating Simpson's Paradox\n\nWe have been maually setting, but now we should also be able to generate it more programatically. his notebook will describe how we develop some functions that will be included in the `sp_data_util` package.", "_____no_output_____" ] ], [ [ "# %load code/env\n# standard imports we use throughout the project\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport scipy.stats as stats\nimport matplotlib.pyplot as plt\n\nimport wiggum as wg\nimport sp_data_util as spdata\nfrom sp_data_util import sp_plot", "_____no_output_____" ] ], [ [ "We have been thinking of SP hrough gaussian mixture data, so we'll first work wih that. To cause SP we need he clusters to have an opposite trend of the per cluster covariance. ", "_____no_output_____" ] ], [ [ "# setup\nr_clusters = -.6 # correlation coefficient of clusters\ncluster_spread = .8 # pearson correlation of means\np_sp_clusters = .5 # portion of clusters with SP \nk = 5 # number of clusters\ncluster_size = [2,3]\ndomain_range = [0, 20, 0, 20]\nN = 200 # number of points\np_clusters = [1.0/k]*k", "_____no_output_____" ], [ "# keep all means in the middle 80%\nmu_trim = .2\n\n# sample means\ncenter = [np.mean(domain_range[:2]),np.mean(domain_range[2:])]\nmu_transform = np.repeat(np.diff(domain_range)[[0,2]]*(mu_trim),2)\nmu_transform[[1,3]] = mu_transform[[1,3]]*-1 # sign flip every other\nmu_domain = [d + m_t for d, m_t in zip(domain_range,mu_transform)]\ncorr = [[1, cluster_spread],[cluster_spread,1]]\nd = np.sqrt(np.diag(np.diff(mu_domain)[[0,2]]))\ncov = np.dot(d,corr).dot(d)\n# sample a lot of means, just for vizualization\n# mu = np.asarray([np.random.uniform(*mu_domain[:2],size=k*5), # uniform in x\n# np.random.uniform(*mu_domain[2:],size=k*5)]).T # uniform in y\nmu = np.random.multivariate_normal(center, cov,k*50)\nsns.regplot(mu[:,0], mu[:,1])\nplt.axis(domain_range);\n# mu", "_____no_output_____" ] ], [ [ "However independent sampling isn't really very uniform and we'd like to ensure the clusters are more spread out, so we can use some post processing to thin out close ones. 
", "_____no_output_____" ] ], [ [ "mu_thin = [mu[0]] # keep the first one\np_dist = [1]\n\n# we'll use a gaussian kernel around each to filter and only the closest point matters\ndist = lambda mu_c,x: stats.norm.pdf(min(np.sum(np.square(mu_c -x),axis=1)))\n\nfor m in mu:\n p_keep = 1- dist(mu_thin,m)\n if p_keep > .99:\n mu_thin.append(m)\n p_dist.append(p_keep)\n\nmu_thin = np.asarray(mu_thin)\nsns.regplot(mu_thin[:,0], mu_thin[:,1])\nplt.axis(domain_range)", "_____no_output_____" ] ], [ [ "Now, we can sample points on top of that, also we'll only use the first k", "_____no_output_____" ] ], [ [ "sns.regplot(mu_thin[:k,0], mu_thin[:k,1])\nplt.axis(domain_range)", "_____no_output_____" ] ], [ [ "Keeping only a few, we can end up with ones in the center, but if we sort them by the distance to the ones previously selected, we get them spread out a little more", "_____no_output_____" ] ], [ [ "\n# sort by distance\nmu_sort, p_sort = zip(*sorted(zip(mu_thin,p_dist),\n key = lambda x: x[1], reverse =True))\n\nmu_sort = np.asarray(mu_sort)\nsns.regplot(mu_sort[:k,0], mu_sort[:k,1])\nplt.axis(domain_range)", "_____no_output_____" ], [ "# cluster covariance\ncluster_corr = np.asarray([[1,r_clusters],[r_clusters,1]])\ncluster_std = np.diag(np.sqrt(cluster_size))\ncluster_cov = np.dot(cluster_std,cluster_corr).dot(cluster_std)\n\n# sample from a GMM\nz = np.random.choice(k,N,p_clusters)\n\nx = np.asarray([np.random.multivariate_normal(mu_sort[z_i],cluster_cov) for z_i in z])\n\n# make a dataframe\nlatent_df = pd.DataFrame(data=x,\n columns = ['x1', 'x2'])\n\n# code cluster as color and add it a column to the dataframe\nlatent_df['color'] = z\n\nsp_plot(latent_df,'x1','x2','color')", "_____no_output_____" ] ], [ [ "We might not want all of the clusters to have the reveral though, so we can also sample the covariances", "_____no_output_____" ] ], [ [ "# cluster covariance\ncluster_std = np.diag(np.sqrt(cluster_size))\ncluster_corr_sp = np.asarray([[1,r_clusters],[r_clusters,1]]) # correlation with sp\ncluster_cov_sp = np.dot(cluster_std,cluster_corr_sp).dot(cluster_std) #cov with sp\ncluster_corr = np.asarray([[1,-r_clusters],[-r_clusters,1]]) #correlation without sp\ncluster_cov = np.dot(cluster_std,cluster_corr).dot(cluster_std) #cov wihtout sp\n\ncluster_covs = [cluster_corr_sp, cluster_corr]\n# sample the[0,1] k times\nc_sp = np.random.choice(2,k,p=[p_sp_clusters,1-p_sp_clusters])\n\n\n# sample from a GMM\nz = np.random.choice(k,N,p_clusters)\n\nx = np.asarray([np.random.multivariate_normal(mu_sort[z_i],cluster_covs[c_sp[z_i]]) for z_i in z])\n\n# make a dataframe\nlatent_df = pd.DataFrame(data=x,\n columns = ['x1', 'x2'])\n\n# code cluster as color and add it a column to the dataframe\nlatent_df['color'] = z\n\nsp_plot(latent_df,'x1','x2','color')", "_____no_output_____" ], [ "[p_sp_clusters,1-p_sp_clusters]", "_____no_output_____" ], [ "c_sp", "_____no_output_____" ] ], [ [ "We'll call this construction of SP `geometric_2d_gmm_sp` and it's included in the `sp_data_utils` module now, so it can be called as follows. We'll change the portion of clusters with SP to 1, to ensure that all are SP. 
", "_____no_output_____" ] ], [ [ "type(r_clusters)\ntype(cluster_size)\ntype(cluster_spread)\ntype(p_sp_clusters)\ntype(domain_range)\ntype(p_clusters)", "_____no_output_____" ], [ "p_sp_clusters = .9\nsp_df2 = spdata.geometric_2d_gmm_sp(r_clusters,cluster_size,cluster_spread,\n p_sp_clusters, domain_range,k,N,p_clusters)\nsp_plot(sp_df2,'x1','x2','color')", "_____no_output_____" ] ], [ [ "With this, we can start to see how the parameters control a little", "_____no_output_____" ] ], [ [ "# setup\nr_clusters = -.4 # correlation coefficient of clusters\ncluster_spread = .8 # pearson correlation of means\np_sp_clusters = .6 # portion of clusters with SP \nk = 5 # number of clusters\ncluster_size = [4,4]\ndomain_range = [0, 20, 0, 20]\nN = 200 # number of points\np_clusters = [.5, .2, .1, .1, .1]\n\nsp_df3 = spdata.geometric_2d_gmm_sp(r_clusters,cluster_size,cluster_spread,\n p_sp_clusters, domain_range,k,N,p_clusters)\nsp_plot(sp_df3,'x1','x2','color')", "_____no_output_____" ] ], [ [ "We might want to add multiple views, so we added a function that takes the same parameters or lists to allow each view to have different parameters. We'll look first at just two views with the same parameters, both as one another and as above", "_____no_output_____" ] ], [ [ "\nmany_sp_df = spdata.geometric_indep_views_gmm_sp(2,r_clusters,cluster_size,cluster_spread,p_sp_clusters,\n domain_range,k,N,p_clusters)\n\nsp_plot(many_sp_df,'x1','x2','A')\nsp_plot(many_sp_df,'x3','x4','B')\nmany_sp_df.head()", "200\n4\n" ] ], [ [ "We can also look at the pairs of variables that we did not design SP into and see that they have vey different structure", "_____no_output_____" ] ], [ [ "# f, ax_grid = plt.subplots(2,2) # , fig_size=(10,10)\n\nsp_plot(many_sp_df,'x1','x4','A')\nsp_plot(many_sp_df,'x2','x4','B')\nsp_plot(many_sp_df,'x2','x3','B')\nsp_plot(many_sp_df,'x1','x3','B')", "_____no_output_____" ] ], [ [ "And we can set up the views to be different from one another by design", "_____no_output_____" ] ], [ [ "# setup\nr_clusters = [.8, -.2] # correlation coefficient of clusters\ncluster_spread = [.8, .2] # pearson correlation of means\np_sp_clusters = [.6, 1] # portion of clusters with SP \nk = [5,3] # number of clusters\ncluster_size = [4,4]\ndomain_range = [0, 20, 0, 20]\nN = 200 # number of points\np_clusters = [[.5, .2, .1, .1, .1],[1.0/3]*3]\n\n\nmany_sp_df_diff = spdata.geometric_indep_views_gmm_sp(2,r_clusters,cluster_size,cluster_spread,p_sp_clusters,\n domain_range,k,N,p_clusters)\n\nsp_plot(many_sp_df_diff,'x1','x2','A')\nsp_plot(many_sp_df_diff,'x3','x4','B')\nmany_sp_df.head()", "200\n4\n" ] ], [ [ "And we can run our detection algorithm on this as well.", "_____no_output_____" ] ], [ [ "many_sp_df_diff_result = wg.detect_simpsons_paradox(many_sp_df_diff)\nmany_sp_df_diff_result", "_____no_output_____" ] ], [ [ "We designed in SP to occur between attributes `x1` and `x2` with respect to `A` and 2 & 3 in grouby by B, for portions fo the subgroups. We detect other occurences. 
It can be interesting to examine trends between the designed and spontaneous occurrences of SP, so we label each detected case below. ", "_____no_output_____" ] ], [ [ "designed_SP = [('x1','x2','A'),('x3','x4','B')]", "_____no_output_____" ], [ "des = []\nfor i,r in enumerate(many_sp_df_diff_result[['attr1','attr2','groupbyAttr']].values):\n if tuple(r) in designed_SP:\n des.append(i)", "_____no_output_____" ], [ "many_sp_df_diff_result['designed'] = 'no'\nmany_sp_df_diff_result.loc[des,'designed'] = 'yes'\nmany_sp_df_diff_result.head()", "_____no_output_____" ], [ "r_clusters = -.9 # correlation coefficient of clusters\ncluster_spread = .6 # pearson correlation of means\np_sp_clusters = .5 # portion of clusters with SP \nk = 5 # number of clusters\ncluster_size = [5,5]\ndomain_range = [0, 20, 0, 20]\nN = 200 # number of points\np_clusters = [1.0/k]*k\n\nmany_sp_df_diff = spdata.geometric_indep_views_gmm_sp(3,r_clusters,cluster_size,cluster_spread,p_sp_clusters,\n domain_range,k,N,p_clusters)\n\nsp_plot(many_sp_df_diff,'x1','x2','A')\nsp_plot(many_sp_df_diff,'x3','x4','B')\nsp_plot(many_sp_df_diff,'x3','x4','A')\nmany_sp_df_diff.head()", "200\n6\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
d01b14a083d171ac99efceb0a49d5077a7228da6
6,235
ipynb
Jupyter Notebook
notebooks/real_video_test.ipynb
quinngroup/ornet-reu-2018
75af0532448b2235d9295f02278a98414dc3fb4f
[ "MIT" ]
null
null
null
notebooks/real_video_test.ipynb
quinngroup/ornet-reu-2018
75af0532448b2235d9295f02278a98414dc3fb4f
[ "MIT" ]
11
2018-06-14T15:45:41.000Z
2018-07-10T19:30:25.000Z
notebooks/real_video_test.ipynb
quinngroup/ornet-reu-2018
75af0532448b2235d9295f02278a98414dc3fb4f
[ "MIT" ]
null
null
null
26.875
166
0.524619
[ [ [ "import unittest\nimport numpy as np\nimport sys\nsys.path.insert(0, '/Users/mojtaba/Downloads/ornet-reu-2018-master-2/src')\nimport raster_scan2 as raster_scan\nimport read_video", "_____no_output_____" ], [ "class RasterTest(unittest.TestCase):\n\n def manual_scan(self, video):\n \"\"\"\n Manual, loop-based implementation of raster scanning.\n (reference implementation)\n \"\"\"\n frames, height, width = video.shape\n raster = np.zeros(shape = (frames, height * width))\n\n for index, frame in enumerate(video):\n raster[index] = frame.flatten()\n\n return raster\n\n def test_rasterscan1(self):\n y = np.arange(18).reshape((3, 3, 2))\n yt = self.manual_scan(y)\n yp = raster_scan.raster_scan(y)\n np.testing.assert_array_equal(yp, yt)\n\n\n # Add another function (e.g., test_rasterscan_real) that uses read_video to read in a full video, \n # creates a ground-truth, and then uses raster_scan2 to generate a prediction. \n \n def test_rasterscan_real(self):\n y = read_video.read_video('/Users/mojtaba/Desktop/OrNet Project/DAT VIDEOS/LLO/DsRed2-HeLa_2_21_LLO_Cell0.mov')\n y = np.array(y[1:])\n # Because of the output format of \"read_video\" module, we need to slice the \"y\"\n y = y[0,:,:,:]\n yt = self.manual_scan(y)\n yp = raster_scan.raster_scan(y)\n np.testing.assert_array_equal(yp, yt)", "_____no_output_____" ], [ "if __name__ == '__main__':\n unittest.main(argv=['first-arg-is-ignored'], exit=False)", "./Users/mojtaba/anaconda3/lib/python3.6/site-packages/skimage/util/dtype.py:122: UserWarning: Possible precision loss when converting from float64 to uint8\n .format(dtypeobj_in, dtypeobj_out))\n/Users/mojtaba/anaconda3/lib/python3.6/site-packages/imageio/plugins/ffmpeg.py:338: ResourceWarning: unclosed file <_io.BufferedWriter name=61>\n self._proc = None\n/Users/mojtaba/anaconda3/lib/python3.6/site-packages/imageio/plugins/ffmpeg.py:338: ResourceWarning: unclosed file <_io.BufferedReader name=62>\n self._proc = None\n/Users/mojtaba/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:27: ResourceWarning: unclosed file <_io.BufferedReader name=64>\n.\n----------------------------------------------------------------------\nRan 2 tests in 16.702s\n\nOK\n" ] ], [ [ "# Here I'm going to test the details of the function that we are going to write for real video testing. ", "_____no_output_____" ] ], [ [ "y = read_video.read_video('/Users/mojtaba/Desktop/OrNet Project/DAT VIDEOS/LLO/DsRed2-HeLa_2_21_LLO_Cell0.mov')\ny2 = np.array(y[1:])\ny2.shape\n", "/Users/mojtaba/anaconda3/lib/python3.6/site-packages/skimage/util/dtype.py:122: UserWarning: Possible precision loss when converting from float64 to uint8\n .format(dtypeobj_in, dtypeobj_out))\n" ], [ "y3 = y2[0,:,:,:]\ny3.shape", "_____no_output_____" ], [ "def manual_scan(self, video):\n \"\"\"\n Manual, loop-based implementation of raster scanning.\n (reference implementation)\n \"\"\"\n frames, height, width = video.shape\n raster = np.zeros(shape = (frames, height * width))\n\n for index, frame in enumerate(video):\n raster[index] = frame.flatten()\n\n return raster", "_____no_output_____" ], [ "yt = manual_scan(_, y3)\nyt.shape", "_____no_output_____" ], [ "yp = raster_scan.raster_scan(y3)\nyp.shape", "_____no_output_____" ] ] ]
[ "code", "markdown", "code" ]
[ [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ] ]
d01b1b726ee498864abc91342a69848e2ddeca21
43,894
ipynb
Jupyter Notebook
scientific_details_of_algorithms/lda_topic_modeling/LDA-Science.ipynb
Amirosimani/amazon-sagemaker-examples
bc35e7a9da9e2258e77f98098254c2a8e308041a
[ "Apache-2.0" ]
2,610
2020-10-01T14:14:53.000Z
2022-03-31T18:02:31.000Z
scientific_details_of_algorithms/lda_topic_modeling/LDA-Science.ipynb
Amirosimani/amazon-sagemaker-examples
bc35e7a9da9e2258e77f98098254c2a8e308041a
[ "Apache-2.0" ]
1,959
2020-09-30T20:22:42.000Z
2022-03-31T23:58:37.000Z
scientific_details_of_algorithms/lda_topic_modeling/LDA-Science.ipynb
Amirosimani/amazon-sagemaker-examples
bc35e7a9da9e2258e77f98098254c2a8e308041a
[ "Apache-2.0" ]
2,052
2020-09-30T22:11:46.000Z
2022-03-31T23:02:51.000Z
40.085845
888
0.628264
[ [ [ "# A Scientific Deep Dive Into SageMaker LDA\n\n1. [Introduction](#Introduction)\n1. [Setup](#Setup)\n1. [Data Exploration](#DataExploration)\n1. [Training](#Training)\n1. [Inference](#Inference)\n1. [Epilogue](#Epilogue)", "_____no_output_____" ], [ "# Introduction\n***\n\nAmazon SageMaker LDA is an unsupervised learning algorithm that attempts to describe a set of observations as a mixture of distinct categories. Latent Dirichlet Allocation (LDA) is most commonly used to discover a user-specified number of topics shared by documents within a text corpus. Here each observation is a document, the features are the presence (or occurrence count) of each word, and the categories are the topics. Since the method is unsupervised, the topics are not specified up front, and are not guaranteed to align with how a human may naturally categorize documents. The topics are learned as a probability distribution over the words that occur in each document. Each document, in turn, is described as a mixture of topics.\n\nThis notebook is similar to **LDA-Introduction.ipynb** but its objective and scope are a different. We will be taking a deeper dive into the theory. The primary goals of this notebook are,\n\n* to understand the LDA model and the example dataset,\n* understand how the Amazon SageMaker LDA algorithm works,\n* interpret the meaning of the inference output.\n\nFormer knowledge of LDA is not required. However, we will run through concepts rather quickly and at least a foundational knowledge of mathematics or machine learning is recommended. Suggested references are provided, as appropriate.", "_____no_output_____" ] ], [ [ "%matplotlib inline\n\nimport os, re, tarfile\n\nimport boto3\nimport matplotlib.pyplot as plt\nimport mxnet as mx\nimport numpy as np\n\nnp.set_printoptions(precision=3, suppress=True)\n\n# some helpful utility functions are defined in the Python module\n# \"generate_example_data\" located in the same directory as this\n# notebook\nfrom generate_example_data import (\n generate_griffiths_data,\n match_estimated_topics,\n plot_lda,\n plot_lda_topics,\n)\n\n# accessing the SageMaker Python SDK\nimport sagemaker\nfrom sagemaker.amazon.common import RecordSerializer\nfrom sagemaker.serializers import CSVSerializer\nfrom sagemaker.deserializers import JSONDeserializer", "_____no_output_____" ] ], [ [ "# Setup\n\n***\n\n*This notebook was created and tested on an ml.m4.xlarge notebook instance.*\n\nWe first need to specify some AWS credentials; specifically data locations and access roles. This is the only cell of this notebook that you will need to edit. In particular, we need the following data:\n\n* `bucket` - An S3 bucket accessible by this account.\n * Used to store input training data and model data output.\n * Should be withing the same region as this notebook instance, training, and hosting.\n* `prefix` - The location in the bucket where this notebook's input and and output data will be stored. 
(The default value is sufficient.)\n* `role` - The IAM Role ARN used to give training and hosting access to your data.\n * See documentation on how to create these.\n * The script below will try to determine an appropriate Role ARN.", "_____no_output_____" ] ], [ [ "from sagemaker import get_execution_role\n\nrole = get_execution_role()\n\nbucket = sagemaker.Session().default_bucket()\nprefix = \"sagemaker/DEMO-lda-science\"\n\n\nprint(\"Training input/output will be stored in {}/{}\".format(bucket, prefix))\nprint(\"\\nIAM Role: {}\".format(role))", "_____no_output_____" ] ], [ [ "## The LDA Model\n\nAs mentioned above, LDA is a model for discovering latent topics describing a collection of documents. In this section we will give a brief introduction to the model. Let,\n\n* $M$ = the number of *documents* in a corpus\n* $N$ = the average *length* of a document.\n* $V$ = the size of the *vocabulary* (the total number of unique words)\n\nWe denote a *document* by a vector $w \\in \\mathbb{R}^V$ where $w_i$ equals the number of times the $i$th word in the vocabulary occurs within the document. This is called the \"bag-of-words\" format of representing a document.\n\n$$\n\\underbrace{w}_{\\text{document}} = \\overbrace{\\big[ w_1, w_2, \\ldots, w_V \\big] }^{\\text{word counts}},\n\\quad\nV = \\text{vocabulary size}\n$$\n\nThe *length* of a document is equal to the total number of words in the document: $N_w = \\sum_{i=1}^V w_i$.\n\nAn LDA model is defined by two parameters: a topic-word distribution matrix $\\beta \\in \\mathbb{R}^{K \\times V}$ and a Dirichlet topic prior $\\alpha \\in \\mathbb{R}^K$. In particular, let,\n\n$$\\beta = \\left[ \\beta_1, \\ldots, \\beta_K \\right]$$\n\nbe a collection of $K$ *topics* where each topic $\\beta_k \\in \\mathbb{R}^V$ is represented as probability distribution over the vocabulary. One of the utilities of the LDA model is that a given word is allowed to appear in multiple topics with positive probability. The Dirichlet topic prior is a vector $\\alpha \\in \\mathbb{R}^K$ such that $\\alpha_k > 0$ for all $k$.", "_____no_output_____" ], [ "# Data Exploration\n\n---\n\n## An Example Dataset\n\nBefore explaining further let's get our hands dirty with an example dataset. The following synthetic data comes from [1] and comes with a very useful visual interpretation.\n\n> [1] Thomas Griffiths and Mark Steyvers. *Finding Scientific Topics.* Proceedings of the National Academy of Science, 101(suppl 1):5228-5235, 2004.", "_____no_output_____" ] ], [ [ "print(\"Generating example data...\")\nnum_documents = 6000\nknown_alpha, known_beta, documents, topic_mixtures = generate_griffiths_data(\n num_documents=num_documents, num_topics=10\n)\nnum_topics, vocabulary_size = known_beta.shape\n\n\n# separate the generated data into training and tests subsets\nnum_documents_training = int(0.9 * num_documents)\nnum_documents_test = num_documents - num_documents_training\n\ndocuments_training = documents[:num_documents_training]\ndocuments_test = documents[num_documents_training:]\n\ntopic_mixtures_training = topic_mixtures[:num_documents_training]\ntopic_mixtures_test = topic_mixtures[num_documents_training:]\n\nprint(\"documents_training.shape = {}\".format(documents_training.shape))\nprint(\"documents_test.shape = {}\".format(documents_test.shape))", "_____no_output_____" ] ], [ [ "Let's start by taking a closer look at the documents. Note that the vocabulary size of these data is $V = 25$. The average length of each document in this data set is 150. 
(See `generate_griffiths_data.py`.)", "_____no_output_____" ] ], [ [ "print(\"First training document =\\n{}\".format(documents_training[0]))\nprint(\"\\nVocabulary size = {}\".format(vocabulary_size))\nprint(\"Length of first document = {}\".format(documents_training[0].sum()))", "_____no_output_____" ], [ "average_document_length = documents.sum(axis=1).mean()\nprint(\"Observed average document length = {}\".format(average_document_length))", "_____no_output_____" ] ], [ [ "The example data set above also returns the LDA parameters,\n\n$$(\\alpha, \\beta)$$\n\nused to generate the documents. Let's examine the first topic and verify that it is a probability distribution on the vocabulary.", "_____no_output_____" ] ], [ [ "print(\"First topic =\\n{}\".format(known_beta[0]))\n\nprint(\n \"\\nTopic-word probability matrix (beta) shape: (num_topics, vocabulary_size) = {}\".format(\n known_beta.shape\n )\n)\nprint(\"\\nSum of elements of first topic = {}\".format(known_beta[0].sum()))", "_____no_output_____" ] ], [ [ "Unlike some clustering algorithms, one of the versatilities of the LDA model is that a given word can belong to multiple topics. The probability of that word occurring in each topic may differ, as well. This is reflective of real-world data where, for example, the word *\"rover\"* appears in a *\"dogs\"* topic as well as in a *\"space exploration\"* topic.\n\nIn our synthetic example dataset, the first word in the vocabulary belongs to both Topic #1 and Topic #6 with non-zero probability.", "_____no_output_____" ] ], [ [ "print(\"Topic #1:\\n{}\".format(known_beta[0]))\nprint(\"Topic #6:\\n{}\".format(known_beta[5]))", "_____no_output_____" ] ], [ [ "Human beings are visual creatures, so it might be helpful to come up with a visual representation of these documents.\n\nIn the below plots, each pixel of a document represents a word. The greyscale intensity is a measure of how frequently that word occurs within the document. Below we plot the first few documents of the training set reshaped into 5x5 pixel grids.", "_____no_output_____" ] ], [ [ "%matplotlib inline\n\nfig = plot_lda(documents_training, nrows=3, ncols=4, cmap=\"gray_r\", with_colorbar=True)\nfig.suptitle(\"$w$ - Document Word Counts\")\nfig.set_dpi(160)", "_____no_output_____" ] ], [ [ "When taking a close look at these documents we can see some patterns in the word distributions suggesting that, perhaps, each topic represents a \"column\" or \"row\" of words with non-zero probability and that each document is composed primarily of a handful of topics.\n\nBelow we plot the *known* topic-word probability distributions, $\\beta$. Similar to the documents we reshape each probability distribution to a $5 \\times 5$ pixel image where the color represents the probability of each word occurring in the topic.", "_____no_output_____" ] ], [ [ "%matplotlib inline\n\nfig = plot_lda(known_beta, nrows=1, ncols=10)\nfig.suptitle(r\"Known $\\beta$ - Topic-Word Probability Distributions\")\nfig.set_dpi(160)\nfig.set_figheight(2)", "_____no_output_____" ] ], [ [ "These 10 topics were used to generate the document corpus. Next, we will learn about how this is done.", "_____no_output_____" ], [ "## Generating Documents\n\nLDA is a generative model, meaning that the LDA parameters $(\\alpha, \\beta)$ are used to construct documents word-by-word by drawing from the topic-word distributions. In fact, looking closely at the example documents above you can see that some documents sample more words from some topics than from others.\n\nLDA works as follows: given \n\n* $M$ documents $w^{(1)}, w^{(2)}, \\ldots, w^{(M)}$,\n* an average document length of $N$,\n* and an LDA model $(\\alpha, \\beta)$.\n\n**For** each document, $w^{(m)}$:\n* sample a topic mixture: $\\theta^{(m)} \\sim \\text{Dirichlet}(\\alpha)$\n* **For** each word $n$ in the document:\n * Sample a topic $z_n^{(m)} \\sim \\text{Multinomial}\\big( \\theta^{(m)} \\big)$\n * Sample a word from this topic, $w_n^{(m)} \\sim \\text{Multinomial}\\big( \\beta_{z_n^{(m)}} \\; \\big)$\n * Add to document\n\nThe [plate notation](https://en.wikipedia.org/wiki/Plate_notation) for the LDA model, introduced in [2], encapsulates this process pictorially.\n\n![](http://scikit-learn.org/stable/_images/lda_model_graph.png)\n\n> [2] David M Blei, Andrew Y Ng, and Michael I Jordan. Latent Dirichlet Allocation. Journal of Machine Learning Research, 3(Jan):993–1022, 2003.
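\n\nAs a rough illustration only (a toy sketch reusing the `known_alpha` and `known_beta` arrays defined earlier, not the helper module's actual implementation):\n\n```python\n# toy sketch of the LDA generative process\ndef generate_document(alpha, beta, doc_length=150):\n    theta = np.random.dirichlet(alpha)              # topic mixture for this document\n    word_counts = np.zeros(beta.shape[1], dtype=int)\n    for _ in range(doc_length):\n        z = np.random.choice(len(alpha), p=theta)       # sample a topic\n        w = np.random.choice(beta.shape[1], p=beta[z])  # sample a word from that topic\n        word_counts[w] += 1\n    return word_counts\n\n# e.g. generate_document(known_alpha, known_beta)\n```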
", "_____no_output_____" ], [ "## Topic Mixtures\n\nFor the documents we generated above let's look at their corresponding topic mixtures, $\\theta \\in \\mathbb{R}^K$. The topic mixtures represent the probability that a given word of the document is sampled from a particular topic. For example, if the topic mixture of an input document $w$ is,\n\n$$\\theta = \\left[ 0.3, 0.2, 0, 0.5, 0, \\ldots, 0 \\right]$$\n\nthen $w$ is 30% generated from the first topic, 20% from the second topic, and 50% from the fourth topic. In particular, the words contained in the document are sampled from the first topic-word probability distribution 30% of the time, from the second distribution 20% of the time, and from the fourth distribution 50% of the time.\n\n\nThe objective of inference, also known as scoring, is to determine the most likely topic mixture of a given input document. Colloquially, this means figuring out which topics appear within a given document and at what ratios. We will perform inference later in the [Inference](#Inference) section.\n\nSince we generated these example documents using the LDA model we know the topic mixture generating them. Let's examine these topic mixtures.", "_____no_output_____" ] ], [ [ "print(\"First training document =\\n{}\".format(documents_training[0]))\nprint(\"\\nVocabulary size = {}\".format(vocabulary_size))\nprint(\"Length of first document = {}\".format(documents_training[0].sum()))", "_____no_output_____" ], [ "print(\"First training document topic mixture =\\n{}\".format(topic_mixtures_training[0]))\nprint(\"\\nNumber of topics = {}\".format(num_topics))\nprint(\"sum(theta) = {}\".format(topic_mixtures_training[0].sum()))", "_____no_output_____" ] ], [ [ "We plot the first document along with its topic mixture. 
We also plot the topic-word probability distributions again for reference.", "_____no_output_____" ] ], [ [ "%matplotlib inline\n\nfig, (ax1, ax2) = plt.subplots(2, 1)\n\nax1.matshow(documents[0].reshape(5, 5), cmap=\"gray_r\")\nax1.set_title(r\"$w$ - Document\", fontsize=20)\nax1.set_xticks([])\nax1.set_yticks([])\n\ncax2 = ax2.matshow(topic_mixtures[0].reshape(1, -1), cmap=\"Reds\", vmin=0, vmax=1)\ncbar = fig.colorbar(cax2, orientation=\"horizontal\")\nax2.set_title(r\"$\\theta$ - Topic Mixture\", fontsize=20)\nax2.set_xticks([])\nax2.set_yticks([])\n\nfig.set_dpi(100)", "_____no_output_____" ], [ "%matplotlib inline\n\n# plot\nfig = plot_lda(known_beta, nrows=1, ncols=10)\nfig.suptitle(r\"Known $\\beta$ - Topic-Word Probability Distributions\")\nfig.set_dpi(160)\nfig.set_figheight(1.5)", "_____no_output_____" ] ], [ [ "Finally, let's plot several documents with their corresponding topic mixtures. We can see how topics with large weight in the document lead to more words in the document within the corresponding \"row\" or \"column\".", "_____no_output_____" ] ], [ [ "%matplotlib inline\n\nfig = plot_lda_topics(documents_training, 3, 4, topic_mixtures=topic_mixtures)\nfig.suptitle(r\"$(w,\\theta)$ - Documents with Known Topic Mixtures\")\nfig.set_dpi(160)", "_____no_output_____" ] ], [ [ "# Training\n\n***\n\nIn this section we will give some insight into how AWS SageMaker LDA fits an LDA model to a corpus, create and run a SageMaker LDA training job, and examine the trained model it outputs.", "_____no_output_____" ], [ "## Topic Estimation using Tensor Decompositions\n\nGiven a document corpus, Amazon SageMaker LDA uses a spectral tensor decomposition technique to determine the LDA model $(\\alpha, \\beta)$ which most likely describes the corpus. See [1] for a primary reference of the theory behind the algorithm. The spectral decomposition, itself, is computed using the CPDecomp algorithm described in [2].\n\nThe overall idea is the following: given a corpus of documents $\\mathcal{W} = \\{w^{(1)}, \\ldots, w^{(M)}\\}, \\; w^{(m)} \\in \\mathbb{R}^V,$ we construct a statistic tensor,\n\n$$T \\in \\bigotimes^3 \\mathbb{R}^V$$\n\nsuch that the spectral decomposition of the tensor is approximately the LDA parameters $\\alpha \\in \\mathbb{R}^K$ and $\\beta \\in \\mathbb{R}^{K \\times V}$ which maximize the likelihood of observing the corpus for a given number of topics, $K$,\n\n$$T \\approx \\sum_{k=1}^K \\alpha_k \\; (\\beta_k \\otimes \\beta_k \\otimes \\beta_k)$$\n\nThis statistic tensor encapsulates information from the corpus such as the document mean, cross correlation, and higher order statistics. For details, see [1].\n\n\n> [1] Animashree Anandkumar, Rong Ge, Daniel Hsu, Sham Kakade, and Matus Telgarsky. *\"Tensor Decompositions for Learning Latent Variable Models\"*, Journal of Machine Learning Research, 15:2773–2832, 2014.\n>\n> [2] Tamara Kolda and Brett Bader. *\"Tensor Decompositions and Applications\"*. SIAM Review, 51(3):455–500, 2009.\n\n\n", "_____no_output_____" ], [ "## Store Data on S3\n\nBefore we run training we need to prepare the data.\n\nA SageMaker training job needs access to training data stored in an S3 bucket. 
Although training can accept data of various formats, we convert the documents to MXNet RecordIO Protobuf format before uploading to the S3 bucket defined at the beginning of this notebook.", "_____no_output_____" ] ], [ [ "# convert documents_training to Protobuf RecordIO format\nrecordio_protobuf_serializer = RecordSerializer()\nfbuffer = recordio_protobuf_serializer.serialize(documents_training)\n\n# upload to S3 in bucket/prefix/train\nfname = \"lda.data\"\ns3_object = os.path.join(prefix, \"train\", fname)\nboto3.Session().resource(\"s3\").Bucket(bucket).Object(s3_object).upload_fileobj(fbuffer)\n\ns3_train_data = \"s3://{}/{}\".format(bucket, s3_object)\nprint(\"Uploaded data to S3: {}\".format(s3_train_data))", "_____no_output_____" ] ], [ [ "Next, we specify a Docker container containing the SageMaker LDA algorithm. For your convenience, a region-specific container is automatically chosen for you to minimize cross-region data communication.", "_____no_output_____" ] ], [ [ "from sagemaker.image_uris import retrieve\n\nregion_name = boto3.Session().region_name\ncontainer = retrieve(\"lda\", boto3.Session().region_name)\n\nprint(\"Using SageMaker LDA container: {} ({})\".format(container, region_name))", "_____no_output_____" ] ], [ [ "## Training Parameters\n\nParticular to a SageMaker LDA training job are the following hyperparameters:\n\n* **`num_topics`** - The number of topics or categories in the LDA model.\n * Usually, this is not known a priori.\n * In this example, however, we know that the data is generated by ten topics.\n\n* **`feature_dim`** - The size of the *\"vocabulary\"*, in LDA parlance.\n * In this example, this is equal to 25.\n\n* **`mini_batch_size`** - The number of input training documents.\n\n* **`alpha0`** - *(optional)* a measurement of how \"mixed\" the topic-mixtures are.\n * When `alpha0` is small the data tends to be represented by one or few topics.\n * When `alpha0` is large the data tends to be an even combination of several or many topics.\n * The default value is `alpha0 = 1.0`.\n\nIn addition to these LDA model hyperparameters, we provide additional parameters defining things like the EC2 instance type on which training will run, the S3 bucket containing the data, and the AWS access role. Note that,\n\n* Recommended instance type: `ml.c4`\n* Current limitations:\n * SageMaker LDA *training* can only run on a single instance.\n * SageMaker LDA does not take advantage of GPU hardware.\n * (The Amazon AI Algorithms team is working hard to provide these capabilities in a future release!)", "_____no_output_____" ], [ "Using the above configuration, create a SageMaker client and use the client to create a training job.", "_____no_output_____" ] ], [ [ "session = sagemaker.Session()\n\n# specify general training job information\nlda = sagemaker.estimator.Estimator(\n container,\n role,\n output_path=\"s3://{}/{}/output\".format(bucket, prefix),\n instance_count=1,\n instance_type=\"ml.c4.2xlarge\",\n sagemaker_session=session,\n)\n\n# set algorithm-specific hyperparameters\nlda.set_hyperparameters(\n num_topics=num_topics,\n feature_dim=vocabulary_size,\n mini_batch_size=num_documents_training,\n alpha0=1.0,\n)\n\n# run the training job on input data stored in S3\nlda.fit({\"train\": s3_train_data})", "_____no_output_____" ] ], [ [ "If you see the message\n\n> `===== Job Complete =====`\n\nat the bottom of the output logs then that means training successfully completed and the output LDA model was stored in the specified output path. 
You can also view the status of, and information about, a training job in the AWS SageMaker console. Just click on the \"Jobs\" tab and select the training job matching the job name, below:", "_____no_output_____" ] ], [ [ "print(\"Training job name: {}\".format(lda.latest_training_job.job_name))", "_____no_output_____" ] ], [ [ "## Inspecting the Trained Model\n\nWe know the LDA parameters $(\\alpha, \\beta)$ used to generate the example data. How does the learned model compare to the known one? In this section we will download the model data and measure how well SageMaker LDA did in learning the model.\n\nFirst, we download the model data. SageMaker will output the model in \n\n> `s3://<bucket>/<prefix>/output/<training job name>/output/model.tar.gz`.\n\nSageMaker LDA stores the model as a two-tuple $(\\alpha, \\beta)$ where each LDA parameter is an MXNet NDArray.", "_____no_output_____" ] ], [ [ "# download and extract the model file from S3\njob_name = lda.latest_training_job.job_name\nmodel_fname = \"model.tar.gz\"\nmodel_object = os.path.join(prefix, \"output\", job_name, \"output\", model_fname)\nboto3.Session().resource(\"s3\").Bucket(bucket).Object(model_object).download_file(fname)\nwith tarfile.open(fname) as tar:\n tar.extractall()\nprint(\"Downloaded and extracted model tarball: {}\".format(model_object))\n\n# obtain the model file\nmodel_list = [fname for fname in os.listdir(\".\") if fname.startswith(\"model_\")]\nmodel_fname = model_list[0]\nprint(\"Found model file: {}\".format(model_fname))\n\n# get the model from the model file and store in Numpy arrays\nalpha, beta = mx.ndarray.load(model_fname)\nlearned_alpha_permuted = alpha.asnumpy()\nlearned_beta_permuted = beta.asnumpy()\n\nprint(\"\\nLearned alpha.shape = {}\".format(learned_alpha_permuted.shape))\nprint(\"Learned beta.shape = {}\".format(learned_beta_permuted.shape))", "_____no_output_____" ] ], [ [ "Presumably, SageMaker LDA has found the topics most likely used to generate the training corpus. However, even if this is the case the topics would not be returned in any particular order. Therefore, we match the found topics to the known topics closest in L1-norm in order to find the topic permutation.\n\nNote that we will use the `permutation` later during inference to match known topic mixtures to found topic mixtures.\n\nBelow we plot the known topic-word probability distribution, $\\beta \\in \\mathbb{R}^{K \\times V}$ next to the distributions found by SageMaker LDA as well as the L1-norm errors between the two.", "_____no_output_____" ] ], [ [ "permutation, learned_beta = match_estimated_topics(known_beta, learned_beta_permuted)\nlearned_alpha = learned_alpha_permuted[permutation]\n\nfig = plot_lda(np.vstack([known_beta, learned_beta]), 2, 10)\nfig.set_dpi(160)\nfig.suptitle(\"Known vs. Found Topic-Word Probability Distributions\")\nfig.set_figheight(3)\n\nbeta_error = np.linalg.norm(known_beta - learned_beta, 1)\nalpha_error = np.linalg.norm(known_alpha - learned_alpha, 1)\nprint(\"L1-error (beta) = {}\".format(beta_error))\nprint(\"L1-error (alpha) = {}\".format(alpha_error))", "_____no_output_____" ] ], [ [ "Not bad!\n\nIn the eyeball-norm the topics match quite well. In fact, the topic-word distribution error is approximately 2%.", "_____no_output_____" ], [ "# Inference\n\n***\n\nA trained model does nothing on its own. We now want to use the model we computed to perform inference on data. 
For this example, that means predicting the topic mixture representing a given document.\n\nWe create an inference endpoint using the SageMaker Python SDK `deploy()` function from the job we defined above. We specify the instance type where inference is computed as well as an initial number of instances to spin up.", "_____no_output_____" ], [ "With this realtime endpoint at our fingertips we can finally perform inference on our training and test data.\n\nWe can pass a variety of data formats to our inference endpoint. In this example we will demonstrate passing CSV-formatted data. Other available formats are JSON-formatted, JSON-sparse-formatted, and RecordIO Protobuf. We make use of the SageMaker Python SDK utilities `CSVSerializer` and `JSONDeserializer` when configuring the inference endpoint.", "_____no_output_____" ] ], [ [ "lda_inference = lda.deploy(\n initial_instance_count=1,\n instance_type=\"ml.m4.xlarge\", # LDA inference may work better at scale on ml.c4 instances\n serializer=CSVSerializer(),\n deserializer=JSONDeserializer(),\n)", "_____no_output_____" ] ], [ [ "Congratulations! You now have a functioning SageMaker LDA inference endpoint. You can confirm the endpoint configuration and status by navigating to the \"Endpoints\" tab in the AWS SageMaker console and selecting the endpoint matching the endpoint name, below: ", "_____no_output_____" ] ], [ [ "print(\"Endpoint name: {}\".format(lda_inference.endpoint_name))", "_____no_output_____" ] ], [ [ "We pass some test documents to the inference endpoint. Note that the serializer and deserializer will automatically take care of the datatype conversion.", "_____no_output_____" ] ], [ [ "results = lda_inference.predict(documents_test[:12])\n\nprint(results)", "_____no_output_____" ] ], [ [ "It may be hard to see, but the output of the SageMaker LDA inference endpoint is a Python dictionary with the following format.\n\n```\n{\n 'predictions': [\n {'topic_mixture': [ ... ] },\n {'topic_mixture': [ ... ] },\n {'topic_mixture': [ ... ] },\n ...\n ]\n}\n```\n\nWe extract the topic mixtures, themselves, corresponding to each of the input documents.", "_____no_output_____" ] ], [ [ "inferred_topic_mixtures_permuted = np.array(\n [prediction[\"topic_mixture\"] for prediction in results[\"predictions\"]]\n)\n\nprint(\"Inferred topic mixtures (permuted):\\n\\n{}\".format(inferred_topic_mixtures_permuted))", "_____no_output_____" ] ], [ [ "## Inference Analysis\n\nRecall that although SageMaker LDA successfully learned the underlying topics which generated the sample data the topics were in a different order. 
Before we compare to known topic mixtures $\\theta \\in \\mathbb{R}^K$ we should also permute the inferred topic mixtures\n", "_____no_output_____" ] ], [ [ "inferred_topic_mixtures = inferred_topic_mixtures_permuted[:, permutation]\n\nprint(\"Inferred topic mixtures:\\n\\n{}\".format(inferred_topic_mixtures))", "_____no_output_____" ] ], [ [ "Let's plot these topic mixture probability distributions alongside the known ones.", "_____no_output_____" ] ], [ [ "%matplotlib inline\n\n# create array of bar plots\nwidth = 0.4\nx = np.arange(10)\n\nnrows, ncols = 3, 4\nfig, ax = plt.subplots(nrows, ncols, sharey=True)\nfor i in range(nrows):\n for j in range(ncols):\n index = i * ncols + j\n ax[i, j].bar(x, topic_mixtures_test[index], width, color=\"C0\")\n ax[i, j].bar(x + width, inferred_topic_mixtures[index], width, color=\"C1\")\n ax[i, j].set_xticks(range(num_topics))\n ax[i, j].set_yticks(np.linspace(0, 1, 5))\n ax[i, j].grid(which=\"major\", axis=\"y\")\n ax[i, j].set_ylim([0, 1])\n ax[i, j].set_xticklabels([])\n if i == (nrows - 1):\n ax[i, j].set_xticklabels(range(num_topics), fontsize=7)\n if j == 0:\n ax[i, j].set_yticklabels([0, \"\", 0.5, \"\", 1.0], fontsize=7)\n\nfig.suptitle(\"Known vs. Inferred Topic Mixtures\")\nax_super = fig.add_subplot(111, frameon=False)\nax_super.tick_params(labelcolor=\"none\", top=\"off\", bottom=\"off\", left=\"off\", right=\"off\")\nax_super.grid(False)\nax_super.set_xlabel(\"Topic Index\")\nax_super.set_ylabel(\"Topic Probability\")\nfig.set_dpi(160)", "_____no_output_____" ] ], [ [ "In the eyeball-norm these look quite comparable.\n\nLet's be more scientific about this. Below we compute and plot the distribution of L1-errors from **all** of the test documents. Note that we send a new payload of test documents to the inference endpoint and apply the appropriate permutation to the output.", "_____no_output_____" ] ], [ [ "%%time\n\n# create a payload containing all of the test documents and run inference again\n#\n# TRY THIS:\n# try switching between the test data set and a subset of the training\n# data set. 
It is likely that LDA inference will perform better against\n# the training set than the holdout test set.\n#\npayload_documents = documents_test # Example 1\nknown_topic_mixtures = topic_mixtures_test # Example 1\n# payload_documents = documents_training[:600]; # Example 2\n# known_topic_mixtures = topic_mixtures_training[:600] # Example 2\n\nprint(\"Invoking endpoint...\\n\")\nresults = lda_inference.predict(payload_documents)\n\ninferred_topic_mixtures_permuted = np.array(\n [prediction[\"topic_mixture\"] for prediction in results[\"predictions\"]]\n)\ninferred_topic_mixtures = inferred_topic_mixtures_permuted[:, permutation]\n\nprint(\"known_topics_mixtures.shape = {}\".format(known_topic_mixtures.shape))\nprint(\"inferred_topics_mixtures_test.shape = {}\\n\".format(inferred_topic_mixtures.shape))", "_____no_output_____" ], [ "%matplotlib inline\n\nl1_errors = np.linalg.norm((inferred_topic_mixtures - known_topic_mixtures), 1, axis=1)\n\n# plot the error freqency\nfig, ax_frequency = plt.subplots()\nbins = np.linspace(0, 1, 40)\nweights = np.ones_like(l1_errors) / len(l1_errors)\nfreq, bins, _ = ax_frequency.hist(l1_errors, bins=50, weights=weights, color=\"C0\")\nax_frequency.set_xlabel(\"L1-Error\")\nax_frequency.set_ylabel(\"Frequency\", color=\"C0\")\n\n\n# plot the cumulative error\nshift = (bins[1] - bins[0]) / 2\nx = bins[1:] - shift\nax_cumulative = ax_frequency.twinx()\ncumulative = np.cumsum(freq) / sum(freq)\nax_cumulative.plot(x, cumulative, marker=\"o\", color=\"C1\")\nax_cumulative.set_ylabel(\"Cumulative Frequency\", color=\"C1\")\n\n\n# align grids and show\nfreq_ticks = np.linspace(0, 1.5 * freq.max(), 5)\nfreq_ticklabels = np.round(100 * freq_ticks) / 100\nax_frequency.set_yticks(freq_ticks)\nax_frequency.set_yticklabels(freq_ticklabels)\nax_cumulative.set_yticks(np.linspace(0, 1, 5))\nax_cumulative.grid(which=\"major\", axis=\"y\")\nax_cumulative.set_ylim((0, 1))\n\n\nfig.suptitle(\"Topic Mixutre L1-Errors\")\nfig.set_dpi(110)", "_____no_output_____" ] ], [ [ "Machine learning algorithms are not perfect and the data above suggests this is true of SageMaker LDA. With more documents and some hyperparameter tuning we can obtain more accurate results against the known topic-mixtures.\n\nFor now, let's just investigate the documents-topic mixture pairs that seem to do well as well as those that do not. Below we retreive a document and topic mixture corresponding to a small L1-error as well as one with a large L1-error.", "_____no_output_____" ] ], [ [ "N = 6\n\ngood_idx = l1_errors < 0.05\ngood_documents = payload_documents[good_idx][:N]\ngood_topic_mixtures = inferred_topic_mixtures[good_idx][:N]\n\npoor_idx = l1_errors > 0.3\npoor_documents = payload_documents[poor_idx][:N]\npoor_topic_mixtures = inferred_topic_mixtures[poor_idx][:N]", "_____no_output_____" ], [ "%matplotlib inline\n\nfig = plot_lda_topics(good_documents, 2, 3, topic_mixtures=good_topic_mixtures)\nfig.suptitle(\"Documents With Accurate Inferred Topic-Mixtures\")\nfig.set_dpi(120)", "_____no_output_____" ], [ "%matplotlib inline\n\nfig = plot_lda_topics(poor_documents, 2, 3, topic_mixtures=poor_topic_mixtures)\nfig.suptitle(\"Documents With Inaccurate Inferred Topic-Mixtures\")\nfig.set_dpi(120)", "_____no_output_____" ] ], [ [ "In this example set the documents on which inference was not as accurate tend to have a denser topic-mixture. 
This makes sense when extrapolated to real-world datasets: it can be difficult to nail down which topics are represented in a document when the document uses words from a large subset of the vocabulary.", "_____no_output_____" ], [ "## Stop / Close the Endpoint\n\nFinally, we should delete the endpoint before we close the notebook.\n\nTo do so execute the cell below. Alternately, you can navigate to the \"Endpoints\" tab in the SageMaker console, select the endpoint with the name stored in the variable `endpoint_name`, and select \"Delete\" from the \"Actions\" dropdown menu. ", "_____no_output_____" ] ], [ [ "sagemaker.Session().delete_endpoint(lda_inference.endpoint_name)", "_____no_output_____" ] ], [ [ "# Epilogue\n\n---\n\nIn this notebook we,\n\n* learned about the LDA model,\n* generated some example LDA documents and their corresponding topic-mixtures,\n* trained a SageMaker LDA model on a training set of documents and compared the learned model to the known model,\n* created an inference endpoint,\n* used the endpoint to infer the topic mixtures of a test input and analyzed the inference error.\n\nThere are several things to keep in mind when applying SageMaker LDA to real-word data such as a corpus of text documents. Note that input documents to the algorithm, both in training and inference, need to be vectors of integers representing word counts. Each index corresponds to a word in the corpus vocabulary. Therefore, one will need to \"tokenize\" their corpus vocabulary.\n\n$$\n\\text{\"cat\"} \\mapsto 0, \\; \\text{\"dog\"} \\mapsto 1 \\; \\text{\"bird\"} \\mapsto 2, \\ldots\n$$\n\nEach text document then needs to be converted to a \"bag-of-words\" format document.\n\n$$\nw = \\text{\"cat bird bird bird cat\"} \\quad \\longmapsto \\quad w = [2, 0, 3, 0, \\ldots, 0]\n$$\n\nAlso note that many real-word applications have large vocabulary sizes. It may be necessary to represent the input documents in sparse format. Finally, the use of stemming and lemmatization in data preprocessing provides several benefits. Doing so can improve training and inference compute time since it reduces the effective vocabulary size. More importantly, though, it can improve the quality of learned topic-word probability matrices and inferred topic mixtures. For example, the words *\"parliament\"*, *\"parliaments\"*, *\"parliamentary\"*, *\"parliament's\"*, and *\"parliamentarians\"* are all essentially the same word, *\"parliament\"*, but with different conjugations. For the purposes of detecting topics, such as a *\"politics\"* or *governments\"* topic, the inclusion of all five does not add much additional value as they all essentiall describe the same feature.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ] ]
d01b2323a6300f50e4a71711d59a22b5f0a4df31
628
ipynb
Jupyter Notebook
Ideal - Word2Vec + LSTM.ipynb
Siraz22/FakeNewsCalssifier_NLP
8184e63cf90a613d7a93b2115afdbb006add725b
[ "Apache-2.0" ]
1
2021-10-07T02:08:32.000Z
2021-10-07T02:08:32.000Z
Ideal - Word2Vec + LSTM.ipynb
Siraz22/FakeNewsCalssifier_NLP
8184e63cf90a613d7a93b2115afdbb006add725b
[ "Apache-2.0" ]
null
null
null
Ideal - Word2Vec + LSTM.ipynb
Siraz22/FakeNewsCalssifier_NLP
8184e63cf90a613d7a93b2115afdbb006add725b
[ "Apache-2.0" ]
null
null
null
19.030303
67
0.563694
[]
[]
[]
d01b2b27b95f174e1c76779705d31aa2e3f4c907
27,737
ipynb
Jupyter Notebook
src/skempi2.ipynb
yotamfr/skempi
9e5dbb7661a36c973edb0e94cf8bfe843f839e66
[ "MIT" ]
1
2021-11-08T14:16:40.000Z
2021-11-08T14:16:40.000Z
src/skempi2.ipynb
yotamfr/skempi
9e5dbb7661a36c973edb0e94cf8bfe843f839e66
[ "MIT" ]
16
2019-12-16T21:16:26.000Z
2022-03-11T23:33:34.000Z
src/skempi2.ipynb
yotamfr/skempi
9e5dbb7661a36c973edb0e94cf8bfe843f839e66
[ "MIT" ]
null
null
null
40.610542
257
0.490933
[ [ [ "from skempi_utils import *\nfrom scipy.stats import pearsonr", "/media/disk1/yotam/skempi/skempi2/lib/python2.7/site-packages/sklearn/utils/__init__.py:9: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88\n from .murmurhash import murmurhash3_32\n/media/disk1/yotam/skempi/skempi2/lib/python2.7/site-packages/sklearn/utils/extmath.py:24: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88\n from ._logistic_sigmoid import _log_logistic_sigmoid\n/media/disk1/yotam/skempi/skempi2/lib/python2.7/site-packages/sklearn/utils/extmath.py:26: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88\n from .sparsefuncs_fast import csr_row_norms\n/media/disk1/yotam/skempi/skempi2/lib/python2.7/site-packages/sklearn/metrics/cluster/supervised.py:23: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88\n from .expected_mutual_info_fast import expected_mutual_information\n/media/disk1/yotam/skempi/skempi2/lib/python2.7/site-packages/sklearn/metrics/pairwise.py:30: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88\n from .pairwise_fast import _chi2_kernel_fast, _sparse_manhattan\n" ], [ "df = skempi_df\ndf_multi = df[~np.asarray([len(s)>8 for s in df.Protein])]\ns_multi = set([s[:4] for s in df_multi.Protein])\ns_groups = set([s[:4] for s in G1 + G2 + G3 + G4 + G5])\nlen(s_multi & s_groups), len(s_multi), len(s_groups)\ndf_multi.head()", "_____no_output_____" ], [ "from sklearn.preprocessing import StandardScaler\nfrom itertools import combinations as comb\nfrom sklearn.externals import joblib\nimport numpy as np\n\ndef evaluate(group_str, y_true, y_pred, ix):\n y_pred_pos = y_pred[ix == 0]\n y_pred_neg = y_pred[ix == 1]\n y_true_pos = y_true[ix == 0]\n y_true_neg = y_true[ix == 1]\n cor_all, _ = pearsonr(y_true, y_pred)\n cor_pos, _ = pearsonr(y_true_pos, y_pred_pos)\n cor_neg, _ = pearsonr(y_true_neg, y_pred_neg)\n print(\"[%s:%d] cor_all:%.3f, cor_pos:%.3f, cor_neg:%.3f\" % (group_str, len(y_true), cor_all, cor_pos, cor_neg))\n return cor_all, cor_pos, cor_neg\n\ndef run_cv_test(X, y, ix, get_regressor, modelname, normalize=1):\n gt, preds, indx, cors = [], [], [], []\n groups = [G1, G2, G3, G4, G5]\n prots = G1 + G2 + G3 + G4 + G5\n for i, pair in enumerate(comb(range(NUM_GROUPS), 2)):\n group = groups[pair[0]] + groups[pair[1]]\n g1, g2 = np.asarray(pair) + 1\n indx_tst = (ix[:, 0] == g1) | (ix[:, 0] == g2)\n indx_trn = np.logical_not(indx_tst)\n y_trn = y[indx_trn]\n y_true = y[indx_tst]\n X_trn = X[indx_trn]\n X_tst = X[indx_tst]\n if normalize == 1:\n scaler = StandardScaler()\n scaler.fit(X_trn)\n X_trn = scaler.transform(X_trn)\n X_tst = scaler.transform(X_tst)\n regressor = get_regressor()\n regressor.fit(X_trn, y_trn)\n joblib.dump(regressor, 'models/%s%s.pkl' % (modelname, i))\n regressor = joblib.load('models/%s%s.pkl' % (modelname, i))\n y_pred = regressor.predict(X_tst)\n cor, pos, neg = evaluate(\"G%d,G%d\" % (g1, g2), y_true, y_pred, ix[indx_tst, 1])\n cors.append([cor, pos, neg])\n indx.extend(ix[indx_tst, 1])\n preds.extend(y_pred)\n gt.extend(y_true)\n return [np.asarray(a) for a in [gt, preds, indx, cors]]\n\n\ndef run_cv_test_ensemble(X, y, ix, alpha=0.5, normalize=1):\n gt, preds, indx, cors = [], [], [], []\n groups = [G1, G2, G3, G4, G5]\n prots = G1 + G2 + G3 + G4 + G5\n for i, pair in enumerate(comb(range(NUM_GROUPS), 2)):\n group = 
groups[pair[0]] + groups[pair[1]]\n g1, g2 = np.asarray(pair) + 1\n indx_tst = (ix[:, 0] == g1) | (ix[:, 0] == g2)\n indx_trn = (ix[:, 0] != 0) & ((ix[:, 0] == g1) | (ix[:, 0] == g2))\n y_trn = y[indx_trn]\n y_true = y[indx_tst]\n X_trn = X[indx_trn]\n X_tst = X[indx_tst]\n svr = joblib.load('models/svr%d.pkl' % i)\n rfr = joblib.load('models/rfr%d.pkl' % i)\n if normalize == 1:\n scaler = StandardScaler()\n scaler.fit(X_trn)\n X_trn = scaler.transform(X_trn)\n X_tst = scaler.transform(X_tst)\n y_pred_svr = svr.predict(X_tst)\n y_pred_rfr = rfr.predict(X_tst)\n y_pred = alpha * y_pred_svr + (1-alpha) * y_pred_rfr\n cor, pos, neg = evaluate(\"G%d,G%d\" % (g1, g2), y_true, y_pred, ix[indx_tst, 1])\n cors.append([cor, pos, neg])\n indx.extend(ix[indx_tst, 1])\n preds.extend(y_pred)\n gt.extend(y_true)\n return [np.asarray(a) for a in [gt, preds, indx, cors]]\n\n\ndef records_to_xy(skempi_records, load_neg=True):\n data = []\n for record in tqdm(skempi_records, desc=\"records processed\"):\n r = record\n assert r.struct is not None\n data.append([r.features(True), [r.ddg], [r.group, r.is_minus]])\n if not load_neg: continue \n rr = reversed(record)\n assert rr.struct is not None\n data.append([rr.features(True), [rr.ddg], [rr.group, rr.is_minus]])\n X, y, ix = [np.asarray(d) for d in zip(*data)]\n return X, y, ix", "_____no_output_____" ], [ "def get_temperature_array(records, agg=np.min):\n arr = []\n pbar = tqdm(range(len(skempi_df)), desc=\"row processed\")\n for i, row in skempi_df.iterrows():\n arr_obs_mut = []\n for mutation in row[\"Mutation(s)_cleaned\"].split(','):\n mut = Mutation(mutation)\n res_i, chain_id = mut.i, mut.chain_id\n t = tuple(row.Protein.split('_'))\n skempi_record = records[t]\n res = skempi_record[chain_id][res_i]\n temps = [a.temp for a in res.atoms]\n arr_obs_mut.append(np.mean(temps))\n arr.append(agg(arr_obs_mut))\n pbar.update(1)\n pbar.close()\n return arr\n\nskempi_records = load_skempi_structs(pdb_path=\"../data/pdbs_n\", compute_dist_mat=False)\ntemp_arr = get_temperature_array(skempi_records, agg=np.min)", "skempi structures processed: 100%|██████████| 158/158 [00:08<00:00, 17.60it/s]\nrow processed: 100%|██████████| 3047/3047 [00:00<00:00, 5533.66it/s]\n" ], [ "skempi_structs = load_skempi_structs(\"../data/pdbs\", compute_dist_mat=False)\nskempi_records = load_skempi_records(skempi_structs)", "skempi structures processed: 100%|██████████| 158/158 [00:06<00:00, 25.06it/s]\nskempi records processed: 100%|██████████| 3047/3047 [00:00<00:00, 5530.34it/s]\n" ], [ "# X_pos, y_pos, ix_pos = records_to_xy(skempi_records)\n# X_pos.shape, y_pos.shape, ix_pos.shape", "_____no_output_____" ], [ "X_, y_, ix_ = records_to_xy(skempi_records)", "records processed: 100%|██████████| 3047/3047 [2:27:48<00:00, 2.91s/it] \n" ], [ "X = X_[:, :]\n# X = np.concatenate([X.T, [temp_arr]], axis=0).T\ny = y_[:, 0]\nix = ix_\nX.shape, y.shape, ix.shape", "_____no_output_____" ], [ "print(\"----->SVR\")\nfrom sklearn.svm import SVR\ndef get_regressor(): return SVR(kernel='rbf')\ngt, preds, indx, cors = run_cv_test(X, y, ix, get_regressor, 'svr', normalize=1)\ncor1, _, _ = evaluate(\"CAT\", gt, preds, indx)\nprint(np.mean(cors, axis=0))\n\nprint(\"----->RFR\")\nfrom sklearn.ensemble import RandomForestRegressor\ndef get_regressor(): return RandomForestRegressor(n_estimators=50, random_state=0)\ngt, preds, indx, cors = run_cv_test(X, y, ix, get_regressor, 'rfr', normalize=1)\ncor2, _, _ = evaluate(\"CAT\", gt, preds, indx)\nprint(np.mean(cors, axis=0))\n\n# alpha = 
cor1/(cor1+cor2)\nalpha = 0.5\nprint(\"----->%.2f*SVR + %.2f*RFR\" % (alpha, 1-alpha))\ngt, preds, indx, cors = run_cv_test_ensemble(X, y, ix, normalize=1)\ncor, _, _ = evaluate(\"CAT\", gt, preds, indx)\nprint(np.mean(cors, axis=0))", "/media/disk1/yotam/skempi/skempi2/lib/python2.7/site-packages/sklearn/svm/base.py:8: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88\n from . import libsvm, liblinear\n/media/disk1/yotam/skempi/skempi2/lib/python2.7/site-packages/sklearn/svm/base.py:9: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88\n from . import libsvm_sparse\n/media/disk1/yotam/skempi/skempi2/lib/python2.7/site-packages/sklearn/linear_model/base.py:35: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88\n from ..utils.seq_dataset import ArrayDataset, CSRDataset\n/media/disk1/yotam/skempi/skempi2/lib/python2.7/site-packages/sklearn/linear_model/least_angle.py:23: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88\n from ..utils import arrayfuncs, as_float_array, check_X_y, deprecated\n/media/disk1/yotam/skempi/skempi2/lib/python2.7/site-packages/sklearn/utils/random.py:10: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88\n from ._random import sample_without_replacement\n/media/disk1/yotam/skempi/skempi2/lib/python2.7/site-packages/sklearn/linear_model/coordinate_descent.py:29: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88\n from . import cd_fast\n/media/disk1/yotam/skempi/skempi2/lib/python2.7/site-packages/sklearn/linear_model/__init__.py:22: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88\n from .sgd_fast import Hinge, Log, ModifiedHuber, SquaredLoss, Huber\n/media/disk1/yotam/skempi/skempi2/lib/python2.7/site-packages/sklearn/linear_model/sag.py:12: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88\n from .sag_fast import sag\n/media/disk1/yotam/skempi/skempi2/lib/python2.7/site-packages/sklearn/neighbors/__init__.py:6: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88\n from .ball_tree import BallTree\n/media/disk1/yotam/skempi/skempi2/lib/python2.7/site-packages/sklearn/neighbors/__init__.py:7: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. 
Expected 96, got 88\n from .kd_tree import KDTree\n" ], [ "from sklearn.svm import SVR\nfrom sklearn.ensemble import RandomForestRegressor\ndef run_holdout_test_ensemble(X, y, ix, alpha=0.5, normalize=1):\n indx_tst = ix[:, 0] == 0\n indx_trn = np.logical_not(indx_tst)\n y_trn = y[indx_trn]\n y_true = y[indx_tst]\n X_trn = X[indx_trn]\n X_tst = X[indx_tst]\n svr = SVR(kernel='rbf')\n rfr = RandomForestRegressor(n_estimators=50, random_state=0)\n if normalize == 1:\n scaler = StandardScaler()\n scaler.fit(X_trn)\n X_trn = scaler.transform(X_trn)\n X_tst = scaler.transform(X_tst)\n svr.fit(X_trn, y_trn)\n rfr.fit(X_trn, y_trn)\n y_pred_svr = svr.predict(X_tst)\n y_pred_rfr = rfr.predict(X_tst)\n y_pred = alpha * y_pred_svr + (1-alpha) * y_pred_rfr\n cor, pos, neg = evaluate(\"holdout\", y_true, y_pred, ix[indx_tst, 1])\n return cor, pos, neg", "_____no_output_____" ], [ "alpha = 0.5\nrun_holdout_test_ensemble(X, y, ix, alpha=0.5, normalize=1)", "[holdout:1966] cor_all:0.669, cor_pos:0.512, cor_neg:0.475\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
d01b37ec13d4d79d585d176c0bd9ce0aaa0f3914
16,534
ipynb
Jupyter Notebook
notebooks/Automate loan approvals with business rules.ipynb
ODMDev/decisions-on-spark
04eace9910966f8832f84f1da728d744d43eb3c9
[ "Apache-2.0" ]
6
2018-05-04T12:41:43.000Z
2021-07-16T15:24:19.000Z
notebooks/Automate loan approvals with business rules.ipynb
ODMDev/decisions-on-spark
04eace9910966f8832f84f1da728d744d43eb3c9
[ "Apache-2.0" ]
null
null
null
notebooks/Automate loan approvals with business rules.ipynb
ODMDev/decisions-on-spark
04eace9910966f8832f84f1da728d744d43eb3c9
[ "Apache-2.0" ]
5
2018-12-07T00:14:22.000Z
2021-11-05T17:10:50.000Z
16,534
16,534
0.694387
[ [ [ "# Automate loan approvals with Business rules in Apache Spark and Scala\n\n### Automating at scale your business decisions in Apache Spark with IBM ODM 8.9.2\n\nThis Scala notebook shows you how to execute locally business rules in DSX and Apache Spark. \nYou'll learn how to call in Apache Spark a rule-based decision service. This decision service has been programmed with IBM Operational Decision Manager. \n\nThis notebook puts in action a decision service named Miniloan that is part of the ODM tutorials. It determines with business rules whether a customer is eligible for a loan according to specific criteria. The criteria include the amount of the loan, the annual income of the borrower, and the duration of the loan.\n\nFirst we load an application data set that was captured as a CSV file. In scala we apply a map to this data set to automate a rule-based reasoning, in order to outcome a decision. The rule execution is performed locally in the Spark service. This notebook shows a complete Scala code that can execute any ruleset based on the public APIs.\n\nTo get the most out of this notebook, you should have some familiarity with the Scala programming language.\n\n## Contents \nThis notebook contains the following main sections:\n\n1. [Load the loan validation request dataset.](#loaddatatset)\n2. [Load the business rule execution and the simple loan application object model libraries.](#loadjars)\n3. [Import Scala packages.](#importpackages)\n4. [Implement a decision making function.](#implementDecisionServiceMap)\n5. [Execute the business rules to approve or reject the loan applications.](#executedecisions) \n6. [View the automated decisions.](#viewdecisions)\n7. [Summary and next steps.](#summary) ", "_____no_output_____" ], [ "<a id=\"accessdataset\"></a>\n## 1. Loading a loan application dataset file\nA data set of simple loan applications is already available. You load it in the Notebook through its url.", "_____no_output_____" ] ], [ [ "// @hidden_cell\nimport scala.sys.process._\n\n\"wget https://raw.githubusercontent.com/ODMDev/decisions-on-spark/master/data/miniloan/miniloan-requests-10K.csv\".!", "--2018-06-05 09:20:42-- https://raw.githubusercontent.com/ODMDev/decisions-on-spark/master/data/miniloan/miniloan-requests-10K.csv\nResolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.48.133\nConnecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.48.133|:443... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 417500 (408K) [text/plain]\nSaving to: ‘miniloan-requests-10K.csv.54’\n\n 0K .......... .......... .......... .......... .......... 12% 17.6M 0s\n 50K .......... .......... .......... .......... .......... 24% 20.3M 0s\n 100K .......... .......... .......... .......... .......... 36% 14.2M 0s\n 150K .......... .......... .......... .......... .......... 49% 18.5M 0s\n 200K .......... .......... .......... .......... .......... 61% 14.3M 0s\n 250K .......... .......... .......... .......... .......... 73% 6.38M 0s\n 300K .......... .......... .......... .......... .......... 85% 20.4M 0s\n 350K .......... .......... .......... .......... .......... 98% 30.5M 0s\n 400K ....... 
100% 247M=0.03s\n\n2018-06-05 09:20:42 (15.1 MB/s) - ‘miniloan-requests-10K.csv.54’ saved [417500/417500]\n\n" ], [ "val filename = \"miniloan-requests-10K.csv\"", "_____no_output_____" ] ], [ [ "This following code loads the 10 000 simple loan application dataset written in CSV format.", "_____no_output_____" ] ], [ [ "val requestData = sc.textFile(filename)\nval requestDataCount = requestData.count\nprintln(s\"$requestDataCount loan requests read in a CVS format\")\nprintln(\"The first 5 requests:\")\nrequestData.take(20).foreach(println)", "10000 loan requests read in a CVS format\nThe first 5 requests:\nJohn Doe, 550, 80000, 250000, 240, 0.05d\nJohn Woo, 540, 100000, 250000, 240, 0.05d\nPeter Woo, 540, 60000, 250000, 120, 0.05d\nPeter Woo, 540, 60000, 250000, 120, 0.07d\nJohn Doe, 550, 80000, 250000, 240, 0.05d\nJohn Woo, 540, 100000, 250000, 240, 0.05d\nPeter Woo, 540, 60000, 250000, 120, 0.05d\nPeter Woo, 540, 60000, 250000, 120, 0.07d\nJohn Doe, 550, 80000, 250000, 240, 0.05d\nJohn Woo, 540, 100000, 250000, 240, 0.05d\nPeter Woo, 540, 60000, 250000, 120, 0.05d\nPeter Woo, 540, 60000, 250000, 120, 0.07d\nJohn Doe, 550, 80000, 250000, 240, 0.05d\nJohn Woo, 540, 100000, 250000, 240, 0.05d\nPeter Woo, 540, 60000, 250000, 120, 0.05d\nPeter Woo, 540, 60000, 250000, 120, 0.07d\nJohn Doe, 550, 80000, 250000, 240, 0.05d\nJohn Woo, 540, 100000, 250000, 240, 0.05d\nPeter Woo, 540, 60000, 250000, 120, 0.05d\nPeter Woo, 540, 60000, 250000, 120, 0.07d\n" ] ], [ [ "<a id=\"loadjars\"></a>\n## 2. Add libraries for business rule execution and a loan application object model\nThe XXX refers to your object storage or other place where you make available these jars.\n\nAdd the following jars to execute the deployed decision service\n<il>\n<li>%AddJar https://XXX/j2ee_connector-1_5-fr.jar</li>\n<li>%AddJar https://XXX/jrules-engine.jar</li>\n<li>%AddJar https://XXX/jrules-res-execution.jar</li>\n</il>\n\nIn addition you need the Apache Jackson annotation lib\n<il>\n<li>%AddJar https://XXX/jackson-annotations-2.6.5.jar</li>\n</il>\n\nBusiness Rules apply on a Java executable Object Model packaged as a jar. We need these classes to create the decision requests, and to retreive the response from the rule engine.\n<il>\n<li>%AddJar https://XXX/miniloan-xom.jar</li>\n</il>", "_____no_output_____" ] ], [ [ "// @hidden_cell\n// The urls below are accessible for an IBM internal usage only\n\n%AddJar https://XXX/j2ee_connector-1_5-fr.jar\n%AddJar https://XXX/jrules-engine.jar\n%AddJar https://XXX/jrules-res-execution.jar\n%AddJar https://XXX/jackson-annotations-2.6.5.jar -f\n\n//Loan Application eXecutable Object Model\n%AddJar https://XXX/miniloan-xom.jar -f\n\nprint(\"Your notebook is now ready to execute business rules to approve or reject loan applications\")", "_____no_output_____" ] ], [ [ "<a id=\"importpackages\"></a>\n## 3. 
Import packages\nImport ODM and Apache Spark packages.", "_____no_output_____" ] ], [ [ "import java.util.Map\nimport java.util.HashMap\n\nimport com.fasterxml.jackson.core.JsonGenerationException\nimport com.fasterxml.jackson.core.JsonProcessingException\nimport com.fasterxml.jackson.databind.JsonMappingException\nimport com.fasterxml.jackson.databind.ObjectMapper\nimport com.fasterxml.jackson.databind.SerializationFeature\n\nimport org.apache.spark.SparkConf\nimport org.apache.spark.api.java.JavaDoubleRDD\nimport org.apache.spark.api.java.JavaRDD\nimport org.apache.spark.api.java.JavaSparkContext\nimport org.apache.spark.api.java.function.Function\nimport org.apache.hadoop.fs.FileSystem\nimport org.apache.hadoop.fs.Path\n\nimport scala.collection.JavaConverters._\n\nimport ilog.rules.res.model._\n\nimport com.ibm.res.InMemoryJ2SEFactory\nimport com.ibm.res.InMemoryRepositoryDAO\n\nimport ilog.rules.res.session._\n\nimport miniloan.Borrower\nimport miniloan.Loan\n\nimport scala.io.Source\nimport java.net.URL\nimport java.io.InputStream", "_____no_output_____" ] ], [ [ "<a id=\"implementDecisionServiceMap\"></a>\n## 4. Implement a Map function that executes a rule-based decision service", "_____no_output_____" ] ], [ [ "case class MiniLoanRequest(borrower: miniloan.Borrower, \n loan: miniloan.Loan) \n\ncase class RESRunner(sessionFactory: com.ibm.res.InMemoryJ2SEFactory) {\n \n def executeAsString(s: String): String = {\n println(\"executeAsString\")\n val request = makeRequest(s)\n val response = executeRequest(request)\n \n response\n }\n \n private def makeRequest(s: String): MiniLoanRequest = {\n val tokens = s.split(\",\")\n \n // Borrower deserialization from CSV\n val borrowerName = tokens(0)\n val borrowerCreditScore = java.lang.Integer.parseInt(tokens(1).trim())\n val borrowerYearlyIncome = java.lang.Integer.parseInt(tokens(2).trim())\n val loanAmount = java.lang.Integer.parseInt(tokens(3).trim())\n val loanDuration = java.lang.Integer.parseInt(tokens(4).trim())\n val yearlyInterestRate = java.lang.Double.parseDouble(tokens(5).trim())\n val borrower = new miniloan.Borrower(borrowerName, borrowerCreditScore, borrowerYearlyIncome)\n \n // Loan request deserialization from CSV\n val loan = new miniloan.Loan()\n loan.setAmount(loanAmount)\n loan.setDuration(loanDuration)\n loan.setYearlyInterestRate(yearlyInterestRate)\n \n val request = new MiniLoanRequest(borrower, loan)\n request\n }\n \n def executeRequest(request: MiniLoanRequest): String = {\n try {\n val sessionRequest = sessionFactory.createRequest()\n val rulesetPath = \"/Miniloan/Miniloan\"\n sessionRequest.setRulesetPath(ilog.rules.res.model.IlrPath.parsePath(rulesetPath))\n\n //sessionRequest.getTraceFilter.setInfoAllFilters(false)\n val inputParameters = sessionRequest.getInputParameters\n inputParameters.put(\"loan\", request.loan)\n inputParameters.put(\"borrower\", request.borrower)\n val session = sessionFactory.createStatelessSession()\n \n val response = session.execute(sessionRequest)\n \n var loan = response.getOutputParameters().get(\"loan\").asInstanceOf[miniloan.Loan]\n val mapper = new com.fasterxml.jackson.databind.ObjectMapper()\n mapper.configure(com.fasterxml.jackson.databind.SerializationFeature.FAIL_ON_EMPTY_BEANS, false)\n val results = new java.util.HashMap[String,Object]()\n results.put(\"input\", inputParameters)\n results.put(\"output\", response.getOutputParameters())\n try {\n //return mapper.writeValueAsString(results)\n return 
mapper.writerWithDefaultPrettyPrinter().writeValueAsString(results);\n } catch {\n case e: Exception => return e.toString()\n }\n \"Error\"\n } catch {\n case exception: Exception => {\n return exception.toString()\n }\n }\n \"Error\"\n }\n}\n\n\nval decisionService = new Function[String, String]() {\n\n @transient private var ruleSessionFactory: InMemoryJ2SEFactory = null\n private val rulesetURL = \"https://odmlibserver.mybluemix.net/8901/decisionservices/miniloan-8901.dsar\"\n @transient private var rulesetStream: InputStream = null\n\n def GetRuleSessionFactory(): InMemoryJ2SEFactory = {\n if (ruleSessionFactory == null) {\n ruleSessionFactory = new InMemoryJ2SEFactory()\n // Create the Management Session \n var repositoryFactory = ruleSessionFactory.createManagementSession().getRepositoryFactory()\n var repository = repositoryFactory.createRepository()\n \n // Deploy the Ruleapp with the Regular Management Session API.\n var rapp = repositoryFactory.createRuleApp(\"Miniloan\", IlrVersion.parseVersion(\"1.0\"));\n var rs = repositoryFactory.createRuleset(\"Miniloan\",IlrVersion.parseVersion(\"1.1\"));\n rapp.addRuleset(rs);\n \n //var fileStream = Source.fromResourceAsStream(RulesetFileName)\n\n rulesetStream = new java.net.URL(rulesetURL).openStream()\n\n rs.setRESRulesetArchive(IlrEngineType.DE,rulesetStream)\n repository.addRuleApp(rapp)\n \n }\n ruleSessionFactory\n }\n \n def call(s: String): String = {\n var runner = new RESRunner(GetRuleSessionFactory())\n return runner.executeAsString(s)\n }\n \n def execute(s: String): String = {\n try {\n var runner = new RESRunner(GetRuleSessionFactory())\n return runner.executeAsString(s)\n } catch {\n case exception: Exception => {\n exception.printStackTrace(System.err)\n }\n }\n \"Execution error\"\n }\n}", "_____no_output_____" ] ], [ [ "<a id=\"executedecisions\"></a>\n## 5. Automate the decision making on the loan application dataset\nYou invoke a map on the decision function. While the map executes, rule engines process the loan applications in parallel to produce a data set of answers.", "_____no_output_____" ] ], [ [ "println(\"Start of Execution\")\nval answers = requestData.map(decisionService.execute)\nprintf(\"Number of rule based decisions: %s \\n\" , answers.count)\n// Cleanup output file\n//val fs = FileSystem.get(new URI(outputPath), sc.hadoopConfiguration);\n//if (fs.exists(new Path(outputPath)))\n // fs.delete(new Path(outputPath), true)\n// Save RDD in a HDFS file\nprintln(\"End of Execution \")\n//answers.saveAsTextFile(\"swift://DecisionBatchExecution.\" + securedAccessName + \"/miniloan-decisions-10.csv\")\n\nprintln(\"Decision automation job done\")", "Start of Execution\nNumber of rule based decisions: 10000 \nEnd of Execution \nDecision automation job done\n" ] ], [ [ "<a id=\"viewdecisions\"></a>\n## 6. View your automated decisions\nEach decision is composed of output parameters and a decision trace. The loan data contains the approval flag and the computed yearly repayment. The decision trace lists the business rules that have been executed in sequence to come to the conclusion. Each decision has been serialized in JSON.", "_____no_output_____" ] ], [ [ "//answers.toDF().show(false)\nanswers.take(1).foreach(println)", "{\n \"output\" : {\n \"ilog.rules.firedRulesCount\" : 0,\n \"loan\" : {\n \"amount\" : 250000,\n \"duration\" : 240,\n \"yearlyInterestRate\" : 0.05,\n \"yearlyRepayment\" : 19798,\n \"approved\" : true,\n \"messages\" : [ ]\n }\n },\n \"input\" : {\n \"loan\" : {\n \"amount\" : 250000,\n \"duration\" : 240,\n \"yearlyInterestRate\" : 0.05,\n \"yearlyRepayment\" : 19798,\n \"approved\" : true,\n \"messages\" : [ ]\n },\n \"borrower\" : {\n \"name\" : \"John Doe\",\n \"creditScore\" : 550,\n \"yearlyIncome\" : 80000\n }\n }\n}\n" ] ], [ [ "<a id=\"summary\"></a>\n## 7. Summary and next steps\nCongratulations! You have applied business rules to automatically determine loan approval eligibility. You loaded a loan application data set and ran a rule engine inside an Apache Spark cluster to make an eligibility decision for each applicant. Each decision is a Scala object that is part of a Spark Resilient Distributed Dataset (RDD). \nEach decision is structured with input parameters (the context of the decision) and output parameters. For audit purposes, the rule engine can emit a decision trace.\n\nYou have successfully run a rule engine to automate decisions at scale in a Spark cluster. You can now invent your own business rules and run them with the same integration pattern.\n\n<a id=\"authors\"></a>\n## Authors\nPierre Feillet and Laurent Grateau are business rule engineers at IBM working in the Decision lab located in France.\n\nCopyright © 2018 IBM. This notebook and its source code are released under the terms of the MIT License.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
d01b38398ca27502803ea57b480ba3370bacf832
18,829
ipynb
Jupyter Notebook
data-stories/happiness-data/Project Practice.ipynb
BohanMeng/storytelling-with-data
291f8c4c3e1fd83e8057a773712d04febc6c21f6
[ "MIT" ]
2
2020-03-30T05:15:56.000Z
2022-03-21T16:24:56.000Z
data-stories/happiness-data/Project Practice.ipynb
BohanMeng/storytelling-with-data
291f8c4c3e1fd83e8057a773712d04febc6c21f6
[ "MIT" ]
2
2019-05-03T19:34:48.000Z
2019-05-25T01:28:22.000Z
data-stories/happiness-data/Project Practice.ipynb
FanruiShao/storytelling-with-data
55d5452a60ce2f16f398db014e4857b31f175f27
[ "MIT" ]
1
2018-01-17T19:14:05.000Z
2018-01-17T19:14:05.000Z
31.019769
182
0.452016
[ [ [ "##World Map Plotly \n\n#Import Plotly Lib and Set up Credentials with personal account\n!pip install plotly \n\nimport plotly\n\nplotly.tools.set_credentials_file(username='igleonaitis', api_key='If6Wh3xWNmdNioPzOZZo')\nplotly.tools.set_config_file(world_readable=True,\n sharing='public')\nimport plotly.plotly as py\nfrom plotly.graph_objs import *\nimport plotly.graph_objs as go\nimport pandas as pd", "Requirement already satisfied: plotly in /opt/conda/lib/python3.6/site-packages\nRequirement already satisfied: requests in /opt/conda/lib/python3.6/site-packages (from plotly)\nRequirement already satisfied: six in /opt/conda/lib/python3.6/site-packages (from plotly)\nRequirement already satisfied: nbformat>=4.2 in /opt/conda/lib/python3.6/site-packages (from plotly)\nRequirement already satisfied: pytz in /opt/conda/lib/python3.6/site-packages (from plotly)\nRequirement already satisfied: decorator>=4.0.6 in /opt/conda/lib/python3.6/site-packages (from plotly)\n" ], [ "#Import WHR 2017 data set \ndf = pd.read_excel('whr.xlsx', sheetname='Figure2.2 WHR 2017')\n\n#Set Up World Map Plot\n\nscl = [[0,'rgb(140,101,211)'],[0.25,'rgb(154,147,236)'],\n [0.50,'rgb(0,82,165)'],[0.75,'rgb(129,203,248)'],\n [1,'rgb(65,179,247)']]\n\n\ndata = [ dict(\n type = 'choropleth',\n locationmode = 'country names',\n locations = df['Country'],\n z = df['Happiness score'],\n text = df['Country'],\n colorscale = scl,\n autocolorscale = False,\n reversescale = False,\n marker = dict(\n line = dict (\n color = 'rgb(180,180,180)',\n width = 0.5\n ) ),\n colorbar = dict(\n autotick = False,\n tickprefix = False,\n title = 'World Happiness Score'),\n ) ]\n\nlayout = dict(\n title = '2017 National Happiness Scores GDP<br>Source:\\\n <a href=\"http://worldhappiness.report/ed/2017/\">\\\n World Happiness Report</a>',\n geo = dict(\n showframe = False,\n showcoastlines = False,\n projection = dict(\n type = 'Mercator'\n )\n )\n)\n\n#Create World Map Plot \nfig = dict(data = data, layout = layout)\npy.iplot(fig, validate=False, filename='d3-world-map')\n", "_____no_output_____" ], [ "df1 = pd.read_excel('whr.xlsx', sheetname='Figure2.2 WHR 2017')\n\n#Stacked Bar Plot \ntrace1 = go.Bar(\n y = df1['Country'],\n x = df1['Explained by: GDP per capita'], \n orientation = 'h', \n width = .5, \n name = 'GDP per Capita', \n marker=dict(\n color='rgb(140,101,211)'\n )\n)\ntrace2 = go.Bar(\n y = df1['Country'],\n x = df1['Explained by: Social support'], \n orientation = 'h', \n width = .5, \n name = 'Social Support', \n marker=dict(\n color='rgb(154,147,236)'\n )\n)\ntrace3 = go.Bar(\n y = df1['Country'],\n x = df1['Explained by: Healthy life expectancy'], \n orientation = 'h', \n width = .5,\n name = 'Healthy Life Expectancy', \n marker=dict(\n color='rgb(0,82,165)'\n )\n)\ntrace4 = go.Bar(\n y = df1['Country'],\n x = df1['Explained by: Freedom to make life choices'], \n orientation = 'h', \n width = .5, \n name = 'Freedom to Make Life Choices', \n marker=dict(\n color='rgb(129,203,248)'\n )\n)\ntrace5 = go.Bar(\n y = df1['Country'],\n x = df1['Explained by: Generosity'], \n orientation = 'h', \n width = .5, \n name = 'Generosity', \n marker=dict(\n color='rgb(65,179,247)'\n )\n)\ntrace6 = go.Bar(\n y = df1['Country'],\n x = df1['Explained by: Perceptions of corruption'], \n orientation = 'h', \n width = .5, \n name = 'Perceptions on Corruption', \n marker=dict(\n color='rgb(115, 235, 174)'\n )\n)\n\n\ndata = [trace1, trace2, trace3, trace4, trace5, trace6]\nlayout = go.Layout(\n title = 'Factor Makeup of Happiness 
Scores',\n barmode ='stack', \n autosize = False,\n width = 800,\n height = 1500,\n yaxis = dict(\n tickfont = dict(\n size = 6,\n color = 'black')),\n xaxis = dict(\n tickfont = dict(\n size = 10, \n color = 'black'))\n)\n\nfig = go.Figure(data=data, layout=layout)\npy.iplot(fig, filename='stacked-horizontal-bar')", "_____no_output_____" ], [ "import plotly.plotly as py\nfrom plotly.grid_objs import Grid, Column\nfrom plotly.figure_factory import *\n\nimport pandas as pd\nimport time\n\nxls_file = pd.ExcelFile('Internet_Usage.xls')\nxls_file\ndataset = xls_file.parse('Sheet1')\ndataset.head()", "_____no_output_____" ], [ "years_from_col = set(dataset['year'])\nyears_ints = sorted(list(years_from_col))\nyears = [str(year) for year in years_ints]\n\n\n# make list of continents\ncontinents = []\nfor continent in dataset['continent']:\n if continent not in continents: \n continents.append(continent)\n\ncolumns = []\n# make grid\nfor year in years:\n for continent in continents:\n dataset_by_year = dataset[dataset['year'] == int(year)]\n dataset_by_year_and_cont = dataset_by_year[dataset_by_year['continent'] == continent]\n for col_name in dataset_by_year_and_cont:\n # each column name is unique\n column_name = '{year}_{continent}_{header}_gapminder_grid'.format(\n year=year, continent=continent, header=col_name\n )\n a_column = Column(list(dataset_by_year_and_cont[col_name]), column_name)\n columns.append(a_column)\n\n# upload grid\ngrid = Grid(columns)\nurl = py.grid_ops.upload(grid, 'gapminder_grid'+str(time.time()), auto_open=False)\nurl", "_____no_output_____" ], [ "figure = {\n 'data': [],\n 'layout': {},\n 'frames': [],\n 'config': {'scrollzoom': True}\n}\n\n# fill in most of layout\nfigure['layout']['xaxis'] = {'range': [2, 8], 'title': 'World Happiness Score', 'gridcolor': '#FFFFFF'}\nfigure['layout']['yaxis'] = {'range': [0,100],'title': 'Internet Usage % of Pop.', 'gridcolor': '#FFFFFF'}\nfigure['layout']['hovermode'] = 'closest'\nfigure['layout']['plot_bgcolor'] = 'rgb(223, 232, 243)'", "_____no_output_____" ], [ "sliders_dict = {\n 'active': 0,\n 'yanchor': 'top',\n 'xanchor': 'left',\n 'currentvalue': {\n 'font': {'size': 20},\n 'prefix': 'Year:',\n 'visible': True,\n 'xanchor': 'right'\n },\n 'transition': {'duration': 300, 'easing': 'cubic-in-out'},\n 'pad': {'b': 10, 't': 50},\n 'len': 0.9,\n 'x': 0.1,\n 'y': 0,\n 'steps': []\n}\n", "_____no_output_____" ], [ "figure['layout']['updatemenus'] = [\n {\n 'buttons': [\n {\n 'args': [None, {'frame': {'duration': 500, 'redraw': False},\n 'fromcurrent': True, 'transition': {'duration': 300, 'easing': 'quadratic-in-out'}}],\n 'label': 'Play',\n 'method': 'animate'\n },\n {\n 'args': [[None], {'frame': {'duration': 0, 'redraw': False}, 'mode': 'immediate',\n 'transition': {'duration': 0}}],\n 'label': 'Pause',\n 'method': 'animate'\n }\n ],\n 'direction': 'left',\n 'pad': {'r': 10, 't': 87},\n 'showactive': False,\n 'type': 'buttons',\n 'x': 0.1,\n 'xanchor': 'right',\n 'y': 0,\n 'yanchor': 'top'\n }\n]\n\ncustom_colors = {\n 'Asia': 'rgb(171, 99, 250)',\n 'Europe': 'rgb(230, 99, 250)',\n 'Africa': 'rgb(99, 110, 250)',\n 'Americas': 'rgb(25, 211, 243)',\n 'Oceania': 'rgb(50, 170, 255)'\n}", "_____no_output_____" ], [ "col_name_template = '{year}_{continent}_{header}_gapminder_grid'\nyear = 2007\nfor continent in continents:\n data_dict = {\n 'xsrc': grid.get_column_reference(col_name_template.format(\n year=year, continent=continent, header='lifeExp'\n )),\n 'ysrc': grid.get_column_reference(col_name_template.format(\n year=year, 
continent=continent, header='gdpPercap'\n )),\n 'mode': 'markers',\n 'textsrc': grid.get_column_reference(col_name_template.format(\n year=year, continent=continent, header='country'\n )),\n 'marker': {\n 'sizemode': 'area',\n 'sizeref': 2000,\n 'sizesrc': grid.get_column_reference(col_name_template.format(\n year=year, continent=continent, header='pop'\n )),\n 'color': custom_colors[continent]\n },\n 'name': continent\n }\n figure['data'].append(data_dict)", "_____no_output_____" ], [ "for year in years:\n frame = {'data': [], 'name': str(year)}\n for continent in continents:\n data_dict = {\n 'xsrc': grid.get_column_reference(col_name_template.format(\n year=year, continent=continent, header='lifeExp'\n )),\n 'ysrc': grid.get_column_reference(col_name_template.format(\n year=year, continent=continent, header='gdpPercap'\n )),\n 'mode': 'markers',\n 'textsrc': grid.get_column_reference(col_name_template.format(\n year=year, continent=continent, header='country'\n )),\n 'marker': {\n 'sizemode': 'area',\n 'sizeref': 2000,\n 'sizesrc': grid.get_column_reference(col_name_template.format(\n year=year, continent=continent, header='pop'\n )),\n 'color': custom_colors[continent]\n },\n 'name': continent\n }\n frame['data'].append(data_dict)\n\n figure['frames'].append(frame)\n slider_step = {'args': [\n [year],\n {'frame': {'duration': 300, 'redraw': False},\n 'mode': 'immediate',\n 'transition': {'duration': 300}}\n ],\n 'label': year,\n 'method': 'animate'}\n sliders_dict['steps'].append(slider_step)\n\nfigure['layout']['sliders'] = [sliders_dict]", "_____no_output_____" ], [ "py.icreate_animations(figure, 'gapminder_example'+str(time.time()))", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
d01b458aef43dd1337482e9b30957cbd8bac1624
50,861
ipynb
Jupyter Notebook
training/Training_edges_hog_gray.ipynb
OpenGridMap/power-grid-detection
221fcf0461dc869c8c64b11fa48596f83c20e1c8
[ "Apache-2.0" ]
null
null
null
training/Training_edges_hog_gray.ipynb
OpenGridMap/power-grid-detection
221fcf0461dc869c8c64b11fa48596f83c20e1c8
[ "Apache-2.0" ]
1
2018-07-22T22:43:27.000Z
2018-07-22T22:43:27.000Z
training/Training_edges_hog_gray.ipynb
OpenGridMap/power-grid-detection
221fcf0461dc869c8c64b11fa48596f83c20e1c8
[ "Apache-2.0" ]
null
null
null
85.768971
1,362
0.549714
[ [ [ "from __future__ import print_function\n\nimport os\nimport sys\nimport numpy as np\n\nfrom keras.optimizers import SGD\nfrom keras.callbacks import CSVLogger, ModelCheckpoint\n\nsys.path.append(os.path.join(os.getcwd(), os.pardir))\n\nimport config\n\nfrom utils.dataset.data_generator import DataGenerator\nfrom models.cnn3 import cnn", "Using Theano backend.\nUsing gpu device 1: GeForce GTX 680 (CNMeM is disabled, cuDNN 5005)\n" ], [ "lr=0.001\nn_epochs=500\nbatch_size=32\ninput_shape=(140, 140, 3)\n\nname = 'cnn_140_edges_hog_grey_lr_%f_nesterov' % lr", "_____no_output_____" ], [ "print('loading model...')\nmodel = cnn(input_shape=input_shape)\nmodel.summary()\n\noptimizer = SGD(lr=lr, clipnorm=1., clipvalue=0.5, nesterov=True)\n\nprint('compiling model...')\nmodel.compile(optimizer=optimizer, loss='binary_crossentropy', metrics=['accuracy'])\nprint('done.')\n\ncsv_logger = CSVLogger('%s_training.log' % name)\nbest_model_checkpointer = ModelCheckpoint(filepath=(\"./%s_training_weights_best.hdf5\" % name), verbose=1,\n save_best_only=True)\n\ncurrent_model_checkpointer = ModelCheckpoint(filepath=(\"./%s_training_weights_current.hdf5\" % name), verbose=0)", "loading model...\n____________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n====================================================================================================\ninput_1 (InputLayer) (None, 140, 140, 3) 0 \n____________________________________________________________________________________________________\nconvolution2d_1 (Convolution2D) (None, 140, 140, 128) 18944 input_1[0][0] \n____________________________________________________________________________________________________\nactivation_1 (Activation) (None, 140, 140, 128) 0 convolution2d_1[0][0] \n____________________________________________________________________________________________________\nmaxpooling2d_1 (MaxPooling2D) (None, 70, 70, 128) 0 activation_1[0][0] \n____________________________________________________________________________________________________\nconvolution2d_2 (Convolution2D) (None, 70, 70, 64) 204864 maxpooling2d_1[0][0] \n____________________________________________________________________________________________________\nactivation_2 (Activation) (None, 70, 70, 64) 0 convolution2d_2[0][0] \n____________________________________________________________________________________________________\nmaxpooling2d_2 (MaxPooling2D) (None, 35, 35, 64) 0 activation_2[0][0] \n____________________________________________________________________________________________________\nconvolution2d_3 (Convolution2D) (None, 35, 35, 64) 36928 maxpooling2d_2[0][0] \n____________________________________________________________________________________________________\nactivation_3 (Activation) (None, 35, 35, 64) 0 convolution2d_3[0][0] \n____________________________________________________________________________________________________\nmaxpooling2d_3 (MaxPooling2D) (None, 17, 17, 64) 0 activation_3[0][0] \n____________________________________________________________________________________________________\nflatten_1 (Flatten) (None, 18496) 0 maxpooling2d_3[0][0] \n____________________________________________________________________________________________________\ndense_1 (Dense) (None, 1024) 18940928 flatten_1[0][0] \n____________________________________________________________________________________________________\ndense_2 (Dense) (None, 1024) 1049600 
dense_1[0][0] \n____________________________________________________________________________________________________\ndense_3 (Dense) (None, 512) 524800 dense_2[0][0] \n____________________________________________________________________________________________________\ndense_4 (Dense) (None, 2) 1026 dense_3[0][0] \n____________________________________________________________________________________________________\nactivation_4 (Activation) (None, 2) 0 dense_4[0][0] \n====================================================================================================\nTotal params: 20777090\n____________________________________________________________________________________________________\ncompiling model...\ndone.\n" ], [ "print('Initializing data generators...')\ntrain_data_gen = DataGenerator(dataset_file=config.train_data_file, batch_size=batch_size, preprocessing='edges_hog_gray')\nvalidation_data_gen = DataGenerator(dataset_file=config.validation_data_file, batch_size=batch_size, preprocessing='edges_hog_gray')\ntest_data_gen = DataGenerator(dataset_file=config.test_data_file, batch_size=batch_size, preprocessing='edges_hog_gray')\nprint('done.')", "Initializing data generators...\ndone.\n" ], [ "print('Fitting model...')\nhistory = model.fit_generator(train_data_gen,\n nb_epoch=n_epochs,\n samples_per_epoch=train_data_gen.n_batches * batch_size,\n validation_data=validation_data_gen,\n nb_val_samples=validation_data_gen.n_samples,\n verbose=1,\n callbacks=[csv_logger, best_model_checkpointer, current_model_checkpointer])\nprint('done.')", "Fitting model...\nEpoch 1/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.6842 - acc: 0.6566Epoch 00000: val_loss improved from inf to 0.67410, saving model to ./cnn_140_edges_hog_grey_lr_0.001000_nesterov_training_weights_best.hdf5\n10496/10496 [==============================] - 383s - loss: 0.6842 - acc: 0.6563 - val_loss: 0.6741 - val_acc: 0.6734\nEpoch 2/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.6676 - acc: 0.6650Epoch 00001: val_loss improved from 0.67410 to 0.65905, saving model to ./cnn_140_edges_hog_grey_lr_0.001000_nesterov_training_weights_best.hdf5\n10496/10496 [==============================] - 380s - loss: 0.6677 - acc: 0.6647 - val_loss: 0.6590 - val_acc: 0.6725\nEpoch 3/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.6552 - acc: 0.6650Epoch 00002: val_loss improved from 0.65905 to 0.64789, saving model to ./cnn_140_edges_hog_grey_lr_0.001000_nesterov_training_weights_best.hdf5\n10496/10496 [==============================] - 380s - loss: 0.6553 - acc: 0.6647 - val_loss: 0.6479 - val_acc: 0.6721\nEpoch 4/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.6461 - acc: 0.6650Epoch 00003: val_loss improved from 0.64789 to 0.64012, saving model to ./cnn_140_edges_hog_grey_lr_0.001000_nesterov_training_weights_best.hdf5\n10496/10496 [==============================] - 382s - loss: 0.6463 - acc: 0.6647 - val_loss: 0.6401 - val_acc: 0.6708\nEpoch 5/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.6390 - acc: 0.6650Epoch 00004: val_loss improved from 0.64012 to 0.63145, saving model to ./cnn_140_edges_hog_grey_lr_0.001000_nesterov_training_weights_best.hdf5\n10496/10496 [==============================] - 383s - loss: 0.6392 - acc: 0.6647 - val_loss: 0.6315 - val_acc: 0.6743\nEpoch 6/500\n10464/10496 [============================>.] 
- ETA: 1s - loss: 0.6315 - acc: 0.6650Epoch 00005: val_loss improved from 0.63145 to 0.62386, saving model to ./cnn_140_edges_hog_grey_lr_0.001000_nesterov_training_weights_best.hdf5\n10496/10496 [==============================] - 380s - loss: 0.6317 - acc: 0.6647 - val_loss: 0.6239 - val_acc: 0.6725\nEpoch 7/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.6199 - acc: 0.6650Epoch 00006: val_loss improved from 0.62386 to 0.60869, saving model to ./cnn_140_edges_hog_grey_lr_0.001000_nesterov_training_weights_best.hdf5\n10496/10496 [==============================] - 380s - loss: 0.6201 - acc: 0.6647 - val_loss: 0.6087 - val_acc: 0.6734\nEpoch 8/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.5970 - acc: 0.6650Epoch 00007: val_loss improved from 0.60869 to 0.58061, saving model to ./cnn_140_edges_hog_grey_lr_0.001000_nesterov_training_weights_best.hdf5\n10496/10496 [==============================] - 380s - loss: 0.5971 - acc: 0.6647 - val_loss: 0.5806 - val_acc: 0.6710\nEpoch 9/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.5553 - acc: 0.6785Epoch 00008: val_loss improved from 0.58061 to 0.53442, saving model to ./cnn_140_edges_hog_grey_lr_0.001000_nesterov_training_weights_best.hdf5\n10496/10496 [==============================] - 380s - loss: 0.5555 - acc: 0.6782 - val_loss: 0.5344 - val_acc: 0.7088\nEpoch 10/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.5123 - acc: 0.7105Epoch 00009: val_loss improved from 0.53442 to 0.49943, saving model to ./cnn_140_edges_hog_grey_lr_0.001000_nesterov_training_weights_best.hdf5\n10496/10496 [==============================] - 380s - loss: 0.5126 - acc: 0.7102 - val_loss: 0.4994 - val_acc: 0.7326\nEpoch 11/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4847 - acc: 0.7317Epoch 00010: val_loss improved from 0.49943 to 0.48177, saving model to ./cnn_140_edges_hog_grey_lr_0.001000_nesterov_training_weights_best.hdf5\n10496/10496 [==============================] - 380s - loss: 0.4850 - acc: 0.7315 - val_loss: 0.4818 - val_acc: 0.7476\nEpoch 12/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4708 - acc: 0.7444Epoch 00011: val_loss improved from 0.48177 to 0.47363, saving model to ./cnn_140_edges_hog_grey_lr_0.001000_nesterov_training_weights_best.hdf5\n10496/10496 [==============================] - 382s - loss: 0.4711 - acc: 0.7441 - val_loss: 0.4736 - val_acc: 0.7548\nEpoch 13/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4637 - acc: 0.7512Epoch 00012: val_loss improved from 0.47363 to 0.46965, saving model to ./cnn_140_edges_hog_grey_lr_0.001000_nesterov_training_weights_best.hdf5\n10496/10496 [==============================] - 380s - loss: 0.4640 - acc: 0.7509 - val_loss: 0.4696 - val_acc: 0.7597\nEpoch 14/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4594 - acc: 0.7546Epoch 00013: val_loss improved from 0.46965 to 0.46501, saving model to ./cnn_140_edges_hog_grey_lr_0.001000_nesterov_training_weights_best.hdf5\n10496/10496 [==============================] - 380s - loss: 0.4597 - acc: 0.7542 - val_loss: 0.4650 - val_acc: 0.7647\nEpoch 15/500\n10464/10496 [============================>.] 
- ETA: 1s - loss: 0.4566 - acc: 0.7571Epoch 00014: val_loss improved from 0.46501 to 0.46349, saving model to ./cnn_140_edges_hog_grey_lr_0.001000_nesterov_training_weights_best.hdf5\n10496/10496 [==============================] - 380s - loss: 0.4569 - acc: 0.7568 - val_loss: 0.4635 - val_acc: 0.7650\nEpoch 16/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4546 - acc: 0.7588Epoch 00015: val_loss improved from 0.46349 to 0.46234, saving model to ./cnn_140_edges_hog_grey_lr_0.001000_nesterov_training_weights_best.hdf5\n10496/10496 [==============================] - 380s - loss: 0.4550 - acc: 0.7585 - val_loss: 0.4623 - val_acc: 0.7678\nEpoch 17/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4530 - acc: 0.7603Epoch 00016: val_loss did not improve\n10496/10496 [==============================] - 379s - loss: 0.4533 - acc: 0.7600 - val_loss: 0.4636 - val_acc: 0.7672\nEpoch 18/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4517 - acc: 0.7611Epoch 00017: val_loss did not improve\n10496/10496 [==============================] - 379s - loss: 0.4520 - acc: 0.7608 - val_loss: 0.4634 - val_acc: 0.7687\nEpoch 19/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4505 - acc: 0.7616Epoch 00018: val_loss improved from 0.46234 to 0.46121, saving model to ./cnn_140_edges_hog_grey_lr_0.001000_nesterov_training_weights_best.hdf5\n10496/10496 [==============================] - 380s - loss: 0.4508 - acc: 0.7613 - val_loss: 0.4612 - val_acc: 0.7698\nEpoch 20/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4494 - acc: 0.7618Epoch 00019: val_loss did not improve\n10496/10496 [==============================] - 379s - loss: 0.4498 - acc: 0.7615 - val_loss: 0.4615 - val_acc: 0.7696\nEpoch 21/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4485 - acc: 0.7621Epoch 00020: val_loss improved from 0.46121 to 0.45922, saving model to ./cnn_140_edges_hog_grey_lr_0.001000_nesterov_training_weights_best.hdf5\n10496/10496 [==============================] - 380s - loss: 0.4488 - acc: 0.7619 - val_loss: 0.4592 - val_acc: 0.7691\nEpoch 22/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4476 - acc: 0.7620Epoch 00021: val_loss did not improve\n10496/10496 [==============================] - 379s - loss: 0.4479 - acc: 0.7618 - val_loss: 0.4592 - val_acc: 0.7685\nEpoch 23/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4467 - acc: 0.7623Epoch 00022: val_loss improved from 0.45922 to 0.45754, saving model to ./cnn_140_edges_hog_grey_lr_0.001000_nesterov_training_weights_best.hdf5\n10496/10496 [==============================] - 380s - loss: 0.4471 - acc: 0.7620 - val_loss: 0.4575 - val_acc: 0.7687\nEpoch 24/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4459 - acc: 0.7622Epoch 00023: val_loss improved from 0.45754 to 0.45556, saving model to ./cnn_140_edges_hog_grey_lr_0.001000_nesterov_training_weights_best.hdf5\n10496/10496 [==============================] - 380s - loss: 0.4463 - acc: 0.7619 - val_loss: 0.4556 - val_acc: 0.7687\nEpoch 25/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4452 - acc: 0.7624Epoch 00024: val_loss did not improve\n10496/10496 [==============================] - 379s - loss: 0.4456 - acc: 0.7621 - val_loss: 0.4590 - val_acc: 0.7667\nEpoch 26/500\n10464/10496 [============================>.] 
- ETA: 1s - loss: 0.4445 - acc: 0.7629Epoch 00025: val_loss improved from 0.45556 to 0.45485, saving model to ./cnn_140_edges_hog_grey_lr_0.001000_nesterov_training_weights_best.hdf5\n10496/10496 [==============================] - 380s - loss: 0.4448 - acc: 0.7626 - val_loss: 0.4548 - val_acc: 0.7683\nEpoch 27/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4439 - acc: 0.7631Epoch 00026: val_loss improved from 0.45485 to 0.45355, saving model to ./cnn_140_edges_hog_grey_lr_0.001000_nesterov_training_weights_best.hdf5\n10496/10496 [==============================] - 380s - loss: 0.4442 - acc: 0.7628 - val_loss: 0.4536 - val_acc: 0.7685\nEpoch 28/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4433 - acc: 0.7633Epoch 00027: val_loss did not improve\n10496/10496 [==============================] - 379s - loss: 0.4436 - acc: 0.7630 - val_loss: 0.4544 - val_acc: 0.7683\nEpoch 29/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4427 - acc: 0.7636Epoch 00028: val_loss did not improve\n10496/10496 [==============================] - 379s - loss: 0.4431 - acc: 0.7633 - val_loss: 0.4539 - val_acc: 0.7669\nEpoch 30/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4421 - acc: 0.7639Epoch 00029: val_loss improved from 0.45355 to 0.45310, saving model to ./cnn_140_edges_hog_grey_lr_0.001000_nesterov_training_weights_best.hdf5\n10496/10496 [==============================] - 380s - loss: 0.4424 - acc: 0.7636 - val_loss: 0.4531 - val_acc: 0.7678\nEpoch 31/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4415 - acc: 0.7642Epoch 00030: val_loss did not improve\n10496/10496 [==============================] - 381s - loss: 0.4419 - acc: 0.7639 - val_loss: 0.4549 - val_acc: 0.7665\nEpoch 32/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4410 - acc: 0.7640Epoch 00031: val_loss improved from 0.45310 to 0.45002, saving model to ./cnn_140_edges_hog_grey_lr_0.001000_nesterov_training_weights_best.hdf5\n10496/10496 [==============================] - 382s - loss: 0.4414 - acc: 0.7637 - val_loss: 0.4500 - val_acc: 0.7678\nEpoch 33/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4405 - acc: 0.7640Epoch 00032: val_loss did not improve\n10496/10496 [==============================] - 379s - loss: 0.4409 - acc: 0.7637 - val_loss: 0.4517 - val_acc: 0.7669\nEpoch 34/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4400 - acc: 0.7642Epoch 00033: val_loss did not improve\n10496/10496 [==============================] - 379s - loss: 0.4404 - acc: 0.7640 - val_loss: 0.4529 - val_acc: 0.7654\nEpoch 35/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4396 - acc: 0.7643Epoch 00034: val_loss improved from 0.45002 to 0.44926, saving model to ./cnn_140_edges_hog_grey_lr_0.001000_nesterov_training_weights_best.hdf5\n10496/10496 [==============================] - 380s - loss: 0.4399 - acc: 0.7641 - val_loss: 0.4493 - val_acc: 0.7667\nEpoch 36/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4391 - acc: 0.7644Epoch 00035: val_loss did not improve\n10496/10496 [==============================] - 382s - loss: 0.4395 - acc: 0.7642 - val_loss: 0.4505 - val_acc: 0.7661\nEpoch 37/500\n10464/10496 [============================>.] 
- ETA: 1s - loss: 0.4386 - acc: 0.7645Epoch 00036: val_loss did not improve\n10496/10496 [==============================] - 379s - loss: 0.4390 - acc: 0.7642 - val_loss: 0.4522 - val_acc: 0.7652\nEpoch 38/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4382 - acc: 0.7648Epoch 00037: val_loss did not improve\n10496/10496 [==============================] - 379s - loss: 0.4385 - acc: 0.7645 - val_loss: 0.4501 - val_acc: 0.7652\nEpoch 39/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4377 - acc: 0.7648Epoch 00038: val_loss did not improve\n10496/10496 [==============================] - 379s - loss: 0.4380 - acc: 0.7646 - val_loss: 0.4541 - val_acc: 0.7636\nEpoch 40/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4372 - acc: 0.7648Epoch 00039: val_loss improved from 0.44926 to 0.44773, saving model to ./cnn_140_edges_hog_grey_lr_0.001000_nesterov_training_weights_best.hdf5\n10496/10496 [==============================] - 380s - loss: 0.4376 - acc: 0.7646 - val_loss: 0.4477 - val_acc: 0.7665\nEpoch 41/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4367 - acc: 0.7651Epoch 00040: val_loss did not improve\n10496/10496 [==============================] - 379s - loss: 0.4371 - acc: 0.7648 - val_loss: 0.4492 - val_acc: 0.7656\nEpoch 42/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4363 - acc: 0.7654Epoch 00041: val_loss did not improve\n10496/10496 [==============================] - 379s - loss: 0.4366 - acc: 0.7651 - val_loss: 0.4479 - val_acc: 0.7663\nEpoch 43/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4358 - acc: 0.7655Epoch 00042: val_loss did not improve\n10496/10496 [==============================] - 379s - loss: 0.4362 - acc: 0.7652 - val_loss: 0.4517 - val_acc: 0.7643\nEpoch 44/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4354 - acc: 0.7657Epoch 00043: val_loss improved from 0.44773 to 0.44724, saving model to ./cnn_140_edges_hog_grey_lr_0.001000_nesterov_training_weights_best.hdf5\n10496/10496 [==============================] - 380s - loss: 0.4358 - acc: 0.7654 - val_loss: 0.4472 - val_acc: 0.7665\nEpoch 45/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4349 - acc: 0.7657Epoch 00044: val_loss improved from 0.44724 to 0.44713, saving model to ./cnn_140_edges_hog_grey_lr_0.001000_nesterov_training_weights_best.hdf5\n10496/10496 [==============================] - 380s - loss: 0.4353 - acc: 0.7654 - val_loss: 0.4471 - val_acc: 0.7661\nEpoch 46/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4345 - acc: 0.7657Epoch 00045: val_loss improved from 0.44713 to 0.44694, saving model to ./cnn_140_edges_hog_grey_lr_0.001000_nesterov_training_weights_best.hdf5\n10496/10496 [==============================] - 380s - loss: 0.4349 - acc: 0.7655 - val_loss: 0.4469 - val_acc: 0.7676\nEpoch 47/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4343 - acc: 0.7658Epoch 00046: val_loss did not improve\n10496/10496 [==============================] - 379s - loss: 0.4347 - acc: 0.7656 - val_loss: 0.4470 - val_acc: 0.7667\nEpoch 48/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4339 - acc: 0.7658Epoch 00047: val_loss did not improve\n10496/10496 [==============================] - 379s - loss: 0.4343 - acc: 0.7656 - val_loss: 0.4478 - val_acc: 0.7661\nEpoch 49/500\n10464/10496 [============================>.] 
- ETA: 1s - loss: 0.4335 - acc: 0.7660Epoch 00048: val_loss improved from 0.44694 to 0.44408, saving model to ./cnn_140_edges_hog_grey_lr_0.001000_nesterov_training_weights_best.hdf5\n10496/10496 [==============================] - 380s - loss: 0.4339 - acc: 0.7657 - val_loss: 0.4441 - val_acc: 0.7683\nEpoch 50/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4328 - acc: 0.7660Epoch 00049: val_loss did not improve\n10496/10496 [==============================] - 382s - loss: 0.4332 - acc: 0.7657 - val_loss: 0.4448 - val_acc: 0.7674\nEpoch 51/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4324 - acc: 0.7663Epoch 00050: val_loss did not improve\n10496/10496 [==============================] - 379s - loss: 0.4328 - acc: 0.7661 - val_loss: 0.4442 - val_acc: 0.7674\nEpoch 52/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4320 - acc: 0.7666Epoch 00051: val_loss did not improve\n10496/10496 [==============================] - 379s - loss: 0.4324 - acc: 0.7663 - val_loss: 0.4467 - val_acc: 0.7654\nEpoch 53/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4318 - acc: 0.7666Epoch 00052: val_loss did not improve\n10496/10496 [==============================] - 379s - loss: 0.4321 - acc: 0.7664 - val_loss: 0.4468 - val_acc: 0.7658\nEpoch 54/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4312 - acc: 0.7669Epoch 00053: val_loss did not improve\n10496/10496 [==============================] - 379s - loss: 0.4315 - acc: 0.7666 - val_loss: 0.4455 - val_acc: 0.7667\nEpoch 55/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4308 - acc: 0.7670Epoch 00054: val_loss did not improve\n10496/10496 [==============================] - 380s - loss: 0.4312 - acc: 0.7668 - val_loss: 0.4460 - val_acc: 0.7661\nEpoch 56/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4306 - acc: 0.7674Epoch 00055: val_loss did not improve\n10496/10496 [==============================] - 379s - loss: 0.4309 - acc: 0.7671 - val_loss: 0.4442 - val_acc: 0.7678\nEpoch 57/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4302 - acc: 0.7674Epoch 00056: val_loss did not improve\n10496/10496 [==============================] - 379s - loss: 0.4305 - acc: 0.7672 - val_loss: 0.4447 - val_acc: 0.7676\nEpoch 58/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4298 - acc: 0.7674Epoch 00057: val_loss improved from 0.44408 to 0.44340, saving model to ./cnn_140_edges_hog_grey_lr_0.001000_nesterov_training_weights_best.hdf5\n10496/10496 [==============================] - 381s - loss: 0.4301 - acc: 0.7671 - val_loss: 0.4434 - val_acc: 0.7689\nEpoch 59/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4292 - acc: 0.7674Epoch 00058: val_loss improved from 0.44340 to 0.44193, saving model to ./cnn_140_edges_hog_grey_lr_0.001000_nesterov_training_weights_best.hdf5\n10496/10496 [==============================] - 380s - loss: 0.4295 - acc: 0.7671 - val_loss: 0.4419 - val_acc: 0.7689\nEpoch 60/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4288 - acc: 0.7675Epoch 00059: val_loss did not improve\n10496/10496 [==============================] - 379s - loss: 0.4291 - acc: 0.7672 - val_loss: 0.4461 - val_acc: 0.7667\nEpoch 61/500\n10464/10496 [============================>.] 
- ETA: 1s - loss: 0.4283 - acc: 0.7676Epoch 00060: val_loss did not improve\n10496/10496 [==============================] - 379s - loss: 0.4287 - acc: 0.7674 - val_loss: 0.4421 - val_acc: 0.7685\nEpoch 62/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4280 - acc: 0.7677Epoch 00061: val_loss improved from 0.44193 to 0.44081, saving model to ./cnn_140_edges_hog_grey_lr_0.001000_nesterov_training_weights_best.hdf5\n10496/10496 [==============================] - 380s - loss: 0.4283 - acc: 0.7674 - val_loss: 0.4408 - val_acc: 0.7694\nEpoch 63/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4275 - acc: 0.7678Epoch 00062: val_loss did not improve\n10496/10496 [==============================] - 379s - loss: 0.4279 - acc: 0.7675 - val_loss: 0.4420 - val_acc: 0.7687\nEpoch 64/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4272 - acc: 0.7681Epoch 00063: val_loss did not improve\n10496/10496 [==============================] - 379s - loss: 0.4275 - acc: 0.7678 - val_loss: 0.4416 - val_acc: 0.7680\nEpoch 65/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4268 - acc: 0.7679Epoch 00064: val_loss improved from 0.44081 to 0.44063, saving model to ./cnn_140_edges_hog_grey_lr_0.001000_nesterov_training_weights_best.hdf5\n10496/10496 [==============================] - 385s - loss: 0.4271 - acc: 0.7676 - val_loss: 0.4406 - val_acc: 0.7694\nEpoch 66/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4264 - acc: 0.7681Epoch 00065: val_loss did not improve\n10496/10496 [==============================] - 380s - loss: 0.4268 - acc: 0.7679 - val_loss: 0.4428 - val_acc: 0.7685\nEpoch 67/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4260 - acc: 0.7682Epoch 00066: val_loss improved from 0.44063 to 0.43819, saving model to ./cnn_140_edges_hog_grey_lr_0.001000_nesterov_training_weights_best.hdf5\n10496/10496 [==============================] - 381s - loss: 0.4264 - acc: 0.7680 - val_loss: 0.4382 - val_acc: 0.7696\nEpoch 68/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4257 - acc: 0.7683Epoch 00067: val_loss did not improve\n10496/10496 [==============================] - 379s - loss: 0.4260 - acc: 0.7681 - val_loss: 0.4401 - val_acc: 0.7687\nEpoch 69/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4253 - acc: 0.7683Epoch 00068: val_loss did not improve\n10496/10496 [==============================] - 379s - loss: 0.4257 - acc: 0.7681 - val_loss: 0.4415 - val_acc: 0.7683\nEpoch 70/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4250 - acc: 0.7685Epoch 00069: val_loss improved from 0.43819 to 0.43792, saving model to ./cnn_140_edges_hog_grey_lr_0.001000_nesterov_training_weights_best.hdf5\n10496/10496 [==============================] - 380s - loss: 0.4253 - acc: 0.7683 - val_loss: 0.4379 - val_acc: 0.7687\nEpoch 71/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4246 - acc: 0.7686Epoch 00070: val_loss did not improve\n10496/10496 [==============================] - 382s - loss: 0.4249 - acc: 0.7684 - val_loss: 0.4392 - val_acc: 0.7685\nEpoch 72/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4242 - acc: 0.7690Epoch 00071: val_loss did not improve\n10496/10496 [==============================] - 377s - loss: 0.4245 - acc: 0.7687 - val_loss: 0.4414 - val_acc: 0.7674\nEpoch 73/500\n10464/10496 [============================>.] 
- ETA: 1s - loss: 0.4238 - acc: 0.7688Epoch 00072: val_loss did not improve\n10496/10496 [==============================] - 377s - loss: 0.4241 - acc: 0.7686 - val_loss: 0.4394 - val_acc: 0.7674\nEpoch 74/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4235 - acc: 0.7690Epoch 00073: val_loss did not improve\n10496/10496 [==============================] - 377s - loss: 0.4239 - acc: 0.7687 - val_loss: 0.4432 - val_acc: 0.7656\nEpoch 75/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4231 - acc: 0.7691Epoch 00074: val_loss improved from 0.43792 to 0.43680, saving model to ./cnn_140_edges_hog_grey_lr_0.001000_nesterov_training_weights_best.hdf5\n10496/10496 [==============================] - 378s - loss: 0.4235 - acc: 0.7689 - val_loss: 0.4368 - val_acc: 0.7691\nEpoch 76/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4227 - acc: 0.7688Epoch 00075: val_loss did not improve\n10496/10496 [==============================] - 377s - loss: 0.4231 - acc: 0.7686 - val_loss: 0.4382 - val_acc: 0.7676\nEpoch 77/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4224 - acc: 0.7689Epoch 00076: val_loss did not improve\n10496/10496 [==============================] - 377s - loss: 0.4227 - acc: 0.7686 - val_loss: 0.4375 - val_acc: 0.7689\nEpoch 78/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4222 - acc: 0.7689Epoch 00077: val_loss did not improve\n10496/10496 [==============================] - 377s - loss: 0.4226 - acc: 0.7687 - val_loss: 0.4410 - val_acc: 0.7672\nEpoch 79/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4219 - acc: 0.7689Epoch 00078: val_loss improved from 0.43680 to 0.43653, saving model to ./cnn_140_edges_hog_grey_lr_0.001000_nesterov_training_weights_best.hdf5\n10496/10496 [==============================] - 378s - loss: 0.4222 - acc: 0.7686 - val_loss: 0.4365 - val_acc: 0.7694\nEpoch 80/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4216 - acc: 0.7692Epoch 00079: val_loss did not improve\n10496/10496 [==============================] - 377s - loss: 0.4219 - acc: 0.7689 - val_loss: 0.4365 - val_acc: 0.7685\nEpoch 81/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4212 - acc: 0.7692Epoch 00080: val_loss improved from 0.43653 to 0.43650, saving model to ./cnn_140_edges_hog_grey_lr_0.001000_nesterov_training_weights_best.hdf5\n10496/10496 [==============================] - 378s - loss: 0.4216 - acc: 0.7689 - val_loss: 0.4365 - val_acc: 0.7694\nEpoch 82/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4209 - acc: 0.7694Epoch 00081: val_loss did not improve\n10496/10496 [==============================] - 380s - loss: 0.4212 - acc: 0.7691 - val_loss: 0.4366 - val_acc: 0.7691\nEpoch 83/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4206 - acc: 0.7694Epoch 00082: val_loss did not improve\n10496/10496 [==============================] - 378s - loss: 0.4209 - acc: 0.7692 - val_loss: 0.4374 - val_acc: 0.7685\nEpoch 84/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4203 - acc: 0.7697Epoch 00083: val_loss improved from 0.43650 to 0.43378, saving model to ./cnn_140_edges_hog_grey_lr_0.001000_nesterov_training_weights_best.hdf5\n10496/10496 [==============================] - 379s - loss: 0.4206 - acc: 0.7695 - val_loss: 0.4338 - val_acc: 0.7707\nEpoch 85/500\n10464/10496 [============================>.] 
- ETA: 1s - loss: 0.4200 - acc: 0.7696Epoch 00084: val_loss did not improve\n10496/10496 [==============================] - 378s - loss: 0.4203 - acc: 0.7694 - val_loss: 0.4344 - val_acc: 0.7707\nEpoch 86/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4197 - acc: 0.7696Epoch 00085: val_loss did not improve\n10496/10496 [==============================] - 378s - loss: 0.4201 - acc: 0.7694 - val_loss: 0.4339 - val_acc: 0.7702\nEpoch 87/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4194 - acc: 0.7700Epoch 00086: val_loss did not improve\n10496/10496 [==============================] - 378s - loss: 0.4197 - acc: 0.7698 - val_loss: 0.4368 - val_acc: 0.7676\nEpoch 88/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4191 - acc: 0.7701Epoch 00087: val_loss did not improve\n10496/10496 [==============================] - 377s - loss: 0.4195 - acc: 0.7699 - val_loss: 0.4367 - val_acc: 0.7680\nEpoch 89/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4188 - acc: 0.7702Epoch 00088: val_loss did not improve\n10496/10496 [==============================] - 377s - loss: 0.4192 - acc: 0.7700 - val_loss: 0.4354 - val_acc: 0.7683\nEpoch 90/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4186 - acc: 0.7702Epoch 00089: val_loss did not improve\n10496/10496 [==============================] - 377s - loss: 0.4189 - acc: 0.7700 - val_loss: 0.4362 - val_acc: 0.7676\nEpoch 91/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4181 - acc: 0.7700Epoch 00090: val_loss did not improve\n10496/10496 [==============================] - 377s - loss: 0.4185 - acc: 0.7698 - val_loss: 0.4344 - val_acc: 0.7700\nEpoch 92/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4178 - acc: 0.7703Epoch 00091: val_loss did not improve\n10496/10496 [==============================] - 377s - loss: 0.4182 - acc: 0.7701 - val_loss: 0.4347 - val_acc: 0.7691\nEpoch 93/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4176 - acc: 0.7705Epoch 00092: val_loss improved from 0.43378 to 0.43366, saving model to ./cnn_140_edges_hog_grey_lr_0.001000_nesterov_training_weights_best.hdf5\n10496/10496 [==============================] - 378s - loss: 0.4180 - acc: 0.7702 - val_loss: 0.4337 - val_acc: 0.7696\nEpoch 94/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4173 - acc: 0.7705Epoch 00093: val_loss improved from 0.43366 to 0.43235, saving model to ./cnn_140_edges_hog_grey_lr_0.001000_nesterov_training_weights_best.hdf5\n10496/10496 [==============================] - 378s - loss: 0.4177 - acc: 0.7702 - val_loss: 0.4323 - val_acc: 0.7696\nEpoch 95/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4170 - acc: 0.7708Epoch 00094: val_loss did not improve\n10496/10496 [==============================] - 377s - loss: 0.4174 - acc: 0.7706 - val_loss: 0.4364 - val_acc: 0.7680\nEpoch 96/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4167 - acc: 0.7710Epoch 00095: val_loss did not improve\n10496/10496 [==============================] - 379s - loss: 0.4171 - acc: 0.7708 - val_loss: 0.4327 - val_acc: 0.7689\nEpoch 97/500\n10464/10496 [============================>.] 
- ETA: 1s - loss: 0.4164 - acc: 0.7710Epoch 00096: val_loss improved from 0.43235 to 0.43148, saving model to ./cnn_140_edges_hog_grey_lr_0.001000_nesterov_training_weights_best.hdf5\n10496/10496 [==============================] - 378s - loss: 0.4168 - acc: 0.7708 - val_loss: 0.4315 - val_acc: 0.7698\nEpoch 98/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4162 - acc: 0.7714Epoch 00097: val_loss did not improve\n10496/10496 [==============================] - 377s - loss: 0.4166 - acc: 0.7712 - val_loss: 0.4328 - val_acc: 0.7698\nEpoch 99/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4159 - acc: 0.7711Epoch 00098: val_loss did not improve\n10496/10496 [==============================] - 377s - loss: 0.4162 - acc: 0.7709 - val_loss: 0.4322 - val_acc: 0.7696\nEpoch 100/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4156 - acc: 0.7712Epoch 00099: val_loss improved from 0.43148 to 0.43118, saving model to ./cnn_140_edges_hog_grey_lr_0.001000_nesterov_training_weights_best.hdf5\n10496/10496 [==============================] - 379s - loss: 0.4160 - acc: 0.7710 - val_loss: 0.4312 - val_acc: 0.7711\nEpoch 101/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4154 - acc: 0.7710Epoch 00100: val_loss did not improve\n10496/10496 [==============================] - 377s - loss: 0.4157 - acc: 0.7708 - val_loss: 0.4332 - val_acc: 0.7702\nEpoch 102/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4151 - acc: 0.7715Epoch 00101: val_loss improved from 0.43118 to 0.42933, saving model to ./cnn_140_edges_hog_grey_lr_0.001000_nesterov_training_weights_best.hdf5\n10496/10496 [==============================] - 378s - loss: 0.4155 - acc: 0.7713 - val_loss: 0.4293 - val_acc: 0.7720\nEpoch 103/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4149 - acc: 0.7718Epoch 00102: val_loss did not improve\n10496/10496 [==============================] - 377s - loss: 0.4152 - acc: 0.7716 - val_loss: 0.4316 - val_acc: 0.7709\nEpoch 104/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4146 - acc: 0.7719Epoch 00103: val_loss did not improve\n10496/10496 [==============================] - 377s - loss: 0.4149 - acc: 0.7717 - val_loss: 0.4329 - val_acc: 0.7705\nEpoch 105/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4143 - acc: 0.7719Epoch 00104: val_loss did not improve\n10496/10496 [==============================] - 377s - loss: 0.4146 - acc: 0.7717 - val_loss: 0.4294 - val_acc: 0.7711\nEpoch 106/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4142 - acc: 0.7720Epoch 00105: val_loss did not improve\n10496/10496 [==============================] - 377s - loss: 0.4145 - acc: 0.7718 - val_loss: 0.4307 - val_acc: 0.7711\nEpoch 107/500\n10464/10496 [============================>.] - ETA: 1s - loss: 0.4137 - acc: 0.7720Epoch 00106: val_loss did not improve\n10496/10496 [==============================] - 377s - loss: 0.4141 - acc: 0.7718 - val_loss: 0.4330 - val_acc: 0.7698\nEpoch 108/500\n 9216/10496 [=========================>....] - ETA: 43s - loss: 0.4120 - acc: 0.7731" ], [ "print('Evaluating model...')\nscore = model.evaluate_generator(test_data_gen, val_samples=test_data_gen.n_samples)\nprint('done.')\n\nprint('Test score:', score[0])\nprint('Test accuracy:', score[1])", "Evaluating model...\ndone.\nTest score: 0.423118899406\nTest accuracy: 0.766065140845\n" ] ] ]
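The `DataGenerator` class instantiated above is defined elsewhere in the project and is not shown in this excerpt. Below is a minimal sketch of what it plausibly looks like — only the constructor signature (`dataset_file`, `batch_size`, `preprocessing`) and the `n_batches`/`n_samples` attributes read by `fit_generator` are taken from the notebook; the HDF5 layout (`images`/`labels` datasets), one-hot labels for the two-class softmax, and the exact `edges_hog_gray` recipe (Canny edge map + HOG visualisation + grayscale stacked as three channels) are all assumptions. Keras 1.x `fit_generator`/`evaluate_generator` expect an object that yields `(X, y)` batches indefinitely, which is what the sketch provides.

```python
# Hedged sketch of the custom DataGenerator assumed by the notebook --
# NOT the original implementation.
import h5py
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import canny, hog


class DataGenerator(object):
    def __init__(self, dataset_file, batch_size=32, preprocessing=None):
        f = h5py.File(dataset_file, 'r')
        self.images = f['images']        # hypothetical dataset name
        self.labels = f['labels']        # hypothetical; assumed one-hot (n, 2)
        self.batch_size = batch_size
        self.preprocessing = preprocessing
        self.n_samples = self.labels.shape[0]
        self.n_batches = self.n_samples // batch_size
        self._cursor = 0

    def _edges_hog_gray(self, rgb):
        # Stack Canny edges, a HOG rendering, and the grayscale image
        # into a 3-channel input (assumed meaning of 'edges_hog_gray').
        gray = rgb2gray(rgb)
        edges = canny(gray).astype(np.float32)
        # scikit-image <= 0.13 spells this keyword 'visualise'
        _, hog_image = hog(gray, visualise=True)
        return np.dstack([edges, hog_image, gray])

    def __iter__(self):
        return self

    def next(self):  # Keras 1.x era ran on Python 2; __next__ aliased below
        start = self._cursor * self.batch_size
        stop = start + self.batch_size
        self._cursor = (self._cursor + 1) % self.n_batches  # loop forever
        batch = self.images[start:stop]
        if self.preprocessing == 'edges_hog_gray':
            batch = np.array([self._edges_hog_gray(img) for img in batch])
        return batch, self.labels[start:stop]

    __next__ = next
```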
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code" ] ]