# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Convolutional Neural Networks: Step by Step # # Welcome to Course 4's first assignment! In this assignment, you will implement convolutional (CONV) and pooling (POOL) layers in numpy, including both forward propagation and (optionally) backward propagation. # # **Notation**: # - Superscript $[l]$ denotes an object of the $l^{th}$ layer. # - Example: $a^{[4]}$ is the $4^{th}$ layer activation. $W^{[5]}$ and $b^{[5]}$ are the $5^{th}$ layer parameters. # # # - Superscript $(i)$ denotes an object from the $i^{th}$ example. # - Example: $x^{(i)}$ is the $i^{th}$ training example input. # # # - Lowerscript $i$ denotes the $i^{th}$ entry of a vector. # - Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the activations in layer $l$, assuming this is a fully connected (FC) layer. # # # - $n_H$, $n_W$ and $n_C$ denote respectively the height, width and number of channels of a given layer. If you want to reference a specific layer $l$, you can also write $n_H^{[l]}$, $n_W^{[l]}$, $n_C^{[l]}$. # - $n_{H_{prev}}$, $n_{W_{prev}}$ and $n_{C_{prev}}$ denote respectively the height, width and number of channels of the previous layer. If referencing a specific layer $l$, this could also be denoted $n_H^{[l-1]}$, $n_W^{[l-1]}$, $n_C^{[l-1]}$. # # We assume that you are already familiar with `numpy` and/or have completed the previous courses of the specialization. Let's get started! # ## 1 - Packages # # Let's first import all the packages that you will need during this assignment. # - [numpy](www.numpy.org) is the fundamental package for scientific computing with Python. # - [matplotlib](http://matplotlib.org) is a library to plot graphs in Python. # - np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work. # + import numpy as np import h5py import matplotlib.pyplot as plt # %matplotlib inline plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' # %load_ext autoreload # %autoreload 2 np.random.seed(1) # - # ## 2 - Outline of the Assignment # # You will be implementing the building blocks of a convolutional neural network! Each function you will implement will have detailed instructions that will walk you through the steps needed: # # - Convolution functions, including: # - Zero Padding # - Convolve window # - Convolution forward # - Convolution backward (optional) # - Pooling functions, including: # - Pooling forward # - Create mask # - Distribute value # - Pooling backward (optional) # # This notebook will ask you to implement these functions from scratch in `numpy`. In the next notebook, you will use the TensorFlow equivalents of these functions to build the following model: # # <img src="images/model.png" style="width:800px;height:300px;"> # # **Note** that for every forward function, there is its corresponding backward equivalent. Hence, at every step of your forward module you will store some parameters in a cache. These parameters are used to compute gradients during backpropagation. # ## 3 - Convolutional Neural Networks # # Although programming frameworks make convolutions easy to use, they remain one of the hardest concepts to understand in Deep Learning. 
A convolution layer transforms an input volume into an output volume of different size, as shown below. # # <img src="images/conv_nn.png" style="width:350px;height:200px;"> # # In this part, you will build every step of the convolution layer. You will first implement two helper functions: one for zero padding and the other for computing the convolution function itself. # ### 3.1 - Zero-Padding # # Zero-padding adds zeros around the border of an image: # # <img src="images/PAD.png" style="width:600px;height:400px;"> # <caption><center> <u> <font color='purple'> **Figure 1** </u><font color='purple'> : **Zero-Padding**<br> Image (3 channels, RGB) with a padding of 2. </center></caption> # # The main benefits of padding are the following: # # - It allows you to use a CONV layer without necessarily shrinking the height and width of the volumes. This is important for building deeper networks, since otherwise the height/width would shrink as you go to deeper layers. An important special case is the "same" convolution, in which the height/width is exactly preserved after one layer. # # - It helps us keep more of the information at the border of an image. Without padding, very few values at the next layer would be affected by pixels as the edges of an image. # # **Exercise**: Implement the following function, which pads all the images of a batch of examples X with zeros. [Use np.pad](https://docs.scipy.org/doc/numpy/reference/generated/numpy.pad.html). Note if you want to pad the array "a" of shape $(5,5,5,5,5)$ with `pad = 1` for the 2nd dimension, `pad = 3` for the 4th dimension and `pad = 0` for the rest, you would do: # ```python # a = np.pad(a, ((0,0), (1,1), (0,0), (3,3), (0,0)), 'constant', constant_values = (..,..)) # ``` # + # GRADED FUNCTION: zero_pad def zero_pad(X, pad): """ Pad with zeros all images of the dataset X. The padding is applied to the height and width of an image, as illustrated in Figure 1. Argument: X -- python numpy array of shape (m, n_H, n_W, n_C) representing a batch of m images pad -- integer, amount of padding around each image on vertical and horizontal dimensions Returns: X_pad -- padded image of shape (m, n_H + 2*pad, n_W + 2*pad, n_C) """ ### START CODE HERE ### (≈ 1 line) X_pad = np.pad(X, ((0, 0), (pad, pad), (pad, pad), (0, 0)), 'constant', constant_values=0) ### END CODE HERE ### return X_pad # + np.random.seed(1) x = np.random.randn(4, 3, 3, 2) x_pad = zero_pad(x, 2) print ("x.shape =", x.shape) print ("x_pad.shape =", x_pad.shape) print ("x[1, 1] =", x[1, 1]) print ("x_pad[1, 1] =", x_pad[1, 1]) fig, axarr = plt.subplots(1, 2) axarr[0].set_title('x') axarr[0].imshow(x[0,:,:,0]) axarr[1].set_title('x_pad') axarr[1].imshow(x_pad[0,:,:,0]) # - # **Expected Output**: # # <table> # <tr> # <td> # **x.shape**: # </td> # <td> # (4, 3, 3, 2) # </td> # </tr> # <tr> # <td> # **x_pad.shape**: # </td> # <td> # (4, 7, 7, 2) # </td> # </tr> # <tr> # <td> # **x[1,1]**: # </td> # <td> # [[ 0.90085595 -0.68372786] # [-0.12289023 -0.93576943] # [-0.26788808 0.53035547]] # </td> # </tr> # <tr> # <td> # **x_pad[1,1]**: # </td> # <td> # [[ 0. 0.] # [ 0. 0.] # [ 0. 0.] # [ 0. 0.] # [ 0. 0.] # [ 0. 0.] # [ 0. 0.]] # </td> # </tr> # # </table> # ### 3.2 - Single step of convolution # # In this part, implement a single step of convolution, in which you apply the filter to a single position of the input. 
This will be used to build a convolutional unit, which: # # - Takes an input volume # - Applies a filter at every position of the input # - Outputs another volume (usually of different size) # # <img src="images/Convolution_schematic.gif" style="width:500px;height:300px;"> # <caption><center> <u> <font color='purple'> **Figure 2** </u><font color='purple'> : **Convolution operation**<br> with a filter of 2x2 and a stride of 1 (stride = amount you move the window each time you slide) </center></caption> # # In a computer vision application, each value in the matrix on the left corresponds to a single pixel value, and we convolve a 3x3 filter with the image by multiplying its values element-wise with the original matrix, then summing them up. In this first step of the exercise, you will implement a single step of convolution, corresponding to applying a filter to just one of the positions to get a single real-valued output. # # Later in this notebook, you'll apply this function to multiple positions of the input to implement the full convolutional operation. # # **Exercise**: Implement conv_single_step(). [Hint](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.sum.html). # # + # GRADED FUNCTION: conv_single_step def conv_single_step(a_slice_prev, W, b): """ Apply one filter defined by parameters W on a single slice (a_slice_prev) of the output activation of the previous layer. Arguments: a_slice_prev -- slice of input data of shape (f, f, n_C_prev) W -- Weight parameters contained in a window - matrix of shape (f, f, n_C_prev) b -- Bias parameters contained in a window - matrix of shape (1, 1, 1) Returns: Z -- a scalar value, result of convolving the sliding window (W, b) on a slice x of the input data """ ### START CODE HERE ### (≈ 2 lines of code) # Element-wise product between a_slice and W. Add bias. s = np.multiply(a_slice_prev, W) + b # Sum over all entries of the volume s Z = np.sum(s) ### END CODE HERE ### return Z # + np.random.seed(1) a_slice_prev = np.random.randn(4, 4, 3) W = np.random.randn(4, 4, 3) b = np.random.randn(1, 1, 1) Z = conv_single_step(a_slice_prev, W, b) print("Z =", Z) # - # **Expected Output**: # <table> # <tr> # <td> # **Z** # </td> # <td> # -23.1602122025 # </td> # </tr> # # </table> # ### 3.3 - Convolutional Neural Networks - Forward pass # # In the forward pass, you will take many filters and convolve them on the input. Each 'convolution' gives you a 2D matrix output. You will then stack these outputs to get a 3D volume: # # <center> # <video width="620" height="440" src="images/conv_kiank.mp4" type="video/mp4" controls> # </video> # </center> # # **Exercise**: Implement the function below to convolve the filters W on an input activation A_prev. This function takes as input A_prev, the activations output by the previous layer (for a batch of m inputs), F filters/weights denoted by W, and a bias vector denoted by b, where each filter has its own (single) bias. Finally you also have access to the hyperparameters dictionary which contains the stride and the padding. # # **Hint**: # 1. To select a 2x2 slice at the upper left corner of a matrix "a_prev" (shape (5,5,3)), you would do: # ```python # a_slice_prev = a_prev[0:2,0:2,:] # ``` # This will be useful when you will define `a_slice_prev` below, using the `start/end` indexes you will define. # 2. To define a_slice you will need to first define its corners `vert_start`, `vert_end`, `horiz_start` and `horiz_end`. 
This figure may be helpful for you to find how each of the corner can be defined using h, w, f and s in the code below. # # <img src="images/vert_horiz_kiank.png" style="width:400px;height:300px;"> # <caption><center> <u> <font color='purple'> **Figure 3** </u><font color='purple'> : **Definition of a slice using vertical and horizontal start/end (with a 2x2 filter)** <br> This figure shows only a single channel. </center></caption> # # # **Reminder**: # The formulas relating the output shape of the convolution to the input shape is: # $$ n_H = \lfloor \frac{n_{H_{prev}} - f + 2 \times pad}{stride} \rfloor +1 $$ # $$ n_W = \lfloor \frac{n_{W_{prev}} - f + 2 \times pad}{stride} \rfloor +1 $$ # $$ n_C = \text{number of filters used in the convolution}$$ # # For this exercise, we won't worry about vectorization, and will just implement everything with for-loops. # + # GRADED FUNCTION: conv_forward def conv_forward(A_prev, W, b, hparameters): """ Implements the forward propagation for a convolution function Arguments: A_prev -- output activations of the previous layer, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev) W -- Weights, numpy array of shape (f, f, n_C_prev, n_C) b -- Biases, numpy array of shape (1, 1, 1, n_C) hparameters -- python dictionary containing "stride" and "pad" Returns: Z -- conv output, numpy array of shape (m, n_H, n_W, n_C) cache -- cache of values needed for the conv_backward() function """ ### START CODE HERE ### # Retrieve dimensions from A_prev's shape (≈1 line) (m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape # Retrieve dimensions from W's shape (≈1 line) (f, f, n_C_prev, n_C) = W.shape # Retrieve information from "hparameters" (≈2 lines) stride = hparameters['stride'] pad = hparameters['pad'] # Compute the dimensions of the CONV output volume using the formula given above. Hint: use int() to floor. (≈2 lines) n_H = int((n_H_prev - f + 2 * pad) / stride) + 1 n_W = int((n_W_prev - f + 2 * pad) / stride) + 1 # Initialize the output volume Z with zeros. (≈1 line) Z = np.zeros((m, n_H, n_W, n_C)) # Create A_prev_pad by padding A_prev A_prev_pad = zero_pad(A_prev, pad) for i in range(m): # loop over the batch of training examples a_prev_pad = A_prev_pad[i] # Select ith training example's padded activation for h in range(n_H): # loop over vertical axis of the output volume for w in range(n_W): # loop over horizontal axis of the output volume for c in range(n_C): # loop over channels (= #filters) of the output volume # Find the corners of the current "slice" (≈4 lines) vert_start = h * stride vert_end = vert_start + f horiz_start = w * stride horiz_end = horiz_start + f # Use the corners to define the (3D) slice of a_prev_pad (See Hint above the cell). (≈1 line) a_slice_prev = a_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] # Convolve the (3D) slice with the correct filter W and bias b, to get back one output neuron. 
(≈1 line) Z[i, h, w, c] = conv_single_step(a_slice_prev, W[...,c], b[...,c]) ### END CODE HERE ### # Making sure your output shape is correct assert(Z.shape == (m, n_H, n_W, n_C)) # Save information in "cache" for the backprop cache = (A_prev, W, b, hparameters) return Z, cache # + np.random.seed(1) A_prev = np.random.randn(10, 4, 4, 3) W = np.random.randn(2, 2, 3, 8) b = np.random.randn(1, 1, 1, 8) hparameters = {"pad" : 2, "stride": 1} Z, cache_conv = conv_forward(A_prev, W, b, hparameters) print("Z's mean =", np.mean(Z)) print("cache_conv[0][1][2][3] =", cache_conv[0][1][2][3]) # - # **Expected Output**: # # <table> # <tr> # <td> # **Z's mean** # </td> # <td> # 0.155859324889 # </td> # </tr> # <tr> # <td> # **cache_conv[0][1][2][3]** # </td> # <td> # [-0.20075807 0.18656139 0.41005165] # </td> # </tr> # # </table> # # Finally, CONV layer should also contain an activation, in which case we would add the following line of code: # # ```python # # Convolve the window to get back one output neuron # Z[i, h, w, c] = ... # # Apply activation # A[i, h, w, c] = activation(Z[i, h, w, c]) # ``` # # You don't need to do it here. # # ## 4 - Pooling layer # # The pooling (POOL) layer reduces the height and width of the input. It helps reduce computation, as well as helps make feature detectors more invariant to its position in the input. The two types of pooling layers are: # # - Max-pooling layer: slides an ($f, f$) window over the input and stores the max value of the window in the output. # # - Average-pooling layer: slides an ($f, f$) window over the input and stores the average value of the window in the output. # # <table> # <td> # <img src="images/max_pool1.png" style="width:500px;height:300px;"> # <td> # # <td> # <img src="images/a_pool.png" style="width:500px;height:300px;"> # <td> # </table> # # These pooling layers have no parameters for backpropagation to train. However, they have hyperparameters such as the window size $f$. This specifies the height and width of the fxf window you would compute a max or average over. # # ### 4.1 - Forward Pooling # Now, you are going to implement MAX-POOL and AVG-POOL, in the same function. # # **Exercise**: Implement the forward pass of the pooling layer. Follow the hints in the comments below. 
# # **Reminder**: # As there's no padding, the formulas binding the output shape of the pooling to the input shape is: # $$ n_H = \lfloor \frac{n_{H_{prev}} - f}{stride} \rfloor +1 $$ # $$ n_W = \lfloor \frac{n_{W_{prev}} - f}{stride} \rfloor +1 $$ # $$ n_C = n_{C_{prev}}$$ # + # GRADED FUNCTION: pool_forward def pool_forward(A_prev, hparameters, mode = "max"): """ Implements the forward pass of the pooling layer Arguments: A_prev -- Input data, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev) hparameters -- python dictionary containing "f" and "stride" mode -- the pooling mode you would like to use, defined as a string ("max" or "average") Returns: A -- output of the pool layer, a numpy array of shape (m, n_H, n_W, n_C) cache -- cache used in the backward pass of the pooling layer, contains the input and hparameters """ # Retrieve dimensions from the input shape (m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape # Retrieve hyperparameters from "hparameters" f = hparameters["f"] stride = hparameters["stride"] # Define the dimensions of the output n_H = int(1 + (n_H_prev - f) / stride) n_W = int(1 + (n_W_prev - f) / stride) n_C = n_C_prev # Initialize output matrix A A = np.zeros((m, n_H, n_W, n_C)) ### START CODE HERE ### for i in range(m): # loop over the training examples for h in range(n_H): # loop on the vertical axis of the output volume for w in range(n_W): # loop on the horizontal axis of the output volume for c in range (n_C): # loop over the channels of the output volume # Find the corners of the current "slice" (≈4 lines) vert_start = h * stride vert_end = vert_start + f horiz_start = w * stride horiz_end = horiz_start + f # Use the corners to define the current slice on the ith training example of A_prev, channel c. (≈1 line) a_prev_slice = A_prev[i, vert_start:vert_end, horiz_start:horiz_end, c] # Compute the pooling operation on the slice. Use an if statment to differentiate the modes. Use np.max/np.mean. if mode == "max": A[i, h, w, c] = np.max(a_prev_slice) elif mode == "average": A[i, h, w, c] = np.mean(a_prev_slice) ### END CODE HERE ### # Store the input and hparameters in "cache" for pool_backward() cache = (A_prev, hparameters) # Making sure your output shape is correct assert(A.shape == (m, n_H, n_W, n_C)) return A, cache # + np.random.seed(1) A_prev = np.random.randn(2, 4, 4, 3) hparameters = {"stride" : 1, "f": 4} A, cache = pool_forward(A_prev, hparameters) print("mode = max") print("A =", A) print() A, cache = pool_forward(A_prev, hparameters, mode = "average") print("mode = average") print("A =", A) # - # **Expected Output:** # <table> # # <tr> # <td> # A = # </td> # <td> # [[[[ 1.74481176 1.6924546 2.10025514]]] <br/> # # # [[[ 1.19891788 1.51981682 2.18557541]]]] # # </td> # </tr> # <tr> # <td> # A = # </td> # <td> # [[[[-0.09498456 0.11180064 -0.14263511]]] <br/> # # # [[[-0.09525108 0.28325018 0.33035185]]]] # # </td> # </tr> # # </table> # # Congratulations! You have now implemented the forward passes of all the layers of a convolutional network. # # The remainer of this notebook is optional, and will not be graded. # # ## 5 - Backpropagation in convolutional neural networks (OPTIONAL / UNGRADED) # # In modern deep learning frameworks, you only have to implement the forward pass, and the framework takes care of the backward pass, so most deep learning engineers don't need to bother with the details of the backward pass. The backward pass for convolutional networks is complicated. 
If you wish however, you can work through this optional portion of the notebook to get a sense of what backprop in a convolutional network looks like. # # When in an earlier course you implemented a simple (fully connected) neural network, you used backpropagation to compute the derivatives with respect to the cost to update the parameters. Similarly, in convolutional neural networks you can to calculate the derivatives with respect to the cost in order to update the parameters. The backprop equations are not trivial and we did not derive them in lecture, but we briefly presented them below. # # ### 5.1 - Convolutional layer backward pass # # Let's start by implementing the backward pass for a CONV layer. # # #### 5.1.1 - Computing dA: # This is the formula for computing $dA$ with respect to the cost for a certain filter $W_c$ and a given training example: # # $$ dA += \sum _{h=0} ^{n_H} \sum_{w=0} ^{n_W} W_c \times dZ_{hw} \tag{1}$$ # # Where $W_c$ is a filter and $dZ_{hw}$ is a scalar corresponding to the gradient of the cost with respect to the output of the conv layer Z at the hth row and wth column (corresponding to the dot product taken at the ith stride left and jth stride down). Note that at each time, we multiply the the same filter $W_c$ by a different dZ when updating dA. We do so mainly because when computing the forward propagation, each filter is dotted and summed by a different a_slice. Therefore when computing the backprop for dA, we are just adding the gradients of all the a_slices. # # In code, inside the appropriate for-loops, this formula translates into: # ```python # da_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += W[:,:,:,c] * dZ[i, h, w, c] # ``` # # #### 5.1.2 - Computing dW: # This is the formula for computing $dW_c$ ($dW_c$ is the derivative of one filter) with respect to the loss: # # $$ dW_c += \sum _{h=0} ^{n_H} \sum_{w=0} ^ {n_W} a_{slice} \times dZ_{hw} \tag{2}$$ # # Where $a_{slice}$ corresponds to the slice which was used to generate the acitivation $Z_{ij}$. Hence, this ends up giving us the gradient for $W$ with respect to that slice. Since it is the same $W$, we will just add up all such gradients to get $dW$. # # In code, inside the appropriate for-loops, this formula translates into: # ```python # dW[:,:,:,c] += a_slice * dZ[i, h, w, c] # ``` # # #### 5.1.3 - Computing db: # # This is the formula for computing $db$ with respect to the cost for a certain filter $W_c$: # # $$ db = \sum_h \sum_w dZ_{hw} \tag{3}$$ # # As you have previously seen in basic neural networks, db is computed by summing $dZ$. In this case, you are just summing over all the gradients of the conv output (Z) with respect to the cost. # # In code, inside the appropriate for-loops, this formula translates into: # ```python # db[:,:,:,c] += dZ[i, h, w, c] # ``` # # **Exercise**: Implement the `conv_backward` function below. You should sum over all the training examples, filters, heights, and widths. You should then compute the derivatives using formulas 1, 2 and 3 above. 
def conv_backward(dZ, cache): """ Implement the backward propagation for a convolution function Arguments: dZ -- gradient of the cost with respect to the output of the conv layer (Z), numpy array of shape (m, n_H, n_W, n_C) cache -- cache of values needed for the conv_backward(), output of conv_forward() Returns: dA_prev -- gradient of the cost with respect to the input of the conv layer (A_prev), numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev) dW -- gradient of the cost with respect to the weights of the conv layer (W) numpy array of shape (f, f, n_C_prev, n_C) db -- gradient of the cost with respect to the biases of the conv layer (b) numpy array of shape (1, 1, 1, n_C) """ ### START CODE HERE ### # Retrieve information from "cache" (A_prev, W, b, hparameters) = cache # Retrieve dimensions from A_prev's shape (m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape # Retrieve dimensions from W's shape (f, f, n_C_prev, n_C) = W.shape # Retrieve information from "hparameters" stride = hparameters["stride"] pad = hparameters["pad"] # Retrieve dimensions from dZ's shape (m, n_H, n_W, n_C) = dZ.shape # Initialize dA_prev, dW, db with the correct shapes dA_prev = np.zeros((m, n_H_prev, n_W_prev, n_C_prev)) dW = np.zeros((f, f, n_C_prev, n_C)) db = np.zeros((1, 1, 1, n_C)) # Pad A_prev and dA_prev A_prev_pad = zero_pad(A_prev, pad) dA_prev_pad = zero_pad(dA_prev, pad) for i in range(m): # loop over the training examples # select ith training example from A_prev_pad and dA_prev_pad a_prev_pad = A_prev_pad[i] da_prev_pad = dA_prev_pad[i] for h in range(n_H): # loop over vertical axis of the output volume for w in range(n_W): # loop over horizontal axis of the output volume for c in range(n_C): # loop over the channels of the output volume # Find the corners of the current "slice" vert_start = h vert_end = vert_start + f horiz_start = w horiz_end = horiz_start + f # Use the corners to define the slice from a_prev_pad a_slice = a_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] # Update gradients for the window and the filter's parameters using the code formulas given above da_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += W[:,:,:,c] * dZ[i, h, w, c] dW[:,:,:,c] += a_slice * dZ[i, h, w, c] db[:,:,:,c] += dZ[i, h, w, c] # Set the ith training example's dA_prev to the unpaded da_prev_pad (Hint: use X[pad:-pad, pad:-pad, :]) dA_prev[i, :, :, :] = da_prev_pad[pad:-pad, pad:-pad, :] ### END CODE HERE ### # Making sure your output shape is correct assert(dA_prev.shape == (m, n_H_prev, n_W_prev, n_C_prev)) return dA_prev, dW, db np.random.seed(1) dA, dW, db = conv_backward(Z, cache_conv) print("dA_mean =", np.mean(dA)) print("dW_mean =", np.mean(dW)) print("db_mean =", np.mean(db)) # print(dA.shape) # ** Expected Output: ** # <table> # <tr> # <td> # **dA_mean** # </td> # <td> # 9.60899067587 # </td> # </tr> # <tr> # <td> # **dW_mean** # </td> # <td> # 10.5817412755 # </td> # </tr> # <tr> # <td> # **db_mean** # </td> # <td> # 76.3710691956 # </td> # </tr> # # </table> # # ## 5.2 Pooling layer - backward pass # # Next, let's implement the backward pass for the pooling layer, starting with the MAX-POOL layer. Even though a pooling layer has no parameters for backprop to update, you still need to backpropagation the gradient through the pooling layer in order to compute gradients for layers that came before the pooling layer. 
# # ### 5.2.1 Max pooling - backward pass # # Before jumping into the backpropagation of the pooling layer, you are going to build a helper function called `create_mask_from_window()` which does the following: # # $$ X = \begin{bmatrix} # 1 && 3 \\ # 4 && 2 # \end{bmatrix} \quad \rightarrow \quad M =\begin{bmatrix} # 0 && 0 \\ # 1 && 0 # \end{bmatrix}\tag{4}$$ # # As you can see, this function creates a "mask" matrix which keeps track of where the maximum of the matrix is. True (1) indicates the position of the maximum in X, the other entries are False (0). You'll see later that the backward pass for average pooling will be similar to this but using a different mask. # # **Exercise**: Implement `create_mask_from_window()`. This function will be helpful for pooling backward. # Hints: # - [np.max()]() may be helpful. It computes the maximum of an array. # - If you have a matrix X and a scalar x: `A = (X == x)` will return a matrix A of the same size as X such that: # ``` # A[i,j] = True if X[i,j] = x # A[i,j] = False if X[i,j] != x # ``` # - Here, you don't need to consider cases where there are several maxima in a matrix. def create_mask_from_window(x): """ Creates a mask from an input matrix x, to identify the max entry of x. Arguments: x -- Array of shape (f, f) Returns: mask -- Array of the same shape as window, contains a True at the position corresponding to the max entry of x. """ ### START CODE HERE ### (≈1 line) mask = x == np.max(x) ### END CODE HERE ### return mask np.random.seed(1) x = np.random.randn(2,3) mask = create_mask_from_window(x) print('x = ', x) print("mask = ", mask) # **Expected Output:** # # <table> # <tr> # <td> # # **x =** # </td> # # <td> # # [[ 1.62434536 -0.61175641 -0.52817175] <br> # [-1.07296862 0.86540763 -2.3015387 ]] # # </td> # </tr> # # <tr> # <td> # **mask =** # </td> # <td> # [[ True False False] <br> # [False False False]] # </td> # </tr> # # # </table> # Why do we keep track of the position of the max? It's because this is the input value that ultimately influenced the output, and therefore the cost. Backprop is computing gradients with respect to the cost, so anything that influences the ultimate cost should have a non-zero gradient. So, backprop will "propagate" the gradient back to this particular input value that had influenced the cost. # ### 5.2.2 - Average pooling - backward pass # # In max pooling, for each input window, all the "influence" on the output came from a single input value--the max. In average pooling, every element of the input window has equal influence on the output. So to implement backprop, you will now implement a helper function that reflects this. # # For example if we did average pooling in the forward pass using a 2x2 filter, then the mask you'll use for the backward pass will look like: # $$ dZ = 1 \quad \rightarrow \quad dZ =\begin{bmatrix} # 1/4 && 1/4 \\ # 1/4 && 1/4 # \end{bmatrix}\tag{5}$$ # # This implies that each position in the $dZ$ matrix contributes equally to output because in the forward pass, we took an average. # # **Exercise**: Implement the function below to equally distribute a value dz through a matrix of dimension shape. 
[Hint](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.ones.html) def distribute_value(dz, shape): """ Distributes the input value in the matrix of dimension shape Arguments: dz -- input scalar shape -- the shape (n_H, n_W) of the output matrix for which we want to distribute the value of dz Returns: a -- Array of size (n_H, n_W) for which we distributed the value of dz """ ### START CODE HERE ### # Retrieve dimensions from shape (≈1 line) (n_H, n_W) = shape # Compute the value to distribute on the matrix (≈1 line) average = dz / (n_H * n_W) # Create a matrix where every entry is the "average" value (≈1 line) a = np.ones(shape) * average ### END CODE HERE ### return a a = distribute_value(2, (2,2)) print('distributed value =', a) # **Expected Output**: # # <table> # <tr> # <td> # distributed_value = # </td> # <td> # [[ 0.5 0.5] # <br\> # [ 0.5 0.5]] # </td> # </tr> # </table> # ### 5.2.3 Putting it together: Pooling backward # # You now have everything you need to compute backward propagation on a pooling layer. # # **Exercise**: Implement the `pool_backward` function in both modes (`"max"` and `"average"`). You will once again use 4 for-loops (iterating over training examples, height, width, and channels). You should use an `if/elif` statement to see if the mode is equal to `'max'` or `'average'`. If it is equal to 'average' you should use the `distribute_value()` function you implemented above to create a matrix of the same shape as `a_slice`. Otherwise, the mode is equal to '`max`', and you will create a mask with `create_mask_from_window()` and multiply it by the corresponding value of dZ. # + def pool_backward(dA, cache, mode = "max"): """ Implements the backward pass of the pooling layer Arguments: dA -- gradient of cost with respect to the output of the pooling layer, same shape as A cache -- cache output from the forward pass of the pooling layer, contains the layer's input and hparameters mode -- the pooling mode you would like to use, defined as a string ("max" or "average") Returns: dA_prev -- gradient of cost with respect to the input of the pooling layer, same shape as A_prev """ ### START CODE HERE ### # Retrieve information from cache (≈1 line) (A_prev, hparameters) = cache # Retrieve hyperparameters from "hparameters" (≈2 lines) stride = hparameters["stride"] f = hparameters["f"] # Retrieve dimensions from A_prev's shape and dA's shape (≈2 lines) m, n_H_prev, n_W_prev, n_C_prev = A_prev.shape m, n_H, n_W, n_C = dA.shape # Initialize dA_prev with zeros (≈1 line) dA_prev = np.zeros(A_prev.shape) for i in range(m): # loop over the training examples # select training example from A_prev (≈1 line) a_prev = A_prev[i] for h in range(n_H): # loop on the vertical axis for w in range(n_W): # loop on the horizontal axis for c in range(n_C): # loop over the channels (depth) # Find the corners of the current "slice" (≈4 lines) vert_start = h vert_end = vert_start + f horiz_start = w horiz_end = horiz_start + f # Compute the backward propagation in both modes. 
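# In "max" mode only the entry that achieved the maximum in the forward pass receives
# gradient, which is exactly what the mask encodes; in "average" mode the incoming gradient
# dA[i, h, w, c] is spread evenly over the f x f window, as described in 5.2.1 and 5.2.2.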
if mode == "max": # Use the corners and "c" to define the current slice from a_prev (≈1 line) a_prev_slice = a_prev[vert_start:vert_end, horiz_start:horiz_end, c] # Create the mask from a_prev_slice (≈1 line) mask = create_mask_from_window(a_prev_slice) # Set dA_prev to be dA_prev + (the mask multiplied by the correct entry of dA) (≈1 line) dA_prev[i, vert_start:vert_end, horiz_start:horiz_end, c] += np.multiply(mask, dA[i, h, w, c]) elif mode == "average": # Get the value a from dA (≈1 line) da = dA[i, h, w, c] # Define the shape of the filter as fxf (≈1 line) shape = (f, f) # Distribute it to get the correct slice of dA_prev. i.e. Add the distributed value of da. (≈1 line) dA_prev[i, vert_start:vert_end, horiz_start:horiz_end, c] += distribute_value(da, shape) ### END CODE ### # Making sure your output shape is correct assert(dA_prev.shape == A_prev.shape) return dA_prev # + np.random.seed(1) A_prev = np.random.randn(5, 5, 3, 2) hparameters = {"stride" : 1, "f": 2} A, cache = pool_forward(A_prev, hparameters) dA = np.random.randn(5, 4, 2, 2) dA_prev = pool_backward(dA, cache, mode = "max") print("mode = max") print('mean of dA = ', np.mean(dA)) print('dA_prev[1,1] = ', dA_prev[1,1]) print() dA_prev = pool_backward(dA, cache, mode = "average") print("mode = average") print('mean of dA = ', np.mean(dA)) print('dA_prev[1,1] = ', dA_prev[1,1]) # - # **Expected Output**: # # mode = max: # <table> # <tr> # <td> # # **mean of dA =** # </td> # # <td> # # 0.145713902729 # # </td> # </tr> # # <tr> # <td> # **dA_prev[1,1] =** # </td> # <td> # [[ 0. 0. ] <br> # [ 5.05844394 -1.68282702] <br> # [ 0. 0. ]] # </td> # </tr> # </table> # # mode = average # <table> # <tr> # <td> # # **mean of dA =** # </td> # # <td> # # 0.145713902729 # # </td> # </tr> # # <tr> # <td> # **dA_prev[1,1] =** # </td> # <td> # [[ 0.08485462 0.2787552 ] <br> # [ 1.26461098 -0.25749373] <br> # [ 1.17975636 -0.53624893]] # </td> # </tr> # </table> # ### Congratulations ! # # Congratulation on completing this assignment. You now understand how convolutional neural networks work. You have implemented all the building blocks of a neural network. In the next assignment you will implement a ConvNet using TensorFlow.
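# As a final, ungraded illustration of how these building blocks fit together, the sketch below chains them into a single CONV -> ReLU -> POOL forward pass. It assumes `conv_forward` and `pool_forward` from this notebook are in scope; the inline `np.maximum(0, Z)` ReLU is used purely for illustration and is not one of the graded functions.

# +
# Minimal forward pass through one CONV layer (with ReLU) and one MAX-POOL layer,
# reusing the functions implemented above. Output shapes follow the formulas in Section 3.3.
np.random.seed(1)
A_prev = np.random.randn(2, 8, 8, 3)           # batch of 2 inputs of shape 8x8x3
W = np.random.randn(3, 3, 3, 4)                # 4 filters of size 3x3 over 3 input channels
b = np.random.randn(1, 1, 1, 4)                # one bias per filter

Z, _ = conv_forward(A_prev, W, b, {"pad": 1, "stride": 1})   # "same" convolution: (2, 8, 8, 4)
A = np.maximum(0, Z)                                         # ReLU activation (illustrative)
P, _ = pool_forward(A, {"f": 2, "stride": 2}, mode="max")    # max pooling: (2, 4, 4, 4)

print("conv output shape:", Z.shape)
print("pool output shape:", P.shape)
# -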
Convolutional Neural Networks/Convolution model - Step by Step - v1.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # Multi-Layer Perceptrons # # To solve more complicated problems, we need to add additional layers. There are several immediate questions to consider: # - How many perceptrons do I need? # - How many layers is sufficient? # - Should the hidden layers be larger or smaller than the input layer? # # This notebook will shed light on some of these questions through examples. # ## XOR Problem # # Let's start out by creating the data representing the XOR problem. # + import numpy as np import matplotlib.pyplot as plt # %matplotlib notebook def calc_decision_boundary(weights): x = -weights[0] / weights[1] y = -weights[0] / weights[2] m = -y / x return np.array([m, y]) def gen_boundary_points(weights, m, b): # If the slope is undefined, it is vertical. if weights[2] != 0: x = np.linspace(-5, 5, 100) y = m * x + b else: x = np.zeros(100) y = np.linspace(-5, 5, 100) + b return x, y def sigmoid(x): return 1.0 / (1.0 + np.exp(-x)) def dot(w, x): x_bias = np.concatenate((np.ones((x.shape[0], 1)), x), axis=1) return w @ x_bias.T # + # Define XOR inputs -- prepend a constant of 1 for bias multiplication samples = np.array([[0, 0], [0, 1], [1, 0], [1, 1]]) targets = np.array([0, 1, 1, 0]) fig = plt.figure() ax = fig.add_subplot(111) ax.scatter(samples[:, 0], samples[:, 1], c=targets) # - # What was the result of using a single perceptron to solve this problem? # # The most optimal outcome is 75\% accuracy. # + # Classifier Parameters weights = np.array([-0.5, 1, 1]) # For visualizing the line m, b = calc_decision_boundary(weights) x, y = gen_boundary_points(weights, m, b) fig = plt.figure() ax = fig.add_subplot(111) ax.plot(x, y, c='g') ax.scatter(samples[:, 0], samples[:, 1], c=targets) ax.set_xlim([-0.2, 1.2]) ax.set_ylim([-0.2, 1.2]) # - # ## Adding another perceptron # # We begin by adding a hidden layer with a single perceptron having a sigmoidal, nonlinear activation function. # # ![mlp1.png](attachment:mlp1.png) # # If the hidden layer has only a single unit that produces a scalar output, then the initialization of our output perceptron changes. The weight matrix defining the output perceptron must have a weight for each incoming input. Since the hidden layer output is of size 1, the output perceptron only has a single weight. # # ## Forward Pass # # To compute the forward pass with a hidden layer, we must first transform the input into the hidden layer space before transforming the intermediate result into the output space. # # $$y = \sigma(\mathbf{w}_o \cdot \sigma(\mathbf{w}_h \cdot \mathbf{x}))$$ # # We can write this in algorithmic form as # # layer_out = input # # for layer in layers: # layer_out = layer(layer_out) # # return layer_out # + hidden_weights = np.array([-0.5, 1, 1]) out_weights = np.array([0, 1]) # For visualizing the line hidden_m, hidden_b = calc_decision_boundary(hidden_weights) hidden_x, hidden_y = gen_boundary_points(hidden_weights, hidden_m, hidden_b) fig = plt.figure() ax = fig.add_subplot(111) ax.plot(hidden_x, hidden_y, c='g') ax.scatter(samples[:, 0], samples[:, 1], c=targets) ax.set_xlim([-0.2, 1.2]) ax.set_ylim([-0.2, 1.2]) # - # A single perceptron in the hidden layer means that we still only have a single decision boundary. It seems intuitive at this point that adding another neuron would give us 2 different decision boundaries. 
# # ![mlp2.png](attachment:mlp2.png) # + # hidden_weights = np.random.uniform(-1, 1, size=(2, 3)) # out_weights = np.random.uniform(-1, 1, size=(3,)) hidden_weights = np.array([[-0.5, 1, 1], [-1.5, 1, 1]]) out_weights = np.array([-0.22, 1.0, -1.0]) # For visualizing the line hidden_m0, hidden_b0 = calc_decision_boundary(hidden_weights[0]) hidden_x0, hidden_y0 = gen_boundary_points(hidden_weights[0], hidden_m0, hidden_b0) hidden_m1, hidden_b1 = calc_decision_boundary(hidden_weights[1]) hidden_x1, hidden_y1 = gen_boundary_points(hidden_weights[1], hidden_m1, hidden_b1) out_m, out_b = calc_decision_boundary(out_weights) out_x, out_y = gen_boundary_points(out_weights, out_m, out_b) # Forward propagation hidden_out = dot(hidden_weights, samples) hidden_act = sigmoid(hidden_out) print("Hidden layer BEFORE non-linearity") print(hidden_out) print("Hidden layer AFTER non-linearity") print(hidden_act) c = hidden_act.mean(1) h_min = hidden_act.min(1) h_max = hidden_act.max(1) b = np.abs(h_max - h_min).max() # Visualize hidden layer space fig1 = plt.figure() ax1 = fig1.add_subplot(111) ax1.set_title("Hidden Layer Space") ax1.plot(out_x, out_y, c='g') ax1.scatter(hidden_act[0, :], hidden_act[1, :], c=targets) ax1.set_xlim([c[0] - b, c[0] + b]) ax1.set_ylim([c[1] - b, c[1] + b]) # Forward pass finishing with final neuron out = dot(out_weights, hidden_act.T) print("Output BEFORE non-linearity") print(out) out_act = sigmoid(out) print("Output AFTER non-linearity") print(out_act) # Visualize input space fig2 = plt.figure() ax2 = fig2.add_subplot(111) ax2.set_title("Input Space") ax2.plot(hidden_x0, hidden_y0, c='g') ax2.plot(hidden_x1, hidden_y1, c='g') ax2.scatter(samples[:, 0], samples[:, 1], c=targets) ax2.set_xlim([-0.2, 1.2]) ax2.set_ylim([-0.2, 1.2])
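# -

# As a quick check on this hand-built network (an illustrative addition, assuming `out_act`, `samples` and `targets` from the cell above are still in scope), we can threshold the sigmoid outputs at 0.5 to turn them into hard XOR predictions.

# +
# Convert the output activations into class predictions and compare against the targets.
predictions = (out_act > 0.5).astype(int)
print("inputs:\n", samples)
print("predictions:", predictions)
print("targets:    ", targets)
print("accuracy:   ", np.mean(predictions == targets))
# -

# With these hand-picked weights the two hidden units define the two decision boundaries shown above, and the output unit separates the region between them, so all four XOR points are classified correctly -- something a single perceptron could not achieve.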
neural_networks/mlp.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:plotting_intro] # language: python # name: conda-env-plotting_intro-py # --- # # Python Plotting - An Introduction # ## Introduction # This notebook takes you through many different types of plot you'll come across in the atmospheric sciences. We'll use real climate data and some model output where appropriate. # # You'll need to download the BEST dataset - on a Linux machine this can be done straightforwardly by running `wget http://berkeleyearth.lbl.gov/auto/Global/Gridded/Land_and_Ocean_LatLong1.nc` in the `data` folder. # # Please send any comments or suggestions to dcw32.wade - at - gmail.com. # #Import all the packages we need now! This will take a while import cartopy.crs as ccrs import numpy as np import matplotlib.pylab as plt import math as m import os from netCDF4 import Dataset import pandas as pd #Specific packages import matplotlib.ticker as ticker import matplotlib.colors as colors import matplotlib.gridspec as gridspec from mpl_toolkits.axes_grid1 import host_subplot import mpl_toolkits.axisartist as AA import scipy.ndimage as ndimage # ## Scatter plots and errorbars with Farman et al 1985 # In this section we will plot the October mean ozone from 1957 to 1984. This long-term record of column ozone allowed for the detection of the ozone hole over Antarctica. The strong springtime depletion supported the role of heterogenous chemisty. #Read in all the files #These have been digitised from the original figure loc='data/' farman1=np.genfromtxt(loc+'farman_o32.csv',delimiter=',',skip_header=1) farman2=np.genfromtxt(loc+'farman_f11.csv',delimiter=',',skip_header=1) farman3=np.genfromtxt(loc+'farman_f12.csv',delimiter=',',skip_header=1) #Take an example to print print farman1 print farman1.shape #Ozone data o3_t=farman1[:,0] o3_mu=farman1[:,1] #DU o3_up=farman1[:,2] #DU o3_lo=farman1[:,3] #DU #F-11 data f11_t=farman2[:,0] f11_val=farman2[:,1] #pptv #F-12 data f12_t=farman3[:,0] f12_val=farman3[:,1] #pptv #Rough and ready plot plt.scatter(o3_t,o3_mu,marker='x',c='k') plt.show() #Now we want to include the upper and lower values on our plot fig,ax=plt.subplots() #better to create an axis object, then plot to that - makes things #easier when you want to plot multiple things on the same graph! 
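#errorbar expects asymmetric errors as offsets from the plotted value, ordered [lower, upper],
#so the digitised absolute bounds are converted below via o3_mu - o3_lo and o3_up - o3_mu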
ax.errorbar(o3_t,o3_mu,yerr=[o3_mu-o3_lo,o3_up-o3_mu],fmt='_',c='k',capthick=0) #Same ticks as the Farman plot: #Sets major xticks to given values ax.set_xticks([1960,1970,1980]) #Sets minor xticks every 2 years ax.xaxis.set_minor_locator(ticker.MultipleLocator(2)) ax.set_yticks([200,300]) #Sets ylabel ax.set_ylabel('Ozone Column / DU') ax.yaxis.set_minor_locator(ticker.MultipleLocator(20)) plt.show() #def make_patch_spines_invisible(ax): # ax.set_frame_on(True) # ax.patch.set_visible(False) # for sp in ax.spines.values(): # sp.set_visible(False) # To include the F-11, F-12 values, we need to do it slightly differently: #ax = host_subplot(111, axes_class=AA.Axes) fig,ax=plt.subplots(figsize=(5,6)) #Now want to create a second axis ax1 = ax.twinx() #Share x axis with the ozone # #Plot as before ax.errorbar(o3_t,o3_mu,yerr=[o3_mu-o3_lo,o3_up-o3_mu],fmt='_',c='k',capthick=0) #Now plot the scatter data ax1.scatter(f11_t,f11_val,c='k',marker='o') ax1.scatter(f12_t,f12_val/2.,facecolors='none', edgecolors='k',marker='o') # ax.set_xticks([1960,1970,1980]) ax.xaxis.set_minor_locator(ticker.MultipleLocator(2)) ax.set_yticks([200,300]) ax.yaxis.set_minor_locator(ticker.MultipleLocator(20)) #Note that matm cm in the orginal paper is identical to the Dobson unit ax.set_ylabel('Column Ozone / DU',fontsize=12) #Xlims ax.set_xlim(1956,1986) ax.set_ylim(170.,350.) #Reverse y axis ax1.set_ylim(300,-60) ax1.set_yticks([-60,0,100,200]) ax1.set_yticks([50,150],minor=True) ax1.set_yticklabels(["F11".center(5)+"F12".center(5), "0".center(7)+"0".center(7), "100".center(5)+"200".center(5), "200".center(5)+"400".center(5) ]) #Write October on the plot in the bottom left corner ax.annotate('October',xy=(1960,200),horizontalalignment='center',fontsize=12) plt.savefig('/homes/dcw32/figures/farman.png',bbox_inches='tight',dpi=200) plt.show() # + language="bash" # echo "hello from $BASH" # - # ## Line and bar charts with the NAO index # #Extract the NAO data nao_data=np.genfromtxt('data/nao.dat',skip_header=4)[:192,:] #No 2017 as incomplete print nao_data.shape print nao_data[:,0]#Calendar years # #For the NAO index we want the DJF (December, January, February averages) #Remove the first year (as only taking December) using [1:,0] meanining index 1 onwards years=nao_data[1:,0] # #Initialize nao_djf=np.zeros(len(years)) # Take the December of the previous year [i] then the January and February of the current year [i+1] and average # Note that `years` doesn't include the first year, hence the offset of i and i+1 (would otherwise be i-1 and i) for i in range(len(years)): nao_djf[i]=np.mean([nao_data[i,12],nao_data[i+1,1],nao_data[i+1,2]]) #def running_mean(x, N): # cumsum = np.cumsum(np.insert(x, 0, 0)) # return (cumsum[N:] - cumsum[:-N]) / N # + #nao_running=running_mean(nao_djf,11) #print nao_running.shape #print years[2:-3].shape # - fig,ax=plt.subplots(figsize=(6,4)) #Barchart - all negative values in blue ax.bar(years[nao_djf<0],nao_djf[nao_djf<0],color='#0018A8',edgecolor='#0018A8') #Barchart - all positive values in red ax.bar(years[nao_djf>0],nao_djf[nao_djf>0],color='#ED2939',edgecolor='#ED2939') #Plot the smoothed field - use a Gaussian filter ax.plot(years,ndimage.filters.gaussian_filter(nao_djf,2.),c='k',linewidth=4) #Set limits ax.set_xlim([np.min(years),np.max(years)]) ax.set_ylim([-3.5,3.5]) #Plot the zero line ax.axhline(0.,c='k') #Decrease label pad to make it closer to the axis ax.set_ylabel('NAO index',labelpad=-3,fontsize=14) plt.savefig('/homes/dcw32/figures/nao.png',bbox_inches='tight',dpi=200) 
plt.show() # ## Plot of the Berkeley Earth data sat_file=Dataset('data/Land_and_Ocean_LatLong1.nc') #This will raise a warning due to the missing data for early points sata=sat_file.variables['temperature'][:] sat_clim=sat_file.variables['climatology'][:] times=sat_file.variables['time'][:] lons=sat_file.variables['longitude'][:] print lons.shape lats=sat_file.variables['latitude'][:] print lats.shape print sata.shape sata=sata[np.logical_and(times>1950,times<2017),:,:] times=times[np.logical_and(times>1950,times<2017)] print sata.shape best_sata=np.reshape(sata,[12,sata.shape[0]/12,180,360]) nyrs=len(times)/12 print nyrs yrs=np.zeros(nyrs) annual_data=np.zeros([nyrs,len(lats),len(lons)]) for i in range(nyrs): annual_data[i,:,:]=np.mean(sata[12*i:12*i+12,:,:],axis=0) yrs[i]=np.mean(times[12*i:12*i+12]) yrs=yrs-0.5 zonal_annual=np.mean(annual_data,axis=2) def gbox_areas(x,y): # lats x lons area=np.zeros([x,y]) R=6.371E6 for j in range(x): area[j,:]=(R**2)*m.radians(360./y)*(m.sin(m.radians(90.-(j-0.5)*180./(x-1)))-m.sin(m.radians(90.-(180./(x-1))*(j+0.5)))) return area areas=gbox_areas(len(lats),len(lons)) gmst=np.zeros(nyrs) for i in range(nyrs): gmst[i]=np.average(annual_data[i,:,:],weights=areas) # + fig,ax=plt.subplots(figsize=(6,4)) ax.fill_between(yrs, 0., gmst,where=gmst>=0,facecolor='#ED2939',interpolate=True) ax.fill_between(yrs, 0., gmst,where=gmst<0,facecolor='#0018A8',interpolate=True) #Remove the right and top axes and make the ticks come out of the plot ax.spines['right'].set_visible(False) ax.spines['top'].set_visible(False) ax.yaxis.set_ticks_position('left') ax.xaxis.set_ticks_position('bottom') ax.tick_params(axis='y', direction='out') ax.tick_params(axis='x', direction='out') # ax.set_xlim([np.min(yrs),np.max(yrs)]) ax.set_ylim([-0.2,1.0]) ax.set_ylabel(r'GMST Anomaly / $\degree$C') #ax.plot(yrs,gmst,c='k',linewidth=2) plt.show() # - #Contour plot #This function shifts a colormap with uneven levels def shiftedColorMap(cmap, start=0, midpoint=0.5, stop=1.0, name='shiftedcmap'): cdict = { 'red': [], 'green': [], 'blue': [], 'alpha': [] } # regular index to compute the colors reg_index = np.linspace(start, stop, 257) # shifted index to match the data shift_index = np.hstack([ np.linspace(0.0, midpoint, 128, endpoint=False), np.linspace(midpoint, 1.0, 129, endpoint=True) ]) for ri, si in zip(reg_index, shift_index): r, g, b, a = cmap(ri) cdict['red'].append((si, r, r)) cdict['green'].append((si, g, g)) cdict['blue'].append((si, b, b)) cdict['alpha'].append((si, a, a)) newcmap = colors.LinearSegmentedColormap(name, cdict) plt.register_cmap(cmap=newcmap) return newcmap fig=plt.figure() ax1=fig.add_subplot(111) cmap=plt.get_cmap('RdBu_r') levs=[-0.9,-0.3,0.3,0.9,1.5,2.1] cmap=shiftedColorMap(cmap,0.30) cf1=ax1.contourf(yrs,lats,np.transpose(zonal_annual),levs,cmap=cmap,extend='both') ax1.set_yticks([-90,-45,0,45,90]) ax1.set_yticklabels(["90S","45S","EQ","45N","90N"]) fig=plt.figure() ax2=fig.add_subplot(111) cf2=ax2.contourf(yrs,np.sin(np.pi*lats/180.),np.transpose(zonal_annual),levs,cmap=cmap,extend='both') ax2.set_yticks([-1.0,-0.5,0.0,0.5,1.0]) ax2.set_yticklabels(['90S','30S','EQ','30N','90N']) cbaxes=fig.add_axes([0.15, 0.00, 0.7, 0.03]) cbar=plt.colorbar(cf1,cax=cbaxes,orientation="horizontal") #cbar=plt.colorbar(cf2,orientation='horizontal',pad=0.15) cbar.set_label('Surface Air Temperature Anomaly (1951-1980) / $\degree$C',fontsize=10) plt.show() #Note that the top plot is equal in latitude #while the bottom plot is equal in area #The high latitude warming is more 
accentuated in the top plot #If your interest is global mean, the bottom plot is more appropriate #If you want to highlight the high latitudes, the top plot is more appropriate # ### Global map projections and regional plots with Cartopy # gs=gridspec.GridSpec(2,1) gs.update(left=0.05, right=0.95, hspace=-0.2) levs=[10.,20.,30.,40.,50.] # These are the plotting levels extend='both' # Extend the colorbar above/below? Options are 'max','min','neither','both' colmap='RdBu_r' # colorscales, google "matplotlib colormaps" for other options colmap=plt.cm.get_cmap(colmap) colmap=shiftedColorMap(colmap,0.30) levs=[-1.0,-0.2,0.2,1.0,1.8,2.6,3.4] # Want to extract the SST for 2016 sst_2016=annual_data[np.where(yrs==2016)[0][0],:,:] #Create new figure fig=plt.figure(figsize=(5,8)) #Use a Robinson projection, draw coastlines im0=fig.add_subplot(gs[0],projection=ccrs.Robinson(central_longitude=0)) #im0=plt.axes(projection=ccrs.Robinson(central_longitude=0)) im0.coastlines() im0.set_global() #im1 is a reduced plot im1=fig.add_subplot(gs[1],projection=ccrs.PlateCarree()) im1.set_extent([-25,40,30,70]) im1.coastlines() # #Trickery to get the colormap to append for the 'both' extension - insert levels above and below levs2=np.insert(levs,0,levs[0]-1) levs2=np.append(levs2,levs2[len(levs2)-1]+1) # This normalises the levels so that if there are large differences between the sizes # of bins that the colors are uniform norm=colors.BoundaryNorm(levs2, ncolors=cmap.N, clip=True) # Filled contour at defined levels cay=im0.contourf(lons,lats,sst_2016,levs,transform=ccrs.PlateCarree(),cmap=colmap,extend=extend,norm=norm) caz=im1.contourf(lons,lats,sst_2016,levs,transform=ccrs.PlateCarree(),cmap=colmap,extend=extend,norm=norm) #Add colorbar, this is a more 'precise' way to add the colorbar by defining a new axis cbaxes=fig.add_axes([0.05, 0.1, 0.9, 0.03]) cbar=plt.colorbar(cay,cax=cbaxes,orientation="horizontal") cbar.set_label('2016 SAT Anomaly (1951-1980 Climatology) / $\degree$C') #plt.suptitle('2016 Surface Temperature Anomaly (from 1951-1980)') plt.savefig('/homes/dcw32/figures/best.png',bbox_inches='tight',dpi=200) plt.show() # ### Central England Temperature record vs BEST # Extract the Met Office Central England Temperature record # cet_data=np.genfromtxt('data/cetml1659on.dat',skip_header=7) fig=plt.figure(figsize=(4,4)) #1950-->2016 nyrs=2017-1950 sdate=np.where(cet_data[:,0]==1950)[0][0] cet=np.zeros([12,nyrs]) for i in range(nyrs): cet[:,i]=cet_data[sdate+i,1:13] print cet.shape # # # +asume that the CET can be represented by the box at 52N, -0.5&-1.5W x=np.where(lats==52.5)[0][0] y=np.where(lons==-1.5)[0][0] best_cet=np.mean(best_sata[:,:,x,y:y+2],axis=2) for i in range(nyrs): best_cet[:,i]=best_cet[:,i]+np.mean(sat_clim[:,x,y:y+2],axis=1) print best_cet.shape # # Now plot xmin=-4. xmax=22. 
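# Scatter the station-based CET monthly means against the co-located BEST reconstruction
# (grid cell at 52.5N, averaged over the 1.5W and 0.5W columns, with the climatology added back);
# the dashed 1:1 line below marks perfect agreement between the two records.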
plt.scatter(cet,best_cet,marker='.',c='darkred') plt.plot(np.linspace(xmin,xmax,100),np.linspace(xmin,xmax,100),c='k',linestyle='--') plt.xlabel(r'CET Monthly Mean Temperature / $\degree$C') plt.xlim(xmin,xmax) plt.ylim(xmin,xmax) plt.ylabel(r'BEST Monthly Mean Temperature / $\degree$C') plt.show() # + # Set names to plot and number of months scenarios = ['Obs', 'Model'] months = list(range(1, 13)) # Make some random data: var_obs = pd.DataFrame() # Start with empty dataframes var_model = pd.DataFrame() N_data = nyrs # Loop through months of years, feeding with random distributions for month in months: var_obs[month] = cet[month-1,:] var_model[month] = best_cet[month-1,:] # Set plotting settings scen_colours = {'Obs': 'black', 'Model': 'red'} scen_lstyle = {'Obs': '-', 'Model': '-.'} scen_marker = {'Obs': 'o', 'Model': 'v'} scen_flier = {'Obs': '+', 'Model': 'x'} labels = {'Obs': 'CET Record', 'Model': 'BEST Reconstruction'} labelsxy = {'Obs': [0.05,0.9], 'Model': [0.05,0.85]} linewidth = 2.5 # Combine data into dict var_all = {'Obs': var_obs, 'Model': var_model} # Set plotting options for each scenario displace_vals = [-.2, 0.2] widths = 0.3 markersize = 3 # Set percentiles for whiskers whis_perc = [5, 95] showfliers = True showmeans = True # Open figure fig = plt.figure(1, figsize=[8.5,4.5]) ax = fig.add_axes([0.15, 0.15, 0.65, 0.75]) # Loop over months and scenrios for month in months: for iscen, scen in enumerate(scenarios): # Load data data = var_all[scen][month] # Make plotting option dicts for boxplot function meanprops = dict(marker=scen_marker[scen], markerfacecolor=scen_colours[scen], markeredgecolor=scen_colours[scen] ) boxprops = dict(linestyle=scen_lstyle[scen], linewidth=linewidth, color=scen_colours[scen] ) medianprops = dict(linestyle=scen_lstyle[scen], linewidth=linewidth, color=scen_colours[scen] ) whiskerprops = dict(linestyle=scen_lstyle[scen], linewidth=linewidth, color=scen_colours[scen] ) capprops = dict(linestyle=scen_lstyle[scen], linewidth=linewidth, color=scen_colours[scen] ) flierprops = dict(marker=scen_flier[scen], markerfacecolor=scen_colours[scen], markeredgecolor=scen_colours[scen] ) # Plot data for this month and scenario plt.boxplot(data, positions=[month+displace_vals[iscen]], showmeans=showmeans, whis=whis_perc, showfliers=showfliers, flierprops=flierprops, meanprops=meanprops, medianprops=medianprops, boxprops=boxprops, whiskerprops=whiskerprops, capprops=capprops, widths=widths ) ax.annotate(labels[scen],xy=labelsxy[scen],xycoords='axes fraction',color=scen_colours[scen]) # Set axis labels ax.set_title('Central England Temperature') ax.set_xlim([months[0]-1, months[-1]+1]) ax.set_xticks(months) ax.set_xticklabels(['Jan','Feb','Mar','Apr','May','Jun','Jul','Aug','Sep','Oct','Nov','Dec'], fontsize=12) #ax.set_xlabel('Month of Year') # ax.set_ylim(ymin,ymax) ax.set_ylabel(r'Montly Mean Temperature / $\degree$C') plt.savefig('/homes/dcw32/figures/best_boxwhisker.png',transparent=True,bbox_inches='tight',dpi=200) plt.show() # - # ## Surface Ozone - Trends and Spectral Decomposition # To come!
plotting_intro.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.8 # language: python # name: python3 # --- # <center> # <img src="https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork/labs/Module%204/images/IDSNlogo.png" width="300" alt="cognitiveclass.ai logo" /> # </center> # # # Hierarchical Clustering # # Estimated time needed: **25** minutes # # ## Objectives # # After completing this lab you will be able to: # # * Use scikit-learn to do Hierarchical clustering # * Create dendograms to visualize the clustering # # <h1>Table of contents</h1> # # <div class="alert alert-block alert-info" style="margin-top: 20px"> # <ol> # <li><a href="https://#hierarchical_agglomerative">Hierarchical Clustering - Agglomerative</a></li> # <ol> # <li><a href="https://#generating_data">Generating Random Data</a></li> # <li><a href="https://#agglomerative_clustering">Agglomerative Clustering</a></li> # <li><a href="https://#dendrogram">Dendrogram Associated for the Agglomerative Hierarchical Clustering</a></li> # </ol> # <li><a href="https://#clustering_vehicle_dataset">Clustering on the Vehicle Dataset</a></li> # <ol> # <li><a href="https://#data_cleaning">Data Cleaning</a></li> # <li><a href="https://#clustering_using_scipy">Clustering Using Scipy</a></li> # <li><a href="https://#clustering_using_skl">Clustering using scikit-learn</a></li> # </ol> # </ol> # </div> # <br> # <hr> # # <h1 id="hierarchical_agglomerative">Hierarchical Clustering - Agglomerative</h1> # # We will be looking at a clustering technique, which is <b>Agglomerative Hierarchical Clustering</b>. Remember that agglomerative is the bottom up approach. <br> <br> # In this lab, we will be looking at Agglomerative clustering, which is more popular than Divisive clustering. <br> <br> # We will also be using Complete Linkage as the Linkage Criteria. <br> <b> <i> NOTE: You can also try using Average Linkage wherever Complete Linkage would be used to see the difference! </i> </b> # import numpy as np import pandas as pd from scipy import ndimage from scipy.cluster import hierarchy from scipy.spatial import distance_matrix from matplotlib import pyplot as plt from sklearn import manifold, datasets from sklearn.cluster import AgglomerativeClustering from sklearn.datasets.samples_generator import make_blobs # %matplotlib inline # <hr> # <h3 id="generating_data">Generating Random Data</h3> # We will be generating a set of data using the <b>make_blobs</b> class. <br> <br> # Input these parameters into make_blobs: # <ul> # <li> <b>n_samples</b>: The total number of points equally divided among clusters. </li> # <ul> <li> Choose a number from 10-1500 </li> </ul> # <li> <b>centers</b>: The number of centers to generate, or the fixed center locations. </li> # <ul> <li> Choose arrays of x,y coordinates for generating the centers. Have 1-10 centers (ex. centers=[[1,1], [2,5]]) </li> </ul> # <li> <b>cluster_std</b>: The standard deviation of the clusters. The larger the number, the further apart the clusters</li> # <ul> <li> Choose a number between 0.5-1.5 </li> </ul> # </ul> <br> # Save the result to <b>X1</b> and <b>y1</b>. # X1, y1 = make_blobs(n_samples=50, centers=[[4,4], [-2, -1], [1, 1], [10,4]], cluster_std=0.9) # Plot the scatter plot of the randomly generated data. 
# plt.scatter(X1[:, 0], X1[:, 1], marker='o') # <hr> # <h3 id="agglomerative_clustering">Agglomerative Clustering</h3> # # We will start by clustering the random data points we just created. # # The <b> Agglomerative Clustering </b> class will require two inputs: # # <ul> # <li> <b>n_clusters</b>: The number of clusters to form as well as the number of centroids to generate. </li> # <ul> <li> Value will be: 4 </li> </ul> # <li> <b>linkage</b>: Which linkage criterion to use. The linkage criterion determines which distance to use between sets of observation. The algorithm will merge the pairs of cluster that minimize this criterion. </li> # <ul> # <li> Value will be: 'complete' </li> # <li> <b>Note</b>: It is recommended you try everything with 'average' as well </li> # </ul> # </ul> <br> # Save the result to a variable called <b> agglom </b>. # agglom = AgglomerativeClustering(n_clusters = 4, linkage = 'average') # Fit the model with <b> X2 </b> and <b> y2 </b> from the generated data above. # agglom.fit(X1,y1) # Run the following code to show the clustering! <br> # Remember to read the code and comments to gain more understanding on how the plotting works. # # + # Create a figure of size 6 inches by 4 inches. plt.figure(figsize=(6,4)) # These two lines of code are used to scale the data points down, # Or else the data points will be scattered very far apart. # Create a minimum and maximum range of X1. x_min, x_max = np.min(X1, axis=0), np.max(X1, axis=0) # Get the average distance for X1. X1 = (X1 - x_min) / (x_max - x_min) # This loop displays all of the datapoints. for i in range(X1.shape[0]): # Replace the data points with their respective cluster value # (ex. 0) and is color coded with a colormap (plt.cm.spectral) plt.text(X1[i, 0], X1[i, 1], str(y1[i]), color=plt.cm.nipy_spectral(agglom.labels_[i] / 10.), fontdict={'weight': 'bold', 'size': 9}) # Remove the x ticks, y ticks, x and y axis plt.xticks([]) plt.yticks([]) #plt.axis('off') # Display the plot of the original data before clustering plt.scatter(X1[:, 0], X1[:, 1], marker='.') # Display the plot plt.show() # - # <h3 id="dendrogram">Dendrogram Associated for the Agglomerative Hierarchical Clustering</h3> # # Remember that a <b>distance matrix</b> contains the <b> distance from each point to every other point of a dataset </b>. # # Use the function <b> distance_matrix, </b> which requires <b>two inputs</b>. Use the Feature Matrix, <b> X1 </b> as both inputs and save the distance matrix to a variable called <b> dist_matrix </b> <br> <br> # Remember that the distance values are symmetric, with a diagonal of 0's. This is one way of making sure your matrix is correct. <br> (print out dist_matrix to make sure it's correct) # dist_matrix = distance_matrix(X1,X1) print(dist_matrix) # Using the <b> linkage </b> class from hierarchy, pass in the parameters: # # <ul> # <li> The distance matrix </li> # <li> 'complete' for complete linkage </li> # </ul> <br> # Save the result to a variable called <b> Z </b>. # Z = hierarchy.linkage(dist_matrix, 'complete') # A Hierarchical clustering is typically visualized as a dendrogram as shown in the following cell. Each merge is represented by a horizontal line. The y-coordinate of the horizontal line is the similarity of the two clusters that were merged, where cities are viewed as singleton clusters. # By moving up from the bottom layer to the top node, a dendrogram allows us to reconstruct the history of merges that resulted in the depicted clustering. 
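# As a brief aside (not part of the original lab), it can help to inspect the linkage matrix <b>Z</b> itself: each row records one merge as [index of cluster A, index of cluster B, merge distance, number of original points in the merged cluster], and the merge distance is exactly the height at which the dendrogram draws the corresponding horizontal line.

print(Z.shape)   # (n_samples - 1, 4): one row per merge
print(Z[:5])     # the five earliest (closest) merges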
# # Next, we will save the dendrogram to a variable called <b>dendro</b>. In doing this, the dendrogram will also be displayed. # Using the <b> dendrogram </b> class from hierarchy, pass in the parameter: # # <ul> <li> Z </li> </ul> # dendro = hierarchy.dendrogram(Z) # ## Practice # # We used **complete** linkage for our case, change it to **average** linkage to see how the dendogram changes. # Z = hierarchy.linkage(dist_matrix, 'average') dendro = hierarchy.dendrogram(Z) # <hr> # <h1 id="clustering_vehicle_dataset">Clustering on Vehicle dataset</h1> # # Imagine that an automobile manufacturer has developed prototypes for a new vehicle. Before introducing the new model into its range, the manufacturer wants to determine which existing vehicles on the market are most like the prototypes--that is, how vehicles can be grouped, which group is the most similar with the model, and therefore which models they will be competing against. # # Our objective here, is to use clustering methods, to find the most distinctive clusters of vehicles. It will summarize the existing vehicles and help manufacturers to make decision about the supply of new models. # # ### Download data # # To download the data, we will use **`!wget`** to download it from IBM Object Storage.\ # **Did you know?** When it comes to Machine Learning, you will likely be working with large datasets. As a business, where can you host your data? IBM is offering a unique opportunity for businesses, with 10 Tb of IBM Cloud Object Storage: [Sign up now for free](http://cocl.us/ML0101EN-IBM-Offer-CC) # # !wget -O cars_clus.csv https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork/labs/Module%204/data/cars_clus.csv # ## Read data # # Let's read dataset to see what features the manufacturer has collected about the existing models. # # + filename = 'cars_clus.csv' #Read csv pdf = pd.read_csv(filename) print ("Shape of dataset: ", pdf.shape) pdf.head(5) # - # The feature sets include price in thousands (price), engine size (engine_s), horsepower (horsepow), wheelbase (wheelbas), width (width), length (length), curb weight (curb_wgt), fuel capacity (fuel_cap) and fuel efficiency (mpg). # # <h2 id="data_cleaning">Data Cleaning</h2> # # Let's clean the dataset by dropping the rows that have null value: # print ("Shape of dataset before cleaning: ", pdf.size) pdf[[ 'sales', 'resale', 'type', 'price', 'engine_s', 'horsepow', 'wheelbas', 'width', 'length', 'curb_wgt', 'fuel_cap', 'mpg', 'lnsales']] = pdf[['sales', 'resale', 'type', 'price', 'engine_s', 'horsepow', 'wheelbas', 'width', 'length', 'curb_wgt', 'fuel_cap', 'mpg', 'lnsales']].apply(pd.to_numeric, errors='coerce') pdf = pdf.dropna() pdf = pdf.reset_index(drop=True) print ("Shape of dataset after cleaning: ", pdf.size) pdf.head(5) # ### Feature selection # # Let's select our feature set: # featureset = pdf[['engine_s', 'horsepow', 'wheelbas', 'width', 'length', 'curb_wgt', 'fuel_cap', 'mpg']] # ### Normalization # # Now we can normalize the feature set. **MinMaxScaler** transforms features by scaling each feature to a given range. It is by default (0, 1). That is, this estimator scales and translates each feature individually such that it is between zero and one. 
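# In other words (a standard formula, stated here only for reference), for a feature column $x$ with minimum $x_{min}$ and maximum $x_{max}$, the default (0, 1) scaling computes:
#
# $$x_{scaled} = \frac{x - x_{min}}{x_{max} - x_{min}}$$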
# from sklearn.preprocessing import MinMaxScaler x = featureset.values #returns a numpy array min_max_scaler = MinMaxScaler() feature_mtx = min_max_scaler.fit_transform(x) feature_mtx [0:5] # <h2 id="clustering_using_scipy">Clustering using Scipy</h2> # # In this part we use Scipy package to cluster the dataset. # # First, we calculate the distance matrix. # import scipy leng = feature_mtx.shape[0] D = scipy.zeros([leng,leng]) for i in range(leng): for j in range(leng): D[i,j] = scipy.spatial.distance.euclidean(feature_mtx[i], feature_mtx[j]) D # In agglomerative clustering, at each iteration, the algorithm must update the distance matrix to reflect the distance of the newly formed cluster with the remaining clusters in the forest. # The following methods are supported in Scipy for calculating the distance between the newly formed cluster and each: # \- single # \- complete # \- average # \- weighted # \- centroid # # We use **complete** for our case, but feel free to change it to see how the results change. # import pylab import scipy.cluster.hierarchy Z = hierarchy.linkage(D, 'complete') # Essentially, Hierarchical clustering does not require a pre-specified number of clusters. However, in some applications we want a partition of disjoint clusters just as in flat clustering. # So you can use a cutting line: # from scipy.cluster.hierarchy import fcluster max_d = 3 clusters = fcluster(Z, max_d, criterion='distance') clusters # Also, you can determine the number of clusters directly: # from scipy.cluster.hierarchy import fcluster k = 5 clusters = fcluster(Z, k, criterion='maxclust') clusters # Now, plot the dendrogram: # # + fig = pylab.figure(figsize=(18,50)) def llf(id): return '[%s %s %s]' % (pdf['manufact'][id], pdf['model'][id], int(float(pdf['type'][id])) ) dendro = hierarchy.dendrogram(Z, leaf_label_func=llf, leaf_rotation=0, leaf_font_size =12, orientation = 'right') # - # <h2 id="clustering_using_skl">Clustering using scikit-learn</h2> # # Let's redo it again, but this time using the scikit-learn package: # from sklearn.metrics.pairwise import euclidean_distances dist_matrix = euclidean_distances(feature_mtx,feature_mtx) print(dist_matrix) Z_using_dist_matrix = hierarchy.linkage(dist_matrix, 'complete') # + fig = pylab.figure(figsize=(18,50)) def llf(id): return '[%s %s %s]' % (pdf['manufact'][id], pdf['model'][id], int(float(pdf['type'][id])) ) dendro = hierarchy.dendrogram(Z_using_dist_matrix, leaf_label_func=llf, leaf_rotation=0, leaf_font_size =12, orientation = 'right') # - # Now, we can use the 'AgglomerativeClustering' function from scikit-learn library to cluster the dataset. The AgglomerativeClustering performs a hierarchical clustering using a bottom up approach. The linkage criteria determines the metric used for the merge strategy: # # * Ward minimizes the sum of squared differences within all clusters. It is a variance-minimizing approach and in this sense is similar to the k-means objective function but tackled with an agglomerative hierarchical approach. # * Maximum or complete linkage minimizes the maximum distance between observations of pairs of clusters. # * Average linkage minimizes the average of the distances between all observations of pairs of clusters. 
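# One caveat before running the next cell: it fits `AgglomerativeClustering` directly on `dist_matrix`, and with the default settings scikit-learn treats that matrix as ordinary feature columns and recomputes Euclidean distances from it. If you want the model to use your precomputed distances as distances, you can declare that explicitly. A minimal sketch (the keyword is `affinity` in older scikit-learn releases and `metric` in newer ones):

# Sketch only: tell scikit-learn that dist_matrix already contains pairwise distances.
agglom_precomputed = AgglomerativeClustering(n_clusters=6, linkage='complete',
                                             affinity='precomputed')
agglom_precomputed.fit(dist_matrix)
print(agglom_precomputed.labels_[:10])  # cluster label of the first ten vehicles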
# # + agglom = AgglomerativeClustering(n_clusters = 6, linkage = 'complete') agglom.fit(dist_matrix) agglom.labels_ # - # We can add a new field to our dataframe to show the cluster of each row: # pdf['cluster_'] = agglom.labels_ pdf.head() # + import matplotlib.cm as cm n_clusters = max(agglom.labels_)+1 colors = cm.rainbow(np.linspace(0, 1, n_clusters)) cluster_labels = list(range(0, n_clusters)) # Create a figure of size 6 inches by 4 inches. plt.figure(figsize=(16,14)) for color, label in zip(colors, cluster_labels): subset = pdf[pdf.cluster_ == label] for i in subset.index: plt.text(subset.horsepow[i], subset.mpg[i],str(subset['model'][i]), rotation=25) plt.scatter(subset.horsepow, subset.mpg, s= subset.price*10, c=color, label='cluster'+str(label),alpha=0.5) # plt.scatter(subset.horsepow, subset.mpg) plt.legend() plt.title('Clusters') plt.xlabel('horsepow') plt.ylabel('mpg') # - # As you can see, we are seeing the distribution of each cluster using the scatter plot, but it is not very clear where is the centroid of each cluster. Moreover, there are 2 types of vehicles in our dataset, "truck" (value of 1 in the type column) and "car" (value of 0 in the type column). So, we use them to distinguish the classes, and summarize the cluster. First we count the number of cases in each group: # pdf.groupby(['cluster_','type'])['cluster_'].count() # Now we can look at the characteristics of each cluster: # agg_cars = pdf.groupby(['cluster_','type'])['horsepow','engine_s','mpg','price'].mean() agg_cars # It is obvious that we have 3 main clusters with the majority of vehicles in those. # # **Cars**: # # * Cluster 1: with almost high mpg, and low in horsepower. # # * Cluster 2: with good mpg and horsepower, but higher price than average. # # * Cluster 3: with low mpg, high horsepower, highest price. # # **Trucks**: # # * Cluster 1: with almost highest mpg among trucks, and lowest in horsepower and price. # * Cluster 2: with almost low mpg and medium horsepower, but higher price than average. # * Cluster 3: with good mpg and horsepower, low price. # # Please notice that we did not use **type** and **price** of cars in the clustering process, but Hierarchical clustering could forge the clusters and discriminate them with quite a high accuracy. # plt.figure(figsize=(16,10)) for color, label in zip(colors, cluster_labels): subset = agg_cars.loc[(label,),] for i in subset.index: plt.text(subset.loc[i][0]+5, subset.loc[i][2], 'type='+str(int(i)) + ', price='+str(int(subset.loc[i][3]))+'k') plt.scatter(subset.horsepow, subset.mpg, s=subset.price*20, c=color, label='cluster'+str(label)) plt.legend() plt.title('Clusters') plt.xlabel('horsepow') plt.ylabel('mpg') # <h2>Want to learn more?</h2> # # IBM SPSS Modeler is a comprehensive analytics platform that has many machine learning algorithms. It has been designed to bring predictive intelligence to decisions made by individuals, by groups, by systems – by your enterprise as a whole. A free trial is available through this course, available here: <a href="https://www.ibm.com/analytics/spss-statistics-software?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkML0101ENSkillsNetwork20718538-2021-01-01">SPSS Modeler</a> # # Also, you can use Watson Studio to run these notebooks faster with bigger datasets. Watson Studio is IBM's leading cloud solution for data scientists, built by data scientists. 
With Jupyter notebooks, RStudio, Apache Spark and popular libraries pre-packaged in the cloud, Watson Studio enables data scientists to collaborate on their projects without having to install anything. Join the fast-growing community of Watson Studio users today with a free account at <a href="https://www.ibm.com/cloud/watson-studio?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkML0101ENSkillsNetwork20718538-2021-01-01">Watson Studio</a> # # ### Thank you for completing this lab! # # ## Author # # <NAME> # # ### Other Contributors # # <a href="https://www.linkedin.com/in/joseph-s-50398b136/?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkML0101ENSkillsNetwork20718538-2021-01-01" target="_blank"><NAME></a> # # ## Change Log # # | Date (YYYY-MM-DD) | Version | Changed By | Change Description | # | ----------------- | ------- | ---------- | --------------------------------------------------- | # | 2021-01-11 | 2.2 | Lakshmi | Changed distance matrix in agglomerative clustering | # | 2020-11-03 | 2.1 | Lakshmi | Updated URL | # | 2020-08-27 | 2.0 | Lavanya | Moved lab to course repo in GitLab | # # ## <h3 align="center"> © IBM Corporation 2020. All rights reserved. <h3/> #
9_Machine Learning with Python/4-2.Hierarchical-Cars.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # <img src="https://github.com/insaid2018/Term-1/blob/master/Images/INSAID_Full%20Logo.png?raw=true" width="240" height="360" /> # # # LINEAR REGRESSION # ## Table of Content # # 1. [Problem Statement](#section1)<br> # 2. [Data Loading and Description](#section2)<br> # 3. [Exploratory Data Analysis](#section3)<br> # 4. [Introduction to Linear Regression](#section4)<br> # - 4.1 [Linear Regression Equation with Errors in consideration](#section401)<br> # - 4.1.1 [Assumptions of Linear Regression](#sectionassumptions)<br> # - 4.2 [Preparing X and y using pandas](#section402)<br> # - 4.3 [Splitting X and y into training and test datasets](#section403)<br> # - 4.4 [Linear regression in scikit-learn](#section404)<br> # - 4.5 [Interpreting Model Coefficients](#section405)<br> # - 4.3 [Using the Model for Prediction](#section406)<br> # 5. [Model evaluation](#section5)<br> # - 5.1 [Model evaluation using metrics](#section501)<br> # - 5.2 [Model Evaluation using Rsquared value.](#section502)<br> # 6. [Feature Selection](#section6)<br> # 7. [Handling Categorical Features](#section7)<br> # <a id=section1></a> # ## 1. Problem Statement # # __Sales__ (in thousands of units) for a particular product as a __function__ of __advertising budgets__ (in thousands of dollars) for _TV, radio, and newspaper media_. Suppose that in our role as __Data Scientist__ we are asked to suggest. # # - We want to find a function that given input budgets for TV, radio and newspaper __predicts the output sales__. # # - Which media __contribute__ to sales? # # - Visualize the __relationship__ between the _features_ and the _response_ using scatter plots. # <a id=section2></a> # ## 2. Data Loading and Description # # The adverstising dataset captures sales revenue generated with respect to advertisement spends across multiple channles like radio, tv and newspaper. # - TV - Spend on TV Advertisements # - Radio - Spend on radio Advertisements # - Newspaper - Spend on newspaper Advertisements # - Sales - Sales revenue generated # __Importing Packages__ # + import pandas as pd import seaborn as sns import matplotlib.pyplot as plt from sklearn import metrics import numpy as np # allow plots to appear directly in the notebook # %matplotlib inline # - # #### Importing the Dataset data = pd.read_csv('https://raw.githubusercontent.com/insaid2018/Term-2/master/CaseStudy/Advertising.csv', index_col=0) data.head() # What are the **features**? # - TV: advertising dollars spent on TV for a single product in a given market (in thousands of dollars) # - Radio: advertising dollars spent on Radio # - Newspaper: advertising dollars spent on Newspaper # # What is the **response**? # - Sales: sales of a single product in a given market (in thousands of widgets) # <a id=section3></a> # ## 3. Exploratory Data Analysis data.shape data.info() data.describe(include="all") # There are 200 **observations**, and thus 200 markets in the dataset. 
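# As an extra check (not part of the original walkthrough), it is worth confirming that none of the budget or sales columns contain missing values before we start modelling:

data.isnull().sum()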
# __Distribution of Features__ # + feature_cols = ['TV', 'radio', 'newspaper'] # create a Python list of feature names X = data[feature_cols] y=data.sales from sklearn.model_selection import train_test_split X_train,X_test,Y_train,Y_test=train_test_split(X,y,test_size=0.20,random_state=1) from sklearn.linear_model import LinearRegression lr=LinearRegression() linreg=lr.fit(X_train,Y_train) print(linreg.coef_) print(linreg.intercept_) y_pred=linreg.predict(X_train) from sklearn.metrics import mean_squared_error mean_squared_error(Y_train,y_pred) # + f, axes = plt.subplots(2, 2, figsize=(7, 7), sharex=False) # Set up the matplotlib figure sns.despine(left=True) sns.distplot(data.sales, color="b", ax=axes[0, 0]) sns.distplot(data.TV, color="r", ax=axes[0, 1]) sns.distplot(data.radio, color="g", ax=axes[1, 0]) sns.distplot(data.newspaper, color="m", ax=axes[1, 1]) # - # __Observations__<br/> # _Sales_ seems to be __normal distribution__. Spending on _newspaper advertisement_ seems to be __right skewed__. Most of the spends on _newspaper_ is __fairly low__ where are spend on _radio and tv_ seems be __uniform distribution__. Spends on _tv_ are __comparatively higher__ then spend on _radio and newspaper_. # ### Is there a relationship between sales and spend various advertising channels? # + JG1 = sns.jointplot("newspaper", "sales", data=data, kind='reg') JG2 = sns.jointplot("radio", "sales", data=data, kind='reg') JG3 = sns.jointplot("TV", "sales", data=data, kind='reg') #subplots migration f = plt.figure() for J in [JG1, JG2,JG3]: for A in J.fig.axes: f._axstack.add(f._make_key(A), A) # - # __Observation__<br/> # _Sales and spend on newpaper_ is __not__ highly correlaed where are _sales and spend on tv_ is __highly correlated__. # ### Visualising Pairwise correlation sns.pairplot(data, size = 2, aspect = 1.5) sns.pairplot(data, x_vars=['TV', 'radio', 'newspaper'], y_vars='sales', size=5, aspect=1, kind='reg') # __Observation__ # # - Strong relationship between TV ads and sales # - Weak relationship between Radio ads and sales # - Very weak to no relationship between Newspaper ads and sales # # # ### Calculating and plotting heatmap correlation data.corr() sns.heatmap( data.corr(), annot=True ); # __Observation__ # # - The diagonal of the above matirx shows the auto-correlation of the variables. It is always 1. You can observe that the correlation between __TV and Sales is highest i.e. 0.78__ and then between __sales and radio i.e. 0.576__. # # - correlations can vary from -1 to +1. Closer to +1 means strong positive correlation and close -1 means strong negative correlation. Closer to 0 means not very strongly correlated. variables with __strong correlations__ are mostly probably candidates for __model builing__. # # <a id=section4></a> # ## 4. Introduction to Linear Regression # # __Linear regression__ is a _basic_ and _commonly_ used type of __predictive analysis__. The overall idea of regression is to examine two things: # - Does a set of __predictor variables__ do a good job in predicting an __outcome__ (dependent) variable? # - Which variables in particular are __significant predictors__ of the outcome variable, and in what way they do __impact__ the outcome variable? # # These regression estimates are used to explain the __relationship between one dependent variable and one or more independent variables__. 
The simplest form of the regression equation with one dependent and one independent variable is defined by the formula :<br/> # $y = \beta_0 + \beta_1x$ # # ![image.png](attachment:image.png) # # What does each term represent? # - $y$ is the response # - $x$ is the feature # - $\beta_0$ is the intercept # - $\beta_1$ is the coefficient for x # # # Three major uses for __regression analysis__ are: # - determining the __strength__ of predictors, # - Typical questions are what is the strength of __relationship__ between _dose and effect_, _sales and marketing spending_, or _age and income_. # - __forecasting__ an effect, and # - how much __additional sales income__ do I get for each additional $1000 spent on marketing? # - __trend__ forecasting. # - what will the __price of house__ be in _6 months_? # <a id=section401></a> # ### 4.1 Linear Regression Equation with Errors in consideration # # While taking errors into consideration the equation of linear regression is: # ![image.png](attachment:image.png) # Generally speaking, coefficients are estimated using the **least squares criterion**, which means we are find the line (mathematically) which minimizes the **sum of squared residuals** (or "sum of squared errors"): # # What elements are present in the diagram? # - The black dots are the **observed values** of x and y. # - The blue line is our **least squares line**. # - The red lines are the **residuals**, which are the distances between the observed values and the least squares line. # ![image.png](attachment:image.png) # # How do the model coefficients relate to the least squares line? # - $\beta_0$ is the **intercept** (the value of $y$ when $x$ = 0) # - $\beta_1$ is the **slope** (the change in $y$ divided by change in $x$) # # Here is a graphical depiction of those calculations: # ![image.png](attachment:image.png) # <a id = sectionassumptions></a> # #### 4.1.1 Assumptions of Linear Regression # 1. There should be a linear and additive relationship between dependent (response) variable and independent (predictor) variable(s). A linear relationship suggests that a change in response Y due to one unit change in X¹ is constant, regardless of the value of X¹. An additive relationship suggests that the effect of X¹ on Y is independent of other variables. # 2. There should be no correlation between the residual (error) terms. Absence of this phenomenon is known as Autocorrelation. # 3. The independent variables should not be correlated. Absence of this phenomenon is known as multicollinearity. # 4. The error terms must have constant variance. This phenomenon is known as homoskedasticity. The presence of non-constant variance is referred to heteroskedasticity. # 5. The error terms must be normally distributed. # <a id=section402></a> # ### 4.2 Preparing X and y using pandas # - __Standardization__. <br/> # Standardize features by removing the _mean_ and scaling to _unit standard deviation_. sns.distplot(data['TV']) from sklearn.preprocessing import StandardScaler scaler = StandardScaler().fit(data) data1 = scaler.transform(data) data = pd.DataFrame(data1) data.head() data.columns = ['TV','radio','newspaper','sales'] data.head() plt.scatter(data['radio'],data['sales']) plt.show() sns.distplot(data['TV']) feature_cols = ['TV', 'radio', 'newspaper'] # create a Python list of feature names X = data[feature_cols] # use the list to select a subset of the original DataFrame-+ # - Checking the type and shape of X. 
print(type(X)) print(X.shape) y = data.sales y.head() # - Check the type and shape of y print(type(y)) print(y.shape) # <a id=section403></a> ####SUKRUTH: # ### 4.3 Splitting X and y into training and test datasets. # + from sklearn.model_selection import train_test_split def split(X,y): return train_test_split(X, y, test_size=0.20, random_state=1) # - X_train, X_test, y_train, y_test=split(X,y) print('Train cases as below') print('X_train shape: ',X_train.shape) print('y_train shape: ',y_train.shape) print('\nTest cases as below') print('X_test shape: ',X_test.shape) print('y_test shape: ',y_test.shape) # <a id=section404></a> # ### 4.4 Linear regression in scikit-learn # To apply any machine learning algorithm on your dataset, basically there are 4 steps: # 1. Load the algorithm # 2. Instantiate and Fit the model to the training dataset # 3. Prediction on the test set # 4. Calculating Root mean square error # The code block given below shows how these steps are carried out:<br/> # # ``` from sklearn.linear_model import LinearRegression # linreg = LinearRegression() # linreg.fit(X_train, y_train) # RMSE_test = np.sqrt(metrics.mean_squared_error(y_test, y_pred_test))``` def linear_reg( X, y, gridsearch = False): X_train, X_test, y_train, y_test = split(X,y) from sklearn.linear_model import LinearRegression linreg = LinearRegression() if not(gridsearch): linreg.fit(X_train, y_train) else: from sklearn.model_selection import GridSearchCV parameters = {'normalize':[True,False], 'copy_X':[True, False]} linreg = GridSearchCV(linreg,parameters, cv = 10,refit = True) linreg.fit(X_train, y_train) # fit the model to the training data (learn the coefficients) print("Mean cross-validated score of the best_estimator : ", linreg.best_score_) y_pred_test = linreg.predict(X_test) # make predictions on the testing set RMSE_test = np.sqrt(metrics.mean_squared_error(y_test, y_pred_test)) # compute the RMSE of our predictions print('RMSE for the test set is {}'.format(RMSE_test)) return linreg # ### Linear Regression Model without GridSearcCV # Note: Linear Regression Model with GridSearcCV is implemented at Table of Contents: 8 X = data[feature_cols] y = data.sales linreg = linear_reg(X,y) # <a id=section405></a> # ### 4.5 Interpreting Model Coefficients print('Intercept:',linreg.intercept_) # print the intercept print('Coefficients:',linreg.coef_) # Its hard to remember the order of the feature names, we so we are __zipping__ the features to pair the feature names with the coefficients feature_cols.insert(0,'Intercept') coef = linreg.coef_.tolist() coef.insert(0, linreg.intercept_) # + eq1 = zip(feature_cols, coef) for c1,c2 in eq1: print(c1,c2) # - # __y = 0.00116 + 0.7708 `*` TV + 0.508 `*` radio + 0.010 `*` newspaper__ # How do we interpret the TV coefficient (_0.77081_) # - A "unit" increase in TV ad spending is **associated with** a _"0.7708_ unit" increase in Sales. # - Or more clearly: An additional $1,000 spent on TV ads is **associated with** an increase in sales of 770.8 widgets. # # Important Notes: # - This is a statement of __association__, not __causation__. # - If an increase in TV ad spending was associated with a __decrease__ in sales, β1 would be __negative.__ # <a id=section406></a> # ### 4.6 Using the Model for Prediction y_pred_train = linreg.predict(X_train) y_pred_test = linreg.predict(X_test) # make predictions on the testing set # - We need an evaluation metric in order to compare our predictions with the actual values. # <a id=section5></a> # ## 5. 
Model evaluation # __Error__ is the _deviation_ of the values _predicted_ by the model with the _true_ values.<br/> # For example, if a model predicts that the price of apple is Rs75/kg, but the actual price of apple is Rs100/kg, then the error in prediction will be Rs25/kg.<br/> # Below are the types of error we will be calculating for our _linear regression model_: # - Mean Absolute Error # - Mean Squared Error # - Root Mean Squared Error # <a id=section501></a> # ### 5.1 Model Evaluation using __metrics.__ # __Mean Absolute Error__ (MAE) is the mean of the absolute value of the errors: # $$\frac 1n\sum_{i=1}^n|y_i-\hat{y}_i|$$ # Computing the MAE for our Sales predictions MAE_train = metrics.mean_absolute_error(y_train, y_pred_train) MAE_test = metrics.mean_absolute_error(y_test, y_pred_test) print('MAE for training set is {}'.format(MAE_train)) print('MAE for test set is {}'.format(MAE_test)) # __Mean Squared Error__ (MSE) is the mean of the squared errors: # $$\frac 1n\sum_{i=1}^n(y_i-\hat{y}_i)^2$$ # # Computing the MSE for our Sales predictions MSE_train = metrics.mean_squared_error(y_train, y_pred_train) MSE_test = metrics.mean_squared_error(y_test, y_pred_test) print('MSE for training set is {}'.format(MSE_train)) print('MSE for test set is {}'.format(MSE_test)) # __Root Mean Squared Error__ (RMSE) is the square root of the mean of the squared errors: # # $$\sqrt{\frac 1n\sum_{i=1}^n(y_i-\hat{y}_i)^2}$$ # # Computing the RMSE for our Sales predictions RMSE_train = np.sqrt( metrics.mean_squared_error(y_train, y_pred_train)) RMSE_test = np.sqrt(metrics.mean_squared_error(y_test, y_pred_test)) print('RMSE for training set is {}'.format(RMSE_train)) print('RMSE for test set is {}'.format(RMSE_test)) # Comparing these metrics: # # - __MAE__ is the easiest to understand, because it's the __average error.__ # - __MSE__ is more popular than MAE, because MSE "punishes" larger errors. # - __RMSE__ is even more popular than MSE, because RMSE is _interpretable_ in the "y" units. # - Easier to put in context as it's the same units as our response variable. # <a id=section502></a> # ### 5.2 Model Evaluation using Rsquared value. # - There is one more method to evaluate linear regression model and that is by using the __Rsquared__ value.<br/> # - R-squared is the **proportion of variance explained**, meaning the proportion of variance in the observed data that is explained by the model, or the reduction in error over the **null model**. (The null model just predicts the mean of the observed response, and thus it has an intercept and no slope.) # # - R-squared is between 0 and 1, and higher is better because it means that more variance is explained by the model. But there is one shortcoming of Rsquare method and that is **R-squared will always increase as you add more features to the model**, even if they are unrelated to the response. Thus, selecting the model with the highest R-squared is not a reliable approach for choosing the best linear model. # # There is alternative to R-squared called **adjusted R-squared** that penalizes model complexity (to control for overfitting). 
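# For reference, the standard textbook formula that the next cell implements, with $n$ observations and $p$ predictors, is:
#
# $$Adjusted\ R^2 = 1 - (1 - R^2)\ \frac{n - 1}{n - p - 1}$$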
yhat = linreg.predict(X_train) SS_Residual = sum((y_train-yhat)**2) SS_Total = sum((y_train-np.mean(y_train))**2) r_squared = 1 - (float(SS_Residual))/SS_Total adjusted_r_squared = 1 - (1-r_squared)*(len(y_train)-1)/(len(y_train)-X_train.shape[1]-1) print(r_squared, adjusted_r_squared) yhat = linreg.predict(X_test) SS_Residual = sum((y_test-yhat)**2) SS_Total = sum((y_test-np.mean(y_test))**2) r_squared = 1 - (float(SS_Residual))/SS_Total adjusted_r_squared = 1 - (1-r_squared)*(len(y_test)-1)/(len(y_test)-X_test.shape[1]-1) print(r_squared, adjusted_r_squared) # <a id=section6></a> # ## 6. Feature Selection # # At times some features do not contribute much to the accuracy of the model, in that case its better to discard those features.<br/> # - Let's check whether __"newspaper"__ improve the quality of our predictions or not.<br/> # To check this we are going to take all the features other than "newspaper" and see if the error (RMSE) is reducing or not. # - Also Applying __gridsearch__ method for exhaustive search over specified parameter values of estimator. feature_cols = ['TV','radio'] # create a Python list of feature names X = data[feature_cols] y = data.sales linreg=linear_reg(X,y,gridsearch=True) # - _Before_ doing feature selection _RMSE_ for the test dataset was __0.271182__.<br/> # - _After_ discarding 'newspaper' column, RMSE comes to be __0.268675__.<br/> # - As you can see there is __no significant improvement__ in the quality, therefore, the 'newspaper' column shouldn't be discarded. But if in some other case if there is significant decrease in the RMSE, then you must discard that feature. # - Give a try to other __features__ and check the RMSE score for each one. # <a id=section7></a> # + features=['TV','newspaper','radio'] X=data[features] y=data.sales #linreg=linear_reg(X,y,gridsearch=True) from sklearn.model_selection import train_test_split X_train,X_test,Y_train,Y_test=train_test_split(X,y,test_size=0.2,random_state=1) from sklearn.metrics import r2_score from sklearn.metrics import mean_absolute_error from sklearn.metrics import mean_squared_error from sklearn.linear_model import LinearRegression lr = LinearRegression() linreg=lr.fit(X_train,Y_train) Y_pred=linreg.predict(X_train) mean_squared_error(Y_train,Y_pred) #print("MSE = {0}".format(mean_squared_error(Y_train,Y_pred))) # - # ## 7. Handling Categorical Features # # Let's create a new feature called **Area**, and randomly assign observations to be **rural, suburban, or urban** : np.random.seed(123456) # set a seed for reproducibility nums = np.random.rand(len(data)) mask_suburban = (nums > 0.33) & (nums < 0.66) # assign roughly one third of observations to each group mask_urban = nums > 0.66 data['Area'] = 'rural' data.loc[mask_suburban, 'Area'] = 'suburban' data.loc[mask_urban, 'Area'] = 'urban' data.head() # We want to represent Area numerically, but we can't simply code it as:<br/> # - 0 = rural,<br/> # - 1 = suburban,<br/> # - 2 = urban<br/> # Because that would imply an **ordered relationship** between suburban and urban, and thus urban is somehow "twice" the suburban category.<br/> Note that if you do have ordered categories (i.e., strongly disagree, disagree, neutral, agree, strongly agree), you can use a single dummy variable to represent the categories numerically (such as 1, 2, 3, 4, 5).<br/> # # Anyway, our Area feature is unordered, so we have to create **additional dummy variables**. 
Let's explore how to do this using pandas: area_dummies = pd.get_dummies(data.Area, prefix='Area') # create three dummy variables using get_dummies area_dummies.head() # However, we actually only need **two dummy variables, not three**. # __Why???__ # Because two dummies captures all the "information" about the Area feature, and implicitly defines rural as the "baseline level". # # Let's see what that looks like: area_dummies = pd.get_dummies(data.Area, prefix='Area').iloc[:, 1:] area_dummies.head() # Here is how we interpret the coding: # - **rural** is coded as Area_suburban = 0 and Area_urban = 0 # - **suburban** is coded as Area_suburban = 1 and Area_urban = 0 # - **urban** is coded as Area_suburban = 0 and Area_urban = 1 # # If this sounds confusing, think in general terms that why we need only __k-1 dummy variables__ if we have a categorical feature with __k "levels"__. # # Anyway, let's add these two new dummy variables onto the original DataFrame, and then include them in the linear regression model. # concatenate the dummy variable columns onto the DataFrame (axis=0 means rows, axis=1 means columns) data = pd.concat([data, area_dummies], axis=1) data.head() feature_cols = ['TV', 'radio', 'newspaper', 'Area_suburban', 'Area_urban'] # create a Python list of feature names X = data[feature_cols] y = data.sales linreg = linear_reg(X,y) # + feature_cols.insert(0,'Intercept') coef = linreg.coef_.tolist() coef.insert(0, linreg.intercept_) eq1 = zip(feature_cols, coef) for c1,c2 in eq1: print(c1,c2) # - # __y = - 0.00218 + 0.7691 `*` TV + 0.505 `*` radio + 0.011 `*` newspaper - 0.0311 `*` Area_suburban + 0.0418 `*` Area_urban__<br/> # How do we interpret the coefficients?<br/> # - Holding all other variables fixed, being a **suburban** area is associated with an average **decrease** in Sales of 0.0311 widgets (as compared to the baseline level, which is rural). # - Being an **urban** area is associated with an average **increase** in Sales of 0.0418 widgets (as compared to rural). # <a id=section8></a>
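# As a closing aside on the dummy-encoding step above (an equivalent shortcut, not used in this notebook): pandas can drop the baseline level for you, so the `.iloc[:, 1:]` slicing is optional.

pd.get_dummies(data.Area, prefix='Area', drop_first=True).head()  # keeps Area_suburban and Area_urban only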
INSAID/Course_Material/Term1-ML/Linear_Regression/LinearRegression.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Initialization # # Welcome to the first assignment of "Improving Deep Neural Networks". # # Training your neural network requires specifying an initial value of the weights. A well chosen initialization method will help learning. # # If you completed the previous course of this specialization, you probably followed our instructions for weight initialization, and it has worked out so far. But how do you choose the initialization for a new neural network? In this notebook, you will see how different initializations lead to different results. # # A well chosen initialization can: # - Speed up the convergence of gradient descent # - Increase the odds of gradient descent converging to a lower training (and generalization) error # # To get started, run the following cell to load the packages and the planar dataset you will try to classify. # + import numpy as np import matplotlib.pyplot as plt import sklearn import sklearn.datasets from init_utils import sigmoid, relu, compute_loss, forward_propagation, backward_propagation from init_utils import update_parameters, predict, load_dataset, plot_decision_boundary, predict_dec # %matplotlib inline plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' # load image dataset: blue/red dots in circles train_X, train_Y, test_X, test_Y = load_dataset() # - # You would like a classifier to separate the blue dots from the red dots. # ## 1 - Neural Network model # You will use a 3-layer neural network (already implemented for you). Here are the initialization methods you will experiment with: # - *Zeros initialization* -- setting `initialization = "zeros"` in the input argument. # - *Random initialization* -- setting `initialization = "random"` in the input argument. This initializes the weights to large random values. # - *He initialization* -- setting `initialization = "he"` in the input argument. This initializes the weights to random values scaled according to a paper by He et al., 2015. # # **Instructions**: Please quickly read over the code below, and run it. In the next part you will implement the three initialization methods that this `model()` calls. def model(X, Y, learning_rate = 0.01, num_iterations = 15000, print_cost = True, initialization = "he"): """ Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID. Arguments: X -- input data, of shape (2, number of examples) Y -- true "label" vector (containing 0 for red dots; 1 for blue dots), of shape (1, number of examples) learning_rate -- learning rate for gradient descent num_iterations -- number of iterations to run gradient descent print_cost -- if True, print the cost every 1000 iterations initialization -- flag to choose which initialization to use ("zeros","random" or "he") Returns: parameters -- parameters learnt by the model """ grads = {} costs = [] # to keep track of the loss m = X.shape[1] # number of examples layers_dims = [X.shape[0], 10, 5, 1] # Initialize parameters dictionary. 
if initialization == "zeros": parameters = initialize_parameters_zeros(layers_dims) elif initialization == "random": parameters = initialize_parameters_random(layers_dims) elif initialization == "he": parameters = initialize_parameters_he(layers_dims) # Loop (gradient descent) for i in range(0, num_iterations): # Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID. a3, cache = forward_propagation(X, parameters) # Loss cost = compute_loss(a3, Y) # Backward propagation. grads = backward_propagation(X, Y, cache) # Update parameters. parameters = update_parameters(parameters, grads, learning_rate) # Print the loss every 1000 iterations if print_cost and i % 1000 == 0: print("Cost after iteration {}: {}".format(i, cost)) costs.append(cost) # plot the loss plt.plot(costs) plt.ylabel('cost') plt.xlabel('iterations (per hundreds)') plt.title("Learning rate =" + str(learning_rate)) plt.show() return parameters # ## 2 - Zero initialization # # There are two types of parameters to initialize in a neural network: # - the weight matrices $(W^{[1]}, W^{[2]}, W^{[3]}, ..., W^{[L-1]}, W^{[L]})$ # - the bias vectors $(b^{[1]}, b^{[2]}, b^{[3]}, ..., b^{[L-1]}, b^{[L]})$ # # **Exercise**: Implement the following function to initialize all parameters to zeros. You'll see later that this does not work well since it fails to "break symmetry", but lets try it anyway and see what happens. Use np.zeros((..,..)) with the correct shapes. # + # GRADED FUNCTION: initialize_parameters_zeros def initialize_parameters_zeros(layers_dims): """ Arguments: layer_dims -- python array (list) containing the size of each layer. Returns: parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL": W1 -- weight matrix of shape (layers_dims[1], layers_dims[0]) b1 -- bias vector of shape (layers_dims[1], 1) ... WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1]) bL -- bias vector of shape (layers_dims[L], 1) """ parameters = {} L = len(layers_dims) # number of layers in the network for l in range(1, L): ### START CODE HERE ### (≈ 2 lines of code) parameters['W' + str(l)] = np.zeros((layers_dims[l],layers_dims[l-1])) parameters['b' + str(l)] = np.zeros((layers_dims[l],1)) ### END CODE HERE ### return parameters # - parameters = initialize_parameters_zeros([3,2,1]) print("W1 = " + str(parameters["W1"])) print("b1 = " + str(parameters["b1"])) print("W2 = " + str(parameters["W2"])) print("b2 = " + str(parameters["b2"])) # **Expected Output**: # # <table> # <tr> # <td> # **W1** # </td> # <td> # [[ 0. 0. 0.] # [ 0. 0. 0.]] # </td> # </tr> # <tr> # <td> # **b1** # </td> # <td> # [[ 0.] # [ 0.]] # </td> # </tr> # <tr> # <td> # **W2** # </td> # <td> # [[ 0. 0.]] # </td> # </tr> # <tr> # <td> # **b2** # </td> # <td> # [[ 0.]] # </td> # </tr> # # </table> # Run the following code to train your model on 15,000 iterations using zeros initialization. parameters = model(train_X, train_Y, initialization = "zeros") print ("On the train set:") predictions_train = predict(train_X, train_Y, parameters) print ("On the test set:") predictions_test = predict(test_X, test_Y, parameters) # The performance is really bad, and the cost does not really decrease, and the algorithm performs no better than random guessing. Why? 
Lets look at the details of the predictions and the decision boundary: print ("predictions_train = " + str(predictions_train)) print ("predictions_test = " + str(predictions_test)) plt.title("Model with Zeros initialization") axes = plt.gca() axes.set_xlim([-1.5,1.5]) axes.set_ylim([-1.5,1.5]) plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y) # The model is predicting 0 for every example. # # In general, initializing all the weights to zero results in the network failing to break symmetry. This means that every neuron in each layer will learn the same thing, and you might as well be training a neural network with $n^{[l]}=1$ for every layer, and the network is no more powerful than a linear classifier such as logistic regression. # <font color='blue'> # **What you should remember**: # - The weights $W^{[l]}$ should be initialized randomly to break symmetry. # - It is however okay to initialize the biases $b^{[l]}$ to zeros. Symmetry is still broken so long as $W^{[l]}$ is initialized randomly. # # ## 3 - Random initialization # # To break symmetry, lets intialize the weights randomly. Following random initialization, each neuron can then proceed to learn a different function of its inputs. In this exercise, you will see what happens if the weights are intialized randomly, but to very large values. # # **Exercise**: Implement the following function to initialize your weights to large random values (scaled by \*10) and your biases to zeros. Use `np.random.randn(..,..) * 10` for weights and `np.zeros((.., ..))` for biases. We are using a fixed `np.random.seed(..)` to make sure your "random" weights match ours, so don't worry if running several times your code gives you always the same initial values for the parameters. # + # GRADED FUNCTION: initialize_parameters_random def initialize_parameters_random(layers_dims): """ Arguments: layer_dims -- python array (list) containing the size of each layer. Returns: parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL": W1 -- weight matrix of shape (layers_dims[1], layers_dims[0]) b1 -- bias vector of shape (layers_dims[1], 1) ... WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1]) bL -- bias vector of shape (layers_dims[L], 1) """ np.random.seed(3) # This seed makes sure your "random" numbers will be the as ours parameters = {} L = len(layers_dims) # integer representing the number of layers for l in range(1, L): ### START CODE HERE ### (≈ 2 lines of code) parameters['W' + str(l)] = np.random.randn(layers_dims[l],layers_dims[l-1])*10 parameters['b' + str(l)] = np.zeros((layers_dims[l],1)) ### END CODE HERE ### return parameters # - parameters = initialize_parameters_random([3, 2, 1]) print("W1 = " + str(parameters["W1"])) print("b1 = " + str(parameters["b1"])) print("W2 = " + str(parameters["W2"])) print("b2 = " + str(parameters["b2"])) # **Expected Output**: # # <table> # <tr> # <td> # **W1** # </td> # <td> # [[ 17.88628473 4.36509851 0.96497468] # [-18.63492703 -2.77388203 -3.54758979]] # </td> # </tr> # <tr> # <td> # **b1** # </td> # <td> # [[ 0.] # [ 0.]] # </td> # </tr> # <tr> # <td> # **W2** # </td> # <td> # [[-0.82741481 -6.27000677]] # </td> # </tr> # <tr> # <td> # **b2** # </td> # <td> # [[ 0.]] # </td> # </tr> # # </table> # Run the following code to train your model on 15,000 iterations using random initialization. 
parameters = model(train_X, train_Y, initialization = "random") print ("On the train set:") predictions_train = predict(train_X, train_Y, parameters) print ("On the test set:") predictions_test = predict(test_X, test_Y, parameters) # If you see "inf" as the cost after the iteration 0, this is because of numerical roundoff; a more numerically sophisticated implementation would fix this. But this isn't worth worrying about for our purposes. # # Anyway, it looks like you have broken symmetry, and this gives better results. than before. The model is no longer outputting all 0s. print (predictions_train) print (predictions_test) plt.title("Model with large random initialization") axes = plt.gca() axes.set_xlim([-1.5,1.5]) axes.set_ylim([-1.5,1.5]) plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y) # **Observations**: # - The cost starts very high. This is because with large random-valued weights, the last activation (sigmoid) outputs results that are very close to 0 or 1 for some examples, and when it gets that example wrong it incurs a very high loss for that example. Indeed, when $\log(a^{[3]}) = \log(0)$, the loss goes to infinity. # - Poor initialization can lead to vanishing/exploding gradients, which also slows down the optimization algorithm. # - If you train this network longer you will see better results, but initializing with overly large random numbers slows down the optimization. # # <font color='blue'> # **In summary**: # - Initializing weights to very large random values does not work well. # - Hopefully intializing with small random values does better. The important question is: how small should be these random values be? Lets find out in the next part! # ## 4 - He initialization # # Finally, try "He Initialization"; this is named for the first author of He et al., 2015. (If you have heard of "Xavier initialization", this is similar except Xavier initialization uses a scaling factor for the weights $W^{[l]}$ of `sqrt(1./layers_dims[l-1])` where He initialization would use `sqrt(2./layers_dims[l-1])`.) # # **Exercise**: Implement the following function to initialize your parameters with He initialization. # # **Hint**: This function is similar to the previous `initialize_parameters_random(...)`. The only difference is that instead of multiplying `np.random.randn(..,..)` by 10, you will multiply it by $\sqrt{\frac{2}{\text{dimension of the previous layer}}}$, which is what He initialization recommends for layers with a ReLU activation. # + # GRADED FUNCTION: initialize_parameters_he def initialize_parameters_he(layers_dims): """ Arguments: layer_dims -- python array (list) containing the size of each layer. Returns: parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL": W1 -- weight matrix of shape (layers_dims[1], layers_dims[0]) b1 -- bias vector of shape (layers_dims[1], 1) ... 
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1]) bL -- bias vector of shape (layers_dims[L], 1) """ np.random.seed(3) parameters = {} L = len(layers_dims) - 1 # integer representing the number of layers for l in range(1, L + 1): ### START CODE HERE ### (≈ 2 lines of code) parameters['W' + str(l)] = np.random.randn(layers_dims[l],layers_dims[l-1])*(np.sqrt(2/layers_dims[l-1])) parameters['b' + str(l)] = np.zeros((layers_dims[l],1)) ### END CODE HERE ### return parameters # - parameters = initialize_parameters_he([2, 4, 1]) print("W1 = " + str(parameters["W1"])) print("b1 = " + str(parameters["b1"])) print("W2 = " + str(parameters["W2"])) print("b2 = " + str(parameters["b2"])) # **Expected Output**: # # <table> # <tr> # <td> # **W1** # </td> # <td> # [[ 1.78862847 0.43650985] # [ 0.09649747 -1.8634927 ] # [-0.2773882 -0.35475898] # [-0.08274148 -0.62700068]] # </td> # </tr> # <tr> # <td> # **b1** # </td> # <td> # [[ 0.] # [ 0.] # [ 0.] # [ 0.]] # </td> # </tr> # <tr> # <td> # **W2** # </td> # <td> # [[-0.03098412 -0.33744411 -0.92904268 0.62552248]] # </td> # </tr> # <tr> # <td> # **b2** # </td> # <td> # [[ 0.]] # </td> # </tr> # # </table> # Run the following code to train your model on 15,000 iterations using He initialization. parameters = model(train_X, train_Y, initialization = "he") print ("On the train set:") predictions_train = predict(train_X, train_Y, parameters) print ("On the test set:") predictions_test = predict(test_X, test_Y, parameters) plt.title("Model with He initialization") axes = plt.gca() axes.set_xlim([-1.5,1.5]) axes.set_ylim([-1.5,1.5]) plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y) # **Observations**: # - The model with He initialization separates the blue and the red dots very well in a small number of iterations. # # ## 5 - Conclusions # You have seen three different types of initializations. For the same number of iterations and same hyperparameters the comparison is: # # <table> # <tr> # <td> # **Model** # </td> # <td> # **Train accuracy** # </td> # <td> # **Problem/Comment** # </td> # # </tr> # <td> # 3-layer NN with zeros initialization # </td> # <td> # 50% # </td> # <td> # fails to break symmetry # </td> # <tr> # <td> # 3-layer NN with large random initialization # </td> # <td> # 83% # </td> # <td> # too large weights # </td> # </tr> # <tr> # <td> # 3-layer NN with He initialization # </td> # <td> # 99% # </td> # <td> # recommended method # </td> # </tr> # </table> # <font color='blue'> # **What you should remember from this notebook**: # - Different initializations lead to different results # - Random initialization is used to break symmetry and make sure different hidden units can learn different things # - Don't intialize to values that are too large # - He initialization works well for networks with ReLU activations.
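# As a small numerical aside (not part of the graded assignment), you can see the scale difference directly by sampling one 10x5 weight matrix under the "large random" scheme and under He scaling; only the order of magnitude matters here, not the exact numbers.

np.random.seed(3)
w_large = np.random.randn(10, 5) * 10              # "large random" initialization used above
w_he = np.random.randn(10, 5) * np.sqrt(2. / 5)    # He scaling for a layer with 5 inputs
print("std of large random weights:", w_large.std())
print("std of He-initialized weights:", w_he.std())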
deep-neural-network/Week 5/Initialization/Initialization.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3 (ipykernel)
#     language: python
#     name: python3
# ---

# [![img](img/pythonista.png)](https://www.pythonista.io)

# # Development of a simple web application.
#
# ## Objectives.
#
# 1. Display an HTML document with the complete data of the *dict* object contained in the representation of the *list* object stored in [data/alumnos.txt](data/alumnos.txt), when the value corresponding to the *'Cuenta'* identifier matches the number entered as part of the URL ```http://localhost:5000/alumno/<número>```.
#
# * Display an HTML document containing all the matches of a search for the string entered as part of the URL ```http://localhost:5000/busca/<cadena>``` within the *dict* objects contained in the representation of the *list* object stored in [data/alumnos.txt](data/alumnos.txt).
#     * The search will be performed on the values corresponding to the *'Nombre'*, *'Primer Apellido'* and *'Segundo Apellido'* identifiers.
#     * The HTML document will show the list of matching _dict_ objects, including the values corresponding to *'Nombre'*, *'Primer Apellido'* and *'Segundo Apellido'*, as well as a URL that includes the number corresponding to *Cuenta* in the format ```http://localhost:5000/alumno/<número>```.

# ## Templates.
#
# The HTML documents will be built from Jinja 2 templates.

# ### Template for ```http://localhost:5000/alumno/<número>```.
#
# The template [templates/despliega.html](templates/despliega.html) contains the following code:
#
# ```html
# <h1> Alumno {{ alumno['Cuenta'] }} </h1>
# <ul>
# <li>Nombre: {%for campo in ['Nombre', 'Primer Apellido', 'Segundo Apellido'] %}
#     {{alumno[campo]}}{% endfor %}</li>
# <li>Carrera: {{ alumno['Carrera'] }} </li>
# <li>Semestre: {{ alumno['Semestre'] }} </li>
# <li>Promedio: {{ alumno['Promedio'] }} </li>
# {% if alumno["Al Corriente"] %} <li>El alumno está al corriente de pagos.</li> {% endif %}
# </ul>
# ```

# ### Template for ```http://localhost:5000/busca/<cadena>```.
#
# The template [templates/busqueda_avanzada.html](templates/busqueda_avanzada.html) contains the following code:
#
# ``` html
# <h1> Alumnos Encontrados</h1>
# <ul>
# {%for alumno in alumnos %}
# <li> <a href={{ url_for('despliega', cuenta=alumno['Cuenta']) }}> {{ alumno['Cuenta']}}</a>:
#     {%for campo in ['Nombre', 'Primer Apellido', 'Segundo Apellido'] %}
#     {{alumno[campo]}}
#     {% endfor %} </li>
# {% endfor %}
# </ul>
# ```

# ## Application code.

# ### Data section.

campos = ('Nombre', 'Primer Apellido', 'Segundo Apellido')
ruta = 'data/alumnos.txt'

# ### The ```encuentra()``` function.
#
# * Searches for a string of characters within the indicated fields of a ```dict``` object.
# * If a match is found, the result is ```True```.

encuentra = lambda cadena, registro, campos: bool(sum([cadena.casefold() \
            in registro[campo].casefold() for campo in campos]))

# ### The ```buscar_archivo()``` function.
#
# * Reads the content of the text file indicated by the ```ruta``` parameter and transforms it with the ```eval()``` function. It is taken for granted that the ```base``` object is of type ```tuple``` or ```list```, which in turn contains ```dict``` objects.
#
# * The ```encuentra()``` function is applied to each element of the ```base``` object, and a list is built from those elements in which a match of the string exists in the indicated fields.

def buscar_archivo(cadena, ruta, campos):
    with open(ruta, 'tr') as archivo:
        base = eval(archivo.read())
    return [registro for registro in base if encuentra(cadena, registro, campos)]

# The required components are imported.

import jinja2
from flask import Flask, render_template, url_for, abort

# The *app* object is instantiated from the *Flask* class.

app = Flask(__name__)

# The view function for ```http://localhost:5000/busca/<cadena>``` is created.

@app.route('/busca/<cadena>')
def busca(cadena):
    return render_template('busqueda_avanzada.html', alumnos=buscar_archivo(str(cadena), ruta, campos))

# The view function for ```http://localhost:5000/alumno/<cuenta>``` is created.

@app.route('/alumno/<cuenta>')
def despliega(cuenta):
    falla = True
    with open(ruta, 'tr') as archivo:
        base = eval(archivo.read())
    for registro in base:
        try:
            if registro['Cuenta'] == int(cuenta):
                alumno = registro
                falla = False
                break
        except:
            pass
    if falla:
        abort(404)
    return render_template('despliega.html', alumno=alumno)

# The view function for a 404 error is created.

@app.errorhandler(404)
def no_encontrado(error):
    return '<h1> Error</h1><p>Recurso no encontrado.</p>', 404

# **Warning:** Once the following cell has been executed, it is necessary to interrupt the Jupyter kernel in order to run the rest of the cells in the notebook.

# If the host parameter is not defined, flask will only be visible from localhost
# app.run(host='localhost')
app.run(host="0.0.0.0", port=5000)

# **Examples:**
#
# * Returns the list of matches for the string *Ramos*.
#     * http://localhost:5000/busca/Ramos
#
# * Returns the record whose 'Cuenta' field equals *1231223*.
#     * http://localhost:5000/alumno/1231223
#
# * Returns the 404 error page.
#     * http://localhost:5000/alumno/1231217

# <p style="text-align: center"><a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Licencia Creative Commons" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/80x15.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>.</p>
# <p style="text-align: center">&copy; <NAME>. 2018.</p>
13_aplicacion_web_simple.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # Gamma Ray Normalisation # **Created by:** <NAME> # # This notebook illustrates carry out a simple normalisation on Gamma Ray data from the Volve Dataset. # Medium Article Link: # ## What is Normalization? # Normalization is the process of re-scaling or re-calibrating the well logs so that they are consistent with other logs in other wells within the field. This can be achieved by applying a single point normalization (linear shift) or a two point normalization ('stretch and squeeze') to the required curve. # # Normalization is commonly applied to gamma ray logs, but can be applied to neutron porosity, bulk density, sonic and spontaneous potential logs. Resistivity logs are generally not normalized unless there is a sufficient reason to do so (Shier, 2004). It should be noted that applying normalization can remove geological variations and features across the study area and should be considered carefully. Shier (2004) provides an excellent discussion and guidelines on how to carry out normalization on well log data. # ## Loading and Checking Data # The first step is to import the required libraries: pandas and matplotlib. import os import pandas as pd import matplotlib.pyplot as plt root = '/users/kai/desktop/data_science/data/dongara' well_name = 'DONGARA_26_file003' file_format = '.csv' data = pd.read_csv(os.path.join(root,well_name+file_format), header=0) data.head(1000) data['WELL'].unique() # Using the unique method on the dataframe, we can see that we have 3 wells within this Volve Data subset: # - 15/9-F-1 C # - 15/9-F-4 # - 15/9-F-7 # ## Plotting the Raw Data # wells = data.groupby('WELL') wells.head() wells.min() fig, ax = plt.subplots(figsize=(8,6)) for label, df in wells: df.GR.plot(kind ='kde', ax=ax, label=label) plt.xlim(0, 200) plt.grid(True) plt.legend() plt.savefig('before_normalisation.png', dpi=300) # From the plot above, we will assume that the key well is 15/9-F-7 and we will normalise the other two datasets to this one. # # ## Calculating the Percentiles # It is possible that datasets can contain erroneous values which may affect the minimum and the maximum values within a curve. Therefore, some interpreters prefer to base their normalisation parameters on percentiles. # # In this example, I have used the 5th and 95th percentiles. # # The first step is to calculate the percentile (or quantile as pandas refers to it) by grouping the data by wells and then applying the .quantile method to a specific column. In this case, GR. The quantile function takes in a decimal value, so a value of 0.05 is equivalent to the 5th percentile and 0.95 is equivalent to the 95th percentile. gr_percentile_05 = data.groupby('WELL')['GR'].quantile(0.05) print(gr_percentile_05) # This calculation generates a pandas Series object. We can see what is in the series by calling upon it like so. # So now we need to bring that back into our main dataframe. We can do this using the map function, which will combine two data series that share a common column. Once it is mapped we can call upon the `.describe()` method and confirm that it has been added to the dataframe. 
data['05_PERC'] = data['WELL'].map(gr_percentile_05)

data.describe()

# We can then repeat the process for the 95th percentile:

gr_percentile_95 = data.groupby('WELL')['GR'].quantile(0.95)

gr_percentile_95

data['95_PERC'] = data['WELL'].map(gr_percentile_95)

data.describe()

# ## Create the Normalisation Function

# In order to normalise the data, we need to create a custom function.
# The following equation comes from <NAME>: 'Well Log Normalization: Methods and Guidelines'.
#
# $$Curve_{norm} = Ref_{low} + (Ref_{high} - Ref_{low}) * \Bigg[ \frac {CurveValue - Well_{low}}{ Well_{high} - Well_{low}}\Bigg]$$

def normalise(curve, ref_low, ref_high, well_low, well_high):
    return ref_low + ((ref_high - ref_low) * ((curve - well_low) / (well_high - well_low)))

# We can now set the key well's high and low parameters.

key_well_low = 25.6464
key_well_high = 110.5413

# To apply the function to each value, using the correct percentiles for each well, we can use the `apply()` method on the pandas dataframe together with a `lambda` function that calls our custom function.

data['GR_NORM'] = data.apply(lambda x: normalise(x['GR'], key_well_low, key_well_high,
                                                 x['05_PERC'], x['95_PERC']), axis=1)

# ## Plotting the Normalised Data

# To view our final normalised data, we can re-use the plotting code from above. When we do, we can see that all curves have been normalised to our reference well.

fig, ax = plt.subplots(figsize=(8,6))
for label, df in wells:
    df.GR_NORM.plot(kind ='kde', ax=ax, label=label)
plt.xlim(0, 200)
plt.grid(True)
plt.legend()
plt.savefig('after_normalisation.png', dpi=300)
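# As a quick sanity check (not part of the original workflow), we can recompute the 5th and 95th percentiles of the normalised curve per well. If the two-point normalisation behaved as intended, they should sit close to the key well values used above (`key_well_low` and `key_well_high`), apart from small differences due to quantile interpolation.

# +
# Recompute the percentiles of the normalised curve for each well.
check = data.groupby('WELL')['GR_NORM'].quantile([0.05, 0.95]).unstack()
check.columns = ['P05_NORM', 'P95_NORM']
print(check)
print('Expected values: {} and {}'.format(key_well_low, key_well_high))
# -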
08 - Curve Normalisation.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Data Cleansing - CSV Files # Still, since our analysis involves gender based data, let's continue with our data cleansing for the other 7 sources we've considered also from [Open Source Data From Mexican Government]('https://datos.gob.mx/')<br /> # <ul> # <li>Graduated students</li> # <li>Taxable income</li> # <li>Escolarity levels</li> # <li>Active population on formal economical activities</li> # <li>Active population on informal economical activities</li> # <li>Working people who earn more than three minimum wage</li> # <li>Percentage of women on the sindical leadership</li> # </ul> # Since all these are csv and considerably small, I'll invoque read_csv method to pace up my analysis. # + import numpy as np import pandas as pd import matplotlib.pyplot as plt # %matplotlib inline # - # ## Graduates per Program grad = pd.read_csv('Data\graduados.csv') grad.sort_values(by='Programa',ascending=True) grad.info() # Again, we have no gender based data; next source: # ## Income per Taxes by Million MXN tax_inc=pd.read_csv('Data\IngresosTributarios.csv') tax_inc.shape tax_inc.info() # We have no gender based data but let's visualize it in case if needed in the future. tax_inc['Year'].unique() # Let's compare 2 diferent types of taxes from time-spaned 5 last years. # + fig, ((ax1,ax2)) = plt.subplots(nrows=2,ncols=1,figsize=(10,15)) #ISR - Fosils 5 last years x11 = tax_inc.loc[tax_inc['Year'] == 2010, 'Month'] y11 = tax_inc.loc[tax_inc['Year'] == 2014, 'Impuesto Sobre la Renta'] y12 = tax_inc.loc[tax_inc['Year'] == 2015, 'Impuesto Sobre la Renta'] y13 = tax_inc.loc[tax_inc['Year'] == 2016, 'Impuesto Sobre la Renta'] y14 = tax_inc.loc[tax_inc['Year'] == 2017, 'Impuesto Sobre la Renta'] y15 = tax_inc.loc[tax_inc['Year'] == 2018, 'Impuesto Sobre la Renta'] ax1.plot(x11,y11, color='firebrick', linewidth=1,marker='o', markersize=8, label='ISR on 2015') ax1.plot(x11,y12, color='gold', linewidth=1,marker='v', markersize=8, label='ISR on 2016') ax1.plot(x11,y13, color='green', linewidth=1,marker='1', markersize=8, label='ISR on 2017') ax1.plot(x11,y14, color='blue', linewidth=1,marker='s', markersize=8, label='ISR on 2018') ax1.plot(x11,y15, color='mediumorchid', linewidth=1,marker='p', markersize=8, label='ISR on 2019') ax1.set_yticks([tax_inc['Impuesto Sobre la Renta'].min(),tax_inc['Impuesto Sobre la Renta'].max()]) ax1.legend() #IEPS - Gasolinas y diesel - Fosils 5 last years x21 = tax_inc.loc[tax_inc['Year'] == 2010, 'Month'] y21 = tax_inc.loc[tax_inc['Year'] == 2014, 'IEPS - Gasolinas y diesel'] y22 = tax_inc.loc[tax_inc['Year'] == 2015, 'IEPS - Gasolinas y diesel'] y23 = tax_inc.loc[tax_inc['Year'] == 2016, 'IEPS - Gasolinas y diesel'] y24 = tax_inc.loc[tax_inc['Year'] == 2017, 'IEPS - Gasolinas y diesel'] y25 = tax_inc.loc[tax_inc['Year'] == 2018, 'IEPS - Gasolinas y diesel'] ax2.plot(x21,y21, color='firebrick', linewidth=1,marker='o', markersize=8, label='IEPS on Fosils 2015') ax2.plot(x21,y22, color='gold', linewidth=1,marker='v', markersize=8, label='IEPS on Fosils 2016') ax2.plot(x21,y23, color='green', linewidth=1,marker='1', markersize=8, label='IEPS on Fosils 2017') ax2.plot(x21,y24, color='blue', linewidth=1,marker='s', markersize=8, label='IEPS on Fosils 2018') ax2.plot(x21,y25, color='mediumorchid', linewidth=1,marker='p', markersize=8, label='IEPS on Fosils 2019') 
ax2.set_yticks([tax_inc['IEPS - Gasolinas y diesel'].min(),tax_inc['IEPS - Gasolinas y diesel'].max()]) ax2.legend() # - # #### A brief cleansing for data: # With a sample: tax_inc.head() # We can see that at least four columns have 'n.d.' as not defined or nan or null values, so let's assign the mean to the null values. <br /> # First let's get the proper count of how many on or sample we have to see if it would be significant to replace witht the mean. tax_inc['IEPS - Alimentos alta densidad calorica']=tax_inc['IEPS - Alimentos alta densidad calorica'].replace('n.d.',np.NaN) tax_inc['IEPS - Plaguicidas']=tax_inc['IEPS - Plaguicidas'].replace('n.d.',np.NaN) tax_inc['IEPS - Carbono']=tax_inc['IEPS - Carbono'].replace('n.d.',np.NaN) tax_inc['Impuesto por la Actividad de Exploracion y Extraccion de Hidrocarburos']=tax_inc['Impuesto por la Actividad de Exploracion y Extraccion de Hidrocarburos'].replace('n.a',np.NaN) # If we get the info from the dataframe again, we can see the Null value counts out of our sample from 113 rows to see how to replace them significantly. tax_inc[[ 'IEPS - Alimentos alta densidad calorica', 'IEPS - Plaguicidas', 'IEPS - Carbono', 'Impuesto por la Actividad de Exploracion y Extraccion de Hidrocarburos']].info() # Since the amount of non-null is barely more than 50% of the records, we don't have enough information about with which data to replace null values. And we'll continue exploring our sources to get gender-based data. # ## Escolaridad escolarity=pd.read_csv('Data\escolaridad_Over15yo.csv') escolarity # ## Ocupied Pop by Ec Activity # Amount of population by gender who are part of remunerated activities: pop_ec=pd.read_csv('Data\Poblacion_Ocupada_Actividad_Economica.csv') pop_ec # As we can see on the `.info()` method this source can be used on our brief analysis to check the glass ceiling on mexican women. # ## Ocupied Pop by Informal Activity # Amount of population by gender who are part of formal or informal labour condition in México. pop_inf=pd.read_csv('Data\Poblacion_Ocupada_Condicion_Informalidad.csv') pop_inf # As we can see on the `.info()` method this source can be used on our brief analysis to check the glass ceiling on mexican women. # ## Percentage of Asalariados que ganan hasta 3 salarios mínimos asalariados=pd.read_csv('Data\Porcentaje_de_Asalariados_que_Ganan_hasta_tres_Salarios_Minimos.csv') asalariados # As per dataframe glance, let's remove the last rows since they were part of a note as per csv was saved. asalariados.dropna(thresh=6) # ## Women who are sindical leaders sindical=pd.read_csv('Data\Porcentaje_de_mujeres_en_la_dirigencia_sindical_UdR02.csv') sindical
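# For completeness only: if a column's coverage were high enough to justify imputation, a simple mean fill could look like the sketch below. The column names come from the tax dataframe above; the sketch works on a copy so the decision not to impute in this analysis is preserved.

# +
# Illustrative only: convert the flagged columns to numeric and fill NaN with each column's mean.
tax_demo = tax_inc.copy()
cols_nd = ['IEPS - Alimentos alta densidad calorica', 'IEPS - Plaguicidas', 'IEPS - Carbono']
for col in cols_nd:
    tax_demo[col] = pd.to_numeric(tax_demo[col], errors='coerce')
    tax_demo[col] = tax_demo[col].fillna(tax_demo[col].mean())

tax_demo[cols_nd].describe()
# -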
.ipynb_checkpoints/DataCleansing_CSVFiles-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.6 (tensorflow) # language: python # name: tensorflow # --- # # T81-558: Applications of Deep Neural Networks # **Module 14: Other Neural Network Techniques** # * Instructor: [<NAME>](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx) # * For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). # # Module 14 Video Material # # * Part 14.1: What is AutoML [[Video]](https://www.youtube.com/watch?v=TFUysIR5AB0&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_14_01_automl.ipynb) # * Part 14.2: Using Denoising AutoEncoders in Keras [[Video]](https://www.youtube.com/watch?v=4bTSu6_fucc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_14_02_auto_encode.ipynb) # * Part 14.3: Training an Intrusion Detection System with KDD99 [[Video]](https://www.youtube.com/watch?v=1ySn6h2A68I&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_14_03_anomaly.ipynb) # * **Part 14.4: Anomaly Detection in Keras** [[Video]](https://www.youtube.com/watch?v=VgyKQ5MTDFc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_14_04_ids_kdd99.ipynb) # * Part 14.5: The Deep Learning Technologies I am Excited About [[Video]]() [[Notebook]](t81_558_class_14_05_new_tech.ipynb) # # # # Part 14.4: Training an Intrusion Detection System with KDD99 # # The [KDD-99 dataset](http://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html) is very famous in the security field and almost a "hello world" of intrusion detection systems in machine learning. # # # Read in Raw KDD-99 Dataset # + import pandas as pd from tensorflow.keras.utils import get_file try: path = get_file('kddcup.data_10_percent.gz', origin='http://kdd.ics.uci.edu/databases/kddcup99/kddcup.data_10_percent.gz') except: print('Error downloading') raise print(path) # This file is a CSV, just no CSV extension or headers # Download from: http://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html df = pd.read_csv(path, header=None) print("Read {} rows.".format(len(df))) # df = df.sample(frac=0.1, replace=False) # Uncomment this line to sample only 10% of the dataset df.dropna(inplace=True,axis=1) # For now, just drop NA's (rows with missing values) # The CSV file has no column heads, so add them df.columns = [ 'duration', 'protocol_type', 'service', 'flag', 'src_bytes', 'dst_bytes', 'land', 'wrong_fragment', 'urgent', 'hot', 'num_failed_logins', 'logged_in', 'num_compromised', 'root_shell', 'su_attempted', 'num_root', 'num_file_creations', 'num_shells', 'num_access_files', 'num_outbound_cmds', 'is_host_login', 'is_guest_login', 'count', 'srv_count', 'serror_rate', 'srv_serror_rate', 'rerror_rate', 'srv_rerror_rate', 'same_srv_rate', 'diff_srv_rate', 'srv_diff_host_rate', 'dst_host_count', 'dst_host_srv_count', 'dst_host_same_srv_rate', 'dst_host_diff_srv_rate', 'dst_host_same_src_port_rate', 'dst_host_srv_diff_host_rate', 'dst_host_serror_rate', 'dst_host_srv_serror_rate', 'dst_host_rerror_rate', 'dst_host_srv_rerror_rate', 'outcome' ] # display 5 rows df[0:5] # - # # Analyzing a Dataset # # The following script can be used to give a high-level overview of how a dataset appears. 
# + ENCODING = 'utf-8' def expand_categories(values): result = [] s = values.value_counts() t = float(len(values)) for v in s.index: result.append("{}:{}%".format(v,round(100*(s[v]/t),2))) return "[{}]".format(",".join(result)) def analyze(df): print() cols = df.columns.values total = float(len(df)) print("{} rows".format(int(total))) for col in cols: uniques = df[col].unique() unique_count = len(uniques) if unique_count>100: print("** {}:{} ({}%)".format(col,unique_count,int(((unique_count)/total)*100))) else: print("** {}:{}".format(col,expand_categories(df[col]))) expand_categories(df[col]) # + # Analyze KDD-99 import pandas as pd import os import numpy as np from sklearn import metrics from scipy.stats import zscore analyze(df) # - # # Encode the feature vector # Encode every row in the database. This is not instant! # + # Encode a numeric column as zscores def encode_numeric_zscore(df, name, mean=None, sd=None): if mean is None: mean = df[name].mean() if sd is None: sd = df[name].std() df[name] = (df[name] - mean) / sd # Encode text values to dummy variables(i.e. [1,0,0],[0,1,0],[0,0,1] for red,green,blue) def encode_text_dummy(df, name): dummies = pd.get_dummies(df[name]) for x in dummies.columns: dummy_name = f"{name}-{x}" df[dummy_name] = dummies[x] df.drop(name, axis=1, inplace=True) # + # Now encode the feature vector encode_numeric_zscore(df, 'duration') encode_text_dummy(df, 'protocol_type') encode_text_dummy(df, 'service') encode_text_dummy(df, 'flag') encode_numeric_zscore(df, 'src_bytes') encode_numeric_zscore(df, 'dst_bytes') encode_text_dummy(df, 'land') encode_numeric_zscore(df, 'wrong_fragment') encode_numeric_zscore(df, 'urgent') encode_numeric_zscore(df, 'hot') encode_numeric_zscore(df, 'num_failed_logins') encode_text_dummy(df, 'logged_in') encode_numeric_zscore(df, 'num_compromised') encode_numeric_zscore(df, 'root_shell') encode_numeric_zscore(df, 'su_attempted') encode_numeric_zscore(df, 'num_root') encode_numeric_zscore(df, 'num_file_creations') encode_numeric_zscore(df, 'num_shells') encode_numeric_zscore(df, 'num_access_files') encode_numeric_zscore(df, 'num_outbound_cmds') encode_text_dummy(df, 'is_host_login') encode_text_dummy(df, 'is_guest_login') encode_numeric_zscore(df, 'count') encode_numeric_zscore(df, 'srv_count') encode_numeric_zscore(df, 'serror_rate') encode_numeric_zscore(df, 'srv_serror_rate') encode_numeric_zscore(df, 'rerror_rate') encode_numeric_zscore(df, 'srv_rerror_rate') encode_numeric_zscore(df, 'same_srv_rate') encode_numeric_zscore(df, 'diff_srv_rate') encode_numeric_zscore(df, 'srv_diff_host_rate') encode_numeric_zscore(df, 'dst_host_count') encode_numeric_zscore(df, 'dst_host_srv_count') encode_numeric_zscore(df, 'dst_host_same_srv_rate') encode_numeric_zscore(df, 'dst_host_diff_srv_rate') encode_numeric_zscore(df, 'dst_host_same_src_port_rate') encode_numeric_zscore(df, 'dst_host_srv_diff_host_rate') encode_numeric_zscore(df, 'dst_host_serror_rate') encode_numeric_zscore(df, 'dst_host_srv_serror_rate') encode_numeric_zscore(df, 'dst_host_rerror_rate') encode_numeric_zscore(df, 'dst_host_srv_rerror_rate') # display 5 rows df.dropna(inplace=True,axis=1) df[0:5] # This is the numeric feature vector, as it goes to the neural net # Convert to numpy - Classification x_columns = df.columns.drop('outcome') x = df[x_columns].values dummies = pd.get_dummies(df['outcome']) # Classification outcomes = dummies.columns num_classes = len(outcomes) y = dummies.values # - df.groupby('outcome')['outcome'].count() # # Train the Neural Network # + import 
pandas as pd
import io
import requests
import numpy as np
import os
from sklearn.model_selection import train_test_split
from sklearn import metrics
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation
from tensorflow.keras.callbacks import EarlyStopping

# Create a test/train split.  25% test
# Split into train/test
x_train, x_test, y_train, y_test = train_test_split(
    x, y, test_size=0.25, random_state=42)

# Create neural net
model = Sequential()
model.add(Dense(10, input_dim=x.shape[1], activation='relu'))
model.add(Dense(50, activation='relu'))
model.add(Dense(10, activation='relu'))
# Output layer: one unit per outcome class, softmax for multi-class classification.
model.add(Dense(y.shape[1], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam')
monitor = EarlyStopping(monitor='val_loss', min_delta=1e-3, patience=5,
                        verbose=1, mode='auto')
model.fit(x_train, y_train, validation_data=(x_test, y_test),
          callbacks=[monitor], verbose=2, epochs=1000)
# -

# Measure accuracy
pred = model.predict(x_test)
pred = np.argmax(pred, axis=1)
y_eval = np.argmax(y_test, axis=1)
score = metrics.accuracy_score(y_eval, pred)
print("Validation score: {}".format(score))
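# Accuracy alone can be misleading on KDD-99 because the outcome classes are highly imbalanced (a few attack types and normal traffic dominate). As a supplementary check, not part of the original lesson, we can reuse `pred` and `y_eval` from the cell above to print a per-class breakdown with scikit-learn; class indices follow the dummy column order in `outcomes`.

# Per-class precision/recall on the validation split.
print(metrics.classification_report(y_eval, pred,
                                    labels=list(range(len(outcomes))),
                                    target_names=[str(c) for c in outcomes]))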
t81_558_class_14_04_ids_kdd99.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.6 # language: python # name: python36 # --- # Copyright (c) Microsoft Corporation. All rights reserved. # # Licensed under the MIT License. # ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.png) # # Automated Machine Learning # _**Forecasting using the Energy Demand Dataset**_ # # ## Contents # 1. [Introduction](#introduction) # 1. [Setup](#setup) # 1. [Data and Forecasting Configurations](#data) # 1. [Train](#train) # 1. [Generate and Evaluate the Forecast](#forecast) # # Advanced Forecasting # 1. [Advanced Training](#advanced_training) # 1. [Advanced Results](#advanced_results) # # Introduction<a id="introduction"></a> # # In this example we use the associated New York City energy demand dataset to showcase how you can use AutoML for a simple forecasting problem and explore the results. The goal is predict the energy demand for the next 48 hours based on historic time-series data. # # If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) first, if you haven't already, to establish your connection to the AzureML Workspace. # # In this notebook you will learn how to: # 1. Creating an Experiment using an existing Workspace # 1. Configure AutoML using 'AutoMLConfig' # 1. Train the model using AmlCompute # 1. Explore the engineered features and results # 1. Generate the forecast and compute the out-of-sample accuracy metrics # 1. Configuration and remote run of AutoML for a time-series model with lag and rolling window features # 1. Run and explore the forecast with lagging features # # Setup<a id="setup"></a> # + import logging from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score from matplotlib import pyplot as plt import pandas as pd import numpy as np import warnings import os # Squash warning messages for cleaner output in the notebook warnings.showwarning = lambda *args, **kwargs: None import azureml.core from azureml.core import Experiment, Workspace, Dataset from azureml.train.automl import AutoMLConfig from datetime import datetime # - # This sample notebook may use features that are not available in previous versions of the Azure ML SDK. print("This notebook was created using version 1.35.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") # As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. 
# + ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = 'automl-forecasting-energydemand' # # project folder # project_folder = './sample_projects/automl-forecasting-energy-demand' experiment = Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Run History Name'] = experiment_name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T # - # ## Create or Attach existing AmlCompute # A compute target is required to execute a remote Automated ML run. # # [Azure Machine Learning Compute](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute) is a managed-compute infrastructure that allows the user to easily create a single or multi-node compute. In this tutorial, you create AmlCompute as your training compute resource. # # #### Creation of AmlCompute takes approximately 5 minutes. # If the AmlCompute with that name is already in your workspace this code will skip the creation process. # As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota. # + from azureml.core.compute import ComputeTarget, AmlCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. amlcompute_cluster_name = "energy-cluster" # Verify that cluster does not exist already try: compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name) print('Found existing cluster, use it.') except ComputeTargetException: compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS12_V2', max_nodes=6) compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config) compute_target.wait_for_completion(show_output=True) # - # # Data<a id="data"></a> # # We will use energy consumption [data from New York City](http://mis.nyiso.com/public/P-58Blist.htm) for model training. The data is stored in a tabular format and includes energy demand and basic weather data at an hourly frequency. # # With Azure Machine Learning datasets you can keep a single copy of data in your storage, easily access data during model training, share data and collaborate with other users. Below, we will upload the datatset and create a [tabular dataset](https://docs.microsoft.com/bs-latn-ba/azure/machine-learning/service/how-to-create-register-datasets#dataset-types) to be used training and prediction. # Let's set up what we know about the dataset. # # <b>Target column</b> is what we want to forecast.<br></br> # <b>Time column</b> is the time axis along which to predict. # # The other columns, "temp" and "precip", are implicitly designated as features. target_column_name = 'demand' time_column_name = 'timeStamp' dataset = Dataset.Tabular.from_delimited_files(path = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/nyc_energy.csv").with_timestamp_columns(fine_grain_timestamp=time_column_name) dataset.take(5).to_pandas_dataframe().reset_index(drop=True) # The NYC Energy dataset is missing energy demand values for all datetimes later than August 10th, 2017 5AM. 
Below, we trim the rows containing these missing values from the end of the dataset. # Cut off the end of the dataset due to large number of nan values dataset = dataset.time_before(datetime(2017, 10, 10, 5)) # ## Split the data into train and test sets # The first split we make is into train and test sets. Note that we are splitting on time. Data before and including August 8th, 2017 5AM will be used for training, and data after will be used for testing. # split into train based on time train = dataset.time_before(datetime(2017, 8, 8, 5), include_boundary=True) train.to_pandas_dataframe().reset_index(drop=True).sort_values(time_column_name).tail(5) # split into test based on time test = dataset.time_between(datetime(2017, 8, 8, 6), datetime(2017, 8, 10, 5)) test.to_pandas_dataframe().reset_index(drop=True).head(5) # ### Setting the maximum forecast horizon # # The forecast horizon is the number of periods into the future that the model should predict. It is generally recommend that users set forecast horizons to less than 100 time periods (i.e. less than 100 hours in the NYC energy example). Furthermore, **AutoML's memory use and computation time increase in proportion to the length of the horizon**, so consider carefully how this value is set. If a long horizon forecast really is necessary, consider aggregating the series to a coarser time scale. # # Learn more about forecast horizons in our [Auto-train a time-series forecast model](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-auto-train-forecast#configure-and-run-experiment) guide. # # In this example, we set the horizon to 48 hours. forecast_horizon = 48 # ## Forecasting Parameters # To define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameter we will be passing into our experiment. # # |Property|Description| # |-|-| # |**time_column_name**|The name of your time column.| # |**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).| # |**freq**|Forecast frequency. This optional parameter represents the period with which the forecast is desired, for example, daily, weekly, yearly, etc. Use this parameter for the correction of time series containing irregular data points or for padding of short time series. The frequency needs to be a pandas offset alias. Please refer to [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#dateoffset-objects) for more information. # # Train<a id="train"></a> # # Instantiate an AutoMLConfig object. This config defines the settings and data used to run the experiment. We can provide extra configurations within 'automl_settings', for this forecasting task we add the forecasting parameters to hold all the additional forecasting parameters. # # |Property|Description| # |-|-| # |**task**|forecasting| # |**primary_metric**|This is the metric that you want to optimize.<br> Forecasting supports the following primary metrics <br><i>spearman_correlation</i><br><i>normalized_root_mean_squared_error</i><br><i>r2_score</i><br><i>normalized_mean_absolute_error</i>| # |**blocked_models**|Models in blocked_models won't be used by AutoML. 
All supported models can be found at [here](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.forecasting?view=azure-ml-py).| # |**experiment_timeout_hours**|Maximum amount of time in hours that the experiment take before it terminates.| # |**training_data**|The training data to be used within the experiment.| # |**label_column_name**|The name of the label column.| # |**compute_target**|The remote compute for training.| # |**n_cross_validations**|Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way.| # |**enable_early_stopping**|Flag to enble early termination if the score is not improving in the short term.| # |**forecasting_parameters**|A class holds all the forecasting related parameters.| # # This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the experiment_timeout_hours parameter value to get results. # + from azureml.automl.core.forecasting_parameters import ForecastingParameters forecasting_parameters = ForecastingParameters( time_column_name=time_column_name, forecast_horizon=forecast_horizon, freq='H' # Set the forecast frequency to be hourly ) automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blocked_models = ['ExtremeRandomTrees', 'AutoArima', 'Prophet'], experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping=True, n_cross_validations=3, verbosity=logging.INFO, forecasting_parameters=forecasting_parameters) # - # Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while. # One may specify `show_output = True` to print currently running iterations to the console. remote_run = experiment.submit(automl_config, show_output=False) remote_run.wait_for_completion() # ## Retrieve the Best Model # Below we select the best model from all the training iterations using get_output method. best_run, fitted_model = remote_run.get_output() fitted_model.steps # ## Featurization # You can access the engineered feature names generated in time-series featurization. fitted_model.named_steps['timeseriestransformer'].get_engineered_feature_names() # ### View featurization summary # You can also see what featurization steps were performed on different raw features in the user data. For each raw feature in the user data, the following information is displayed: # # + Raw feature name # + Number of engineered features formed out of this raw feature # + Type detected # + If feature was dropped # + List of feature transformations for the raw feature # Get the featurization summary as a list of JSON featurization_summary = fitted_model.named_steps['timeseriestransformer'].get_featurization_summary() # View the featurization summary as a pandas dataframe pd.DataFrame.from_records(featurization_summary) # # Forecasting<a id="forecast"></a> # # Now that we have retrieved the best pipeline/model, it can be used to make predictions on test data. We will do batch scoring on the test dataset which should have the same schema as training dataset. # # The inference will run on a remote compute. In this example, it will re-use the training compute. 
test_experiment = Experiment(ws, experiment_name + "_inference") # ### Retreiving forecasts from the model # We have created a function called `run_forecast` that submits the test data to the best model determined during the training run and retrieves forecasts. This function uses a helper script `forecasting_script` which is uploaded and expecuted on the remote compute. # + from run_forecast import run_remote_inference remote_run_infer = run_remote_inference(test_experiment=test_experiment, compute_target=compute_target, train_run=best_run, test_dataset=test, target_column_name=target_column_name) remote_run_infer.wait_for_completion(show_output=False) # download the inference output file to the local machine remote_run_infer.download_file('outputs/predictions.csv', 'predictions.csv') # - # ### Evaluate # To evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, included the mean absolute percentage error (MAPE). For more metrics that can be used for evaluation after training, please see [supported metrics](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-ml#regressionforecasting-metrics), and [how to calculate residuals](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-ml#residuals). # load forecast data frame fcst_df = pd.read_csv('predictions.csv', parse_dates=[time_column_name]) fcst_df.head() # + from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=fcst_df[target_column_name], y_pred=fcst_df['predicted'], metrics=list(constants.Metric.SCALAR_REGRESSION_SET)) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs # %matplotlib inline test_pred = plt.scatter(fcst_df[target_column_name], fcst_df['predicted'], color='b') test_test = plt.scatter(fcst_df[target_column_name], fcst_df[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() # - # # Advanced Training <a id="advanced_training"></a> # We did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. # ### Using lags and rolling window features # Now we will configure the target lags, that is the previous values of the target variables, meaning the prediction is no longer horizon-less. We therefore must still specify the `forecast_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features. # # This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. 
You can choose to remove models from the blocked_models list but you may need to increase the iteration_timeout_minutes parameter value to get results. # + advanced_forecasting_parameters = ForecastingParameters( time_column_name=time_column_name, forecast_horizon=forecast_horizon, target_lags=12, target_rolling_window_size=4 ) automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blocked_models = ['ElasticNet','ExtremeRandomTrees','GradientBoosting','XGBoostRegressor','ExtremeRandomTrees', 'AutoArima', 'Prophet'], #These models are blocked for tutorial purposes, remove this for real use cases. experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping = True, n_cross_validations=3, verbosity=logging.INFO, forecasting_parameters=advanced_forecasting_parameters) # - # We now start a new remote run, this time with lag and rolling window featurization. AutoML applies featurizations in the setup stage, prior to iterating over ML models. The full training set is featurized first, followed by featurization of each of the CV splits. Lag and rolling window features introduce additional complexity, so the run will take longer than in the previous example that lacked these featurizations. advanced_remote_run = experiment.submit(automl_config, show_output=False) advanced_remote_run.wait_for_completion() # ### Retrieve the Best Model best_run_lags, fitted_model_lags = advanced_remote_run.get_output() # # Advanced Results<a id="advanced_results"></a> # We did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. 
# + test_experiment_advanced = Experiment(ws, experiment_name + "_inference_advanced") advanced_remote_run_infer = run_remote_inference(test_experiment=test_experiment_advanced, compute_target=compute_target, train_run=best_run_lags, test_dataset=test, target_column_name=target_column_name, inference_folder='./forecast_advanced') advanced_remote_run_infer.wait_for_completion(show_output=False) # download the inference output file to the local machine advanced_remote_run_infer.download_file('outputs/predictions.csv', 'predictions_advanced.csv') # - fcst_adv_df = pd.read_csv('predictions_advanced.csv', parse_dates=[time_column_name]) fcst_adv_df.head() # + from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=fcst_adv_df[target_column_name], y_pred=fcst_adv_df['predicted'], metrics=list(constants.Metric.SCALAR_REGRESSION_SET)) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs # %matplotlib inline test_pred = plt.scatter(fcst_adv_df[target_column_name], fcst_adv_df['predicted'], color='b') test_test = plt.scatter(fcst_adv_df[target_column_name], fcst_adv_df[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show()
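# The evaluation text above mentions MAPE, but the scoring helper prints other regression metrics. As a small addition that is not part of the shipped sample, the MAPE of both runs can be computed directly from the downloaded prediction frames; this assumes the actual demand values are non-zero.

# +
def mape(actual, pred):
    """Mean absolute percentage error; assumes actual values are non-zero."""
    actual, pred = np.asarray(actual, dtype=float), np.asarray(pred, dtype=float)
    return np.mean(np.abs((actual - pred) / actual)) * 100

print("Simple model MAPE:   {:.2f}%".format(mape(fcst_df[target_column_name], fcst_df['predicted'])))
print("Advanced model MAPE: {:.2f}%".format(mape(fcst_adv_df[target_column_name], fcst_adv_df['predicted'])))
# -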
how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 2
#     language: python
#     name: python2
# ---

# # Network Analysis
#
# ---
#
#
# ## Introduction
#
# Networks are mathematical or graphical representations of patterns of relationships between entities. These relationships are defined by some measure of "closeness" between individuals, and can exist in an abstract or actual space (for example, whether you are related to someone versus how far away you live from each other). Networks have been used to model everything from airplane traffic to supply chains, and even amorphous materials like window glass, cells, and proteins. They can also be used to model relationships among people. Social networks are patterns of relationships among people or organizations that affect and are affected by actions of individuals within the network. Network analysis captures the effect of the complete pattern of connections among individuals in a group to help us perform structural analysis of outcomes of interest for individuals and the group as a whole.
#
# Networks can be represented as **graphs**, where a graph is made up of **nodes** connected by **ties**. The flexibility of network analysis means that the first step toward analysis is to clearly define what constitutes a node and what constitutes a tie in your network. There are several types of graphs: connected, unconnected, directional, and many more (see [glossary](#glossary-of-terms) for a list of terms).
#
# This tutorial is based on Chapter 8 of [Big Data and Social Science](https://github.com/BigDataSocialScience).
#
#
#
# ## Glossary of Terms
# - A **node** is an individual entity within a graph.
#
# - A **tie** is a link between nodes. Ties can be **undirected**, meaning they represent a symmetrical relationship, or **directed**, meaning they represent an asymmetrical relationship (one that doesn't necessarily go both ways).
#
# - A directed tie is known as an **arc**. An undirected tie is known as an **edge**. For example, if I am Facebook friends with <NAME>, then he is also Facebook friends with me.
#
# - A **cutpoint** is a *node* that cannot be removed without disconnecting the network.
#
# - A **bridge** is a *tie* that cannot be removed without disconnecting the network.
#
# - Two nodes are said to be **reachable** when they are connected by an unbroken chain of relationships through other nodes.
#
# - **Network density** is the number of *actual* connections in a network divided by the number of *potential* connections in that network.
#
# - **Average distance** is the average path length between nodes in a graph. It is a measure of how many nodes it takes to transmit information across the network. This metric is only valid for fully connected graphs.
#
# - **Centrality** is the degree to which a given node influences the entire network.
#
# ## Table of Contents
#
# 1. [Loading the Data](#Loading-the-data)
# 2. [Representations of Networks](#Representations-of-Networks)
#     1. [Adjacency Matrix](#Adjacency-matrix)
#     2. [List of Edges](#List-of-edges)
#     3. [Graphs](#Graphs)
# 3. [Network Measures](#network-measures)
#     1. [Summary Statistics](#summary-statistics)
#     2. [Degree Distribution](#Degree-Distribution)
#     3. [Components and Reachability](#Components-and-reachability)
#     4. [Path Length](#Path-Length)
# 4. [Centrality Metrics](#Centrality-metrics)
#     1. [Degree Centrality](#Degree-Centrality)
#     2.
[Closeness Centrality](#Closeness-Centrality) # 3. [Betweenness Centrality](#Betweenness-Centrality) # 5. [Cliques](#Cliques) # 6. [Community Detection](#Community-Detection) # 7. [Exercises](#Exercises) # 8. [Resources](#resources) # %pylab inline from __future__ import print_function import sys import community import networkx as nx import seaborn as sns import pandas as pd from sqlalchemy import create_engine # # Creating a Network # # In this tutorial we are going to explore employment patterns of individuals that have recently stoppped receiving tanf benefits. The first step in creating a network is defining the question or questions we want to explore using the network. This then allows us to define what a *node* and *tie* will be. In our case we want to explore employment patterns. A node in our case is a single individual, and a tie will exist between two individuals if they worked for the same employer as determined by the employer's ein number. # # The following is a SQL script to create a network of all people that exited in 2014 and were employed in the first quarter of 2015. First we are going to create a new table from the `idhs.ind_spells` table as `ada_class3.ind_spells_dates` with the start_date and end_date converted to date columns. This table is quite large so we are going to take of subset of the data and make two tables of people on tanf benefits. One is for people receiving tanf benefits for the last half of 2014 and and the second table if for people receiving tanf benefits for 2015. Then we can do a `LEFT JOIN` to find individuals who no longer received tanf benefits in 2015 that did receive benfits in 2014 and also a second `LEFT JOIN` to grab the `ssn_hash`. We can then grab the wage records for the first quarter of 2015 using the `ssn_hash`. From there we can to a `self-join` and join the table onto itself using the `ein` forming the network. EIN doesn't really tell us anything about the type of job somone has but the legal name of the business will. We can create a table of ein and legal name and join that to our network table. # # Note that every person has the same ein so we remove "self-ties", entries where ein is 000000000 and where the legal name is nan. 
# ``` # -- make a new table where the dates are date type rather # -- than text to do date manipulation # \echo "Munging The Data" # CREATE TABLE if NOT EXISTS ada_class3.ind_spells_dates AS # SELECT recptno, # benefit_type, # to_date(start_date::text,'YYYY-MM-DD') start_date, # to_date(end_date::text, 'YYYY-MM-DD') end_date # FROM idhs.ind_spells; # # -- subset for 2014 of everyone on tanf46 # CREATE TABLE if NOT EXISTS ada_class3.individual_spells_2014 AS # SELECT * # FROM ada_class3.ind_spells_dates # WHERE start_date > '2014-06-01' and # end_date > '2014-12-31' and # benefit_type = 'tanf46'; # -- make an index for faster queries # CREATE INDEX if NOT EXISTS recptno_ind # ON ada_class3.individual_spells_2014 (recptno); # # -- subset for 2015 of everone on tanf46 # CREATE TABLE if NOT EXISTS ada_class3.individual_spells_2015 AS # SELECT * # FROM ada_class3.ind_spells_dates # WHERE start_date > '2015-01-01' AND # end_date > '2015-12-31' and # benefit_type = 'tan46'; # -- make an index for faster queries # CREATE INDEX if NOT EXISTS receptno_ind # ON ada_class3.individual_spells_2015 (recptno); # # --grab the records of everyone in 2014 that did not have # --benefits in 2015 # CREATE TABLE if NOT EXISTS ada_class3.benefits_2014_not2015 as # SELECT a.recptno recptno_2014, # a.benefit_type benefit_type_2014, # a.start_date start_date_2014, # a.end_date end_date_2014, # b.recptno recptno_2015, # b.benefit_type benefit_type_2015, # b.start_date start_date_2015, # b.end_date end_date_2015, # c.ssn_hash ssn_hash # FROM ada_class3.individual_spells_2014 a # LEFT JOIN ada_class3.individual_spells_2015 b ON a.recptno = b.recptno # LEFT JOIN idhs.member c ON a.recptno = c.recptno # WHERE b.recptno IS NULL; # # --grab the first quarter date from the ides data # CREATE TABLE IF NOT EXISTS ada_class3.ssn_ein_2015_1 as # SELECT ssn, ein # FROM ides.il_wage # where ssn in (select distinct(ssn_hash) from ada_class3.benefits_2014_not2015) # and year = 2015 # and quarter = 1; # # CREATE TABLE IF NOT EXISTS ada_class3.ssn_ein AS # SELECT ssn,ein, count(*) # FROM ada_class3.ssn_ein_2015_1 # GROUP BY ssn, ein # ORDER BY 3 desc; # # \echo "making the network" # DROP TABLE IF EXISTS ada_class3.ein_network; # CREATE TABLE IF NOT EXISTS ada_class3.ein_network AS # SELECT a.ssn ssn_l, # a.ein, # b.ssn ssn_r # FROM ada_class3.ssn_ein a # JOIN ada_class3.ssn_ein b on a.ein = b.ein; # # DELETE FROM ada_class3.ein_network # WHERE ssn_l = ssn_r # OR ein = '000000000' # OR ein='0'; # # --map the ein number to legal name # -- of the entity. # # DROP TABLE IF EXISTS ada_class3.ein_name; # CREATE TABLE ada_class3.ein_name AS # SELECT ein, name_legal, count(*) # from ides.il_qcew_employers # group by ein, name_legal # order by 3 desc; # # DROP TABLE IF EXISTS ada_class3.ein_network_2015; # CREATE TABLE ada_class3.ein_network_2015 AS # SELECT n.ssn_l, n.ein, e.name_legal, n.ssn_r # FROM ada_class3.ein_network n # JOIN ada_class3.ein_name e ON n.ein = e.ein; # # DELETE FROM ada_class3.ein_network_2015 # WHERE name_legal = 'nan'; # ``` # # # # # Loading the Data # # In this tutorial we will explore graphical representations of this network, degree metrics, centrality metrics, how to calculate the shortest path between nodes, and community detection. We will be using the [NetworkX Python Library](https://networkx.github.io) developed at Los Alamos National Laboratory (LANL). # # First we have to load the data from the database. 
*Note we did the hard work of creating the network in SQL and now doing our more complex analysis in Python.* engine = create_engine("postgresql://10.10.2.10:5432/appliedda") df_network = pd.read_sql('SELECT * from ada_class3.ein_network_2015;', engine) df_network.head() network = list(zip(df_network.ssn_l, df_network.ssn_r)) G = nx.Graph() G.add_edges_from(network) # # Representations of Networks # # ## Adjacency Matrix # One way to represent networks is an **adjacency matrix**, a binary (all entries either 0 or 1) square matrix. Each row represents the connections between one node and the other nodes in the network. For instance, the first row represents the first node. Each entry in a row corresponding to a node represents possible connections to the other nodes as indicated by 1 (connected) or 0 (not connected). plt.figure(figsize=(30,30)) plt.spy(nx.adjacency_matrix(G)) # ## List of Edges # Graphs can also be represented as **edge lists**, where you list the connections between nodes exhaustively. If we know the graph is undirected, we only need to list each relationship one time. For example, we say that 1 is connected to 32, but it would be redundant to also say that 32 is connected to 1. Representing a network as an edge list is typically preferable to an adjacency matrix in the case of a sparse matrix -- where most of the entries of the matrix are 0 due to taking much less space to store. An edge list is typically how a network is stored in a database. network[:10] # ## Graphs # Networks can also be displayed as graphs, which is probably the most intuitive way to visualize them. The top visualization below emphasizes the nodes, or individuals, how close they are to one another, and the groups that emerge. # The visualization below emphasizes the edges, or the connections themselves. *Note: this network is too large to visualize* # + active="" # nx.draw(G) # - # Due to the large number of nodes this visualization is not helpful. Given that we can't derive much information from this particular visualization we need to turn to other network measures. # # Network Measures # It is useful to know the size (in terms of nodes and ties) of the network, both to have an idea of the size and connectivity of the network, and because most of the measures you will use to describe the network will need # to be standardized by the number of nodes or the number of potential connections. # # One of the most important things to understand about larger networks is the pattern of indirect connections among nodes, because it is these chains of indirect connections that make the network function as a whole, and make networks a # useful level of analysis. Much of the power of networks is due to indirect ties that create **reachability.** Two nodes can reach each other if they are connected by an unbroken chain of relationships, often called **indirect ties**. # # Structural differences between node positions, the presence and characteristics of smaller "communities" within larger networks, and properties of the structure of the whole group can be quantified using different **network measures.** # ## Summary Statistics # Print out some summary statistics on the network print( nx.info(G) ) # We see that there are 568892 ties (relationships) and 13716 nodes (individuals). # # The **average degree** of the network is the average number of edges connected to each node. 
#
# We see that the average degree of this network is 83, meaning that the average individual in the network is connected to 83 other individuals. Recall we made the tie based on EIN, which means that in the first quarter the average person in our network worked with 83 people also receiving benefits in 2014, indicating these people are often working the same types of jobs.

# Print out the average density of the network
print(nx.density(G))

# The average density is calculated as $$\text{average density} = \frac{\text{actual ties}}{\text{possible number of ties}} $$
#
# where the possible number of ties for an undirected graph (if every node had a tie to every other node) is $\frac{n(n-1)}{2}$.
#
# If every node were connected to every other node, the average density would be 1. If there were no ties between any of the nodes, the average density would be 0. The average density of this network is 0.0006, which indicates it is not a very dense network. In this example, we can interpret this to mean that individuals are mostly in small groups, and the groups don't overlap very much.

# Now that we have looked at some summary statistics as a whole we are going to drill down to the individual actors in our network.

# ## Degree Distribution (Who has the most relationships?)
#
#
# We can cast this question as a network analysis problem by asking *which node has the most ties*.

dict_degree = G.degree()
df_degree = pd.DataFrame.from_dict(dict_degree, orient='index')
df_degree.columns=['degree']
df_degree.index.name = 'node_id'

sns.set_style("whitegrid")
plt.figure(figsize=(22, 12))
sns.set_context("poster", font_scale=1.00, rc={"lines.linewidth": 1.00,"lines.markersize":8})
df_degree.sort_values(by='degree', ascending=False)[:10].plot(kind='barh')

# The last five entries have over 1000 connections. This likely means they work for a large company.

df_degree.sort_values(by='degree', ascending=False)[:10]

G.neighbors('a7cb780013ee0fa3a2c48874e9d1c9a06eafa8a6d46fe3898f9529efc6d7c982')

# ## Components and Reachability
#
# Two nodes are said to be **reachable** when they are connected by an unbroken chain of relationships through other nodes. Networks in which more of the possible connections (direct and indirect) among nodes are realized are denser and more cohesive than networks in which fewer of these connections are realized.
#
# The reachability of individuals in a network is determined by membership in **components**, which are subsets of the larger network in which every member of the group is indirectly connected to every other. Imagining the standard node and line drawing of a graph, a component is a portion of the network where you can trace a path between every pair of nodes without ever lifting your pen.
#
# Many larger networks consist of a single dominant component including anywhere from 50% to 90% of the individuals, and a few smaller components that are not connected. In this case, it is common to perform analysis on only the main connected component of the graph, because there is not a convenient way to mathematically represent how "far away" unconnected nodes are. In our employment network, the graph is not fully connected: there are smaller groups of workers that are cut off from the main component, so we will need to keep this in mind when we calculate path-based measures below.
#
#
# ## Path Length
#
# A **shortest path** between two nodes is a path from one node to the other, not repeating any nodes.
One way to think of a shortest path between two individuals is how many people it would take to broker an introduction between them (think [six degrees of Kevin Bacon](https://en.wikipedia.org/wiki/Six_Degrees_of_Kevin_Bacon)). # # Most pairs will have several "shortest paths" between them; the * shortest path* is called the **geodesic**. # Calculate the length of the shortest path between 12 and 15 ls_path = nx.shortest_path(G, 'a7cb780013ee0fa3a2c48874e9d1c9a06eafa8a6d46fe3898f9529efc6d7c982', '<KEY>') print('The path length from {} to {} is {}.'.format( 'a7cb780013ee0fa3a2c48874e9d1c9a06eafa8a6d46fe3898f9529efc6d7c982', '<KEY>', len(ls_path))) print('path length: ', ls_path) # In this case there is no path between the two nodes. # Calculate the length of the shortest path between 12 and 15 ls_path = nx.shortest_path(G, 'a7cb780013ee0fa3a2c48874e9d1c9a06eafa8a6d46fe3898f9529efc6d7c982', '92b3eaa82b2f68f96dd9c18dace00a642b6af88c1612b9ded6960c69389ce7eb') print('The path length from {} to {} is {}.'.format( 'a7cb780013ee0fa3a2c48874e9d1c9a06eafa8a6d46fe3898f9529efc6d7c982', '92b3eaa82b2f68f96dd9c18dace00a642b6af88c1612b9ded6960c69389ce7eb', len(ls_path))) print('path length: ', ls_path) # The **average shortest path length** describes how quickly information or goods can disburse through the network. # # The average shortest length $l$ is defined as $$ l = \frac{1}{n(n-1)} \sum_{i \ne j}d(v_{i},v_{j}) $$ where $n$ is the number of nodes in the graph and $d(v_{i},v_{j})$ is the shortest path length between nodes $i$ and $j$. print(nx.average_shortest_path_length(G)) # In this case, we cannot calculate the average shortest path, since our network is not fully connected (the network has islands within it that are cut off from the rest of the network). Since there is no way to calculate the distance between two nodes that can't be reached from one another, there is no way to calculate the average shortest distance across all pairs. # # Centrality Metrics # # Centrality metrics measure how important, or "central," a node is to the network. These can indicate what individual has the most social contacts, who is closest to people, or the person where information most transfers through. There are many **centrality metrics** -- degree centrality, betweenness centrality, closeness centrality, eigenvalue centrality, percolation centrality, PageRank -- all capturing different aspects of a node's contribution to a network. # # Centrality measures are the most commonly used means to explore network effects at the level of certain individual participants. Typically, these metrics identify and describe a few important nodes, but don't tell us much about the rest of the nodes in the network. This is akin to Google's search results: the first few matches are the most relevant, but if you go a few pages in to the search results, you might as well have been searching for something else entirely. # ## Degree Centrality (Who has the most relationships?) # # The most basic and intuitive measure of centrality, **degree centrality**, simply counts the number of ties that each node has. Degree centrality represents a clear measure of the prominence or visibility of a node. The degree centrality $C_{D}(x)$ of a node $x$ is # # $$C_{D}(x) = \frac{deg(x)}{n-1}$$ # # where $deg(x)$ is the number of connections that node $x$ has, and $n-1$ is a normalization factor for the total amount of possible connections. # # If a node has no connections to any other nodes, its degree centrality will be 0. 
# If it is directly connected to every other node, its degree centrality will be 1.

dict_degree_centrality = nx.degree_centrality(G)
df_degree_centrality = pd.DataFrame.from_dict(dict_degree_centrality, orient='index')
df_degree_centrality.columns = ['degree_centrality']
df_degree_centrality.index.name = 'node_id'
df_degree_centrality.sort_values(by='degree_centrality', ascending=False)[:10].plot(kind='barh')

# As we can see, this is simply a recasting of the [degree distribution](#degree-distribution).

# ## Closeness Centrality (Who has the shortest of shortest paths going between them?)
#
# **Closeness centrality** is based on the idea that networks position some individuals closer to or farther away
# from other individuals, and that shorter paths between actors increase the likelihood of communication, and
# consequently the ability to coordinate complicated activities. The closeness centrality $C_C(x)$ of a node $x$ is calculated as:
#
# $$C_C(x) = \frac{n-1}{\sum_{y}d(x,y)} $$
#
# where $d(x,y)$ is the length of the geodesic between nodes $x$ and $y$.

dict_closeness_centrality = {}
# Compute closeness for the first 25 nodes only; it is expensive to compute for the whole graph.
# Note: list() is needed in Python 3, where zip() returns an iterator rather than a list.
for ssn_hash in list(zip(*network[:25]))[0]:
    dict_closeness_centrality[ssn_hash] = nx.closeness_centrality(G, u=ssn_hash)
df_closeness_centrality = pd.DataFrame.from_dict(dict_closeness_centrality, orient='index')
df_closeness_centrality.columns = ['closeness_centrality']
df_closeness_centrality.index.name = 'node_id'
df_closeness_centrality.sort_values(by='closeness_centrality', ascending=False)[:10].plot(kind='barh')

# The last three individuals have the highest closeness centrality. This implies that these individuals are the most closely connected to the other members of the network. However, all of these individuals have a closeness centrality of around 0.025, so it is clear there is not really anyone in the dataset who is very closely connected to a lot of the other members. This makes sense given the other statistics we've calculated about this graph - there are lots of small, disconnected groups.

# ## Betweenness Centrality (Who has the most shortest paths between them?)
#
# Where closeness assumes that communication and information flow increase with proximity, **betweenness centrality**
# captures "brokerage," or the idea that a node that is positioned "in between" many other pairs of nodes gains some individual advantage. To calculate betweenness, we must assume that when people search for new
# information through networks, they are capable of identifying the shortest path (so that we know that the path between two nodes actually includes the "in between" node); additionally, we must assume
# that when multiple shortest paths exist, each path is equally likely to be chosen.
#
# The betweenness centrality $C_B(x)$ of a node $x$ is given by
#
# $$ C_B(x) = \sum_{s,t} \frac{\sigma_{st}(x)}{\sigma_{st}}$$
#
# where $\sigma_{st}$ is the number of shortest paths from node $s$ to node $t$ and $\sigma_{st}(x)$ is the number of those shortest paths that pass through node $x$. Intuitively, for each node, we look at how many of the shortest paths between every other pair of nodes include that node.
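# To build intuition before running this on our graph, here is a small illustrative example on a hypothetical seven-node graph (not our employment network; `networkx` is assumed to be imported as `nx`, as above). The node that bridges the two halves sits on every shortest path between them and therefore receives the highest betweenness centrality.

# +
# Illustrative only: node 3 is the bridge between two triangles and gets the highest score.
toy = nx.Graph([(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 5), (4, 6), (5, 6)])
print(nx.betweenness_centrality(toy))
# -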
dict_betweenness_centrality = nx.betweenness_centrality(G, k=50)
df_betweenness_centrality = pd.DataFrame.from_dict(dict_betweenness_centrality, orient='index')
df_betweenness_centrality.columns = ['betweenness_centrality']
df_betweenness_centrality.index.name = 'node_id'
df_betweenness_centrality.sort_values(by='betweenness_centrality', ascending=False)[:10].plot(kind='barh')

# Given the small values for betweenness centrality, it appears that there is no large single broker in this network.

# # Cliques
#
# A **clique** is a group of individuals who are all directly connected to one another, i.e. a fully connected sub-network. (`nx.find_cliques` returns the *maximal* cliques: those that are not contained in any larger clique.)
#
# In our case, this would be a group of individuals who are all connected to each other -- for example, because they worked for the same employer. We might expect to see a lot of cliques in this network, because we defined the relationships within our network based on these groupings.

cliques = list(nx.find_cliques(G))

import functools

# summary stats of cliques
num_cliques = len(cliques)
ls_len_cliqs = [len(cliq) for cliq in cliques]
max_clique_size = max(ls_len_cliqs)
avg_clique_size = np.mean(ls_len_cliqs)
max_cliques = [c for c in cliques if len(c) == max_clique_size]
max_clique_sets = [set(c) for c in max_cliques]
people_in_max_cliques = list(functools.reduce(lambda x, y: x.intersection(y), max_clique_sets))

print(num_cliques)
print(max_clique_size)
print(avg_clique_size)

# There are *2231* cliques in the network. The maximum clique size is *689* people and the average clique size is *7.60*, ~8 people.
#
# Let's see what the maximum cliques look like.

max_cliques

Graph_max_clique1 = G.subgraph(max_cliques[0])

# + active=""
# nx.draw(Graph_max_clique1, with_labels=False)
# -

df_network[ df_network['ssn_l'].isin(max_cliques[0]) & df_network['ssn_r'].isin(max_cliques[0])]

# It appears WalMart is a popular employer, and there are some smaller businesses that employ sub-populations of our largest clique.

# # Community Detection (This may take some time)
#
# In **community detection**, we try to find sub-networks, or communities, of densely connected nodes. Community detection is similar to clustering, in that strong communities will display an abundance of intra-community (within community) connections and few inter-community (between community) connections.
#
# Here we use the Louvain method; the technical details of the algorithm can be found [here](https://arxiv.org/pdf/0803.0476v2.pdf).
#

dict_clusters = community.best_partition(G)
clusters = [dict_clusters.get(node) for node in G.nodes()]
plt.axis("off")
#nx.draw_networkx(G,
#                 cmap = plt.get_cmap("terrain"),
#                 node_color = clusters,
#                 node_size = 600,
#                 with_labels = True,
#                 fontsize=200)

dict_clusters

# [Back to Table of Contents](#Table-of-Contents)

# # Resources
# - [International Network for Social Network Analysis](http://www.insna.org/) is a large, interdisciplinary association
# dedicated to network analysis.
# - [Pajek](http://mrvar.fdv.uni-lj.si/pajek/) is a freeware package for network analysis and visualization.
# - [Gephi](https://gephi.org/) is another freeware package that supports large-scale network visualization.
# - [Network Workbench](http://nwb.cns.iu.edu/) is a freeware package that supports extensive analysis and
# visualization of networks.
# - [NetworkX](https://networkx.github.io/) is the Python package used in this tutorial to analyze and visualize networks.
# - [iGraph](http://igraph.org/) is a network analysis package with implementations in R, Python, and C libraries.
# - [A Fast and Dirty Intro to NetworkX (and D3)](http://www.slideshare.net/arnicas/a-quick-and-dirty-intro-to-networkx-and-d3)
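# If you want to explore this network interactively in one of the tools listed above (for example Gephi or Pajek), `networkx` can export the graph to their file formats. A minimal sketch (the output filenames are arbitrary):

# +
# Export the graph for use in external network-visualization tools.
nx.write_gexf(G, "network.gexf")    # GEXF is the native Gephi format
nx.write_pajek(G, "network.net")    # Pajek's .net format
# -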
notebooks/session_06/Introduction_to_Networks.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Programming with Python # # ## Episode 1a - Introduction - Analysing Patient Data # # Teaching: 60 min, # Exercises: 30 min # # Objectives # # - Assign values to variables. # # - Explain what a library is and what libraries are used for. # # - Import a Python library and use the functions it contains. # # - Read tabular data from a file into a program. # # - Select individual values and subsections from data. # # - Perform operations on arrays of data. # ## Our Dataset # In this episode we will learn how to work with CSV files in Python. Our dataset contains patient inflammation data - where each row represents a different patient and the column represent inflammation data over a series of days. # # ![Data Files](data/inflammation-excel.png) # # # However, before we discuss how to deal with many data points, let's learn how to work with single data values. # # # ## Variables # Any Python interpreter can be used as a calculator: # # ``` # 3 + 5 * 4 # ``` 3 + 5 * 4 # This is great but not very interesting. To do anything useful with data, we need to assign its value to a variable. In Python, we can assign a value to a variable, using the equals sign ``=``. For example, to assign value 60 to a variable ``weight_kg``, we would execute: # # ``` # weight_kg = 60 # ``` weight_kg = 60 # From now on, whenever we use ``weight_kg``, Python will substitute the value we assigned to it. In essence, a variable is just a name for a value. # # ``` # weight_kg + 5 # ``` weight_kg + 5 # In Python, variable names: # # - can include letters, digits, and underscores - `A-z, a-z, _` # - cannot start with a digit # - are case sensitive. # # This means that, for example: # # `weight0` is a valid variable name, whereas `0weight` is not # `weight` and `Weight` are different variables # # #### Types of data # Python knows various types of data. Three common ones are: # # - integer numbers (whole numbers) # - floating point numbers (numbers with a decimal point) # - and strings (of characters). # # In the example above, variable `weight_kg` has an integer value of `60`. 
To create a variable with a floating point value, we can execute: # # ``` # weight_kg = 60.0 # ``` weight_kg = 60.0 # And to create a string we simply have to add single or double quotes around some text, for example: # # ``` # weight_kg_text = 'weight in kilograms:' # ``` # # To display the value of a variable to the screen in Python, we can use the print function: # # ``` # print(weight_kg) # ``` weight_kg_text = 'weight in kilograms:' print(weight_kg) # We can display multiple things at once using only one print command: # # ``` # print(weight_kg_text, weight_kg) # ``` print(weight_kg_text, weight_kg) # Moreover, we can do arithmetic with variables right inside the print function: # # ``` # print('weight in pounds:', 2.2 * weight_kg) # ``` print('weight in pounds:', 2.2*weight_kg) # The above command, however, did not change the value of ``weight_kg``: # # ``` # print(weight_kg) # ``` print(weight_kg) # To change the value of the ``weight_kg`` variable, we have to assign `weight_kg` a new value using the equals `=` sign: # # ``` # weight_kg = 65.0 # print('weight in kilograms is now:', weight_kg) # ``` weight_kg = 65.0 print('weight in kilograms is now:', weight_kg) # #### Variables as Sticky Notes # # A variable is analogous to a sticky note with a name written on it: assigning a value to a variable is like writing a value on the sticky note with a particular name. # # This means that assigning a value to one variable does not change values of other variables (or sticky notes). For example, let's store the subject's weight in pounds in its own variable: # # ``` # # There are 2.2 pounds per kilogram # weight_lb = 2.2 * weight_kg # print(weight_kg_text, weight_kg, 'and in pounds:', weight_lb) # ``` # There are 2.2 pounds per kilogram weight_lb = 2.2 * weight_kg print(weight_kg_text, weight_kg, 'and in pounds:', weight_lb) # #### Updating a Variable # # Variables calculated from other variables do not change their value just because the original variable changed its value (unlike cells in Excel): # # ``` # weight_kg = 100.0 # print('weight in kilograms is now:', weight_kg, 'and weight in pounds is still:', weight_lb) # ``` weight_kg = 100.0 print('weight in kilograms is now:', weight_kg, 'and weight in pounds is still:', weight_lb) # Since `weight_lb` doesn't *remember* where its value comes from, so it is not updated when we change `weight_kg`. weight_lb = 2.2*weight_kg print(weight_lb) # ## Libraries # # Words are useful, but what's more useful are the sentences and stories we build with them (or indeed entire books or whole libraries). Similarly, while a lot of powerful, general tools are built into Python, specialised tools built up from these basic units live in *libraries* that can be called upon when needed. # ### Loading data into Python # # In order to load our inflammation dataset into Python, we need to access (import in Python terminology) a library called `NumPy` (which stands for Numerical Python). # # In general you should use this library if you want to do fancy things with numbers, especially if you have matrices or arrays. We can import `NumPy` using: # # ``` # import numpy # ``` import numpy # Importing a library is like getting a piece of lab equipment out of a storage locker and setting it up on the bench. Libraries provide additional functionality to the basic Python package, much like a new piece of equipment adds functionality to a lab space. 
Just like in the lab, importing too many libraries can sometimes complicate and slow down your programs - so we only import what we need for each program. Once we've imported the library, we can ask the library to read our data file for us: # # ``` # numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',') # ``` # The expression `numpy.loadtxt(...)` is a function call that asks Python to run the function `loadtxt` which belongs to the `numpy` library. This dot `.` notation is used everywhere in Python: the thing that appears before the dot contains the thing that appears after. # # As an example, <NAME> is the John that belongs to the Smith family. We could use the dot notation to write his name smith.john, just as `loadtxt` is a function that belongs to the `numpy` library. # # `numpy.loadtxt` has two parameters: the name of the file we want to read and the delimiter that separates values on a line. These both need to be character strings (or strings for short), so we put them in quotes. # # Since we haven't told it to do anything else with the function's output, the notebook displays it. In this case, that output is the data we just loaded. By default, only a few rows and columns are shown (with ... to omit elements when displaying big arrays). To save space, Python displays numbers as 1. instead of 1.0 when there's nothing interesting after the decimal point. # # Our call to `numpy.loadtxt` read our file but didn't save the data in memory. To do that, we need to assign the array to a variable. Just as we can assign a single value to a variable, we can also assign an array of values to a variable using the same syntax. Let's re-run `numpy.loadtxt` and save the returned data: # # ``` # data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',') # ``` data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',') # This statement doesn't produce any output because we've assigned the output to the variable `data`. If we want to check that the data has been loaded, we can print the variable's value: # # ``` # print(data) # ``` print(data) # Now that the data is in memory, we can manipulate it. First, let's ask Python what type of thing `data` refers to: # # ``` # print(type(data)) # ``` print(type(data)) # The output tells us that data currently refers to an N-dimensional array, the functionality for which is provided by the `NumPy` library. This data correspond to arthritis patients' inflammation. The rows are the individual patients, and the columns are their daily inflammation measurements. # # #### Data Type # # A NumPy array contains one or more elements of the same type. The type function will only tell you that a variable is a NumPy array but won't tell you the type of thing inside the array. We can find out the type of the data contained in the NumPy array. # # ``` # print(data.dtype) # ``` print(data.dtype) # This tells us that the NumPy array's elements are floating-point numbers. # # With the following command, we can see the array's shape: # # ``` # print(data.shape) # ``` # The output tells us that the data array variable contains 60 rows and 40 columns. When we created the variable data to store our arthritis data, we didn't just create the array; we also created information about the array, called members or attributes. This extra information describes data in the same way an adjective describes a noun. data.shape is an attribute of data which describes the dimensions of data. 
We use the same dotted notation for the attributes of variables that we use for the functions in libraries because they have the same part-and-whole relationship. # # If we want to get a single number from the array, we must provide an index in square brackets after the variable name, just as we do in math when referring to an element of a matrix. Our inflammation data has two dimensions, so we will need to use two indices to refer to one specific value: # # ``` # print('first value in data:', data[0, 0]) # print('middle value in data:', data[30, 20]) # ``` # The expression `data[30, 20]` accesses the element at row 30, column 20. While this expression may not surprise you, `data[0, 0]` might. # # #### Zero Indexing # # Programming languages like Fortran, MATLAB and R start counting at 1 because that's what human beings have done for thousands of years. Languages in the C family (including C++, Java, Perl, and Python) count from 0 because it represents an offset from the first value in the array (the second value is offset by one index from the first value). This is closer to the way that computers represent arrays (if you are interested in the historical reasons behind counting indices from zero, you can read Mike Hoye's blog post). # # As a result, if we have an M×N array in Python, its indices go from 0 to M-1 on the first axis and 0 to N-1 on the second. It takes a bit of getting used to, but one way to remember the rule is that the index is how many steps we have to take from the start to get the item we want. # #### In the Corner # # What may also surprise you is that when Python displays an array, it shows the element with index `[0, 0]` in the upper left corner rather than the lower left. This is consistent with the way mathematicians draw matrices but different from the Cartesian coordinates. The indices are (row, column) instead of (column, row) for the same reason, which can be confusing when plotting data. # #### Slicing data # # An index like `[30, 20]` selects a single element of an array, but we can select whole sections as well. For example, we can select the first ten days (columns) of values for the first four patients (rows) like this: # # ``` # print(data[0:4, 0:10]) # ``` # The slice `[0:4]` means, *Start at index 0 and go up to, but not including, index 4*. # # Again, the up-to-but-not-including takes a bit of getting used to, but the rule is that the difference between the upper and lower bounds is the number of values in the slice. # # Also, we don't have to start slices at `0`: # # ``` # print(data[5:10, 0:10]) # ``` # and we don't have to include the upper or lower bound on the slice. # # If we don't include the lower bound, Python uses 0 by default; if we don't include the upper, the slice runs to the end of the axis, and if we don't include either (i.e., if we just use `:` on its own), the slice includes everything: # # ``` # small = data[:3, 36:] # print('small is:') # print(small) # ``` # The above example selects rows 0 through 2 and columns 36 through to the end of the array. # # thus small is: # ``` # [[ 2. 3. 0. 0.] # [ 1. 1. 0. 1.] # [ 2. 2. 1. 1.]] # ``` # # Arrays also know how to perform common mathematical operations on their values. The simplest operations with data are arithmetic: addition, subtraction, multiplication, and division. When you do such operations on arrays, the operation is done element-by-element. 
Thus: # # ``` # doubledata = data * 2.0 # ``` # will create a new array doubledata each element of which is twice the value of the corresponding element in data: # # ``` # print('original:') # print(data[:3, 36:]) # print('doubledata:') # print(doubledata[:3, 36:]) # ``` # If, instead of taking an array and doing arithmetic with a single value (as above), you did the arithmetic operation with another array of the same shape, the operation will be done on corresponding elements of the two arrays. Thus: # # ``` # tripledata = doubledata + data # ``` # will give you an array where `tripledata[0,0]` will equal `doubledata[0,0]` plus `data[0,0]`, and so on for all other elements of the arrays. # # ``` # print('tripledata:') # print(tripledata[:3, 36:]) # ``` # ## Exercises # ### Variables # # What values do the variables mass and age have after each statement in the following program? # ``` # mass = 47.5 # age = 122 # mass = mass * 2.0 # age = age - 20 # print(mass, age) # ``` # Test your answers by executing the commands. # Solution: # ### Sorting Out References # # What does the following program print out? # ``` # first, second = 'Grace', 'Hopper' # third, fourth = second, first # print(third, fourth) # ``` # Solution: # ### Slicing Strings # A section of an array is called a slice. We can take slices of character strings as well: # ``` # element = 'oxygen' # print('first three characters:', element[0:3]) # print('last three characters:', element[3:6]) # ``` # # What is the value of `element[:4]` ? What about `element[4:]`? Or `element[:]` ? # # What about `element[-1]` and `element[-2]` ? # Solution: # Given those answers, explain what `element[1:-1]` does. # Solution: # ### Thin Slices # # The expression `element[3:3]` produces an empty string, i.e., a string that contains no characters. If data holds our array of patient data, what does `data[3:3, 4:4]` produce? What about `data[3:3, :]` ? # Solution: # ## Key Points # Import a library into a program using import library_name. # # Use the numpy library to work with arrays in Python. # # Use `variable` `=` `value` to assign a value to a variable in order to record it in memory. # # Variables are created on demand whenever a value is assigned to them. # # Use `print(something)` to display the value of something. # # The expression `array.shape` gives the shape of an array. # # Use `array[x, y]` to select a single element from a 2D array. # # Array indices start at 0, not 1. # # Use `low:high` to specify a slice that includes the indices from low to high-1. # # All the indexing and slicing that works on arrays also works on strings. # # Use `#` and some kind of explanation to add comments to programs. # # Save, and version control your changes # # - save your work: `File -> Save` # - add all your changes to your local repository: `Terminal -> git add .` # - commit your updates a new Git version: `Terminal -> git commit -m "End of Episode 1"` # - push your latest commits to GitHub: `Terminal -> git push`
lessons/python/.ipynb_checkpoints/ep1a-introduction-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda root] # language: python # name: conda-root-py # --- # # 16 PDEs: Waves – Students # # (See *Computational Physics* Ch 21 and *Computational Modeling* Ch 6.5.) # ## Background: waves on a string # # Assume a 1D string of length $L$ with mass density per unit length $\rho$ along the $x$ direction. It is held under constant tension $T$ (force per unit length). Ignore frictional forces and the tension is so high that we can ignore sagging due to gravity. # # # ### 1D wave equation # The string is displaced in the $y$ direction from its rest position, i.e., the displacement $y(x, t)$ is a function of space $x$ and time $t$. # # For small relative displacements $y(x, t)/L \ll 1$ and therefore small slopes $\partial y/\partial x$ we can describe $y(x, t)$ with a *linear* equation of motion: # Newton's second law applied to short elements of the string with length $\Delta x$ and mass $\Delta m = \rho \Delta x$: the left hand side contains the *restoring force* that opposes the displacement, the right hand side is the acceleration of the string element: # # \begin{align} # \sum F_{y}(x) &= \Delta m\, a(x, t)\\ # T \sin\theta(x+\Delta x) - T \sin\theta(x) &= \rho \Delta x \frac{\partial^2 y(x, t)}{\partial t^2} # \end{align} # # The angle $\theta$ measures by how much the string is bent away from the resting configuration. # Because we assume small relative displacements, the angles are small ($\theta \ll 1$) and we can make the small angle approximation # # $$ # \sin\theta \approx \tan\theta = \frac{\partial y}{\partial x} # $$ # # and hence # \begin{align} # T \left.\frac{\partial y}{\partial x}\right|_{x+\Delta x} - T \left.\frac{\partial y}{\partial x}\right|_{x} &= \rho \Delta x \frac{\partial^2 y(x, t)}{\partial t^2}\\ # \frac{T \left.\frac{\partial y}{\partial x}\right|_{x+\Delta x} - T \left.\frac{\partial y}{\partial x}\right|_{x}}{\Delta x} &= \rho \frac{\partial^2 y}{\partial t^2} # \end{align} # or in the limit $\Delta x \rightarrow 0$ a linear hyperbolic PDE results: # # \begin{gather} # \frac{\partial^2 y(x, t)}{\partial x^2} = \frac{1}{c^2} \frac{\partial^2 y(x, t)}{\partial t^2}, \quad c = \sqrt{\frac{T}{\rho}} # \end{gather} # # where $c$ has the dimension of a velocity. This is the (linear) **wave equation**. # ### General solution: waves # General solutions are propagating waves: # # If $f(x)$ is a solution at $t=0$ then # # $$ # y_{\mp}(x, t) = f(x \mp ct) # $$ # # are also solutions at later $t > 0$. # Because of linearity, any linear combination is also a solution, so the most general solution contains both right and left propagating waves # # $$ # y(x, t) = A f(x - ct) + B g(x + ct) # $$ # # (If $f$ and/or $g$ are present depends on the initial conditions.) # In three dimensions the wave equation is # # $$ # \boldsymbol{\nabla}^2 y(\mathbf{x}, t) - \frac{1}{c^2} \frac{\partial^2 y(\mathbf{x}, t)}{\partial t^2} = 0\ # $$ # ### Boundary and initial conditions # * The boundary conditions could be that the ends are fixed # # $$y(0, t) = y(L, t) = 0$$ # # * The *initial condition* is a shape for the string, e.g., a Gaussian at the center # # $$ # y(x, t=0) = g(x) = y_0 \frac{1}{\sqrt{2\pi\sigma}} \exp\left[-\frac{(x - x_0)^2}{2\sigma^2}\right] # $$ # # at time 0. 
# * Because the wave equation is *second order in time* we need a second initial condition, for instance, the string is released from rest: # # $$ # \frac{\partial y(x, t=0)}{\partial t} = 0 # $$ # # (The derivative, i.e., the initial displacement velocity is provided.) # ### Analytical solution # Solve (as always) with *separation of variables*. # # $$ # y(x, t) = X(x) T(t) # $$ # # and this yields the general solution (with boundary conditions of fixed string ends and initial condition of zero velocity) as a superposition of normal modes # # $$ # y(x, t) = \sum_{n=0}^{+\infty} B_n \sin k_n x\, \cos\omega_n t, # \quad \omega_n = ck_n,\ k_n = n \frac{2\pi}{L} = n k_0. # $$ # # (The angular frequency $\omega$ and the wave vector $k$ are determined from the boundary conditions.) # The coefficients $B_n$ are obtained from the initial shape: # # $$ # y(x, t=0) = \sum_{n=0}^{+\infty} B_n \sin n k_0 x = g(x) # $$ # In principle one can use the fact that $\int_0^L dx \sin m k_0 x \, \sin n k_0 x = \pi \delta_{mn}$ (orthogonality) to calculate the coefficients: # # \begin{align} # \int_0^L dx \sin m k_0 x \sum_{n=0}^{+\infty} B_n \sin n k_0 x &= \int_0^L dx \sin(m k_0 x) \, g(x)\\ # \pi \sum_{n=0}^{+\infty} B_n \delta_{mn} &= \dots \\ # B_m &= \pi^{-1} \dots # \end{align} # # (but the analytical solution is ugly and I cannot be bothered to put it down here.) # ## Numerical solution # # 1. discretize wave equation # 2. time stepping: leap frog algorithm (iterate) # Use the central difference approximation for the second order derivatives: # # \begin{align} # \frac{\partial^2 y}{\partial t^2} &\approx \frac{y(x, t+\Delta t) + y(x, t-\Delta t) - 2y(x, t)}{\Delta t ^2} = \frac{y_{i, j+1} + y_{i, j-1} - 2y_{i,j}}{\Delta t^2}\\ # \frac{\partial^2 y}{\partial x^2} &\approx \frac{y(x+\Delta x, t) + y(x-\Delta x, t) - 2y(x, t)}{\Delta x ^2} = \frac{y_{i+1, j} + y_{i-1, j} - 2y_{i,j}}{\Delta x^2} # \end{align} # # and substitute into the wave equation to yield the *discretized* wave equation: # $$ # \frac{y_{i+1, j} + y_{i-1, j} - 2y_{i,j}}{\Delta x^2} = \frac{1}{c^2} \frac{y_{i, j+1} + y_{i, j-1} - 2y_{i,j}}{\Delta t^2} # $$ # #### Student activity: derive the finite difference version of the 1D wave equation # Re-arrange so that the future terms $j+1$ can be calculated from the present $j$ and past $j-1$ terms: # # $$ # ? = ? # $$ # This is the time stepping algorithm for the wave equation. # ## Numerical implementation # # + # if you have plotting problems, try # # %matplotlib inline # %matplotlib notebook import numpy as np import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D plt.style.use('ggplot') # - # Implement the time stepping algorithm in the code below. Look for sections `# TODO`. # + L = 0.5 # m Nx = 50 Nt = 100 Dx = L/Nx # TODO: choose Dt Dt = # s rho = 1.5e-2 # kg/m tension = 150 # N c = np.sqrt(tension/rho) # TODO: calculate beta beta = beta2 = print("c = {0} m/s".format(c)) print("Dx = {0} m, Dt = {1} s, Dx/Dt = {2} m/s".format(Dx, Dt, Dx/Dt)) print("beta = {}".format(beta)) X = np.linspace(0, L, Nx+1) # need N+1! def gaussian(x, y0=0.05, x0=L/2, sigma=0.1*L): return y0/np.sqrt(2*np.pi*sigma) * np.exp(-(x-x0)**2/(2*sigma**2)) # displacements at j-1, j, j+1 y0 = np.zeros_like(X) y1 = np.zeros_like(y0) y2 = np.zeros_like(y0) # save array y_t = np.zeros((Nt+1, Nx+1)) # boundary conditions # TODO: set boundary conditions y2[:] = y0 # initial conditions: velocity 0, i.e. 
no difference between y0 and y1 y0[1:-1] = y1[1:-1] = gaussian(X)[1:-1] # save initial t_index = 0 y_t[t_index, :] = y0 t_index += 1 y_t[t_index, :] = y1 for jt in range(2, Nt): # TODO: time stepping algorithm t_index += 1 y_t[t_index, :] = y2 print("Iteration {0:5d}".format(jt), end="\r") else: print("Completed {0:5d} iterations: t={1} s".format(jt, jt*Dt)) # - # ### 1D plot # Plot the output in the save array `y_t`. Vary the time steps that you look at with `y_t[start:end]`. # # We indicate time by color changing. ax = plt.subplot(111) ax.set_prop_cycle("color", [plt.cm.viridis_r(i) for i in np.linspace(0, 1, len(y_t))]) ax.plot(X, y_t.T); # ### 1D Animation # For 1D animation to work in a Jupyter notebook, use # %matplotlib notebook # If no animations are visible, restart kernel and execute the `%matplotlib notebook` cell as the very first one in the notebook. # # We use `matplotlib.animation` to look at movies of our solution: import matplotlib.animation as animation # The `update_wave()` function simply re-draws our image for every `frame`. # + y_limits = 1.05*y_t.min(), 1.05*y_t.max() fig1 = plt.figure(figsize=(5,5)) ax = fig1.add_subplot(111) ax.set_aspect(1) def update_wave(frame, data): global ax, Dt, y_limits ax.clear() ax.set_xlabel("x (m)") ax.set_ylabel("y (m)") ax.plot(X, data[frame]) ax.set_ylim(y_limits) ax.text(0.1, 0.9, "t = {0:3.1f} ms".format(frame*Dt*1e3), transform=ax.transAxes) wave_anim = animation.FuncAnimation(fig1, update_wave, frames=len(y_t), fargs=(y_t,), interval=30, blit=True, repeat_delay=100) # - # ### 3D plot # (Uses functions from previous lessons.) # + def plot_y(y_t, Dx, Dt, step=1): X, Y = np.meshgrid(range(y_t.shape[0]), range(y_t.shape[1])) Z = y_t.T[Y, X] fig = plt.figure() ax = fig.add_subplot(111, projection="3d") ax.plot_wireframe(Y*Dx, X*Dt*step, Z) ax.set_ylabel(r"time $t$ (s)") ax.set_xlabel(r"position $x$ (m)") ax.set_zlabel(r"displacement $y$ (m)") fig.tight_layout() return ax def plot_surf(y_t, Dt, Dx, step=1, filename=None, offset=-1, zlabel=r'displacement', elevation=40, azimuth=20, cmap=plt.cm.coolwarm): """Plot y_t as a 3D plot with contour plot underneath. Arguments --------- y_t : 2D array displacement y(t, x) filename : string or None, optional (default: None) If `None` then show the figure and return the axes object. If a string is given (like "contour.png") it will only plot to the filename and close the figure but return the filename. 
offset : float, optional (default: 20) position the 2D contour plot by offset along the Z direction under the minimum Z value zlabel : string, optional label for the Z axis and color scale bar elevation : float, optional choose elevation for initial viewpoint azimuth : float, optional chooze azimuth angle for initial viewpoint """ t = np.arange(y_t.shape[0]) x = np.arange(y_t.shape[1]) T, X = np.meshgrid(t, x) Y = y_t.T[X, T] fig = plt.figure() ax = fig.add_subplot(111, projection='3d') surf = ax.plot_surface(X*Dx, T*Dt*step, Y, cmap=cmap, rstride=2, cstride=2, alpha=1) cset = ax.contourf(X*Dx, T*Dt*step, Y, 20, zdir='z', offset=offset+Y.min(), cmap=cmap) ax.set_xlabel('x') ax.set_ylabel('t') ax.set_zlabel(zlabel) ax.set_zlim(offset + Y.min(), Y.max()) ax.view_init(elev=elevation, azim=azimuth) cb = fig.colorbar(surf, shrink=0.5, aspect=5) cb.set_label(zlabel) if filename: fig.savefig(filename) plt.close(fig) return filename else: return ax # - plot_y(y_t, Dx, Dt, step) plot_surf(y_t, Dt, Dx, step, offset=0, cmap=plt.cm.coolwarm) # ## von Neumann stability analysis: Courant condition # Assume that the solutions of the discretized equation can be written as normal modes # # $$ # y_{m,j} = \xi(k)^j e^{ikm\Delta x}, \quad t=j\Delta t,\ x=m\Delta x # $$ # # The time stepping algorith is stable if # # $$ # |\xi(k)| < 1 # $$ # Insert normal modes into the discretized equation # # # $$ # y_{i,j+1} = 2(1 - \beta^2)y_{i,j} - y_{i, j-1} + \beta^2 (y_{i+1,j} + y_{i-1,j}), \quad # \beta := \frac{c}{\Delta x/\Delta t} # $$ # # and simplify (use $1-\cos x = 2\sin^2\frac{x}{2}$): # # $$ # \xi^2 - 2(1-2\beta^2 s^2)\xi + 1 = 0, \quad s=\sin(k\Delta x/2) # $$ # # The characteristic equation has roots # # $$ # \xi_{\pm} = 1 - 2\beta^2 s^2 \pm \sqrt{(1-2\beta^2 s^2)^2 - 1}. # $$ # # It has one root for # # $$ # \left|1-2\beta^2 s^2\right| = 1, # $$ # # i.e., for # # $$ # \beta s = 1 # $$ # # We have two real roots for # # $$ # \left|1-2\beta^2 s^2\right| < 1 \\ # \beta s > 1 # $$ # # but one of the roots is always $|\xi| > 1$ and hence these solutions will diverge and not be stable. # # For # # $$ # \left|1-2\beta^2 s^2\right| ≥ 1 \\ # \beta s ≤ 1 # $$ # # the roots will be *complex conjugates of each other* # # $$ # \xi_\pm = 1 - 2\beta^2s^2 \pm i\sqrt{1-(1-2\beta^2s^2)^2} # $$ # # and the *magnitude* # # $$ # |\xi_{\pm}|^2 = (1 - 2\beta^2s^2)^2 - (1-(1-2\beta^2s^2)^2) = 1 # $$ # # is unity: Thus the solutions will not grow and will be *stable* for # # $$ # \beta s ≤ 1\\ # \frac{c}{\frac{\Delta x}{\Delta t}} \sin\frac{k \Delta x}{2} ≤ 1 # $$ # # Assuming the "worst case" for the $\sin$ factor (namely, 1), the **condition for stability** is # # $$ # c ≤ \frac{\Delta x}{\Delta t} # $$ # # or # # $$ # \beta ≤ 1. # $$ # # This is also known as the **Courant condition**. When written as # # $$ # \Delta t ≤ \frac{\Delta x}{c} # $$ # # it means that the time step $\Delta t$ (for a given $\Delta x$) must be *smaller than the time that the wave takes to travel one grid step*. # #
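# #### Quick check of the Courant condition
#
# Below is a small sanity check of this condition -- a sketch only, since choosing `Dt` in the implementation cell above is deliberately left to you. It recomputes $\beta$ for a hypothetical trial time step `Dt_trial`, using the same grid and string parameters as above.

# +
# Sketch: check the Courant condition beta = c * Dt / Dx <= 1 for a trial time step.
# Grid and material parameters match the implementation cell; Dt_trial is a hypothetical choice.
L = 0.5            # m
Nx = 50
Dx = L / Nx        # m
rho = 1.5e-2       # kg/m
tension = 150      # N
c = np.sqrt(tension / rho)   # 100 m/s

Dt_trial = 5e-5    # s (hypothetical)
beta = c * Dt_trial / Dx
print("beta = {0:.2f} -> {1}".format(beta, "stable" if beta <= 1 else "unstable, decrease Dt"))
# -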
16_PDEs_waves/16_PDEs_waves-students.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Demystifying Neural Networks # # --- # # Autograd DAG # # How `autograd` can actually perform so many chain rules? # It builds a *Directed Acyclic Graph* (DAG). # # We will analyse one directed graph on top of a tiny ANN. # Such a DAG can become big very quickly, # therefore we will use only two layers. import autograd.numpy as np from autograd import grad # Since we will not actually train the ANN below, # we will define a very simple activation function: $y = 2x$. # The hyperbolic tangent is quite complex and would make DAG # very long. x = np.array([[0.3], [0.1], [0.5]]) y = np.array([[1.], [0.]]) w1 = np.array([[0.3, 0.1, 0.2], [0.2, -0.1, -0.1], [0.7, 0.5, -0.3], [0.5, 0.5, -0.5]]) w1b = np.array([[0.3], [0.2], [0.2], [0.3]]) w2 = np.array([[0.2, 0.3, 0.1, 0.1], [0.7, -0.2, -0.1, 0.3]]) w2b = np.array([[ 0.3], [-0.2]]) def act(x): return 2*x # We define an ANN function as normal and execute it with against $\vec{x}$ and $\vec{y}$. # Our only interest are the gradients not the actual output of the ANN. # + def netMSE(arg): x, w1, w1b, w2, w2b, y = arg y_hat = act(w2 @ act(w1 @ x + w1b) + w2b) return np.mean((y - y_hat)**2) netMSE_grad = grad(netMSE) grads = netMSE_grad([x, w1, w1b, w2, w2b, y]) for g in grads: print(g) # - # These are the final gradients against every single weight. # # Below we have the complete graph that has been constructed # in order to compute these gradients. # The graph has been constructed when the function executed. # Then, after the function finished executing the graph has been # walked backwards to calculate the gradients. # # The ID's at the nodes of the graph are increasing when walking the graph # top to bottom and decreasing when walking bottom to top. # `autograd` computes gradients in order from the biggest node ID # to the lowest node ID, this way one can be sure that all gradients needed # to compute the gradient on the current graph node are already computed. # # The computation at each node is performed using the *Jacobian Vector Product* # (JVP) rule for the operation that was originally performed on the node. # Each operation that can be differentiated by `autograd` has a JVP rule. # For example, there are JVP rules for sum, subtraction, or even mean operations. # ![graph-ann.svg](attachment:graph-ann.svg) # # <div style="text-align:right;font-size:0.7em;">graph-ann.svg</div> # In summary: `autograd` builds a DAG and then walks it backwards # performing the chain rule. # It is this *backwards* that is meant in the backpropagation technique # of ANN training.
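# The same record-then-walk-backwards mechanism works for any composition of differentiable operations, not just the ANN above. As a minimal sketch (a made-up scalar function, unrelated to the network weights), we can compare the gradient `autograd` returns with the chain rule written out by hand.

# +
# Minimal sketch: autograd records the DAG for f during the forward pass,
# then walks it backwards to return df/dx.
def f(x):
    return np.tanh(x) ** 2 + 3 * x

df = grad(f)
x0 = 0.5
print(df(x0))                                        # gradient from the DAG walk
print(2 * np.tanh(x0) * (1 - np.tanh(x0) ** 2) + 3)  # chain rule by hand, should match
# -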
12-autograd-dag.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] pycharm={"name": "#%% md\n"} # # Trust Scores applied to Iris # + [markdown] pycharm={"name": "#%% md\n"} # It is important to know when a machine learning classifier's predictions can be trusted. Relying on the classifier's (uncalibrated) prediction probabilities is not optimal and can be improved upon. *Trust scores* measure the agreement between the classifier and a modified nearest neighbor classifier on the test set. The trust score is the ratio between the distance of the test instance to the nearest class different from the predicted class and the distance to the predicted class. Higher scores correspond to more trustworthy predictions. A score of 1 would mean that the distance to the predicted class is the same as to another class. # # The original paper on which the algorithm is based is called [To Trust Or Not To Trust A Classifier](https://arxiv.org/abs/1805.11783). Our implementation borrows heavily from https://github.com/google/TrustScore, as does the example notebook. # + pycharm={"name": "#%%\n"} import matplotlib # %matplotlib inline import matplotlib.cm as cm import matplotlib.pyplot as plt import numpy as np import pandas as pd from sklearn.datasets import load_iris from sklearn.linear_model import LogisticRegression from sklearn.model_selection import StratifiedShuffleSplit from alibi.confidence import TrustScore # + [markdown] pycharm={"name": "#%% md\n"} # ## Load and prepare Iris dataset # + pycharm={"name": "#%%\n"} dataset = load_iris() # + [markdown] pycharm={"name": "#%% md\n"} # Scale data # + pycharm={"name": "#%%\n"} dataset.data = (dataset.data - dataset.data.mean(axis=0)) / dataset.data.std(axis=0) # + [markdown] pycharm={"name": "#%% md\n"} # Define training and test set # + pycharm={"name": "#%%\n"} idx = 140 X_train,y_train = dataset.data[:idx,:], dataset.target[:idx] X_test, y_test = dataset.data[idx+1:,:], dataset.target[idx+1:] # + [markdown] pycharm={"name": "#%% md\n"} # ## Fit model and make predictions # + pycharm={"name": "#%%\n"} np.random.seed(0) clf = LogisticRegression(solver='liblinear', multi_class='auto') clf.fit(X_train, y_train) y_pred = clf.predict(X_test) print(f'Predicted class: {y_pred}') # + [markdown] pycharm={"name": "#%% md\n"} # ## Basic Trust Score Usage # + [markdown] pycharm={"name": "#%% md\n"} # ### Initialise Trust Scores and fit on training data # + [markdown] pycharm={"name": "#%% md\n"} # The trust score algorithm builds [k-d trees](https://en.wikipedia.org/wiki/K-d_tree) for each class. The distance of the test instance to the $k$th nearest neighbor of each tree (or the average distance to the $k$th neighbor) can then be used to calculate the trust score. We can optionally filter out outliers in the training data before building the trees. The example below uses the *distance_knn* (`filter_type`) method to filter out the 5% (`alpha`) instances of each class with the highest distance to its 10th nearest neighbor (`k_filter`) in that class. 
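# Before calling the library, it can help to see the score itself on made-up data. The sketch below (hypothetical 2-D clusters, not the Iris features, and using the single nearest neighbour instead of the k-d trees and $k$th neighbour the library uses) computes the ratio by hand.

# +
# Illustrative only: the trust-score ratio computed by hand on hypothetical 2-D data.
rng = np.random.RandomState(0)
class_0 = rng.randn(20, 2)        # hypothetical training points of class 0
class_1 = rng.randn(20, 2) + 4.   # hypothetical training points of class 1
x_new = np.array([0.5, 0.0])      # test point; suppose the model predicts class 0

d_pred = np.linalg.norm(class_0 - x_new, axis=1).min()   # distance to the predicted class
d_other = np.linalg.norm(class_1 - x_new, axis=1).min()  # distance to the closest other class
print('trust score ~', d_other / d_pred)                 # well above 1 here: a trustworthy prediction
# -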
# + pycharm={"name": "#%%\n"} ts = TrustScore(k_filter=10, # nb of neighbors used for kNN distance or probability to filter out outliers alpha=.05, # target fraction of instances to filter out filter_type='distance_knn', # filter method: None, 'distance_knn' or 'probability_knn' leaf_size=40, # affects speed and memory to build KDTrees, memory scales with n_samples / leaf_size metric='euclidean', # distance metric used for the KDTrees dist_filter_type='point') # 'point' uses distance to k-nearest point # 'mean' uses average distance from the 1st to the kth nearest point # + pycharm={"name": "#%%\n"} ts.fit(X_train, y_train, classes=3) # classes = nb of prediction classes # + [markdown] pycharm={"name": "#%% md\n"} # ### Calculate Trust Scores on test data # + [markdown] pycharm={"name": "#%% md\n"} # Since the trust score is the ratio between the distance of the test instance to the nearest class different from the predicted class and the distance to the predicted class, higher scores correspond to more trustworthy predictions. A score of 1 would mean that the distance to the predicted class is the same as to another class. The `score` method returns arrays with both the trust scores and the class labels of the closest not predicted class. # + pycharm={"name": "#%%\n"} score, closest_class = ts.score(X_test, y_pred, k=2, # kth nearest neighbor used # to compute distances for each class dist_type='point') # 'point' or 'mean' distance option print(f'Trust scores: {score}') print(f'\nClosest not predicted class: {closest_class}') # + [markdown] pycharm={"name": "#%% md\n"} # ## Comparison of Trust Scores with model prediction probabilities # + [markdown] pycharm={"name": "#%% md\n"} # Let's compare the prediction probabilities from the classifier with the trust scores for each prediction. The first use case checks whether trust scores are better than the model's prediction probabilities at identifying correctly classified examples, while the second use case does the same for incorrectly classified instances. # # First we need to set up a couple of helper functions. # + [markdown] pycharm={"name": "#%% md\n"} # * Define a function that handles model training and predictions for a simple logistic regression: # + pycharm={"name": "#%%\n"} def run_lr(X_train, y_train, X_test): clf = LogisticRegression(solver='liblinear', multi_class='auto') clf.fit(X_train, y_train) y_pred = clf.predict(X_test) y_pred_proba = clf.predict_proba(X_test) probas = y_pred_proba[range(len(y_pred)), y_pred] # probabilities of predicted class return y_pred, probas # + [markdown] pycharm={"name": "#%% md\n"} # * Define the function that generates the precision plots: # + pycharm={"name": "#%%\n"} def plot_precision_curve(plot_title, percentiles, labels, final_tp, final_stderr, final_misclassification, colors = ['blue', 'darkorange', 'brown', 'red', 'purple']): plt.title(plot_title, fontsize=18) colors = colors + list(cm.rainbow(np.linspace(0, 1, len(final_tp)))) plt.xlabel("Percentile", fontsize=14) plt.ylabel("Precision", fontsize=14) for i, label in enumerate(labels): ls = "--" if ("Model" in label) else "-" plt.plot(percentiles, final_tp[i], ls, c=colors[i], label=label) plt.fill_between(percentiles, final_tp[i] - final_stderr[i], final_tp[i] + final_stderr[i], color=colors[i], alpha=.1) if 0. 
in percentiles: plt.legend(loc="lower right", fontsize=14) else: plt.legend(loc="upper left", fontsize=14) model_acc = 100 * (1 - final_misclassification) plt.axvline(x=model_acc, linestyle="dotted", color="black") plt.show() # + [markdown] pycharm={"name": "#%% md\n"} # * The function below trains the model on a number of folds, makes predictions, calculates the trust scores, and generates the precision curves to compare the trust scores with the model prediction probabilities: # + pycharm={"name": "#%%\n"} def run_precision_plt(X, y, nfolds, percentiles, run_model, test_size=.5, plt_title="", plt_names=[], predict_correct=True, classes=3): def stderr(L): return np.std(L) / np.sqrt(len(L)) all_tp = [[[] for p in percentiles] for _ in plt_names] misclassifications = [] mult = 1 if predict_correct else -1 folds = StratifiedShuffleSplit(n_splits=nfolds, test_size=test_size, random_state=0) for train_idx, test_idx in folds.split(X, y): # create train and test folds, train model and make predictions X_train, y_train = X[train_idx, :], y[train_idx] X_test, y_test = X[test_idx, :], y[test_idx] y_pred, probas = run_model(X_train, y_train, X_test) # target points are the correctly classified points target_points = np.where(y_pred == y_test)[0] if predict_correct else np.where(y_pred != y_test)[0] final_curves = [probas] # calculate trust scores ts = TrustScore() ts.fit(X_train, y_train, classes=classes) scores, _ = ts.score(X_test, y_pred) final_curves.append(scores) # contains prediction probabilities and trust scores # check where prediction probabilities and trust scores are above a certain percentage level for p, perc in enumerate(percentiles): high_proba = [np.where(mult * curve >= np.percentile(mult * curve, perc))[0] for curve in final_curves] if 0 in map(len, high_proba): continue # calculate fraction of values above percentage level that are correctly (or incorrectly) classified tp = [len(np.intersect1d(hp, target_points)) / (1. * len(hp)) for hp in high_proba] for i in range(len(plt_names)): all_tp[i][p].append(tp[i]) # for each percentile, store fraction of values above cutoff value misclassifications.append(len(target_points) / (1. * len(X_test))) # average over folds for each percentile final_tp = [[] for _ in plt_names] final_stderr = [[] for _ in plt_names] for p, perc in enumerate(percentiles): for i in range(len(plt_names)): final_tp[i].append(np.mean(all_tp[i][p])) final_stderr[i].append(stderr(all_tp[i][p])) for i in range(len(all_tp)): final_tp[i] = np.array(final_tp[i]) final_stderr[i] = np.array(final_stderr[i]) final_misclassification = np.mean(misclassifications) # create plot plot_precision_curve(plt_title, percentiles, plt_names, final_tp, final_stderr, final_misclassification) # + [markdown] pycharm={"name": "#%% md\n"} # ### Detect correctly classified examples # + [markdown] pycharm={"name": "#%% md\n"} # The x-axis on the plot below shows the percentiles for the model prediction probabilities of the predicted class for each instance and for the trust scores. The y-axis represents the precision for each percentile. For each percentile level, we take the test examples whose trust score is above that percentile level and plot the percentage of those points that were correctly classified by the classifier. We do the same with the classifier's own model confidence (i.e. softmax probabilities). For example, at percentile level 80, we take the top 20% scoring test examples based on the trust score and plot the percentage of those points that were correctly classified. 
We also plot the top 20% scoring test examples based on model probabilities and plot the percentage of those that were correctly classified. The vertical dotted line is the error of the logistic regression classifier. The plots are an average over 10 folds of the dataset with 50% of the data kept for the test set. # # The *Trust Score* and *Model Confidence* curves then show that the model precision is typically higher when using the trust scores to rank the predictions compared to the model prediction probabilities. # + pycharm={"name": "#%%\n"} X = dataset.data y = dataset.target percentiles = [0 + 0.5 * i for i in range(200)] nfolds = 10 plt_names = ['Model Confidence', 'Trust Score'] plt_title = 'Iris -- Logistic Regression -- Predict Correct' # + pycharm={"name": "#%%\n"} run_precision_plt(X, y, nfolds, percentiles, run_lr, plt_title=plt_title, plt_names=plt_names, predict_correct=True) # + [markdown] pycharm={"name": "#%% md\n"} # ### Detect incorrectly classified examples # + [markdown] pycharm={"name": "#%% md\n"} # By taking the *negative of the prediction probabilities and trust scores*, we can also see on the plot below how the trust scores compare to the model predictions for incorrectly classified instances. The vertical dotted line is the accuracy of the logistic regression classifier. The plot shows the precision of identifying incorrectly classified instances. Higher is obviously better. # + pycharm={"name": "#%%\n"} percentiles = [50 + 0.5 * i for i in range(100)] plt_title = 'Iris -- Logistic Regression -- Predict Incorrect' run_precision_plt(X, y, nfolds, percentiles, run_lr, plt_title=plt_title, plt_names=plt_names, predict_correct=False)
doc/source/examples/trustscore_iris.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # NLP datasets # + hide_input=true from fastai.gen_doc.nbdoc import * from fastai.text import * from fastai.gen_doc.nbdoc import * # - # This module contains the [`TextDataset`](/text.data.html#TextDataset) class, which is the main dataset you should use for your NLP tasks. It automatically does the preprocessing steps described in [`text.transform`](/text.transform.html#text.transform). It also contains all the functions to quickly get a [`TextDataBunch`](/text.data.html#TextDataBunch) ready. # ## Quickly assemble your data # You should get your data in one of the following formats to make the most of the fastai library and use one of the factory methods of one of the [`TextDataBunch`](/text.data.html#TextDataBunch) classes: # - raw text files in folders train, valid, test in an ImageNet style, # - a csv where some column(s) gives the label(s) and the folowwing one the associated text, # - a dataframe structured the same way, # - tokens and labels arrays, # - ids, vocabulary (correspondance id to word) and labels. # # If you are assembling the data for a language model, you should define your labels as always 0 to respect those formats. The first time you create a [`DataBunch`](/basic_data.html#DataBunch) with one of those functions, your data will be preprocessed automatically. You can save it, so that the next time you call it is almost instantaneous. # # Below are the classes that help assembling the raw data in a [`DataBunch`](/basic_data.html#DataBunch) suitable for NLP. # + hide_input=true show_doc(TextLMDataBunch, title_level=3) # - # All the texts in the [`datasets`](/datasets.html#datasets) are concatenated and the labels are ignored. Instead, the target is the next word in the sentence. # + hide_input=true show_doc(TextLMDataBunch.create) # + hide_input=true show_doc(TextClasDataBunch, title_level=3) # + hide_input=true show_doc(TextClasDataBunch.create) # - # All the texts are grouped by length (with a bit of randomness for the training set) then padded so that the samples have the same length to get in a batch. # + hide_input=true show_doc(TextDataBunch, title_level=3) # + hide_input=true jekyll_warn("This class can only work directly if all the texts have the same length.") # - # ### Factory methods (TextDataBunch) # All those classes have the following factory methods. # + hide_input=true show_doc(TextDataBunch.from_folder) # - # The floders are scanned in `path` with a <code>train</code>, `valid` and maybe `test` folders. Text files in the <code>train</code> and `valid` folders should be places in subdirectories according to their classes (not applicable for a language model). `tokenizer` will be used to parse those texts into tokens. # # You can pass a specific `vocab` for the numericalization step (if you are building a classifier from a language model you fine-tuned for instance). kwargs will be split between the [`TextDataset`](/text.data.html#TextDataset) function and to the class initialization, you can precise there parameters such as `max_vocab`, `chunksize`, `min_freq`, `n_labels` (see the [`TextDataset`](/text.data.html#TextDataset) documentation) or `bs`, `bptt` and `pad_idx` (see the sections LM data and classifier data). 
# + hide_input=true
show_doc(TextDataBunch.from_csv)
# -

# This method will look for `csv_name` in `path`, and maybe a `test` csv file opened with `header`. You can specify `text_cols` and `label_cols`. If there are several `text_cols`, the texts will be concatenated together with an optional field token. If there are several `label_cols`, the labels will be assumed to be one-hot encoded and `classes` will default to `label_cols` (you can ignore that argument for a language model). `tokenizer` will be used to parse those texts into tokens.
#
# You can pass a specific `vocab` for the numericalization step (if you are building a classifier from a language model you fine-tuned, for instance). kwargs will be split between the [`TextDataset`](/text.data.html#TextDataset) function and the class initialization; there you can specify parameters such as `max_vocab`, `chunksize`, `min_freq`, `n_labels` (see the [`TextDataset`](/text.data.html#TextDataset) documentation) or `bs`, `bptt` and `pad_idx` (see the sections LM data and classifier data).

# + hide_input=true
show_doc(TextDataBunch.from_df)
# -

# This method will use `train_df`, `valid_df` and maybe `test_df` to build the [`TextDataBunch`](/text.data.html#TextDataBunch) in `path`. You can specify `text_cols` and `label_cols`. If there are several `text_cols`, the texts will be concatenated together with an optional field token. If there are several `label_cols`, the labels will be assumed to be one-hot encoded and `classes` will default to `label_cols` (you can ignore that argument for a language model). `tokenizer` will be used to parse those texts into tokens.
#
# You can pass a specific `vocab` for the numericalization step (if you are building a classifier from a language model you fine-tuned, for instance). kwargs will be split between the [`TextDataset`](/text.data.html#TextDataset) function and the class initialization; there you can specify parameters such as `max_vocab`, `chunksize`, `min_freq`, `n_labels` (see the [`TextDataset`](/text.data.html#TextDataset) documentation) or `bs`, `bptt` and `pad_idx` (see the sections LM data and classifier data).

# + hide_input=true
show_doc(TextDataBunch.from_tokens)
# -

# This function will create a [`DataBunch`](/basic_data.html#DataBunch) from `trn_tok`, `trn_lbls`, `val_tok`, `val_lbls` and maybe `tst_tok`.
#
# You can pass a specific `vocab` for the numericalization step (if you are building a classifier from a language model you fine-tuned, for instance). kwargs will be split between the [`TextDataset`](/text.data.html#TextDataset) function and the class initialization; there you can specify parameters such as `max_vocab`, `chunksize`, `min_freq`, `n_labels`, `tok_suff` and `lbl_suff` (see the [`TextDataset`](/text.data.html#TextDataset) documentation) or `bs`, `bptt` and `pad_idx` (see the sections LM data and classifier data).

# + hide_input=true
show_doc(TextDataBunch.from_ids)
# -

# Texts are already preprocessed into `train_ids`, `train_lbls`, `valid_ids`, `valid_lbls` and maybe `test_ids`. You can specify the corresponding `classes` if applicable. You must specify a `path` and the `vocab` so that the [`RNNLearner`](/text.learner.html#RNNLearner) class can later infer the corresponding sizes in the model it will create. kwargs will be passed to the class initialization.

# ### Load and save

# To avoid losing time preprocessing the text data more than once, you should save/load your [`TextDataBunch`](/text.data.html#TextDataBunch) using these methods.
# + hide_input=true
show_doc(TextDataBunch.load)

# + hide_input=true
show_doc(TextDataBunch.save)
# -

# ### Example

# Untar the IMDB sample dataset if not already done:

path = untar_data(URLs.IMDB_SAMPLE)
path

# Since it comes in the form of csv files, we will use the corresponding `text_data` method. Here is an overview of what your file should look like:

pd.read_csv(path/'texts.csv').head()

# And here is a simple way of creating your [`DataBunch`](/basic_data.html#DataBunch) for language modelling or classification.

data_lm = TextLMDataBunch.from_csv(Path(path), 'texts.csv')
data_clas = TextClasDataBunch.from_csv(Path(path), 'texts.csv')

# ## The TextList input classes

# Behind the scenes, the previous functions will create a training, validation and maybe test [`TextList`](/text.data.html#TextList) that will be tokenized and numericalized (if needed) using [`PreProcessor`](/data_block.html#PreProcessor).

# + hide_input=true
show_doc(Text, title_level=3)

# + hide_input=true
show_doc(TextList, title_level=3)
# -

# `vocab` contains the correspondence between ids and tokens, `pad_idx` is the id used for padding. You can pass a custom `processor` in the `kwargs` to change the defaults for tokenization or numericalization. It should have the following form:

processor = [TokenizeProcessor(tokenizer=SpacyTokenizer('en')), NumericalizeProcessor(max_vocab=30000)]

# See below for all the arguments those tokenizers can take.

# + hide_input=true
show_doc(TextList.label_for_lm)

# + hide_input=true
show_doc(TextList.from_folder)

# + hide_input=true
show_doc(TextList.show_xys)

# + hide_input=true
show_doc(TextList.show_xyzs)

# + hide_input=true
show_doc(OpenFileProcessor, title_level=3)

# + hide_input=true
show_doc(open_text)

# + hide_input=true
show_doc(TokenizeProcessor, title_level=3)
# -

# `tokenizer` is used on bits of `chunksize`. If `mark_fields=True`, field tokens are added between each part of the texts (given when the texts are read in several columns of a dataframe). See more about tokenizers in the [transform documentation](/text.transform.html).

# + hide_input=true
show_doc(NumericalizeProcessor, title_level=3)
# -

# Uses `vocab` for this (if not None), otherwise creates one with `max_vocab` and `min_freq` from tokens.

# ## Language Model data

# A language model is trained to guess what the next word is inside a flow of words. We don't feed it the different texts separately but concatenate them all together in a big array. To create the batches, we split this array into `bs` chunks of continuous texts. Note that we don't use the usual NLP convention of sequence length being the first dimension: here batch size is the first dimension and sequence length is the second. Here you can read the chunks of texts in lines.

path = untar_data(URLs.IMDB_SAMPLE)
data = TextLMDataBunch.from_csv(path, 'texts.csv')
x,y = next(iter(data.train_dl))
example = x[:15,:15].cpu()
texts = pd.DataFrame([data.train_ds.vocab.textify(l).split(' ') for l in example])
texts

# + hide_input=true
jekyll_warn("If you are used to another convention, beware! fastai always uses batch as a first dimension, even in NLP.")
# -

# This is all done internally when we use [`TextLMDataBunch`](/text.data.html#TextLMDataBunch), by wrapping the dataset in the following pre-loader before calling a [`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader).
# + hide_input=true show_doc(LanguageModelPreLoader) # - # Takes the texts from `dataset` that have certain `lengths` (if this argument isn't passed, `lengths` are computed at initiliazation). It will prepare the data for batches with a batch size of `bs` and a sequence length `bptt`. If `backwards=True`, reverses the original text. If `shuffle=True`, we shuffle the texts before going through them, at the start of each epoch. If `batch_first=True`, the last batch of texts (with a sequence length < `bptt`) is discarded. # ## Classifier data # When preparing the data for a classifier, we keep the different texts separate, which poses another challenge for the creation of batches: since they don't all have the same length, we can't easily collate them together in batches. To help with this we use two different techniques: # - padding: each text is padded with the `PAD` token to get all the ones we picked to the same size # - sorting the texts (ish): to avoid having together a very long text with a very short one (which would then have a lot of `PAD` tokens), we regroup the texts by order of length. For the training set, we still add some randomness to avoid showing the same batches at every step of the training. # # Here is an example of batch with padding (the padding index is 1, and the padding is applied before the sentences start). path = untar_data(URLs.IMDB_SAMPLE) data = TextClasDataBunch.from_csv(path, 'texts.csv') iter_dl = iter(data.train_dl) _ = next(iter_dl) x,y = next(iter_dl) x[-10:,:20] # This is all done internally when we use [`TextClasDataBunch`](/text.data.html#TextClasDataBunch), by using the following classes: # + hide_input=true show_doc(SortSampler) # - # This pytorch [`Sampler`](https://pytorch.org/docs/stable/data.html#torch.utils.data.Sampler) is used for the validation and (if applicable) the test set. # + hide_input=true show_doc(SortishSampler) # - # This pytorch [`Sampler`](https://pytorch.org/docs/stable/data.html#torch.utils.data.Sampler) is generally used for the training set. # + hide_input=true show_doc(pad_collate) # - # This will collate the `samples` in batches while adding padding with `pad_idx`. If `pad_first=True`, padding is applied at the beginning (before the sentence starts) otherwise it's applied at the end. # ## Undocumented Methods - Methods moved below this line will intentionally be hidden show_doc(TextList.new) show_doc(TextList.get) show_doc(TokenizeProcessor.process_one) show_doc(TokenizeProcessor.process) show_doc(OpenFileProcessor.process_one) show_doc(NumericalizeProcessor.process) show_doc(NumericalizeProcessor.process_one) show_doc(TextList.reconstruct) show_doc(LanguageModelPreLoader.on_epoch_begin) show_doc(LanguageModelPreLoader.on_epoch_end) # ## New Methods - Please document or move to the undocumented section
docs_src/text.data.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Tutorial on large-scale Thompson sampling # # This demo currently considers four approaches to discrete Thompson sampling on `m` candidate points: # # 1. **Exact sampling with Cholesky:** Computing a Cholesky decomposition of the corresponding `m x m` covariance matrix, which requires `O(m^3)` computational cost and `O(m^2)` space. This is the standard approach to sampling from a Gaussian process, but the quadratic memory usage and cubic complexity limit the number of candidate points. # # 2. **Contour integral quadrature (CIQ):** CIQ [1] is a Krylov subspace method combined with a rational approximation that can be used for computing matrix square roots of covariance matrices, which is the main bottleneck when sampling from a Gaussian process. CIQ relies on computing matrix vector multiplications with the exact kernel matrix, which requires `O(m^2)` computational complexity and space. The space complexity can be lowered to `O(m)` by using [KeOps](https://github.com/getkeops/keops), which is necessary to scale to large values of `m`. # # 3. **Lanczos:** Rather than using CIQ, we can solve the linear systems `K^(1/2) v = b` using Lanczos and the conjugate gradient (CG) method. This will be faster than CIQ, but will generally produce samples of worse quality. Similarly to CIQ, we need to use KeOps, as we require computing matrix vector multiplications with the exact kernel matrix. # # 4. **Random Fourier features (RFFs):** The RFF kernel was originally proposed in [2] and we use it as implemented in GPyTorch. RFFs are computationally cheap to work with as the computational cost and space are both `O(km)`, where `k` is the number of Fourier features. Note that while Cholesky and CIQ are able to generate exact samples from the GP model, RFFs are an unbiased approximation and the resulting samples often aren't perfectly calibrated. # # # [1] [<NAME>, et al. "Fast matrix square roots with applications to Gaussian processes and Bayesian optimization.", Advances in neural information processing systems (2020)](https://proceedings.neurips.cc/paper/2020/file/fcf55a303b71b84d326fb1d06e332a26-Paper.pdf) # # [2] [<NAME>, and <NAME>.
"Random features for large-scale kernel machines.", Advances in neural information processing systems (2007)](https://people.eecs.berkeley.edu/~brecht/papers/07.rah.rec.nips.pdf) # + import os import time from contextlib import ExitStack import torch from torch.quasirandom import SobolEngine import gpytorch import gpytorch.settings as gpts import pykeops from botorch.fit import fit_gpytorch_model from botorch.generation import MaxPosteriorSampling from botorch.models import SingleTaskGP from botorch.test_functions import Hartmann from botorch.utils.transforms import unnormalize from gpytorch.constraints import Interval from gpytorch.distributions import MultivariateNormal from gpytorch.kernels import MaternKernel, RFFKernel, ScaleKernel from gpytorch.kernels.keops import MaternKernel as KMaternKernel from gpytorch.likelihoods import GaussianLikelihood from gpytorch.mlls import ExactMarginalLogLikelihood device = torch.device("cuda" if torch.cuda.is_available() else "cpu") dtype = torch.double SMOKE_TEST = os.environ.get("SMOKE_TEST") # - pykeops.test_torch_bindings() # Make sure the KeOps bindings are working # + hart6 = Hartmann(dim=6, negate=True).to(device=device, dtype=dtype) dim = hart6.dim def eval_objective(x): """This is a helper function we use to unnormalize and evalaute a point""" return hart6(unnormalize(x, hart6.bounds)) # - def get_initial_points(dim, n_pts, seed=None): sobol = SobolEngine(dimension=dim, scramble=True, seed=seed) X_init = sobol.draw(n=n_pts).to(dtype=dtype, device=device) return X_init def generate_batch( X, Y, batch_size, n_candidates, sampler="cholesky", # "cholesky", "ciq", "rff" use_keops=False, ): assert sampler in ("cholesky", "ciq", "rff", "lanczos") assert X.min() >= 0.0 and X.max() <= 1.0 and torch.all(torch.isfinite(Y)) # NOTE: We probably want to pass in the default priors in SingleTaskGP here later kernel_kwargs = {"nu": 2.5, "ard_num_dims": X.shape[-1]} if sampler == "rff": base_kernel = RFFKernel(**kernel_kwargs, num_samples=1024) else: base_kernel = ( KMaternKernel(**kernel_kwargs) if use_keops else MaternKernel(**kernel_kwargs) ) covar_module = ScaleKernel(base_kernel) # Fit a GP model train_Y = (Y - Y.mean()) / Y.std() likelihood = GaussianLikelihood(noise_constraint=Interval(1e-8, 1e-3)) model = SingleTaskGP(X, train_Y, likelihood=likelihood, covar_module=covar_module) mll = ExactMarginalLogLikelihood(model.likelihood, model) fit_gpytorch_model(mll) # Draw samples on a Sobol sequence sobol = SobolEngine(X.shape[-1], scramble=True) X_cand = sobol.draw(n_candidates).to(dtype=dtype, device=device) # Thompson sample with ExitStack() as es: if sampler == "cholesky": es.enter_context(gpts.max_cholesky_size(float("inf"))) elif sampler == "ciq": es.enter_context(gpts.fast_computations(covar_root_decomposition=True)) es.enter_context(gpts.max_cholesky_size(0)) es.enter_context(gpts.ciq_samples(True)) es.enter_context(gpts.minres_tolerance(2e-3)) # Controls accuracy and runtime es.enter_context(gpts.num_contour_quadrature(15)) elif sampler == "lanczos": es.enter_context(gpts.fast_computations(covar_root_decomposition=True)) es.enter_context(gpts.max_cholesky_size(0)) es.enter_context(gpts.ciq_samples(False)) elif sampler == "rff": es.enter_context(gpts.fast_computations(covar_root_decomposition=True)) thompson_sampling = MaxPosteriorSampling(model=model, replacement=False) X_next = thompson_sampling(X_cand, num_samples=batch_size) return X_next def run_optimization(sampler, n_candidates, n_init, max_evals, batch_size, use_keops=False, seed=None): X = 
get_initial_points(dim, n_init, seed) Y = torch.tensor([eval_objective(x) for x in X], dtype=dtype, device=device).unsqueeze(-1) print(f"{len(X)}) Best value: {Y.max().item():.2e}") while len(X) < max_evals: # Create a batch start = time.time() X_next = generate_batch( X=X, Y=Y, batch_size=min(batch_size, max_evals - len(X)), n_candidates=n_candidates, sampler=sampler, use_keops=use_keops, ) end = time.time() print(f"Generated batch in {end - start:.1f} seconds") Y_next = torch.tensor( [eval_objective(x) for x in X_next], dtype=dtype, device=device ).unsqueeze(-1) # Append data X = torch.cat((X, X_next), dim=0) Y = torch.cat((Y, Y_next), dim=0) print(f"{len(X)}) Best value: {Y.max().item():.2e}") return X, Y # + batch_size = 5 n_init = 10 max_evals = 60 seed = 0 # To get the same Sobol points shared_args = { "n_init": n_init, "max_evals": max_evals, "batch_size": batch_size, "seed": seed, } # - USE_KEOPS = True if not SMOKE_TEST else False N_CAND = 50000 if not SMOKE_TEST else 10 N_CAND_CHOL = 10000 if not SMOKE_TEST else 10 # ## Track memory footprint # %load_ext memory_profiler # ## Cholesky with 10,000 candidates # %memit X_chol, Y_chol = run_optimization("cholesky", N_CAND_CHOL, **shared_args) # ## RFF with 50,000 candidates # %memit X_rff, Y_rff = run_optimization("rff", N_CAND, **shared_args) # ## Lanczos # %memit X_lanczos, Y_lanczos = run_optimization("lanczos", N_CAND, use_keops=USE_KEOPS, **shared_args) # ## CIQ with 50,000 candidates # %memit X_ciq, Y_ciq = run_optimization("ciq", N_CAND, use_keops=USE_KEOPS, **shared_args) # ## Plot # + import matplotlib import matplotlib.pyplot as plt import numpy as np fig = plt.figure(figsize=(10, 8)) matplotlib.rcParams.update({"font.size": 20}) results = [ (Y_chol.cpu(), "Cholesky-10,000", "b", "", 14, "--"), (Y_rff.cpu(), "RFF-50,000", "r", ".", 16, "-"), (Y_lanczos.cpu(), "Lanczos-50,000", "m", "^", 9, "-"), (Y_ciq.cpu(), "CIQ-50,000", "g", "*", 12, "-"), ] optimum = hart6.optimal_value ax = fig.add_subplot(1, 1, 1) names = [] for res, name, c, m, ms, ls in results: names.append(name) fx = res.cummax(dim=0)[0] t = 1 + np.arange(len(fx)) plt.plot(t[0::2], fx[0::2], c=c, marker=m, linestyle=ls, markersize=ms) plt.plot([0, max_evals], [hart6.optimal_value, hart6.optimal_value], "k--", lw=3) plt.xlabel("Function value", fontsize=18) plt.xlabel("Number of evaluations", fontsize=18) plt.title("Hartmann6", fontsize=24) plt.xlim([0, max_evals]) plt.ylim([0.5, 3.5]) plt.grid(True) plt.tight_layout() plt.legend( names + ["Global optimal value"], loc="lower right", ncol=1, fontsize=18, ) plt.show() # -
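# As a final quick comparison (a small sketch added here rather than part of the original benchmark), we can print the best value found by each sampler next to the known optimum of Hartmann6:

# summarize the best observed value for each sampler
for name, Y in [("Cholesky", Y_chol), ("RFF", Y_rff), ("Lanczos", Y_lanczos), ("CIQ", Y_ciq)]:
    print(f"{name}: best value {Y.max().item():.3f} (global optimum {hart6.optimal_value:.3f})")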
tutorials/thompson_sampling.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Convolutional Layer # # In this notebook, we visualize four filtered outputs (a.k.a. activation maps) of a convolutional layer. # # In this example, *we* are defining four filters that are applied to an input image by initializing the **weights** of a convolutional layer, but a trained CNN will learn the values of these weights. # # <img src='notebook_ims/conv_layer.gif' height=60% width=60% /> # ### Import the image # + import cv2 import matplotlib.pyplot as plt # %matplotlib inline # TODO: Feel free to try out your own images here by changing img_path # to a file path to another image on your computer! img_path = 'data/udacity_sdc.png' # load color image bgr_img = cv2.imread(img_path) # convert to grayscale gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY) # - print(gray_img) print(gray_img.astype("float32")/255) # + # normalize, rescale entries to lie in [0,1] gray_img = gray_img.astype("float32")/255 print(gray_img.shape) # plot image plt.imshow(gray_img, cmap='gray') plt.show() # - # ### Define and visualize the filters # + import numpy as np ## TODO: Feel free to modify the numbers here, to try out another filter! filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]]) print('Filter shape: ', filter_vals.shape) # + # Defining four different filters, # all of which are linear combinations of the `filter_vals` defined above # define four filters filter_1 = filter_vals filter_2 = -filter_1 filter_3 = filter_1.T filter_4 = -filter_3 filters = np.array([filter_1, filter_2, filter_3, filter_4]) # - # visualize all four filters fig = plt.figure(figsize=(10, 5)) for i in range(4): ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[]) ax.imshow(filters[i], cmap='gray') ax.set_title('Filter %s' % str(i+1)) width, height = filters[i].shape for x in range(width): for y in range(height): ax.annotate(str(filters[i][x][y]), xy=(y,x), horizontalalignment='center', verticalalignment='center', color='white' if filters[i][x][y]<0 else 'black') # ## Define a convolutional layer # # The various layers that make up any neural network are documented, [here](http://pytorch.org/docs/stable/nn.html). For a convolutional neural network, we'll start by defining a: # * Convolutional layer # # Initialize a single convolutional layer so that it contains all your created filters. Note that you are not training this network; you are initializing the weights in a convolutional layer so that you can visualize what happens after a forward pass through this network! # # # #### `__init__` and `forward` # To define a neural network in PyTorch, you define the layers of a model in the function `__init__` and define the forward behavior of a network that applyies those initialized layers to an input (`x`) in the function `forward`. In PyTorch we convert all inputs into the Tensor datatype, which is similar to a list data type in Python. # # Below, I define the structure of a class called `Net` that has a convolutional layer that can contain four 4x4 grayscale filters. 
# + import torch import torch.nn as nn import torch.nn.functional as F # define a neural network with a single convolutional layer with four filters class Net(nn.Module): def __init__(self, weight): super(Net, self).__init__() # initializes the weights of the convolutional layer to be the weights of the 4 defined filters k_height, k_width = weight.shape[2:] # assumes there are 4 grayscale filters self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False) self.conv.weight = torch.nn.Parameter(weight) def forward(self, x): # calculates the output of a convolutional layer # pre- and post-activation conv_x = self.conv(x) activated_x = F.relu(conv_x) # returns both layers return conv_x, activated_x # instantiate the model and set the weights weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor) model = Net(weight) # print out the layer in the network print(model) # - print(filters) print(torch.from_numpy(filters).unsqueeze(1)) print(torch.from_numpy(filters).unsqueeze(1).shape) print(torch.from_numpy(filters).unsqueeze(1).shape[2:]) # **torch.unsqueeze(input, dim, out=None) → Tensor** # Returns a new tensor with a dimension of size one inserted at the specified position. # input (Tensor) – the input tensor # dim (int) – the index at which to insert the singleton dimension # out (Tensor, optional) – the output tensor x = torch.tensor([1, 2, 3, 4]) print(torch.unsqueeze(x, 0)) print(torch.unsqueeze(x, 1)) # ### Visualize the output of each filter # # First, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through. # helper function for visualizing the output of a given layer # default number of filters is 4 def viz_layer(layer, n_filters= 4): fig = plt.figure(figsize=(20, 20)) for i in range(n_filters): ax = fig.add_subplot(1, n_filters, i+1, xticks=[], yticks=[]) # grab layer outputs ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray') ax.set_title('Output %s' % str(i+1)) # Let's look at the output of a convolutional layer, before and after a ReLu activation function is applied. # + # plot original image plt.imshow(gray_img, cmap='gray') # visualize all filters fig = plt.figure(figsize=(12, 6)) fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05) for i in range(4): ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[]) ax.imshow(filters[i], cmap='gray') ax.set_title('Filter %s' % str(i+1)) # convert the image into an input Tensor gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1) print(gray_img_tensor.shape) # [1, 1, 213, 320] # get the convolutional layer (pre and post activation) conv_layer, activated_layer = model(gray_img_tensor) # visualize the output of a conv layer viz_layer(conv_layer) # - # #### ReLu activation # # In this model, we've used an activation function that scales the output of the convolutional layer. We've chose a ReLu function to do this, and this function simply turns all negative pixel values in 0's (black). See the equation pictured below for input pixel values, `x`. # # <img src='notebook_ims/relu_ex.png' height=50% width=50% /> # after a ReLu is applied # visualize the output of an activated conv layer viz_layer(activated_layer)
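# As a quick sanity check on the shapes (a small sketch that assumes the cells above have been run): with a 4x4 kernel, stride 1 and no padding, the 213x320 input shrinks to (213-4+1) x (320-4+1) = 210 x 317, and the ReLU does not change the shape.

# verify the spatial size of the filtered outputs
print(conv_layer.shape)        # expected: torch.Size([1, 4, 210, 317])
print(activated_layer.shape)   # ReLU is applied element-wise, so the shape is unchanged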
convolutional-neural-networks/conv-visualization/conv_visualization.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/jakefed1/jakefed1.github.io/blob/master/survey_lab.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + id="uglH-di7VkiS" import pandas as pd import numpy as np import seaborn as sns import altair as alt import matplotlib.pyplot as plt # + colab={"base_uri": "https://localhost:8080/"} id="kJzLDI2mVhqJ" outputId="9ef1c4a7-f4ce-453b-c2b0-474ace9aae0a" survey_path = "surveyedited.csv" survey_df = pd.read_csv(survey_path) print(survey_df.head(10)) # + colab={"base_uri": "https://localhost:8080/"} id="YpXrattQV4Dc" outputId="69bce78d-8e89-4822-edef-125fb20c0a1c" print(survey_df.shape) print(survey_df.columns) print(survey_df.describe()) # + [markdown] id="D63J6btcV6py" # ## Intro: About the Dataset # # For my lab, I looked at a dataset titled _Young People Survey_ that was created from a survey conducted by Slovakian college students in 2013. The survey, which was was aimed at friends of the students and other young individuals, consisted of 150 questions that ranged in subject from personality to background to habits. The goal of the study was to explore common trends and traits of young people. A total of 1,010 people filled out the questionaire, making for a dataset of 1,010 rows and 151 columns. Some of the columns contain categorical variables, but the vast majority contain quantitative variables, including both discrete and continuous variables. There are a few missing values in each column, and there could be response bias in a few of the questions, like smoking and drinking habits, as students may not want to admit they take part in such activities knowing that their responses are being recorded and saved. However, these issues seem to be small and do not seem to pose threats to the validity of the dataset as a whole. # + [markdown] id="w4vUv67JV95y" # ## Hypotheses # # I want to specifically analyze the music tastes of the young individuals in the survey. I want to know if hip hop music is preferred by young people over other genres like rock and punk. To do this, I will build a 2 sample means confidence interval to estimate the difference in the true population means between hip hop and rock, and then I will build another confidence interval of the same type, substituting punk for rock. The test between hip hop and rock will be indicated by the hebrew letter א, while the test between hip hop and punk will be indicated by the hebrew letter ב. Means must be used instead of proportions because the question in the survey asked participants to assign a rating to each genre from 1-5 with 5 being the best and 1 being the worst. The mean rating can serve as a decent estimator of the overall opinion of the genre. # # H0א: μ(hip hop) = μ(rock); The true population mean rating for hip hop is equal to the true population mean rating for rock. There is no difference between the two, young people do not prefer one of the genres over the other. # # HAא: μ(hip hop) != μ(rock); The true population mean rating for hip hop is not equal to the true population mean rating for rock. There is a difference between the two, young people prefer one of the genres over the other. 
# # H0ב: μ(hip hop) = μ(punk); The true population mean rating for hip hop is equal to the true population mean rating for punk. There is no difference between the two, young people do not prefer one of the genres over the other. # # HAב: μ(hip hop) != μ(punk); The true population mean rating for hip hop is not equal to the true population mean rating for punk. There is a difference between the two, young people prefer one of the genres over the other. # + [markdown] id="5ogf4KM6WI33" # ## Exploring the Dataset # # Out of the three genres, rock has the highest average rating, coming in at 3.76. It is followed by hip hop at 2.91 and then punk at 2.46. The histograms show that the rating most assigned to rock was a perfect 5, making it unsurprising that rock has an extraordinarily high average rating. The distribution for hip hop is nearly uniform, as each rating has nearly the same frequency, with only slightly fewer people giving the genre a perfect 5, dragging down its mean ever so slightly from 3 to 2.91. Finally, the rating most assigned to punk was a poor 1, causing the mean to be significantly lower than the other two genres. # + colab={"base_uri": "https://localhost:8080/"} id="x6ZkqieRWJ-y" outputId="d2841250-7674-447d-b7de-74da5163074d" survey_df2 = survey_df[['Hiphop, Rap', 'Rock', 'Punk']] print(survey_df2.describe()) # + colab={"base_uri": "https://localhost:8080/", "height": 851} id="qeHLVWmjWNiL" outputId="e043b7e6-3e8f-45f0-efaa-938db863646c" raphist = sns.histplot(data = survey_df2, x = 'Hiphop, Rap') plt.xlabel('Hip hop rating') plt.ylabel('Frequency') plt.title('Distribution of hip hop ratings') plt.show() rockhist = sns.histplot(data = survey_df2, x = 'Rock') plt.xlabel('Rock rating') plt.ylabel('Frequency') plt.title('Distribution of rock ratings') plt.show() punkhist = sns.histplot(data = survey_df2, x = 'Punk') plt.xlabel('Punk rating') plt.ylabel('Frequency') plt.title('Distribution of punk ratings') plt.show() # + [markdown] id="4r2_QpfLWYwC" # ## Confidence Intervals Using Bootstrap Method # # To build confidence intervals, I created new columns for the differences between hip hop rating and rock rating as well as hip hop and punk rating. I then used the bootstrap method, taking a sample of size 50 from my dataset, and resampling from that sample with replacement 1,000 times, taking the mean of each of the 1,000 samples. The 5th and 95th percentiles of the differences serve as the boundaries for the confidence intervals. # # The 90% confidence interval for the true population mean difference between hip hop rating and rock rating is (-1.1, -0.34). We are 90% confident the true population mean difference between hip hop rating and rock rating is between -1.1 and -0.34. Since 0 is not in our interval, it is not a plausible value, meaning it is not plausible that there is no difference between the hip hop rating and rock rating, and we can reject our null hypothesis H0א which states there is no difference between the hip hop rating and rock rating. # # The 90% confidence interval for the true population mean difference between hip hop rating and punk rating is (0.26, 1.0). We are 90% confident the true population mean difference between hip hop rating and punk rating is between 0.26 and 1.0. Since 0 is not in our interval, it is not a plausible value, meaning it is not plausible that there is no difference between the hip hop rating and punk rating, and we can reject our null hypothesis H0ב which states there is no difference between the hip hop rating and punk rating. 
# + colab={"base_uri": "https://localhost:8080/", "height": 501} id="ZTio0FinWaPX" outputId="3b1166b3-7b81-4c38-e6a8-6b671b23f14c" survey_df2["hiphopminusrock"] = survey_df2["Hiphop, Rap"] - survey_df2["Rock"] survey_df2["hiphopminuspunk"] = survey_df2["Hiphop, Rap"] - survey_df2["Punk"] survey_df2.head(10) survey_df2.describe() # + colab={"base_uri": "https://localhost:8080/"} id="Q5wCOeDNWjtF" outputId="4d8895b6-c471-4f33-fa4d-d52cd5cbb469" #Take a sample and bootstrap it survey_df3 = survey_df2.sample(50) def bootstrap_sample(df): bootstrapdf = df.sample(len(df), replace = True) return bootstrapdf.mean() def bootstrap_samples(N, df): x = 0 list = [] while x < N: list.append(bootstrap_sample(df)) x+=1 return list hhmrbs = bootstrap_samples(1000, survey_df3['hiphopminusrock']) survey_df4 = pd.DataFrame(hhmrbs, columns=['Y']) a = survey_df4['Y'].quantile(0.05) b = survey_df4['Y'].quantile(0.95) print(a,b) hhmpbs = bootstrap_samples(1000, survey_df3['hiphopminuspunk']) survey_df5 = pd.DataFrame(hhmpbs, columns=['Y']) c = survey_df5['Y'].quantile(0.05) d = survey_df5['Y'].quantile(0.95) print(c,d) # + [markdown] id="u1zr8N4nWo0l" # ## Conclusion # # In conclusion, it does appear there is a statistically significant difference between the mean rating for hip hop music and rock music, as well as hip hop music and punk music. The data seems to indicate that while hip hop is preferred by young people over punk, rock is preferred over hip hop.
survey_lab.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.7.3 64-bit (''base'': conda)' # name: python37364bitbaseconda863a9d2b2ce24774beb380f071c4d0fa # --- # # Machine Learning Course # # Data from 2019-nCoV # + import matplotlib.pyplot as plt import numpy as np y_data = [24324., 20438., 17205., 14380., 11791., 9692., 7711., 5974., 4515., 2744., 1975., 1287., 830., 571., 440. ] x_data = [15., 14., 13., 12., 11., 10., 9., 8., 7., 6., 5., 4., 3., 2., 1. ] # + # y_data = b + w1 * x_data + w2 * x_data*x_data b = -2200 w1 = 1000 w2 = 0 lr_b = 0 lr_w1 = 0 lr_w2 = 0 lr = 100 #learning rate iteration = 10000 diff = 0 for n in range(len(x_data)): diff = diff + (y_data[n] - b - w1*x_data[n] - w2*x_data[n]*x_data[n])**2 # store initial values for plotting b_history = [b] w1_history = [w1] w2_history = [w2] diff_history = [diff] # Iterations for i in range(iteration): b_grad = 0.0 w1_grad = 0.0 w2_grad = 0.0 for n in range(len(x_data)): b_grad = b_grad - 2.0*(y_data[n] - b - w1*x_data[n] - w2*x_data[n]*x_data[n])*1.0 w1_grad = w1_grad - 2.0*(y_data[n] - b - w1*x_data[n] - w2*x_data[n]*x_data[n])*x_data[n] w2_grad = w2_grad - 2.0*(y_data[n] - b - w1*x_data[n] - w2*x_data[n]*x_data[n])*x_data[n]*x_data[n] lr_b += b_grad**2 lr_w1 += w1_grad**2 lr_w2 += w2_grad**2 # update parameters. b = b - lr/np.sqrt(lr_b) * b_grad w1 = w1 - lr/np.sqrt(lr_w1) * w1_grad w2 = w2 - lr/np.sqrt(lr_w2) * w2_grad # store parameters for plotting diff = 0 for n in range(len(x_data)): diff = diff + (y_data[n] - b - w1*x_data[n] - w2*x_data[n]*x_data[n])**2 b_history.append(b) w1_history.append(w1) w2_history.append(w2) diff_history.append(diff) # - ''' # y_data = b + w1 * x_data + w2 * x_data*x_data b = -2200 w1 = 1000 w2 = 0 lr = 0.000005 #learning rate iteration = 500000 diff = 0 for n in range(len(x_data)): diff = diff + (y_data[n] - b - w1*x_data[n] - w2*x_data[n]*x_data[n])**2 # store initial values for plotting b_history = [b] w1_history = [w1] w2_history = [w2] diff_history = [diff] # Iterations for i in range(iteration): b_grad = 0.0 w1_grad = 0.0 w2_grad = 0.0 for n in range(len(x_data)): b_grad = b_grad - 2.0*(y_data[n] - b - w1*x_data[n] - w2*x_data[n]*x_data[n])*1.0 w1_grad = w1_grad - 2.0*(y_data[n] - b - w1*x_data[n] - w2*x_data[n]*x_data[n])*x_data[n] w2_grad = w2_grad - 2.0*(y_data[n] - b - w1*x_data[n] - w2*x_data[n]*x_data[n])*x_data[n]*x_data[n] # update parameters. b = b - lr * b_grad w1 = w1 - lr * w1_grad w2 = w2 - lr * w2_grad # store parameters for plotting diff = 0 for n in range(len(x_data)): diff = diff + (y_data[n] - b - w1*x_data[n] - w2*x_data[n]*x_data[n])**2 b_history.append(b) w1_history.append(w1) w2_history.append(w2) diff_history.append(diff) ''' # + # plot the figure plt.figure(dpi=100) plt.plot(x_data, y_data, 'o-', ms=3, lw=1.5, color='black',label="Disease") x_grad = np.arange(1,17) y_grad = [i*i*w2 + i*w1 + b for i in x_grad] plt.plot(x_grad,y_grad, 'o-', ms=3, lw=1.5, color='red',label="Prediction") plt.rcParams['font.sans-serif']=['SimHei'] plt.title('2019 nCoV Regression Model\n2019新型冠状病毒回归模型') plt.ylabel(u'Confirmed case/确诊病例') plt.xlabel(u'Date(from Jan 22)/日期(从1月22日始)') plt.legend() plt.show() print('y = x**2*',w2,'+x*',w1,'+',b) # - plt.plot(range(iteration+1),diff_history) plt.show() print(16*16*w2+16*w1+b) print(17*17*w2+17*w1+b)
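# As a cross-check (a small sketch, not part of the original exercise), a direct least-squares fit of the same quadratic with `np.polyfit` should give coefficients close to the ones found by Adagrad above, and a similar prediction for day 16.

# least-squares quadratic fit as a sanity check on the gradient-descent result
coeffs = np.polyfit(x_data, y_data, 2)    # returns [w2, w1, b], highest power first
print('np.polyfit   w2, w1, b:', coeffs)
print('grad descent w2, w1, b:', w2, w1, b)
print('day 16 prediction:', np.polyval(coeffs, 16))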
MachineLearning_2.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .sos # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: SoS # language: sos # name: sos # --- # + [markdown] kernel="SoS" # # Extending SoS # + [markdown] kernel="SoS" # SoS can be easily extended with new actions, targets, converters, file previewers. To make the extension available to other users, you can either create and distribute a separate package, or extend SoS and send us a [pull request](https://help.github.com/articles/about-pull-requests/). Please open a ticket and discuss the idea with us before you send a pull request. # + [markdown] kernel="SoS" # ## Understanding `entry_points` # + [markdown] kernel="SoS" # SoS makes extensive use of [**entry points**](http://setuptools.readthedocs.io/en/latest/setuptools.html#dynamic-discovery-of-services-and-plugins), which allows external modules to register their features in the file system to make them available to other modules. It can be confusing initially but [this stack overflow ticket](http://stackoverflow.com/questions/774824/explain-python-entry-points) explains the `entry_points` mechanism quite well. # # To register additional feature with SoS, your package should define one or more sos-recognizable `entry_points` such as `sos-languages`, `sos-targets`, and `sos-actions`, with a syntax similar to # # ``` # entry_points=''' # [sos-language] # ruby = sos_ruby.kernel:sos_ruby # # [sos-targets] # Ruby_Library = sos_ruby.target:Ruby-Library # ''' # ``` # # With the installation of this package, `sos` would be able to obtain a class `sos_ruby` from module `sos_ruby.kernel`, and use it to work with the `ruby` language. # + [markdown] kernel="SoS" # ## Defining your own actions # + [markdown] kernel="SoS" # Under the hood an action is a normal Python function that is decorated as `SoS_Action`. The `decorator` defines the common interface of actions and calls the actual function. To define your own action, you generally need to # # ``` # from sos.actions import SoS_Action # # @SoS_Action() # def my_action(*args, **kwargs): # pass # ``` # # The decorator accepts an optional parameter `acceptable_args=['*']` which can be used to specify a list of acceptable parameter (`*` matches all keyword args). An exception will be raised if an action is defined with a list of `acceptable_args` and is called with an unrecognized argument. # + [markdown] kernel="SoS" # You then need to add an entry to `entry_points` in your `setup.py` file as # # ``` # [sos-actions] # my_action = mypackage.mymodule:my_action # ``` # + [markdown] kernel="SoS" # The most important feature of an SoS actions is that they can behave differently in different `run_mode`, which can be `dryrun`, `run`, or `interactive` (for SoS Notebook). Depending on the nature of your action, you might want to do nothing for in `dryrun` mode and give more visual feedback in `interactive` mode. The relevant code would usually look like # # ``` # if env.config['run_mode'] == 'dryrun': # return None # ``` # # Because actions are often used in script format with ignored return value, actions usually return `None` for success, and raise an exception when error happens. # + [markdown] kernel="SoS" # If the execution of action depends on some other targets, you can raise an `UnknownTarget` with the target so that the target can be obtained, and the SoS step and the action will be re-executed after the target is obtained. 
For example, if your action depends on a particular `R_library`, you can test the existence of the target as follows: # # ``` # from sos.targets import UnknownTarget # from sos.targets_r import R_library # # @SoS_Action() # def my_action(script, *args, **kwargs): # if not R_library('somelib').target_exists(): # raise UnknownTarget(R_library('somelib')) # # ... # ``` # + [markdown] kernel="SoS" # ## Additional targets # + [markdown] kernel="SoS" # Additional target should be derived from [`BaseTarget`](https://github.com/vatlab/SoS/blob/master/src/sos/targets.py). # # ``` # from sos.targets import BaseTarget # # class my_target(BaseTarget): # def __init__(self, *args, **kwargs): # super(my_target, self).__init__(self) # # def target_name(self): # ... # # def target_exists(self, mode='any'): # ... # # def target_signature(self): # ... # # ``` # # Any target type should define the three functions: # # * `target_name`: name of the target for reporting purpose. # * `target_exists`: check if the target exists. This function accepts a parameter `mode` which can `target`, `signature`, or `any`, which you can safely ignore. # * `target_signature`: returns any immutable Python object (usually a string) that uniquely identifies the target so that two targets can be considered the same (different) if their signatures are the same (different). The signature is used to detect if a target has been changed. # # The details of this class can be found at the source code of [`BaseTarget`](https://github.com/vatlab/SoS/blob/master/src/sos/targets.py). The [`R_Library`](https://github.com/vatlab/SoS/blob/master/src/sos/targets_r.py) provides a good example of a **virtual target** that does not have a fixed corresponding file, can be checked for existence, and actually attempts to obtain (install a R library) the target when it is checked. # # After you defined your target, you will need to add an appropriate entry point to make it available to SoS: # # ``` # [sos-targets] # my_target = mypackage.targets:my_target # ``` # + [markdown] kernel="SoS" # ## File format conversion # + [markdown] kernel="SoS" # To convert between sos and another file format, you would need to define two function, one returnning an [`argparse.ArgumentParser`](https://docs.python.org/3/library/argparse.html) that parse converter arguments, and one performing real file conversion. 
# # Suppose you would like to convert `.sos` to a `.xp` format, you can define these two functions as follows # # ``` # import argparse # from sos.parser import SoS_Script # # def get_my_converter_parser(): # parser = argparse.ArgumentParser('sos_xp') # parser.add_argument('--theme', # help='Style of output format') # return parser # # def my_converter(source_file, dest_file, args=None, unknown_args=[]): # # parse additional_args to obtain converter-specific options # # then convert from source_file to dest_file # script = SoS_Script(source_file) # for section in script.sections: # # do something # # if __name__ == '__main__': # parser = get_my_converter_parser() # args, unknown_args = parser.parse_known_args(sys.argv[3:]) # my_converter(sys.argv[1], sys.argv[2], args, unknown_args) # # ``` # # You can then register the converter in `setup.py` as # # ``` # [sos-converters] # fromExt-toExt.parser: mypackage.mymodule:get_my_converter_parser # fromExt-toExt.func: mypackage.mymodule:my_converter # ``` # # Here `fromExt` is file extension without leading dot, `toExt` is destination file extension without leading dot, or a format specified by the `--to` parameter of command `sos convert`. If `dest_file` is unspecified, the output should be written to standard output. # # This example uses `if __name__ == '__main__'` section so that the converter can be used as a standandalone program, which is not needed but a great way for testing purposes. Note that the input and output files are handled by `sos convert` so the parser only needs to parse converter-specific options. # + [markdown] kernel="SoS" # ## Preview additional formats # # Adding a preview function is very simple. All you need to do is define a function that returns preview information, and add an entry point to link the function to certain file format. # # More specifically, a previewer should be specified as # # ``` # pattern,priority = preview_module:func # ``` # # or # # ``` # module:func,priority = preview_module:func # ``` # # where # # 1. `pattern` is a pattern that matches incoming filename (see module fnmatch.fnmatch for details) # 2. `module:func` specifies a function in module that detects the type of input file. # 3. `priority` is an integer number that indicates the priority of previewer in case multiple pattern or function matches the same file. Developers of third-party previewer can override an existing previewer by specifying a higher priority number. # 4. `preview_module:func` points to a function in a module. The function should accept a filename as the only parameter, and returns either # # * A string that will be displayed as plain text to standard output. # * A dictionary that will be returned as `data` field of `display_data` (see [Jupyter documentation](http://jupyter-client.readthedocs.io/en/latest/messaging.html) for details). The dictionary typically has `text/html` for HTML output, "text/plain" for plain text, and "text/png" for image presentation of the file. # + [markdown] kernel="SoS" # ## Adding a subcommad (addon) # + [markdown] kernel="SoS" # If you would like to add a complete subcommand as an addon to SoS, you will need to define two functions and add them to `setup.py` as two entry points, one with suffix `.args` and one with suffix `.func`. # # ``` # [sos_addons] # myaddon.args = yourpackage.module:addon_parser # myaddon.func = yourpackage.module:addon_func # ``` # # The `addon_parser` function should use module `argparse` to return an `ArgumentParser` object. 
SoS would obtain this parser and add it as a subparser of the SoS main parser so that the options can be parsed as # # ``` # sos myaddon options # ``` # # The `addon_func` should be defined as # # ``` # def addon_func(args, unknown_args) # ``` # # with `args` being the parsed known arguments, and `unknown_args` being a list of unknown arguments that you can process yourself.
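# + [markdown] kernel="SoS"
# Putting the pieces together, a minimal (hypothetical) addon could look like the sketch below. The module, command and option names are invented for illustration; only the entry-point syntax and the function signatures follow what is described above.
#
# ```
# # in mypackage/myaddon.py
# import argparse
#
# def get_myaddon_parser():
#     parser = argparse.ArgumentParser('myaddon')
#     parser.add_argument('--verbose', action='store_true',
#         help='print extra progress information')
#     return parser
#
# def myaddon_func(args, unknown_args):
#     if args.verbose:
#         print('unparsed arguments:', unknown_args)
#     # ... do the actual work of the addon here ...
# ```
#
# registered in `setup.py` with
#
# ```
# [sos_addons]
# myaddon.args = mypackage.myaddon:get_myaddon_parser
# myaddon.func = mypackage.myaddon:myaddon_func
# ```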
src/user_guide/extending_sos.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Replacing ligand parameters in an already-parametrized system # # This example applies SMIRNOFF-format parameters to a BRD4 inhibitor from the [living review on binding free energy benchmark systems](https://www.annualreviews.org/doi/abs/10.1146/annurev-biophys-070816-033654) by <NAME> Gilson. The BRD4 system comes from the [accompanying GitHub repository](https://github.com/MobleyLab/benchmarksets/tree/master/input_files/BRD4). # # This example uses [ParmEd](http://parmed.github.io) to take a protein-ligand system parameterized with an alternate force field, and replace the force field used for the ligand with an OpenFF force field. This example is meant to illustrate how to apply parameters to a single ligand, but it's also easy to process many ligands. # # ### Loading the already-parametrized system # Retrieve protein and ligand files for BRD4 and a docked inhibitor from the benchmark systems GitHub repository # https://github.com/MobleyLab/benchmarksets import requests repo_url = 'https://raw.githubusercontent.com/MobleyLab/benchmarksets/master/input_files/' sources = { 'system.prmtop' : repo_url + 'BRD4/prmtop-coords/BRD4-1.prmtop', 'system.crd' : repo_url + 'BRD4/prmtop-coords/BRD4-1.crds', 'ligand.sdf' : repo_url + 'BRD4/sdf/ligand-1.sdf', 'ligand.pdb' : repo_url + 'BRD4/pdb/ligand-1.pdb' } for (filename, url) in sources.items(): r = requests.get(url) open(filename, 'w').write(r.text) #Read AMBER to ParmEd Structure object import parmed in_prmtop = 'system.prmtop' in_crd = 'system.crd' orig_structure = parmed.amber.AmberParm(in_prmtop, in_crd) # Let's inspect the unique molecules in the system pieces = orig_structure.split() for piece in pieces: print(f"There are {len(piece[1])} instance(s) of {piece[0]}") # * The first molecule species has 2035 atoms, so it's probably the protein # * The second molecule species has 26 atoms, which is the size of our ligand # * The third and fourth molecule species have 32 and 35 copies, respectively, and one atom each. They are probably counterions # * The fifth molecule species has 11,000 copies with three atoms each, so these are our waters. # # We could drill into the ParmEd objects to find more about these if needed. # # **It's important to note that `pieces[1]` is the parameterized ligand, as we will be replacing it further down in this example.** If you apply this notebook to a system with a different number of components, or with objects in a different order, you may need to change some of the code below accordingly. # # ### Generating an Open Force Field Toolkit `Topology` for the ligand # # Here we assume a complicated scenario -- We have a SDF of our ligand available (`ligand.sdf`), containing bond orders and enough detail about the molecule for us to parameterize the ligand. However, this SDF does not necessarily have the same atom indexing or coordinates as the original ligand in `system.prmtop` and `system.crd`. If we mix up the ligand atom indices and try to use the original ligand coordinates, the ligand's initial geometry will be nonsense. So, we've also got a copy of the ligand as `ligand.pdb` (which we could have extracted from a dump of our system to PDB format, if desired), and we're going to use that as a reference to get the atom indexing right. 
# # This example will use the `simtk.openmm.app.PDBFile` class to read `ligand.pdb` and then use `Topology.from_openmm` to create an OpenFF Topology that contains the ligand in the correct atom ordering. # # If you **know** that this indexing mismatch will never occur for your data sources, and that your SDFs always contain the correct ordering, you can skip this step by simply running `ligand_off_topology = ligand_off_molecule.to_topology()` # # # + from openff.toolkit.topology import Molecule, Topology from simtk.openmm.app import PDBFile ligand_off_molecule = Molecule('ligand.sdf') ligand_pdbfile = PDBFile('ligand.pdb') ligand_off_topology = Topology.from_openmm(ligand_pdbfile.topology, unique_molecules=[ligand_off_molecule]) # - # ### Parametrizing the ligand # # <div class="alert alert-block alert-warning"> # <b>Note:</b> Even though we plan to constrain bond lengths to hydrogen, we load "openff_unconstrained-1.0.0.offxml". This is because our workflow will involve loading the OFF-parametrized ligand using ParmEd, which <a href="https://github.com/openforcefield/openff-toolkit/issues/444#issuecomment-547211377"> applies its own hydrogen bonds at a later time, and will fail if it attempts to maniuplate an OpenMM system that already contains them.</a> # </div> # # # Here we begin by loading a SMIRNOFF force field -- in this case, the OpenFF-1.0 force field, "Parsley". # # Once loaded, we create a new OpenMM system containing the ligand, then use ParmEd to create a `Structure` from that system. We'll re-combine this `Structure` object with those for the protein, ions, etc. later. # + # Load the SMIRNOFF-format Parsley force field from openff.toolkit.typing.engines.smirnoff import ForceField force_field = ForceField('openff_unconstrained-1.0.0.offxml') ligand_system = force_field.create_openmm_system(ligand_off_topology) new_ligand_structure = parmed.openmm.load_topology(ligand_off_topology.to_openmm(), ligand_system, xyz=pieces[1][0].positions) # - # It's possible to save out ligand parameters at this point, if desired; here we do so to AMBER and GROMACS format just for inspection. new_ligand_structure.save('tmp.prmtop', overwrite=True) new_ligand_structure.save('tmp.inpcrd', overwrite=True) new_ligand_structure.save('tmp.gro', overwrite=True) new_ligand_structure.save('tmp.top', overwrite=True) # ### Check for discrepancies between the original ligand and its replacement # # Here we check that the number of atoms are the same, and the same elements occur in the same order. This will catch many (but not all) errors where someone provided an SDF file for a different ligand than the one present in the system. It will miss errors where they happen to provide a different ligand with the same number of atoms, the same elements, in the same order -- which is unlikely to happen, but not impossible. 
# + # Check how many atoms and which order elements are in the new ligand n_atoms_new = len(new_ligand_structure.atoms) elements_new = [atom.element for atom in new_ligand_structure.atoms] # Check how many atoms and which order elements are in the old ligand old_ligand_structure, n_copies = pieces[1] n_atoms_old = len(old_ligand_structure.atoms) elements_old = [atom.element for atom in old_ligand_structure.atoms] print(f"There are {n_atoms_old} in the old ligand structure and {n_atoms_new} atoms " f"in the new ligand structure") # Print out error message if number of atoms doesn't match if n_atoms_new != n_atoms_old: print("Error: Number of atoms in input ligand doesn't match number extracted " "from prmtop file.") if elements_new != elements_old: print("Error: Elements in input ligand don't match elements in the ligand " "from the prmtop file.") print(f"Old elements: {elements_old}") print(f"New elements: {elements_new}") # - # That looks OK -- we're seeing a consistent number of atoms in both structures, and no errors about inconsistent elements. That means we're OK to proceed and start combining our ParmEd `Structure` objects. # # ### Combine receptor and ligand structures # # Now, we make a new ParmEd `Structure` for the complex, and begin adding the pieces of our system back together. Recall that above, we used ParmEd to split different portions of the system into a list of tuples called `pieces`, where the list items are tuples consisting of (`Structure`, `N`) where `N` denotes the number of times that piece occurs. We have just one protein, for example, but many water molecules. # # **Here, we begin by combining our original protein with our new ligand**. # # We also print out a lot of info as we do so just to check that we're ending up with the number of atom types we expect. # + # Create a new, empty system complex_structure = parmed.Structure() # Add the protein complex_structure += pieces[0][0] print("BEFORE SYSTEM COMBINATION (just protein)") print("Unique atom names:", sorted(list(set([atom.atom_type.name for atom in complex_structure])))) print("Number of unique atom types:", len(set([atom.atom_type for atom in complex_structure]))) print("Number of unique epsilons:", len(set([atom.epsilon for atom in complex_structure]))) print("Number of unique sigmas:", len(set([atom.sigma for atom in complex_structure]))) print() print("BEFORE SYSTEM COMBINATION (just ligand)") print("Unique atom names:", sorted(list(set([atom.atom_type.name for atom in new_ligand_structure])))) print("Number of unique atom types:", len(set([atom.atom_type for atom in new_ligand_structure]))) print("Number of unique epsilons:", len(set([atom.epsilon for atom in new_ligand_structure]))) print("Number of unique sigmas:", len(set([atom.sigma for atom in new_ligand_structure]))) print() # Add the ligand complex_structure += new_ligand_structure print("AFTER LIGAND ADDITION (protein+ligand)") print("Unique atom names:", sorted(list(set([atom.atom_type.name for atom in complex_structure])))) print("Number of unique atom types:", len(set([atom.atom_type for atom in complex_structure]))) print("Number of unique epsilons:", len(set([atom.epsilon for atom in complex_structure]))) print("Number of unique sigmas:", len(set([atom.sigma for atom in complex_structure]))) # - # This looks good. We see that the protein alone has 33 atom types, which have 14 unique sigma/epsilon values, and the ligand has six atom types with five unique sigma/epsilon values. 
After combining, we end up with 39 atom types having 19 unique sigma and epsilon values, which is correct. # # If you're astute, you'll notice the number of atom names doesn't add up. That's OK -- the atom names are just cosmetic attributes and don't affect the assigned parameters. # # ### Add the ions and water back into the system # # Remember, we split our system into protein + ligand + ions + water, and then we took out and replaced the ligand, generating a new `Structure` of the complex. Now we need to re-insert the ions and the water. First we'll handle the ions. # # Here, ParmEd has a convenient overload of the multiplication operator, so that if we want a `Structure` with N copies of an ion, we just ask it to multiply the `Structure` of an individual ion by the number of occurrences of that ion. # + # Add ions just_ion1_structure = parmed.Structure() just_ion1_structure += pieces[2][0] just_ion1_structure *= len(pieces[2][1]) just_ion2_structure = parmed.Structure() just_ion2_structure += pieces[3][0] just_ion2_structure *= len(pieces[3][1]) complex_structure += just_ion1_structure complex_structure += just_ion2_structure print("AFTER ION ADDITION (protein+ligand+ions)") print("Unique atom names:", sorted(list(set([atom.atom_type.name for atom in complex_structure])))) print("Number of unique atom types:", len(set([atom.atom_type for atom in complex_structure]))) print("Number of unique epsilons:", len(set([atom.epsilon for atom in complex_structure]))) print("Number of unique sigmas:", len(set([atom.sigma for atom in complex_structure]))) # - # Finally, we do that same thing for the water present in our system: # + # Add waters just_water_structure = parmed.Structure() just_water_structure += pieces[4][0] just_water_structure *= len(pieces[4][1]) complex_structure += just_water_structure print("AFTER WATER ADDITION (protein+ligand+ions+water)") print("Unique atom names:", sorted(list(set([atom.atom_type.name for atom in complex_structure])))) print("Number of unique atom types:", len(set([atom.atom_type for atom in complex_structure]))) print("Number of unique epsilons:", len(set([atom.epsilon for atom in complex_structure]))) print("Number of unique sigmas:", len(set([atom.sigma for atom in complex_structure]))) # - # ### Now that we've re-combined the system, handle the coordinates and box vectors # # The above dealt with the chemical topology and parameters for the system, which is most of what we need -- but not quite all. We still have to deal with the coordinates, and also with the information on the simulation box. So, our final stage of setup is to handle the coordinates and box vectors. This is straightforward -- we just need to copy the original coordinates and box vectors. Nothing fancy is needed: # Copy over the original coordinates and box vectors complex_structure.coordinates = orig_structure.coordinates complex_structure.box_vectors = orig_structure.box_vectors # ### Export to AMBER and GROMACS formats # # We started off in AMBER format, and presumably may want to continue in that format -- so let's write out to AMBER and GROMACS format: # + # Export the Structure to AMBER files complex_structure.save('complex.prmtop', overwrite=True) complex_structure.save('complex.inpcrd', overwrite=True) # Export the Structure to Gromacs files complex_structure.save('complex.gro', overwrite=True) complex_structure.save('complex.top', overwrite=True) # - # That should conclude our work in this example. 
However, perhaps we should just double-check by ensuring we can actually run some dynamics on the combined system without any trouble. # # # ## As a test, run some dynamics on the combined system # # First, we create an OpenMM system, as we've done in other examples here. We can do this, in this case, using ParmEd's built-in `createSystem` functionality already attached to the combined `Structure`. We ask for a reasonable cutoff, constrained hydrogen bonds (note that **this keyword argument overrides the fact that we use the `unconstrained` force field above**; the ligand (and all other molecules in the system) **will** have covalent bonds to hydrogen constrained), PME, and rigid water: # + from simtk.openmm import app, unit, LangevinIntegrator import numpy as np from parmed.openmm import NetCDFReporter system = complex_structure.createSystem(nonbondedMethod=app.PME, nonbondedCutoff=9*unit.angstrom, constraints=app.HBonds, rigidWater=True) # - # Next we'll set up the integrator, a reporter to write the trajectory, pick the timestep, and then go on to minimize the energy and run a very short amount of dynamics after setting the temperature to 300K: # + integrator = LangevinIntegrator(300*unit.kelvin, 1/unit.picosecond, 0.001*unit.picoseconds) simulation = app.Simulation(complex_structure.topology, system, integrator) # Depending on where your system came from, you may want to # add something like (30, 30, 30)*Angstrom to center the protein # (no functional effect, just visualizes better) #simulation.context.setPositions(complex_structure.positions + np.array([30, 30, 30])*unit.angstrom) simulation.context.setPositions(complex_structure.positions) nc_reporter = NetCDFReporter('trajectory.nc', 10) simulation.reporters.append(nc_reporter) # - simulation.minimizeEnergy() minimized_coords = simulation.context.getState(getPositions=True).getPositions() simulation.context.setVelocitiesToTemperature(300*unit.kelvin) simulation.step(1000)
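# If those steps complete, we can also report the potential energy after the short run as a quick sanity check (a minimal sketch using OpenMM's standard state API):

# report the potential energy of the final state
state = simulation.context.getState(getEnergy=True)
print('Potential energy after 1000 steps:', state.getPotentialEnergy())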
examples/swap_amber_parameters/swap_existing_ligand_parameters.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Machine Learning Model Building Pipeline: Feature Engineering # # In the following videos, we will take you through a practical example of each one of the steps in the Machine Learning model building pipeline that we described in the previous lectures. There will be a notebook for each one of the Machine Learning Pipeline steps: # # 1. Data Analysis # 2. Feature Engineering # 3. Feature Selection # 4. Model Building # # **This is the notebook for step 2: Feature Engineering** # # We will use the house price dataset available on [Kaggle.com](https://www.kaggle.com/c/house-prices-advanced-regression-techniques/data). See below for more details. # # =================================================================================================== # # ## Predicting Sale Price of Houses # # The aim of the project is to build a machine learning model to predict the sale price of homes based on different explanatory variables describing aspects of residential houses. # # ### Why is this important? # # Predicting house prices is useful to identify fruitful investments, or to determine whether the price advertised for a house is over or underestimated, before making a buying judgment. # # ### What is the objective of the machine learning model? # # We aim to minimise the difference between the real price, and the estimated price by our model. We will evaluate model performance using the mean squared error (mse) and the root squared of the mean squared error (rmse). # # ### How do I download the dataset? # # To download the House Price dataset go this website: # https://www.kaggle.com/c/house-prices-advanced-regression-techniques/data # # Scroll down to the bottom of the page, and click on the link 'train.csv', and then click the 'download' blue button towards the right of the screen, to download the dataset. Rename the file as 'houseprice.csv' and save it to a directory of your choice. # # **Note the following:** # - You need to be logged in to Kaggle in order to download the datasets. # - You need to accept the terms and conditions of the competition to download the dataset # - If you save the file to the same directory where you saved this jupyter notebook, then you can run the code as it is written here. # # ==================================================================================================== # ## House Prices dataset: Feature Engineering # # In the following cells, we will engineer / pre-process the variables of the House Price Dataset from Kaggle. We will engineer the variables so that we tackle: # # 1. Missing values # 2. Temporal variables # 3. Non-Gaussian distributed variables # 4. Categorical variables: remove rare labels # 5. Categorical variables: convert strings to numbers # 5. Standarise the values of the variables to the same range # # ### Setting the seed # # It is important to note that we are engineering variables and pre-processing data with the idea of deploying the model if we find business value in it. Therefore, from now on, for each step that includes some element of randomness, it is extremely important that we **set the seed**. This way, we can obtain reproducibility between our research and our development code. # # This is perhaps one of the most important lessons that you need to take away from this course: **Always set the seeds**. 
#
# Let's go ahead and load the dataset.

# +
# to handle datasets
import pandas as pd
import numpy as np

# for plotting
import matplotlib.pyplot as plt
# %matplotlib inline

# to divide train and test set
from sklearn.model_selection import train_test_split

# feature scaling
from sklearn.preprocessing import MinMaxScaler

# to visualise all the columns in the dataframe
pd.pandas.set_option('display.max_columns', None)
# -

# load dataset
data = pd.read_csv('houseprice.csv')
print(data.shape)
data.head()

# ### Separate dataset into train and test
#
# Before beginning to engineer our features, it is important to separate our data into a training and a testing set. This is to avoid over-fitting. This step involves randomness, therefore, we need to set the seed.

# +
# Let's separate into train and test set
# Remember to set the seed (random_state for this sklearn function)
X_train, X_test, y_train, y_test = train_test_split(data, data.SalePrice,
                                                    test_size=0.1,
                                                    random_state=0)  # we are setting the seed here
X_train.shape, X_test.shape
# -

# ### Missing values
#
# For categorical variables, we will fill missing information by adding an additional category: "Missing"

# +
# make a list of the categorical variables that contain missing values
vars_with_na = [var for var in data.columns
                if X_train[var].isnull().sum() > 1 and X_train[var].dtypes == 'O']

# print the variable name and the percentage of missing values
for var in vars_with_na:
    print(var, np.round(X_train[var].isnull().mean(), 3), ' % missing values')
# -

# function to replace NA in categorical variables
def fill_categorical_na(df, var_list):
    X = df.copy()
    X[var_list] = df[var_list].fillna('Missing')
    return X

# +
# replace missing values with new label: "Missing"
X_train = fill_categorical_na(X_train, vars_with_na)
X_test = fill_categorical_na(X_test, vars_with_na)

# check that we have no missing information in the engineered variables
X_train[vars_with_na].isnull().sum()
# -

# check that the train set does not contain null values in the engineered variables
[var for var in vars_with_na if X_train[var].isnull().sum() > 0]

# For numerical variables, we are going to add an additional variable capturing the missing information, and then replace the missing information in the original variable by the mode, or most frequent value:

# +
# make a list of the numerical variables that contain missing values
vars_with_na = [var for var in data.columns
                if X_train[var].isnull().sum() > 1 and X_train[var].dtypes != 'O']

# print the variable name and the percentage of missing values
for var in vars_with_na:
    print(var, np.round(X_train[var].isnull().mean(), 3), ' % missing values')

# +
# replace the missing values
for var in vars_with_na:

    # calculate the mode
    mode_val = X_train[var].mode()[0]

    # train
    X_train[var + '_na'] = np.where(X_train[var].isnull(), 1, 0)
    X_train[var].fillna(mode_val, inplace=True)

    # test
    X_test[var + '_na'] = np.where(X_test[var].isnull(), 1, 0)
    X_test[var].fillna(mode_val, inplace=True)

# check that we have no more missing values in the engineered variables
X_train[vars_with_na].isnull().sum()
# -

# check that we have the added binary variables that capture missing information
X_train[['LotFrontage_na', 'MasVnrArea_na', 'GarageYrBlt_na']].head()

# check that the test set does not contain null values in the engineered variables
[var for var in vars_with_na if X_test[var].isnull().sum() > 0]

# ### Temporal variables
#
# We remember from the previous lecture that there are 4 variables that refer to the years in which something was built or in which something specific happened. We will capture the time elapsed between each of those variables and the year in which the house was sold:

# +
# let's explore the relationship between the year variables and the house price in a bit more detail
def elapsed_years(df, var):
    # capture difference between year variable and year the house was sold
    df[var] = df['YrSold'] - df[var]
    return df
# -

for var in ['YearBuilt', 'YearRemodAdd', 'GarageYrBlt']:
    X_train = elapsed_years(X_train, var)
    X_test = elapsed_years(X_test, var)

# check that the test set does not contain null values in the engineered variables
[var for var in ['YearBuilt', 'YearRemodAdd', 'GarageYrBlt'] if X_test[var].isnull().sum() > 0]

# ### Numerical variables
#
# We will log transform the numerical variables that do not contain zeros in order to get a more Gaussian-like distribution. This tends to help linear machine learning models.

for var in ['LotFrontage', 'LotArea', '1stFlrSF', 'GrLivArea', 'SalePrice']:
    X_train[var] = np.log(X_train[var])
    X_test[var] = np.log(X_test[var])

# check that the test set does not contain null values in the engineered variables
[var for var in ['LotFrontage', 'LotArea', '1stFlrSF', 'GrLivArea', 'SalePrice'] if X_test[var].isnull().sum() > 0]

# same for the train set
[var for var in ['LotFrontage', 'LotArea', '1stFlrSF', 'GrLivArea', 'SalePrice'] if X_train[var].isnull().sum() > 0]

# ### Categorical variables
#
# First, we will remove those categories within variables that are present in less than 1% of the observations:

# let's capture the categorical variables first
cat_vars = [var for var in X_train.columns if X_train[var].dtype == 'O']

# +
def find_frequent_labels(df, var, rare_perc):
    # finds the labels that are shared by more than a certain % of the houses in the dataset
    df = df.copy()
    tmp = df.groupby(var)['SalePrice'].count() / len(df)
    return tmp[tmp > rare_perc].index

for var in cat_vars:
    frequent_ls = find_frequent_labels(X_train, var, 0.01)
    X_train[var] = np.where(X_train[var].isin(frequent_ls), X_train[var], 'Rare')
    X_test[var] = np.where(X_test[var].isin(frequent_ls), X_test[var], 'Rare')
# -

# Next, we need to transform the strings of these variables into numbers. We will do it so that we capture the monotonic relationship between the label and the target:

# +
# this function will assign discrete values to the strings of the variables,
# so that the smaller value corresponds to the smaller mean of the target
def replace_categories(train, test, var, target):
    ordered_labels = train.groupby([var])[target].mean().sort_values().index
    ordinal_label = {k: i for i, k in enumerate(ordered_labels, 0)}
    train[var] = train[var].map(ordinal_label)
    test[var] = test[var].map(ordinal_label)
# -

for var in cat_vars:
    replace_categories(X_train, X_test, var, 'SalePrice')

# check absence of na
[var for var in X_train.columns if X_train[var].isnull().sum() > 0]

# check absence of na
[var for var in X_test.columns if X_test[var].isnull().sum() > 0]

# +
# let me show you what I mean by monotonic relationship between labels and target
def analyse_vars(df, var):
    df = df.copy()
    df.groupby(var)['SalePrice'].median().plot.bar()
    plt.title(var)
    plt.ylabel('SalePrice')
    plt.show()

for var in cat_vars:
    analyse_vars(X_train, var)
# -

# We can now see monotonic relationships between the labels of our variables and the target (remember that the target is log-transformed, which is why the differences seem so small).

# ### Feature Scaling
#
# For use in linear models, features need to be either scaled or normalised.
# In the next cells, I will scale the features between their min and max values:

train_vars = [var for var in X_train.columns if var not in ['Id', 'SalePrice']]
len(train_vars)

# quick look at the two columns that we will not scale
X_train[['Id', 'SalePrice']].reset_index(drop=True)

# +
# fit scaler
scaler = MinMaxScaler()  # create an instance
scaler.fit(X_train[train_vars])  # fit the scaler to the train set for later use

# transform the train and test set, and add on the Id and SalePrice variables
X_train = pd.concat([X_train[['Id', 'SalePrice']].reset_index(drop=True),
                     pd.DataFrame(scaler.transform(X_train[train_vars]), columns=train_vars)],
                    axis=1)

X_test = pd.concat([X_test[['Id', 'SalePrice']].reset_index(drop=True),
                    pd.DataFrame(scaler.transform(X_test[train_vars]), columns=train_vars)],
                   axis=1)
# -

X_train.head()

# That concludes the feature engineering section for this dataset.
#
# **Remember: the aim of this course and this particular project is to show you how to put models in production. Surely there are additional things you can do on this dataset to extract additional value from the features.**
#
# **In order to capitalise on the deployment aspect of things, we deliberately kept the engineering side simple, yet included many of the traditional engineering steps, so you get a full flavour of building and deploying a machine learning model.**

# check absence of missing values
X_train.isnull().sum()

# +
# let's now save the train and test sets for the next notebook!

X_train.to_csv('xtrain.csv', index=False)
X_test.to_csv('xtest.csv', index=False)
# -

# That is all for this notebook. We hope you enjoyed it, and see you in the next one!
jupyter_notebooks/Section2_MLPipelineOverview/02.7_ML_Pipeline_Step2-FeatureEngineering.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/martinfinis/HelloCodeSchoolProject/blob/master/Quest_Simple_Image_Classification_with_Neural_Networks_Martin.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + id="h1PGdjzAafpp" import numpy as np import pandas as pd import matplotlib.pyplot as plt # %matplotlib inline import tensorflow as tf from tensorflow import keras from tensorflow.keras.optimizers import Adam from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout, Activation, BatchNormalization,ReLU from tensorflow.keras.preprocessing.image import ImageDataGenerator # + id="zzK4gGoqp_eI" import matplotlib.pyplot as plt def history_plot_from_loss_acc(history, title): plt.figure(figsize=(10,5)) train_loss_list = history.history['loss'] train_accuracy = history.history['accuracy'] val_loss_list = history.history['val_loss'] val_accuracy_list = history.history['val_accuracy'] x = history.epoch plt.subplot(1,2,1) plt.plot(x, train_loss_list) plt.plot(x, val_loss_list) plt.legend(['train_loss', 'val_loss']) plt.title(title+"_ LOSS") plt.subplot(1,2,2) plt.plot(x, train_accuracy) plt.plot(x, val_accuracy_list) plt.legend(['train_acc','val_accuracy']) plt.title(title+"_ ACCURACY") plt.show() # + [markdown] id="BMT9RcchaTWA" # #load the data # + id="6z7dBuLYaEYA" (X_train, y_train),(X_test, y_test) = keras.datasets.fashion_mnist.load_data() # + [markdown] id="9jgn0PI3hvPH" # # view the data # + colab={"base_uri": "https://localhost:8080/"} id="693cmxaZhD6r" outputId="d5f12843-1a5d-4338-fbba-6c094931fa96" type(X_train),type(y_train) # + colab={"base_uri": "https://localhost:8080/"} id="CkE828oviuY-" outputId="49d663e9-d709-4781-e306-730cc01b6806" X_train.shape,y_train.shape,X_test.shape, y_test.shape # + colab={"base_uri": "https://localhost:8080/"} id="gUJxMrw3jCpf" outputId="7d98479d-6ef5-49eb-8541-a8d3b875b59d" np.unique(y_train[:]) # + [markdown] id="ezEQtCOiRV6s" # ### display Images # + id="uwUNUIXPjgdH" class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot'] class_label = dict(zip(range(10), class_names)) # + id="f78WrDLsRV6t" colab={"base_uri": "https://localhost:8080/", "height": 879} outputId="e3d201bf-c067-461e-ed94-9c7a1b5962e0" import matplotlib.pyplot as plt plt.figure(figsize=(15,15)) for i in range(25): plt.subplot(5,5, i+1) plt.imshow(X_train[i]) plt.title(class_label.get(y_train[i])) plt.show() # + [markdown] id="LoNxSD9UOAKW" # # build model # + [markdown] id="du2GtLZmOEN9" # ## model without ImageDataGenerator # + colab={"base_uri": "https://localhost:8080/"} id="zKfaxUUnkvlj" outputId="e7cf9711-1ad8-407a-f2df-98969675eb6e" learning_rate = 0.001 dec_rate = 0 #1/4353 * 0.2 optim = Adam(lr=learning_rate, decay=dec_rate) model = Sequential() model.add(Conv2D(32, (3,3), padding='valid',activation='relu', input_shape=(28,28,1))) model.add(MaxPooling2D()) model.add(Conv2D(64,(3,3), padding='valid',activation='relu')) model.add(MaxPooling2D()) model.add(Conv2D(128,(3,3), padding='valid',activation='relu')) model.add(Flatten()) model.add(Dense(64,activation='relu')) model.add(Dense(10,activation='softmax')) model.summary() # 
+ id="ERGdVZBi7CEn" # reshape input X_train = X_train.reshape(-1, 28, 28, 1) X_test = X_test.reshape(-1, 28, 28, 1) # + id="vZlnMSDhpnCp" model.compile(loss='sparse_categorical_crossentropy',optimizer='adam',metrics=['accuracy']) # + colab={"base_uri": "https://localhost:8080/"} id="MYYWhVEnBcVU" outputId="7e3e32f6-3b8c-4e8b-f882-298ae86aaeda" history = model.fit(X_train, y_train, epochs=10, validation_split=0.2) # + colab={"base_uri": "https://localhost:8080/"} id="Nu_lTyfo_7B9" outputId="601ea0e8-f7a3-40be-b244-060ffbfab446" model.evaluate(X_test, y_test) # + colab={"base_uri": "https://localhost:8080/", "height": 336} id="pftgogb8__I1" outputId="b278491e-fd3f-4e55-ae58-9f35917872ff" history_plot_from_loss_acc(history, "sparse_categorical_crossentropy") # + [markdown] id="3PJNuIseNjR-" # ## model_2 with ImageDataGenerator # + id="Lrg_Si-wpqMP" datagen = ImageDataGenerator(rescale=1./255.,validation_split=0.2) datagen_test = ImageDataGenerator(rescale=1./255. ) train_generator = datagen.flow(X_train,y_train,batch_size=32,shuffle=True,seed=42,subset='training')# TODO,color_mode='grayscale' valid_generator = datagen.flow(X_train,y_train,batch_size=32,shuffle=True,seed=42,subset='validation') test_generator = datagen_test.flow(X_test,y_test,batch_size=32,seed=42) # + colab={"base_uri": "https://localhost:8080/"} id="JtPsUrW2psFQ" outputId="6159f4d5-566a-48f2-b5ed-fedaaed4005d" STEP_SIZE_TRAIN=train_generator.n//train_generator.batch_size STEP_SIZE_VALID=valid_generator.n//valid_generator.batch_size STEP_SIZE_TEST = test_generator.n//test_generator.batch_size STEP_SIZE_TRAIN,STEP_SIZE_VALID,STEP_SIZE_TEST # + colab={"base_uri": "https://localhost:8080/"} id="llOlO9zkNuEo" outputId="afd911e8-b42d-4b6a-8e92-8f4864838194" learning_rate = 0.001 dec_rate = 0 #1/4353 * 0.2 optim = Adam(lr=learning_rate, decay=dec_rate) model_2 = Sequential() model_2.add(Conv2D(32, (3,3), padding='valid',activation='relu', input_shape=(28,28,1))) model_2.add(MaxPooling2D()) model_2.add(Conv2D(64,(3,3), padding='valid',activation='relu')) model_2.add(MaxPooling2D()) model_2.add(Conv2D(128,(3,3), padding='valid',activation='relu')) model_2.add(Flatten()) model_2.add(Dense(64,activation='relu')) model_2.add(Dense(10,activation='softmax')) model_2.summary() # + id="FoGNwOvECoRf" model_2.compile(loss='sparse_categorical_crossentropy',optimizer='adam',metrics=['accuracy']) # + colab={"base_uri": "https://localhost:8080/"} id="nHEYPiJOpuwv" outputId="99117fd7-d4ca-4c8a-d151-38c506f1e7f0" history_2 = model_2.fit(train_generator, steps_per_epoch=STEP_SIZE_TRAIN, validation_data = valid_generator, validation_steps = STEP_SIZE_TEST, epochs=10) # + colab={"base_uri": "https://localhost:8080/"} id="SEADiE0hAqOx" outputId="6edfd4cf-027c-435c-dbec-60470b83b5d0" model_2.evaluate(test_generator) # + colab={"base_uri": "https://localhost:8080/", "height": 336} id="0gMOoo85Apm4" outputId="1e22765d-d73c-4703-8380-d63d786c7caf" history_plot_from_loss_acc(history_2, "ImageDataGenerator") # + [markdown] id="nBm0mZMQVK8x" # # model_3 add BatchNormalization # + colab={"base_uri": "https://localhost:8080/"} id="4bnekVFZVRuR" outputId="ca693463-cfa6-4708-e7fb-90b55e4f5651" learning_rate = 0.001 dec_rate = 0 #1/4353 * 0.2 optim = Adam(lr=learning_rate, decay=dec_rate) model_3 = Sequential() model_3.add(Conv2D(32, (3,3), padding='valid', input_shape=(28,28,1))) model_3.add(BatchNormalization()) model_3.add(ReLU()) model_3.add(MaxPooling2D()) model_3.add(Conv2D(64,(3,3), padding='valid')) model_3.add(BatchNormalization()) model_3.add(ReLU()) 
model_3.add(MaxPooling2D()) model_3.add(Conv2D(128,(3,3), padding='valid')) model_3.add(BatchNormalization()) model_3.add(ReLU()) model_3.add(Flatten()) model_3.add(Dense(64)) model_3.add(BatchNormalization()) model_3.add(ReLU()) model_3.add(Dense(10,activation='softmax')) model_3.summary() # + id="fPls8l0oWQcz" model_3.compile(loss='sparse_categorical_crossentropy',optimizer='adam',metrics=['accuracy']) # + colab={"base_uri": "https://localhost:8080/"} id="1gFW2MWjWQc1" outputId="3f3b5df8-4991-479c-a9d2-fe870383b50e" history_3 = model_3.fit(X_train, y_train, epochs=15, validation_split=0.2) # + colab={"base_uri": "https://localhost:8080/"} id="09cH1BQ6WQc4" outputId="b338dba1-e03b-4f8a-a993-353cf8023a10" model_3.evaluate(X_test, y_test) # + colab={"base_uri": "https://localhost:8080/", "height": 336} id="4Mgx_f43WQc5" outputId="5297b335-47b3-4649-8a61-08dd357da21e" history_plot_from_loss_acc(history_3, "model_3 BatchNormalization") # + [markdown] id="M6fkkhzzTfe2" # # model_4 with dropout and Batchnormalization # + colab={"base_uri": "https://localhost:8080/"} id="kESeHVGFbYS6" outputId="be74353d-20c5-4de0-b91a-0716e5109b3e" drop_1 = 0.3 # Dropout rate drop_2 = 0.5 # Dropout rate for 1st Dense layer learning_rate = 0.001 dec_rate = 0 #1/4353 * 0.2 optim = Adam(lr=learning_rate, decay=dec_rate) model_4 = Sequential() model_4.add(Conv2D(32, (3,3), padding='valid', input_shape=(28,28,1))) model_4.add(BatchNormalization()) model_4.add(ReLU()) model_4.add(MaxPooling2D()) model_4.add(Dropout(drop_1)) model_4.add(Conv2D(64,(3,3), padding='valid')) model_4.add(BatchNormalization()) model_4.add(ReLU()) model_4.add(MaxPooling2D()) model_4.add(Dropout(drop_1)) model_4.add(Conv2D(128,(3,3), padding='valid')) model_4.add(BatchNormalization()) model_4.add(ReLU()) model_4.add(Flatten()) model_4.add(Dropout(drop_2)) model_4.add(Dense(64)) model_4.add(BatchNormalization()) model_4.add(ReLU()) model_4.add(Dense(10,activation='softmax')) model_4.summary() # + id="QVg6hkXFdgf2" model_4.compile(loss='sparse_categorical_crossentropy',optimizer='adam',metrics=['accuracy']) # + colab={"base_uri": "https://localhost:8080/"} id="FhZtEa-Ldgf2" outputId="0fa66b67-3df0-4dda-c27e-e7d090938613" history_4 = model_4.fit(X_train, y_train, epochs=15, validation_split=0.2) # + colab={"base_uri": "https://localhost:8080/"} id="e3B5n0MQdgf3" outputId="0268da44-bd55-4f16-8974-13e965829c41" model_4.evaluate(X_test, y_test) # + colab={"base_uri": "https://localhost:8080/", "height": 336} id="NJikJYRTdgf3" outputId="60239a67-64d1-4631-9398-dbfae2c64f2f" history_plot_from_loss_acc(history_4, "model_4 BatchNormalization and dropout") # + [markdown] id="yS-COFj1eUcs" # # model_5 different model architecture, add for each layer one Conv2D and padding same # + colab={"base_uri": "https://localhost:8080/"} id="4tlFgeKaeYjE" outputId="7d57918d-2cc3-4cbc-fbb4-4350dc26b1c7" drop_1 = 0.3 # Dropout rate drop_2 = 0.5 # Dropout rate for 1st Dense layer learning_rate = 0.001 dec_rate = 0 #1/4353 * 0.2 optim = Adam(lr=learning_rate, decay=dec_rate) model_5 = Sequential() model_5.add(Conv2D(32, (3,3), padding='same', input_shape=(28,28,1))) model_5.add(Conv2D(32, (3,3), padding='same')) model_5.add(BatchNormalization()) model_5.add(ReLU()) model_5.add(MaxPooling2D()) model_5.add(Dropout(drop_1)) model_5.add(Conv2D(64,(3,3), padding='same')) model_5.add(Conv2D(64,(3,3), padding='same')) model_5.add(BatchNormalization()) model_5.add(ReLU()) model_5.add(MaxPooling2D()) model_5.add(Dropout(drop_1)) model_5.add(Conv2D(128,(3,3), padding='same')) 
model_5.add(Conv2D(128,(3,3), padding='same'))
model_5.add(BatchNormalization())
model_5.add(ReLU())

model_5.add(Flatten())
model_5.add(Dropout(drop_2))
model_5.add(Dense(64))
model_5.add(BatchNormalization())
model_5.add(ReLU())

model_5.add(Dense(10,activation='softmax'))

model_5.summary()

# + id="NOJqNAL1fiZc"
model_5.compile(loss='sparse_categorical_crossentropy',optimizer='adam',metrics=['accuracy'])

# + colab={"base_uri": "https://localhost:8080/"} id="4N4t_oBQfiZc" outputId="843dc373-6a45-45be-cc11-ba79c1c8cded"
history_5 = model_5.fit(X_train, y_train, epochs=15, validation_split=0.2)

# + colab={"base_uri": "https://localhost:8080/"} id="avp7U2IafiZc" outputId="349176f8-48e4-497a-ae95-d96d69cbcdcb"
model_5.evaluate(X_test, y_test)

# + colab={"base_uri": "https://localhost:8080/", "height": 381} id="IxPCj3v-fiZd" outputId="58fe34d8-ebe4-49f9-9782-cd507617471d"
history_plot_from_loss_acc(history_5, "model_5 different model architecture,\n add for each layer \none Conv2D and padding same\n")

# + [markdown] id="_kfHv3WN_YR7"
# # show a few images with their true and predicted labels

# + [markdown] id="9CBf6eLSZzGP"
# ## first 25 predictions

# + colab={"base_uri": "https://localhost:8080/"} id="W_JVwKI4_zlQ" outputId="9ba08c28-181b-4bce-befb-5b1bd9355c09"
X_test_subset = X_test[0:25]
y_test_subset = y_test[0:25]

# + colab={"base_uri": "https://localhost:8080/"} id="u18YdG2dINLm" outputId="870b1a3b-f0e5-4e07-a4a4-913ec145fd5b"
y_predict = np.argmax(model.predict(X_test_subset), axis=-1)
y_predict

# + id="IA77Dy3OHPUE"
def plot_prediction(X_test,y_test,class_label,y_predict):
    plt.figure(figsize=(20,20))
    plt.subplots_adjust(left=0.125, bottom=0.1, right=0.9, top=0.9)
    for i in range(25):
        plt.subplot(5,5, i+1)
        plt.imshow(X_test[i].reshape(28,28))
        # show the true label and the predicted label for each image
        plt.title("real "+class_label.get(y_test[i])+"\n predict "+class_label.get(y_predict[i]))
    plt.show()

# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="sf8jvrLHKrjT" outputId="aa74f0b5-2a7e-445d-d70d-7a9babd95bfb"
plot_prediction(X_test_subset,y_test_subset,class_label,y_predict)

# + [markdown] id="Fwatd1K3Z62_"
# ## wrong predictions

# + id="vh0PTTbVOQAX"
import pandas as pd

# + id="PbNCh89PSInb"
# note: this redefines plot_prediction to work on a dataframe of true/predicted labels
def plot_prediction(df,X_test,class_label):
    m = int(df.shape[0]/5)+1
    plt.figure(figsize=(20,m*4))
    plt.subplots_adjust(left=0.125, bottom=0.1, right=0.9, top=0.9)
    i = 1
    for index, row in df.iterrows():
        plt.subplot(m,5, i)
        plt.imshow(X_test[index].reshape(28,28))
        plt.title("label: "+class_label.get(row.y_test)+"\n predict: "+class_label.get(row.y_predict))
        i += 1
    plt.show()

# + id="E917Fn_u_XMI"
y_predict_all = np.argmax(model.predict(X_test), axis=-1)

# + id="dQhgrxUiN-tv"
df = pd.DataFrame(data=list(zip(y_test,y_predict_all)),columns=['y_test','y_predict'])

# + id="z7HrRKf0QTQ5"
df_wrong = df[df['y_test'] != df['y_predict']]

# + colab={"base_uri": "https://localhost:8080/", "height": 533} id="iijg7LvyTr4M" outputId="be11fd78-a58b-4462-b0ec-bc16c2f04444"
plot_prediction(df_wrong.head(10),X_test,class_label)

# + [markdown] id="88JJVhr_ayfI"
# ## right predictions

# + id="HYYl6CYFaNG5"
df_right = df[df['y_test'] == df['y_predict']]

# + colab={"base_uri": "https://localhost:8080/", "height": 986} id="apGRmIdraO_P" outputId="bad3b624-c8ba-43dc-d467-fdbd645b95e8"
plot_prediction(df_right.sample(20),X_test,class_label)
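
# + [markdown]
# ## misclassification counts per class
#
# As a quick follow-up (a minimal sketch that only reuses the `df` frame and the `class_label`
# mapping already built above; nothing else is assumed), count how often each true class is
# misclassified to see which categories the model confuses most:

# +
# keep only the wrong predictions and count them per true label
wrong_counts = df[df['y_test'] != df['y_predict']].groupby('y_test').size()

# map the numeric labels back to class names and show the most-confused classes first
wrong_counts.rename(index=class_label).sort_values(ascending=False)
# -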
Quest_Simple_Image_Classification_with_Neural_Networks_Martin.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] colab_type="text" id="V8k9buNKTfud" # # Label and feature engineering # # This lab is *optional*. It demonstrates advanced SQL queries for time-series engineering. For real-world problems, this type of feature engineering code is essential. If you are pursuing a time-series project for open project week, feel free to use this code as a template. # # --- # # Learning objectives: # # 1. Learn how to use BigQuery to build time-series features and labels for forecasting # 2. Learn how to visualize and explore features. # 3. Learn effective scaling and normalizing techniques to improve our modeling results # # **Note: In the previous lab we explored the data, if you haven’t run the previous notebook, go back to [optional_1_data_exploration.ipynb](../solutions/optional_1_data_exploration.ipynb) and run it.** # # Now that we have explored the data, let's start building our features, so we can build a model. # # <h3><font color="#4885ed">Feature Engineering</font> </h3> # # Use the `price_history` table, we can look at past performance of a given stock, to try to predict it's future stock price. In this notebook we will be focused on cleaning and creating features from this table. # # There are typically two different approaches to creating features with time-series data. # # **One approach** is aggregate the time-series into "static" features, such as "min_price_over_past_month" or "exp_moving_avg_past_30_days". Using this approach, we can use a deep neural network or a more "traditional" ML model to train. Notice we have essentially removed all sequention information after aggregating. This assumption can work well in practice. # # A **second approach** is to preserve the ordered nature of the data and use a sequential model, such as a recurrent neural network. This approach has a nice benefit that is typically requires less feature engineering. Although, training sequentially models typically takes longer. # # In this notebook, we will build features and also create rolling windows of the ordered time-series data. # # <h3><font color="#4885ed">Label Engineering</font> </h3> # # We are trying to predict if the stock will go up or down. In order to do this we will need to "engineer" our label by looking into the future and using that as the label. We will be using the [`LAG`](https://cloud.google.com/bigquery/docs/reference/standard-sql/functions-and-operators#lag) function in BigQuery to do this. Visually this looks like: # # ![](https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/blogs/gcp_forecasting/rolling_window.gif?raw=true) # + [markdown] colab_type="text" id="WBBSZf_uTdGy" # ## Import libraries; setup # + colab={} colab_type="code" id="kC9RZRlqTfuj" jupyter={"outputs_hidden": true} PROJECT = 'your-gcp-project' # Replace with your project ID. # + colab={} colab_type="code" id="IjsuN9heTfue" jupyter={"outputs_hidden": true} import pandas as pd from google.cloud import bigquery from IPython.core.magic import register_cell_magic from IPython import get_ipython bq = bigquery.Client(project = PROJECT) # + colab={} colab_type="code" id="xyaeBdzMTdG2" jupyter={"outputs_hidden": true} # Allow you to easily have Python variables in SQL query. 
@register_cell_magic('with_globals') def with_globals(line, cell): contents = cell.format(**globals()) if 'print' in line: print(contents) get_ipython().run_cell(contents) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="VHEy7L2EW-ug" jupyter={"outputs_hidden": true} outputId="b8a94157-c6d9-425d-b124-cead5d7d9d3f" def create_dataset(): dataset = bigquery.Dataset(bq.dataset("stock_market")) try: bq.create_dataset(dataset) # Will fail if dataset already exists. print("Dataset created") except: print("Dataset already exists") create_dataset() # + [markdown] colab_type="text" id="Ip9SZU7CTful" # ## Create time-series features and determine label based on market movement # + [markdown] colab_type="text" id="WduqaabdTfum" # ### Summary of base tables # - # **TODO**: How many rows are in our base tables `price_history` and `snp500`? # + colab={"base_uri": "https://localhost:8080/", "height": 77} colab_type="code" id="GEmgSKBNTdG_" jupyter={"outputs_hidden": true} outputId="845d17a4-83a0-4314-d888-8cc29ee90c2b" # %%with_globals # %%bigquery --project {PROJECT} --# TODO # + colab={"base_uri": "https://localhost:8080/", "height": 77} colab_type="code" id="Utn87x_ATdHC" jupyter={"outputs_hidden": true} outputId="9d4a74d7-1ad0-4510-adf0-91c7ee7e87b5" # %%with_globals # %%bigquery --project {PROJECT} --# TODO # + [markdown] colab_type="text" id="at7EL7pITfuq" # ### Label engineering # + [markdown] colab_type="text" id="pQ7R1VcWeJq9" # Ultimately, we need to end up with a single label for each day. The label takes on 3 values: {`down`, `stay`, `up`}, where `down` and `up` indicates the normalized price (more on this below) went down 1% or more and up 1% or more, respectively. `stay` indicates the stock remained within 1%. # # The steps are: # # 1. Compare close price and open price # 2. Compute price features using analytics functions # 3. Compute normalized price change (%) # 4. Join with S&P 500 table # 5. Create labels (`up`, `down`, `stay`) # # + [markdown] colab_type="text" id="1FvPgI6UOoQO" # <h3><font color="#4885ed">Compare close price and open price</font> </h3> # # For each row, get the close price of yesterday and the open price of tomorrow using the [`LAG`](https://cloud.google.com/bigquery/docs/reference/legacy-sql#lag) function. We will determine tomorrow's close - today's close. # + [markdown] colab_type="text" id="vBJzyVtCTfur" # #### Shift to get tomorrow's close price. # - # **Learning objective 1** # + colab={} colab_type="code" id="IreuNo_pTfus" jupyter={"outputs_hidden": true} # %%with_globals print # %%bigquery df --project {PROJECT} CREATE OR REPLACE TABLE `stock_market.price_history_delta` AS ( WITH shifted_price AS ( SELECT *, (LAG(close, 1) OVER (PARTITION BY symbol order by Date DESC)) AS tomorrow_close FROM `stock_src.price_history` WHERE Close > 0 ) SELECT a.*, (tomorrow_close - Close) AS tomo_close_m_close FROM shifted_price a ) # + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" id="adrk4vc1TdHM" jupyter={"outputs_hidden": true} outputId="6d6fbba1-9b94-4b96-fd49-6ceb39fa4ccb" # %%with_globals # %%bigquery --project {PROJECT} SELECT * FROM stock_market.price_history_delta ORDER by Date LIMIT 100 # + [markdown] colab_type="text" id="8UnOKtvdTdHO" # **TODO**: Historically, we know that the stock market has been going up. Can you think of a way to verify this using our newly created table `price_history_delta`? 
# - # **Learning objective 2** # + colab={"base_uri": "https://localhost:8080/", "height": 77} colab_type="code" id="XY2MetOeTfux" jupyter={"outputs_hidden": true} outputId="9310bac5-2d0e-468b-fa8e-a981f09b4b1c" # %%with_globals print # %%bigquery --project {PROJECT} SELECT --# TODO: verify the stock market is going up -- on average. FROM stock_market.price_history_delta # + [markdown] colab_type="text" id="efb9PCBdTfu0" # ### Add time series features # + [markdown] colab_type="text" id="S_vRjdyhOqZi" # <h3><font color="#4885ed">Compute price features using analytics functions</font> </h3> # # In addition, we will also build time-series features using the min, max, mean, and std (can you think of any over functions to use?). To do this, let's use [analytic functions]() in BigQuery (also known as window functions). # ``` # An analytic function is a function that computes aggregate values over a group of rows. Unlike aggregate functions, which return a single aggregate value for a group of rows, analytic functions return a single value for each row by computing the function over a group of input rows. # ``` # Using the `AVG` analytic function, we can compute the average close price of a given symbol over the past week (5 business days): # ```python # (AVG(close) OVER (PARTITION BY symbol # ORDER BY Date # ROWS BETWEEN 5 PRECEDING AND 1 PRECEDING)) / close # AS close_avg_prior_5_days # ``` # - # **Learning objective 1** # # **TODO**: Please fill in the `# TODO`s in the below query # + colab={"base_uri": "https://localhost:8080/", "height": 840} colab_type="code" id="pBi_CruzTfu0" jupyter={"outputs_hidden": true} outputId="7a7c57f8-cb57-4c81-9786-c785c9c4c518" def get_window_fxn(agg_fxn, n_days): """Generate a time-series feature. E.g., Compute the average of the price over the past 5 days.""" SCALE_VALUE = 'close' sql = ''' ({agg_fxn}(close) OVER (PARTITION BY (# TODO) ORDER BY (# TODO) ROWS BETWEEN {n_days} (# TODO)))/{scale} AS close_{agg_fxn}_prior_{n_days}_days'''.format( agg_fxn=agg_fxn, n_days=n_days, scale=SCALE_VALUE) return sql WEEK = 5 MONTH = 20 YEAR = 52*5 agg_funcs = ('MIN', 'MAX', 'AVG', 'STDDEV') lookbacks = (WEEK, MONTH, YEAR) sqls = [] for fxn in agg_funcs: for lookback in lookbacks: sqls.append(get_window_fxn(fxn, lookback)) time_series_features_sql = ','.join(sqls) # SQL string. def preview_query(): print(time_series_features_sql[0:1000]) preview_query() # + colab={"base_uri": "https://localhost:8080/", "height": 31} colab_type="code" id="4WX4VFSvTfu2" jupyter={"outputs_hidden": true} outputId="41cdcb4d-ccbd-4e12-9c6c-88de2d4538f5" # %%with_globals print # %%bigquery --project {PROJECT} CREATE OR REPLACE TABLE stock_market.price_features_delta AS SELECT * FROM (SELECT *, {time_series_features_sql}, -- Also get the raw time-series values; will be useful for the RNN model. (ARRAY_AGG(close) OVER (PARTITION BY symbol ORDER BY Date ROWS BETWEEN 260 PRECEDING AND 1 PRECEDING)) AS close_values_prior_260, ROW_NUMBER() OVER (PARTITION BY symbol ORDER BY Date) AS days_on_market FROM stock_market.price_history_delta) WHERE days_on_market > {YEAR} # + jupyter={"outputs_hidden": true} # %%bigquery --project {PROJECT} SELECT * FROM stock_market.price_features_delta ORDER BY symbol, Date LIMIT 10 # + [markdown] colab_type="text" id="EjGaQYuRTfu6" # #### Compute percentage change, then self join with prices from S&P index. # # We will also compute price change of S&P index, GSPC. We do this so we can compute the normalized percentage change. 
# + [markdown] colab_type="text" id="zL55y-YnOvOu" # <h3><font color="#4885ed">Compute normalized price change (%)</font> </h3> # # Before we can create our labels we need to normalize the price change using the S&P 500 index. The normalization using the S&P index fund helps ensure that the future price of a stock is not due to larger market effects. Normalization helps us isolate the factors contributing to the performance of a stock_market. # # Let's use the normalization scheme from by subtracting the scaled difference in the S&P 500 index during the same time period. # # In Python: # ```python # # Example calculation. # scaled_change = (50.59 - 50.69) / 50.69 # scaled_s_p = (939.38 - 930.09) / 930.09 # normalized_change = scaled_change - scaled_s_p # assert normalized_change == ~1.2% # ``` # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="0uz1Qo0STfu7" jupyter={"outputs_hidden": true} outputId="67e1be37-8729-47d5-fde9-9c4f87da82b5" scaled_change = (50.59 - 50.69) / 50.69 scaled_s_p = (939.38 - 930.09) / 930.09 normalized_change = scaled_change - scaled_s_p print(''' scaled change: {:2.3f} scaled_s_p: {:2.3f} normalized_change: {:2.3f} '''.format(scaled_change, scaled_s_p, normalized_change)) # + [markdown] colab_type="text" id="HY9AJAN3Tfu-" # ### Compute normalized price change (shown above). # + [markdown] colab_type="text" id="m6OhYVoITdHd" # Let's join scaled price change (tomorrow_close / close) with the [gspc](https://en.wikipedia.org/wiki/S%26P_500_Index) symbol (symbol for the S&P index). Then we can normalize using the scheme described above. # - # **Learning objective 3** # # **TODO**: Please fill in the `# TODO` in the code below. # + colab={} colab_type="code" id="_W71_cb4TdHe" jupyter={"outputs_hidden": true} snp500_index = 'gspc' # + colab={"base_uri": "https://localhost:8080/", "height": 31} colab_type="code" id="b1PNvxhuTfu_" jupyter={"outputs_hidden": true} outputId="c9b87b11-cff2-452f-8c29-147290f95e1f" # %%with_globals print # %%bigquery --project {PROJECT} CREATE OR REPLACE TABLE stock_market.price_features_norm_per_change AS WITH all_percent_changes AS ( SELECT *, (tomo_close_m_close / Close) AS scaled_change FROM `stock_market.price_features_delta` ), s_p_changes AS (SELECT scaled_change AS s_p_scaled_change, date FROM all_percent_changes WHERE symbol="{snp500_index}") SELECT all_percent_changes.*, s_p_scaled_change, (# TODO) AS normalized_change FROM all_percent_changes LEFT JOIN s_p_changes --# Add S&P change to all rows ON all_percent_changes.date = s_p_changes.date # + [markdown] colab_type="text" id="5lcs6_BtTfvB" # #### Verify results # + colab={} colab_type="code" id="0G1SbI8kTdHl" jupyter={"outputs_hidden": true} # %%with_globals print # %%bigquery df --project {PROJECT} SELECT * FROM stock_market.price_features_norm_per_change LIMIT 10 # + colab={"base_uri": "https://localhost:8080/", "height": 299} colab_type="code" id="BeNiVymgTdHn" jupyter={"outputs_hidden": true} outputId="f7534321-713c-483d-ba4f-d096c59296fa" df.head() # + [markdown] colab_type="text" id="8TFFeA5sOm2Y" # <h3><font color="#4885ed">Join with S&P 500 table and Create labels: {`up`, `down`, `stay`}</font> </h3> # # Join the table with the list of S&P 500. This will allow us to limit our analysis to S&P 500 companies only. # # Finally we can create labels. 
The following SQL statement should do: # # ```sql # CASE WHEN normalized_change < -0.01 THEN 'DOWN' # WHEN normalized_change > 0.01 THEN 'UP' # ELSE 'STAY' # END # ``` # - # **Learning objective 1** # + colab={} colab_type="code" id="iv8i3e8GTdHq" jupyter={"outputs_hidden": true} down_thresh = -0.01 up_thresh = 0.01 # - # **TODO**: Please fill in the `CASE` function below. # + colab={} colab_type="code" id="-Kf5POU6TfvM" jupyter={"outputs_hidden": true} # %%with_globals print # %%bigquery df --project {PROJECT} CREATE OR REPLACE TABLE stock_market.percent_change_sp500 AS SELECT *, CASE --# TODO END AS direction FROM stock_market.price_features_norm_per_change features INNER JOIN `stock_src.snp500` USING (symbol) # + colab={"base_uri": "https://localhost:8080/", "height": 136} colab_type="code" id="jQzSbN2yTdH0" jupyter={"outputs_hidden": true} outputId="7a58fcb5-9a31-4c1b-fddb-ef3a5b379f2e" # %%with_globals print # %%bigquery --project {PROJECT} SELECT direction, COUNT(*) as cnt FROM stock_market.percent_change_sp500 GROUP BY direction # + colab={} colab_type="code" id="OLYTEUstTfva" jupyter={"outputs_hidden": true} # %%with_globals print # %%bigquery df --project {PROJECT} SELECT * FROM stock_market.percent_change_sp500 LIMIT 20 # + colab={"base_uri": "https://localhost:8080/", "height": 202} colab_type="code" id="iiARkRzPTdH5" jupyter={"outputs_hidden": true} outputId="2ed8fc43-f8c7-4827-9033-12b142917c9a" df.columns # - # The dataset is still quite large and the majority of the days the market `STAY`s. Let's focus our analysis on dates where [earnings per share](https://en.wikipedia.org/wiki/Earnings_per_share) (EPS) information is released by the companies. The EPS data has 3 key columns surprise, reported_EPS, and consensus_EPS: # + jupyter={"outputs_hidden": true} # %%with_globals print # %%bigquery --project {PROJECT} SELECT * FROM `stock_src.eps` LIMIT 10 # - # The surprise column indicates the difference between the expected (consensus expected eps by analysts) and the reported eps. We can join this table with our derived table to focus our analysis during earnings periods: # + jupyter={"outputs_hidden": true} # %%with_globals print # %%bigquery --project {PROJECT} CREATE OR REPLACE TABLE stock_market.eps_percent_change_sp500 AS SELECT a.*, b.consensus_EPS, b.reported_EPS, b.surprise FROM stock_market.percent_change_sp500 a INNER JOIN `stock_src.eps` b ON a.Date = b.date AND a.symbol = b.symbol # + colab={} colab_type="code" id="OLYTEUstTfva" jupyter={"outputs_hidden": true} # %%with_globals print # %%bigquery --project {PROJECT} SELECT * FROM stock_market.eps_percent_change_sp500 LIMIT 20 # + jupyter={"outputs_hidden": true} # %%with_globals print # %%bigquery --project {PROJECT} SELECT direction, COUNT(*) as cnt FROM stock_market.eps_percent_change_sp500 GROUP BY direction # + [markdown] colab_type="text" id="COPWKR1WTfvd" # ## Feature exploration # + [markdown] colab_type="text" id="T5HLcwy1Tfve" # Now that we have created our recent movements of the company’s stock price, let's visualize our features. This will help us understand the data better and possibly spot errors we may have made during our calculations. # # As a reminder, we calculated the scaled prices 1 week, 1 month, and 1 year before the date that we are predicting at. # + [markdown] colab_type="text" id="RDROJ7qMh7oz" # Let's write a re-usable function for aggregating our features. 
# - # **Learning objective 2** # + colab={} colab_type="code" id="Q7dT9NTSTfvf" def get_aggregate_stats(field, round_digit=2): """Run SELECT ... GROUP BY field, rounding to nearest digit.""" df = bq.query(''' SELECT {field}, COUNT(*) as cnt FROM (SELECT ROUND({field}, {round_digit}) AS {field} FROM stock_market.eps_percent_change_sp500) rounded_field GROUP BY {field} ORDER BY {field}'''.format(field=field, round_digit=round_digit, PROJECT=PROJECT)).to_dataframe() return df.dropna() # + colab={"base_uri": "https://localhost:8080/", "height": 338} colab_type="code" id="xgmCvlMtTfvh" jupyter={"outputs_hidden": true} outputId="5db92f71-8cce-4475-fad8-a991e2fc208f" field = 'close_AVG_prior_260_days' CLIP_MIN, CLIP_MAX = 0.1, 4. df = get_aggregate_stats(field) values = df[field].clip(CLIP_MIN, CLIP_MAX) counts = 100 * df['cnt'] / df['cnt'].sum() # Percentage. ax = values.hist(weights=counts, bins=30, figsize=(10, 5)) ax.set(xlabel=field, ylabel="%"); # - # **TODO** Use the `get_aggregate_stats` from above to visualize the `normalized_change` column. # + colab={} colab_type="code" id="UDcnYJrCTfvj" jupyter={"outputs_hidden": true} outputId="7e949d16-8c9a-416c-de2d-28ba97a2aa65" field = 'normalized_change' # TODO # + [markdown] colab_type="text" id="DuV7glaEh7o_" # Let's look at results by day-of-week, month, etc. # + colab={} colab_type="code" id="l7egsYhcTfvm" jupyter={"outputs_hidden": true} VALID_GROUPBY_KEYS = ('DAYOFWEEK', 'DAY', 'DAYOFYEAR', 'WEEK', 'MONTH', 'QUARTER', 'YEAR') DOW_MAPPING = {1: 'Sun', 2: 'Mon', 3: 'Tues', 4: 'Wed', 5: 'Thur', 6: 'Fri', 7: 'Sun'} def groupby_datetime(groupby_key, field): if groupby_key not in VALID_GROUPBY_KEYS: raise Exception('Please use a valid groupby_key.') sql = ''' SELECT {groupby_key}, AVG({field}) as avg_{field} FROM (SELECT {field}, EXTRACT({groupby_key} FROM date) AS {groupby_key} FROM stock_market.eps_percent_change_sp500) foo GROUP BY {groupby_key} ORDER BY {groupby_key} DESC'''.format(groupby_key=groupby_key, field=field, PROJECT=PROJECT) print(sql) df = bq.query(sql).to_dataframe() if groupby_key == 'DAYOFWEEK': df.DAYOFWEEK = df.DAYOFWEEK.map(DOW_MAPPING) return df.set_index(groupby_key).dropna() # + colab={"base_uri": "https://localhost:8080/", "height": 403} colab_type="code" id="z7mxvIqYTfvp" jupyter={"outputs_hidden": true} outputId="df531585-b724-4907-a84a-03483efc9a7d" field = 'normalized_change' df = groupby_datetime('DAYOFWEEK', field) ax = df.plot(kind='barh', color='orange', alpha=0.7) ax.grid(which='major', axis='y', linewidth=0) # + colab={"base_uri": "https://localhost:8080/", "height": 403} colab_type="code" id="BRI70WJpTfvs" jupyter={"outputs_hidden": true} outputId="b7525484-9b43-407b-f180-7de5ab7225a4" field = 'close' df = groupby_datetime('DAYOFWEEK', field) ax = df.plot(kind='barh', color='orange', alpha=0.7) ax.grid(which='major', axis='y', linewidth=0) # + colab={"base_uri": "https://localhost:8080/", "height": 403} colab_type="code" id="40m-6nMKTfvw" jupyter={"outputs_hidden": true} outputId="4c4bd05b-2278-4eb7-a741-d39076ec59d3" field = 'normalized_change' df = groupby_datetime('MONTH', field) ax = df.plot(kind='barh', color='blue', alpha=0.7) ax.grid(which='major', axis='y', linewidth=0) # + colab={"base_uri": "https://localhost:8080/", "height": 403} colab_type="code" id="Zj2pOrAiTfvz" jupyter={"outputs_hidden": true} outputId="4afba896-7e82-458a-86b0-4562dd31b5a4" field = 'normalized_change' df = groupby_datetime('QUARTER', field) ax = df.plot(kind='barh', color='green', alpha=0.7) ax.grid(which='major', axis='y', 
linewidth=0) # + colab={"base_uri": "https://localhost:8080/", "height": 418} colab_type="code" id="ohYWA_YsTfv4" jupyter={"outputs_hidden": true} outputId="7b597812-82f6-41dd-98c4-f3ed1ec5dc6c" field = 'close' df = groupby_datetime('YEAR', field) ax = df.plot(kind='line', color='purple', alpha=0.7) ax.grid(which='major', axis='y', linewidth=0) # + colab={"base_uri": "https://localhost:8080/", "height": 418} colab_type="code" id="BBTC2VunTfv2" jupyter={"outputs_hidden": true} outputId="3b21cb7d-4f80-4faf-a2c9-e67191969556" field = 'normalized_change' df = groupby_datetime('YEAR', field) ax = df.plot(kind='line', color='purple', alpha=0.7) ax.grid(which='major', axis='y', linewidth=0) # + [markdown] colab_type="text" id="P7UJ0W-5WvZC" # BONUS: How do our features correlate with the label `direction`? Build some visualizations. What features are most important? You can visualize this and do it statistically using the [`CORR`](https://cloud.google.com/bigquery/docs/reference/standard-sql/statistical_aggregate_functions) function. # + [markdown] colab_type="text" id="oRCY1E6CTfxD" # Copyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
courses/machine_learning/deepdive2/time_series_prediction/labs/optional_2_feature_engineering.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import matplotlib.pyplot as plt from matplotlib import rc rc('font', **{'family': 'serif', 'serif': ['Computer Modern']}) rc('text', usetex=True) from priors import * # ## Aligned-spin chi_effective priors # # Let's demonstrate the priors on chi_effective corresponding to a uniform, aligned prior on component spins. # + # Set up three subplots to hold three examples, showing the result of varying mass ratio and # maximum dimensionless spin fig = plt.figure(figsize=(15,4)) ax1 = fig.add_subplot(131) ax2 = fig.add_subplot(132) ax3 = fig.add_subplot(133) # Choose maximum dimensionless spin and mass ratio -- we'll condition on these two parameters aMax=1 q=0.8 ntrials=100000 # Now draw random aligned component spin values from their prior (subject to this choice of aMax) # and compute chi_effectives s1s = aMax*(2.*np.random.random(ntrials)-1.) s2s = aMax*(2.*np.random.random(ntrials)-1.) xeff = (s1s + q*s2s)/(1.+q) # Alternatively, defined a grid of chi_effectives and use our analytic expression for the prior from priors.py xs = np.linspace(-1,1,300) p_xeff = chi_effective_prior_from_aligned_spins(q,aMax,xs) # Plot both on top of one another! ax1.hist(xeff,density=True,bins=50) ax1.plot(xs,p_xeff,color='black') ax1.xaxis.grid(True,which='major',ls=':',color='grey') ax1.yaxis.grid(True,which='major',ls=':',color='grey') ax1.tick_params(labelsize=14) ax1.set_xlabel(r'$\chi_{\rm eff}$',fontsize=18) ax1.set_ylabel(r'$p(\chi_{\rm eff})$',fontsize=18) ax1.annotate(r'$q={0}$'.format(q),(0.75,0.85),xycoords='axes fraction',fontsize=16) ax1.annotate(r'$a_{{\rm max}} = {0}$'.format(aMax),(0.75,0.75),xycoords='axes fraction',fontsize=16) # Repeat a second time, but now under a different choice of mass ratio aMax=1 q=0.1 ntrials=100000 s1s = aMax*(2.*np.random.random(ntrials)-1.) s2s = aMax*(2.*np.random.random(ntrials)-1.) xeff = (s1s + q*s2s)/(1.+q) xs = np.linspace(-1,1,300) p_xeff = chi_effective_prior_from_aligned_spins(q,aMax,xs) ax2.hist(xeff,density=True,bins=50) ax2.plot(xs,p_xeff,color='black') ax2.xaxis.grid(True,which='major',ls=':',color='grey') ax2.yaxis.grid(True,which='major',ls=':',color='grey') ax2.tick_params(labelsize=14) ax2.set_xlabel(r'$\chi_{\rm eff}$',fontsize=18) ax2.annotate(r'$q={0}$'.format(q),(0.75,0.85),xycoords='axes fraction',fontsize=16,backgroundcolor=(1,1,1,0.95)) ax2.annotate(r'$a_{{\rm max}} = {0}$'.format(aMax),(0.75,0.75),xycoords='axes fraction',fontsize=16,backgroundcolor=(1,1,1,0.95)) ax2.set_ylim(0,0.65) # ...and a third time, now varying the maximum spin magnitude aMax=0.4 q=0.8 ntrials=30000 s1s = aMax*(2.*np.random.random(ntrials)-1.) s2s = aMax*(2.*np.random.random(ntrials)-1.) 
xeff = (s1s + q*s2s)/(1.+q) xs = np.linspace(-1,1,300) p_xeff = chi_effective_prior_from_aligned_spins(q,aMax,xs) ax3.hist(xeff,density=True,bins=50) ax3.plot(xs,p_xeff,color='black') ax3.xaxis.grid(True,which='major',ls=':',color='grey') ax3.yaxis.grid(True,which='major',ls=':',color='grey') ax3.tick_params(labelsize=14) ax3.set_xlabel(r'$\chi_{\rm eff}$',fontsize=18) ax3.annotate(r'$q={0}$'.format(q),(0.7,0.85),xycoords='axes fraction',fontsize=16) ax3.annotate(r'$a_{{\rm max}} = {0}$'.format(aMax),(0.7,0.75),xycoords='axes fraction',fontsize=16) plt.tight_layout() #plt.savefig('demo_chi_eff_aligned.pdf',bbox_inches='tight') plt.show() # - # ## Isotropic chi_effective priors # # Now demonstrate the priors on chi_effective corresponding to a uniform, *isotropic* prior on component spins. # + # As above, set up three different cases so we can test the various piecewise cases appearing in our analytic # definition of p(chi_eff|q) fig = plt.figure(figsize=(15,4)) ax1 = fig.add_subplot(131) ax2 = fig.add_subplot(132) ax3 = fig.add_subplot(133) # Choose a conditional value of aMax and mass ratio q aMax=1 q=0.8 # Make random draws from our component spin magnitudes (a1s and a2s) and cosine tilts (u1s and u2s) # and numerically construct the chi_effective prior ntrials=100000 a1s = aMax*np.random.random(ntrials) a2s = aMax*np.random.random(ntrials) u1s = 2.*np.random.random(ntrials)-1. u2s = 2.*np.random.random(ntrials)-1. xeff = (a1s*u1s + q*a2s*u2s)/(1.+q) # Alternatively, use our analytic function defined in priors.py xs = np.linspace(-1,1,300) p_xeff = chi_effective_prior_from_isotropic_spins(q,aMax,xs) # Plot! ax1.hist(xeff,density=True,bins=50) ax1.plot(xs,p_xeff,color='black') ax1.xaxis.grid(True,which='major',ls=':',color='grey') ax1.yaxis.grid(True,which='major',ls=':',color='grey') ax1.tick_params(labelsize=14) ax1.set_xlabel(r'$\chi_{\rm eff}$',fontsize=18) ax1.set_ylabel(r'$p(\chi_{\rm eff})$',fontsize=18) ax1.annotate(r'$q={0}$'.format(q),(0.075,0.85),xycoords='axes fraction',fontsize=16) ax1.annotate(r'$a_{{\rm max}} = {0}$'.format(aMax),(0.075,0.75),xycoords='axes fraction',fontsize=16) # Again, under a different choice of aMax and q aMax=1 q=0.1 ntrials=100000 a1s = aMax*np.random.random(ntrials) a2s = aMax*np.random.random(ntrials) u1s = 2.*np.random.random(ntrials)-1. u2s = 2.*np.random.random(ntrials)-1. xeff = (a1s*u1s + q*a2s*u2s)/(1.+q) xs = np.linspace(-1,1,300) p_xeff = chi_effective_prior_from_isotropic_spins(q,aMax,xs) ax2.hist(xeff,density=True,bins=50) ax2.plot(xs,p_xeff,color='black') ax2.xaxis.grid(True,which='major',ls=':',color='grey') ax2.yaxis.grid(True,which='major',ls=':',color='grey') ax2.tick_params(labelsize=14) ax2.set_xlabel(r'$\chi_{\rm eff}$',fontsize=18) ax2.annotate(r'$q={0}$'.format(q),(0.075,0.85),xycoords='axes fraction',fontsize=16) ax2.annotate(r'$a_{{\rm max}} = {0}$'.format(aMax),(0.075,0.75),xycoords='axes fraction',fontsize=16) # ...and a third time aMax=0.4 q=0.8 ntrials=30000 a1s = aMax*np.random.random(ntrials) a2s = aMax*np.random.random(ntrials) u1s = 2.*np.random.random(ntrials)-1. u2s = 2.*np.random.random(ntrials)-1. 
xeff = (a1s*u1s + q*a2s*u2s)/(1.+q) xs = np.linspace(-1,1,300) p_xeff = chi_effective_prior_from_isotropic_spins(q,aMax,xs) ax3.hist(xeff,density=True,bins=50) ax3.plot(xs,p_xeff,color='black') ax3.xaxis.grid(True,which='major',ls=':',color='grey') ax3.yaxis.grid(True,which='major',ls=':',color='grey') ax3.tick_params(labelsize=14) ax3.set_xlabel(r'$\chi_{\rm eff}$',fontsize=18) ax3.annotate(r'$q={0}$'.format(q),(0.075,0.85),xycoords='axes fraction',fontsize=16) ax3.annotate(r'$a_{{\rm max}} = {0}$'.format(aMax),(0.075,0.75),xycoords='axes fraction',fontsize=16) plt.tight_layout() #plt.savefig('demo_chi_eff.pdf',bbox_inches='tight') plt.show() # - # ## Isotropic chi_p priors # # Finally, demonstrate the priors on chi_p corresponding to a uniform and isotropic prior on component spins. # + # Again set up three different cases fig = plt.figure(figsize=(15,4)) ax1 = fig.add_subplot(131) ax2 = fig.add_subplot(132) ax3 = fig.add_subplot(133) # Case 1 aMax=1 q=0.8 ntrials=100000 a1s = aMax*np.random.random(ntrials) a2s = aMax*np.random.random(ntrials) u1s = 2.*np.random.random(ntrials)-1. u2s = 2.*np.random.random(ntrials)-1. sin1s = np.sqrt(1.-u1s**2) sin2s = np.sqrt(1.-u2s**2) xp = np.maximum(a1s*sin1s,((3.+4.*q)/(4.+3.*q))*q*a2s*sin2s) xs = np.linspace(0,1,300) p_xp = chi_p_prior_from_isotropic_spins(q,aMax,xs) ax1.hist(xp,density=True,bins=50) ax1.plot(xs,p_xp,color='black') ax1.xaxis.grid(True,which='major',ls=':',color='grey') ax1.yaxis.grid(True,which='major',ls=':',color='grey') ax1.tick_params(labelsize=14) ax1.set_xlabel(r'$\chi_{\rm p}$',fontsize=18) ax1.set_ylabel(r'$p(\chi_{\rm p})$',fontsize=18) ax1.annotate(r'$q={0}$'.format(q),(0.75,0.85),xycoords='axes fraction',fontsize=16) ax1.annotate(r'$a_{{\rm max}} = {0}$'.format(aMax),(0.75,0.75),xycoords='axes fraction',fontsize=16) aMax=1 q=0.2 ntrials=100000 a1s = aMax*np.random.random(ntrials) a2s = aMax*np.random.random(ntrials) u1s = 2.*np.random.random(ntrials)-1. u2s = 2.*np.random.random(ntrials)-1. sin1s = np.sqrt(1.-u1s**2) sin2s = np.sqrt(1.-u2s**2) xp = np.maximum(a1s*sin1s,((3.+4.*q)/(4.+3.*q))*q*a2s*sin2s) xs = np.linspace(0,1,300) p_xp = chi_p_prior_from_isotropic_spins(q,aMax,xs) ax2.hist(xp,density=True,bins=50) ax2.plot(xs,p_xp,color='black') ax2.xaxis.grid(True,which='major',ls=':',color='grey') ax2.yaxis.grid(True,which='major',ls=':',color='grey') ax2.tick_params(labelsize=14) ax2.set_xlabel(r'$\chi_{\rm p}$',fontsize=18) ax2.annotate(r'$q={0}$'.format(q),(0.75,0.85),xycoords='axes fraction',fontsize=16) ax2.annotate(r'$a_{{\rm max}} = {0}$'.format(aMax),(0.75,0.75),xycoords='axes fraction',fontsize=16) aMax=0.4 q=0.8 ntrials=100000 a1s = aMax*np.random.random(ntrials) a2s = aMax*np.random.random(ntrials) u1s = 2.*np.random.random(ntrials)-1. u2s = 2.*np.random.random(ntrials)-1. sin1s = np.sqrt(1.-u1s**2) sin2s = np.sqrt(1.-u2s**2) xp = np.maximum(a1s*sin1s,((3.+4.*q)/(4.+3.*q))*q*a2s*sin2s) xs = np.linspace(0,1,300) p_xp = chi_p_prior_from_isotropic_spins(q,aMax,xs) ax3.hist(xp,density=True,bins=50) ax3.plot(xs,p_xp,color='black') ax3.xaxis.grid(True,which='major',ls=':',color='grey') ax3.yaxis.grid(True,which='major',ls=':',color='grey') ax3.tick_params(labelsize=14) ax3.set_xlabel(r'$\chi_{\rm p}$',fontsize=18) ax3.annotate(r'$q={0}$'.format(q),(0.75,0.85),xycoords='axes fraction',fontsize=16) ax3.annotate(r'$a_{{\rm max}} = {0}$'.format(aMax),(0.75,0.75),xycoords='axes fraction',fontsize=16) plt.tight_layout() #plt.savefig('demo_chi_p.pdf',bbox_inches='tight') plt.show() # -
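
# ## Sanity check: prior normalization
#
# As one extra check (a small sketch that only reuses the prior functions imported from `priors.py`
# above; the grids deliberately stop just short of the boundaries, where the analytic expressions
# can be numerically delicate), each analytic prior should integrate to approximately one:

# +
q = 0.8
aMax = 1.

# dense grids over the allowed ranges of chi_effective and chi_p
xs_eff = np.linspace(-0.999, 0.999, 2000)
xs_p = np.linspace(0.001, 0.999, 2000)

# trapezoidal integration of each analytic prior density
print(np.trapz(chi_effective_prior_from_aligned_spins(q, aMax, xs_eff), xs_eff))
print(np.trapz(chi_effective_prior_from_isotropic_spins(q, aMax, xs_eff), xs_eff))
print(np.trapz(chi_p_prior_from_isotropic_spins(q, aMax, xs_p), xs_p))
# -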
Demo.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Anomaly Detection of Retail Store Sales # # This hands-on mini-project will enable you to reinforce your learnings pertaining to anomaly detection in this unit. By now, you must already be aware of the key objective of anomaly detection. Just to refresh your memory, anomaly detection is the identification of outliers or rare event items in a dataset which potentially exhibit abnormal behavior or properties as compared to the rest of the datapoints. # # There are a wide variety of anomaly detection methods including supervised, unsupervised and semi-supervised. Typically you can perform anomaly detection on univariate data, multivariate data as well as data which is temporal in nature. In this mini-project you will leverage state-of-the-art anomaly detection models from frameworks like [__`scikit-learn`__](https://scikit-learn.org/stable/modules/outlier_detection.html) and [__`PyOD`__](https://pyod.readthedocs.io/en/latest/index.html). # # # By the end of this mini-project, you will have successfully applied these techniques to find out potential outliers pertaining to sales transactional data in a retail store dataset and also learnt how to visualize outliers similar to the following plot. # # ![](outlier_ex.png) # # We will be performing anomaly detection on both univariate and multivariate data and leverage the following anomaly detection techniques. # # - Simple Statistical Models (mean & standard deviation: the three-sigma rule) # - Isolation Forest # - Clustering-Based Local Outlier Factor # - Auto-encoders # # 1. Getting and Loading the Dataset # # The first step towards solving any data science or machine learning problem is to obtain the necessary data. In this scenario, we will be dealing with a popular retail dataset known as the [SuperStore Sales Dataset](https://community.tableau.com/docs/DOC-1236) which consists of transactional data pertaining to a retail store. # # #### Please download the required dataset from [here](https://community.tableau.com/docs/DOC-1236) if necessary, although it will also be provided to you along with this notebook for this mini-project # # Once we have the necessary data, we will load up the dataset and perform some initial exploratory data analysis # # 2. Exploratory Data Analysis # # It's time to do some basic exploratory analysis on the retail store transactional data. We start by loading up the dataset into a pandas dataframe. # + import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns import matplotlib import warnings warnings.filterwarnings('ignore') # %matplotlib inline df = pd.read_excel("./Superstore.xls") df.info() # - # We don't have any major missing values in our dataset and we can now look at a sample subset of the data df.head() # ## Visualize Sales vs. Order Date # # Let's look more closely at the __`Sales`__ attribute of the dataset in the next few cells. 
We'll start by looking at typical sales over time fig, ax = plt.subplots(1, 1, figsize=(12, 6)) sns.lineplot(x=df['Order Date'], y=df['Sales']); # ## Visualize Sales Distribution # # # Let's now look at the data distribution for __`Sales`__ sns.distplot(df['Sales']) plt.title("Sales Distribution"); df['Sales'].describe() # We can definitely see the presence of potential outliers in terms of the min or max values as compared to the meat of the distribution in the interquartile range as observed in the distribution statistics # ## Q 2.1: Visualize Profit vs. Order Date # # Let's now look closely at the __`Profit`__ attribute of the dataset in the next few cells. We'll start by looking at typical profits over time. # # __Your turn: Plot `Order Date` vs. `Profit` using a line plot__ fig, ax = plt.subplots(1, 1, figsize=(12, 6)) sns.lineplot(x=df['Order Date'], y=df['Profit']); # ## Q 2.2: Visualize Profit Distribution # # Let's now look at the data distribution for __`Profit`__ # # __Your turn: Plot the distribution for `Profit`__ sns.distplot(df['Profit']) plt.title("Profit Distribution"); # __Your turn: Get the essential descriptive statistics for `Profit` using an appropriate function__ df['Profit'].describe() # __Your turn: Do you notice anything interesting about the distribution?__ # We have both positive and negative values in profits since it indicates either a profit or a loss based on the sales and original price of the items. # ## Visualize Discount vs. Profit sns.scatterplot(x="Discount", y="Profit", data=df); # In the above visual, we look at a scatter plot showing the distribution of profits w.r.t discounts given # # 3. Univariate Anomaly Detection # # Univariate is basically analysis done on a single attribute or feature. In this section, we will perform anomaly detection on a single attribute using the following methods. # # - Statistical Process Control Methods (mean + 3sigma thresholding) # - Isolation Forest # # We will start off by demonstrating both these techniques on the __`Sales`__ attribute and later on, you will implement similar techniques on the __`Profit`__ attribute. # ## 3.1: Univariate Anomaly Detection on Sales using Statistical Modeling # # Here we start off by implementing anomaly detecting using statistical modeling on the __`Sales`__ attribute # ### Obtain Upper Limit Threshold for Sales # # Here we are concerned about transactions with high sales values so we compute the upper limit using the $\mu$ + 3$\sigma$ rule where $\mu$ is the mean of the distribution and $\sigma$ is the standard deviation of the distribution. 
# + mean_sales = df['Sales'].mean() sigma_sales = df['Sales'].std() three_sigma_sales = 3*sigma_sales threshold_sales_value = mean_sales + three_sigma_sales print('Threshold Sales:', threshold_sales_value) # - # ### Visualize Outlier Region # + fig, ax = plt.subplots(1, 1, figsize=(12, 6)) sns.distplot(df['Sales']) plt.axvspan(threshold_sales_value, df['Sales'].max(), facecolor='r', alpha=0.3) plt.title("Sales Distribution with Outlier Region"); # - # ### Filter and Sort Outliers # # Here we filter out the outlier observations and sort by descending order and view the top 5 outlier values sales_outliers_df = df['Sales'][df['Sales'] > threshold_sales_value] print('Total Sales Outliers:', len(sales_outliers_df)) sales_outliers_sorted = sales_outliers_df.sort_values(ascending=False) sales_outliers_sorted.head(5) # ### View Top 10 Outlier Transactions (df.loc[sales_outliers_sorted.index.tolist()][['City', 'Category', 'Sub-Category', 'Product Name', 'Sales', 'Quantity', 'Discount', 'Profit']]).head(10) # ### View Bottom 10 Outlier Transactions (df.loc[sales_outliers_sorted.index.tolist()][['City', 'Category', 'Sub-Category', 'Product Name', 'Sales', 'Quantity', 'Discount', 'Profit']]).tail(10) # ## Q 3.2: Univariate Anomaly Detection on Profit using Statistical Modeling # # In this section you will use the learning from Section 3.1 and implement anomaly detecting using statistical modeling on the __`Profit`__ attribute. Since we have both +ve (profits) and -ve (losses) values in the distribution, we will try to find anomalies for each. # ### Obtain Upper Limit Threshold for Profit # # __Your turn:__ Compute the upper and lower limits using the 𝜇 + 3 𝜎 rule where 𝜇 is the mean of the distribution and 𝜎 is the standard deviation of the distribution. # + # threshold_sales_value = mean_sales + three_sigma_sales mean_profit = df['Profit'].mean() sigma_profit = df['Profit'].std() three_sigma_profit = 3*sigma_profit threshold_profit_upper_limit = mean_profit + three_sigma_profit threshold_profit_lower_limit = mean_profit - three_sigma_profit print('Thresholds Profit:', threshold_profit_lower_limit, threshold_profit_upper_limit) # - # ### Visualize Outlier Regions # # __Your turn:__ Visualize the upper and lower outlier regions in the distribution similar to what you did in 3.1 # + fig, ax = plt.subplots(1, 1, figsize=(12, 6)) sns.distplot(df['Profit']) plt.axvspan(threshold_sales_value, df['Profit'].max(), facecolor='orange', alpha=0.3) plt.title("Profit Distribution with Outlier Region"); # - # ### Filter and Sort Outliers # # __Your turn:__ Filter out the outlier observations and sort by descending order and view the top 5 outlier values profit_outliers_df =df['Profit'][df['Profit'] > threshold_profit_upper_limit] print('Total Profit Outliers:', len(profit_outliers_df)) profit_outliers_sorted = profit_outliers_df.sort_values(ascending=False) profit_outliers_df.head(5) # We need to identify also the losses outliers loses_outliers_df =df['Profit'][df['Profit'] < threshold_profit_lower_limit] print('Total Losses Outliers:', len(loses_outliers_df)) loses_outliers_sorted = loses_outliers_df.sort_values(ascending=False) loses_outliers_df.head(5) # ### View Top 10 Outlier Transactions # # __Your turn:__ View the top ten transactions based on highest profits (df.loc[profit_outliers_sorted.index.tolist()][['City', 'Category', 'Sub-Category', 'Product Name', 'Sales', 'Quantity', 'Discount', 'Profit']]).head(10) # ### Q: Do you notice any interesting insights based on these transactions? 
# __A:__ Most of these are purchases for Copiers and Binders , looks like Canon products yielded some good profits` # ### View Bottom 10 Outlier Transactions # # __Your turn:__ View the bottom ten transactions based on lowest profits (highest losses) (df.loc[loses_outliers_sorted.index.tolist()][['City', 'Category', 'Sub-Category', 'Product Name', 'Sales', 'Quantity', 'Discount', 'Profit']]).tail(10) # ### Q: Do you notice any interesting insights based on these transactions? # __A:__ Most of these are purchases for Machines and Binders , looks like Cibify 3D Printers yielded high losses # ## 3.3: Univariate Anomaly Detection on Sales using Isolation Forest # # You might have already learnt about this model from the curriculum. Just to briefly recap, the Isolation Forest model, 'isolates' observations by randomly selecting a feature and then randomly selecting a split value between the maximum and minimum values of the selected feature. # # Recursive partitioning can be represented by a tree structure. Hence, the number of splittings required to isolate a sample is equivalent to the path length from the root node to the terminating node. This path length, averaged over a forest of such random trees, is a measure of normality and our decision function. # # Random partitioning produces noticeably shorter paths for anomalies. Hence, when a forest of random trees collectively produce shorter path lengths for particular samples, they are highly likely to be anomalies. # # More details are available in this [User Guide](https://scikit-learn.org/stable/modules/outlier_detection.html#isolation-forest) # ### Initialize and Train Model # # Here we initialize the isolation forest model with some hyperparameters assuming the proportion of outliers to be 1% of the total data (using the `contamination` setting) # + from sklearn.ensemble import IsolationForest sales_ifmodel = IsolationForest(n_estimators=100, contamination=0.01) sales_ifmodel.fit(df[['Sales']]) # - # ### Visualize Outlier Region # # Here we visualize the outlier region in the data distribution xx = np.linspace(df['Sales'].min(), df['Sales'].max(), len(df)).reshape(-1,1) anomaly_score = sales_ifmodel.decision_function(xx) outlier = sales_ifmodel.predict(xx) plt.figure(figsize=(12, 6)) plt.plot(xx, anomaly_score, label='anomaly score') plt.fill_between(xx.T[0], np.min(anomaly_score), np.max(anomaly_score), where=outlier==-1, color='r', alpha=.4, label='outlier region') plt.legend() plt.ylabel('anomaly score') plt.xlabel('Sales'); # ### Filter and Sort Outliers # # Here we predict outliers in our dataset using our trained model and filter out the outlier observations and sort by descending order and view the top 5 outlier values # + outlier_predictions = sales_ifmodel.predict(df[['Sales']]) sales_outliers_df = df[['Sales']] sales_outliers_df['Outlier'] = outlier_predictions sales_outliers_df = sales_outliers_df[sales_outliers_df['Outlier'] == -1]['Sales'] print('Total Sales Outliers:', len(sales_outliers_df)) sales_outliers_sorted = sales_outliers_df.sort_values(ascending=False) sales_outliers_sorted.head(5) # - # ### View Top 10 Outlier Transactions (df.loc[sales_outliers_sorted.index.tolist()][['City', 'Category', 'Sub-Category', 'Product Name', 'Sales', 'Quantity', 'Discount', 'Profit']]).head(10) # ### View Bottom 10 Outlier Transactions (df.loc[sales_outliers_sorted.index.tolist()][['City', 'Category', 'Sub-Category', 'Product Name', 'Sales', 'Quantity', 'Discount', 'Profit']]).tail(10) # ## Q 3.4: Univariate Anomaly Detection on 
Profit using Isolation Forest # # In this section you will use the learning from Section 3.3 and implement anomaly detecting using isolation on the __`Profit`__ attribute. Since we have both +ve (profits) and -ve (losses) values in the distribution, we will try to find anomalies for each. # ### Initialize and Train Model # # __Your Turn:__ Initialize the isolation forest model with similar hyperparameters as Section 3.3 and also assuming the proportion of outliers to be 1% of the total data (using the contamination setting) # + from sklearn.ensemble import IsolationForest profit_ifmodel = IsolationForest(n_estimators=100, contamination=0.01) profit_ifmodel.fit(df[['Profit']]) # - # ### Visualize Outlier Regions # # __Your turn:__ Visualize the upper and lower outlier regions in the distribution similar to what you did in 3.3 xx = np.linspace(df['Profit'].min(), df['Profit'].max(), len(df)).reshape(-1,1) anomaly_score = profit_ifmodel.decision_function(xx) outlier = profit_ifmodel.predict(xx) plt.figure(figsize=(12, 6)) plt.plot(xx, anomaly_score, label='anomaly score') plt.fill_between(xx.T[0], np.min(anomaly_score), np.max(anomaly_score), where=outlier==-1, color='r', alpha=.4, label='outlier region') plt.legend() plt.ylabel('anomaly score') plt.xlabel('Profit'); # ### Filter and Sort Outliers # # __Your Turn:__ Predict outliers in our dataset using our trained model and filter out the outlier observations and sort by descending order and view the top 5 outlier values similar to 3.3 # + outlier_predictions = profit_ifmodel.predict(df[['Profit']]) profit_outliers_df = df[['Profit']] profit_outliers_df['Outlier'] = outlier_predictions profit_outliers_df = profit_outliers_df[profit_outliers_df['Outlier'] == -1]['Profit'] print('Total Profit Outliers:', len(profit_outliers_df)) profit_outliers_sorted = profit_outliers_df.sort_values(ascending=False) profit_outliers_sorted.head(5) # - # ### View Top 10 Outlier Transactions # # __Your turn:__ View the top ten transactions based on highest profits (df.loc[profit_outliers_sorted.index.tolist()][['City', 'Category', 'Sub-Category', 'Product Name', 'Sales', 'Quantity', 'Discount', 'Profit']]).head(10) # ### View Bottom 10 Outlier Transactions # # __Your turn:__ View the bottom ten transactions based on lowest profits (highest losses) (df.loc[profit_outliers_sorted.index.tolist()][['City', 'Category', 'Sub-Category', 'Product Name', 'Sales', 'Quantity', 'Discount', 'Profit']]).tail(10) # ### Q: Do you observe any similarity in the results with the previous method? # __A:__ Yes # Another interesting approach to check out would be the [Generalized ESD Test for Outliers](https://www.itl.nist.gov/div898/handbook/eda/section3/eda35h3.htm) # # # # 4. Multivariate Anomaly Detection # # Multivariate is basically analysis done on more than one attribute or feature at a time. In this section, we will perform anomaly detection on two attributes (__`Discount`__ & __`Profit`__) using the following methods. # # - Clustering Based Local Outlier Factor (CBLOF) # - Isolation Forest # - Auto-Encoders # # You will learn how to train these models to detect outliers and also visualize these outliers. For this section we will be using the __[`pyod`](https://pyod.readthedocs.io/en/latest/)__ package so make sure you have it installed. 
# !pip install pyod # ## Extract Subset Data for Outlier Detection cols = ['Discount', 'Profit'] subset_df = df[cols] subset_df.head() # ## Feature Scaling # + from sklearn.preprocessing import MinMaxScaler mms = MinMaxScaler(feature_range=(0, 1)) subset_df[cols] = mms.fit_transform(subset_df) subset_df.head() # - # ## 4.1: Multivariate Anomaly Detection with Clustering Based Local Outlier Factor (CBLOF) # # The CBLOF model takes as an input the dataset and the cluster model that was generated by a clustering algorithm. It classifies the clusters into small clusters and large clusters using the parameters alpha and beta. The anomaly score is then calculated based on the size of the cluster the point belongs to as well as the distance to the nearest large cluster. # # By default, kMeans is used for clustering algorithm. You can read more in the [official documentation](https://pyod.readthedocs.io/en/latest/pyod.models.html#module-pyod.models.cblof) # ### Initialize and Train Model # # Here we initialize the CBLOF model with some hyperparameters assuming the proportion of outliers to be 1% of the total data (using the `contamination` setting) # + from pyod.models import cblof cblof_model = cblof.CBLOF(contamination=0.01, random_state=42) cblof_model.fit(subset_df) # - # ### Filter and Sort Outliers # # Here we predict outliers in our dataset using our trained model and filter out the outlier observations and sort by descending order and view the top 5 outlier values # + outlier_predictions = cblof_model.predict(subset_df) outliers_df = subset_df.copy(deep=True) outliers_df['Outlier'] = outlier_predictions outliers_df = outliers_df[outliers_df['Outlier'] == 1] print('Total Outliers:', len(outliers_df)) outliers_sorted = outliers_df.sort_values(by=['Profit', 'Discount'], ascending=False) outliers_sorted.head(5) # - # ### View Bottom 10 Outlier Transactions (df.loc[outliers_sorted.index.tolist()][['City', 'Category', 'Sub-Category', 'Product Name', 'Sales', 'Quantity', 'Discount', 'Profit']]).tail(10) # We can definitely see some huge losses incurred based on giving higher discounts even if the sales amount was high which is interesting as well as concerning. # ## Q 4.2: Multivariate Anomaly Detection with Isolation Forest # # Here you will detect anomalies using the Isolation Forest model and use the learnings from 4.1. Here you will use the [`pyod`](https://pyod.readthedocs.io/en/latest/pyod.models.html#module-pyod.models.iforest) version of [Isolation Forest](https://pyod.readthedocs.io/en/latest/pyod.models.html#module-pyod.models.iforest) which is basically a wrapper over the `scikit-learn` version but with more functionalities. 
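# One practical point before fitting the `pyod` models (a side note added here, not part of the original exercise): `pyod` detectors label outliers as 1 and inliers as 0, whereas the scikit-learn `IsolationForest` used in Section 3.3 returns -1 for outliers and 1 for inliers. A minimal sketch of mapping the earlier result onto the `pyod` convention so the two sections can be compared directly:

# +
# Relabel the Section 3.3 sales model output (-1 = outlier, 1 = inlier) into the
# 0/1 convention used by pyod; `sales_pyod_style` is a name introduced here for illustration.
sales_sklearn_labels = sales_ifmodel.predict(df[['Sales']])
sales_pyod_style = np.where(sales_sklearn_labels == -1, 1, 0)
print('Outliers flagged by the scikit-learn model:', sales_pyod_style.sum())
# -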
# ### Initialize and Train Model # # __Your Turn:__ Initialize the isolation forest model with similar hyperparameters as before and also assuming the proportion of outliers to be 1% of the total data (using the contamination setting) # + from pyod.models import iforest if_model = iforest.IForest(contamination=0.01, random_state=42) if_model.fit(subset_df) # - # ### Filter and Sort Outliers # # __Your Turn:__ Predict outliers in our dataset using our trained model and filter out the outlier observations and sort by descending order and view the top 5 outlier values similar to 4.1 # + outlier_predictions = if_model.predict(subset_df) outliers_df = subset_df.copy(deep=True) outliers_df['Outlier'] = outlier_predictions outliers_df = outliers_df[outliers_df['Outlier'] == 1] print('Total Outliers:', len(outliers_df)) outliers_sorted = outliers_df.sort_values(by=['Profit', 'Discount'], ascending=False) outliers_sorted.head(5) # - # ### View Bottom 10 Outlier Transactions # # __Your turn:__ View the bottom ten transactions (df.loc[outliers_sorted.index.tolist()][['City', 'Category', 'Sub-Category', 'Product Name', 'Sales', 'Quantity', 'Discount', 'Profit']]).tail(10) # ### Q: Do you notice any differences in the results with the previous model? # We do notice some transactions with 80% discount and high losses # ## Q 4.3: Multivariate Anomaly Detection with Auto-encoders # # Here you will detect anomalies using the Auto-encoder model and use the learnings from 4.1. Here you will use the [Auto-encoder](https://pyod.readthedocs.io/en/latest/pyod.models.html#module-pyod.models.auto_encoder) model from `pyod` which is a deep learning model often used for learning useful data representations in an unsupervised fashion without any labeled data. # # ![](outlier_ae.png) # # Similar to PCA, AE could be used to detect outlier objects in the data by calculating the reconstruction errors # ### Initialize Model # # Here we initiaze an auto-encoder network with a few hidden layers so that we could train it for a 100 epochs # + from pyod.models import auto_encoder ae_model = auto_encoder.AutoEncoder(hidden_neurons=[2, 32, 32, 2], hidden_activation='relu', output_activation='sigmoid', epochs=100, batch_size=32, contamination=0.01) # - # ### Train Model # # __Your turn:__ Train the model by calling the `fit()` function on the right data ae_model.fit(subset_df) # ### Filter and Sort Outliers # # __Your Turn:__ Predict outliers in our dataset using our trained model and filter out the outlier observations and sort by descending order and view the top 5 outlier values similar to 4.1 # + outlier_predictions = ae_model.predict(subset_df) outliers_df = subset_df.copy(deep=True) outliers_df['Outlier'] = outlier_predictions outliers_df = outliers_df[outliers_df['Outlier'] == 1] print('Total Outliers:', len(outliers_df)) outliers_sorted = outliers_df.sort_values(by=['Profit', 'Discount'], ascending=False) outliers_sorted.head(5) # - # ### View Bottom 10 Outlier Transactions # # __Your turn:__ View the bottom ten transactions (df.loc[outliers_sorted.index.tolist()][['City', 'Category', 'Sub-Category', 'Product Name', 'Sales', 'Quantity', 'Discount', 'Profit']]).tail(10) # ## 4.4: Visualize Anomalies and Compare Anomaly Detection Models # # Here we will look at the visual plots of anomalies as detected by the above three models def visualize_anomalies(model, xx, yy, data_df, ax_obj, subplot_title): # predict raw anomaly score scores_pred = model.decision_function(data_df) * -1 # prediction of a datapoint category outlier or 
inlier y_pred = model.predict(data_df) n_inliers = len(y_pred) - np.count_nonzero(y_pred) n_outliers = np.count_nonzero(y_pred == 1) out_df = data_df.copy(deep=True) out_df['Outlier'] = y_pred.tolist() # discount - inlier feature 1, profit - inlier feature 2 inliers_discount = out_df[out_df['Outlier'] == 0]['Discount'].values inliers_profit = out_df[out_df['Outlier'] == 0]['Profit'].values # discount - outlier feature 1, profit - outlier feature 2 outliers_discount = out_df[out_df['Outlier'] == 1]['Discount'].values outliers_profit = out_df[out_df['Outlier'] == 1]['Profit'].values # Use threshold value to consider a datapoint inlier or outlier # threshold = stats.scoreatpercentile(scores_pred,100 * outliers_fraction) threshold = np.percentile(scores_pred, 100 * outliers_fraction) # decision function calculates the raw anomaly score for every point Z = model.decision_function(np.c_[xx.ravel(), yy.ravel()]) * -1 Z = Z.reshape(xx.shape) # fill blue map colormap from minimum anomaly score to threshold value ax_obj.contourf(xx, yy, Z, levels=np.linspace(Z.min(), threshold, 7),cmap=plt.cm.Blues_r) # draw red contour line where anomaly score is equal to thresold a = ax_obj.contour(xx, yy, Z, levels=[threshold],linewidths=2, colors='red') # fill orange contour lines where range of anomaly score is from threshold to maximum anomaly score ax_obj.contourf(xx, yy, Z, levels=[threshold, Z.max()],colors='orange') b = ax_obj.scatter(inliers_discount, inliers_profit, c='white',s=20, edgecolor='k') c = ax_obj.scatter(outliers_discount, outliers_profit, c='black',s=20, edgecolor='k') ax_obj.legend([a.collections[0], b,c], ['learned decision function', 'inliers','outliers'], prop=matplotlib.font_manager.FontProperties(size=10),loc='upper right') ax_obj.set_xlim((0, 1)) ax_obj.set_ylim((0, 1)) ax_obj.set_xlabel('Discount') ax_obj.set_ylabel('Sales') ax_obj.set_title(subplot_title) # + outliers_fraction = 0.01 xx , yy = np.meshgrid(np.linspace(0, 1, 100), np.linspace(0, 1, 100)) fig, ax = plt.subplots(1, 3, figsize=(20, 6)) ax_objs = [ax[0], ax[1], ax[2]] models = [cblof_model, if_model, ae_model] plot_titles = ['Cluster-based Local Outlier Factor (CBLOF)', 'Isolation Forest', 'Auto-Encoder'] for ax_obj, model, plot_title in zip(ax_objs, models, plot_titles): visualize_anomalies(model=model, xx=xx, yy=yy, data_df=subset_df, ax_obj=ax_obj, subplot_title=plot_title) plt.axis('tight'); # -
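# As an additional comparison (a sketch added here, reusing the three fitted models and `subset_df` from above), we can quantify how much the detectors agree by measuring the pairwise overlap of their outlier labels; recall that `pyod` returns 1 for outliers and 0 for inliers:

# +
model_labels = {'CBLOF': cblof_model.predict(subset_df),
                'Isolation Forest': if_model.predict(subset_df),
                'Auto-Encoder': ae_model.predict(subset_df)}

names = list(model_labels)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        a, b = model_labels[names[i]], model_labels[names[j]]
        both = np.sum((a == 1) & (b == 1))    # flagged by both detectors
        either = np.sum((a == 1) | (b == 1))  # flagged by at least one detector
        print('{0} vs {1}: Jaccard overlap = {2:.2f}'.format(names[i], names[j], both/either))
# -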
anomaly-detection/mec-16.4.1-anomaly-detection-mini-project/Mini_Project_Anomaly_Detection.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Introduction to Programming with Python # --- # ## What is Python and why would I use it? # Python is a programming language. # # A programming language is a way of writing commands so that an interpreter or compiler can turn them into machine instructions. # # We like using Python in Software Carpentry Workshops for lots of reasons # # - Widely used in science # - It's easy to read and write # - Huge supporting community - lots of ways to learn and get help # - This Jupyter Notebook. Not a lot of languages have this kind of thing (name comes from Julia, Python, and R). # Even if you aren't using Python in your work, you can use Python to learn the fundamentals of programming that will apply accross languages # ### Characters # # Python uses certain characters as part of its syntax. Here is what they are called: # # * `[` : left `square bracket` # * `]` : right `square bracket` # * `(` : left `paren` (parentheses) # * `)` : right `paren` # * `{` : left `curly brace` # * `}` : right `curly brace` # * `<` : left `angle bracket` # * `>` : right `angle bracket` # * `-` `dash` (not hyphen. Minus only when used in an equation or formula) # * `"` : `double quote` # * `'` : `single quote` (apostrophe) # # What are the fundamentals? # ## VARIABLES # # * We store values inside variables. # * We can refer to variables in other parts of our programs. # * In Python, the variable is created when a value is assigned to it. # * Values are assigned to variable names using the equals sign (=). # * A variable can hold two types of things. Basic data types and objects(ways to structure data and code). # * In Python, all variables are objects. # # Some data types you will find in almost every language include: # # - Strings (characters, words, sentences or paragraphs): 'a' 'b' 'c' 'abc' '0' '3' ';' '?' # - Integers (whole numbers): 1 2 3 100 10000 -100 # - Floating point or Float (decimals): 10.0 56.9 -3.765 # - Booleans: True, False # # Here, Python assigns an age to a variable `age` and a name in quotation marks to a variable `first_name`. age = 42 first_name = "Ahmed" # #### Of Note: # Variable names: # * Cannot start with a digit # * Cannot contain spaces, quotation marks, or other punctuation # You can display what is inside `age` by using the print command # `print()` # with the value placed inside the parenthesis print(age) # --- # ## EXERCISE: # 1. Create two new variables called age and first_name with your own age and name # 1. Print each variable out to dispaly it's value # # You can also combine values in a single print command by separating them with commas # Insert your variable values into the print statement below print(, 'is', , 'years old') # * `print` automatically puts a single space between items to separate them. # * And wraps around to a new line at the end. # ### Using Python built-in type() function # # If you are not sure of what your variables' types are, you can call a python function called type() in the same manner as you used print() function. # Python is an object-oriented language, so any defined variable has a type. Default common types are str, int, float, list, and tuple. 
We will cover list and tuple later print(type(age)) print(type(first_name)) # ### STRING TYPE # One or more characters strung together and enclosed in quotes (single or double): "Hello World!" greeting = "Hello World!" print ("The greeting is:", greeting) greeting = 'Hello World!' print ('The greeting is:', greeting) # #### Need to use single quotes in your string? # Use double quotes to make your string. greeting = "Hello 'World'!" print ("The greeting is:", greeting) # #### Need to use both? greeting1 = "'Hello'" greeting2 = '"World"!' print ("The greeting is:", greeting1, greeting2) # #### Concatenation bear = "wild" down = "cats" print (bear+down) # --- # ## EtherPad # Why isn't `greeting` enclosed in quotes in the statements above? # # Post your answers to the EtherPad, or vote for existing answers # # --- # #### Use an index to get a single character from a string. # * The characters (individual letters, numbers, and so on) in a string are ordered. # * For example, the string ‘AB’ is not the same as ‘BA’. Because of this ordering, we can treat the string as a list of characters. # * Each position in the string (first, second, etc.) is given a number. This number is called an index or sometimes a subscript. # * Indices are numbered from 0. # * Use the position’s index in square brackets to get the character at that position. # + # String : H e l i u m # Index Location: 0 1 2 3 4 5 atom_name = 'helium' print(atom_name[0], atom_name[3]) # - # ### NUMERIC TYPES # * Numbers are stored as numbers (no quotes) and are either integers (whole) or real numbers (decimal). # * In programming, numbers with decimal precision are called floating-point, or float. # * Floats use more processing than integers so use them wisely! # * Floats and ints come in various sizes but Python switches between them transparently. # + my_integer = 10 my_float = 10.99998 my_value = my_integer print("My numeric value:", my_value) print("Type:", type(my_value)) # - # ### BOOLEAN TYPE # * Boolean values are binary, meaning they can only either true or false. # * In python True and False (no quotes) are boolean values # + is_true = True is_false = False print("My true boolean variable:", is_true) # - # --- # ## EtherPad # What data type is `'1024'`? # <ol style="list-style-type:lower-alpha"> # <li>String</li> # <li>Int</li> # <li>Float</li> # <li>Boolean</li> # </ol> # # Post your answers to the EtherPad, or vote for existing answers # # --- # ## Variables can be used in calculations. # # * We can use variables in calculations just as if they were values. # * Remember, we assigned 42 to `age` a few lines ago. age = age + 3 print('Age in three years:', age) # * This now sets our age value 45. We can also add strings together. When you add strings it's called "concatenating" name = "Sonoran" full_name = name + " Desert" print(full_name) # * Notice how I included a space in the quotes before "Desert". If we hadn't, we would have had "SonoranDesert" # * Can we subtract, multiply, or divide strings? # + #Create a new variable called last_name with your own last name. 
#Create a second new variable called full_name that is a combination of your first and last name # - # ## DATA STRUCTURES # Python has many objects that can be used to structure data including: # # - Lists # - Tuples # - Sets # - Dictionaries # ### LISTS # Lists are collections of values held together in brackets: list_of_characters = ['a', 'b', 'c'] print (list_of_characters) # Create a new list called list_of_numbers with four numbers in it # * Just like strings, we can access any value in the list by it's position in the list. # * **IMPORTANT:** Indexes start at 0 # ~~~ # list: ['a', 'b', 'c', 'd'] # index location: 0 1 2 3 # ~~~ # Print out the second value in the list list_of_numbers # Once you have created a list you can add more items to it with the append method list_of_numbers.append(5) print(list_of_numbers) # #### Aside: Sizes of data structures # # To determine how large (how many values/entries/elements/etc.) any Python data structure has, use the `len()` function len(list_of_numbers) # Note that you cannot compute the length of a numeric variable: len(age) # This will give an error: `TypeError: object of type 'int' has no len()` # However, `len()` can compute the lengths of strings # + print(len('this is a sentence')) # You can also get the lengths of strings in a list list_of_strings = ["Python is Awesome!", "Look! I'm programming.", "E = mc^2"] # This will get the length of "Look! I'm programming." print(len(list_of_strings[1])) # - # ### TUPLES # Tuples are like a List, `cannot be changed (immutable)`. # # Tuples can be used to represent any collection of data. They work well for things like coordinates. tuple_of_x_y_coordinates = (3, 4) print (tuple_of_x_y_coordinates) # Tuples can have any number of values # + coordinates = (1, 7, 38, 9, 0) print (coordinates) icecream_flavors = ("strawberry", "vanilla", "chocolate") print (icecream_flavors) # - # ... and any types of values. # # Once created, you `cannot add more items to a tuple` (but you can add items to a list). If we try to append, like we did with lists, we get an error icecream_flavors.append('bubblegum') # ### THE DIFFERENCE BETWEEN TUPLES AND LISTS # Lists are good for manipulating data sets. It's easy for the computer to add, remove and sort items. Sorted tuples are easier to search and index. This happens because tuples reserve entire blocks of memory to make finding specific locations easier while lists use addressing and force the computer to step through the whole list. # ![array%20vs%20list.png](array%20vs%20list.png) # Let's say you want to get to the last item. The tuple can calculate the location because: # # (address)=(size of data)×(inex of the item)+(original address) # # This is how zero indexing works. The computer can do the calculation and jump directly to the address. The list would need to go through every item in the list to get there. # # Now lets say you wanted to remove the third item. Removing it from the tuple requires it to be resized and coppied. Python would even make you do this manually. Removing the third item in the list is as simple as making the second item point to the fourth. Python makes this as easy as calling a method on the tuple object. 
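# To see that difference in code (a small example added here): dropping an item from a list is a single method call, while a tuple has to be rebuilt because tuples have no methods for removing items.

# +
# A list can drop an item in place with a method call
flavor_list = ["strawberry", "vanilla", "chocolate"]
flavor_list.remove("vanilla")
print(flavor_list)

# A tuple has no remove method, so a new tuple must be built instead
flavor_tuple = ("strawberry", "vanilla", "chocolate")
flavor_tuple = flavor_tuple[:1] + flavor_tuple[2:]
print(flavor_tuple)
# -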
# ### SETS # Sets are similar to lists and tuples, but can only contain unique values and are held in braces # # # For example a list could contain multiple exact values # + # In the gapminder data that we will use, we will have data entries for the continents # of each country in the dataset my_list = ['Africa', 'Europe', 'North America', 'Africa', 'Europe', 'North America'] print("my_list is", my_list) # A set would only allow for unique values to be held my_set = {'Africa', 'Europe', 'North America', 'Africa', 'Europe', 'North America'} print("my_set is", my_set) # - # Just list lists, you can append to a set using the add() function # + my_set.add('Asia') # Now let's try to append one that is in: my_set.add('Europe') # - # ### DICTIONARIES # * Dictionaries are collections of things that you can lookup like in a real dictionary: # * Dictionarys can organized into key and value pairs separated by commas (like lists) and surrounded by braces. # * E.g. {key1: value1, key2: value2} # * We call each association a "key-value pair". # # dictionary_of_definitions = {"aardvark" : "The aardvark is a medium-sized, burrowing, nocturnal mammal native to Africa.", "boat" : "A boat is a thing that floats on water"} # We can find the definition of aardvark by giving the dictionary the "key" to the definition we want in brackets. # # In this case the key is the word we want to lookup print ("The definition of aardvark is:", dictionary_of_definitions["aardvark"]) # Print out the definition of a boat # Just like lists and sets, you can add to dictionaries by doing the following: dictionary_of_definitions['ocean'] = "An ocean is a very large expanse of sea, in particular each of the main areas into which the sea is divided geographically." print(dictionary_of_definitions) # --- # ## EtherPad # Which one of these is not a valid entry in a dictionary? # # 1. `"key"`: `"value"` # 2. `"GCBHSA"`: `"ldksghdklfghfdlgkfdhgfldkghfgfhd"` # 3. `"900"` : `"key"` : `"value"` # 4. `Books` : `10000` # # Post your answer to the EtherPad, or vote for an existing answer # --- # ## EXERCISE: # 1. Create a dictionary called `zoo` with at least three animal types with a different count for each animal. # 1. `print` out the count of the second animal in your dictionary # # --- # ## Statements # # OK great. Now what can we do with all of this? # # We can plug everything together with a bit of logic and python language and make a program that can do things like: # # * process data # # * parse files # # * data analysis # What kind of logic are we talking about? # # We are talking about something called a "logical structure" which starts at the top (first line) and reads down the page in order # # In python a logical structure are often composed of statements. Statements are powerful operators that control the flow of your script. There are two main types: # # * conditionals (if, while) # * loops (for) # # ### Conditionals # Conditionals are how we make a decision in the program. # In python, conditional statements are called if/else statements. # # # * If statement use boolean values to define flow. # * E.g. If something is True, do this. Else, do this # + it_is_daytime = False # this is the variable that holds the current condition of it_is_daytime which is True or False if it_is_daytime: print ("Have a nice day.") else: print ("Have a nice night.") # before running this cell # what will happen if we change it_is_daytime to True? # what will happen if we change it_is_daytime to False? 
# - # * Often if/else statement use a comparison between two values to determine True or False # * These comparisons use "comparison operators" such as ==, >, and <. # * \>= and <= can be used if you need the comparison to be inclusive. # * **NOTE**: Two equal signs is used to compare values, while one equals sign is used to assign a value # * E.g. # # 1 > 2 is False<br/> # 2 > 2 is False<br/> # 2 >= 2 is True<br/> # 'abc' == 'abc' is True # + user_name = "Ben" if user_name == "Marnee": print ("Marnee likes to program in Python.") else: print ("We do not know who you are.") # - # * What if a condition has more than two choices? Does it have to use a boolean? # * Python if-statments will let you do that with elif # * `elif` stands for "else if" # # + if user_name == "Marnee": print ("Marnee likes to program in Python.") elif user_name == "Ben": print ("Ben likes maps.") elif user_name == "Brian": print ("Brian likes plant genomes") else: print ("We do not know who you are") # for each possibility of user_name we have an if or else-if statment to check the value of the name # and print a message accordingly. # - # What does the following statement print? # # my_num = 42 # my_num = 8 + my_num # new_num = my_num / 2 # if new_num >= 30: # print("Greater than thirty") # elif my_num == 25: # print("Equals 25") # elif new_num <= 30: # print("Less than thirty") # else: # print("Unknown") # --- # ## EXERCISE: # * 1. Check to see if you have more than three entries in the `zoo` dictionary you created earlier. If you do, print "more than three". If you don't, print "less than three" # # --- # ### Loops # Loops tell a program to do the same thing over and over again until a certain condition is met. # In python two main loop types are for loops and while loops. # #### For Loops # We can loop over collections of things like lists or dictionaries or we can create a looping structure. # + # LOOPING over a collection # LIST # If I want to print a list of fruits, I could write out each print statment like this: print("apple") print("banana") print("mango") # or I could create a list of fruit # loop over the list # and print each item in the list list_of_fruit = ["apple", "banana", "mango"] # this is how we write the loop # "fruit" here is a variable that will hold each item in the list, the fruit, as we loop # over the items in the list print (">>looping>>") for fruit in list_of_fruit: print (fruit) # - # LOOPING a set number of times # We can do this with range # range automatically creates a list of numbers in a range # here we have a list of 10 numbers starting with 0 and increasing by one until we have 10 numbers # What will be printed for x in range(0,10): print (x) # + # LOOPING over a collection # DICTIONARY # We can do the same thing with a dictionary and each association in the dictionary fruit_price = {"apple" : 0.10, "banana" : 0.50, "mango" : 0.75} for key, value in fruit_price.items(): print ("%s price is %s" % (key, value)) # - # --- # ## EXERCISE: # 1\. For each entry in your `zoo` dictionary, print that entry/key # 2\. For each entry in your zoo dictionary, print that value # --- # #### While Loops # Similar to if statements, while loops use a boolean test to either continue looping or break out of the loop. 
# +
# While Loops
my_num = 10
while my_num > 0:
    print("My number", my_num)
    my_num = my_num - 1
# -

# NOTE: While loops can be dangerous, because if you forget to include an operation that modifies the variable being tested (above, we're subtracting 1 at the end of each loop), it will continue to run forever and your script will never finish.

# That's it. With just these data types, structures, and logic, you can build a program
#
# Let's do that next with functions
#
# -- COMMIT YOUR WORK TO GITHUB --

# # Key Points
#
# * Python is an open-source programming language that can be used to do science!
# * We store information in variables
# * There are a variety of data types and objects for storing data
# * You can do math on numeric variables, and you can concatenate strings
# * There are different Python default data structures including: lists, tuples, sets and dictionaries
# * Programming uses conditional statements for flow control such as: if/else, for loops and while loops
python-lessons/01 - Introduction to Programming with Python.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %matplotlib inline # # # Advection correction # # This tutorial shows how to use the optical flow routines of pysteps to implement # the advection correction procedure described in Anagnostou and Krajewski (1999). # # Advection correction is a temporal interpolation procedure that is often used # when estimating rainfall accumulations to correct for the shift of rainfall patterns # between consecutive radar rainfall maps. This shift becomes particularly # significant for long radar scanning cycles and in presence of fast moving # precipitation features. # # <div class="alert alert-info"><h4>Note</h4><p>The code for the advection correction using pysteps was originally # written by `<NAME> <https://github.com/wolfidan>`_.</p></div> # # + from datetime import datetime import matplotlib.pyplot as plt import numpy as np from pysteps import io, motion, rcparams from pysteps.utils import conversion, dimension from pysteps.visualization import plot_precip_field from scipy.ndimage import map_coordinates # - # ## Read the radar input images # # First, we import a sequence of 36 images of 5-minute radar composites # that we will use to produce a 3-hour rainfall accumulation map. # We will keep only one frame every 10 minutes, to simulate a longer scanning # cycle and thus better highlight the need for advection correction. # # You need the pysteps-data archive downloaded and the pystepsrc file # configured with the data_source paths pointing to data folders. # # # Selected case date = datetime.strptime("201607112100", "%Y%m%d%H%M") data_source = rcparams.data_sources["mch"] # ### Load the data from the archive # # # + root_path = data_source["root_path"] path_fmt = data_source["path_fmt"] fn_pattern = data_source["fn_pattern"] fn_ext = data_source["fn_ext"] importer_name = data_source["importer"] importer_kwargs = data_source["importer_kwargs"] timestep = data_source["timestep"] # Find the input files from the archive fns = io.archive.find_by_date( date, root_path, path_fmt, fn_pattern, fn_ext, timestep=5, num_next_files=35 ) # Read the radar composites importer = io.get_method(importer_name, "importer") R, __, metadata = io.read_timeseries(fns, importer, **importer_kwargs) # Convert to mm/h R, metadata = conversion.to_rainrate(R, metadata) # Upscale to 2 km (simply to reduce the memory demand) R, metadata = dimension.aggregate_fields_space(R, metadata, 2000) # Keep only one frame every 10 minutes (i.e., every 2 timesteps) # (to highlight the need for advection correction) R = R[::2] # - # ## Advection correction # # Now we need to implement the advection correction for a pair of successive # radar images. The procedure is based on the algorithm described in Anagnostou # and Krajewski (Appendix A, 1999). # # To evaluate the advection occurred between two successive radar images, we are # going to use the Lucas-Kanade optical flow routine available in pysteps. 
# # def advection_correction(R, T=5, t=1): """ R = np.array([qpe_previous, qpe_current]) T = time between two observations (5 min) t = interpolation timestep (1 min) """ # Evaluate advection oflow_method = motion.get_method("LK") fd_kwargs = {"buffer_mask": 10} # avoid edge effects V = oflow_method(np.log(R), fd_kwargs=fd_kwargs) # Perform temporal interpolation Rd = np.zeros((R[0].shape)) x, y = np.meshgrid( np.arange(R[0].shape[1], dtype=float), np.arange(R[0].shape[0], dtype=float) ) for i in range(t, T + t, t): pos1 = (y - i / T * V[1], x - i / T * V[0]) R1 = map_coordinates(R[0], pos1, order=1) pos2 = (y + (T - i) / T * V[1], x + (T - i) / T * V[0]) R2 = map_coordinates(R[1], pos2, order=1) Rd += (T - i) * R1 + i * R2 return t / T ** 2 * Rd # Finally, we apply the advection correction to the whole sequence of radar # images and produce the rainfall accumulation map. # # R_ac = R[0].copy() for i in range(R.shape[0] - 1): R_ac += advection_correction(R[i : (i + 2)], T=10, t=1) R_ac /= R.shape[0] # ## Results # # We compare the two accumulation maps. The first map on the left is # computed without advection correction and we can therefore see that the shift # between successive images 10 minutes apart produces irregular accumulations. # Conversely, the rainfall accumulation of the right is produced using advection # correction to account for this spatial shift. The final result is a smoother # rainfall accumulation map. # # plt.figure(figsize=(9, 4)) plt.subplot(121) plot_precip_field(R.mean(axis=0), title="3-h rainfall accumulation") plt.subplot(122) plot_precip_field(R_ac, title="Same with advection correction") plt.tight_layout() plt.show() # ### Reference # # <NAME>., and <NAME>. 1999. "Real-Time Radar Rainfall # Estimation. Part I: Algorithm Formulation." Journal of Atmospheric and # Oceanic Technology 16: 189–97. # https://doi.org/10.1175/1520-0426(1999)016<0189:RTRREP>2.0.CO;2 # #
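# A tiny, self-contained illustration (added here; not part of the original example) of the `map_coordinates` call that `advection_correction` relies on: sampling a field at coordinates shifted by a known displacement returns the field advected by that displacement, which is exactly the warping used to interpolate between the two radar frames.

# +
import numpy as np
from scipy.ndimage import map_coordinates

field = np.arange(25, dtype=float).reshape(5, 5)
x, y = np.meshgrid(np.arange(5, dtype=float), np.arange(5, dtype=float))

dx, dy = 1.0, 0.0  # assume a uniform displacement of one pixel in x
warped = map_coordinates(field, (y - dy, x - dx), order=1, mode="nearest")
print(warped)  # each value has moved one column to the right (left edge repeated)
# -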
notebooks/advection_correction.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [default] # language: python # name: python3 # --- # # Example: Covertype Data Set # The following example uses the (processed) Covertype dataset from [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Covertype). # # It is a dataset with both categorical (`wilderness_area` and `soil_type`) and continuous (the rest) features. The target is the `cover_type` column: # + covertype_dataset = spark.read.parquet("covertype_dataset.snappy.parquet") covertype_dataset.printSchema() # - # The 10 first rows: covertype_dataset.limit(10).toPandas() # In order for Spark's `DecisionTreeClassifier` to work with the categorical features (as well as the target), we first need to use [`pyspark.ml.feature.StringIndexer`](https://spark.apache.org/docs/latest/api/python/pyspark.ml.html#pyspark.ml.feature.StringIndexer)s to generate a numeric representation for those columns: # + from pyspark.ml.feature import StringIndexer string_indexer_wilderness = StringIndexer(inputCol="wilderness_area", outputCol="wilderness_area_indexed") string_indexer_soil = StringIndexer(inputCol="soil_type", outputCol="soil_type_indexed") string_indexer_cover = StringIndexer(inputCol="cover_type", outputCol="cover_type_indexed") # - # To generate the new *StringIndexerModels*, we call `.fit()` on each `StringIndexer` instance: # + string_indexer_wilderness_model = string_indexer_wilderness.fit(covertype_dataset) string_indexer_soil_model = string_indexer_soil.fit(covertype_dataset) string_indexer_cover_model = string_indexer_cover.fit(covertype_dataset) # - # And we create the new columns: covertype_dataset_indexed_features = string_indexer_cover_model.transform(string_indexer_soil_model .transform(string_indexer_wilderness_model .transform(covertype_dataset) ) ) # New columns can be seen at the right: covertype_dataset_indexed_features.limit(10).toPandas() # Now, we just have to `VectorAssemble` our features to create the feature vector: # + from pyspark.ml.feature import VectorAssembler feature_columns = ["elevation", "aspect", "slope", "horizontal_distance_to_hydrology", "vertical_distance_to_hydrology", "horizontal_distance_to_roadways", "hillshade_9am", "hillshade_noon", "hillshade_3pm", "horizontal_distance_to_fire_points", "wilderness_area_indexed", "soil_type_indexed"] feature_assembler = VectorAssembler(inputCols=feature_columns, outputCol="features") # - # And we have our dataset prepared for ML: covertype_dataset_prepared = feature_assembler.transform(covertype_dataset_indexed_features) covertype_dataset_prepared.printSchema() # Let's build a simple `pyspark.ml.classification.DecisionTreeClassifier`: # + from pyspark.ml.classification import DecisionTreeClassifier dtree = DecisionTreeClassifier(featuresCol="features", labelCol="cover_type_indexed", maxDepth=3, maxBins=50) # - # We fit it, and we get our `DecisionTreeClassificationModel`: # + dtree_model = dtree.fit(covertype_dataset_prepared) dtree_model # - # The `.toDebugString` attribute prints the decision rules for the tree, but it is not very user-friendly: print(dtree_model.toDebugString) # Perhaps `spark_tree_plotting` may be helpful here ;) # + from spark_tree_plotting import plot_tree tree_plot = plot_tree(dtree_model, featureNames=feature_columns, categoryNames={"wilderness_area_indexed":string_indexer_wilderness_model.labels, 
"soil_type_indexed":string_indexer_soil_model.labels}, classNames=string_indexer_cover_model.labels, filled=True, # With color! roundedCorners=True, # Rounded corners in the nodes roundLeaves=True # Leaves will be ellipses instead of rectangles ) # + from IPython.display import Image Image(tree_plot)
examples/Example_covertype_dataset.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Extended Kalman filter for Nomoto model # An Extended Kalman filter with a Nomoto model as the predictor will be developed. # The filter is run on simulated data as well as real model test data. # + tags=["hide-cell"] # %load_ext autoreload # %autoreload 2 import pandas as pd import numpy as np import matplotlib.pyplot as plt from numpy.linalg import inv import sympy as sp import src.visualization.book_format as book_format book_format.set_style() from src.substitute_dynamic_symbols import lambdify from sympy import Matrix from sympy.physics.mechanics import (dynamicsymbols, ReferenceFrame, Particle, Point) from IPython.display import display, Math, Latex from src.substitute_dynamic_symbols import run, lambdify from sympy.physics.vector.printing import vpprint, vlatex from src.data import mdl from src.extended_kalman_filter import extended_kalman_filter # - # ## Nomoto model for ship manoeuvring dynamics # The Nomoto model can be written as: # + tags=["remove-input"] r,r1d,r2d = sp.symbols('r \dot{r} \ddot{r}') psi,psi1d = sp.symbols('psi \dot{\psi}') h,u = sp.symbols('h u') x, x1d = sp.symbols('x \dot{x}') A,B,C,D,E, Phi = sp.symbols('A B C D E Phi') w = sp.symbols('w') K, delta, T_1, T_2 = sp.symbols('K delta T_1 T_2') eq_nomoto = sp.Eq(K*delta, r + T_1*r1d + T_2*r2d) Math(vlatex(eq_nomoto)) # - # where $r$ is yaw rate with its time derivatives and $\delta$ is the rudder angle. $K$, $T_{1}$ # and $T_{1}$ are the coefficients describing the hydrodynamics of the ship. # # For slow manoeuvres this equation can be further simplified by removing the $\ddot{r}$ term into a first order Nomoto model: # + tags=["remove-input"] eq_nomoto_simple = eq_nomoto.subs(r2d,0) Math(vlatex(eq_nomoto_simple)) # - # ### Simulation model # + tags=["remove-input"] f_hat = sp.Function('\hat{f}')(x,u,w) eq_system = sp.Eq(x1d, f_hat) eq_system # - # Where the state vector $x$: # + tags=["remove-input"] eq_x = sp.Eq(x, sp.UnevaluatedExpr(Matrix([psi,r]))) eq_x # - # and input vector $u$: # and $w$ is zero mean Gausian process noise # For the nomoto model the time derivatives for the states can be expressed as: # + tags=["remove-input"] eq_psi1d = sp.Eq(psi1d,r) eq_psi1d # + tags=["remove-input"] eq_r1d = sp.Eq(r1d,sp.solve(eq_nomoto_simple,r1d)[0]) eq_r1d # - def lambda_f_constructor(K, T_1): def lambda_f(x, u): delta = u f = np.array([[x[1], (K*delta-x[1])/T_1]]).T return f return lambda_f jac = sp.eye(2,2) + Matrix([r,eq_r1d.rhs]).jacobian([psi,r])*h jac Matrix([r, eq_r1d.rhs]).jacobian([delta]) def lambda_jacobian_constructor(h,T_1): def lambda_jacobian(x, u): jac = np.array( [ [1, h], [0, 1-h/T_1], ] ) return jac return lambda_jacobian # ## Simulation # Simulation with this model where rudder angle shifting between port and starboard # + tags=["cell_hide", "hide-cell"] T_1_ = 1.8962353076056344 K_ = 0.17950970687951323 h_ = 0.02 lambda_f = lambda_f_constructor(K=K_, T_1=T_1_) lambda_jacobian = lambda_jacobian_constructor(h=h_, T_1=T_1_) # - def simulate(E, ws, t, us): simdata = [] x_=np.deg2rad(np.array([[0,0]]).T) for u_,w_ in zip(us,ws): x_=x_ + h_*lambda_f(x=x_.flatten(), u=u_) simdata.append(x_.flatten()) simdata = np.array(simdata) df = pd.DataFrame(simdata, columns=["psi","r"], index=t) df['delta'] = us return df # + tags=["cell_hide", "hide-cell"] N_ = 4000 t_ = np.arange(0,N_*h_,h_) 
us = np.deg2rad(np.concatenate((-10*np.ones(int(N_/4)), 10*np.ones(int(N_/4)), -10*np.ones(int(N_/4)), 10*np.ones(int(N_/4))))) np.random.seed(42) E = np.array([[0, 1]]).T process_noise = np.deg2rad(0.01) ws = process_noise*np.random.normal(size=N_) df = simulate(E=E, ws=ws, t=t_, us=us) measurement_noise = np.deg2rad(0.5) df['epsilon'] = measurement_noise*np.random.normal(size=N_) df['psi_measure'] = df['psi'] + df['epsilon'] df['psi_deg'] = np.rad2deg(df['psi']) df['psi_measure_deg'] = np.rad2deg(df['psi_measure']) df['delta_deg'] = np.rad2deg(df['delta']) # + tags=["hide_input", "remove-input"] fig,ax=plt.subplots() df.plot(y='psi_deg', ax=ax) df.plot(y='psi_measure_deg', ax=ax, zorder=-1) df.plot(y='delta_deg', ax=ax, zorder=-1) df.plot(y='r') ax.set_title('Simulation with measurement and process noise') ax.set_xlabel('Time [s]'); # - # ## Kalman filter # Implementation of the Kalman filter. The code is inspired of this Matlab implementation: [ExEKF.m](https://github.com/cybergalactic/MSS/blob/master/mssExamples/ExEKF.m). # + tags=["hide-cell"] x0=np.deg2rad(np.array([[0,0]]).T) P_prd = np.diag(np.deg2rad([1, 0.1])) Qd = np.deg2rad(np.diag([0, 0.5])) Rd = np.deg2rad(1) ys = df['psi_measure'].values E_ = np.array( [[0,0], [0,1]], ) C_ = np.array([[1, 0]]) Cd_ = C_ Ed_ = h_ * E_ time_steps = extended_kalman_filter(x0=x0, P_prd=P_prd, lambda_f=lambda_f, lambda_jacobian=lambda_jacobian,h=h_, us=us, ys=ys, E=E_, Qd=Qd, Rd=Rd, Cd=Cd_) x_hats = np.array([time_step["x_hat"] for time_step in time_steps]).T time = np.array([time_step["time"] for time_step in time_steps]).T Ks = np.array([time_step["K"] for time_step in time_steps]).T # + tags=["remove-input"] n=len(P_prd) fig,axes=plt.subplots(nrows=n) keys = ['psi','r'] for i,key in enumerate(keys): ax=axes[i] df.plot(y=key, ax=ax, label="True") if key=='psi': df.plot(y='psi_measure', ax=ax, label="Measured", zorder=-1) ax.plot(time, x_hats[i, :], "-", label="kalman") ax.set_ylabel(key) ax.legend() # + tags=["remove-input"] fig,ax=plt.subplots() for i,key in enumerate(keys): ax.plot(time,Ks[i,:],label=key) ax.set_title('Kalman gains') ax.legend(); ax.set_ylim(0,0.1); # - # # Real data # Using the developed Kalman filter on some real model test data # ## Load test # + tags=["remove-input"] id=22773 df, units, meta_data = mdl.load(dir_path = '../data/raw', id=id) df.index = df.index.total_seconds() df.index-=df.index[0] # + tags=["remove-input"] from src.visualization.plot import track_plot fig,ax=plt.subplots() fig.set_size_inches(10,10) track_plot(df=df, lpp=meta_data.lpp, x_dataset='x0', y_dataset='y0', psi_dataset='psi', beam=meta_data.beam, ax=ax); # + tags=["hide-input"] ys = df['psi'].values h_m=h_ = df.index[1]-df.index[0] x0=np.deg2rad(np.array([[0,0]]).T) us = df['delta'].values P_prd = np.diag(np.deg2rad([1, 0.1])) Qd = np.deg2rad(np.diag([0, 10])) Rd = np.deg2rad(0.5) time_steps = extended_kalman_filter(x0=x0, P_prd=P_prd, lambda_f=lambda_f, lambda_jacobian=lambda_jacobian,h=h_, us=us, ys=ys, E=E_, Qd=Qd, Rd=Rd, Cd=Cd_) x_hats = np.array([time_step["x_hat"] for time_step in time_steps]).T time = np.array([time_step["time"] for time_step in time_steps]).T Ks = np.array([time_step["K"] for time_step in time_steps]).T # + tags=["remove-input"] n=len(P_prd) fig,axes=plt.subplots(nrows=n) ax=axes[0] df.plot(y='psi', ax=ax, label="Measured", zorder=-1) df['-delta']=-df['delta'] df.plot(y='-delta', ax=ax, label='$-\delta$', zorder=-10) ax.plot(time, x_hats[0, :], "-", label="kalman", zorder=10) ax.set_ylabel('$\Psi$') ax.legend() ax=axes[1] 
ax.plot(time, x_hats[1, :], "-", label="kalman")
ax.set_ylabel('$r$')
ax.legend();
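# -

# For reference, a minimal sketch of what one predictor/corrector step of such a filter typically looks like, based on the standard discrete-time EKF equations and the ExEKF.m structure cited above; the actual `src.extended_kalman_filter` implementation may differ in its details.

# +
def ekf_step(x_prd, P_prd, u, y, lambda_f, lambda_jacobian, h, Cd, Ed, Qd, Rd):
    # Corrector: Kalman gain and measurement update of the state and covariance
    S = Cd @ P_prd @ Cd.T + Rd                    # innovation covariance (1x1)
    K = P_prd @ Cd.T @ np.linalg.inv(S)           # Kalman gain (2x1)
    IKC = np.eye(len(x_prd)) - K @ Cd
    eps = y - Cd @ x_prd                          # innovation
    x_hat = x_prd + K @ eps
    P_hat = IKC @ P_prd @ IKC.T + Rd * (K @ K.T)
    # Predictor: propagate the state with the model and the covariance with its Jacobian
    Ad = lambda_jacobian(x_hat.flatten(), u)
    x_prd_next = x_hat + h * lambda_f(x_hat.flatten(), u)
    P_prd_next = Ad @ P_hat @ Ad.T + Ed @ Qd @ Ed.T
    return x_hat, x_prd_next, P_prd_next
# -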
notebooks/15.40_EKF_nomoto.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Building a Simple Chatbot from Scratch in Python (using NLTK) # # ![Alt text](https://cdn-images-1.medium.com/max/800/1*pPcVfZ7i-gLMabUol3zezA.gif) # # History of chatbots dates back to 1966 when a computer program called ELIZA was invented by Weizenbaum. It imitated the language of a psychotherapist from only 200 lines of code. You can still converse with it here: [Eliza](http://psych.fullerton.edu/mbirnbaum/psych101/Eliza.htm?utm_source=ubisend.com&utm_medium=blog-link&utm_campaign=ubisend). # # On similar lines let's create a very basic chatbot utlising the Python's NLTK library.It's a very simple bot with hardly any cognitive skills,but still a good way to get into NLP and get to know about chatbots. # # For detailed analysis, please see the accompanying blog titled:**[Building a Simple Chatbot in Python (using NLTK](https://medium.com/analytics-vidhya/building-a-simple-chatbot-in-python-using-nltk-7c8c8215ac6e) # # ## NLP # NLP is a way for computers to analyze, understand, and derive meaning from human language in a smart and useful way. By utilizing NLP, developers can organize and structure knowledge to perform tasks such as automatic summarization, translation, named entity recognition, relationship extraction, sentiment analysis, speech recognition, and topic segmentation. # ## Import necessary libraries import io import random import string # to process standard python strings import warnings import numpy as np from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.metrics.pairwise import cosine_similarity import warnings warnings.filterwarnings('ignore') # ## Downloading and installing NLTK # NLTK(Natural Language Toolkit) is a leading platform for building Python programs to work with human language data. It provides easy-to-use interfaces to over 50 corpora and lexical resources such as WordNet, along with a suite of text processing libraries for classification, tokenization, stemming, tagging, parsing, and semantic reasoning, wrappers for industrial-strength NLP libraries. # # [Natural Language Processing with Python](http://www.nltk.org/book/) provides a practical introduction to programming for language processing. # # For platform-specific instructions, read [here](https://www.nltk.org/install.html) # # pip install nltk # ### Installing NLTK Packages # # # import nltk from nltk.stem import WordNetLemmatizer nltk.download('popular', quiet=True) # for downloading packages #nltk.download('punkt') # first-time use only #nltk.download('wordnet') # first-time use only # ## Reading in the corpus # # For our example,we will be using the Wikipedia page for chatbots as our corpus. Copy the contents from the page and place it in a text file named ‘chatbot.txt’. However, you can use any corpus of your choice. f=open('chatbot.txt','r',errors = 'ignore') raw=f.read() raw = raw.lower()# converts to lowercase # # The main issue with text data is that it is all in text format (strings). However, the Machine learning algorithms need some sort of numerical feature vector in order to perform the task. So before we start with any NLP project we need to pre-process it to make it ideal for working. 
Basic text pre-processing includes: # # * Converting the entire text into **uppercase** or **lowercase**, so that the algorithm does not treat the same words in different cases as different # # * **Tokenization**: Tokenization is just the term used to describe the process of converting the normal text strings into a list of tokens i.e words that we actually want. Sentence tokenizer can be used to find the list of sentences and Word tokenizer can be used to find the list of words in strings. # # _The NLTK data package includes a pre-trained Punkt tokenizer for English._ # # * Removing **Noise** i.e everything that isn’t in a standard number or letter. # * Removing the **Stop words**. Sometimes, some extremely common words which would appear to be of little value in helping select documents matching a user need are excluded from the vocabulary entirely. These words are called stop words # * **Stemming**: Stemming is the process of reducing inflected (or sometimes derived) words to their stem, base or root form — generally a written word form. Example if we were to stem the following words: “Stems”, “Stemming”, “Stemmed”, “and Stemtization”, the result would be a single word “stem”. # * **Lemmatization**: A slight variant of stemming is lemmatization. The major difference between these is, that, stemming can often create non-existent words, whereas lemmas are actual words. So, your root stem, meaning the word you end up with, is not something you can just look up in a dictionary, but you can look up a lemma. Examples of Lemmatization are that “run” is a base form for words like “running” or “ran” or that the word “better” and “good” are in the same lemma so they are considered the same. # # # ## Tokenisation sent_tokens = nltk.sent_tokenize(raw)# converts to list of sentences word_tokens = nltk.word_tokenize(raw)# converts to list of words # ## Preprocessing # # We shall now define a function called LemTokens which will take as input the tokens and return normalized tokens. # + lemmer = nltk.stem.WordNetLemmatizer() #WordNet is a semantically-oriented dictionary of English included in NLTK. def LemTokens(tokens): return [lemmer.lemmatize(token) for token in tokens] remove_punct_dict = dict((ord(punct), None) for punct in string.punctuation) def LemNormalize(text): return LemTokens(nltk.word_tokenize(text.lower().translate(remove_punct_dict))) # - # ## Keyword matching # # Next, we shall define a function for a greeting by the bot i.e if a user’s input is a greeting, the bot shall return a greeting response.ELIZA uses a simple keyword matching for greetings. We will utilize the same concept here. GREETING_INPUTS = ("hello", "hi", "greetings", "sup", "what's up","hey",) GREETING_RESPONSES = ["hi", "hey", "*nods*", "hi there", "hello", "I am glad! You are talking to me"] def greeting(sentence): for word in sentence.split(): if word.lower() in GREETING_INPUTS: return random.choice(GREETING_RESPONSES) # ## Generating Response # # ### Bag of Words # After the initial preprocessing phase, we need to transform text into a meaningful vector (or array) of numbers. The bag-of-words is a representation of text that describes the occurrence of words within a document. It involves two things: # # * A vocabulary of known words. # # * A measure of the presence of known words. # # Why is it is called a “bag” of words? 
That is because any information about the order or structure of words in the document is discarded and the model is only **concerned with whether the known words occur in the document, not where they occur in the document.** # # The intuition behind the Bag of Words is that documents are similar if they have similar content. Also, we can learn something about the meaning of the document from its content alone. # # For example, if our dictionary contains the words {Learning, is, the, not, great}, and we want to vectorize the text “Learning is great”, we would have the following vector: (1, 1, 0, 0, 1). # # # ### TF-IDF Approach # A problem with the Bag of Words approach is that highly frequent words start to dominate in the document (e.g. larger score), but may not contain as much “informational content”. Also, it will give more weight to longer documents than shorter documents. # # One approach is to rescale the frequency of words by how often they appear in all documents so that the scores for frequent words like “the” that are also frequent across all documents are penalized. This approach to scoring is called Term Frequency-Inverse Document Frequency, or TF-IDF for short, where: # # **Term Frequency: is a scoring of the frequency of the word in the current document.** # # ``` # TF = (Number of times term t appears in a document)/(Number of terms in the document) # ``` # # **Inverse Document Frequency: is a scoring of how rare the word is across documents.** # # ``` # IDF = 1+log(N/n), where, N is the number of documents and n is the number of documents a term t has appeared in. # ``` # ### Cosine Similarity # # Tf-idf weight is a weight often used in information retrieval and text mining. This weight is a statistical measure used to evaluate how important a word is to a document in a collection or corpus # # ``` # Cosine Similarity (d1, d2) = Dot product(d1, d2) / ||d1|| * ||d2|| # ``` # where d1,d2 are two non zero vectors. # # # To generate a response from our bot for input questions, the concept of document similarity will be used. We define a function response which searches the user’s utterance for one or more known keywords and returns one of several possible responses. If it doesn’t find the input matching any of the keywords, it returns a response:” I am sorry! I don’t understand you” # + def response(user_response): robo_response='' sent_tokens.append(user_response) TfidfVec = TfidfVectorizer(tokenizer=LemNormalize, stop_words='english') tfidf = TfidfVec.fit_transform(sent_tokens) vals = cosine_similarity(tfidf[-1], tfidf) idx=vals.argsort()[0][-2] flat = vals.flatten() flat.sort() req_tfidf = flat[-2] if(req_tfidf==0): robo_response=robo_response+"I am sorry! I don't understand you" return robo_response else: robo_response = robo_response+sent_tokens[idx] return robo_response # - # Finally, we will feed the lines that we want our bot to say while starting and ending a conversation depending upon user’s input. flag=True print("ROBO: My name is Robo. I will answer your queries about Chatbots. If you want to exit, type Bye!") while(flag==True): user_response = input() user_response=user_response.lower() if(user_response!='bye'): if(user_response=='thanks' or user_response=='thank you' ): flag=False print("ROBO: You are welcome..") else: if(greeting(user_response)!=None): print("ROBO: "+greeting(user_response)) else: print("ROBO: ",end="") print(response(user_response)) sent_tokens.remove(user_response) else: flag=False print("ROBO: Bye! take care..")
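# ## A tiny retrieval example
#
# The `response()` function above retrieves the corpus sentence whose TF-IDF vector has the highest
# cosine similarity with the user's utterance. The cell below is a small self-contained illustration
# of that retrieval step on a made-up three-sentence corpus (the sentences and the `toy_corpus` and
# `query` names are examples introduced here, not part of `chatbot.txt`).

# +
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

toy_corpus = ["a chatbot is a software application that conducts a conversation",
              "eliza was an early natural language processing program",
              "tf-idf gives rare words a higher weight than common words"]
query = "what is a chatbot"

vec = TfidfVectorizer()
tfidf = vec.fit_transform(toy_corpus + [query])   # last row is the query
sims = cosine_similarity(tfidf[-1], tfidf[:-1])   # similarity of the query to every corpus sentence
best = sims.argmax()
print("best match:", toy_corpus[best], "(score %.2f)" % sims[0, best])
# -

# This is exactly the pattern used in `response()`: append the user input to `sent_tokens`,
# vectorize everything, and pick the second-highest similarity (the highest is the input itself).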
src/chatbot/Chatbot.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # 线性回归 --- 从0开始 # # 虽然强大的深度学习框架可以减少很多重复性工作,但如果你过于依赖它提供的便利抽象,那么你可能不会很容易地理解到底深度学习是如何工作的。所以我们的第一个教程是如何只利用ndarray和autograd来实现一个线性回归的训练。 # # ## 线性回归 # # 给定一个数据点集合`X`和对应的目标值`y`,线性模型的目标是找一根线,其由向量`w`和位移`b`组成,来最好地近似每个样本`X[i]`和`y[i]`。用数学符号来表示就是我们将学`w`和`b`来预测, # # $$\boldsymbol{\hat{y}} = X \boldsymbol{w} + b$$ # # 并最小化所有数据点上的平方误差 # # $$\sum_{i=1}^n (\hat{y}_i-y_i)^2.$$ # # 你可能会对我们把古老的线性回归作为深度学习的一个样例表示很奇怪。实际上线性模型是最简单但也可能是最有用的神经网络。一个神经网络就是一个由节点(神经元)和有向边组成的集合。我们一般把一些节点组成层,每一层使用下一层的节点作为输入,并输出给上面层使用。为了计算一个节点值,我们将输入节点值做加权和,然后再加上一个激活函数。对于线性回归而言,它是一个两层神经网络,其中第一层是(下图橙色点)输入,每个节点对应输入数据点的一个维度,第二层是单输出节点(下图绿色点),它使用身份函数($f(x)=x$)作为激活函数。 # # ![](../img/simple-net-linear.png) # # ## 创建数据集 # # 这里我们使用一个人工数据集来把事情弄简单些,因为这样我们将知道真实的模型是什么样的。具体来说我们使用如下方法来生成数据 # # `y[i] = 2 * X[i][0] - 3.4 * X[i][1] + 4.2 + noise` # # 这里噪音服从均值0和标准差为0.01的正态分布。 # + attributes={"classes": [], "id": "", "n": "2"} from mxnet import ndarray as nd from mxnet import autograd num_inputs = 2 num_examples = 1000 true_w = [2, -3.4] true_b = 4.2 X = nd.random_normal(shape=(num_examples, num_inputs)) y = true_w[0] * X[:, 0] + true_w[1] * X[:, 1] + true_b y += .01 * nd.random_normal(shape=y.shape) # - # 注意到`X`的每一行是一个长度为2的向量,而`y`的每一行是一个长度为1的向量(标量)。 # + attributes={"classes": [], "id": "", "n": "3"} print(X[0], y[0]) # - # ## 数据读取 # # 当我们开始训练神经网络的时候,我们需要不断读取数据块。这里我们定义一个函数它每次返回`batch_size`个随机的样本和对应的目标。我们通过python的`yield`来构造一个迭代器。 # + attributes={"classes": [], "id": "", "n": "4"} import random batch_size = 10 def data_iter(): # 产生一个随机索引 idx = list(range(num_examples)) random.shuffle(idx) for i in range(0, num_examples, batch_size): j = nd.array(idx[i:min(i+batch_size,num_examples)]) yield nd.take(X, j), nd.take(y, j) # - # 下面代码读取第一个随机数据块 # + attributes={"classes": [], "id": "", "n": "5"} for data, label in data_iter(): print(data, label) break # - # ## 初始化模型参数 # # 下面我们随机初始化模型参数 # + attributes={"classes": [], "id": "", "n": "6"} w = nd.random_normal(shape=(num_inputs, 1)) b = nd.zeros((1,)) params = [w, b] # - # 之后训练时我们需要对这些参数求导来更新它们的值,所以我们需要创建它们的梯度。 # + attributes={"classes": [], "id": "", "n": "7"} for param in params: param.attach_grad() # - # ## 定义模型 # # 线性模型就是将输入和模型做乘法再加上偏移: # + attributes={"classes": [], "id": "", "n": "8"} def net(X): return nd.dot(X, w) + b # - # ## 损失函数 # # 我们使用常见的平方误差来衡量预测目标和真实目标之间的差距。 # + attributes={"classes": [], "id": "", "n": "9"} def square_loss(yhat, y): # 注意这里我们把y变形成yhat的形状来避免自动广播 return (yhat - y.reshape(yhat.shape)) ** 2 # - # ## 优化 # # 虽然线性回归有显试解,但绝大部分模型并没有。所以我们这里通过随机梯度下降来求解。每一步,我们将模型参数沿着梯度的反方向走特定距离,这个距离一般叫学习率。(我们会之后一直使用这个函数,我们将其保存在[utils.py](../utils.py)。) # + attributes={"classes": [], "id": "", "n": "10"} def SGD(params, lr): for param in params: param[:] = param - lr * param.grad # - # ## 训练 # # 现在我们可以开始训练了。训练通常需要迭代数据数次,一次迭代里,我们每次随机读取固定数个数据点,计算梯度并更新模型参数。 # + attributes={"classes": [], "id": "", "n": "11"} epochs = 5 learning_rate = .001 for e in range(epochs): total_loss = 0 for data, label in data_iter(): with autograd.record(): output = net(data) loss = square_loss(output, label) loss.backward() SGD(params, learning_rate) total_loss += nd.sum(loss).asscalar() print("Epoch %d, average loss: %f" % (e, total_loss/num_examples)) # - # 训练完成后我们可以比较学到的参数和真实参数 # + attributes={"classes": [], "id": "", "n": "12"} true_w, w # + attributes={"classes": [], "id": "", "n": "13"} true_b, b # - # ## 结论 # # 
We have now seen that, using nothing but NDArray and autograd, we can implement a model with very little effort.
#
# ## Exercise
#
# Try different learning rates and observe how quickly the error decreases (the convergence rate).
#
# **Feedback and discussion are welcome** [here](https://discuss.gluon.ai/t/topic/743)
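#
# For a quick sanity check (not part of the original tutorial), the closed-form least-squares
# solution can be compared against the parameters learned by SGD. The cell below is a small numpy
# sketch that assumes the synthetic `X` and `y` from above; `asnumpy()` converts the MXNet NDArrays
# to numpy arrays, and `Xb`, `theta` are names introduced here for illustration.

# +
import numpy as np

Xn = X.asnumpy()
yn = y.asnumpy()
Xb = np.hstack([Xn, np.ones((Xn.shape[0], 1))])       # append a bias column
theta, *_ = np.linalg.lstsq(Xb, yn, rcond=None)       # minimizes ||Xb @ theta - yn||^2
print("closed-form w:", theta[:2], " b:", theta[2])   # should be close to [2, -3.4] and 4.2
# -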
chapter_supervised-learning/.ipynb_checkpoints/linear-regression-scratch-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/stephenbeckr/randomized-algorithm-class/blob/master/Demos/demo14_MonteCarlo_and_improvements.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + [markdown] id="U5PNBZA4E3dt" # # Monte Carlo and variants # # Discusses Monte Carlo in the context of integration: # # - There are many ways to integrate functions # - Deterministic "quadrature" rules are fancy Riemann Sums, and will work *very well* if the integrand is smooth and in low dimensions. They break down when the integrand is highly oscillatory, and/or for high-dimensional integrals. Special versions targeted for oscillatory integrals is the subject of current applied math research. # - Monte Carlo integration interprets the integral as an expectation of a random variable, and draws samples to approximate the true mean with a sample mean. For a smooth function in low dimensions, Monte Carlo integration is a bad idea because classical quadrature rules are much, much better # - Monte Carlo is slow/inaccurate, but the inaccuracy is independent of the dimension of the integral. So for large enough dimensions, it makes sense (while in large dimensions, making a deterministic grid is impossible since it will be too large) # - Since Monte Carlo is useful sometimes, there are many known techniques to make it better. We examine two: # - **Quasi Monte Carlo**, which uses low-discrepancy sequences, and inherits some of the advantages and disadvantages from both Monte Carlo and grid/quadrature methods. Refs: # - <NAME> and <NAME>. [Digital nets and sequences: discrepancy theory and quasi-Monte Carlo integration](https://web.maths.unsw.edu.au/~josefdick/preprints/DP_book_preprint.pdf). Cambridge University Press, 2010 # - Art Owen's ["Monte Carlo Book: the Quasi-Monte Carlo parts"](https://artowen.su.domains/mc/qmcstuff.pdf) from [Monte Carlo theory, methods and examples (incomplete draft)](https://artowen.su.domains/mc/) by Art Owen # - [scipy.stats.qmc documentation](https://docs.scipy.org/doc/scipy/reference/stats.qmc.htm) which is quite useful, and according to this [commit](https://github.com/scipy/scipy/commit/b24017ea594a0e32e711c99015fbb27432a96ff0#diff-a94e84f2e5470e07eaf65ca735fe2f698d24edc24a1bed2768a8842a12c9d8ea) appears to have been written by Art Owen # - Full of good advice, such as make sure to use $n=2^d$ samples; if the number of samples is not a power of 2, performance can be much worse # - [wikipedia low-discrepancy sequences](https://en.wikipedia.org/wiki/Low-discrepancy_sequence#Construction_of_low-discrepancy_sequences) # - ["High-dimensional integration: The quasi-Monte Carlo way"](https://web.maths.unsw.edu.au/~josefdick/preprints/DKS2013_Acta_Num_Version.pdf) by <NAME> and Sloan (Acta Numerica, 2013) # - QMC (and randomized QMC, RQMC) can improve the **convergence rate** # - QMC code: # - [`scipy.stats.qmc`](https://docs.scipy.org/doc/scipy/reference/stats.qmc.htm) which is from 2020 # - [QMCPy](https://qmcpy.org/), Version 1.0 from 2021 # - Similar, but not the same, as QMC, is [**Sparse grid**](https://en.wikipedia.org/wiki/Sparse_grid) via **Smolyak's quadrature rule** # - **Control variates** as a means of **variance reduction**. 
Refs: # - [wikipedia control variates](https://en.wikipedia.org/wiki/Control_variates) # - There are many types of variance reduction. Other methods, not discussed here, include antithetic variates, [importance sampling](https://en.wikipedia.org/wiki/Importance_sampling) and [stratified sampling](https://en.wikipedia.org/wiki/Stratified_sampling). # - Variance reduction techniques do *not* improve convergence rate, but improve the **constants** # # <NAME>, University of Colorado, April 2019, ipynb version Nov 2021 # + [markdown] id="Ye6ZHFO8xy7z" # The `qmc` module was added in version 1.7.0 of `scipy` (around July 2021), so we may need to upgrade our `scipy`. Let's see what version colab provides us with: # + id="l2_6p2OnYKCy" outputId="bf2c77fc-95fd-47c6-d443-e04176d8fb87" colab={"base_uri": "https://localhost:8080/"} import scipy print(scipy.__version__) # Nov 2021 on colab, this is 1.4.1 # + id="3C3mGb0Xx9Je" # !pip install scipy==1.7 # + id="dtIAm7rGxVG-" colab={"base_uri": "https://localhost:8080/"} outputId="c8d9b45c-45a1-42b7-f84e-dfa14914cfc9" import scipy print(scipy.__version__) # + id="gjRJTTnskSSd" import numpy as np from scipy.special import sici from numpy import sinc, pi rng=np.random.default_rng() from numpy.linalg import norm import scipy.stats.qmc as qmc import matplotlib.pyplot as plt import matplotlib as mpl mpl.rcParams["lines.linewidth"] = 2 mpl.rcParams["figure.figsize"] = [8,5] # + [markdown] id="Fw_KaFbtkVJT" # ## Integrate $\sin(x)/x$ from 0 to 1 (e.g. Si(1), Si is Sine Integral) # # The sine integral, Si(z), is the integral of $\sin(x)/x$ from 0 to z where we define $\sin(0)/0$ to be 1 (consistent with the limit) # # This integral is not known in closed form. See [Trigonometric_integral#Sine_integral](https://en.wikipedia.org/wiki/Trigonometric_integral#Sine_integral) on wikipedia. # # How can we approximate it? There are specialized techniques that are faster and more accurate than what we will discuss here, but we'll treat it via the integral definition and try to numerically evaluate the integral. 
# + colab={"base_uri": "https://localhost:8080/"} id="lcqRsEWLk4YH" outputId="f127998c-cd4c-4845-d3a0-72f5dffa4bf8" si = sici(1)[0] print(f"The sine integral Si(1) is {si}") # + [markdown] id="OfRT5ZeiIyi6" # Let's try some classical [quadrature rules](https://en.wikipedia.org/wiki/Numerical_integration) to integrate $\int_a^bf(x)\,dx$ # + colab={"base_uri": "https://localhost:8080/"} id="mN_N8Bj0lBSc" outputId="c5985924-e9b2-4e3b-a03f-5a18032f68ea" f = lambda x : sinc(x/pi) a = 0 b = 1 N = int(4e1) + 1 # simpler to have it odd (for Simpson's rule) xgrid, h = np.linspace(a,b,num=N,retstep=True) # spacing is h composite_mid = h*np.sum( f( xgrid[1:]-h/2) ) # open formula fx = f(xgrid) composite_trap = h*(np.sum(fx) - fx[0]/2 - fx[-1]/2 ) composite_simp = h/3*(fx[0]+fx[-1]+4*np.sum(fx[1::2]) + 2*np.sum(fx[2:-1:2])) print( si - composite_mid) print( si - composite_trap) print( si - composite_simp) # + [markdown] id="vgxijUm1JqZu" # Getting ready for quasi-Monte Carlo, let's visualize discrepancy of random numbers on $[0,1]$ # + colab={"base_uri": "https://localhost:8080/", "height": 324} id="yTcd-yQmJvQ-" outputId="b13e2e39-9dc9-4229-93af-4052582c5a5e" N = 2**8 # for the fancy QMC, we want powers of 2 setA = np.sort( rng.uniform(size=N) ) # uniform # Try something slight lower discrepancy and very easy to construct # (note: this is a *random* quasi-MC method) setB = np.hstack( (.5*setA[::2], .5 + .5*setA[1::2]) ) sampler = qmc.Sobol(d=1,scramble=True) setC = sampler.random_base2(m=int(np.log2(N))).ravel() setC.sort() # for visualization purposes plt.plot( setA, label='uniform random' ) plt.plot( setB, label='lower discrepancy' ) plt.plot( setC, label='Sobol sequence (proper way)' ) plt.plot( [0,N-1], [0,1], '--') plt.xlim((0,50)) plt.ylim((0,.2)) plt.legend() plt.show() # + id="OGuY6N-m1BWJ" colab={"base_uri": "https://localhost:8080/", "height": 623} outputId="cc937dcf-b8b6-4e96-b507-e857037f2084" # == Another way to plot it == kernel_size = 10 kernel = np.ones(kernel_size) / kernel_size smooth = lambda data : np.convolve(data, kernel, mode='same') plt.fill_between( np.arange(N), 0, smooth( setA - np.linspace(0,1,num=N) ), \ label='uniform random', alpha=0.5 ) plt.fill_between( np.arange(N), 0, smooth( setB - np.linspace(0,1,num=N) ), label='lower discrepancy', alpha=0.5 ) plt.fill_between( np.arange(N), 0, smooth( setC - np.linspace(0,1,num=N) ), label='Sobol sequence', alpha=0.5 ) plt.legend() plt.show() # == Another way to plot it == plt.hist( np.diff(setA), label='uniform random', alpha=0.5, bins=30 ) plt.hist( np.diff(setB), label='lower discrepancy', alpha=0.5, bins=30 ) plt.hist( np.diff(setC), label='Sobol sequence', alpha=0.5, bins=30 ) plt.legend() plt.show() # + [markdown] id="19kxHDMX1FQa" # #### Visualize this in 2D # # Also compare with a [sparse grid implementation](https://github.com/mfouesneau/sparsegrid) and [Latin hypercube sampling](https://en.wikipedia.org/wiki/Latin_hypercube_sampling) (with shuffling aka balanced sampling) # + id="SmB_XhW4YD3W" # !wget -q https://github.com/mfouesneau/sparsegrid/raw/master/sparsegrid.py # + id="A-92PK_x1MWZ" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="12eb4bac-9da5-4e56-a492-247cd9c5f507" m = 9 N = 2**m setA = rng.uniform(size=(N,2)) sampler = qmc.Sobol(d=2,scramble=False) setB = sampler.random_base2(m=m) sampler = qmc.Sobol(d=2,scramble=True) setC = sampler.random_base2(m=m) # Let's do balanced sampling of a Latin Hypercube x1 = np.linspace(0,1,num=N) x2 = np.linspace(0,1,num=N) rng.shuffle(x1) 
rng.shuffle(x1) setD = np.vstack( (x1,x2) ).T # and look at a sparse grid import sparsegrid from sparsegrid import SparseInterpolator dim = 2 # Dimensionality of function to interpolate nn = 6 indxi3 = sparsegrid.get_multi_index_sequence(nn, dim) nnodes, x_coord = sparsegrid._initialize_nodes(nn, interpolation_type) indxi4, pnt = sparsegrid._compute_sparse_grid(dim, nnodes, indxi3, x_coord) setE = pnt plt.figure(figsize=(7,7)) plt.title('Uniform([0,1]^2), for MC') plt.plot( setA[:,0], setA[:,1], '.', label='Uniform([0,1]^2)') plt.show() plt.figure(figsize=(7,7)) plt.title('Sobol sequence, for QMC') plt.plot( setB[:,0], setB[:,1], '.r', label='Sobol') plt.show() plt.figure(figsize=(7,7)) plt.title('Sobol sequence, scrambled, for rQMC') plt.plot( setC[:,0], setC[:,1], '.g', label='Sobol') plt.show() plt.figure(figsize=(7,7)) plt.title('balanced sampling for Latin Hypercube') plt.plot( setD[:,0], setD[:,1], '.', label='Latin Hypercube') plt.show() plt.figure(figsize=(7,7)) plt.title('Sparse Grid') plt.plot( setE[:,0], setE[:,1], '.') plt.show() # + id="pe8MZRXR2reB" colab={"base_uri": "https://localhost:8080/"} outputId="4e921d59-4a1f-4487-c43a-5a94ec7059d7" print('== Discrepancy: lower is better ==') print(f'Discrepancy for uniform is\t\t {qmc.discrepancy(setA):.1e}') print(f'Discrepancy for Sobol is\t\t {qmc.discrepancy(setB):.1e}') print(f'Discrepancy for scrambled Sobol is\t {qmc.discrepancy(setC):.1e}') print(f'Discrepancy for Latin Hypercube is\t {qmc.discrepancy(setD):.1e}') print(f'Discrepancy for sparse Grid is\t\t {qmc.discrepancy(setE):.1e}') # + [markdown] id="pI9Cpoa_NC4x" # Now use MC and quasi-MC to evaluate the integral # + colab={"base_uri": "https://localhost:8080/"} id="Ifz9PwYeNJvi" outputId="a243729b-727b-4f80-8aa1-76b5d0c0bda4" m = 10 N = 2**m # Sobol sequences like powers of 2 setA = rng.uniform(size=N) # uniform sampler = qmc.Sobol(d=1,scramble=True) setB = sampler.random_base2(m=m).ravel() int_MC = np.mean( f(setA) ) # simple! int_QMC = np.mean( f(setB) ) # simple! print(f"Via Monte Carlo, error is\t\t{si-int_MC:.3e}") print(f"Via Quasi-Monte Carlo, error is\t\t{si-int_QMC:.3e}") ## and for comparison with quadrature ... # since N is even, let's use N+1 points to make Simpson's rule nice xgrid, h = np.linspace(a,b,num=N+1,retstep=True) # spacing is h composite_mid = h*np.sum( f( xgrid[1:]-h/2) ) # open formula fx = f(xgrid) composite_trap = h*(np.sum(fx) - fx[0]/2 - fx[-1]/2 ) composite_simp = h/3*(fx[0]+fx[-1]+4*np.sum(fx[1::2]) + 2*np.sum(fx[2:-1:2])) print(f"Via comp. midpt rule, error is\t\t{si-composite_mid:.3e}") print(f"Via comp. trap rule, error is\t\t{si-composite_trap:.3e}") print(f"Via comp. simpson's rule, error is\t{si-composite_simp:.3e}") # + [markdown] id="nFzuwMBo30vE" # ### Let's look at the error rate of MC vs QMC # + id="R6-AwYMu30L0" colab={"base_uri": "https://localhost:8080/"} outputId="f5d251b1-ca10-4522-b444-d26e31f15fc1" nReps = 1000 mMax = 14 errMC = [] errQMC = [] nList = [] for m in range(6,mMax): N = 2**m nList.append(N) print(f"m is {m:2d} of {mMax} total") # Find avg error for MC err = [] for reps in range(nReps): x = rng.uniform(size=N) err.append( np.mean(f(x)) - si ) errMC.append( np.mean(np.abs(err)) ) # Repeat for QMC err = [] for reps in range(nReps): sampler = qmc.Sobol(d=1,scramble=True) x = sampler.random_base2(m=m).ravel() err.append( np.mean(f(x)) - si ) errQMC.append( np.mean(np.abs(err)) ) # + [markdown] id="vl3h5waK60LA" # Let's look at the convergence rates. 
For MC, we should be fairly close to theory, namely $1/\sqrt{n}$. For QMC, since this is a mix of MC and quadrature, we can do better than worse-case if the integrant is **smooth**, which it is in our case. (There are QMC results involving the smoothness of a function). # # For QMC, we should get at least $1/n$ if not more (though in higher dimensions, the dimension starts to play a mild role, whereas it doesn't for pure MC) # + id="CLdzfO7Q6BPw" colab={"base_uri": "https://localhost:8080/", "height": 346} outputId="4a7dc54c-cbeb-4c64-84e5-7a74518b63fd" nList = np.asarray(nList) plt.loglog( nList, errMC, 'o-', label='MC' ) plt.loglog( nList, errQMC, '*-', label='QMC' ) plt.loglog( nList, .03/np.sqrt(nList), '--', label='$O(1/\sqrt{n})$') plt.loglog( nList, .001/nList, '-.', label='$O(1/n)$') plt.loglog( nList, .1/nList**2, '-.', label='$O(1/n^2)$') plt.xlabel('Number of points $n$') plt.legend() plt.show() # + [markdown] id="HXwvPFZ0N9dg" # #### Now add in control variates # We're trying to integrate # $$ \int_0^1 \left( f(x) = \frac{\sin(x)}{x}\right) \,dx$$ # which has no closed form. But we can integrate polynomials in closed form. Let's approximate $f$ by its Maclaurin series, # $$ f(x) \approx g(x) = 1 - x^2/6 $$ # and we can compute # $$ \nu = \int_0^1 g(x)\,dx = \frac{17}{18} $$ # using basic calculus. # # + colab={"base_uri": "https://localhost:8080/"} id="wUm5fyOROkEr" outputId="ca52d535-e72b-41fa-cfe4-84aeda39bbaa" g = lambda x : 1 - x**2/6 nu = 17/18 x = rng.uniform(size=N) fx = f(x) gx = g(x) int_MC = np.mean( fx ) # Estimate the covariance and variance of gx Cov = np.cov( np.vstack( (fx,gx-nu) ) ) print("Scaled covariance matrix:") print(Cov/norm(Cov.flatten(),ord=np.Inf)) c = -Cov[0,1]/Cov[1,1] # Or this is slightly more accurate, using that we know mean(gx) exactly: c = -np.dot(fx-int_MC,gx-nu)/(norm(gx-nu)**2) print(f"Using c value of {c:.5f}") # c = -1 # this is also reasonable int_MC_CV = int_MC + c*( np.mean(gx) - nu ) print(f"Via Monte Carlo, error is\t\t\t{si-int_MC:.3e}") print(f"Via Monte Carlo w/ control variates, error is\t{si-int_MC_CV:.3e}") # Just the Taylor series alone isn't as accurage: print(f" And approximating integral with nu, error is\t{si-nu:.3e}") # Look at variance print(f"Variance of MC is \t\t{np.mean( (fx - si)**2 ):.2e}") print(f"Variance w. control variates is\t{np.mean( (fx + c*(gx-nu) - si)**2 ):.2e}") # + [markdown] id="KLND3WFydn3h" # Plot the error as a function of number of samples # # With the control variates, we don't change the $1/\sqrt{n}$ decay rate, but we do improve the constant factor # + colab={"base_uri": "https://localhost:8080/", "height": 324} id="e4gexxBBcaHf" outputId="a29b12ec-a1ce-4ebe-b53f-c9a15fc29c59" N = int(1e6) x = rng.uniform(size=N) fx = f(x) gx = g(x) er = np.abs( np.cumsum(fx)/np.arange(1,N+1) - si ) plt.loglog( er , label='monte carlo' ) er = np.abs( np.cumsum(fx+c*(gx-nu))/np.arange(1,N+1) - si ) plt.loglog( er , label='MC w/ control variate' ) plt.legend() plt.show() # + [markdown] id="0wT1TO60BcM1" # # Estimate the value of $\pi$ (skip this) # i.e., 2D integration of an indicator function # # We use the fact that the area of the unit circle is $\pi$, and so will look at the the ratio of the area of the unit circle to that of $[-1,1]^2$. Or equivalently, we can work in just the first quadrant. 
# # # Note: this part of the demo isn't as exciting as I'd hoped, so I'd suggest skipping it # + colab={"base_uri": "https://localhost:8080/"} id="IBBdHDMMBeWq" outputId="efb0437b-caf7-4508-a6a0-64eeefbf4aad" n = int(1e7) # Note: shape 2xn is faster than nx2 when n > 1e7 X = rng.uniform( size=(2,n) ) nrm = norm( X, ord=2, axis=0) Y = nrm <= 1 # Our final estimate for pi is just # 0's / total number, scaled by 4 # est = 4*np.count_nonzero(Y)/n est = 4*np.mean(Y) # another way print(f"Monte Carlo estimate of pi is {est:.6f}") # + colab={"base_uri": "https://localhost:8080/", "height": 341} id="WaJDkRemCJNq" outputId="87f084ae-3bd8-4b60-f5a0-b66b4edd9afe" # Let's also look at how this converges over time mc = np.cumsum( Y )/np.arange(1,n+1) err = np.abs( 4*mc - np.pi ) plt.loglog( err ) plt.show() var = np.mean( (4*Y - np.pi)**2 ) print(f"Variance is {var:.2e}") # + [markdown] id="gHpNJ445FDDZ" # #### Let's add a control variate # # Let's add in a polyhedral approximation, consisting of the lines connecting the points $(0,1)$, $(1/\sqrt{2},1/\sqrt{2})$ and $(1,0)$. # # Decomposing this into triangles, and we can figure out that it's area (in the first quadrant) is $1/\sqrt{2}$. # # This control variate isn't that good, so we won't see great results unfortunately. # + colab={"base_uri": "https://localhost:8080/"} id="Fp7XbI5iFBrD" outputId="1774b693-dfde-44bb-82d2-75d0ac2cf25c" slope = 1 - np.sqrt(2) intrcpt = 1 Za = X[0,:] <= slope*X[1,:] + intrcpt Zb = X[1,:] <= slope*X[0,:] + intrcpt # it's symmetric Z = Za & Zb sample_mean = np.mean(Z) true_mean = 1/np.sqrt(2) # to use a control variate, you need to know this print(f"Sample mean is {sample_mean}, true mean is {true_mean}") # ... just checking. # + colab={"base_uri": "https://localhost:8080/"} id="CW8eqbxiF5rw" outputId="cc9691fd-5b7b-4340-9287-ceb22688b1b9" # Estimate parameter "c", c = -Cov(Y,Z)/Var(Y) # The parameter c is high (close to 1), indicating good correlation Cov = np.cov( np.vstack( (Y,Z) ) ) print("Scaled covariance matrix:") print(Cov/norm(Cov.flatten(),ord=np.Inf)) c = -Cov[0,1]/Cov[1,1] print(c) # + colab={"base_uri": "https://localhost:8080/"} id="D_IYcPFAHP8q" outputId="be49e288-8fb6-4159-99ba-4f911bc741ca" CV = Y + c*( Z - true_mean) est = 4*np.mean(CV) print(f"Monte Carlo w. control variate estimate of pi is {est:.6f}") # + colab={"base_uri": "https://localhost:8080/", "height": 358} id="_1RD9nOUH6we" outputId="5f879712-6dff-4cb5-cd90-09e8f1dc9248" mc = np.cumsum( CV )/np.arange(1,n+1) err = np.abs( 4*mc - np.pi ) plt.loglog( err, label='w/ control variate' ) mc = np.cumsum( Y )/np.arange(1,n+1) err = np.abs( 4*mc - np.pi ) plt.loglog( err, label='basic MC' ) plt.legend() plt.show() var = np.mean( (4*Y - np.pi)**2 ) print(f"Variance is {var:.2e}") var = np.mean( (4*CV - np.pi)**2 ) print(f"Variance (using control variate) is {var:.2e}") # + id="1g6vJqIYj54v"
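# + [markdown]
# #### Recap: why control variates help (sketch)
#
# For reference, the standard identity behind the experiments above (this recap is a summary added
# here; the derivation is not in the original notebook). With the estimator
#
# $$\hat\mu_c = \frac{1}{n}\sum_{i=1}^n \big[f(X_i) + c\,(g(X_i)-\nu)\big],$$
#
# the per-sample variance is $\mathrm{Var}(f) + 2c\,\mathrm{Cov}(f,g) + c^2\,\mathrm{Var}(g)$, which is
# minimized at $c^* = -\mathrm{Cov}(f,g)/\mathrm{Var}(g)$ and equals $(1-\rho^2)\,\mathrm{Var}(f)$ there,
# where $\rho$ is the correlation between $f(X)$ and $g(X)$. The $1/\sqrt{n}$ rate is unchanged; only
# the constant shrinks. A small generic helper (the name `cv_estimate` is ours, not from the code above):

# +
def cv_estimate(f, g, nu, n, rng):
    """Estimate E[f(U)], U~Uniform(0,1), using g as a control variate with known mean nu."""
    x = rng.uniform(size=n)
    fx, gx = f(x), g(x)
    C = np.cov(fx, gx)          # sample covariance matrix of (f, g)
    c = -C[0, 1] / C[1, 1]      # estimated optimal coefficient c*
    return np.mean(fx + c * (gx - nu))

print(f"CV estimate: {cv_estimate(f, g, 17/18, 10_000, rng):.6f}  (true Si(1) = {si:.6f})")
# -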
Demos/demo14_MonteCarlo_and_improvements.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + [markdown] id="fVxf5ue7RxW5" # # The eigenfaces example: chaining PCA and SVMs # ============================================= # # The goal of this example is to show how an unsupervised method and a # supervised one can be chained for better prediction. # # Here we'll take a look at a simple facial recognition example. Ideally, # we would use a dataset consisting of a subset of the `Labeled Faces in # the Wild <http://vis-www.cs.umass.edu/lfw/>`__ data that is available # with :func:`sklearn.datasets.fetch_lfw_people`. The labelled face in the wild face dataset. # # However, this is a relatively large download (~200MB) so we will do the tutorial on a simpler, less rich dataset. # # # + colab={"base_uri": "https://localhost:8080/"} id="Ji8l6GbiRxW_" outputId="3a6224ea-921e-447e-c3ed-78d04be825a6" from sklearn import datasets faces = datasets.fetch_olivetti_faces() faces.data.shape # + [markdown] id="La__4Oa0RxXo" # Let's visualize these faces to see what we're working with # + colab={"base_uri": "https://localhost:8080/", "height": 342} id="gyACcsH2RxXv" outputId="6bb760c3-8ac3-41bd-9fb3-ee8b1c3f79f3" from matplotlib import pyplot as plt import random fig = plt.figure(figsize=(8, 6)) # plot several images for i in range(15): r = random.randint(0, 400) ax = fig.add_subplot(3, 5, i + 1, xticks=[], yticks=[]) ax.imshow(faces.images[r], cmap=plt.cm.bone) # + [markdown] id="Jn3nVfylRxYN" # Note is that these faces have already been localized and scaled to a common size. # # This is an important preprocessing piece for facial recognition, and is a process that can require a large collection of training data. # # This can be done in scikit-learn, but the challenge is gathering a sufficient amount of training data for the algorithm to work. # # We'll perform a Support Vector classification of the images. We'll do a typical train-test split on the images: # # # + colab={"base_uri": "https://localhost:8080/"} id="hDgt10c1RxYn" outputId="3c1cfecf-6282-4a8c-ea25-daf6de24aaee" from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(faces.data, faces.target, random_state=137) print(X_train.shape, X_test.shape) # + [markdown] id="U_zhGfTaRxY-" # Preprocessing: Principal Component Analysis # ------------------------------------------- # # We can use PCA to reduce these features to a manageable size, while maintaining most of the information # in the dataset. # # # + colab={"base_uri": "https://localhost:8080/"} id="zGxiRg_qRxZA" outputId="9fb9c845-c077-4308-fadf-674ffee17d8b" from sklearn import decomposition pca = decomposition.PCA(n_components=150, whiten=True) pca.fit(X_train) # + [markdown] id="Yhy0wHC5RxZT" # One interesting part of PCA is that it computes the "mean" face, which # can be interesting to examine: # # # + colab={"base_uri": "https://localhost:8080/", "height": 285} id="bwbJKHYXRxZY" outputId="a06f7975-5a76-4063-f12e-805150f96d76" plt.imshow(pca.mean_.reshape(faces.images[0].shape), cmap=plt.cm.bone) # + [markdown] id="oIvW5QygRxZv" # The principal components measure deviations about this mean along # orthogonal axes. 
# # # + colab={"base_uri": "https://localhost:8080/"} id="LgOkmKjcRxZy" outputId="e618139b-f4c6-44a2-ad27-8a20a9321dec" print(pca.components_.shape) # + [markdown] id="mn8IC0lORxZ-" # It is also interesting to visualize these principal components: # # # + colab={"base_uri": "https://localhost:8080/", "height": 340} id="9ePFr1ZARxaC" outputId="ed7ff1c7-d3aa-4a26-c455-daac3630a244" fig = plt.figure(figsize=(16, 6)) for i in range(30): ax = fig.add_subplot(3, 10, i + 1, xticks=[], yticks=[]) ax.imshow(pca.components_[i].reshape(faces.images[0].shape), cmap=plt.cm.bone) # + [markdown] id="SBkPczznRxaT" # The components ("eigenfaces") are ordered by their importance from # top-left to bottom-right. We see that the first few components seem to # primarily take care of lighting conditions; the remaining components # pull out certain identifying features: the nose, eyes, eyebrows, etc. # # With this projection computed, we can now project our original training # and test data onto the PCA basis: # # # + colab={"base_uri": "https://localhost:8080/"} id="0a8FG3HFRxaW" outputId="c66beba5-4f21-46a4-8a98-c39ad7187d11" X_train_pca = pca.transform(X_train) X_test_pca = pca.transform(X_test) print(X_train_pca.shape) # + colab={"base_uri": "https://localhost:8080/"} id="bwJ9tns3Rxar" outputId="b5ada946-999c-4cae-f8f0-acdc70b7510f" print(X_test_pca.shape) # + [markdown] id="rit_f47IRxbC" # These projected components correspond to factors in a linear combination # of component images such that the combination approaches the original # face. # # Doing the Learning: Support Vector Machines # ------------------------------------------- # # Now we'll perform support-vector-machine classification on this reduced # dataset: # # # + colab={"base_uri": "https://localhost:8080/"} id="jtFRD3fORxbH" outputId="89fc8ead-4f52-44d9-d760-eb5ca8bfa627" from sklearn import svm clf = svm.SVC(C=5., gamma=0.001) clf.fit(X_train_pca, y_train) # + [markdown] id="7kvWiXT-Rxbf" # Finally, we can evaluate how well this classification did. First, we # might plot a few of the test-cases with the labels learned from the # training set: # # # + colab={"base_uri": "https://localhost:8080/", "height": 355} id="Y7aXBtRKRxbp" outputId="8e11d205-b514-4453-d76b-c5a82902adbc" import numpy as np fig = plt.figure(figsize=(8, 6)) for i in range(15): ax = fig.add_subplot(3, 5, i + 1, xticks=[], yticks=[]) ax.imshow(X_test[i].reshape(faces.images[0].shape), cmap=plt.cm.bone) y_pred = clf.predict(X_test_pca[i, np.newaxis])[0] color = ('black' if y_pred == y_test[i] else 'red') ax.set_title(y_pred, fontsize='small', color=color) # + [markdown] id="yFzw1hvKRxcB" # The classifier is correct on an impressive number of images given the # simplicity of its learning model! Using a linear classifier on 150 # features derived from the pixel-level data, the algorithm correctly # identifies a large number of the people in the images. # # Again, we can quantify this effectiveness using one of several measures # from :mod:`sklearn.metrics`. First we can do the classification # report, which shows the precision, recall and other measures of the # "goodness" of the classification: # # # + colab={"base_uri": "https://localhost:8080/"} id="pIRPKH2jRxcE" outputId="ad9e9ce6-fe40-4cd2-ce22-e1c9a673136c" from sklearn import metrics y_pred = clf.predict(X_test_pca) print(metrics.classification_report(y_test, y_pred)) # + [markdown] id="jztgVH4fRxci" # Another interesting metric is the *confusion matrix*, which indicates # how often any two items are mixed-up. 
The confusion matrix of a perfect
# classifier would only have nonzero entries on the diagonal, with zeros
# on the off-diagonal:
#
#

# + colab={"base_uri": "https://localhost:8080/"} id="jPcMhOBoRxcl" outputId="198063a0-74aa-434a-f672-dea7476c76c1"
print(metrics.confusion_matrix(y_test, y_pred))

# + [markdown] id="f8aB9bwGRxc_"
# Pipelining
# ----------
#
# Above we used PCA as a pre-processing step before applying our support
# vector machine classifier. Plugging the output of one estimator directly
# into the input of a second estimator is a commonly used pattern; for
# this reason scikit-learn provides a ``Pipeline`` object which automates
# this process. The above problem can be re-expressed as a pipeline as
# follows:
#
#

# + colab={"base_uri": "https://localhost:8080/"} id="Wei0jY67RxdC" outputId="cc62bd4e-882e-4231-d011-f7815ecd5223"
from sklearn.pipeline import Pipeline

clf = Pipeline([('pca', decomposition.PCA(n_components=150, whiten=True)),
                ('svm', svm.LinearSVC(C=1.0))])

clf.fit(X_train, y_train)

y_pred = clf.predict(X_test)
print(metrics.confusion_matrix(y_test, y_pred))  # (y_true, y_pred), consistent with the cell above
plt.show()
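# + [markdown]
# As a quick check (assuming the pipeline cell above has been run), the fitted ``Pipeline`` exposes
# its steps through ``named_steps``, and chaining the steps by hand gives the same predictions as
# calling the pipeline directly. The variable names below (``pca_step``, ``svm_step``,
# ``manual_pred``) are introduced here for illustration.

# +
pca_step = clf.named_steps['pca']
svm_step = clf.named_steps['svm']

manual_pred = svm_step.predict(pca_step.transform(X_test))
print((manual_pred == clf.predict(X_test)).all())                   # expected: True
print("variance kept by 150 components:", pca_step.explained_variance_ratio_.sum())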
Lab 11/PCA_SVM_EigenFaces.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Otros tipos de datos, funciones, objetos # ## Diccionarios # # Un diccionario o mapa es una estructura de datos que guarda información en pares `clave:valor`. Las operaciones típicas son agregar un valor (con su clave) y extraer el valor asociado a una clave. En Python los diccionarios se declaran entre llaves `{}`, con pares `clave: valor` separados por comas: from datetime import date diccionario = {"nombre": "Javier", "edad": 27, "Fecha": date.today()} print(type(diccionario)) print(diccionario) # También se pueden declarar con dict() diccionario = dict([("nombre", "Javier"), ("edad", 27), ("Fecha", date.today())]) print(type(diccionario)) print(diccionario) # Como ven en el ejemplo, los valores de un diccionario pueden ser de cualquier tipo. Por otro lado, las claves tienen que ser *inmutables*; strings y números funcionan como claves, listas no. ejemplo = {[1,2,3]: 1} # Ni las claves ni los valores tienen porqué ser todos del mismo tipo ejemplo_2 = {0: "Hola", 1: date.today(), "nombre": "Javier"} print(ejemplo_2) # Para acceder el valor de una clave, lo hacemos como lo hacíamos con listas (acá el índice es la clave): print(ejemplo_2[1]) print(ejemplo_2["nombre"]) # También se puede usar el método `get()`: print(ejemplo_2.get(1)) print(ejemplo_2.get("nombre")) # Si la clave no existe devuelve una excepción ejemplo_2["clave"] # De la misma manera agregamos un nuevo par: ejemplo_2["nueva_clave"] = "valor" ejemplo_2 # Para borrar un par pueden usar el método `pop()` o `del`. `pop()` además de eliminar el par te devuelve el valor asociado a la clave: valor = ejemplo_2.pop("nueva_clave") print(valor) ejemplo_2 del(ejemplo_2["nombre"]) ejemplo_2 # El método `items()` devuelve un iterable con tuplas `(clave, valor)`, `keys()` devuelve las claves, `values()` los valores. ejemplo_2.items() ejemplo_2.keys() ejemplo_2.values() for clave, valor in ejemplo_2.items(): print(clave, valor) for clave in ejemplo_2.keys(): print(clave) # Un comentario sobre iterables: nunca es buena idea, adentro de un loop, modificar el iterable sobre el que se hace el loop. Si creen que necesitan hacer algo así, háganlo con otra variable que sea una copia del iterable original. copia = ejemplo_2.copy() print(copia) ejemplo_2["nueva_clave"] = "valor" print(ejemplo_2) print(copia) a = [1,2,3,4] b = a.copy() a.append(5) print(a) print(b) # Para chequear que una clave existe se usa `in`, la cantidad de entradas del diccionario se obtiene con `len`. print(0 in ejemplo_2) print("inexistente" in ejemplo_2) print(len(ejemplo_2)) # ## Funciones # # Una función es esencialmente un bloque de código que recipe un input y devuelve un resultado. # En general, cuando empiecen a hacer ejercicios van a tener ciertos bloques de código que cumplen un rol específico y se ejecutan muchas veces. En esos casos es común poner ese bloque en una función, para que el código esté más ordenado y sea más facil de leer. # Ejemplo: supongan que en el código que están escribiendo están trabajando mucho con strings y necesitan saber muy seguido la última palabra de los strings. 
def ultima_palabra(texto): palabras = texto.split() return palabras[-1] ultima_palabra("Esto es una oración") ultima_palabra("Esto también") # Definir esta función no es estrictamente necesario, ustedes podrían, cada vez que necesitan la última palabra, escribir el código # ``` # texto.split()[-1] # ``` # Sin embargo esto es menos claro que la linea `ultima_palabra(texto)`. Esa es un poco la utilidad de las funciones, separar el código en partes más claras y fáciles de leer. # Para definir una función se usa `def`, seguido del nombre de la función y entre paréntesis sus argumentos (inputs). Después de los dos puntos, todo lo que le siga en un bloque indentado es el cuerpo de la función (el código que se ejecuta al llamarla). Si queremos que la función devuelva un valor, usamos `return valor`; esto termina la ejecución de la función. def al_cuadrado(numero): return numero ** 2 al_cuadrado(2) al_cuadrado(4) def potencia(numero, n): return numero ** n print(potencia(2, 2)) print(potencia(2, 3)) print(potencia(3, 3)) # En Python las funciones pueden devolver más de un valor: def primero_y_ultimo(lista): return lista[0], lista[-1] # El resultado de la función es una tupla resultado = primero_y_ultimo([1, 2, 3, 4, 5]) print(resultado) print(type(resultado)) primero, ultimo = resultado print(primero) print(ultimo) # Pueden hacer que un argumento tenga un valor *default*, que se usa si al llamar la función no se especifica su valor. # Si no le pasás n, la función eleva al cuadrado def potencia(numero, n=2): return numero ** n print(potencia(10)) print(potencia(10, 3)) # ## Objetos # Sacado en buena parte de https://realpython.com/python3-object-oriented-programming/ # Python provee todas las funcionalidades básicas de la programación orientada a objetos ([OOP](https://es.wikipedia.org/wiki/Programaci%C3%B3n_orientada_a_objetos), por sus siglas en inglés). La idea básica de este paradigma es que uno tiene _objetos_ asociados a _clases_ (por ejemplo, las clases `str` (string), `list`, `int`, etc); de este modo, la variable `a = 1` es un objeto de la clase `int`, `b = [1, 2, 3]` un objeto de la clase `list`. A veces también se dice que `a` es una _instancia_ de la clase `int`. # Los tipos básicos de python son clases predefinidas. print(type('IEEE')) print(type([])) print(type(1)) print(type(diccionario)) # Las clases pueden pensarse como un *template* de un tipo de objeto, donde uno define los _atributos_ y _métodos_ de los objetos de ese tipo. Los atributos pueden pensarse como _propiedades_ de los objetos de esa clase, los métodos como _comportamientos_. Por ejemplo, un objeto de tipo *email* podría tener como atributos sus destinatarios, el título y el cuerpo del mail y como métodos agregar archivos adjuntos y enviar. lista = [1, 2, 3] # Si agarran la lista de arriba y en una celda de código escriben `lista.` y apretan `tab` (si están en un Collab, es control + espacio, o command + espacio si usan mac, a veces también es automático si le dan unos segundos), van a ver que les aparece un desplegable con funciones para autocompletar. Estas funciones son los métodos de clase `list`, algunos de los cuales ya vimos. Pueden probar hacer lo mismo con un diccionario, o en general con cualquier objeto de una clase; de hecho, el desplegable les va a mostrar no sólo los métodos, sino tambien los atributos cuando los haya. # Hasta ahora sólo hablamos de clases que Python ya define por su cuenta, pero en OOP la idea es que uno puede definir sus propias clases, para después poder crear objetos de estas. 
En el siguiente ejemplo, definimos la clase `Perro`. class Perro: especie = "Dogo" def __init__(self, nombre, edad): self.nombre = nombre self.edad = edad # Para definir la clase, usamos `class` seguido del nombre de la clase (por convención los nombres de clases suelen empezar en mayúscula); despues del `:`, declaramos todo lo que nos importa (atributos, métodos, etc). # # En este ejemplo, lo que hicimos fue definir la clase `Perro`, que tiene un atributo `especie`, cuyo valor es "Dogo". La linea siguiente define la función `__init__` de la clase, que en muchos otros lenguajes orientados a objetos es lo que se llama el _constructor_ de la clase <sup>*</sup>. Esta es una función especial de las clases, que se llama cuando uno instancia un objeto de una clase. En este caso, lo que estamos haciendo es decir que si uno escribe `Perro(nombre, edad)`, eso crea un objeto de tipo perro, con los atributos `nombre` y edad igual a los que le pasamos. # # El parámetro `self` es quizás la parte más confusa; esencialmente, `self` refiere a la instancia de la clase `Perro` que se acaba de crear. Es decir, cuando uno llama al constructor `Perro(nombre, edad)`, Python instancia un objeto de tipo perro y se lo pasa a `__init__` como el parámetro `self`; los otros paramétros los pasamos nosotros. # # # <sup> *</sup>: Si uno se pone formal,`__init__` no es exactamente un constructor como en otros lenguajes OOP, pero acá no nos importa esa distinción. perro = Perro("Bowie", 5) print(perro) # Una vez instanciado nuestro perro, podemos acceder a sus atributos usando un punto `.` print(perro.especie) print(perro.nombre) print(perro.edad) # Estos valores se pueden cambiar: perro.especie = "Bulldog" print(perro.especie) # Si quisiéramos poder especificar la especie del perro al crearlo (en vez de que sea Dogo por default), lo ponemos adentro de la función `__init__`: class Perro: def __init__(self, nombre, edad, especie): self.nombre = nombre self.edad = edad self.especie = especie perro = Perro("Bowie", 5, "Labrador") print(perro.especie) # Bien, tenemos atributos, nos falta poder definir métodos. Por ejemplo, queremos un método que nos devuelva una descripción completa del perro. class Perro: def __init__(self, nombre, edad, especie): self.nombre = nombre self.edad = edad self.especie = especie def descripcion(self): return f"{self.nombre} tiene {self.edad} años y es de raza {self.especie}" def dice(self, sonido): return f"{self.nombre} dice {sonido}" # El primer método devuelve una descripcíon completa del perro como string. Para llamarlo sobre un perro, usamos también un punto `.`: bowie = Perro("Bowie", 5, "Labrador") bowie.descripcion() bowie.dice("Woof!") # Cuando uno define métodos, aparece de nuevo el parámetro `self`; como antes, esto refiere a la instancia de `Perro` que está llamando al método. En los ejemplos de arriba, `self` es `bowie`, y cuando hacemos `bowie.descripcion()` estamos llamando a la función `descripcion` de la clase `Perro` con `bowie` como parámetro. De hecho, si quisiéramos podríamos llamar al método de esta manera: # Esto es lo mismo que bowie.descripcion() print(Perro.descripcion(bowie)) # Esto es lo mismo que bowie.dice("Woof!") print(Perro.dice(bowie, "Woof!")) # Por supuesto, la primera notación es mucho más cómoda, así que nadie usa la otra. # En general, puede ser útil tener un método como descripción que devuelva información legible del objeto en cuestión. Sin embargo, lo que hicimos con `descripcion()` no es la mejor forma de hacerlo. 
Estaría bueno que si uno hace `print(perro)`, nos salga la descripción: print(bowie) # Las clases predefinidas de Python ya hacen esto; por ejemplo, si uno tiene un diccionario y hace `print(diccionario)`, el resultado no es como lo de arriba. Esto es porque estas clases tienen un método especial `__str__()`, que es el que dice cómo se imprime un objeto. Si le cambiamos el nombre a `descripcion()`, podemos hacer lo mismo con nuestra clase: class Perro: def __init__(self, nombre, edad, especie): self.nombre = nombre self.edad = edad self.especie = especie def __str__(self): return f"{self.nombre} tiene {self.edad} años y es de raza {self.especie}" def dice(self, sonido): return f"{self.nombre} dice {sonido}" bowie = Perro("Bowie", 5, "Labrador") print(bowie) # Existen muchos de estos métodos especiales de Python que permiten customizar nuestras clases, todos ellos empiezan y terminan con `__`. Por ejemplo, si quisiéramos usar el operador `>` para decir que un perro es más chico o más grande que otro (en edad), podemos definir el método `__gt__` (gt es por *greater than*): class Perro: def __init__(self, nombre, edad, especie): self.nombre = nombre self.edad = edad self.especie = especie def __str__(self): return f"{self.nombre} tiene {self.edad} años y es de raza {self.especie}" def dice(self, sonido): return f"{self.nombre} dice {sonido}" def __gt__(self, otro_perro): return self.edad > otro_perro.edad bowie = Perro("Bowie", 5, "Labrador") perro_2 = Perro("Luna", 8, "Beagle") bowie > perro_2
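# Because `Perro` defines `__gt__`, Python built-ins that compare objects with `>` work directly on
# our dogs. A small usage example (the dog `rocky` below is made up for this illustration): `max()`
# picks the oldest dog, and `print()` uses the `__str__` method defined above.

rocky = Perro("Rocky", 3, "Boxer")
mayor = max([bowie, perro_2, rocky])   # max() compares the dogs with ">", i.e. by edad
print(mayor)                           # -> "Luna tiene 8 años y es de raza Beagle"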
clase03/clase_3.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Self-Driving Car Engineer Nanodegree # # # ## Project 2: **Advanced Lane Finding** # *** # The goals / steps of this project are the following: # # * Compute the camera calibration matrix and distortion coefficients given a set of chessboard images. # * Apply a distortion correction to raw images. # * Use color transforms, gradients, etc., to create a thresholded binary image. # * Apply a perspective transform to rectify binary image ("birds-eye view"). # * Detect lane pixels and fit to find the lane boundary. # * Determine the curvature of the lane and vehicle position with respect to center. # * Warp the detected lane boundaries back onto the original image. # * Output visual display of the lane boundaries and numerical estimation of lane curvature and vehicle position. # # ## Import Packages #importing some useful packages import matplotlib.pyplot as plt import matplotlib.image as mpimg import numpy as np import cv2 import os # %matplotlib inline # ## Read in a Sample Image # + #reading in an image image = mpimg.imread('test_images/straight_lines1.jpg') #printing out some stats and plotting print('This image is:', type(image), 'with dimensions:', image.shape) plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray') # - # ## Helper Functions from Project 1 # + import math def grayscale(img): """ Applies the Grayscale transform This will return an image with only one color channel but NOTE: to see the returned image as grayscale (assuming your grayscaled image is called 'gray') you should call plt.imshow(gray, cmap='gray') """ return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY) # Or use BGR2GRAY if you read an image with cv2.imread() # return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) def region_of_interest(img, vertices): """ Applies an image mask. Only keeps the region of the image defined by the polygon formed from `vertices`. The rest of the image is set to black. `vertices` should be a numpy array of integer points. """ #defining a blank mask to start with mask = np.zeros_like(img) #defining a 3 channel or 1 channel color to fill the mask with depending on the input image if len(img.shape) > 2: channel_count = img.shape[2] # i.e. 3 or 4 depending on your image ignore_mask_color = (255,) * channel_count else: ignore_mask_color = 255 #filling pixels inside the polygon defined by "vertices" with the fill color cv2.fillPoly(mask, vertices, ignore_mask_color) #returning the image only where mask pixels are nonzero masked_image = cv2.bitwise_and(img, mask) masked_color = np.dstack((masked_image, masked_image, masked_image)) * 255 return masked_image, masked_color def weighted_img(img, initial_img, α=0.8, β=1., γ=0.): """ `img` is the output of the hough_lines(), An image with lines drawn on it. Should be a blank image (all black) with lines drawn on it. `initial_img` should be the image before any processing. The result image is computed as follows: initial_img * α + img * β + γ NOTE: initial_img and img must be the same shape! 
""" return cv2.addWeighted(initial_img, α, img, β, γ) # - # ## Camera Calibration # ### Extract Chessboard Corners # + import glob # prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0) nrows = 6 ncols = 9 objp = np.zeros((nrows*ncols,3), np.float32) objp[:,:2] = np.mgrid[0:ncols, 0:nrows].T.reshape(-1,2) # Arrays to store object points and image points from all the images. objpoints = [] # 3d points in real world space imgpoints = [] # 2d points in image plane. # Make a list of calibration images images = glob.glob('camera_cal/calibration*.jpg') # Step through the list and search for chessboard corners for idx, fname in enumerate(images): img = cv2.imread(fname) gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) # Find the chessboard corners ret, corners = cv2.findChessboardCorners(gray, (ncols,nrows), None) # If found, add object points, image points if ret == True: objpoints.append(objp) imgpoints.append(corners) # Draw and display the corners cv2.drawChessboardCorners(img, (ncols,nrows), corners, ret) #write_name = 'corners_found'+str(idx)+'.jpg' #cv2.imwrite(write_name, img) # #Uncomment the two lines below to view the result #cv2.imshow('img', img) #cv2.waitKey(500) cv2.destroyAllWindows() # - # ### Compute Camera Calibration Matrix # + import pickle # %matplotlib inline # Test undistortion on an image img = cv2.imread('camera_cal/calibration1.jpg') img_size = (img.shape[1], img.shape[0]) # Do camera calibration given object points and image points ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, img_size, None, None) dst = cv2.undistort(img, mtx, dist, None, mtx) cv2.imwrite('output_images/undistorted_chessboard.jpg',dst) # Save the camera calibration result for later use dist_pickle = {} dist_pickle["mtx"] = mtx dist_pickle["dist"] = dist pickle.dump( dist_pickle, open( "camera_cal/wide_dist_pickle.p", "wb" ) ) #dst = cv2.cvtColor(dst, cv2.COLOR_BGR2RGB) # Visualize undistortion f, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,10)) ax1.imshow(img) ax1.set_title('Original Image', fontsize=30) ax2.imshow(dst) ax2.set_title('Undistorted Image', fontsize=30) # - # ### Derivation of polynomial transformation from image coordinates to real coordinates # # While most of the content in the helper functions is based on code and theory described in the lessons, # it might be useful to describe the logic used to tranform polynomials from the image coordinates to the # real coordinates since this was not explicitly described in the course lessons. This section therefore, # provides a brief description of the approach used to perform the tranformation. # # Let the equation of the polynomial in the image coordinates be: # # \begin{equation} # x_i = A_i y_i^{2} + B_i y_i + C_i # \end{equation} # # In the above equation, $x$ is the dependent variable and $y$ is the independent # variable. Let the equation of the corresponding polynomial in the real coordinates be: # # \begin{equation} # x_r = A_r y_r^{2} + B_r y_r + C_r # \end{equation} # # Let $\lambda_x$ and $\lambda_y$ be the $x$ and $y$ distances (in meters) per pixel. 
The $x$ and $y$ # coordinates can therefore be transformed as: # # \begin{equation} # x_r = \lambda_{x} x_i \\ # y_r = \lambda_{y} x_i # \end{equation} # # If we substitute the above equations in to the polynomial expression in # real coordinates, we get: # # \begin{equation} # x_i \lambda_x = A_i\lambda_y^2 y_i^{2} + B_i \lambda_y y_i + C_i \\ # x_i = \frac{A_i\lambda_y^{2}}{\lambda_x} y_i^{2} + \frac{B_i \lambda_y}{\lambda_x} y_i + \frac{C_i}{\lambda_x} # \end{equation} # # Comparing the above equation with the equation for the image coordinates and equating the coefficients, we get: # # \begin{equation} # A_r = \frac{A_i\lambda_y^{2}}{\lambda_x} \\ # B_r = \frac{B_i \lambda_y}{\lambda_x} \\ # C_r = \frac{C_i}{\lambda_x} # \end{equation} # ### Helper Functions for Project 2 # + def edge_thresholds(img): # Extract R-channel R = img[:,:,0] # Extract S-channel hls = cv2.cvtColor(img, cv2.COLOR_RGB2HLS) s_channel = hls[:,:,2] # Grayscale image gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY) # Sobel x sobelx = cv2.Sobel(gray, cv2.CV_64F, 1, 0) # Take the derivative in x abs_sobelx = np.absolute(sobelx) # Absolute x derivative to accentuate lines away from horizontal scaled_sobel = np.uint8(255*abs_sobelx/np.max(abs_sobelx)) # Threshold x gradient thresh_min = 50 thresh_max = 100 sxbinary = np.zeros_like(scaled_sobel) sxbinary[(scaled_sobel >= thresh_min) & (scaled_sobel <= thresh_max)] = 1 # Threshold R channel thresh_R = (150, 255) binary_R = np.zeros_like(R) binary_R[(R > thresh_R[0]) & (R <= thresh_R[1])] = 1 # Threshold S channel s_thresh_min = 100 s_thresh_max = 255 s_binary = np.zeros_like(s_channel) s_binary[(s_channel >= s_thresh_min) & (s_channel <= s_thresh_max)] = 1 # Combine multiple binary images combined_binary = np.zeros_like(sxbinary) combined_binary[((s_binary == 1) & binary_R == 1) | (sxbinary == 1) ] = 1 #combined_binary[((s_binary == 1) & binary_R == 1)] = 1 #combined_binary[((s_binary == 1)) | (sxbinary == 1) ] = 1 color_image = np.dstack((combined_binary, combined_binary, combined_binary)) * 255 return binary_R, s_binary, sxbinary, combined_binary, color_image def topview(image): """ Perform perspective transformation to obtain a top view of the image """ src = np.float32([[600, 444], [675, 444], [1041, 676], [268, 676]]) offsetv = 0 offseth = 300 img_size = (image.shape[1], image.shape[0]) dst = np.float32([[offseth, offsetv], [img_size[0]-offseth, offsetv], [img_size[0]-offseth, img_size[1]-offsetv], [offseth, img_size[1]-offsetv]]) M = cv2.getPerspectiveTransform(src, dst) Minv = cv2.getPerspectiveTransform(dst, src) warped = cv2.warpPerspective(image, M, image.shape[1::-1], flags=cv2.INTER_LINEAR) return warped, M, Minv def fit_poly(img_shape, leftx, lefty, rightx, righty): # Fit a second order polynomial to each with np.polyfit() ### left_fit = np.polyfit(lefty, leftx, 2) right_fit = np.polyfit(righty, rightx, 2) # Generate x and y values for plotting ploty = np.linspace(0, img_shape[0]-1, img_shape[0]) #Calc both polynomials using ploty, left_fit and right_fit ### left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2] right_fitx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2] middle_fitx = 0.5*(left_fitx + right_fitx) middle_fit = np.polyfit(middle_fitx, ploty, 2) return left_fitx, right_fitx, middle_fitx, ploty, left_fit, right_fit, middle_fit def find_lane_pixels(binary_warped): # Take a histogram of the bottom half of the image histogram = np.sum(binary_warped[binary_warped.shape[0]//2:,:], axis=0) # Create an output image to draw on and 
visualize the result stacked_windows_image = np.dstack((binary_warped, binary_warped, binary_warped)) # Find the peak of the left and right halves of the histogram # These will be the starting point for the left and right lines midpoint = np.int(histogram.shape[0]//2) leftx_base = np.argmax(histogram[:midpoint]) rightx_base = np.argmax(histogram[midpoint:]) + midpoint # Choose the number of sliding windows nwindows = 9 # Set the width of the windows +/- margin margin = 100 # Set minimum number of pixels found to recenter window minpix = 50 # Set height of windows - based on nwindows above and image shape window_height = np.int(binary_warped.shape[0]//nwindows) # Identify the x and y positions of all nonzero pixels in the image nonzero = binary_warped.nonzero() nonzeroy = np.array(nonzero[0]) nonzerox = np.array(nonzero[1]) # Current positions to be updated later for each window in nwindows leftx_current = leftx_base rightx_current = rightx_base # Create empty lists to receive left and right lane pixel indices left_lane_inds = [] right_lane_inds = [] # Step through the windows one by one for window in range(nwindows): # Identify window boundaries in x and y (and right and left) win_y_low = binary_warped.shape[0] - (window+1)*window_height win_y_high = binary_warped.shape[0] - window*window_height win_xleft_low = leftx_current - margin win_xleft_high = leftx_current + margin win_xright_low = rightx_current - margin win_xright_high = rightx_current + margin # Draw the windows on the visualization image cv2.rectangle(stacked_windows_image,(win_xleft_low,win_y_low), (win_xleft_high,win_y_high),(0,255,0), 2) cv2.rectangle(stacked_windows_image,(win_xright_low,win_y_low), (win_xright_high,win_y_high),(0,255,0), 2) good_left_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) & (nonzerox >= win_xleft_low) & (nonzerox < win_xleft_high)).nonzero()[0] good_right_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) & (nonzerox >= win_xright_low) & (nonzerox < win_xright_high)).nonzero()[0] # Append these indices to the lists left_lane_inds.append(good_left_inds) right_lane_inds.append(good_right_inds) #If number of pixes in windows exceeds minpix pixels, recenter next window if len(good_left_inds) > minpix: leftx_current = np.int(np.mean(nonzerox[good_left_inds])) if len(good_right_inds) > minpix: rightx_current = np.int(np.mean(nonzerox[good_right_inds])) # Concatenate the arrays of indices (previously was a list of lists of pixels) try: left_lane_inds = np.concatenate(left_lane_inds) right_lane_inds = np.concatenate(right_lane_inds) except ValueError: # Avoids an error if the above is not implemented fully pass stacked_windows_image[nonzeroy[left_lane_inds], nonzerox[left_lane_inds]] = [255, 0, 0] stacked_windows_image[nonzeroy[right_lane_inds], nonzerox[right_lane_inds]] = [0, 0, 255] # Extract left and right line pixel positions leftx = nonzerox[left_lane_inds] lefty = nonzeroy[left_lane_inds] rightx = nonzerox[right_lane_inds] righty = nonzeroy[right_lane_inds] return leftx, lefty, rightx, righty, left_lane_inds, right_lane_inds, stacked_windows_image def search_around_poly(binary_warped): # Grab activated pixels nonzero = binary_warped.nonzero() nonzeroy = np.array(nonzero[0]) nonzerox = np.array(nonzero[1]) # Extract the lane points leftx, lefty, rightx, righty, left_lane_inds, right_lane_inds, stacked_windows_image = find_lane_pixels(binary_warped) # Fit new polynomials left_fitx, right_fitx, middle_fitx, ploty, left_fit, right_fit, middle_fit = fit_poly(binary_warped.shape, leftx, 
lefty, rightx, righty) # Create an image to draw on and an image to show the selection window binary_warped_color = np.dstack((binary_warped, binary_warped, binary_warped))*255 window_img = np.zeros_like(binary_warped_color) # Identify the region between the left and right lanes left_region_limits = np.array([np.transpose(np.vstack([left_fitx, ploty]))]) right_region_limits = np.array([np.flipud(np.transpose(np.vstack([right_fitx, ploty])))]) region_pts = np.hstack((left_region_limits, right_region_limits)) # Draw the polynomial fit for middle lane line mid_lower_limits = np.array([np.transpose(np.vstack([middle_fitx-5, ploty]))]) mid_upper_limits = np.array([np.flipud(np.transpose(np.vstack([middle_fitx+5, ploty])))]) mid_region_pts = np.hstack((mid_lower_limits, mid_upper_limits)) # Draw the polynomial fit for left lane line left_lower_limits = np.array([np.transpose(np.vstack([left_fitx-5, ploty]))]) left_upper_limits = np.array([np.flipud(np.transpose(np.vstack([left_fitx+5, ploty])))]) left_region_pts = np.hstack((left_lower_limits, left_upper_limits)) # Draw the polynomial fit for right lane line right_lower_limits = np.array([np.transpose(np.vstack([right_fitx-5, ploty]))]) right_upper_limits = np.array([np.flipud(np.transpose(np.vstack([right_fitx+5, ploty])))]) right_region_pts = np.hstack((right_lower_limits, right_upper_limits)) # Write left, middle and right polynomial fits to image using different colors cv2.fillPoly(stacked_windows_image, np.int_([mid_region_pts]), (255,255, 0)) #yellow cv2.fillPoly(stacked_windows_image, np.int_([left_region_pts]), (255, 105, 180)) #pink cv2.fillPoly(stacked_windows_image, np.int_([right_region_pts]), (128, 0, 128)) #purple # Draw the lane onto the warped blank image cv2.fillPoly(window_img, np.int_([region_pts]), (0,255, 0)) result = cv2.addWeighted(binary_warped_color, 1, window_img, 0.3, 0) # Draw the lane onto the warped blank image result[nonzeroy[left_lane_inds], nonzerox[left_lane_inds]] = [255, 0, 0] result[nonzeroy[right_lane_inds], nonzerox[right_lane_inds]] = [0, 0, 255] return result, stacked_windows_image, left_fit, right_fit, middle_fit, \ left_fitx, right_fitx, middle_fitx, ploty def convert_to_real(poly_fit, xm_per_pix, ym_per_pix, poly_degree): """ Convert from image coordinates to real coordinates using scaling factors """ poly_fit_real = [0.0] * (poly_degree+1) for d in range(poly_degree+1): poly_fit_real[d] = poly_fit[d] * xm_per_pix / (ym_per_pix ** (poly_degree - d)) return poly_fit_real def measure_curvature(poly_fit, y_eval): ''' Calculates the curvature of polynomial given the polynomial coefficients ''' poly_curverad = ((1.0 + (2.0*poly_fit[0]*y_eval + poly_fit[1])**2)**1.5)/(2.0*poly_fit[0]) return poly_curverad def process_single_image(image, write_to_file = False): alpha = 1 beta = 1 gamma = 0 # Specify corners of the quadrilateral masked region of interest imshape = image.shape mask_vertices = np.array([[(0.05*imshape[1],imshape[0]), (0.47*imshape[1], 0.6*imshape[0]), (0.53*imshape[1], 0.6*imshape[0]), (0.95*imshape[1], imshape[0])]], dtype=np.int32) # Pixel to real world conversion factors ym_per_pix = 30/720 # meters per pixel in y dimension xm_per_pix = 3.7/700 # meters per pixel in x dimension # Write the original image to disk for reference if write_to_file: cv2.imwrite('output_images/original_image.png', cv2.cvtColor(image, cv2.COLOR_RGB2BGR)) # Undistort the original image by applying camera calibrtion matrix undst = cv2.undistort(image, mtx, dist, None, mtx) if write_to_file: 
cv2.imwrite('output_images/undistorted.png', cv2.cvtColor(undst, cv2.COLOR_RGB2BGR)) # Use a combination of thresholds and color gradients to obtain an image # that retains the lane edges while eliminating irrelevant edges as much as possible binary_R, s_binary, sxbinary, edges_binary, edges_color = edge_thresholds(undst) if write_to_file: cv2.imwrite('output_images/edges.png', edges_color) R_color = np.dstack((binary_R, binary_R, binary_R))*255 s_color = np.dstack((s_binary, s_binary, s_binary))*255 sx_color = np.dstack((sxbinary, sxbinary, sxbinary))*255 cv2.imwrite('output_images/R.jpg', R_color) cv2.imwrite('output_images/s.jpg', s_color) cv2.imwrite('output_images/sx.jpg', sx_color) # Remove all edges outside the region of interest defined by the vertices masked_image, masked_color = region_of_interest(edges_binary, mask_vertices) if write_to_file: cv2.imwrite('output_images/masked_image.png', masked_color) # Transform the masked image to obtain a top view of the image through perspective transform topview_image, M, Minv = topview(masked_image) topview_color = np.dstack((topview_image, topview_image, topview_image)) * 255 if write_to_file: cv2.imwrite('output_images/topview.png', topview_color) # Search the top view image to identify the lane region and fit polynomials to # the left, right and middle of the lane topview_region, stacked_windows_image, left_fit, right_fit, middle_fit, \ left_fitx, right_fitx, middle_fitx, ploty = search_around_poly(topview_image) if write_to_file: cv2.imwrite('output_images/topview_region.png', topview_region) cv2.imwrite('output_images/windowed_image.png', cv2.cvtColor(stacked_windows_image, cv2.COLOR_RGB2BGR)) # Convert polynomial coefficients from image to real coordinate system left_fit_real = convert_to_real(left_fit, xm_per_pix, ym_per_pix, 2) right_fit_real = convert_to_real(right_fit, xm_per_pix, ym_per_pix, 2) middle_fit_real = convert_to_real(middle_fit, xm_per_pix, ym_per_pix, 2) # Compute curvature for left, right and middle lines left_curverad = measure_curvature(left_fit_real, (image.shape[0])*ym_per_pix) right_curverad = measure_curvature(right_fit_real, (image.shape[0])*ym_per_pix) middle_curverad = measure_curvature(middle_fit_real, (image.shape[0])*ym_per_pix) if False: print('Left lane curvature (m) = ', left_curverad) print('Right lane curvature (m) = ', right_curverad) print('Middle lane curvature (m) = ', middle_curverad) # Transform the warped image back into the original image warped_back = cv2.warpPerspective(topview_region, Minv, topview_region.shape[1::-1], flags=cv2.INTER_LINEAR) if write_to_file: cv2.imwrite('output_images/lane_region.png', cv2.cvtColor(warped_back, cv2.COLOR_RGB2BGR)) # Compute mid point of the lane mid_lane = np.array([[middle_fitx[-1], ploty[-1]]], dtype = "float32") mid_lane = np.array([mid_lane]) midpoint_lane = cv2.perspectiveTransform(mid_lane, Minv) # Compute midpoint of the image (assume this is the center of the car) mid_image = np.array([[image.shape[1]//2, ploty[-1]]], dtype = "float32") mid_image = np.array([mid_image]) midpoint_image = cv2.perspectiveTransform(mid_image, Minv) offset = (midpoint_lane[0][0][0] - midpoint_image[0][0][0])*xm_per_pix # Combined the lane region markings to the original image result = weighted_img(warped_back, image, alpha, 1, gamma) if write_to_file: cv2.imwrite('output_images/weighted_image.png', cv2.cvtColor(result, cv2.COLOR_RGB2BGR)) # Write radius of curvature and offset onto the image font = cv2.FONT_HERSHEY_SIMPLEX radius_loc = 
(int(0.1*image.shape[0]),int(0.05*image.shape[1])) offset_loc = (int(0.1*image.shape[0]),int(0.1*image.shape[1])) fontScale = 1.5 fontColor = (255,255,255) lineType = 2 radius_text = 'Radius of curvature = ' + str(middle_curverad) + '(m)' if offset > 0: offset_text = 'Vehicle is {0:0.2f}m right of center'.format(offset) else: offset_text = 'Vehicle is {0:0.2f}m left of center'.format(-offset) cv2.putText(result,radius_text, radius_loc, font, fontScale, fontColor, lineType) cv2.putText(result, offset_text, offset_loc, font, fontScale, fontColor, lineType) if write_to_file: cv2.imwrite('output_images/annotated_image.png', cv2.cvtColor(result, cv2.COLOR_RGB2BGR)) return result # - # ### Test Sample Image #for imgpath in os.listdir("test_images/"): for imgpath in ["straight_lines1.jpg"]: fullpath = os.path.join("test_images", imgpath) basename, ext = os.path.splitext(imgpath) imgpath_output = basename + '_output' + ext fullpath_output = os.path.join("output_images", imgpath_output) image = mpimg.imread(fullpath) result = process_single_image(image, write_to_file = True) cv2.imwrite(fullpath_output, cv2.cvtColor(result, cv2.COLOR_RGB2BGR)) plt.imshow(result) # ### Create Video # Import everything needed to edit/save/watch video clips from moviepy.editor import VideoFileClip from IPython.display import HTML # Process video by proecessing the individual images that constitute the video video_input = 'project_video.mp4' video_output = 'output_images/project_video_output.mp4' clip1 = VideoFileClip(video_input).subclip(0, None) project_clip = clip1.fl_image(process_single_image) #NOTE: this function expects color images!! # %time project_clip.write_videofile(video_output, audio=False) # Embed video HTML(""" <video width="960" height="540" controls> <source src="{0}"> </video> """.format(video_output))
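# ### Sanity Check: Image-to-Real Coefficient Scaling
# As a quick check of the coefficient-scaling logic, the cell below compares the
# `convert_to_real` helper defined above against simply refitting `np.polyfit` on points
# that have been rescaled to meters. With $x_r = \lambda_x x_i$ and $y_r = \lambda_y y_i$,
# the rescaled coefficients work out to $A_r = \lambda_x A_i / \lambda_y^{2}$,
# $B_r = \lambda_x B_i / \lambda_y$ and $C_r = \lambda_x C_i$, which is exactly what
# `convert_to_real` applies, so the two routes should agree to numerical precision.
# Note that the pixel-space coefficients used here are made-up illustrative values, not
# output of the pipeline.

# +
import numpy as np

ym_per_pix = 30/720   # meters per pixel in y dimension
xm_per_pix = 3.7/700  # meters per pixel in x dimension

# A hypothetical pixel-space lane line: x = A*y^2 + B*y + C
ploty = np.linspace(0, 719, 720)
pixel_fit = [2e-4, -0.3, 400.0]
plotx = pixel_fit[0]*ploty**2 + pixel_fit[1]*ploty + pixel_fit[2]

# Route 1: rescale the pixel-space coefficients analytically
fit_real_scaled = convert_to_real(pixel_fit, xm_per_pix, ym_per_pix, 2)

# Route 2: rescale the sample points to meters and refit the polynomial
fit_real_refit = np.polyfit(ploty*ym_per_pix, plotx*xm_per_pix, 2)

print("Scaled coefficients  :", fit_real_scaled)
print("Refitted coefficients:", fit_real_refit)
print("Agreement:", np.allclose(fit_real_scaled, fit_real_refit))
# -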
# Source notebook: P2.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="kF6XR0AMAuRW" # # Week 2 - Ungraded Lab: A journey through Data # # Welcome to the ungraded lab for week 2 of Machine Learning Engineering for Production. **The paradigm behind Deep Learning is now facing a shift from model-centric to data-centric.** In this lab you will see how data intricacies affect the outcome of your models. To show you how far it will take you to apply data changes without addressing the model, you will be using a single model throughout: a simple Convolutional Neural Network (CNN). While training this model the journey will take you to address common problems: class imbalance and overfitting. As you navigate these issues, the lab will walk you through useful diagnosis tools and methods to mitigate these common problems. # # ------- # ------- # + [markdown] id="lao0CVv7c3Rd" # ### **IMPORTANT NOTES BEFORE STARTING THE LAB** # # Once opened in Colab, click on the "Connect" button on the upper right side corner of the screen to connect to a runtime to run this lab. # # # **NOTE 1:** # # For this lab you get the option to either train the models yourself (this takes around 20 minutes with GPU enabled for each model) or to use pretrained versions which are already provided. There are a total of 3 CNNs that require training and although some parameters have been tuned to provide a faster training time (such as `steps_per_epoch` and `validation_steps` which have been heavily lowered) this may result in a long time spent running this lab rather than thinking about what you observe. # # To speed things up we have provided saved pre-trained versions of each model along with their respective training history. We recommend you use these pre-trained versions to save time. However we also consider that training a model is an important learning experience especially if you haven't done this before. **If you want to perform this training by yourself, the code for replicating the training is provided as well. In this case the GPU is absolutely necessary, so be sure that it is enabled.** # # To make sure your runtime is GPU you can go to Runtime -> Change runtime type -> Select GPU from the menu and then press SAVE # # - Note: Restarting the runtime may # be required. # # - Colab will tell you if restarting is necessary -- you can do this from Runtime -> Restart Runtime option in the dropdown. # # **If you decide to use the pretrained versions make sure you are not using a GPU as it is not required and may prevent other users from getting access to one.** To check this, go to Runtime -> Change runtime type -> Select None from the menu and then press SAVE. # # **NOTE 2:** # # Colab **does not** guarantee access to a GPU. This depends on the availability of these resources. However **it is not very common to be denied GPU access**. If this happens to you, you can still run this lab without training the models yourself. If you really want to do the training but are denied a GPU, try switching the runtime to a GPU after a couple of hours. # # To know more about Colab's policies check out this [FAQ](https://research.google.com/colaboratory/faq.html). # # ----------- # ----------- # # Let's get started! 
# + id="LttdbzB5XB0O" import os import shutil import random import zipfile import tarfile import numpy as np import pandas as pd import seaborn as sns import tensorflow as tf import matplotlib.pyplot as plt # To ignore some warnings about Image metadata that Pillow prints out import warnings warnings.filterwarnings("ignore") # + [markdown] id="v4Gq9Xffccwt" # Before you move on, download the two datasets used in the lab, as well as the pretrained models and histories: # + id="CkTzJYihXWu3" colab={"base_uri": "https://localhost:8080/"} outputId="49bd69ce-9926-4f26-d6ac-d87109b147f0" # Download datasets # Cats and dogs # !wget https://storage.googleapis.com/mlep-public/course_1/week2/kagglecatsanddogs_3367a.zip # Caltech birds # !wget https://storage.googleapis.com/mlep-public/course_1/week2/CUB_200_2011.tar # Download pretrained models and training histories # !wget -q -P /content/model-balanced/ https://storage.googleapis.com/mlep-public/course_1/week2/model-balanced/saved_model.pb # !wget -q -P /content/model-balanced/variables/ https://storage.googleapis.com/mlep-public/course_1/week2/model-balanced/variables/variables.data-00000-of-00001 # !wget -q -P /content/model-balanced/variables/ https://storage.googleapis.com/mlep-public/course_1/week2/model-balanced/variables/variables.index # !wget -q -P /content/history-balanced/ https://storage.googleapis.com/mlep-public/course_1/week2/history-balanced/history-balanced.csv # !wget -q -P /content/model-imbalanced/ https://storage.googleapis.com/mlep-public/course_1/week2/model-imbalanced/saved_model.pb # !wget -q -P /content/model-imbalanced/variables/ https://storage.googleapis.com/mlep-public/course_1/week2/model-imbalanced/variables/variables.data-00000-of-00001 # !wget -q -P /content/model-imbalanced/variables/ https://storage.googleapis.com/mlep-public/course_1/week2/model-imbalanced/variables/variables.index # !wget -q -P /content/history-imbalanced/ https://storage.googleapis.com/mlep-public/course_1/week2/history-imbalanced/history-imbalanced.csv # !wget -q -P /content/model-augmented/ https://storage.googleapis.com/mlep-public/course_1/week2/model-augmented/saved_model.pb # !wget -q -P /content/model-augmented/variables/ https://storage.googleapis.com/mlep-public/course_1/week2/model-augmented/variables/variables.data-00000-of-00001 # !wget -q -P /content/model-augmented/variables/ https://storage.googleapis.com/mlep-public/course_1/week2/model-augmented/variables/variables.index # !wget -q -P /content/history-augmented/ https://storage.googleapis.com/mlep-public/course_1/week2/history-augmented/history-augmented.csv # + [markdown] id="suKuIsOYdC9G" # ## A story of data # # To guide you through this lab we have prepared a narrative that simulates a real life scenario: # # Suppose you have been tasked to create a model that classifies images of cats, dogs and birds. For this you settle on a simple CNN architecture, since CNN's are known to perform well for image classification. You are probably familiar with two widely used datasets: `cats vs dogs`, and `caltech birds`. As a side note both datasets are available through `Tensforflow Datasets (TFDS)`. However, you decide NOT to use `TFDS` since the lab requires you to modify the data and combine the two datasets into one. 
# # ## Combining the datasets # # The raw images in these datasets can be found within the following paths: # # + id="-ja5V3AbYCp8" cats_and_dogs_zip = '/content/kagglecatsanddogs_3367a.zip' caltech_birds_tar = '/content/CUB_200_2011.tar' base_dir = '/tmp/data' # + [markdown] id="xRqfAVn6e8Lp" # The next step is extracting the data into a directory of choice, `base_dir` in this case. # # Note that the `cats vs dogs` images are in `zip` file format while the `caltech birds` images come in a `tar` file. # + id="aUl3_4nVXcsE" with zipfile.ZipFile(cats_and_dogs_zip, 'r') as my_zip: my_zip.extractall(base_dir) # + id="JQYh7tAyqOA7" with tarfile.open(caltech_birds_tar, 'r') as my_tar: my_tar.extractall(base_dir) # + [markdown] id="65E3t5Qlfwwn" # For the cats and dogs images no further preprocessing is needed as all exemplars of a single class are located in one directory: `PetImages\Cat` and `PetImages\Dog` respectively. Let's check how many images are available for each category: # + id="husRshAjYim9" colab={"base_uri": "https://localhost:8080/"} outputId="2030ff2b-8d0f-412b-f671-7c17e4c18240" base_dogs_dir = os.path.join(base_dir, 'PetImages/Dog') base_cats_dir = os.path.join(base_dir,'PetImages/Cat') print(f"There are {len(os.listdir(base_dogs_dir))} images of dogs") print(f"There are {len(os.listdir(base_cats_dir))} images of cats") # + [markdown] id="oqiG9G7-g2Z1" # The Bird images dataset organization is quite different. This dataset is commonly used to classify species of birds so there is a directory for each species. Let's treat all species of birds as a single class. This requires moving all bird images to a single directory (`PetImages/Bird` will be used for consistency). This can be done by running the next cell: # + id="ifcKshS6xmVj" colab={"base_uri": "https://localhost:8080/"} outputId="e3ce63b2-bd05-4f9c-88dc-45ec26ed391d" raw_birds_dir = '/tmp/data/CUB_200_2011/images' base_birds_dir = os.path.join(base_dir,'PetImages/Bird') os.mkdir(base_birds_dir) for subdir in os.listdir(raw_birds_dir): subdir_path = os.path.join(raw_birds_dir, subdir) for image in os.listdir(subdir_path): shutil.move(os.path.join(subdir_path, image), os.path.join(base_birds_dir)) print(f"There are {len(os.listdir(base_birds_dir))} images of birds") # + [markdown] id="9tteiK1fieHo" # It turns out that there is a similar number of images for each class you are trying to predict! Nice! # + [markdown] id="z3jHPdb7SE61" # Let's take a quick look at an image of each class you are trying to predict. # + id="lXE9RlF2ZFLL" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="bb609144-26f9-4dae-ab8a-86b8f4cff98b" from IPython.display import Image, display print("Sample cat image:") display(Image(filename=f"{os.path.join(base_cats_dir, os.listdir(base_cats_dir)[0])}")) print("\nSample dog image:") display(Image(filename=f"{os.path.join(base_dogs_dir, os.listdir(base_dogs_dir)[0])}")) print("\nSample bird image:") display(Image(filename=f"{os.path.join(base_birds_dir, os.listdir(base_birds_dir)[0])}")) # + [markdown] id="FduWstcripzJ" # ## Train / Evaluate Split # + [markdown] id="EiL9L8eSizCp" # Before training the model you need to split the data into `training` and `evaluating` sets. For training, we have chosen the [`Keras`](https://keras.io) application programming interface (API) which includes functionality to read images from various directories. The easier way to split the data is to create a different directory for each split of each class. 
# # Run the next cell to create the directories for training and evaluating sets. # + id="NdBnzB2Mvcs2" train_eval_dirs = ['train/cats', 'train/dogs', 'train/birds', 'eval/cats', 'eval/dogs', 'eval/birds'] for dir in train_eval_dirs: if not os.path.exists(os.path.join(base_dir, dir)): os.makedirs(os.path.join(base_dir, dir)) # + [markdown] id="x4XYN51Zj7-J" # Now, let's define a function that will move a percentage of images from an origin folder to a destination folder as desired to generate the training and evaluation splits: # + id="DRpbU9HAdn4n" def move_to_destination(origin, destination, percentage_split): num_images = int(len(os.listdir(origin))*percentage_split) for image_name, image_number in zip(sorted(os.listdir(origin)), range(num_images)): shutil.move(os.path.join(origin, image_name), destination) # + [markdown] id="DfssLKoathoG" # And now you are ready to call the previous function and split the data: # + id="VMKvQGH6fGdW" # Move 70% of the images to the train dir move_to_destination(base_cats_dir, os.path.join(base_dir, 'train/cats'), 0.7) move_to_destination(base_dogs_dir, os.path.join(base_dir, 'train/dogs'), 0.7) move_to_destination(base_birds_dir, os.path.join(base_dir, 'train/birds'), 0.7) # Move the remaining images to the eval dir move_to_destination(base_cats_dir, os.path.join(base_dir, 'eval/cats'), 1) move_to_destination(base_dogs_dir, os.path.join(base_dir, 'eval/dogs'), 1) move_to_destination(base_birds_dir, os.path.join(base_dir, 'eval/birds'), 1) # + [markdown] id="0eAD4J1ukGYC" # Something important to mention is that as it currently stands your dataset has some issues that will prevent model training and evaluation. Mainly: # # 1. Some images are corrupted and have zero bytes. # 2. Cats vs dogs zip file included a `.db` file for each class that needs to be deleted. # # If you didn't fix this before training you will get errors regarding these issues and training will fail. Zero-byte images are not valid images and Keras will let you know once these files are reached. In a similar way `.db` files are not valid images. **It is a good practice to always make sure that you are submitting files with the correct specifications to your training algorithm before start running it** as these issues might not be encountered right away and you will have to solve them and start training again. # # Running the following `bash` commands in the base directory will resolve these issues: # + id="3An_dEi0hwHj" # !find /tmp/data/ -size 0 -exec rm {} + # !find /tmp/data/ -type f ! -name "*.jpg" -exec rm {} + # + [markdown] id="oeqbprKcmr-0" # The first command removes all zero-byte files from the filesystem. The second one removes any file that does not have a `.jpg` extension. # # This also serves as a reminder of the power of bash. Although you could achieve the same result with Python code, bash allows you to do this much quicker. If you are not familiar with bash or some other shell-like language we encourage you to learn some of it as it is a very useful tool for data manipulation purposes. 
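# For reference, a rough Python equivalent of those two bash commands (just a sketch for
# comparison, not part of the lab's pipeline) could look like this:
#
# ```python
# import os
#
# for root, _, files in os.walk("/tmp/data"):
#     for fname in files:
#         path = os.path.join(root, fname)
#         # delete zero-byte files and anything without a .jpg extension
#         if os.path.getsize(path) == 0 or not fname.endswith(".jpg"):
#             os.remove(path)
# ```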
# # Let's check how many images you have available for each split and class after you remove the corrupted images: # + id="nZFk4f0jhEAk" colab={"base_uri": "https://localhost:8080/"} outputId="97113f51-59c6-4b16-aa65-c9c12ad405c3" print(f"There are {len(os.listdir(os.path.join(base_dir, 'train/cats')))} images of cats for training") print(f"There are {len(os.listdir(os.path.join(base_dir, 'train/dogs')))} images of dogs for training") print(f"There are {len(os.listdir(os.path.join(base_dir, 'train/birds')))} images of birds for training\n") print(f"There are {len(os.listdir(os.path.join(base_dir, 'eval/cats')))} images of cats for evaluation") print(f"There are {len(os.listdir(os.path.join(base_dir, 'eval/dogs')))} images of dogs for evaluation") print(f"There are {len(os.listdir(os.path.join(base_dir, 'eval/birds')))} images of birds for evaluation") # + [markdown] id="LSmRaN_Qm-s4" # It turns out that very few files presented the issues mentioned above. That's good news but it is also a reminder that small problems with the dataset might unexpectedly affect the training process. In this case, 4 non valid image files will have prevented you from training the model. # # In most cases training Deep Learning models is a time intensive task, so be sure to have everything in place before starting this process. # # # ## An unexpected issue! # # Let's face the first real life issue in this narrative! There was a power outage in your office and some hard drives were damaged and as a result of that, many of the images for `dogs` and `birds` have been erased. As a matter of fact, only 20% of the dog images and 10% of the bird images survived. # # To simulate this scenario, let's quickly create a new directory called `imbalanced` and copy only the proportions mentioned above for each class. 
# + id="wAG-rJRPZTQt" for dir in train_eval_dirs: if not os.path.exists(os.path.join(base_dir, 'imbalanced/'+dir)): os.makedirs(os.path.join(base_dir, 'imbalanced/'+dir)) # + id="GAGTj51qZT4e" colab={"base_uri": "https://localhost:8080/"} outputId="572c31c9-fb1b-4b55-80df-d53b8b69dcf4" # Very similar to the one used before but this one copies instead of moving def copy_with_limit(origin, destination, percentage_split): num_images = int(len(os.listdir(origin))*percentage_split) for image_name, image_number in zip(sorted(os.listdir(origin)), range(num_images)): shutil.copy(os.path.join(origin, image_name), destination) # Perform the copying copy_with_limit(os.path.join(base_dir, 'train/cats'), os.path.join(base_dir, 'imbalanced/train/cats'), 1) copy_with_limit(os.path.join(base_dir, 'train/dogs'), os.path.join(base_dir, 'imbalanced/train/dogs'), 0.2) copy_with_limit(os.path.join(base_dir, 'train/birds'), os.path.join(base_dir, 'imbalanced/train/birds'), 0.1) copy_with_limit(os.path.join(base_dir, 'eval/cats'), os.path.join(base_dir, 'imbalanced/eval/cats'), 1) copy_with_limit(os.path.join(base_dir, 'eval/dogs'), os.path.join(base_dir, 'imbalanced/eval/dogs'), 0.2) copy_with_limit(os.path.join(base_dir, 'eval/birds'), os.path.join(base_dir, 'imbalanced/eval/birds'), 0.1) # Print number of available images print(f"There are {len(os.listdir(os.path.join(base_dir, 'imbalanced/train/cats')))} images of cats for training") print(f"There are {len(os.listdir(os.path.join(base_dir, 'imbalanced/train/dogs')))} images of dogs for training") print(f"There are {len(os.listdir(os.path.join(base_dir, 'imbalanced/train/birds')))} images of birds for training\n") print(f"There are {len(os.listdir(os.path.join(base_dir, 'imbalanced/eval/cats')))} images of cats for evaluation") print(f"There are {len(os.listdir(os.path.join(base_dir, 'imbalanced/eval/dogs')))} images of dogs for evaluation") print(f"There are {len(os.listdir(os.path.join(base_dir, 'imbalanced/eval/birds')))} images of birds for evaluation") # + [markdown] id="2Qt_EGGJAaOR" # For now there is no quick or clear solution to the accidental file loss. So you decide to keep going and train the model with the remaining images. # + [markdown] id="qlDuR43ZAfwk" # ## Selecting the model # # Let's go ahead and create a model architecture and define a loss function, optimizer and performance metrics leveraging keras API: # + id="AiTGrTiHZ9fS" from tensorflow.keras import layers, models, optimizers def create_model(): # A simple CNN architecture based on the one found here: https://www.tensorflow.org/tutorials/images/classification model = models.Sequential([ layers.Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3)), layers.MaxPooling2D((2, 2)), layers.Conv2D(64, (3, 3), activation='relu'), layers.MaxPooling2D((2, 2)), layers.Conv2D(64, (3, 3), activation='relu'), layers.MaxPooling2D((2, 2)), layers.Conv2D(128, (3, 3), activation='relu'), layers.MaxPooling2D((2, 2)), layers.Flatten(), layers.Dense(512, activation='relu'), layers.Dense(3, activation='softmax') ]) # Compile the model model.compile( loss=tf.keras.losses.SparseCategoricalCrossentropy(), optimizer=optimizers.Adam(), metrics=[tf.keras.metrics.SparseCategoricalAccuracy()] ) return model # + [markdown] id="UVj-I-Ke03Au" # And let's print out a model summary as a quick check. 
# + id="elM3J9P8I_zu" colab={"base_uri": "https://localhost:8080/"} outputId="d791f734-163c-4410-fcf0-5af2c94d1572" # Create a model to use with the imbalanced dataset imbalanced_model = create_model() # Print the model's summary print(imbalanced_model.summary()) # + [markdown] id="9YjjV9iU78Ca" # For training the model you will be using Keras' ImageDataGenerator, which has built-in functionalities to easily feed your model with raw, rescaled or even augmented image data. # # Another cool functionality within ImageDataGenerator is the `flow_from_directory` method which allows to read images as needed from a root directory. This method needs the following arguments: # # - `directory`: Path to the root directory where the images are stored. # - `target_size`: The dimensions to which all images found will be resized. Since images come in all kinds of resolutions, you need to standardize their size. 150x150 is used but other values should work well too. # - `batch_size`: Number of images the generator yields everytime it is asked for a next batch. 32 is used here. # - `class_mode`: How the labels are represented. Here "binary" is used to indicate that labels will be 1D. This is done for compatibility with the loss and evaluation metrics used when compiling the model. # # If you want to learn more about using Keras' ImageDataGenerator, check this [tutorial](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image/ImageDataGenerator). # + id="4SyU0P66azNE" colab={"base_uri": "https://localhost:8080/"} outputId="5777257a-cfb7-41cd-c866-8e77536841fc" from tensorflow.keras.preprocessing.image import ImageDataGenerator # No data augmentation for now, only normalizing pixel values train_datagen = ImageDataGenerator(rescale=1./255) test_datagen = ImageDataGenerator(rescale=1./255) # Point to the imbalanced directory train_generator = train_datagen.flow_from_directory( '/tmp/data/imbalanced/train', target_size=(150, 150), batch_size=32, class_mode='binary') validation_generator = test_datagen.flow_from_directory( '/tmp/data/imbalanced/eval', target_size=(150, 150), batch_size=32, class_mode='binary') # + [markdown] id="NsowgcmDAOv-" # Let's do a quick sanity check to inspect that both generators (training and validation) use the same labels for each class: # + id="MlCgRwvWX8BO" colab={"base_uri": "https://localhost:8080/"} outputId="fbf48cec-1ece-4d59-9c90-2bcdc76953e4" print(f"labels for each class in the train generator are: {train_generator.class_indices}") print(f"labels for each class in the validation generator are: {validation_generator.class_indices}") # + [markdown] id="UqXdzv-soUzj" # # # Training a CNN with class imbalanced data # # + id="O1DI3mKCraJQ" colab={"base_uri": "https://localhost:8080/"} outputId="e9b97efb-5035-4e31-851e-204721adde6b" # Load pretrained model and history imbalanced_history = pd.read_csv('history-imbalanced/history-imbalanced.csv') imbalanced_model = tf.keras.models.load_model('model-imbalanced') # + id="UUhtEnsgxZ00" # Run only if you want to train the model yourself (this takes around 20 mins with GPU enabled) # imbalanced_history = imbalanced_model.fit( # train_generator, # steps_per_epoch=100, # epochs=50, # validation_data=validation_generator, # validation_steps=80) # + [markdown] id="9kHwAYLvEhiQ" # To analyze the model performance properly, it is important to track different metrics such as accuracy and loss function along the training process. 
Let's define a helper function to handle the metrics through the training history,depending on the method you previously selected: # + id="kmoJLjoTzb_L" def get_training_metrics(history): # This is needed depending on if you used the pretrained model or you trained it yourself if not isinstance(history, pd.core.frame.DataFrame): history = history.history acc = history['sparse_categorical_accuracy'] val_acc = history['val_sparse_categorical_accuracy'] loss = history['loss'] val_loss = history['val_loss'] return acc, val_acc, loss, val_loss # + [markdown] id="8OKRhD87E-V3" # Now, let's plot the metrics and losses for each training epoch as the training process progresses. # + id="RcYuJgrr11h4" colab={"base_uri": "https://localhost:8080/", "height": 590} outputId="bdc6e16b-f917-465a-8b7a-27ae7f517ae7" def plot_train_eval(history): acc, val_acc, loss, val_loss = get_training_metrics(history) acc_plot = pd.DataFrame({"training accuracy":acc, "evaluation accuracy":val_acc}) acc_plot = sns.lineplot(data=acc_plot) acc_plot.set_title('training vs evaluation accuracy') acc_plot.set_xlabel('epoch') acc_plot.set_ylabel('sparse_categorical_accuracy') plt.show() print("") loss_plot = pd.DataFrame({"training loss":loss, "evaluation loss":val_loss}) loss_plot = sns.lineplot(data=loss_plot) loss_plot.set_title('training vs evaluation loss') loss_plot.set_xlabel('epoch') loss_plot.set_ylabel('loss') plt.show() plot_train_eval(imbalanced_history) # + [markdown] id="4mF4fltDFM6o" # From these two plots is quite evident that the model is overfitting the training data. However, the evaluation accuracy is still pretty high. Maybe class imbalance is not such a big issue after all. Perhaps this is too good to be true. # # Let's dive a little deeper, and compute some additional metrics to explore if the class imbalance is hampering the model to perform well. In particular, let's compare: the accuracy score, the accuracy score balanced, and the confusion matrix. Information on the accuracy scores calculations is provided in the [sklearn](https://scikit-learn.org/stable/modules/model_evaluation.html#classification-metrics) documentation. To refresh ideas on what is a confusion matrix check [Wikipedia](https://en.wikipedia.org/wiki/Confusion_matrix). # + id="kB_8ipYTK6FF" from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay, accuracy_score, balanced_accuracy_score # + id="QQRokFyn-KIN" colab={"base_uri": "https://localhost:8080/"} outputId="428fce4f-b7ff-49cf-f9fa-87f673c2aa53" # Use the validation generator without shuffle to easily compute additional metrics val_gen_no_shuffle = test_datagen.flow_from_directory( '/tmp/data/imbalanced/eval', target_size=(150, 150), batch_size=32, class_mode='binary', shuffle=False) # + id="yJEg83EIW_jm" colab={"base_uri": "https://localhost:8080/"} outputId="08584aeb-eb73-486d-c285-38d0ace53702" # Get the true labels from the generator y_true = val_gen_no_shuffle.classes # Use the model to predict (will take a couple of minutes) predictions_imbalanced = imbalanced_model.predict(val_gen_no_shuffle) # Get the argmax (since softmax is being used) y_pred_imbalanced = np.argmax(predictions_imbalanced, axis=1) # Print accuracy score print(f"Accuracy Score: {accuracy_score(y_true, y_pred_imbalanced)}") # Print balanced accuracy score print(f"Balanced Accuracy Score: {balanced_accuracy_score(y_true, y_pred_imbalanced)}") # + [markdown] id="cXQQR9D8HVUh" # Comparing the `accuracy` and `balanced accuracy` metrics, the class imbalance starts to become apparent. 
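# As a reminder, `balanced_accuracy_score` is the unweighted mean of the per-class
# recalls, so every class counts equally regardless of how many examples it has. The tiny
# example below (our own illustration, not part of the lab's data) makes that equivalence
# explicit:
#
# ```python
# import numpy as np
# from sklearn.metrics import confusion_matrix, balanced_accuracy_score
#
# y_true = [0]*90 + [1]*5 + [2]*5   # heavily imbalanced ground truth
# y_pred = [0]*100                  # a model that always predicts the majority class
#
# cm = confusion_matrix(y_true, y_pred)
# per_class_recall = np.diag(cm) / cm.sum(axis=1)   # recall of each true class
#
# print(per_class_recall.mean())                    # ~0.33
# print(balanced_accuracy_score(y_true, y_pred))    # same value
# ```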
Now let's compute the `confusion matrix` of the predictions. Notice that the class imbalance is also present in the evaluation set so the confusion matrix will show an overwhelming majority for cats. # + id="zZqpe9uLN2k0" colab={"base_uri": "https://localhost:8080/", "height": 300} outputId="58b499fa-53aa-4b9f-a51d-47bc3ff0f1bf" imbalanced_cm = confusion_matrix(y_true, y_pred_imbalanced) ConfusionMatrixDisplay(imbalanced_cm, display_labels=['birds', 'cats', 'dogs']).plot(values_format="d") # + id="nu3xXDhYAnqL" colab={"base_uri": "https://localhost:8080/"} outputId="98e7f3e4-8aa9-48ff-f074-7b0945f909f3" misclassified_birds = (imbalanced_cm[1,0] + imbalanced_cm[2,0])/np.sum(imbalanced_cm, axis=0)[0] misclassified_cats = (imbalanced_cm[0,1] + imbalanced_cm[2,1])/np.sum(imbalanced_cm, axis=0)[1] misclassified_dogs = (imbalanced_cm[0,2] + imbalanced_cm[1,2])/np.sum(imbalanced_cm, axis=0)[2] print(f"Proportion of misclassified birds: {misclassified_birds*100:.2f}%") print(f"Proportion of misclassified cats: {misclassified_cats*100:.2f}%") print(f"Proportion of misclassified dogs: {misclassified_dogs*100:.2f}%") # + [markdown] id="e3tpDKCsT564" # Class imbalance is a real problem that if not detected early on, gives the wrong impression that your model is performing better than it actually is. For this reason, is important to rely on several metrics that do a better job at capturing these kinds of issues. **In this case the standard `accuracy` metric is misleading** and provides a false sense that the model is performing better than it actually is. # # To prove this point further consider a model that only predicts cats: # + id="Yv65fC5NK5sV" colab={"base_uri": "https://localhost:8080/"} outputId="3669e273-fa51-429d-9d80-efe5fb6806ec" # Predict cat for all images all_cats = np.ones(y_true.shape) # Print accuracy score print(f"Accuracy Score: {accuracy_score(y_true, all_cats)}") # Print balanced accuracy score print(f"Balanced Accuracy Score: {balanced_accuracy_score(y_true, all_cats)}") # + [markdown] id="g_Gp6mYcIQlW" # If you only look at the `accuracy` metric the model seems to be working fairly well, since the majority class is the same that the model always predicts. # # There are several techniques to deal with class imbalance. A very popular one is `SMOTE`, which oversamples the minority classes by creating syntethic data. However, these techniques are outside the scope of this lab. # # The previous metrics were computed with class imbalance both on the training and evaluation sets. 
If you are wondering how the model performed with class imbalance only on the training set run the following cell to see the confusion matrix with balanced classes in the evaluation set: # # + id="r6xecVSuqMLx" colab={"base_uri": "https://localhost:8080/", "height": 317} outputId="c668d88d-ace1-4577-df89-53ee8f16feff" # Use the validation generator without shuffle to easily compute additional metrics val_gen_no_shuffle = test_datagen.flow_from_directory( '/tmp/data/eval', target_size=(150, 150), batch_size=32, class_mode='binary', shuffle=False) # Get the true labels from the generator y_true = val_gen_no_shuffle.classes # Use the model to predict (will take a couple of minutes) predictions_imbalanced = imbalanced_model.predict(val_gen_no_shuffle) # Get the argmax (since softmax is being used) y_pred_imbalanced = np.argmax(predictions_imbalanced, axis=1) # Confusion matrix imbalanced_cm = confusion_matrix(y_true, y_pred_imbalanced) ConfusionMatrixDisplay(imbalanced_cm, display_labels=['birds', 'cats', 'dogs']).plot(values_format="d") # + [markdown] id="R5vJRVjlQvK-" # # Training with the complete dataset # # For the time being and following the narrative, assume that a colleague of yours was careful enough to save a backup of the complete dataset in her cloud storage. Now you can try training without the class imbalance issue, what a relief! # # Now that you have the complete dataset it is time to try again without suffering from class imbalance. **In general, collecting more data is beneficial for models!** # + id="w5VwUrpGPhH_" # Create a model to use with the balanced dataset balanced_model = create_model() # + id="FWFrVUmsmzzs" colab={"base_uri": "https://localhost:8080/"} outputId="a0c8e705-89d9-42eb-f78e-4334263cc7c1" # Still no data augmentation, only re-scaling train_datagen = ImageDataGenerator(rescale=1./255) test_datagen = ImageDataGenerator(rescale=1./255) # Generators now point to the complete dataset train_generator = train_datagen.flow_from_directory( '/tmp/data/train', target_size=(150, 150), batch_size=32, class_mode='binary') validation_generator = test_datagen.flow_from_directory( '/tmp/data/eval', target_size=(150, 150), batch_size=32, class_mode='binary') # + id="WC7-I1ylr-_n" colab={"base_uri": "https://localhost:8080/"} outputId="45302c2d-d8a1-4624-f07d-42660ffd1150" # Load pretrained model and history balanced_history = pd.read_csv('history-balanced/history-balanced.csv') balanced_model = tf.keras.models.load_model('model-balanced') # + id="NcOh1NVtm5Dg" # Run only if you want to train the model yourself (this takes around 20 mins with GPU enabled) # balanced_history = balanced_model.fit( # train_generator, # steps_per_epoch=100, # epochs=50, # validation_data=validation_generator, # validation_steps=80) # + [markdown] id="i7LZUa9RVvyX" # Let's check how the `accuracy` vs `balanced accuracy` comparison looks like now: # + id="EE3YiUW5WMOQ" colab={"base_uri": "https://localhost:8080/"} outputId="896b8569-c608-487f-d2db-715cef2d92dd" # Use the validation generator without shuffle to easily compute additional metrics val_gen_no_shuffle = test_datagen.flow_from_directory( '/tmp/data/eval', target_size=(150, 150), batch_size=32, class_mode='binary', shuffle=False) # + id="wfLgvWRfKuTQ" colab={"base_uri": "https://localhost:8080/"} outputId="8b03463c-afd3-4155-e24d-1f7052642859" # Get the true labels from the generator y_true = val_gen_no_shuffle.classes # Use the model to predict (will take a couple of minutes) predictions_balanced = 
balanced_model.predict(val_gen_no_shuffle) # Get the argmax (since softmax is being used) y_pred_balanced = np.argmax(predictions_balanced, axis=1) # Print accuracy score print(f"Accuracy Score: {accuracy_score(y_true, y_pred_balanced)}") # Print balanced accuracy score print(f"Balanced Accuracy Score: {balanced_accuracy_score(y_true, y_pred_balanced)}") # + id="7Mpnmv5YKyeD" colab={"base_uri": "https://localhost:8080/", "height": 296} outputId="2f2aef1b-4c8e-4cb0-9f0f-48ddad8aa9d6" balanced_cm = confusion_matrix(y_true, y_pred_balanced) ConfusionMatrixDisplay(balanced_cm, display_labels=['birds', 'cats', 'dogs']).plot(values_format="d") # + [markdown] id="Dp7QCgZ0Wuf3" # Both accuracy-based metrics are very similar now. The confusion matrix also looks way better than before. This suggests that class imbalance has been successfully mitigated by adding more data to the previously undersampled classes. # # Now that you now that you can trust the `accuracy` metric, let's plot the training history: # + id="6pr2VmKtJpet" colab={"base_uri": "https://localhost:8080/", "height": 590} outputId="9f9e0bcc-c39a-4314-a4bf-f4a2d546adc0" plot_train_eval(balanced_history) # + [markdown] id="YCH1hTj7JvHu" # This looks much better than for the imbalanced case! However, overfitting is still present. # # Can you think of ways to address this issue? If you are familiar with CNN's you might think of adding `dropout` layers. This intuition is correct but for the time being you decide to stick with the same model and only change the data to see if it is possible to mitigate overfitting in this manner. # # Another possible solution is to apply data augmentation techniques. Your whole team agrees this is the way to go so you decide to try this next! # + [markdown] id="VdlVWEZuX4ii" # # Training with Data Augmentation # # Augmenting images is a technique in which you create new versions of the images you have at hand, by applying geometric transformations. These transformations can vary from: zooming in and out, rotating, or even flipping the images. By doing this, you get a training dataset that exposes the model to a wider variety of images. This helps in further exploring the feature space and hence reducing the chances of overfitting. # # It is also a very natural idea since doing slight (or sometimes not so slight) changes to an image will result in an equally valid image. A cat sitting in an awkward position is still a cat, right? # + id="V1EUr1eTVXEz" # Create a model to use with the balanced and augmented dataset augmented_model = create_model() # + id="g7RAqkSRC98K" colab={"base_uri": "https://localhost:8080/"} outputId="b70d632c-6006-4bc6-fb3c-a92a48c86410" # Now applying image augmentation train_datagen = ImageDataGenerator( rescale=1./255, rotation_range=50, width_shift_range=0.15, height_shift_range=0.15, shear_range=0.2, zoom_range=0.2, horizontal_flip=True) test_datagen = ImageDataGenerator(rescale=1./255) # Still pointing to directory with full dataset train_generator = train_datagen.flow_from_directory( '/tmp/data/train', target_size=(150, 150), batch_size=32, class_mode='binary') validation_generator = test_datagen.flow_from_directory( '/tmp/data/eval', target_size=(150, 150), batch_size=32, class_mode='binary') # + [markdown] id="DnmSteAYA4s3" # Notice that the only difference with the previous training is that the `ImageDataGenerator` object now has some extra parameters. We encourage you to read more about this topic [here](https://keras.io/api/preprocessing/image/) if you haven't already. 
Also **this was only done to the training generator since this technique should only be applied to the training images.** # # # But what exactly are these extra parameters doing? # # Let's see these transformations in action. The following cell applies and displays different transformations for a single image: # + id="iiu_u0iRqgFM" colab={"base_uri": "https://localhost:8080/", "height": 821} outputId="1e33d66c-e333-4136-972a-0bb830277df0" from tensorflow.keras.preprocessing.image import img_to_array, array_to_img, load_img # Displays transformations on random images of birds in the training partition def display_transformations(gen): train_birds_dir = "/tmp/data/train/birds" random_index = random.randint(0, len(os.listdir(train_birds_dir))) sample_image = load_img(f"{os.path.join(train_birds_dir, os.listdir(train_birds_dir)[random_index])}", target_size=(150, 150)) sample_array = img_to_array(sample_image) sample_array = sample_array[None, :] for iteration, array in zip(range(4), gen.flow(sample_array, batch_size=1)): array = np.squeeze(array) img = array_to_img(array) print(f"\nTransformation number: {iteration}\n") display(img) # An example of an ImageDataGenerator sample_gen = ImageDataGenerator( rescale=1./255, rotation_range=50, width_shift_range=0.25, height_shift_range=0.25, shear_range=0.2, zoom_range=0.25, horizontal_flip=True) display_transformations(sample_gen) # + [markdown] id="OUNLR1NFBED3" # Let's look at another more extreme example: # + id="biDxKkdx09bg" colab={"base_uri": "https://localhost:8080/", "height": 821} outputId="a3af075f-2cbe-4041-9cb8-bca126c5947c" # An ImageDataGenerator with more extreme data augmentation sample_gen = ImageDataGenerator( rescale=1./255, rotation_range=90, width_shift_range=0.3, height_shift_range=0.3, shear_range=0.5, zoom_range=0.5, vertical_flip=True, horizontal_flip=True) display_transformations(sample_gen) # + [markdown] id="KaKZ624jBlt6" # Feel free to try your own custom ImageDataGenerators! The results can be very fun to watch. If you check the [docs](https://keras.io/api/preprocessing/image/) there are some other parameters you may want to toy with. # # Now that you know what data augmentation is doing to the training images let's move onto training: # + id="6vO9TP1dJ5My" colab={"base_uri": "https://localhost:8080/"} outputId="a1f6b385-9837-46ef-8981-690736b8765b" # Load pretrained model and history augmented_history = pd.read_csv('history-augmented/history-augmented.csv') augmented_model = tf.keras.models.load_model('model-augmented') # + id="7aSV4CyGHRz-" # Run only if you want to train the model yourself (this takes around 20 mins with GPU enabled) # augmented_history = augmented_model.fit( # train_generator, # steps_per_epoch=100, # epochs=80, # validation_data=validation_generator, # validation_steps=80) # + [markdown] id="d0hoorf7brwZ" # Since you know that class imbalance is no longer an issue there is no need to check for more in-depth metrics. # # Let's plot the training history right away: # + id="8EYc1oXmHjE2" colab={"base_uri": "https://localhost:8080/", "height": 590} outputId="b5da6616-3169-48da-d552-3513be5b333d" plot_train_eval(augmented_history) # + [markdown] id="nBy1VcxacPEx" # Now, the evaluation accuracy follows more closely the training one. This indicates that **the model is no longer overfitting**. Quite a remarkable finding, achieved by just augmenting the data set. Another option to handle overfitting is to include dropout layers in your model as mentioned earlier. 
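# If you want to experiment with that route as well, a minimal variation of `create_model`
# with dropout added could look like the sketch below (our own variant, not the model
# trained in this lab; it reuses the `layers`, `models` and `optimizers` imports from
# earlier, and the rate of 0.3 is just an arbitrary starting point to tune):
#
# ```python
# def create_model_with_dropout(rate=0.3):
#     model = models.Sequential([
#         layers.Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3)),
#         layers.MaxPooling2D((2, 2)),
#         layers.Conv2D(64, (3, 3), activation='relu'),
#         layers.MaxPooling2D((2, 2)),
#         layers.Conv2D(64, (3, 3), activation='relu'),
#         layers.MaxPooling2D((2, 2)),
#         layers.Conv2D(128, (3, 3), activation='relu'),
#         layers.MaxPooling2D((2, 2)),
#         layers.Flatten(),
#         layers.Dropout(rate),                 # randomly zeroes activations during training
#         layers.Dense(512, activation='relu'),
#         layers.Dropout(rate),
#         layers.Dense(3, activation='softmax')
#     ])
#
#     model.compile(
#         loss=tf.keras.losses.SparseCategoricalCrossentropy(),
#         optimizer=optimizers.Adam(),
#         metrics=[tf.keras.metrics.SparseCategoricalAccuracy()]
#     )
#
#     return model
# ```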
# # Another point worth mentioning, is that this model achieves a slightly lower evaluation accuracy when compared to the model without data augmentation. The reason for this, is that this model needs more epochs to train. To spot this issue, check that for the model without data augmentation, the training accuracy reached almost 100%, whereas the augmented one can still improve. # # + [markdown] id="dOA93ENHczla" # ## Wrapping it up # # **Congratulations on finishing this ungraded lab!** # # It is quite amazing to see how data alone can impact Deep Learning models. Hopefully this lab helped you have a better understanding of the importance of data. # # In particular, you figured out ways to diagnose the effects of class imbalance and looked at specific metrics to spot this problem. Adding more data is a simple way to overcome class imbalance. However, this is not always feasible in a real life scenario. # # In the final section, you applied multiple geometric transformations to the images in the training dataset, to generate an augmented version. The goal was to use data augmentation to reduce overfitting. Changing the network architecture is an alternative method to reduce overfitting. In practice, it is a good idea to implement both techniques for better results. # # # **Keep it up!**
# Source notebook: C1W2_Ungraded_Lab_Birds_Cats_Dogs.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # 3. Markov Models Example Problems # We will now look at a model that examines our state of healthiness vs. being sick. Keep in mind that this is very much like something you could do in real life. If you wanted to model a certain situation or environment, we could take some data that we have gathered, build a maximum likelihood model on it, and do things like study the properties that emerge from the model, or make predictions from the model, or generate the next most likely state. # # Let's say we have 2 states: **sick** and **healthy**. We know that we spend most of our time in a healthy state, so the probability of transitioning from healthy to sick is very low: # # $$p(sick \; | \; healthy) = 0.005$$ # # Hence, the probability of going from healthy to healthy is: # # $$p(healthy \; | \; healthy) = 0.995$$ # # Now, on the other hand the probability of going from sick to sick is also very high. This is because if you just got sick yesterday then you are very likely to be sick tomorrow. # # $$p(sick \; | \; sick) = 0.8$$ # # However, the probability of transitioning from sick to healthy should be higher than the reverse, because you probably won't stay sick for as long as you would stay healthy: # # $$p(healthy \; | \; sick) = 0.02$$ # # We have now fully defined our state transition matrix, and we can now do some calculations. # # ## 1.1 Example Calculations # ### 1.1.1 # What is the probability of being healthy for 10 days in a row, given that we already start out as healthy? Well that is: # # $$p(healthy \; 10 \; days \; in \; a \; row \; | \; healthy \; at \; t=0) = 0.995^9 = 95.6 \%$$ # # How about the probability of being healthy for 100 days in a row? # # $$p(healthy \; 100 \; days \; in \; a \; row \; | \; healthy \; at \; t=0) = 0.995^{99} = 60.9 \%$$ # ## 2. Expected Number of Continuously Sick Days # We can now look at the expected number of days that you would remain in the same state (e.g. how many days would you expect to stay sick given the model?). This is a bit more difficult than the last problem, but completely doable, only involving the mathematics of <a href="https://en.wikipedia.org/wiki/Geometric_series">infinite sums</a>. # # First, we can look at the probability of being in state $i$, and going to state $i$ in the next state. That is just $A(i,i)$: # # $$p \big(s(t)=i \; | \; s(t-1)=i \big) = A(i, i)$$ # # Now, what is the probability distribution that we actually want to calculate? How about we calculate the probability that we stay in state $i$ for $n$ transitions, at which point we move to another state: # # $$p \big(s(t) \;!=i \; | \; s(t-1)=i \big) = 1 - A(i, i)$$ # # So, the joint probability that we are trying to model is: # # $$p\big(s(1)=i, s(2)=i,...,s(n)=i, s(n+1) \;!= i\big) = A(i,i)^{n-1}\big(1-A(i,i)\big)$$ # # In english this means that we are multiplying the transition probability of staying in the same state, $A(i,i)$, times the number of times we stayed in the same state, $n$, (note it is $n-1$ because we are given that we start in that state, hence there is no transition associated with it) times $1 - A(i,i)$, the probability of transitioning from that state. 
This leaves us with an expected value for $n$ of: # # $$E(n) = \sum np(n) = \sum_{n=1..\infty} nA(i,i)^{n-1}(1-A(i,i))$$ # # Note, in the above equation $p(n)$ is the probability that we will see state $i$ $n-1$ times after starting from $i$ and then see a state that is not $i$. Also, we know that the expected value of $n$ should be the sum of all possible values of $n$ times $p(n)$. # # # ### 2.1 Expected $n$ # So, we can now expand this function and calculate the two sums separately. # # $$E(n) = \sum_{n=1..\infty}nA(i,i)^{n-1}(1 - A(i,i)) = \sum nA(i, i)^{n-1} - \sum nA(i,i)^n$$ # # **First Sum**<br> # With our first sum, we can say that: # # $$S = \sum na(i, i)^{n-1}$$ # # $$S = 1 + 2a + 3a^2 + 4a^3+ ...$$ # # And we can then multiply that sum, $S$, by $a$, to get: # # $$aS = a + 2a^2 + 3a^3 + 4a^4+...$$ # # And then we can subtract $aS$ from $S$: # # $$S - aS = S'= 1 + a + a^2 + a^3+...$$ # # This $S'$ is another infinite sum, but it is one that is much easier to solve! # # $$S'= 1 + a + a^2 + a^3+...$$ # # And then $aS'$ is: # # $$aS' = a + a^2 + a^3+ + a^4 + ...$$ # # Which, when we then do $S' - aS'$, we end up with: # # $$S' - aS' = 1$$ # # $$S' = \frac{1}{1 - a}$$ # # And if we then substitute that value in for $S'$ above: # # $$S - aS = S'= 1 + a + a^2 + a^3+... = \frac{1}{1 - a}$$ # # $$S - aS = \frac{1}{1 - a}$$ # # $$S = \frac{1}{(1 - a)^2}$$ # # # **Second Sum**<br> # We can now look at our second sum: # # $$S = \sum na(i,i)^n$$ # # $$S = 1a + 2a^2 + 3a^3 +...$$ # # # $$Sa = 1a^2 + 2a^3 +...$$ # # $$S - aS = S' = a + a^2 + a^3 + ...$$ # # $$aS' = a^2 + a^3 + a^4 +...$$ # # $$S' - aS' = a$$ # # $$S' = \frac{a}{1 - a}$$ # # And we can plug back in $S'$ to get: # # $$S - aS = \frac{a}{1 - a}$$ # # $$S = \frac{a}{(1 - a)^2}$$ # # **Combine** <br> # We can now combine these two sums as follows: # # $$E(n) = \frac{1}{(1 - a)^2} - \frac{a}{(1-a)^2}$$ # # $$E(n) = \frac{1}{1-a}$$ # # **Calculate Number of Sick Days**<br> # So, how do we calculate the correct number of sick days? That is just: # # $$\frac{1}{1 - 0.8} = 5$$ # ## 3. SEO and Bounce Rate Optimization # We are now going to look at SEO and Bounch Rate Optimization. This is a problem that every developer and website owner can relate to. You have a website and obviously you would like to increase traffic, increase conversions, and avoid a high bounce rate (which could lead to google assigning your page a low ranking). What would a good way of modeling this data be? Without even looking at any code we can look at some examples of things that we want to know, and how they relate to markov models. # # ### 3.1 Arrival # First and foremost, how do people arrive on your page? Is it your home page? Your landing page? Well, this is just the very first page of what is hopefully a sequence of pages. So, the markov analogy here is that this is just the initial state distribution or $\pi$. So, once we have our markov model, the $\pi$ vector will tell us which of our pages a user is most likely to start on. # # ### 3.2 Sequences of Pages # What about sequences of pages? Well, if you think people are getting to your landing page, hitting the buy button, checking out, and then closing the browser window, you can test the validity of that assumption by calculating the probability of that sequence. Of course, the probability of any sequence is probability going to be much less than 1. This is because for a longer sequence, we have more multiplication, and hence smaller final numbers. 
We do have two alternatives however: # # > * 1) You can compare the probability of two different sequences. So, are people going through the entire checkout process? Or is it more probable that they are just bouncing? # * 2) Another option is to just find the transition probabilities themselves. These are conditional probabilities instead of joint probabilities. You want to know, once they have made it to the landing page, what is the probability of hitting buy. Then, once they have hit buy, what is the probability of them completing the checkout. # # ### 3.3 Bounce Rate # This is hard to measure, unless you are google and hence have analytics on nearly every page on the web. This is because once a user has left your site, you can no longer run code on their computer or track what they are doing. However, let's pretend that we can determine this information. Once we have done this, we can measure which page has the highest bounce rate. At this point we can manually analyze that page and ask our marketing people "what is different about this page that people don't find it useful/want to leave?" We can then address that problem, and the hopefully later analysis shows that the fixed page no longer has a high bounce right. In the markov model, we can just represents this as the null state. # # ### 3.4 Data # So, the data we are going to be working with has two columns: `last_page_id` and `next_page_id`. This can be interpreted as the current page and the next page. The site has 10 pages with the id's 0-9. We can represent start pages by making the current page -1, and the next page the actual page. We can represent the end of the page with two different codes, `B`(bounce) or `C` (close). In the case of bounce, the user saw the page and then immediately bounced. In the case of close, the user saw the page stayed and potentially saw some useful information, and then closed the window. So, you can imagine that our engineer may use time as a factor in determining if it is a bounce or a close. import numpy as np import pandas as pd # + """Goal here is to store start page and end page, and the count how many times that happens. After that we are going to turn it into a probability distribution. We can divide all transitions that start with specific start state, by row_sum""" transitions = {} # getting all specific transitions from start pg to end pg, tallying up # of times each occurs row_sums = {} # start date as key -> getting number of times each starting pg occurs # Collect our counts for line in open('../../../data/site/site_data.csv'): s, e = line.rstrip().split(',') # get start and end page transitions[(s, e)] = transitions.get((s, e), 0.) + 1 row_sums[s] = row_sums.get(s, 0.) + 1 # Normalize the counts so they become real probability distributions for k, v in transitions.items(): s, e = k transitions[k] = v / row_sums[s] # Calculate initial state distribution print('Initial state distribution') for k, v in transitions.items(): s, e = k if s == '-1': # this means it is the start of the sequence. print (e, v) # Which page has the highest bounce rate? for k, v in transitions.items(): s, e = k if e == 'B': print(f'Bounce rate for {s}: {v}') # - # We can see that page with `id` 9 has the highest value in the initial state distribution, so we are most likely to start on that page. We can then see that the page with highest bounce rate is also at page `id` 9. # ## 4. 
Build a 2nd-order language model and generate phrases
# So, we are now going to work with non-first-order markov chains for a little bit. In this example we are going to try to create a language model. We will first train a model on some data to determine the distribution of a word given the previous two words. We can then use this model to generate new phrases. Note that another step for this model would be to calculate the probability of a phrase.
#
# The data that we are going to look at is just a collection of Robert Frost poems. It is a text file with all of the poems concatenated together. So, the first thing we are going to want to do is tokenize each sentence and remove punctuation. It will look similar to this:
#
# ```
# def remove_punctuation(s):
#     return s.translate(str.maketrans('', '', string.punctuation))
#
# tokens = remove_punctuation(line.rstrip().lower()).split()
# ```
#
# Once we have tokenized each line, we want to perform various counts in addition to the second order model counts. We need to measure the initial distribution of words, or stated another way, the distribution of the first word of a sentence. We also want to know the distribution of the second word of a sentence. Neither of these has two previous words, so they are not second order. We could technically include them in the second order measurement by using `None` in place of the previous words, but we won't do that here. We also want to keep track of how sentences end (the end-of-sentence distribution, which will look similar to (w(t-2), w(t-1) -> END)), so we will include a special token for that too.
#
# When we do this counting, what we first want to do is create an array of all possibilities. So, for example, if we had two sentences:
#
# ```
# I love dogs
# I love cats
# ```
#
# Then we could have a dictionary where the key was `(I, love)` and the value was an array `[dogs, cats]`. If "I love" was also a standalone sentence, then the value would be `[dogs, cats, END]`. The function below can help us with this: we first check whether there is already a list for the key, create an empty one if not, and then append the value to it.
#
# ```
# def add2dict(d, k, v):
#     if k not in d:
#         d[k] = []
#     d[k].append(v)
# ```
#
# Once we have collected all of these arrays of possible next words, we need to turn them into **probability distributions**. For example, the array `[cat, cat, dog]` would become the dictionary `{"cat": 2/3, "dog": 1/3}`. Here is a function that can do this:
#
# ```
# def list2pdict(ts):
#     d = {}
#     n = len(ts)
#     for t in ts:
#         d[t] = d.get(t, 0.) + 1
#     for t, c in d.items():
#         d[t] = c / n
#     return d
# ```
#
# Next, we will need a function that can sample from these dictionaries. To do this we will generate a random number between 0 and 1, and then use the cumulative distribution of the words to pick a word given that random number. Here is a function that can do that:
#
# ```
# def sample_word(d):
#     p0 = np.random.random()
#     cumulative = 0
#     for t, p in d.items():
#         cumulative += p
#         if p0 < cumulative:
#             return t
#     assert(False)  # should never get here
# ```
#
# Because all of our distributions are structured as dictionaries, we can use the same function for all of them.

import numpy as np
import string

# +
"""3 dicts.
1st store pdist for the start of a phrase, then a second word dict which stores the distributions for the 2nd word of a sentence, and then we are going to have a dict for all second order transitions""" initial = {} second_word = {} transitions = {} def remove_punctuation(s): return s.translate(str.maketrans('', '', string.punctuation)) def add2dict(d, k, v): """Parameters: Dictionary, Key, Value""" if k not in d: d[k] = [] d[k].append(v) # Loop through file of poems for line in open('../../../data/poems/robert_frost.txt'): tokens = remove_punctuation(line.rstrip().lower()).split() # Get all tokens for specific line we are looping over T = len(tokens) # Length of sequence for i in range(T): # Loop through every token in sequence t = tokens[i] if i == 0: # We are looking at first word initial[t] = initial.get(t, 0.) + 1 else: t_1 = tokens[i - 1] if i == T - 1: # Looking at last word add2dict(transitions, (t_1, t), 'END') if i == 1: # second word of sentence, hence only 1 previous word add2dict(second_word, t_1, t) else: t_2 = tokens[i - 2] # Get second previous word add2dict(transitions, (t_2, t_1), t) # add previous and 2nd previous word as key, and current word as val # Normalize the distributions initial_total = sum(initial.values()) for t, c in initial.items(): initial[t] = c / initial_total # Take our list and turn it into a dictionary of probabilities def list2pdict(ts): d = {} n = len(ts) # get total number of values for t in ts: # look at each token d[t] = d.get(t, 0.) + 1 for t, c in d.items(): # go through dictionary, divide frequency by sum d[t] = c / n return d for t_1, ts in second_word.items(): second_word[t_1] = list2pdict(ts) for k, ts in transitions.items(): transitions[k] = list2pdict(ts) def sample_word(d): p0 = np.random.random() # Generate random number from 0 to 1 cumulative = 0 # cumulative count for all probabilities seen so far for t, p in d.items(): cumulative += p if p0 < cumulative: return t assert(False) # should never hit this """Function to generate a poem""" def generate(): for i in range(4): sentence = [] # initial word w0 = sample_word(initial) sentence.append(w0) # sample second word w1 = sample_word(second_word[w0]) sentence.append(w1) # second-order transitions until END -> enter infinite loop while True: w2 = sample_word(transitions[(w0, w1)]) # sample next word given previous two words if w2 == 'END': break sentence.append(w2) w0 = w1 w1 = w2 print(' '.join(sentence)) generate() # - # ## 5. Google's PageRank Algorithm # Markov models were even used in Google's PageRank algorithm. The basic problem we face is: # > * We have $M$ webpages that link to eachother, and we would like to assign importance scores $x(1),...,x(M)$ # * All of these scores are greater than or equal to 0 # * So, we want to assign a page rank to all of these pages # # How can we go about doing this? Well, we can think of a webpage as a sequence, and the page you are on as the state. Where does the ranking come from? Well, the ranking actually comes from the limiting distribution. That is, in the long run, the proportion of visits that will be spent on this page. Now, if you think "great that is all I need to know", slow down. How can we actually do this in practice? How do we train the markov model, and what are the values we assign to the state transition matrix? And how can we ensure that the limiting distribution exists and is unique? The key insight was that **we can use the linked structure of the web to determine the ranking**. 
#
# The main idea is that a *link to a page* is like a *vote for its importance*. So, as a first attempt we could just use a frequency count to measure the votes. Of course, that wouldn't be a valid probability distribution, so we could just divide each row by its sum to make it sum to 1. So we set:
#
# $$A(i, j) = \frac{1}{n(i)} \quad \text{if } i \text{ links to } j$$
# $$A(i, j) = 0 \quad \text{otherwise}$$
#
# Here $n(i)$ stands for the total number of outgoing links on page $i$, and you can confirm that the sum of a row is $\frac{n(i)}{n(i)} = 1$, so this is a valid markov matrix. However, we still aren't sure whether the limiting distribution is unique.
#
# ### 5.1 This is already a good start
# Let's keep in mind that the above solution already solves a few problems. For instance, let's say you are a spammer and you want to sell 1000 links on your webpage. Well, because the transition matrix must remain a valid probability matrix, the rows must sum to 1, which means that each of your links now only has a strength of $\frac{1}{1000}$. For example, the frequency matrix would look like:
#
# | |abc.com|amazon.com|facebook.com|github.com|
# |--- |--- |--- |--- |--- |
# |thespammer.com|1 |1 |1 |1 |
#
# And then if we transformed that into a probability matrix, each value would just be divided by the total number of links, 4:
#
# | |abc.com|amazon.com|facebook.com|github.com|
# |--- |--- |--- |--- |--- |
# |thespammer.com|0.25 |0.25 |0.25 |0.25 |
#
# You may then think, "I will just create 1000 pages and each of them will only have 1 link." Unfortunately, since nobody knows about those 1000 pages you just created, nobody is going to link to them, which means they are impossible to get to. So, in the limiting distribution, those states will have 0 probability because you can't even reach them, so their outgoing links are worthless. Remember, the markov chain's limiting distribution models the long-run proportion of visits to a state. So, if you never visit a state, its probability will be 0.
#
# We still have not ensured that the limiting distribution exists and is unique.
#
# ### 5.2 Perron-Frobenius Theorem
# How can we ensure that our model has a unique stationary distribution? In 1910, this was actually determined. It is known as the **Perron-Frobenius Theorem**, and it states that:
# > *If our transition matrix is a markov matrix (meaning that all of the rows sum to 1, and all of the values are strictly positive, i.e. there are no values that are 0), then the stationary distribution exists and is unique.*
#
# In fact, we can start in any initial state, and as time approaches infinity we will always end up with the same stationary distribution; therefore this is also the limiting distribution.
#
# So, how can we satisfy the PF criterion? Let's return to the idea of **smoothing**, which we first talked about when discussing how to train a markov model. The basic idea was that we can make things that were 0 non-zero, so there is still a small possibility that we can get to that state. This might be good news for the spammer. So, we can create a uniform transition matrix $U$, an $M \times M$ matrix whose entries are all $\frac{1}{M}$ ($M$ is the number of states). PageRank's solution was to take the matrix we had before and multiply it by 0.85, take the uniform matrix and multiply it by 0.15, and add them together to get the final PageRank matrix:
#
# $$G = 0.85A + 0.15U$$
#
# Now all of the elements are strictly positive, and we can convince ourselves that $G$ is still a valid markov matrix.
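#
# To make the last point concrete, here is a small numerical sketch (this cell is not part of the original notebook). It builds a tiny link matrix $A$ for a handful of invented pages, smooths it into $G = 0.85A + 0.15U$, and finds the limiting distribution by repeatedly applying $G$ to an arbitrary starting distribution (power iteration). The page count and link structure are made up purely for illustration.

# +
import numpy as np

# Hypothetical link structure: page i links to the pages listed in links[i].
links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}  # note that nobody links to page 3
M = 4

# Raw transition matrix: A(i, j) = 1 / n(i) if i links to j, and 0 otherwise.
A = np.zeros((M, M))
for i, outgoing in links.items():
    for j in outgoing:
        A[i, j] = 1.0 / len(outgoing)

# Smoothed PageRank matrix: G = 0.85 * A + 0.15 * U, where U is uniform.
U = np.full((M, M), 1.0 / M)
G = 0.85 * A + 0.15 * U

# Power iteration: start from any distribution and keep applying G until it settles.
pi = np.full(M, 1.0 / M)
for _ in range(100):
    pi = pi @ G

print("Rows of G sum to 1:", np.allclose(G.sum(axis=1), 1.0))
print("Limiting distribution (PageRank scores):", pi)
# -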
Machine_Learning/05-Hidden_Markov_Models-03-Markov-Models-Example-Problems-and-Applications.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Convolutional Neural Networks with Tensorflow # # "Deep Learning" is a general term that usually refers to the use of neural networks with multiple layers that synthesize the way the human brain learns and makes decisions. A convolutional neural network is a kind of neural network that extracts *features* from matrices of numeric values (often images) by convolving multiple filters over the matrix values to apply weights and identify patterns, such as edges, corners, and so on in an image. The numeric representations of these patterns are then passed to a fully-connected neural network layer to map the features to specific classes. # # ## Building a CNN # There are several commonly used frameworks for creating CNNs. In this notebook, we'll build a simple example CNN using Tensorflow. The example is a classification model that can classify an image as a circle, a triangle, or a square. # # ### Import framework # # First, let's import the Tensorflow libraries we'll need. # + tags=[] import tensorflow from tensorflow import keras print('TensorFlow version:',tensorflow.__version__) print('Keras version:',keras.__version__) # - # ### Preparing the Data # Before we can train the model, we need to prepare the data. We'll divide the feature values by 255 to normalize them as floating point values between 0 and 1, and we'll split the data so that we can use 70% of it to train the model, and hold back 30% to validate it. When loading the data, the data generator will assing "hot-encoded" numeric labels to indicate which class each image belongs to based on the subfolders in which the data is stored. In this case, there are three subfolders - *circle*, *square*, and *triangle*, so the labels will consist of three *0* or *1* values indicating which of these classes is associated with the image - for example the label [0 1 0] indicates that the image belongs to the second class (*square*). # + tags=[] from tensorflow.keras.preprocessing.image import ImageDataGenerator data_folder = 'data/shapes' img_size = (128, 128) batch_size = 30 print("Getting Data...") datagen = ImageDataGenerator(rescale=1./255, # normalize pixel values validation_split=0.3) # hold back 30% of the images for validation print("Preparing training dataset...") train_generator = datagen.flow_from_directory( data_folder, target_size=img_size, batch_size=batch_size, class_mode='categorical', subset='training') # set as training data print("Preparing validation dataset...") validation_generator = datagen.flow_from_directory( data_folder, target_size=img_size, batch_size=batch_size, class_mode='categorical', subset='validation') # set as validation data classnames = list(train_generator.class_indices.keys()) print("class names: ", classnames) # - # ### Defining the CNN # Now we're ready to create our model. This involves defining the layers for our CNN, and compiling them for multi-class classification. 
# + tags=[] # Define a CNN classifier network from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense # Define the model as a sequence of layers model = Sequential() # The input layer accepts an image and applies a convolution that uses 32 6x6 filters and a rectified linear unit activation function model.add(Conv2D(32, (6, 6), input_shape=train_generator.image_shape, activation='relu')) # Next we;ll add a max pooling layer with a 2x2 patch model.add(MaxPooling2D(pool_size=(2,2))) # We can add as many layers as we think necessary - here we'll add another convolution, max pooling, and dropout layer model.add(Conv2D(32, (6, 6), activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) # And another set model.add(Conv2D(32, (6, 6), activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) # A dropout layer randomly drops some nodes to reduce inter-dependencies (which can cause over-fitting) model.add(Dropout(0.2)) # Now we'll flatten the feature maps and generate an output layer with a predicted probability for each class model.add(Flatten()) model.add(Dense(train_generator.num_classes, activation='sigmoid')) # With the layers defined, we can now compile the model for categorical (multi-class) classification model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) print(model.summary()) # - # ### Training the Model # With the layers of the CNN defined, we're ready to train the model using our image data. In the example below, we use 5 iterations (*epochs*) to train the model in 30-image batches, holding back 30% of the data for validation. After each epoch, the loss function measures the error (*loss*) in the model and adjusts the weights (which were randomly generated for the first iteration) to try to improve accuracy. # # > **Note**: We're only using 5 epochs to minimze the training time for this simple example. A real-world CNN is usually trained over more epochs than this. CNN model training is processor-intensive, involving a lot of matrix and vector-based operations; so it's recommended to perform this on a system that can leverage GPUs, which are optimized for these kinds of calculation. This will take a while to complete on a CPU-based system - status will be displayed as the training progresses. # + tags=[] # Train the model over 5 epochs using 30-image batches and using the validation holdout dataset for validation num_epochs = 5 history = model.fit( train_generator, steps_per_epoch = train_generator.samples // batch_size, validation_data = validation_generator, validation_steps = validation_generator.samples // batch_size, epochs = num_epochs) # - # ### View the Loss History # We tracked average training and validation loss history for each epoch. We can plot these to verify that loss reduced as the model was trained, and to detect *overfitting* (which is indicated by a continued drop in training loss after validation loss has levelled out or started to increase). 
# + # %matplotlib inline from matplotlib import pyplot as plt epoch_nums = range(1,num_epochs+1) training_loss = history.history["loss"] validation_loss = history.history["val_loss"] plt.plot(epoch_nums, training_loss) plt.plot(epoch_nums, validation_loss) plt.xlabel('epoch') plt.ylabel('loss') plt.legend(['training', 'validation'], loc='upper right') plt.show() # - # ### Evaluate Model Performance # We can see the final accuracy based on the test data, but typically we'll want to explore performance metrics in a little more depth. Let's plot a confusion matrix to see how well the model is predicting each class. # + tags=[] # Tensorflow doesn't have a built-in confusion matrix metric, so we'll use SciKit-Learn import numpy as np from sklearn.metrics import confusion_matrix import matplotlib.pyplot as plt # %matplotlib inline print("Generating predictions from validation data...") # Get the image and label arrays for the first batch of validation data x_test = validation_generator[0][0] y_test = validation_generator[0][1] # Use the moedl to predict the class class_probabilities = model.predict(x_test) # The model returns a probability value for each class # The one with the highest probability is the predicted class predictions = np.argmax(class_probabilities, axis=1) # The actual labels are hot encoded (e.g. [0 1 0], so get the one with the value 1 true_labels = np.argmax(y_test, axis=1) # Plot the confusion matrix cm = confusion_matrix(true_labels, predictions) plt.imshow(cm, interpolation="nearest", cmap=plt.cm.Blues) plt.colorbar() tick_marks = np.arange(len(classnames)) plt.xticks(tick_marks, classnames, rotation=85) plt.yticks(tick_marks, classnames) plt.xlabel("Predicted Shape") plt.ylabel("True Shape") plt.show() # - # ### Using the Trained Model # Now that we've trained the model, we can use it to predict the class of a new image. # + tags=[] from tensorflow.keras import models from random import randint import os # %matplotlib inline # Function to create a random image (of a square, circle, or triangle) def create_image (size, shape): from random import randint import numpy as np from PIL import Image, ImageDraw xy1 = randint(10,40) xy2 = randint(60,100) col = (randint(0,200), randint(0,200), randint(0,200)) img = Image.new("RGB", size, (255, 255, 255)) draw = ImageDraw.Draw(img) if shape == 'circle': draw.ellipse([(xy1,xy1), (xy2,xy2)], fill=col) elif shape == 'triangle': draw.polygon([(xy1,xy1), (xy2,xy2), (xy2,xy1)], fill=col) else: # square draw.rectangle([(xy1,xy1), (xy2,xy2)], fill=col) del draw return np.array(img) # Save the trained model modelFileName = 'models/shape_classifier.h5' model.save(modelFileName) del model # deletes the existing model variable # Create a random test image classnames = os.listdir(os.path.join('data', 'shapes')) classnames.sort() img = create_image ((128,128), classnames[randint(0, len(classnames)-1)]) plt.axis('off') plt.imshow(img) # The model expects a batch of images as input, so we'll create an array of 1 image imgfeatures = img.reshape(1, img.shape[0], img.shape[1], img.shape[2]) # We need to format the input to match the training data # The generator loaded the values as floating point numbers # and normalized the pixel values, so... 
imgfeatures = imgfeatures.astype('float32') imgfeatures /= 255 # Use the classifier to predict the class model = models.load_model(modelFileName) # loads the saved model class_probabilities = model.predict(imgfeatures) # Find the class predictions with the highest predicted probability class_idx = np.argmax(class_probabilities, axis=1) print (classnames[int(class_idx[0])]) # - # In this notebook, you used Tensorflow to train an image classification model based on a convolutional neural network.
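#
# As an optional extension (this cell is not part of the original notebook), you can go one step beyond the confusion matrix and compute per-class precision and recall with scikit-learn. It assumes the `model`, `validation_generator`, and `classnames` variables defined earlier in this notebook are still in scope.

# +
import numpy as np
from sklearn.metrics import classification_report

# Gather predictions and true labels for one batch of validation data
x_val, y_val = validation_generator[0]
pred_classes = np.argmax(model.predict(x_val), axis=1)
true_classes = np.argmax(y_val, axis=1)

# Per-class precision, recall, and F1 score
print(classification_report(true_classes, pred_classes,
                            labels=list(range(len(classnames))),
                            target_names=classnames,
                            zero_division=0))
# -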
05b - Convolutional Neural Networks (Tensorflow).ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/QDaria/QDaria.github.io/blob/main/Copy_of_hello_many_worlds.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + [markdown] id="xLOXFOT5Q40E" # ##### Copyright 2020 The TensorFlow Authors. # + cellView="form" id="iiQkM5ZgQ8r2" #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # + [markdown] id="j6331ZSsQGY3" # # Hello, many worlds # + [markdown] id="i9Jcnb8bQQyd" # <table class="tfo-notebook-buttons" align="left"> # <td> # <a target="_blank" href="https://www.tensorflow.org/quantum/tutorials/hello_many_worlds"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> # </td> # <td> # <a target="_blank" href="https://colab.research.google.com/github/tensorflow/quantum/blob/master/docs/tutorials/hello_many_worlds.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> # </td> # <td> # <a target="_blank" href="https://github.com/tensorflow/quantum/blob/master/docs/tutorials/hello_many_worlds.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> # </td> # <td> # <a href="https://storage.googleapis.com/tensorflow_docs/quantum/docs/tutorials/hello_many_worlds.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> # </td> # </table> # + [markdown] id="6tYn2HaAUgH0" # This tutorial shows how a classical neural network can learn to correct qubit calibration errors. It introduces <a target="_blank" href="https://github.com/quantumlib/Cirq" class="external">Cirq</a>, a Python framework to create, edit, and invoke Noisy Intermediate Scale Quantum (NISQ) circuits, and demonstrates how Cirq interfaces with TensorFlow Quantum. # + [markdown] id="sPZoNKvpUaqa" # ## Setup # + id="TorxE5tnkvb2" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="04733134-0571-484b-f309-3b4b5382a635" # !pip install tensorflow==2.3.1 # + [markdown] id="FxkQA6oblNqI" # Install TensorFlow Quantum: # + id="saFHsRDpkvkH" colab={"base_uri": "https://localhost:8080/"} outputId="a6d478e5-8e66-4248-ad49-891c3f149e81" # !pip install tensorflow-quantum # + [markdown] id="F1L8h1YKUvIO" # Now import TensorFlow and the module dependencies: # + id="enZ300Bflq80" import tensorflow as tf import tensorflow_quantum as tfq import cirq import sympy import numpy as np # visualization tools # %matplotlib inline import matplotlib.pyplot as plt from cirq.contrib.svg import SVGCircuit # + [markdown] id="b08Mmbs8lr81" # ## 1. 
The Basics # + [markdown] id="y31qSRCczI-L" # ### 1.1 Cirq and parameterized quantum circuits # # Before exploring TensorFlow Quantum (TFQ), let's look at some <a target="_blank" href="https://github.com/quantumlib/Cirq" class="external">Cirq</a> basics. Cirq is a Python library for quantum computing from Google. You use it to define circuits, including static and parameterized gates. # # Cirq uses <a target="_blank" href="https://www.sympy.org" class="external">SymPy</a> symbols to represent free parameters. # + id="2yQdmhQLCrzQ" a, b = sympy.symbols('a b') # + [markdown] id="itUlpbKmDYNW" # The following code creates a two-qubit circuit using your parameters: # + id="Ps-pd2mndXs7" colab={"base_uri": "https://localhost:8080/", "height": 138} outputId="c18bf1d0-0f49-4b83-bc79-e8be00a6c0e2" # Create two qubits q0, q1 = cirq.GridQubit.rect(1, 2) # Create a circuit on these qubits using the parameters you created above. circuit = cirq.Circuit( cirq.rx(a).on(q0), cirq.ry(b).on(q1), cirq.CNOT(control=q0, target=q1)) SVGCircuit(circuit) # + [markdown] id="zcCX109cJUaz" # To evaluate circuits, you can use the `cirq.Simulator` interface. You replace free parameters in a circuit with specific numbers by passing in a `cirq.ParamResolver` object. The following code calculates the raw state vector output of your parameterized circuit: # + id="VMq7EayNRyQb" colab={"base_uri": "https://localhost:8080/"} outputId="a1ab4819-3c1b-4db9-b679-d83efd9fdedf" # Calculate a state vector with a=0.5 and b=-0.5. resolver = cirq.ParamResolver({a: 0.5, b: -0.5}) output_state_vector = cirq.Simulator().simulate(circuit, resolver).final_state_vector output_state_vector # + [markdown] id="-SUlLpXBeicF" # State vectors are not directly accessible outside of simulation (notice the complex numbers in the output above). To be physically realistic, you must specify a measurement, which converts a state vector into a real number that classical computers can understand. Cirq specifies measurements using combinations of the <a target="_blank" href="https://en.wikipedia.org/wiki/Pauli_matrices" class="external">Pauli operators</a> $\hat{X}$, $\hat{Y}$, and $\hat{Z}$. As illustration, the following code measures $\hat{Z}_0$ and $\frac{1}{2}\hat{Z}_0 + \hat{X}_1$ on the state vector you just simulated: # + id="hrSnOCi3ehr_" colab={"base_uri": "https://localhost:8080/"} outputId="f876235b-9c7a-4ecf-cbf8-502e9b469558" z0 = cirq.Z(q0) qubit_map={q0: 0, q1: 1} z0.expectation_from_state_vector(output_state_vector, qubit_map).real # + id="OZ0lWFXv6pII" colab={"base_uri": "https://localhost:8080/"} outputId="53a714a8-7595-40dd-c139-24ee51656660" z0x1 = 0.5 * z0 + cirq.X(q1) z0x1.expectation_from_state_vector(output_state_vector, qubit_map).real # + [markdown] id="bkC-yjIolDNr" # ### 1.2 Quantum circuits as tensors # # TensorFlow Quantum (TFQ) provides `tfq.convert_to_tensor`, a function that converts Cirq objects into tensors. This allows you to send Cirq objects to our <a target="_blank" href="https://www.tensorflow.org/quantum/api_docs/python/tfq/layers">quantum layers</a> and <a target="_blank" href="https://www.tensorflow.org/quantum/api_docs/python/tfq/get_expectation_op">quantum ops</a>. The function can be called on lists or arrays of Cirq Circuits and Cirq Paulis: # + id="1gLQjA02mIyy" colab={"base_uri": "https://localhost:8080/"} outputId="fc6f16cb-bab8-4e1d-aee7-0f2fdb3ea89a" # Rank 1 tensor containing 1 circuit. 
circuit_tensor = tfq.convert_to_tensor([circuit]) print(circuit_tensor.shape) print(circuit_tensor.dtype) # + [markdown] id="SJy6AkbU6pIP" # This encodes the Cirq objects as `tf.string` tensors that `tfq` operations decode as needed. # + id="aX_vEmCKmpQS" colab={"base_uri": "https://localhost:8080/"} outputId="6362ba7c-bdfc-40e1-8f86-6f60ff26398b" # Rank 1 tensor containing 2 Pauli operators. pauli_tensor = tfq.convert_to_tensor([z0, z0x1]) pauli_tensor.shape # + [markdown] id="FI1JLWe6m8JF" # ### 1.3 Batching circuit simulation # # TFQ provides methods for computing expectation values, samples, and state vectors. For now, let's focus on *expectation values*. # # The highest-level interface for calculating expectation values is the `tfq.layers.Expectation` layer, which is a `tf.keras.Layer`. In its simplest form, this layer is equivalent to simulating a parameterized circuit over many `cirq.ParamResolvers`; however, TFQ allows batching following TensorFlow semantics, and circuits are simulated using efficient C++ code. # # Create a batch of values to substitute for our `a` and `b` parameters: # + id="1fsVZhF5lIXp" batch_vals = np.array(np.random.uniform(0, 2 * np.pi, (5, 2)), dtype=np.float32) # + [markdown] id="Ip7jlGXIf22u" # Batching circuit execution over parameter values in Cirq requires a loop: # + id="RsfF53UCJtr9" colab={"base_uri": "https://localhost:8080/"} outputId="557d75af-6889-4925-ac06-450bd53fd506" cirq_results = [] cirq_simulator = cirq.Simulator() for vals in batch_vals: resolver = cirq.ParamResolver({a: vals[0], b: vals[1]}) final_state_vector = cirq_simulator.simulate(circuit, resolver).final_state_vector cirq_results.append( [z0.expectation_from_state_vector(final_state_vector, { q0: 0, q1: 1 }).real]) print('cirq batch results: \n {}'.format(np.array(cirq_results))) # + [markdown] id="W0JlZEu-f9Ac" # The same operation is simplified in TFQ: # + id="kGZVdcZ6y9lC" colab={"base_uri": "https://localhost:8080/"} outputId="bbe5bf35-7198-436d-8706-0c857d870499" tfq.layers.Expectation()(circuit, symbol_names=[a, b], symbol_values=batch_vals, operators=z0) # + [markdown] id="wppQ3TJ23mWC" # ## 2. Hybrid quantum-classical optimization # # Now that you've seen the basics, let's use TensorFlow Quantum to construct a *hybrid quantum-classical neural net*. You will train a classical neural net to control a single qubit. The control will be optimized to correctly prepare the qubit in the `0` or `1` state, overcoming a simulated systematic calibration error. This figure shows the architecture: # # <img src="https://github.com/tensorflow/quantum/blob/master/docs/tutorials/images/nn_control1.png?raw=1" width="1000"> # # Even without a neural network this is a straightforward problem to solve, but the theme is similar to the real quantum control problems you might solve using TFQ. It demonstrates an end-to-end example of a quantum-classical computation using the `tfq.layers.ControlledPQC` (Parametrized Quantum Circuit) layer inside of a `tf.keras.Model`. # + [markdown] id="NlyxF3Q-6pIe" # For the implementation of this tutorial, this is architecture is split into 3 parts: # # - The *input circuit* or *datapoint circuit*: The first three $R$ gates. # - The *controlled circuit*: The other three $R$ gates. # - The *controller*: The classical neural-network setting the parameters of the controlled circuit. # + [markdown] id="VjDf-nTM6ZSs" # ### 2.1 The controlled circuit definition # # Define a learnable single bit rotation, as indicated in the figure above. 
This will correspond to our controlled circuit. # + id="N-j7SCl-51-q" colab={"base_uri": "https://localhost:8080/", "height": 71} outputId="20077cd6-6c96-44ed-cf78-9cabef9833b7" # Parameters that the classical NN will feed values into. control_params = sympy.symbols('theta_1 theta_2 theta_3') # Create the parameterized circuit. qubit = cirq.GridQubit(0, 0) model_circuit = cirq.Circuit( cirq.rz(control_params[0])(qubit), cirq.ry(control_params[1])(qubit), cirq.rx(control_params[2])(qubit)) SVGCircuit(model_circuit) # + [markdown] id="wfjSbsvb7g9f" # ### 2.2 The controller # # Now define controller network: # + id="1v4CK2jD6pIj" # The classical neural network layers. controller = tf.keras.Sequential([ tf.keras.layers.Dense(10, activation='elu'), tf.keras.layers.Dense(3) ]) # + [markdown] id="QNimbsAt6pIm" # Given a batch of commands, the controller outputs a batch of control signals for the controlled circuit. # # The controller is randomly initialized so these outputs are not useful, yet. # + id="kZbYRTe16pIm" colab={"base_uri": "https://localhost:8080/"} outputId="bcac6c56-abb2-4c7f-9a0c-93b27fb9ff55" controller(tf.constant([[0.0],[1.0]])).numpy() # + [markdown] id="XizLExg56pIp" # ### 2.3 Connect the controller to the circuit # + [markdown] id="I5Pmy5-V6pIq" # Use `tfq` to connect the controller to the controlled circuit, as a single `keras.Model`. # # See the [Keras Functional API guide](https://www.tensorflow.org/guide/keras/functional) for more about this style of model definition. # # First define the inputs to the model: # + id="UfHF8NNE6pIr" # This input is the simulated miscalibration that the model will learn to correct. circuits_input = tf.keras.Input(shape=(), # The circuit-tensor has dtype `tf.string` dtype=tf.string, name='circuits_input') # Commands will be either `0` or `1`, specifying the state to set the qubit to. commands_input = tf.keras.Input(shape=(1,), dtype=tf.dtypes.float32, name='commands_input') # + [markdown] id="y9xN2mNl6pIu" # Next apply operations to those inputs, to define the computation. # + id="Zvt2YGmZ6pIu" dense_2 = controller(commands_input) # TFQ layer for classically controlled circuits. expectation_layer = tfq.layers.ControlledPQC(model_circuit, # Observe Z operators = cirq.Z(qubit)) expectation = expectation_layer([circuits_input, dense_2]) # + [markdown] id="Ip2jNA9h6pIy" # Now package this computation as a `tf.keras.Model`: # + id="Xs6EMhah6pIz" # The full Keras model is built from our layers. model = tf.keras.Model(inputs=[circuits_input, commands_input], outputs=expectation) # + [markdown] id="w7kgqm3t6pI3" # The network architecture is indicated by the plot of the model below. # Compare this model plot to the architecture diagram to verify correctness. # # Note: May require a system install of the `graphviz` package. # + id="ERXNPe4F6pI4" colab={"base_uri": "https://localhost:8080/", "height": 232} outputId="bb60e4d8-48c0-4253-976d-8326075d6ae9" tf.keras.utils.plot_model(model, show_shapes=True, dpi=70) # + [markdown] id="-Pbemgww6pI7" # This model takes two inputs: The commands for the controller, and the input-circuit whose output the controller is attempting to correct. # + [markdown] id="hpnIBK916pI8" # ### 2.4 The dataset # + [markdown] id="yJSC9qH76pJA" # The model attempts to output the correct correct measurement value of $\hat{Z}$ for each command. The commands and correct values are defined below. # + id="ciMIJAuH6pJA" # The command input values to the classical NN. 
commands = np.array([[0], [1]], dtype=np.float32) # The desired Z expectation value at output of quantum circuit. expected_outputs = np.array([[1], [-1]], dtype=np.float32) # + [markdown] id="kV1LM_hZ6pJD" # This is not the entire training dataset for this task. # Each datapoint in the dataset also needs an input circuit. # + [markdown] id="bbiVHvSYVW4H" # ### 2.4 Input circuit definition # # The input-circuit below defines the random miscalibration the model will learn to correct. # + id="_VYfzHffWo7n" random_rotations = np.random.uniform(0, 2 * np.pi, 3) noisy_preparation = cirq.Circuit( cirq.rx(random_rotations[0])(qubit), cirq.ry(random_rotations[1])(qubit), cirq.rz(random_rotations[2])(qubit) ) datapoint_circuits = tfq.convert_to_tensor([ noisy_preparation ] * 2) # Make two copied of this circuit # + [markdown] id="FvOkMyKI6pJI" # There are two copies of the circuit, one for each datapoint. # + id="6nk2Yr3e6pJJ" colab={"base_uri": "https://localhost:8080/"} outputId="5ac61df7-5981-46b4-8493-e879556f833f" datapoint_circuits.shape # + [markdown] id="gB--UhZZYgVY" # ### 2.5 Training # + [markdown] id="jATjqUIv6pJM" # With the inputs defined you can test-run the `tfq` model. # + id="Lwphqvs96pJO" colab={"base_uri": "https://localhost:8080/"} outputId="66a9e6b4-f2af-477c-b99d-f575d3ed03f8" model([datapoint_circuits, commands]).numpy() # + [markdown] id="9gyg5qSL6pJR" # Now run a standard training process to adjust these values towards the `expected_outputs`. # + id="dtPYqbNi8zeZ" optimizer = tf.keras.optimizers.Adam(learning_rate=0.05) loss = tf.keras.losses.MeanSquaredError() model.compile(optimizer=optimizer, loss=loss) history = model.fit(x=[datapoint_circuits, commands], y=expected_outputs, epochs=30, verbose=0) # + id="azE-qV0OaC1o" colab={"base_uri": "https://localhost:8080/", "height": 295} outputId="e88e4462-eccc-4d2c-f61f-1001f788846c" plt.plot(history.history['loss']) plt.title("Learning to Control a Qubit") plt.xlabel("Iterations") plt.ylabel("Error in Control") plt.show() # + [markdown] id="GTd5DGcRmmOK" # From this plot you can see that the neural network has learned to overcome the systematic miscalibration. # + [markdown] id="C2RfWismj66S" # ### 2.6 Verify outputs # Now use the trained model, to correct the qubit calibration errors. With Cirq: # + id="RoIlb7r7j5SY" colab={"base_uri": "https://localhost:8080/"} outputId="7e75fe1a-dde6-4f68-c618-2158c5d28fe4" def check_error(command_values, desired_values): """Based on the value in `command_value` see how well you could prepare the full circuit to have `desired_value` when taking expectation w.r.t. Z.""" params_to_prepare_output = controller(command_values).numpy() full_circuit = noisy_preparation + model_circuit # Test how well you can prepare a state to get expectation the expectation # value in `desired_values` for index in [0, 1]: state = cirq_simulator.simulate( full_circuit, {s:v for (s,v) in zip(control_params, params_to_prepare_output[index])} ).final_state_vector expt = cirq.Z(qubit).expectation_from_state_vector(state, {qubit: 0}).real print(f'For a desired output (expectation) of {desired_values[index]} with' f' noisy preparation, the controller\nnetwork found the following ' f'values for theta: {params_to_prepare_output[index]}\nWhich gives an' f' actual expectation of: {expt}\n') check_error(commands, expected_outputs) # + [markdown] id="wvW_ZDwmsws6" # The value of the loss function during training provides a rough idea of how well the model is learning. 
The lower the loss, the closer the expectation values in the above cell is to `desired_values`. If you aren't as concerned with the parameter values, you can always check the outputs from above using `tfq`: # + id="aYskLTacs8Ku" colab={"base_uri": "https://localhost:8080/"} outputId="957f5b96-6693-4db0-fb8f-ab885d5e0d20" model([datapoint_circuits, commands]) # + [markdown] id="jNrW0NXR-lDC" # ## 3 Learning to prepare eigenstates of different operators # # The choice of the $\pm \hat{Z}$ eigenstates corresponding to 1 and 0 was arbitrary. You could have just as easily wanted 1 to correspond to the $+ \hat{Z}$ eigenstate and 0 to correspond to the $-\hat{X}$ eigenstate. One way to accomplish this is by specifying a different measurement operator for each command, as indicated in the figure below: # # <img src="https://github.com/tensorflow/quantum/blob/master/docs/tutorials/images/nn_control2.png?raw=1" width="1000"> # # This requires use of <code>tfq.layers.Expectation</code>. Now your input has grown to include three objects: circuit, command, and operator. The output is still the expectation value. # + [markdown] id="Ci3WMZ9CjEM1" # ### 3.1 New model definition # # Lets take a look at the model to accomplish this task: # + id="hta0G3Nc6pJY" # Define inputs. commands_input = tf.keras.layers.Input(shape=(1), dtype=tf.dtypes.float32, name='commands_input') circuits_input = tf.keras.Input(shape=(), # The circuit-tensor has dtype `tf.string` dtype=tf.dtypes.string, name='circuits_input') operators_input = tf.keras.Input(shape=(1,), dtype=tf.dtypes.string, name='operators_input') # + [markdown] id="dtdnkrZm6pJb" # Here is the controller network: # + id="n_aTG4g3-y0F" # Define classical NN. controller = tf.keras.Sequential([ tf.keras.layers.Dense(10, activation='elu'), tf.keras.layers.Dense(3) ]) # + [markdown] id="q9aN2ciy6pJf" # Combine the circuit and the controller into a single `keras.Model` using `tfq`: # + id="IMHjiKit6pJg" dense_2 = controller(commands_input) # Since you aren't using a PQC or ControlledPQC you must append # your model circuit onto the datapoint circuit tensor manually. full_circuit = tfq.layers.AddCircuit()(circuits_input, append=model_circuit) expectation_output = tfq.layers.Expectation()(full_circuit, symbol_names=control_params, symbol_values=dense_2, operators=operators_input) # Contruct your Keras model. two_axis_control_model = tf.keras.Model( inputs=[circuits_input, commands_input, operators_input], outputs=[expectation_output]) # + [markdown] id="VQTM6CCiD4gU" # ### 3.2 The dataset # # Now you will also include the operators you wish to measure for each datapoint you supply for `model_circuit`: # + id="4gw_L3JG0_G0" # The operators to measure, for each command. operator_data = tfq.convert_to_tensor([[cirq.X(qubit)], [cirq.Z(qubit)]]) # The command input values to the classical NN. commands = np.array([[0], [1]], dtype=np.float32) # The desired expectation value at output of quantum circuit. expected_outputs = np.array([[1], [-1]], dtype=np.float32) # + [markdown] id="ALCKSvwh0_G2" # ### 3.3 Training # # Now that you have your new inputs and outputs you can train once again using keras. 
# + id="nFuGA73MAA4p" colab={"base_uri": "https://localhost:8080/"} outputId="7de53d9b-130e-4dea-df57-59c1d4aba063" optimizer = tf.keras.optimizers.Adam(learning_rate=0.05) loss = tf.keras.losses.MeanSquaredError() two_axis_control_model.compile(optimizer=optimizer, loss=loss) history = two_axis_control_model.fit( x=[datapoint_circuits, commands, operator_data], y=expected_outputs, epochs=30, verbose=1) # + id="Cf_G-GdturLL" colab={"base_uri": "https://localhost:8080/", "height": 295} outputId="995578d5-2d90-40be-b2c8-d22886ea775e" plt.plot(history.history['loss']) plt.title("Learning to Control a Qubit") plt.xlabel("Iterations") plt.ylabel("Error in Control") plt.show() # + [markdown] id="sdCPDH9NlJBl" # The loss function has dropped to zero. # + [markdown] id="NzY8eSVm6pJs" # The `controller` is available as a stand-alone model. Call the controller, and check its response to each command signal. It would take some work to correctly compare these outputs to the contents of `random_rotations`. # + id="uXmH0TQ76pJt" colab={"base_uri": "https://localhost:8080/"} outputId="cb935dd1-8a59-4706-a4b6-441ca620948f" controller.predict(np.array([0,1])) # + [markdown] id="n2WtXnsxubD2" # Success: See if you can adapt the `check_error` function from your first model to work with this new model architecture.
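#
# One possible adaptation of `check_error` is sketched below (it is not the tutorial's official solution): measure $\hat{X}$ for command 0 and $\hat{Z}$ for command 1, mirroring `operator_data`. It assumes `controller`, `noisy_preparation`, `model_circuit`, `control_params`, `qubit`, and `cirq_simulator` from the cells above are still in scope.

# +
def check_error_two_axis(command_values, desired_values):
    """For each command, resolve the controller's angles into the full circuit and
    compare the simulated expectation value against the desired value."""
    params_to_prepare_output = controller(command_values).numpy()
    full_circuit = noisy_preparation + model_circuit
    measured_ops = [cirq.X(qubit), cirq.Z(qubit)]  # one operator per command

    for index in [0, 1]:
        state = cirq_simulator.simulate(
            full_circuit,
            {s: v for (s, v) in zip(control_params, params_to_prepare_output[index])}
        ).final_state_vector
        expt = measured_ops[index].expectation_from_state_vector(state, {qubit: 0}).real
        print(f'Command {index}: desired expectation {desired_values[index][0]}, '
              f'actual expectation {expt:.4f}')


check_error_two_axis(commands, expected_outputs)
# -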
Copy_of_hello_many_worlds.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Define the Convolutional Neural Network # # After you've looked at the data you're working with and, in this case, know the shapes of the images and of the keypoints, you are ready to define a convolutional neural network that can *learn* from this data. # # In this notebook and in `models.py`, you will: # 1. Define a CNN with images as input and keypoints as output # 2. Construct the transformed FaceKeypointsDataset, just as before # 3. Train the CNN on the training data, tracking loss # 4. See how the trained model performs on test data # 5. If necessary, modify the CNN structure and model hyperparameters, so that it performs *well* **\*** # # **\*** What does *well* mean? # # "Well" means that the model's loss decreases during training **and**, when applied to test image data, the model produces keypoints that closely match the true keypoints of each face. And you'll see examples of this later in the notebook. # # --- # # ## CNN Architecture # # Recall that CNN's are defined by a few types of layers: # * Convolutional layers # * Maxpooling layers # * Fully-connected layers # # You are required to use the above layers and encouraged to add multiple convolutional layers and things like dropout layers that may prevent overfitting. You are also encouraged to look at literature on keypoint detection, such as [this paper](https://arxiv.org/pdf/1710.00977.pdf), to help you determine the structure of your network. # # # ### TODO: Define your model in the provided file `models.py` file # # This file is mostly empty but contains the expected name and some TODO's for creating your model. # # --- # ## PyTorch Neural Nets # # To define a neural network in PyTorch, you define the layers of a model in the function `__init__` and define the feedforward behavior of a network that employs those initialized layers in the function `forward`, which takes in an input image tensor, `x`. The structure of this Net class is shown below and left for you to fill in. # # Note: During training, PyTorch will be able to perform backpropagation by keeping track of the network's feedforward behavior and using autograd to calculate the update to the weights in the network. # # #### Define the Layers in ` __init__` # As a reminder, a conv/pool layer may be defined like this (in `__init__`): # ``` # # 1 input image channel (for grayscale images), 32 output channels/feature maps, 3x3 square convolution kernel # self.conv1 = nn.Conv2d(1, 32, 3) # # # maxpool that uses a square window of kernel_size=2, stride=2 # self.pool = nn.MaxPool2d(2, 2) # ``` # # #### Refer to Layers in `forward` # Then referred to in the `forward` function like this, in which the conv1 layer has a ReLu activation applied to it before maxpooling is applied: # ``` # x = self.pool(F.relu(self.conv1(x))) # ``` # # Best practice is to place any layers whose weights will change during the training process in `__init__` and refer to them in the `forward` function; any layers or functions that always behave in the same way, such as a pre-defined activation function, should appear *only* in the `forward` function. # #### Why models.py # # You are tasked with defining the network in the `models.py` file so that any models you define can be saved and loaded by name in different notebooks in this project directory. 
For example, by defining a CNN class called `Net` in `models.py`, you can then create that same architecture in this and other notebooks by simply importing the class and instantiating a model: # ``` # from models import Net # net = Net() # ``` # + # import the usual resources import matplotlib.pyplot as plt import numpy as np # watch for any changes in model.py, if it changes, re-load it automatically # %load_ext autoreload # %autoreload 2 # + ## TODO: Define the Net in models.py import torch import torch.nn as nn import torch.nn.functional as F ## TODO: Once you've define the network, you can instantiate it # one example conv layer has been provided for you from models import Net net = Net() print(net) # - # ## Transform the dataset # # To prepare for training, create a transformed dataset of images and keypoints. # # ### TODO: Define a data transform # # In PyTorch, a convolutional neural network expects a torch image of a consistent size as input. For efficient training, and so your model's loss does not blow up during training, it is also suggested that you normalize the input images and keypoints. The necessary transforms have been defined in `data_load.py` and you **do not** need to modify these; take a look at this file (you'll see the same transforms that were defined and applied in Notebook 1). # # To define the data transform below, use a [composition](http://pytorch.org/tutorials/beginner/data_loading_tutorial.html#compose-transforms) of: # 1. Rescaling and/or cropping the data, such that you are left with a square image (the suggested size is 224x224px) # 2. Normalizing the images and keypoints; turning each RGB image into a grayscale image with a color range of [0, 1] and transforming the given keypoints into a range of [-1, 1] # 3. Turning these images and keypoints into Tensors # # These transformations have been defined in `data_load.py`, but it's up to you to call them and create a `data_transform` below. **This transform will be applied to the training data and, later, the test data**. It will change how you go about displaying these images and keypoints, but these steps are essential for efficient training. # # As a note, should you want to perform data augmentation (which is optional in this project), and randomly rotate or shift these images, a square image size will be useful; rotating a 224x224 image by 90 degrees will result in the same shape of output. # + from torch.utils.data import Dataset, DataLoader from torchvision import transforms, utils # the dataset we created in Notebook 1 is copied in the helper file `data_load.py` from data_load import FacialKeypointsDataset # the transforms we defined in Notebook 1 are in the helper file `data_load.py` from data_load import Rescale, RandomCrop, Normalize, ToTensor ## TODO: define the data_transform using transforms.Compose([all tx's, . , .]) # order matters! i.e. 
rescaling should come before a smaller crop data_transform = transforms.Compose([Rescale((224, 224)), Normalize(), ToTensor()]) # testing that you've defined a transform assert(data_transform is not None), 'Define a data_transform' # + # create the transformed dataset transformed_dataset = FacialKeypointsDataset(csv_file='data/training_frames_keypoints.csv', root_dir='data/training/', transform=data_transform) print('Number of images: ', len(transformed_dataset)) # iterate through the transformed dataset and print some stats about the first few samples for i in range(4): sample = transformed_dataset[i] print(i, sample['image'].size(), sample['keypoints'].size()) # - # ## Batching and loading data # # Next, having defined the transformed dataset, we can use PyTorch's DataLoader class to load the training data in batches of whatever size as well as to shuffle the data for training the model. You can read more about the parameters of the DataLoader, in [this documentation](http://pytorch.org/docs/master/data.html). # # #### Batch size # Decide on a good batch size for training your model. Try both small and large batch sizes and note how the loss decreases as the model trains. # # **Note for Windows users**: Please change the `num_workers` to 0 or you may face some issues with your DataLoader failing. # + # load training data in batches batch_size = 12 train_loader = DataLoader(transformed_dataset, batch_size=batch_size, shuffle=True, num_workers=4) # - # ## Before training # # Take a look at how this model performs before it trains. You should see that the keypoints it predicts start off in one spot and don't match the keypoints on a face at all! It's interesting to visualize this behavior so that you can compare it to the model after training and see how the model has improved. # # #### Load in the test dataset # # The test dataset is one that this model has *not* seen before, meaning it has not trained with these images. We'll load in this test data and before and after training, see how your model performs on this set! # # To visualize this test data, we have to go through some un-transformation steps to turn our images into python images from tensors and to turn our keypoints back into a recognizable range. # + # load in the test data, using the dataset class # AND apply the data_transform you defined above # create the test dataset test_dataset = FacialKeypointsDataset(csv_file='data/test_frames_keypoints.csv', root_dir='data/test/', transform=data_transform) print ("test dataset", len(test_dataset)) # + # load test data in batches batch_size = 10 test_loader = DataLoader(test_dataset, batch_size=batch_size, shuffle=True, num_workers=4) # - # ## Apply the model on a test sample # # To test the model on a test sample of data, you have to follow these steps: # 1. Extract the image and ground truth keypoints from a sample # 2. Make sure the image is a FloatTensor, which the model expects. # 3. Forward pass the image through the net to get the predicted, output keypoints. # # This function test how the network performs on the first batch of test data. It returns the images, the transformed images, the predicted keypoints (produced by the model), and the ground truth keypoints. 
# + # test the model on a batch of test images def net_sample_output(): # iterate through the test dataset for i, sample in enumerate(test_loader): # get sample data: images and ground truth keypoints images = sample['image'] key_pts = sample['keypoints'] # convert images to FloatTensors images = images.type(torch.FloatTensor) # forward pass to get net output output_pts = net(images) # reshape to batch_size x 68 x 2 pts output_pts = output_pts.view(output_pts.size()[0], 68, -1) # break after first image is tested if i == 0: return images, output_pts, key_pts # - # #### Debugging tips # # If you get a size or dimension error here, make sure that your network outputs the expected number of keypoints! Or if you get a Tensor type error, look into changing the above code that casts the data into float types: `images = images.type(torch.FloatTensor)`. # + # call the above function # returns: test images, test predicted keypoints, test ground truth keypoints test_images, test_outputs, gt_pts = net_sample_output() # print out the dimensions of the data to see if they make sense print(test_images.data.size()) print(test_outputs.data.size()) print(gt_pts.size()) # - # ## Visualize the predicted keypoints # # Once we've had the model produce some predicted output keypoints, we can visualize these points in a way that's similar to how we've displayed this data before, only this time, we have to "un-transform" the image/keypoint data to display it. # # Note that I've defined a *new* function, `show_all_keypoints` that displays a grayscale image, its predicted keypoints and its ground truth keypoints (if provided). def show_all_keypoints(image, predicted_key_pts, gt_pts=None): """Show image with predicted keypoints""" # image is grayscale plt.imshow(image, cmap='gray') plt.scatter(predicted_key_pts[:, 0], predicted_key_pts[:, 1], s=20, marker='.', c='m') # plot ground truth points as green pts if gt_pts is not None: plt.scatter(gt_pts[:, 0], gt_pts[:, 1], s=20, marker='.', c='g') # #### Un-transformation # # Next, you'll see a helper function. `visualize_output` that takes in a batch of images, predicted keypoints, and ground truth keypoints and displays a set of those images and their true/predicted keypoints. # # This function's main role is to take batches of image and keypoint data (the input and output of your CNN), and transform them into numpy images and un-normalized keypoints (x, y) for normal display. The un-transformation process turns keypoints and images into numpy arrays from Tensors *and* it undoes the keypoint normalization done in the Normalize() transform; it's assumed that you applied these transformations when you loaded your test data. 
# + # visualize the output # by default this shows a batch of 10 images def visualize_output(test_images, test_outputs, gt_pts=None, batch_size=10): for i in range(batch_size): plt.figure(figsize=(20,10)) ax = plt.subplot(1, batch_size, i+1) # un-transform the image data image = test_images[i].data # get the image from it's wrapper image = image.numpy() # convert to numpy array from a Tensor image = np.transpose(image, (1, 2, 0)) # transpose to go from torch to numpy image # un-transform the predicted key_pts data predicted_key_pts = test_outputs[i].data predicted_key_pts = predicted_key_pts.numpy() # undo normalization of keypoints predicted_key_pts = predicted_key_pts*50.0+100 # plot ground truth points for comparison, if they exist ground_truth_pts = None if gt_pts is not None: ground_truth_pts = gt_pts[i] ground_truth_pts = ground_truth_pts*50.0+100 # call show_all_keypoints show_all_keypoints(np.squeeze(image), predicted_key_pts, ground_truth_pts) plt.axis('off') plt.show() # call it visualize_output(test_images, test_outputs, gt_pts) # - # ## Training # # #### Loss function # Training a network to predict keypoints is different than training a network to predict a class; instead of outputting a distribution of classes and using cross entropy loss, you may want to choose a loss function that is suited for regression, which directly compares a predicted value and target value. Read about the various kinds of loss functions (like MSE or L1/SmoothL1 loss) in [this documentation](http://pytorch.org/docs/master/_modules/torch/nn/modules/loss.html). # # ### TODO: Define the loss and optimization # # Next, you'll define how the model will train by deciding on the loss function and optimizer. # # --- # + ## TODO: Define the loss and optimization import torch.optim as optim criterion = torch.nn.SmoothL1Loss() optimizer = optim.Adam(net.parameters(), lr = 0.001) # - # ## Training and Initial Observation # # Now, you'll train on your batched training data from `train_loader` for a number of epochs. # # To quickly observe how your model is training and decide on whether or not you should modify it's structure or hyperparameters, you're encouraged to start off with just one or two epochs at first. As you train, note how your the model's loss behaves over time: does it decrease quickly at first and then slow down? Does it take a while to decrease in the first place? What happens if you change the batch size of your training data or modify your loss function? etc. # # Use these initial observations to make changes to your model and decide on the best architecture before you train for many epochs and create a final model. 
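# Before running the training loop below, it can help to see how SmoothL1 (chosen above) differs from plain MSE: SmoothL1 behaves like MSE for small errors but grows only linearly for large ones, so a few badly-predicted keypoints don't dominate the loss. A minimal comparison on made-up tensors (not project data):

# +
# compare MSE and SmoothL1 on toy values, including one large "outlier" error
import torch
import torch.nn as nn

pred = torch.tensor([0.0, 0.1, 5.0])    # hypothetical predictions
target = torch.tensor([0.0, 0.0, 0.0])  # hypothetical targets

print('MSE loss:      ', nn.MSELoss()(pred, target).item())
print('SmoothL1 loss: ', nn.SmoothL1Loss()(pred, target).item())
# -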
def train_net(n_epochs): # prepare the net for training net.train() for epoch in range(n_epochs): # loop over the dataset multiple times running_loss = 0.0 # train on batches of data, assumes you already have train_loader for batch_i, data in enumerate(train_loader): # get the input images and their corresponding labels images = data['image'] key_pts = data['keypoints'] # flatten pts key_pts = key_pts.view(key_pts.size(0), -1) # convert variables to floats for regression loss key_pts = key_pts.type(torch.FloatTensor) images = images.type(torch.FloatTensor) # forward pass to get outputs output_pts = net(images) # calculate the loss between predicted and target keypoints loss = criterion(output_pts, key_pts) # zero the parameter (weight) gradients optimizer.zero_grad() # backward pass to calculate the weight gradients loss.backward() # update the weights optimizer.step() # print loss statistics # to convert loss into a scalar and add it to the running_loss, use .item() running_loss += loss.item() if batch_i % 10 == 9: # print every 10 batches print('Epoch: {}, Batch: {}, Avg. Loss: {}'.format(epoch + 1, batch_i+1, running_loss/10)) running_loss = 0.0 print('Finished Training') # + # train your network n_epochs = 2 # start small, and increase when you've decided on your model structure and hyperparams # sum6 = 3+3+4 + 2 train_net(n_epochs) # - # ## Test data # # See how your model performs on previously unseen, test data. We've already loaded and transformed this data, similar to the training data. Next, run your trained model on these images to see what kind of keypoints are produced. You should be able to see if your model is fitting each new face it sees, if the points are distributed randomly, or if the points have actually overfitted the training data and do not generalize. # + # get a sample of test data again test_images, test_outputs, gt_pts = net_sample_output() print(test_images.data.size()) print(test_outputs.data.size()) print(gt_pts.size()) # key_pts = key_pts.type(torch.FloatTensor) loss = criterion(test_outputs.type(torch.FloatTensor), gt_pts.type(torch.FloatTensor)) # backward pass to calculate the weight gradients loss.backward() print ("Loss ", loss.item() / gt_pts.size(0)) # + ## TODO: visualize your test output # you can use the same function as before, by un-commenting the line below: visualize_output(test_images, test_outputs, gt_pts) # - # Once you've found a good model (or two), save your model so you can load it and use it later! # + ## TODO: change the name to something uniqe for each new model model_dir = 'saved_models/' model_name = 'keypoints_model_6_1.pt' # after training, save your model parameters in the dir 'saved_models' torch.save(net.state_dict(), model_dir+model_name) # - # After you've trained a well-performing model, answer the following questions so that we have some insight into your training and architecture selection process. Answering all questions is required to pass this project. # ### Question 1: What optimization and loss functions did you choose and why? # # **Answer**: As an optimization fucntion I chose Adam because it usually converges faster and SmoothL1Loss as a loss function because it seemed to give slightly better results. # ### Question 2: What kind of network architecture did you start with and how did it change as you tried different architectures? Did you decide to add more convolutional layers or any layers to avoid overfitting the data? 
# **Answer**: I tried approximately 10 different model architectures, starting from a very simple one with 1 conv layer, 1 max pooling layer and 1 fully connected layer, and then added more layers and introduced batch norm and dropout; the final model contains 3 conv layers, 3 max pooling layers and 2 fully connected layers.
#
# To reduce overfitting I added dropout after each conv layer and batch norm between fc1 and fc2. Altogether I added 3 conv layers.

# ### Question 3: How did you decide on the number of epochs and batch_size to train your model?

# **Answer**: First, I chose 2 epochs and checked whether the loss was decreasing. If it did decrease, I trained for 3 more epochs.
# If the loss did not change during the last 2 epochs, I decreased the learning rate and tried 2 more epochs. If the loss still didn't decrease, I stopped training.
# I decided on batch_size=16 because I could not fit a larger one in memory.

# ## Feature Visualization
#
# Sometimes, neural networks are thought of as a black box: given some input, they learn to produce some output. CNNs are actually learning to recognize a variety of spatial patterns, and you can visualize what each convolutional layer has been trained to recognize by looking at the weights that make up each convolutional kernel and applying those one at a time to a sample image. This technique is called feature visualization and it's useful for understanding the inner workings of a CNN.

# In the cell below, you can see how to extract a single filter (by index) from your first convolutional layer. The filter should appear as a grayscale grid.

# +
# Get the weights in the first conv layer, "conv1"
# if necessary, change this to reflect the name of your first conv layer
weights1 = net.conv6_3.weight.data
w = weights1.numpy()

filter_index = 0

print(w[filter_index][0])
print(w[filter_index][0].shape)

# display the filter weights
plt.imshow(w[filter_index][0], cmap='gray')
# -

# ## Feature maps
#
# Each CNN has at least one convolutional layer that is composed of stacked filters (also known as convolutional kernels). As a CNN trains, it learns what weights to include in its convolutional kernels, and when these kernels are applied to some input image, they produce a set of **feature maps**. So, feature maps are just sets of filtered images; they are the images produced by applying a convolutional kernel to an input image. These maps show us the features that the different layers of the neural network learn to extract. For example, you might imagine a convolutional kernel that detects the vertical edges of a face or another one that detects the corners of eyes. You can see what kind of features each of these kernels detects by applying them to an image. One such example is shown below; from the way it brings out the lines in the image, you might characterize this as an edge detection filter.
#
# <img src='images/feature_map_ex.png' width=50% height=50%/>
#
#
# Next, choose a test image and filter it with one of the convolutional kernels in your trained CNN; look at the filtered output to get an idea what that particular kernel detects.
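# One way to do this without leaving PyTorch is to run a test image through just one convolutional layer of the trained network and look at a single channel of the result; the cv2-based approach in the TODO cell below is an equivalent alternative. A minimal sketch, assuming `net`, `test_loader`, and `plt` are available and using the `conv6_1` layer name referenced in the surrounding cells (adjust the attribute name to match your architecture):

# +
# minimal sketch: compute a feature map with the trained conv layer itself
import torch

sample = next(iter(test_loader))
image = sample['image'].type(torch.FloatTensor)[:1]   # one image, shape (1, 1, H, W)

with torch.no_grad():
    fmap = net.conv6_1(image)        # apply only this conv layer

plt.imshow(fmap[0, 0].numpy(), cmap='gray')   # response of the first filter
plt.show()
# -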
# # ### TODO: Filter an image to see the effect of a convolutional kernel # --- # + #TODO: load in and display any image from the transformed test dataset test_obj = next(iter(test_loader)) test_img = test_obj["image"].data.numpy()[0][0] fig=plt.figure(figsize=(5, 5)) plt.imshow(test_img, cmap='gray') ## TODO: Using cv's filter2D function, ## apply a specific set of filter weights (like the one displayed above) to the test image import cv2 weights = net.conv6_1.weight.data w = weights.numpy() c = cv2.filter2D(test_img, -1, w[0][0]) fig=plt.figure(figsize=(5, 5)) plt.imshow(c, cmap='gray') # - # ### Question 4: Choose one filter from your trained CNN and apply it to a test image; what purpose do you think it plays? What kind of feature do you think it detects? # # **Answer**: It seems it has learned to blur out the noise. # --- # ## Moving on! # # Now that you've defined and trained your model (and saved the best model), you are ready to move on to the last notebook, which combines a face detector with your saved model to create a facial keypoint detection system that can predict the keypoints on *any* face in an image!
2. Define the Network Architecture.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.6 # language: python # name: python36 # --- # Copyright (c) Microsoft Corporation. All rights reserved. # # Licensed under the MIT License. # ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/training-with-deep-learning/train-hyperparameter-tune-deploy-with-keras/train-hyperparameter-tune-deploy-with-keras.png) # + [markdown] nbpresent={"id": "bf74d2e9-2708-49b1-934b-e0ede342f475"} # # Training, hyperparameter tune, and deploy with Keras # # ## Introduction # This tutorial shows how to train a simple deep neural network using the MNIST dataset and Keras on Azure Machine Learning. MNIST is a popular dataset consisting of 70,000 grayscale images. Each image is a handwritten digit of `28x28` pixels, representing number from 0 to 9. The goal is to create a multi-class classifier to identify the digit each image represents, and deploy it as a web service in Azure. # # For more information about the MNIST dataset, please visit [Yan LeCun's website](http://yann.lecun.com/exdb/mnist/). # # ## Prerequisite: # * Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning # * If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) to: # * install the AML SDK # * create a workspace and its configuration file (`config.json`) # * For local scoring test, you will also need to have `tensorflow` and `keras` installed in the current Jupyter kernel. # - # Let's get started. First let's import some Python libraries. # + nbpresent={"id": "c377ea0c-0cd9-4345-9be2-e20fb29c94c3"} # %matplotlib inline import numpy as np import os import matplotlib.pyplot as plt # + nbpresent={"id": "edaa7f2f-2439-4148-b57a-8c794c0945ec"} import azureml from azureml.core import Workspace # check core SDK version number print("Azure ML SDK Version: ", azureml.core.VERSION) # - # ## Initialize workspace # Initialize a [Workspace](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#workspace) object from the existing workspace you created in the Prerequisites step. `Workspace.from_config()` creates a workspace object from the details stored in `config.json`. ws = Workspace.from_config() print('Workspace name: ' + ws.name, 'Azure region: ' + ws.location, 'Subscription id: ' + ws.subscription_id, 'Resource group: ' + ws.resource_group, sep='\n') # + [markdown] nbpresent={"id": "59f52294-4a25-4c92-bab8-3b07f0f44d15"} # ## Create an Azure ML experiment # Let's create an experiment named "keras-mnist" and a folder to hold the training scripts. The script runs will be recorded under the experiment in Azure. # + nbpresent={"id": "bc70f780-c240-4779-96f3-bc5ef9a37d59"} from azureml.core import Experiment script_folder = './keras-mnist' os.makedirs(script_folder, exist_ok=True) exp = Experiment(workspace=ws, name='keras-mnist') # - # ## Explore data # # Before you train a model, you need to understand the data that you are using to train it. 
In this section you learn how to: # # * Download the MNIST dataset # * Display some sample images # # ### Download the MNIST dataset # # Download the MNIST dataset and save the files into a `data` directory locally. Images and labels for both training and testing are downloaded. # + import urllib.request data_folder = os.path.join(os.getcwd(), 'data') os.makedirs(data_folder, exist_ok=True) urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz', filename=os.path.join(data_folder, 'train-images.gz')) urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz', filename=os.path.join(data_folder, 'train-labels.gz')) urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz', filename=os.path.join(data_folder, 'test-images.gz')) urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz', filename=os.path.join(data_folder, 'test-labels.gz')) # - # ### Display some sample images # # Load the compressed files into `numpy` arrays. Then use `matplotlib` to plot 30 random images from the dataset with their labels above them. Note this step requires a `load_data` function that's included in an `utils.py` file. This file is included in the sample folder. Please make sure it is placed in the same folder as this notebook. The `load_data` function simply parses the compressed files into numpy arrays. # + # make sure utils.py is in the same directory as this code from utils import load_data, one_hot_encode # note we also shrink the intensity values (X) from 0-255 to 0-1. This helps the model converge faster. X_train = load_data(os.path.join(data_folder, 'train-images.gz'), False) / 255.0 X_test = load_data(os.path.join(data_folder, 'test-images.gz'), False) / 255.0 y_train = load_data(os.path.join(data_folder, 'train-labels.gz'), True).reshape(-1) y_test = load_data(os.path.join(data_folder, 'test-labels.gz'), True).reshape(-1) # now let's show some randomly chosen images from the training set. count = 0 sample_size = 30 plt.figure(figsize = (16, 6)) for i in np.random.permutation(X_train.shape[0])[:sample_size]: count = count + 1 plt.subplot(1, sample_size, count) plt.axhline('') plt.axvline('') plt.text(x=10, y=-10, s=y_train[i], fontsize=18) plt.imshow(X_train[i].reshape(28, 28), cmap=plt.cm.Greys) plt.show() # - # Now you have an idea of what these images look like and the expected prediction outcome. # + [markdown] nbpresent={"id": "defe921f-8097-44c3-8336-8af6700804a7"} # ## Create a FileDataset # A FileDataset references one or multiple files in your datastores or public urls. The files can be of any format. FileDataset provides you with the ability to download or mount the files to your compute. By creating a dataset, you create a reference to the data source location. If you applied any subsetting transformations to the dataset, they will be stored in the dataset as well. The data remains in its existing location, so no extra storage cost is incurred. 
[Learn More](https://aka.ms/azureml/howto/createdatasets) # + from azureml.core.dataset import Dataset web_paths = [ 'http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz', 'http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz', 'http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz', 'http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz' ] dataset = Dataset.File.from_files(path = web_paths) # - # Use the `register()` method to register datasets to your workspace so they can be shared with others, reused across various experiments, and referred to by name in your training script. dataset = dataset.register(workspace = ws, name = 'mnist dataset', description='training and test dataset', create_new_version=True) # ## Create or Attach existing AmlCompute # You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) for training your model. In this tutorial, you create `AmlCompute` as your training compute resource. # If we could not find the cluster with the given name, then we will create a new cluster here. We will create an `AmlCompute` cluster of `STANDARD_NC6` GPU VMs. This process is broken down into 3 steps: # 1. create the configuration (this step is local and only takes a second) # 2. create the cluster (this step will take about **20 seconds**) # 3. provision the VMs to bring the cluster to the initial size (of 1 in this case). This step will take about **3-5 minutes** and is providing only sparse output in the process. Please make sure to wait until the call returns before moving to the next cell # + from azureml.core.compute import ComputeTarget, AmlCompute from azureml.core.compute_target import ComputeTargetException # choose a name for your cluster cluster_name = "gpu-cluster" try: compute_target = ComputeTarget(workspace=ws, name=cluster_name) print('Found existing compute target') except ComputeTargetException: print('Creating a new compute target...') compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_NC6', max_nodes=4) # create the cluster compute_target = ComputeTarget.create(ws, cluster_name, compute_config) # can poll for a minimum number of nodes and for a specific timeout. # if no min node count is provided it uses the scale settings for the cluster compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20) # use get_status() to get a detailed status for the current cluster. print(compute_target.get_status().serialize()) # - # Now that you have created the compute target, let's see what the workspace's `compute_targets` property returns. You should now see one entry named "gpu-cluster" of type `AmlCompute`. compute_targets = ws.compute_targets for name, ct in compute_targets.items(): print(name, ct.type, ct.provisioning_state) # ## Copy the training files into the script folder # The Keras training script is already created for you. You can simply copy it into the script folder, together with the utility library used to load compressed data file into numpy array. # + import shutil # the training logic is in the keras_mnist.py file. shutil.copy('./keras_mnist.py', script_folder) # the utils.py just helps loading data from the downloaded MNIST dataset into numpy arrays. 
shutil.copy('./utils.py', script_folder) # + [markdown] nbpresent={"id": "2039d2d5-aca6-4f25-a12f-df9ae6529cae"} # ## Construct neural network in Keras # In the training script `keras_mnist.py`, it creates a very simple DNN (deep neural network), with just 2 hidden layers. The input layer has 28 * 28 = 784 neurons, each representing a pixel in an image. The first hidden layer has 300 neurons, and the second hidden layer has 100 neurons. The output layer has 10 neurons, each representing a targeted label from 0 to 9. # # ![DNN](nn.png) # - # ### Azure ML concepts # Please note the following three things in the code below: # 1. The script accepts arguments using the argparse package. In this case there is one argument `--data_folder` which specifies the FileDataset in which the script can find the MNIST data # ``` # parser = argparse.ArgumentParser() # parser.add_argument('--data_folder') # ``` # 2. The script is accessing the Azure ML `Run` object by executing `run = Run.get_context()`. Further down the script is using the `run` to report the loss and accuracy at the end of each epoch via callback. # ``` # run.log('Loss', log['loss']) # run.log('Accuracy', log['acc']) # ``` # 3. When running the script on Azure ML, you can write files out to a folder `./outputs` that is relative to the root directory. This folder is specially tracked by Azure ML in the sense that any files written to that folder during script execution on the remote target will be picked up by Run History; these files (known as artifacts) will be available as part of the run history record. # The next cell will print out the training code for you to inspect. with open(os.path.join(script_folder, './keras_mnist.py'), 'r') as f: print(f.read()) # ## Create TensorFlow estimator & add Keras # Next, we construct an `azureml.train.dnn.TensorFlow` estimator object, use the `gpu-cluster` as compute target, and pass the mount-point of the datastore to the training code as a parameter. # The TensorFlow estimator is providing a simple way of launching a TensorFlow training job on a compute target. It will automatically provide a docker image that has TensorFlow installed. In this case, we add `keras` package (for the Keras framework obviously), and `matplotlib` package for plotting a "Loss vs. Accuracy" chart and record it in run history. # + dataset = Dataset.get_by_name(ws, 'mnist dataset') # list the files referenced by mnist dataset dataset.to_path() # + from azureml.train.dnn import TensorFlow script_params = { '--data-folder': dataset.as_named_input('mnist').as_mount(), '--batch-size': 50, '--first-layer-neurons': 300, '--second-layer-neurons': 100, '--learning-rate': 0.001 } est = TensorFlow(source_directory=script_folder, script_params=script_params, compute_target=compute_target, entry_script='keras_mnist.py', pip_packages=['keras==2.2.5','azureml-dataprep[pandas,fuse]','matplotlib']) # - # ## Submit job to run # Submit the estimator to the Azure ML experiment to kick off the execution. run = exp.submit(est) # ### Monitor the Run # As the Run is executed, it will go through the following stages: # 1. Preparing: A docker image is created matching the Python environment specified by the TensorFlow estimator and it will be uploaded to the workspace's Azure Container Registry. This step will only happen once for each Python environment -- the container will then be cached for subsequent runs. Creating and uploading the image takes about **5 minutes**. 
While the job is preparing, logs are streamed to the run history and can be viewed to monitor the progress of the image creation. # # 2. Scaling: If the compute needs to be scaled up (i.e. the AmlCompute cluster requires more nodes to execute the run than currently available), the cluster will attempt to scale up in order to make the required amount of nodes available. Scaling typically takes about **5 minutes**. # # 3. Running: All scripts in the script folder are uploaded to the compute target, data stores are mounted/copied and the `entry_script` is executed. While the job is running, stdout and the `./logs` folder are streamed to the run history and can be viewed to monitor the progress of the run. # # 4. Post-Processing: The `./outputs` folder of the run is copied over to the run history # # There are multiple ways to check the progress of a running job. We can use a Jupyter notebook widget. # # **Note: The widget will automatically update ever 10-15 seconds, always showing you the most up-to-date information about the run** from azureml.widgets import RunDetails RunDetails(run).show() # We can also periodically check the status of the run object, and navigate to Azure portal to monitor the run. run run.wait_for_completion(show_output=True) # In the outputs of the training script, it prints out the Keras version number. Please make a note of it. # ### The Run object # The Run object provides the interface to the run history -- both to the job and to the control plane (this notebook), and both while the job is running and after it has completed. It provides a number of interesting features for instance: # * `run.get_details()`: Provides a rich set of properties of the run # * `run.get_metrics()`: Provides a dictionary with all the metrics that were reported for the Run # * `run.get_file_names()`: List all the files that were uploaded to the run history for this Run. This will include the `outputs` and `logs` folder, azureml-logs and other logs, as well as files that were explicitly uploaded to the run using `run.upload_file()` # # Below are some examples -- please run through them and inspect their output. run.get_details() run.get_metrics() run.get_file_names() # ## Download the saved model # In the training script, the Keras model is saved into two files, `model.json` and `model.h5`, in the `outputs/models` folder on the gpu-cluster AmlCompute node. Azure ML automatically uploaded anything written in the `./outputs` folder into run history file store. Subsequently, we can use the `run` object to download the model files. They are under the the `outputs/model` folder in the run history file store, and are downloaded into a local folder named `model`. # + # create a model folder in the current directory os.makedirs('./model', exist_ok=True) for f in run.get_file_names(): if f.startswith('outputs/model'): output_file_path = os.path.join('./model', f.split('/')[-1]) print('Downloading from {} to {} ...'.format(f, output_file_path)) run.download_file(name=f, output_file_path=output_file_path) # - # ## Predict on the test set # Let's check the version of the local Keras. Make sure it matches with the version number printed out in the training script. Otherwise you might not be able to load the model properly. # + import keras import tensorflow as tf print("Keras version:", keras.__version__) print("Tensorflow version:", tf.__version__) # - # Now let's load the downloaded model. 
# + from keras.models import model_from_json # load json and create model json_file = open('model/model.json', 'r') loaded_model_json = json_file.read() json_file.close() loaded_model = model_from_json(loaded_model_json) # load weights into new model loaded_model.load_weights("model/model.h5") print("Model loaded from disk.") # - # Feed test dataset to the persisted model to get predictions. # + # evaluate loaded model on test data loaded_model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy']) y_test_ohe = one_hot_encode(y_test, 10) y_hat = np.argmax(loaded_model.predict(X_test), axis=1) # print the first 30 labels and predictions print('labels: \t', y_test[:30]) print('predictions:\t', y_hat[:30]) # - # Calculate the overall accuracy by comparing the predicted value against the test set. print("Accuracy on the test set:", np.average(y_hat == y_test)) # ## Intelligent hyperparameter tuning # We have trained the model with one set of hyperparameters, now let's how we can do hyperparameter tuning by launching multiple runs on the cluster. First let's define the parameter space using random sampling. # + from azureml.train.hyperdrive import RandomParameterSampling, BanditPolicy, HyperDriveConfig, PrimaryMetricGoal from azureml.train.hyperdrive import choice, loguniform ps = RandomParameterSampling( { '--batch-size': choice(25, 50, 100), '--first-layer-neurons': choice(10, 50, 200, 300, 500), '--second-layer-neurons': choice(10, 50, 200, 500), '--learning-rate': loguniform(-6, -1) } ) # - # Next, we will create a new estimator without the above parameters since they will be passed in later by Hyperdrive configuration. Note we still need to keep the `data-folder` parameter since that's not a hyperparamter we will sweep. est = TensorFlow(source_directory=script_folder, script_params={'--data-folder': dataset.as_named_input('mnist').as_mount()}, compute_target=compute_target, entry_script='keras_mnist.py', pip_packages=['keras==2.2.5','azureml-dataprep[pandas,fuse]','matplotlib']) # Now we will define an early termnination policy. The `BanditPolicy` basically states to check the job every 2 iterations. If the primary metric (defined later) falls outside of the top 10% range, Azure ML terminate the job. This saves us from continuing to explore hyperparameters that don't show promise of helping reach our target metric. policy = BanditPolicy(evaluation_interval=2, slack_factor=0.1) # Now we are ready to configure a run configuration object, and specify the primary metric `Accuracy` that's recorded in your training runs. If you go back to visit the training script, you will notice that this value is being logged after every epoch (a full batch set). We also want to tell the service that we are looking to maximizing this value. We also set the number of samples to 20, and maximal concurrent job to 4, which is the same as the number of nodes in our computer cluster. hdc = HyperDriveConfig(estimator=est, hyperparameter_sampling=ps, policy=policy, primary_metric_name='Accuracy', primary_metric_goal=PrimaryMetricGoal.MAXIMIZE, max_total_runs=20, max_concurrent_runs=4) # Finally, let's launch the hyperparameter tuning job. hdr = exp.submit(config=hdc) # We can use a run history widget to show the progress. Be patient as this might take a while to complete. 
RunDetails(hdr).show() hdr.wait_for_completion(show_output=True) # ### Warm start a Hyperparameter Tuning experiment and resuming child runs # Often times, finding the best hyperparameter values for your model can be an iterative process, needing multiple tuning runs that learn from previous hyperparameter tuning runs. Reusing knowledge from these previous runs will accelerate the hyperparameter tuning process, thereby reducing the cost of tuning the model and will potentially improve the primary metric of the resulting model. When warm starting a hyperparameter tuning experiment with Bayesian sampling, trials from the previous run will be used as prior knowledge to intelligently pick new samples, so as to improve the primary metric. Additionally, when using Random or Grid sampling, any early termination decisions will leverage metrics from the previous runs to determine poorly performing training runs. # # Azure Machine Learning allows you to warm start your hyperparameter tuning run by leveraging knowledge from up to 5 previously completed hyperparameter tuning parent runs. # # Additionally, there might be occasions when individual training runs of a hyperparameter tuning experiment are cancelled due to budget constraints or fail due to other reasons. It is now possible to resume such individual training runs from the last checkpoint (assuming your training script handles checkpoints). Resuming an individual training run will use the same hyperparameter configuration and mount the storage used for that run. The training script should accept the "--resume-from" argument, which contains the checkpoint or model files from which to resume the training run. You can also resume individual runs as part of an experiment that spends additional budget on hyperparameter tuning. Any additional budget, after resuming the specified training runs is used for exploring additional configurations. # # For more information on warm starting and resuming hyperparameter tuning runs, please refer to the [Hyperparameter Tuning for Azure Machine Learning documentation](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-tune-hyperparameters) # # ## Find and register best model # When all the jobs finish, we can find out the one that has the highest accuracy. best_run = hdr.get_best_run_by_primary_metric() print(best_run.get_details()['runDefinition']['arguments']) # Now let's list the model files uploaded during the run. print(best_run.get_file_names()) # We can then register the folder (and all files in it) as a model named `keras-dnn-mnist` under the workspace for deployment. model = best_run.register_model(model_name='keras-mlp-mnist', model_path='outputs/model') # ## Deploy the model in ACI # Now we are ready to deploy the model as a web service running in Azure Container Instance [ACI](https://azure.microsoft.com/en-us/services/container-instances/). Azure Machine Learning accomplishes this by constructing a Docker image with the scoring logic and model baked in. # ### Create score.py # First, we will create a scoring script that will be invoked by the web service call. # # * Note that the scoring script must have two required functions, `init()` and `run(input_data)`. # * In `init()` function, you typically load the model into a global object. This function is executed only once when the Docker container is started. # * In `run(input_data)` function, the model is used to predict a value based on the input data. 
The input and output to `run` typically use JSON as serialization and de-serialization format but you are not limited to that. # + # %%writefile score.py import json import numpy as np import os from keras.models import model_from_json from azureml.core.model import Model def init(): global model model_root = Model.get_model_path('keras-mlp-mnist') # load json and create model json_file = open(os.path.join(model_root, 'model.json'), 'r') model_json = json_file.read() json_file.close() model = model_from_json(model_json) # load weights into new model model.load_weights(os.path.join(model_root, "model.h5")) model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy']) def run(raw_data): data = np.array(json.loads(raw_data)['data']) # make prediction y_hat = np.argmax(model.predict(data), axis=1) return y_hat.tolist() # - # ### Create myenv.yml # We also need to create an environment file so that Azure Machine Learning can install the necessary packages in the Docker image which are required by your scoring script. In this case, we need to specify conda packages `tensorflow` and `keras`. # + from azureml.core.conda_dependencies import CondaDependencies cd = CondaDependencies.create() cd.add_tensorflow_conda_package() cd.add_conda_package('keras==2.2.5') cd.add_pip_package("azureml-defaults") cd.save_to_file(base_directory='./', conda_file_path='myenv.yml') print(cd.serialize_to_string()) # - # ### Deploy to ACI # We are almost ready to deploy. Create the inference configuration and deployment configuration and deploy to ACI. This cell will run for about 7-8 minutes. # + from azureml.core.webservice import AciWebservice from azureml.core.model import InferenceConfig from azureml.core.model import Model from azureml.core.environment import Environment myenv = Environment.from_conda_specification(name="myenv", file_path="myenv.yml") inference_config = InferenceConfig(entry_script="score.py", environment=myenv) aciconfig = AciWebservice.deploy_configuration(cpu_cores=1, auth_enabled=True, # this flag generates API keys to secure access memory_gb=1, tags={'name': 'mnist', 'framework': 'Keras'}, description='Keras MLP on MNIST') service = Model.deploy(workspace=ws, name='keras-mnist-svc', models=[model], inference_config=inference_config, deployment_config=aciconfig) service.wait_for_deployment(True) print(service.state) # - # **Tip: If something goes wrong with the deployment, the first thing to look at is the logs from the service by running the following command:** `print(service.get_logs())` # This is the scoring web service endpoint: print(service.scoring_uri) # ### Test the deployed model # Let's test the deployed model. Pick 30 random samples from the test set, and send it to the web service hosted in ACI. Note here we are using the `run` API in the SDK to invoke the service. You can also make raw HTTP calls using any HTTP tool such as curl. # # After the invocation, we print the returned predictions and plot them along with the input images. Use red font color and inversed image (white on black) to highlight the misclassified samples. Note since the model accuracy is pretty high, you might have to run the below cell a few times before you can see a misclassified sample. 
# + import json # find 30 random samples from test set n = 30 sample_indices = np.random.permutation(X_test.shape[0])[0:n] test_samples = json.dumps({"data": X_test[sample_indices].tolist()}) test_samples = bytes(test_samples, encoding='utf8') # predict using the deployed model result = service.run(input_data=test_samples) # compare actual value vs. the predicted values: i = 0 plt.figure(figsize = (20, 1)) for s in sample_indices: plt.subplot(1, n, i + 1) plt.axhline('') plt.axvline('') # use different color for misclassified sample font_color = 'red' if y_test[s] != result[i] else 'black' clr_map = plt.cm.gray if y_test[s] != result[i] else plt.cm.Greys plt.text(x=10, y=-10, s=y_test[s], fontsize=18, color=font_color) plt.imshow(X_test[s].reshape(28, 28), cmap=clr_map) i = i + 1 plt.show() # - # We can retrieve the API keys used for accessing the HTTP endpoint. # Retrieve the API keys. Two keys were generated. key1, Key2 = service.get_keys() print(key1) # We can now send construct raw HTTP request and send to the service. Don't forget to add key to the HTTP header. # + import requests # send a random row from the test set to score random_index = np.random.randint(0, len(X_test)-1) input_data = "{\"data\": [" + str(list(X_test[random_index])) + "]}" headers = {'Content-Type':'application/json', 'Authorization': 'Bearer ' + key1} resp = requests.post(service.scoring_uri, input_data, headers=headers) print("POST to url", service.scoring_uri) #print("input data:", input_data) print("label:", y_test[random_index]) print("prediction:", resp.text) # - # Let's look at the workspace after the web service was deployed. You should see # * a registered model named 'keras-mlp-mnist' and with the id 'model:1' # * a webservice called 'keras-mnist-svc' with some scoring URL # + models = ws.models for name, model in models.items(): print("Model: {}, ID: {}".format(name, model.id)) webservices = ws.webservices for name, webservice in webservices.items(): print("Webservice: {}, scoring URI: {}".format(name, webservice.scoring_uri)) # - # ## Clean up # You can delete the ACI deployment with a simple delete API call. service.delete()
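# If you also no longer need the training cluster, it can be removed too; this is optional, since an idle `AmlCompute` cluster with a minimum node count of 0 simply scales back down to zero nodes. A minimal sketch, assuming `compute_target` is the cluster created earlier:

# +
# optional: remove the AmlCompute cluster as well
# compute_target.delete()
# -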
how-to-use-azureml/training-with-deep-learning/train-hyperparameter-tune-deploy-with-keras/train-hyperparameter-tune-deploy-with-keras.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # NumPy Array Basics - Vectorization

import sys
print(sys.version)
import numpy as np
print(np.__version__)

npa = np.random.randint(0,50,20)

# Now I've harped on about vectorization in the last couple of videos and I've told you that it's great, but I haven't shown you how it's so great.
#
# Here are the two powerful reasons:
# - Concise
# - Efficient
#
# The fundamental idea behind array programming is that operations apply at once to an entire set of values. This makes it a high-level programming model, as it allows the programmer to think and operate on whole aggregates of data without having to resort to explicit loops of individual scalar operations.
#
# You can read more here:
# https://en.wikipedia.org/wiki/Array_programming

npa

# With vectorization we can apply changes to the entire array extremely efficiently; no more for loops. If we want to double the array, we just multiply by 2; if we want to cube it, we just cube it.

npa * 2

npa ** 3

[x * 2 for x in npa]

# So who cares? Again, it's an efficiency thing, just like boolean selection. Let's try something a bit more complex.

# Define a function named new_func that cubes the value if it is less than 10 and squares it if it is greater than or equal to 10.

def new_func(numb):
    if numb < 10:
        return numb**3
    else:
        return numb**2

new_func(npa)

# However we can't just pass in the whole vector, because we're going to get this array ambiguity.

# ?np.vectorize

# We need to vectorize this operation, and we do that with np.vectorize.
#
#
# We can then apply that to our entire array and it takes care of the complexity for us. We can think in terms of the data without having to think about each individual element.

vect_new_func = np.vectorize(new_func)

type(vect_new_func)

vect_new_func(npa)

[new_func(x) for x in npa]

# It's also much faster to vectorize operations, and while these are simple examples, the benefits will become apparent as we continue through this course.
#
# *This has changed since Python 3 and the list comprehension has gotten much faster. However, this doesn't mean that vectorization is slower, just that it's a bit heavier, because it places a lot more tools at your disposal, like we'll see in the next video.*

# %timeit [new_func(x) for x in npa]

# %timeit vect_new_func(npa)

# np.random.random_integers is deprecated; np.random.randint does the same job here
npa2 = np.random.randint(0, 100, 20*1000)

# Speed comparisons with size.

# %timeit [new_func(x) for x in npa2]

# %timeit vect_new_func(npa2)
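# For a simple piecewise rule like new_func there is also a middle ground worth knowing about: np.where evaluates both branches on the whole array and then selects elementwise, so everything stays in compiled NumPy code. A quick sketch using the arrays defined above:

# np.where equivalent of new_func: cube where < 10, square otherwise
np.where(npa < 10, npa ** 3, npa ** 2)

# it should agree with the vectorized version
(np.where(npa < 10, npa ** 3, npa ** 2) == vect_new_func(npa)).all()

# and it can be timed against the other approaches on the larger array
# %timeit np.where(npa2 < 10, npa2 ** 3, npa2 ** 2)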
Data_Analysis_with_Pandas/01-Numpy Basics/1-3 NumPy Array Basics - Vectorization.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + active="" # .. _tfn_userguide: # # TFN Strings # ============ # + active="" # Introduction # ------------ # # The function :func:`clean_au_tfn() <dataprep.clean.clean_au_tfn.clean_au_tfn>` cleans a column containing Australian Tax File Numbers (TFN) strings, and standardizes them in a given format. The function :func:`validate_au_tfn() <dataprep.clean.clean_au_tfn.validate_au_tfn>` validates either a single TFN strings, a column of TFN strings or a DataFrame of TFN strings, returning `True` if the value is valid, and `False` otherwise. # - # TFN strings can be converted to the following formats via the `output_format` parameter: # # * `compact`: only number strings without any seperators or whitespace, like "123456782" # * `standard`: TFN strings with proper whitespace in the proper places, like "123 456 782" # # Invalid parsing is handled with the `errors` parameter: # # * `coerce` (default): invalid parsing will be set to NaN # * `ignore`: invalid parsing will return the input # * `raise`: invalid parsing will raise an exception # # The following sections demonstrate the functionality of `clean_au_tfn()` and `validate_au_tfn()`. # ### An example dataset containing TFN strings import pandas as pd import numpy as np df = pd.DataFrame( { "tfn": [ "123 456 782", "999 999 999", "123456782", "51 824 753 556", "hello", np.nan, "NULL" ], "address": [ "123 Pine Ave.", "main st", "1234 west main heights 57033", "apt 1 789 s maple rd manhattan", "robie house, 789 north main street", "(staples center) 1111 S Figueroa St, Los Angeles", "hello", ] } ) df # ## 1. Default `clean_au_tfn` # # By default, `clean_au_tfn` will clean tfn strings and output them in the standard format with proper separators. from dataprep.clean import clean_au_tfn clean_au_tfn(df, column = "tfn") # ## 2. Output formats # This section demonstrates the output parameter. # ### `standard` (default) clean_au_tfn(df, column = "tfn", output_format="standard") # ### `compact` clean_au_tfn(df, column = "tfn", output_format="compact") # ## 3. `inplace` parameter # # This deletes the given column from the returned DataFrame. # A new column containing cleaned TFN strings is added with a title in the format `"{original title}_clean"`. clean_au_tfn(df, column="tfn", inplace=True) # ## 4. `errors` parameter # ### `coerce` (default) clean_au_tfn(df, "tfn", errors="coerce") # ### `ignore` clean_au_tfn(df, "tfn", errors="ignore") # ## 4. `validate_au_tfn()` # `validate_au_tfn()` returns `True` when the input is a valid TFN. Otherwise it returns `False`. # # The input of `validate_au_tfn()` can be a string, a Pandas DataSeries, a Dask DataSeries, a Pandas DataFrame and a dask DataFrame. # # When the input is a string, a Pandas DataSeries or a Dask DataSeries, user doesn't need to specify a column name to be validated. # # When the input is a Pandas DataFrame or a dask DataFrame, user can both specify or not specify a column name to be validated. If user specify the column name, `validate_au_tfn()` only returns the validation result for the specified column. If user doesn't specify the column name, `validate_au_tfn()` returns the validation result for the whole DataFrame. 
from dataprep.clean import validate_au_tfn print(validate_au_tfn("123 456 782")) print(validate_au_tfn("99 999 999")) print(validate_au_tfn("123456782")) print(validate_au_tfn("51 824 753 556")) print(validate_au_tfn("hello")) print(validate_au_tfn(np.nan)) print(validate_au_tfn("NULL")) # ### Series validate_au_tfn(df["tfn"]) # ### DataFrame + Specify Column validate_au_tfn(df, column="tfn") # ### Only DataFrame validate_au_tfn(df)
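# A common follow-up is to combine the two functions: standardize the column with `clean_au_tfn()` and keep a boolean flag from `validate_au_tfn()` so invalid rows can be filtered or inspected later. A minimal sketch using the `df` defined above (the `tfn_valid` column name is just illustrative):

# +
# flag which rows hold valid TFNs, then standardize the strings
df_flagged = df.copy()
df_flagged["tfn_valid"] = validate_au_tfn(df_flagged["tfn"])
clean_au_tfn(df_flagged, column="tfn")
# -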
docs/source/user_guide/clean/clean_au_tfn.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Python basics: Expressions and strings # # By [<NAME>](http://www.decontextualize.com/) # # In this tutorial, I introduce the basics of how to use Python to process text, starting with the concept of expressions and evaluation. I go into particular detail on Python's string manipulation functions. # ### A note on Python versions # # There are two main "branches" of Python in current use: Python 2 and Python 3. Both of these branches have their own versions: the latest version of Python 2 (as of this writing) is Python 2.7.x, and the latest version of Python 3 is Python 3.7.x. The branches and versions all have slightly different capabilities and their syntax and structure are slightly different. Python 2.7.x still has a larger number of users overall, and many new projects continue to support it. But most data scientists and data journalists using Python today use the newer version, and following their lead, we'll be using we're using Python 3.6 or later in this course (specifically, the version included with the latest version of [Anaconda](https://www.anaconda.com/download/)). # # (The main reason you need to know this information is that you should be careful when looking up Python information on the Internet---make sure whatever tutorial you're looking at is about Python 3, not Python 2.) # ## Expressions and evaluation # # Let's start with a very high-level description of how computer programming works. When you're writing a computer program, you're describing to the computer what you want, and then asking the computer to figure that thing out for you. Your description of what you want is called an *expression*. The process that the computer uses to turn your expression into whatever that expression means is called *evaluation.* # # Think of a science fiction movie where a character asks the computer, out loud, "What's the square root of nine billion?" or "How many people older than 50 live in Paris, France?" Those are examples of expressions. The process that the computer uses to transform those expressions into a response is evaluation. # # When the process of evaluation is complete, you're left with a single "value". Think of it schematically like so: # # ![Expression -> Evaluation -> Value](http://static.decontextualize.com/snaps/expressiondiagram.png) # # What makes computer programs powerful is that they make it possible to write very precise and sophisticated expressions. And importantly, you can embed the results of evaluating one expression inside of another expression, or save the results of evaluating an expression for later in your program. # # Unfortunately, computers can't understand and intuit your desires simply from a verbal description. That's why we need computer programming languages: to give us a way to write expressions in a way that the computer can understand. Because programming languages are designed to be precise, they can also be persnickety (and frustrating). And every programming language is different. It's tricky, but worth it. # ## Arithmetic expressions # # Let's start with simple arithmetic expressions. The way that you write arithmetic expressions in Python is very similar to the way that you write arithmetic expressions in, say, grade school arithmetic, or algebra. In the example below, `3 + 5` is the expression. 
You can tell Python to evaluate the expression and display its value simply by typing in the expression in a new notebook cell and typing CTRL+ENTER. 1 + 5 # Arithmetic expressions in Python can be much more sophisticated than this, of course. We won't go over all of the details right now, but one thing you should know immediately is that Python arithmetic operations are evaluated using the typical order of operations, which you can override with parentheses: 4 + 5 * 6 (4 + 5) * 6 # You can write arithmetic expressions with or without spaces between the numbers and the operators (but usually it's considered better style to include spaces): 10+20+30 # Expressions in Python can also be very simple. In fact, a number on its own is its own expression, which Python evaluates to that number itself: 19 # If you write an expression that Python doesn't understand, then you'll get an error. Here's what that looks like: + 20 19 # ## Expressions of inequality # # You can also ask Python whether two expressions evaluate to the same value, or if one expression evaluates to a value greater than another expression, using a similar familiar syntax. When evaluating such expressions, Python will return one of two special values: either `True` or `False`. # # The `==` operator compares the expression on its left side to the expression on its right side. It evaluates to `True` if the values are equal, and `False` if they're not equal. 3 * 5 == 9 + 6 20 == 7 * 3 # The `<` operator compares the expression on its left side to the expression on its right side, evaluating to `True` if the left-side expression is less than the right-side expression, `False` otherwise. The `>` does the same thing, except checking to see if the left-side expression is greater than the right-side expression: 17 < 18 17 > 18 # The `>=` and `<=` operators translate to "greater than or equal" and "lesser than or equal," respectively: 22 >= 22 22 <= 22 # Make sure to get the order of the angle bracket and the equal sign right! 22 =< 22 # ## Variables # # You can save the result of evaluating an expression for later using the `=` operator (called the "assignment operator"). On the left-hand side of the `=`, write a word that you'd like to use to refer to the value of the expression, and on the right-hand side, write the expression itself. After you've assigned a value like this, whenever you include that word in your Python code, Python will evaluate the word and replace it with the value you assigned to it earlier. Like so: x = (4 + 5) * 6 x # (Notice that the line `x = (4 + 5) * 6` didn't cause Python to display anything. That's because an assignment in Python isn't an expression, it's a "statement"---we'll discuss the difference later.) # # Now, whenever you use the variable `x` in your program, it "stands in" for the result of the expression that you assigned to it. x / 6 # You can create as many variables as you want! another_variable = (x + 2) * 4 another_variable # Variable names can contain letters, numbers and underscores, but must begin with a letter or underscore. There are other, more technical constraints on variable names; you can review them [here](http://en.wikibooks.org/wiki/Think_Python/Variables,_expressions_and_statements#Variable_names_and_keywords). 
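# For instance, names like these are all fine:

_count = 10
word2 = "hello"
running_total = _count + 1
running_total

# ...while a name that begins with a digit produces a syntax error (shown here only as an illustration):

2nd_word = "hello"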
# # If you attempt to use a the name of a variable that you haven't defined in the notebook, Python will raise an error: voldemort # If you assign a value to a variable, and then assign a value to it again, the previous value of the variable will be overwritten: x = 15 x x = 42 x # The fact that variables can be overwritten with new values can be helpful in some contexts (e.g., if you're writing a program and you're using the variable to keep track of some value that changes over time). But it can also be annoying if you use the same variable name twice on accident and overwrite values in one part of your program that another part of your program is using the same variable name to keep track of! # ## Types # # Another important thing to know is that when Python evaluates an expression, it assigns the result to a "type." A type is a description of what kind of thing a value is, and Python uses that information to determine later what you can do with that value, and what kinds of expressions that value can be used in. You can ask Python what type it thinks a particular expression evaluates to, or what type a particular value is, using the `type()` function: type(100 + 1) # The word int stands for "integer." ("Integers" are numbers that don't have a fractional component, i.e., -2, -1, 0, 1, 2, etc.) Python has many, many other types, and lots of (sometimes arcane) rules for how those types interact with each other when used in the same expression. For example, you can create a floating point type (i.e., a number with a decimal point in it) by writing a number with a decimal point in it: type(3.14) # Interestingly, the result of adding a floating-point number and an integer number together is always a floating point number: type(3.14 + 17) # ... and the result of dividing one integer by another integer is a floating point number: type(4 / 3) # Throwing an expression into the `type()` function is a good way to know whether or not the value you're working with is the value you were expecting to work with. We'll use it for debugging some example code later. # ## Strings # # Another type of value in Python is called a "string." Strings are a way of representing in our computer programs stretches of text: one or more letters in sequential order. To make an expression that evaluates to a string in Python, simply enclose some text inside of quotes and put it into the interactive interpreter: "Suppose there is a pigeon, suppose there is." # Asking Python for the type of a string returns `str`: type("Suppose there is a pigeon, suppose there is.") # You can use single quotes or double quotes to enclose strings (I tend to use them interchangeably), as long as the opening quote matches the closing quote: 'Suppose there is a pigeon, suppose there is.' # (When you ask Python to evaluate a string expression, it will display it with single quotes surrounding it.) # # You can assign strings to variables, just like any other value: roastbeef = "Suppose there is a pigeon, suppose there is." roastbeef # In versions of Python previous to Python 3, it could be tedious to use any characters inside of strings that weren't ASCII characters (i.e., the letters, numbers and punctuation used most commonly when writing English). 
In Python 3, you can easily include whatever characters you want by typing them into the string directly: cat_message = "我爱猫!😻" cat_message # ### "Escaping" special characters in strings # # Normally, if there are any characters you want in your string, all you have to do to put them there is type the characters in on your keyboard, or paste in the text that you want from some other source. There are some characters, however, that require special treatment and can't be typed into a string directly. # # For example, say you have a double-quoted string. Now, the rules about quoting strings (as outlined above) is that the quoted string begins with a double-quote character and ends with a double-quote character. But what if you want to include a double-quote character INSIDE the string? You might think you could do this: # # "And then he said, "I think that's a cool idea," and vanished." # # But that won't work: "And then he said, "I think that's a cool idea," and vanished." # It doesn't work because Python interprets the first double-quote it sees after the beginning of the string as the double-quote that marks the end of the string. Then it sees all of the stuff after the string and says, "okay, the programmer must not be having a good day?" and displays a syntax error. Clearly, we need a way to tell Python "I want you to interpret this character not with the special meaning it has in Python, but LITERALLY as the thing that I typed." # # We can do this exact thing by putting a backslash in front of the characters that we want Python to interpret literally, like so: "And then he said, \"I think that's a cool idea,\" and vanished." # A character indicated in this way is called an "escape" character (because you've "escaped" from the typical meaning of the character). There are several other useful escape characters to know about: # # * I showed `\"` above, but you can also use `\'` in a single-quoted string. # * Use `\n` if you want to include a new line in your string. # * Use `\t` instead of hitting the tab key to put a tab in your string. # * Because `\` is itself the character used to escape other characters, you need to type `\\` if you actually want a backslash in your string. # # ### Printing vs. evaluating # # There are two ways to see the result of an expression in the interactive interpreter. You can either type the expression directly: 7 + 15 "\tA \"string\" with escape\ncharacters." # Or you can "print" the expression using the `print()` function by putting the expression inside the parentheses: print(7 + 15) print("\tA \"string\" with escape\ncharacters.") # As you can see, the `print()` function doesn't make a huge difference when displaying the result of an arithmetic expression. But it *does* make a difference when displaying a string. When you simply type an expression that evaluates to a string in order to display it, without the `print()` function, Python won't "interpolate" any special characters in the string. ("Interpolate" is a fancy computer programming term that means "replace symbols in something with whatever those symbols represent.") The `print()` function, on the other hand, *will* perform the interpolation. # # Typing the expression itself results in Python showing you *exactly* the code you'd need to copy and paste in order to replicate the vale. Typing the expression into `print()` tells Python to do its best to make the result of the expression look "nice." 
(The `print()` function also sends the result of the expression to standard output, which will be important to know when we're writing our own Python programs on the command line later on.) # ### Asking questions about strings # # Now that we can get some text into our program, let's talk about some of the ways Python allows us to do interesting things with that text. # # Let's talk about the `len()` function first. If you take an expression that evaluates to a string and put it inside the parentheses of `len()`, you get an integer value that indicates how long the string is. Like so: len("Suppose there is a pigeon, suppose there is.") # The value that `len()` evaluates to can itself be used in other expressions (just like any other value!): len("Camembert") + len("Cheddar") # Next up: the `in` operator, which lets us check to see if a particular string is found inside of another string. "foo" in "buffoon" "foo" in "reginald" # The `in` operator takes one expression evaluating to a string on the left and another on the right, and returns `True` if the string on the left occurs somewhere inside of the string on the right. # # We can check to see if a string begins with or ends with another string using that string's `.startswith()` and `.endswith()` methods, respectively: "foodie".startswith("foo") "foodie".endswith("foo") # The `.isdigit()` method returns `True` if Python thinks the string could represent an integer, and `False` otherwise: "foodie".isdigit() "4567".isdigit() # The `.isdigit()` method (along with many of the other methods discussed in this section) works not just for ASCII characters but generally across Unicode. For example, it returns `True` for a full-width digit: "7".isdigit() # And the `.islower()` and `.isupper()` methods return `True` if the string is in all lower case or all upper case, respectively (and `False` otherwise). "foodie".islower() "foodie".isupper() "YELLING ON THE INTERNET".islower() "YELLING ON THE INTERNET".isupper() # The `in` operator discussed above will tell us if a substring occurs in some other string. If we want to know *where* that substring occurs, we can use the `.find()` method. The `.find()` method takes a single parameter between its parentheses: an expression evaluating to a string, which will be searched for within the string whose `.find()` method was called. If the substring is found, the entire expression will evaluate to the index at which the substring is found. If the substring is not found, the expression evaluates to `-1`. To demonstrate: "Now is the winter of our discontent".find("win") "Now is the winter of our discontent".find("lose") # The `.count()` method will return the number of times a particular substring is found within the larger string: "I got rhythm, I got music, I got my man, who could ask for anything more".count("I got") # Finally, remember the `==` operator that we discussed earlier? You can use that in Python to check to see if two strings contain the same characters in the same order: "pants" == "pants" "pants" == "trousers" # ### Simple string transformations # # Python strings have a number of different methods which, when called on a string, return a copy of that string with a simple transformation applied to it. These are helpful for normalizing and cleaning up data, or preparing it to be displayed. # # Let's start with `.lower()`, which evaluates to a copy of the string in all lower case: "ARGUMENTATION! DISAGREEMENT! STRIFE!".lower() # The converse of `.lower()` is `.upper()`: "e.e. cummings is. not. 
happy about this.".upper() # The method `.title()` evaluates to a copy of the string it's called on, replacing every letter at the beginning of a word in the string with a capital letter: "dr. strangelove, or, how I learned to love the bomb".title() # The `.strip()` method removes any whitespace from the beginning or end of the string (but not between characters later in the string): " got some random whitespace in some places here ".strip() # Finally, the `.replace()` method takes two parameters: a string to find, and a string to replace that string with whenever it's found. You can use this to make sad stories. "I got rhythm, I got music, I got my man, who could ask for anything more".replace("I got", "I used to have") # The `.replace()` method works with non-ASCII characters as well, of course: "我爱猫!".replace("猫", "狗") # ### Reading in the contents of a file as a string # # So far we've just been typing our strings directly into the interactive interpreter by writing *string literals* (i.e., characters in between quotation marks). This is nice but for larger chunks of text it's desirable to be able to read files from your file system directly. Fortunately, Python makes it easy to do this! The code below will read the contents of the file `sea_rose.txt` into a variable called `text`: text = open("sea_rose.txt").read() # You can change the name of the variable to whatever you want, of course, and you can choose a different file name as well. Once the text is loaded, it's just a regular string, and you can do whatever you want with it! You could just print it out: print(text) # Or you can ask questions about it: text.count("you") # Or you can transform it: print(text.replace("a", "aaaa")) # Some caveats: # # * The file you specify must be located in the same directory as the interactive interpreter. # * The file needs to be in *plain text* format. [More information on plain text](http://air.decontextualize.com/plain-text/) # * The file needs to be in either ASCII or UTF-8 encoding. (We'll talk more about encodings later, but if the text you want to work with isn't in UTF-8 format, most text editors will allow you to modify the encoding of a file when you save it.) # ## Functions and methods # # Okay, we're getting somewhere together! But I've still been using a lot of jargon when explaning this stuff. One thing that might confuse you: what's a "function" and what's a "method"? # # We've talked about two "functions" so far: `len()` and `type()`. A function is a special word that you can use in Python expressions that runs some pre-defined code: you put your expression inside the parentheses, and Python sends the result of evaluating that expression to the code in the function. That code operates on the value that you gave it, and then itself evaluates to another value. Using a function in this way is usually called "calling" it or "invoking" it. The stuff that you put inside the parentheses is called a "parameter" or "argument"; the value that the function gives back is called its "return value." # # ![Function diagram](http://static.decontextualize.com/snaps/functiondiagram.png) # # The `len()` and `type()` functions are two of what are called "built-in functions," i.e. functions that come with Python and are available whenever you're writing Python code. In Python, built-in functions tend to be able to take many different types of value as parameters. ([There are a lot of other built-in functions](https://docs.python.org/2/library/functions.html), not just `len()` and `type()`! 
We'll discuss them as the need arises.) # # > NOTE: You can also write your own functions---we'll learn how to do this later in the class. Writing functions is a good way to avoid repetition in your code and to compartmentalize it.) # # "Methods" work a lot like functions, except in how it looks when you use them. Instead of putting the expression that you want to use them with inside the parentheses, you put the call to the method directly AFTER the expression that you want to call it on, following a period (`.`). Methods, unlike built-in functions, are usually only valid for one type of value; e.g., values of the string type have a `.strip()` method, but integer values don't. # # It's important to remember that methods can be called both on an expression that evaluates to a particular value AND on a variable that contains that value. So you can do this: "hello".find('e') # ...and this: s = "hello" s.find('e') # ## Getting help in the interactive interpreter # # The interactive interpreter has all kinds of nuggets to help you program in Python. The first one worth mentioning is the `help()` function. Pass any function or method as a parameter to `help()` and you'll get a handy description of the method or function and what it does: >>> help(len) # Remember above when we were talking about how certain types of value have certain "methods" that you can only use with that type of value? Sometimes it's helpful to be reminded of exactly which methods an object supports. You can find this out right in the interactive interpreter without having to look it up in the documentation using the `dir()` built-in function. Just pass the value that you want to know more about to `dir()`: >>> dir("hello") # This is a list of all of the methods that the string type supports. (Ignore anything that begins with two underscores (`__`) for now---those are special weird built-in methods that aren't very useful to call on their own.) If you want to know more about one method in particular, you can type this (note again that you need to NOT include the parentheses after the method): help("hello".swapcase) # Hey awesome! We've learned something about another string method. Let's try this method out: "New York University".swapcase() # > EXERCISE: Use `dir()` and `help()` to find and research a string method that isn't mentioned in the notes. Then write an expression using that method. # ## String indexing # # Python has some powerful language constructions that allow you to access parts of the string by their numerical position in the string. You can get an individual character of a string by putting square brackets (`[]`) right after an expression that evaluates to a string, and putting inside the square brackets the number that represents which character you want. Here's an example: "bungalow"[2] # You can also do this with variables that contain string values, of course: message = "bungalow" message[2] # If we were to say this expression out loud, it might read, "I have a string, consisting of the characters `b`, `u`, `n`, `g`, `a`, `l`, `o` and `w`, in that order. Give me back the second item in that string." Python evaluates that expression to `n`, which is indeed the second letter in the word "bungalow." # # ### The second letter? Am I seeing things. "u" is clearly the second letter. # # You're right---good catch. But for reasons too complicated to go into here, Python (along with many other programming languages!) starts counting at 0, instead of 1. 
So what looks like the third letter of the string to human eyes is actually the second letter to Python. The first letter of the string is accessed using index 0, like so: message[0] # The way I like to conceptualize this is to think of list indexes not as specifying the number of the item you want, but instead specifying how "far away" from the beginning of the list to look for that value. # # If you attempt to use a value for the index of a list that is beyond the end of the list (i.e., the value you use is higher than the last index in the list), Python gives you an error: message[17] # An individual character from a string still has the same type as the string it came from: type(message[3]) # And, of course, a string containing an individual character has a length of 1: len(message[3]) # ### Indexes can be expressions too # # The thing that goes inside of the index brackets doesn't have to be a number that you've just typed in there. Any Python expression that evaluates to an integer can go in there. message[2 * 3] x = 3 message[x] message[message.find("a")] # ### Negative indexes # # If you use `-1` as the value inside of the brackets, something interesting happens: message[-1] # The expression evaluates to the *last* character in the string. This is essentially the same thing as the following code: message[len(message) - 1] # ... except easier to write. In fact, you can use any negative integer in the brackets, and Python will count that many items from the end of the string, and the expression evaluates to that item. message[-3] # If the value in the brackets would "go past" the beginning of the list, Python will raise an error: message[-987] # ## String slices # # The index bracket syntax explained above allows you to write an expression that evaluates to a character in a string, based on its position in the string. Python also has a powerful way for you to write expressions that return a *section* of a string, starting from a particular index and ending with another index. In Python parlance we'll call this section a *slice*. # # Writing an expression to get a slice of a string looks a lot like writing an expression to get a single character. The difference is that instead of putting one number between square brackets, we put *two* numbers, separated by a colon. The first number tells Python where to begin the slice, and the second number tells Python where to end it. message[1:4] # Note that the value after the colon specifies at which index the slice should end, but the slice does *not* include the value at that index. I would translate the expression above as saying "give me characters one through four of the string in the "message" variable, NOT INCLUDING character four." # # The fact that slice indexes aren't inclusive means that you can tell how long the slice will be by subtracting the value before the colon from the value after it: message[1:4] len(message[1:4]) 4 - 1 # Also note that---as always!---any expression that evaluates to an integer can be used for either value in the brackets. For example: x = 3 message[x:x+2] # Finally, note that the type of a slice is still `str`: type(message[5:7]) # ### Omitting slice values # # Because it's so common to use the slice syntax to get a string that is either a slice starting at the beginning of the string or a slice ending at the end of the string, Python has a special shortcut. 
Instead of writing: message[0:3] # You can leave out the `0` and write this instead: message[:3] # Likewise, if you wanted a slice that starts at index 4 and goes to the end of the string, you might write: message[4:] # ### Negative index values in slices # # Now for some tricky stuff: You can use negative index values in slice brackets as well! For example, to get a slice of a string from the fourth-to-last element of the string up to (but not including) the second-to-last element of the string: message[-4:-2] # (Even with negative slice indexes, the numbers have the property that subtracting the first from the second yields the length of the slice, i.e. `-2 - (-4)` is `2`). # # To get everything up to (but not including) the last three characters of the string: message[:-3] # > EXERCISE: Write an expression, or a series of expressions, that prints out "Sea Rose" from the first occurrence of the string `sand` up until the end of the poem. (Hint: Use the `.find()` method, discussed above.) # ## Putting strings together # # Earlier, we discussed how the `+` operator can be used to create an expression that evaluates to the sum of two numbers. E.g.: 17 + 92 # The `+` operator can also be used to create a new string from two other strings. This is called "concatenation": "Spider" + "man" part1 = "Nickel, what is nickel, " part2 = "it is originally rid of a cover." part1 + part2 # You can combine as many strings as you want this way, using the `+` operator multiple times in the same expression: "bas" + "ket" + "ball" # > EXERCISE: Write an expression that evaluates to a string containing the first fifty characters of "Sea Rose" followed by the last fifty characters of "Sea Rose." # ### Strings and numbers # # It's important to remember that a string that contains what looks like a number does *not* behave the way an actual integer or floating-point number does. For example, attempting to subtract one string containing a number from another string containing a number will cause an error to be raised: "15" - "4" # The "unsupported operand type(s)" error means that you tried to use an operator (in this case `-`) with two types that the operator in question doesn't know how to work with. (Python is saying: "You asked me to subtract a string from another string. That doesn't make sense to me.") # # Attempting to add an integer or floating-point number to a string that has (what looks like) a number inside of it will raise a similar error: 16 + "8.9" # Fortunately, there are built-in functions whose purpose is to convert from one type to another; notably, you can put a string inside the parentheses of the `int()` and `float()` functions, and it will evaluate to (what Python interprets as) the integer and floating-point values (respectively) of the string: type("17") int("17") type(int("17")) type("3.14159") float("3.14159") type(float("3.14159")) # If you give a string to one of these functions that Python can't interpret as an integer or floating-point number, Python will raise an error: int("shumai") # ### Strings with multiple lines # # Sometimes we want to work with strings that have more than one "line" of text in them.
The problem with this is that Python interprets your having pressed "Enter" with your having finished your input, so if you try to cut-and-paste in some text with new line characters, you'll get an error: poem = "Rose, harsh rose, marred and with stint of petals, meagre flower, thin, spare of leaf," # (`EOL while scanning string literal` is Python's way of saying "you hit enter too soon.") One way to work around this is to include `\n` (newline character) inside the string when we type it into our program: poem = "Rose, harsh rose,\nmarred and with stint of petals,\nmeagre flower, thin,\nspare of leaf," print(poem) # This works, but it's kind of inconvenient! A better solution is to use a different way of quoting strings in Python, the triple-quote. It looks like this: poem = """Rose, harsh rose, marred and with stint of petals, meagre flower, thin, spare of leaf,""" print(poem) # When you use three quotes instead of one, Python allows you to put new line characters directly into the string. Nice! We'll be using this for some of the examples below. # > Exercise: Create a variable called `poem` and assign the text of "Sea Rose" to that variable. Use the `len()` function to find out how many characters are in it. Then, use the `count()` method to find out how many times the string `rose` occurs within it. # ## Conclusion # # This section introduces many of the basic building blocks you'll need in order to use computer programs to write poems. We've talked about how to use the interactive interpreter, and about expressions and values, and about the distinction between functions and methods; and we've discussed the details of how strings work and how to manipulate them. # # Further reading: # # * From [Think Python](http://www.greenteapress.com/thinkpython/html/index.html): [Variables, expressions and statements](http://greenteapress.com/thinkpython2/html/thinkpython2003.html); [Strings](http://greenteapress.com/thinkpython2/html/thinkpython2009.html). #
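# As a closing illustration of the tools introduced above (reading a file, `.find()`, slicing, `len()`, and `.count()`), here is one possible sketch of the "Sea Rose" exercises. It assumes the `sea_rose.txt` file is present in the working directory, as described earlier, and the variable names are only suggestions.

poem = open("sea_rose.txt").read()
print("Number of characters:", len(poem))
print("Occurrences of 'rose':", poem.count("rose"))

# Print the poem from the first occurrence of "sand" to the end
# (remember that .find() returns -1 if the substring isn't there):
start = poem.find("sand")
print(poem[start:])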
expressions-and-strings.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="6nOTljC_mTMn" # # Unified Planning Basic Demo # # This Python notebook shows the basic usage of the unified planning library. # + [markdown] id="t8dCcpf7mivV" # ## Setup the library and the planners # # We start by installing the unified planning library and the two planners we currently have at our disposal, namely `pyperplan` and `tamer`. # + [markdown] id="CwlvEzKrm1jT" # First, we install the unified_planning library and its dependencies from PyPI. Here, we use the `--pre` flag to use the latest development build. # + id="GPKRUQgNZBo8" # begin of installation # + id="BoqALxJWdfl8" # !pip install --pre unified-planning[pyperplan,tamer] # + [markdown] id="iNHFHxQKnKIp" # We are now ready to use the Unified-Planning library! # + id="uzkOkJbSdMTY" # end of installation # + [markdown] id="9dP5scv7nNJu" # ## Unified-Planning Demo # # ### Basic imports # The basic imports we need for this demo are abstracted in the `shortcuts` package. Moreover, we import the PDDL input/output modules. # + id="06rETnGAfQHg" import unified_planning from unified_planning.shortcuts import * from unified_planning.io.pddl_writer import PDDLWriter from unified_planning.io.pddl_reader import PDDLReader # + [markdown] id="i8J7rP0cnvXq" # ### Problem definition via code # # In this example, we will model a very simple robot navigation problem. # # #### Types # # The first thing to do is to introduce a "UserType" to model the concept of a location. It is possible to introduce as many types as needed; then, for each type we will define a set of objects of that type. # # In addition to `UserType`s we have three built-in types: `Bool`, `Real` and `Integer`. # + id="huAy2IbVn0GZ" Location = UserType('Location') # + [markdown] id="fDukLfPPn20t" # #### Fluents and constants # # The basic variables of a planning problem are called "fluents" and are quantities that can change over time. Fluents can have different types; in this first example we will stick to classical "predicates", which are fluents of boolean type. Moreover, fluents can have parameters, effectively describing multiple variables. # # For example, a boolean fluent `connected` with two parameters of type `Location` (that can be interpreted as `from` and `to`) can be used to model a graph of locations: there exists an edge between two locations `a` and `b` if `connected(a, b)` is true. # # In this example, `connected` will be a constant (i.e. it will never change in any execution), but another fluent `robot_at` will be used to model where the robot is: the robot is in location `l` if and only if `robot_at(l)` is true (we will ensure that exactly one such `l` exists, so that the robot is always in one single location). # + id="LZUgad7ZoA2p" robot_at = unified_planning.model.Fluent('robot_at', BoolType(), l=Location) connected = unified_planning.model.Fluent('connected', BoolType(), l_from=Location, l_to=Location) # + [markdown] id="rVzqSj3XoDPa" # #### Actions # # Now we have the problem variables, but in order to describe the possible evolutions of the system we need to describe how these variables can be changed and how they can evolve. We model this problem using classical, action-based planning, where a set of actions is used to characterize the possible transitions of the system from one state to another.
# # An action is a transition that can be applied if a specified set of preconditions is satisfied and that prescribes a set of effects that change the value of some fluents. All the fluents that are not subject to the action's effects are unchanged. # # We allow _lifted_ actions, that is, actions with parameters: the parameters can be used to specify preconditions or effects, and the planner will select among the possible values of each parameter the ones to be used to characterize a specific action. # # In our example, we introduce an action called `move` that has two parameters of type `Location` indicating the current position of the robot `l_from` and the intended destination of the movement `l_to`. The `move(a, b)` action is applicable only when the robot is in position `a` (i.e. `robot_at(a)`) and if `a` and `b` are connected locations (i.e. `connected(a, b)`). As a result of applying the action `move(a, b)`, the robot is no longer in `a` and is instead in location `b`. # # In unified_planning, we can create actions by instantiating the `unified_planning.model.InstantaneousAction` class; parameters are specified as keyword arguments to the constructor as shown below. Preconditions and effects are added by means of the `add_precondition` and `add_effect` methods. # + id="dRfrnEOfoHD8" move = unified_planning.model.InstantaneousAction('move', l_from=Location, l_to=Location) l_from = move.parameter('l_from') l_to = move.parameter('l_to') move.add_precondition(connected(l_from, l_to)) move.add_precondition(robot_at(l_from)) move.add_effect(robot_at(l_from), False) move.add_effect(robot_at(l_to), True) print(move) # + [markdown] id="iMuggWWioJ8K" # #### Creating the problem # # The class that represents a planning problem is `unified_planning.model.Problem`; it contains the set of fluents, the actions, the objects, an initial value for all the fluents and a goal to be reached by the planner. We start by adding the entities we created so far. Note that entities are not bound to one problem: we can create the actions and fluents once and use them to create multiple problems. # + id="pgrJOj6ioMSC" problem = unified_planning.model.Problem('robot') problem.add_fluent(robot_at, default_initial_value=False) problem.add_fluent(connected, default_initial_value=False) problem.add_action(move) # + [markdown] id="35A3dp--oOOS" # The set of objects is a set of `unified_planning.model.Object` instances, each representing an element of the domain. In this example, we create `NLOC` (set to 10) locations named `l0` to `l9`. We can create the set of objects and add it to the problem as follows. # + id="jbwJbJv8oQ9B" NLOC = 10 locations = [unified_planning.model.Object('l%s' % i, Location) for i in range(NLOC)] problem.add_objects(locations) # + [markdown] id="L-MnST4ioTKo" # Then, we need to specify the initial state. We used the `default_initial_value` specification when adding the fluents, so it suffices to indicate the fluents that are initially true (this is called the "small-world assumption"; without this specification, we would need to initialize all the possible instantiations of all the fluents). # # In this example, we connect location `li` with location `li+1`, creating a simple "linear" graph of locations, and we set the initial position of the robot in location `l0`. # + id="t7jLGJ1xoVxq" problem.set_initial_value(robot_at(locations[0]), True) for i in range(NLOC - 1): problem.set_initial_value(connected(locations[i], locations[i+1]), True) # + [markdown] id="re1sYZHKoYx5" # Finally, we set the goal of the problem.
In this example, we set ourselves to reach location `l9`. # + id="4zKqcGHlocdY" problem.add_goal(robot_at(locations[-1])) print(problem) # + [markdown] id="OTDDF5M1oezl" # ### Solving Planning Problems # # The most direct way to solve a planning problem is to select an available planning engine by name and use it to solve the problem. In the following we use `pyperplan` to solve the problem and print the plan. # + id="8FTO4AoTojko" with OneshotPlanner(name='pyperplan') as planner: result = planner.solve(problem) if result.status == up.engines.PlanGenerationResultStatus.SOLVED_SATISFICING: print("Pyperplan returned: %s" % result.plan) else: print("No plan found.") # + [markdown] id="Q-Pju4K2q_bM" # The unified_planning can also automatically select, among the available planners installed on the system, one that is expressive enough for the problem at hand. # + id="wuTcp_xTxvTj" with OneshotPlanner(problem_kind=problem.kind) as planner: result = planner.solve(problem) print("%s returned: %s" % (planner.name, result.plan)) # + [markdown] id="6KEe1f_Zx71o" # In this example, Pyperplan was selected. The `problem.kind` property, returns an object that describes the characteristics of the problem. # + id="Zmz6B_CcyABQ" print(problem.kind.features) # + [markdown] id="J3tblkI9yEnW" # #### Beyond plan generation # + [markdown] id="xbY7bAPByL35" # `OneshotPlanner` is not the only operation mode we can invoke from the unified_planning, it is just one way to interact with a planning engine. Another useful functionality is `PlanValidation` that checks if a plan is valid for a problem. # + id="p5s7ZwhzyPKG" plan = result.plan with PlanValidator(problem_kind=problem.kind, plan_kind=plan.kind) as validator: if validator.validate(problem, plan): print('The plan is valid') else: print('The plan is invalid') # + [markdown] id="FtY51vyASTcp" # It is also possible to use the `Grounding` operation mode to create an equivalent formulation of a problem that does not use parameters for the actions. This openarion mode is implemented by an internal python code, but also some engines offer advanced grounding techniques. # + id="2mTQ3DlrSoRk" with Compiler(problem_kind=problem.kind, compilation_kind=CompilationKind.GROUNDING) as grounder: grounding_result = grounder.compile(problem, CompilationKind.GROUNDING) ground_problem = grounding_result.problem print(ground_problem) # The grounding_result can be used to "lift" a ground plan back to the level of the original problem with OneshotPlanner(problem_kind=ground_problem.kind) as planner: ground_plan = planner.solve(ground_problem).plan print('Ground plan: %s' % ground_plan) # Replace the action instances of the grounded plan with their correspoding lifted version lifted_plan = ground_plan.replace_action_instances(grounding_result.map_back_action_instance) print('Lifted plan: %s' % lifted_plan) # Test the problem and plan validity with PlanValidator(problem_kind=problem.kind, plan_kind=ground_plan.kind) as validator: ground_validation = validator.validate(ground_problem, ground_plan) lift_validation = validator.validate(problem, lifted_plan) Valid = up.engines.ValidationResultStatus.VALID assert ground_validation.status == Valid assert lift_validation.status == Valid # + [markdown] id="bbVeET7FyVB3" # #### Parallel planning # + [markdown] id="16WuqVp3yX9j" # We can invoke different instances of a planner in parallel or different planners and return the first plan that is generated effortlessly. 
# + id="aeUm0TPZya7e" with OneshotPlanner(names=['tamer', 'tamer', 'pyperplan'], params=[{'heuristic': 'hadd'}, {'heuristic': 'hmax'}, {}]) as planner: plan = planner.solve(problem).plan print("%s returned: %s" % (planner.name, plan)) # + [markdown] id="qi-tOYPAyezo" # ### PDDL I/O # + [markdown] id="Bb70DVgayiiX" # The library allows to read and write PDDL problems effortlessly. # + id="L7ZibzXAyk4z" w = PDDLWriter(problem) print(w.get_domain()) print(w.get_problem()) # + id="JU2CJJgvjn6n" # !wget https://raw.githubusercontent.com/aiplan4eu/unified-planning/master/unified_planning/test/pddl/depot/domain.pddl -O /tmp/depot_domain.pddl # + id="lprAyCOgj9J3" # !wget https://raw.githubusercontent.com/aiplan4eu/unified-planning/master/unified_planning/test/pddl/depot/problem.pddl -O /tmp/depot_problem.pddl # + id="jsW0X8T9yqXI" reader = PDDLReader() pddl_problem = reader.parse_problem('/tmp/depot_domain.pddl', '/tmp/depot_problem.pddl') print(pddl_problem) # + [markdown] id="4x4o5D9dyoZe" # A parsed PDDL problem is just a normal problem that can be solved. # + id="P1dexnd1yvdi" print(pddl_problem.kind.features) with OneshotPlanner(name='pyperplan') as planner: result = planner.solve(pddl_problem) print("%s returned: %s" % (planner.name, result.plan))
notebooks/Unified_Planning_Basics.ipynb
#!/usr/bin/env python # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Principle of Maximum Likelihood # # # ## Description: # # Python script for illustrating the principle of maximum likelihood and a likelihood fit. # # __This is both an exercise, but also an attempt to illustrate four things:__ # 1. How to make a (binned and unbinned) Likelihood function/fit. # 2. The difference and a comparison between a Chi-square and a (binned) Likelihood. # 3. The difference and a comparison between a binned and unbinned Likelihood. # 4. What goes on behind the scenes in Minuit, when it is asked to fit something. # # In this respect, the exercise is more of an illustration rather than something to be used directly, which is why it is followed later by another exercise, where you can test if you have understood the differences, and how and when to apply which fit method. # # The example uses 50 exponentially distributed random times, with the goal of finding the best estimate of the lifetime (data is generated with lifetime, tau = 1). Three estimates are considered: # 1. Chi-square fit (chi2) # 2. Binned Likelihood fit (bllh) # 3. Unbinned Likelihood fit (ullh) # # The three methods are based on a scan of values for tau in the range [0.5, 2.0]. For each value of tau, the chi2, bllh, and ullh are calculated. In the two likelihood cases, it is actually -2*log(likelihood) which is calculated, which you should (by now) understand why. # # Note that the unbinned likelihood is in principle the "optimal" fit, but also the most difficult for several reasons (convergence, numerical problems, implementation, speed, etc.). However, all three methods/constructions essentially yield the same results, when there is enough statistics (i.e. errors are Gaussian), though the $\chi^2$ also gives a fit quality. # # The problem is explicitly chosen to have only one fit parameter, such that simple 1D graphs can show what goes on. In this case, the analytical solution (simple mean) is actually prefered (see Barlow). Real world problems will almost surely be more complex. # # Also, the exercise is mostly for illustration. In reality, one would hardly ever calculate and plot the Chi-square or Likelihood values, but rather do the minimization using an algorithm (Minuit) to do the hard work. # # ### Authors: # - <NAME> (<NAME> Institute, <EMAIL>) # - <NAME> (<EMAIL>) # # ### Date: # - 26-11-2021 (latest update) # # ### Reference: # - Barlow, chapter 5 (5.1-5.7) # - Cowan, chapter 6 # # *** import numpy as np # Matlab like syntax for linear algebra and functions import matplotlib.pyplot as plt # Plots and figures like you know them from Matlab import seaborn as sns # Make the plots nicer to look at from iminuit import Minuit # The actual fitting tool, better than scipy's import sys # Module to see files and folders in directories from scipy import stats # + sys.path.append('../../../External_Functions') from ExternalFunctions import Chi2Regression, BinnedLH, UnbinnedLH from ExternalFunctions import nice_string_output, add_text_to_ax # useful functions to print fit results on figure plt.rcParams['font.size'] = 16 # set some basic plotting parameters # - # ## Program settings: # + save_plots = False # Determining if plots are saved or not verbose = True # Should the program print or not? veryverbose = True # Should the program print a lot or not? 
ScanChi2 = True # In addition to fit for minimum, do a scan... # Parameters of the problem: Ntimes = 50 # Number of time measurements. tau_truth = 1.0; # We choose (like Gods!) the lifetime. # Binning: Nbins = 50 # Number of bins in histogram tmax = 10.0 # Maximum time in histogram binwidth = tmax / Nbins # Size of bins (s) # General settings: r = np.random # Random numbers r.seed(42) # We set the numbers to be random, but the same for each run # - # # ## Generate data: # Produce array of exponentially distributed times and put them in a histogram: t = r.exponential(tau_truth, Ntimes) # Exponential with lifetime tau. yExp, xExp_edges = np.histogram(t, bins=Nbins, range=(0, tmax)) # Is the data plotted like we wouls like to? Let's check... # In case you want to check that the numbers really come out as you want to (very healthy to do at first): if (veryverbose) : for index, time in enumerate(t) : print(f" {index:2d}: t = {time:5.3f}") if index > 10: break # let's restrain ourselves # Looks like values are coming int, but are they actually giving an exponential? Remember the importance of __plotting your data before hand__! X_center = xExp_edges[:-1] + (xExp_edges[1]-xExp_edges[0])/2.0 # Get the value of the histogram bin centers plt.plot(X_center,yExp,'o') plt.show() # Check that it looks like you are producing the data that you want. If this is the case, move on (and possibly comment out the plot!). # ## Analyse data: # The following is "a manual fit", i.e. scanning over possible values of the fitting parameter(s) - here luckely only one, tau - and seeing what value of chi2, bllh, and ullh it yields. When plotting these, one should find a <b>parabola</b>, the minimum value of which is the optimal fitting parameter of tau. The rate of increase around this minimum represents the uncertainty of the fitting parameter. # Define the number of tau values and their range to test in Chi2 and LLH: # As we know the "truth", namely tau = 1, the range [0.5, 1.5] seems fitting for the mean. # The number of bins can be increased at will, but for now 50 seems fitting. Ntau_steps = 50 min_tau = 0.5 max_tau = 1.5 delta_tau = (max_tau-min_tau) / Ntau_steps # Loop over hypothesis for the value of tau and calculate Chi2 and (B)LLH: chi2_minval = 999999.9 # Minimal Chi2 value found chi2_minpos = 0.0 # Position (i.e. time) of minimal Chi2 value bllh_minval = 999999.9 bllh_minpos = 0.0 ullh_minval = 999999.9 ullh_minpos = 0.0 tau = np.zeros(Ntau_steps+1) chi2 = np.zeros(Ntau_steps+1) bllh = np.zeros(Ntau_steps+1) ullh = np.zeros(Ntau_steps+1) # Now loop of POSSIBLE tau estimates: for itau in range(Ntau_steps+1): tau_hypo = min_tau + itau*delta_tau # Scan in values of tau tau[itau] = tau_hypo # Calculate Chi2 and binned likelihood (from loop over bins in histogram): chi2[itau] = 0.0 bllh[itau] = 0.0 for ibin in range (Nbins) : # Note: The number of EXPECTED events is the intergral over the bin! xlow_bin = xExp_edges[ibin] xhigh_bin = xExp_edges[ibin+1] # Given the start and end of the bin, we calculate the INTEGRAL over the bin, # to get the expected number of events in that bin: nexp = Ntimes * (np.exp(-xlow_bin/tau_hypo) - np.exp(-xhigh_bin/tau_hypo)) # The observed number of events... that is just the data! 
nobs = yExp[ibin] if (nobs > 0): # For ChiSquare but not LLH, we need to require Nobs > 0, as we divide by this: chi2[itau] += (nobs-nexp)**2 / nobs # Chi2 summation/function bllh[itau] += -2.0*np.log(stats.poisson.pmf(int(nobs), nexp)) # Binned LLH function if (veryverbose and itau == 0) : print(f" Nexp: {nexp:10.7f} Nobs: {nobs:3.0f} Chi2: {chi2[itau]:5.1f} BLLH: {bllh[itau]:5.1f}") # Calculate Unbinned likelihood (from loop over events): ullh[itau] = 0.0 for time in t : # i.e. for every data point generated... ullh[itau] += -2.0*np.log(1.0/tau_hypo*np.exp(-time/tau_hypo)) # Unbinned LLH function if (verbose) : print(f" {itau:3d}: tau = {tau_hypo:4.2f} chi2 = {chi2[itau]:6.2f} log(bllh) = {bllh[itau]:6.2f} log(ullh) = {ullh[itau]:6.2f}") # Search for minimum values of chi2, bllh, and ullh: if (chi2[itau] < chi2_minval) : chi2_minval = chi2[itau] chi2_minpos = tau_hypo if (bllh[itau] < bllh_minval) : bllh_minval = bllh[itau] bllh_minpos = tau_hypo if (ullh[itau] < ullh_minval) : ullh_minval = ullh[itau] ullh_minpos = tau_hypo print(f" Decay time of minimum found: chi2: {chi2_minpos:7.4f}s bllh: {bllh_minpos:7.4f}s ullh: {ullh_minpos:7.4f}s") print(f" Chi2 value at minimum: chi2 = {chi2_minval:.1f}") # ### Plot and fit results: # Define range around minimum to be fitted: min_fit = 0.15 max_fit = 0.20 # + fig, axes = plt.subplots(2, 2, figsize=(16, 12)) ax_chi2 = axes[0,0] ax_bllh = axes[1,0] ax_ullh = axes[0,1] # A fourth plot is available for plotting whatever you want :) # ChiSquare: # ---------- ax_chi2.plot(tau, chi2, 'k.', label='chi2') ax_chi2.set_xlim(chi2_minpos-2*min_fit, chi2_minpos+2*max_fit) ax_chi2.set_title("ChiSquare") ax_chi2.set_xlabel(r"Value of $\tau$") ax_chi2.set_ylabel("Value of ChiSquare") # Binned Likelihood: # ---------- ax_bllh.plot(tau, bllh,'bo') ax_bllh.set_xlim(bllh_minpos-2*min_fit, bllh_minpos+2*max_fit) ax_bllh.set_title("Binned Likelihood") ax_bllh.set_xlabel(r"Value of $\tau$") ax_bllh.set_ylabel(r"Value of $\ln{LLH}$") # Unbinned Likelihood: # ---------- ax_ullh.plot(tau, ullh, 'g.') ax_ullh.set_xlim(ullh_minpos-2*min_fit, ullh_minpos+2*max_fit) ax_ullh.set_title("Unbinned Likelihood") ax_ullh.set_xlabel(r"Value of $\tau$") ax_ullh.set_ylabel(r"Value of $\ln{LLH}$") fig; # - # --- # ## Parabola function # Note that the parabola is defined differently than normally. The parameters are: # * `minval`: Minimum value (i.e. constant) # * `minpos`: Minimum position (i.e. x of minimum) # * `quadratic`: Quadratic term. def func_para(x, minval, minpos, quadratic) : return minval + quadratic*(x-minpos)**2 func_para_vec = np.vectorize(func_para) # Note: This line makes it possible to send vectors through the function! # --- # ## Double parabola with different slopes on each side of the minimum: # In case the uncertainties are asymmetric, the parabola will also be so, and hence needs to be fitted with two separate parabolas meeting at the top point. Parameters are now as follows: # * `minval`: Minimum value (i.e. constant) # * `minpos`: Minimum position (i.e. x of minimum) # * `quadlow`: Quadratic term on lower side # * `quadhigh`: Quadratic term on higher side def func_asympara(x, minval, minpos, quadlow, quadhigh) : if (x < minpos) : return minval + quadlow*(x-minpos)**2 else : return minval + quadhigh*(x-minpos)**2 func_asympara_vec = np.vectorize(func_asympara) # Note: This line makes it possible to send vectors through the function! 
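# Before using these helpers in fits, here is a quick optional sanity check that the vectorized versions behave as expected. The parameter values below are arbitrary illustration values, not fit results.

x_test = np.linspace(0.8, 1.2, 5)                        # a few tau values around the expected minimum
print(func_para_vec(x_test, 10.0, 1.0, 20.0))            # symmetric parabola
print(func_asympara_vec(x_test, 10.0, 1.0, 15.0, 25.0))  # asymmetric parabola with different curvatures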
# ## Perform both fits: # + # Fit chi2 values with our parabola: indexes = (tau>chi2_minpos-min_fit) & (tau<chi2_minpos+max_fit) # Fit with parabola: chi2_object_chi2 = Chi2Regression(func_para, tau[indexes], chi2[indexes]) minuit_chi2 = Minuit(chi2_object_chi2, minval=chi2_minval, minpos=chi2_minpos, quadratic=20.0) minuit_chi2.errordef = 1.0 minuit_chi2.migrad() # Fit with double parabola: chi2_object_chi2_doublep = Chi2Regression(func_asympara, tau[indexes], chi2[indexes]) minuit_chi2_doublep = Minuit(chi2_object_chi2_doublep, minval=chi2_minval, minpos=chi2_minpos, quadlow=20.0, quadhigh=20.0) minuit_chi2_doublep.errordef = 1.0 minuit_chi2_doublep.migrad(); # + # Plot (simple) fit: minval, minpos, quadratic = minuit_chi2.values # Note how one can "extract" the three values from the object. print(minval) minval_2p, minpos_2p, quadlow_2p, quadhigh_2p = minuit_chi2_doublep.values print(minval_2p) x_fit = np.linspace(chi2_minpos-min_fit, chi2_minpos+max_fit, 1000) y_fit_simple = func_para_vec(x_fit, minval, minpos, quadratic) ax_chi2.plot(x_fit, y_fit_simple, 'b-') d = {'Chi2 value': minval, 'Fitted tau (s)': minpos, 'quadratic': quadratic} text = nice_string_output(d, extra_spacing=3, decimals=3) add_text_to_ax(0.02, 0.95, text, ax_chi2, fontsize=14) fig.tight_layout() if save_plots: fig.savefig("FitMinimum.pdf", dpi=600) fig # + # Given the parabolic fit, we can now extract the uncertainty on tau (think about why the below formula works!): err = 1.0 / np.sqrt(quadratic) # For comparison, I give one extra decimal, than I would normally do: print(f" Chi2 fit gives: tau = {minpos:.3f} +- {err:.3f}") # For the asymmetric case, there are naturally two errors to calculate. #err_lower = 1.0 / np.sqrt(quadlow) #err_upper = 1.0 / np.sqrt(quadhigh) # - # Go through tau values to find minimum and +-1 sigma: # This assumes knowing the minimum value, and Chi2s above Chi2_min+1 if (ScanChi2) : if (((chi2[0] - chi2_minval) > 1.0) and ((chi2[Ntau_steps] - chi2_minval) > 1.0)) : found_lower = False found_upper = False for itau in range (Ntau_steps+1) : if ((not found_lower) and ((chi2[itau] - chi2_minval) < 1.0)) : tau_lower = tau[itau] found_lower = True if ((found_lower) and (not found_upper) and ((chi2[itau] - chi2_minval) > 1.0)) : tau_upper = tau[itau] found_upper = True print(f" Chi2 scan gives: tau = {chi2_minpos:6.4f} + {tau_upper-chi2_minpos:6.4f} - {chi2_minpos-tau_lower:6.4f}") else : print(f" Error: Chi2 values do not fulfill requirements for finding minimum and errors!") # ### Discussion: # One could here of course have chosen a finer binning, but that is still not very satisfactory, and in any case very slow. That is why we of course want to use e.g. iMinuit to perform the fit, and extract all the relevant fitting parameters in a nice, fast, numerically stable, etc. way. # --- # # # Fit the data using iminuit (both chi2 and binned likelihood fits) # # Now we want to see, what a "real" fit gives, in order to compare our result with the one provided by Minuit. # + # Define the function to fit with: def func_exp(x, N0, tau) : return N0 * binwidth / tau * np.exp(-x/tau) # Define the function to fit with: def func_exp2(x, tau) : return Ntimes * binwidth / tau * np.exp(-x/tau) # - # ### $\chi^2$ fit: # + # Prepare figure fig_fit, ax_fit = plt.subplots(figsize=(8, 6)) ax_fit.set_title("tau values directly fitted with iminuit") ax_fit.set_xlabel("Lifetimes [s]") ax_fit.set_ylabel("Frequency [ev/0.1s]") # Plot our tau values indexes = yExp>0 # only bins with values! 
xExp = (xExp_edges[1:] + xExp_edges[:-1])/2 # Move from bins edges to bin centers syExp = np.sqrt(yExp) # Uncertainties ax_fit.errorbar(xExp[indexes], yExp[indexes], syExp[indexes], fmt='k_', ecolor='k', elinewidth=1, capsize=2, capthick=1) # Chisquare-fit tau values with our function: chi2_object_fit = Chi2Regression(func_exp, xExp[indexes], yExp[indexes], syExp[indexes]) # NOTE: The constant for normalization is NOT left free in order to have only ONE parameter! minuit_fit_chi2 = Minuit(chi2_object_fit, N0=Ntimes, tau=tau_truth) minuit_fit_chi2.fixed["N0"] = True minuit_fit_chi2.errordef = 1.0 minuit_fit_chi2.migrad() # Plot fit x_fit = np.linspace(0, 10, 1000) y_fit_simple = func_exp(x_fit, *minuit_fit_chi2.values) ax_fit.plot(x_fit, y_fit_simple, 'b-', label="ChiSquare fit") # + # Print the obtained fit results: # print(minuit_fit_chi2.values["tau"], minuit_fit_chi2.errors["tau"]) tau_fit = minuit_fit_chi2.values["tau"] etau_fit = minuit_fit_chi2.errors["tau"] print(f" Decay time of minimum found: chi2: {tau_fit:.3f} +- {etau_fit:.3f}s") print(f" Chi2 value at minimum: chi2 = {minuit_fit_chi2.fval:.1f}") # - # Alternatively to the above, one can in iMinuit actually ask for the Chi2 curve to be plotted by one command: minuit_fit_chi2.draw_mnprofile('tau') # --- # # ### Binned likelihood fit: # # Below is an example of a binned likelihood fit. Try to write an unbinned likelihood fit yourself! # + # Binned likelihood-fit tau values with our function # extended=True because we have our own normalization in our fit function bllh_object_fit = BinnedLH(func_exp2, t, bins=Nbins, bound=(0, tmax), extended=True) minuit_fit_bllh = Minuit(bllh_object_fit, tau=tau_truth) minuit_fit_bllh.errordef = 0.5 # Value for likelihood fit minuit_fit_bllh.migrad() # Plot fit x_fit = np.linspace(0, 10, 1000) y_fit_simple = func_exp2(x_fit, *minuit_fit_bllh.values[:]) ax_fit.plot(x_fit, y_fit_simple, 'r-', label="Binned Likelihood fit") # Define the ranges: ax_fit.set_xlim(0, 5) ax_fit.set_ylim(bottom=0) # We don't want to see values below this! fig_fit.legend(loc=[0.45, 0.75]) fig_fit.tight_layout() fig_fit # - if (save_plots) : fig_fit.savefig("ExponentialDist_Fitted.pdf", dpi=600) # --- # # ## Summary: # # Make sure that you understand how the likelihood is different from the ChiSquare, # and how the binned likelihood is different from the unbinned. If you don't do it, # this exercise, and much of the course and statistics in general will be a bit lost # on you! :-) # # The binned likelihood resembels the ChiSquare a bit, only the evaluation in each bin # is different, especially if the number of events in the bin is low, as the PDF # considered (Poisson for the LLH, Gaussian for the ChiSquare) is then different. # At high statistics, they give the same result, but the ChiSquare fit quality can be evaluated. # # The unbinned likelihood uses each single event, and is thus different at its core. # This can make a difference, if there are only few events and/or if each event has # several attributes, which can't be summarized in a simple histogram with bins. # # ## Conclusion: # Fitting "manually" is damn hard, cumbersome, and not a thing that one wants to do. Always let a well tested program (e.g. iMinuit) do it, and instead take the inspired position of checking that the fitting program actually is doing what it is supposed to do, and that everything comes out reasonable. # # The art of fitting is multiple. **Very importantly, a fit requires good input parameters**, as it will otherwise not converge. 
Also, the Chi-square fit is more robust, so it is often a good idea to start with this, and if the fit converges, one can use the fitted parameters as input values for subsequent (likelihood) fits. Finally, one needs to consider the binning and fitting range carefully, and make good use of the p-value from the Chi-square. # # # # # # Questions: # # 1) Consider the four plots (bottom right one empty) showing chi2, bllh, and ullh as a function of lifetime, tau. Do the four curves resemble each other in shape? Are they identical in shape? Do the three methods give similar results, or are they different? Do you see the relation between the curves and the fit result? This question requires that you also fit a parabola to the other two cases. Remember to consider both the central value and the uncertainty of tau. # # Example solution 1: # The main thing to see is that the two likelihood curves (and especially the unbinned one) rise faster, and thus have a smaller uncertainty. Also, if repeating the experiment many times, it will be clear that the likelihood is a better estimate, while the Chi2 is biased towards smaller values (tau < 1), as the higher, mostly empty bins are disregarded (well, depending on how you define the numerator in the Pearson Chi2). Finally, the minimum likelihood values don't give any information in themselves, unlike the Chi2 value, which can be used to test the goodness of the fit. # # --- # # 2) Now consider the two (chi2 and bllh) fits by iMinuit. How similar are the results they obtain? Again, consider both the central values and the uncertainties. # # 3) Try to decrease the number of exponential numbers you consider to, say, 10, and see how things change. Does the difference between Chi2, bllh, and ullh get bigger or not? # # Example solutions 2 and 3: # In the limit of large statistics, the three converge, but at low statistics, the Chi2 can become a really poor estimate. The two likelihood methods do much better. Their difference is due to the binning, which, if too coarse, gives the unbinned likelihood fit an advantage. All of this is best investigated by running many experiments (a bit like god!) to see the outcome statistically. # # --- # # 4) Try to increase the number of exponential numbers you consider to, say, 10000, and see what happens to the difference between Chi2 and BLLH. Also, do the errors become more symmetric? Perhaps you will need to consider a shorter range of the fit around the minimal value, and you may also have to increase the number of points at which you calculate the chi2/bllh/ullh (or decrease the range you search!), and possibly change the ranges of your plotting. # # Example solution 4: # With more statistics, all methods converge, and also the asymmetry of the chi2/llh curve decreases. On a large scale, it may be (and still is) asymmetric, but locally around the minimum it becomes almost perfectly symmetric. In general, uncertainties are more or less symmetric, and become increasingly so with increasing statistics. # # # ### Advanced Questions: # # 5) Make (perhaps in a new program) a loop over the production of random data, # and try to see if you can print (or plot) the Chi2 and BLLH results for each # turn. Can you spot any general trends? I.e. is the Chi2 uncertainty always # lower or higher than the (B/U)LLH? And are any of the estimators biased? # # 6) Make a copy of the program and put in a different PDF (i.e. not the exponential). # Run it, and see if the errors are still asymmetric. For the function, try # e.g. a Polynomial or a Gaussian.
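# For reference, here is a minimal sketch of the unbinned likelihood fit that is left as an exercise above. It assumes that the `UnbinnedLH` helper imported from `ExternalFunctions` at the top of this notebook accepts a normalized PDF and the raw data array, analogously to the `BinnedLH` call used for the binned fit; if its interface differs, adapt accordingly.

# Normalized exponential PDF for the unbinned fit:
def pdf_exp(x, tau) :
    return 1.0 / tau * np.exp(-x / tau)

ullh_object_fit = UnbinnedLH(pdf_exp, t)                 # assumed interface: (pdf, data)
minuit_fit_ullh = Minuit(ullh_object_fit, tau=tau_truth)
minuit_fit_ullh.errordef = 0.5                           # Value for likelihood fits
minuit_fit_ullh.migrad()
print(f"  Unbinned LLH fit: tau = {minuit_fit_ullh.values['tau']:.3f} +- {minuit_fit_ullh.errors['tau']:.3f}s")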
AppStat2022/Week2/ExampleSolutions/LikelihoodFit/LikelihoodFit_ExampleSolution.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Install required packages # Run the line below to ensure that all required packages are installed. # !pip install -r requirements.txt # ## Getting the Data # In order to access the data that I use below you will need to join the [March Machine Learning Mania 2021 - NCAAM](https://www.kaggle.com/c/ncaam-march-mania-2021) Kaggle competition [here](https://www.kaggle.com/account/login?returnUrl=%2Fc%2Fncaam-march-mania-2021%2Frules). # Then you will need to get your Kaggle API token by following the instructions [here](https://www.kaggle.com/docs/api). # # Once you have completed these steps, run the cell below to download and decompress the competition data # !kaggle competitions download -c ncaam-march-mania-2021 -p data # !unzip -f data/ncaam-march-mania-2021.zip -d data # # Imports # Run the cell below to import all required modules. import pandas as pd from seaborn import heatmap from matplotlib import pyplot as plt file_path = 'data/MRegularSeasonCompactResults.csv' # # Inspecting the Compact Tournament results. # In this notebook we will inspect the compact regular season results provided by the `MRegularSeasonCompactResults.csv` file. df = pd.read_csv(file_path) df.head() # ### Outcome Matrix # Below we will create an all-time outcome matrix, which will count the total number of wins for every ordered tuple in the form `(WTeamID, LTeamID)`. def compute_outcome_matrix(file_path, season=None): """Takes a file path to a compact results file and returns an outcome matrix for all Mens NCAA games. Given an optional season, outcomes are restricted to the season in question. """ df = pd.read_csv(file_path) if season: df = df[df['Season'] == season] team_ids_by_game = df[['WTeamID', 'LTeamID']] win_counts = team_ids_by_game.value_counts().reset_index() col_names = win_counts.columns.to_list() col_names[2] = 'Wins' win_counts.columns = col_names outcome_matrix = win_counts.pivot_table(values= 'Wins', index='WTeamID', columns='LTeamID').fillna(0) return outcome_matrix def show_outcome_matrix(file_path, season=None): """Takes the file path to MNCAATourneyCompactResults.csv and plots a heatmap of outcomes for all Mens NCAA games. Outcomes can be restricted to a specific season by passing a season.""" outcome_matrix = compute_outcome_matrix(file_path, season) fig, ax = plt.subplots(figsize=(20,15)); if season: ax.set_title(f'Outcome matrix for {season} season.') else: ax.set_title('All-Time Outcome Matrix'); heatmap(outcome_matrix, cmap='Blues', ax=ax); return fig # #### All-time outcome heat map # Below, we see that the all-time outcome matrix is fairly sparse, with the majority of pairings having never occurred. fig = show_outcome_matrix(file_path) # #### Outcome heat map for 2015 season # Below, we see that even seasonal data is sparse. It would be wise to group teams by conference to obtain a more informative visual. fig = show_outcome_matrix(file_path, season=2019)
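# #### Per-team totals
# As a small follow-up sketch, the outcome matrix already contains enough information to rank teams: row sums count wins and column sums count losses for the chosen season. The variable names below are only illustrative.

outcome_2019 = compute_outcome_matrix(file_path, season=2019)
wins = outcome_2019.sum(axis=1).rename('Wins')       # row sums: total wins per winning team id
losses = outcome_2019.sum(axis=0).rename('Losses')   # column sums: total losses per losing team id
totals = pd.concat([wins, losses], axis=1).fillna(0)
totals.sort_values('Wins', ascending=False).head(10)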
HeatmapNCAA.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] colab_type="text" id="ChjuaQjm_iBf" # ##### Copyright 2020 The TensorFlow Authors. # + colab={} colab_type="code" id="uWqCArLO_kez" #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # + [markdown] colab_type="text" id="ikhIvrku-i-L" # # Taking advantage of context features # # <table class="tfo-notebook-buttons" align="left"> # <td> # <a target="_blank" href="https://www.tensorflow.org/recommenders/examples/context_features"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> # </td> # <td> # <a target="_blank" href="https://colab.research.google.com/github/tensorflow/recommenders/blob/main/docs/examples/context_features.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> # </td> # <td> # <a target="_blank" href="https://github.com/tensorflow/recommenders/blob/main/docs/examples/context_features.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> # </td> # <td> # <a href="https://storage.googleapis.com/tensorflow_docs/recommenders/docs/examples/context_features.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> # </td> # </table> # + [markdown] colab_type="text" id="rrDVNe7Vdqhr" # In [the featurization tutorial](featurization) we incorporated multiple features beyond just user and movie identifiers into our models, but we haven't explored whether those features improve model accuracy. # # Many factors affect whether features beyond ids are useful in a recommender model: # # 1. __Importance of context__: if user preferences are relatively stable across contexts and time, context features may not provide much benefit. If, however, users preferences are highly contextual, adding context will improve the model significantly. For example, day of the week may be an important feature when deciding whether to recommend a short clip or a movie: users may only have time to watch short content during the week, but can relax and enjoy a full-length movie during the weekend. Similarly, query timestamps may play an imporatant role in modelling popularity dynamics: one movie may be highly popular around the time of its release, but decay quickly afterwards. Conversely, other movies may be evergreens that are happily watched time and time again. # 2. __Data sparsity__: using non-id features may be critical if data is sparse. With few observations available for a given user or item, the model may struggle with estimating a good per-user or per-item representation. To build an accurate model, other features such as item categories, descriptions, and images have to be used to help the model generalize beyond the training data. 
This is especially relevant in [cold-start](https://en.wikipedia.org/wiki/Cold_start_(recommender_systems))) situations, where relatively little data is available on some items or users. # # In this tutorial, we'll experiment with using features beyond movie titles and user ids to our MovieLens model. # + [markdown] colab_type="text" id="D7RYXwgbAcbU" # ## Preliminaries # # We first import the necessary packages. # + colab={} colab_type="code" id="2bK2g6_Mbn73" # !pip install -q tensorflow-recommenders # !pip install -q --upgrade tensorflow-datasets # + colab={} colab_type="code" id="XbwMjnLP5nZ_" import os import tempfile import numpy as np import tensorflow as tf import tensorflow_datasets as tfds import tensorflow_recommenders as tfrs # + [markdown] colab_type="text" id="tgKIjpQLAiax" # We follow [the featurization tutorial](featurization) and keep the user id, timestamp, and movie title features. # + colab={} colab_type="code" id="kc2REbOO52Fl" ratings = tfds.load("movie_lens/100k-ratings", split="train") movies = tfds.load("movie_lens/100k-movies", split="train") ratings = ratings.map(lambda x: { "movie_title": x["movie_title"], "user_id": x["user_id"], "timestamp": x["timestamp"], }) movies = movies.map(lambda x: x["movie_title"]) # + [markdown] colab_type="text" id="5YZ2q5RXYNI6" # We also do some housekeeping to prepare feature vocabularies. # + colab={} colab_type="code" id="G5CVveCS9Doq" timestamps = np.concatenate(list(ratings.map(lambda x: x["timestamp"]).batch(100))) max_timestamp = timestamps.max() min_timestamp = timestamps.min() timestamp_buckets = np.linspace( min_timestamp, max_timestamp, num=1000, ) unique_movie_titles = np.unique(np.concatenate(list(movies.batch(1000)))) unique_user_ids = np.unique(np.concatenate(list(ratings.batch(1_000).map( lambda x: x["user_id"])))) # + [markdown] colab_type="text" id="mFJcCVMUQou3" # ## Model definition # + [markdown] colab_type="text" id="PtS6a4sgmI-c" # ### Query model # # We start with the user model defined in [the featurization tutorial](featurization) as the first layer of our model, tasked with converting raw input examples into feature embeddings. However, we change it slightly to allow us to turn timestamp features on or off. This will allow us to more easily demonstrate the effect that timestamp features have on the model. In the code below, the `use_timestamps` parameter gives us control over whether we use timestamp features. 
# + colab={} colab_type="code" id="_ItzYwMW42cb"
class UserModel(tf.keras.Model):

  def __init__(self, use_timestamps):
    super().__init__()

    self._use_timestamps = use_timestamps

    self.user_embedding = tf.keras.Sequential([
        tf.keras.layers.experimental.preprocessing.StringLookup(
            vocabulary=unique_user_ids),
        tf.keras.layers.Embedding(len(unique_user_ids) + 2, 32),
    ])

    if use_timestamps:
      self.timestamp_embedding = tf.keras.Sequential([
          tf.keras.layers.experimental.preprocessing.Discretization(timestamp_buckets.tolist()),
          tf.keras.layers.Embedding(len(timestamp_buckets) + 2, 32),
      ])
      self.normalized_timestamp = tf.keras.layers.experimental.preprocessing.Normalization()

      self.normalized_timestamp.adapt(timestamps)

  def call(self, inputs):
    if not self._use_timestamps:
      return self.user_embedding(inputs["user_id"])

    return tf.concat([
        self.user_embedding(inputs["user_id"]),
        self.timestamp_embedding(inputs["timestamp"]),
        self.normalized_timestamp(inputs["timestamp"]),
    ], axis=1)


# + [markdown] colab_type="text" id="B9IqNTLmpJzs"
# Note that our use of timestamp features in this tutorial interacts with our choice of training-test split in an undesirable way. Because we have split our data randomly rather than chronologically (which would have ensured that events in the test dataset happen later than those in the training set), our model can effectively learn from the future. This is unrealistic: after all, we cannot train a model today on data from tomorrow.
#
# This means that adding time features to the model lets it learn _future_ interaction patterns. We do this for illustration purposes only: the MovieLens dataset itself is very dense, and unlike many real-world datasets does not benefit greatly from features beyond user ids and movie titles.
#
# This caveat aside, real-world models may well benefit from other time-based features such as time of day or day of the week, especially if the data has strong seasonal patterns.

# + [markdown] colab_type="text" id="XleMceZNHC__"
# ### Candidate model
#
# For simplicity, we'll keep the candidate model fixed. Again, we copy it from the [featurization](featurization) tutorial:

# + colab={} colab_type="code" id="oQZHX8bEHPOk"
class MovieModel(tf.keras.Model):

  def __init__(self):
    super().__init__()

    max_tokens = 10_000

    self.title_embedding = tf.keras.Sequential([
        tf.keras.layers.experimental.preprocessing.StringLookup(
            vocabulary=unique_movie_titles),
        tf.keras.layers.Embedding(len(unique_movie_titles) + 2, 32)
    ])

    self.title_vectorizer = tf.keras.layers.experimental.preprocessing.TextVectorization(
        max_tokens=max_tokens)

    self.title_text_embedding = tf.keras.Sequential([
        self.title_vectorizer,
        tf.keras.layers.Embedding(max_tokens, 32, mask_zero=True),
        tf.keras.layers.GlobalAveragePooling1D(),
    ])

    self.title_vectorizer.adapt(movies)

  def call(self, titles):
    return tf.concat([
        self.title_embedding(titles),
        self.title_text_embedding(titles),
    ], axis=1)


# + [markdown] colab_type="text" id="Cc4KbTNwHSvD"
# ### Combined model
#
# With both `UserModel` and `MovieModel` defined, we can put together a combined model and implement our loss and metrics logic.
#
# Note that we also need to make sure that the query and candidate models output embeddings of compatible size.
Because we'll be varying their sizes by adding more features, the easiest way to accomplish this is to use a dense projection layer after each model: # # # + colab={} colab_type="code" id="26_hNJPKIh4-" class MovielensModel(tfrs.models.Model): def __init__(self, use_timestamps): super().__init__() self.query_model = tf.keras.Sequential([ UserModel(use_timestamps), tf.keras.layers.Dense(32) ]) self.candidate_model = tf.keras.Sequential([ MovieModel(), tf.keras.layers.Dense(32) ]) self.task = tfrs.tasks.Retrieval( metrics=tfrs.metrics.FactorizedTopK( candidates=movies.batch(128).map(self.candidate_model), ), ) def compute_loss(self, features, training=False): # We only pass the user id and timestamp features into the query model. This # is to ensure that the training inputs would have the same keys as the # query inputs. Otherwise the discrepancy in input structure would cause an # error when loading the query model after saving it. query_embeddings = self.query_model({ "user_id": features["user_id"], "timestamp": features["timestamp"], }) movie_embeddings = self.candidate_model(features["movie_title"]) return self.task(query_embeddings, movie_embeddings) # + [markdown] colab_type="text" id="8YXjsRsLTVzt" # ## Experiments # + [markdown] colab_type="text" id="QY7MTwMruoKh" # ### Prepare the data # # We first split the data into a training set and a testing set. # + colab={} colab_type="code" id="wMFUZ4dyTdYd" tf.random.set_seed(42) shuffled = ratings.shuffle(100_000, seed=42, reshuffle_each_iteration=False) train = shuffled.take(80_000) test = shuffled.skip(80_000).take(20_000) cached_train = train.shuffle(100_000).batch(2048) cached_test = test.batch(4096).cache() # + [markdown] colab_type="text" id="I2HEuTBzJ9w5" # ### Baseline: no timestamp features # # We're ready to try out our first model: let's start with not using timestamp features to establish our baseline. # + colab={} colab_type="code" id="NkoLkiQdK4Um" model = MovielensModel(use_timestamps=False) model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1)) model.fit(cached_train, epochs=3) train_accuracy = model.evaluate( cached_train, return_dict=True)["factorized_top_k/top_100_categorical_accuracy"] test_accuracy = model.evaluate( cached_test, return_dict=True)["factorized_top_k/top_100_categorical_accuracy"] print(f"Top-100 accuracy (train): {train_accuracy:.2f}.") print(f"Top-100 accuracy (test): {test_accuracy:.2f}.") # + [markdown] colab_type="text" id="p90vFk8LvJXp" # This gives us a baseline top-100 accuracy of around 0.2. # # # + [markdown] colab_type="text" id="BjJ1anzuLXgN" # ### Capturing time dynamics with time features # # Do the result change if we add time features? # + colab={} colab_type="code" id="11qAr5gGMUxE" model = MovielensModel(use_timestamps=True) model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1)) model.fit(cached_train, epochs=3) train_accuracy = model.evaluate( cached_train, return_dict=True)["factorized_top_k/top_100_categorical_accuracy"] test_accuracy = model.evaluate( cached_test, return_dict=True)["factorized_top_k/top_100_categorical_accuracy"] print(f"Top-100 accuracy (train): {train_accuracy:.2f}.") print(f"Top-100 accuracy (test): {test_accuracy:.2f}.") # + [markdown] colab_type="text" id="NHnzYfQrOj8I" # This is quite a bit better: not only is the training accuracy much higher, but the test accuracy is also substantially improved. 
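# + [markdown]
# As a quick sanity check (a minimal sketch using only objects defined above; the user id and
# timestamps below are just illustrative values), we can embed the same user at two different
# timestamps and confirm that the time-aware query model produces different embeddings.

# +
sample_user = tf.constant(["42"])
early = tf.constant([int(min_timestamp)], dtype=tf.int64)
late = tf.constant([int(max_timestamp)], dtype=tf.int64)

embedding_early = model.query_model({"user_id": sample_user, "timestamp": early})
embedding_late = model.query_model({"user_id": sample_user, "timestamp": late})

# With use_timestamps=True, the timestamp features should move the query embedding.
print(tf.reduce_max(tf.abs(embedding_early - embedding_late)).numpy())
# -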
# + [markdown] colab_type="text" id="dB09crfpgBx7"
# ## Next Steps
#
# This tutorial shows that even simple models can become more accurate when incorporating more features. However, to get the most out of your features it's often necessary to build larger, deeper models. Have a look at the [deep retrieval tutorial](deep_recommenders) to explore this in more detail.
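# + [markdown]
# As a small illustration of that point (a sketch only, not part of the original tutorial), a deeper
# query tower can be built by stacking additional dense layers on top of the feature embeddings
# defined above; the deep retrieval tutorial explores this properly.

# +
deeper_query_model = tf.keras.Sequential([
    UserModel(use_timestamps=True),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32),
])
# -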
docs/examples/context_features.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python [conda root]
#     language: python
#     name: conda-root-py
# ---

import numpy as np

class Perceptron(object):
    """ Perceptron classifier.

    Parameters
    -----------
    eta : float
        Learning Rate (between 0.0 and 1.0)
    n_iter : int
        Passes over the training set

    Attributes
    -----------
    w_ : 1d array
        Weights after fitting.
    errors_ : list
        Number of misclassifications in every epoch.
    """
    def __init__(self, eta=0.01, n_iter=10):
        self.eta = eta
        self.n_iter = n_iter

    def fit(self, X, y):
        """Fit training data.

        Parameters
        -----------
        X : {array-like}, shape = [n_samples, n_features]
            Training vectors, where n_samples is the number of samples
            and n_features is the number of features.
        y : array-like, shape = [n_samples]
            Target values

        Returns
        -----------
        self : object
        """
        self.w_ = np.zeros(1 + X.shape[1])
        self.errors_ = []

        for _ in range(self.n_iter):
            errors = 0
            for xi, target in zip(X, y):
                update = self.eta * (target - self.predict(xi))
                self.w_[1:] += update * xi
                self.w_[0] += update
                errors += int(update != 0.0)
            self.errors_.append(errors)
        return self

    def net_input(self, X):
        """Calculate net input"""
        return np.dot(X, self.w_[1:]) + self.w_[0]

    def predict(self, X):
        """Return class label after unit step"""
        return np.where(self.net_input(X) >= 0.0, 1, -1)

import pandas as pd
df = pd.read_csv('https://archive.ics.uci.edu/ml/'
                 'machine-learning-databases/iris/iris.data', header=None)
df.tail()

import matplotlib.pyplot as plt
import numpy as np

y = df.iloc[0:100, 4].values
y = np.where(y == 'Iris-setosa', -1, 1)
X = df.iloc[0:100, [0, 2]].values
plt.scatter(X[:50, 0], X[:50, 1], color='red', marker='o', label='setosa')
plt.scatter(X[50:100, 0], X[50:100, 1], color='blue', marker='x', label='versicolor')
plt.xlabel('sepal length')
plt.ylabel('petal length')
plt.legend(loc='upper left')
plt.show()

# ### Now we'll fit the perceptron
# #### First initialize it, then fit to data, then plot

ppn = Perceptron(eta=0.1, n_iter=10)
ppn.fit(X, y)
plt.plot(range(1, len(ppn.errors_) + 1), ppn.errors_, marker='o')
plt.xlabel('Epochs')
plt.ylabel('number of misclassifications')
plt.show()

# #### Now writing a small function to more easily be able to visualize the decision boundaries

from matplotlib.colors import ListedColormap

def plot_decision_regions(X, y, classifier, resolution=0.02):
    # setup marker generator and color map
    markers = ('s', 'x', 'o', '^', 'v')
    colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
    cmap = ListedColormap(colors[:len(np.unique(y))])

    # plot the decision surface
    x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
    x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
    xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
                           np.arange(x2_min, x2_max, resolution))
    Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
    Z = Z.reshape(xx1.shape)
    plt.contourf(xx1, xx2, Z, alpha=0.4, cmap=cmap)
    plt.xlim(xx1.min(), xx1.max())
    plt.ylim(xx2.min(), xx2.max())

    for idx, cl in enumerate(np.unique(y)):
        plt.scatter(x=X[y == cl, 0], y=X[y == cl, 1], alpha=0.8, c=cmap(idx),
                    marker=markers[idx], label=cl)

plot_decision_regions(X, y, classifier=ppn, resolution=0.02)
plt.xlabel('sepal length [cm]')
plt.ylabel('petal length [cm]')
plt.legend(loc='upper left')
plt.show()
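# #### Quick check on new measurements
# As a final sanity check (a minimal sketch; the measurement values below are made up for
# illustration), we can classify a couple of new flowers with the trained perceptron. Recall that we
# encoded Iris-setosa as -1 and Iris-versicolor as 1, and that the two feature columns are sepal
# length and petal length.

new_samples = np.array([[5.0, 1.4],   # short petal, expect -1 (setosa)
                        [6.4, 4.5]])  # long petal, expect 1 (versicolor)
ppn.predict(new_samples)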
ch02_perceptrons/ch_02_Perceptron.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (TensorFlow 2.3 Python 3.7 GPU Optimized) # language: python # name: python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-west-2:236514542706:image/tensorflow-2.3-gpu-py37-cu110-ubuntu18.04-v3 # --- # # SageMakerCV TensorFlow Tutorial # # SageMakerCV is a collection of computer vision tools developed to take full advantage of Amazon SageMaker by providing state of the art model accuracy, training speed, and training cost reductions. SageMakerCV is based on the lessons we learned from developing the record breaking computer vision models we announced at Re:Invent in 2019 and 2020, along with talking to our customers and understanding the challenges they faced in training their own computer vision models. # # The tutorial in this notebook walks through using SageMakerCV to train Mask RCNN on the COCO dataset. The only prerequisite is to setup SageMaker studio, the instructions for which can be found in [Onboard to Amazon SageMaker Studio Using Quick Start](https://docs.aws.amazon.com/sagemaker/latest/dg/onboard-quick-start.html). Everything else, from getting the COCO data to launching a distributed training cluster, is included here. # # ## Setup and Roadmap # # Before diving into the tutorial itself, let's take a minute to discuss the various tools we'll be using. # # #### SageMaker Studio # [SageMaker Studio](https://aws.amazon.com/sagemaker/studio/) is a machine learning focused IDE where you can interactively develop models and launch SageMaker training jobs all in one place. SageMaker Studio provides a Jupyter Lab like environment, but with a number of enhancements. We'll just scratch the surface here. See the [SageMaker Studio Documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/studio.html) for more details. # # For our purposes, the biggest difference from regular Jupyter Lab is that SageMaker Studio allows you to change your compute resources as needed, by connecting notebooks to Docker containers on different ML instances. This is a little confusing to just describe, so let's walk through an example. # # Once you've completed the setup on [Onboard to Amazon SageMaker Studio Using Quick Start](https://docs.aws.amazon.com/sagemaker/latest/dg/onboard-quick-start.html), go to the [SageMaker Console](https://us-west-2.console.aws.amazon.com/sagemaker) and click `Open SageMaker Studio` near the top right of the page. # # <img src="../assets/SageMaker_console.png" style="width: 600px"> # # If you haven't yet created a user, do so via the link at the top left of the page. Give it any name you like. For execution role, you can either use an existing SageMaker role, or create a new one. If you're unsure, create a new role. On the `Create IAM Role` window, make sure to select `Any S3 Bucket`. # # <img src="../assets/Create_IAM_role.png" style="width: 600px"> # # Back on the SageMaker Studio page, select `Open Studio` next to the user you just created. # # <img src="../assets/Studio_domain.png" style="width: 600px"> # # This will take a couple minutes to start up the first time. Once it starts, you'll have a Jupyter Lab like interface running on a small instance with an attached EBS volume. Let's start by taking a look at the `Launcher` tab. 
#
# <img src="../assets/Studio_launcher.png" style="width: 750px">
#
# If you don't see the `Launcher`, you can bring one up by clicking the `+` on the menu bar in the upper left corner.
#
# <img src="../assets/Studio_menu_bar.png" style="width: 600px">
#
# The `Launcher` gives you access to all kinds of tools. This is where you can create new notebooks, text files, or get a terminal for your instance. Try the `System Terminal`. This gives you a new terminal tab for your Studio instance. It's useful for things like downloading data or cloning GitHub repos into Studio. For example, you can run `aws s3 ls` to browse your current S3 buckets. Go ahead and clone this repo onto Studio with
#
# `git clone https://github.com/aws-samples/amazon-sagemaker-cv`
#
# Let's look at the launcher one more time. Bring another one up with the `+`. Notice you have an option for `Select a SageMaker image` above the button to launch a notebook. This allows you to select a Docker image that will launch on a new instance. The notebook you create will be attached to that new instance, along with the EBS volume on your Studio instance. Let's try it out. On the `Launcher` page, click the drop down menu next to `Select a SageMaker Image` and select `TensorFlow 2.3 Python 3.7 (Optimized for GPU)`, then click the `Notebook` button below the dropdown.
#
# <img src="../assets/Select_tensorflow_image.png" style="width: 600px">
#
# Take a look at the upper right-hand corner of the notebook.
#
# <img src="../assets/notebook_tensorflow_kernel.png" style="width: 600px">
#
# The `Python 3 (TensorFlow 2.3 Python 3.7 GPU Optimized)` refers to the kernel associated with this notebook. The `Unknown` refers to the current instance type. Click `Unknown` and select `ml.g4dn.xlarge`.
#
# <img src="../assets/instance_types.png" style="width: 600px">
#
# This will launch an `ml.g4dn.xlarge` instance and attach this notebook to it. This will take a couple of minutes, because Studio needs to download the TensorFlow Docker image to the new instance. Once an instance has started, launching new notebooks with the same instance type and kernel is immediate. You'll also see the `Unknown` replaced with an instance description, `4 vCPU + 16 GiB + 1 GPU`. You can also change instances as needed. Say you want to run your notebook on an `ml.p3dn.24xlarge` to get 8 GPUs. To change instances, just click the instance description. To get more instances in the menu, deselect `Fast launch only`.
#
# Once your notebook is up and running, you can also get a terminal into your new instance.
#
# <img src="../assets/Launch_terminal.png" style="width: 600px">
#
# This can be useful for customizing your image with setup scripts, pip installing new packages, or using MPI to launch multi GPU training jobs. Click to get a terminal and run `ls`. Note that you have the same directories as your main Studio instance. Studio will attach the same EBS volume to all the instances you start, so all your files and data are shared across any notebooks you start. This means that you can prototype a model on a single GPU instance, then switch to a multi GPU instance while still having access to all of your data and scripts.
#
# Finally, when you want to shut down instances, click the circle with a square in it on the left hand side.
#
# <img src="../assets/running_instances.png" style="width: 600px">
#
# This shows your current running instances, and the Docker containers attached to those instances. To shut them down, just click the power button to their right.
#
# Now that we've explored Studio a bit, let's get started with SageMakerCV. If you followed the instructions above to clone the repo, you should have `amazon-sagemaker-cv` in the file browser on the left. Navigate to `amazon-sagemaker-cv/tensorflow/Tutorial.ipynb` to open this notebook on your instance. If you still have a `g4dn` running, it should automatically attach to it.
#
# The rest of this notebook is broken into 4 sections.
#
# - Installing SageMakerCV and Downloading the COCO Data
#
# Since we're using the base AWS Deep Learning Container image, we need to add the SageMakerCV tools. Then we'll download the COCO dataset and upload it to S3.
#
# - Prototyping in Studio
#
# We'll walk through how to train a model on Studio, how SageMakerCV is structured, and how you can add your own models and features.
#
# - Launching a SageMaker Training Job
#
# There are lots of bells and whistles available to train your models fast and on large datasets. We'll put a lot of those together to launch a high performance training job. Specifically, we'll create a training job with 4 P4d.24xlarge instances connected with 400 Gbps EFA, streaming our training data from S3 so we don't have to load the dataset onto the instances before training. You could even use this same configuration to train on a dataset that wouldn't fit on the instances. If you'd rather launch a smaller (or larger) training cluster, we'll discuss how to modify the configuration.
#
# - Testing Our Model
#
# Finally, we'll take the trained Mask RCNN model and visualize its performance in Studio.
#
# #### Installing SageMakerCV
#
# To install SageMakerCV on the TensorFlow Studio image, just run `pip install -e .` in the `amazon-sagemaker-cv/tensorflow` directory. You can do this with either an image terminal, or by running the paragraph below. Note that we use the `-e` option. This will keep the SageMakerCV modules editable, so any changes you make will be included in your training job.

# !pip install -e .

# ***
# ### Setup on S3 and Download COCO data
#
# Next we need to set up an S3 bucket for all our data and results. Enter a name for your S3 bucket below. You can either create a new bucket, or use an existing bucket. If you use an existing bucket, make sure it's in the same region where you plan to run training. For new buckets, we'll specify that it needs to be in the current SageMaker region. By default we'll put everything in an S3 location on your bucket named `smcv-tensorflow-tutorial`, and locally in `/root/smcv-tensorflow-tutorial`, but you can change these locations.

S3_BUCKET = 'sagemaker-smcv-tutorial' # Don't include s3:// in your bucket name
S3_DIR = 'smcv-tensorflow-tutorial'
LOCAL_DATA_DIR = '/root/smcv-tensorflow-tutorial' # For reasons detailed in Distributed Training, do not put this dir in the SageMakerCV dir

import os
import zipfile
from pathlib import Path
from s3fs import S3FileSystem
from concurrent.futures import ThreadPoolExecutor
import boto3
from botocore.client import ClientError
from tqdm import tqdm

# +
s3 = boto3.resource('s3')
boto_session = boto3.session.Session()
region = boto_session.region_name

# Check if bucket exists. If it doesn't, create it.
try:
    bucket = s3.meta.client.head_bucket(Bucket=S3_BUCKET)
    print(f"S3 Bucket {S3_BUCKET} Exists")
except ClientError:
    print(f"Creating Bucket {S3_BUCKET}")
    bucket = s3.create_bucket(Bucket=S3_BUCKET, CreateBucketConfiguration={'LocationConstraint': region})
# -

# ***
#
# Next we'll download the COCO data to Studio, unzip the files, create TFRecords, and upload to S3.
The reason we want the data in two places is that it's convenient to have the data locally on Studio for prototyping. We also want to unarchive the data before moving it to S3 so that we can stream it to our training instances instead of downloading it all at once. # # Once this is finished, you'll have copies of the COCO data on your Studio instance, and in S3. Be careful not to open the `data/coco/train2017` dir in the Studio file browser. It contains 118287 images, and can cause your web browser to crash. If you need to browse these files, use the terminal. # # This only needs to be done once, and only if you don't already have the data. The COCO 2017 dataset is about 20GB, so this step takes around 30 minutes to complete. The next paragraph sets up all the file directories we'll use for downloading, and later in training. COCO_URL="http://images.cocodataset.org" ANNOTATIONS_ZIP="annotations_trainval2017.zip" TRAIN_ZIP="train2017.zip" VAL_ZIP="val2017.zip" COCO_DIR=os.path.join(LOCAL_DATA_DIR, 'data', 'coco') TF_RECORD_DIR=os.path.join(LOCAL_DATA_DIR, 'data', 'coco', 'tfrecord') os.makedirs(COCO_DIR, exist_ok=True) os.makedirs(TF_RECORD_DIR, exist_ok=True) S3_DATA_LOCATION=os.path.join("s3://", S3_BUCKET, S3_DIR, "data", "coco") S3_WEIGHTS_LOCATION=os.path.join("s3://", S3_BUCKET, S3_DIR, "data", "weights", "resnet") WEIGHTS_DIR=os.path.join(LOCAL_DATA_DIR, 'data', 'weights') os.makedirs(WEIGHTS_DIR, exist_ok=True) R50_WEIGHTS_SRC="https://sagemakercv.s3.us-west-2.amazonaws.com/weights/tensorflow" R50_WEIGHTS_TAR="tensorflow_resnet50.tar" R50_WEIGHTS="tensorflow_resnet50" # *** # # This paragraph will download everything. It takes around 30 minutes to complete. # + print("Downloading annotations") # !wget -O $COCO_DIR/$ANNOTATIONS_ZIP $COCO_URL/annotations/$ANNOTATIONS_ZIP # !unzip $COCO_DIR/$ANNOTATIONS_ZIP -d $COCO_DIR # !aws s3 cp --recursive $COCO_DIR/annotations $S3_DATA_LOCATION/annotations print("Downloading COCO training data") # !wget -O $COCO_DIR/$TRAIN_ZIP $COCO_URL/zips/$TRAIN_ZIP # train data has ~128000 images. Unzip is too slow, about 1.5 hours beceause of disk read and write speed on the EBS volume. # This technique is much faster because it grabs all the zip metadata at once, then uses threading to unzip multiple files at once. 
print("Unzipping COCO training data")
train_zip = zipfile.ZipFile(os.path.join(COCO_DIR, TRAIN_ZIP))
jpeg_files = [image.filename for image in train_zip.filelist if image.filename.endswith('.jpg')]
os.makedirs(os.path.join(COCO_DIR, 'train2017'))
with ThreadPoolExecutor() as executor:
    threads = list(tqdm(executor.map(lambda x: train_zip.extract(x, COCO_DIR), jpeg_files), total=len(jpeg_files)))

print("Downloading COCO validation data")
# !wget -O $COCO_DIR/$VAL_ZIP $COCO_URL/zips/$VAL_ZIP
# switch to also threading
# !unzip -q $COCO_DIR/$VAL_ZIP -d $COCO_DIR
val_images = [i for i in Path(os.path.join(COCO_DIR, 'val2017')).glob("*.jpg")]

# !apt-get -y update && apt install -y protobuf-compiler
# !cd sagemakercv/data/coco && ./process_coco_tfrecord.sh $COCO_DIR $TF_RECORD_DIR

tfrecord_train = list(Path(TF_RECORD_DIR).glob('train-*.tfrecord'))
tfrecord_val = list(Path(TF_RECORD_DIR).glob('val-*.tfrecord'))

s3fs = S3FileSystem()

print("Uploading training tfrecords to S3")
with ThreadPoolExecutor() as executor:
    threads = list(tqdm(executor.map(lambda record: s3fs.put(record.as_posix(),
                                                             os.path.join(S3_DATA_LOCATION, 'tfrecord', 'train2017', record.name)),
                                     tfrecord_train), total=len(tfrecord_train)))

print("Uploading validation tfrecords to S3")
with ThreadPoolExecutor() as executor:
    threads = list(tqdm(executor.map(lambda record: s3fs.put(record.as_posix(),
                                                             os.path.join(S3_DATA_LOCATION, 'tfrecord', 'val2017', record.name)),
                                     tfrecord_val), total=len(tfrecord_val)))

print("Downloading Resnet Weights")
# !wget -O $WEIGHTS_DIR/$R50_WEIGHTS_TAR $R50_WEIGHTS_SRC/$R50_WEIGHTS_TAR
# !tar -xf $WEIGHTS_DIR/$R50_WEIGHTS_TAR -C $WEIGHTS_DIR
s3fs.put(os.path.join(WEIGHTS_DIR, R50_WEIGHTS), S3_WEIGHTS_LOCATION, recursive=True)
print("Finished!")
# -

# ***
# ### Training on Studio
#
# Now that we have the data, we can get to training a Mask RCNN model to detect objects in the COCO dataset images.
#
# Since training on a single GPU can take days, we'll just train for a couple thousand steps, and run a single evaluation to make sure our model is at least starting to learn something. We'll train a full model on a larger cluster of GPUs in a SageMaker training job.
#
# The reason we first want to train in Studio is that we want to dig a bit into the SageMakerCV framework, and talk about the model architecture, since we expect many users will want to modify models for their own use cases.
#
# #### Mask RCNN
#
# First, just a very brief overview of Mask RCNN. If you would like a more in depth examination, we recommend taking a look at the [original paper](https://arxiv.org/abs/1703.06870), the [feature pyramid paper](https://arxiv.org/abs/1612.03144) which describes a popular architectural change we'll use in our model, and blog posts from [viso.ai](https://viso.ai/deep-learning/mask-r-cnn/), [tryo labs](https://tryolabs.com/blog/2018/01/18/faster-r-cnn-down-the-rabbit-hole-of-modern-object-detection/), [<NAME>](https://jonathan-hui.medium.com/image-segmentation-with-mask-r-cnn-ebe6d793272), and [<NAME>](https://lilianweng.github.io/lil-log/2017/12/31/object-recognition-for-dummies-part-3.html).
#
# Mask RCNN is a two stage object detection model that locates objects in images by placing bounding boxes around, and segmentation masks over, any object the model is trained to find. It also provides classifications for each object.
#
# <img src="../assets/traffic.png" style="width: 1200px">
#
# Mask RCNN is called a two stage model because it performs detection in two steps.
# The first identifies any objects in the image, versus background. The second stage determines the specific class of each object, and applies the segmentation mask. Below is an architectural diagram of the model. Let's walk through each step.
#
# <img src="../assets/mask_rcnn_arch.jpeg" style="width: 1200px">
# Credit: <NAME>
#
# The `Convolution Network` is often referred to as the model backbone. This is a pretrained image classification model, commonly ResNet, which has been trained on a large image classification dataset, like ImageNet. The classification layer is removed, and instead the backbone outputs a set of convolution feature maps. The idea is, the classification model learned to identify objects in the process of classifying images, and now we can use that information to build a more complex model that can find those objects in the image. We want to pretrain because training the backbone at the same time as training the object detector tends to be very unstable.
#
# One additional component that is sometimes added to the backbone is a `Feature Pyramid Network`. This takes the outputs of the backbone and combines them together into a new set of feature maps by performing both up and down convolutions. The idea is that the different sized feature maps will help the model detect objects of different sizes, and the feature pyramid helps further by allowing the different feature maps to share information with each other.
#
# The outputs of the feature pyramid are then passed to the `Region Proposal Network`, which is responsible for finding regions of the image that might contain an object (this is the first of the two stages). The RPN will output several hundred thousand regions, each with a probability of containing an object. We'll typically take the top few thousand most likely regions. Because these several thousand regions will usually have a lot of overlap, we perform [non-max suppression](https://towardsdatascience.com/non-maximum-suppression-nms-93ce178e177c), which removes regions with large areas of overlap. This gives us a set of `regions of interest`: regions of the image that we think might contain an object.
#
# Next, we use those regions to crop out the corresponding sections of the feature maps that came from the feature pyramid network using a technique called [ROI align](https://firiuza.medium.com/roi-pooling-vs-roi-align-65293ab741db).
#
# We pass our cropped feature maps to the `box head` which classifies each region into either a specific object category, or as background. It also refines the position of the bounding box. In Mask RCNN, we also pass the feature maps to a `mask head` which produces a segmentation mask over the object.
#
# #### SageMakerCV Internals
#
# An important feature of Mask RCNN is its multiple heads. One head constructs a bounding box, while another creates a mask. These are referred to as the `ROI heads`. It's common for users to extend this and other two stage models by adding their own ROI heads. For example, a keypoint head is common. Doing so means modifying SageMakerCV's internals, so let's talk about those for a second.
#
# The high level Mask RCNN model can be found in `amazon-sagemaker-cv/tensorflow/sagemakercv/detection/detectors/two_stage_detector.py`. If you trace through the `call` function, you'll see that the model first passes an image through the backbone, neck, then the RPN. The RPN layer also contains the non-max suppression step. The short cell below gives a tiny, standalone illustration of what non-max suppression does on a few hand-made boxes.
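# A tiny, standalone illustration of non-max suppression (an aside, not part of SageMakerCV itself):
# three heavily overlapping boxes and one separate box; NMS keeps only the highest scoring box of
# the overlapping group, plus the separate box. Boxes are in [y1, x1, y2, x2] format.

import tensorflow as tf

demo_boxes = tf.constant([[0.00, 0.00, 1.00, 1.00],
                          [0.05, 0.05, 1.00, 1.00],
                          [0.00, 0.00, 0.95, 0.95],
                          [2.00, 2.00, 3.00, 3.00]], dtype=tf.float32)
demo_scores = tf.constant([0.9, 0.75, 0.6, 0.8], dtype=tf.float32)

keep = tf.image.non_max_suppression(demo_boxes, demo_scores, max_output_size=4, iou_threshold=0.5)
print(keep.numpy())  # expected: [0 3], one box per group of overlapping proposals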
# The regions of interest are then passed to the ROI heads, where they are used to crop sections of the feature maps, which are then classified into object categories.
#
# Probably the most important feature to be aware of is the set of `build` imports at the top. Each section of the model has an associated build function `(build_backbone, build_neck, build_dense_head, build_roi_head)`, all of which are used in the `build_two_stage_detector` at the bottom of the file. These functions simplify building the model by letting us pass in a single configuration file for building all the different pieces.
#
# For example, if you open `amazon-sagemaker-cv/tensorflow/sagemakercv/detection/roi_heads/standard_roi_head.py`, you'll find the `build_standard_roi_head` function at the bottom. To add a new head, you would write a TensorFlow module with its own build function. The decorator at the top of the build function allows it to be called from the config file. The decorator `@HEADS.register("StandardRoIHead")` adds a dictionary entry so that when `StandardRoIHead` is in the config file, `build_standard_roi_head` gets called by `build_roi_head`. If, for example, you specify `CascadeRoIHead`, the associated builder for the cascade ROI head is used instead.
#
# Finally, a note about data loading. SageMakerCV uses an optimized TFRecord data format. The COCO dataloader can be found in `amazon-sagemaker-cv/tensorflow/sagemakercv/data/coco/dataloader.py`. It takes a file pattern in the form `data/coco/train2017/train*`, which will include all files that start with `train` in the dataset. You can use either a local directory or an S3 location `s3://my-bucket/my-data/coco/train2017/train*`. The dataloader will automatically switch between the two. The S3 functionality is especially useful for distributed training with large datasets, since it means you can train without waiting for your data to download.
#
# #### Setting Up Training
#
# Let's actually use some of these functions to train a model.
#
# Start by importing the default configuration file.

from configs import cfg

# ***
# We use the [yacs](https://github.com/rbgirshick/yacs) format for configuration files. If you want to see the entire config, run `print(cfg.dump())`, but this prints out a lot, so to avoid overwhelming you with too much information we'll just focus on the bits we want to change for this model.

# ***
# First, let's put in all the file directories for the data and weights we downloaded in the previous section, as well as an output directory for the model results.

# +
cfg.PATHS.TRAIN_FILE_PATTERN = os.path.join(TF_RECORD_DIR, "train*")
cfg.PATHS.VAL_FILE_PATTERN = os.path.join(TF_RECORD_DIR, "val*")
cfg.PATHS.WEIGHTS = os.path.join(WEIGHTS_DIR, R50_WEIGHTS, "resnet50.ckpt")
cfg.PATHS.VAL_ANNOTATIONS = os.path.join(COCO_DIR, "annotations", "instances_val2017.json")
cfg.PATHS.OUT_DIR = os.path.join(LOCAL_DATA_DIR, "output")

# create output dir if it doesn't exist
os.makedirs(cfg.PATHS.OUT_DIR, exist_ok=True)
# -

# ***
# This section specifies model details, including the type of model, and internal hyperparameters. We won't cover the details of all of these, but more information can be found in the blog posts listed above, as well as the original paper.
cfg.LOG_INTERVAL = 50 # Number of training steps between logging intervals
cfg.MODEL.DENSE.PRE_NMS_TOP_N_TRAIN = 2000 # Top regions of interest to select before NMS
cfg.MODEL.DENSE.POST_NMS_TOP_N_TRAIN = 1000 # Top regions of interest to select after NMS
cfg.MODEL.RCNN.ROI_HEAD = "StandardRoIHead" # ROI head with box and mask, if mask is set to true
cfg.MODEL.FRCNN.LOSS_TYPE = "giou"
cfg.MODEL.INCLUDE_MASK = True # include mask. switching this off runs Faster RCNN

# ***
# Next we set up the configuration for training, including the optimizer, hyperparameters, batch size, and training length. Batch size is global, so if you set a batch size of 64 across 8 GPUs, it will be a batch size of 8 per GPU. SageMakerCV currently supports the following optimizers: momentum SGD (stochastic gradient descent) and NovoGrad, and the following learning rate schedulers: stepwise and cosine decay. New, custom optimizers and schedulers can be added by modifying the `sagemakercv/training/builder.py` file.
#
# For training on Studio, we'll just run for a couple thousand steps. We'll be using SageMaker training instances for the full training on multiple GPUs.

cfg.INPUT.TRAIN_BATCH_SIZE = 4 # Training batch size
cfg.INPUT.EVAL_BATCH_SIZE = 8 # Evaluation batch size
cfg.SOLVER.SCHEDULE = "CosineDecay" # Learning rate schedule, either CosineDecay or PiecewiseConstantDecay
cfg.SOLVER.OPTIMIZER = "NovoGrad" # Optimizer type NovoGrad or Momentum
cfg.SOLVER.LR = .002 # Base learning rate after warmup
cfg.SOLVER.BETA_1 = 0.9 # NovoGrad beta 1 value
cfg.SOLVER.BETA_2 = 0.5 # NovoGrad beta 2 value
cfg.SOLVER.MAX_ITERS = 2500 # Total training steps
cfg.SOLVER.WARMUP_STEPS = 250 # warmup steps
cfg.SOLVER.XLA = True # Train with XLA
cfg.SOLVER.FP16 = True # Train with mixed precision enabled
cfg.SOLVER.TF32 = False # Train with TF32 data type enabled, only available on Ampere GPUs and TF 2.4 and up

# Finally, SageMakerCV includes a number of training hooks. These work similarly to Keras callbacks by adding some functionality to training. We use our own training hooks and runner class, which improves performance beyond the standard Keras `model.fit()` training strategy.
#
# Here we include three hooks. The `CheckpointHook` loads the backbone weights, and saves a model checkpoint after each epoch. The `IterTimerHook` and `TextLoggerHook` print helpful training progress information out to CloudWatch during training.

cfg.HOOKS=["CheckpointHook", "IterTimerHook", "TextLoggerHook"]

# Let's save our new configuration file in case we want to use it in future training.

import yaml
from contextlib import redirect_stdout

local_config_file = f"configs/local-config-studio.yaml"

with open(local_config_file, 'w') as outfile:
    with redirect_stdout(outfile):
        print(cfg.dump())

# A saved model configuration can be loaded by first running `from configs import cfg` and then merging our saved file with `merge_from_file`

cfg.merge_from_file(local_config_file)

# And now we can build and train our model. Import the build functions so we can build the pieces directly from our configuration file.

from sagemakercv.detection import build_detector
from sagemakercv.training import build_optimizer, build_scheduler, build_trainer
from sagemakercv.data import build_dataset
from sagemakercv.utils.dist_utils import get_dist_info, MPI_size, is_sm_dist
from sagemakercv.utils.runner import Runner, build_hooks
import tensorflow as tf

# And include some standard TensorFlow configuration setup so our model runs in mixed precision with XLA enabled.
rank, local_rank, size, local_size = get_dist_info() devices = tf.config.list_physical_devices('GPU') for device in devices: tf.config.experimental.set_memory_growth(device, True) tf.config.set_visible_devices([devices[local_rank]], 'GPU') logical_devices = tf.config.list_logical_devices('GPU') tf.config.optimizer.set_experimental_options({"auto_mixed_precision": cfg.SOLVER.FP16}) tf.config.optimizer.set_jit(cfg.SOLVER.XLA) if int(tf.__version__.split('.')[1])>=4: tf.config.experimental.enable_tensor_float_32_execution(cfg.SOLVER.TF32) # Build the dataset and create an iterable object from it. dataset = iter(build_dataset(cfg)) # Build the detector model. detector = build_detector(cfg) # Pass a single observation through the model so the shapes are set. This is necessary to load the backbone weights. features, labels = next(dataset) result = detector(features, training=False) # Build the model optimizer. This will also build our learning rate schedule. optimizer = build_optimizer(cfg) # The trainer contains our training and evaluation step, and sets up our distributed training based on if we're using Horovod or SMDDP (more on this later). trainer = build_trainer(cfg, detector, optimizer, dist='smd' if is_sm_dist() else 'hvd') # Finally, the runner will manage our training and run our training hooks. This serves a similar role to training with Keras, but provides increased flexibility and training performance. runner = Runner(trainer, cfg) hooks = build_hooks(cfg) for hook in hooks: runner.register_hook(hook) # Run training for 2500 steps. This will take about 30 minutes. runner.run(dataset) # So now we have a partially trained model. Let's go ahead and try visualizing the results. You'll notice it picks up common categories (such as people) better at this point. The images are randomly picked from the training data, so it might take a few tries to get an image where the model picks up objects at this point in training. from sagemakercv.utils.visualization import build_image, restore_image from sagemakercv.data.coco.coco_labels import coco_categories import matplotlib.pyplot as plt features, labels = next(dataset) result = detector(features, training=False) image_num = 0 # image number within the batch # We first restore the original image, then extract the boxes and labels from the results. image = restore_image(result['images'][image_num], features['image_info'][image_num]) # converts the image back to its original shape and color boxes = result['detection_boxes'][image_num] classes = result['detection_classes'][image_num] scores = result['detection_scores'][image_num] # Generate an image with the boxes and labels mapped onto it. The threshold limits the number of boxes to those were the model is at least this confident in the class. detection_image = build_image(image, boxes, scores, classes, coco_categories, threshold=0.8) plt.figure(figsize = (15, 15)) plt.imshow(detection_image) # Great! So far you've built a partially trained model locally on Studio. For many applications, this might be enough. If all you need is to train a model on a small dataset, you can likely do everything you need with what we've covered so far. # # On the other hand, if you need to train a model on many GBs or even TBs of data, and don't want to wait weeks for it to finish, you'll need to run a distributed training job across multiple GPUs, or even multiple nodes. With SageMaker training jobs you can train on as many as 512 [A100 GPUs](https://www.nvidia.com/en-us/data-center/a100/). 
# We won't go quite that far, but we'll show you how.
#
# The section below is also replicated in the `SageMaker.ipynb` notebook for future training once all the above setup is complete.
#
# Before we get started, a few notes about how SageMaker training instances work. SageMaker takes care of a lot of setup for you, but it's important to understand a little of what's happening under the hood so you can customize training to your own needs.
#
# First we're going to look at a toy estimator to explain what's happening:
#
# ```
# from sagemaker import get_execution_role
# from sagemaker.tensorflow import TensorFlow
#
# estimator = TensorFlow(
#                 entry_point='train.py',
#                 source_dir='.',
#                 py_version='py37',
#                 framework_version='2.4.1',
#                 role=get_execution_role(),
#                 instance_count=4,
#                 instance_type='ml.p4d.24xlarge',
#                 distribution=distribution,
#                 output_path='s3://my-bucket/my-output/',
#                 checkpoint_s3_uri='s3://my-bucket/my-checkpoints/',
#                 model_dir='s3://my-bucket/my-model/',
#                 hyperparameters={'config': 'my-config.yaml'},
#                 volume_size=500,
#                 code_location='s3://my-bucket/my-code/',
#             )
# ```
#
# The estimator forms the basic configuration of your training job.
#
# SageMaker will first launch `instance_count=4` `instance_type=ml.p4d.24xlarge` instances. The `role` is an IAM role that SageMaker will use to launch instances on your behalf. SageMaker includes a `get_execution_role` function which grabs the execution role of your current instance. Each instance will have a `volume_size=500` EBS volume attached for your model and data. On `ml.p4d.24xlarge` and `ml.p3dn.24xlarge` instance types, SageMaker will automatically set up the [Elastic Fabric Adapter](https://aws.amazon.com/hpc/efa/). EFA provides up to 400 Gbps communication between your training nodes, as well as [GPU Direct RDMA](https://aws.amazon.com/about-aws/whats-new/2020/11/efa-supports-nvidia-gpudirect-rdma/) on `ml.p4d.24xlarge`, which allows your GPUs to bypass the host and communicate directly with each other across nodes.
#
# Next, SageMaker will copy all the contents of `source_dir='.'` first to the `code_location='s3://my-bucket/my-code/'` S3 location, then to each of your instances. One common mistake is to leave large files or data in this directory or its subdirectories. This will slow down your launch times, or can even cause the launch to hang. Make sure to keep your working data and model artifacts elsewhere on your Studio instance so you don't accidentally copy them to your training instance. You should instead use `Channels` to copy data and model artifacts, which we'll cover shortly.
#
# SageMaker will then download the training Docker image to all your instances. Which container you download is determined by `py_version='py37'` and `framework_version='2.4.1'`. You can also use your own [custom Docker image](https://aws.amazon.com/blogs/machine-learning/bringing-your-own-custom-container-image-to-amazon-sagemaker-studio-notebooks/) by specifying an ECR address with the `image_uri` option. SageMakerCV currently works with TensorFlow versions 2.3-2.5.
#
# Before starting training, SageMaker will check your source directory for a `setup.py` file, and install it if one is present. Then SageMaker will launch training via `entry_point='train.py'`. Anything in `hyperparameters={'config': 'my-config.yaml'}` will be passed to the training script as a command line argument (i.e. `python train.py --config my-config.yaml`). The distribution will determine what form of distributed training to launch.
# This will be covered in more detail later.
#
# During training, anything written to `/opt/ml/checkpoints` on your training instances will be synced to `checkpoint_s3_uri='s3://my-bucket/my-checkpoints/'` at the same time. This can be useful for checkpointing a model you might want to restart later, or for writing TensorBoard logs to monitor your training.
#
# When training completes, you can write your model artifacts to `/opt/ml/model` and they will be saved to `model_dir='s3://my-bucket/my-model/'`. Another option is to also write model artifacts to your checkpoint location.
#
# Training logs, and any failure messages, will be written to `/opt/ml/output` and saved to `output_path='s3://my-bucket/my-output/'`.

from sagemaker import get_execution_role
from sagemaker.tensorflow import TensorFlow
from datetime import datetime

# First we need to set some names. You want `AWS_DEFAULT_REGION` to be the same region as the S3 bucket you created earlier, to ensure your training jobs are reading from nearby S3 buckets.
#
# Next, set a `user_id`. This is just for naming your training job so it's easier to find later. This can be anything you like. We also get the current date and time to make organizing training jobs a little easier.

# +
# Keep the training job in the same region as your S3 bucket. You don't want to launch a training
# job in one region that reads from an S3 bucket in another.
os.environ['AWS_DEFAULT_REGION'] = region # This is the region we set at the beginning, when creating the S3 bucket for our data

# this is all for naming
user_id="jbsnyder-smcv-tutorial" # This is used for naming your training job, and organizing your results on S3. It can be anything you like.
date_str=datetime.now().strftime("%d-%m-%Y") # use the date and time to keep track of training jobs and organize results in S3
time_str=datetime.now().strftime("%d-%m-%Y-%H-%M-%S")
# -

# For instance type, we'll use an `ml.p4d.24xlarge`. We recommend this instance type for large training. It includes the latest A100 Nvidia GPUs, which can train several times faster than the previous generation. If you would rather train part way on smaller instances, `ml.p3.2xlarge, ml.p3.8xlarge, ml.p3.16xlarge, ml.p3dn.24xlarge, ml.g4dn.12xlarge` are all good options. In particular, if you're looking for a low cost way to try a short distributed training, but aren't worried about the model fully converging, we recommend the `ml.g4dn.12xlarge` which uses 4 Nvidia T4 GPUs per node.
#
# `s3_location` will be the base S3 storage location we used earlier for the COCO data. For `role` we get the execution role from our Studio instance. For `source_dir` we use the current directory. Again, make sure you haven't accidentally written any large files to this directory.

# specify training type, s3 src and nodes
instance_type="ml.p4d.24xlarge" # This can be any of 'ml.p3dn.24xlarge', 'ml.p4d.24xlarge', 'ml.p3.16xlarge', 'ml.p3.8xlarge', 'ml.p3.2xlarge', 'ml.g4dn.12xlarge'
nodes=4 # number of training nodes
s3_location=os.path.join("s3://", S3_BUCKET, S3_DIR)
role=get_execution_role() # give SageMaker permission to launch nodes on our behalf
source_dir='.'
entry_point='train.py'

# ***
#
# Let's modify our previous training configuration for multinode. We don't need to change much. We'll increase the batch size since we have more and larger GPUs. For A100 GPUs a batch size of 12 per GPU works well. For V100 and T4 GPUs, a batch size of 6 per GPU is recommended. Make sure to lower the learning rate and increase your number of training steps if you decrease the batch size.
# For example, if you want to train on 2 `ml.g4dn.12xlarge` instances, you'll have 8 T4 GPUs. A batch size of `cfg.INPUT.TRAIN_BATCH_SIZE = 32`, with inference batch size of `cfg.INPUT.EVAL_BATCH_SIZE = 16`, learning rate of `cfg.SOLVER.LR = .008`, and training steps of `cfg.SOLVER.MAX_ITERS = 25000` is probably about right.

from configs import cfg

cfg.LOG_INTERVAL = 50 # Number of training steps between logging intervals
cfg.MODEL.DENSE.PRE_NMS_TOP_N_TRAIN = 2000 # Top regions of interest to select before NMS
cfg.MODEL.DENSE.POST_NMS_TOP_N_TRAIN = 1000 # Top regions of interest to select after NMS
cfg.MODEL.RCNN.ROI_HEAD = "StandardRoIHead"
cfg.MODEL.FRCNN.LOSS_TYPE = "giou"
cfg.MODEL.FRCNN.LABEL_SMOOTHING = 0.1 # label smoothing for box head
cfg.INPUT.TRAIN_BATCH_SIZE = 256 # Training batch size
cfg.INPUT.EVAL_BATCH_SIZE = 128 # Evaluation batch size
cfg.SOLVER.SCHEDULE = "CosineDecay" # Learning rate schedule, either CosineDecay or PiecewiseConstantDecay
cfg.SOLVER.OPTIMIZER = "NovoGrad" # Optimizer type NovoGrad or Momentum
cfg.SOLVER.LR = .042 # Base learning rate after warmup
cfg.SOLVER.BETA_1 = 0.9 # NovoGrad beta 1 value
cfg.SOLVER.BETA_2 = 0.3 # NovoGrad beta 2 value
cfg.SOLVER.ALPHA = 0.001 # scheduler final alpha
cfg.SOLVER.WEIGHT_DECAY = 0.001 # weight decay
cfg.SOLVER.MAX_ITERS = 5500 # Total training steps
cfg.SOLVER.WARMUP_STEPS = 500 # warmup steps
cfg.SOLVER.XLA = True # Train with XLA
cfg.SOLVER.FP16 = True # Train with mixed precision enabled
cfg.SOLVER.TF32 = True # Train with TF32 data type enabled, only available on Ampere GPUs and TF 2.4 and up
cfg.SOLVER.EVAL_EPOCH_EVAL = False # Only run eval at end
cfg.HOOKS=["CheckpointHook", "IterTimerHook", "TextLoggerHook", "CocoEvaluator"]

# ***
# Earlier we mentioned the `distribution` strategy in SageMaker. Distributed training can be either multi GPU single node (i.e. training on 8 GPUs in a single ml.p4d.24xlarge) or multi GPU multi node (i.e. training on 32 GPUs across 4 ml.p4d.24xlarges). For TensorFlow, SageMakerCV uses either Horovod or [SageMaker Distributed Data Parallel](https://docs.aws.amazon.com/sagemaker/latest/dg/data-parallel.html) (SMDDP). For single node multi GPU, or multi node on small instances, we recommend Horovod. For multinode on large instance types, SMDDP is built to fully utilize AWS network topology, and EFA, providing improved scaling efficiency.
#
# To enable SMDDP, set `distribution = { "smdistributed": { "dataparallel": { "enabled": True } } }`. SageMakerCV already has SMDDP integrated. To implement SMDDP for your own models, follow [these instructions](https://docs.aws.amazon.com/sagemaker/latest/dg/data-parallel-intro.html). SMDDP will launch training from the first node in your cluster using [MPI](https://www.open-mpi.org/).
#
# For Horovod based training, we can call MPI directly by setting `distribution = {"mpi": {"enabled": True,}}`.

if nodes>1 and instance_type in ['ml.p3dn.24xlarge', 'ml.p4d.24xlarge', 'ml.p3.16xlarge']:
    distribution = { "smdistributed": { "dataparallel": { "enabled": True } } }
else:
    distribution = {"mpi": {"enabled": True,}}

# ***
# We'll set a job name based on the user name and time. We'll then set output directories on S3 using the date and job name.
#
# For this training, we'll use the same S3 location for all 3 SageMaker model outputs: `/opt/ml/checkpoints`, `/opt/ml/model`, and `/opt/ml/output`.
job_name = f'{user_id}-{time_str}' # Set the job name to user id and the current time
output_path = os.path.join(s3_location, "sagemaker-output", date_str, job_name) # Organizes results on S3 by date and job name
code_location = os.path.join(s3_location, "sagemaker-code", date_str, job_name)

# ***
# Next we need to add our data sources to our configuration file, but first let's talk a little more about how SageMaker gets data to your instance.
#
# The most straightforward way to get your data is using "Channels." These are S3 locations you specify in a dictionary when you launch a training job. For example, let's say you launch a training job with:
#
# ```
# channels = {'train': 's3://my-bucket/data/train/',
#             'test': 's3://my-bucket/data/test/',
#             'weights': 's3://my-bucket/data/weights/',
#             'dave': 's3://my-bucket/data/daves_weird_data/'}
#
# estimator.fit(channels)
# ```
#
# At the start of training, SageMaker will create a set of corresponding directories on each training node:
#
# ```
# /opt/ml/input/data/train/
# /opt/ml/input/data/test/
# /opt/ml/input/data/weights/
# /opt/ml/input/data/dave/
# ```
#
# SageMaker will then copy all the contents of the corresponding S3 locations to these directories, which you can then access in training.
#
# One downside of setting up channels like this is that it requires all the data to be downloaded to your instance at the start of training, which can delay the training launch if you're dealing with a large dataset.
#
# We have two ways to speed up launch. The first is [Fast File Mode](https://aws.amazon.com/about-aws/whats-new/2021/10/amazon-sagemaker-fast-file-mode/), which downloads data from S3 as it's requested by the training model, speeding up your launch time. You can use fast file mode by specifying `TrainingInputMode='FastFile'` in your SageMaker estimator configuration.
#
# If you're dealing with really large datasets, you might prefer to instead continuously stream data from S3. Luckily, this feature is already supported in TensorFlow and SageMakerCV. If you provide the dataset builder with an S3 file pattern, it will stream TFRecords from S3 instead of reading them locally.
#
# In our case, we'll use a mix of channels and streaming from S3. We'll download the smaller pieces at the start of training (the validation data, pretrained weights, and image annotations), and we'll stream our training data directly from S3 during training.
#
# First, we set up our training channels. These are the locations where we earlier uploaded our COCO data, annotations, and weights.

channels = {'val2017': os.path.join(s3_location, 'data', 'coco', 'tfrecord', 'val2017'),
            'annotations': os.path.join(s3_location, 'data', 'coco', 'annotations'),
            'weights': os.path.join(s3_location, 'data', 'weights', 'resnet')}

# Now we set up the data sources in our configuration. The train file pattern takes an S3 location. The others are all set to the corresponding directory for each channel. We also set the output directory to be the SageMaker checkpoint directory, which will sync to our S3 output location.
CHANNELS_DIR='/opt/ml/input/data/' # on node cfg.PATHS.TRAIN_FILE_PATTERN = os.path.join(s3_location, 'data', 'coco', 'tfrecord', 'train2017', 'train*') cfg.PATHS.VAL_FILE_PATTERN = os.path.join(CHANNELS_DIR, "val2017", "val*") cfg.PATHS.WEIGHTS = os.path.join(CHANNELS_DIR, "weights", "resnet50.ckpt") cfg.PATHS.VAL_ANNOTATIONS = os.path.join(CHANNELS_DIR, "annotations", "instances_val2017.json") cfg.PATHS.OUT_DIR = '/opt/ml/checkpoints' # Save the configuration file. dist_config_file = f"configs/dist-training-config.yaml" with open(dist_config_file, 'w') as outfile: with redirect_stdout(outfile): print(cfg.dump()) # Set the config file as a hyperparameter so it will be passed a command line arg when training launches. hyperparameters = {"config": dist_config_file} # And now we can launch training. With 4 P4d instances, this takes about an hour. This section will also print a lot of output logs. By setting `wait=False` you can avoid printing logs in the notebook. This setting will just launch the job then return, and is useful for when you want to launch several jobs at the same time. You can then montior each job from the [SageMaker Training Console](https://us-west-2.console.aws.amazon.com/sagemaker). estimator = TensorFlow( entry_point=entry_point, source_dir=source_dir, py_version='py37', framework_version='2.4.1', role=role, instance_count=nodes, instance_type=instance_type, distribution=distribution, output_path=output_path, checkpoint_s3_uri=output_path, model_dir=output_path, hyperparameters=hyperparameters, volume_size=500, disable_profiler=True, debugger_hook_config=False, code_location=code_location, ) estimator.fit(channels, wait=True, job_name=job_name) # *** # ### Visualizing Results # # And there you have it, a fully trained Mask RCNN model in about an hour. Now let's see how our model does on prediction by actually visualizing the output. # # Our model is stored at the S3 location we gave to the training job in `output_path`. The checkpointer hook creates a `trained_model` directory and stores the final checkpoint there. We'll need to grab the results and store them on our studio instance so we can check performance, and visualize the output. s3fs = S3FileSystem() model_loc = os.path.join(estimator.output_path, 'trained_model', 'model.h5') # Copy the model from S3 to our Studio instance. s3fs.get(model_loc, model_loc.split('/')[-1]) # We can load the trained model weights into the detector model we created earlier for the local training. detector.load_weights(model_loc.split('/')[-1]) # Like we did for the local model, let's grab a random image from the dataset and visualize the model's predictions. features, labels = next(dataset) result = detector(features, training=False) image_num = 3 # image number within the batch image = restore_image(result['images'][image_num], features['image_info'][image_num]) # converts the image back to its original shape and color boxes = result['detection_boxes'][image_num] classes = result['detection_classes'][image_num] scores = result['detection_scores'][image_num] detection_image = build_image(image, boxes, scores, classes, coco_categories, threshold=0.8) plt.figure(figsize = (15, 15)) plt.imshow(detection_image) # #### Conclusion # # In this notebook, we've walked through the entire process of training Mask RCNN on SageMaker. We've implemented several of SageMaker's more advanced features, such as distributed training, EFA, and streaming data directly from S3. 
From here you can use the provided template datasets to train on your own data, or modify the framework with your own object detection model. # # When you're done, make sure to check that all of your SageMaker training jobs have stopped by checking the [SageMaker Training Console](https://us-west-2.console.aws.amazon.com/sagemaker). Also check that you've stopped any Studio instance you have running by selecting the session monitor on the left (the circle with a square in it), and clicking the power button next to any running instances. Your files will still be saved on the Studio EBS volume. # # <img src="../assets/running_instances.png" style="width: 600px">
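# As a final note, if you launch jobs with `wait=False`, you can also check their status programmatically rather than only through the console. A minimal sketch using boto3's `describe_training_job` (this assumes the `job_name` defined earlier in this notebook and valid AWS credentials in your Studio session):

# +
import boto3

sm_client = boto3.client('sagemaker')
job_description = sm_client.describe_training_job(TrainingJobName=job_name)
print(job_description['TrainingJobStatus'])   # e.g. 'InProgress', 'Completed', 'Stopped', 'Failed'
print(job_description['SecondaryStatus'])     # finer-grained state, e.g. 'Downloading', 'Training'
# -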
tensorflow/Tutorial.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # <h1> Time series prediction using RNNs, with TensorFlow and Cloud ML Engine </h1> # # This notebook illustrates: # <ol> # <li> Creating a Recurrent Neural Network in TensorFlow # <li> Creating a Custom Estimator in tf.contrib.learn # <li> Training on Cloud ML Engine # </ol> # # <p> # # <h3> Simulate some time-series data </h3> # # Essentially a set of sinusoids with random amplitudes and frequencies. import tensorflow as tf print tf.__version__ # + import numpy as np import tensorflow as tf import seaborn as sns import pandas as pd SEQ_LEN = 10 def create_time_series(): freq = (np.random.random()*0.5) + 0.1 # 0.1 to 0.6 ampl = np.random.random() + 0.5 # 0.5 to 1.5 x = np.sin(np.arange(0,SEQ_LEN) * freq) * ampl return x for i in xrange(0, 5): sns.tsplot( create_time_series() ); # 5 series # + def to_csv(filename, N): with open(filename, 'w') as ofp: for lineno in xrange(0, N): seq = create_time_series() line = ",".join(map(str, seq)) ofp.write(line + '\n') to_csv('train.csv', 1000) # 1000 sequences to_csv('valid.csv', 50) # - # !head -5 train.csv valid.csv # <h2> RNN </h2> # # For more info, see: # <ol> # <li> http://colah.github.io/posts/2015-08-Understanding-LSTMs/ for the theory # <li> https://www.tensorflow.org/tutorials/recurrent for explanations # <li> https://github.com/tensorflow/models/tree/master/tutorials/rnn/ptb for sample code # </ol> # # Here, we are trying to predict from 9 values of a timeseries, the tenth value. # # <p> # # <h3> Imports </h3> # # Several tensorflow packages and shutil import tensorflow as tf import shutil import tensorflow.contrib.learn as tflearn import tensorflow.contrib.layers as tflayers from tensorflow.contrib.learn.python.learn import learn_runner import tensorflow.contrib.metrics as metrics import tensorflow.contrib.rnn as rnn # <h3> Input Fn to read CSV </h3> # # Our CSV file structure is quite simple -- a bunch of floating point numbers (note the type of DEFAULTS). We ask for the data to be read BATCH_SIZE sequences at a time. The Estimator API in tf.contrib.learn wants the features returned as a dict. We'll just call this timeseries column 'rawdata'. # <p> # Our CSV file sequences consist of 10 numbers. We'll assume that 9 of them are inputs and we need to predict the last one. DEFAULTS = [[0.0] for x in xrange(0, SEQ_LEN)] BATCH_SIZE = 20 TIMESERIES_COL = 'rawdata' N_OUTPUTS = 1 # in each sequence, 1-8 are features, and 10 is label N_INPUTS = SEQ_LEN - N_OUTPUTS # Reading data using the Estimator API in tf.learn requires an input_fn. This input_fn needs to return a dict of features and the corresponding labels. # <p> # So, we read the CSV file. The Tensor format here will be batchsize x 1 -- entire line. We then decode the CSV. At this point, all_data will contain a list of Tensors. Each tensor has a shape batchsize x 1. There will be 10 of these tensors, since SEQ_LEN is 10. # <p> # We split these 10 into 9 and 1 (N_OUTPUTS is 1). Put the 9 into a dict, call it features. The other is the ground truth, so labels. # read data and convert to needed format def read_dataset(filename, mode=tf.contrib.learn.ModeKeys.TRAIN): def _input_fn(): num_epochs = 100 if mode == tf.contrib.learn.ModeKeys.TRAIN else 1 # could be a path to one file or a file pattern. 
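    # The next few lines build the (TF 1.x) queue-based input pipeline:
    #   1. match_filenames_once expands the file pattern into a list of filenames
    #   2. string_input_producer queues those filenames for `num_epochs` passes (shuffled)
    #   3. TextLineReader reads up to BATCH_SIZE raw CSV lines per call
    #   4. decode_csv turns each line into SEQ_LEN float tensors, which are then split
    #      into the first N_INPUTS values (features) and the last N_OUTPUTS value (label)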
input_file_names = tf.train.match_filenames_once(filename) filename_queue = tf.train.string_input_producer( input_file_names, num_epochs=num_epochs, shuffle=True) reader = tf.TextLineReader() _, value = reader.read_up_to(filename_queue, num_records=BATCH_SIZE) value_column = tf.expand_dims(value, -1) print 'readcsv={}'.format(value_column) # all_data is a list of tensors all_data = tf.decode_csv(value_column, record_defaults=DEFAULTS) inputs = all_data[:len(all_data)-N_OUTPUTS] # first few values label = all_data[len(all_data)-N_OUTPUTS : ] # last few values # from list of tensors to tensor with one more dimension inputs = tf.concat(inputs, axis=1) label = tf.concat(label, axis=1) print 'inputs={}'.format(inputs) return {TIMESERIES_COL: inputs}, label # dict of features, label return _input_fn # <h3> Define RNN </h3> # # A recurrent neural network consists of possibly stacked LSTM cells. # <p> # The RNN has one output per input, so it will have 9 output cells. We use only the last output cell, but rather than use it directly, we do a matrix multiplication of that cell by a set of weights to get the actual predictions. This allows for a degree of scaling between inputs and predictions if necessary (we don't really need it in this problem). # <p> # Finally, to supply a model function to the Estimator API, you need to return a ModelFnOps. The rest of the function creates the necessary objects. # + LSTM_SIZE = 3 # number of hidden units in each of the LSTM cells # create the inference model def simple_rnn(features, targets, mode): # 0. Reformat input shape to become a sequence x = tf.split(features[TIMESERIES_COL], N_INPUTS, 1) #print 'x={}'.format(x) # 1. configure the RNN lstm_cell = rnn.BasicLSTMCell(LSTM_SIZE, forget_bias=1.0) outputs, _ = rnn.static_rnn(lstm_cell, x, dtype=tf.float32) # slice to keep only the last cell of the RNN outputs = outputs[-1] #print 'last outputs={}'.format(outputs) # output is result of linear activation of last layer of RNN weight = tf.Variable(tf.random_normal([LSTM_SIZE, N_OUTPUTS])) bias = tf.Variable(tf.random_normal([N_OUTPUTS])) predictions = tf.matmul(outputs, weight) + bias # 2. loss function, training/eval ops if mode == tf.contrib.learn.ModeKeys.TRAIN or mode == tf.contrib.learn.ModeKeys.EVAL: loss = tf.losses.mean_squared_error(targets, predictions) train_op = tf.contrib.layers.optimize_loss( loss=loss, global_step=tf.contrib.framework.get_global_step(), learning_rate=0.01, optimizer="SGD") eval_metric_ops = { "rmse": tf.metrics.root_mean_squared_error(targets, predictions) } else: loss = None train_op = None eval_metric_ops = None # 3. Create predictions predictions_dict = {"predicted": predictions} # 4. return ModelFnOps return tflearn.ModelFnOps( mode=mode, predictions=predictions_dict, loss=loss, train_op=train_op, eval_metric_ops=eval_metric_ops) # - # <h3> Experiment </h3> # # Distributed training is launched using an Experiment. The key line here is that we use tflearn.Estimator rather than, say, tflearn.DNNRegressor. This allows us to provide a model_fn, which will be our RNN defined above. Note also that we specify a serving_input_fn -- this is how we parse the input data provided to us at prediction time.
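# Before wiring up the Experiment, here is a purely illustrative sanity check of the shapes coming out of `simple_rnn` above (a plain numpy sketch, assuming BATCH_SIZE=20, N_INPUTS=9, LSTM_SIZE=3 and N_OUTPUTS=1 as defined earlier):
#
# ```python
# import numpy as np
# # tf.split produces a list of N_INPUTS tensors, each of shape (batch, 1);
# # static_rnn returns one output per time step, and we keep only the last one:
# last_output = np.zeros((20, 3))        # (BATCH_SIZE, LSTM_SIZE)
# weight = np.zeros((3, 1))              # (LSTM_SIZE, N_OUTPUTS)
# bias = np.zeros(1)                     # (N_OUTPUTS,)
# predictions = last_output.dot(weight) + bias
# print(predictions.shape)               # (20, 1): one prediction per sequence in the batch
# ```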
# + def get_train(): return read_dataset('train.csv', mode=tf.contrib.learn.ModeKeys.TRAIN) def get_valid(): return read_dataset('valid.csv', mode=tf.contrib.learn.ModeKeys.EVAL) def serving_input_fn(): feature_placeholders = { TIMESERIES_COL: tf.placeholder(tf.float32, [None, N_INPUTS]) } features = { key: tf.expand_dims(tensor, -1) for key, tensor in feature_placeholders.items() } features[TIMESERIES_COL] = tf.squeeze(features[TIMESERIES_COL], axis=[2]) print 'serving: features={}'.format(features[TIMESERIES_COL]) return tflearn.utils.input_fn_utils.InputFnOps( features, None, feature_placeholders ) from tensorflow.contrib.learn.python.learn.utils import saved_model_export_utils def experiment_fn(output_dir): # run experiment return tflearn.Experiment( tflearn.Estimator(model_fn=simple_rnn, model_dir=output_dir), train_input_fn=get_train(), eval_input_fn=get_valid(), eval_metrics={ 'rmse': tflearn.MetricSpec( metric_fn=metrics.streaming_root_mean_squared_error ) }, export_strategies=[saved_model_export_utils.make_export_strategy( serving_input_fn, default_output_alternative_key=None, exports_to_keep=1 )] ) shutil.rmtree('outputdir', ignore_errors=True) # start fresh each time learn_runner.run(experiment_fn, 'outputdir') # - # <h3> Standalone Python module </h3> # # To train this on Cloud ML Engine, we take the code in this notebook, make an standalone Python module. # %bash # run module as-is REPO=$(pwd) # echo $REPO # rm -rf outputdir export PYTHONPATH=${PYTHONPATH}:${REPO}/simplernn python -m trainer.task \ --train_data_paths="${REPO}/train.csv*" \ --eval_data_paths="${REPO}/valid.csv*" \ --output_dir=${REPO}/outputdir \ --job-dir=./tmp # Try out online prediction. This is how the REST API will work after you train on Cloud ML Engine # %writefile test.json {"rawdata": [0,0.214,0.406,0.558,0.655,0.687,0.65,0.549,0.393]} # %bash MODEL_DIR=$(ls ./outputdir/export/Servo/) gcloud ml-engine local predict --model-dir=./outputdir/export/Servo/$MODEL_DIR --json-instances=test.json # <h3> Cloud ML Engine </h3> # # Now to train on Cloud ML Engine. # %bash # run module on Cloud ML Engine REPO=$(pwd) BUCKET=asl-ml-immersion-temp # CHANGE AS NEEDED OUTDIR=gs://${BUCKET}/simplernn/model_trained JOBNAME=simplernn_$(date -u +%y%m%d_%H%M%S) REGION=us-central1 gsutil -m rm -rf $OUTDIR gcloud ml-engine jobs submit training $JOBNAME \ --region=$REGION \ --module-name=trainer.task \ --package-path=${REPO}/simplernn/trainer \ --job-dir=$OUTDIR \ --staging-bucket=gs://$BUCKET \ --scale-tier=BASIC \ --runtime-version=1.2 \ -- \ --train_data_paths="gs://${BUCKET}/train.csv*" \ --eval_data_paths="gs://${BUCKET}/valid.csv*" \ --output_dir=$OUTDIR \ --num_epochs=100 # <h2> Variant: long sequence </h2> # # To create short sequences from a very long sequence. # + import tensorflow as tf import numpy as np def breakup(sess, x, lookback_len): N = sess.run(tf.size(x)) windows = [tf.slice(x, [b], [lookback_len]) for b in xrange(0, N-lookback_len)] windows = tf.stack(windows) return windows x = tf.constant(np.arange(1,11, dtype=np.float32)) with tf.Session() as sess: print 'input=', x.eval() seqx = breakup(sess, x, 5) print 'output=', seqx.eval() # - # Copyright 2017 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
courses/machine_learning/deepdive/05_artandscience/d_customestimator.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # The Effect of Scrambling Images on Training CNN-Based Classifiers # # Contents: # # 1. [Outline](#outline) # 2. [MNIST Dataset](#mnist_ds) # 3. [Fashion-MNIST Dataset](#fashion_mnist_ds) # 4. [CIFAR-10 Dataset](#cifar10_ds) # # <a id='outline'></a> # # Outline # # Here we explore the effect of a random scrambling of the input image (shuffling the pixels) on classification in three tasks: # - MNIST # - Fashion-MNIST # - CIFAR10 # # We expect no effect on fully-connected networks. For CNN-based networks, this is expected to significantly interfere with the network's ability to learn. An interesting question is whether the classification ability of a CNN-based network can be recovered by making the network deeper and narrower at the same time to increase the receptive fields. # + import sys import os import tensorflow as tf import matplotlib.pyplot as plt import numpy as np sys.path.append( os.path.dirname(os.getcwd()) ) from utilities.tile_image_plot_utilities import\ custom_tile_image_plot,\ custom_tile_plot_with_inference_hists from utilities.generator_utilities import ScrambledImageDataGenerator # - # <a id='mnist_ds'></a> # <br><br><br> # # ------ # # MNIST Dataset # + # Get test and train features and labels for the MNIST dataset: mnist = tf.keras.datasets.mnist mnist_train, mnist_test = mnist.load_data() # Check the type and size of the test and train features and labels: print("Train data: ", mnist_train[0].shape) print("Train labels: ", mnist_train[1].shape) print("Test data: ", mnist_test[0].shape) print("Test labels: ", mnist_test[1].shape) # - fig = plt.figure(figsize=(16., 8.)) bins = np.linspace(start=-0.5, stop=9.5, num=11, endpoint=True) bar_heights, _, _ = plt.hist(mnist_train[1], bins=bins, color="royalblue", edgecolor="black", alpha=0.8, rwidth=0.9, align="mid", label="Train") plt.hist(mnist_test[1], bins=bins, bottom=bar_heights, color="salmon", edgecolor="black", alpha=0.8, rwidth=0.9, align="mid", label="Test") plt.xticks(np.arange(10), labels=np.arange(10), fontsize=14., fontweight="normal") plt.legend(fontsize=14.)
plt.title("MNIST Dataset", fontsize=16., fontweight="bold") plt.grid() plt.gca().set_axisbelow(True) plt.show() # ## Unscrambled Images # + # Visualize some of the images: image_gen = ScrambledImageDataGenerator( features=mnist_train[0][0:256, :, :, np.newaxis], labels=mnist_train[1][0:256], batch_size=255, scrambler_array=None, normalize=False) custom_tile_image_plot( (15,15), image_gen[0][0], labels=image_gen[0][1], label_size=28., label_color="red", filename="", figure_size=(16., 16.)) # - # ## Scramble Images: # + # Visualize scrampled images: num_pixels = 28 * 28 scrambler = np.linspace( start=0, stop=num_pixels, num=num_pixels, endpoint=False, dtype=np.int32) np.random.shuffle(scrambler) scrambled_image_gen = ScrambledImageDataGenerator( features=mnist_train[0][0:256, :, :, np.newaxis], labels=mnist_train[1][0:256], batch_size=255, scrambler_array=scrambler, normalize=False) custom_tile_image_plot( (15,15), scrambled_image_gen[0][0], labels=scrambled_image_gen[0][1], label_size=28., label_color="red", filename="", figure_size=(16., 16.)) # - # <a id='fashion_mnist_ds'></a> # <br><br><br> # # ---- # # Fashion MNIST Dataset # + # Get test and train features and labels for the MNIST dataset: fashion_mnist = tf.keras.datasets.fashion_mnist fmnist_train, fmnist_test = fashion_mnist.load_data() # Check the type and size of the test and train features and labels: print("Train data: ", fmnist_train[0].shape) print("Train labels: ", fmnist_train[1].shape) print("Test data: ", fmnist_test[0].shape) print("Test labels: ", fmnist_test[1].shape) # - fig = plt.figure(figsize=(16., 8.)) bins = np.linspace(start=-0.5, stop=9.5, num=11, endpoint=True) bar_heights, _, _ = plt.hist(fmnist_train[1], bins=bins, color="royalblue", edgecolor="black", alpha=0.8, rwidth=0.9, align="mid", label="Train") plt.hist(fmnist_test[1], bins=bins, bottom=bar_heights, color="salmon", edgecolor="black", alpha=0.8, rwidth=0.9, align="mid", label="Test") plt.xticks(np.arange(10), labels=np.arange(10), fontsize=14., fontweight="normal") plt.legend(fontsize=14.) 
plt.title("Fashion MNIST Dataset", fontsize=16., fontweight="bold") plt.grid() plt.gca().set_axisbelow(True) plt.show() # ## Unscrambled Images # + # Visualize some of the images: image_gen = ScrambledImageDataGenerator( features=fmnist_train[0][0:256,:,:], labels=fmnist_train[1][0:256], batch_size=255, scrambler_array=None, normalize=False) custom_tile_image_plot( (15,15), image_gen[0][0], labels=image_gen[0][1], label_size=28., label_color="red", filename="", figure_size=(16., 16.)) # - # ## Scrambled Images # + # Visualize scrampled images: num_pixels = 28 * 28 scrambler = np.linspace( start=0, stop=num_pixels, num=num_pixels, endpoint=False, dtype=np.int32) np.random.shuffle(scrambler) scrambled_image_gen = ScrambledImageDataGenerator( features=fmnist_train[0][0:256,:,:], labels=fmnist_train[1][0:256], batch_size=255, scrambler_array=scrambler, normalize=False) custom_tile_image_plot( (15,15), scrambled_image_gen[0][0], labels=scrambled_image_gen[0][1], label_size=28., label_color="red", filename="", figure_size=(16., 16.)) # - # <a id='cifar10_ds'></a> # <br><br><br> # # ---- # # CIFAR-10 Dataset # + # Get test and train features and labels for the MNIST dataset: cifar10 = tf.keras.datasets.cifar10 cifar10_train, cifar10_test = cifar10.load_data() # Check the type and size of the test and train features and labels: print("Train data: ", cifar10_train[0].shape) print("Train labels: ", cifar10_train[1].shape) print("Test data: ", cifar10_test[0].shape) print("Test labels: ", cifar10_test[1].shape) # - fig = plt.figure(figsize=(16., 8.)) bins = np.linspace(start=-0.5, stop=9.5, num=11, endpoint=True) bar_heights, _, _ = plt.hist(cifar10_train[1], bins=bins, color="royalblue", edgecolor="black", alpha=0.8, rwidth=0.9, align="mid", label="Train") plt.hist(cifar10_test[1], bins=bins, bottom=bar_heights, color="salmon", edgecolor="black", alpha=0.8, rwidth=0.9, align="mid", label="Test") plt.xticks(np.arange(10), labels=np.arange(10), fontsize=14., fontweight="normal") plt.legend(fontsize=14.) 
plt.title("Fashion MNIST Dataset", fontsize=16., fontweight="bold") plt.grid() plt.gca().set_axisbelow(True) plt.show() # ## Unscrambled Images # + # Visualize some of the images: image_gen = ScrambledImageDataGenerator( features=cifar10_train[0][0:256,:,:], labels=cifar10_train[1][0:256, 0], batch_size=255, scrambler_array=None, normalize=False) custom_tile_image_plot( (15,15), image_gen[0][0], labels=image_gen[0][1], label_size=18., label_color="red", filename="", figure_size=(16., 16.)) # - # ## Scrambled Images # + # Visualize scrampled images: num_pixels = 32 * 32 scrambler = np.linspace( start=0, stop=num_pixels, num=num_pixels, endpoint=False, dtype=np.int32) np.random.shuffle(scrambler) scrambled_image_gen = ScrambledImageDataGenerator( features=cifar10_train[0][0:256,:,:], labels=cifar10_train[1][0:256, 0], batch_size=255, scrambler_array=scrambler, normalize=False) custom_tile_image_plot( (15,15), scrambled_image_gen[0][0], labels=scrambled_image_gen[0][1], label_size=18., label_color="red", filename="", figure_size=(16., 16.)) # - # ## Unscramble Scrambled Images # + unscrambler = np.argsort(scrambler) unscrambled_images = np.zeros_like(scrambled_image_gen[0][0]) for idx in range(unscrambled_images.shape[0]): for c in range(3): temp_array = scrambled_image_gen[0][0][idx, :, :, c].flatten()[unscrambler] unscrambled_images[idx, :, :, c] = temp_array.reshape( unscrambled_images.shape[1: -1]) custom_tile_image_plot( (15,15), unscrambled_images, labels=scrambled_image_gen[0][1], label_size=18., label_color="red", filename="", figure_size=(16., 16.)) # - # + # ## Model Constructor # def FCNClassifierModelConstructor( input_shape, # numb_classes, # hidden_layers_map={1:16,2:32,3:64,4:32,5:32,6:16,7:8}, # activation=tf.nn.relu ): # """ # Constructs and retursn a fully connected tf.keras model. # Args: # input_shape (tuple): Input shape. # numb_classes (int): Number of classes (output layer size). # hidden_layers_map (dict): If provided, the *hidden* layers are constructed as outlined. # Note that this dictionary excludes the last layer! # activation (tf.nn): An instance of activation function. 
# Returns: # tf.keras.model # """ # input_size=1 # for d in input_shape: # input_size *= d # # # ## Construct model # model_ = tf.keras.models.Sequential() # model_.add( tf.keras.layers.Flatten( input_shape=input_shape, # name="Flatten" ) ) # for l in sorted(hidden_layers_map,reverse=False): # if( l==1 ): # model_.add( tf.keras.layers.Dense( hidden_layers_map[l], # input_dim=input_size, # activation=activation, # use_bias=True, # kernel_initializer='glorot_uniform', # bias_initializer='zeros', # kernel_regularizer=None, # bias_regularizer=None, # activity_regularizer=None, # kernel_constraint=None, # bias_constraint=None, # name="Dense_"+str(l) ) ) # else: # model_.add( tf.keras.layers.Dense( hidden_layers_map[l], # activation=activation, # use_bias=True, # kernel_initializer='glorot_uniform', # bias_initializer='zeros', # kernel_regularizer=None, # bias_regularizer=None, # activity_regularizer=None, # kernel_constraint=None, # bias_constraint=None, # name="Dense_"+str(l) ) ) # model_.add( tf.keras.layers.Dense(numb_classes, activation=tf.nn.softmax,name="Softmax") ) # # # return model_ # def CNNClassifierModelConstructor( input_shape, # numb_classes, # cnn_layers_map={1:(16, (4,4), (1,1), (4,4), (1,1)), # 2:(16, (4,4), (1,1), (4,4), (1,1)), # 3:(16, (4,4), (1,1), (4,4), (1,1)), # 4:(16, (4,4), (1,1), None, None), # 5:(16, (4,4), (1,1), None, None) }, # fcn_layers_map={1:64,2:32}, # cnn_activation=tf.nn.relu, # fcn_activation=tf.nn.relu, # padding='valid', # data_format='channels_last' ): # """ # Constructs and retursn a CNN tf.keras model. # Args: # input_shape (tuple): Input shape. # numb_classes (int): Number of classes (output layer size). # cnn_layers_map (dict): If provided, the *convolutional* layers are constructed as outlined. It is # a dictionary with layer number as key and 5-dimensional tuple as value. The # last two elements in the tuple are pertinent to the max pool layers. If set # to None, max pooling will be skipped. # fcn_layers_map (dict): If provided, the *fully connected* layers are constructed as outlined. # Note that this dictionary excludes the last layer! # cnn_activation (tf.nn): An instance of activation function. # fcn_activation (tf.nn): An instance of activation function. 
# padding (str): Type of padding for CNN and MaxPool layers: 'valid' or 'simple' # data_format (str): Data format of the input: # channels_first <---> (batch, height, width, channels) # channels_last <---> (batch, channels, height, width) # Returns: # tf.keras.model # """ # model_ = tf.keras.models.Sequential() # for l in sorted(cnn_layers_map,reverse=False): # if( l==1 ): # model_.add( tf.keras.layers.Conv2D( input_shape=input_shape, # filters=cnn_layers_map[l][0], # kernel_size=cnn_layers_map[l][1], # strides=cnn_layers_map[l][2], # padding=padding, # data_format=data_format, # dilation_rate=(1,1), # activation=cnn_activation, # use_bias=True, # kernel_initializer='glorot_uniform', # bias_initializer='zeros', # kernel_regularizer=None, # bias_regularizer=None, # activity_regularizer=None, # kernel_constraint=None, # bias_constraint=None, # name="CNN_"+str(l) ) ) # if( cnn_layers_map[l][3] is not None ): # model_.add( tf.keras.layers.MaxPool2D( pool_size=cnn_layers_map[l][3], # strides=cnn_layers_map[l][4], # padding=padding, # data_format=None, # name="MaxPool_"+str(l) ) ) # else: # model_.add( tf.keras.layers.Conv2D( filters=cnn_layers_map[l][0], # kernel_size=cnn_layers_map[l][1], # strides=cnn_layers_map[l][2], # padding=padding, # data_format=data_format, # dilation_rate=(1,1), # activation=cnn_activation, # use_bias=True, # kernel_initializer='glorot_uniform', # bias_initializer='zeros', # kernel_regularizer=None, # bias_regularizer=None, # activity_regularizer=None, # kernel_constraint=None, # bias_constraint=None, # name="CNN_"+str(l) ) ) # if( cnn_layers_map[l][3] is not None ): # model_.add( tf.keras.layers.MaxPool2D( pool_size=cnn_layers_map[l][3], # strides=cnn_layers_map[l][4], # padding=padding, # data_format=None, # name="MaxPool_"+str(l) ) ) # model_.add(tf.keras.layers.Flatten( name="Flatten" )) # for l in sorted(fcn_layers_map,reverse=False): # model_.add( tf.keras.layers.Dense( fcn_layers_map[l], # activation=fcn_activation, # use_bias=True, # kernel_initializer='glorot_uniform', # bias_initializer='zeros', # kernel_regularizer=None, # bias_regularizer=None, # activity_regularizer=None, # kernel_constraint=None, # bias_constraint=None, # name="Dense_"+str(l+len(cnn_layers_map)) ) ) # ## Last layer: # model_.add( tf.keras.layers.Dense(numb_classes, activation=tf.nn.softmax,name="Softmax") ) # # # return model_ # + ## CASE Ia: FCN Without scrambling: tf.reset_default_graph() # ## Construct a model fcn_wo_model = FCNClassifierModelConstructor( input_shape=(28,28), numb_classes=10, hidden_layers_map={1:512, 2:256, 3:128, 4:64}, activation=tf.nn.relu ) print( fcn_wo_model.summary() ) print( "_"*32, end="\n\n" ) # ## Compiling the model: fcn_wo_model.compile( optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'] ) # ## We add a tensorboard callback: tbCallBack = tf.keras.callbacks.TensorBoard( log_dir='./MNIST_FCN_WO_Scrambling', histogram_freq=1, batch_size=32, write_graph=True, write_images=True, write_grads=True, update_freq='epoch') # ## Early stopping callback to prevent overfitting: earlystopCallback = tf.keras.callbacks.EarlyStopping( monitor='val_loss', min_delta=0.001, patience=10, verbose=0, mode='auto', baseline=None, restore_best_weights=True ) # ## Construct generators for training and validation: tv_idx_split = int(0.8*mnist_x_train.shape[0]) train_generator = DataGenerator( features=mnist_x_train[0:tv_idx_split,:,:], labels=mnist_y_train[0:tv_idx_split], batch_size=32, scrambler_array=None, normalize=True ) validation_generator = 
DataGenerator( features=mnist_x_train[tv_idx_split:,:,:], labels=mnist_y_train[tv_idx_split:], batch_size=32, scrambler_array=None, normalize=True ) print( "Training Length: " , len(train_generator) ) print( "Validation Length: " , len(validation_generator) ) print( "_"*32, end="\n\n" ) # ## Trainig Time! ##============== fcn_wo_model.fit_generator( generator=train_generator, steps_per_epoch=None, epochs=10000, verbose=2, callbacks=[earlystopCallback,tbCallBack], validation_data=validation_generator, validation_steps=None, class_weight=None, max_queue_size=100, workers=8, use_multiprocessing=True, initial_epoch=0 ) print( "_"*32, end="\n\n" ) # ## Testing Time! ##============== fcn_wo_model.evaluate( x=mnist_x_test, y=mnist_y_test, batch_size=None, verbose=1, sample_weight=None, steps=None, callbacks=None, max_queue_size=10, workers=8, use_multiprocessing=True ) # - test_generator = DataGenerator( features=mnist_x_test, labels=mnist_y_test, batch_size=1, scrambler_array=None, normalize=True ) print( "Test Length: " , len(test_generator) ) y_predict = fcn_wo_model.predict_generator( generator=test_generator, steps=None, callbacks=None, max_queue_size=10, workers=8, use_multiprocessing=True, verbose=0 ) CustomTilePlotWithHistogram( (10,10), images=mnist_x_test, labels=mnist_y_test, predictions=y_predict, classes=np.linspace(start=0,stop=10,num=10,endpoint=False,dtype=np.uint8), only_mispredicted=True, filename='', cmap='gray', label_size=32 ) # + ## CASE IIa: FCN Without scrambling: tf.reset_default_graph() # ## Construct a model fcn_w_model = FCNClassifierModelConstructor( input_shape=(28,28), numb_classes=10, hidden_layers_map={1:512, 2:256, 3:128, 4:64}, activation=tf.nn.relu ) print( fcn_w_model.summary() ) print( "_"*32, end="\n\n" ) # ## Compiling the model: fcn_w_model.compile( optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'] ) # ## We add a tensorboard callback: tbCallBack = tf.keras.callbacks.TensorBoard( log_dir='./MNIST_FCN_W_Scrambling', histogram_freq=1, batch_size=32, write_graph=True, write_images=True, write_grads=True, update_freq='epoch') # ## Early stopping callback to prevent overfitting: earlystopCallback = tf.keras.callbacks.EarlyStopping( monitor='val_loss', min_delta=0.001, patience=10, verbose=0, mode='auto', baseline=None, restore_best_weights=True ) # ## Construct generators for training and validation: tv_idx_split = int(0.8*mnist_x_train.shape[0]) train_generator = DataGenerator( features=mnist_x_train[0:tv_idx_split,:,:], labels=mnist_y_train[0:tv_idx_split], batch_size=32, scrambler_array=mnist_scrambler, normalize=True ) validation_generator = DataGenerator( features=mnist_x_train[tv_idx_split:,:,:], labels=mnist_y_train[tv_idx_split:], batch_size=32, scrambler_array=mnist_scrambler, normalize=True ) print( "Training Length: " , len(train_generator) ) print( "Validation Length: " , len(validation_generator) ) print( "_"*32, end="\n\n" ) # ## Trainig Time! ##============== fcn_w_model.fit_generator( generator=train_generator, steps_per_epoch=None, epochs=10000, verbose=2, callbacks=[earlystopCallback,tbCallBack], validation_data=validation_generator, validation_steps=None, class_weight=None, max_queue_size=100, workers=8, use_multiprocessing=True, initial_epoch=0 ) print( "_"*32, end="\n\n" ) # ## Testing Time! 
##============== test_generator = DataGenerator( features=mnist_x_test, labels=mnist_y_test, batch_size=32, scrambler_array=mnist_scrambler, normalize=True ) print( "Test Length: " , len(test_generator) ) fcn_w_model.evaluate_generator( test_generator, steps=None, callbacks=None, max_queue_size=10, workers=8, use_multiprocessing=True, verbose=0 ) # + # ## CASE Ib: CCN WITHOUT scrambling: tf.reset_default_graph() # ## Construct a model ccn_wo_model = CNNClassifierModelConstructor( input_shape=(28,28,1), numb_classes=10, cnn_layers_map={1:(32, (4,4), (1,1), (8,8), (1,1)), 2:(16, (4,4), (1,1), (4,4), (1,1)), 3:(8, (4,4), (1,1), (2,2), (1,1)) }, fcn_layers_map={1:64,2:32}, cnn_activation=tf.nn.relu, fcn_activation=tf.nn.relu, padding='valid', data_format='channels_last' ) print( ccn_wo_model.summary() ) print( "_"*32, end="\n\n" ) # ## Compiling the model: ccn_wo_model.compile( optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'] ) # ## We add a tensorboard callback: tbCallBack = tf.keras.callbacks.TensorBoard( log_dir='./MNIST_CCN_WO_Scrambling', histogram_freq=1, batch_size=32, write_graph=True, write_images=True, write_grads=True, update_freq='epoch') # ## Early stopping callback to prevent overfitting: earlystopCallback = tf.keras.callbacks.EarlyStopping( monitor='val_loss', min_delta=0.001, patience=10, verbose=0, mode='auto', baseline=None, restore_best_weights=True ) # ## Construct generators for training and validation: tv_idx_split = int(0.8*mnist_x_train.shape[0]) train_generator = DataGenerator( features=mnist_x_train[0:tv_idx_split,:,:,np.newaxis], labels=mnist_y_train[0:tv_idx_split], batch_size=32, scrambler_array=None, normalize=True ) validation_generator = DataGenerator( features=mnist_x_train[tv_idx_split:,:,:,np.newaxis], labels=mnist_y_train[tv_idx_split:], batch_size=32, scrambler_array=None, normalize=True ) print( "Training Length: " , len(train_generator) ) print( "Validation Length: " , len(validation_generator) ) print( "_"*32, end="\n\n" ) # ## Trainig Time! ##============== ccn_wo_model.fit_generator( generator=train_generator, steps_per_epoch=None, epochs=10000, verbose=2, callbacks=[earlystopCallback,tbCallBack], validation_data=validation_generator, validation_steps=None, class_weight=None, max_queue_size=100, workers=8, use_multiprocessing=True, initial_epoch=0 ) # ## Testing Time! 
##============== ccn_wo_model.evaluate( x=mnist_x_test[:,:,:,np.newaxis], y=mnist_y_test, batch_size=None, verbose=1, sample_weight=None, steps=None, callbacks=None, max_queue_size=10, workers=8, use_multiprocessing=True ) # - # test_generator = DataGenerator( features=mnist_x_test[:,:,:,np.newaxis], # labels=mnist_y_test, # batch_size=1, # scrambler_array=None, # normalize=True ) # print( "Test Length: " , len(test_generator) ) # y_predict = ccn_wo_model.predict_generator( generator=test_generator, # steps=None, # callbacks=None, # max_queue_size=10, # workers=8, # use_multiprocessing=True, # verbose=0 ) y_predict = ccn_wo_model.predict( x=mnist_x_test[:,:,:,np.newaxis], batch_size=None, verbose=0, steps=None, callbacks=None, max_queue_size=10, workers=8, use_multiprocessing=True ) CustomTilePlotWithHistogram( (10,10), images=mnist_x_test, labels=mnist_y_test, predictions=y_predict, classes=np.linspace(start=0,stop=10,num=10,endpoint=False,dtype=np.uint8), only_mispredicted=True, filename='', cmap='gray', label_size=32 ) # + ## CASE IIb: CCN WITH scrambling: tf.reset_default_graph() # ## Construct a model ccn_w_model = CNNClassifierModelConstructor( input_shape=(28,28,1), numb_classes=10, cnn_layers_map={1:(32, (4,4), (1,1), (8,8), (1,1)), 2:(16, (4,4), (1,1), (4,4), (1,1)), 3:(8, (4,4), (1,1), (2,2), (1,1)) }, fcn_layers_map={1:64,2:32}, cnn_activation=tf.nn.relu, fcn_activation=tf.nn.relu, padding='valid', data_format='channels_last' ) print( ccn_w_model.summary() ) print( "_"*32, end="\n\n" ) # ## Compiling the model: ccn_w_model.compile( optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'] ) # ## We add a tensorboard callback: tbCallBack = tf.keras.callbacks.TensorBoard( log_dir='./MNIST_CCN_W_Scrambling', histogram_freq=1, batch_size=32, write_graph=True, write_images=True, write_grads=True, update_freq='epoch') # ## Early stopping callback to prevent overfitting: earlystopCallback = tf.keras.callbacks.EarlyStopping( monitor='val_loss', min_delta=0.001, patience=10, verbose=0, mode='auto', baseline=None, restore_best_weights=True ) # ## Construct generators for training and validation: tv_idx_split = int(0.8*mnist_x_train.shape[0]) train_generator = DataGenerator( features=mnist_x_train[0:tv_idx_split,:,:,np.newaxis], labels=mnist_y_train[0:tv_idx_split], batch_size=32, scrambler_array=mnist_scrambler, normalize=True ) validation_generator = DataGenerator( features=mnist_x_train[tv_idx_split:,:,:,np.newaxis], labels=mnist_y_train[tv_idx_split:], batch_size=32, scrambler_array=mnist_scrambler, normalize=True ) print( "Training Length: " , len(train_generator) ) print( "Validation Length: " , len(validation_generator) ) print( "_"*32, end="\n\n" ) # ## Trainig Time! ##============== ccn_w_model.fit_generator( generator=train_generator, steps_per_epoch=None, epochs=10000, verbose=2, callbacks=[earlystopCallback,tbCallBack], validation_data=validation_generator, validation_steps=None, class_weight=None, max_queue_size=100, workers=8, use_multiprocessing=True, initial_epoch=0 ) # ## Testing Time! ##============== test_generator = DataGenerator( features=mnist_x_test[:,:,:,np.newaxis], labels=mnist_y_test, batch_size=32, scrambler_array=mnist_scrambler, normalize=True ) print( "Test Length: " , len(test_generator) ) ccn_w_model.evaluate_generator( test_generator, steps=None, callbacks=None, max_queue_size=10, workers=8, use_multiprocessing=True, verbose=0 )
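# ## Aside: Why `np.argsort` Inverts the Scrambler
#
# A small, self-contained sketch of the permutation round trip used in the "Unscramble Scrambled Images" cell above (illustrative only, on a tiny 1-D "image"):

# +
import numpy as np

rng = np.random.RandomState(0)
pixels = np.arange(12)                  # stand-in for a flattened image
scrambler = rng.permutation(12)         # same role as the scrambler arrays above

scrambled = pixels[scrambler]           # apply the permutation
unscrambler = np.argsort(scrambler)     # argsort of a permutation is its inverse
restored = scrambled[unscrambler]       # apply the inverse permutation

print(np.array_equal(pixels, restored))  # True
# -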
old_projects/cnn_classifiers/classification_with_scrambled_images.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # In-Class Coding Lab: Strings # # The goals of this lab are to help you to understand: # # - String slicing for substrings # - How to use Python's built-in String functions in the standard library. # - Tokenizing and Parsing Data # - How to create user-defined functions to parse and tokenize strings # # # # Strings # # ## Strings are immutable sequences # # Python strings are immutable sequences. This means we cannot change them "in part" and there is implicit ordering. # # The characters in a string are zero-indexed, meaning the index of the first character is 0. # # We can leverage this in a variety of ways. # # For example: # + x = input("Enter something: ") print ("You typed:", x) print ("number of characters:", len(x) ) print ("First character is:", x[0]) print ("Last character is:", x[-1]) ## They're sequences, so you can definitely loop over them: print("Printing one character at a time: ") for ch in x: print(ch) # print a character at a time! # - # ## Slices as substrings # # Python lists and sequences use **slice notation**, which is a clever way to get a substring from a given string. # # Slice notation requires two values: a start index and an end index. The substring returned starts at the start index, and *ends at the position before the end index*. It ends at the position *before* so that when you slice a string into parts you know where you've "left off". # # For example: state = "Mississippi" print (state[0:4]) # Miss print (state[4:len(state)]) # issippi # In this next example, play around with the variable `split`, adjusting it to how you want the string to be split up. Re-run the cell several times with different values to get a feel for what happens. state = "Mississippi" split = 4 # TODO: play around with this number left = state[0:split] right = state[split:len(state)] print(left, right) # ### Slicing from the beginning or to the end # # If you omit the begin or end slice, Python will slice from the beginning of the string or all the way to the end. So if you say `x[:5]` it's the same as `x[0:5]`. # # For example: state = "Ohio" print(state[0:2], state[:2]) # same! print(state[2:len(state)], state[2:]) # same # ### Now Try It! # # Split the string `"New Hampshire"` into two sub-strings, one containing `"New"`, the other containing `"Hampshire"` (without the space). ## TODO: Write code here state = "New Hampshire" print(state[0:3]) print(state[4:13]) # ## Python's built-in String Functions # # Python includes several handy built-in string functions (also known as *methods* in object-oriented parlance). To get a list of available functions, use the `dir()` function on any string variable, or on the type `str` itself. # print ( dir(str)) # Let's suppose you want to learn how to use the `count` function. There are 2 ways you can do this. # # 1. search the web for `python 3 str count` or # 1. bring up internal help `help(str.count)` # # Both have their advantages and disadvantages. I would start with the second one, and only fall back to a web search when you can't figure it out from the Python documentation. # # Here's the documentation for `count`: help(str.count) # You'll notice in the help output it says S.count(). This indicates this function is a method function, which means you invoke it like this: `variable.count()`.
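# For example, here is the method-call syntax on a different string (so you can still try the exercise below yourself):

fruit = "banana"
print(fruit.count("a"))   # 3 -- count() is called *on* the variable fruit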
# # ### Now Try It # # Try to use the count() method to count the number of `'i'`'s in the string `'Mississippi'`: state = 'Mississippi' sub = 'i' print("The number of i's in Mississippi is: ") state.count(sub,0,11) # ### TANGENT: The subtle difference between a function and a method. # # You'll notice sometimes we call our function alone; other times it's attached to a variable, as was the case in the example above. When we say `state.count('i')`, the period (`.`) between the variable and function indicates this function is a *method function*. The key difference between the two is that a method is attached to a variable. To call a method function you must say `variable.function()`, whereas when you call a function it's just `function()`. The variable associated with the method call is usually part of the function's context. # # Here's an example: name = "Larry" print( len(name) ) # a function call len(name) stands on its own. Gets length of 'Larry' print( name.__len__() ) # a method call name.__len__() does the same thing for its variable 'Larry' # ### Now Try It # # Try to figure out which built-in string function to use to accomplish this task. # # Write some code to find the text `'is'` in some text. The program should output the first position of `'is'` in the text. # # Examples: # # ``` # When: text = 'Mississippi' then position = 1 # When: text = "This is great" then position = 2 # When: text = "Burger" then position = -1 # ``` # TODO: Write your code here text = input("Enter some text: ") print(text.find('is')) # ### Now Try It # # **Is that a URL?** # # Try to write a rudimentary URL checker. The program should input a text string and then use the `startswith` function to check if the string begins with `"http://"` or `"https://"`. If it does, we can assume it is a URL. text = input("Enter a string: ") print(text.startswith( 'https://' ) or text.startswith( 'http://' ))
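# ### One Small Refinement
#
# `str.startswith` also accepts a *tuple* of prefixes, so the URL check above can be written without the `or`. This is an optional variation, not part of the original lab:

def looks_like_url(text):
    """Rudimentary check: does the text begin with a web scheme?"""
    return text.startswith(("http://", "https://"))

print(looks_like_url("https://my.school.edu"))  # True
print(looks_like_url("Burger"))                 # False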
content/lessons/07/Class-Coding-Lab/CCL-Strings.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Simple Animations Using clear_output # Sometimes you want to clear the output area in the middle of a calculation. This can be useful for doing simple animations. In terminals, there is the carriage-return (`'\r'`) for overwriting a single line, but the notebook frontend can clear the whole output area, not just a single line. # # To clear output in the Notebook you can use the `clear_output()` function. If you are clearing the output every frame of an animation, calling `clear_output()` will create noticeable flickering. You can use `clear_output(wait=True)` to add the *clear_output* call to a queue. When data becomes available to replace the existing output, the *clear_output* will be called immediately before the new data is added. This avoids the flickering by not rendering the cleared output to the screen. # ## Simple example # Here we show our progress iterating through a list: import sys import time from IPython.display import display, clear_output for i in range(10): time.sleep(0.25) clear_output(wait=True) print(i) sys.stdout.flush() # ## AsyncResult.wait_interactive # The AsyncResult object has a special `wait_interactive()` method, which prints its progress interactively, # so you can watch as your parallel computation completes. # # **This example assumes you have an IPython cluster running, which you can start from the [cluster panel](/#clusters)** # + #from IPython import parallel #rc = parallel.Client() #view = rc.load_balanced_view() # #amr = view.map_async(time.sleep, [0.5]*100) # #amr.wait_interactive() # - # ## Matplotlib example # You can also use `clear_output()` to clear figures and plots. # %matplotlib inline import numpy as np import matplotlib.pyplot as plt # + from scipy.special import jn x = np.linspace(0,5) f, ax = plt.subplots() ax.set_title("Bessel functions") for n in range(1,10): time.sleep(1) ax.plot(x, jn(x,n)) clear_output(wait=True) display(f) # close the figure at the end, so we don't get a duplicate # of the last plot plt.close() # -
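# ## Text progress bar
#
# The same `clear_output(wait=True)` pattern works for a simple in-place text progress bar (a small illustrative variation on the first example):

# +
import time
from IPython.display import clear_output

for pct in range(0, 101, 5):
    clear_output(wait=True)
    filled = pct // 5
    print("[" + "#" * filled + "-" * (20 - filled) + "] {}%".format(pct))
    time.sleep(0.1)
# -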
001-Jupyter/001-Tutorials/001-Basic-Tutorials/001-IPython-Kernel/Animations Using clear_output.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Monte Carlo Simulation and Random Walk Generation # $$ \frac{S_{t+dt} - S_t}{S_t} = \mu \, dt + \sigma \sqrt{dt} \, \xi_t $$ # + import numpy as np import pandas as pd def gbm(n_years =10, n_scenarios = 1000, mu=0.07,sigma = 0.15, steps_per_year = 12, s_0 = 100.0): """ Evolution of a Stock Price using Geometric Brownian Motion Model (Monte Carlo Simulation) """ dt = 1/steps_per_year n_steps = int(n_years * steps_per_year) rets_plus_1 = np.random.normal(loc= (1+mu*dt),scale = (sigma*np.sqrt(dt)),size = (n_steps, n_scenarios), ) rets_plus_1[0] = 1 prices = s_0*pd.DataFrame(rets_plus_1).cumprod() return prices # - import ashmodule as ash ax = gbm(n_scenarios = 20).plot(legend = False,figsize = (12,6)); ax.set_xlim(left = 0); gbm(n_scenarios = 10).head() # %load_ext autoreload # %autoreload 2 # # Using IPyWidgets to Interactively Plot the Monte Carlo Simulation import ipywidgets as widgets from IPython.display import display import matplotlib.pyplot as plt # + def show_gbm(n_scenarios=1000, mu=0.07, sigma=0.15, s_0=100.0): """ Draw the results of a stock price evolution under a Geometric Brownian Motion model """ s_0=s_0 prices = gbm(n_scenarios=n_scenarios, mu=mu, sigma=sigma, s_0=s_0) ax = prices.plot(legend=False, color="indianred", alpha = 0.5, linewidth=2, figsize=(12,5)) ax.axhline(y=s_0, ls=":", color="black") # draw a dot at the origin ax.plot(0,s_0, marker='o',color='darkred', alpha=0.2) # - gbm_controls = widgets.interactive(ash.show_gbm, n_scenarios = widgets.IntSlider(min=1,max=1000,step=5), mu =(-0.3,0.3,0.05), sigma =(0,0.5,0.01), s_0 =(1,500,10) ) display(gbm_controls) # # Using IPyWidgets to interact with Monte Carlo Simulations and CPPI def show_cppi(n_scenarios=50, mu=0.07, sigma=0.15, m=3, floor=0.0, riskfree_rate=0.03, y_max=100,s_0=100, steps_per_year = 12): """ Plot the results of a Monte Carlo Simulation of CPPI """ start = s_0 sim_rets = ash.gbm(n_scenarios=n_scenarios, mu=mu, sigma=sigma, steps_per_year=steps_per_year) risky_r = pd.DataFrame(sim_rets) # run the "back"-test btr = ash.run_cppi(risky_r=pd.DataFrame(risky_r),riskfree_rate=riskfree_rate,m=m, start=start, floor=floor) wealth = btr["risky_r"] # calculate terminal wealth stats y_max=wealth.values.max()*y_max/100 ax = wealth.plot(legend = False, alpha = 0.3, color = "indianred", figsize = (12,6)) ax.axhline(y=start, ls=":", color= "black") ax.axhline(y=start*floor, ls="--",color = "red") ax.set_ylim(top=y_max) # + cppi_controls = widgets.interactive(show_cppi, n_scenarios=widgets.IntSlider(min=1, max=1000, step=5, value=50), mu=(0., +.2, .01), sigma=(0, .30, .05), floor=(0, 2, .1), m=(1, 5, .5), riskfree_rate=(0, .05, .01), y_max=widgets.IntSlider(min=0, max=100, step=1, value=100, description="Zoom Y Axis") ) # - display(cppi_controls) r_asset = ash.gbm(n_scenarios=50) r_asset ash.run_cppi((r_asset))["risky_r"][0].plot(legend=False,figsize =(12,6)) ash.run_cppi(r_asset,start = 100)["Wealth"].head() r_asset.shape r_asset.index = pd.date_range("2000-01",periods=r_asset.shape[0],freq="MS").to_period("M") r_asset.head() ash.run_cppi(r_asset,start = 100)["risky_r"].plot(legend = False); ash.run_cppi(r_asset,start = 100)["risky_r"].plot(legend = False,figsize = (12,6),color= "red", alpha = 0.3);
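# # Terminal Wealth Summary
#
# A small follow-up sketch: summarize the distribution of terminal prices produced by the `gbm` simulator defined above (this reuses the notebook's own `gbm`; exact numbers will vary from run to run):

# +
prices = gbm(n_years=10, n_scenarios=1000, mu=0.07, sigma=0.15, s_0=100.0)
terminal = prices.iloc[-1]   # last row: simulated price at the end of the horizon

print("Mean terminal price:   ", round(terminal.mean(), 2))
print("Median terminal price: ", round(terminal.median(), 2))
print("P(end below start):    ", round((terminal < 100.0).mean(), 4))
# -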
Introduction to Portfolio Construction and Analysis with Python/W3/.ipynb_checkpoints/Monte Carlo Simulation-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Example queries for Economic Characteristics on COVID-19 Knowledge Graph # [Work in progress] # # This notebook demonstrates how to run Cypher queries to get [Economic Characteristics from the American Community Survey 2018 5-year data](https://data.census.gov/cedsci/table?tid=ACSDP5Y2018.DP03) at multiple levels of geographic granularity. import pandas as pd import matplotlib.pyplot as plt from py2neo import Graph pd.options.display.max_rows = None # display all rows pd.options.display.max_columns = None # display all columsns # #### Connect to COVID-19-Net Knowledge Graph graph = Graph("bolt://172.16.17.32:7687", user="reader", password="<PASSWORD>") # ## Economic Characteristics # The following variables are available for queries. Variable names ending with Pct represent values in percent, whereas all other variables represent counts. For details see [Subject Definitions](https://www2.census.gov/programs-surveys/acs/tech_docs/subject_definitions/2018_ACSSubjectDefinitions.pdf). # #### Commuting # 'DP03_0018E': 'workers16YearsAndOver', # 'DP03_0019E': 'droveAloneToWorkInCarTruckOrVan', # 'DP03_0019PE': 'droveAloneToWorkInCarTruckOrVanPct', # 'DP03_0020E': 'carpooledToWorkInCarTruckOrVan', # 'DP03_0020PE': 'carpooledToWorkInCarTruckOrVanPct', # 'DP03_0021E': 'publicTransportToWork', # 'DP03_0021PE': 'publicTransportToWorkPct', # 'DP03_0022E': 'walkedToWork', # 'DP03_0022PE': 'walkedToWorkPct', # 'DP03_0023E': 'otherMeansOfCommutingToWork', # 'DP03_0023PE': 'otherMeansOfCommutingToWorkPct', # 'DP03_0024E': 'workedAtHome', # 'DP03_0024PE': 'workedAtHomePct', # 'DP03_0025E': 'meanTravelTimeToWorkMinutes', # #### Employment # 'DP03_0001E': 'population16YearsAndOver', # 'DP03_0002E': 'population16YearsAndOverInLaborForce', # 'DP03_0002PE': 'population16YearsAndOverInLaborForcePct', # 'DP03_0003E': 'population16YearsAndOverInCivilianLaborForce', # 'DP03_0003PE': 'population16YearsAndOverInCivilianLaborForcePct', # 'DP03_0006E': 'population16YearsAndOverInArmedForces', # 'DP03_0006PE': 'population16YearsAndOverInArmedForcesPct', # 'DP03_0007E': 'population16YearsAndOverNotInLaborForce', # 'DP03_0007PE': 'population16YearsAndOverNotInLaborForcePct', # #### HealthInsurance # 'DP03_0095E': 'civilianNoninstitutionalizedPopulation', # 'DP03_0096E': 'withHealthInsuranceCoverage', # 'DP03_0096PE': 'withHealthInsuranceCoveragePct', # 'DP03_0097E': 'withPrivateHealthInsurance', # 'DP03_0097PE': 'withPrivateHealthInsurancePct', # 'DP03_0098E': 'withPublicCoverage', # 'DP03_0098PE': 'withPublicCoveragePct', # 'DP03_0099E': 'noHealthInsuranceCoverage', # 'DP03_0099PE': 'noHealthInsuranceCoveragePct', # #### Income # 'DP03_0051E': 'totalHouseholds', # 'DP03_0052E': 'householdIncomeLessThan10000USD', # 'DP03_0052PE': 'householdIncomeLessThan10000USDPct', # 'DP03_0053E': 'householdIncome10000To14999USD', # 'DP03_0053PE': 'householdIncome10000To14999USDPct', # 'DP03_0054E': 'householdIncome15000To24999USD', # 'DP03_0054PE': 'householdIncome15000To24999USDPct', # 'DP03_0055E': 'householdIncome25000To34999USD', # 'DP03_0055PE': 'householdIncome25000To34999USDPct', # 'DP03_0056E': 'householdIncome35000To49999USD', # 'DP03_0056PE': 'householdIncome35000To49999USDPct', # 'DP03_0057E': 'householdIncome50000To74999USD', # 'DP03_0057PE': 'householdIncome50000To74999USDPct', # 'DP03_0058E': 
'householdIncome75000To99999USD', # 'DP03_0058PE': 'householdIncome75000To99999USDPct', # 'DP03_0059E': 'householdIncome100000To149999USD', # 'DP03_0059PE': 'householdIncome100000To149999USDPct', # 'DP03_0060E': 'householdIncome150000To199999USD', # 'DP03_0060PE': 'householdIncome150000To199999USDPct', # 'DP03_0061E': 'householdIncomeMoreThan200000USD', # 'DP03_0061PE': 'householdIncomeMoreThan200000USDPct', # 'DP03_0062E': 'medianHouseholdIncomeUSD', # 'DP03_0063E': 'meanHouseholdIncomeUSD', # #### Occupation # 'DP03_0026E': 'civilianEmployedPopulation16YearsAndOver', # 'DP03_0027E': 'managementBusinessScienceAndArtsOccupations', # 'DP03_0027PE': 'managementBusinessScienceAndArtsOccupationsPct', # 'DP03_0028E': 'serviceOccupations', # 'DP03_0028PE': 'serviceOccupationsPct', # 'DP03_0029E': 'salesAndOfficeOccupations', # 'DP03_0029PE': 'salesAndOfficeOccupationsPct', # 'DP03_0030E': 'naturalResourcesConstructionAndMaintenanceOccupations', # 'DP03_0030PE': 'naturalResourcesConstructionAndMaintenanceOccupationsPct', # 'DP03_0031E': 'productionTransportationAndMaterialMovingOccupations', # 'DP03_0031PE': 'productionTransportationAndMaterialMovingOccupationsPct' # ## Geographic granularity # Data are available at 3 levels of granularity: # * US County # * US Zip Code # * US Census Tract # ## Query Examples # ### Get Data By US County # If the state and county fips codes are available, economic characteristics can be retrieved directly. # # Note, fips codes are represented as strings. # ##### Example: Commuting state_fips = '06' county_fips = '073' query = """ MATCH (c:Commuting{countyFips:$county_fips, stateFips:$state_fips}) RETURN c.stateFips, c.countyFips, c.droveAloneToWorkInCarTruckOrVanPct, c.publicTransportToWorkPct, c.walkedToWorkPct, c.otherMeansOfCommutingToWorkPct, c.workedAtHomePct """ df = graph.run(query, county_fips=county_fips, state_fips=state_fips).to_data_frame() df.head() # Example: Get Employment characteristics by traversing the KG county = 'Los Angeles County' query = """ MATCH (a:Admin2{name:$admin2})-[:HAS_ECONOMICS]-(:Economics)-[:HAS_EMPLOYMENT]-(e:Employment) RETURN a.name, e.population16YearsAndOverInLaborForcePct, e.population16YearsAndOverInCivilianLaborForcePct, e.population16YearsAndOverInArmedForcesPct, e.population16YearsAndOverNotInLaborForcePct """ df = graph.run(query, admin2=county).to_data_frame() df.head() # ### Get Data by US Postal Code # # Note, postal codes are represented as strings. zip_code = '92130' query = """ MATCH (h:HealthInsurance{postalCode: $zip_code}) RETURN h.postalCode, h.withHealthInsuranceCoveragePct, h.withPrivateHealthInsurancePct, h.withPublicCoveragePct, h.noHealthInsuranceCoveragePct """ df = graph.run(query, zip_code=zip_code).to_data_frame() df.head() # ##### Example: List income data for Zip codes with a place name # # Note, Zip code areas may cross city boundaries. Place names are the preferred names used by the US Postal Service. # + place_name = '<NAME>' query = """ MATCH (p:PostalCode{placeName:$place_name})-[:HAS_ECONOMICS]-(:Economics)-[:HAS_INCOME]-(i:Income) RETURN p.name AS `Zip code`, i.medianHouseholdIncomeUSD AS `Median Household Income`, i.meanHouseholdIncomeUSD AS `Mean Household Income` """ df = graph.run(query, place_name=place_name).to_data_frame() df.head() # - df.plot.bar(x='Zip code', y=["Median Household Income", "Mean Household Income"], title='Income [USD]', rot=0); # ### Get Data by US Census Tract # # Note, tracts are represented at strings. 
# ##### Example: Occupations for a tract tract = '06073008324' query = """ MATCH (o:Occupation{tract: $tract}) RETURN o.tract, o.managementBusinessScienceAndArtsOccupationsPct, o.serviceOccupationsPct, o.salesAndOfficeOccupationsPct, o.naturalResourcesConstructionAndMaintenanceOccupationsPct, o.productionTransportationAndMaterialMovingOccupationsPct """ df = graph.run(query, tract=tract).to_data_frame() df.head() # ##### Example: List Occupations for all tracts in a county # + state = 'California' county = 'Orange County' query = """ MATCH (a1:Admin1{name: $admin1})-[:IN]-(a2:Admin2{name: $admin2})-[:IN]-(t:Tract)-[:HAS_ECONOMICS]-(:Economics)-[:HAS_OCCUPATION]-(o:Occupation) RETURN a1.name AS State, a2.name AS County, t.name AS Tract, o.managementBusinessScienceAndArtsOccupationsPct, o.serviceOccupationsPct, o.salesAndOfficeOccupationsPct, o.naturalResourcesConstructionAndMaintenanceOccupationsPct, o.productionTransportationAndMaterialMovingOccupationsPct """ df = graph.run(query, admin1=state, admin2=county).to_data_frame() df.head()
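# ##### Example: Average occupation mix across the county
#
# A small follow-up sketch (not one of the original queries): average the tract-level percentages from the previous result to get a rough county-wide occupation mix, then plot it. It assumes the non-identifier columns of `df` come back as numeric values.

summary = df.drop(columns=['State', 'County', 'Tract']).mean().sort_values(ascending=False)
summary.plot.barh(title='Average occupation mix across tracts in ' + county);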
notebooks/queries/EconomicCharacteristics.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # <table class="ee-notebook-buttons" align="left"> # <td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/Algorithms/Segmentation/segmentation_snic.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td> # <td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/Algorithms/Segmentation/segmentation_snic.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td> # <td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/Algorithms/Segmentation/segmentation_snic.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td> # </table> # ## Install Earth Engine API and geemap # Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`. # The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet. # # **Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving). # + # Installs geemap package import subprocess try: import geemap except ImportError: print('geemap package not installed. Installing ...') subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap']) # Checks whether this notebook is running on Google Colab try: import google.colab import geemap.eefolium as geemap except: import geemap # Authenticates and initializes Earth Engine import ee try: ee.Initialize() except Exception as e: ee.Authenticate() ee.Initialize() # - # ## Create an interactive map # The default basemap is `Google Maps`. 
[Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function. Map = geemap.Map(center=[40,-100], zoom=4) Map # ## Add Earth Engine Python script # + # Add Earth Engine dataset # imageCollection = ee.ImageCollection("USDA/NAIP/DOQQ"), # geometry = ee.Geometry.Polygon( # [[[-121.89511299133301, 38.98496606984683], # [-121.89511299133301, 38.909335196675435], # [-121.69358253479004, 38.909335196675435], # [-121.69358253479004, 38.98496606984683]]], {}, False), # geometry2 = ee.Geometry.Polygon( # [[[-108.34304809570307, 36.66975278349341], # [-108.34225416183466, 36.66977859999848], # [-108.34226489067072, 36.67042400981031], # [-108.34308028221125, 36.670380982657925]]]), # imageCollection2 = ee.ImageCollection("USDA/NASS/CDL"), # cdl2016 = ee.Image("USDA/NASS/CDL/2016") # Map.centerObject(geometry, {}, 'roi') # # Map.addLayer(ee.Image(1), {'palette': "white"}) # cdl2016 = cdl2016.select(0).clip(geometry) # function erode(img, distance) { # d = (img.Not().unmask(1) \ # .fastDistanceTransform(30).sqrt() \ # .multiply(ee.Image.pixelArea().sqrt())) # return img.updateMask(d.gt(distance)) # } # function dilate(img, distance) { # d = (img.fastDistanceTransform(30).sqrt() \ # .multiply(ee.Image.pixelArea().sqrt())) # return d.lt(distance) # } # function expandSeeds(seeds) { # seeds = seeds.unmask(0).focal_max() # return seeds.updateMask(seeds) # } # bands = ["R", "G", "B", "N"] # img = imageCollection \ # .filterDate('2015-01-01', '2017-01-01') \ # .filterBounds(geometry) \ # .mosaic() # img = ee.Image(img).clip(geometry).divide(255).select(bands) # Map.addLayer(img, {'gamma': 0.8}, "RGBN", False) # seeds = ee.Algorithms.Image.Segmentation.seedGrid(36) # # Apply a softening. # kernel = ee.Kernel.gaussian(3) # img = img.convolve(kernel) # Map.addLayer(img, {'gamma': 0.8}, "RGBN blur", False) # # Compute and display NDVI, NDVI slices and NDVI gradient. # ndvi = img.normalizedDifference(["N", "R"]) # # print(ui.Chart.image.histogram(ndvi, geometry, 10)) # Map.addLayer(ndvi, {'min':0, 'max':1, 'palette': ["black", "tan", "green", "darkgreen"]}, "NDVI", False) # Map.addLayer(ndvi.gt([0, 0.2, 0.40, 0.60, 0.80, 1.00]).reduce('sum'), {'min':0, 'max': 6}, "NDVI steps", False) # ndviGradient = ndvi.gradient().pow(2).reduce('sum').sqrt() # Map.addLayer(ndviGradient, {'min':0, 'max':0.01}, "NDVI gradient", False) # gradient = img.spectralErosion().spectralGradient('emd') # Map.addLayer(gradient, {'min':0, 'max': 0.3}, "emd", False) # # Run SNIC on the regular square grid. # snic = ee.Algorithms.Image.Segmentation.SNIC({ # 'image': img, # 'size': 32, # compactness: 5, # connectivity: 8, # neighborhoodSize:256, # seeds: seeds # }).select(["R_mean", "G_mean", "B_mean", "N_mean", "clusters"], ["R", "G", "B", "N", "clusters"]) # clusters = snic.select("clusters") # Map.addLayer(clusters.randomVisualizer(), {}, "clusters") # Map.addLayer(snic, {'bands': ["R", "G", "B"], 'min':0, 'max':1, 'gamma': 0.8}, "means", False) # Map.addLayer(expandSeeds(seeds)) # # Compute per-cluster stdDev. # stdDev = img.addBands(clusters).reduceConnectedComponents(ee.Reducer.stdDev(), "clusters", 256) # Map.addLayer(stdDev, {'min':0, 'max':0.1}, "StdDev") # # Display outliers as transparent # outliers = stdDev.reduce('sum').gt(0.25) # Map.addLayer(outliers.updateMask(outliers.Not()), {}, "Outliers", False) # # Within each outlier, find most distant member. 
# distance = img.select(bands).spectralDistance(snic.select(bands), "sam").updateMask(outliers) # maxDistance = distance.addBands(clusters).reduceConnectedComponents(ee.Reducer.max(), "clusters", 256) # Map.addLayer(distance, {'min':0, 'max':0.3}, "max distance") # Map.addLayer(expandSeeds(expandSeeds(distance.eq(maxDistance))), {'palette': ["red"]}, "second seeds") # newSeeds = seeds.unmask(0).add(distance.eq(maxDistance).unmask(0)) # newSeeds = newSeeds.updateMask(newSeeds) # # Run SNIC again with both sets of seeds. # snic2 = ee.Algorithms.Image.Segmentation.SNIC({ # 'image': img, # 'size': 32, # compactness: 5, # connectivity: 8, # neighborhoodSize: 256, # seeds: newSeeds # }).select(["R_mean", "G_mean", "B_mean", "N_mean", "clusters"], ["R", "G", "B", "N", "clusters"]) # clusters2 = snic2.select("clusters") # Map.addLayer(clusters2.randomVisualizer(), {}, "clusters 2") # Map.addLayer(snic2, {'bands': ["R", "G", "B"], 'min':0, 'max':1, 'gamma': 0.8}, "means", False) # # Compute outliers again. # stdDev2 = img.addBands(clusters2).reduceConnectedComponents(ee.Reducer.stdDev(), "clusters", 256) # Map.addLayer(stdDev2, {'min':0, 'max':0.1}, "StdDev 2") # outliers2 = stdDev2.reduce('sum').gt(0.25) # outliers2 = outliers2.updateMask(outliers2.Not()) # Map.addLayer(outliers2, {}, "Outliers 2", False) # # Show the final set of seeds. # Map.addLayer(expandSeeds(newSeeds), {'palette': "white"}, "newSeeds") # Map.addLayer(expandSeeds(distance.eq(maxDistance)), {'palette': ["red"]}, "second seeds") # # Area, Perimeter, Width and Height (using snic1 for speed) # area = ee.Image.pixelArea().addBands(clusters).reduceConnectedComponents(ee.Reducer.sum(), "clusters", 256) # Map.addLayer(area, {'min':50000, 'max': 500000}, "Cluster Area") # minMax = clusters.reduceNeighborhood(ee.Reducer.minMax(), ee.Kernel.square(1)) # perimeterPixels = minMax.select(0).neq(minMax.select(1)).rename('perimeter') # Map.addLayer(perimeterPixels, {'min': 0, 'max': 1}, 'perimeterPixels') # perimeter = perimeterPixels.addBands(clusters) \ # .reduceConnectedComponents(ee.Reducer.sum(), 'clusters', 256) # Map.addLayer(perimeter, {'min': 100, 'max': 400}, 'Perimeter size', False) # sizes = ee.Image.pixelLonLat().addBands(clusters).reduceConnectedComponents(ee.Reducer.minMax(), "clusters", 256) # width = sizes.select("longitude_max").subtract(sizes.select("longitude_min")) # height = sizes.select("latitude_max").subtract(sizes.select("latitude_min")) # Map.addLayer(width, {'min':0, 'max':0.02}, "Cluster width") # Map.addLayer(height, {'min':0, 'max':0.02}, "Cluster height") # - # ## Display Earth Engine data layers Map.addLayerControl() # This line is not needed for ipyleaflet-based Map. Map
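# ## A minimal uncommented SNIC sketch
#
# The Earth Engine script above is kept as commented-out JavaScript-style code from the original example. The cell below is a small, untested Python sketch of its core step only: building a NAIP mosaic, seeding a regular grid, and running SNIC. It assumes the Python client accepts the same keyword arguments (size, compactness, connectivity, neighborhoodSize, seeds) that the JavaScript dictionary uses, and the region geometry is just an illustrative rectangle.

# +
region = ee.Geometry.Rectangle([-121.895, 38.909, -121.693, 38.985])

naip = (ee.ImageCollection('USDA/NAIP/DOQQ')
        .filterDate('2015-01-01', '2017-01-01')
        .filterBounds(region)
        .mosaic()
        .clip(region)
        .divide(255)
        .select(['R', 'G', 'B', 'N']))

seeds = ee.Algorithms.Image.Segmentation.seedGrid(36)

snic = ee.Algorithms.Image.Segmentation.SNIC(
    image=naip, size=32, compactness=5, connectivity=8,
    neighborhoodSize=256, seeds=seeds)

Map.centerObject(region, 12)
Map.addLayer(naip, {'gamma': 0.8}, 'RGBN')
Map.addLayer(snic.select('clusters').randomVisualizer(), {}, 'SNIC clusters')
# -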
Algorithms/Segmentation/segmentation_snic.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # The Efficient Frontier of Optimal Portfolio Transactions # # ### Introduction # # [Almgren and Chriss](https://cims.nyu.edu/~almgren/papers/optliq.pdf) showed that for each value of risk aversion there is a unique optimal execution strategy. The optimal strategy is obtained by minimizing the **Utility Function** $U(x)$: # # \begin{equation} # U(x) = E(x) + \lambda V(x) # \end{equation} # # where $E(x)$ is the **Expected Shortfall**, $V(x)$ is the **Variance of the Shortfall**, and $\lambda$ corresponds to the trader’s risk aversion. The expected shortfall and variance of the optimal trading strategy are given by: # # <img src="./text_images/eq.png" width="700" height="900"> # # In this notebook, we will learn how to visualize and interpret these equations. # # # The Expected Shortfall # # As we saw in the previous notebook, even if we use the same trading list, we are not guaranteed to always get the same implementation shortfall due to the random fluctuations in the stock price. This is why we had to reframe the problem of finding the optimal strategy in terms of the average implementation shortfall and the variance of the implementation shortfall. We call the average implementation shortfall, the expected shortfall $E(x)$, and the variance of the implementation shortfall $V(x)$. So, whenever we talk about the expected shortfall we are really talking about the average implementation shortfall. Therefore, we can think of the expected shortfall as follows. Given a single trading list, the expected shortfall will be the value of the average implementation shortfall if we were to implement this trade list in the stock market many times. # # To see this, in the code below we implement the same trade list on 50,000 trading simulations. We call each trading simulation an episode. Each episode will consist of different random fluctuations in stock price. For each episode we will compute the corresponding implemented shortfall. After all the 50,000 trading simulations have been carried out we calculate the average implementation shortfall and the variance of the implemented shortfalls. We can then compare these values with the values given by the equations for $E(x)$ and $V(x)$ from the Almgren and Chriss model. # + # %matplotlib inline import matplotlib.pyplot as plt import utils # Set the default figure size plt.rcParams['figure.figsize'] = [17.0, 7.0] # Set the liquidation time l_time = 60 # Set the number of trades n_trades = 60 # Set trader's risk aversion t_risk = 1e-6 # Set the number of episodes to run the simulation episodes = 10 utils.get_av_std(lq_time = l_time, nm_trades = n_trades, tr_risk = t_risk, trs = episodes) # Get the AC Optimal strategy for the given parameters ac_strategy = utils.get_optimal_vals(lq_time = l_time, nm_trades = n_trades, tr_risk = t_risk) ac_strategy # - # # Extreme Trading Strategies # # Because some investors may be willing to take more risk than others, when looking for the optimal strategy we have to consider a wide range of risk values, ranging from those traders that want to take zero risk to those who want to take as much risk as possible. Let's take a look at these two extreme cases. 
We will define the **Minimum Variance** strategy as the one followed by a trader who wants to take zero risk and the **Minimum Impact** strategy as the one followed by a trader who wants to take as much risk as possible. Let's take a look at the values of $E(x)$ and $V(x)$ for these extreme trading strategies. The `utils.get_min_param()` function uses the above equations for $E(x)$ and $V(x)$, along with the parameters from the trading environment, to calculate the expected shortfall and standard deviation (the square root of the variance) for these strategies. We'll start by looking at the Minimum Impact strategy. # + import utils # Get the minimum impact and minimum variance strategies minimum_impact, minimum_variance = utils.get_min_param() # - # ### Minimum Impact Strategy # # This trading strategy will be taken by a trader who has no regard for risk. In the Almgren and Chriss model this corresponds to setting the trader's risk aversion to $\lambda = 0$. In this case the trader will sell the shares at a constant rate over a long period of time. By doing so, he will minimize market impact, but will be at risk of losing a lot of money due to the large variance. Hence, this strategy will yield the lowest possible expected shortfall and the highest possible variance, for a given set of parameters. We can see that for the given parameters, this strategy yields an expected shortfall of \$197,000 but has a very big standard deviation of over 3 million dollars. minimum_impact # ### Minimum Variance Strategy # # This trading strategy will be taken by a trader who wants to take zero risk, regardless of transaction costs. In the Almgren and Chriss model this corresponds to having a variance of $V(x) = 0$. In this case, the trader would prefer to sell all of his shares immediately, causing a known price impact, rather than risk trading in small increments at successively adverse prices. This strategy will yield the smallest possible variance, $V(x) = 0$, and the highest possible expected shortfall, for a given set of parameters. We can see that for the given parameters, this strategy yields an expected shortfall of over 2.5 million dollars but has a standard deviation equal to zero. minimum_variance # # The Efficient Frontier # # The goal of Almgren and Chriss was to find the optimal strategies that lie between these two extremes. In their paper, they showed how to compute the trade list that minimizes the expected shortfall for a wide range of risk values. In their model, Almgren and Chriss used the parameter $\lambda$ to measure a trader's risk-aversion. The value of $\lambda$ tells us how much a trader is willing to penalize the variance of the shortfall, $V(X)$, relative to the expected shortfall, $E(X)$. They showed that for each value of $\lambda$ there is a uniquely determined optimal execution strategy. We define the **Efficient Frontier** to be the set of all these optimal trading strategies. That is, the efficient frontier is the set that contains the optimal trading strategy for each value of $\lambda$. # # The efficient frontier is often visualized by plotting $(x,y)$ pairs for a wide range of $\lambda$ values, where the $x$-coordinate is given by the equation of the expected shortfall, $E(X)$, and the $y$-coordinate is given by the equation of the variance of the shortfall, $V(X)$.
Therefore, for a given set of parameters, the curve defined by the efficient frontier represents the set of optimal trading strategies that give the lowest expected shortfall for a defined level of risk. # # In the code below, we plot the efficient frontier for $\lambda$ values in the range $(10^{-7}, 10^{-4})$, using the default parameters in our trading environment. Each point of the frontier represents a distinct strategy for optimally liquidating the same number of stocks. A risk-averse trader, who wishes to sell quickly to reduce exposure to stock price volatility, despite the trading costs incurred in doing so, will likely choose a value of $\lambda = 10^{-4}$. On the other hand, a trader # who likes risk and wishes to postpone selling will likely choose a value of $\lambda = 10^{-7}$. In the code, you can choose a particular value of $\lambda$ to see the expected shortfall and level of variance corresponding to that particular value of trader's risk aversion. # + # %matplotlib inline import matplotlib.pyplot as plt import utils # Set the default figure size plt.rcParams['figure.figsize'] = [17.0, 7.0] # Plot the efficient frontier for the default values. The plot points out the expected shortfall and variance of the # optimal strategy for the given trader's risk aversion. Valid range for the trader's risk aversion: (1e-7, 1e-4). utils.plot_efficient_frontier(tr_risk = 1e-6) # -
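# ### Comparing several risk-aversion values
#
# To get a feel for how the optimal point moves along the frontier, the short loop below simply re-draws the frontier for a few values of the trader's risk aversion spanning the valid range quoted above. It only reuses the `utils.plot_efficient_frontier` helper already imported in this notebook; the particular $\lambda$ values are illustrative.

# +
for risk_aversion in [1e-7, 1e-6, 1e-5, 1e-4]:
    # Each call plots the same frontier but highlights the optimal strategy
    # corresponding to this level of risk aversion.
    utils.plot_efficient_frontier(tr_risk=risk_aversion)
# -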
finance/Efficient Frontier.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Skip-gram word2vec # # In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation. # # ## Readings # # Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material. # # * A really good [conceptual overview](http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model/) of word2vec from <NAME> # * [First word2vec paper](https://arxiv.org/pdf/1301.3781.pdf) from Mikolov et al. # * [NIPS paper](http://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf) with improvements for word2vec also from Mikolov et al. # * An [implementation of word2vec](http://www.thushv.com/natural_language_processing/word2vec-part-1-nlp-with-deep-learning-with-tensorflow-skip-gram/) from Thushan Ganegedara # * TensorFlow [word2vec tutorial](https://www.tensorflow.org/tutorials/word2vec) # # ## Word embeddings # # When you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient, you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This a huge waste of computation. # # ![one-hot encodings](assets/one_hot_encoding.png) # # To solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding the index of the "on" input unit. # # ![lookup](assets/lookup_matrix.png) # # Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example "heart" is encoded as 958, "mind" as 18094. Then to get hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an **embedding lookup** and the number of hidden units is the **embedding dimension**. # # <img src='assets/tokenize_lookup.png' width=500> # # There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well. # # Embeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called **Word2Vec** uses the embedding layer to find vector representations of words that contain semantic meaning. # # # ## Word2Vec # # The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. 
These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram. # # <img src="assets/word2vec_architectures.png" width="500"> # # In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts. # # First up, importing packages. # + import time import numpy as np import tensorflow as tf import utils # - # Load the [text8 dataset](http://mattmahoney.net/dc/textdata.html), a file of cleaned up Wikipedia articles from <NAME>. The next cell will download the data set to the `data` folder. Then you can extract it and delete the archive file to save storage space. # + from urllib.request import urlretrieve from os.path import isfile, isdir from tqdm import tqdm import zipfile dataset_folder_path = 'data' dataset_filename = 'text8.zip' dataset_name = 'Text8 Dataset' class DLProgress(tqdm): last_block = 0 def hook(self, block_num=1, block_size=1, total_size=None): self.total = total_size self.update((block_num - self.last_block) * block_size) self.last_block = block_num if not isfile(dataset_filename): with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar: urlretrieve( 'http://mattmahoney.net/dc/text8.zip', dataset_filename, pbar.hook) if not isdir(dataset_folder_path): with zipfile.ZipFile(dataset_filename) as zip_ref: zip_ref.extractall(dataset_folder_path) with open('data/text8') as f: text = f.read() # - # ## Preprocessing # # Here I'm fixing up the text to make training easier. This comes from the `utils` module I wrote. The `preprocess` function coverts any punctuation into tokens, so a period is changed to ` <PERIOD> `. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it. words = utils.preprocess(text) print(words[:30]) print("Total words: {}".format(len(words))) print("Unique words: {}".format(len(set(words)))) # And here I'm creating dictionaries to covert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list `int_words`. vocab_to_int, int_to_vocab = utils.create_lookup_tables(words) int_words = [vocab_to_int[word] for word in words] # ## Subsampling # # Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by # # $$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$ # # where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset. # # I'm going to leave this up to you as an exercise. 
Check out my solution to see how I did it. # # > **Exercise:** Implement subsampling for the words in `int_words`. That is, go through `int_words` and discard each word given the probablility $P(w_i)$ shown above. Note that $P(w_i)$ is that probability that a word is discarded. Assign the subsampled data to `train_words`. # + from collections import Counter import random threshold = 1e-5 word_counts = Counter(int_words) total_count = len(int_words) freqs = {word: count/total_count for word, count in word_counts.items()} p_drop = {word: 1 - np.sqrt(threshold/freqs[word]) for word in word_counts} print(max(p_drop.values())) print(min(p_drop.values())) train_words = [word for word in int_words if random.random() < (1 - p_drop[word])] # - print(train_words[:5]) print(len(train_words)) # ## Making batches # Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$. # # From [Mikolov et al.](https://arxiv.org/pdf/1301.3781.pdf): # # "Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $< 1; C >$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels." # # > **Exercise:** Implement a function `get_target` that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you chose a random number of words to from the window. def get_target(words, idx, window_size=5): ''' Get a list of words in a window around an index. ''' R = np.random.randint(1, window_size+1) start = idx - R if (idx - R) > 0 else 0 stop = idx + R target_words = set(words[start:idx] + words[idx+1:stop+1]) return list(target_words) # Here's a function that returns batches for our network. The idea is that it grabs `batch_size` words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, helps save memory. def get_batches(words, batch_size, window_size=5): ''' Create a generator of word batches as a tuple (inputs, targets) ''' n_batches = len(words)//batch_size # only full batches words = words[:n_batches*batch_size] for idx in range(0, len(words), batch_size): x, y = [], [] batch = words[idx:idx+batch_size] for ii in range(len(batch)): batch_x = batch[ii] batch_y = get_target(batch, ii, window_size) y.extend(batch_y) x.extend([batch_x]*len(batch_y)) yield x, y # ## Building the graph # # From [Chris McCormick's blog](http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model/), we can see the general structure of our network. # ![embedding_network](./assets/skip_gram_net_arch.png) # # The input words are passed in as one-hot encoded vectors. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal. # # The idea here is to train the hidden layer weight matrix to find efficient representations for our words. 
We can discard the softmax layer becuase we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset. # # I'm going to have you build the graph in stages now. First off, creating the `inputs` and `labels` placeholders like normal. # # > **Exercise:** Assign `inputs` and `labels` using `tf.placeholder`. We're going to be passing in integers, so set the data types to `tf.int32`. The batches we're passing in will have varying sizes, so set the batch sizes to [`None`]. To make things work later, you'll need to set the second dimension of `labels` to `None` or `1`. train_graph = tf.Graph() with train_graph.as_default(): inputs = tf.placeholder(tf.int32, [None], name='inputs') labels = tf.placeholder(tf.int32, [None, None], name='labels') # ## Embedding # # # The embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary. # # # > **Exercise:** Tensorflow provides a convenient function [`tf.nn.embedding_lookup`](https://www.tensorflow.org/api_docs/python/tf/nn/embedding_lookup) that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use `tf.nn.embedding_lookup` to get the embedding tensors. For the embedding matrix, I suggest you initialize it with a uniform random numbers between -1 and 1 using [tf.random_uniform](https://www.tensorflow.org/api_docs/python/tf/random_uniform). n_vocab = len(int_to_vocab) n_embedding = 200 # Number of embedding features with train_graph.as_default(): embedding = tf.Variable(tf.random_uniform((n_vocab, n_embedding), -1, 1)) embed = tf.nn.embedding_lookup(embedding, inputs) # ## Negative sampling # # # For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called ["negative sampling"](http://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf). Tensorflow has a convenient function to do this, [`tf.nn.sampled_softmax_loss`](https://www.tensorflow.org/api_docs/python/tf/nn/sampled_softmax_loss). # # > **Exercise:** Below, create weights and biases for the softmax layer. Then, use [`tf.nn.sampled_softmax_loss`](https://www.tensorflow.org/api_docs/python/tf/nn/sampled_softmax_loss) to calculate the loss. Be sure to read the documentation to figure out how it works. 
# Number of negative labels to sample n_sampled = 100 with train_graph.as_default(): softmax_w = tf.Variable(tf.truncated_normal((n_vocab, n_embedding), stddev=0.1)) softmax_b = tf.Variable(tf.zeros(n_vocab)) # Calculate the loss using negative sampling loss = tf.nn.sampled_softmax_loss(softmax_w, softmax_b, labels, embed, n_sampled, n_vocab) cost = tf.reduce_mean(loss) optimizer = tf.train.AdamOptimizer().minimize(cost) # ## Validation # # This code is from <NAME>'s implementation. Here we're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings. with train_graph.as_default(): ## From <NAME>'s implementation valid_size = 16 # Random set of words to evaluate similarity on. valid_window = 100 # pick 8 samples from (0,100) and (1000,1100) each ranges. lower id implies more frequent valid_examples = np.array(random.sample(range(valid_window), valid_size//2)) valid_examples = np.append(valid_examples, random.sample(range(1000,1000+valid_window), valid_size//2)) valid_dataset = tf.constant(valid_examples, dtype=tf.int32) # We use the cosine distance: norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True)) normalized_embedding = embedding / norm valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset) similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding)) # If the checkpoints directory doesn't exist: # !mkdir checkpoints # + epochs = 10 batch_size = 1000 window_size = 10 with train_graph.as_default(): saver = tf.train.Saver() with tf.Session(graph=train_graph) as sess: iteration = 1 loss = 0 sess.run(tf.global_variables_initializer()) for e in range(1, epochs+1): batches = get_batches(train_words, batch_size, window_size) start = time.time() for x, y in batches: feed = {inputs: x, labels: np.array(y)[:, None]} train_loss, _ = sess.run([cost, optimizer], feed_dict=feed) loss += train_loss if iteration % 100 == 0: end = time.time() print("Epoch {}/{}".format(e, epochs), "Iteration: {}".format(iteration), "Avg. Training loss: {:.4f}".format(loss/100), "{:.4f} sec/batch".format((end-start)/100)) loss = 0 start = time.time() if iteration % 1000 == 0: # note that this is expensive (~20% slowdown if computed every 500 steps) sim = similarity.eval() for i in range(valid_size): valid_word = int_to_vocab[valid_examples[i]] top_k = 8 # number of nearest neighbors nearest = (-sim[i, :]).argsort()[1:top_k+1] log = 'Nearest to %s:' % valid_word for k in range(top_k): close_word = int_to_vocab[nearest[k]] log = '%s %s,' % (log, close_word) print(log) iteration += 1 save_path = saver.save(sess, "checkpoints/text8.ckpt") embed_mat = sess.run(normalized_embedding) # - # Restore the trained network if you need to: # + with train_graph.as_default(): saver = tf.train.Saver() with tf.Session(graph=train_graph) as sess: saver.restore(sess, tf.train.latest_checkpoint('checkpoints')) embed_mat = sess.run(embedding) # - # ## Visualizing the word vectors # # Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local stucture. Check out [this post from <NAME>](http://colah.github.io/posts/2014-10-Visualizing-MNIST/) to learn more about T-SNE and other ways to visualize high-dimensional data. 
# + # %matplotlib inline # %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt from sklearn.manifold import TSNE # - viz_words = 500 tsne = TSNE() embed_tsne = tsne.fit_transform(embed_mat[:viz_words, :]) fig, ax = plt.subplots(figsize=(14, 14)) for idx in range(viz_words): plt.scatter(*embed_tsne[idx, :], color='steelblue') plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7)
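# ## Querying the embedding
#
# A quick way to sanity-check the trained vectors outside of the validation code above is a small cosine-similarity lookup in numpy. The helper below assumes `embed_mat`, `vocab_to_int` and `int_to_vocab` from the cells above; the query word is just an example and may or may not be in your vocabulary.

def nearest_words(word, embed_mat, vocab_to_int, int_to_vocab, top_k=8):
    ''' Return the top_k words whose embeddings are closest (cosine similarity) to `word`. '''
    vec = embed_mat[vocab_to_int[word]]
    # Cosine similarity of the query vector against every row of the embedding matrix
    sims = embed_mat @ vec / (np.linalg.norm(embed_mat, axis=1) * np.linalg.norm(vec) + 1e-10)
    # Skip the first entry of the sorted list, which is the query word itself
    nearest = np.argsort(-sims)[1:top_k + 1]
    return [int_to_vocab[idx] for idx in nearest]

nearest_words('king', embed_mat, vocab_to_int, int_to_vocab)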
embeddings/Skip-Grams-Solution.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Example: Compare CZT to FFT # %load_ext autoreload # %autoreload 2 # + import numpy as np import matplotlib.pyplot as plt # CZT package import czt # https://github.com/garrettj403/SciencePlots plt.style.use(['science', 'notebook']) # - # # Generate Time-Domain Signal for Example # + # Time data t = np.arange(0, 20, 0.1) * 1e-3 dt = t[1] - t[0] Fs = 1 / dt N = len(t) print("Sampling period: {:5.2f} ms".format(dt * 1e3)) print("Sampling frequency: {:5.2f} kHz".format(Fs / 1e3)) print("Nyquist frequency: {:5.2f} kHz".format(Fs / 2 / 1e3)) print("Number of points: {:5d}".format(N)) # + # Signal data def model1(t): """Exponentially decaying sine wave with higher-order distortion.""" output = (1.0 * np.sin(2 * np.pi * 1e3 * t) + 0.3 * np.sin(2 * np.pi * 2.5e3 * t) + 0.1 * np.sin(2 * np.pi * 3.5e3 * t)) * np.exp(-1e3 * t) return output def model2(t): """Exponentially decaying sine wave without higher-order distortion.""" output = (1.0 * np.sin(2 * np.pi * 1e3 * t)) * np.exp(-1e3 * t) return output sig = model1(t) # - # Plot time-domain data plt.figure() t_tmp = np.linspace(0, 6, 601) / 1e3 plt.plot(t_tmp*1e3, model1(t_tmp), 'k', lw=0.5, label='Data') plt.plot(t*1e3, sig, 'ro--', label='Samples') plt.xlabel("Time (ms)") plt.ylabel("Signal") plt.xlim([0, 6]) plt.legend() plt.title("Time-domain signal"); # # Frequency-domain # + sig_fft = np.fft.fftshift(np.fft.fft(sig)) f_fft = np.fft.fftshift(np.fft.fftfreq(N, d=dt)) freq, sig_f = czt.time2freq(t, sig) plt.figure() plt.plot(f_fft / 1e3, np.abs(sig_fft), 'k', label='FFT') plt.plot(freq / 1e3, np.abs(sig_f), 'ro--', label='CZT') plt.xlabel("Frequency (kHz)") plt.ylabel("Signal magnitude") plt.xlim([f_fft.min()/1e3, f_fft.max()/1e3]) plt.legend() plt.title("Frequency-domain") plt.savefig("results/freq-domain.png", dpi=600) plt.figure() plt.plot(f_fft / 1e3, np.angle(sig_fft), 'k', label='FFT') plt.plot(freq / 1e3, np.angle(sig_f), 'ro--', label='CZT') plt.xlabel("Frequency (kHz)") plt.ylabel("Signal phase") plt.xlim([f_fft.min()/1e3, f_fft.max()/1e3]) plt.legend() plt.title("Frequency-domain");
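# -

# ## Quick numerical check
#
# The plots overlay the FFT and CZT spectra; we can also compare them numerically. The check below assumes that `czt.time2freq` returned the same frequency grid that `np.fft.fftfreq` produced (which the overlaid plots suggest); if the grids differed, an interpolation step would be needed first.

# +
print("Same frequency grid:", np.allclose(freq, f_fft))
print("Max |CZT - FFT|: {:.3e}".format(np.max(np.abs(sig_f - sig_fft))))
# -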
examples/compare-czt-fft.ipynb
# # Many-body perturbation theory # # We assume here that we are only interested in the ground state of the system and # expand the exact wave function in term of a series of Slater determinants # $$ # \vert \Psi_0\rangle = \vert \Phi_0\rangle + \sum_{m=1}^{\infty}C_m\vert \Phi_m\rangle, # $$ # where we have assumed that the true ground state is dominated by the # solution of the unperturbed problem, that is # $$ # \hat{H}_0\vert \Phi_0\rangle= W_0\vert \Phi_0\rangle. # $$ # The state $\vert \Psi_0\rangle$ is not normalized, rather we have used an intermediate # normalization $\langle \Phi_0 \vert \Psi_0\rangle=1$ since we have $\langle \Phi_0\vert \Phi_0\rangle=1$. # # # # The Schroedinger equation is # $$ # \hat{H}\vert \Psi_0\rangle = E\vert \Psi_0\rangle, # $$ # and multiplying the latter from the left with $\langle \Phi_0\vert $ gives # $$ # \langle \Phi_0\vert \hat{H}\vert \Psi_0\rangle = E\langle \Phi_0\vert \Psi_0\rangle=E, # $$ # and subtracting from this equation # $$ # \langle \Psi_0\vert \hat{H}_0\vert \Phi_0\rangle= W_0\langle \Psi_0\vert \Phi_0\rangle=W_0, # $$ # and using the fact that the both operators $\hat{H}$ and $\hat{H}_0$ are hermitian # results in # $$ # \Delta E=E-W_0=\langle \Phi_0\vert \hat{H}_I\vert \Psi_0\rangle, # $$ # which is an exact result. We call this quantity the correlation energy. # # # # This equation forms the starting point for all perturbative derivations. However, # as it stands it represents nothing but a mere formal rewriting of Schroedinger's equation and is not of much practical use. The exact wave function $\vert \Psi_0\rangle$ is unknown. In order to obtain a perturbative expansion, we need to expand the exact wave function in terms of the interaction $\hat{H}_I$. # # Here we have assumed that our model space defined by the operator $\hat{P}$ is one-dimensional, meaning that # $$ # \hat{P}= \vert \Phi_0\rangle \langle \Phi_0\vert , # $$ # and # $$ # \hat{Q}=\sum_{m=1}^{\infty}\vert \Phi_m\rangle \langle \Phi_m\vert . # $$ # We can thus rewrite the exact wave function as # $$ # \vert \Psi_0\rangle= (\hat{P}+\hat{Q})\vert \Psi_0\rangle=\vert \Phi_0\rangle+\hat{Q}\vert \Psi_0\rangle. # $$ # Going back to the Schr\"odinger equation, we can rewrite it as, adding and a subtracting a term $\omega \vert \Psi_0\rangle$ as # $$ # \left(\omega-\hat{H}_0\right)\vert \Psi_0\rangle=\left(\omega-E+\hat{H}_I\right)\vert \Psi_0\rangle, # $$ # where $\omega$ is an energy variable to be specified later. # # # We assume also that the resolvent of $\left(\omega-\hat{H}_0\right)$ exits, that is # it has an inverse which defined the unperturbed Green's function as # $$ # \left(\omega-\hat{H}_0\right)^{-1}=\frac{1}{\left(\omega-\hat{H}_0\right)}. # $$ # We can rewrite Schroedinger's equation as # $$ # \vert \Psi_0\rangle=\frac{1}{\omega-\hat{H}_0}\left(\omega-E+\hat{H}_I\right)\vert \Psi_0\rangle, # $$ # and multiplying from the left with $\hat{Q}$ results in # $$ # \hat{Q}\vert \Psi_0\rangle=\frac{\hat{Q}}{\omega-\hat{H}_0}\left(\omega-E+\hat{H}_I\right)\vert \Psi_0\rangle, # $$ # which is possible since we have defined the operator $\hat{Q}$ in terms of the eigenfunctions of $\hat{H}$. # # # # # These operators commute meaning that # $$ # \hat{Q}\frac{1}{\left(\omega-\hat{H}_0\right)}\hat{Q}=\hat{Q}\frac{1}{\left(\omega-\hat{H}_0\right)}=\frac{\hat{Q}}{\left(\omega-\hat{H}_0\right)}. 
# $$ # With these definitions we can in turn define the wave function as # $$ # \vert \Psi_0\rangle=\vert \Phi_0\rangle+\frac{\hat{Q}}{\omega-\hat{H}_0}\left(\omega-E+\hat{H}_I\right)\vert \Psi_0\rangle. # $$ # This equation is again nothing but a formal rewrite of Schroedinger's equation # and does not represent a practical calculational scheme. # It is a non-linear equation in two unknown quantities, the energy $E$ and the exact # wave function $\vert \Psi_0\rangle$. We can however start with a guess for $\vert \Psi_0\rangle$ on the right hand side of the last equation. # # # # The most common choice is to start with the function which is expected to exhibit the largest overlap with the wave function we are searching after, namely $\vert \Phi_0\rangle$. This can again be inserted in the solution for $\vert \Psi_0\rangle$ in an iterative fashion and if we continue along these lines we end up with # $$ # \vert \Psi_0\rangle=\sum_{i=0}^{\infty}\left\{\frac{\hat{Q}}{\omega-\hat{H}_0}\left(\omega-E+\hat{H}_I\right)\right\}^i\vert \Phi_0\rangle, # $$ # for the wave function and # $$ # \Delta E=\sum_{i=0}^{\infty}\langle \Phi_0\vert \hat{H}_I\left\{\frac{\hat{Q}}{\omega-\hat{H}_0}\left(\omega-E+\hat{H}_I\right)\right\}^i\vert \Phi_0\rangle, # $$ # which is now a perturbative expansion of the exact energy in terms of the interaction # $\hat{H}_I$ and the unperturbed wave function $\vert \Psi_0\rangle$. # # # # In our equations for $\vert \Psi_0\rangle$ and $\Delta E$ in terms of the unperturbed # solutions $\vert \Phi_i\rangle$ we still have an undetermined parameter $\omega$ # and a dependency on the exact energy $E$. Not much has thus been gained from a practical computational point of view. # # In Brillouin-Wigner perturbation theory it is customary to set $\omega=E$. This results in the following perturbative expansion for the energy $\Delta E$ # $$ # \Delta E= # \langle \Phi_0\vert \left(\hat{H}_I+\hat{H}_I\frac{\hat{Q}}{E-\hat{H}_0}\hat{H}_I+ # \hat{H}_I\frac{\hat{Q}}{E-\hat{H}_0}\hat{H}_I\frac{\hat{Q}}{E-\hat{H}_0}\hat{H}_I+\dots\right)\vert \Phi_0\rangle. # $$ # This expression depends however on the exact energy $E$ and is again not very convenient from a practical point of view. It can obviously be solved iteratively, by starting with a guess for $E$ and then solving until some kind of self-consistency criterion has been reached. # # Actually, the above expression is nothing but a rewrite again of the full Schroedinger equation. # # Define $\hat{e}=E-\hat{H}_0$ and recall that $\hat{H}_0$ commutes with # $\hat{Q}$ by construction and that $\hat{Q}$ is an idempotent operator, # $\hat{Q}^2=\hat{Q}$. # Using this in the above expansion for $\Delta E$ we can write the denominator as # $$ # \hat{Q}\frac{1}{\hat{e}-\hat{Q}\hat{H}_I\hat{Q}}\hat{Q}= # \hat{Q}\left[\frac{1}{\hat{e}}+\frac{1}{\hat{e}}\hat{Q}\hat{H}_I\hat{Q} # \frac{1}{\hat{e}}+\frac{1}{\hat{e}}\hat{Q}\hat{H}_I\hat{Q} # \frac{1}{\hat{e}}\hat{Q}\hat{H}_I\hat{Q}\frac{1}{\hat{e}}+\dots\right]\hat{Q}.
# $$ # Inserted in the expression for $\Delta E$, this leads to # $$ # \Delta E= # \langle \Phi_0\vert \hat{H}_I+\hat{H}_I\hat{Q}\frac{1}{E-\hat{H}_0-\hat{Q}\hat{H}_I\hat{Q}}\hat{Q}\hat{H}_I\vert \Phi_0\rangle. # $$ # In RS perturbation theory we set $\omega = W_0$ and obtain the following expression for the energy difference # $$ # \Delta E= # \langle \Phi_0\vert \left(\hat{H}_I+\hat{H}_I\frac{\hat{Q}}{W_0-\hat{H}_0}(\hat{H}_I-\Delta E)+ # \hat{H}_I\frac{\hat{Q}}{W_0-\hat{H}_0}(\hat{H}_I-\Delta E)\frac{\hat{Q}}{W_0-\hat{H}_0}(\hat{H}_I-\Delta E)+\dots\right)\vert \Phi_0\rangle. # $$ # Recalling that $\hat{Q}$ commutes with $\hat{H}_0$ and since $\Delta E$ is a constant, we obtain that # $$ # \hat{Q}\Delta E\vert \Phi_0\rangle = \hat{Q}\Delta E\vert \hat{Q}\Phi_0\rangle = 0. # $$ # Inserting this result in the expression for the energy gives # $$ # \Delta E=\langle \Phi_0\vert \left(\hat{H}_I+\hat{H}_I\frac{\hat{Q}}{W_0-\hat{H}_0}\hat{H}_I+ # \hat{H}_I\frac{\hat{Q}}{W_0-\hat{H}_0}(\hat{H}_I-\Delta E)\frac{\hat{Q}}{W_0-\hat{H}_0}\hat{H}_I+\dots\right)\vert \Phi_0\rangle. # $$ # We can now write this as a perturbative expansion in terms # of $\hat{H}_I$, where we iterate the last expression in terms of $\Delta E$, # $$ # \Delta E=\sum_{i=1}^{\infty}\Delta E^{(i)}. # $$ # We get the following expressions for $\Delta E^{(i)}$: # $$ # \Delta E^{(1)}=\langle \Phi_0\vert \hat{H}_I\vert \Phi_0\rangle, # $$ # which is just the contribution to first order in perturbation theory, # $$ # \Delta E^{(2)}=\langle\Phi_0\vert \hat{H}_I\frac{\hat{Q}}{W_0-\hat{H}_0}\hat{H}_I\vert \Phi_0\rangle, # $$ # which is the contribution to second order, and # $$ # \Delta E^{(3)}=\langle \Phi_0\vert \hat{H}_I\frac{\hat{Q}}{W_0-\hat{H}_0}\hat{H}_I\frac{\hat{Q}}{W_0-\hat{H}_0}\hat{H}_I\vert \Phi_0\rangle- # \langle\Phi_0\vert \hat{H}_I\frac{\hat{Q}}{W_0-\hat{H}_0}\langle \Phi_0\vert \hat{H}_I\vert \Phi_0\rangle\frac{\hat{Q}}{W_0-\hat{H}_0}\hat{H}_I\vert \Phi_0\rangle, # $$ # being the third-order contribution. # # # ## Interpreting the correlation energy and the wave operator # # In the shell-model lectures we showed that we could rewrite the exact state function for say the ground state, as a linear expansion in terms of all possible Slater determinants. That is, we # define the ansatz for the ground state as # $$ # |\Phi_0\rangle = \left(\prod_{i\le F}\hat{a}_{i}^{\dagger}\right)|0\rangle, # $$ # where the index $i$ defines different single-particle states up to the Fermi level. We have assumed that we have $N$ fermions. # A given one-particle-one-hole ($1p1h$) state can be written as # $$ # |\Phi_i^a\rangle = \hat{a}_{a}^{\dagger}\hat{a}_i|\Phi_0\rangle, # $$ # while a $2p2h$ state can be written as # $$ # |\Phi_{ij}^{ab}\rangle = \hat{a}_{a}^{\dagger}\hat{a}_{b}^{\dagger}\hat{a}_j\hat{a}_i|\Phi_0\rangle, # $$ # and a general $ApAh$ state as # $$ # |\Phi_{ijk\dots}^{abc\dots}\rangle = \hat{a}_{a}^{\dagger}\hat{a}_{b}^{\dagger}\hat{a}_{c}^{\dagger}\dots\hat{a}_k\hat{a}_j\hat{a}_i|\Phi_0\rangle. # $$ # We use letters $ijkl\dots$ for states below the Fermi level and $abcd\dots$ for states above the Fermi level. A general single-particle state is given by letters $pqrs\dots$.
# # We can then expand our exact state function for the ground state # as # $$ # |\Psi_0\rangle=C_0|\Phi_0\rangle+\sum_{ai}C_i^a|\Phi_i^a\rangle+\sum_{abij}C_{ij}^{ab}|\Phi_{ij}^{ab}\rangle+\dots # =(C_0+\hat{C})|\Phi_0\rangle, # $$ # where we have introduced the so-called correlation operator # $$ # \hat{C}=\sum_{ai}C_i^a\hat{a}_{a}^{\dagger}\hat{a}_i +\sum_{abij}C_{ij}^{ab}\hat{a}_{a}^{\dagger}\hat{a}_{b}^{\dagger}\hat{a}_j\hat{a}_i+\dots # $$ # Since the normalization of $\Psi_0$ is at our disposal and since $C_0$ is by hypothesis non-zero, we may arbitrarily set $C_0=1$ with # corresponding proportional changes in all other coefficients. Using this so-called intermediate normalization we have # $$ # \langle \Psi_0 | \Phi_0 \rangle = \langle \Phi_0 | \Phi_0 \rangle = 1, # $$ # resulting in # $$ # |\Psi_0\rangle=(1+\hat{C})|\Phi_0\rangle. # $$ # In a shell-model calculation, the unknown coefficients in $\hat{C}$ are the # eigenvectors which result from the diagonalization of the Hamiltonian matrix. # # How can we use perturbation theory to determine the same coefficients? Let us study the contributions to second order in the interaction, namely # $$ # \Delta E^{(2)}=\langle\Phi_0\vert \hat{H}_I\frac{\hat{Q}}{W_0-\hat{H}_0}\hat{H}_I\vert \Phi_0\rangle. # $$ # The intermediate states given by $\hat{Q}$ can at most be of a $2p-2h$ nature if we have a two-body Hamiltonian. This means that second order in the perturbation theory can have $1p-1h$ and $2p-2h$ at most as intermediate states. When we diagonalize, these contributions are included to infinite order. This means that higher-orders in perturbation theory bring in more complicated correlations. # # If we limit the attention to a Hartree-Fock basis, then we have that # $\langle\Phi_0\vert \hat{H}_I \vert 2p-2h\rangle$ is the only contribution and the contribution to the energy reduces to # $$ # \Delta E^{(2)}=\frac{1}{4}\sum_{abij}\langle ij\vert \hat{v}\vert ab\rangle \frac{\langle ab\vert \hat{v}\vert ij\rangle}{\epsilon_i+\epsilon_j-\epsilon_a-\epsilon_b}. # $$ # If we compare this to the correlation energy obtained from full configuration interaction theory with a Hartree-Fock basis, we found that # $$ # E-E_0 =\Delta E= # \sum_{abij}\langle ij | \hat{v}| ab \rangle C_{ij}^{ab}, # $$ # where the energy $E_0$ is the reference energy and $\Delta E$ defines the so-called correlation energy. # # We see that if we set # $$ # C_{ij}^{ab} =\frac{1}{4}\frac{\langle ab \vert \hat{v} \vert ij \rangle}{\epsilon_i+\epsilon_j-\epsilon_a-\epsilon_b}, # $$ # we have a perfect agreement between FCI and MBPT. However, FCI includes such $2p-2h$ correlations to infinite order. In order to make a meaningful comparison we would at least need to sum such correlations to infinite order in perturbation theory. # # Summing up, we can see that # * MBPT introduces order-by-order specific correlations and we make comparisons with exact calculations like FCI # # * At every order, we can calculate all contributions since they are well-known and either tabulated or calculated on the fly. # # * MBPT is a non-variational theory and there is no guarantee that higher orders will improve the convergence. # # * However, since FCI calculations are limited by the size of the Hamiltonian matrices to diagonalize (today's most efficient codes can attach dimensionalities of ten billion basis states, MBPT can function as an approximative method which gives a straightforward (but tedious) calculation recipe. 
# # * MBPT has been widely used to compute effective interactions for the nuclear shell-model. # # * But there are better methods which sum to infinite order important correlations. Coupled cluster theory is one of these methods.
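# ## A small numerical sketch of the second-order correction
#
# The second-order expression above is straightforward to evaluate once the antisymmetrized two-body matrix elements $\langle pq\vert\hat{v}\vert rs\rangle$ and the single-particle energies are available. The cell below is only an illustration: it builds random (antisymmetrized) matrix elements and evenly spaced single-particle energies for a toy model space, then sums the second-order contribution directly. Replace the random numbers with real Hartree-Fock quantities to obtain a physical result.

import numpy as np

n_holes, n_parts = 4, 6                     # toy model space: 4 states below and 6 above the Fermi level
n_states = n_holes + n_parts
eps = np.concatenate([np.arange(n_holes, dtype=float),          # hole energies
                      5.0 + np.arange(n_parts, dtype=float)])   # particle energies

# Random two-body matrix elements, projected onto their antisymmetric component
rng = np.random.default_rng(1)
v = rng.normal(scale=0.1, size=(n_states,) * 4)
v = 0.25 * (v - v.transpose(0, 1, 3, 2) - v.transpose(1, 0, 2, 3) + v.transpose(1, 0, 3, 2))

holes = range(n_holes)
parts = range(n_holes, n_states)

delta_e2 = 0.0
for i in holes:
    for j in holes:
        for a in parts:
            for b in parts:
                delta_e2 += 0.25 * v[i, j, a, b] * v[a, b, i, j] / (eps[i] + eps[j] - eps[a] - eps[b])

print("Second-order correction to the energy:", delta_e2)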
doc/LectureNotes/mbpt.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Fictional Army - Filtering and Sorting # ### Introduction: # # This exercise was inspired by this [page](http://chrisalbon.com/python/) # # Special thanks to: https://github.com/chrisalbon for sharing the dataset and materials. # # ### Step 1. Import the necessary libraries import pandas as pd # ### Step 2. This is the data given as a dictionary # Create an example dataframe about a fictional army raw_data = {'regiment': ['Nighthawks', 'Nighthawks', 'Nighthawks', 'Nighthawks', 'Dragoons', 'Dragoons', 'Dragoons', 'Dragoons', 'Scouts', 'Scouts', 'Scouts', 'Scouts'], 'company': ['1st', '1st', '2nd', '2nd', '1st', '1st', '2nd', '2nd','1st', '1st', '2nd', '2nd'], 'deaths': [523, 52, 25, 616, 43, 234, 523, 62, 62, 73, 37, 35], 'battles': [5, 42, 2, 2, 4, 7, 8, 3, 4, 7, 8, 9], 'size': [1045, 957, 1099, 1400, 1592, 1006, 987, 849, 973, 1005, 1099, 1523], 'veterans': [1, 5, 62, 26, 73, 37, 949, 48, 48, 435, 63, 345], 'readiness': [1, 2, 3, 3, 2, 1, 2, 3, 2, 1, 2, 3], 'armored': [1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1], 'deserters': [4, 24, 31, 2, 3, 4, 24, 31, 2, 3, 2, 3], 'origin': ['Arizona', 'California', 'Texas', 'Florida', 'Maine', 'Iowa', 'Alaska', 'Washington', 'Oregon', 'Wyoming', 'Louisana', 'Georgia']} # ### Step 3. Create a dataframe and assign it to a variable called army. # # #### Don't forget to include the columns names in the order presented in the dictionary ('regiment', 'company', 'deaths'...) so that the column index order is consistent with the solutions. If omitted, pandas will order the columns alphabetically. army = pd.DataFrame(raw_data, columns=["regiment", "company", "deaths", "battles", "size", "veterans", "readiness", "armored", "deserters", "origin"]) # ### Step 4. Set the 'origin' colum as the index of the dataframe army.set_index(keys="origin", inplace=True) # ### Step 5. Print only the column veterans army["veterans"] # ### Step 6. Print the columns 'veterans' and 'deaths' army[["veterans", "deaths"]] # ### Step 7. Print the name of all the columns. army.columns # ### Step 8. Select the 'deaths', 'size' and 'deserters' columns from Maine and Alaska army.loc[["Maine", "Alaska"], ["deaths", "size", "deserters"]] # ### Step 9. Select the rows 3 to 7 and the columns 3 to 6 army.iloc[2:7, 2:6] # ### Step 10. Select every row after the fourth row and all columns army.iloc[4:, :] # ### Step 11. Select every row up to the 4th row and all columns army.iloc[:4, :] # ### Step 12. Select the 3rd column up to the 7th column army.iloc[:, 2:7] # ### Step 13. Select rows where df.deaths is greater than 50 army[army["deaths"] > 50] # ### Step 14. Select rows where df.deaths is greater than 500 or less than 50 army[(army["deaths"] > 500) | (army["deaths"] < 50)] # ### Step 15. Select all the regiments not named "Dragoons" army[army["regiment"] != "Dragoons"] # ### Step 16. Select the rows called Texas and Arizona army.loc[["Texas", "Arizona"]] # ### Step 17. Select the third cell in the row named Arizona army.loc[["Arizona"]].iloc[:, 2] # ### Step 18. Select the third cell down in the column named deaths army.loc[:, ["deaths"]].iloc[2]
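# ### Bonus (not part of the original exercise). Filtering combined with sorting
#
# Since the notebook is about filtering *and* sorting, here is a small extra example that sorts the regiments by deaths and then aggregates per regiment. Both calls use only standard pandas methods on the `army` dataframe built above.

# Sort the whole dataframe by number of deaths, largest first
army.sort_values(by="deaths", ascending=False)

# Average deaths and veterans per regiment
army.groupby("regiment")[["deaths", "veterans"]].mean()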
02_Filtering_&_Sorting/Fictional Army/Exercise.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # The 3ML workflow # # Generally, an analysis in 3ML is performed in 3 steps: # # 1. Load the data: one or more datasets are loaded and then listed in a DataList object # 2. Define the model: a model for the data is defined by including one or more PointSource, ExtendedSource or ParticleSource instances # 3. Perform a likelihood or a Bayesian analysis: the data and the model are used together to perform either a Maximum Likelihood analysis, or a Bayesian analysis # ## Loading data # # 3ML is built around the concept of _plugins_. A plugin is used to load a particular type of data, or the data from a particular instrument. There is a plugin of optical data, one for X-ray data, one for Fermi/LAT data and so on. Plugins instances can be added and removed at the loading stage without changing any other stage of the analysis (but of course, you need to rerun all stages to update the results). # # First, let's import 3ML: from threeML import * import matplotlib.pyplot as plt # %matplotlib notebook # + nbsphinx="hidden" plt.style.use('mike') import warnings warnings.filterwarnings('ignore') # - # Let's start by loading one dataset, which in the 3ML workflow means creating an instance of the appropriate plugin: # + # Get some example data from threeML.io.package_data import get_path_of_data_file data_path = get_path_of_data_file("datasets/xy_powerlaw.txt") # Create an instance of the XYLike plugin, which allows to analyze simple x,y points # with error bars xyl = XYLike.from_text_file("xyl", data_path) # Let's plot it just to see what we have loaded fig = xyl.plot(x_scale='log', y_scale='log') # - # Now we need to create a DataList object, which in this case contains only one instance: data = DataList(xyl) # The DataList object can receive one or more plugin instances on initialization. So for example, to use two datasets we can simply do: # + # Create the second instance, this time of a different type pha = get_path_of_data_file("datasets/ogip_powerlaw.pha") bak = get_path_of_data_file("datasets/ogip_powerlaw.bak") rsp = get_path_of_data_file("datasets/ogip_powerlaw.rsp") ogip = OGIPLike("ogip", pha, bak, rsp) # Now use both plugins data = DataList(xyl, ogip) # - # The DataList object can accept any number of plugins in input. # # You can also create a list of plugins, and then create a DataList using the "expansion" feature of the python language ('*'), like this: # + # This is equivalent to write data = DataList(xyl, ogip) my_plugins = [xyl, ogip] data = DataList(*my_plugins) # - # This is useful if you need to create the list of plugins at runtime, for example looping over many files. # ## Define the model # # After you have loaded your data, you need to define a model for them. A model is a collection of one or more sources. A source represents an astrophysical reality, like a star, a galaxy, a molecular cloud... There are 3 kinds of sources: PointSource, ExtendedSource and ParticleSource. The latter is used only in special situations. The models are defined using the package astromodels. Here we will only go through the basics. You can find a lot more information here: [astromodels.readthedocs.org](https://astromodels.readthedocs.org) # # ### Point sources # A point source is characterized by a name, a position, and a spectrum. 
These are some examples: # + # A point source with a power law spectrum source1_sp = Powerlaw() source1 = PointSource("source1", ra=23.5, dec=-22.7, spectral_shape=source1_sp) # Another source with a log-parabolic spectrum plus a power law source2_sp = Log_parabola() + Powerlaw() source2 = PointSource("source2", ra=30.5, dec=-27.1, spectral_shape=source2_sp) # A third source defined in terms of its Galactic latitude and longitude source3_sp = Cutoff_powerlaw() source3 = PointSource("source3", l=216.1, b=-74.56, spectral_shape=source3_sp) # - # ### Extended sources # # An extended source is characterized by its spatial shape and its spectral shape: # + # An extended source with a Gaussian shape centered on R.A., Dec = (30.5, -27.1) # and a sigma of 3.0 degrees ext1_spatial = Gaussian_on_sphere(lon0=30.5, lat0=-27.1, sigma=3.0) ext1_spectral = Powerlaw() ext1 = ExtendedSource("ext1", ext1_spatial, ext1_spectral) # An extended source with a 3D function # (i.e., the function defines both the spatial and the spectral shape) ext2_spatial = Continuous_injection_diffusion() ext2 = ExtendedSource("ext2", ext2_spatial) # - # **NOTE**: not all plugins support extended sources. For example, the XYLike plugin we used above do not, as it is meant for data without spatial resolution. # ### Create the likelihood model # Now that we have defined our sources, we can create a model simply as: # + model = Model(source1, source2, source3, ext1, ext2) # We can see a summary of the model like this: model.display(complete=True) # - # You can easily interact with the model. For example: # + # Fix a parameter model.source1.spectrum.main.Powerlaw.K.fix = True # or model.source1.spectrum.main.Powerlaw.K.free = False # Free it again model.source1.spectrum.main.Powerlaw.K.free = True # or model.source1.spectrum.main.Powerlaw.K.fix = False # Change the value model.source1.spectrum.main.Powerlaw.K = 2.3 # or using physical units (need to be compatible with what shown # in the table above) model.source1.spectrum.main.Powerlaw.K = 2.3 * 1 / (u.cm**2 * u.s * u.TeV) # Change the boundaries for the parameter model.source1.spectrum.main.Powerlaw.K.bounds = (1e-10, 1.0) # you can use units here as well, like: model.source1.spectrum.main.Powerlaw.K.bounds = (1e-5 * 1 / (u.cm**2 * u.s * u.TeV), 10.0 * 1 / (u.cm**2 * u.s * u.TeV)) # Link two parameters so that they are forced to have the same value model.link(model.source2.spectrum.main.composite.K_1, model.source1.spectrum.main.Powerlaw.K) # Link two parameters with a law. The parameters of the law become free # parameters in the fit. In this case we impose a linear relationship # between the index of the log-parabolic spectrum and the index of the # powerlaw in source2: index_2 = a * alpha_1 + b. 
law = Line() model.link(model.source2.spectrum.main.composite.index_2, model.source2.spectrum.main.composite.alpha_1, law) # If you want to force them to be in a specific relationship, # say index_2 = alpha_1 + 1, just fix a and b to the corresponding values, # after the linking, like: # model.source2.spectrum.main.composite.index_2.Line.a = 1.0 # model.source2.spectrum.main.composite.index_2.Line.a.fix = True # model.source2.spectrum.main.composite.index_2.Line.b = 0.0 # model.source2.spectrum.main.composite.index_2.Line.b.fix = True # Now display() will show the links model.display(complete=True) # - # Now, for the following steps, let's keep it simple and let's use a single point source: # + new_model = Model(source1) source1_sp.K.bounds = (0.01, 100) # - # A model can be saved to disk, and reloaded from disk, as: # + new_model.save("new_model.yml", overwrite=True) new_model_reloaded = load_model("new_model.yml") # - # The output is in [YAML format](http://www.yaml.org/start.html), a human-readable text-based format. # ## Perform the analysis # # ### Maximum likelihood analysis # # Now that we have the data and the model, we can perform an analysis very easily: # + data = DataList(ogip) jl = JointLikelihood(new_model, data) best_fit_parameters, likelihood_values = jl.fit() # - # The output of the fit() method of the JointLikelihood object consists of two pandas DataFrame objects, which can be queried, saved to disk, reloaded and so on. Refer to the [pandas manual](http://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe) for details. # # After the fit the JointLikelihood instance will have a .results attribute which contains the results of the fit. jl.results.display() # This object can be saved to disk in a FITS file: jl.results.write_to("my_results.fits", overwrite=True) # The produced FITS file contains the complete definition of the model and of the results, so it can be reloaded in a separate session as: # + results_reloaded = load_analysis_results("my_results.fits") results_reloaded.display() # - # The flux of the source can be computed from the 'results' object (even in another session by reloading the FITS file), as: # + fluxes = jl.results.get_flux(100 * u.keV, 1 * u.MeV) # Same results would be obtained with # fluxes = results_reloaded.get_point_source_flux(100 * u.keV, 1 * u.MeV) # - # We can also plot the spectrum with its error region, as: fig = plot_spectra(jl.results, ene_min=0.1, ene_max=1e6, num_ene=500, flux_unit='erg / (cm2 s)') # ### Bayesian analysis # In a very similar way, we can also perform a Bayesian analysis. 
# As a first step, we need to define the priors for all parameters:

# +
# A prior can be set automatically from the currently defined boundaries
new_model.source1.spectrum.main.Powerlaw.index.set_uninformative_prior(Uniform_prior)

# or a uniform prior can be defined explicitly, like:
new_model.source1.spectrum.main.Powerlaw.index.prior = Uniform_prior(lower_bound=-3, upper_bound=0)

# The same for the Log_uniform prior
new_model.source1.spectrum.main.Powerlaw.K.prior = Log_uniform_prior(lower_bound=1e-3, upper_bound=100)
# or
new_model.source1.spectrum.main.Powerlaw.K.set_uninformative_prior(Log_uniform_prior)

new_model.display(complete=True)
# -

# Then, we can perform our Bayesian analysis like:

bs = BayesianAnalysis(new_model, data)
bs.set_sampler('ultranest')
bs.sampler.setup()

# This runs the UltraNest sampler we just set up
samples = bs.sample(quiet=True)

# The BayesianAnalysis object will now have a "results" member which will work exactly the same way as explained for the Maximum Likelihood analysis (see above):

bs.results.display()

fluxes_bs = bs.results.get_flux(100 * u.keV, 1 * u.MeV)

fig = plot_spectra(bs.results, ene_min=0.1, ene_max=1e6, num_ene=500, flux_unit='erg / (cm2 s)')

# We can also easily produce a "corner plot", like:

bs.results.corner_plot();
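# Since the Bayesian results object behaves like the likelihood one, it should also be possible to store it on disk in the same way. This is a small sketch assuming the same `write_to` / `load_analysis_results` interface demonstrated above for the Maximum Likelihood results:

# +
# Sketch: persist the Bayesian results like the likelihood results above
bs.results.write_to("my_bayesian_results.fits", overwrite=True)

bayesian_results_reloaded = load_analysis_results("my_bayesian_results.fits")
bayesian_results_reloaded.display()
# -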
docs/notebooks/The_3ML_workflow.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + [markdown] tags=[] # # MiRAC-A # The following example demonstrates the use of MiRAC-A data collected during ACLOUD, AFLUX and MOSAiC-ACA. The Microwave Radar/Radiometer for Arctic Clouds (MiRAC) consists of an active component, a 94 GHz Frequency Modulated Continuous Wave (FMCW) cloud radar, and a passive 89 GHz microwave radiometer. MiRAC-A is mounted on Polar 5 with a fixed viewing angle of 25° against flight direction. # # More information on the instrument can be found in [Mech et al. (2019)](https://amt.copernicus.org/articles/12/5019/2019/). If you have questions or if you would like to use the data for a publication, please don't hesitate to get in contact with the dataset authors as stated in the dataset attributes `contact` or `author`. # # ## Data access # * To analyse the data they first have to be loaded by importing the (AC)³airborne meta data catalogue. To do so the ac3airborne package has to be installed. More information on how to do that and about the catalog can be found [here](https://github.com/igmk/ac3airborne-intake#ac3airborne-intake-catalogue). # - # ## Get data import ac3airborne cat = ac3airborne.get_intake_catalog() list(cat.P5.MIRAC_A) # ```{note} # Have a look at the attributes of the xarray dataset `ds_mirac_a` for all relevant information on the dataset, such as author, contact, or citation information. # ``` ds_mirac_a = cat['P5']['MIRAC_A']['ACLOUD_P5_RF05'].to_dask() ds_mirac_a # The dataset includes the radar reflectivity (`Ze`, `Ze_unfiltered`), the radar reflectivity filter mask (`Ze_flag`), the 89 GHz brightness temperature (`TB_89`) as well as information on the aircraft's flight altitude (`altitude`). The radar reflectivity is defined on a regular `time`-`height` grid with corresponding target positions (`lat`, `lon`). The full dataset is available on PANGAEA. # + [markdown] tags=[] # ## Load Polar 5 flight phase information # Polar 5 flights are divided into segments to easily access start and end times of flight patterns. For more information have a look at the respective [github](https://github.com/igmk/flight-phase-separation) repository. # # At first we want to load the flight segments of (AC)³airborne # - meta = ac3airborne.get_flight_segments() # The following command lists all flight segments into the dictionary `segments` segments = {s.get("segment_id"): {**s, "flight_id": flight["flight_id"]} for platform in meta.values() for flight in platform.values() for s in flight["segments"] } # In this example we want to look at a high-level segment during ACLOUD RF05 seg = segments["ACLOUD_P5_RF05_hl09"] # Using the start and end times of the segment `ACLOUD_P5_RF05_hl09` stored in `seg`, we slice the MiRAC data to this flight section. ds_mirac_a_sel = ds_mirac_a.sel(time=slice(seg["start"], seg["end"])) # ## Plots # The flight section during ACLOUD RF05 is flown at about 3 km altitude in west-east direction during a cold-air outbreak event perpendicular to the wind field. Clearly one can identify the roll-cloud structure in the radar reflectivity and the 89 GHz brightness temperature. 
# %matplotlib inline import matplotlib.pyplot as plt import matplotlib.dates as mdates from numpy import log10 plt.style.use("../mplstyle/book") # + fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True) # 1st: plot flight altitude and radar reflectivity ax1.plot(ds_mirac_a_sel.time, ds_mirac_a_sel.altitude*1e-3, label='Flight altitude', color='k') im = ax1.pcolormesh(ds_mirac_a_sel.time, ds_mirac_a_sel.height*1e-3, 10*log10(ds_mirac_a_sel.Ze).T, vmin=-40, vmax=30, cmap='jet', shading='nearest') fig.colorbar(im, ax=ax1, label='Radar reflectivity [dBz]') ax1.set_ylim(-0.25, 3.5) ax1.set_ylabel('Height [km]') ax1.legend(frameon=False, loc='upper left') # 2nd: plot 89 GHz TB ax2.plot(ds_mirac_a_sel.time, ds_mirac_a_sel.TB_89, label='Tb(89 GHz)', color='k') ax2.set_ylim(177, 195) ax2.set_ylabel('$T_b$ [K]') ax2.set_xlabel('Time (hh:mm) [UTC]') ax2.legend(frameon=False, loc='upper left') ax2.xaxis.set_major_formatter(mdates.DateFormatter('%H:%M')) plt.show() # -
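# The same slicing works for any other segment. As a small add-on (a sketch based only on the dictionary keys used above), we can list all high-level legs of this flight and their start and end times:

# +
# Sketch: list all high-level segments of ACLOUD_P5_RF05.
# Assumes high-level segment ids contain '_hl', as in the example used above.
hl_segments = {sid: s for sid, s in segments.items()
               if s["flight_id"] == "ACLOUD_P5_RF05" and "_hl" in sid}

for sid, s in hl_segments.items():
    print(sid, s["start"], "->", s["end"])
# -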
how_to_ac3airborne/datasets/mirac_a.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # <div style = "font-family:Georgia; # font-size:2.5vw; # color:lightblue; # font-style:bold; # text-align:center; # background:url('./iti/Title Background.gif') no-repeat center; # background-size:cover)"> # # <br></br><br></br><br></br> # You Only Look Once (YOLO) # <br></br><br></br> # # </div> # # <h1 style = "text-align:left">Introduction</h1> # # As you learned in the previous lessons, YOLO is a state-of-the-art, real-time object detection algorithm. In this notebook, we will apply the YOLO algorithm to detect objects in images. We have provided a series of images that you can test the YOLO algorithm on. Below is a list of the available images that you can load: # # * cat.jpg # * city_scene.jpg # * dog.jpg # * dog2.jpg # * eagle.jpg # * food.jpg # * giraffe.jpg # * horses.jpg # * motorbike.jpg # * person.jpg # * surf.jpg # * wine.jpg # # These images are located in the`./images/`folder. We encourage you to test the YOLO algorithm on your own images as well. Have fun! # # Importing Resources # # We will start by loading the required packages into Python. We will be using *OpenCV* to load our images, *matplotlib* to plot them, a`utils` module that contains some helper functions, and a modified version of *Darknet*. YOLO uses *Darknet*, an open source, deep neural network framework written by the creators of YOLO. The version of *Darknet* used in this notebook has been modified to work in PyTorch 0.4 and has been simplified because we won't be doing any training. Instead, we will be using a set of pre-trained weights that were trained on the Common Objects in Context (COCO) database. For more information on *Darknet*, please visit <a href="https://pjreddie.com/darknet/">Darknet</a>. # + import cv2 import matplotlib.pyplot as plt from utils import * from darknet import Darknet # - # # Setting Up The Neural Network # # We will be using the latest version of YOLO, known as YOLOv3. We have already downloaded the `yolov3.cfg` file that contains the network architecture used by YOLOv3 and placed it in the `/cfg/` folder. Similarly, we have placed the `yolov3.weights` file that contains the pre-trained weights in the `/weights/` directory. Finally, the `/data/` directory, contains the `coco.names` file that has the list of the 80 object classes that the weights were trained to detect. # # In the code below, we start by specifying the location of the files that contain the neural network architecture, the pre-trained weights, and the object classes. We then use *Darknet* to setup the neural network using the network architecture specified in the `cfg_file`. We then use the`.load_weights()` method to load our set of pre-trained weights into the model. Finally, we use the `load_class_names()` function, from the `utils` module, to load the 80 object classes. 
# + # Set the location and name of the cfg file cfg_file = './cfg/yolov3.cfg' # Set the location and name of the pre-trained weights file weight_file = './weights/yolov3.weights' # Set the location and name of the COCO object classes file namesfile = 'data/coco.names' # Load the network architecture m = Darknet(cfg_file) # Load the pre-trained weights m.load_weights(weight_file) # Load the COCO object classes class_names = load_class_names(namesfile) # - # # Taking a Look at The Neural Network # # Now that the neural network has been setup, we can see what it looks like. We can print the network using the `.print_network()` function. # Print the neural network used in YOLOv3 m.print_network() # As we can see, the neural network used by YOLOv3 consists mainly of convolutional layers, with some shortcut connections and upsample layers. For a full description of this network please refer to the <a href="https://pjreddie.com/media/files/papers/YOLOv3.pdf">YOLOv3 Paper</a>. # # # Loading and Resizing Our Images # # In the code below, we load our images using OpenCV's `cv2.imread()` function. Since, this function loads images as BGR we will convert our images to RGB so we can display them with the correct colors. # # As we can see in the previous cell, the input size of the first layer of the network is 416 x 416 x 3. Since images have different sizes, we have to resize our images to be compatible with the input size of the first layer in the network. In the code below, we resize our images using OpenCV's `cv2.resize()` function. We then plot the original and resized images. # + # Set the default figure size plt.rcParams['figure.figsize'] = [24.0, 14.0] # Load the image img = cv2.imread('./images/surf.jpg') # Convert the image to RGB original_image = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # We resize the image to the input width and height of the first layer of the network. resized_image = cv2.resize(original_image, (m.width, m.height)) # Display the images plt.subplot(121) plt.title('Original Image') plt.imshow(original_image) plt.subplot(122) plt.title('Resized Image') plt.imshow(resized_image) plt.show() # - # # Setting the Non-Maximal Suppression Threshold # # As you learned in the previous lessons, YOLO uses **Non-Maximal Suppression (NMS)** to only keep the best bounding box. The first step in NMS is to remove all the predicted bounding boxes that have a detection probability that is less than a given NMS threshold. In the code below, we set this NMS threshold to `0.6`. This means that all predicted bounding boxes that have a detection probability less than 0.6 will be removed. # Set the NMS threshold nms_thresh = 0.6 # # Setting the Intersection Over Union Threshold # # After removing all the predicted bounding boxes that have a low detection probability, the second step in NMS, is to select the bounding boxes with the highest detection probability and eliminate all the bounding boxes whose **Intersection Over Union (IOU)** value is higher than a given IOU threshold. In the code below, we set this IOU threshold to `0.4`. This means that all predicted bounding boxes that have an IOU value greater than 0.4 with respect to the best bounding boxes will be removed. # # In the `utils` module you will find the `nms` function, that performs the second step of Non-Maximal Suppression, and the `boxes_iou` function that calculates the Intersection over Union of two given bounding boxes. You are encouraged to look at these functions to see how they work. 
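# To build some intuition for what `boxes_iou` computes, here is a small self-contained sketch of an IOU calculation for two boxes given as *(x, y, w, h)*, with *(x, y)* the box center. This is an illustrative example only, not the implementation in the `utils` module.

# +
def iou_example(box_a, box_b):
    # Illustrative IOU for boxes in (x_center, y_center, width, height) format
    ax1, ay1 = box_a[0] - box_a[2] / 2, box_a[1] - box_a[3] / 2
    ax2, ay2 = box_a[0] + box_a[2] / 2, box_a[1] + box_a[3] / 2
    bx1, by1 = box_b[0] - box_b[2] / 2, box_b[1] - box_b[3] / 2
    bx2, by2 = box_b[0] + box_b[2] / 2, box_b[1] + box_b[3] / 2

    # Intersection rectangle (zero if the boxes do not overlap)
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h

    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union

# Two heavily overlapping boxes give a high IOU
print(iou_example((0.5, 0.5, 0.4, 0.4), (0.55, 0.5, 0.4, 0.4)))
# -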
# Set the IOU threshold iou_thresh = 0.4 # # Object Detection # # Once the image has been loaded and resized, and you have chosen your parameters for `nms_thresh` and `iou_thresh`, we can use the YOLO algorithm to detect objects in the image. We detect the objects using the `detect_objects(m, resized_image, iou_thresh, nms_thresh)`function from the `utils` module. This function takes in the model `m` returned by *Darknet*, the resized image, and the NMS and IOU thresholds, and returns the bounding boxes of the objects found. # # Each bounding box contains 7 parameters: the coordinates *(x, y)* of the center of the bounding box, the width *w* and height *h* of the bounding box, the confidence detection level, the object class probability, and the object class id. The `detect_objects()` function also prints out the time it took for the YOLO algorithm to detect the objects in the image and the number of objects detected. Since we are running the algorithm on a CPU it takes about 2 seconds to detect the objects in an image, however, if we were to use a GPU it would run much faster. # # Once we have the bounding boxes of the objects found by YOLO, we can print the class of the objects found and their corresponding object class probability. To do this we use the `print_objects()` function in the `utils` module. # # Finally, we use the `plot_boxes()` function to plot the bounding boxes and corresponding object class labels found by YOLO in our image. If you set the `plot_labels` flag to `False` you will display the bounding boxes with no labels. This makes it easier to view the bounding boxes if your `nms_thresh` is too low. The `plot_boxes()`function uses the same color to plot the bounding boxes of the same object class. However, if you want all bounding boxes to be the same color, you can use the `color` keyword to set the desired color. For example, if you want all the bounding boxes to be red you can use: # # `plot_boxes(original_image, boxes, class_names, plot_labels = True, color = (1,0,0))` # # You are encouraged to change the `iou_thresh` and `nms_thresh` parameters to see how they affect the YOLO detection algorithm. The default values of `iou_thresh = 0.4` and `nms_thresh = 0.6` work well to detect objects in different kinds of images. In the cell below, we have repeated some of the code used before in order to prevent you from scrolling up down when you want to change the `iou_thresh` and `nms_thresh`parameters or the image. Have Fun! # + # Set the default figure size plt.rcParams['figure.figsize'] = [24.0, 14.0] # Load the image img = cv2.imread('./images/surf.jpg') # Convert the image to RGB original_image = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # We resize the image to the input width and height of the first layer of the network. resized_image = cv2.resize(original_image, (m.width, m.height)) # Set the IOU threshold. Default value is 0.4 iou_thresh = 0.4 # Set the NMS threshold. Default value is 0.6 nms_thresh = 0.6 # Detect objects in the image boxes = detect_objects(m, resized_image, iou_thresh, nms_thresh) # Print the objects found and the confidence level print_objects(boxes, class_names) #Plot the image with bounding boxes and corresponding object class labels plot_boxes(original_image, boxes, class_names, plot_labels = True) # -
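# If you want to compare several of the images listed at the top of the notebook in one go, a simple loop like the sketch below reuses the same pipeline. The file names come from the list above; everything else uses the functions already imported.

# +
# Sketch: run the same detection pipeline on a few of the provided images
for name in ['dog.jpg', 'city_scene.jpg', 'giraffe.jpg']:
    img = cv2.imread('./images/' + name)
    original_image = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    resized_image = cv2.resize(original_image, (m.width, m.height))

    boxes = detect_objects(m, resized_image, iou_thresh, nms_thresh)

    print('Results for ' + name + ':')
    print_objects(boxes, class_names)
    plot_boxes(original_image, boxes, class_names, plot_labels=True)
# -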
2_2_YOLO/.ipynb_checkpoints/YOLO-checkpoint.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Machine Learning Exercise 3 - Multi-class Classification

# This code provides a Python-based solution for the third programming exercise of the Coursera machine learning course. For detailed explanations and equations, please refer to the [exercise text](ex3.pdf).
#
#
# Code modified and annotated by 黄海广, <EMAIL>

# For this exercise we will use logistic regression to recognize handwritten digits (0 to 9). We will extend the logistic regression implementation we wrote in Exercise 2 and apply it to one-vs-all classification. Let's start by loading the dataset. It is in MATLAB's native format, so to load it in Python we need a SciPy utility.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.io import loadmat

data = loadmat('ex3data1.mat')
data

data['X'].shape, data['y'].shape

# Good, we have loaded our data. The images are represented in the matrix X as 400-dimensional vectors (there are 5,000 of them). The 400-dimensional "features" are the grayscale intensities of each pixel in the original 20 x 20 image. The class labels are stored in the vector y as numeric classes representing the digit in each image.
#
#
# The first task is to modify our logistic regression implementation to be fully vectorized (i.e. no "for" loops). This is because vectorized code, besides being concise, can take advantage of linear algebra optimizations and is usually much faster than iterative code. However, our cost function from Exercise 2 was already fully vectorized, so we can simply reuse the same implementation here.

# # The sigmoid function
# g denotes the commonly used logistic function, an S-shaped (sigmoid) function: \\[g\left( z \right)=\frac{1}{1+{{e}^{-z}}}\\]
# Putting this together, we obtain the hypothesis of the logistic regression model:
# \\[{{h}_{\theta }}\left( x \right)=\frac{1}{1+{{e}^{-{{\theta }^{T}}X}}}\\]

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# The regularized cost function:
# $J\left( \theta \right)=\frac{1}{m}\sum\limits_{i=1}^{m}{[-{{y}^{(i)}}\log \left( {{h}_{\theta }}\left( {{x}^{(i)}} \right) \right)-\left( 1-{{y}^{(i)}} \right)\log \left( 1-{{h}_{\theta }}\left( {{x}^{(i)}} \right) \right)]}+\frac{\lambda }{2m}\sum\limits_{j=1}^{n}{\theta _{j}^{2}}$

def cost(theta, X, y, learningRate):
    theta = np.matrix(theta)
    X = np.matrix(X)
    y = np.matrix(y)
    first = np.multiply(-y, np.log(sigmoid(X * theta.T)))
    second = np.multiply((1 - y), np.log(1 - sigmoid(X * theta.T)))
    reg = (learningRate / (2 * len(X))) * np.sum(np.power(theta[:,1:theta.shape[1]], 2))
    return np.sum(first - second) / len(X) + reg

# If we use gradient descent to minimize this cost function, then because we do not regularize ${{\theta }_{0}}$, the gradient descent update splits into two cases:
# \begin{align}
# & Repeat\text{ }until\text{ }convergence\text{ }\!\!\{\!\!\text{ } \\
# & \text{ }{{\theta }_{0}}:={{\theta }_{0}}-a\frac{1}{m}\sum\limits_{i=1}^{m}{[{{h}_{\theta }}\left( {{x}^{(i)}} \right)-{{y}^{(i)}}]x_{_{0}}^{(i)}} \\
# & \text{ }{{\theta }_{j}}:={{\theta }_{j}}-a\frac{1}{m}\sum\limits_{i=1}^{m}{[{{h}_{\theta }}\left( {{x}^{(i)}} \right)-{{y}^{(i)}}]x_{j}^{(i)}}+\frac{\lambda }{m}{{\theta }_{j}} \\
# & \text{ }\!\!\}\!\!\text{ } \\
# & Repeat \\
# \end{align}
#
# Below is the original gradient function, implemented with a for loop:

def gradient_with_loop(theta, X, y, learningRate):
    theta = np.matrix(theta)
    X = np.matrix(X)
    y = np.matrix(y)

    parameters = int(theta.ravel().shape[1])
    grad = np.zeros(parameters)

    error = sigmoid(X * theta.T) - y

    for i in range(parameters):
        term = np.multiply(error, X[:,i])

        if (i == 0):
            grad[i] = np.sum(term) / len(X)
        else:
            grad[i] = (np.sum(term) / len(X)) + ((learningRate / len(X)) * theta[:,i])

    return grad

# The vectorized gradient function:

def gradient(theta, X, y, learningRate):
    theta = np.matrix(theta)
    X = np.matrix(X)
    y = np.matrix(y)

    parameters = int(theta.ravel().shape[1])
    error = sigmoid(X * theta.T) - y

    grad = ((X.T * error) / len(X)).T + ((learningRate / len(X)) * theta)

    # intercept gradient is not regularized
    grad[0, 0] = np.sum(np.multiply(error, X[:,0])) / len(X)

    return np.array(grad).ravel()

# Now that we have defined the cost function and the gradient function, it is time to build the classifiers. For this task we have 10 possible classes, and since logistic regression can only discriminate between 2 classes at a time, we need a strategy for multi-class classification. In this exercise our task is to implement the one-vs-all method, in which a label with k different classes results in k classifiers, each deciding between "class i" and "not class i". We will wrap the classifier training in a single function that computes the final weights for each of the 10 classifiers and returns the weights as a k X (n + 1) array, where n is the number of parameters.

# +
from scipy.optimize import minimize

def one_vs_all(X, y, num_labels, learning_rate):
    rows = X.shape[0]
    params = X.shape[1]

    # k X (n + 1) array for the parameters of each of the k classifiers
    all_theta = np.zeros((num_labels, params + 1))

    # insert a column of ones at the beginning for the intercept term
    X = np.insert(X, 0, values=np.ones(rows), axis=1)

    # labels are 1-indexed instead of 0-indexed
    for i in range(1, num_labels + 1):
        theta = np.zeros(params + 1)
        y_i = np.array([1 if label == i else 0 for label in y])
        y_i = np.reshape(y_i, (rows, 1))

        # minimize the objective function
        fmin = minimize(fun=cost, x0=theta, args=(X, y_i, learning_rate), method='TNC', jac=gradient)
        all_theta[i-1,:] = fmin.x

    return all_theta
# -

# A few things to note here: first, we add an extra parameter to theta (along with a column of ones in the training data) to account for the intercept term. Second, we convert y from class labels to binary values for each classifier (either class i or not class i). Finally, we use SciPy's newer optimization API to minimize the cost function for each classifier. The API takes an objective function, an initial set of parameters, an optimization method and a jacobian (gradient) function, if specified. The parameters found by the optimizer are then assigned to the parameter array.
#
# One of the more challenging parts of implementing vectorized code is writing all the matrices correctly and keeping their dimensions consistent.

# +
rows = data['X'].shape[0]
params = data['X'].shape[1]

all_theta = np.zeros((10, params + 1))

X = np.insert(data['X'], 0, values=np.ones(rows), axis=1)

theta = np.zeros(params + 1)

y_0 = np.array([1 if label == 0 else 0 for label in data['y']])
y_0 = np.reshape(y_0, (rows, 1))

X.shape, y_0.shape, theta.shape, all_theta.shape
# -

# Note that theta is a one-dimensional array, so when it is converted to a matrix in the code that computes the gradient, it becomes a (1 x 401) matrix. We also check the class labels in y to make sure they look the way we expect.

np.unique(data['y'])  # check which class labels are present

# Let's make sure our training function runs correctly and produces reasonable output.

all_theta = one_vs_all(data['X'], data['y'], 10, 1)
all_theta

# We are now ready for the final step: using the trained classifiers to predict a label for each image. For this step we compute, for every training instance, the class probability of each class (with vectorized code, of course) and assign the output class label to the class with the highest probability.

def predict_all(X, all_theta):
    rows = X.shape[0]
    params = X.shape[1]
    num_labels = all_theta.shape[0]

    # same as before, insert ones to match the shape
    X = np.insert(X, 0, values=np.ones(rows), axis=1)

    # convert to matrices
    X = np.matrix(X)
    all_theta = np.matrix(all_theta)

    # compute the class probability for each class on each training instance
    h = sigmoid(X * all_theta.T)

    # create array of the index with the maximum probability
    h_argmax = np.argmax(h, axis=1)

    # because our array was zero-indexed we need to add one for the true label prediction
    h_argmax = h_argmax + 1

    return h_argmax

# Now we can use the predict_all function to generate class predictions for each instance and see how well our classifier works.

y_pred = predict_all(data['X'], all_theta)
correct = [1 if a == b else 0 for (a, b) in zip(y_pred, data['y'])]
accuracy = (sum(map(int, correct)) / float(len(correct)))
print ('accuracy = {0}%'.format(accuracy * 100))

# In the next exercise we will look at how to implement a feed-forward neural network from scratch.

# # Neural network model diagram
# <img style="float: left;" src="../img/nn_model.png">
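# Before moving on, one optional extra check (not part of the original exercise): the overall accuracy can be broken down per digit class using the predictions computed above, which shows which digits the one-vs-all classifier finds hardest.

# +
# Sketch: per-class accuracy, using the predictions computed above
y_true = data['y'].ravel()
y_hat = np.array(y_pred).ravel()

for label in np.unique(y_true):
    mask = (y_true == label)
    class_acc = np.mean(y_hat[mask] == y_true[mask])
    print('class {0}: accuracy = {1:.2f}%'.format(label, class_acc * 100))
# -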
Coursera-ML-AndrewNg-Notes-master/code/ex3-neural network/ML-Exercise3.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # --- # <div style="background-color: #007bff; border-radius: 5px; width: 100%; padding: 10px; color: white"> # <b>Note:</b> This guide is written for an interactive environment such as Jupyter notebooks. The interactive widgets will not work in a static version of this documentation. Instructions for installing Panel and the example notebooks can be found in the <a href="https://panel.holoviz.org/#installation" target="_blank" style="color:white">Installation Guide</a> # </div> # # Panel lets you add interactive controls for just about anything you can display in Python. Panel can help you build simple interactive apps, complex multi-page dashboards, or anything in between. As a simple example, let's say we have loaded the [UCI ML dataset measuring the environment in a meeting room](http://archive.ics.uci.edu/ml/datasets/Occupancy+Detection+): # + import pandas as pd; import numpy as np; import matplotlib.pyplot as plt data = pd.read_csv('../assets/occupancy.csv') data['date'] = data.date.astype('datetime64[ns]') data = data.set_index('date') data.tail() # - # And we've written some code that smooths a time series and plots it using Matplotlib with outliers highlighted: # + from matplotlib.figure import Figure from matplotlib.backends.backend_agg import FigureCanvas # %matplotlib inline def mpl_plot(avg, highlight): fig = Figure() FigureCanvas(fig) # not needed in mpl >= 3.1 ax = fig.add_subplot() avg.plot(ax=ax) if len(highlight): highlight.plot(style='o', ax=ax) return fig def find_outliers(variable='Temperature', window=30, sigma=10, view_fn=mpl_plot): avg = data[variable].rolling(window=window).mean() residual = data[variable] - avg std = residual.rolling(window=window).std() outliers = (np.abs(residual) > std * sigma) return view_fn(avg, avg[outliers]) # - # We can call the function with parameters and get a plot: find_outliers(variable='Temperature', window=20, sigma=10) # It works! But exploring all these parameters by typing Python is slow and tedious. Plus we want our boss, or the boss's boss, to be able to try it out. # If we wanted to try out lots of combinations of these values to understand how the window and sigma affect the plot, we could reevaluate the above cell lots of times, but that would be a slow and painful process, and is only really appropriate for users who are comfortable with editing Python code. In the next few examples we will demonstrate how to use Panel to quickly add some interactive controls to some object and make a simple app. # # To see an overview of the different APIs Panel offers see the [API user guide](../user_guide/APIs.ipynb) and for a quick reference for various Panel functionality see the [overview](../user_guide/Overview.ipynb). # # ## Interactive Panels # # Instead of editing code, it's much quicker and more straightforward to use sliders to adjust the values interactively. You can easily make a Panel app to explore a function's parameters using `pn.interact`, which is similar to the [ipywidgets interact function](https://ipywidgets.readthedocs.io/en/stable/examples/Using%20Interact.html): # + import panel as pn pn.extension() pn.interact(find_outliers) # - # As long as you have a live Python process running, dragging these widgets will trigger a call to the `find_outliers` callback function, evaluating it for whatever combination of parameter values you select and displaying the results. 
A Panel like this makes it very easy to explore any function that produces a visual result of a [supported type](https://github.com/pyviz/panel/issues/2), such as Matplotlib (as above), Bokeh, Plotly, Altair, or various text and image types. # # ## Components of Panels # # `interact` is convenient, but what if you want more control over how it looks or works? First, let's see what `interact` actually creates, by grabbing that object and displaying its representation: kw = dict(window=(1, 60), variable=sorted(list(data.columns)), sigma=(1, 20)) i = pn.interact(find_outliers, **kw) i.pprint() # As you can see, the `interact` call created a `pn.Column` object consisting of a WidgetBox (with 3 widgets) and a `pn.Row` with one Matplotlib figure object. Panel is compositional, so you can mix and match these components any way you like, adding other objects as needed: # + text = "<br>\n# Room Occupancy\nSelect the variable, and the time window for smoothing" p = pn.Row(i[1][0], pn.Column(text, i[0][0], i[0][1])) p # - # Note that the widgets stay linked to their plot even if they are in a different notebook cell: i[0][2] # Also note that Panel widgets are reactive, so they will update even if you set the values by hand: i[0][2].value = 5 # ## Composing new Panels # # You can use this compositional approach to combine different components such as widgets, plots, text, and other elements needed for an app or dashboard in arbitrary ways. The ``interact`` example builds on a reactive programming model, where an input to the function changes and Panel reactively updates the output of the function. ``interact`` is a convenient way to create widgets from the arguments to your function automatically, but Panel also provides a more explicit reactive API letting you specifically define connections between widgets and function arguments, and then lets you compose the resulting dashboard manually from scratch. # # In the example below we explicitly declare each of the components of an app: widgets, a function to return the plot, column and row containers, and the completed `occupancy` Panel app. Widget objects have multiple "parameters" (current value, allowed ranges, and so on), and here we will use Panel's ``bind`` function to declare that function's input values should come from the widgets' ``value`` parameters. Now when the function and the widgets are displayed, Panel will automatically update the displayed output whenever any of the inputs change: # + import panel.widgets as pnw variable = pnw.RadioButtonGroup(name='variable', value='Temperature', options=list(data.columns)) window = pnw.IntSlider(name='window', value=10, start=1, end=60) reactive_outliers = pn.bind(find_outliers, variable, window, 10) widgets = pn.Column("<br>\n# Room occupancy", variable, window) occupancy = pn.Row(reactive_outliers, widgets) occupancy # - # ## Deploying Panels # # The above panels all work in the notebook cell (if you have a live Jupyter kernel running), but unlike other approaches such as ipywidgets, Panel apps work just the same in a standalone server. For instance, the app above can be launched as its own web server on your machine by uncommenting and running the following cell: # + #occupancy.show() # - # Or, you can simply mark whatever you want to be in the separate web page with `.servable()`, and then run the shell command `panel serve --show Introduction.ipynb` to launch a server containing that object. 
(Here, we've also added a semicolon to avoid getting another copy of the occupancy app here in the notebook.) occupancy.servable(); # During development, particularly when working with a raw script using `panel serve --show --autoreload` can be very useful as the application will automatically update whenever the script or notebook or any of its imports change. # ## Declarative Panels # # The above compositional approach is very flexible, but it ties your domain-specific code (the parts about sine waves) with your widget display code. That's fine for small, quick projects or projects dominated by visualization code, but what about large-scale, long-lived projects, where the code is used in many different contexts over time, such as in large batch runs, one-off command-line usage, notebooks, and deployed dashboards? For larger projects like that, it's important to be able to separate the parts of the code that are about the underlying domain (i.e. application or research area) from those that are tied to specific display technologies (such as Jupyter notebooks or web servers). # # For such usages, Panel supports objects declared with the separate [Param](http://param.pyviz.org) library, which provides a GUI-independent way of capturing and declaring the parameters of your objects (and dependencies between your code and those parameters), in a way that's independent of any particular application or dashboard technology. For instance, the above code can be captured in an object that declares the ranges and values of all parameters, as well as how to generate the plot, independently of the Panel library or any other way of interacting with the object: # + import param class RoomOccupancy(param.Parameterized): variable = param.Selector(objects=list(data.columns)) window = param.Integer(default=10, bounds=(1, 20)) sigma = param.Number(default=10, bounds=(0, 20)) def view(self): return find_outliers(self.variable, self.window, self.sigma) obj = RoomOccupancy() obj # - # The `RoomOccupancy` class and the `obj` instance have no dependency on Panel, Jupyter, or any other GUI or web toolkit; they simply declare facts about a certain domain (such as that smoothing requires window and sigma parameters, and that window is an integer greater than 0 and sigma is a positive real number). This information is then enough for Panel to create an editable and viewable representation for this object without having to specify anything that depends on the domain-specific details encapsulated in `obj`: pn.Row(obj.param, obj.view) # To support a particular domain, you can create hierarchies of such classes encapsulating all the parameters and functionality you need across different families of objects, with both parameters and code inheriting across the classes as appropriate, all without any dependency on a particular GUI library or even the presence of a GUI at all. This approach makes it practical to maintain a large codebase, all fully displayable and editable with Panel, in a way that can be maintained and adapted over time. # ## Linking plots and actions between panes # # The above approaches each work with a very wide variety of displayable objects, including images, equations, tables, and plots. In each case, Panel provides interactive functionality using widgets and updates the displayed objects accordingly, while making very few assumptions about what actually is being displayed. 
Panel also supports richer, more dynamic interactivity where the displayed object is itself interactive, such as the JavaScript-based plots from Bokeh and Plotly. # # For instance, if we substitute the [Bokeh](http://bokeh.pydata.org) wrapper [hvPlot](http://hvplot.pyviz.org) for the Matplotlib wrapper provided with Pandas, we automatically get interactive plots that allow zooming, panning and hovering: # + import hvplot.pandas def hvplot(avg, highlight): return avg.hvplot(height=200) * highlight.hvplot.scatter(color='orange', padding=0.1) text2 = "## Room Occupancy\nSelect the variable and the smoothing values" hvp = pn.interact(find_outliers, view_fn=hvplot, **kw) pn.Column(pn.Row(pn.panel(text2, width=400), hvp[0]), hvp[1]).servable("Occupancy") # - # These interactive actions can be combined with more complex interactions with a plot (e.g. tap, hover) to make it easy to explore data more deeply and uncover connections. For instance, we can use HoloViews to make a more full-featured version of the hvPlot example that displays a table of the current measurement values at the hover position on the plot: # + import holoviews as hv tap = hv.streams.PointerX(x=data.index.min()) def hvplot2(avg, highlight): line = avg.hvplot(height=300, width=500) outliers = highlight.hvplot.scatter(color='orange', padding=0.1) tap.source = line return (line * outliers).opts(legend_position='top_right') @pn.depends(tap.param.x) def table(x): index = np.abs((data.index - x).astype(int)).argmin() return data.iloc[index] app = pn.interact(find_outliers, view_fn=hvplot2, **kw) pn.Row( pn.Column("## Room Occupancy\nHover over the plot for more information.", app[0]), pn.Row(app[1], table) ) # - # ## Exploring further # # For a quick reference of different Panel functionality refer to the [overview](../user_guide/Overview.ipynb). If you want a more detailed description of different ways of using Panel, each appropriate for different applications see the following materials: # # - [APIs](../user_guide/APIs.ipynb): An overview of the different APIs offered by Panel. # - [Interact](../user_guide/Interact.ipynb): Instant GUI, given a function with arguments # - [Widgets](../user_guide/Widgets.ipynb): Explicitly instantiating widgets and linking them to actions # - [Parameters](../user_guide/Param.ipynb): Capturing parameters and their links to actions declaratively # # Just pick the style that seems most appropriate for the task you want to do, then study that section of the user guide. Regardless of which approach you take, you'll want to learn more about Panel's panes and layouts: # # - [Components](../user_guide/Components.ipynb): An overview of the core components of Panel including Panes, Widgets and Layouts # - [Customization](../user_guide/Customization.ipynb): How to set styles and sizes of Panel components # - [Deploy & Export](../user_guide/Deploy_and_Export.ipynb): An overview on how to display, export and deploy Panel apps and dashboards # # # Finally, if you are building a complex multi-stage application, you can consider our support for organizing workflows consisting of multiple stages: # # - [Pipelines](../user_guide/Pipelines.ipynb): Making multi-stage processing pipelines in notebooks and as deployed apps # # Or for more polished apps you can make use of Templates to achieve exactly the look and feel you want: # # - [Templates](../user_guide/Templates.ipynb): Composing one or more Panel objects into jinja2 template with full control over layout and styling.
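# Finally, as a compact recap of the reactive API used in this introduction, the pieces above can be combined into one small standalone snippet. This is only a sketch that reuses the `find_outliers` function and `data` loaded earlier; adapt it to your own functions and widgets:

# +
# Sketch: a minimal servable app combining the widgets, pn.bind and .servable()
# steps shown earlier in this introduction.
recap_variable = pnw.RadioButtonGroup(name='variable', value='Temperature',
                                      options=list(data.columns))
recap_window = pnw.IntSlider(name='window', value=10, start=1, end=60)

recap_view = pn.bind(find_outliers, recap_variable, recap_window, 10)

pn.Row(recap_view, pn.Column("## Room occupancy (recap)", recap_variable, recap_window)).servable()
# -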
examples/getting_started/Introduction.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + tags=["remove_input"] from datascience import * # %matplotlib inline path_data = '../../data/' import matplotlib.pyplot as plots plots.style.use('fivethirtyeight') import numpy as np # - # ### Deflategate ### # On January 18, 2015, the Indianapolis Colts and the New England Patriots played the American Football Conference (AFC) championship game to determine which of those teams would play in the Super Bowl. After the game, there were allegations that the Patriots' footballs had not been inflated as much as the regulations required; they were softer. This could be an advantage, as softer balls might be easier to catch. # # For several weeks, the world of American football was consumed by accusations, denials, theories, and suspicions: the press labeled the topic Deflategate, after the Watergate political scandal of the 1970's. The National Football League (NFL) commissioned an independent analysis. In this example, we will perform our own analysis of the data. # # Pressure is often measured in pounds per square inch (psi). NFL rules stipulate that game balls must be inflated to have pressures in the range 12.5 psi and 13.5 psi. Each team plays with 12 balls. Teams have the responsibility of maintaining the pressure in their own footballs, but game officials inspect the balls. Before the start of the AFC game, all the Patriots' balls were at about 12.5 psi. Most of the Colts' balls were at about 13.0 psi. However, these pre-game data were not recorded. # # During the second quarter, the Colts intercepted a Patriots ball. On the sidelines, they measured the pressure of the ball and determined that it was below the 12.5 psi threshold. Promptly, they informed officials. # # At half-time, all the game balls were collected for inspection. Two officials, <NAME> and <NAME>, measured the pressure in each of the balls. # # Here are the data. Each row corresponds to one football. Pressure is measured in psi. The Patriots ball that had been intercepted by the Colts was not inspected at half-time. Nor were most of the Colts' balls – the officials simply ran out of time and had to relinquish the balls for the start of second half play. football = Table.read_table(path_data + 'deflategate.csv') football.show() # For each of the 15 balls that were inspected, the two officials got different results. It is not uncommon that repeated measurements on the same object yield different results, especially when the measurements are performed by different people. So we will assign to each the ball the average of the two measurements made on that ball. football = football.with_column( 'Combined', (football.column(1)+football.column(2))/2 ).drop(1, 2) football.show() # At a glance, it seems apparent that the Patriots' footballs were at a lower pressure than the Colts' balls. Because some deflation is normal during the course of a game, the independent analysts decided to calculate the drop in pressure from the start of the game. Recall that the Patriots' balls had all started out at about 12.5 psi, and the Colts' balls at about 13.0 psi. Therefore the drop in pressure for the Patriots' balls was computed as 12.5 minus the pressure at half-time, and the drop in pressure for the Colts' balls was 13.0 minus the pressure at half-time. 
# # We can calculate the drop in pressure for each football, by first setting up an array of the starting values. For this we will need an array consisting of 11 values each of which is 12.5, and another consisting of four values each of which is all 13. We will use the NumPy function `np.ones`, which takes a count as its argument and returns an array of that many elements, each of which is 1. np.ones(11) patriots_start = 12.5 * np.ones(11) colts_start = 13 * np.ones(4) start = np.append(patriots_start, colts_start) start # The drop in pressure for each football is the difference between the starting pressure and the combined pressure measurement. drop = start - football.column('Combined') football = football.with_column('Pressure Drop', drop) football.show() # It looks as though the Patriots' drops were larger than the Colts'. Let's look at the average drop in each of the two groups. We no longer need the combined scores. football = football.drop('Combined') football.group('Team', np.average) # The average drop for the Patriots was about 1.2 psi compared to about 0.47 psi for the Colts. # # The question now is why the Patriots' footballs had a larger drop in pressure, on average, than the Colts footballs. Could it be due to chance? # # ### The Hypotheses ### # How does chance come in here? Nothing was being selected at random. But we can make a chance model by hypothesizing that the 11 Patriots' drops look like a random sample of 11 out of all the 15 drops, with the Colts' drops being the remaining four. That's a completely specified chance model under which we can simulate data. So it's the **null hypothesis**. # # For the alternative, we can take the position that the Patriots' drops are too large, on average, to resemble a random sample drawn from all the drops. # # ### Test Statistic ### # A natural statistic is the difference between the two average drops, which we will compute as "average drop for Patriots - average drop for Colts". Large values of this statistic will favor the alternative hypothesis. # + observed_means = football.group('Team', np.average).column(1) observed_difference = observed_means.item(1) - observed_means.item(0) observed_difference # - # This positive difference reflects the fact that the average drop in pressure of the Patriots' footballs was greater than that of the Colts. # The function `difference_of_means` takes three arguments: # # - the name of the table of data # - the label of the column containing the numerical variable whose average is of interest # - the label of the column containing the two group labels # # It returns the difference between the means of the two groups. # # We have defined this function in an earlier section. The definition is repeated here for ease of reference. def difference_of_means(table, label, group_label): reduced = table.select(label, group_label) means_table = reduced.group(group_label, np.average) means = means_table.column(1) return means.item(1) - means.item(0) difference_of_means(football, 'Pressure Drop', 'Team') # Notice that the difference has been calculated as Patriots' drops minus Colts' drops as before. # ### Predicting the Statistic Under the Null Hypothesis ### # If the null hypothesis were true, then it shouldn't matter which footballs are labeled Patriots and which are labeled Colts. The distributions of the two sets of drops would be the same. We can simulate this by randomly shuffling the team labels. 
shuffled_labels = football.sample(with_replacement=False).column(0) original_and_shuffled = football.with_column('Shuffled Label', shuffled_labels) original_and_shuffled.show() # How do all the group averages compare? difference_of_means(original_and_shuffled, 'Pressure Drop', 'Shuffled Label') difference_of_means(original_and_shuffled, 'Pressure Drop', 'Team') # The two teams' average drop values are closer when the team labels are randomly assigned to the footballs than they were for the two groups actually used in the game. # # ### Permutation Test ### # It's time for a step that is now familiar. We will do repeated simulations of the test statistic under the null hypothesis, by repeatedly permuting the footballs and assigning random sets to the two teams. # # Once again, we will use the function `one_simulated_difference` defined in an earlier section as follows. def one_simulated_difference(table, label, group_label): shuffled_labels = table.sample(with_replacement = False ).column(group_label) shuffled_table = table.select(label).with_column( 'Shuffled Label', shuffled_labels) return difference_of_means(shuffled_table, label, 'Shuffled Label') # We can now use this function to create an array `differences` that contains 10,000 values of the test statistic simulated under the null hypothesis. # + differences = make_array() repetitions = 10000 for i in np.arange(repetitions): new_difference = one_simulated_difference(football, 'Pressure Drop', 'Team') differences = np.append(differences, new_difference) # - # ### Conclusion of the Test ### # To calculate the empirical P-value, it's important to recall the alternative hypothesis, which is that the Patriots' drops are too large to be the result of chance variation alone. # # Larger drops for the Patriots favor the alternative hypothesis. So the P-value is the chance (computed under the null hypothesis) of getting a test statistic equal to our observed value of 0.733522727272728 or larger. empirical_P = np.count_nonzero(differences >= observed_difference) / 10000 empirical_P # That's a pretty small P-value. To visualize this, here is the empirical distribution of the test statistic under the null hypothesis, with the observed statistic marked on the horizontal axis. Table().with_column('Difference Between Group Averages', differences).hist() plots.scatter(observed_difference, 0, color='red', s=30) plots.title('Prediction Under the Null Hypothesis') print('Observed Difference:', observed_difference) print('Empirical P-value:', empirical_P) # As in previous examples of this test, the bulk of the distribution is centered around 0. Under the null hypothesis, the Patriots' drops are a random sample of all 15 drops, and therefore so are the Colts'. Therefore the two sets of drops should be about equal on average, and therefore their difference should be around 0. # # But the observed value of the test statistic is quite far away from the heart of the distribution. By any reasonable cutoff for what is "small", the empirical P-value is small. So we end up rejecting the null hypothesis of randomness, and conclude that the Patriots drops were too large to reflect chance variation alone. # # The independent investigative team analyzed the data in several different ways, taking into account the laws of physics. 
The final report said, # # > "[T]he average pressure drop of the Patriots game balls exceeded the average pressure drop of the Colts balls by 0.45 to 1.02 psi, depending on various possible assumptions regarding the gauges used, and assuming an initial pressure of 12.5 psi for the Patriots balls and 13.0 for the Colts balls." # > # > -- *Investigative report commissioned by the NFL regarding the AFC Championship game on January 18, 2015* # # Our analysis shows an average pressure drop of about 0.73 psi, which is close to the center of the interval "0.45 to 1.02 psi" and therefore consistent with the official analysis. # Remember that our test of hypotheses does not establish the reason *why* the difference is not due to chance. Establishing causality is usually more complex than running a test of hypotheses. # # But the all-important question in the football world was about causation: the question was whether the excess drop of pressure in the Patriots' footballs was deliberate. If you are curious about the answer given by the investigators, here is the [full report](https://nfllabor.files.wordpress.com/2015/05/investigative-and-expert-reports-re-footballs-used-during-afc-championsh.pdf).
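# As a small arithmetic footnote to the comparison above, the midpoint of the reported 0.45 to 1.02 psi interval can be checked directly against our observed difference of about 0.73 psi.

# Midpoint of the investigators' reported interval versus our observed difference
(0.45 + 1.02) / 2, observed_difference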
interactivecontent/compare-two-samples-by-bootstrapping/deflategate.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Dynamical Power Spectra (on real data)

# %matplotlib inline

# +
# load auxiliary libraries
import numpy as np
import matplotlib.pyplot as plt
from astropy.io import fits

# import stingray
import stingray

plt.style.use('seaborn-talk')
# -

# # It all starts with a light curve...

# Open the event file with astropy.io.fits

f = fits.open('emr_cleaned.fits')

# The time resolution is stored in the header of the first extension under the keyword `TIMEDEL`

dt = f[1].header['TIMEDEL']

# The column `TIME` of the first extension stores the time of each event

toa = f[1].data['Time']

# Let's create a Lightcurve from the event times of arrival with a given time resolution

lc = stingray.Lightcurve.make_lightcurve(toa=toa, dt=dt)

lc.plot()

# ---
# # DynamicalPowerspectrum

# Let's create a dynamical power spectrum with a segment size of 16 s and powers with a "leahy" normalization

dynspec = stingray.DynamicalPowerspectrum(lc=lc, segment_size=16, norm='leahy')

# The dyn_ps attribute stores the power matrix; each column corresponds to the power spectrum of one segment of the light curve

dynspec.dyn_ps

# To plot the DynamicalPowerspectrum matrix, we use the attributes `time` and `freq` to set the extent of the image axes. Have a look at the documentation of matplotlib's `imshow()`.

# +
extent = min(dynspec.time), max(dynspec.time), min(dynspec.freq), max(dynspec.freq)
plt.imshow(dynspec.dyn_ps, origin="lower", aspect="auto", vmin=1.98, vmax=3.0,
           interpolation="none", extent=extent)
plt.colorbar()
plt.ylim(700, 850)
# -

print("The dynamical power spectrum has {} frequency bins and {} time bins".format(len(dynspec.freq), len(dynspec.time)))

# ---
# # Rebinning in frequency

print("The current frequency resolution is {}".format(dynspec.df))

# Let's rebin to a frequency resolution of 2 Hz, using the average of the power

dynspec.rebin_frequency(df_new=2.0, method="average")

print("The new frequency resolution is {}".format(dynspec.df))

# Let's see how the dynamical power spectrum looks now

extent = min(dynspec.time), max(dynspec.time), min(dynspec.freq), max(dynspec.freq)
plt.imshow(dynspec.dyn_ps, origin="lower", aspect="auto", vmin=1.98, vmax=3.0,
           interpolation="none", extent=extent)
plt.colorbar()
plt.ylim(500, 1000)

extent = min(dynspec.time), max(dynspec.time), min(dynspec.freq), max(dynspec.freq)
plt.imshow(dynspec.dyn_ps, origin="lower", aspect="auto", vmin=2.0, vmax=3.0,
           interpolation="none", extent=extent)
plt.colorbar()
plt.ylim(700, 850)

# # Rebinning in time

# Let's try to improve the visualization by rebinning our matrix along the time axis

print("The current time resolution is {}".format(dynspec.dt))

# Let's rebin to a time resolution of 64 s

dynspec.rebin_time(dt_new=64.0, method="average")

print("The new time resolution is {}".format(dynspec.dt))

extent = min(dynspec.time), max(dynspec.time), min(dynspec.freq), max(dynspec.freq)
plt.imshow(dynspec.dyn_ps, origin="lower", aspect="auto", vmin=2.0, vmax=3.0,
           interpolation="none", extent=extent)
plt.colorbar()
plt.ylim(700, 850)

# # Trace maximum

# Let's use the method `trace_maximum()` to find the index of the maximum of each power spectrum in a certain frequency range
# (for example, between 755 and 782 Hz):

tracing = dynspec.trace_maximum(min_freq=755, max_freq=782)

# This is what the traced frequencies look like

plt.plot(dynspec.time, dynspec.freq[tracing], color='red', alpha=1)
plt.show()

# Let's plot the trace on top of the dynamical power spectrum

extent = min(dynspec.time), max(dynspec.time), min(dynspec.freq), max(dynspec.freq)
plt.imshow(dynspec.dyn_ps, origin="lower", aspect="auto", vmin=2.0, vmax=3.0,
           interpolation="none", extent=extent, alpha=0.7)
plt.colorbar()
plt.ylim(740, 800)
plt.plot(dynspec.time, dynspec.freq[tracing], color='red', lw=3, alpha=1)
plt.show()

# The spike at 400 Hz is probably a statistical fluctuation; tracing by the maximum power can be dangerous!
#
# We will implement better methods in the future, stay tuned ;)
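# In the meantime, one possible workaround (just a sketch, not part of the stingray API) is to trace a power-weighted centroid frequency in the band of interest instead of the raw maximum; it is less sensitive to single noisy bins:

# +
# Sketch: power-weighted centroid frequency in the 755-782 Hz band for each time bin.
# Assumes rows of dyn_ps index frequency and columns index time, as described above.
freqs = np.asarray(dynspec.freq)
powers = np.asarray(dynspec.dyn_ps)

band = (freqs >= 755) & (freqs <= 782)
band_freqs = freqs[band]
band_powers = powers[band, :]

centroid = (band_freqs[:, np.newaxis] * band_powers).sum(axis=0) / band_powers.sum(axis=0)

plt.plot(dynspec.time, centroid, color='blue', lw=2)
plt.xlabel('Time')
plt.ylabel('Centroid frequency [Hz]')
plt.show()
# -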
DynamicalPowerspectrum/DynamicalPowerspectrum_tutorial_[real_data].ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # Lesson 03 Simple metrics and grid size # *This lesson made by <NAME> and last updated 22/11/2021* # In this lesson we are going to look at some simple topographic metrics and how the grid size affects these metrics. # # **Side note**: The grid cell size in a geospatial raster is the length of the edges of each pixel. This is often referred to as the "resolution" but technically the resolution is defined as the minimum distance between two objects that can be separated in an image, so the resolution of a raster is larger than the grid cell size. [You can read about that here](https://www.semanticscholar.org/paper/The-differentiation-between-grid-spacing-and-and-to-Grasso/23ed9f85a67dea01eb6501701478555721b6af13). # # You will need to have data files from the previous lessons, so if you have not done those lessons please open them and execute the cells (the quick way to do that is to use the `Restart & run all` button in the `Kernel` menu above). # ## First import some stuff we need # First we make sure lsdviztools version is updated (it needs to be > 0.4.7): # !pip install lsdviztools --upgrade # Now import stuff we need. import lsdviztools.lsdbasemaptools as bmt from lsdviztools.lsdplottingtools import lsdmap_gdalio as gio import lsdviztools.lsdmapwrappers as lsdmw import rasterio as rio from rasterio.plot import show import matplotlib.pyplot as plt # ## Grab some (more) data # We are going to grab a few more datasets. These will be in the same area, but they will be a 90 m dataset and a different 30 m dataset. They require 2 different calls to the `ot_scraper` in `lsdviztools`. lower_left = [36.990554387425014, -2.318307057720176] upper_right = [37.23367133834253, -1.8425313329873874] # This downloads 90m (3 arcsecond) SRTM Aguas_DEM = bmt.ot_scraper(source = "SRTMGL3", lower_left_coordinates = lower_left, upper_right_coordinates = upper_right, prefix = "rio_aguas") Aguas_DEM.print_parameters() Aguas_DEM.download_pythonic() # This downloads ALOS World 3D 30m Aguas_DEM = bmt.ot_scraper(source = "AW3D30", lower_left_coordinates = lower_left, upper_right_coordinates = upper_right, prefix = "rio_aguas") Aguas_DEM.print_parameters() Aguas_DEM.download_pythonic() # ## Warp these three datasets into UTM coordinates with 30 and 90 m pixel spacing # We are going to use gdal to warp the three DEMs. You can use the syntax from lesson 2. I've looked up the UTM zone for you. # !gdalwarp -t_srs EPSG:32630 rio_aguas_SRTMGL1.tif RA_SRTM_UTM.tif -r cubic -tr 30 30 # !gdalwarp -t_srs EPSG:32630 rio_aguas_AW3D30.tif RA_AW3D30_UTM.tif -r cubic -tr 30 30 # !gdalwarp -t_srs EPSG:32630 rio_aguas_SRTMGL3.tif RA_SRTM3_UTM.tif -r cubic -tr 90 90 # ## Get the hillshades # !gdaldem hillshade RA_SRTM_UTM.tif RA_SRTM_UTM_HS.tif -alg ZevenbergenThorne # !gdaldem hillshade RA_SRTM3_UTM.tif RA_SRTM3_UTM_HS.tif -alg ZevenbergenThorne # !gdaldem hillshade RA_AW3D30_UTM.tif RA_AW3D30_UTM_HS.tif -alg ZevenbergenThorne # We will use gdal to get the hillshades as well. # ## Plot some data with rasterio # Lets look at the hillshades. This time we will plot with the `rasterio show`. 
First load the datasets SA_SRTM_hs = rio.open("RA_SRTM_UTM_HS.tif") SA_SRTM3_hs = rio.open("RA_SRTM3_UTM_HS.tif") SA_AW3D_hs = rio.open("RA_AW3D30_UTM_HS.tif") # + # %matplotlib inline f, (ax1,ax2,ax3) = plt.subplots(3, 1) f.set_size_inches(10.5, 18) show(SA_SRTM_hs, ax=ax1, cmap='gray', title = "SRTM 30m") show(SA_SRTM3_hs,ax=ax2, cmap='gray', title = "SRTM 90m") show(SA_AW3D_hs, ax=ax3, cmap='gray', title = "AW3D 30m") # - # Right, what can you see here? Hopefully the difference between SRTM1 and SRTM3 is obvious. Less obvious is the difference between SRTM1 and AW3D. We will need to zoom in to see that. To do that we need to use the subscripting functionality of `rasterio` (which allows us to subsample the underlying array) # + f, (ax1,ax2) = plt.subplots(1, 2) f.set_size_inches(18.5, 10) show(SA_SRTM_hs.read(1)[300:700,600:900], ax=ax1, transform=SA_SRTM_hs.transform, cmap='gray', title = "SRTM 30m") show(SA_AW3D_hs.read(1)[300:700,600:900], ax=ax2, transform=SA_AW3D_hs.transform, cmap='gray', title = "AW3D 30m") # - # Okay, so hopefully you can see from the above images that not all DEMs are created equal. SRTM was a total breakthrough in terms of producing global topographic data. But the radar only imaged the surface over a very short period so the accuracy is not as good as DEMs based on many years of stacked images (such as AW3D 30 or the Copernicus DEM). SRTM, however, is unique in that is a snapshot of the Earth's surface in 2000, so comparisons to much later topographic data (AW3D and Copernicus) should be able to detect large changes, such as big landslides. The accuracy of these DEM is not sufficient to find small changes, however. # # *If you sign in to OpenTopography and get an api key you can try to compare AW3D 30 to the Copernicus and NASADEM topographic data.* # # Grid cell spacing on derivative data # Lets turn our attention to derived datasets. Hillshade is derived from the DEM. But we can also take the slope, or curvature, or other metrics. We will use some more **gdal** and we will also use the histogram function in `rasterio`. # # One thing to note: if you take gradient or curvature using gdal or a GIS you will get a local result: the gradient in a pixel is based only on the local pixel. This can lead to noisy results, and we think taking a neighbourhood gradient is better. But that can come in the next lesson where we show you specialised software. For now we will stick to basics. # # We are going to use the two SRTM datasets since they come from the same underlying data but are at different grid spacings. # !gdaldem slope RA_SRTM_UTM.tif RA_SRTM_UTM_S.tif -alg ZevenbergenThorne -p # !gdaldem slope RA_SRTM3_UTM.tif RA_SRTM3_UTM_S.tif -alg ZevenbergenThorne -p # Now we use the `rasterio` histogram function to look at the data. # + from rasterio.plot import show_hist fig, (ax1,ax2) = plt.subplots(1, 2, figsize=(14,7)) RA_SRTM_S = rio.open("RA_SRTM_UTM_S.tif") RA_SRTM3_S = rio.open("RA_SRTM3_UTM_S.tif") show_hist(RA_SRTM_S, bins=100, histtype='stepfilled',lw=0.0, stacked=False, alpha=0.3,ax=ax1, title = "SRTM 30m") ax1.set_xlabel("% slope") ax1.set_xlim([0,100]) ax1.get_legend().remove() show_hist(RA_SRTM3_S, bins=100, histtype='stepfilled',lw=0.0, stacked=False, alpha=0.3,ax=ax2, title = "SRTM 90m") ax2.set_xlabel("% slope") ax2.set_xlim([0,100]) ax2.get_legend().remove() # - # If you look at this data, you will see that the slope is gentler in the 90 m data. This is systematic. Coarse DEMs underestimate topographic gradients. 
They also give systematic biases in other derivative datasets (like curvature). *This is important because topographic gradient has been associated with (amongst other things) landslide risk and curvature has been associated with erosion rate.* # # You can read all about this in the following paper: [Grieve et al., 2016](https://esurf.copernicus.org/articles/4/627/2016/) # ## What you should have learned and potential modifications # * You will have more experience downloading and warping raster data. # * You have seen that not all datasets are of the same quality. # * You will have seen that coarser grid spacing can lead to bias in derivative datasets, like topographic gradient. # # Further steps: # * Try downloading the Copernicus DEM and comparing that to the ALOS World 3D data (see the sketch after this list). # * Try to zoom in on different parts of the DEM (to test if you understand how that was done).
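# As a starting point for the first of the further steps, a Copernicus DEM download might look like the sketch below. It simply reuses the `ot_scraper` pattern from earlier in this lesson; the source keyword "COP30" and the output file name are assumptions that you should check against the OpenTopography and lsdviztools documentation, and you may need to supply an OpenTopography API key.

# +
# Sketch only: "COP30" as the source keyword is an assumption, check the lsdviztools docs
# (some sources also require an OpenTopography API key to be passed to ot_scraper)
Aguas_DEM = bmt.ot_scraper(source = "COP30",
                           lower_left_coordinates = lower_left,
                           upper_right_coordinates = upper_right,
                           prefix = "rio_aguas")
Aguas_DEM.print_parameters()
Aguas_DEM.download_pythonic()
# -

# The warp step would then follow the same pattern as before (the input file name below assumes the prefix_SOURCE naming used for the other downloads, so check what was actually written to disk).

# !gdalwarp -t_srs EPSG:32630 rio_aguas_COP30.tif RA_COP30_UTM.tif -r cubic -tr 30 30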
Basic_topography/Lesson_03_simple_metrics_and_resolution.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # <img src="https://www.microsoft.com/en-us/research/uploads/prod/2020/05/Attribution.png" width="400"> # # <h1 align="left">Multi-investment Attribution: Distinguish the Effects of Multiple Outreach Efforts</h1> # # A startup that sells software would like to know whether its multiple outreach efforts were successful in attracting new customers or boosting consumption among existing customers. They would also like to distinguish the effects of several incentives on different kinds of customers. In other words, they would like to learn the **heterogeneous treatment effect** of each investment on customers' software usage. # # In an ideal world, the startup would run several randomized experiments where each customer would receive a random assortment of investments. However, this can be logistically prohibitive or strategically unsound: the startup might not have the resources to design such experiments or they might not want to risk losing out on big opportunities due to lack of incentives. # # In this customer scenario walkthrough, we show how tools from the [EconML](https://aka.ms/econml) library can use historical investment data to learn the effects of multiple investments. # ### Summary # # 1. [Background](#Background) # 2. [Data](#Data) # 3. [Get Causal Effects with EconML](#Get-Causal-Effects-with-EconML) # 4. [Understand Treatment Effects with EconML](#Understand-Treatment-Effects-with-EconML) # 5. [Make Policy Decisions with EconML](#Make-Policy-Decisions-with-EconML) # 6. [Conclusions](#Conclusions) # # Background # # <img src="https://get.pxhere.com/photo/update-software-upgrade-laptop-computer-install-program-screen-system-repair-data-development-electronic-load-pc-process-progress-support-technical-load-1565823.jpg" width="400"> # # In this scenario, a startup that sells software provides two types of incentives to its customers: technical support and discounts. A customer might be given one, both or none of these incentives. # # The startup has historical data on these two investments for 2,000 customers, as well as how much revenue these customers generated in the year after the investments were made. They would like to use this data to learn the optimal incentive policy for each existing or new customer in order to maximize the return on investment (ROI). # # The startup faces two challenges: 1) the dataset is biased because historically the larger customers received the most incentives and 2) the observed outcome combines effects from two different investments. Thus, they need a causal model that can accommodate multiple concurrent interventions. # # **Solution:** EconML’s `Doubly Robust Learner` model jointly estimates the effects of multiple discrete treatments. The model uses flexible functions of observed customer features to filter out spurious correlations in existing data and deliver the causal effect of each intervention on revenue. 
# # + # Some imports to get us started import warnings warnings.simplefilter('ignore') # Utilities import os import urllib.request import numpy as np import pandas as pd # Generic ML imports from xgboost import XGBRegressor, XGBClassifier # EconML imports from econml.dr import LinearDRLearner import matplotlib.pyplot as plt import seaborn as sns # %matplotlib inline # - # # Data # # The data* contains ~2,000 customers and is comprised of: # # * Customer features: details about the industry, size, revenue, and technology profile of each customer. # * Interventions: information about which incentive was given to a customer. # * Outcome: the amount of product the customer bought in the year after the incentives were given. # # Feature Name | Type | Details # :--- |:--- |:--- # **Global Flag** | W | whether the customer has global offices # **Major Flag** | W | whether the customer is a large consumer in their industry (as opposed to SMC - Small Medium Corporation - or SMB - Small Medium Business) # **SMC Flag** | W | whether the customer is a Small Medium Corporation (SMC, as opposed to major and SMB) # **Commercial Flag** | W | whether the customer's business is commercial (as opposed to public secor) # **IT Spend** | W | \\$ spent on IT-related purchases # **Employee Count** | W | number of employees # **PC Count** | W | number of PCs used by the customer # **Size** | X | customer's size given by their yearly total revenue # **Tech Support** | T | whether the customer received tech support (binary) # **Discount** | T | whether the customer was given a discount (binary) # **Revenue** | Y | \\$ Revenue from customer given by the amount of software purchased # # **To protect the privacy of the startup's customers, the data used in this scenario is synthetically generated and the feature distributions don't correspond to real distributions. However, the feature names have preserved their names and meaning.* # Import the sample multi-attribution data file_url = "https://msalicedatapublic.blob.core.windows.net/datasets/ROI/multi_attribution_sample.csv" multi_data = pd.read_csv(file_url) # Data sample multi_data.head() # Define estimator inputs T_bin = multi_data[ ["Tech Support", "Discount"] ] # multiple interventions, or treatments Y = multi_data["Revenue"] # amount of product purchased, or outcome X = multi_data[["Size"]] # heterogeneity feature W = multi_data.drop( columns=["Tech Support", "Discount", "Revenue", "Size"] ) # controls # We investigate below whether the number of investments given is correlated with the size of the customer. We note that the average customer size is larger for more incentives given. 
# Average customer size per incentive combination multi_data[["Size", "Tech Support", "Discount"]].groupby( by=["Tech Support", "Discount"], as_index=False ).mean().astype(int) # The data was generated using the following underlying treatment effect function: # # $$ # \text{treatment_effect(Size)} = (5,000 + 2\% \cdot \text{Size}) \cdot I_\text{Tech Support} + (5\% \cdot \text{Size}) \cdot I_\text{Discount} # $$ # # Therefore, the treatment effect depends on the customer's size as follows: tech support provides an consumption boost of \$5,000 + 2\% Size and a discount provides an consumption boost of 5\% Size.**This is the relationship we seek to learn from the data.** # + # Define underlying treatment effect function TE_fn = lambda X: np.hstack([5000 + 2 / 100 * X, 5 / 100 * X]) true_TE = TE_fn(X) # Define true coefficients for the three treatments # The third coefficient is just the sum of the first two since we assume an additive effect true_coefs = [2 / 100, 5 / 100, 7 / 100] true_intercepts = [5000, 0, 5000] treatment_names = ["Tech Support", "Discount", "Tech Support & Discount"] # - # # Get Causal Effects with EconML # To get causal effects, we use EconML's `LinearDRLearner`* estimator. This estimator requires a set of discrete treatments $T$ that corresponds to different types of interventions. Thus, we first map the binary interventions tech support and discount into one categorical variable: # # Tech support| Discount| Treatment encoding| Details # :--- |:--- |:--- |:--- # 0 | 0 | 0 | no incentive # 1 | 0 | 1 | tech support only # 0 | 1 | 2 | discount only # 1 | 1 | 3 | both incentives # # The estimator takes as input the outcome of interest $Y$ (amount of product purchased), a discrete treatment $T$ (interventions given), heterogeneity features $X$ (here, customer's size) and controls $W$ (all other customer features). # # # The LinearDRLearner also requires two auxiliary models to model the relationships $T\sim (W, X)$ (`model_propensity`) and $Y \sim (W, X)$(`model_regression`). These can be generic, flexible classification and regression models, respectively. # # # **This estimator assumes a linear relationship between the treatment effect and a transformation of the features $X$ (e.g. a polynomial basis expansion). For more generic forms of the treatment effect, see the `DRLearner` estimator.* # + # Transform T to one-dimensional array with consecutive integer encoding def treat_map(t): return np.dot(t, 2 ** np.arange(t.shape[0])) T = np.apply_along_axis(treat_map, 1, T_bin).astype(int) # - # Train EconML model with generic helper models model = LinearDRLearner( model_regression=XGBRegressor(learning_rate=0.1, max_depth=3), model_propensity=XGBClassifier(learning_rate=0.1, max_depth=3, objective="multi:softmax"), random_state=1, ) # Specify final stage inference type and fit model model.fit(Y=Y, T=T, X=X, W=W, inference="statsmodels") # # Understand Treatment Effects with EconML # # We can obtain a summary of the coefficient values as well as confidence intervals by calling the `summary` function on the fitted model for each treatment. for i in range(model._d_t[0]): print(f"Investment: {treatment_names[i]}") print(f"True treatment effect: {true_intercepts[i]} + {true_coefs[i]}*Size") display(model.summary(T=i + 1)) # From the summary panels, we see that the learned coefficients/intercepts are close to the true coefficients/intercepts and the p-values are small for most of these. 
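# As a quick, illustrative sanity check of the fitted model (not part of the original case study), we can also ask for the predicted effect of a single treatment on a hypothetical customer. With the encoding above, T=1 is "tech support only", so for an example customer of Size 100,000 the estimate should land near 5,000 + 2% * 100,000 = 7,000 under the true treatment effect function.

# +
# Hypothetical example customer: Size = 100,000
X_example = np.array([[100_000]])
# Effect of "tech support only" (T=1) relative to no incentive (T=0)
print(model.effect(X=X_example, T0=0, T1=1))
# -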
# # We further use the `coef_, coef__interval` and the `intercept_, intercept__interval` methods to obtain the learned coefficient values and build confidence intervals. We compare the true and the learned coefficients through the plots below. # + # Compare learned coefficients with true model coefficients # Aggregate data coef_indices = np.arange(model._d_t[0]) coefs = np.hstack([model.coef_(T=i) for i in 1 + coef_indices]) intercepts = np.hstack([model.intercept_(T=i) for i in 1 + coef_indices]) # Calculate coefficient error bars for 90% confidence interval coef_error = np.hstack([model.coef__interval(T=i) for i in 1 + coef_indices]) coef_error[0, :] = coefs - coef_error[0, :] coef_error[1, :] = coef_error[1, :] - coefs # Calculate intercept error bars for 90% confidence interval intercept_error = np.vstack( [model.intercept__interval(T=i) for i in 1 + coef_indices] ).T intercept_error[0, :] = intercepts - intercept_error[0, :] intercept_error[1, :] = intercept_error[1, :] - intercepts # + # Plot coefficients plt.figure(figsize=(6, 5)) ax1 = plt.subplot(2, 1, 1) plt.errorbar( coef_indices, coefs, coef_error, fmt="o", label="Learned values\nand 90% confidence interval", ) plt.scatter(coef_indices, true_coefs, color="C1", label="True values", zorder=3) plt.xticks(coef_indices, treatment_names) plt.setp(ax1.get_xticklabels(), visible=False) plt.title("Coefficients") plt.legend(loc=(1.05, 0.65)) plt.grid() # Plot intercepts plt.subplot(2, 1, 2) plt.errorbar(coef_indices, intercepts, intercept_error, fmt="o") plt.scatter(coef_indices, true_intercepts, color="C1", zorder=3) plt.xticks(coef_indices, treatment_names) plt.title("Intercepts") plt.grid() plt.show() # - # # Make Policy Decisions with EconML # # Investments such as tech support and discounts come with an associated cost. Thus, we would like to know what incentives to give to each customer to maximize the profit from their increased engagement. This is the **treatment policy**. # # In this scenario, we define a cost function as follows: # * The cost of `tech support` scales with the number of PCs a customer has. You can imagine that if the software product needs tech support to be installed on each machine, there is a cost (\\$100 here) per machine. # * The cost of `discount` is a fixed \\$7,000. Think of this as giving the customer the first \\$7,000 worth of product for free. # * The cost of `tech support` and `discount` is the sum of the cost of each of these. Note that this might not be the case in every business application: it is possible that managing multiple incentive programs can add overhead. # Define cost function def cost_fn(multi_data): t1_cost = multi_data[["PC Count"]].values * 100 t2_cost = np.ones((multi_data.shape[0], 1)) * 7000 return np.hstack([t1_cost, t2_cost, t1_cost + t2_cost]) # We use the model's `const_marginal_effect` method to find the counterfactual treatment effect for each possible treatment. We then subtract the treatment cost and choose the treatment which the highest return. That is the recommended policy. 
# Get roi for each customer and possible treatment potential_roi = model.const_marginal_effect(X=X.values) - cost_fn(multi_data) # Add a column of 0s for no treatment potential_roi = np.hstack([np.zeros(X.shape), potential_roi]) all_treatments = np.array(["None"] + treatment_names) recommended_T = np.argmax(potential_roi, axis=1) ax1 = sns.scatterplot( x=X.values.flatten(), y=multi_data["PC Count"].values, hue=all_treatments[recommended_T], hue_order=all_treatments, cmap="Dark2", s=40, ) plt.legend(title="Investment Policy") plt.setp( ax1, xlabel="Customer Size", ylabel="PC Count", title="Optimal Investment Policy by Customer", ) plt.show() # We compare different policies: the optimal policy we learned, the current policy, and the policy under which each customer is given all incentives. We note that the optimal policy has a much higher ROI than the alternatives. roi_current = potential_roi[np.arange(X.shape[0]), T].sum() roi_optimal = potential_roi[np.arange(X.shape[0]), recommended_T].sum() roi_bothT = potential_roi[:, -1].sum() all_rois = np.array([roi_optimal, roi_current, roi_bothT]) Y_baseline = (Y - model.effect(X=X.values, T1=T)).sum() pd.DataFrame( { "Policy": ["Optimal", "Current", "All Investments"], "ROI ($)": all_rois, "ROI (% of baseline Y)": np.round(all_rois / Y_baseline * 100, 1), } ) # # Conclusions # # In this notebook, we have demonstrated the power of using EconML to: # # * Learn the effects of multiple concurrent interventions # * Interpret the resulting individual-level treatment effects # * Build investment policies around the learned effects # # To learn more about what EconML can do for you, visit our [website](https://aka.ms/econml), our [GitHub page](https://github.com/microsoft/EconML) or our [documentation](https://econml.azurewebsites.net/).
notebooks/CustomerScenarios/Case Study - Multi-investment Attribution at A Software Company.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # Population Density of Mexican States and Municipalities # # This Notebook downloads Geopandas GeoDataFrames for States (admin1) and Municipalities (admin2) derived from the 2020 Mexican Census: [INEGI](https://www.inegi.org.mx/temas/mg/). # # For details on how these dataframes were created, see the [mexican-boundaries](https://github.com/sbl-sdsc/mexico-boundaries) GitHub project. # # It also uses the variables of the dataframe obtained in the [Week 3 analysis](Week3States.ipynb). # # Due to the poor clustering obtained so far, this part of the project will remain pending until the data being used is better suited to the clustering method, or until a better understanding of clustering is achieved. # + from io import BytesIO from urllib.request import urlopen import pandas as pd import geopandas as gpd import matplotlib.pyplot as plt import ipywidgets as widgets import numpy as np from sklearn.cluster import DBSCAN from sklearn import metrics from sklearn.preprocessing import StandardScaler, RobustScaler, MinMaxScaler # - pd.options.display.max_rows = None # display all rows pd.options.display.max_columns = None # display all columns # ## Boundaries of Mexican Municipalities # Read boundary polygons for Mexican municipalities from the parquet file admin2_url = 'https://raw.githubusercontent.com/sbl-sdsc/mexico-boundaries/main/data/mexico_admin2.parquet' resp = urlopen(admin2_url) admin2 = gpd.read_parquet(BytesIO(resp.read())) # Check the coordinate reference system and add a combined municipality code admin2.crs admin2['CVE_MUNI'] = admin2['CVE_ENT'] + admin2['CVE_MUN'] admin2.head() admin2.plot(); # ## Map of Population by Municipality # Get the week 3 analysis data file var_admin2 = pd.read_csv('../../data/week3analyzesMunicipalities.csv') var_admin2.head() # Add 5-digit municipality code column (example: convert 5035 -> 05035) var_admin2['CVE_MUNI'] = var_admin2['cve_ent'].apply(lambda i: f'{i:05d}') var_admin2.head() # Merge the geo dataframe with the population dataframe using the common CVE_MUNI column df_admin2 = admin2.merge(var_admin2, on='CVE_MUNI') df_admin2.head() # The columns that are not needed for this analysis are excluded a2 = df_admin2.iloc[:,7:].copy() a2.head() # Only the data of interest is selected a2 = a2[['case_rate', 'death_rate', 'pct_mental_problem', 'pct_no_problems','pct_pop_obesity', 'population/sqkm']].copy() # The data is normalized # + #std_scaler = StandardScaler() std_scaler = RobustScaler() #std_scaler = MinMaxScaler() std_scaler # fit and transform the data X = pd.DataFrame(std_scaler.fit_transform(a2)) X.head(10) # - # The clustering begins, using the DBSCAN method # + # Compute DBSCAN db = DBSCAN(eps=0.5, min_samples=5).fit(X) core_samples_mask = np.zeros_like(db.labels_, dtype=bool) core_samples_mask[db.core_sample_indices_] = True labels = db.labels_ #with np.printoptions(threshold=np.inf): # print(labels) # Number of clusters in labels, ignoring noise if present.
n_clusters_ = len(set(labels)) - (1 if -1 in labels else 0) n_noise_ = list(labels).count(-1) print('Estimated number of clusters: %d' % n_clusters_) print('Estimated number of noise points: %d' % n_noise_) print("Silhouette Coefficient: %0.3f" % metrics.silhouette_score(X, labels)) # - df_labels = pd.DataFrame(labels, columns=['cluster']) df2 = pd.concat([df_admin2, df_labels], axis=1) df2.head() title = 'Population Density Clusters for Municipalities in Mexico' ax1 = df2.plot(column='cluster', # cmap='OrRd', # color maps: https://matplotlib.org/stable/tutorials/colors/colormaps.html cmap='Set1', legend=True, legend_kwds={'label': 'Cluster Number', 'orientation': 'horizontal'}, figsize=(16, 11)); ax1.set_title(title, fontsize=15); # + # try Plotly with KDE density plot # bubble maps (see Ebola example with time series): # https://plotly.com/python/bubble-maps/
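# One small next step before revisiting this analysis (a sketch, not a final choice) is to scan a few DBSCAN `eps` values and compare the resulting cluster counts and silhouette scores, so the parameter choice above is less arbitrary.

# +
# Sketch: scan eps values and report cluster counts and silhouette scores
for eps in [0.3, 0.5, 0.8, 1.0, 1.5]:
    labels_eps = DBSCAN(eps=eps, min_samples=5).fit(X).labels_
    n_clusters_eps = len(set(labels_eps)) - (1 if -1 in labels_eps else 0)
    if len(set(labels_eps)) > 1:
        # silhouette_score needs at least two distinct labels
        score = metrics.silhouette_score(X, labels_eps)
        print(f"eps={eps}: {n_clusters_eps} clusters, silhouette={score:.3f}")
    else:
        print(f"eps={eps}: only one label, silhouette undefined")
# -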
notebooks/dev/Week6.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Wine # ### Introduction: # # This exercise is an adaptation of the UCI Wine dataset. # The only purpose is to practice deleting data with pandas. # # ### Step 1. Import the necessary libraries import pandas as pd import numpy as np # ### Step 2. Import the dataset from this [address](https://archive.ics.uci.edu/ml/machine-learning-databases/wine/wine.data). # ### Step 3. Assign it to a variable called wine # + url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/wine/wine.data' wine = pd.read_csv(url) wine.head() # - # ### Step 4. Delete the first, fourth, seventh, ninth, eleventh, thirteenth and fourteenth columns # + wine = wine.drop(wine.columns[[0,3,6,8,11,12,13]], axis = 1) wine.head() # - # ### Step 5. Assign the columns as below: # # The attributes are (donated by <NAME>, riclea '@' anchem.unige.it): # 1) alcohol # 2) malic_acid # 3) alcalinity_of_ash # 4) magnesium # 5) flavanoids # 6) proanthocyanins # 7) hue wine.columns = ['alcohol', 'malic_acid', 'alcalinity_of_ash', 'magnesium', 'flavanoids', 'proanthocyanins', 'hue'] wine.head() # ### Step 6. Set the values of the first 3 rows from alcohol as NaN wine.iloc[0:3, 0] = np.nan wine.head() # ### Step 7. Now set the value of the rows 3 and 4 of magnesium as NaN wine.iloc[2:4, 3] = np.nan wine.head() # ### Step 8. Fill the value of NaN with the number 10 in alcohol and 100 in magnesium # + wine.alcohol.fillna(10, inplace = True) wine.magnesium.fillna(100, inplace = True) wine.head() # - # ### Step 9. Count the number of missing values wine.isnull().sum() # ### Step 10. Create an array of 10 random numbers up to 10 random = np.random.randint(10, size = 10) random # ### Step 11. Use the random numbers you generated as an index and assign a NaN value to each of those cells. wine.alcohol[random] = np.nan wine.head(10) # ### Step 12. How many missing values do we have? wine.isnull().sum() # ### Step 13. Delete the rows that contain missing values wine = wine.dropna(axis = 0, how = "any") wine.head() # ### Step 14. Print only the non-null values in alcohol mask = wine.alcohol.notnull() mask wine.alcohol[mask] # ### Step 15. Reset the index, so it starts with 0 again wine = wine.reset_index(drop = True) wine.head() # ### BONUS: Create your own question and answer it.
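# One possible (made-up) bonus question, just as an illustration: do wines with above-average flavanoids also have a higher mean hue than the rest?

# +
high_flav = wine[wine.flavanoids > wine.flavanoids.mean()]
low_flav = wine[wine.flavanoids <= wine.flavanoids.mean()]
print(high_flav.hue.mean(), low_flav.hue.mean())
# -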
10_Deleting/Wine/Exercises_code_and_solutions.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # RBM training $ L=100$ - Dataset with 29 different temperatures # + import numpy as np import pandas as pd import matplotlib.pyplot as plt from scipy import stats from sklearn.model_selection import train_test_split import seaborn as sns import torch import torch.nn as nn torch.manual_seed(12) import sys sys.path.append('../modules') from rbm import RBM from mc_ising2d import IsingMC L = 100 # - # ## Loading Ising data # # The first step is to load the Ising data generated by the Monte Carlo procedure. The file `L100_Ising2D_MC.pkl` was constructed considering a square lattice $L\times L$, with $L=100$. There are 1000 spin configurations for each of 29 different temperatures $ T/J = 1, 1.1, 1.2, ..., 3.5 $ and $ T/J = 2.259, 2.269, 2.279$. ising_data = pd.read_pickle('../data/L100_Ising2D_MC.pkl') ising_data.head() ising_data['temp'].value_counts() # In the thermodynamic limit $( L^2 \rightarrow \infty )$, the isotropic Ising model in a square lattice was analytically solved by [Lars Onsager](https://en.wikipedia.org/wiki/Lars_Onsager) in [1944](https://journals.aps.org/pr/abstract/10.1103/PhysRev.65.117). In this limit, the model shows spontaneous magnetization for $ T < T_c$, with $T_c$ given by # # $$ \frac{T_c}{J} = \frac{2}{\log\left(1 + \sqrt{ 2}\right)} \approx 2.269185 \;. $$ # # With the class `IsingMC` we can check some thermodynamic quantities. IsingMC(L= 100).plot_thermodynamics(spin_MC= ising_data, Tc_scaled= False) data = ising_data.drop(columns= ['energy', 'magn']) # ### Constructing training and test sets # # Using `train_test_split` from [`sklearn`](http://scikit-learn.org/) it is easy to split the data into training and test sets. Since `train_test_split` is a random process and our data has 1000 samples for each of the 29 temperature values, we split the data for each temperature in order to avoid the possibility of a biased split towards some temperature value. # + train_data_ = pd.DataFrame() test_data_ = pd.DataFrame() for _, temp in enumerate(data['temp'].value_counts().index.tolist()): train_data_T, test_data_T = train_test_split(data[data['temp'] == temp], test_size= 0.2, random_state= 12) train_data_ = pd.concat([train_data_, train_data_T]) test_data_ = pd.concat([test_data_, test_data_T]) # - # ### Training the model # # Our code implementing a Restricted Boltzmann Machine is written as a Python class called `RBM` which is imported from `rbm.py`. # # For simplification, the units have no bias and the RBM stochasticity parameter, represented below by $T$, is set to unity, as usual in most practical applications. Note that we set `use_cuda=True`, which makes use of [CUDA tensor types](https://pytorch.org/docs/stable/cuda.html), implementing GPU computation. If a GPU is not available, one should just set `use_cuda=False`.
# + training_set = torch.Tensor(list(train_data_['state'])) training_set = training_set[torch.randperm(training_set.size()[0])] test_set = torch.Tensor(list(test_data_['state'])) lr = 0.001 k_learning = 1 batch_size = 100 nb_epoch = 2000 k_sampling = 1 rbm = RBM(num_visible= training_set.shape[1], num_hidden= training_set.shape[1], bias= False, T= 1.0, use_cuda= True) rbm.learn(training_set= training_set, test_set= test_set, lr= lr, nb_epoch= nb_epoch, batch_size= batch_size, k_learning= k_learning, k_sampling = k_sampling, verbose= 1) # - # ### Saving the trained model # + nb_epoch= rbm.num_train_epochs() Nv= training_set.shape[1] Nh= training_set.shape[1] name = 'RBM_model_T_complete_nv%d_nh%d_lr%.1E_k%d_bsize%d_nepochs%d' % (Nv, Nh, lr, k_learning, batch_size, nb_epoch) PATH = '../RBM_trained_models/'+ name + '.pt' torch.save(rbm, PATH) # - # ### Weights distribution # + W, v, h = rbm.parameters() del v del h torch.cuda.empty_cache() # + W_ = W.cpu().numpy().reshape((W.shape[0]*W.shape[1])) # Plot normalized histogram plt.hist(W_, bins= 1000, density= True) # Maximum and minimum of xticks to compute the theoretical distribution x_min, x_max = min(plt.xticks()[0]), max(plt.xticks()[0]) domain = np.linspace(x_min, x_max, len(W_)) # Fitting a normal distribution muW_, sigmaW_ = stats.norm.fit(W_) plot_pdf = stats.norm.pdf(domain, muW_, sigmaW_) # Fitting the PDF in the interval plt.plot(domain, plot_pdf, linewidth= 2.5, label= '$\mu= %f$ \n$\sigma$ = %f' % (muW_, sigmaW_ )) plt.title('Fitting a Normal Distribution for the weights ${\cal W}$') plt.xlim([-1, 1]) plt.legend() plt.show() # -
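# To reuse the model saved above in another notebook, the standard PyTorch round trip is `torch.load` on the same `PATH`. This is just the usual pattern for objects saved with `torch.save`, sketched here; the `RBM` class from `../modules` must be importable for unpickling, and `map_location` is only needed when loading a GPU-trained model on a CPU-only machine.

# +
rbm_loaded = torch.load(PATH, map_location=torch.device('cpu'))
W_loaded, v_loaded, h_loaded = rbm_loaded.parameters()
print(W_loaded.shape)
# -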
training_RBM/.ipynb_checkpoints/RBM_L100_train_dataset_V_complete-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Exercise Sheet 02 (Connectionist Neurons and Multi Layer Perceptrons) # # ## Group: ALT # # ### Exercise H2.1: Connectionist Neuron # + import math import numpy as np import matplotlib.pyplot as plt import pandas as pd import seaborn as sns # %matplotlib inline # + #importing and visualizing the data ds = pd.read_csv('applesOranges.csv') ds.columns = ['x1','x2','y'] ds.head() # - # y= 0 indicates that the sample is that of an “apple”. # # y= 1 assigns an observation to “orange”. # #### a) Plot the data in a scatter plot (x2vs.x1). Mark the points with different colors to indicate the type of each object. # + #Separating apples and oranges y0 = ds[ds.y == 0] #apples y1 = ds[ds.y == 1] #oranges fig = plt.figure() ax1 = fig.add_subplot(111) ax1.scatter(y0.x1,y0.x2,s=10, c='r', marker="s", label='apple') ax1.scatter(y1.x1,y1.x2,s=10, color='orange', marker="o" ,label='orange') plt.xlabel('$x_1$') plt.ylabel('$x_2$', rotation='horizontal') plt.title("Plot of $x_2$ vs $x_1$") plt.legend(loc='best'); plt.show() # - # #### b) For each of these weight w # (i) determine % correct classifications ρ of the corresponding neuron and # # (ii) plot a curve showing ρ as a function of γ # + #parameters to be used in the connectionist neuron theta = 0. gamma = np.linspace(0,180,19) gamma_rad = np.radians(gamma) w = np.vstack((np.cos(gamma_rad),np.sin(gamma_rad))).T #prepend the bias w = np.insert(w, 0, theta, axis=1) inputs = np.vstack((ds.x1,ds.x2)).T #prepend the bias x0 = 1 inputs = np.insert(inputs, 0, 1., axis=1).T tot_input = np.matmul(w,inputs) # + y_tot = np.heaviside(tot_input, 1) y_tot #each line in y_tot corresponds to the predicted value of all y from the data #for that specific pair of weights #First line: first pair of weights applied to the entire data yT = np.asarray(ds.y) #True values of y # + equals = (yT==y_tot) equals #which of the calculated outputs are equal to the real outputs # + rho_partial = np.sum(equals, axis =1) rho = (rho_partial/200.)*100 #i-th entry corresponds to the % of right results of the i-th pair of weights plt.plot(gamma,rho) plt.xlabel('$\gamma$ $(^{o})$') plt.ylabel(r'$\rho$ (%)', rotation='horizontal') plt.title(r"Plot of $\rho$ vs $\gamma$") plt.show() # - rho # #### c) Out of the 19 weight vectors from above, pick the w that yields the best performance. 
# w that yields best performance max_arg = np.argmax(rho) max_w = w[max_arg] print(max_w[1:]) # #### Now,vary the bias θ∈[−3,3] and pick the value of θ that gives the best performance # + theta_vec = -(np.linspace(-3,3,19)) w_line = np.tile(max_w[1:], (19, 1)) theta_w = np.column_stack((theta_vec,w_line)) theta_w # + #multiplying the inputs and the matrix of weights tot_input_theta = np.matmul(theta_w,inputs) #inserting into the heaviside function y_tot_theta = np.heaviside(tot_input_theta, 1) y_tot_theta # + #comparing with the true value for y equals_theta = (yT==y_tot_theta) equals_theta # + #determining the correct classifications rho_partial_theta = np.sum(equals_theta, axis =1) rho_theta = (rho_partial_theta/200.)*100 rho_theta # + #choosing the theta that maximizes the percentage of correct classifications max_arg_theta = np.argmax(rho_theta) max_w_theta = theta_w[max_arg_theta] -max_w_theta[0] # - # Which means that higher performance is achieved with: # # w = [0.93969262 , 0.34202014] and # # $\theta$ = 0.333. # #### d) Plot the data points and color them according to the predicted classification when using the w and θ that led to the highest performance. Plot the weight vector w in the same plot. How do you interpret your results? input_end = np.matmul(max_w_theta,inputs) y_pred = np.heaviside(input_end, 1) # + data_pred = np.vstack((ds.x1,ds.x2, y_pred)) apples = np.array((data_pred[0][data_pred[2]==0],data_pred[1][data_pred[2]==0])) oranges = np.array((data_pred[0][data_pred[2]==1],data_pred[1][data_pred[2]==1])) # + fig2 = plt.figure() ax2 = fig2.add_subplot(111) ax2.scatter(apples[0],apples[1],s=10, c='r', marker="s", label='apple') ax2.scatter(oranges[0],oranges[1],s=10, color='orange', marker="o" ,label='orange') plt.xlabel('$x_1$') plt.ylabel('$x_2$', rotation='horizontal') plt.title("Plot of $x_2$ vs $x_1$ with the predicted classifications") plt.arrow(0,0, max_w_theta[1],max_w_theta[2],length_includes_head=True, head_width=0.08, head_length=0.2 ) plt.legend(loc='best'); plt.show() # - # _Interpretation:_ # The weight vector $\mathbf{w}$ is the normal vector of the hyperplane. Therefore, $\mathbf{w}$ represents the orientation of the hyperplane. As per convention, $\mathbf{w}$ points in the direction of y_pred = 1. # #### e) Find the best combination of w and θ by exploring all combinations of γ and θ (within a reasonable range and precision). Compute and plot the performance of all combinations in a heatmap. 
# + size = 100 gamma = np.linspace(0,180,size) gamma_rad = np.radians(gamma) w = np.vstack((np.cos(gamma_rad),np.sin(gamma_rad))).T theta_vec = np.linspace(-3,3,size) # + from matplotlib.ticker import FormatStrFormatter combined = np.zeros((size,size)) for i in range(size): for j in range(size): weight = np.insert(w[i], 0, -theta_vec[j], axis=0) temp_input = np.matmul(weight,inputs) temp_y = np.heaviside(temp_input, 1) temp_equals = (yT==temp_y) temp_performance = np.sum(temp_equals, axis =0) performance = (temp_performance/200.)*100 combined[i][j] = performance num_ticks = 5 # the index of the position of yticks yticks = np.linspace(0, size - 1, num_ticks, dtype=np.int) xticks = np.linspace(0, size - 1, num_ticks, dtype=np.int) # the content of labels of these yticks xticklabels = [np.round(theta_vec[idx]) for idx in yticks] yticklabels = [np.round(gamma_rad[idx]) for idx in xticks] ax = sns.heatmap(combined, yticklabels=yticklabels, xticklabels=xticklabels) ax.set_yticks(yticks) ax.set_xticks(xticks) ax.set_ylabel('$\gamma$(rad)') ax.set_xlabel(r'$\theta$') ax.set_title(r'Performance of all combinations of $\gamma$ and $\theta$') plt.show() # - # + result = np.where(combined == np.amax(combined)) listOfCordinates = list(zip(result[0], result[1])) np.amax(combined) # - # The best performance is 92% correct classifications. # + best_weight = w[listOfCordinates[0][0]] best_theta = theta_vec[listOfCordinates[0][1]] best_weight # - -best_theta # The combination that yields the best performance is: # # w = [0.70147489, 0.71269417] # # $\theta$ = 0.333 # #### f) Can the grid-search optimization procedure used in (e) be applied to any classification problem? Discuss potential problems and give an application example in which the above method must fail. # # XOR problem? For a classification problem that is not linearly separable we need a multilayer perceptron, but this means we would have a matrix of weights and a bias for each layer and searching through all the possible combinations has a high computational cost. # ### Exercise H2.2: Multilayer Perceptrons (MLP) # #### a) Create 50 independent MLPs with Nhid= 10 hidden units by sampling for each MLP a set of random parameters {w211i,w10i1,bi}, i= 1,...,10. # + weight21 = np.random.standard_normal((50,10)) weight10 = np.random.normal(loc=0.0, scale=2.0, size=(50,10)) bi = np.random.uniform(low=-2., high=2.0, size=(50,10)) #each row of the above is the parameter for one MLP def mlp(w21,w10,b,x): transf = np.tanh(x*w10 - b) nodes = w21*transf y = np.sum(nodes,axis=1) return y ############################################################################## x = np.linspace(-2,2,50) y_tot = [mlp(weight21,weight10,bi,x[i]) for i in range(len(x))] x_expand = np.ones((50,len(x)))*x for i in range(len(x)): plt.plot(x_expand[:,i], y_tot[i], '.') plt.xlabel("x") plt.ylabel("y(x)") plt.title("Response of each MLP for different values of x, with $w_{i1}^{10}$ ~ N(0,2)") plt.show() # + t_2 = list(zip(*y_tot)) for i in range(50): plt.plot(x, t_2[i]) plt.xlabel("x") plt.ylabel("y(x)") plt.title("Response of the 50 MLP, y(x) with $w_{i1}^{10}$ ~ N(0,2)") plt.show() # - # #### c) Repeat this procedure using a different intialization scheme for the weights of the hidden neurons: w10i1∼N(0,0.5). What difference can you observe? 
# + weight105 = np.random.normal(loc=0.0, scale=0.5, size=(50,10)) #changing the weights of the hidden neurons y_tot_5 = [mlp(weight21,weight105,bi,x[i]) for i in range(len(x))] for i in range(50): plt.plot(x_expand[:,i], y_tot_5[i], '.') plt.xlabel("x") plt.ylabel("y(x)") plt.title("Response of each MLP for different values of x, with $w_{i1}^{10}$ ~ N(0,0.5)") plt.show() # - t = list(zip(*y_tot_5)) for i in range(50): plt.plot(x, t[i]) plt.xlabel("x") plt.ylabel("y(x)") plt.title("Response of the 50 MLP, y(x), with $w_{i1}^{10}$ ~ N(0,0.5)") plt.show() # # What differences can we observe? # # ## There are less abrupt changes in the values of y(x) when we use $w_{i1}^{10}$ ~ N(0,0.5). But y(x) is still in the same range of values. # #### d)Compute the mean squared error (MSE) between each of these 2×50 (50 from each of the above two initialization procedures) input-output functions and the functiong(x) =−x.For each of the two initialization procedures, which MLP approximates g best? Plot y(x) for these two MLPs. # + gT = -x #true values of the function t = list(zip(*y_tot_5)) t_2 = list(zip(*y_tot)) MSE = np.zeros((50,2)) for j in range(50): MSE[j][0] = (1/len(gT))*np.sum((gT - t_2[j])**2) MSE[j][1] = (1/len(gT))*np.sum((gT - t[j])**2) #Finding the minimimum MSE min_2 = np.min(MSE[:,0]) #min MLP with N(0,2) min_5 = np.min(MSE[:,1]) #min MLP with N(0,0.5) print(min_2,min_5) # + #The minimum corresponds to the MLP mlp_2 = np.argmin(MSE[:,0]) mlp_5 = np.argmin(MSE[:,1]) print(mlp_2,mlp_5) # - plt.plot(x, t_2[mlp_2], label='~N(0,2)') plt.plot(x, t[mlp_5], label='~N(0,0.5)') plt.plot(x,gT,label='g(x)') plt.xlabel("x") plt.ylabel("y(x)") plt.legend() plt.title("y(x) for the 2 MLP with min(MSE)") plt.show()
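# A small optional addition (not part of the original exercise): since `MSE` holds one column per initialization scheme, plotting the two columns as histograms makes the comparison from (d) easier to see across all 50 MLPs.

# +
plt.hist(MSE[:, 0], bins=15, alpha=0.5, label='$w_{i1}^{10}$ ~ N(0,2)')
plt.hist(MSE[:, 1], bins=15, alpha=0.5, label='$w_{i1}^{10}$ ~ N(0,0.5)')
plt.xlabel("MSE with respect to g(x) = -x")
plt.ylabel("count")
plt.legend()
plt.show()
# -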
ALT_Laura.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.7.6 64-bit (''base'': conda)' # language: python # name: python37664bitbaseconda70fb04b0bd9543d0a4d5588de79b26c5 # --- # << [第七章:高级深度学习最佳实践](Chapter7_Advanced_deep_learning_best_pratices.ipynb)|| [目录](index.md) || [第九章:总结](Chapter9_Conclusions.ipynb) >> # # 第八章:生成模型深度学习 # # > The potential of artificial intelligence to emulate human thought processes goes beyond # passive tasks such as object recognition, or mostly reactive tasks such as driving a car. It # extends well into creative activities. When I first made the claim that in a not-so-distant # future, most of the cultural content that we consume will be created with heavy help from # AIs, I was met with utter disbelief, even from long-time machine learning practitioners. # That was in 2014. Fast forward three years, and the disbelief has receded—at an # incredible speed. In the summer of 2015, you were entertained by Google’s Deep Dream # algorithm turning an image into a psychedelic mess of dog eyes and pareidolic artifacts; # in 2016 you used the Prisma application to turn your photos into paintings of various # styles. In the summer of 2016, a first experimental short movie, Sunspring , was directed # using a script written by a LSTM—complete with dialogue lines. Maybe you even # recently listened to music tentatively generated by a neural network. # # 人工智能来模拟人类思维过程除了前面那些被动任务,比方说目标识别,或者很多响应式任务,比方说车辆驾驶之外,还能拓展创造性活动的领域。当作者首次断言在不久的将来,大多数我们消费的文化内容都会在AI的帮助下完成,遇到了很多的怀疑,这些怀疑甚至来自多年的参与机器学习的研究人员。那是在2014年,仅仅三年后,这些怀疑开始逐渐散去。在2015年夏天,谷歌推出了一个Deep Dream算法能够将图像转换成具有魔幻色彩的狗眼睛和古董的图像,吸引了很多人的注意;在2016年用户可以使用Prisma应用来将自己的照片转换成不同风格的画像;在2016年夏天,一部实验性的短电影叫Sunspring被摄制出来,其中的剧本使用了LSTM生成。很有可能最近你听到的一些音乐也是由神经网络申城的。 # # > Granted, the artistic productions we have seen from AI so far are all fairly # low-quality. AI is not anywhere close to rivaling human screenwriters, painters and # composers. But replacing humans was always besides the point: artificial intelligence is # not about replacing our own intelligence with something else, it is about bringing into our # lives and work more intelligence, intelligence of a different kind. In many fields, but # especially in creative ones, AI will be used by humans as a tool to augment their own # capabilities: more augmented intelligence than artificial intelligence. # # 诚然我们目前看到的那些AI艺术创作的质量都还很低。AI距离与人类剧作家、画家和作曲家竞争还差距着十万八千里。但实际上AI的目标永远不是取代人类:人工智能不是为了将人类的只能取代变成另一种智能,而是为了为人类的生活和工作带来更多的智能,不同形式的只能。在许多领域中,特别是创造性领域中,AI将称为人类的工具并增强人类的能力:更像增强智能而不是人工智能。 # # > A large part of artistic creation consists of simple pattern recognition and technical # skill. And that is precisely the part of the process that many find less attractive, even # skippable. That’s where AI comes in. Our perceptual modalities, our language, our # artworks all have statistical structure. Learning this structure is precisely what deep # learning algorithms excel at. Machine learning models can learn the statistical "latent # space" of images or music or even stories, and they can then "sample" from this space, # creating new artworks with similar characteristics as what the model has seen in its # training data. Naturally, such sampling is hardly an act of artistic creation in itself. 
It is a # mere mathematical operation: the algorithm has no grounding in human life, human # emotions, our experience of the world; instead it learns from an "experience" that has # little in common with ours. It is only our interpretation, as human spectators, that will # give meaning to what the model generates. But in the hands of a skilled artist, # algorithmic generation can be steered to become meaningful—and beautiful. Latent # space sampling can become a brush that empowers the artist, augments our creative # affordances, expands the space of what we can imagine. What’s more, it can make # artistic creation more accessible by eliminating the need for technical skill and # practice—setting up a new medium of pure expression, factoring art apart from craft. # # 艺术创作中的一大部分都含有简单的模式识别和技术工作。这也是很多人认为不够有趣的地方,甚至可以跳过的部分。这些就是AI能够进入的部分。我们的感知模型,我们的语言,我们的艺术品都有着统计学结构。从这些结构中学习正是深度学习算法擅长之处。机器学习模型可以从图像、音乐或者甚至是故事中学习到统计学的潜在空间,然后就能在空间中取样,从而创作一件与模型训练数据具有相似特征的新艺术作品。很显然,这样的取样行为很难认为是一种艺术创作。它仅仅就是一个数学运算:使用的算法没有任何对人类生活、情感、世界观的认知,而是从“经验”中进行学习,并不具有我们的共情能力。它创造出来的作品只有通过人类观众的解读才能赋予意义。但是对于高超的艺术家来说,如果掌握了这种技巧,算法生成的作品可被引导到有意义和优美的方向。潜在空间取样可以成为艺术家的神奇画笔,增强我们的创造性灵感,扩展我们的想象空间。更加有用的是,它能通过消除对艺术家技巧和技艺训练的要求使得艺术创作变得更加容易,构建出一种全新的纯表达的媒介,将艺术领域和工艺领域分开。 # # > <NAME>, a visionary pioneer of electronic and algorithmic music, beautifully # expressed this same idea in the 1960s, in the context of the application of automation # technology to music composition: # # > # ``` # "Freed from tedious calculations, the composer is able to devote himself to the # general problems that the new musical form poses and to explore the nooks and crannies # of this form while modifying the values of the input data. For example, he may test all # instrumental combinations from soloists to chamber orchestras, to large orchestras. With # the aid of electronic computers the composer becomes a sort of pilot: he presses the # buttons, introduces coordinates, and supervises the controls of a cosmic vessel sailing in # the space of sound, across sonic constellations and galaxies that he could formerly # glimpse only as a distant dream." # ``` # # <NAME>作为一个电子和算法音乐的先驱者,在60年代就在自动化音乐谱曲应用方面做过相关的描述: # # ``` # “将作曲家从枯燥乏味的计算当中释放出来,能够让他们更加专注于曲目的共性问题,如一种新的音乐形式,以及在这种形式下来探索各种细枝末节,通过修改输入数据来得到最理想的结果。例如,作曲家可以测试所有的演奏形式,从独奏到小乐队到交响乐团。有了计算机帮助的作曲家就像某种航天员:他按下按钮,输入坐标,然后监控着宇宙飞船在音乐空间中飞行的轨迹,从而能够穿越各种星座甚至星系,而这之前,可能这些地方只能通过望远镜匆匆一瞥。” # ``` # # > In this chapter, we will explore under various angles the potential of deep learning to # augment artistic creation. We will review sequence data generation (which can be used to # generate text or music), Deep Dreams, and image generation using both Variational # Auto-Encoders and Generative Adversarial Networks. We will get your computer to # dream up content never seen before, and maybe, we will get you to dream too, about the # fantastic possibilities that lie at the intersection of technology and art. # # 在本章中我们会从多个角度介绍深度学习在增强艺术创作上的能力。我们会涵盖序列数据生成(可以用来创作文字或音乐),Deep Dreams,以及图像生成的两种方式变分自动编码和生成对抗网络。本章会让你的计算机创作出之前从未想象过的成果,也有可能本章会让读者也开始梦想未来这种科技与艺术结合之后的奇妙世界。 # # > You will find five sections in this chapter: # # > - Text generation with LSTM, where you will use the recurrent networks you discovered in # Chapter 7 to dream up a pastiche of Nietzschean philosophy, character by character. # - Deep Dreams, where you will find out what dreams look like when all you know of the # world is the ImageNet dataset. # - Neural style transfer, where you will learn to apply the style of a famous painting to your # vacation pictures. 
# - Variational Autoencoders, where you find out about "latent spaces" of images, and how # to use them for creating new images. # - Adversarial Networks—deep networks that fight each other in a quest to produce the # most realistic pictures possible. # # > Let’s get started. # # 你可以在本章中学习到下面5方面内容: # # - 使用LSTM生成文本,你会使用我们在第七章中学习的循环网络来模仿生成尼采的哲学文章,一篇接一篇。 # - Deep Dreams,你会看到如果世界是由ImageNet数据集组成的话,它将会变成什么样子。 # - 神经风格转移,你可以学习到如何将名画作的风格应用到你自己的照片上。 # - 变分自动编码,你可以学习如何找到潜在空间,以及如何使用潜在空间创作新图像。 # - 对抗网络,深度网络能够互相对抗以产生最接近真实的照片。 # # 让我们开始这一章。 # ## 8.1 使用LSTM生成文本 # # > In this section, we present how recurrent neural networks can be used to generate # sequence data. We will use text generation as an example, but the exact same techniques # can be generalized to any kind of sequence data: you could apply it to sequences of # musical notes in order to generate new music, you could apply it to timeseries of brush # stroke data (e.g. recorded while an artist paints on an iPad) to generate paintings # stroke-by-stroke, and so on. # # 在本节中我们将介绍循环神经网络用来生成序列数据的方法。我们会使用文本生成作为一个例子,但是相同的技巧能够应用在任何序列数据生成任务上:你可以将它应用在一系列音符上以产生乐谱,你可以将它应用在一个时序的画笔描绘数据上(例如一个画家在iPad上作画的记录)来一笔一笔的产生画作,等等。 # # > Sequence data generation is no way limited to artistic content generation, either. It # has been successfully applied to speech synthesis, and dialog generation for chatbots. The # "smart reply" feature that Google released in 2016, capable of automatically generating a # selection of quick replies to your emails or text messages, is powered by similar # techniques. # # 序列数据生成不仅限于艺术内容生成,它还被成功的应用到了语音生成和对话机器人领域。谷歌在2016年发布的“smart reply”特性,能够为你的电子邮件或文字短信息自动产生快速的回复,也是使用类似的技术。 # ### 8.1.1 生成循环网络简史 # # > In late 2014, few people had ever heard the abbreviation "LSTM", even in the machine # learning community. Successful applications of sequence data generation with recurrent # networks only started appearing in the mainstream in 2016. But these techniques actually # have a fairly long history, starting with the development of the LSTM algorithm by # Hochreiter in 1997. This new algorithm was used early on to generate text character by # character. # # 在2014年底的时候,即使在机器学习社区中也很少人听说过缩写“LSTM”。使用循环网络生成序列数据的成功应用直到2016年才开始进入主流。但其实这项技术实际上有着很长的历史,可以回溯到1997年Hochreiter发明LSTM的时候。当时这个新算法用来实现字符层级的文本生成。 # # > In 2002, <NAME>, then at Schmidhuber’s lab in Switzerland, applied LSTM to # music generation for the first time, with promising results. <NAME> is now a # researcher at Google Brain, and in 2016 he started a new research group there, called # Magenta, focused on applying modern deep learning techniques to produce engaging # music. Sometimes, good ideas take fifteen years to get started. # # 瑞士Schmidhuber实验室的<NAME>在2002年第一次将LSTM应用到了音乐生成,获得了不错的结果。<NAME>现在是谷歌Brain的一名研究人员,他在2016年成立了一个新的研究小组,叫做Magenta,专注于应用现代深度学习技术来生成优秀的音乐。有的时候,一个好的想法需要15年才能开始实践。 # # > In the late 2000s and early 2010, <NAME> did important pioneering work on # using recurrent networks for sequence data generation. In particular, his 2013 work on # applying Recurrent Mixture Density Networks to generate human-like handwriting using # timeseries of pen positions, is seen by some as a turning point. This specific application # of neural networks at that specific moment in time captured for me the notion of # "machines that dream" and was a significant inspiration around the time I started # developing Keras. 
<NAME> left a similar commented-out remark hidden in a 2013 # LateX file uploaded to the preprint server Arxiv.org : "generating sequential data is the # closest computers get to dreaming" . Several years later, we have come to take a lot of # these developments for granted, but at the time, it was hard to watch Graves' # demonstrations and not walk away awe-inspired by the possibilities. # # 在00年代末和10年代初的时候,<NAME>在使用循环网络来生成序列数据方面做了许多重要的领先贡献。特别要指出的是,他在2013年在笔触时序数据使用循环混合全连接网络来生成人类笔迹的实验,经常被视为一个转折点。这个神经网络的应用当时正好与作者的“能梦想的机器”观点迎合,因此成为了作者开发Keras的一个重要激励。<NAME>在2013年提交到预付印平台Arxiv.org上的论文中,使用Latex注释了一句话,表达了相同的观点:“生成序列数据是最接近计算机能梦想的方式”。许多年以后,我们已经将这方面的进展视作习以为常,但在当时,很难不被Grave给我们展现的内容惊呆,然后以令人敬畏的态度来面对未来的这种可能性。 # # > Since then, recurrent neural networks have been successfully used for music # generation, dialogue generation, image generation, speech synthesis, molecule design, # and were even used to produce a movie script that was then cast with real live actors. # # 从那之后,循环神经网络已经被成功的运用到了音乐生成、对话生成、图像生成、语音生成、高分子设计,甚至还被运用到产生由真实演员出演的电影剧本之中。 # ### 8.1.2 我们该如何产生序列数据? # # > The universal way to generate sequence data in deep learning is to train a network # (usually either a RNN or a convnet) to predict the next token or next few tokens in a # sequence, using the previous tokens as input. For instance, given the input "the cat is on # the ma" , the network would be trained to predict the target "t" , the next character. As # usual when working with text data, "tokens" are typically words or characters, and any # such network that can model the probability of the next token given the previous ones is # called a language model . A language model captures the latent space of language, i.e. its # statistical structure. # # 在深度学习中生成序列数据一个通用方法是训练一个模型(通常是一个RNN或CNN)来预测序列中的下一个标记或者下几个标记,使用前面的标记作为输入。例如,给定输入“the cat is on the ma”,网络可能被训练来预测得到目标“t”,也就是下一个字符。通常当处理文本数据时,“标记”会是单词或字符,这样的网络可以根据之前的标记获得下一个标记的概率,被称为语言模型。语言模型能够感知到语言的潜在空间,也就是它的统计学结构。 # # > Once we have such a trained language model, we can sample from it, i.e. generate # new sequences: we would feed it some initial string of text (called "conditioning data"), # ask it to generate the next character or the next word (we could even generate several # tokens at once), then add the generated output back to the input data, and repeat the # process many times (see Figure 8.1). This loop allows to generate sequences of arbitrary # length that reflect the structure of the data that the model was trained on, i.e. sequences # that look almost like human-written sentences. In our case, we will take a LSTM layer, # feed it with strings of N characters extracted from a text corpus, and train it to predict # character N+1 . The output of our model will be a softmax over all possible characters: a # probability distribution for the next character. This LSTM would be called a # "character-level neural language model". # # 我们有了这样的训练过的语言模型之后,我们就可以从中取样,也就是生成新的序列:我们可以将一些初始化的文本字符串输入给模型(被称为“条件数据”),然后让模型生成下一个字符或者下一个单词(甚至可以一次生成多个标记),然后将生成的输出放回输入数据中,多次重复这个过程(参见图8-1)。这个循环能够产生任意长度的序列数据,能够反映模型训练得到的统计学结构,也就是说获得一个几乎类似人类生成的序列数据。在我们的场景中,我们会使用一个LSTM层,用文本语料库中提取的N个字符作为输入,然后训练模型能够预测第N+1个字符。模型的输出会是所有可能字符的softmax结果:就是下一个字符的概率分布。这个LSTM层被称为“字符级神经语言模型”。 # # ![language model](imgs/f8.1.jpg) # # 图8-1 使用语言模型生成字符级文本的过程 # ### 8.1.3 取样策略的重要性 # # > When generating text, the way we pick the next character is crucially important. A naive # approach would be "greedy sampling", consisting in always choosing the most likely # next character. 
However, such an approach would result in very repetitive and predictable # strings that don’t look like coherent language. A more interesting approach would consist # in making slightly more surprising choices, i.e. introducing randomness in the sampling # process, for instance by sampling from the probability distribution for the next character. # This would be called "stochastic sampling" (you recall that "stochasticity" is what we call # "randomness" in this field). In such a setup, if "e" has a probability 0.3 of being the next # character according to the model, we would pick it 30% of the time. Note that greedy # sampling can itself be cast as sampling from a probability distribution: one where a # certain character has probability 1 and all others have probability 0. # # 当生成文本时,我们选取下一个字符的方式是非常重要的。一个原始的解决方法是“贪婪取样”,也就是永远选择最大似然值的下一个字符。但是这样的做法会导致非常重复和可预测的字符串,使得语义看起来不连贯。一个更有趣的方法包括在取样中使用一些更加惊奇的策略,或者说在其中引入一些随机性,比方说在选取下一个字符时使用概率分布来取样。这被称为“随机取样”。在这个方案中,如果“e”根据模型计算有着0.3的概率,我们会在30%的时间中选择它。值得一提的是贪婪取样也算是随机取样的一种:只不过其中一个字符的概率为1而其他字符的概率都是0。 # # > Sampling probabilistically from the softmax output of the model is neat, as it allows # even unlikely characters to be sampled some of the time, generating more # interesting-looking sentences and even sometimes showing creativity by coming up with # new, realistic-sounding words that didn’t occur in the training data. But there is one issue # with this strategy: it doesn’t offer a way to control the amount of randomness in the # sampling process. # # 从模型softmax的输出中使用随机取样是很灵活的,因为它某些时候能够选取那些不太可能的字符,从而生成更加有趣的句子,甚至有时还能生成一些新奇的听起来很真实的单词,即使它们没有出现在训练数据中。但是这里还有一个问题:它没有提供一个方法来控制取样过程中的随机程度。 # # > Why would we want more or less randomness? Consider an extreme case: pure # random sampling, i.e. drawing the next character from a uniform probability distribution, # where every character is equally likely. This scheme would have maximum randomness; # in other words, this probability distribution would have maximum "entropy". Naturally, it # would not produce anything interesting. At the other extreme, greedy sampling, which # doesn’t produce anything interesting either, has no randomness whatsoever: the # corresponding probability distribution has minimum entropy. Sampling from the "real" # probability distribution, i.e. the distribution that is output by the model’s softmax # function, constitutes an intermediate point in between these two extremes. However, # there are many other intermediate points of higher or lower entropy that one might want # to explore. Less entropy will give the generated sequences a more predictable structure # (and thus they will potentially be more realistic-looking) while more entropy will result # in more surprising and creative sequences. When sampling from generative models, it is # always good to explore different amounts of randomness in the generation process. Since # the ultimate judge of the interestingness of the generated data is us, humans, # interestingness is highly subjective and there is no telling in advance where the point of # optimal entropy lies. 
# # 为什么我们需要更多或者更少的随机性?考虑一个极端的情景:完全随机取样,也就是按照平均概率分布来选取下一个字符,那么每个字符都具有相同的似然。这个情境中有着最大的随机性;或者说,这个概率分布有着最大的“熵”。很显然它不会生成任何有趣的东西,同样的另一种极端,贪婪取样,也不会生成任何有趣的东西:这时的概率分布有着最小的熵。从“真实”的概率分布中采样,也就是从模型的softmax激活函数的输出分布中进行采样,使用了这两个极端之间的一个中间点。然而这两个极端之间还存在着很多其他的更高熵或者更低熵的点可以探索。低熵的点会带来更加可预测的生成序列结构(并且它们应该看起来更加真实)而高熵的点会带来更加令人惊奇和创造性的生成序列。当从生成模型中进行采样时,探索各种可能的随机性永远是个好主意。因为最终判定生成数据的有趣程度的人是我们自己,人类,有趣性是高度具有主观性的因此没有方法提前知道哪个点的熵是最合适的。 # # > In order to control the amount of stochasticity in the sampling process, let’s introduce # a parameter called "softmax temperature" that characterizes the entropy of the probability # distribution used for sampling, or in other words, that characterizes how surprising or # predictable our choice of next character will be. Given a temperature value, a new # probability distribution is computed from the original one (the softmax output of the # model) by reweighting it in the following way: # # 为了能够控制取样过程中的随机性,我们会引入一个参数叫做“softmax温度”用来表示取样时的概率分布熵,或者也可以说,用来表示下一个字符的选择有多出乎意料或者可预测。给定一个温度值后,就可以按照原始分布(模型的softmax输出值)和温度值计算得到一个新的概率分布,如下: # + import numpy as np def reweight_distribution(original_distribution, temperature=.5): ''' 根据温度重新计算概率分布来控制熵的大小 参数: original_distribution: 一个1D概率Numpy向量,总和应该为1 temperature: 计算新的概率分布的熵因子 返回: 原始概率分布经过重新计算后得到的新的概率分布 ''' distribution = np.log(original_distribution) / temperature distribution = np.exp(distribution) # 经过运算后,概率分布的总和可能不再为1,我们需要将其正规化 return distribution / np.sum(distribution) # - # > Higher "temperatures" result in sampling distributions of higher entropy, that will # generate more surprising and unstructured generated data, while a lower temperature will # result in less randomness and much more predictable generated data. # # 更高的“温度”会获得更高熵的取样分布,也就是生成更加意料不到和非结构化数据,而更低的温度会获得更少随机性也就是更加可预测的数据。 # # ![diff entropy on same distribution](imgs/f8.2.jpg) # # 图8-2 在相同的softmax分布上进行重新分布:高温度=高确定性,低温度=高随机性 # # 8.1.4 实现字符级LSTM文本生成 # # > Let’s put these ideas in practice in a Keras implementation. The first thing we need is a # lot of text data that we can use to learn a language model. You could use any sufficiently # large text file or set of text files—Wikipedia, the Lord of the Rings, etc. In this example # we will use some of the writings of Nietzsche, the late-19th century German philosopher # (translated to English). The language model we will learn will thus be specifically a # model of Nietzsche’s writing style and topics of choice, rather than a more generic model # of the English language. # # 下面让我们在实践中使用Keras来实现上面的想法。第一步我们需要很多文本数据来学习一个语言模型。你可以使用任何足够大的文本文件或者全套的文本文件如维基百科、指环王等。在本例中,我们会使用尼采的一些著作(英文翻译版),他是19世纪晚期德国的哲学家。这样得到的语言模型将会具有尼采的写作风格和主题选择,而不是更加通用的英语模型。 # # #### 准备数据 # # > Let’s start by downloading the corpus and converting it to lowercase: # # 让我们首先下载语料库并将其转换成小写: # + from tensorflow import keras path = keras.utils.get_file('nietzsche.txt', origin='https://s3.amazonaws.com/text-datasets/nietzsche.txt') text = open(path).read().lower() len(text) # - # > Next, we will extract partially-overlapping sequences of length maxlen , one-hot # encode them and pack them in a 3D Numpy array x of shape (sequences, maxlen, # unique_characters) . Simultaneously, we prepare a array y containing the # corresponding targets: the one-hot encoded characters that come right after each # extracted sequence. 
# # 接下来,我们会提取长度为maxlen的部分重叠的序列,然后进行one-hot编码并且打包成一个形状为(序列, maxlen, 独立字符)的一个3D Numpy数组中。同时,我们还需要准备一个目标y向量:也是每个提取到的序列后出现的字符相对应的one-hot编码。 # + # 提取字符序列的长度 maxlen = 60 # 取样新序列的步长值 step = 3 # 下面这个列表保存提取出来的序列 sentences = [] # 下面这个列表保存目标的字符(下一个字符) next_chars = [] for i in range(0, len(text) - maxlen, step): sentences.append(text[i: i + maxlen]) next_chars.append(text[i + maxlen]) print('Number of sequences:', len(sentences)) # 语料库中不同字符的集合 chars = sorted(list(set(text))) print('Unique characters:', len(chars)) # 下面是一个字典值,将不同字符映射成语料库中的序号 char_indices = dict((char, chars.index(char)) for char in chars) # 下一步是将这些字符进行one-hot编码 print('Vectorization...') x = np.zeros((len(sentences), maxlen, len(chars)), dtype=np.bool) y = np.zeros((len(sentences), len(chars)), dtype=np.bool) for i, sentence in enumerate(sentences): for t, char in enumerate(sentence): x[i, t, char_indices[char]] = 1 y[i, char_indices[next_chars[i]]] = 1 # - # #### 构建网络 # # > Our network is a single LSTM layer followed by a Dense classifier and softmax over all # possible characters. But let us note that recurrent neural networks are not the only way to # do sequence data generation; 1D convnets also have proven extremely successful at it in # recent times. # # 我们使用一个LSTM层然后跟着一个全连接分类器,在所有可能的字符上进行softmax运算。不过这里需要提出的是,循环神经网络并不是生成序列数据的唯一选择,1D卷积网络最近在这个领域也被证明会非常成功。 # + from tensorflow.keras import layers from tensorflow.keras.models import Sequential model = Sequential() model.add(layers.LSTM(128, input_shape=(maxlen, len(chars)))) model.add(layers.Dense(len(chars), activation='softmax')) # - # > Since our targets are one-hot encoded, we will use categorical_crossentropy as # the loss to train the model: # # 因为这里的目标是one-hot编码的,所以我们会使用`categorical_crossentropy`作为损失函数来训练模型: # + from tensorflow.keras.optimizers import RMSprop optimizer = RMSprop(lr=0.01) model.compile(loss='categorical_crossentropy', optimizer=optimizer) # - # #### 训练语言模型并且使用它来取样 # # > Given a trained model and a seed text snippet, we generate new text by repeatedly: # # > 1. Drawing from the model a probability distribution over the next character given the # text available so far # 2. Reweighting the distribution to a certain "temperature" # 3. Sampling the next character at random according to the reweighted distribution # 4. Adding the new character at the end of the available text # # 给定一个训练好的模型和一个种子文本片段,我们可以不断的生成新的文本: # # 1. 从模型中获得目前文本序列的下一个字符的概率分布。 # 2. 使用一个给定的“温度”重新得到一个新的分布。 # 3. 使用新的分布对下一个字符进行取样。 # 4. 将新取样的字符加入到文本的末尾。 # # > This is the code we use to reweight the original probability distribution coming out of # the model, and draw a character index from it (the "sampling function"): # # 下面是我们对概率分布进行重新权重然后获取下一个字符序号的代码(也就是“取样函数”): def sample(preds, temperature=1.0): preds = np.asarray(preds).astype('float64') preds = np.log(preds) / temperature exp_preds = np.exp(preds) preds = exp_preds / np.sum(exp_preds) probas = np.random.multinomial(1, preds, 1) return np.argmax(probas) # > Finally, this is the loop where we repeatedly train and generated text. We start # generating text using a range of different temperatures after every epoch. This allows us # to see how the generated text evolves as the model starts converging, as well as the # impact of temperature in the sampling strategy. 
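#
# 译者注:下面是一个小的使用示例(非原书内容),用一个假设的5字符预测分布来观察上面定义的sample函数在不同温度下的行为:温度越低,采样越集中在概率最大的索引上;温度越高,采样结果越分散。

# +
np.random.seed(0)
toy_preds = np.array([0.05, 0.1, 0.5, 0.25, 0.1])  # 假设这是模型对5个字符给出的softmax输出

for temperature in [0.2, 0.5, 1.0, 1.2]:
    samples = [sample(toy_preds, temperature) for _ in range(1000)]
    counts = np.bincount(samples, minlength=len(toy_preds))
    print('temperature =', temperature, '-> 各索引被采样的次数:', counts)
# -
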
# # 最后是下面的循环用来重复的训练和生成文本。我们在每次epoch之后都重新生成一个温度值。这能够让我们观察到生成文本是如何随着模型收敛进行变化的,同时看到温度对取样策略的影响。 # + import random import sys for epoch in range(1, 60): print('epoch', epoch) # 使用选取的文本数据 model.fit(x, y, batch_size=128, epochs=1) # Select a text seed at random start_index = random.randint(0, len(text) - maxlen - 1) original_text = text[start_index: start_index + maxlen] print('--- Generating with seed: "' + original_text + '"') for temperature in [0.2, 0.5, 1.0, 1.2]: generated_text = original_text print('------ temperature:', temperature) print(generated_text, end='') # We generate 400 characters for i in range(400): sampled = np.zeros((1, maxlen, len(chars))) for t, char in enumerate(generated_text): sampled[0, t, char_indices[char]] = 1. preds = model.predict(sampled, verbose=0)[0] next_index = sample(preds, temperature) next_char = chars[next_index] generated_text += next_char generated_text = generated_text[1:] print(next_char, end='') print() # - # > Here is what we get at epoch 20, long before the model has fully converged. We used # the random seed text "new faculty, and the jubilation reached its climax when kant". # # 当第20次迭代时,模型还未完全收敛。我们使用的种子文本是“e variety among germans--pardon # me for stating the fact that”。 # # > With temperature=0.2: # # 当温度为0.2时,生成的文本是: # # ``` # e variety among germans--pardon # me for stating the fact that the world and the develop of the spirit and the state of the conscience of the spirit and the morality of the sense of the same time in the spirit and that the strength and the spirit and the state of the spirit and the sense of the spirit and the special proposed the suffering the sure of the conscience, and the sense of the conscience. the spirit and the conscience. the sense of the same time a # ``` # # > With temperature=0.5: # # 当温度是0.5时: # # ``` # e variety among germans--pardon # me for stating the fact that we think and the desirable conscience. # # # 14 # # =a thing all the domain of the precisely all the wors as ssquention and in the special spirit and the species of the demonstration of explom, the chate and hastor and conscience of self-place of the sureropened and class of the sportis, and the fact and the puring in the states and art of the will to be conscience of the belief of the states of the sen # ``` # # > With temperature=1.0: # # 当温度是1.0时: # # ``` # e variety among germans--pardon # me for stating the fact that art origin the sarrowered here stoom age repeatable for difference on thoughts," "he taikee in a count and sissian talegnd themselves, the tetiour, the # tradition to hompened all the regream; # enthrne," the inners of his own toings for all general gald us sind in b onwemon, but as conscienced that the order of # the same tentifoundance of the precisetementing, as unreligious by destrains and !f # ``` # # > With temperature=1.2: # # 当温度是1.2时: # # ``` # e variety among germans--pardon # me for stating the fact that when the stasterment the be; insonsist" to fragrion become dol afluwhking # like indemonedgutory," "the # -are hate on "youghle culture afforne of allowple, his 'much-countencely acjoses"y. # hom, he # visits dutues to black it is no polleatian paltitice of the spirit of a favoured it # naturaless "many # things--in harms and even-blound because obndion to sacrangablay, nual path. # # 124. 
he proby have been t # ``` # # > At epoch 60, the model has mostly converged and the text starts looking significantly # more coherent: # # 在第48次迭代之后,模型已经基本上收敛了,因此产生的文本看起来更加的相关:(译者注:此处选择了损失最小的迭代来示例,而不是原文中的60,实际上迭代次数只有59次) # # > With temperature=0.2: # # 当温度是0.2时,生成的文本是: # # ``` # necessary for the purpose is # a little vivisection of the germans to be all the same to the same to the suppose something the state of the same to the same truth of the prooth and man and the state of the most present destination of the sense of the fact the world of the state of the greatest states and the same to the disposition of the same truth and man and the supposing and the supposed and interpretation of the same to the same interestion of the same # ``` # # > With temperature=0.5: # # 当温度是0.5时: # # ``` # necessary for the purpose is # a little vivisection of the german, # of the spirit, and at present of all of life. # # 15. the most problem of their life man earl one of the freedoms of villogion of the heart to the dignous # interpretated the world the last the most interpretation and distinction, and the soul. the sense of the feelings is the contain even something and finer indianicn, and also indianicn of the early enough silence of a more growthing. the happin # ``` # # > With temperature=1.0: # # 当温度是1.0时: # # ``` # necessary for the purpose is # a little vivisection of the germans and, good"; and that interestion of attertion. # # # 110 # # =constantly valition the primordiagants. then inglinihorar, and solitudes up a reter--in the suppose of the community. the reason, allity for is a people is person to mys. the a # regarded odeaty # nationally # tomes result purpose right en of gratition. eagerated, hono # mineffing # seed--the # indiance # called # under cultive # original and moment, # indis # ``` # # > With temperature=1.2: # # 当温度是1.2时: # # ``` # necessary for the purpose is # a little vivisection of the germans--nature height. # # 126. no oight intempt-pretallents, to hidd-so purpose: "worlo of own asjrature, such although, caruses? have happent love affordness of all pariac". # # # 105 # atere tautised merules of fine indust ones; not. they gives gie 'menver ion one # by sole thingies of the through religios of different individuais intowar tro-first--the pleasion and condition # of my mints, with it; he ones # f # ``` # > As you can see, a low temperature results in extremely repetitive and predictable text, # but where local structure is highly realistic: in particular, all words (a word being a local # pattern of characters) are real English words. With higher temperatures, the generated # text becomes more interesting, surprising, even creative; it may sometimes invent # completely new words that sound somewhat plausible (such as "eterned" or # "troveration"). With a high temperature, the local structure starts breaking down and most # words look like semi-random strings of characters. Without a doubt, here 0.5 is the most # interesting temperature for text generation in this specific setup. Always experiment with # multiple sampling strategies! A clever balance between learned structure and randomness # is what makes generation interesting. 
#
# 正如你看到的结果,较低的温度会导致非常重复和可预测的文本,但是其局部结构是高度真实的:特别是所有的单词(一个单词就是字符的一种局部模式)都是真实的英语单词。而使用较高的温度生成的文本就变得更加有趣、更出人意料,甚至更有创造性,这种情况下有时候会发明一些听起来有几分可信的全新单词(例如“eterned”或者“troveration”)。在高温度下,文本的局部结构开始被打破,大多数的单词看起来像是半随机字符组成的字符串。毫无疑问,在这个特定配置下,0.5是文本生成最有趣的温度。一定要多尝试几种取样策略!在学习到的结构和随机性之间取得巧妙的平衡,正是让生成结果变得有趣的关键。
#
# > Note that by training a bigger model, longer, on more data, you can achieve generated
# samples that will look much more coherent and realistic than ours. But of course, don’t
# expect to ever generate any meaningful text, other than by random chance: all we are
# doing is sampling data from a statistical model of which characters come after which
# characters. Language is a communication channel, and there is a distinction between
# what communications are about, and the statistical structure of the messages in which
# communications are encoded. To evidence this distinction, here is a thought experiment:
# what if human language did a better job at compressing communications, much like our
# computers do with most of our digital communications? Then language would be no less
# meaningful, yet it would lack any intrinsic statistical structure, thus making it impossible
# to learn a language model like we just did.
#
# 这里还需要指明,如果你使用更大的模型、更长的训练时间、更多的数据,就能够获得比我们这里更加连贯和真实的生成结果。但是当然不要期望它能生成任何有意义的文本(除非纯属巧合):我们所做的一切只不过是从一个刻画“哪些字符出现在哪些字符之后”的统计模型中取样数据而已。语言是一种沟通渠道,而沟通所要表达的内容,与承载这些沟通的消息本身的统计学结构,是两回事。我们可以用下面这个思想实验来说明这种区别:如果人类语言能够更好地压缩通信内容,就像计算机对大部分数字通信所做的那样,会出现什么情况?那时语言所承载的意义并不会变少,但是它会缺乏任何内在的统计学结构,因此就无法像我们刚才那样训练出一个语言模型。

# #### 小结一下
#
# > - We can generate discrete sequence data by training a model to predict the next tokens(s)
# given previous tokens.
# - In the case of text, such a model is called a "language model" and could be based on
# either words or characters.
# - Sampling the next token requires balance between adhering to what the model judges
# likely, and introducing randomness.
# - One way to handle this is the notion of softmax temperature . Always experiment with
# different temperatures to find the "right" one.
#
# - 我们能够通过训练一个模型,根据前面的标记预测下一个标记,从而生成离散的序列数据。
# - 在文本领域,这样的模型被称为“语言模型”,模型可以建立在单词或者字符上。
# - 对下一个标记进行取样,需要在遵循模型给出的概率判断和引入随机性之间进行权衡。
# - 处理这个问题的一个办法是使用softmax温度。多实验各种的温度来找到“合适”的那个值。

# ## 8.2 Deep Dream
#
# > "Deep Dream" is an artistic image modification technique that leverages the
# representations learned by convolutional neural networks. It was first released by Google
# in the summer of 2015, as an implementation written using the Caffe deep learning
# library (this was several months before the first public release of TensorFlow). It quickly
# became an Internet sensation thanks to the trippy pictures it could generate, full of
# algorithmic pareidolia artifacts, bird feathers and dog eyes—a by-product of the fact that
# the Deep Dream convnet was trained on ImageNet, where dog breeds and bird species
# are vastly over-represented.
#
# “Deep Dream”是一种艺术性的图像修改技巧,它利用了卷积神经网络学习到的表现形式。Deep Dream是谷歌在2015年夏天首次发布的,当时使用的是Caffe深度学习框架(也就是在TensorFlow首次公开发布的几个月前)实现的。因为它能生成迷幻色彩的图像,它很快就成为互联网上的热点:这些图像中充满了算法性幻视的痕迹、鸟类羽毛和狗的眼睛,这是因为Deep Dream所用的卷积网络是在ImageNet上训练的,而这个数据集中狗的品种和鸟类物种的占比非常高。
#
# ![deep dream example](imgs/f8.3.jpg)
#
# 图8-3 Deep Dream生成图像的例子

# > The Deep Dream algorithm is almost identical to the convnet filter visualization
# technique that we introduced in Chapter 5, consisting in running a convnet "in reverse",
# i.e. doing gradient ascent on the input to the convnet in order to maximize the activation
# of a specific filter in an upper layer of the convnet.
Deep Dream leverages this same idea, # with a few simple differences: # # > - With Deep Dream, we try to maximize the activation of entire layers rather than that of a # specific filter, thus mixing together visualizations of large numbers of features at once. # - We start not from a blank, slightly noisy input, but rather from an existing image—thus # the resulting feature visualizations will latch unto pre-existing visual patterns, distorting # elements of the image in a somewhat artistic fashion. # - The input images get processed at different scales (called "octaves"), which improves the # quality of the visualizations. # # Deep Dream算法基本上与我们在第五章介绍的卷积网络过滤器可视化技术相同,不过是“反向”运行卷积网络,也就是在输入上进行梯度上升从而最大化卷积网络上层特定过滤器的激活输出。Deep Dream充分利用了这个办法,不过有一些简单的区别: # # - 在Deep Dream当中,我们尝试最大化整个层次的激活输出而不是特定的过滤器,因此可以一次性混合大量的视觉元素。 # - 我们不是从一个空白带有少量噪音的输入开始,而是从一个现有的图像开始,因此生成的视觉特征会锁定在已经存在的视觉模式上,然后以某种艺术形式对这张图像元素进行扭曲。 # - 输入的图像会使用不同的缩放进行处理(被称为“音阶”),这样能改进生成的视觉效果质量。 # # > Let’s make our own Deep Dreams. # # 下面让我们来构建自己的Deep Dreams。 # ### 8.2.1 在Keras中实现Deep Dream # # > We will start from a convnet pre-trained on ImageNet. In Keras, we have many such # convnets available: VGG16, VGG19, Xception, ResNet50... albeit the same process is # doable with any of these, your convnet of choice will naturally affect your visualizations, # since different convnet architectures result in different learned features. The convnet used # in the original Deep Dream release was an Inception model, and in practice Inception is # known to produce very nice-looking Deep Dreams, so we will use the InceptionV3 model # that comes with Keras. # # 我们会从在ImageNet上预训练的卷积网络开始。在Keras中,有着很多可用的预训练网络:VGG16,VGG19,Xception,ResNet50.....尽管这些模型都可以采取同样的处理过程,但对于卷积网络模型的选择肯定会影响最终的视觉结果,因为不同的卷积网络结构导致不同的认知特征。最早发布的Deep Dream中使用的Inception模型,而且在实践中Inception能够产生非常漂亮的Deep Dreams,所有我们将会使用Keras内置的InceptionV3模型。 # + from tensorflow.keras.applications import InceptionV3 from tensorflow.keras import backend as K # 我们不会重新训练这个模型,因此我们会禁用所有训练相关动作 K.set_learning_phase(0) # 下面构建一个InceptionV3模型,不引入其顶端的分类器 model = InceptionV3(weights='imagenet', include_top=False) # - # > Next, we compute the "loss", the quantity that we will seek to maximize during the # gradient ascent process. In Chapter 5, for filter visualization, we were trying to maximize # the value of a specific filter in a specific layer. Here we will simultaneously maximize the # activation of all filters in a number of layers. Specifically, we will maximize a weighted # sum of the L2 norm of the activations of a set of high-level layers. The exact set of layers # we pick (as well as their contribution to the final loss) has a large influence on the visuals # that we will be able to produce, so we want to make these parameters easily configurable. # Lower layers result in geometric patterns, while higher layers result in visuals in which # you can recognize some classes from ImageNet (e.g. birds or dogs). 
We’ll start from a # somewhat arbitrary configuration involving four layers—but you will definitely want to # explore many different configurations later on: # # 下一步我们会计算“损失”,也就是在梯度上升过程中我们需要用来找到最大值的度量。在第五章可视化分类中,我们尝试过在特定层次的特定过滤器上最大化这个值。现在我们需要同时在多个层次的所有过滤器上最大化。特别的我们会最大化一组高阶层的激活L2范数的加权和。这些被选中的层次(因为它们对于最终损失的作用)对于生成的视觉特征有着巨大的影响,因此我们希望这些参数容易进行配置。在网络中,低阶的层次识别的是地理模式特征,而高阶层次负责识别那些从ImageNet(如鸟或狗)中获得视觉特征。我们会使用一个任意的四层结构作为开始,读者肯定在完成后会希望探索更多可能的配置: # 下面定义一个字典,表示各个层次对于总重损失的贡献权重 # 这里使用的层次名称是内置的InceptionV3模型的层次名称 # 你可以通过`model.summary()`来查看 layer_contributions = { 'mixed2': 0.2, 'mixed3': 3., 'mixed4': 2., 'mixed5': 1.5, } # > Now let’s define a tensor that contains our loss, i.e. the weighted sum of the L2 norm # of the activations of the layers listed above. # # 下面定义一个张量包含这我们的损失,也就是上面这些层级激活的L2范数的权重和。 # + # 对于每个关键层次获得相应的名字 layer_dict = dict([(layer.name, layer) for layer in model.layers]) # 定义损失值 loss = K.variable(0.) for layer_name in layer_contributions: # 将相关层次的激活值L2范数加到损失值上 coeff = layer_contributions[layer_name] activation = layer_dict[layer_name].output # 将激活张量的边缘去除以避免边际效应 scaling = K.prod(K.cast(K.shape(activation), 'float32')) loss.assign_add(coeff * K.sum(K.square(activation[:, 2: -2, 2: -2, :]))) / scaling # - # > Now we can set up the gradient ascent process: # # 现在我们就可以设置梯度上升过程了: # # 译者注:以下代码在使用了tensorflow v1兼容后仍然无法运行,希望大家能够提供建议修改下面代码使之能运行。 # + import tensorflow as tf tf.compat.v1.disable_eager_execution() # 用来保存生成的图像 dream = model.input # 按照损失值计算图像的梯度 grads = K.gradients(loss, dream)[0] # 标准化梯度值 grads /= K.maximum(K.mean(K.abs(grads)), 1e-7) # 定义函数用来计算损失值和梯度,以及梯度上升函数 outputs = [loss, grads] fetch_loss_and_grads = K.function([dream], outputs) def eval_loss_and_grads(x): outs = fetch_loss_and_grads([x]) loss_value = outs[0] grad_values = outs[1] return loss_value, grad_values def gradient_ascent(x, iterations, step, max_loss=None): for i in range(iterations): loss_value, grad_values = eval_loss_and_grads(x) if max_loss is not None and loss_value > max_loss: break print('...Loss value at', i, ':', loss_value) x += step * grad_values return x # - # > Finally, here is the actual Deep Dream algorithm. # # > First, we define a list of "scales" (also called "octaves") at which we will process the # images. Each successive scale is larger than previous one by a factor 1.4 (i.e. 40% # larger): we start by processing a small image and we increasingly upscale it (Figure 8.4). # # 最终来到真正的Deep Dream算法。 # # 首先我们定义一系列的“缩放比例”(也叫作“音阶”),用来处理图像。每个后续的比例都是前一个的1.4倍(也就是大40%):我们从小的图像开始处理然后慢慢增大它(参见图8-4)。 # # ![Deep Dream Process](imgs/f8.4.jpg) # # 图8-4 Deep Dream过程:一系列的缩放比例(音阶)以及在大尺寸图像上进行细节插入 # > Then, for each successive scale, from the smallest to the largest, we run gradient # ascent to maximize the loss we have previously defined, at that scale. After each gradient # ascent run, we upscale the resulting image by 40%. # # 然后对于每个缩放比例,从最小尺寸到最大尺寸,我们运行梯度增强来令前面定义的损失值最大化。每次梯度增强完成后,我们将结果图像放大40%。 # # > To avoid losing a lot of image detail after each successive upscaling (resulting in # increasingly blurry or pixelated images), we leverage a simple trick: after each upscaling, # we reinject the lost details back into the image, which is possible since we know what the # original image should look like at the larger scale. Given a small image S and a larger # image size L, we can compute the difference between the original image (assumed larger # than L) resized to size L and the original resized to size S—this difference quantifies the # details lost when going from S to L. 
#
# 为了避免在每次放大之后丢失大量的图像细节(否则会导致图像越来越模糊和像素化),我们还会使用一个简单的技巧:在每次放大后,把丢失的细节重新注入到图像中。这是可行的,因为我们知道原始图像在更大尺寸下应该是什么样子。给定一个小尺寸S和一个更大的尺寸L,我们可以计算出“原始图像(假设它比L还要大)缩放到尺寸L”与“原始图像先缩放到尺寸S再放大到尺寸L”这两者之间的差值,这个差值就量化了从S放大到L时丢失的细节。

# +
import numpy as np

# 修改下面的超参数能够获得不同的艺术效果
step = 0.01  # 梯度上升的步长系数
num_octave = 3  # 音阶数量
octave_scale = 1.4  # 相邻音阶的尺寸系数
iterations = 20  # 每个音阶的梯度上升迭代次数

# 如果损失值超过10,我们就停止迭代,防止结果变得过于奇幻
max_loss = 10.

# 下面设定你用来进行Deep Dream的原始图像路径
base_image_path = '...'

# 将原始图像装载到Numpy数组中
img = preprocess_image(base_image_path)

# 我们设置一个形状元组的列表,用来存储我们需要运行梯度上升的不同尺寸
original_shape = img.shape[1:3]
successive_shapes = [original_shape]
for i in range(1, num_octave):
    shape = tuple([int(dim / (octave_scale ** i)) for dim in original_shape])
    successive_shapes.append(shape)

# 反序列表,使其变为升序排列
successive_shapes = successive_shapes[::-1]

# 将原始图像缩小到最小的图像尺寸上
original_img = np.copy(img)
shrunk_original_img = resize_img(img, successive_shapes[0])

for shape in successive_shapes:
    print('Processing image shape', shape)
    img = resize_img(img, shape)
    img = gradient_ascent(img,
                          iterations=iterations,
                          step=step,
                          max_loss=max_loss)
    upscaled_shrunk_original_img = resize_img(shrunk_original_img, shape)
    same_size_original = resize_img(original_img, shape)
    lost_detail = same_size_original - upscaled_shrunk_original_img

    img += lost_detail
    shrunk_original_img = resize_img(original_img, shape)
    save_img(img, fname='dream_at_scale_' + str(shape) + '.png')

save_img(img, fname='final_dream.png')
# -

# > Note that the code above leverages the following straightforward auxiliary Numpy
# functions, which all do just as their name suggests. They require to have SciPy installed.
#
# 注意上面的代码用到了下面这些简单直接的Numpy辅助函数,功能就如它们的名称所暗示的那样。这些函数需要安装SciPy。

# +
import scipy
import scipy.ndimage
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications import inception_v3

def resize_img(img, size):
    img = np.copy(img)
    factors = (1,
               float(size[0]) / img.shape[1],
               float(size[1]) / img.shape[2],
               1)
    return scipy.ndimage.zoom(img, factors, order=1)

def save_img(img, fname):
    pil_img = deprocess_image(np.copy(img))
    # 使用Keras自带的save_img保存图像(scipy.misc.imsave在新版SciPy中已被移除)
    image.save_img(fname, pil_img)

def preprocess_image(image_path):
    # 打开,缩放和格式化图像到合适的张量的函数
    img = image.load_img(image_path)
    img = image.img_to_array(img)
    img = np.expand_dims(img, axis=0)
    img = inception_v3.preprocess_input(img)
    return img

def deprocess_image(x):
    # 将张量转换回图像的函数
    if K.image_data_format() == 'channels_first':
        x = x.reshape((3, x.shape[2], x.shape[3]))
        x = x.transpose((1, 2, 0))
    else:
        x = x.reshape((x.shape[1], x.shape[2], 3))
    x /= 2.
    x += 0.5
    x *= 255.
    x = np.clip(x, 0, 255).astype('uint8')
    return x
# -

# > Note that because the original InceptionV3 network was trained to recognize concepts
# in images of size 299x299, and given that the process involves downscaling the images
# by a reasonable factor, our Deep Dream implementation will produce much better results
# on images that are somewhere between 300x300 and 400x400. Regardless, it is still
# possible to run the same code on images of any size and any ratio.
#
# 这里要注意,因为原始的InceptionV3网络是在尺寸为299x299的图像上训练来识别概念的,而且上面的过程包含将图像按一定比例缩小的操作,所以我们的Deep Dream实现在300x300到400x400之间的图像上会产生好得多的结果。不过,上面的代码仍然能够在任何尺寸和比例的图像上运行。
#
# > Starting from this photograph (taken in the small hills between the San Francisco bay
# and the Google campus), we obtain the following Deep Dream:
#
# 作者使用下面这张照片作为原始图像(拍摄于三藩市湾区和谷歌园区之间的小山丘),我们获得了下面的Deep Dream:
#
# ![deep dream example](imgs/f8.5.jpg)
#
# 图8-5 我们的Deep Dream实现的一个例子

# > I strongly suggest that you explore what you can do by adjusting which layers you are
# using in your loss.
Layers that are lower in the network contain more local, less abstract # representations and will lead to more geometric-looking dream patterns. Layers # higher-up will lead to more recognizable visual patterns based on the most common # objects found in ImageNet, such as dog eyes, bird feathers, and so on. You can use # random generation of the parameters in our layer_contributions dictionary in order # to quickly explore many different layer combinations. # # 作者强烈建议读者探索一下通过调整使用哪些层次用来作为损失值。网络中的低端层次包含着一些更加局部更少抽象的表现形式,并且会得到更加具有集合形式的dream图像模式。而高端的层次会得到那些更加可识别的视觉模式,也就是在ImageNet中可以观察到的目标,如狗眼睛,鸟羽毛等。你可以使用随机生成的参数来调整`layer_contributions`字典的值,从而快速的探索许多不同的层次损失值组合。 # # > Here is a range of results obtained using different layer configurations, from an image # of a delicious homemade pastry: # # 下面是部分使用不同层次配置获得的结果,都是从一张可口的糕点照片中生成的: # # ![different layer configurations](imgs/f8.6.jpg) # # 图8-6 使用不同的层次作为损失值获得的图像 # ### 8.2.2 小结 # # > - Deep Dream consists in running a network "in reverse" to generate inputs based on the # representations learned by the convnet. # - The results produced are fun, and share some similarity with the visual artifacts induced # in humans by the disruption of the visual cortex via psychedelics. # - Note that the process is not specific to image models, nor even to convnets. It could be # done for speech, music, and more. # # - Deep Dream使用一种“反向”的方法来让网络基于从卷积网络中学习到的表现形式来生成图像。 # - 生成的结果通过在图像中插入一下奇幻的视觉元素造成人眼视觉的隔断来形成有趣的效果。 # - 要说明的是这个过程不仅对图像模型有效,甚至不仅针对卷积网络。它可以用来对演讲、音乐等进行处理。 # ## 8.3 神经风格迁移 # # > Besides Deep Dream, another major development in deep learning-driven image # modification that happened in the summer of 2015 is neural style transfer, introduced by # <NAME> et al. The neural style transfer algorithm has undergone many refinements # and spawned many variations since its original introduction, including a viral smartphone # app, called Prisma. For simplicity, this section focuses on the formulation described in # the original paper. # # 除了Deep Dream,还有一种深度学习技术驱动的图像修改的主要应用,出现在2015年夏天,叫做神经风格迁移,由<NAME>首次提出。神经风格迁移算法在这之后经历了多次改良并且孵化出很多的变体,这里面包括一个爆款智能手机应用Prisma。为了简单起见,本小节专注于原始论文中描述的方法。 # # > Neural style transfer consists in applying the "style" of a reference image to a target # image, while conserving the "content" of the target image: # # 神经风格迁移包含着将一个参考图像的“风格”应用到目标图像上,并且保留目标图像的“内容”: # # ![neural style transfer](imgs/f8.7.jpg) # # 图8-7 神经风格迁移的例子 # > What is meant by "style" is essentially textures, colors, and visual patterns in the # image, at various spatial scales, while the "content" is the higher-level macrostructure of # the image. For instance, blue-and-yellow circular brush strokes are considered to be the # "style" in the above example using Starry Night by Van Gogh, while the buildings in the # Tuebingen photograph are considered to be the "content". # # “风格”本质上就是图像中的纹理、颜色和视觉模式,而“内容”是图像中高层次的宏结构。例如上面梵高的《星空》中的蓝黄交错的笔法就被认为是“风格”,而图宾根照片中的建筑物就被认为是“内容”。 # # > The idea of style transfer, tightly related to that of texture generation, has had a long # history in the image processing community prior to the development of neural style # transfer in 2015. However, as it turned out, the deep learning-based implementations of # style transfer offered results unparalleled by what could be previously achieved with # classical computer vision techniques, and triggered an amazing renaissance in creative # applications of computer vision. 
# # 风格转移的原理与纹理生成紧密相关,实际上在2015年出现神经风格迁移之前已经在图像处理领域存在了很久。然而由于基于深度学习技术实现的风格迁移的出现,人们发现其产生的结果与传统的计算机视觉技术得到的结果不可同日而语,因此再度引发了这个领域的一次爆发。 # # > The key notion behind implementing style transfer is same idea that is central to all # deep learning algorithms: we define a loss function to specify what we want to achieve, # and we minimize this loss. We know what we want to achieve: conserve the "content" of # the original image, while adopting the "style" of the reference image. If we were able to # mathematically define content and style, then an appropriate loss function to minimize # would be the following: # # 实现风格迁移的关键与所有的深度学习算法的核心点一致:定义损失函数来设定我们需要达到的目标,然后尽可能的最小化损失。我们这里的目标是:尽可能保留原始图像的“内容”而尽可能应用参考图像的“风格”。如果我们能够在数学上定义内容和风格,那么需要最小化的损失函数如下: # # ```python # loss = distance(style(reference_image) - style(generated_image)) + # distance(content(original_image) - content(generated_image)) # ``` # # > Where distance is a norm function such as the L2 norm, content is a function that # takes an image and computes a representation of its "content", and style is a function # that takes an image and computes a representation of its "style". # # 这里的`distance`是一个计算范数的函数,例如L2范数,`content`是一个从图像中获取并计算它内容表现形式的函数,`style`是一个从图像中获取并计算风格表现形式的函数。 # # > Minimizing this loss would cause style(generated_image) to be close to # style(reference_image) , while content(generated_image) would be close to # content(generated_image) , thus achieving style transfer as we defined it. # # 最小化这个损失会使得风格(生成图像)尽量接近(参考图像),而内容(生成图像)尽量接近(原始图像),因此达到我们定义的风格迁移目标。 # # > A fundamental observation made by Gatys et al is that deep convolutional neural # networks offer precisely a way to mathematically defined the style and content # functions. Let’s see how. # # Gatys在他的论文中提出了一个基本结论,就是深度卷积神经网络能够精确的定义我们需要的风格和内容函数。下面我们来看看如何实现。 # ### 8.3.1 内容损失 # # > As you already know, activations from earlier layers in a network contain local # information about the image, while activations from higher layers contain increasingly # global and abstract information. Formulated in a different way, the activations of the # different layers of a convnet provide a decomposition of the contents of an image over # different spatial scales. Therefore we expect the "content" of an image, which is more # global and more abstract, to be captured by the representations of a top layer of a # convnet. # # 正如你已经了解的,网络中前面层次的激活含有图像的局部信息,而上面层次的激活含有全局和抽象的信息。让我们换一种表述形式,卷积网络中不同层次的激活提供了在不同空间尺度上对图像内容分解的一种方式。因此我们我们希望获得一张图像的内容,也就是更加全局和抽象的信息,应该从卷积网络中的顶层中获得。 # # > A good candidate for a content loss would thus be to consider a pre-trained convnet, # and define as our loss the L2 norm between the activations of a top layer computed over # the target image and the activations of the same layer computed over the generated # image. This would guarantee that, as seen from the top layer of the convnet, the # generated image will "look similar" to the original target image. Assuming that what the # top layers of a convnet see is really the "content" of their input images, then this does # work as a way to preserve image content. # # 计算内容损失的一个很好的办法是使用一个预训练卷积网络,将我们的损失定义为网络最顶层计算得到的原始图像激活值与生成图像激活值的L2范数。这样能够保证对于最顶层来说,生成图像会和原始图像相似。因为我们假设卷积网络最顶层观察的是图像的“内容”,所以这样就能更好的保存图像内容。 # ### 8.3.2 风格损失 # # > While the content loss only leverages a single higher-up layer, the style loss as defined in # the Gatys et al. paper leverages multiple layers of a convnet: we aim at capturing the # appearance of the style reference image at all spatial scales extracted by the convnet, not # just any single scale. 
# # 对于内容损失来说,我们只使用了最顶层,然而Gatys等人在论文中定义的风格损失将需要使用卷积网络的多个层次:因为这里的目标是能够捕获参考图像中所有空间尺度上的风格表现,而不是单一的空间尺度。 # # > For the style loss, the Gatys et al. paper leverages the "Gram matrix" of a layer’s # activations, i.e. the inner product between the feature maps of a given layer. This inner # product can be understood as representing a map of the correlations between the features # of a layer. These feature correlations capture the statistics of the patterns of a particular # spatial scale, which empirically corresponds to the appearance of the textures found at # this scale. # # 对于风格损失,Gatys的论文使用了一个层激活的“格拉姆矩阵”,也就是给定层次的特征图的内积。这个内积的结果可以理解为层次的特征之间的相关性。这种特征的相关性捕获了特定空间尺度上的统计学模式,其实也就是在该尺度上观察到的纹理表现形式。 # # > Hence the style loss aims at preserving similar internal correlations within the # activations of different layers, across the style reference image and the generated image. # In turn, this guarantees that the textures found at different spatial scales will look similar # across the style reference image and the generated image. # # 因此风格损失的目标就是尽量保持不同层次激活的内部相关性,使得生成图像和参考图像的激活表现尽量一致。达到后,就能使得生成图像的风格看起来与参考图像相似。 # ### 8.3.3 简而言之 # # > In short, we can use a pre-trained convnet to define a loss that will: # # > - Preserve content by maintaining similar high-level layer activations between the target # content image and the generated image. The convnet should "see" both the target image # and the generated image as "containing the same things". # - Preserve style by maintaining similar correlations within activations for both low-level # layers and high-level layers. Indeed, feature correlations capture textures : the generated # and the style reference image should share the same textures at different spatial scales. # # 简而言之我们可以使用预训练的卷积网络来定义损失,以达到: # # - 在原始图像和生成图像之间保持相似的高层激活结果。卷积网络应该能够在两个图像上都“观测”相同的内容。 # - 通过在参考图像和生成图像之间保持相似的底层和高层激活结果的相关性来保持风格。实际上特征相关性代表着纹理:也就是生成图像和参考图像应该共享了不同空间尺度的相同纹理特征。 # # > Now let’s take a look at a Keras implementation of the original 2015 neural style # transfer algorithm. As you will see, it shares a lot of similarities with the Deep Dream # implementation we developed in the previous section. # # 下面我们来看一下在Keras中实现原始的2015神经风格迁移算法。你将会看到,下面的方法与上一节中的Deep Dream实现上有许多的相似之处。 # ### 8.3.4 Keras中的神经风格迁移 # # > Neural style transfer can be implemented using any pre-trained convnet. Here we will use # the VGG19 network, used by Gatys et al in their paper. VGG19 is a simple variant of the # VGG16 network we introduced in Chapter 5, with three more convolutional layers. # # 神经风格迁移可以使用任何的预训练卷积网络来实现。这里我们使用Gatys论文中用的那个VGG19网络。VGG19是我们在第五章中介绍过的VGG16网络的简单变体,只是多加了三个卷积层。 # # > This is our general process: # # > - Set up a network that will compute VGG19 layer activations for the style reference # image, the target image, and the generated image at the same time. # - Use the layer activations computed over these three images to define the loss function # described above, which we will minimize in order to achieve style transfer. # - Set up a gradient descent process to minimize this loss function. # # 主要的过程包括: # # - 构建一个网络,能够同时计算参考图像,原始目标图像和生成图像在VGG19层次上的激活。 # - 使用上面计算得到的层激活来定义前面介绍的损失函数,需要在训练中最小化这个值达到风格迁移的目标。 # - 设置梯度下降过程来最小化并进行训练。 # # > Let’s start by defining the paths to the two images we consider: the style reference # image and the target image. To make sure that all images processed share similar sizes # (widely different sizes would make style transfer more difficult), we will later resize # them all to a shared height of 400px. 
# # 首先我们定义两个图像的路径:风格参考图像和原始目标图像。为了保证所有图像都有着相似的大小(有着巨大尺寸差别的图像会使得风格迁移变得更加困难),我们会将两张图像都缩放到高度为400px。 # + from tensorflow.keras.preprocessing.image import load_img, img_to_array # 原始目标图像路径 target_image_path = 'img/portrait.jpg' # 风格参考图像路径 style_reference_image_path = 'img/transfer_style_reference.jpg' # 生成图像的尺寸 width, height = load_img(target_image_path).size img_height = 400 img_width = int(width * img_height / height) # - # > We will need some auxiliary functions for loading, pre-processing and # post-processing the images that will go in and out of the VGG19 convnet: # # 我们下面需要一些工具函数用来对输入输出VGG19卷积网络的图像进行装载、预处理、后处理: # + import numpy as np from tensorflow.keras.applications import vgg19 def preprocess_image(image_path): img = load_img(image_path, target_size=(img_height, img_width)) img = img_to_array(img) img = np.expand_dims(img, axis=0) img = vgg19.preprocess_input(img) return img def deprocess_image(x): # 使用像素均值来规范化 x[:, :, 0] += 103.939 x[:, :, 1] += 116.779 x[:, :, 2] += 123.68 # 'BGR'->'RGB' x = x[:, :, ::-1] x = np.clip(x, 0, 255).astype('uint8') return x # - # > Let’s set up the VGG19 network. It takes as input a batch of three images: the style # reference image, the target image, and a placeholder that will contain the generated # image. A placeholder is simply a symbolic tensor, the values of which are provided # externally via Numpy arrays. The style reference and target image are static, and thus # defined using K.constant , while the values contained in the placeholder of the # generated image will change over time. # # 然后构建VGG19网络。它将三张图像作为一个批次输入:风格参考图像、原始目标图像和一个作为生成图像的置位符。置位符就是一个符号化的张量,它的值通过外部Numpy数组来提供。因为风格参考图像和原始目标图像都是静态的,因此可以使用`K.constant`来定义,而置位符代表的生成图像会随着时间不断发生变化。 # + from tensorflow.keras import backend as K target_image = K.constant(preprocess_image(target_image_path)) style_reference_image = K.constant(preprocess_image(style_reference_image_path)) # 下面的置位符表示生成的图像 combination_image = K.placeholder((1, img_height, img_width, 3)) # 我们将三张图像合并成一个批次 input_tensor = K.concatenate([target_image, style_reference_image, combination_image], axis=0) # 构建VGG19网络,使用三张图像作为输入,模型会使用ImageNet数据集权重作为预训练权重值 model = vgg19.VGG19(input_tensor=input_tensor, weights='imagenet', include_top=False) print('Model loaded.') # - # > Let’s define the content loss, meant to make sure that the top layer of the VGG19 # convnet will have a similar view of the target image and the generated image: # # 定义内容损失,用来保证VGG19卷积网络的顶层对原始目标图像和生成图像有着相似的结果: def content_loss(base, combination): return K.sum(K.square(combination - base)) # > Now, here’s the style loss. It leverages an auxiliary function to compute the Gram # matrix of an input matrix, i.e. a map of the correlations found in the original feature # matrix. # # 下面就是风格损失。它使用一个工具函数来计算输入矩阵的格拉姆矩阵,也就是在原始特征矩阵中得到的相关性地图。 # + def gram_matrix(x): features = K.batch_flatten(K.permute_dimensions(x, (2, 0, 1))) gram = K.dot(features, K.transpose(features)) return gram def style_loss(style, combination): S = gram_matrix(style) C = gram_matrix(combination) channels = 3 size = img_height * img_width return K.sum(K.square(S - C)) / (4. * (channels ** 2) * (size ** 2)) # - # > To these two loss components, we add a third one, the "total variation loss". It is # meant to encourage spatial continuity in the generated image, thus avoiding overly # pixelated results. You could interpret it as a regularization loss. 
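#
# 译者注:下面用纯Numpy给出格拉姆矩阵的一个小示例(非原书代码),帮助理解上面的gram_matrix函数:假设某一层的激活是一个 高x宽x通道 = 4x4x3 的特征图,把每个通道铺平成向量后两两做内积,就得到一个 3x3 的通道相关性矩阵,风格损失比较的正是这种相关性。

# +
import numpy as np

np.random.seed(0)
feature_map = np.random.rand(4, 4, 3)      # 假设的某一层激活(单张图像,通道在最后)
features = feature_map.reshape(-1, 3).T    # 形状变为 (通道数, 高*宽)
gram = features @ features.T               # 格拉姆矩阵,形状 (通道数, 通道数)
print(gram.shape)
print(np.round(gram, 2))
# -
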
# # 在这两个损失模块基础上,我们增加了第三个,“总体差异损失”。这是用来提升生成图像的空间连续性的,从而避免产生过于像素化的结果。你可以理解为一个规范化后的损失。 def total_variation_loss(x): a = K.square(x[:, :img_height - 1, :img_width - 1, :] - x[:, 1:, :img_width - 1, :]) b = K.square(x[:, :img_height - 1, :img_width - 1, :] - x[:, :img_height - 1, 1:, :]) return K.sum(K.pow(a + b, 1.25)) # > The loss that we minimize is a weighted average of these three losses. To compute the # content loss, we only leverage one top layer, the block5_conv2 layer, while for the style # loss we use a list of layers than spans both low-level and high-level layers. We add the # total variation loss at the end. # # 最终我们需要最小化的损失是这三个损失值的加权平均。计算内容损失时我们只需要使用最顶层,也就是`block5_conv2`层,而计算风格损失时我们需要使用一个层次的列表,涵盖了底层到高层。最后我们将总体差异损失加在后面。 # # > Depending on the style reference image and content image you are using, you will # likely want to tune the content_weight coefficient, the contribution of the content loss # to the total loss. A higher content_weight means that the target content will be more # recognizable in the generated image. # # 取决于你在使用的风格参考图像和内容图像,你可能需要调整`content_weight`系数,它代表着内容损失在整体损失中占的比重。更高的`content_weight`代表着生成图像中的内容具有更高的辨识度。 # + # 定义个将层次名称映射到激活输出张量的字典 outputs_dict = dict([(layer.name, layer.output) for layer in model.layers]) # 内容损失计算的层次名称 content_layer = 'block5_conv2' # 风格损失计算的层次名称列表 style_layers = ['block1_conv1', 'block2_conv1', 'block3_conv1', 'block4_conv1', 'block5_conv1'] # 三个损失值所占的权重比例 total_variation_weight = 1e-4 style_weight = 1. content_weight = 0.025 # 下面将所有的损失值相加,合成到一个loss损失值中 loss = K.variable(0.) layer_features = outputs_dict[content_layer] target_image_features = layer_features[0, :, :, :] combination_features = layer_features[2, :, :, :] loss += content_weight * content_loss(target_image_features, combination_features) for layer_name in style_layers: layer_features = outputs_dict[layer_name] style_reference_features = layer_features[1, :, :, :] combination_features = layer_features[2, :, :, :] sl = style_loss(style_reference_features, combination_features) loss += (style_weight / len(style_layers)) * sl loss += total_variation_weight * total_variation_loss(combination_image) # - # > Finally, we set up the gradient descent process. In the original Gatys et al. paper, # optimization is performed using the L-BFGS algorithm, so that is also what we will use # here. This is a key difference from the Deep Dream example in the previous section. The # L-BFGS algorithms comes packaged with SciPy. However, there are two slight # limitations with the SciPy implementation: # # > - It requires to be passed the value of the loss function and the value of the gradients as two # separate functions. # - It can only be applied to flat vectors, whereas we have a 3D image array. # # 最后一步就是设置梯度下降过程。在Gatys的论文中,优化使用的是`L-BFGS`算法,因此我们这里也选择它。这是与之前Deep Dream例子的一个关键区别。L-BFGS算法被打包在SciPy库中。然而,SciPy实现的算法有两个局限性: # # - 它需要将损失函数和梯度值作为两个独立的参数代入。 # - 它只能应用在铺平的向量上,而这里我们有的是一个3D图像数组。 # # > It would be very inefficient for us to compute the value of the loss function and the # value of gradients independently, since it would lead to a lot of redundant computation # between the two. We would be almost twice slower than we could be by computing them # jointly. To by-pass this, we set up a Python class named Evaluator that will compute # both loss value and gradients value at once, will return the loss value when called the first # time, and will cache the gradients for the next call. 
# # 如果我们分别独立计算损失函数值和梯度值的话将会非常的低效,因为这会导致两者之间产生许多冗余的计算操作。这会使得整个计算时间比联合计算它们要多几乎一倍。为了避免这一点,我们会构造一个Python类叫做`Evaluator`,它会同时计算损失值和梯度值,然后在第一次调用时返回损失值,并将梯度值缓存起来留待第二次调用。 # + # 通过损失值计算生成图像的梯度值 grads = K.gradients(loss, combination_image)[0] # Function to fetch the values of the current loss and the current gradients fetch_loss_and_grads = K.function([combination_image], [loss, grads]) class Evaluator(object): def __init__(self): self.loss_value = None self.grads_values = None def loss(self, x): assert self.loss_value is None x = x.reshape((1, img_height, img_width, 3)) outs = fetch_loss_and_grads([x]) loss_value = outs[0] grad_values = outs[1].flatten().astype('float64') self.loss_value = loss_value self.grad_values = grad_values return self.loss_value def grads(self, x): assert self.loss_value is not None grad_values = np.copy(self.grad_values) self.loss_value = None self.grad_values = None return grad_values evaluator = Evaluator() # - # > Finally, we can run the gradient ascent process using SciPy’s L-BFGS algorithm, # saving the current generated image at each iteration of the algorithm (here, a single # iteration represents 20 steps of gradient ascent): # # 一切准备好后,我们就可以使用Scipy的L-BFGS算法来运行梯度增强过程,过程中我们会保存每次算法迭代完成后的生成图像(这里,一次迭代代表着20次梯度增强过程): # + from scipy.optimize import fmin_l_bfgs_b from scipy.misc import imsave import time result_prefix = 'my_result' iterations = 20 # 运行L-BFGS算法来最小化损失 # 初始化状态是原始目标图像 # 注意`scipy.optimize.fmin_l_bfgs_b`只能应用在铺平的向量上 x = preprocess_image(target_image_path) x = x.flatten() for i in range(iterations): print('Start of iteration', i) start_time = time.time() x, min_val, info = fmin_l_bfgs_b(evaluator.loss, x, fprime=evaluator.grads, maxfun=20) print('Current loss value:', min_val) # 保存生成的图像 img = x.copy().reshape((img_height, img_width, 3)) img = deprocess_image(img) fname = result_prefix + '_at_iteration_%d.png' % i imsave(fname, img) end_time = time.time() print('Image saved as', fname) print('Iteration %d completed in %ds' % (i, end_time - start_time)) # - # > Here’s what we get: # # 运行之后我们可以得到: # # ![sample images](imgs/f8.8.jpg) # # 图8-8 风格迁移的一些生成图像 # > Keep in mind that what this technique achieves is merely a form of image # re-texturing, or texture transfer. It will work best with style reference images that are # strongly textured and highly self-similar, and with content targets that don’t require high # levels of details in order to be recognizable. It would typically not be able to achieve # fairly abstract feats such as "transferring the style of one portrait to another". The # algorithm is closer to classical signal processing than to AI, so don’t expect it to work # like magic! # # 这里还需要说明的是,这个技术仅仅是一种将图像重新绘制纹理的过程,或者是纹理转移。因此它会在风格参考图像具有强烈纹理风格或者高度自相似纹理风格,以及内容目标图像不需要高度细节才能够识别的情况下,能够工作的最良好。它无法实现一些很常见的抽象任务比方说“将一张肖像的风格迁移到另一张肖像上”。这里的算法更接近传统信号处理而不是AI,因此别期望它像变魔术一样生成图像。 # # > Additionally, do note that running this style transfer algorithm is quite slow. # However, the transformation operated by our setup is simple enough that it can be # learned by a small, fast feedforward convnet as well—as long as you have appropriate # training data available. Fast style transfer can thus be achieved by first spending a lot of # compute cycles to generate input-output training examples for a fixed style reference # image, using the above method, and then training a simple convnet to learn this # style-specific transformation. Once that is done, stylizing a given image is instantaneous: # it’s a just a forward pass of this small convnet. 
#
# 另外也需要了解,运行这样的风格迁移算法是相当慢的。不过,这里所实现的变换足够简单,只要有合适的训练数据,它也可以由一个小型、快速的前向传播卷积网络学习得到。因此,快速风格迁移可以这样实现:先花费大量的计算,用上面的方法为某个固定的风格参考图像生成许多输入-输出训练样本,然后训练一个简单的卷积网络来学习这种特定风格的变换。一旦完成,对任意一张图像进行风格化就是瞬间的事情:只需要让这个小型卷积网络做一次前向传播运算。

# ### 8.3.5 小结
#
# > - Style transfer consists in creating a new image that preserves the "contents" of a target
# image while also capturing the "style" of a reference image.
# - "Content" can be captured by the high-level activations of a convnet.
# - "Style" can be captured by the internal correlations of the activations of different layers
# of a convnet.
# - Hence deep learning allows style transfer to be formulated as an optimization process
# using a loss defined with a pre-trained convnet.
# - Starting from this basic idea, many variants and refinements are possible!
#
# - 风格迁移包含着创建一张新的图像,其中保留了目标图像的“内容”以及参考图像的“风格”。
# - “内容”可以从卷积网络的高层激活结果中获得。
# - “风格”可以从卷积网络各个层次的激活结果内在相关性中获得。
# - 因此,深度学习可以把风格迁移表述为一个优化过程,其损失使用一个预训练的卷积网络来定义。
# - 从这个基本思想出发,可以得到很多风格迁移的变体和改良。

# ## 8.4 使用变分自动编码生成图像
#
# > Sampling from a latent space of images to create entirely new images, or edit existing
# ones, is currently the most popular and successful application of creative AI. In this
# section and the next one, we review some of the high-level concepts pertaining to image
# generation, alongside implementations details relative to the two main techniques in this
# domain: Variational Autoencoders (VAEs) and Generative Adversarial Networks
# (GANs). The techniques we present here are not specific to images—one could develop
# latent spaces of sound, music, or even text, using GANs or VAEs—but in practice the
# most interesting results have been obtained with pictures, and that is what we focus on
# here.
#
# 从图像的潜空间中取样来创建全新的图像或编辑已有的图像,是目前创造性AI领域最热门和最成功的应用。在本节和下一节中,我们会介绍一些与图像生成有关的高层概念,同时阐述该领域两种主要技术的实现细节:变分自动编码器(VAE)和生成对抗网络(GAN)。这两节介绍的技巧并不仅限于图像,也可以使用GAN或VAE来构建声音、音乐甚至文本的潜空间,不过在实践中最有趣的结果还是来自图像,因此我们在这里聚焦于此。

# ### 8.4.1 从图像潜空间取样
#
# > The key idea of image generation is to develop a low-dimensional latent space of
# representations (which naturally is a vector space, i.e. a geometric space), where any
# point can be mapped to a realistic-looking image. The module capable of realizing this
# mapping, taking as input a latent point and outputting an image, i.e. a grid of pixels, is
# called a generator (in the case of GANs) or a decoder (in the case of VAEs). Once such a
# latent space has been developed, one may sample points from it, either deliberately or at
# random, and by mapping them to image space, generate images never seen before.
#
# 图像生成的关键在于找到一个低维度的表现形式潜空间(它自然是一个向量空间,也就是一个几何空间),这个空间中的任何一个点都能够被映射成一张看起来很真实的图像。能够实现这种映射,也就是以潜空间的点作为输入并输出一张图像(一个像素网格)的模块,被称为生成器(在使用GAN的情况下)或者解码器(在使用VAE的情况下)。一旦找到了这样的潜空间,就可以从中取样,以指定的方式或者以随机的方式,将这些点映射到图像空间,从而生成从未有过的图像。
#
# ![latent space](imgs/f8.9.jpg)
#
# 图8-9 从图像的潜空间中学习然后取样获得新的图像

# > GANs and VAEs are simply two different strategies for learning such latent spaces of
# image representations, with each its own characteristics. VAEs are great for learning
# latent spaces that are well-structured, where specific directions encode a meaningful axis
# of variation in the data. GANs generate images that can potentially be highly realistic, but
# the latent space they come from may not have as much structure and continuity.
# # GAN和VAE就是两种从图像表现形式中学习获得潜空间的不同策略,当然它们具有各自的特点。VAE在学习具有良好结构的图像潜空间时特别有效,这里特定方向编码会是图像中一个有意义的数据轴的变分。GAN可以产生高度真实的图像,但是它们学习的潜空间可能并没有良好的结构和连续性。 # # ![VAE continuous latent space](imgs/f8.10.jpg) # # 图8-10 <NAME>使用VAE学习得到的连续潜空间生成的图像 # ### 8.4.2 图像编辑中的概念向量 # # > We already hinted at the idea of a "concept vector" when we covered word embeddings # in Chapter 6. The idea is still the same: given a latent space of representations, or an # embedding space, certain directions in the space may encode interesting axes of variation # in the original data. In a latent space of images of faces, for instance, there may be a # "smile vector" s , such that if latent point z is the embedded representation of a certain # face, then latent point z + s is the embedded representation of the same face, smiling. # Once one has identified such a vector, is then becomes possible to edit images by # projecting them into the latent space, moving their representation in a meaningful way, # then decoding them back to image space. There are concept vectors for essentially any # independent dimension of variation in image space—in the case of faces, one may # discover vectors for adding sunglasses to a face, removing glasses, turning a male face # into female face, etc. # # 在第六章词嵌入中我们已经接触过“概念向量”的内容。这里的含义是一样的:给定表现形式的潜空间,或者一个嵌入空间,某些原始数据的空间中的方向可以被编码成有意义的轴。例如在人脸图像的潜空间中,可能会存在“微笑向量”,我们称为向量`s`,然后在某张脸谱图像中存在一个潜在点`z`,那么潜在点`z + s`就变成了同一张脸并且带着微笑的嵌入表现形式。一旦我们找到了这样的向量,那么通过将这个向量投射到潜空间中来对图像进行编辑就变得可能了,从而将表现形式朝着期望的方向移动,最后重新将其解码到图像空间中。在图像空间充满了这样的概念向量独立维度,在人脸例子中,就存在这发现戴了太阳眼镜、去除眼镜、将男性脸部换成女性脸部等。 # # > Here is an example of a "smile vector", a concept vector discovered by <NAME> # from the Victoria University School of Design in New Zealand, using VAEs trained on a # dataset of faces of celebrities (the CelebA dataset): # # 下面是一个“微笑向量”的例子,这是由新西兰维多利亚大学设计学院的Tom White发现的,他使用了VAE在一个名人脸谱数据集上训练得到: # # ![smile vector](imgs/f8.11.jpg) # # 图8-11 微笑向量 # ### 8.4.3 变分自动编码器 # # > Variational autoencoders, simultaneously discovered by Kingma & Welling in December # 2013, and Rezende, Mohamed & Wierstra in January 2014, are a kind of generative # model that is especially appropriate for the task of image editing via concept vectors. # They are a modern take on autoencoders—a type of network that aims to "encode" an # input to a low-dimensional latent space then "decode" it back—that mixes ideas from # deep learning with Bayesian inference. # # 变分自动编码器是Kingma和Welling在2013年12月份,Rezende、Mohamed和Wierstra在2014年1月份同时发现的,是一种特别合适通过概念向量来进行图像编辑任务的生成模型。它是自动编码器的一个现代方法,自动编码器是一种网络专注于将输入“编码”到一个低维度的潜空间,然后将其“解码”回去的机器学习方法,它融合了深度学习和贝叶斯推断。 # # > A classical image autoencoder takes an image, maps it to a latent vector space via an # "encoder" module, then decode it back to an output with the same dimensions as the # original image, via a "decoder" module. It is then trained by using as target data the same # images as the input images, meaning that the autoencoder learns to reconstruct the # original inputs. By imposing various constraints on the "code", i.e. the output of the # encoder, one can get the autoencoder to learn more or less interesting latent # representations of the data. Most commonly, one would constraint the code to be very # low-dimensional and sparse (i.e. mostly zeros), in which case the encoder acts as a way # to compress the input data into fewer bits of information. 
#
# 一个经典的图像自动编码器接受一张图像作为输入,通过“编码器”模块将它映射到一个潜在向量空间,然后再通过“解码器”模块把它解码回一个与原始图像维度相同的输出。训练时使用与输入图像完全相同的图像作为目标数据,这意味着自动编码器学习的是重构原始输入。通过对“编码”(也就是编码器的输出)施加各种约束,可以让自动编码器学习到数据中或多或少有趣的潜在表现形式。最常见的做法是把编码约束成低维度且稀疏的(也就是大部分是0),这种情况下编码器起到的作用就是把输入数据压缩成更少比特的信息。
#
# ![autoencoder](imgs/f8.12.jpg)
#
# 图8-12 自动编码器,将输入x编码到低维度潜空间,实现压缩后重新解码到原始数据空间

# > In practice, such classical autoencoders don’t lead to particularly useful or
# well-structured latent spaces. They’re not particularly good at compression, either. For
# these reasons, they have largely fallen out of fashion over the past years. Variational
# autoencoders, however, augment autoencoders with a little bit of statistical magic that
# forces them to learn continuous, highly structured latent spaces. They have turned out to
# be a very powerful tool for image generation.
#
# 在实践中,这样的传统自动编码器不会得到特别有用或者良好结构化的潜空间。它们在压缩方面也不会表现优异。因为这些原因,传统的自动编码器在过去几年已经逐渐不再流行。然而,变分自动编码器在自动编码器的基础上加入了一点统计学的“魔法”,迫使它们学习到连续、高度结构化的潜空间。它们已经被证明是图像生成的一种非常强大的工具。
#
# > A VAE, instead of compressing its input image into a fixed "code" in the latent space,
# turns the image into the parameters of a statistical distribution: a mean and a variance.
# Essentially, this means that we are assuming that the input image has been generated by a
# statistical process, and that the randomness of this process should be taken into
# accounting during encoding and decoding. The VAE then uses the mean and variance
# parameters to randomly sample one element of the distribution, and decodes that element
# back to the original input. The stochasticity of this process improves robustness and
# forces the latent space to encode meaningful representations everywhere, i.e. every point
# sampled in the latent will be decoded to a valid output.
#
# VAE不是把输入图像压缩成潜空间中一个固定的“编码”,而是把图像转换成一个统计学分布的参数:均值和方差。从根本上来说,这意味着我们假定输入图像是由一个统计学过程生成的,因此这个过程中的随机性必须在编码和解码的时候纳入考虑之中。VAE随后使用均值和方差参数在这个分布中随机取样一个元素,然后把这个元素解码回原始输入空间。这个过程的随机性提高了健壮性,并且迫使潜空间中到处都编码着有意义的表现形式,也就是说在潜空间中采样的每个点都会被解码成一个有效的输出。
#
# ![VAE](imgs/f8.13.jpg)
#
# 图8-13 VAE将图像映射到两个向量上,z_mean和z_log_sigma,它们能有效表示图像的概率分布,在分布中可以取样并解码到原始空间

# > In technical terms, here is how a variational autoencoder works. First, an encoder
# module turns the input samples input_img into two parameters in a latent space of
# representations, which we will note z_mean and z_log_variance . Then, we randomly
# sample a point z from the latent normal distribution that is assumed to generate the input
# image, via z = z_mean + exp(z_log_variance) * epsilon , where epsilon is a
# random tensor of small values. Finally, a decoder module will map this point in the latent
# space back to the original input image. Because epsilon is random, the process ensures
# that every point that is close to the latent location where we encoded input_img ( z-mean
# ) can be decoded to something similar to input_img , thus forcing the latent space to be
# continuously meaningful. Any two close points in the latent space will decode to highly
# similar images. Continuity, combined with the low dimensionality of the latent space,
# forces every direction in the latent space to encode a meaningful axis of variation of the
# data, making the latent space very structured and thus highly suitable to manipulation via
# concept vectors.
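#
# 译者注:下面用Numpy对上面的取样公式做一个简单的数值示意(非原书代码),假设潜空间是2维的:按照 z = z_mean + exp(z_log_variance) * epsilon 反复采样,得到的z都分布在z_mean附近,离散程度由z_log_variance控制。

# +
import numpy as np

np.random.seed(0)
z_mean = np.array([0.5, -1.0])           # 假设这是编码器输出的均值
z_log_variance = np.array([-2.0, 0.0])   # 假设这是编码器输出的对数方差

samples = np.array([z_mean + np.exp(z_log_variance) * np.random.normal(size=2)
                    for _ in range(1000)])
print('采样得到的均值:', samples.mean(axis=0))
print('采样得到的标准差:', samples.std(axis=0))
print('exp(z_log_variance):', np.exp(z_log_variance))
# -
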
#
# 下面用技术术语来描述变分自动编码器的工作原理。首先,编码器模块将输入图像编码成潜空间表现形式的两个参数,我们记作`z_mean`和`z_log_variance`。然后,我们从假定生成了输入图像的潜空间正态分布中随机取样一个点z,公式是$$z=z\_mean+e^{z\_log\_variance}*\epsilon$$
# 这里的$\epsilon$是一个取值很小的随机张量。最后,解码器模块会将潜空间中的这个点映射回原始输入图像。因为$\epsilon$是随机的,这个过程能够保证,与编码input_img的潜空间位置(即z_mean)邻近的每个点都能被解码成与input_img相似的结果,从而迫使潜空间具有连续的意义。潜空间中任意两个邻近的点都会被解码成高度相似的图像。连续性再加上潜空间的低维度特性,使得潜空间中的每个方向都对应着数据的一个有意义的变化轴,因此潜空间变得非常结构化,特别适合用概念向量来进行编辑。
#
# > The parameters of a VAE are trained via two loss functions: first, a reconstruction
# loss that forces the decoded samples to match the initial inputs, and a regularization loss,
# which helps in learning well-formed latent spaces and reducing overfitting to the training
# data.
#
# VAE的参数需要通过两个损失函数来训练:第一个是重建损失,用来令解码后的样本接近原始输入,另一个是正则化损失,用来帮助学习到良好结构的潜空间和减少对训练数据的过拟合。
#
# > Let’s quickly go over a Keras implementation of a VAE. Schematically, it looks like
# this:
#
# 让我们快速看一下VAE在Keras中的实现。简单来说,如下:
#
# ```python
# # 将输入编码成一个均值和方差参数
# z_mean, z_log_variance = encoder(input_img)
#
# # 从概率分布中取样一个点
# z = z_mean + exp(z_log_variance) * epsilon
#
# # 然后将z解码回到原始图像空间
# reconstructed_img = decoder(z)
#
# # 实例化模型
# model = Model(input_img, reconstructed_img)
#
# # 然后使用两个损失函数来训练模型
# # 重建损失和正则化损失
# ```

# > Here is the encoder network we will use: a very simple convnet which maps the input
# image x to two vectors, z_mean and z_log_variance .
#
# 下面是一个编码器网络:它由一个简单的卷积网络构成,将输入的图像x转换成两个向量,`z_mean`和`z_log_variance`。

# +
import tensorflow.keras as keras
from tensorflow.keras import layers
from tensorflow.keras import backend as K
from tensorflow.keras.models import Model
import numpy as np

img_shape = (28, 28, 1)
batch_size = 16
latent_dim = 2  # 潜空间的维度:平面

input_img = keras.Input(shape=img_shape)

x = layers.Conv2D(32, 3, padding='same', activation='relu')(input_img)
x = layers.Conv2D(64, 3, padding='same', activation='relu', strides=(2, 2))(x)
x = layers.Conv2D(64, 3, padding='same', activation='relu')(x)
x = layers.Conv2D(64, 3, padding='same', activation='relu')(x)
shape_before_flattening = K.int_shape(x)

x = layers.Flatten()(x)
x = layers.Dense(32, activation='relu')(x)

z_mean = layers.Dense(latent_dim)(x)
z_log_var = layers.Dense(latent_dim)(x)
# -

# > Here is the code for using z_mean and z_log_var , the parameters of the statistical
# distribution assumed to have produced input_img , to generate a latent space point z .
# Here, we wrap some arbitrary code (built on top of Keras backend primitives) into a
# Lambda layer. In Keras, everything needs to be a layer, so code that isn’t part of a built-in
# layer should be wrapped in a Lambda (or else, in a custom layer).
#
# 下面是使用`z_mean`和`z_log_var`的代码,它们是假定生成了输入图像的统计学分布的两个参数,代码用它们来取样潜空间中的点z。这里我们将取样的代码(在Keras backend原语上构建)封装成一个Lambda层。在Keras中,任何东西都应该是一个层,因此所有不属于内建层的代码都应该封装到Lambda(或者自定义层)之中。

# +
def sampling(args):
    z_mean, z_log_var = args
    epsilon = K.random_normal(shape=(K.shape(z_mean)[0], latent_dim), mean=0., stddev=1.)
    return z_mean + K.exp(z_log_var) * epsilon

z = layers.Lambda(sampling)([z_mean, z_log_var])
# -

# > This is the decoder implementation: we reshape the vector z to the dimensions of an
# image, then we use a few convolution layers to obtain a final image output that has the
# same dimensions as the original input_img .
# # 然后是解码器实现:我们将z向量重新转换成一张图像,然后我们使用几个卷积层来获得与原始图像相同维度的输出图像。 # + # 解码器的输入我们会使用z decoder_input = layers.Input(K.int_shape(z)[1:]) # 使用正确数量的单元提升采样 x = layers.Dense(np.prod(shape_before_flattening[1:]), activation='relu')(decoder_input) # 恢复成铺平之前的图像形状 x = layers.Reshape(shape_before_flattening[1:])(x) # 下面使用与编码其相反的操作:加上一个`Conv2DTranspose`层以及相应的参数 x = layers.Conv2DTranspose(32, 3, padding='same', activation='relu', strides=(2, 2))(x) x = layers.Conv2D(1, 3, padding='same', activation='sigmoid')(x) # 最后我们就获得了一个与原始输入相同尺寸的特征地图 # 然后定义解码器模型 decoder = Model(decoder_input, x) # 然后就可以将它应用到`z`上得到解码图像 z_decoded = decoder(z) # - # > The dual loss of a VAE doesn’t fit the traditional expectation of a sample-wise # function of the form loss(input, target) . Thus, we set up the loss by writing a # custom layer with internally leverages the built-in add_loss layer method to create an # arbitrary loss. # # VAE的双损失与常用的样本相关的函数形式`loss(input, target)`无法匹配。因此我们需要编写一个自定义的层来构建损失,在其内部使用内建的`add_loss`方法来获得任意的损失函数定义。 # + class CustomVariationalLayer(keras.layers.Layer): def vae_loss(self, x, z_decoded): x = K.flatten(x) z_decoded = K.flatten(z_decoded) xent_loss = keras.metrics.binary_crossentropy(x, z_decoded) kl_loss = -5e-4 * K.mean(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1) return K.mean(xent_loss + kl_loss) def call(self, inputs): x = inputs[0] z_decoded = inputs[1] loss = self.vae_loss(x, z_decoded) self.add_loss(loss, inputs=inputs) # 我们不会使用这个层来输出 return x # 使用输入和解码输出调用我们自定义的层次,来获取最终模型的输出 y = CustomVariationalLayer()([input_img, z_decoded]) # - # > Finally, we instantiate and train the model. Since the loss has been taken care of in # our custom layer, we don’t specify an external loss at compile time ( loss=None ), which # in turns means that we won’t pass target data during training (as you can see we only # pass x_train to the model in fit ). # # 最后构建和训练这个模型,因为损失已经在自定义层次中计算了,所以我们在编译模型时无需指定额外的损失函数(`loss=None`),这也意味着模型训练时不会传递目标数据参数给模型(下面的代码可以看到我们只传递了x_train到模型训练)。 # + from tensorflow.keras.datasets import mnist import tensorflow as tf tf.compat.v1.enable_eager_execution() vae = Model(input_img, y) vae.compile(optimizer='rmsprop', loss=None) vae.summary() # 在MNIST数据集上训练我们的VAE模型 (x_train, _), (x_test, y_test) = mnist.load_data() x_train = x_train.astype('float32') / 255. x_train = x_train.reshape(x_train.shape + (1,)) x_test = x_test.astype('float32') / 255. x_test = x_test.reshape(x_test.shape + (1,)) vae.fit(x=x_train, y=None, shuffle=True, epochs=10, batch_size=batch_size, validation_data=(x_test, None)) # - # > Once such a model is trained—e.g. 
on MNIST, in our case—we can use the decoder # network to turn arbitrary latent space vectors into images: # # 模型训练好了之后,比方说在MNIST数据集上,就可以使用解码器网络来在潜空间取样获得图像: # + import matplotlib.pyplot as plt from scipy.stats import norm # %matplotlib inline # 展示一个手写数字的2D流形 n = 15 # 15x15的网格 digit_size = 28 figure = np.zeros((digit_size * n, digit_size * n)) # 在单位正方形中的线性空间坐标通过正态分布的逆累积分布函数按照潜空间向量z获得 # 因为我们对潜空间的先验假设为正态分布 grid_x = norm.ppf(np.linspace(0.05, 0.95, n)) grid_y = norm.ppf(np.linspace(0.05, 0.95, n)) for i, yi in enumerate(grid_x): for j, xi in enumerate(grid_y): z_sample = np.array([[xi, yi]]) z_sample = np.tile(z_sample, batch_size).reshape(batch_size, 2) x_decoded = decoder.predict(z_sample, batch_size=batch_size) digit = x_decoded[0].reshape(digit_size, digit_size) figure[i * digit_size: (i + 1) * digit_size, j * digit_size: (j + 1) * digit_size] = digit plt.figure(figsize=(10, 10)) plt.imshow(figure, cmap='Greys_r') # - # ![decode numbers](imgs/f8.14.jpg) # # 图8-14 从潜空间中获得手写数字 # > The grid of sampled digits shows a completely continuous distribution of the different # digit classes, with one digit morphing into another as you follow a path through latent # space. Specific directions in this space have a meaning, e.g. there is a direction for # "four-ness", "one-ness", etc. # # 上面的数字网格完全展示了不同数字种类的连续分布,从一个数字变化到另外一个数字就像你在潜空间中沿着某个方向前进一样。在这个空间中特定的方向有着相应的意义,例如有一个方向表示“4”、“1”等。 # # > In the next section, we cover in detail the other major tool for generating artificial # images: generative adversarial networks (GANs). # # 在下一节中,我们会介绍另一个生成人工图像的主要工具:生成对抗网络(GAN)。 # ### 8.4.4 小结 # # > Image generation with deep learning is done by learning latent spaces that capture # statistical information about a dataset of images. By sampling points from the latent # space, and "decoding" them, one can generate never-seen-before images. There are two # major tools to do this: VAEs and GANs. # # > - VAEs result in highly structured, continuous latent representations. For this reason, they # work well for doing all sort of image edition in latent space, like face swapping, turning a # frowning face into a smiling face, and so on. They also work nicely for doing latent space # based animations, i.e. animating a walk along a cross section of the latent space, showing # a starting image slowly morphing into different images in a continuous way. # - GANs enable the generation of realistic single-frame images, but may not induce latent # spaces with solid structure and high continuity. # # 深度学习中的图像生成需要通过模型学习到捕获到图像数据集上的统计学信息的潜空间来实现。从潜空间中取样点,然后“解码”,就能生成之前不存在的图像。有两个主要的工具来完成这项任务:VAE和GAN。 # # - VAE能够获得高度结构化连续的潜空间。因此它能够完成各种各样的图像在潜空间进行编辑的工作,例如换脸、将皱眉表情变为微笑表情等等。它也能应用在实现潜空间动画上,例如在潜空间中沿着一个切面形成动画、展示一张初始图像然后连续渐变到其他图像上。 # - GAN能够生成单帧的真实图像,但是它的潜空间可能不是结构化和高度连续的。 # # > Most successful practical applications I have seen with images actually rely on VAEs, # but GANs are extremely popular in the world of academic research—at least circa # 2016-2017. You will find out how they work and how to implement one in the next # section. # # 很多成功的实际图像应用都依赖着VAE,但是GAN在学术领域却是异常流行,至少在2016-2017左右是这样。你可以在下一节看到GAN的工作原理。 # # > To play further with image generation, I suggest working with the CelebA dataset, # "Large-scale Celeb Faces Attributes". It’s a free-to-download image dataset with more # than 200,000 celebrity portraits. It’s great for experimenting with concept vectors in # particular. It beats MNIST for sure. 
# # 要进一步学习验证图像生成,作者建议使用CelebA数据集,这是一个“大规模名人脸谱数据集”。它可以免费下载,内含超过20万个名人肖像。它对于实验概念向量非常合适。肯定比MNIST数据集要好。 # ## 8.5 生成对抗网络简介 # # > Generative Adversarial Networks (GANs), introduced in 2014 by <NAME>, are an # alternative to VAEs for learning latent spaces of images. They enable the generation of # fairly realistic synthetic images by forcing the generated images to be statistically almost # indistinguishable from real ones. # # 生成对抗网络(GAN)是2014年由<NAME>提出的,它是除VAE外另一种学习图像潜空间的方法。它能生成相当真实的合成图像,通过让生成图像的统计学特征与真实图像基本一致来实现。 # # > An intuitive way to understand GANs is to imagine a forger trying to create a fake # Picasso painting. At first, the forger is pretty bad at the task. He mixes some of his fakes # with authentic Picassos, and shows them all to an art dealer. The art dealer makes an # authenticity assessment for each painting, and gives the forger feedback about what # makes a Picasso look like a Picasso. The forger goes back to his atelier to prepare some # new fakes. As times goes on, the forger becomes increasingly competent at imitating the # style of Picasso, and the art dealer becomes increasingly expert at spotting fakes. In the # end, we have on our hands some excellent fake Picassos. # # 理解GAN的一个直观方式是想象有一个伪造者尝试伪造毕加索的画作。一开始的时候伪造者很不擅长这个任务。他将自己伪造的作品混入毕加索的真迹当中展示给艺术鉴赏人士。鉴赏人对每幅画作进行真伪评价,然后反馈给伪造者评判毕加索真迹的信息。伪造者根据这些反馈信息,回到他的工作室重新绘制一些新的赝品。随着时间推进,伪造者越来越擅长仿制毕加索画作这项任务,而同时鉴赏人也在鉴别赝品领域变得越来越专业。最终,我们就能得到一些非常逼真的毕加索赝品。 # # > That’s what GANs are: a forger network network and an expert network, each being # trained to best the other. As such, a GAN is made of two parts: # # > - A generator network , which takes as input a random vector (a random point in the latent # space) and decodes it into a synthetic image. # - A discriminator network (also called adversary ), which takes as input an image (real or # synthetic), and must predict whether the image came from the training set or was created # by the generator network. # # 这就是GAN的构成:一个伪造者网络和一个专家网络,每一个都需要进行训练,以期能够打败另一个。所以GAN的组成包括: # # - 一个生成网络,接收随机向量作为输入(潜空间中的一个随机点)然后将它解码成一个合成图像。 # - 一个鉴别器网络(也叫作对抗网络),接收一张图像(真实或合成)作为输入,然后判断这张图像来自训练集还是由生成网络生成。 # # > The generator network is trained to be able to fool the discriminator network, and # thus it evolves towards generating increasingly realistic images as training goes on: # artificial images that look indistinguishable from real ones—to the extent that it is # impossible for the discriminator network to tell the two apart. Meanwhile, the # discriminator is constantly adapting to the gradually improving capabilities of the # generator, which sets a very high bar of realism for the generated images. Once training # is over, the generator is capable of turning any point in its input space into a believable # image. Unlike VAEs, this latent space has less explicit guarantees of meaningful # structure, and in particular, it isn’t continuous. # # 生成网络的训练目标是击败鉴别器网络,因此它会随着训练过程的推进而产生越发真实的图像:这些图像看起来无法与真实图像区分出来,最终目标是使得鉴别器网络无法分出真假。而同时鉴别器也在不断的从生成器中改进鉴别能力,这样就能不断提升鉴别生成图像真伪的标准。当训练完成后,生成器能够将任何潜空间的点转换成一张难以分辨真伪的图像。不同于VAE,这里的潜空间没有明确有意义的结构,或者更确切的说,它不是连续的。 # # ![GAN](imgs/f8.15.jpg) # # 图8-15 生成对抗网络原理 # > Remarkably, a GAN is a system where the optimization minimum isn’t fixed—unlike # in any other training setup you have encountered in this book before. Normally, gradient # descent consists in rolling down some hills in a static loss landscape. However, with a # GAN, every step taken down the hill changes the entire landscape by a bit. 
It’s a dynamic # system where the optimization process is seeking not a minimum, but rather an # equilibrium between two forces. For this reason, GANs are notoriously very difficult to # train—getting a GAN to work require lots of careful tuning of the model architecture and # training parameters. # # GAN不像本书之前介绍过的所有训练过程那样,它的最小优化值不是固定的。通常来说梯度下降就像是在一个静态的损失空间中下山一样。然而在GAN中,每次下山的一步都会稍微的改变整个损失空间一点。所以这是一个动态的系统,这里的优化目标不再是寻找一个最优最小值,而是在两股力量之间寻找平衡。正因为此,GAN具有非常高的训练难度,要训练出一个成功的GAN模型,需要许多精细的模型结构和训练参数的调整。 # # ![GAN example](imgs/f8.16.jpg) # # 图8-16 Mike Tyka使用多阶段GAN从人脸数据集上生成的图像。[Mike Tyka的网站](https://miketyka.com/) # ### 8.5.1 一个GAN的概要实现 # # > In what follows, we explain how to implement a GAN in Keras, in its barest form—since # GANs are quite advanced, diving deeply into the technical details would be out of scope # for us. Our specific implementation will be a deep convolutional GAN, or DCGAN: a # GAN where the generator and discriminator are deep convnets. In particular, it leverages # a Conv2DTranspose layer for image upsampling in the generator. # # 下面我们来介绍如何在Keras中实现一个GAN,当然是最原始的形式,因为GAN相当高深,深入到内部的技术细节将会超出本书的范围。我们这里的实现将会是深度卷积生成对抗网络,简称DCGAN:也就是生成器和鉴别器都是深度卷积网络的GAN。具体来说,它使用了`Conv2DTranspose`层来实现生成器的上采样。 # # > We will train our GAN on images from CIFAR10, a dataset of 50,000 32x32 RGB # images belong to 10 classes (5,000 images per class). To make things even easier, we # will only use images belonging to the class "frog". # # 我们会使用CIFAR10图像数据集来训练我们的GAN,这是一个有着5万张32x32 RGB图像的数据集,这些图像分别归属于10个不同的种类(每个类别5000张图像)。为了使得任务更加简单,我们仅仅使用那些类别是“青蛙”的图像。 # # > Schematically, our GAN looks like this: # # > - A generator network maps vectors of shape (latent_dim,) to images of shape (32, # 32, 3) . # - A discriminator network maps images of shape (32, 32, 3) to a binary score estimating # the probability that the image is real. # - A gan network chains the generator and the discriminator together: gan(x) = # discriminator(generator(x)) . Thus this gan network maps latent space vectors to # the discriminator’s assessment of the realism of these latent vectors as decoded by the # generator. # - We train the discriminator using examples of real and fake images along with # "real"/"fake" labels, as we would train any regular image classification model. # - To train the generator, we use the gradients of the generator’s weights with regard to the # loss of the gan model. This means that, at every step, we move the weights of the # generator in a direction that will make the discriminator more likely to classify as "real" # the images decoded by the generator. I.e. we train the generator to fool the discriminator. # # 总的来说我们的GAN就是如下的形式: # # - 一个生成器网络将形状为(latent_dim,)的向量解码成形状为(32, 32, 3)的图像。 # - 一个鉴别器网络将形状为(32, 32, 3)的图像输出成二分分类,估计图像为真的概率。 # - 一个GAN网络将生成器和鉴别器串联起来:`gan(x) = discriminator(generator(x))`。因此整个GAN网络将潜空间向量映射成鉴别器对其生成图像的真伪评估。 # - 我们使用真实的以及伪造的图像来训练鉴别器,同时包括这些图像的“真伪”标签,就像我们在训练一个普通的图像分类模型一样。 # - 为了训练生成器,我们使用整个GAN模型的损失来对生成器权重进行梯度运算。这意味着,每一次我们都将其权重朝着让鉴别器更容易认为图像为“真”的方向去移动一点点。这就是实际上训练生成器来欺骗鉴别器。 # ### 8.5.2 一些技巧 # # > Training GANs and tuning GAN implementations is notoriously difficult. There are a # number of known "tricks" that one should keep in mind. Like most things in deep # learning, it is more alchemy than science: these tricks are really just heuristics, not # theory-backed guidelines. They are backed by some level of intuitive understanding of # the phenomenon at hand, and they are known to work well empirically, albeit not # necessarily in every context. 
# # 训练和调参GAN实现起来是出了名的困难。这里有一些总结出来的“技巧”应该被记住。就像很多其他在深度学习中的技巧一样,它们更像炼金术而不是科学:这些技巧实际上都是启发性算法而非具有理论支持的准则。它们都是在实际实验中根据现象使用某种程度的直觉理解获得的,它们在很多场合下都工作良好,尽管并非每种环境中都需要。 # # > Here are a few of the tricks that we leverage in our own implementation of a GAN # generator and discriminator below. It is not an exhaustive list of GAN-related tricks; you # will find many more across the GAN literature. # # > - We use tanh as the last activation in the generator, instead of sigmoid , which would be # more commonly found in other types of models. # - We sample points from the latent space using a normal distribution (Gaussian # distribution), not a uniform distribution. # - Stochasticity is good to induce robustness. Since GAN training results in a dynamic # equilibrium, GANs are likely to get "stuck" in all sorts of ways. Introducing randomness # during training helps prevent this. We introduce randomness in two ways: 1) we use # dropout in the discriminator, 2) we add some random noise to the labels for the # discriminator. # - Sparse gradients can hinder GAN training. In deep learning, sparsity is often a desirable # property, but not in GANs. There are two things that can induce gradient sparsity: 1) max # pooling operations, 2) ReLU activations. Instead of max pooling, we recommend using # strided convolutions for downsampling, and we recommend using a LeakyReLU layer # instead of a ReLU activation. It is similar to ReLU but it relaxes sparsity constraints by # allowing small negative activation values. # - In generated images, it is common to see "checkerboard artifacts" caused by unequal # coverage of the pixel space in the generator. To fix this, we use a kernel size that is # divisible by the stride size, whenever we use a strided Conv2DTranpose or Conv2D in # both the generator and discriminator. # # 下面列出了我们的生成器和鉴别器GAN实现中使用到的一些技巧。这当然不是一份有关GAN技巧的完整列表,你可以在GAN相关的文献中找到更多的技巧。 # # - 我们使用`tanh`作为生成器最后的激活函数,而不是`sigmoid`,后者是其他模型中经常使用的激活函数。 # - 我们使用正态分布(高斯分布)来从潜空间中取样,而不是均匀分布。 # - 随机性能够更好地提供健壮性。因为GAN的训练结果是一个动态平台,所以GAN很容易在各种情况下卡住。在训练中引入随机性能够帮助避免这一点。我们使用两种方式引入随机性:1)在鉴别器中使用dropout,2)在鉴别器的标签中加入一些随机噪音。 # - 稀疏梯度会阻碍GAN的训练。在深度学习中稀疏性通常是希望的特点,但在GAN中不是这样。有两个做法会带来稀疏性:1)最大池化操作,2)线性整流单元激活。所以我们推荐使用步进卷积对图像进行下取样来取代最大池化,使用`LeakyReLU`层来取代`ReLU`激活。`LeakyReLU`类似于`ReLU`,但是它允许存在小数值的负数以减低稀疏性。 # - 在生成的图像中很容易观察到“棋盘效应”,这是由于生成器的在像素空间的不平衡导致的。为了修正这一点,我们使用的核大小能够被步进大小整除,在生成器和鉴别器中无论使用`Conv2DTranspose`还是`Conv2D`层时都保证这一点。 # # ![checkboard artifact](imgs/f8.17.jpg) # # 图8-17 棋盘效应,由于步进值和核大小值不匹配造成的像素空间不平衡,GAN中一个著名的坑 # ### 8.5.3 生成器 # # > First, we develop a generator model, which turns a vector (from the latent # space—during training it will sampled at random) into a candidate image. One of the # many issues that commonly arise with GANs is that the generator gets stuck with # generated images that look like noise. A possible solution is to use dropout on both the # discriminator and generator. 
# # 首先我们构建生成器模型,它能将一个向量(训练时从潜空间中随机取样获得)转换成一个候选图像。在GAN中有一个经常会碰到的问题就是生成器卡在不停生成噪音的阶段。一个可以采取的措施就是在鉴别器和生成器中都加上dropout层。 # + import tensorflow.keras from tensorflow.keras import layers import numpy as np latent_dim = 32 height = 32 width = 32 channels = 3 generator_input = keras.Input(shape=(latent_dim,)) # 首先将输入转换成一个16x16具有128个通道的特征地图 x = layers.Dense(128 * 16 * 16)(generator_input) x = layers.LeakyReLU()(x) x = layers.Reshape((16, 16, 128))(x) # 然后加入一个卷积层 x = layers.Conv2D(256, 5, padding='same')(x) x = layers.LeakyReLU()(x) # 上采样到32x32 x = layers.Conv2DTranspose(256, 4, strides=2, padding='same')(x) x = layers.LeakyReLU()(x) # 在增加一些卷积层 x = layers.Conv2D(256, 5, padding='same')(x) x = layers.LeakyReLU()(x) x = layers.Conv2D(256, 5, padding='same')(x) x = layers.LeakyReLU()(x) # 产生一个32x31 1个通道的特征地图 x = layers.Conv2D(channels, 7, activation='tanh', padding='same')(x) generator = keras.models.Model(generator_input, x) generator.summary() # - # ### 8.5.4 鉴别器 # # > Then, we develop a discriminator model, that takes as input a candidate image (real or # synthetic) and classifies it into one of two classes, either "generated image" or "real # image that comes from the training set". # # 然后我们就来构建鉴别器模型,他接收一张候选图像(真实的或合成的)作为输入,并将其分为两类,“生成的图像”或“来自训练集的真实图像”。 # + discriminator_input = layers.Input(shape=(height, width, channels)) x = layers.Conv2D(128, 3)(discriminator_input) x = layers.LeakyReLU()(x) x = layers.Conv2D(128, 4, strides=2)(x) x = layers.LeakyReLU()(x) x = layers.Conv2D(128, 4, strides=2)(x) x = layers.LeakyReLU()(x) x = layers.Conv2D(128, 4, strides=2)(x) x = layers.LeakyReLU()(x) x = layers.Flatten()(x) # 加入一个dropout层,非常重要的技巧 x = layers.Dropout(0.4)(x) # 分类器层 x = layers.Dense(1, activation='sigmoid')(x) discriminator = keras.models.Model(discriminator_input, x) discriminator.summary() # 为了令训练逐渐稳定,我们在优化器中使用学习率衰减和梯度裁剪 discriminator_optimizer = keras.optimizers.RMSprop(lr=0.0008, clipvalue=1.0, decay=1e-8) discriminator.compile(optimizer=discriminator_optimizer, loss='binary_crossentropy') # - # ### 8.5.5 对抗网络 # # > Finally, we setup the GAN, which chains the generator and the discriminator. This is the # model that, when trained, will move the generator in a direction that improves its ability # to fool the discriminator. This model turns latent space points into a classification # decision, "fake" or "real", and it is meant to be trained with labels that are always "these # are real images". So training gan will updates the weights of generator in a way that # makes discriminator more likely to predict "real" when looking at fake images. Very # importantly, we set the discriminator to be frozen during training (non-trainable): its # weights will not be updated when training gan . If the discriminator weights could be # updated during this process, then we would be training the discriminator to always # predict "real", which is not what we want! 
# # 最后我们构建GAN,它将生成器和鉴别器串联在一起。这个模型的目标是当训练时,我们会将生成器的权重朝着改进它能更好欺骗鉴别器的方向移动。这个模型将潜空间的点转换成最终的分类预测,“赝品”或“真迹”,模型设计的宗旨就是使用“这些是真实的图像”这样的标签来进行训练。因此训练GAN会更新生成器的权重,期望更新后生成的合成图像更容易使得鉴别器认为是真的。非常重要的一点是,在训练过程中我们会冻结鉴别器权重(不可训练的):鉴别器的权重在训练GAN过程中不会更新。因为如果过程中更新了鉴别器的权重,最终我们会训练出永远预测为“真实”图像的鉴别器,这显然不是我们希望的。 # + # 设置鉴别器权重不可训练(仅对整个GAN模型而言) discriminator.trainable = False gan_input = keras.Input(shape=(latent_dim,)) gan_output = discriminator(generator(gan_input)) gan = keras.models.Model(gan_input, gan_output) gan_optimizer = keras.optimizers.RMSprop(lr=0.0004, clipvalue=1.0, decay=1e-8) gan.compile(optimizer=gan_optimizer, loss='binary_crossentropy') # - # ### 8.5.6 如何训练我们的DCGAN # # > Now we can start training. To recapitulate, this is schematically what the training loop # looks like: # # 现在可以开始训练了。整个训练的循环过程如下: # # ```text # for each epoch: # * 从潜空间中取样点 (随机噪音). # * 使用这个随机噪音在生成器中生成图像 # * 将生成的图像混入真实图像中 # * 使用这些混合的图像来训练鉴别器,使用相应的目标标签,“真实”或者“合成” # * 从潜空间中取样新的随机点 # * 使用这些随机向量训练GAN,这时的目标标签使用的是“这些都是真实图像”,用来更新生成器的权重 # ``` # # > Let’s implement it: # # 让我们来实现它: # # 译者注,以下代码修改了图像输出目录以及定时保存的间隔。 # + import os from tensorflow.keras.preprocessing import image # 载入CIFAR10数据集 (x_train, y_train), (_, _) = keras.datasets.cifar10.load_data() # 选择其中的青蛙图像(类别6) x_train = x_train[y_train.flatten() == 6] # 规范化数据 x_train = x_train.reshape((x_train.shape[0],) + (height, width, channels)).astype('float32') / 255. iterations = 10000 batch_size = 20 save_dir = os.path.join(os.environ['HOME'], 'gan_output') # 开始训练的循环 start = 0 for step in range(iterations): # 从潜空间中随机取样点 random_latent_vectors = np.random.normal(size=(batch_size, latent_dim)) # 将向量解码成合成图像 generated_images = generator.predict(random_latent_vectors) # 将合成图像混入真是图像 stop = start + batch_size real_images = x_train[start: stop] combined_images = np.concatenate([generated_images, real_images]) # 组装真是图像和合成图像的目标标签 labels = np.concatenate([np.ones((batch_size, 1)), np.zeros((batch_size, 1))]) # 在标签中加入随机噪音 - 非常重要的技巧 labels += 0.05 * np.random.random(labels.shape) # 训练鉴别器 d_loss = discriminator.train_on_batch(combined_images, labels) # 从潜空间中随机取样更多的点 random_latent_vectors = np.random.normal(size=(batch_size, latent_dim)) # 组装新的标签,说明“这些都是真实图像” misleading_targets = np.zeros((batch_size, 1)) # 训练生成器 (通过GAN模型,这时鉴别器的权重不可训练) a_loss = gan.train_on_batch(random_latent_vectors, misleading_targets) start += batch_size if start > len(x_train) - batch_size: start = 0 # 定时保存或绘制图像 if step % 100 == 99: # 保存模型参数 gan.save_weights('gan.h5') # 打印指标 print('discriminator loss:', d_loss) print('adversarial loss:', a_loss) # 保存一张生成图像 img = image.array_to_img(generated_images[0] * 255., scale=False) img.save(os.path.join(save_dir, 'generated_frog' + str(step) + '.png')) # 保存一张真是图像,用于做对比 img = image.array_to_img(real_images[0] * 255., scale=False) img.save(os.path.join(save_dir, 'real_frog' + str(step) + '.png')) # - # > When training, you may see your adversarial loss start increasing considerably while # your discriminative loss will tend to zero, i.e. your discriminator may end up dominating # your generator. If that’s the case, try reducing the discriminator learning rate and increase # the dropout rate of the discriminator. # # 当训练时,你有可能会看到你的对抗损失急剧增加而鉴别损失趋向于0,也就是说你的鉴别器开始完全支配你的生成器了。如果出现了这种情况,尝试减小鉴别器的学习率和增加鉴别器的dropout比率。 # # ![frog generated images](imgs/f8.18.jpg) # # 图8-18 图中每一列都有两张合成图像和一张真实图像,你可以肉眼识别吗。答案是真是图像分别在中间、顶部、底部、中间。 # ### 8.5.7 小结 # # > - GANs consist in a generator network coupled with a discriminator network. 
The # discriminator is trained to tell apart the output of the generator and real images from a # training dataset, while the generator is trained to fool the discriminator. Remarkably, the # generator nevers sees images from the training set directly; the information it has about # the data comes from the discriminator. # - GANs are difficult to train, because training a GAN is a dynamic process rather than a # simple descent process with a fixed loss landscape. Getting a GAN to train correctly # requires leveraging a number of heuristic tricks, as well as extensive tuning. # - GANs can potentially produce highly realistic images. However, unlike VAEs, the latent # space that they learn does not have a neat continuous structure, and thus may not be # suited for certain practical applications, such as image editing via latent space concept # vectors. # # - GAN包含着一个生成网络和一个鉴别器网络。鉴别器训练来对真实数据集图像和生成图像进行分类,而生成器训练来欺骗鉴别器。这里很重要的一点是,生成器从未直接接触训练集中的图像,它的信息完全来自于鉴别器的反馈信息。 # - GAN训练难度很高,因为训练GAN是一个动态过程,而不是传统的静态空间梯度下降过程。要使得GAN正确的训练需要使用一系列启发性技巧,和繁重的调参工作。 # - GAN可以生成高度真实的图像。然而不像VAE,它获得的潜空间并没有干净的连续结构,所以它也不能胜任某些应用场景,比如使用潜空间概念向量进行图像编辑。 # ## 8.6 总结:生成深度学习 # # > This is the end of the chapter on creative applications of deep learning, where deep nets # go beyond simply annotating existing content, and start generating their own. You have # just learned: # # > - How to generate sequence data, one timestep at a time. This is applicable to text # generation, but also to note-by-note music generation, or any other type of timeseries # data. # - How Deep Dreams work: by maximizing convnet layer activations through gradient # ascent in input space. # - How to perform style transfer, where a content image and a style image get combined to # produce interesting-looking results. # - What GANs and VAEs are, how they can be used for dreaming up new images, and how # latent space "concept vectors" could be used for image edition. # # 这里要结束本章,深度学习的创造性应用了,本章让你看到深度网络已经超越标记已经存在的内容范畴,进入到生成内容的范畴了。你在本章了解了: # # - 如何生成序列数据,一次产生一个数据。这广泛应用在文本生成上,不过也可以应用在音乐生成或其他类型的时间序列数据上。 # - Deep Dream是如何工作的:通过在输入空间上最大化梯度增强的激活结果。 # - 如何进行风格迁移,用来将内容图像和风格图像组合在一起生成很有趣的结果。 # - GAN和VAE是什么,它们是如何产生全新的图像的,还有潜空间“概念向量”如何用来进行图像编辑。 # # > These few techniques only cover the very basics of this fast-expanding field. There’s # a lot more to discover out there—generative deep learning would be deserving of an # entire book of its own. # # 这些技术仅仅覆盖了这个快速扩张领域的最基础部分。这个领域还有很多本章未阐述却值得发现的内容,生成深度学习这个主题完全可以写一本书。 # << [第七章:高级深度学习最佳实践](Chapter7_Advanced_deep_learning_best_pratices.ipynb)|| [目录](index.md) || [第九章:总结](Chapter9_Conclusions.ipynb) >>
Chapter8_Generative_deep_learning.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# A number of common questions come up about basic numbers reporting for the final list. This notebook explores some ways that we can take our intermediate SGCN summary with the results of taxonomic authority consultation and answer those questions. Pandas grouping is particularly useful in this context.

import pandas as pd

sgcn_summary = pd.read_csv('sgcn_taxonomy_check.csv', low_memory=False)

# Based on the taxonomic lookup process, we end up with final identified taxa at various levels of the taxonomic hierarchy. We record that detail in a taxonomic_rank property retrieved from the matching document in ITIS or WoRMS. In many cases, we want to report only on taxa identified at the species level, which we do in subsequent steps, but we should look at the distribution of the data across ranks first.

for rank, group in sgcn_summary.groupby("taxonomic_rank"):
    print(rank, len(group))

# We may also want to limit our exploration to just those species that are included in the latest reporting period, 2015. This codeblock sets up a new dataframe filtered to only species reported in 2015.

matched_species = sgcn_summary.loc[(sgcn_summary["taxonomic_rank"] == "Species") & (sgcn_summary["2015"].notnull())]
print(len(matched_species))

# Now we can look at the distribution of species that were successfully aligned with taxonomic authorities (aka the National List) by the high level taxonomic group assigned based on the mapping of logical groups to higher level taxonomy.

for tax_group, group in matched_species.groupby("taxonomic_group"):
    print(tax_group, len(group))

# We might also want to look further at what happened in the taxonomic matching process. We generated a field in the processing metadata that captures the overall method used in matching a submitted name string to a taxon identifier.
#
# * Exact Match - means that the submitted name was found to match exactly one valid ("accepted" in the case of ITIS plants) taxon
# * Fuzzy Match - means that the original submitted name had a misspelling of some kind but that we were able to find it with a fuzzy search
# * Followed Accepted TSN or Followed Valid AphiaID - means that the original submitted name string found a match to a taxon that is no longer considered valid and our process followed the taxonomic reference to retrieve a valid taxon for use
# * Found multiple matches - means that our search on the submitted name string found multiple matches for the name (often homonyms) but that only a single valid taxon was available to give us an acceptable match

for match_method, group in matched_species.groupby("match_method"):
    print(match_method, len(group))

# If we really want to dig into the details, we can pull just the details for those cases where the submitted name string does not match the final valid scientific name we matched to in the taxonomic authority. This codeblock outputs a subset dataframe with just the pertinent details.

matched_species.loc[matched_species["lookup_name"] != matched_species["valid_scientific_name"]][["lookup_name","valid_scientific_name","match_method"]]
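# As a quick extra check (a sketch that only reuses the `matched_species` columns already referenced in this notebook), we can also count how many of these name changes came from each match method, which gives a sense of which matching paths most often led to a different valid name.

# +
# Count records whose submitted (lookup) name differs from the final valid
# scientific name, broken out by the method that produced the match.
name_changed = matched_species["lookup_name"] != matched_species["valid_scientific_name"]
matched_species.loc[name_changed].groupby("match_method").size()
# -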
Explore Summarized Data.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: qiskit
#     language: python
#     name: qiskit
# ---

# # Task 4
#
# Find the lowest eigenvalue of the following matrix:
#
# $$
# \begin{pmatrix}
# 1 & 0 & 0 & 0 \\
# 0 & 0 & -1 & 0 \\
# 0 & -1 & 0 & 0 \\
# 0 & 0 & 0 & 1 \\
# \end{pmatrix}
# $$
#
# using VQE-like circuits, created by yourself from scratch.

# # 1) Decomposing the desired matrix into Pauli operators
#
# Since we want to measure a VQE-like circuit, we need to decompose the matrix into its Pauli components. We have that the Pauli matrices are:
#
# $$
# \sigma_x = \begin{pmatrix}
# 0 & 1 \\
# 1 & 0
# \end{pmatrix} \hspace{1em} , \hspace{1em} \sigma_y = \begin{pmatrix}
# 0 & -i \\
# i & 0
# \end{pmatrix} \hspace{1em} , \hspace{1em} \sigma_z = \begin{pmatrix}
# 1 & 0 \\
# 0 & -1
# \end{pmatrix}
# $$
#
# Thus to get the Pauli components of a $2^N$ x $2^N$ matrix $A$, as stated on [stackoverflow](https://quantumcomputing.stackexchange.com/questions/8725/can-arbitrary-matrices-be-decomposed-using-the-pauli-basis), we have:
#
# $$
# A = \sum_{ij} h_{ij} \hspace{0.5em} \sigma_i \otimes \sigma_j
# $$
#
# And the components $h_{ij}$ are (the factor $\frac{1}{4}$ is $1/2^N$ for $N = 2$ qubits):
#
# $$
# h_{ij} = \frac{1}{4} \mathrm{Tr} \bigg[ (\sigma_i \otimes \sigma_j) \cdot A \bigg]
# $$
#
# Therefore we create a function `get_components_from_matrix` that does this decomposition for us.

import numpy as np
import matplotlib.pyplot as plt
import qiskit

# +
# The matrix that we want to decompose
A = np.array([[1,0,0,0],[0,0,-1,0],[0,-1,0,0],[0,0,0,1]])

# Defining Pauli Matrices
pauli_x = np.array([[0,1],[1,0]]) ; pauli_y = np.array([[0,-1j],[1j,0]]) ; pauli_z = np.array([[1,0],[0,-1]])

basis = {'I': np.eye(2), 'X': pauli_x, 'Y': pauli_y, 'Z': pauli_z}

# +
from itertools import product

def get_components_from_matrix(A, basis):
    """ Decompose a matrix on a given basis, in our case we decompose in the Pauli basis {I,X,Y,Z}.

    Args:
        A (list, np.array): Matrix that you want to decompose.
        basis (dict): dictionary with name of the basis as keys and the basis matrix as values.

    Output:
        components_dict (dict): Dictionary with the basis name as keys and component as values.

    """
    assert len(A) == len(A[0]), "your matrix is not square"
    assert not np.abs(int(np.log2(len(A))) - np.log2(len(A))) > 0, "the length of your matrix is not a power of 2"

    repeat_times = int(np.log2(len(A)))
    components_dict = {}
    for (name_1, pauli_1),(name_2, pauli_2) in product(basis.items(), repeat=repeat_times):
        components_dict[name_1 + name_2] = np.trace(1/4.*np.kron(pauli_1,pauli_2) @ A)

    return { key : val for key,val in components_dict.items() if val != 0}

# + tags=[]
components = get_components_from_matrix(A, basis)
print("Decomposition:")
for name, comp in components.items():
    print(f"   {name} : {comp}")

# + tags=[]
print("Result of the decomposition of A:")
decomp = 0.5*np.kron(np.eye(2), np.eye(2)) - 0.5*np.kron(pauli_x, pauli_x) - 0.5*np.kron(pauli_y, pauli_y) + 0.5*np.kron(pauli_z, pauli_z)
print(f"""{decomp}""")
# -

# Now we know that the desired matrix has the following Pauli decomposition:
#
# $$
# A = 0.5 I \otimes I \ - \ 0.5 \sigma_x \otimes \sigma_x \ - \ 0.5 \sigma_y \otimes \sigma_y \ + \ 0.5 \sigma_z \otimes \sigma_z
# $$
#
# # 2) Creating VQE circuit
#
# Now we want to construct the VQE circuit and measure the Pauli factors in order to get the lowest eigenvalue of $A$.
This is done in three parts: # - 1) Construct the VQE Ansatz, which is a parametrized quantum circuit; # - 2) Construct XX, YY, and ZZ measurements; # - 3) Vary VQE Ansatz parameters and measure the eigenvalue. # ## 2.1) Creating Ansatz # # In order to create the Variational Ansatz, we need to create a parametrized circuit. Here I chose to use Qiskit, therefore we need to use `parameter` from the `qiskit.circuit` library. # + from qiskit import QuantumCircuit, QuantumRegister from qiskit import execute, Aer from qiskit.circuit import Parameter theta = Parameter('θ') qr = QuantumRegister(2, name='vqe') qc = QuantumCircuit(qr) qc.h(qr[0]) qc.cx(qr[0], qr[1]) qc.rx(theta, qr[0]) qc.draw('mpl') # - # ## 2.2) Constructing XX, YY, and ZZ measurements # # After creating our Ansatz, we need to measure the expected values of our pauli variables: XX, YY, and ZZ. The ZZ measurement is the easiest one, because the measurements are given on the computational basis, which is the same as saying that it is in the Z basis. For XX and YY measurements we need to make a change of basis in order to measure it, this will be explained in their respective sections. # ### 2.2.1) Constructing ZZ measurement # # Since the standard measurements are already in the ZZ basis, we only need to copy the circuit and use the `.measure_all()` method. Below there is a example of a circuit doing the ZZ measurement. # + def measure_zz_circuit(circuit: qiskit.QuantumCircuit): """Measure the ZZ Component of the variational circuit. Args: circuit (qiskit.QuantumCircuit): Circuit that you want to measure the ZZ Component. Outputs: zz_meas (qiskit.QuantumCircuit): Circuit with ZZ measurement. """ zz_meas = circuit.copy() zz_meas.measure_all() return zz_meas zz_meas = measure_zz_circuit(qc) zz_meas.draw('mpl') # - # Now that we have created the ZZ circuit, we need a way to measure it. In this case we will do a sweep over the parametric variable $\theta$ and get the expected values for each value of the parameter. # # In order to calculate the expected value, we need to see how a ZZ measurement will affect a arbitrary qubit: # # $$ # Z \otimes Z(a ∣00\rangle + b∣01\rangle + c∣10 \rangle + d∣11\rangle ) = a∣00 \rangle − b∣01\rangle − c∣10\rangle + d∣11\rangle # $$ # # Thus in order to have the expectation value of Z, we have: # $$ # \langle \psi | Z\otimes Z | \psi \rangle = \mathrm{Pr}(00) - \mathrm{Pr}(01) - \mathrm{Pr}(10) + \mathrm{Pr}(11) # $$ # # Where $\mathrm{Pr}(ij)$ is the probability of measuring the variable $ij$. In order to obtain the probabilities, we need to fill in a zero when the probability doesn't show up because qiskit will not put this probability in the dictionary. def measure_zz(given_circuit: qiskit.QuantumCircuit, theta_range: [list, np.array], num_shots: int = 10000): """ Measure the ZZ expected value for a given_circuit. Args: given_circuit (qiskit.QuantumCircuit): The parametrized circuit that you want to calculate the ZZ expected value. theta_range (Union[list, np.array]): Range of the parameter that you want to sweep. num_shots (int): Number of shots for each circuit run. (default=10000) Returns: zz (np.array): Expected values for each parameter. 
""" zz_meas = measure_zz_circuit(given_circuit) simulator = qiskit.Aer.get_backend('qasm_simulator') job = execute(zz_meas, backend= simulator, shots= num_shots, parameter_binds=[{theta: theta_val} for theta_val in theta_range]) counts = job.result().get_counts() zz = [] for count in counts: # Fill if the ij doesn't show up if '00' not in count: count['00'] = 0 if '01' not in count: count['01'] = 0 if '10' not in count: count['10'] = 0 if '11' not in count: count['11'] = 0 # Get total counts in order to obtain the probability total_counts = count['00'] + count['11'] + count['01'] + count['10'] # Get counts for expected value zz_meas = count['00'] + count['11'] - count['01'] - count['10'] # Append the probability zz.append(zz_meas / total_counts) return np.array(zz) theta_range = np.linspace(0, 2 * np.pi, 128) zz = measure_zz(qc, theta_range) plt.plot(theta_range, zz); plt.title(r"$\langle ZZ \rangle$", fontsize=16) plt.xlabel(r"$\theta$", fontsize=14) plt.ylabel(r"$\langle ZZ \rangle$", fontsize=14) plt.show() # ### 2.2.2) Constructing YY measurement # # In order to get the expectation value for the YY measurement, we need a change of basis because the measurement are only made on the Z basis. We use the following identity: $Y = (HS^\dagger)^\dagger Z H S^\dagger$, thus the YY expected value is: # # $$ # \langle \psi | Y | \psi \rangle = (\langle \psi |(H S^\dagger)^\dagger) Z (H S^\dagger| \psi \rangle) \equiv \langle \tilde \psi | Z | \tilde \psi \rangle # $$ # # Therefore, we need to add $S^\dagger H$ and do a Z measurement (just as the previous section) where we want to measure the $Y$ expected value. # + def measure_yy_circuit(circuit: qiskit.QuantumCircuit): """Measure the YY Component of the variational circuit. Args: given_circuit (qiskit.QuantumCircuit): Circuit that you want to measure the YY Component. Outputs: yy_meas (qiskit.QuantumCircuit): Circuit with YY measurement. """ yy_meas = circuit.copy() yy_meas.barrier(range(2)) yy_meas.sdg(range(2)) yy_meas.h(range(2)) yy_meas.measure_all() return yy_meas yy_meas = measure_yy_circuit(qc) yy_meas.draw('mpl') # - def measure_yy(given_circuit: qiskit.QuantumCircuit, theta_range: [list, np.array], num_shots: int = 10000): """ Measure the YY expected value for a given_circuit. Args: given_circuit (qiskit.QuantumCircuit): The parametrized circuit that you want to calculate the YY expected value. theta_range (Union[list, np.array]): Range of the parameter that you want to sweep. num_shots (int): Number of shots for each circuit run. (default=10000) Returns: yy (np.array): Expected values for each parameter. 
""" yy_meas = measure_yy_circuit(given_circuit) simulator = Aer.get_backend('qasm_simulator') job = execute(yy_meas, backend= simulator, shots=num_shots, parameter_binds=[{theta: theta_val} for theta_val in theta_range]) counts = job.result().get_counts() yy = [] for count in counts: if '00' not in count: count['00'] = 0 if '01' not in count: count['01'] = 0 if '10' not in count: count['10'] = 0 if '11' not in count: count['11'] = 0 total_counts = count['00'] + count['11'] + count['01'] + count['10'] yy_meas = count['00'] + count['11'] - count['01'] - count['10'] yy.append(yy_meas / total_counts) return np.array(yy) theta_range = np.linspace(0, 2 * np.pi, 128) yy = measure_yy(qc, theta_range) plt.plot(theta_range, yy) plt.title(r"$\langle YY \rangle$", fontsize=16) plt.xlabel(r"$\theta$", fontsize=14) plt.ylabel(r"$\langle YY \rangle$", fontsize=14) plt.show() # ### 2.2.3) Constructing XX measurement # # Just as the YY measurement, we need to use a identity in order to change from the X basis to the Z basis in order to do the measurement: $X = H Z H $, thus the YY expected value is: # # $$ # \langle \psi | Y | \psi \rangle = (\langle \psi |H) Z (H | \psi \rangle) \equiv \langle \tilde \psi | Z | \tilde \psi \rangle # $$ # # Therefore, we need to add $H$ and do a Z measurement where we want to measure the $X$ expected value. # + def measure_xx_circuit(circuit: qiskit.QuantumCircuit): """Measure the XX Component of the variational circuit. Args: circuit (qiskit.QuantumCircuit): Circuit that you want to measure the XX Component. Outputs: xx_meas (qiskit.QuantumCircuit): Circuit with XX measurement. """ xx_meas = circuit.copy() xx_meas.barrier(range(2)) xx_meas.h(range(2)) xx_meas.measure_all() return xx_meas xx_meas = measure_xx_circuit(qc) xx_meas.draw('mpl') # - def measure_xx(given_circuit: qiskit.QuantumCircuit, theta_range: [list, np.array], num_shots: int = 10000): """ Measure the XX expected value for a given_circuit. Args: given_circuit (qiskit.QuantumCircuit): The parametrized circuit that you want to calculate the XX expected value. theta_range (Union[list, np.array]): Range of the parameter that you want to sweep. num_shots (int): Number of shots for each circuit run. (default=10000) Returns: xx (np.array): Expected values for each parameter. """ xx_meas = measure_xx_circuit(given_circuit) simulator = Aer.get_backend('qasm_simulator') job = execute(xx_meas, backend= simulator, shots=num_shots, parameter_binds=[{theta: theta_val} for theta_val in theta_range]) counts = job.result().get_counts() xx = [] for count in counts: if '00' not in count: count['00'] = 0 if '01' not in count: count['01'] = 0 if '10' not in count: count['10'] = 0 if '11' not in count: count['11'] = 0 total_counts = count['00'] + count['11'] + count['01'] + count['10'] xx_meas = count['00'] + count['11'] - count['01'] - count['10'] xx.append(xx_meas / total_counts) return np.array(xx) theta_range = np.linspace(0, 2 * np.pi, 128) xx = measure_xx(qc, theta_range) plt.plot(theta_range, xx) plt.title(r"$\langle XX \rangle$", fontsize=16) plt.xlabel(r"$\theta$", fontsize=14) plt.ylabel(r"$\langle XX \rangle$", fontsize=14) plt.show() # One fun fact is that since the ansatz consists of rotations on the X axis, i.e. $H$ and $R_X$, the expected value of the XX operator is constant, if the ansatz was only rotations on the Y axis, the YY operator would be constant, and so on. 
This shows that our ansatz is not searching through all the Hilbert space and only a portion of it, thus we hope that our solution is in this portion, if it is not we should search for another ansatz. # ## 2.3) Getting the eigenvalue # # Now that we constructed XX, YY, and ZZ expected values, we can measure them given our defined ansatz and apply for the pauli decomposition of the given matrix, which is: # # $$ # A = 0.5 I \otimes I \ - \ 0.5 \sigma_x \otimes \sigma_x \ - \ 0.5 \sigma_y \otimes \sigma_y \ + \ 0.5 \sigma_z \otimes \sigma_z # $$ # # $I \otimes I$ measurements are always 1, because of the normalization of the quantum state, i.e. $\langle \psi | \psi \rangle = 1$, all other values comes from what is measured varying the parameters of the ansatz. def get_eigenvalue(circuit, theta_range, num_shots = 10000): xx = measure_xx(circuit, theta_range, num_shots = num_shots) yy = measure_yy(circuit, theta_range, num_shots = num_shots) zz = measure_zz(circuit, theta_range, num_shots = num_shots) energy = 0.5*1 \ - 0.5*xx \ - 0.5*yy \ + 0.5*zz return energy theta_range = np.linspace(0, 2 * np.pi, 128) eigenvalues = get_eigenvalue(qc, theta_range) plt.plot(theta_range, eigenvalues) plt.title(r"Eigenvalues", fontsize=16) plt.xlabel(r"$\theta$", fontsize=14) plt.ylabel(r"$Eigenvalue$", fontsize=14) plt.show() # + tags=[] print(f"Smallest eigenvalue from VQE: {np.round(np.min(eigenvalues),4)} with theta = {np.round(theta_range[np.argmin(eigenvalues)], 2)}") print(f"Smallest eigenvalue calculated classically = {np.round(np.min(np.linalg.eigh(A)[0]),4)}") print(f"Error between classical and quantum: {np.round(np.abs(np.min(eigenvalues) - np.min(np.linalg.eigh(A)[0])), 4)}") # - # -------------------------------------------- # # References # # 1) [Qiskit Documentation](https://qiskit.org/documentation/tutorials/circuits_advanced/1_advanced_circuits.html) # # 2) [Quantum Computing: An applied Approach - Hidary](https://www.springer.com/gp/book/9783030239213) # # 3)[Qiskit Summer School](https://www.youtube.com/watch?v=Rs2TzarBX5I&list=PLOFEBzvs-VvrXTMy5Y2IqmSaUjfnhvBHR) from qiskit.tools.jupyter import * # %qiskit_version_table
Task 4.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Explainer on Hypothesis Testing and Bayesian Posterior Inference
#
# In this notebook we will consider two types of reasoning, hypothesis testing and Bayesian inference. As we will see, both of these types of reasoning are quite common. But, we will be concerned with situations in which this reasoning can be carried out with precision and objectivity, specifically, circumstances in which we can use the mathematical theories of probability and statistics. Our goal here is simply to understand what the types of reasoning are and how probability and statistics can be used to help us reason well. I will begin by talking about hypothesis testing. After introducing the main idea I will focus on a particular sort of hypothesis test called a t-test and give an example of how to do a t-test. Then I will talk about Bayesian inference. I will go through a couple different techniques for doing Bayesian inference using an example similar to the one used to illustrate the t-test. Lastly, we can compare the two sorts of inference to see what the main differences are.
#
# ## Hypothesis Testing
#
# A __hypothesis test__ is a method for determining how likely an observation is given some hypothesis. If we make some observation and it is highly unlikely that we would make that observation given some hypothesis, then we have justification for rejecting that hypothesis. Alternatively, if we make some observation and it is highly likely that we would make that observation given some hypothesis, then this gives some further credence to the hypothesis (the degree of confirmation depends on many different factors I won't go into here). We are going to focus on cases in which an observation is highly unlikely given some hypothesis.
#
# Consider the following scenario: I arrive home from work and am greeted by my dog. I assume that my dog has been a good dog today per usual. Then I observe that my trash can is tipped over and spilled onto my floor. I judge this observation to be highly unlikely given the hypothesis that my dog has been a good dog today. Consequently, I reject that hypothesis.
#
# This is an example of what we might call a _subjective hypothesis test_, where my method is simply to make a judgment about how likely I think an observation is given some hypothesis. People use hypothesis testing of this sort regularly. _But,_ one issue with this sort of reasoning is that it is subjective! While I judge that the observation is unlikely given the hypothesis, reasonable people could disagree. I haven't given any basis for my judgment beyond the fact that this is how it seems to me. And even if I could give some rationale for my judgment, people often just are not good at making judgments of this sort. So how can we do better?
#
# A _statistical hypothesis test_ is a hypothesis test that utilizes the mathematical theories of statistics and probability to determine how likely the observation is given some hypothesis. By using statistics and probability we can be precise and give an objective basis for our judgment of how likely the observation is.
#
# Unlike the subjective hypothesis test, we cannot always use a statistical hypothesis test. Some hypotheses aren't apt for statistical hypothesis testing and sometimes we just won't have the information necessary to use a statistical hypothesis test. But, if our hypothesis is apt and we do have relevant data then we can perform a statistical hypothesis test.
#
# The topic of statistical hypothesis tests is large, so here we will just focus in on one sort of statistical hypothesis test, namely the t-test. There are a few different types of t-test; we are just going to talk about a two independent sample t-test, but see the appendix for the other types.
#
# A __two independent sample t-test__ is a type of hypothesis test that can be used when we want to determine whether there is a difference in the means of some numeric feature for two different groups. We use it to test the hypothesis that there is no difference in means. This is typically called the null hypothesis. We will test the hypothesis by taking a sample from the groups being tested and then doing some math to determine how likely it is that we would get this sample if the null hypothesis is true. If it is highly unlikely that we would get this sample given the null hypothesis then we reject it in favor of the alternative hypothesis that there is a difference in means.
#
# Note: We can perform a t-test if the numeric feature is normally distributed, the groups have similar variance, and we have an appropriate sample size of between 20-30. We can use the t-test for larger sample sizes but in that case there are more appropriate hypothesis tests, e.g. a z-test.
#
# To perform a two sample t-test we take our samples, and for each we compute the mean $\bar{x}_i$, standard deviation $\sigma_i$, and record the sample size $n_i$. We then use these values to calculate something called the t-statistic. Once we have the t-statistic we will determine how likely it is that we would get a sample with this t-statistic given the null hypothesis. If it is highly unlikely we would get that t-statistic given the null hypothesis then we will reject the null hypothesis. Otherwise we won't do anything.
#
# The t-statistic essentially depends on two factors: how big the difference in the sample means is and how much variance there is in the samples. The bigger the difference in sample means, the more extreme the t-statistic (farther from zero). And the bigger the variance in the samples, the less extreme the t-statistic (closer to zero). It should be clear why a big difference in sample means is relevant, since we are trying to determine if there is a difference between the group means. The reason why we take the variance into account is that if there is a lot of variance in the sample we should be less confident that the sample mean is close to the group mean. So even if there is a big difference in sample means, if there is sufficiently large sample variance we should not reject the null hypothesis.
#
# We calculate the t-statistic using the following equation:
#
# $$t = \frac{\bar{x}_1 - \bar{x}_2}{\sqrt{\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}}}.$$
#
# Now we want to assess how likely it is that we would get the calculated t-statistic if the null hypothesis is true. The probability that we would observe a t-statistic at least as extreme as this one under the null hypothesis is called the __p-value__. If the p-value is very low, this just means that it is highly unlikely we would get this t-statistic given the null hypothesis, in which case we will reject the null hypothesis. Typically, a p-value of less than .05 is seen as grounds for rejecting the null hypothesis. But really it all depends on how important it is to be correct. If it is not a vital matter whether the null hypothesis is true, we might raise the threshold for rejecting the null hypothesis to, say, .1. Alternatively, if it is really important that we only reject the null hypothesis when it really is false, then we might lower the threshold to .01 or even lower.
#
# To find the p-value we can use the probability density function for the t-statistic. Under the null hypothesis, if we took many samples from the two groups we are interested in, $t$ would follow a t-distribution with $d$ degrees of freedom: an approximately normal-looking distribution with a mean of zero and a variance of $\frac{d}{d-2}$ (for $d > 2$). The approximate normality follows from the central limit theorem, which I won't talk about here, but is worth being familiar with. Let's take a look at a t-distribution.

# +
# Plot t-distribution
import numpy as np
import scipy.stats as ss
import matplotlib.pyplot as plt
import matplotlib.mlab as mlab
import seaborn as sns
import pandas as pd
import math

# %matplotlib inline
plt.style.use('seaborn')
plt.rcParams['figure.figsize'] = (10, 6)

# t-distribution for two sample test with sample sizes of 30
t_dist = ss.t(df=58)
x = np.linspace(t_dist.ppf(0.0001), t_dist.ppf(0.9999), 100)
s = math.sqrt(t_dist.stats(moments='v'))
plt.plot(x, t_dist.pdf(x))
#plt.vlines([-3*s, -2*s, -s, 0, s, 2*s, 3*s], 0, .4, linestyle='dashed', colors='y')
plt.show()
# -

# We can see it does indeed look like a normal distribution centered around zero. To calculate our p-value we will find the value of our t-statistic on the x-axis and then find the area under the curve beyond that point. This is actually the t-distribution we will use for the example of how to do a t-test, so let's turn to that now.
#
# ### Example
# Suppose we are planning to move to a new city and we are trying to decide between living in two neighborhoods, Uptown and Downtown. Our budget to buy a home is 200,000 dollars. There are several homes in both neighborhoods that are priced within our budget. But, it would be preferable to find a home within our budget that is in the neighborhood with the higher average of home prices as this leaves more room for our new home's value to grow. So we want to find out if Uptown and Downtown have different averages of home prices.
#
# To check if there is a difference between the average of home prices for each neighborhood we can perform a t-test. Our null hypothesis is $H_0$: The averages of home prices for each neighborhood are the same. Our alternative hypothesis is $H_1$: The averages of home prices for each neighborhood are different.
#
# We will take a random sample of 30 home values from each neighborhood and perform a t-test on them. If we get a p-value less than .05 then we will reject the null hypothesis and conclude the alternative hypothesis is true. First let's get those samples and take a look at them.
# +
import numpy as np
import scipy.stats as ss
import matplotlib.pyplot as plt
import matplotlib.mlab as mlab
import seaborn as sns
import pandas as pd

# %matplotlib inline
plt.style.use('seaborn')
plt.rcParams['figure.figsize'] = (10, 6)

np.random.seed(seed=123)

# Draw Samples
uptown_sample = np.random.normal(loc=185000.0, scale=50000.0, size=30)
downtown_sample = np.random.normal(loc=215000.0, scale=50000.0, size=30)

# Create Data Frame
d_1 = {'location':['uptown']*30, 'price':uptown_sample}
d_2 = {'location':['downtown']*30, 'price':downtown_sample}
df = pd.concat([pd.DataFrame(data=d_1), pd.DataFrame(data=d_2)], axis=0)

# Plot Samples
sns.swarmplot(y='price', x='location', data=df)
plt.show()
# -

# Using the good ol' eye test it looks like the average price for downtown homes is a bit higher than that of uptown homes. But, I think most would agree that the eye test is inconclusive here and so it really is necessary to do the t-test. To do this we need to calculate the t-statistic and then we can see where it is on the t-distribution.

# +
#Perform t-test
uptown_sample_mean = uptown_sample.mean()
uptown_sample_stdev = uptown_sample.std()
uptown_n = 30

downtown_sample_mean = downtown_sample.mean()
downtown_sample_stdev = downtown_sample.std()
downtown_n = 30

t = (uptown_sample_mean - downtown_sample_mean)/math.sqrt((uptown_sample_stdev**2/uptown_n) + (downtown_sample_stdev**2/downtown_n))
print("t-statistic:",t)

# t-distribution for two sample test with sample sizes of 30
t_dist = ss.t(df=58)
x = np.linspace(t_dist.ppf(0.0001), t_dist.ppf(0.9999), 100)
s = math.sqrt(t_dist.stats(moments='v'))
plt.plot(x, t_dist.pdf(x))
plt.vlines([t, -t], 0, .4, linestyle='dashed', colors='g')
plt.xlabel('t-statistic')
plt.show()
# -

# To get the p-value we need to find the area under the curve beyond our lines. Since it can be a bit of a hassle to do this ourselves, there are tables we could refer to that will give us an approximate p-value for our t-statistic. Alternatively, we can just use a python package to calculate the p-value for us. The scipy.stats package has a function for doing a t-test on two samples.

#Perform t-test using built-in function
display(ss.ttest_ind(uptown_sample, downtown_sample))

# We can see from the t-test that our p-value is approximately .03 which is less than .05! This means that if the null hypothesis is true there is less than a 5% chance that we would have gotten a t-statistic as extreme as the one we did get. So, we are justified in rejecting the null hypothesis and concluding that there is a difference in the averages of home prices between Uptown and Downtown. But, it is worth noting that if our threshold for a significant p-value had been set lower at .01 then we would not reject the null hypothesis.
#
# Before moving on to talk about Bayesian inference it is worth considering in a bit more depth what the p-value for our t-statistic means. The p-value for a t-statistic represents the _frequency_ at which we would get that t-statistic or one more extreme if the null hypothesis is true. In other words, if the null hypothesis were true and we performed 100 different t-tests, we would expect to see a t-statistic this extreme or more extreme in about three of them. So if our policy is to reject the null hypothesis whenever we see a result this extreme, then in the cases where the null hypothesis is actually true we will be wrong about three times out of every 100 tests.
#
# Let's now suppose that we do 200 hypothesis tests with a threshold of .05 for rejecting the null hypothesis. Assume that 50% of the time the null hypothesis is true and the other 50% of the time the null hypothesis is false. In the cases where the null hypothesis is true, we should expect to get the answer right 95 times and the answer wrong 5 times. What about the cases in which the null hypothesis is false? In this case we need to calculate the __power__ of our test, which is just the probability that we reject the null hypothesis given that it is false. I will omit the details of how to calculate it, but the power of our test will depend on the sample size and the size of the difference we want to detect. We can use a python package to calculate the power of our t-test.
# +
from statsmodels.stats.power import TTestIndPower

mu_uptown = 185000
mu_downtown = 215000
sigma = 50000

effect_size = (mu_downtown - mu_uptown)/sigma
sample_size = 30
alpha = 0.05

calculator = TTestIndPower()
power = calculator.solve_power(effect_size = effect_size, power=None, nobs1=sample_size, ratio=1.0, alpha=alpha)

print(power)
# -

# So we have a power of roughly .63. This means that if the null hypothesis is false there is a 63% chance we will reject it (where 63% chance means that around 63 out of 100 times we will get a t-statistic that will lead us to reject the null hypothesis). Let's put this information together in a table:
#
# |Confusion Matrix|$H_0$ Rejected|$H_0$ Not Rejected|
# |----------------|----|--------|
# |$H_0$ is False|63|37|
# |$H_0$ is True|5|95|
#
# We can use this table along with the concepts of precision and recall to evaluate our test. __Precision__ is defined as the number of true positives divided by the total number of predicted positives. In our case a true positive is a case where we reject the null hypothesis and the null hypothesis is indeed false. The predicted positives are all the cases in which we reject the null hypothesis. So our precision would be 63/68, which is roughly .93. We want our test to have high precision and ours seems to have reasonably high precision. __Recall__ is defined as the number of true positives divided by the total number of positives. We have 63 true positives. The total number of positives is the number of cases in which the null hypothesis is false, which is 100. So our recall is 63/100 = .63 (which is the same as the power). While our precision is reasonably high our recall would ideally be higher. In our example we know there was a difference in means by design and so we did the correct thing when we rejected the null hypothesis. But, if we had picked a different random sample there is a substantial chance that we would not have rejected the null hypothesis. So our test was not as well designed as it could be. One way we could address the low recall/power would be to increase our sample size.

# ## Bayesian Inference
#
# We saw before that statistical hypothesis testing is essentially a way of updating beliefs about some hypothesis in light of some observation made. Bayesian inference can similarly be viewed as a way of updating beliefs given some observation made. The difference is we won't be rejecting a hypothesis but rather will be adjusting how confident we are that the hypothesis is true. So consider again the example where I arrive home from work and am greeted by my dog. Before I make any observation I am fairly confident that my dog has been a good dog today, call that level of confidence $P(H)$. Also, I know in that moment that if I were to see that my trash can is tipped over and spilled onto my floor then I will be much less confident that my dog was a good dog today, call that level of confidence $P(H|O)$. Subsequently, I see that my trash can is tipped over and spilled onto my floor. How confident should I be that my dog was a good dog today? Call this unknown level of confidence $P_O(H)$. Well I already knew ahead of time that if I made this observation I would be less confident in this hypothesis, specifically to degree $P(H|O)$. Bayesian inference is just the practice of setting my confidence in the hypothesis after I make the observation to what I had established earlier it would be if I were to make that observation. That is $P_O(H) = P(H|O)$.
How confident should I be that my dog was a good dog today? Call this unknown level of confidence $P_O(H)$. Well, I already knew ahead of time that if I made this observation I would be less confident in this hypothesis, specifically to degree $P(H|O)$. Bayesian inference is just the practice of setting my confidence in the hypothesis after I make the observation to what I had established earlier it would be if I were to make that observation. That is $P_O(H) = P(H|O)$. # # Now one worry we should have about this example, a worry we raised before when talking about hypothesis tests, is that my reasoning is too subjective. Specifically, where did $P(H|O)$ come from? Well it was just a vague judgment I made. Fortunately sometimes we are in a position to do better, namely when we can make Bayesian inference precise using the mathematical theory of probability. If we can get some more objective probabilities and use them to determine $P(H|O)$ then our inference will be better. When we are dealing with probabilities, $P_O()$ is called the __posterior probability distribution__ and $P()$ is called the __prior probability distribution__. So another way to describe Bayesian inference is that it is just a way of getting the posterior distribution from the prior distribution, namely we set $P_O() = P(\cdot |O)$ (which is why you will often see the $P(\cdot |O)$ referred to as the posterior distribution). # # __Calculating $P(H|O)$:__ To calculate $P(H|O)$ we use what's called "Bayes' Theorem" or "Bayes' Rule": # # <br> # $$P(H|O) = \frac{P(O|H)P(H)}{P(O)}.$$ # <br> # I'll skip over where this comes from, but see the appendix for the motivation for it. The important thing to note is that we can calculate $P(H|O)$ if we can calculate $P(O|H)$, $P(H)$, and $P(O)$. It is often feasible to calculate $P(O|H)$ (we essentially did this when doing a hypothesis test) and $P(H)$ is the prior for our hypothesis which we will supply. Unfortunately, calculating $P(O)$ is often problematic. We'll see a couple of ways of getting around this difficulty but it is best to just see them in action. So let's now turn to an example of Bayesian inference. # # ### Example # Suppose again that we are interested in home prices for a particular area. Specifically I am interested in the average of home prices in Uptown. Suppose my credences for what the average of home prices in Uptown is are normally distributed with mean 200,000 and a standard deviation of 50,000. That is, I am most confident that the average of home prices in Uptown is 200,000 dollars. As you increase or decrease the value my confidence that this value is the average of home prices in Uptown decreases. Let's visualize the distribution. x = np.linspace(200000.0 - 4*50000.0, 200000 + 4*50000.0, 100) plt.plot(x,mlab.normpdf(x, 200000.0, 50000.0), label='Prior') plt.legend() plt.show() # So, I already have some rough idea of what I think the average of home prices in Uptown is. But, this is more or less an educated guess. I would have a better idea if I took a sample and used that to inform my beliefs. Suppose I get a sample of home prices in Uptown $X = x_1, ..., x_{30}$, and let $\theta$ be a variable that ranges over the candidate means of home prices in Uptown. Using Bayesian inference and Bayes' rule the posterior distribution is # # $$P_X(\theta) = P(\theta|X) = \frac{P(X|\theta)P(\theta)}{P(X)}.$$ # # Unfortunately, calculating what we need to in order to specify the posterior distribution is difficult.
In particular, it is often infeasible to calculate $P(X)$, which is just the prior probability of getting the data we got. But, there are a couple ways around this. One is posterior sampling which we can use to approximate the posterior and the other is to use the fact that in our case the prior is conjugate to the posterior relative to the likelihood, which basically means we can look up an equation to get the parameters for the posterior which only uses values we already know. I'll go through both now. # # We can sample the posterior distribution in order to get an approximate sense of what it is like. You may wonder how we can sample from a distribution for which we are trying to figure out what it is! Yet, there are techniques we can use to get our sample. I'll largely gloss over the technical details, but the basic idea is that the sampling process will propose values for $\theta$ in such a way that the values for $\theta$ that fit better with our observation $X$ and prior will be chosen more often, the values for $\theta$ that do not fit well with our observation $X$ and prior will be chosen less often, and values that are inconsistent with our observation $X$ and prior will be rejected. # # What values of $\theta$ fit better with our observation and prior? Well, the difficulty in calculating $P(\theta|X)$ using Bayes' rule was that we could not calculate $P(X)$, but it is just a fixed number. So, while we cannot calculate the posterior, we can note that for two specific values of $\theta$, $\theta_1$ and $\theta_2$, $P(\theta_1|X) > P(\theta_2|X)$ if and only if $P(X|\theta_1)P(\theta_1) > P(X|\theta_2)P(\theta_2)$. So we can use $P(X|\theta)P(\theta)$ as a way of evaluating how well a particular value of theta fits with our obsevration and prior. # # Now that we have covered the basic idea of posterior sampling, the question remains how do we generate the samples. In this case we will use something called Markov Chain Monte Carlo sampling, specifically what is called the Metropolis–Hastings algorithm. See the appendix for the details on the algorithm. # + import scipy.stats as ss theta_sample = [] sampling_width = 50000.0 #Step One theta_start = np.random.normal(loc=200000.0, scale=50000.0, size=1) theta_current = theta_start for i in range(10000): #Step Two theta_new = np.random.normal(loc=theta_current, scale=25000.0, size=1) theta_new = theta_new[0] #Step Three likelihood_current = ss.norm(theta_current, 50000.0).pdf(uptown_sample).prod() likelihood_new = ss.norm(theta_new, 50000.0).pdf(uptown_sample).prod() prior_current = ss.norm(200000.0, 50000.0).pdf(theta_current) prior_new = ss.norm(200000.0, 50000.0).pdf(theta_new) r = (likelihood_new * prior_new) / (likelihood_current * prior_current) #Step Four t = np.random.rand() #Step Five if t <= r: theta_current = theta_new theta_sample.append(theta_current) #Drop first half of samples theta_sample = theta_sample[-5000:] theta_sample = np.array(theta_sample) # - # Now that we have our sample from the posterior we can visulaize it to get a sense of what it looks like. First let's take a look at the distribution of values for $\theta$ we got, which is the theoretical mean for home prices in Uptown. plt.hist(theta_sample, bins=25) # The sample appears normally distributed centered just below 190,000. Next let's look at the probability density function for values of $\theta$, i.e. our approximation of the posterior. 
We know the posterior is normal so we take the mean of the sample and the standard deviation of our sample and use that to approximate the posterior. Here we will visualize the approximated posterior along with the prior and we will indicate what the mean of observations of home prices in Uptown is. # + x = np.linspace(200000.0 - 4*50000.0, 200000 + 4*50000.0, 100) mean = theta_sample.mean() std = theta_sample.std() observation = uptown_sample.mean() plt.plot(x, mlab.normpdf(x, mean, std), label='Posterior') plt.plot(x,mlab.normpdf(x, 200000.0, 50000.0), label='Prior') plt.vlines([observation], 0, .000043, linestyle='dashed', colors='k', label='Mean of Observed Values') plt.legend() # - # We can see that the posterior is normal and centered right around the mean of our sample of Uptown home prices. So the sample of Uptown home prices is having a large impact on what the posterior distribution is. Given that the true mean of home prices in Uptown is 185,000 dollars and our prior for the mean of home prices in Uptown was centered around 200,000 dollars, this seems like a significant improvement. And it is especially good because $P(X|\theta)$ is an objective probability. Even though our prior was arguably subjective, our posterior is much less subjective. # # Is this a good approximation of the posterior? There are various ways of evaluating whether we got a good approximation or not, which I won't go into here. Since we can calculate our posterior from the conjugate prior we can compare the two to evaluate how good our approximation is. Let's turn to calculating the posterior from the conjugate prior. # # The second way around calculating the posterior using Bayes' rule is to rely on the fact that in this particular problem we have a prior that is normal and a likelihood, i.e. $P(X|\theta)$, that is normal. Given this situation it can be proved that the posterior distribution will be a normal distribution with the following mean and standard deviation: $$\mu_{\text{posterior}} = \frac{\sigma_{\text{sample}}^2\cdot\mu_{\text{prior}} + n\cdot\sigma_{\text{prior}}^2\cdot\mu_{\text{sample}}}{\sigma_{\text{sample}}^2 + n\cdot\sigma_{\text{prior}}^2},$$ # # $$\sigma_{\text{posterior}} = \sqrt{\frac{\sigma_{\text{sample}}^2\cdot\sigma_{\text{prior}}^2}{\sigma_{\text{sample}}^2 + n\cdot\sigma_{\text{prior}}^2}}.$$ # # <br> # The proof is not something worth going through here. Just note that there are other conjugate distributions so it is worth being familiar with what they are if you are doing Bayesian inference often. Let's calculate the posterior and compare it to our approximation. # + x = np.linspace(185000.0 - 50000.0, 185000.0 + 50000.0, 100) #Equation Inputs n=30 mu_prior = 200000.0 stdev_prior = 50000.0 sample_mean = uptown_sample.mean() sample_std = 50000.0 #This is a known quantity and is not calculated from the sample #Calculate Posterior and Plot it mu_posterior_ = ((sample_std**2)*mu_prior + (stdev_prior**2)*n*sample_mean)/((stdev_prior**2)*n + (sample_std**2)) std_posterior = math.sqrt(((sample_std**2)*(stdev_prior**2))/((stdev_prior**2)*n + (sample_std**2))) plt.plot(x, mlab.normpdf(x, mu_posterior_, std_posterior), label='True Posterior') #Plot Approximation mean = theta_sample.mean() std = theta_sample.std() plt.plot(x, mlab.normpdf(x, mean, std), 'g--', label='Approximation Based On Sampling') plt.legend() # - # Here we can see that the approximation of the posterior we got appears to be a good one.
There is a bit more variance in the approximation we got but that is to be expected given it is based on a sample. # # ## Comparing Bayesian Inference to Hypothesis Testing # # One main difference between the two methods is what we get out. In the example for hypothesis testing we get a decision about the hypothesis that the means are different. In the Bayesian case we got a probability distribution for the mean out of it. Another major difference between these two types of inference is what went into them. In the case of hypothesis testing we need a hypothesis and some data, but that is all it is based on. In the case of Bayesian inference we have some data but we also have a prior distribution. We can think of the prior distribution as the analog of the hypothesis but it is a much more complex sort of thing. A third important difference is when they can be used. Bayesian inference can be used anytime we can compute the needed probabilities or can use one of the other methods of specifying the posterior, e.g. posterior sampling. The applicability of the t-test is much narrower, though there are other statistical hypothesis tests that can be used in some other circumstances. # # While there are important differences between these two methods, there is a key similarity between the two. Both methods converge to the truth at the same rate as sample size increases. If we look at the equation for the standard deviation of the posterior distribution, # # <br> # $$\sigma_{\text{posterior}} = \sqrt{\frac{\sigma_{\text{sample}}^2\cdot\sigma_{\text{prior}}^2}{\sigma_{\text{sample}}^2 + n\cdot\sigma_{\text{prior}}^2}},$$ # <br> # we can see that as the sample size $n$ increases the standard deviation decreases at a rate of $\sqrt{\frac{1}{n}}$. Similarly, if we look at the equation for the t-statistic, # # <br> # $$t = \frac{\bar{x} - \mu}{\sqrt{\frac{\sigma^2}{n}}},$$ # <br> # we can see that as the sample size $n$ increases we become more confident that $\bar{x}$ is the true population mean at a rate of $\sqrt{\frac{1}{n}}$. So, sample size does not give either method an advantage over the other. # # # ## Appendix # # #### The three types of t-test: # # - A one sample t-test is a test used to determine if the mean for a group we have a sample from is different from some fixed number. For example, suppose we know the average height of a person; if we have a sample of heights of basketball players we could perform this sort of t-test to determine if there is a difference in the average height of basketball players as compared to the known average height of people in general. # - A two independent samples t-test is a t-test applied to two independent groups that both have the same numeric feature and we want to determine if the mean of that feature differs between the groups. For example, we could perform a t-test of this sort to determine if there is a difference in the average height of men and the average height of women. # - A paired samples t-test is a test for a difference in means between groups where the individuals in the sample are related in some way. For example, we might measure the average blood pressure of a group of individuals before taking some medication and then compare this to the average blood pressure of the same group of individuals after taking the medication. # # Each of these variants maps onto a ready-made `scipy.stats` function, as sketched below.
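# The calls below are only an illustrative sketch, with made-up arrays standing in for real measurements, to show which `scipy.stats` function corresponds to each of the three variants above.

# +
import numpy as np
import scipy.stats as ss

np.random.seed(0)
heights = np.random.normal(200, 10, size=25)       # made-up basketball player heights
group_a = np.random.normal(163, 8, size=25)        # made-up heights for one group
group_b = np.random.normal(177, 8, size=25)        # made-up heights for another group
before = np.random.normal(130, 12, size=25)        # made-up blood pressure before medication
after = before - np.random.normal(5, 3, size=25)   # made-up blood pressure after medication

print(ss.ttest_1samp(heights, popmean=170))  # one sample t-test against a fixed mean
print(ss.ttest_ind(group_a, group_b))        # two independent samples t-test
print(ss.ttest_rel(before, after))           # paired samples t-test
# -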
# # #### Motivation for definition of conditional probability and Bayes' Rule: # # In the theory of probability, we have the following definition of conditional probability: # # <br> # $$P(H|O) =_{df} \frac{P(H\& O)}{P(O)}.$$ # <br> # To motivate the definition of conditional probability we can use one of Bayes' examples. Suppose we have a billiard table and a billiard ball. We will roll the ball and it will bounce around the table until it stops. We are interested in where it will stop and let's assume that for any two spots on the table there is an equal chance it will stop there. The probability that it will land in a specific region of the table is the area of that region divided by the total area of the pool table. So looking at the picture below, the probability that the ball will stop in region $B$ is the area of $B$ divided by the total area of the table. This gives us a general method for calculating the probability the ball will stop in a region of interest. The probability is equal to the area of the region of interest divided by the total area of the region where the ball could stop. # # ![Conditional Probability](https://github.com/jpkell05/images/blob/master/Pool_Table_Conditional_Probability.png?raw=true) # # Suppose we are interested in the conditional probability that the ball stops in region $A$ given that it stops somewhere in region $B$. This is analogous to the simple case above: we know the ball will stop somewhere in region $B$ and we want to know the probability that it also stops in region $A$. So, we look at the subregion of $B$ that is also part of region $A$, i.e. region $A\& B$. We take the area of $A\& B$ and divide by the total area of $B$. We end up with $P(A|B) = \frac{P(A\& B)}{P(B)}$, which is consistent with the definition of conditional probability given above. The fact that the definition gives the intuitively correct result in this case gives us some reason to think it is correct. # # By doing a little algebra on the definition of conditional probability we can see that $P(H\& O) = P(O|H)P(H)$ and so by substitution we get Bayes' Theorem: # # <br> # $$P(H|O) = \frac{P(O|H)P(H)}{P(O)}.$$ # # #### The Metropolis–Hastings algorithm: # # The Metropolis–Hastings algorithm is as follows. # # 1. Pick a starting point $\theta_0$ (I will do this by sampling from the prior) and set $\theta_{\text{current}} = \theta_0$. # # 2. Propose a new value $\theta_{\text{new}}$ by sampling $\theta_{\text{new}}$ from a normal distribution centered around $\theta_{\text{current}}$ (the standard deviation of the normal is a parameter you pick and can tune). # # 3. Calculate $r$ where $$r = \frac{P(X|\theta_{\text{new}})P(\theta_{\text{new}})}{P(X|\theta_{\text{current}})P(\theta_{\text{current}})}.$$ # <br> # 4. Sample a threshold value $t$ from a uniform distribution over the interval $[0, 1]$. # # 5. If $r < t$ then $\theta_{\text{new}}$ is rejected and we repeat steps two through five, otherwise set $\theta_{\text{current}} = \theta_{\text{new}}$, record $\theta_{\text{current}}$ and repeat steps two through five. #
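# As a quick numerical sanity check of the definition of conditional probability, we can simulate the billiard-table example: scatter points uniformly over a unit-square "table", pick two overlapping rectangular regions $A$ and $B$ (chosen arbitrarily here, not the regions in the picture above), and compare the empirical frequency of "in $A$ given in $B$" with the ratio of the areas. This is only an illustrative sketch.

# +
import numpy as np

np.random.seed(0)
points = np.random.uniform(0, 1, size=(100000, 2))  # uniform stopping points on a 1x1 table

in_A = (points[:, 0] < 0.6) & (points[:, 1] < 0.7)  # A = [0, 0.6] x [0, 0.7]
in_B = (points[:, 0] > 0.3) & (points[:, 1] > 0.2)  # B = [0.3, 1] x [0.2, 1]

# Empirical P(A|B): among points that stopped in B, how often did they also stop in A?
p_A_given_B = in_A[in_B].mean()

# P(A & B) / P(B) computed directly from the areas of the regions
area_A_and_B = (0.6 - 0.3) * (0.7 - 0.2)
area_B = (1 - 0.3) * (1 - 0.2)

print(p_A_given_B, area_A_and_B / area_B)  # the two numbers should agree closely
# -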
Explainer on Hypothesis Testing and Bayesian Posterior Inference.ipynb
# --- # jupyter: # jupytext: # formats: ipynb,md # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import geopandas import numpy import matplotlib.pyplot as plt import geoplanar from shapely.geometry import box, Point # # Omitted interiors # For a planar enforced polygon layer there should be no individual polygons that are contained inside other polygons. # # Violation of this condition can lead to a number of errors in subsequent spatial analysis. # # ## Violation: Points within more than a single feature # If this were not the case, then it would be possible for a point to be contained inside more than a single polygon which would be a violation of planar enforcement. # An example can be seen as follows: # + p1 = box(0,0,10,10) p2 = box(1,1, 3,3) p3 = box(7,7, 9,9) gdf = geopandas.GeoDataFrame(geometry=[p1,p2,p3]) base = gdf.plot(edgecolor='k') pnt1 = geopandas.GeoDataFrame(geometry=[Point(2,2)]) pnt1.plot(ax=base,color='red') # - pnt1.within(gdf.geometry[0]) pnt1.within(gdf.geometry[1]) # The violation here is that `pnt1` is `within` *both* polygon `p1` *and* `p2`. # ## Error in area calculations # # A related error that arises in this case is that the area of the "containing" polygon will be too large, since it includes the area of the smaller polygons: gdf.geometry[0] gdf.area gdf.area.sum() # ## Missing interior rings (aka holes) # # The crux of the issue is that the two smaller polygons are entities in their own right, yet the large polygon was defined to have only a single external ring. It is missing two **interior rings** # which would allow for the correct topological relationship between the larger polygon and the two smaller polygons. # # `geoplanar` can detect missing interiors: mi = geoplanar.missing_interiors(gdf) mi # ## Adding interior rings # Once we know that the problem is missing interior rings, we can correct this with `add_interiors`: gdf1 = geoplanar.add_interiors(gdf) gdf1.geometry[0] # And we see that the resulting area of the GeoSeries is now correct: gdf1.area # Additionally, a check for `missing_interiors` reveals the violation has been corrected geoplanar.missing_interiors(gdf1) # The addition of the interior rings also corrects the violation of the containment rule that a point should belong to at most a single polygon in a planar enforced polygon GeoSeries: # pnt1.within(gdf1.geometry[0]) pnt1.within(gdf1.geometry[1]) # ## Failure to detect contiguity # # A final implication of missing interiors in a non-planar enforced polygon GeoSeries is that algorithms that rely on planar enforcement to detect contiguous polygons will fail. # # More specifically, in [pysal](https://pysal.org), fast polygon detectors can be used to generate so called Queen neighbors, which are pairs of polygons that share at least one vertex on their exterior/interior rings. import libpysal w = libpysal.weights.Queen.from_dataframe(gdf) w.neighbors # The original GeoDataFrame results in fully disconnected polygons, or islands. `pysal` at least throws a warning when islands are detected, and for this particular type of planar enforcement violation, missing interiors, the contained polygons will always be reported as islands. # # Using the corrected GeoDataFrame with the inserted interior rings results in the correct neighbor determinations: w = libpysal.weights.Queen.from_dataframe(gdf1) w.neighbors
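# As a purely numerical check of the same issue, we can measure how much area the containing polygon double counts before the interior rings are added. The cell below only uses the shapely set operations available on the geometries defined earlier (assuming `gdf` still holds the original three boxes and `gdf1` the corrected ones); it is not part of the `geoplanar` API.

# +
# Area shared between the containing polygon and each contained polygon
overlap = (gdf.geometry[0].intersection(gdf.geometry[1]).area
           + gdf.geometry[0].intersection(gdf.geometry[2]).area)

print("double counted area:", overlap)
print("naive total area:", gdf.area.sum())
print("total area after correcting for the overlap:", gdf.area.sum() - overlap)
print("total area with interiors added:", gdf1.area.sum())
# -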
notebooks/holes.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernel_info: # name: u4-s3-dnn # kernelspec: # display_name: U4-S3-DNN (Python 3.7) # language: python # name: u4-s3-dnn # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/pragmatizt/DS-Unit-4-Sprint-3-Deep-Learning/blob/master/ira_Unit_4_Sprint_3_Challenge.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + [markdown] id="ne5SUADtA9K1" colab_type="text" # <img align="left" src="https://lever-client-logos.s3.amazonaws.com/864372b1-534c-480e-acd5-9711f850815c-1524247202159.png" width=200> # <br></br> # <br></br> # # # Major Neural Network Architectures Challenge # ## *Data Science Unit 4 Sprint 3 Challenge* # # In this sprint challenge, you'll explore some of the cutting edge of Data Science. This week we studied several famous neural network architectures: # recurrent neural networks (RNNs), long short-term memory (LSTMs), convolutional neural networks (CNNs), and Autoencoders. In this sprint challenge, you will revisit these models. Remember, we are testing your knowledge of these architectures not your ability to fit a model with high accuracy. # # __*Caution:*__ these approaches can be pretty heavy computationally. All problems were designed so that you should be able to achieve results within at most 5-10 minutes of runtime on SageMaker, Colab or a comparable environment. If something is running longer, doublecheck your approach! # # ## Challenge Objectives # *You should be able to:* # * <a href="#p1">Part 1</a>: Train a LSTM classification model # * <a href="#p2">Part 2</a>: Utilize a pre-trained CNN for objective detection # * <a href="#p3">Part 3</a>: Describe the components of an autoencoder # * <a href="#p4">Part 4</a>: Describe yourself as a Data Science and elucidate your vision of AI # + [markdown] colab_type="text" id="-5UwGRnJOmD4" # <a id="p1"></a> # ## Part 1 - RNNs # # Use an RNN/LSTM to fit a multi-class classification model on reuters news articles to distinguish topics of articles. The data is already encoded properly for use in an RNN model. # # Your Tasks: # - Use Keras to fit a predictive model, classifying news articles into topics. # - Report your overall score and accuracy # # For reference, the [Keras IMDB sentiment classification example](https://github.com/keras-team/keras/blob/master/examples/imdb_lstm.py) will be useful, as well the RNN code we used in class. # # __*Note:*__ Focus on getting a running model, not on maxing accuracy with extreme data size or epoch numbers. Only revisit and push accuracy if you get everything else done! 
# + colab_type="code" id="DS-9ksWjoJit" outputId="95ead12f-9d2a-459d-dda1-7e2ab89a5fff" colab={"base_uri": "https://localhost:8080/", "height": 63} from tensorflow.keras.datasets import reuters (X_train, y_train), (X_test, y_test) = reuters.load_data(num_words=None, skip_top=0, maxlen=None, test_split=0.2, seed=723812, start_char=1, oov_char=2, index_from=3) # + id="Bi2yBFp2B2aD" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="c3e6957f-8e6e-4109-dd49-5e884014e8f0" X_train.shape, y_train.shape, X_test.shape, y_test.shape # + colab_type="code" id="fLKqFh8DovaN" outputId="6462a4e5-6373-47aa-fa19-9d16e1898be2" colab={"base_uri": "https://localhost:8080/", "height": 70} # Demo of encoding word_index = reuters.get_word_index(path="reuters_word_index.json") print(f"Iran is encoded as {word_index['iran']} in the data") print(f"London is encoded as {word_index['london']} in the data") print("Words are encoded as numbers in our dataset.") # + colab_type="code" id="_QVSlFEAqWJM" colab={"base_uri": "https://localhost:8080/", "height": 248} outputId="43d5e202-3b31-45f2-89fe-ae82e39f1572" from tensorflow.keras.preprocessing import sequence from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Embedding, LSTM batch_size = 46 max_features = len(word_index.values()) maxlen = 200 print(len(X_train), 'train sequences') print(len(X_test), 'test sequences') print('Pad sequences (samples x time)') X_train = sequence.pad_sequences(X_train, maxlen=maxlen) X_test = sequence.pad_sequences(X_test, maxlen=maxlen) print('X_train shape:', X_train.shape) print('X_test shape:', X_test.shape) print('Build model...') model = Sequential() model.add(Embedding(max_features, 128)) model.add(LSTM(128, dropout=0.1, recurrent_dropout=0.1)) model.add(Dense(1, activation='sigmoid')) # + id="lR68S6v1A9Lq" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 195} outputId="6fc9bfe0-f75b-4216-cca7-b326d078d633" # You should only run this cell once your model has been properly configured model.compile(loss='sparse_categorical_crossentropy', optimizer='nadam', metrics=['accuracy']) print('Train...') model.fit(X_train, y_train, batch_size=batch_size, epochs=1, validation_data=(X_test, y_test)) score, acc = model.evaluate(X_test, y_test, batch_size=batch_size) print('Test score:', score) print('Test accuracy:', acc) # + id="8GjTAvJmDj48" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="16178261-58f9-4122-cde2-2faa427ac155" # Reference for the Sequence Data Question below: X_train.shape, X_test.shape # + [markdown] id="BxApIf33A9L0" colab_type="text" # ## Sequence Data Question # #### *Describe the `pad_sequences` method used on the training dataset. What does it do? Why do you need it?* # # **Answer**: Pad sequences transforms a list into a 2D numpy array. As we can see in the first time we split the data, it was 1 dimensional. # # To transform it into a 2D array, we used the the pad_sequences() method. Also, the maxlen indicates the maximum length of each sequence. # # So as we transform these into 2D arrays, we ensure that they are the same shape by using the maxlen parameter. 
# # **References** (for my future self reviewing this sprint challenge): # - [Keras Documentation](https://keras.io/preprocessing/sequence/) # - [Stack Overflow](https://stackoverflow.com/questions/42943291/what-does-keras-io-preprocessing-sequence-pad-sequences-do) # # ## RNNs versus LSTMs # #### *What are the primary motivations behind using Long-ShortTerm Memory Cell unit over traditional Recurrent Neural Networks?* # # **Answer**: Simply put, LSTM's can remember information for long periods of time. # # In non-technical terms it can bring up "context" from the past to present & future information. # # LSTM's have the ability to add or remove information to the cell state by structures called **gates**. # # *"Gates are a way to optionally let information through. They are composed out of a sigmoid neural net layer and a pointwise multiplication operation."* - Blog post referenced below. # # ![alt text](https://i.stack.imgur.com/Iv3nU.png) # # **Reference Links**: # - [StackOverflow](https://i.stack.imgur.com/Iv3nU.png) # - [Blog Post on LSTMs](https://colah.github.io/posts/2015-08-Understanding-LSTMs/) # - [Link](https://arxiv.org/ftp/arxiv/papers/1604/1604.04573.pdf): Saw on my google searches; looked like an interesting article on multi-label image classification # # ## RNN / LSTM Use Cases # #### *Name and Describe 3 Use Cases of LSTMs or RNNs and why they are suited to that use case* # # **Answer**: # - Unsegmented, connected handwriting recognition # - Speech recognition # - Anomaly detection in network traffic # # They're best suited for the cases mentioned above because LSTM excels in classifying, processing, and making predictions on *time series* data. # # In each of the cases above, there could be an unspecified period of time between events. # # The fact that LSTM "remembers" makes it an excellent tool for these sort of problems. # # **Reference Link**: # - [Wikipedia](https://en.wikipedia.org/wiki/Long_short-term_memory) # # + [markdown] colab_type="text" id="yz0LCZd_O4IG" # <a id="p2"></a> # ## Part 2- CNNs # # ### Find the Frog # # Time to play "find the frog!" Use Keras and ResNet50 (pre-trained) to detect which of the following images contain frogs: # # <img align="left" src="https://d3i6fh83elv35t.cloudfront.net/newshour/app/uploads/2017/03/GettyImages-654745934-1024x687.jpg" width=400> # # + colab_type="code" id="whIqEWR236Af" outputId="4c22cc22-84f7-44fd-9f52-688fe5637bd5" colab={"base_uri": "https://localhost:8080/", "height": 70} # !pip install google_images_download # + colab_type="code" id="EKnnnM8k38sN" outputId="c4eee7c6-5000-4da3-c40f-3c4ef9178eaa" colab={"base_uri": "https://localhost:8080/", "height": 351} from google_images_download import google_images_download response = google_images_download.googleimagesdownload() arguments = {"keywords": "lilly frog pond", "limit": 5, "print_urls": True} absolute_image_paths = response.download(arguments) # One error below. Looks like the fifth image is returning a 404 error. # + [markdown] colab_type="text" id="si5YfNqS50QU" # At time of writing at least a few do, but since the Internet changes - it is possible your 5 won't. You can easily verify yourself, and (once you have working code) increase the number of images you pull to be more sure of getting a frog. Your goal is to validly run ResNet50 on the input images - don't worry about tuning or improving the model. # # *Hint* - ResNet 50 doesn't just return "frog". 
The three labels it has for frogs are: `bullfrog, tree frog, tailed frog` # # *Stretch goals* # - Check for fish or other labels # - Create a matplotlib visualizations of the images and your prediction as the visualization label # + id="aoDod5tkaLgk" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 388} outputId="4f765abf-4155-469a-b0e8-3f6e26513188" import numpy as np from tensorflow.keras.applications.resnet50 import ResNet50 from tensorflow.keras.preprocessing import image from tensorflow.keras.applications.resnet50 import preprocess_input, decode_predictions def process_img_path(img_path): return image.load_img(img_path, target_size=(224, 224)) def img_contains_frog(img): x = image.img_to_array(img) x = np.expand_dims(x, axis=0) x = preprocess_input(x) model = ResNet50(weights='imagenet') features = model.predict(x) frog_results = decode_predictions(features, top=3)[0] print(frog_results) frog_results.append for entry in frog_results: if 'frog'in entry[1]: return True # Else: return False def img_contains_fish(img): x = image.img_to_array(img) x = np.expand_dims(x, axis=0) x = preprocess_input(x) model = ResNet50(weights='imagenet') features = model.predict(x) fish_results = decode_predictions(features, top=3)[0] fish_results.append(decode_predictions(features, top=10)[0]) print(fish_results) fish_results.append for entry in fish_results: if 'fish'in entry[1]: return True #Else: return False # Frogs for x in absolute_image_paths[0]['lilly frog pond']: x = process_img_path(x) print(img_contains_frog(x)) # Fish for x in absolute_image_paths[0]['lilly frog pond']: x = process_img_path(x) print(img_contains_fish(x)) # + [markdown] id="oErE_CirA9Mk" colab_type="text" # #### Stretch Goal: Displaying Predictions # + id="XVDO-mn1ffBh" colab_type="code" colab={} ## Couldn't get code to work. Good "code challenge" for myself once Winter Break starts. # + [markdown] colab_type="text" id="XEuhvSu7O5Rf" # <a id="p3"></a> # ## Part 3 - Autoencoders # # Describe a use case for an autoencoder given that an autoencoder tries to predict its own input. # # *Answer:* Given that it tries to predict its own input, one novel way of using autoencoders is image denoising. Oftentimes images contain noise in the data -- autoencoders can get rid of that noise! # # - [Medium](https://medium.com/datadriveninvestor/deep-learning-autoencoders-db265359943e), a decent blog post overview on autoencoders. # - [Kaggle](https://www.kaggle.com/shivamb/how-autoencoders-work-intro-and-usecases), I love this Kaggle post on AutoEncoders. Putting this on my list for winter reading. # + [markdown] colab_type="text" id="626zYgjkO7Vq" # <a id="p4"></a> # ## Part 4 - More... # + [markdown] colab_type="text" id="__lDWfcUO8oo" # Answer the following questions, with a target audience of a fellow Data Scientist: # # - **What do you consider your strongest area, as a Data Scientist?** # # **Answer**: I would actually consider my non-technical experience as something that will help me in the long term. # # I have a background in sales, operations, and entrepreneurship. So I'm comfortable with *storytelling* (selling a product or service), I understand business (which will help me share the data with different stakeholders: whether C-suite, finance, marketing, or customers, etc.), and I have an undergrad background in Economics (a general understanding of data and visualizations which can help me tell the story). 
# # - **What area of Data Science would you most like to learn more about, and why?** # # **Answer**: You know, I was more inspired by the data anlytics and visualizations part of Data Science, but since starting Unit 4, and learning about all the cool things that we can do with images, text, ... anything(!), I kind of want to spend some time looking into this deeper. # # But I would be happy if my starting job in this field is as a Data Analyst or as a Business Intelligence analyst (plays on the strengths I mentioned above) # # - **Where do you think Data Science will be in 5 years?** # # **Answer**: Able to process more data (5G, stronger hardware), maybe one or two groundbreaking algorithms, more ubiquitous, more unintimidating to the general population. # # Fully integrated with industries like energy, agriculture, finance, tech (of course), practically every industry will see the value in Data Science. # # - **What are the threats posed by AI to our society?** # # **Answer**: The social and economic changes are the biggest and most obvious ones. There will be a massive job displacement for people all over the world. Like every technological revolution in the past: agricultural, industrial, digital. # # - **How do you think we can counteract those threats? ** # # **Answer**: We need to have our brightest minds look into how to best "catch" these massive amounts of people that will see their jobs become obsolete due to A.I. # # To have support structures that will allow them to reskill. Whether that's financial support during the time that they're reskilling, as well as the educational support that allows them to get the best education while reskilling. # # (Lambda is doing a great job as a solution to this, which is already happening) # # - **Do you think achieving General Artifical Intelligence is ever possible?** # # **Answer**: Yes, I do. As hardware, algorithms, and internet speeds (meaning improved pipelines) improves, I think it's only a matter of time. # # A few sentences per answer is fine - only elaborate if time allows. # + [markdown] colab_type="text" id="_Hoqe3mM_Mtc" # ## Congratulations! # # Thank you for your hard work, and congratulations! You've learned a lot, and you should proudly call yourself a Data Scientist. # # + id="qoN2ZF2eA9NB" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 319} outputId="2c6b4ea5-459f-44ec-a881-56350659c389" from IPython.display import HTML HTML("""<iframe src="https://giphy.com/embed/26xivLqkv86uJzqWk" width="480" height="270" frameBorder="0" class="giphy-embed" allowFullScreen></iframe><p><a href="https://giphy.com/gifs/mumm-champagne-saber-26xivLqkv86uJzqWk">via GIPHY</a></p>""") # + id="NUsou3rtadp5" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="e0e0c0c3-71b9-4ade-da23-71cc78276b81" print("Woohoo! We did it. Survived four units of Lambda School. Onto labs!")
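# Following up on the autoencoder use case described in Part 3 (denoising), here is a minimal toy sketch of a denoising autoencoder in Keras. It uses random vectors rather than images purely for illustration, and it is an addition for reference rather than part of the original submission.

# +
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# Toy data: clean vectors and a noisy copy of them (illustration only)
clean = np.random.rand(1000, 64)
noisy = clean + 0.1 * np.random.randn(1000, 64)

denoiser = Sequential([
    Dense(32, activation='relu', input_shape=(64,)),  # encoder: compress to 32 units
    Dense(64, activation='sigmoid')                   # decoder: reconstruct the 64 values
])
denoiser.compile(optimizer='adam', loss='mse')

# Train the network to map noisy inputs back to the clean targets
denoiser.fit(noisy, clean, epochs=5, batch_size=32, verbose=0)
# -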
ira_Unit_4_Sprint_3_Challenge.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Full-Waveform Inversion (FWI) # # This notebook is the third in a series of tutorials highlighting various aspects of seismic inversion based on Devito operators. In this example we aim to highlight the core ideas behind seismic inversion, where we create an image of the subsurface from field recorded data. This tutorial follows on from the modelling tutorial and will reuse the modelling operator and velocity model. # # ## Inversion requirement # # Seismic inversion relies on two known parameters: # # - **Field data** - also called **recorded data**. This is a shot record corresponding to the true velocity model. In practice this data is acquired as described in the first tutorial. In order to simplify this tutorial we will fake field data by modelling it with the true velocity model. # # - **Initial velocity model**. This is a velocity model that has been obtained by processing the field data. This model is a rough and very smooth estimate of the velocity that serves as an initial guess for the inversion. Such a starting model is a necessary requirement for any optimization method. # # ## Inversion computational setup # # In this tutorial, we will introduce the gradient operator. This operator corresponds to the imaging condition introduced in the previous tutorial with some minor modifications that are defined by the objective function (also referred to in the tutorial series as the *functional*, *f*) and its gradient, *g*. We will define these two terms in the tutorial too. # # ## Notes on the operators # # As we have already described the creation of a forward modelling operator, we will only call a wrapped function here. This wrapper already contains all the necessary operators for seismic modelling, imaging and inversion; however, any new operator will be fully described and only used from the wrapper in the next tutorials. # + import numpy as np # %matplotlib inline from devito import configuration configuration['log_level'] = 'WARNING' # - # ## Computational considerations # # As we will see in this tutorial, FWI is again very computationally demanding, even more so than RTM. To keep this tutorial as lightweight as possible we therefore again use a very small demonstration model. We also define here a few parameters for the final example runs that can be changed to modify the overall runtime of the tutorial. nshots = 9 # Number of shots to create gradient from nreceivers = 101 # Number of receiver locations per shot fwi_iterations = 8 # Number of outer FWI iterations # # True and smooth velocity models # # As before, we will again use a very simple model domain, consisting of a circle within a 2D domain. We will again use the "true" model to generate our synthetic shot data and use a "smooth" model as our initial guess. In this case the smooth model is very smooth indeed - it is simply a constant background velocity without any features. # + #NBVAL_IGNORE_OUTPUT from examples.seismic import demo_model, plot_velocity, plot_perturbation # Define true and initial model shape = (101, 101) # Number of grid points (nx, nz) spacing = (10., 10.) # Grid spacing in m. The domain size is now 1km by 1km origin = (0., 0.)
# Need origin to define relative source and receiver locations model = demo_model('circle-isotropic', vp=3.0, vp_background=2.5, origin=origin, shape=shape, spacing=spacing, nbpml=40) model0 = demo_model('circle-isotropic', vp=2.5, vp_background=2.5, origin=origin, shape=shape, spacing=spacing, nbpml=40) plot_velocity(model) plot_velocity(model0) plot_perturbation(model0, model) # - # ## Acquisition geometry # # In this tutorial, we will use the easiest case for inversion, namely a transmission experiment. The sources are located on one side of the model and the receivers on the other side. This allow to record most of the information necessary for inversion, as reflections usually lead to poor inversion results. # + #NBVAL_IGNORE_OUTPUT # Define acquisition geometry: source from examples.seismic import RickerSource, Receiver # Define time discretization according to grid spacing t0 = 0. tn = 1000. # Simulation lasts 1 second (1000 ms) dt = model.critical_dt # Time step from model grid spacing nt = int(1 + (tn-t0) / dt) # Discrete time axis length time = np.linspace(t0, tn, nt) # Discrete modelling time f0 = 0.010 # Source peak frequency is 10Hz (0.010 kHz) src = RickerSource(name='src', grid=model.grid, f0=f0, time=np.linspace(t0, tn, nt)) src.coordinates.data[0, :] = np.array(model.domain_size) * .5 src.coordinates.data[0, 0] = 20. # 20m from the left end # We can plot the time signature to see the wavelet src.show() # + #NBVAL_IGNORE_OUTPUT # Define acquisition geometry: receivers # Initialize receivers for synthetic data rec = Receiver(name='rec', grid=model.grid, npoint=nreceivers, ntime=nt) rec.coordinates.data[:, 1] = np.linspace(0, model.domain_size[0], num=nreceivers) rec.coordinates.data[:, 0] = 980. # 20m from the right end # Plot acquisition geometry plot_velocity(model, source=src.coordinates.data, receiver=rec.coordinates.data[::4, :]) # - # ## True and smooth data # # We can generate shot records for the true and smoothed initial velocity models, since the difference between them will again form the basis of our imaging procedure. # + # Compute synthetic data with forward operator from examples.seismic.acoustic import AcousticWaveSolver solver = AcousticWaveSolver(model, src, rec, space_order=4) true_d, _, _ = solver.forward(src=src, m=model.m) # - # Compute initial data with forward operator smooth_d, _, _ = solver.forward(src=src, m=model0.m) # + #NBVAL_IGNORE_OUTPUT from examples.seismic import plot_shotrecord # Plot shot record for true and smooth velocity model and the difference plot_shotrecord(true_d.data, model, t0, tn) plot_shotrecord(smooth_d.data, model, t0, tn) plot_shotrecord(smooth_d.data - true_d.data, model, t0, tn) # - # # Full-Waveform Inversion # # # ## Formulation # # Full-waveform inversion (FWI) aims to invert an accurate model of the discrete wave velocity, $\mathbf{c}$, or equivalently the square slowness of the wave, $\mathbf{m} = \frac{1}{\mathbf{c}^2}$, from a given set of measurements of the pressure wavefield $\mathbf{u}$. 
This can be expressed as the following optimization problem [1, 2]: # # \begin{aligned} # \mathop{\hbox{minimize}}_{\mathbf{m}} \Phi_s(\mathbf{m})&=\frac{1}{2}\left\lVert\mathbf{P}_r # \mathbf{u} - \mathbf{d}\right\rVert_2^2 \\ # \mathbf{u} &= \mathbf{A}(\mathbf{m})^{-1} \mathbf{P}_s^T \mathbf{q}_s, # \end{aligned} # # where $\mathbf{P}_r$ is the sampling operator at the receiver locations, $\mathbf{P}_s^T$ is the injection operator at the source locations, $\mathbf{A}(\mathbf{m})$ is the operator representing the discretized wave equation matrix, $\mathbf{u}$ is the discrete synthetic pressure wavefield, $\mathbf{q}_s$ is the corresponding pressure source and $\mathbf{d}$ is the measured data. It is worth noting that $\mathbf{m}$ is the unknown in this formulation and that multiple implementations of the wave equation operator $\mathbf{A}(\mathbf{m})$ are possible. # # We have already defined a concrete solver scheme for $\mathbf{A}(\mathbf{m})$ in the first tutorial, including appropriate implementations of the sampling operator $\mathbf{P}_r$ and source term $\mathbf{q}_s$. # # To solve this optimization problem using a gradient-based method, we use the # adjoint-state method to evaluate the gradient $\nabla\Phi_s(\mathbf{m})$: # # \begin{align} # \nabla\Phi_s(\mathbf{m})=\sum_{\mathbf{t} =1}^{n_t}\mathbf{u}[\mathbf{t}] \mathbf{v}_{tt}[\mathbf{t}] =\mathbf{J}^T\delta\mathbf{d}_s, # \end{align} # # where $n_t$ is the number of computational time steps, $\delta\mathbf{d}_s = \left(\mathbf{P}_r \mathbf{u} - \mathbf{d} \right)$ is the data residual (difference between the measured data and the modelled data), $\mathbf{J}$ is the Jacobian operator and $\mathbf{v}_{tt}$ is the second-order time derivative of the adjoint wavefield solving: # # \begin{align} # \mathbf{A}^T(\mathbf{m}) \mathbf{v} = \mathbf{P}_r^T \delta\mathbf{d}. # \end{align} # # We see that the gradient of the FWI function is the previously defined imaging condition with an extra second-order time derivative. We will therefore reuse the operators defined previously inside a Devito wrapper. # ## FWI gradient operator # # To compute a single gradient $\nabla\Phi_s(\mathbf{m})$ in our optimization workflow we again use `solver.forward` to compute the entire forward wavefield $\mathbf{u}$ and a similar pre-defined gradient operator to compute the adjoint wavefield `v`. The gradient operator provided by our `solver` utility also computes the correlation between the wavefields, allowing us to encode a similar procedure to the previous imaging tutorial as our gradient calculation: # # - Simulate the forward wavefield with the background velocity model to get the synthetic data and save the full wavefield $\mathbf{u}$ # - Compute the data residual # - Back-propagate the data residual and compute on the fly the gradient contribution at each time step. # # This procedure is applied to multiple source positions and summed to obtain a gradient image of the subsurface. We again prepare the source locations for each shot and visualize them, before defining a single gradient computation over a number of shots as a single function. # + #NBVAL_IGNORE_OUTPUT # Prepare the varying source locations sources source_locations = np.empty((nshots, 2), dtype=np.float32) source_locations[:, 0] = 30. 
source_locations[:, 1] = np.linspace(0., 1000, num=nshots) plot_velocity(model, source=source_locations) # + # Create FWI gradient kernel from devito import Function, clear_cache def fwi_gradient(m_in): # Important: We force previous wavefields to be destroyed, # so that we may reuse the memory. clear_cache() # Create symbols to hold the gradient and residual grad = Function(name="grad", grid=model.grid) residual = Receiver(name='rec', grid=model.grid, ntime=nt, coordinates=rec.coordinates.data) objective = 0. for i in range(nshots): # Update source location src.coordinates.data[0, :] = source_locations[i, :] # Generate synthetic data from true model true_d, _, _ = solver.forward(src=src, m=model.m) # Compute smooth data and full forward wavefield u0 smooth_d, u0, _ = solver.forward(src=src, m=m_in, save=True) # Compute gradient from data residual and update objective function residual.data[:] = smooth_d.data[:] - true_d.data[:] objective += .5*np.linalg.norm(residual.data.reshape(-1))**2 solver.gradient(rec=residual, u=u0, m=m_in, grad=grad) return objective, grad.data # - # Having defined our FWI gradient procedure we can compute the initial iteration from our starting model. This allows us to visualize the gradient alongside the model perturbation and the effect of the gradient update on the model. # + #NBVAL_IGNORE_OUTPUT # Compute gradient of initial model ff, update = fwi_gradient(model0.m) print('Objective value is %f ' % ff) # + #NBVAL_IGNORE_OUTPUT from examples.seismic import plot_image # Plot the FWI gradient plot_image(update, vmin=-1e4, vmax=1e4, cmap="jet") # Plot the difference between the true and initial model. # This is not known in practice as only the initial model is provided. plot_image(model0.m.data - model.m.data, vmin=-1e-1, vmax=1e-1, cmap="jet") # Show what the update does to the model alpha = .05 / np.max(update) plot_image(model0.m.data - alpha*update, vmin=.1, vmax=.2, cmap="jet") # - # We see that the gradient and the true perturbation have the same sign, therefore, with an appropriate scaling factor, we will update the model in the correct direction. # Define bounding box constraints on the solution. def apply_box_constraint(m): # Maximum possible 'realistic' velocity is 3.5 km/sec # Minimum possible 'realistic' velocity is 2 km/sec return np.clip(m, 1/3.5**2, 1/2**2) # + #NBVAL_SKIP # Run FWI with gradient descent history = np.zeros((fwi_iterations, 1)) for i in range(0, fwi_iterations): # Compute the functional value and gradient for the current # model estimate phi, direction = fwi_gradient(model0.m) # Store the history of the functional values history[i] = phi # Artificial Step length for gradient descent # In practice this would be replaced by a Linesearch (Wolfe, ...) # that would guarantee functional decrease Phi(m-alpha g) <= epsilon Phi(m) # where epsilon is a minimum decrease constant alpha = .005 / np.max(direction) # Update the model estimate and inforce minimum/maximum values model0.m.data[:] = apply_box_constraint(model0.m.data - alpha * direction) # Log the progress made print('Objective value is %f at iteration %d' % (phi, i+1)) # + #NBVAL_IGNORE_OUTPUT # First, update velocity from computed square slowness nbpml = model.nbpml model0.vp = np.sqrt(1. 
/ model0.m.data[nbpml:-nbpml, nbpml:-nbpml]) # Plot inverted velocity model plot_velocity(model0) # + #NBVAL_SKIP import matplotlib.pyplot as plt # Plot objective function decrease plt.figure() plt.loglog(history) plt.xlabel('Iteration number') plt.ylabel('Misift value Phi') plt.title('Convergence') plt.show() # - # ## References # # [1] _<NAME>. and <NAME>.: An overview of full-waveform inversion in exploration geophysics, GEOPHYSICS, 74, WCC1–WCC26, doi:10.1190/1.3238367, http://library.seg.org/doi/abs/10.1190/1.3238367, 2009._ # # [2] _<NAME>., <NAME>., and <NAME>.: An effective method for parameter estimation with PDE constraints with multiple right hand sides, SIAM Journal on Optimization, 22, http://dx.doi.org/10.1137/11081126X, 2012._ # <sup>This notebook is part of the tutorial "Optimised Symbolic Finite Difference Computation with Devito" presented at the Intel® HPC Developer Conference 2017.</sup>
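# As noted in the comment inside the gradient-descent loop above, the fixed artificial step length would normally be replaced by a line search. The cell below is only an illustrative sketch of a simple backtracking strategy built on the `fwi_gradient` and `apply_box_constraint` functions defined earlier; it re-evaluates the full objective for every trial step, so it is expensive, and it is not part of the original tutorial.

# +
def backtracking_update(model0, phi0, direction, alpha0=.005, shrink=0.5, max_tries=5):
    # Try progressively smaller steps until the objective decreases,
    # updating model0 in place exactly as the fixed-step loop above does.
    m_old = model0.m.data.copy()
    alpha = alpha0 / np.max(direction)
    for _ in range(max_tries):
        model0.m.data[:] = apply_box_constraint(m_old - alpha * direction)
        phi_trial, _ = fwi_gradient(model0.m)
        if phi_trial < phi0:
            return phi_trial
        alpha *= shrink
    model0.m.data[:] = m_old  # no decrease found; keep the previous model
    return phi0
# -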
02a_fwi.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Fundamentals of Python Variables And Numpy # ## <NAME> # ## Setting up the environment import numpy as np # This is the main numerical library we will use import matplotlib.pyplot as plt # This is the main plotting library we will use import scipy.io.wavfile as wavfile # We will use this library to load in audio import IPython.display as ipd # This is a library that allows us to play audio samples in Jupyter # ## Basic Arithmetic / Variable Naming # Go through +, *, /, **, % 1+1 9*8 10/4 2**2 + 2 20 % 6 # Remainder a = 5 # This sets the variable a to be 5 b = 2 b**2 a = a + 1 # This is weird syntax, but it means set a to be the previous value of a plus 1 a**a matt = 0 1 / matt 1matt = 0 # Cannot start a variable name with a number christralie = 30 print(ChrisTralie) # Case sensitive!! # ## Numpy Arrays / Plotting x = [1, 9, -10, 2] x[4] x[0] # Lists are zero-indexed x[2] x[-1] x[-2] x = [i**2 for i in range(1000)] x[-1] len(x) # This gets the length of the list # Slicing mylist = [12, 9, -2, 8, 17, 13, 24] firstfour = mylist[0:4] print(firstfour) print(mylist[2:5]) print(mylist[0:5:2]) mylist[0:5:2] = 0 # This is a limitation of lists mylist[0:5:2] = [0, 0, 0] print(mylist) mylist[0] = 64 print(mylist) list1 = [0, 1, 2, 3, 4, 5, 6, 7] list2 = [2, 4, 6, 8, 10, 12, 14, 16] list1[0:7:2] = list2[1:8:2] print(list1) print(list1[0:7:2]) # We can leave out the last element print(list1[0: :3]) # Take every third element starting at element 0 print(list1[2: : ]) # Take every element from 2 to the end print(list1[::-1]) x = [1, 2, 3, 4, 5] print(x[5::]) y = [3, 8, 9] x[5::] = y print(x) # The + by default with lists puts one list at the end of the other a = [1, 2, 3] b = [2, 3, 4] print(a + b) # + # arange, zeros, ones, plot/stem # + # np.arange? # - x = np.arange(10) + 1 # Element-wise add # This is *much* faster than adding to individual elements #x[0] = x[0] + 1 #x[1] = x[1] + 1 print(x) x = x*x # Element-wise multiplication print(x) x = x + x print(x) plt.plot(x) plt.stem(x) x = np.arange(100) plt.plot(np.mod(x, 7)) # "Modulus operator": Returns remainder after division # Class exercise: Make the array [0, 1, 0, 3, 0, 5, 0, 7, 0] using arange and slicing x = np.arange(8) print(x) x = x*2 print(x) x = x + 1 print(x) x = np.mod(x, 8) print(x) # + ## Andrew/Kat x = np.arange(17) print(x) y = np.mod(x, 8) print(y) y = y[1::2] print(y) # Class exercise: Create an array with the elements [1, 3, 5, 7, 1, 3, 5, 7] # using np.arange and np.mod # - ## You can really cheat with regular lists x = [1, 3, 5, 7] x = x + x print(x)
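# The first class exercise above (make the array [0, 1, 0, 3, 0, 5, 0, 7, 0] using arange and slicing) is not actually solved in the cells that follow it, so here is one possible solution (an illustrative sketch, not the only way to do it):

x = np.arange(9)
x[::2] = 0  # set every other element, starting at index 0, to zero
print(x)    # [0 1 0 3 0 5 0 7 0]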
Week1_Fundamentals.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:v2_0.7] * # language: python # name: conda-env-v2_0.7-py # --- # # Feature Representation Methods in ChemML # To build a machine learning model, raw chemical data is first converted into a numerical representation. The representation contains spatial or topological information that defines a molecule. The resulting features may either be in continuous (molecular descriptors) or discrete (molecular fingerprints) form. from chemml.chem import Molecule from chemml.datasets import load_organic_density import numpy as np import warnings warnings.filterwarnings('ignore') # ### Creating `chemml.chem.Molecule` object from molecule SMILES # # All feature representation methods available in ChemML require `chemml.chem.Molecule` as inputs # Importing an existing dataset from ChemML molecules, target, dragon_subset = load_organic_density() mol_objs_list = [] for smi in molecules['smiles']: mol = Molecule(smi, 'smiles') mol.hydrogens('add') mol.to_xyz('MMFF', maxIters=10000, mmffVariant='MMFF94s') mol_objs_list.append(mol) # ## [Coulomb Matrix](https://doi.org/10.1103/PhysRevLett.108.058301) # # Simple molecular descriptor which mimics the electro-static interaction between nuclei. # + from chemml.chem import CoulombMatrix #The coulomb matrix type can be sorted (SC), unsorted(UM), unsorted triangular(UT), eigen spectrum(E), or random (RC) CM = CoulombMatrix(cm_type='SC',n_jobs=-1) features = CM.represent(mol_objs_list) print(features[:5]) # - # ## [Fingerprints from RDKit](https://www.rdkit.org/) # # Molecular fingerprints are a way of encoding the structure of a molecule. The most common type of fingerprint is a series of binary digits (bits) that represent the presence or absence of particular substructures in the molecule. Comparing fingerprints allows you to determine the similarity between two molecules, to find matches to a query substructure, etc. # + from chemml.chem import RDKitFingerprint # RDKit fingerprint types: 'morgan', 'hashed_topological_torsion' or 'htt' , 'MACCS' or 'maccs', 'hashed_atom_pair' or 'hap' morgan_fp = RDKitFingerprint(fingerprint_type='morgan', vector='bit', n_bits=1024, radius=3) features = morgan_fp.represent(mol_objs_list) print(features[:5]) # - # ## Molecule tensors from `chemml.chem.Molecule` objects # # Molecule tensors can be used to create neural graph fingerprints using `chemml.models` from chemml.chem import tensorise_molecules atoms,bonds,edges = tensorise_molecules(molecules=mol_objs_list, max_degree=5, max_atoms=None, n_jobs=-1, batch_size=100, verbose=True) print("Matrix for atom features (num_molecules, max_atoms, num_atom_features):\n", atoms.shape) print("Matrix for connectivity between atoms (num_molecules, max_atoms, max_degree):\n", edges.shape) print("Matrix for bond features (num_molecules, max_atoms, max_degree, num_bond_features):\n", bonds.shape)
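# To show how one of these representations feeds into a conventional model, the sketch below fits a simple ridge regression from the Morgan fingerprints computed above to the density target loaded earlier. It assumes `features` behaves like a 2-D table of shape (n_molecules, n_bits) and `target` like a single column of density values; it is only an illustration, not a tuned model.

# +
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import Ridge

X = np.asarray(features, dtype=float)  # Morgan fingerprint bits
y = np.asarray(target).ravel()         # densities from load_organic_density

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
reg = Ridge(alpha=1.0).fit(X_train, y_train)
print("R^2 on held-out molecules:", reg.score(X_test, y_test))
# -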
docs/ipython_notebooks/feature_representation.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import numpy as np import matplotlib.pyplot as plt import scipy as sp import scipy.signal # ## Data import # We import the csv file. # We are interested in the Ping_index , Ping_date , Ping_time , Latitude, Longitude , and the sv* columns. # Each sv* column corresponds to a depth. # The value for each cell is the logarithm of the intensity of the echo.(ratio of intensity) data_path = '/home/benjamin/Bureau/data jam days/Hackathlon data/' def load_data(filename): df = pd.read_csv(filename) del df['Distance_gps'] del df['Distance_vl'] del df['Ping_milliseconds'] del df['Depth_start'] del df['Depth_stop'] del df['Range_start'] del df['Range_stop'] del df['Sample_count'] data= np.array(df.iloc[:,5:]).transpose() return data,df # ### Filtering # Code from Roland to filter some Sonar artefacts def binary_impulse(Sv, threshold=10): ''' :param Sv: gridded Sv values (dB re 1m^-1) :type Sv: numpy.array :param threshold: threshold-value (dB re 1m^-1) :type threshold: float return: :param mask: binary mask (0 - noise; 1 - signal) :type mask: 2D numpy.array desc: generate threshold mask defined by RB status: test ''' mask = np.ones(Sv.shape).astype(int) samples,pings = Sv.shape for sample in range(1, samples-1): for ping in range(0, pings): a = Sv[sample-1, ping] b = Sv[sample, ping] c = Sv[sample+1, ping] if (b - a > threshold) & (b - c > threshold): mask[sample, ping] = 0 return mask def filter_data(data_matrix): # The relevant data values for the krill are between -70 and -65 data2 =data_matrix.copy() data2[data_matrix<-70] = -70 data2[data_matrix>-65] = -65 data2 = data2 + 70 # We apply a median filtering to get rid of the isolated peaks or lines (which are noise) # Two steps # A variant of the median filter implemented by Roland for lines datafilt = binary_impulse(data2.transpose(), threshold=2) datafilt = datafilt.transpose()*data2 # A standard median filter used in image processing datafilt2 = sp.signal.medfilt(datafilt,kernel_size=3) # try to get rid of the mean by line data3 =datafilt2.copy() data3 = data3 - np.mean(data3,1,keepdims=True) # Gaussian filtering from skimage.filters import gaussian gauss_denoised = gaussian(data3,10) # Compute a function to find the krill signaldata = gauss_denoised[0:150,:] sumsignal = np.sum(signaldata,0)-np.mean(np.sum(signaldata,0)) binary_signal = sumsignal.copy() threshold = 11 binary_signal[sumsignal<threshold] = 0 binary_signal[sumsignal>threshold] = 100 return binary_signal def extract_info(binary_signal,df): krill_list = [] krill_dic = {} data_len = len(binary_signal) for idx in range(data_len): if binary_signal[idx] >0: if idx==0 or binary_signal[idx-1] == 0: # beginning of a krill detection krill_layer_start = idx # record latitude and longitude krill_dic['latitude_start'] = df.iloc[idx,3] krill_dic['longitude_start'] = df.iloc[idx,4] krill_dic['date_start'] = df.iloc[idx,1] krill_dic['time_start'] = df.iloc[idx,2] if idx == data_len-1 or binary_signal[idx+1] == 0: # end of krill detection krill_layer_stop = idx # record latitude and longitude krill_dic['latitude_stop'] = df.iloc[idx,3] krill_dic['longitude_stop'] = df.iloc[idx,4] krill_dic['date_stop'] = df.iloc[idx,1] krill_dic['time_stop'] = df.iloc[idx,2] # store krill layer in list krill_list.append(krill_dic) krill_dic = {} # Compute Krill depth #if 
krill_layer_stop<data_len-1:
                #    krill_layer = datafilt2[krill_layer_start:krill_layer_stop+1]
                #else:
                #    krill_layer = datafilt2[krill_layer_start:]
                #min_depth,max_depth,mean_depth = krill_depth(krill_layer)
    return krill_list


import glob

global_krill_list = []
for filename in glob.iglob(data_path+'*.csv'):
    print('Loading data ...')
    print('%s' % filename)
    data,df = load_data(filename)
    print('Filtering data...')
    binary_signal = filter_data(data)
    print('Extracting information...')
    krill_list = extract_info(binary_signal,df)
    print('Number of Krill events:',len(krill_list))
    global_krill_list += krill_list

len(global_krill_list)

import json
with open('krill_data.json', 'w', encoding='utf-8') as f:
    f.write(json.dumps(global_krill_list, ensure_ascii=False))


def krill_depth(array):
    # Compute the depth of the krill swarm
    depth_function = np.sum(array,1)
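# The `krill_depth` helper above is left unfinished. Below is a minimal sketch of how it
# could be completed so that it matches the commented-out call
# `min_depth, max_depth, mean_depth = krill_depth(krill_layer)`. The intensity threshold and
# the fact that depths are returned as sample (row) indices rather than metres are
# assumptions, not part of the original script.

# +
def krill_depth_sketch(array, threshold=0.0):
    """Estimate the vertical extent of a krill layer.

    array -- 2D array (depth samples x pings) of filtered Sv values
    threshold -- minimum summed intensity for a depth sample to count as signal (assumed)
    """
    depth_function = np.sum(array, 1)                    # total intensity per depth sample
    depth_idx = np.where(depth_function > threshold)[0]
    if len(depth_idx) == 0:
        return None, None, None
    min_depth = depth_idx.min()                          # shallowest sample with signal
    max_depth = depth_idx.max()                          # deepest sample with signal
    mean_depth = float(np.average(depth_idx, weights=depth_function[depth_idx]))
    return min_depth, max_depth, mean_depth
# -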
ACE_fish script clean several files.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Linear model to classify the MNIST data set
#
# In this second tutorial, we will continue to work on image classification and try a linear classification model. This kind of model has the same number of parameters as the input images (64 here) plus one bias. It works by fitting the parameters so that we minimize some loss function at training time. At test time, a prediction is fast as it is basically just a dot product.

# ## Prepare and get a sense of the data
#
# We start by loading our image data set: MNIST. The function `load_digits` of the `datasets` module of `sklearn` provides the dataset in a reduced form suitable for this practical session.

# import numpy and initialize the random seed to yield consistent results
import numpy as np
np.random.seed(42)

from sklearn.datasets import load_digits
mnist = ...
mnist.keys()

# The data set needs to be partitioned into train and test data. Here, use the handy function `train_test_split` of `sklearn` to reserve 20% of the data to test your model.
#
# **/!\ The test data is to be left untouched.**

# +
from sklearn.model_selection import train_test_split

(X_train, X_test, y_train, y_test) = train_test_split(..., test_size=...)

print('shape of train data is {}, type is {}'.format(X_train.shape, X_train.dtype))
print('shape of test data is {}, type is {}'.format(X_test.shape, X_test.dtype))
# -

# Observe the data points: they are stored as 64-bit floats but contain only integer values from 0 to 16. The data can therefore be safely cast to uint8 to reduce the memory footprint by a factor of 8.

print(...)  # min
print(...)  # max
print(...)  # unique

X_train = X_train.astype(...)

# Plot an image using matplotlib. The function `imshow` can be used after reshaping the data as an $(8\times8)$ array.

# +
import matplotlib
# %matplotlib inline
from matplotlib import pyplot as plt, cm

index = 0
plt.imshow(..., cmap=cm.gray_r)
plt.axis('off')
plt.title('image %d in the train set' % index)
# -

# With this particular dataset, the list of the categories is identical to their indices (from 0 to 9).
#
# Print the class of image `index`.

print('image {} is a {}'.format(..., ...))

# ## Model definition
#
# Here we define our simple machine learning algorithm, which takes the features $x$, multiplies them by some weights $W$ and adds a bias term $b$:
#
# $$f(x, W, b) = W.x + b = s$$
#
# For a given image in vector form with $d$ features, $W$ has size (10, d) so that the product $W.x$ produces 10 numbers, which are called the scores for each class.
#
# Initialize `numpy` arrays of size (10, 64) for $W$ and (10) for $b$. Concatenate $b$ and $W$ using `np.c_` to use the bias trick.

# +
# initialization with random weights
W = 0.1 * np.random.randn(...)
b = 0.1 * np.random.randn(...)

# apply the bias trick
W = ...
print('shape of W is now {}'.format(W.shape))
# -

# The data points are already in vector form; let's add a 1 to each for the bias trick.

# +
X_train = np.c_[..., X_train]
print('shape of train data is now {}'.format(X_train.shape))

X_test = np.c_[..., X_test]
print('shape of test data is now {}'.format(X_test.shape))
# -

# Now compute the 10 scores for the `index` training image with a dot product using `np.dot`, and use the max score to determine the prediction.

scores = np.dot(...)
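# For reference, here is a self-contained sketch of the bias trick and the score computation
# on dummy data with the same shapes as in this tutorial (10 classes, 64 features). It is
# only an illustration on random values, not the solution using the MNIST arrays above.

# +
import numpy as np

rng = np.random.default_rng(0)
W_demo = 0.1 * rng.standard_normal((10, 64))   # one row of weights per class
b_demo = 0.1 * rng.standard_normal(10)         # one bias per class
W_demo = np.c_[b_demo, W_demo]                 # bias trick: W_demo is now (10, 65)

x_demo = rng.integers(0, 17, size=64)          # a fake 8x8 image flattened to 64 features
x_demo = np.r_[1, x_demo]                      # prepend a constant 1 to match the bias column

scores_demo = np.dot(W_demo, x_demo)           # 10 scores, one per class
print('predicted class:', np.argmax(scores_demo))
# -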
# look at the individual score for each class for (label, score) in zip(labels, scores): print('{}: {:5.2f}'.format(..., ...)) # Print the result, note that as we have 10 scores, we need to find the index of the maximum score to determine the class. print('prediction: {}'.format(...) print('ground thruth: {}'.format(...) # ## Loss function # # ### Hinge loss # # We now need to define a way to tell the machine how happy we are with this prediction. The machine will then use this information to learn and come up with better predictions. The measure of our "happiness" is called a *loss function* and the process of learning the parameters (both $W$ and $b$) is called optimisation. # # One possibility to measure how good is the prediction is the so called Hinge Loss: # # $$L_i=\sum_{j\neq y^i}\max(0, s_j - s_{y^i} + 1)$$ # # Since it is inspired by linear support vector machines, this loss is also called Multi-class SVM Loss. # Now we can average arithmetically the losses $L_i$ for each instance $x^i$ to compute the general loss $L$ of the model. # # $$L=\frac{1}{n}\sum_i L_i(f(x^i, W), y^i)$$ # step by step calculation of the loss Li = 0 yi = ... # ground truth target for j in range(...): if j == yi: print('skipping %d' % j) continue margin = ... print('{:2d} {:6.2f} {:6.2f}'.format(j, scores[j], margin)) Li += ... print(18 * '-') print('hinge loss is {:.1f}'.format(Li)) # Now we understand how the hinge loss works, we can use a more efficient implementation and include it in a reusable function. # # Create a function (using `def`) called `loss_i` that compute the loss for given parameters `W` and `index`. # + # inline calculation of the loss yi = np.squeeze(y_train)[index] Li = np.sum([max(0, scores[j] - scores[yi] + 1) for j in range(10) if j != yi]) print(Li) # create a function to evaluate the loss for the given W for image index in the training set def loss_i(...): yi = ... # ground truth target scores = ... Li = np.sum([max(0, scores[j] - scores[yi] + 1) for j in range(10) if j != yi]) return Li print(loss_i(W, index)) # - # Finally create a function to compute the average loss on a batch of images def loss_batch(W, batch_size=100): L = 0. # average loss for index in range(batch_size): L += ... L /= batch_size return L loss_batch(W, batch_size=50) # ### Softmax loss # # Another very popular loss function to use with multiclassification problems is the multinomial logistic or softmax loss (popular in deep learning). Here the score for each class is passed to the softmax function: exponentiated (and become positive) and normalized. This gives the probability distribution of this class: # # $$P(Y=k|X=x_i)=\frac{e^{s_k}}{\sum_j e^{s_j}}$$ # # Now we have a probability we can try to maximize the likelihood which is equivalent to minimize the negative of the log likelihood: # # $$L_i=-\log P(Y=k|X=x_i)=-\log\left(\frac{e^{s_k}}{\sum_j e^{s_j}}\right)$$ # start by exponentiating our scores to obtain unnormalized probabilities escores = np.exp(scores) norm_escores = escores / np.sum(escores) for j in range(10): print('{:6d} | {:8.1f} | {:6.4f}'.format(j, escores[j], norm_escores[j])) print(26 * '-') # verify that the sum of the probability is 1 print('sum of probabilities check: {:.3f}'.format(np.sum(norm_escores))) # compute the softmax loss Li = -np.log(norm_escores[yi]) print('Softmax loss is {:.2f}'.format(Li)) # ## Learning the model # # Here we use the calculated loss to optimize the parameters $W$ and $b$. 
For this we need to evaluate the gradient $\dfrac{\partial L}{\partial W}$ of $L$ with respect to $W$. # # The gradient is obtained by differentiating the loss expression with respect to $W$: # # $$\nabla_{w_j}L_i=1\left(w_j^T x_i - w_{y_i}^T x_i + 1 > 0\right) x_i\quad\text{for }j\neq y_i$$ # # $$\nabla_{w_{y_i}}L_i=-\left(\sum_{j\neq y_i}1\left(w_j^T x_i - w_{y_i}^T x_i + 1 > 0\right)\right) x_i$$ # # with $1(condition)$ equals to 1 if $condition$ is true, 0 otherwise. Here we see that the data vector $x$ is scaled by the number of classes that did not meet the margins. # verify one more time the size of our matrices print('shape of train data is {}'.format(X_train.shape)) print('shape of W is {}'.format(W.shape)) # ### Implementation # # Simple SVM loss gradient implementation: # - iterate over each data point $i$ in the batch # - compute the score using $W.x^i$ (bias trick) # - compute the margin for each class # - compute the loss and the gradient components associated with this data point # - finally average the gradient and the loss with respect to the number of data points in the batch def svm_loss_gradient(W, X, y): """ SVM loss gradient. Inputs: - W: array of shape (K, 1 + D) containing the weights. - X: array of shape (N, 1 + D) containing the data. - y: array of shape (N, 1) containing training labels 0 <= k < K. Returns a tuple of: - average loss - gradient of the loss with respect to weights W """ dW = np.zeros_like(W) # initialize the gradient as zero K = ... # number of classes n = ... # number of data points loss = 0.0 for i in range(n): #print('evaluating gradient / image %d' % i) yi = np.squeeze(y)[i] # ground truth target scores = ... # compute SVM loss and gradient for this data point for j in range(K): if j == yi: continue # only compute loss if incorrectly classified margin = ... if margin > 0: loss += margin dW[yi, :] -= ... # correct class gradient dW[j, :] += ... # incorrect class gradient # average the loss and gradient loss /= n dW /= n return loss, dW # Now try our SVM gradient loss by computing the gradient with respect to the first `nb` images in the training set. nb = 100 loss, dW = svm_loss_gradient(...) print('loss is {:.2f}'.format(loss)) print('gradient dW with respect to the first pixel =', dW[:, 2]) # ### Gradient check # # now, to verify our SVM gradietn implementation, we are going to perform a **gradient check**. # # The gradient is computed numerically using a finite difference scheme: # # $$\nabla L\approx\dfrac{L(W+h) - L(W-h)}{2h}$$ def gradient_check(f, W, h=0.0001): dL = np.zeros_like(W) # evaluate the loss modifiying each value of W for c in range(W.shape[0]): for p in range(W.shape[1]): W[c, p] += h fxph = f(W) W[c, p] -= 2*h fxmh = f(W) dL[c, p] = ... # centered finite differences W[c, p] += h # put back initial value return dL # apply our gradient check, print the gradient with respect to the first pixel. Compare with the analytical value. Realize that to evaluate the gradient numerically, the loss function was called $2\times64$ times. This is why it is so slow. And we tested it only with 100 training images over 1437! print('loss is {:.2f}'.format(loss_batch(W, batch_size=100))) dL = gradient_check(loss_batch, W) print(dL.shape) print(dL[:, 2]) # ### Gradient Descent # # now we have successfully created our linear model, loss function, and that we can compute the gradient of the loss with respect to $W$, let's actually use this to perform gradient descent and learn our model. 
# # The backbone of the gradient descent is this simple equation: # $$W\leftarrow W - \eta \nabla_W L$$ # # $\eta$ is the learning rate (the most important hyperparameter). The weights $W$ are being updated at each iteration until a stop criterion is met or a maximum number of iteration reached. # + # examine one single gradient descent step W = 0.1 * np.random.randn(10, 65) print('average loss is %.1f' % loss_batch(W, batch_size=X_train.shape[0])) loss, dL_dw = svm_loss_gradient(W, X_train, y_train) # perform one gradient descent eta = 0.005 W = W - eta * dL_dw print('after one step the average loss is %.1f' % loss_batch(W, batch_size=X_train.shape[0])) # - # ### Mini-batch gradient descent # # because $n$ is large (1437 here, but can also be much much larger), it does not actually make sense of computing the gradient on the complete set of training images at each iteration (remeber that the gradient is averaged). Instead, it is very common to compute the gradient on a subset (called a mini-batch) of 32 to 256 images. This is much faster and performs well. W = np.random.randn(10, 65) # initialization of the coefficients eta = 0.005 # learning rate (< 1) batch_size = 128 loss_history = [] it = 0 while it < 2000: # prepare batch idxs = np.random.choice(range(X_train.shape[0]), size=batch_size, replace=True) X_batch = X_train[idxs, :] y_batch = y_train[idxs] # evaluate loss and gradient loss, dL_dw = ... print('it {:d} - loss {:.1f}'.format(it, loss)) # gradient descent W = ... loss_history.append(loss) it += 1 plt.plot(loss_history) # Now make some prediction! Try the first 20 entries in the test set. for i in range(20): y_pred = ... print('{} - {}'.format(y_pred, y_test[i])) # Construct the confusion matrix which is usefull to measure the performances of our multinomial classifier. # + from sklearn.metrics import confusion_matrix y_train_pred = ... conf = confusion_matrix(...) # - plt.imshow(conf) plt.xlabel('predicted class') plt.ylabel('actual class') plt.title('confusion matrix') # To better visualize the errors, it is useful to normalize each row by the total number of samples in each category. row_sums = conf.sum(axis=1, keepdims=True) norm_conf = conf / row_sums np.fill_diagonal(norm_conf, 0) plt.imshow(norm_conf) plt.xlabel('predicted class') plt.ylabel('actual class') plt.title('matrix of error rates') # The columns for classes 8 and 9 look worse than the other. Analyzing the type of errors of the model can help improving it. # # Finally we can compare our results with the `SGDClassifier` from `sklearn`. # ## Compare our gradient descent results with sklearn (X_train, X_test, y_train, y_test) = train_test_split(mnist['data'], mnist['target'], test_size=0.2) from sklearn import linear_model clf = linear_model.SGDClassifier(random_state=42) clf.fit(X_train, y_train) y_pred = clf.predict(...) for i in range(20): print('{} - {}'.format(y_pred[i], y_test[i])) # Compute the **accuracy** by dividing the number of correct prediction in the train set by the number os training samples. y_train_pred = clf.predict(...) print(np.sum(...) / ...) # It is better to perform K-fold cross validation to measure the performances of the model. For this we can use the `cross_val_score` method with `cv=3`. 
from sklearn.model_selection import cross_val_score cross_val_score(clf, X_train, y_train, cv=3, scoring="accuracy") # + from sklearn.model_selection import cross_val_predict y_train_pred = cross_val_predict(clf, X_train, y_train, cv=3) from sklearn.metrics import confusion_matrix conf = confusion_matrix(y_train, y_train_pred) # - plt.imshow(conf) plt.xlabel('predicted class') plt.ylabel('actual class') plt.title('confusion matrix') plt.figure(figsize=(12, 5)) plt.subplot(251); plt.imshow(clf.coef_[0].reshape((8, 8)), cmap=cm.gray); plt.axis('off') plt.subplot(252); plt.imshow(clf.coef_[1].reshape((8, 8)), cmap=cm.gray); plt.axis('off') plt.subplot(253); plt.imshow(clf.coef_[2].reshape((8, 8)), cmap=cm.gray); plt.axis('off') plt.subplot(254); plt.imshow(clf.coef_[3].reshape((8, 8)), cmap=cm.gray); plt.axis('off') plt.subplot(255); plt.imshow(clf.coef_[4].reshape((8, 8)), cmap=cm.gray); plt.axis('off') plt.subplot(256); plt.imshow(clf.coef_[5].reshape((8, 8)), cmap=cm.gray); plt.axis('off') plt.subplot(257); plt.imshow(clf.coef_[6].reshape((8, 8)), cmap=cm.gray); plt.axis('off') plt.subplot(258); plt.imshow(clf.coef_[7].reshape((8, 8)), cmap=cm.gray); plt.axis('off') plt.subplot(259); plt.imshow(clf.coef_[8].reshape((8, 8)), cmap=cm.gray); plt.axis('off') plt.subplot(2, 5, 10); plt.imshow(clf.coef_[9].reshape((8, 8)), cmap=cm.gray); plt.axis('off') plt.show()
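# As a final check, a short sketch for computing the accuracy of the hand-rolled linear model
# on the test set. It assumes `W` still holds the weights learned by the mini-batch gradient
# descent loop and re-adds the bias column to the freshly split test data; since the new split
# differs from the one used for training, the number is only indicative.

# +
X_test_bias = np.c_[np.ones(X_test.shape[0]), X_test]        # bias trick on the new split
y_test_pred_manual = np.argmax(np.dot(X_test_bias, W.T), axis=1)
print('accuracy of the hand-rolled model: {:.3f}'.format(
    np.mean(y_test_pred_manual == y_test)))
# -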
tutorials/mnist_linear_classifier.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Educational Attainment Distribution # # An exploration of levels of education among the general population in various countries. # # To download the data, visit the [Wittgenstein Centre Human Capital Data Explorer](http://dataexplorer.wittgensteincentre.org/wcde-v2/). In the Indicator dropdown menu, select Educational Attainment Distribution by Broad Age. Select all countries, all years, age 15+, and the Medium (SSP2) scenario. # # - Date: 2019-01-23 # - Source: [Wittgenstein Centre for Demography and Global Human Capital](http://dataexplorer.wittgensteincentre.org/wcde-v2/) # ## Setup # + import io import pandas as pd import matplotlib.pyplot as plt import matplotlib.ticker as ticker from src.plot import ds_plot # %config InlineBackend.figure_format = 'retina' # Custom chart style plt.style.use('../assets/datastory.mpltstyle') # - df = pd.read_csv('../data/raw/educational_attainment.csv', skiprows=8) # ## Overview df.sample(3) # Unique levels of education df.Education.value_counts() # Time extent of the data df.Year.min(), df.Year.max() # ## Plotting def reshape_country(df, country, year_max=2050): cols = ['No Education', 'Incomplete Primary', 'Primary', 'Lower Secondary', 'Upper Secondary', 'Post Secondary'] colors = list(reversed(['#2bbaff', '#ffaa8c', '#ff713f', '#5d1800'])) # Transform the data data = (df .query(f'Area == "{country}" & Year <= {year_max}') .drop(['Area', 'Age'], axis=1) .pivot_table(columns='Education', values='Distribution', index='Year') .filter(items=cols) .assign(Primary=lambda x: x['Primary'] + x['Lower Secondary']) .drop(['Lower Secondary', 'Incomplete Primary'], axis=1)) # Convert absolute numbers to percentages data = data.apply(lambda x: x / data.sum(axis=1)) * 100 return data def plot_country(data): """Plot stacked area chart for `country`.""" fig, ax = ds_plot(figsize=(14.4, 5.8)) colors = list(reversed(['#2bbaff', '#ffaa8c', '#ff713f', '#5d1800'])) ax = data.plot(kind='area', stacked=True, color=colors, ax=ax, legend=False) ax.yaxis.set_major_formatter(ticker.PercentFormatter(decimals=0)) ax.set_xlim(1970, 2050) ax.set_ylim(0, 100) ax.set_xlabel('') ax.set_yticks([0, 25, 50, 75, 100], minor=False) ax.grid(axis='y', which='major') ax.axvline(2019, ls='--', lw=3, color='#404041') fd = {'size': 14, 'weight': 500, 'color': '#404041', 'backgroundcolor': '#ff713f'} ax.text(2018, 50, '<NAME>', ha='right', va='center', fontdict=fd) plt.gcf().set_facecolor('white') return ax data = df.pipe(reshape_country, 'China'); data.head() # + ax = plot_country(data) fd = {'size': 18, 'weight': 500, 'color': '#f5f5f5'} # Manually add annotations ax.text(2030, 90, 'Eftergymnasial', fontdict=fd) ax.text(1983, 90, 'Gymnasial', fontdict=fd) ax.text(1975, 60, 'Grundskola', fontdict=fd) ax.text(1972, 15, 'Ingen utbildning', fontdict=fd) fig = plt.gcf() fig.set_facecolor('#ffffff') fig.savefig('../charts/educational-levels.png') # -
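# The two helpers above can be reused to export one chart per country. A small sketch follows;
# the country list and the output path are placeholders.

# +
countries = ['China', 'India', 'Sweden']  # hypothetical selection

for country in countries:
    country_data = df.pipe(reshape_country, country)
    ax = plot_country(country_data)
    fig = ax.get_figure()
    fig.savefig(f'../charts/educational-levels-{country.lower()}.png')
    plt.close(fig)
# -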
notebooks/2019-01-23-educational-levels.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # Extending your Metadata using DocumentClassifiers at Index Time # # [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/deepset-ai/haystack/blob/master/tutorials/Tutorial16_Document_Classifier_at_Index_Time.ipynb) # # With DocumentClassifier it's possible to automatically enrich your documents with categories, sentiments, topics or whatever metadata you like. This metadata could be used for efficient filtering or further processing. Say you have some categories your users typically filter on. If the documents are tagged manually with these categories, you could automate this process by training a model. Or you can leverage the full power and flexibility of zero shot classification. All you need to do is pass your categories to the classifier, no labels required. This tutorial shows how to integrate it in your indexing pipeline. # DocumentClassifier adds the classification result (label and score) to Document's meta property. # Hence, we can use it to classify documents at index time. \ # The result can be accessed at query time: for example by applying a filter for "classification.label". # + [markdown] pycharm={"name": "#%% md\n"} # This tutorial will show you how to integrate a classification model into your preprocessing steps and how you can filter for this additional metadata at query time. In the last section we show how to put it all together and create an indexing pipeline. # + pycharm={"name": "#%%\n"} # Let's start by installing Haystack # Install the latest release of Haystack in your own environment # #! pip install farm-haystack # Install the latest master of Haystack # !pip install grpcio-tools==1.34.1 # !pip install git+https://github.com/deepset-ai/haystack.git # !wget --no-check-certificate https://dl.xpdfreader.com/xpdf-tools-linux-4.03.tar.gz # !tar -xvf xpdf-tools-linux-4.03.tar.gz && sudo cp xpdf-tools-linux-4.03/bin64/pdftotext /usr/local/bin # Install pygraphviz # !apt install libgraphviz-dev # !pip install pygraphviz # If you run this notebook on Google Colab, you might need to # restart the runtime after installing haystack. # + pycharm={"name": "#%%\n"} # Here are the imports we need from haystack.document_stores.elasticsearch import ElasticsearchDocumentStore from haystack.nodes import PreProcessor, TransformersDocumentClassifier, FARMReader, ElasticsearchRetriever from haystack.schema import Document from haystack.utils import convert_files_to_dicts, fetch_archive_from_http, print_answers # + pycharm={"name": "#%%\n"} # This fetches some sample files to work with doc_dir = "data/preprocessing_tutorial" s3_url = "https://s3.eu-central-1.amazonaws.com/deepset.ai-farm-qa/datasets/documents/preprocessing_tutorial.zip" fetch_archive_from_http(url=s3_url, output_dir=doc_dir) # + [markdown] pycharm={"name": "#%% md\n"} # ## Read and preprocess documents # # + pycharm={"name": "#%%\n"} # note that you can also use the document classifier before applying the PreProcessor, e.g. 
before splitting your documents all_docs = convert_files_to_dicts(dir_path=doc_dir) preprocessor_sliding_window = PreProcessor( split_overlap=3, split_length=10, split_respect_sentence_boundary=False ) docs_sliding_window = preprocessor_sliding_window.process(all_docs) # - # ## Apply DocumentClassifier # We can enrich the document metadata at index time using any transformers document classifier model. While traditional classification models are trained to predict one of a few "hard-coded" classes and required a dedicated training dataset, zero-shot classification is super flexible and you can easily switch the classes the model should predict on the fly. Just supply them via the labels param. # Here we use a zero shot model that is supposed to classify our documents in 'music', 'natural language processing' and 'history'. Feel free to change them for whatever you like to classify. \ # These classes can later on be accessed at query time. doc_classifier = TransformersDocumentClassifier(model_name_or_path="cross-encoder/nli-distilroberta-base", task="zero-shot-classification", labels=["music", "natural language processing", "history"], batch_size=16 ) # + # we can also use any other transformers model besides zero shot classification # doc_classifier_model = 'bhadresh-savani/distilbert-base-uncased-emotion' # doc_classifier = TransformersDocumentClassifier(model_name_or_path=doc_classifier_model, batch_size=16, use_gpu=-1) # + # we could also specifiy a different field we want to run the classification on # doc_classifier = TransformersDocumentClassifier(model_name_or_path="cross-encoder/nli-distilroberta-base", # task="zero-shot-classification", # labels=["music", "natural language processing", "history"], # batch_size=16, use_gpu=-1, # classification_field="description") # - # convert to Document using a fieldmap for custom content fields the classification should run on docs_to_classify = [Document.from_dict(d) for d in docs_sliding_window] # classify using gpu, batch_size makes sure we do not run out of memory classified_docs = doc_classifier.predict(docs_to_classify) # let's see how it looks: there should be a classification result in the meta entry containing labels and scores. print(classified_docs[0].to_dict()) # ## Indexing # + # In Colab / No Docker environments: Start Elasticsearch from source # ! wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.9.2-linux-x86_64.tar.gz -q # ! tar -xzf elasticsearch-7.9.2-linux-x86_64.tar.gz # ! chown -R daemon:daemon elasticsearch-7.9.2 import os from subprocess import Popen, PIPE, STDOUT es_server = Popen(['elasticsearch-7.9.2/bin/elasticsearch'], stdout=PIPE, stderr=STDOUT, preexec_fn=lambda: os.setuid(1) # as daemon ) # wait until ES has started # ! sleep 30 # - # Connect to Elasticsearch document_store = ElasticsearchDocumentStore(host="localhost", username="", password="", index="document") # Now, let's write the docs to our DB. document_store.delete_all_documents() document_store.write_documents(classified_docs) # check if indexed docs contain classification results test_doc = document_store.get_all_documents()[0] print(f'document {test_doc.id} with content \n\n{test_doc.content}\n\nhas label {test_doc.meta["classification"]["label"]}') # ## Querying the data # All we have to do to filter for one of our classes is to set a filter on "classification.label". 
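# As a quick sanity check (sketch), documents tagged with a given class can also be fetched
# straight from the document store with a metadata filter, before involving any QA pipeline.
# The filter syntax mirrors the one used in the query below and is assumed to be accepted by
# `get_all_documents` as well.

# +
music_docs = document_store.get_all_documents(filters={"classification.label": ["music"]})
print(f"{len(music_docs)} documents classified as music")
# -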
# Initialize QA-Pipeline from haystack.pipelines import ExtractiveQAPipeline retriever = ElasticsearchRetriever(document_store=document_store) reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2", use_gpu=True) pipe = ExtractiveQAPipeline(reader, retriever) ## Voilà! Ask a question while filtering for "music"-only documents prediction = pipe.run( query="What is heavy metal?", params={"Retriever": {"top_k": 10, "filters": {"classification.label": ["music"]}}, "Reader": {"top_k": 5}} ) print_answers(prediction, details="high") # ## Wrapping it up in an indexing pipeline from pathlib import Path from haystack.pipelines import Pipeline from haystack.nodes import TextConverter, PreProcessor, FileTypeClassifier, PDFToTextConverter, DocxToTextConverter # + file_type_classifier = FileTypeClassifier() text_converter = TextConverter() pdf_converter = PDFToTextConverter() docx_converter = DocxToTextConverter() indexing_pipeline_with_classification = Pipeline() indexing_pipeline_with_classification.add_node(component=file_type_classifier, name="FileTypeClassifier", inputs=["File"]) indexing_pipeline_with_classification.add_node(component=text_converter, name="TextConverter", inputs=["FileTypeClassifier.output_1"]) indexing_pipeline_with_classification.add_node(component=pdf_converter, name="PdfConverter", inputs=["FileTypeClassifier.output_2"]) indexing_pipeline_with_classification.add_node(component=docx_converter, name="DocxConverter", inputs=["FileTypeClassifier.output_4"]) indexing_pipeline_with_classification.add_node(component=preprocessor_sliding_window, name="Preprocessor", inputs=["TextConverter", "PdfConverter", "DocxConverter"]) indexing_pipeline_with_classification.add_node(component=doc_classifier, name="DocumentClassifier", inputs=["Preprocessor"]) indexing_pipeline_with_classification.add_node(component=document_store, name="DocumentStore", inputs=["DocumentClassifier"]) indexing_pipeline_with_classification.draw("index_time_document_classifier.png") document_store.delete_documents() txt_files = [f for f in Path(doc_dir).iterdir() if f.suffix == '.txt'] pdf_files = [f for f in Path(doc_dir).iterdir() if f.suffix == '.pdf'] docx_files = [f for f in Path(doc_dir).iterdir() if f.suffix == '.docx'] indexing_pipeline_with_classification.run(file_paths=txt_files) indexing_pipeline_with_classification.run(file_paths=pdf_files) indexing_pipeline_with_classification.run(file_paths=docx_files) document_store.get_all_documents()[0] # - # we can store this pipeline and use it from the REST-API indexing_pipeline_with_classification.save_to_yaml("indexing_pipeline_with_classification.yaml") # + [markdown] pycharm={"name": "#%% md\n"} # ## About us # # This [Haystack](https://github.com/deepset-ai/haystack/) notebook was made with love by [deepset](https://deepset.ai/) in Berlin, Germany # # We bring NLP to the industry via open source! # Our focus: Industry specific language models & large scale QA systems. # # Some of our other work: # - [German BERT](https://deepset.ai/german-bert) # - [GermanQuAD and GermanDPR](https://deepset.ai/germanquad) # - [FARM](https://github.com/deepset-ai/FARM) # # Get in touch: # [Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Slack](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai) # # By the way: [we're hiring!](https://www.deepset.ai/jobs) #
tutorials/Tutorial16_Document_Classifier_at_Index_Time.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     name: python3
# ---

# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/yukinaga/minnano_dl/blob/main/section_5/02_loss_function.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

# + [markdown] id="t1HGlYzi4u7a"
# # Defining the "error"
# We define an "error" between the network's outputs and the targets.
# The error can be defined in several ways; here we cover the sum-of-squares error.

# + [markdown] id="fOqhfIac2eK3"
# ## Sum-of-squares error
#
# A neural network has several outputs, each with a corresponding target value.
# Using these, the sum-of-squares error is defined as:
#
# $$ E = \frac{1}{2} \sum_{k=1}^n(y_k-t_k)^2 $$
#
# where $y_k$ is an output, $t_k$ the corresponding target, and $n$ the number of neurons in the output layer.
# The factor $\frac{1}{2}$ is included to make the derivative easier to handle.
#
# Let us plot the individual squared errors before they are summed:
#
# $$E_k = \frac{1}{2}(y_k-t_k)^2$$
#
# The code below shows how the squared error changes with the value of `y` when `t` is 0.25, 0.5 and 0.75.

# + id="aEQhZssn94cy"
import numpy as np
import matplotlib.pyplot as plt

def square_error(y, t):
    return (y - t)**2/2  # squared error

y = np.linspace(0, 1)
ts = [0.25, 0.5, 0.75]
for t in ts:
    plt.plot(y, square_error(y, t), label="t="+str(t))

plt.legend()
plt.xlabel("y")
plt.ylabel("Error")
plt.show()

# + [markdown] id="ZSujlDCbOUJ8"
# The error takes its minimum value of 0 when the output equals the target, and grows steadily as the two move apart.
# Summing these terms over all output/target pairs gives the size of the error for a given input.
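# As a complement, here is a small sketch that computes the total sum-of-squares error $E$ for one
# sample directly from the definition above; the output and target vectors are made-up values.

# +
y = np.array([0.1, 0.7, 0.2])   # example network outputs
t = np.array([0.0, 1.0, 0.0])   # example one-hot target

E = 0.5 * np.sum((y - t)**2)
print(E)  # 0.5 * (0.01 + 0.09 + 0.04) = 0.07
# -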
section_5/02_loss_function.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import gym import numpy as np import matplotlib as plt env = gym.make('Acrobot-v1') env.reset() # + for _ in range(1000): env.render() env.step(env.action_space.sample()) env.close() # - # # Action and State # # Reinforcement Learning will learn a mapping of states to the optimal action to perform in that state by exploration, i.e. the agent explores the environment and takes actions based off rewards defined in the environment. [source](https://www.learndatasci.com/tutorials/reinforcement-q-learning-scratch-python-openai-gym/) # # - **Action**, input provided by the agent to the environment # Here, left, nothing or right represented as +1, 0 or -1) # # # - **State**, numeric representation of what the agent is observing at a particular time in the environment # Here, the state consists of the sin() and cos() of the two rotational joint angles and the joint angular velocities : [cos(theta1) sin(theta1) cos(theta2) sin(theta2) thetaDot1 thetaDot2]. For the first link, an angle of 0 corresponds to the link pointing downwards. The angle of the second link is relative to the angle of the first link. An angle of 0 corresponds to having the same angle between the two links. A state of [1, 0, 1, 0, ..., ...] means that both links point downwards. # + env.reset() # reset environment to a new, random state env.render() env.close() print("Action Space (number of input possibility by agent user) {}".format(env.action_space)) print("State Space (encoding of the curent state to be mapped) {}".format(env.observation_space)) # - # # Q-learning # # ![image.png](attachment:image.png) # # To define the maximum expected cumulative award for given pair with hyperparameters : # - learning rate # - discount factor # # The Q learning equation maps state-action pairs to a maximum with combination of immediate reward plus future rewards i.e. for new states learned value is reward plus future estimate of rewards. # # Adapt the Qlearning function # # ![image.png](attachment:image.png) # # **from [moutain_car exemple](https://gist.github.com/gkhayes/3d154e0505e31d6367be22ed3da2e955) # # # Determine size of discretized state space num_states = (env.observation_space.high - env.observation_space.low) * np.array([1, 1, 1, 1, 1, 1]) #multiplication du state incrementé num_states = np.round(num_states, 0).astype(int) + 1 num_states # Initialize Q table Q = np.random.uniform(low = -1, high = 1, size = (num_states[0], num_states[1], env.action_space.n)) Q # Initialize variables to track rewards reward_list = [] ave_reward_list = [] # # Hyperparameters (1/2) : # # # We define **epsilon**, the exploration rate of different possibilities (set to 1 at the beginning). # Then randomly, if **epsilon** is less than this random number, we will explore the possible path. 
# Start = big **epsilon** # Progressively = reduce the **epsilon** as the agent estimates the Q-values more precisely (the lowest the epsilon, the more chances to select the best option (overfit)) # # **min_eps** : # # **episodes** : # + # Initialize epsilon at 1 epsilon = 0.2 # the lowest the epsilon, the more chances to select the best option (overfit) - the lower the more chances to choose the next action at random (here 20% of random choice) min_eps = 0.05 episodes = 5000 # episodes : will reduce the impact of epsilon every run (handles the progress) # Calculate episodic reduction in epsilon reduction = (epsilon - min_eps) / episodes # - # # 3 basic steps of Qlearning : # # 1. Agent starts in a state (s1) takes an action (a1) and receives a reward (r1) # 2. Agent selects action by referencing Q-table with highest value (max) OR by random (epsilon, ε) # 3. Update q-values # # # Hyperparameters (2/2) : # # **learning**: lr or learning rate (alpha in the equation, α), can simply be defined as how much you accept the new value vs the old value. Above we are taking the difference between new and old and then multiplying that value by the learning rate. This value then gets added to our previous q-value which essentially moves it in the direction of our latest update. # # **discount**: (gamma in the equation, γ) The discount factor is used to balance immediate and future reward. We apply the discount to the future reward upon update. Typically this value can range anywhere from 0.8 to 0.99. # + learning= 0.2 # learning rate discount = 0.9 # discount rate # Run Q learning algorithm for i in range(episodes): # Initialize parameters done = False tot_reward, reward = 0,0 state = env.reset() # Discretize state state_adj = (state - env.observation_space.low) * np.ones((6,)) state_adj = np.round(state_adj, 0).astype(int) while done != True: # Render environment for last 5 episodes if i >= (episodes - 5): env.render() # Determine next action - epsilon greedy strategy if np.random.random() < 1 - epsilon: # if random inferior to 1-epsilon (epsilon has to be between 0.0001-0.999) action = np.argmax(Q[state_adj[0], state_adj[1]]) else: action = np.random.randint(0, env.action_space.n) # Get next state and reward state2, reward, done, info = env.step(action) # Discretize state2 state2_adj = (state2 - env.observation_space.low) * np.ones((6,)) state2_adj = np.round(state2_adj, 0).astype(int) #Allow for terminal states if done and state2[0] >= 0.5: Q[state_adj[0], state_adj[1], action] = reward # Adjust Q value for current state else: delta = learning*(reward + discount*np.max(Q[state2_adj[0],state2_adj[1]]) - Q[state_adj[0], state_adj[1],action]) Q[state_adj[0], state_adj[1],action] += delta # Update variables tot_reward += reward state_adj = state2_adj # Decay epsilon ==== reduce the epsilon as the agent estimates the Q-values more precisely if epsilon > min_eps: epsilon -= reduction # Track rewards reward_list.append(tot_reward) if (i+1) % 100 == 0: # every 100 episodes, get the averaged reward printed on the list ave_reward = np.mean(reward_list) ave_reward_list.append(ave_reward) reward_list = [] if (i+1) % 100 == 0: print('Episode {} Average Reward: {}'.format(i+1, ave_reward)) env.close() ave_reward_list # - reward_list
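# A short sketch to visualize the learning curve from the averages collected every 100
# episodes; note that it imports `matplotlib.pyplot` explicitly, since the
# `import matplotlib as plt` at the top of this notebook does not expose the plotting functions.

# +
import matplotlib.pyplot as plt

plt.plot(100 * (np.arange(len(ave_reward_list)) + 1), ave_reward_list)
plt.xlabel('Episode')
plt.ylabel('Average reward (per 100 episodes)')
plt.title('Acrobot Q-learning progress')
plt.show()
# -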
games_gymOpenAI/acrobot/.ipynb_checkpoints/Gym-acrobot-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # --- # <style>div.container { width: 100% }</style> # <img style="float:left; vertical-align:text-bottom;" height="65" width="172" src="assets/PyViz_logo_wm_line.png" /> # <div style="float:right; vertical-align:text-bottom;"><h2>Tutorial 09. Operations and Pipelines</h2></div> # When interactively exploring a dataset you often end up interleaving visualization and analysis code. In HoloViews your visualization and your data are one and the same, so analysis and data transformations can be applied directly to the visualizable data. For that purpose HoloViews provides operations, which can be used to implement any analysis or data transformation you might want to do. Operations take a HoloViews Element and return another Element of either the same type or a new type, depending on the operation. We'll illustrate operations and pipelines using a variety of libraries: # # <div style="margin: 10px"> # <a href="http://holoviews.org"><img style="margin:8px; display:inline; object-fit:scale-down; max-height:150px" src="./assets/holoviews.png"/></a> # <a href="http://bokeh.pydata.org"><img style="margin:8px; display:inline; object-fit:scale-down; max-height:150px" src="./assets/bokeh.png"/></a> # <a href="http://datashader.org"><img style="margin:8px; display:inline; object-fit:scale-down; max-height:150px" src="./assets/datashader.png"/></a> # <a href="http://ioam.github.io/param"><img style="margin:8px; display:inline; object-fit:scale-down; max-height:150px" src="./assets/param.png"/></a><br><br> # <a href="http://pandas.pydata.org"><img style="margin:8px; display:inline; object-fit:scale-down; max-height:140px" src="./assets/pandas.png"/></a> # <a href="http://matplotlib.org"><img style="margin:8px; display:inline; object-fit:scale-down; max-height:150px" src="./assets/matplotlib_wm.png"/></a> # <a href="http://numpy.org"><img style="margin:8px; display:inline; object-fit:scale-down; max-height:150px" src="./assets/numpy.png"/></a> # </div> # # Since Operations know about HoloViews you can apply them to large collections of data collected in HoloMap and DynamicMap containers. Since operations work on both of these containers that means they can also be applied lazily. This feature allows us to chain multiple operations in a data analysis, processing, and visualization pipeline, e.g. to drive the operation of a dashboard. # # Pipelines built using DynamicMap and HoloViews operations are also useful for caching intermediate results and just-in-time computations, because they lazily (re)compute just the part of the pipeline that has changed. # + import time import param import numpy as np import holoviews as hv from holoviews.operation.timeseries import rolling, rolling_outlier_std from holoviews.operation.datashader import datashade, dynspread hv.extension('bokeh') # - # # Declare some data # In this example we'll work with a timeseries that stands in for stock-price data. 
We'll define a small function to generate a random, noisy timeseries, then define a ``DynamicMap`` that will generate a timeseries for each stock symbol: # + def time_series(T=1, N=100, mu=0.1, sigma=0.1, S0=20): """Parameterized noisy time series""" dt = float(T)/N t = np.linspace(0, T, N) W = np.random.standard_normal(size = N) W = np.cumsum(W)*np.sqrt(dt) # standard brownian motion X = (mu-0.5*sigma**2)*t + sigma*W S = S0*np.exp(X) # geometric brownian motion return S def load_symbol(symbol, **kwargs): return hv.Curve(time_series(N=10000), kdims=[('time', 'Time')], vdims=[('adj_close', 'Adjusted Close')]) stock_symbols = ['AAPL', 'FB', 'IBM', 'GOOG', 'MSFT'] dmap = hv.DynamicMap(load_symbol, kdims=['Symbol']).redim.values(Symbol=stock_symbols) # - # We will start by visualizing this data as-is: # %opts Curve [width=600] {+framewise} dmap # ## Applying an operation # Now let's start applying some operations to this data. HoloViews ships with two ready-to-use timeseries operations: the ``rolling`` operation, which applies a function over a rolling window, and a ``rolling_outlier_std`` operation that computes outlier points in a timeseries. Specifically, ``rolling_outlier_std`` excludes points less than one sigma (standard deviation) away from the rolling mean, which is just one example; you can trivially write your own operations that do whatever you like. # %opts Scatter (color='indianred') smoothed = rolling(dmap, rolling_window=30) outliers = rolling_outlier_std(dmap, rolling_window=30) smoothed * outliers # As you can see, the operations transform the ``Curve`` element into a smoothed version and a set of ``Scatter`` points containing the outliers both with a ``rolling_window`` of 30. Since we applied the operation to a ``DynamicMap``, the operation is lazy and only computes the result when it is requested. # + # Exercise: Apply the rolling and rolling_outlier_std operations changing the rolling_window and sigma parameters # - # ## Linking operations to streams # Instead of supplying the parameter values for each operation explicitly as a scalar value, we can also define a ``Stream`` that will let us update our visualization dynamically. By supplying a ``Stream`` with a ``rolling_window`` parameter to both operations, we can now generate our own events on the stream and watch our visualization update each time. # + rolling_stream = hv.streams.Stream.define('rolling', rolling_window=5) stream = rolling_stream() rolled_dmap = rolling(dmap, streams=[stream]) outlier_dmap = rolling_outlier_std(dmap, streams=[stream]) rolled_dmap * outlier_dmap # - for i in range(20, 200, 20): time.sleep(0.2) stream.event(rolling_window=i) # + # Exercise: Create a stream to control the sigma value and add it to the outlier operation, # then vary the sigma value and observe the effect # - # ## Defining operations # # Defining custom Operations is also very straightforward. For instance, let's define an ``Operation`` to compute the residual between two overlaid ``Curve`` Elements. All we need to do is subclass from the ``Operation`` baseclass and define a ``_process`` method, which takes the ``Element`` or ``Overlay`` as input and returns a new ``Element``. The residual operation can then be used to subtract the y-values of the second Curve from those of the first Curve. # + from holoviews.operation import Operation class residual(Operation): """ Subtracts two curves from one another. 
""" label = param.String(default='Residual', doc=""" Defines the label of the returned Element.""") def _process(self, element, key=None): # Get first and second Element in overlay el1, el2 = element.get(0), element.get(1) # Get x-values and y-values of curves xvals = el1.dimension_values(0) yvals1 = el1.dimension_values(1) yvals2 = el2.dimension_values(1) # Return new Element with subtracted y-values # and new label return el1.clone((xvals, yvals1-yvals2), vdims=[self.p.label]) # - # To see what that looks like in action let's try it out by comparing the smoothed and original Curve. residual_dmap = residual(rolled_dmap * dmap) residual_dmap # Since the stream we created is linked to one of the inputs of ``residual_dmap``, changing the stream values triggers updates both in the plot above and in our new residual plot. for i in range(20, 200, 20): time.sleep(0.2) stream.event(rolling_window=i) # ## Chaining operations # # Of course, since operations simply transform an Element in some way, operations can easily be chained. As a simple example, we will take the ``rolled_dmap`` and apply the ``datashading`` and ``dynspread`` operation to it to construct a datashaded version of the plot. As you'll be able to see, this concise specification defines a complex analysis pipeline that gets reapplied whenever you change the Symbol or interact with the plot -- whenever the data needs to be updated. # %%opts RGB [width=600 height=400] {+framewise} overlay = dynspread(datashade(rolled_dmap)) * outlier_dmap (overlay + residual_dmap).cols(1) # ## Visualizing the pipeline # To understand what is going on we will write a small utility that traverses the output we just displayed above and visualizes each processing step leading up to it. # + # %%opts RGB Curve [width=250 height=200] def traverse(obj, key, items=None): items = [] if items is None else items for inp in obj.callback.inputs[:1]: label = inp.callback.operation.name if isinstance(inp.callback, hv.core.OperationCallable) else 'price' if inp.last: items.append(inp[key].relabel(label)) if isinstance(inp, hv.DynamicMap): traverse(inp, key, items) return list(hv.core.util.unique_iterator(items))[:-1] hv.Layout(traverse(overlay, 'AAPL')).cols(4) # - # Reading from right to left, the original price timeseries is first smoothed with a rolling window, then datashaded, then each pixel is spread to cover a larger area. As you can see, arbitrarily many standard or custom operations can be defined to capture even very complex workflows so that they can be replayed dynamically as needed interactively.
notebooks/09_Operations_and_Pipelines.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### Fashion classification model # + # Version 1 # # Original lesson: # https://www.tensorflow.org/tutorials/keras/classification # + # Imports import tensorflow as tf from tensorflow import keras print(f"tf_version = {tf.__version__}") import numpy as np import matplotlib.pyplot as plt # + # Prepare data fashion_mnist = keras.datasets.fashion_mnist (train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data() print(train_images.shape) class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot'] # - # show sample plt.figure() plt.imshow(train_images[0]) plt.colorbar() plt.grid(False) plt.show() # normalize train_images = train_images / 255.0 test_images = test_images / 255.0 # + # Visual test for test_images plt.figure(figsize=(20, 20)) for i in range(25): plt.subplot(5,5,i+1) plt.xticks([]) plt.yticks([]) plt.imshow(train_images[i], cmap=plt.cm.binary) plt.xlabel(class_names[train_labels[i]], size=25) plt.grid(False) plt.show() # + model = keras.Sequential([ keras.layers.Flatten(input_shape=(28, 28)), keras.layers.Dense(100, activation='relu'), keras.layers.Dense(10, activation='softmax') ]) model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) # - # training & test model.fit(train_images, train_labels, epochs=10) test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=1) predictions = model.predict(test_images) def plot_image(i, predictions_array, true_label, img): predictions_array, true_label, img = predictions_array[i], true_label[i], img[i] plt.grid(False) plt.xticks([]) plt.yticks([]) plt.imshow(img, cmap=plt.cm.binary) predicted_label = np.argmax(predictions_array) if predicted_label == true_label: color = 'blue' else: color = 'red' plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label], 100*np.max(predictions_array), class_names[true_label]), color=color) def plot_value_array(i, predictions_array, true_label): predictions_array, true_label = predictions_array[i], true_label[i] plt.grid(False) plt.xticks([]) plt.yticks([]) thisplot = plt.bar(range(10), predictions_array, color="#777777") plt.ylim([0, 1]) predicted_label = np.argmax(predictions_array) thisplot[predicted_label].set_color('red') thisplot[true_label].set_color('blue') i = 12 plt.figure(figsize=(6,3)) plt.subplot(1,2,1) plot_image(i, predictions, test_labels, test_images) plt.subplot(1,2,2) plot_value_array(i, predictions, test_labels) plt.show() num_rows = 20 num_cols = 3 num_images = num_rows*num_cols plt.figure(figsize=(2*2*num_cols, 2*num_rows)) for i in range(num_images): plt.subplot(num_rows, 2*num_cols, 2*i+1) plot_image(i, predictions, test_labels, test_images) plt.subplot(num_rows, 2*num_cols, 2*i+2) plot_value_array(i, predictions, test_labels) plt.show() # + img = test_images[0] print(img.shape) img = (np.expand_dims(img,0)) print(img.shape) predictions_single = model.predict(img) print(predictions_single.argmax()) plot_value_array(0, predictions_single, test_labels) _ = plt.xticks(range(10), class_names, rotation=90)
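# Finally, a short sketch for persisting the trained model and reloading it for later inference;
# the file name is a placeholder.

# +
model.save('fashion_mnist_model.h5')                             # architecture + weights
restored_model = keras.models.load_model('fashion_mnist_model.h5')

restored_predictions = restored_model.predict(test_images)
print('accuracy of the restored model:',
      (restored_predictions.argmax(axis=1) == test_labels).mean())
# -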
fashion.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] pycharm={"name": "#%% md\n"} # # Playground and experiments # # _This is not the notebook you want to read..._ # + pycharm={"name": "#%%\n"} import xarray from pathlib import Path import geopandas as gp from IPython.display import display from shapely.geometry import Point data_dir = Path("data") # + [markdown] pycharm={"name": "#%% md\n"} # ## What happens if we combine two DataArrays with different coordinates in a Dataset? # + pycharm={"name": "#%%\n"} da1 = xarray.open_rasterio(data_dir / "grassland_25.tiff") da2 = xarray.open_rasterio(data_dir / "small_woody_features_27.tiff") da3 = xarray.open_rasterio(data_dir / "tree_cover_density_25.tiff") # + pycharm={"name": "#%%\n"} print(da1) # + pycharm={"name": "#%%\n"} print(da2) # + pycharm={"name": "#%%\n"} ds = xarray.Dataset({"grass": da1, "swf": da2}) # + pycharm={"name": "#%%\n"} print(ds) # + [markdown] pycharm={"name": "#%% md\n"} # -> we get the union of coordinates in the Dataset # # ## What about identical coordinates? # + pycharm={"name": "#%%\n"} ds = xarray.Dataset({"grass": da1, "trees": da3}) # + pycharm={"name": "#%%\n"} print(ds) assert (ds.coords["x"].values == da1.coords["x"].values).all() assert (ds.coords["y"].values == da1.coords["y"].values).all() assert (ds.coords["x"].values == da3.coords["x"].values).all() assert (ds.coords["y"].values == da3.coords["y"].values).all() # + [markdown] pycharm={"name": "#%% md\n"} # -> we get one Dataset with the same coordinates as the input DataArrays # # ## Interpolate to join arrays with different coordinates # # This fails: Tries to interpolate on 'band', which has only one value. 
# + pycharm={"name": "#%%\n"} da2_resampled = da2.interp_like(da1) # + [markdown] pycharm={"name": "#%% md\n"} # Squeeze out 'band' dimension before resampling: # + pycharm={"name": "#%%\n"} da2_resampled = da2.squeeze('band').interp_like(da1, method='nearest') # + pycharm={"name": "#%%\n"} print(da2_resampled) assert (da2_resampled.coords["x"].values == da1.coords["x"].values).all() assert (da2_resampled.coords["y"].values == da1.coords["y"].values).all() # + [markdown] pycharm={"name": "#%% md\n"} # Now we can nicely merge all three DataArrays into one Dataset # + pycharm={"name": "#%%\n"} ds = xarray.Dataset({"grass": da1, "swf": da2_resampled, "trees": da3}) print(ds) assert (ds.coords["x"].values == da1.coords["x"].values).all() assert (ds.coords["y"].values == da1.coords["y"].values).all() # + pycharm={"name": "#%%\n"} ds.max() # + pycharm={"name": "#%%\n"} ds["grass"] # + pycharm={"name": "#%%\n"} # - # ### Build voronoi regions # # _Insert this into the demo after preprocessing the raster data_ # # + pycharm={"name": "#%%\n"} voronoi_polys, _ = geovoronoi.voronoi_regions_from_coords( coords=geovoronoi.points_to_coords(school_points["geometry"]), geo_shape=munich_df_metric.iloc[0]["geometry"] ) print(voronoi_polys) # + pycharm={"name": "#%%\n"} school_polys = gp.GeoSeries(voronoi_polys) # + pycharm={"name": "#%%\n"} school_polys.plot(figsize=(12, 10)) school_points.plot(ax=plt.gca(), color='red') # + pycharm={"name": "#%%\n"} da1.crs # + [markdown] pycharm={"name": "#%% md\n"} # # Experiments on accuracy of distances in difference CRSs # + pycharm={"name": "#%%\n"} from math import radians, cos, sin, asin, sqrt def haversine(lon1, lat1, lon2, lat2): lon1, lat1, lon2, lat2 = map(radians, [lon1, lat1, lon2, lat2]) dlon = lon2 - lon1 dlat = lat2 - lat1 a = sin(dlat/2)**2 + cos(lat1) * cos(lat2) * sin(dlon/2)**2 c = 2 * asin(sqrt(a)) r = 6371_000 # radius of Earth in m return c * r # + pycharm={"name": "#%%\n"} # Accurate (haversine) distance for 0.1 degree increment from lon=10, lat=50 print("0.1 degrees North:", haversine(10, 50, 10, 50.1)) print("0.1 degrees East:", haversine(10, 50, 10.1, 50)) # + pycharm={"name": "#%%\n"} # Project to EPSG 3857 and EPSG 25832, measure by coordinate difference lonlat = gp.GeoSeries([ Point(10, 50), Point(10, 50.1), Point(10.1, 50) ], crs="EPSG:4326") for p in ["3857", "25832"]: proj = lonlat.to_crs("EPSG:" + p) print(f"0.1 degrees North, delta in EPSG {p} coords:", proj.iloc[1].distance(proj.iloc[0])) print(f"0.1 degrees East, delta in EPSG {p} coords:", proj.iloc[2].distance(proj.iloc[0])) # + [markdown] pycharm={"name": "#%% md\n"} # -> EPSG 3857 is off by a lot on the East/West distance # # + pycharm={"name": "#%%\n"}
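# The discrepancy for EPSG 3857 is the well-known Web Mercator scale distortion: locally,
# projected distances are inflated by roughly 1/cos(latitude). A quick check (sketch):

# +
from math import cos, radians

lat = 50
print(f"expected inflation factor at {lat}N: {1 / cos(radians(lat)):.2f}x")
# ~1.56x, which should match the ratio between the EPSG 3857 coordinate deltas above
# and the haversine ground distances
# -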
playground.ipynb
# # Density test # Here, we compare the two unmatched networks by treating each as an Erdos-Renyi network # and simply compare their estimated densities. # ## The Erdos-Renyi (ER) model # The [**Erdos-Renyi (ER) model** # ](https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93R%C3%A9nyi_model) # is one of the simplest network models. This model treats # the probability of each potential edge in the network occuring to be the same. In # other words, all edges between any two nodes are equally likely. # # ```{admonition} Math # Let $n$ be the number of nodes. We say that for all $(i, j), i \neq j$, with $i$ and # $j$ both running # from $1 ... n$, the probability of the edge $(i, j)$ occuring is: # # $$ P[A_{ij} = 1] = p_{ij} = p $$ # # Where $p$ is the the global connection probability. # # Each element of the adjacency matrix $A$ is then sampled independently according to a # [Bernoulli distribution](https://en.wikipedia.org/wiki/Bernoulli_distribution): # # $$ A_{ij} \sim Bernoulli(p) $$ # # For a network modeled as described above, we say it is distributed # # $$ A \sim ER(n, p) $$ # # ``` # # Thus, for this model, the only parameter of interest is the global connection # probability, $p$. This is sometimes also referred to as the **network density**. # ## Testing under the ER model # In order to compare two networks $A^{(L)}$ and $A^{(R)}$ under this model, we # simply need to compute these network densities ($p^{(L)}$ and $p^{(R)}$), and then # run a statistical test to see if these densities are significantly different. # # ```{admonition} Math # Under this # model, the total number of edges $m$ comes from a $Binomial(n(n-1), p)$ distribution, # where $n$ is the number of nodes. This is because the number of edges is the sum of # independent Bernoulli trials with the same probability. If $m^{(L)}$ is the number of # edges on the left # hemisphere, and $m^{(R)}$ is the number of edges on the right, then we have: # # $$m^{(L)} \sim Binomial(n^{(L)}(n^{(L)} - 1), p^{(L)})$$ # # and independently, # # $$m^{(R)} \sim Binomial(n^{(R)}(n^{(R)} - 1), p^{(R)})$$ # # To compare the two networks, we are just interested in a comparison of $p^{(L)}$ vs. # $p^{(R)}$. Formally, we are testing: # # $$H_0: p^{(L)} = p^{(R)}, \quad H_a: p^{(L)} \neq p^{(R)}$$ # # Fortunately, the problem of testing for equal proportions is well studied. # In our case, we will use Fisher's Exact test to run this test for the null and # alternative hypotheses above. 
# ``` # + import datetime import time import matplotlib.path import matplotlib.pyplot as plt import matplotlib.transforms import numpy as np import pandas as pd import seaborn as sns from giskard.plot import merge_axes, soft_axis_off from graspologic.simulations import er_np from matplotlib.collections import LineCollection from pkg.data import load_network_palette, load_node_palette, load_unmatched from pkg.io import FIG_PATH from pkg.io import glue as default_glue from pkg.io import savefig from pkg.plot import SmartSVG, networkplot_simple, set_theme from pkg.plot.er import plot_density from pkg.stats import erdos_renyi_test from pkg.utils import sample_toy_networks from svgutils.compose import Figure, Panel, Text from pkg.plot import draw_hypothesis_box, rainbowarrow DISPLAY_FIGS = True FILENAME = "er_unmatched_test" def gluefig(name, fig, **kwargs): savefig(name, foldername=FILENAME, **kwargs) glue(name, fig, figure=True) if not DISPLAY_FIGS: plt.close() def glue(name, var, **kwargs): default_glue(name, var, FILENAME, **kwargs) t0 = time.time() set_theme(font_scale=1.25) network_palette, NETWORK_KEY = load_network_palette() node_palette, NODE_KEY = load_node_palette() left_adj, left_nodes = load_unmatched("left") right_adj, right_nodes = load_unmatched("right") # + # describe ER model np.random.seed(8888) ps = [0.2, 0.4, 0.6] n_steps = len(ps) fig, axs = plt.subplots( 2, n_steps, figsize=(6, 3), gridspec_kw=dict(height_ratios=[2, 0.5]), constrained_layout=True, ) n = 18 for i, p in enumerate(ps): A = er_np(n, p) if i == 0: node_data = pd.DataFrame(index=np.arange(n)) ax = axs[0, i] networkplot_simple(A, node_data, ax=ax, compute_layout=i == 0) label_text = f"{p}" if i == 0: label_text = r"$p = $" + label_text ax.set_title(label_text, pad=10) fig.set_facecolor("w") ax = merge_axes(fig, axs, rows=1) soft_axis_off(ax) rainbowarrow(ax, (0.15, 0.5), (0.85, 0.5), cmap="Blues", n=100, lw=12) ax.set_xlim((0, 1)) ax.set_ylim((0, 1)) ax.set_xticks([]) ax.set_yticks([]) ax.set_xlabel("Increasing density") gluefig("er_explain", fig) # + A1, A2, node_data = sample_toy_networks() node_data["labels"] = np.ones(len(node_data), dtype=int) palette = {1: sns.color_palette("Set2")[2]} fig, axs = plt.subplots(2, 2, figsize=(6, 6), gridspec_kw=dict(wspace=0.7)) ax = axs[0, 0] networkplot_simple(A1, node_data, ax=ax) ax.set_title("Compute global\nconnection density") ax.set_ylabel( "Left", color=network_palette["Left"], size="large", rotation=0, ha="right", labelpad=10, ) ax = axs[1, 0] networkplot_simple(A2, node_data, ax=ax) ax.set_ylabel( "Right", color=network_palette["Right"], size="large", rotation=0, ha="right", labelpad=10, ) stat, pvalue, misc = erdos_renyi_test(A1, A2) ax = axs[0, 1] ax.text( 0.4, 0.2, r"$p = \frac{\# \ edges}{\# \ potential \ edges}$", ha="center", va="center", ) ax.axis("off") ax.set_title("Compare ER\nmodels") ax.set(xlim=(-0.5, 2), ylim=(0, 1)) ax = axs[1, 1] ax.axis("off") x = 0 y = 0.55 draw_hypothesis_box("er", -0.2, 0.8, ax=ax, fontsize="medium", yskip=0.2) gluefig("er_methods", fig) # - stat, pvalue, misc = erdos_renyi_test(left_adj, right_adj) glue("pvalue", pvalue, form="pvalue") # + n_possible_left = misc["possible1"] n_possible_right = misc["possible2"] glue("n_possible_left", n_possible_left) glue("n_possible_right", n_possible_right) density_left = misc["probability1"] density_right = misc["probability2"] glue("density_left", density_left, form="0.2g") glue("density_right", density_right, form="0.2g") n_edges_left = misc["observed1"] n_edges_right = misc["observed2"] 
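# As a cross-check (sketch), the same comparison of the two densities can be run directly with
# scipy's Fisher's exact test on the 2x2 table of (edges, non-edges) per hemisphere; the p-value
# should closely match the one returned by `erdos_renyi_test` above.

# +
from scipy.stats import fisher_exact

table = [
    [n_edges_left, n_possible_left - n_edges_left],
    [n_edges_right, n_possible_right - n_edges_right],
]
_, manual_pvalue = fisher_exact(table, alternative="two-sided")
print(f"Fisher's exact p-value: {manual_pvalue:.3g}")
# -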
# +
coverage = 0.95
glue("coverage", coverage, form="2.0f%")

plot_density(misc, palette=network_palette, coverage=coverage)

gluefig("er_density", fig)
# -

# ## Reject bilateral symmetry under the ER model
#
# ```{glue:figure} fig:er_unmatched_test-er_density
# :name: "fig:er_unmatched_test-er_density"
#
# Comparison of estimated densities for the left and right hemisphere networks. The
# estimated density (probability of any edge across the entire network), $\hat{p}$, for
# the left hemisphere is ~{glue:text}`er_unmatched_test-density_left:0.3f`, while for
# the right it is ~{glue:text}`er_unmatched_test-density_right:0.3f`. Black lines denote
# {glue:text}`er_unmatched_test-coverage_percentage`**%**
# confidence intervals for this estimated parameter $\hat{p}$. The p-value for testing
# the null hypothesis that these densities are the same is
# {glue:text}`er_unmatched_test-pvalue:0.3g` (two-sided Fisher's exact test).
# ```
#
# {numref}`Figure {number} <fig:er_unmatched_test-er_density>` shows the comparison of
# the network densities between the left and right hemisphere induced subgraphs. We see
# that the density on the left is ~{glue:text}`er_unmatched_test-density_left:0.3f`, and
# on the right it is ~{glue:text}`er_unmatched_test-density_right:0.3f`. To determine
# whether this is a difference likely to be observed by chance under the ER model,
# we ran a two-sided Fisher's exact test, which tests whether the success probabilities
# of two independent binomials are significantly different. This test yields a
# p-value of {glue:text}`er_unmatched_test-pvalue:0.3g`, suggesting that we have strong
# evidence to reject this version of our hypothesis of bilateral symmetry. While the
# difference between the estimated densities is not large, the low p-value results from
# the large sample size for this comparison: there are
# {glue:text}`er_unmatched_test-n_possible_left:,.0f` and
# {glue:text}`er_unmatched_test-n_possible_right:,.0f` potential edges on the left and
# right, respectively.
#
# To our knowledge, when neuroscientists have considered the question of bilateral
# symmetry, they have not meant such a simple comparison of proportions. In many ways,
# the ER model is too simple to be an interesting description of connectome structure.
# However, we note that *even the simplest network model* yields a significant
# difference between brain hemispheres for this organism. It is unclear whether this
# difference in densities is biological (e.g. a result of slightly differing rates of
# development for this individual), an artifact of how the data was collected (e.g.
# technological limitations causing slightly lower reconstruction rates on the left
# hemisphere), or something else entirely. Still, the ER test results also provide
# important considerations for other tests. Almost any network statistic (e.g.
# clustering coefficient, number of triangles, etc.), as well as many of the model-based
# parameters we will consider in this paper, are strongly related to the network
# density. Thus, if the densities are different, it is likely that tests based on any
# of these other test statistics will also reject the null hypothesis, and we will need
# ways of telling whether an observed difference for these other tests could be
# explained by this difference in density alone.
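# As a cross-check of the numbers above, the density comparison can be reproduced directly as a
# two-sided Fisher's exact test on the 2x2 table of (edges, non-edges) for each hemisphere. This is
# only an illustrative sketch using the quantities already stored in `misc`; the `erdos_renyi_test`
# call above remains the result reported in the text, and `scipy` is an extra dependency assumed here.

# +
from scipy.stats import fisher_exact

# rows: hemispheres; columns: observed edges vs. remaining potential edges
contingency = np.array(
    [
        [n_edges_left, n_possible_left - n_edges_left],
        [n_edges_right, n_possible_right - n_edges_right],
    ]
)
_, pvalue_check = fisher_exact(contingency, alternative="two-sided")
print(pvalue_check)  # should match (or closely approximate) the glued p-value above
# -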
# + FIG_PATH = FIG_PATH / FILENAME fontsize = 12 methods = SmartSVG(FIG_PATH / "er_methods.svg") methods.set_width(200) methods.move(10, 20) methods_panel = Panel( methods, Text("A) Density test methods", 5, 10, size=fontsize, weight="bold") ) density = SmartSVG(FIG_PATH / "er_density.svg") density.set_height(methods.height) density.move(10, 15) density_panel = Panel( density, Text("B) Density comparison", 5, 10, size=fontsize, weight="bold") ) density_panel.move(methods.width * 0.9, 0) fig = Figure( (methods.width + density.width) * 0.9, (methods.height) * 0.9, methods_panel, density_panel, ) fig.save(FIG_PATH / "composite.svg") fig # - elapsed = time.time() - t0 delta = datetime.timedelta(seconds=elapsed) print(f"Script took {delta}") print(f"Completed at {datetime.datetime.now()}")
docs/er_unmatched_test.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python [default]
#     language: python
#     name: python2
# ---

# # Dataset Exploration
#
# Here we'll be exploring how each of the features we have so far relates to the target variable "status".

# ## Importing the dataset

import pandas as pd

startups = pd.read_csv('data/startups_2.csv', index_col=0)
startups[:3]

# ### Let's start exploring the numerical features

# Let's see a heatmap chart of the average features for 'acquired' startups against the complete set of startups.

# +
import seaborn as sns
import matplotlib.pyplot as plt
# %matplotlib inline

def plot_avg_status_against_avg_total(df, status):
    startups_numeric = df.filter(regex=('(number_of|avg_).*|.*(funding_total_usd|funding_rounds|_at|status)'))
    startups_acquired = startups_numeric[startups_numeric['status'] == status]

    startups_numeric = startups_numeric.drop('status', 1)
    startups_acquired = startups_acquired.drop('status', 1)

    fig, ax = plt.subplots(figsize=(20,20))
    ax.set_title(status+' startups heatmap')
    sns.heatmap((pd.DataFrame(startups_acquired.mean()).transpose()
                 -startups_numeric.mean())/startups_numeric.std(ddof=0),
                annot=True, cbar=False, square=True, ax=ax)

plot_avg_status_against_avg_total(startups, 'acquired')
# -

# The same for 'closed':

plot_avg_status_against_avg_total(startups, 'closed')

plot_avg_status_against_avg_total(startups, 'ipo')

plot_avg_status_against_avg_total(startups, 'operating')

# The heatmaps show logical behavior. Acquired startups tend to have high venture_funding_rounds and
# low seed_funding_rounds, while closed startups have few funding rounds in general and relatively
# high angel_funding_rounds.
#
# The date variables also behave as expected: acquired and closed startups have gone longer since
# their last funding, while operating startups received funding more recently than the rest of the
# startups.
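# The heatmaps above plot, for each status, the group mean of every numerical feature expressed in
# standard deviations away from the overall mean. The same numbers can also be inspected as a plain
# table, which is sometimes easier to read than four separate heatmaps. This is just a sketch that
# reuses the column filter from the plotting helper above; the new variable names are only local to
# this cell.

# +
startups_numeric_all = startups.filter(regex=('(number_of|avg_).*|.*(funding_total_usd|funding_rounds|_at|status)'))

# group means per status, standardized against the full sample
group_means = startups_numeric_all.groupby('status').mean()
overall = startups_numeric_all.drop('status', 1)
standardized_means = (group_means - overall.mean()) / overall.std(ddof=0)

standardized_means.T[:10]  # first few features, one column per status
# -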
# +
# Produce a scatter matrix for each pair of features in the data
#startups_funding_rounds = startups_numeric.filter(regex=('.*funding_total_usd'))
#pd.scatter_matrix(startups_funding_rounds, alpha = 0.3, figsize = (14,8), diagonal = 'kde');
# -

# ## Applying PCA to discover which features best explain the variance in the dataset

# +
from sklearn.decomposition import PCA
import visuals as vs

startups_numeric = startups.filter(regex=('(number_of|avg_).*|.*(funding_total_usd|funding_rounds|_at)'))

# TODO: Apply PCA by fitting the good data with the same number of dimensions as features
pca = PCA(n_components=4)
pca.fit(startups_numeric)

# Generate PCA results plot
pca_results = vs.pca_results(startups_numeric, pca)

startups_numeric[:3]

# +
good_data = startups_numeric
import numpy as np

dimensions = ['Dimension {}'.format(i) for i in range(1,len(pca.components_)+1)]

components = pd.DataFrame(np.round(pca.components_, 4), columns = good_data.keys())
components.index = dimensions
components
# -

# The most important variables here are:
#
# Dimension1: funding_rounds, -last_funding_at, debt_financing_funding_rounds, venture_funding_rounds
#
# Dimension2: -funding_rounds, -last_funding_at, -seed_funding_rounds, venture_funding_rounds
#
# Dimension3: -last_funding_at, equity_crowdfunding_funding_rounds, -seed_funding_rounds
#
# Dimension4: last_funding_at, equity_crowdfunding_funding_rounds, seed_funding_rounds

# ### Now I'll apply the same PCA algorithm, but just for startups with acquired status

# +
startups_numeric_acquired = startups.filter(regex=('(number_of|avg_).*|.*(funding_total_usd|funding_rounds|_at|status)'))
startups_numeric_acquired = startups_numeric_acquired[startups_numeric_acquired['status'] == 'acquired']
startups_numeric_acquired = startups_numeric_acquired.drop('status', 1)

pca = PCA(n_components=4)
pca.fit(startups_numeric_acquired)

# Generate PCA results plot
pca_results = vs.pca_results(startups_numeric_acquired, pca)
# -

# We now see that some features explain much more of the variance than others.
#
# We also see that the funding_rounds variables tend to dominate over the funding_total_usd values,
# and that last_funding_at is a very expressive variable.
#
#
# ### Let's start playing with non-numerical variables: dates and categories

#startups_numeric = df.filter(regex=('.*(funding_total_usd|funding_rounds|status)'))
startups_non_numeric = startups.filter(regex=('^((?!(_acquisitions|_investments|_per_round|funding_total_usd|funding_rounds|_at)).)*$'))
startups_non_numeric[:3]

# ### Let's try some DecisionTrees for categories and see what performance we get.

startups_non_numeric['status'].value_counts()

startups_non_numeric['acquired'] = startups_non_numeric['status'].map({'operating': 0, 'acquired':1, 'closed':0, 'ipo':0})
startups_non_numeric = startups_non_numeric.drop('status', 1)
startups_non_numeric[:3]

from sklearn import tree

def visualize_tree(tree_model, feature_names):
    """Create tree png using graphviz.

    Args
    ----
    tree_model -- scikit-learn DecisionTree.
    feature_names -- list of feature names.
    """
    import subprocess  # needed for the check_call below; this import was missing in the original cell

    with open("dt.dot", 'w') as f:
        tree.export_graphviz(tree_model, out_file=f,
                             feature_names=feature_names)

    command = ["dot", "-Tpng", "dt.dot", "-o", "dt.png"]
    try:
        subprocess.check_call(command)
    except:
        exit("Could not run dot, ie graphviz, to "
             "produce visualization")


# +
#import visuals_tree as vs_tree
#vs_tree.ModelLearning(startups_non_numeric.drop(['acquired','state_code'], 1), startups_non_numeric['acquired'])
from sklearn import tree
from sklearn.cross_validation import cross_val_score
from sklearn import grid_search
from sklearn import preprocessing

#clf = tree.DecisionTreeClassifier(random_state=0)
#cross_val_score(clf, startups_non_numeric.drop(['acquired','state_code'], 1), startups_non_numeric['acquired'], cv=10)

#Drop state_code feature
features = startups_non_numeric.drop(['acquired','state_code'], 1)

#Convert state_code feature to number
#features = startups_non_numeric.drop(['acquired'], 1)
#features['state_code'] = preprocessing.LabelEncoder().fit_transform(features['state_code'])

#Convert state_code to dummy variables
features = pd.get_dummies(startups_non_numeric.drop(['acquired'], 1), prefix='state', columns=['state_code'])

#Merge numeric features and non-numeric features
features_all = pd.concat([features, startups_numeric], axis=1, ignore_index=False)
#features = features_all
features = startups_numeric

parameters = {'max_depth':range(5,20)}
clf = grid_search.GridSearchCV(tree.DecisionTreeClassifier(), parameters, n_jobs=5, scoring='roc_auc')
clf.fit(X=features, y=startups_non_numeric['acquired'])
tree_model = clf.best_estimator_
print (clf.best_score_, clf.best_params_)

print tree.export_graphviz(clf.best_estimator_, feature_names=list(features.columns))
# -

import visuals_tree as vs_tree
vs_tree = reload(vs_tree)
vs_tree.ModelComplexity(features_all, startups_non_numeric['acquired'])

# ### Categories and state alone are not enough for a good prediction: they reach a maximum roc_auc of 0.64, while adding the numerical attributes lets a simple DecisionTreeClassifier reach 0.84 roc_auc.

# ## Saving the dataset ready to be tested by different learning algorithms

all = pd.concat([features_all, startups_non_numeric['acquired']], axis=1, ignore_index=False)
all.to_csv('data/startups_3.csv')

all_with_status = all.join(startups['status'])
all_with_status_without_operating = all_with_status[all_with_status['status'] != 'operating']
all_with_status_without_operating.shape

all_without_operating = all_with_status_without_operating.drop('status', 1)
all_without_operating.to_csv('data/startups_not_operating_3.csv')
exploratory_code/3_dataset_exploration.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Execution Plan
#
# In this notebook we try to understand Spark execution plans. We will use the weather example and analyse all the steps in order to get a better understanding.
#
# ## Execution Model of Spark
#
# In contrast to many other (mainly non-distributed) frameworks, Spark does not execute any transformation immediately, but only records the step and builds a so-called execution plan. This plan is the basis for Spark's resilience against failure of individual nodes (since the result can be reconstructed from the execution plan), but it also allows Spark to perform optimizations which span all transformation steps.
#
# Specifically with Spark DataFrames (as opposed to the lower-level RDD interface), Spark uses an advanced optimizer. The general steps of query processing in response to an action (like a "show" or "save" action) are always as follows:
# 1. Parse logical execution plan
# 2. Analyze logical execution plan and resolve all symbols (tables, columns, functions)
# 3. Optimize logical execution plan
# 4. Create physical execution plan by mapping all steps to RDD operations
#
# ## Relation to RDDs
# Note that RDDs are only used in the very last step, although the general conception is that DataFrames sit on top of RDDs. The point is that a DataFrame first collects all transformations on a higher level of abstraction, and RDDs only come into play in this very last step.
#
# You can access the RDD of any DataFrame. BUT: this access will actually create the physical execution plan for this specific RDD; before you access it, the RDD does not even exist. This also means that using a DataFrame's RDD acts as an optimization barrier.
#
# ## Weather Example
#
# In the following steps, we will try to understand how Spark executes a simplified version of the weather analysis, including aggregations and joins.

spark.conf.set("spark.sql.adaptive.enabled", False)

# # 1. Load Data
#
# First we load the weather data, which consists of the measurement data and some station metadata.

storageLocation = "s3://dimajix-training/data/weather"

# ## 1.1 Load Measurements
#
# Measurements are stored in multiple directories (one per year).

# +
from pyspark.sql.functions import *
from functools import reduce

# Read in all years, store them in a Python list
raw_weather_per_year = [spark.read.text(storageLocation + "/" + str(i)).withColumn("year", lit(i)) for i in range(2003,2015)]

# Union all years together
raw_weather = reduce(lambda l,r: l.union(r), raw_weather_per_year)

# Display first 10 records
raw_weather.limit(10).toPandas()
# -

# ### Extract Measurements
#
# Measurements were stored in a proprietary text-based format, with some values at fixed positions. We need to extract these values with a simple `SELECT` statement.
# +
weather = raw_weather.select(
    col("year"),
    substring(col("value"),5,6).alias("usaf"),
    substring(col("value"),11,5).alias("wban"),
    substring(col("value"),16,8).alias("date"),
    substring(col("value"),24,4).alias("time"),
    substring(col("value"),42,5).alias("report_type"),
    substring(col("value"),61,3).alias("wind_direction"),
    substring(col("value"),64,1).alias("wind_direction_qual"),
    substring(col("value"),65,1).alias("wind_observation"),
    (substring(col("value"),66,4).cast("float") / lit(10.0)).alias("wind_speed"),
    substring(col("value"),70,1).alias("wind_speed_qual"),
    (substring(col("value"),88,5).cast("float") / lit(10.0)).alias("air_temperature"),
    substring(col("value"),93,1).alias("air_temperature_qual")
)

weather.limit(10).toPandas()
# -

# ## 1.2 Load Station Metadata
#
# We also need to load the weather station metadata containing information about the geo location, country, etc. of individual weather stations.

# +
stations = spark.read \
    .option("header", True) \
    .csv(storageLocation + "/isd-history")

# Display first 10 records
stations.limit(10).toPandas()
# -

# ## 1.3 Perform Analysis
#
# Now, for completeness' sake, let's redo the analysis (minimum and maximum temperature per year and country) using `JOIN` and `GROUP BY` operations.

# +
df = weather.join(stations, (weather.usaf == stations.USAF) & (weather.wban == stations.WBAN))
result = df.groupBy(df.CTRY, df.year).agg(
    min(when(df.air_temperature_qual == lit(1), df.air_temperature)).alias('min_temp'),
    max(when(df.air_temperature_qual == lit(1), df.air_temperature)).alias('max_temp'),
    min(when(df.wind_speed_qual == lit(1), df.wind_speed)).alias('min_wind'),
    max(when(df.wind_speed_qual == lit(1), df.wind_speed)).alias('max_wind')
)

pdf = result.toPandas()
pdf
# -

# # 2 Investigate Execution Plans
#
# Now that we have redone the whole analysis, let's try to understand how Spark actually executes these steps. In order to understand the whole aggregation, we start simple, add one step after the other, and look at how the execution plans change.

# ## 2.1 Reading Data
#
# The first step is to read in data. In order to start simple, we only load a single year into a DataFrame called `raw_weather_2003`. We can inspect the execution plan that would create the records of that DataFrame with the `explain()` method.

raw_weather_2003 = spark.read.text(storageLocation + "/2003")
## YOUR CODE HERE

# As we can see, the execution plan actually contains a single operation - reading data from disk. Note two things:
# * The physical execution plan has been created specifically for the `explain()` command. It is not stored in the DataFrame; the DataFrame only contains the basis for a *parsed logical plan*
# * The plan is not executed, only printed to the console
#
# We can also inspect a more detailed execution plan if we pass `True` to the `explain()` method as follows:

# +
## YOUR CODE HERE
# -

# As you can see, the explanation now contains all four steps:
# * Parsed logical execution plan. This directly corresponds to the operations as specified.
# * Analyzed logical plan. This resolves all relations, columns and data types.
# * Optimized logical plan. This plan is already optimized (we'll see some optimizations later).
# * Physical execution plan. This maps all operations and transformations to RDD operations.

# ## 2.2 Adding Columns
#
# Let's see how the execution plan changes if we add a new column.
raw_weather_2003 = spark.read.text(storageLocation + "/2003").withColumn("year", lit(2003))
## YOUR CODE HERE

# ### Remarks
# We see that a `Project` operation was inserted into all execution plans; it is responsible for adding the `year` column.

# ## 2.3 SELECT Operation
#
# Now let's perform an additional `SELECT` operation after adding the year. We do not add all columns yet in order to keep the output small and more readable. We will add more columns later when we really require them.

weather_2003 = raw_weather_2003.select(
    col("year"),
    substring(col("value"),5,6).alias("usaf"),
    substring(col("value"),11,5).alias("wban")
)
## YOUR CODE HERE

# ### Remarks
# Here we see that the original parsed plan and analyzed plan actually contain two `Project` operations. Each of them corresponds to a single transformation (`withColumn` and `select`). But the optimizer merged these operations into a single one, thus simplifying execution.

# ## 2.4 UNION Operation
#
# Just for completeness, let's see what a `UNION` operation does. We required it after loading all years into individual DataFrames.

# +
# Read in all years, store them in a Python list
raw_weather_per_year = [spark.read.text(storageLocation + "/" + str(i)).withColumn("year", lit(i)) for i in range(2003,2015)]

# Union all years together
raw_weather = reduce(lambda l,r: l.union(r), raw_weather_per_year)

# Print execution plan
## YOUR CODE HERE
# -

# ## 2.5 JOIN Operation
#
# The next operation we had to perform was a `JOIN` between the measurements and the station metadata. We will use only a single year instead of the unioned data to keep the output small and thereby increase the readability of the execution plans.

df = ## YOUR CODE HERE

# ### Remarks
# Now a `JOIN` results in an interesting execution plan:
# * Spark inserts filters on the join columns, since an inner JOIN requires non-null values
# * Filtering is actually pushed down before the projection. This reduces the amount of data as early as possible
# * The JOIN operation is performed in two steps:
#   * Load data and broadcast it to all nodes (`BroadcastExchange`)
#   * Perform the join (`BroadcastHashJoin`)
#
# In addition to the *broadcast join*, Spark also supports a different join implementation - more on that later.

# ### Implicit Filtering
#
# Actually, let's have a look at what happens with a left outer join. This should not filter away `NULL` values on the left side:

# +
## YOUR CODE HERE
# -

# ## 2.6 Aggregation
#
# Finally we want to perform an aggregation on the joined data. We need to restart from the measurement extraction, since we did not extract all required columns so far. So we will perform the following steps:
# * Reuse `raw_weather_2003`, which already contains the `year` column
# * Extract all required measurements
# * Join with stations metadata
# * Perform grouped aggregation
#
# Again we will only analyze the temperature, just to keep the execution plans a little bit smaller. This means that some columns are missing, but the basic operations are all the same.
# ### Extract Measurements

weather_2003 = raw_weather_2003.select(
    col("year"),
    substring(col("value"),5,6).alias("usaf"),
    substring(col("value"),11,5).alias("wban"),
    substring(col("value"),16,8).alias("date"),
    substring(col("value"),24,4).alias("time"),
    substring(col("value"),42,5).alias("report_type"),
    substring(col("value"),61,3).alias("wind_direction"),
    substring(col("value"),64,1).alias("wind_direction_qual"),
    substring(col("value"),65,1).alias("wind_observation"),
    (substring(col("value"),66,4).cast("float") / lit(10.0)).alias("wind_speed"),
    substring(col("value"),70,1).alias("wind_speed_qual"),
    (substring(col("value"),88,5).cast("float") / lit(10.0)).alias("air_temperature"),
    substring(col("value"),93,1).alias("air_temperature_qual")
)
## YOUR CODE HERE

# ### Join with Stations Metadata

df = weather_2003.join(stations, (weather_2003.usaf == stations.USAF) & (weather_2003.wban == stations.WBAN))
## YOUR CODE HERE

# ### Perform Grouped Aggregation

# +
## YOUR CODE HERE
# -

# ### Remarks
#
# Again we can see that Spark performs some simple but clever optimizations:
# * Projections only contain the columns required, not all available columns of `df`. The required columns are recursively *pushed up* the transformation chain, from the last operation (grouped aggregation) to the first transformations.
# * The aggregation is performed in three steps:
#   * Partial aggregation (`HashAggregate` with `partial_...` functions)
#   * Shuffle (`Exchange hashpartitioning`)
#   * Final aggregation of partial results (`HashAggregate`)

# ## 2.7 Sorting
#
# The last operation we would like to analyze is sorting. To keep execution plans simple, we just sort the `stations` DataFrame by the station IDs.

# +
## YOUR CODE HERE
# -

# ### Remarks
#
# In order to have a globally sorted result, it is not enough to sort within each Spark partition. This implies that some kind of shuffle operation has to be executed. In contrast to all our previous examples, this time Spark uses a `rangepartitioning`, by which it simply splits up all data according to the range of the sorting key. After that is done, records are sorted independently within each partition. Since the ranges are non-overlapping, this is enough for a global ordering covering all partitions.
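# ## Aside: Broadcast vs Sort-Merge Join
#
# The join plans above used a broadcast hash join because Spark estimated the `stations` side to be
# small enough to ship to every executor. As a small sketch (not part of the original exercise), you
# can disable the automatic broadcast threshold and re-run the join to see Spark fall back to its
# shuffle-based sort-merge join, then compare the two plans.

# +
# Remember the current threshold so we can restore it afterwards
old_threshold = spark.conf.get("spark.sql.autoBroadcastJoinThreshold")

# Disable automatic broadcasting, forcing a sort-merge join
spark.conf.set("spark.sql.autoBroadcastJoinThreshold", -1)
df_smj = weather_2003.join(stations, (weather_2003.usaf == stations.USAF) & (weather_2003.wban == stations.WBAN))
df_smj.explain()  # the physical plan should now show a SortMergeJoin instead of a BroadcastHashJoin

# Restore the previous setting so the rest of the notebook behaves as before
spark.conf.set("spark.sql.autoBroadcastJoinThreshold", old_threshold)
# -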
pyspark-advanced/jupyter-execution/Execution Plan - Skeleton.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Test version of the soil exercise NGEN16: simple diffusion model.
# Course: NGEN16-2019    Code: Python
# Author: <NAME>

# This is a test version of a Jupyter notebook that presents some principles of the diffusion of $CO_2$ in the soil. The exercise also introduces Jupyter Notebook as a working environment. This notebook combines explanatory text with code cells that contain the simulation model. Notebooks can be used for data analysis and, for example, shared between users to work together. They can also be used to publish a data analysis or to present a work report for an assignment (see an introduction to their use in [Shen, 2014](https://www.nature.com/news/interactive-notebooks-sharing-the-code-1.16261)).
# In this presentation we use the programming language Python, which is the default for Jupyter notebooks.
# <br>
# The background of the diffusion model for the production and transport of $CO_2$ is described in the exercise document. This notebook goes stepwise through the simulation and presents the results in graphs. You can use it and, for example, change settings to see what the effect is on the results. As described in the exercise document, the notebook has three sub-sections: first the simulation of soil temperature, then the simulation of $CO_2$ production, and finally the calculation of the flux between the soil layers, the $CO_2$ concentration in each layer and the efflux of $CO_2$ out of the soil.
#
# We start the calculation model by importing some libraries, e.g. for functions such as the sine and for drawing the figures. We also define the figure legend labels and line colors here. As you can see, we will make use of 8 layers in the top soil, including the surface.

# %matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
plt.style.use('classic')

lines=['Surface','Layer1','Layer2','Layer3','Layer4','Layer5','Layer6','Layer7']
lincolor=['black','red', 'blue', 'orange','black','red', 'blue', 'green']

# <p><b> Soil temperature </b></p>
# $CO_2$ production and flux depend on soil temperature. We do not have actual measurements of the temperature for each layer, so we estimate the temperature fluctuations with a sine function around a constant average temperature with a given amplitude. We define a period of 10 days and a timestep of 300 s.
# First we declare a number of variables and the midpoint of each layer in m below the surface (depths below the surface are negative).

TimeStep=300                   #length of time step in seconds
Day=10                         #Number of days
SecDay=86400                   #Number of seconds in a day
TotTime=Day*SecDay             #Total number of seconds over 10 days
TotSteps=int(TotTime/TimeStep) #Number of timesteps over the period, defined as integer
TempAvg=10                     #average temperature over the 10 days
A0=8                           #Pre-set artificial amplitude of the temperature around the average
w=2*np.pi/SecDay               #angular frequency of the daily cycle (rad s-1)
a=1.5E-6                       #thermal diffusivity of the soil (m2 s-1)
zd=np.sqrt(2*a/w)              #damping depth
z=[0.0,-0.05,-0.15,-0.25,-0.35,-0.45,-0.55,-0.65] #Midpoint of each 'soillayer'

# The results will be stored as arrays. Here we already define arrays for the temperature and the produced CO2 (the last variable is for the total production over all layers). Each array is a double array with layer and time step. Time is set to zero.
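# For reference, the temperature model applied in the next cell is a standard damping-depth
# formulation, written here with the variables defined above:
# $T(z,t) = T_{avg} + A_0 \, e^{z/z_d} \sin(\omega t + z/z_d)$, with damping depth
# $z_d = \sqrt{2a/\omega}$, thermal diffusivity $a$ and angular frequency $\omega = 2\pi/86400$ s$^{-1}$.
# Because $z$ is negative below the surface, the amplitude decays and the phase lags with increasing depth.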
t=0.0                       #Time is set to zero
layers= range (8)
Temp=np.zeros((8,TotSteps))
SProd=np.zeros((8,TotSteps))
SProdTot=np.zeros(TotSteps)

# In the next code section the temperature is calculated in a 'for' loop: with counter k going from 0 to the total number of timesteps, the temperature is calculated for each layer (i = layer, k = time step). Compare this with the function in the exercise text:

for k in range(0,TotSteps):    #for k from zero to TotSteps
    for i in layers:
        # the temperature function
        Temp[i,k]=TempAvg+A0*np.exp(z[i]/zd)*np.sin(w*t+z[i]/zd)
    t=t+TimeStep

# The result is plotted in figure 1:

# +
fig1=plt.figure(figsize=(12,8))
ax=plt.axes()
plt.rc('lines', linewidth=2)
for i in layers:
    plt.plot(Temp[i,:], color=lincolor[i], label=lines[i])
plt.legend(fontsize=10)
plt.title("Soil temperature per layer")
plt.ylabel("Temperature (°C)")
plt.xlabel("Time step")
ax.tick_params(axis='both', which='major', direction='out', labelsize=10)
# -

# <p><b>$CO_2$-production</b></p> Check the function for the production of $CO_2$ in the exercise text. In the next code block the variables of the equation are given, as well as the SOM and root content of each layer (defined as arrays).

#Application of CO2 production based on Michaelis-Menten temperature function
kSom=3.85e-6   #Decomposition rate organic matter mg g-1 s-1
kRoots=4.3e-5  #Respiration rate of roots mg g-1 s-1
energy=7.93E4  #Activation energy in J mol-1
r=8.314        #Gas constant J K-1 mol-1
mSOM=[0, 3800, 2600, 1400, 500, 200, 100, 0] #Organic matter content in each layer in mg m-3
mRoots=[ 0, 578, 163, 58, 37, 5, 0, 0]       #Root content in each layer in mg m-3

# In the following block the $CO_2$ production from roots and SOM is calculated per layer for each time step. The Michaelis-Menten temperature part of the function is split into two steps, just to make the equation easier to read. The block after that presents the resulting figure.

# +
for k in range(TotSteps):
    for i in layers:
        # we split the temperature part into sections
        TempFrac1=energy/(r*(Temp[i,k]+273.15))      #Temperature from Celsius to Kelvin
        TempFrac2=((Temp[i,k]+273.15)-283.15)/283.15
        TempFrac=np.exp(TempFrac1*TempFrac2)
        #Combine it into the Michaelis-Menten based equation for CO2 production from respiration and decomposition
        SProd[i,k]=(kRoots*mRoots[i]+kSom*mSOM[i])*TempFrac
        # Sum the production per layer to total production over all layers.
        #Production is per m3, so divided by 10 for a layer of 10 cm
        SProdTot[k]=SProdTot[k]+(SProd[i,k]/10)
# -

# Plot the resulting production per layer in figure 2:

# +
fig2=plt.figure(figsize=(12,8))
ax=plt.axes()
plt.rc('lines', linewidth=2)
for i in layers:
    plt.plot(SProd[i,:], color=lincolor[i], label=lines[i])
plt.legend(fontsize=10)
ax.set_title('CO2 production per layer mg m-3 s-1')
ax.set_ylabel('CO2 production in mg/m3')
ax.set_xlabel('Time step')
ax.tick_params(axis='both', which='major', labelsize=10)
# -

# <p><b>$CO_2$-flux</b></p> The next step is to calculate the $CO_2$ concentration per layer for each timestep. The concentration is the result of the concentration from the previous timestep, the production of $CO_2$, and the net flux into or out of the layer during the current timestep. First we calculate the diffusion coefficient for each layer, depending on temperature, and we define a number of start settings for the $CO_2$ concentration: the starting value at t=0 and a constant concentration at the surface.
# +
#Settings for the CO2 concentration and flux calculation
CO2= np.zeros((8,TotSteps))          #Definition of the CO2 concentration variable
DiffCoeffSoil=np.zeros((8,TotSteps)) #Definition of the diffusion coefficient of CO2 in soil, depending on temperature and tortuosity

# Estimation of diffusion coefficient for each layer and timestep
DCO2Air=1.39e-5    #Diffusion coeff for CO2 in air
AFPorosity=0.1515  #Porosity is set to 15.15%

for k in range(TotSteps):
    #Calculate Diffusion coeff for each layer
    DiffCoeffSoil[0,k]=DCO2Air*(((Temp[0,k]+273.15)/273.15)**1.75)
    for i in range(1,8):
        DiffCoeffSoil[i,k]=0.66*AFPorosity*DCO2Air*(((Temp[i,k]+273.15)/273.15)**1.75)

#Setting ambient concentration at surface to a constant value of 695 mg/m3
for k in range(TotSteps):
    CO2[0,k]=695

# Initial concentration is set to 695 mg/m3 for the first timestep in all layers
for i in layers:
    CO2[i,0]=695
# -

# In the next cell Fick's law for diffusion is applied. First the two average diffusion coefficients between the three mid-points of the adjoining layers (i-1, i and i+1) are estimated, followed by the concentration differences between the layers and the vertical distances. Fick's law then gives the net flux exchanged between the layer and its two adjoining layers (FluxLayer). The deepest layer sits at the lower boundary, so it is calculated separately. To present the net flux into the atmosphere, the flux from the top soil into the air layer is calculated once more.

# +
FluxLayer=np.zeros((8,TotSteps))

for k in range(1,TotSteps):
    for i in range(1,7): # for first soil layer to soil layer 6: the last layer is treated separately due to the boundary
        #First calculation of average diffusion coeff between the layers in this step, note approximation by division of 2
        Dtm1=(DiffCoeffSoil[i,k]+DiffCoeffSoil[i-1,k])/2 #Average of Diff coeff between layer i and layer above i-1
        Dtm2=(DiffCoeffSoil[i,k]+DiffCoeffSoil[i+1,k])/2 #Average of Diff coeff between layer i and layer below i+1
        #Then calculation of concentration differences and thickness of layers (although set constant to 0.1 m)
        C1=CO2[i-1,k-1]-CO2[i,k-1] #difference in concentration layer i-1 and i
        C2=CO2[i,k-1]-CO2[i+1,k-1] #difference in concentration layer i and i+1
        Z1=z[i-1]-z[i]             # difference in depth between layers i-1 and i
        Z2=z[i]-z[i+1]             # difference in depth between layers i and i+1
        #Estimation of flux between layers from concentration differences of previous timestep
        #Net flux to/from the layer is the sum of the two fluxes from top and bottom of layer
        #Flux from Fick's law: diffusion coeff times ratio dc/dz; minus sign is for correct direction of flux
        FluxLayer[i,k]=-Dtm1*(C1/Z1)+Dtm2*(C2/Z2)

    # for lowest soillayer
    Dtm1=(DiffCoeffSoil[7,k]+DiffCoeffSoil[6,k])/2
    FluxLayer[7,k]=-Dtm1*((CO2[6,k-1]-CO2[7,k-1])/(z[6]-z[7]))

    for i in range(1,8):
        # for all soil layers the concentration for actual timestep with assumption that
        # dz is set 10 cm just for now
        CO2[i,k]=CO2[i,k-1]-(FluxLayer[i,k]/0.1)*TimeStep + (SProd[i,k]*TimeStep)

    # estimation of net efflux from the top soil into air-layer
    Dtm1=(DiffCoeffSoil[0,k]+DiffCoeffSoil[1,k])/2
    FluxLayer[1,k]=-Dtm1*((CO2[0,k-1]-CO2[1,k-1])/(z[0]-z[1]))
# -

# Results are presented in figures 3 and 4: the $CO_2$ concentration per layer and the net flux from the soil.
# +
fig3=plt.figure(figsize=(12,8))
ax=plt.axes()
for i in layers:
    plt.plot(CO2[i,:], color=lincolor[i], label=lines[i])
plt.legend(fontsize=10)
ax.set_title("CO2 concentration per layer mg m-3")
ax.set_ylabel('CO2 concentration in mg/m3')
ax.set_xlabel('Time step')
ax.tick_params(axis='both', which='major', labelsize=10)
# -

# For comparison, the total production over all layers is shown together with the net flux from the soil.

fig4=plt.figure(figsize=(12,8))
ax=plt.axes()
plt.plot(FluxLayer[1,:], color='Black', label='Surface Flux')
plt.plot(SProdTot[:], color='Red', label='Total production')
plt.legend()
ax.set_title("CO2 surface flux")
ax.set_ylabel("CO2 flux or production mg m-2 s-1")
ax.set_xlabel("Time step")
ax.tick_params(axis='both', which='major', labelsize=10)
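# As a quick consistency check on the model (a sketch using the arrays computed above), the cumulative
# surface efflux can be compared with the cumulative production over the whole period. The two should
# track each other reasonably well once the concentration profile has adjusted; they are not exactly
# equal because part of the produced $CO_2$ changes the storage in the soil profile.

cum_efflux = np.cumsum(FluxLayer[1,:]) * TimeStep  # accumulated surface efflux in mg m-2
cum_production = np.cumsum(SProdTot) * TimeStep    # accumulated production in mg m-2
print(cum_efflux[-1], cum_production[-1])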
education/MSc_BSc/soil_exercise_ngen16/TestSoilT.ipynb