# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import panel as pn import numpy as np import holoviews as hv pn.extension() # - # For a large variety of use cases we do not need complete control over the exact layout of each individual component on the page, as could be achieved with a [custom template](../../user_guide/Templates.ipynb), we just want to achieve a more polished look and feel. For these cases Panel ships with a number of default templates, which are defined by declaring four main content areas on the page, which can be populated as desired: # # * **`header`**: The header area of the HTML page # * **`sidebar`**: A collapsible sidebar # * **`main`**: The main area of the application # * **`modal`**: A modal area which can be opened and closed from Python # # These four areas behave very similarly to other Panel layout components and have list-like semantics. The `ReactTemplate` in particular however is an exception to this rule as the `main` area behaves like panel `GridSpec` object. Unlike regular layout components however, the contents of the areas is fixed once rendered. If you need a dynamic layout you should therefore insert a regular Panel layout component (e.g. a `Column` or `Row`) and modify it in place once added to one of the content areas. # # Templates can allow for us to quickly and easily create web apps for displaying our data. Panel comes with a default Template, and includes multiple Templates that extend the default which add some customization for a better display. # # #### Parameters: # # In addition to the four different areas we can populate the `ReactTemplate` declares a few variables to configure the layout: # # * **`cols`** (dict): Number of columns in the grid for different display sizes (`default={'lg': 12, 'md': 10, 'sm': 6, 'xs': 4, 'xxs': 2}`) # * **`breakpoints`** (dict): Sizes in pixels for various layouts (`default={'lg': 1200, 'md': 996, 'sm': 768, 'xs': 480, 'xxs': 0}`) # * **`row_height`** (int, default=150): Height per row in the grid # * **`dimensions`** (dict): Minimum/Maximum sizes of cells in grid units (`default={'minW': 0, 'maxW': 'Infinity', 'minH': 0, 'maxH': 'Infinity'}`) # * **`prevent_collision`** (bool, default=Flase): Prevent collisions between grid items. # # These parameters control the responsive resizing in different layouts. The `ReactTemplate` also exposes the same parameters as other templates: # # * **`busy_indicator`** (BooleanIndicator): Visual indicator of application busy state. # * **`header_background`** (str): Optional header background color override. # * **`header_color`** (str): Optional header text color override. # * **`logo`** (str): URI of logo to add to the header (if local file, logo is base64 encoded as URI). # * **`site`** (str): Name of the site. Will be shown in the header. Default is '', i.e. not shown. # * **`site_url`** (str): Url of the site and logo. Default is "/". # * **`title`** (str): A title to show in the header. # * **`theme`** (Theme): A Theme class (available in `panel.template.theme`) # # ________ # In this case we are using the `ReactTemplate`, built on [react-grid-layout](https://github.com/STRML/react-grid-layout), which provides a responsive, resizable, draggable grid layout. 
Here is an example of how you can set up a display using this template: # + react = pn.template.ReactTemplate(title='React Template') pn.config.sizing_mode = 'stretch_both' xs = np.linspace(0, np.pi) freq = pn.widgets.FloatSlider(name="Frequency", start=0, end=10, value=2) phase = pn.widgets.FloatSlider(name="Phase", start=0, end=np.pi) @pn.depends(freq=freq, phase=phase) def sine(freq, phase): return hv.Curve((xs, np.sin(xs*freq+phase))).opts( responsive=True, min_height=400) @pn.depends(freq=freq, phase=phase) def cosine(freq, phase): return hv.Curve((xs, np.cos(xs*freq+phase))).opts( responsive=True, min_height=400) react.sidebar.append(freq) react.sidebar.append(phase) # Unlike other templates the `ReactTemplate.main` area acts like a GridSpec react.main[:4, :6] = pn.Card(hv.DynamicMap(sine), title='Sine') react.main[:4, 6:] = pn.Card(hv.DynamicMap(cosine), title='Cosine') react.servable(); # - # With the `row_height=150` this will result in the two `Card` objects filling 4 rows each totalling 600 pixels and each taking up 6 columns, which resize responsively to fill the screen and reflow when working on a smaller screen. When hovering of the top-left corner of each card a draggable handle will allow dragging the components around while a resize handle will show up at the bottom-right corner. # <h3><b>ReactTemplate with DefaultTheme</b></h3> # <img src="../../assets/React.png" style="margin-left: auto; margin-right: auto; display: block;"></img> # </br> # <h3><b>ReactTemplate with DarkTheme</b></h3> # <img src="../../assets/ReactDark.png" style="margin-left: auto; margin-right: auto; display: block;"></img> # The app can be displayed within the notebook by using `.servable()`, or rendered in another tab by replacing it with `.show()`. # # Themes can be added using the optional keyword argument `theme`. Each template comes with a DarkTheme and a DefaultTheme, which can be set `ReactTemplate(theme=DarkTheme)`. If no theme is set, then DefaultTheme will be applied. # # It should be noted that Templates may not render correctly in a notebook, and for the best performance the should ideally be deployed to a server.
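# As a small, hedged illustration (not part of the original example), the dark variant described above only requires passing the `theme` keyword; `DarkTheme` is assumed to be importable from `panel.template.theme`, as stated in the parameter list.

# +
from panel.template.theme import DarkTheme  # import path assumed from the parameter description above

dark_react = pn.template.ReactTemplate(title='React Template', theme=DarkTheme)
dark_react.main[:4, :6] = pn.Card(hv.DynamicMap(sine), title='Sine')
dark_react.main[:4, 6:] = pn.Card(hv.DynamicMap(cosine), title='Cosine')
dark_react.servable();
# -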
examples/reference/templates/React.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Convolutional Neural Networks: Application # # Welcome to Course 4's second assignment! In this notebook, you will: # # - Implement helper functions that you will use when implementing a TensorFlow model # - Implement a fully functioning ConvNet using TensorFlow # # **After this assignment you will be able to:** # # - Build and train a ConvNet in TensorFlow for a classification problem # # We assume here that you are already familiar with TensorFlow. If you are not, please refer the *TensorFlow Tutorial* of the third week of Course 2 ("*Improving deep neural networks*"). # ## 1.0 - TensorFlow model # # In the previous assignment, you built helper functions using numpy to understand the mechanics behind convolutional neural networks. Most practical applications of deep learning today are built using programming frameworks, which have many built-in functions you can simply call. # # As usual, we will start by loading in the packages. # + import math import numpy as np import h5py import matplotlib.pyplot as plt import scipy from PIL import Image from scipy import ndimage import tensorflow as tf from tensorflow.python.framework import ops from cnn_utils import * # %matplotlib inline np.random.seed(1) # - # Run the next cell to load the "SIGNS" dataset you are going to use. # Loading the data (signs) X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset() # As a reminder, the SIGNS dataset is a collection of 6 signs representing numbers from 0 to 5. # # <img src="images/SIGNS.png" style="width:800px;height:300px;"> # # The next cell will show you an example of a labelled image in the dataset. Feel free to change the value of `index` below and re-run to see different examples. # Example of a picture index = 6 plt.imshow(X_train_orig[index]) print ("y = " + str(np.squeeze(Y_train_orig[:, index]))) # In Course 2, you had built a fully-connected network for this dataset. But since this is an image dataset, it is more natural to apply a ConvNet to it. # # To get started, let's examine the shapes of your data. X_train = X_train_orig/255. X_test = X_test_orig/255. Y_train = convert_to_one_hot(Y_train_orig, 6).T Y_test = convert_to_one_hot(Y_test_orig, 6).T print ("number of training examples = " + str(X_train.shape[0])) print ("number of test examples = " + str(X_test.shape[0])) print ("X_train shape: " + str(X_train.shape)) print ("Y_train shape: " + str(Y_train.shape)) print ("X_test shape: " + str(X_test.shape)) print ("Y_test shape: " + str(Y_test.shape)) conv_layers = {} # ### 1.1 - Create placeholders # # TensorFlow requires that you create placeholders for the input data that will be fed into the model when running the session. # # **Exercise**: Implement the function below to create placeholders for the input image X and the output Y. You should not define the number of training examples for the moment. To do so, you could use "None" as the batch size, it will give you the flexibility to choose it later. Hence X should be of dimension **[None, n_H0, n_W0, n_C0]** and Y should be of dimension **[None, n_y]**. [Hint](https://www.tensorflow.org/api_docs/python/tf/placeholder). # + # GRADED FUNCTION: create_placeholders def create_placeholders(n_H0, n_W0, n_C0, n_y): """ Creates the placeholders for the tensorflow session. 
Arguments: n_H0 -- scalar, height of an input image n_W0 -- scalar, width of an input image n_C0 -- scalar, number of channels of the input n_y -- scalar, number of classes Returns: X -- placeholder for the data input, of shape [None, n_H0, n_W0, n_C0] and dtype "float" Y -- placeholder for the input labels, of shape [None, n_y] and dtype "float" """ ### START CODE HERE ### (≈2 lines) X = tf.placeholder(tf.float32,[None, n_H0, n_W0, n_C0]) Y = tf.placeholder(tf.float32,[None, n_y]) ### END CODE HERE ### return X, Y # - X, Y = create_placeholders(64, 64, 3, 6) print ("X = " + str(X)) print ("Y = " + str(Y)) # **Expected Output** # # <table> # <tr> # <td> # X = Tensor("Placeholder:0", shape=(?, 64, 64, 3), dtype=float32) # # </td> # </tr> # <tr> # <td> # Y = Tensor("Placeholder_1:0", shape=(?, 6), dtype=float32) # # </td> # </tr> # </table> # ### 1.2 - Initialize parameters # # You will initialize weights/filters $W1$ and $W2$ using `tf.contrib.layers.xavier_initializer(seed = 0)`. You don't need to worry about bias variables as you will soon see that TensorFlow functions take care of the bias. Note also that you will only initialize the weights/filters for the conv2d functions. TensorFlow initializes the layers for the fully connected part automatically. We will talk more about that later in this assignment. # # **Exercise:** Implement initialize_parameters(). The dimensions for each group of filters are provided below. Reminder - to initialize a parameter $W$ of shape [1,2,3,4] in Tensorflow, use: # ```python # W = tf.get_variable("W", [1,2,3,4], initializer = ...) # ``` # [More Info](https://www.tensorflow.org/api_docs/python/tf/get_variable). # + # GRADED FUNCTION: initialize_parameters def initialize_parameters(): """ Initializes weight parameters to build a neural network with tensorflow. The shapes are: W1 : [4, 4, 3, 8] W2 : [2, 2, 8, 16] Returns: parameters -- a dictionary of tensors containing W1, W2 """ tf.set_random_seed(1) # so that your "random" numbers match ours ### START CODE HERE ### (approx. 2 lines of code) W1 = tf.get_variable("W1",[4, 4, 3, 8],initializer=tf.contrib.layers.xavier_initializer(seed=0)) W2 = tf.get_variable("W2",[2, 2, 8, 16],initializer=tf.contrib.layers.xavier_initializer(seed=0)) ### END CODE HERE ### parameters = {"W1": W1, "W2": W2} return parameters # - tf.reset_default_graph() with tf.Session() as sess_test: parameters = initialize_parameters() init = tf.global_variables_initializer() sess_test.run(init) print("W1 = " + str(parameters["W1"].eval()[1,1,1])) print("W2 = " + str(parameters["W2"].eval()[1,1,1])) # ** Expected Output:** # # <table> # # <tr> # <td> # W1 = # </td> # <td> # [ 0.00131723 0.14176141 -0.04434952 0.09197326 0.14984085 -0.03514394 <br> # -0.06847463 0.05245192] # </td> # </tr> # # <tr> # <td> # W2 = # </td> # <td> # [-0.08566415 0.17750949 0.11974221 0.16773748 -0.0830943 -0.08058 <br> # -0.00577033 -0.14643836 0.24162132 -0.05857408 -0.19055021 0.1345228 <br> # -0.22779644 -0.1601823 -0.16117483 -0.10286498] # </td> # </tr> # # </table> # ### 1.2 - Forward propagation # # In TensorFlow, there are built-in functions that carry out the convolution steps for you. # # - **tf.nn.conv2d(X,W1, strides = [1,s,s,1], padding = 'SAME'):** given an input $X$ and a group of filters $W1$, this function convolves $W1$'s filters on X. The third input ([1,f,f,1]) represents the strides for each dimension of the input (m, n_H_prev, n_W_prev, n_C_prev). 
You can read the full documentation [here](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d) # # - **tf.nn.max_pool(A, ksize = [1,f,f,1], strides = [1,s,s,1], padding = 'SAME'):** given an input A, this function uses a window of size (f, f) and strides of size (s, s) to carry out max pooling over each window. You can read the full documentation [here](https://www.tensorflow.org/api_docs/python/tf/nn/max_pool) # # - **tf.nn.relu(Z1):** computes the elementwise ReLU of Z1 (which can be any shape). You can read the full documentation [here.](https://www.tensorflow.org/api_docs/python/tf/nn/relu) # # - **tf.contrib.layers.flatten(P)**: given an input P, this function flattens each example into a 1D vector it while maintaining the batch-size. It returns a flattened tensor with shape [batch_size, k]. You can read the full documentation [here.](https://www.tensorflow.org/api_docs/python/tf/contrib/layers/flatten) # # - **tf.contrib.layers.fully_connected(F, num_outputs):** given a the flattened input F, it returns the output computed using a fully connected layer. You can read the full documentation [here.](https://www.tensorflow.org/api_docs/python/tf/contrib/layers/fully_connected) # # In the last function above (`tf.contrib.layers.fully_connected`), the fully connected layer automatically initializes weights in the graph and keeps on training them as you train the model. Hence, you did not need to initialize those weights when initializing the parameters. # # # **Exercise**: # # Implement the `forward_propagation` function below to build the following model: `CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED`. You should use the functions above. # # In detail, we will use the following parameters for all the steps: # - Conv2D: stride 1, padding is "SAME" # - ReLU # - Max pool: Use an 8 by 8 filter size and an 8 by 8 stride, padding is "SAME" # - Conv2D: stride 1, padding is "SAME" # - ReLU # - Max pool: Use a 4 by 4 filter size and a 4 by 4 stride, padding is "SAME" # - Flatten the previous output. # - FULLYCONNECTED (FC) layer: Apply a fully connected layer without an non-linear activation function. Do not call the softmax here. This will result in 6 neurons in the output layer, which then get passed later to a softmax. In TensorFlow, the softmax and cost function are lumped together into a single function, which you'll call in a different function when computing the cost. 
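# Before filling in the graded cell, it can help to trace how the spatial dimensions evolve through this stack. The short sketch below (not part of the assignment) applies the 'SAME' padding rule, output size = ceil(input size / stride), to the 64x64x3 SIGNS images and the filter shapes from `initialize_parameters()`.

# +
import math

def same_out(n, stride):
    # output size of a 'SAME'-padded conv or pool layer
    return int(math.ceil(float(n) / stride))

h = w = 64; c = 3
h, w, c = same_out(h, 1), same_out(w, 1), 8      # CONV2D with W1 [4,4,3,8], stride 1 -> 64x64x8
h, w = same_out(h, 8), same_out(w, 8)            # MAXPOOL 8x8, stride 8              -> 8x8x8
h, w, c = same_out(h, 1), same_out(w, 1), 16     # CONV2D with W2 [2,2,8,16], stride 1 -> 8x8x16
h, w = same_out(h, 4), same_out(w, 4)            # MAXPOOL 4x4, stride 4               -> 2x2x16
print("flattened size per example:", h * w * c)  # 2 * 2 * 16 = 64 features into the FC layer
# -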
# + # GRADED FUNCTION: forward_propagation def forward_propagation(X, parameters): """ Implements the forward propagation for the model: CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED Arguments: X -- input dataset placeholder, of shape (input size, number of examples) parameters -- python dictionary containing your parameters "W1", "W2" the shapes are given in initialize_parameters Returns: Z3 -- the output of the last LINEAR unit """ # Retrieve the parameters from the dictionary "parameters" W1 = parameters['W1'] W2 = parameters['W2'] ### START CODE HERE ### # CONV2D: stride of 1, padding 'SAME' Z1 = tf.nn.conv2d(X,W1,strides=[1,1,1,1],padding="SAME") # RELU A1 = tf.nn.relu(Z1) # MAXPOOL: window 8x8, sride 8, padding 'SAME' P1 = tf.nn.max_pool(A1,ksize=[1,8,8,1],strides=[1,8,8,1],padding="SAME") # CONV2D: filters W2, stride 1, padding 'SAME' Z2 = tf.nn.conv2d(P1,W2,strides=[1,1,1,1],padding="SAME") # RELU A2 = tf.nn.relu(Z2) # MAXPOOL: window 4x4, stride 4, padding 'SAME' P2 = tf.nn.max_pool(A2,ksize=[1,4,4,1],strides=[1,4,4,1],padding="SAME") # FLATTEN P2 = tf.contrib.layers.flatten(P2) # FULLY-CONNECTED without non-linear activation function (not not call softmax). # 6 neurons in output layer. Hint: one of the arguments should be "activation_fn=None" Z3 = tf.contrib.layers.fully_connected(P2,num_outputs=6,activation_fn=None) ### END CODE HERE ### return Z3 # + tf.reset_default_graph() with tf.Session() as sess: np.random.seed(1) X, Y = create_placeholders(64, 64, 3, 6) parameters = initialize_parameters() Z3 = forward_propagation(X, parameters) init = tf.global_variables_initializer() sess.run(init) a = sess.run(Z3, {X: np.random.randn(2,64,64,3), Y: np.random.randn(2,6)}) print("Z3 = " + str(a)) # - # **Expected Output**: # # <table> # <td> # Z3 = # </td> # <td> # [[-0.44670227 -1.57208765 -1.53049231 -2.31013036 -1.29104376 0.46852064] <br> # [-0.17601591 -1.57972014 -1.4737016 -2.61672091 -1.00810647 0.5747785 ]] # </td> # </table> # ### 1.3 - Compute cost # # Implement the compute cost function below. You might find these two functions helpful: # # - **tf.nn.softmax_cross_entropy_with_logits(logits = Z3, labels = Y):** computes the softmax entropy loss. This function both computes the softmax activation function as well as the resulting loss. You can check the full documentation [here.](https://www.tensorflow.org/api_docs/python/tf/nn/softmax_cross_entropy_with_logits) # - **tf.reduce_mean:** computes the mean of elements across dimensions of a tensor. Use this to sum the losses over all the examples to get the overall cost. You can check the full documentation [here.](https://www.tensorflow.org/api_docs/python/tf/reduce_mean) # # ** Exercise**: Compute the cost below using the function above. 
# + # GRADED FUNCTION: compute_cost def compute_cost(Z3, Y): """ Computes the cost Arguments: Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (6, number of examples) Y -- "true" labels vector placeholder, same shape as Z3 Returns: cost - Tensor of the cost function """ ### START CODE HERE ### (1 line of code) cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = Z3, labels = Y)) ### END CODE HERE ### return cost # + tf.reset_default_graph() with tf.Session() as sess: np.random.seed(1) X, Y = create_placeholders(64, 64, 3, 6) parameters = initialize_parameters() Z3 = forward_propagation(X, parameters) cost = compute_cost(Z3, Y) init = tf.global_variables_initializer() sess.run(init) a = sess.run(cost, {X: np.random.randn(4,64,64,3), Y: np.random.randn(4,6)}) print("cost = " + str(a)) # - # **Expected Output**: # # <table> # <td> # cost = # </td> # # <td> # 2.91034 # </td> # </table> # ## 1.4 Model # # Finally you will merge the helper functions you implemented above to build a model. You will train it on the SIGNS dataset. # # You have implemented `random_mini_batches()` in the Optimization programming assignment of course 2. Remember that this function returns a list of mini-batches. # # **Exercise**: Complete the function below. # # The model below should: # # - create placeholders # - initialize parameters # - forward propagate # - compute the cost # - create an optimizer # # Finally you will create a session and run a for loop for num_epochs, get the mini-batches, and then for each mini-batch you will optimize the function. [Hint for initializing the variables](https://www.tensorflow.org/api_docs/python/tf/global_variables_initializer) # + # GRADED FUNCTION: model def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.009, num_epochs = 100, minibatch_size = 64, print_cost = True): """ Implements a three-layer ConvNet in Tensorflow: CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED Arguments: X_train -- training set, of shape (None, 64, 64, 3) Y_train -- test set, of shape (None, n_y = 6) X_test -- training set, of shape (None, 64, 64, 3) Y_test -- test set, of shape (None, n_y = 6) learning_rate -- learning rate of the optimization num_epochs -- number of epochs of the optimization loop minibatch_size -- size of a minibatch print_cost -- True to print the cost every 100 epochs Returns: train_accuracy -- real number, accuracy on the train set (X_train) test_accuracy -- real number, testing accuracy on the test set (X_test) parameters -- parameters learnt by the model. They can then be used to predict. 
""" ops.reset_default_graph() # to be able to rerun the model without overwriting tf variables tf.set_random_seed(1) # to keep results consistent (tensorflow seed) seed = 3 # to keep results consistent (numpy seed) (m, n_H0, n_W0, n_C0) = X_train.shape n_y = Y_train.shape[1] costs = [] # To keep track of the cost # Create Placeholders of the correct shape ### START CODE HERE ### (1 line) X, Y = create_placeholders(n_H0, n_W0, n_C0, n_y) ### END CODE HERE ### # Initialize parameters ### START CODE HERE ### (1 line) parameters = initialize_parameters() ### END CODE HERE ### # Forward propagation: Build the forward propagation in the tensorflow graph ### START CODE HERE ### (1 line) Z3 = forward_propagation(X, parameters) ### END CODE HERE ### # Cost function: Add cost function to tensorflow graph ### START CODE HERE ### (1 line) cost = compute_cost(Z3, Y) ### END CODE HERE ### # Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer that minimizes the cost. ### START CODE HERE ### (1 line) optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost) ### END CODE HERE ### # Initialize all the variables globally init = tf.global_variables_initializer() # Start the session to compute the tensorflow graph with tf.Session() as sess: # Run the initialization sess.run(init) # Do the training loop for epoch in range(num_epochs): minibatch_cost = 0. num_minibatches = int(m / minibatch_size) # number of minibatches of size minibatch_size in the train set seed = seed + 1 minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed) for minibatch in minibatches: # Select a minibatch (minibatch_X, minibatch_Y) = minibatch # IMPORTANT: The line that runs the graph on a minibatch. # Run the session to execute the optimizer and the cost, the feedict should contain a minibatch for (X,Y). ### START CODE HERE ### (1 line) _ , temp_cost = sess.run([optimizer,cost],feed_dict={X:minibatch_X,Y:minibatch_Y}) ### END CODE HERE ### minibatch_cost += temp_cost / num_minibatches # Print the cost every epoch if print_cost == True and epoch % 5 == 0: print ("Cost after epoch %i: %f" % (epoch, minibatch_cost)) if print_cost == True and epoch % 1 == 0: costs.append(minibatch_cost) # plot the cost plt.plot(np.squeeze(costs)) plt.ylabel('cost') plt.xlabel('iterations (per tens)') plt.title("Learning rate =" + str(learning_rate)) plt.show() # Calculate the correct predictions predict_op = tf.argmax(Z3, 1) correct_prediction = tf.equal(predict_op, tf.argmax(Y, 1)) # Calculate accuracy on the test set accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float")) print(accuracy) train_accuracy = accuracy.eval({X: X_train, Y: Y_train}) test_accuracy = accuracy.eval({X: X_test, Y: Y_test}) print("Train Accuracy:", train_accuracy) print("Test Accuracy:", test_accuracy) return train_accuracy, test_accuracy, parameters # - # Run the following cell to train your model for 100 epochs. Check if your cost after epoch 0 and 5 matches our output. If not, stop the cell and go back to your code! _, _, parameters = model(X_train, Y_train, X_test, Y_test) # **Expected output**: although it may not match perfectly, your expected output should be close to ours and your cost value should decrease. 
# # <table> # <tr> # <td> # **Cost after epoch 0 =** # </td> # # <td> # 1.917929 # </td> # </tr> # <tr> # <td> # **Cost after epoch 5 =** # </td> # # <td> # 1.506757 # </td> # </tr> # <tr> # <td> # **Train Accuracy =** # </td> # # <td> # 0.940741 # </td> # </tr> # # <tr> # <td> # **Test Accuracy =** # </td> # # <td> # 0.783333 # </td> # </tr> # </table> # Congratulations! You have finised the assignment and built a model that recognizes SIGN language with almost 80% accuracy on the test set. If you wish, feel free to play around with this dataset further. You can actually improve its accuracy by spending more time tuning the hyperparameters, or using regularization (as this model clearly has a high variance). # # Once again, here's a thumbs up for your work! fname = "images/thumbs_up.jpg" image = np.array(ndimage.imread(fname, flatten=False)) my_image = scipy.misc.imresize(image, size=(64,64)) plt.imshow(my_image)
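# A small practical note: recent SciPy releases have removed `scipy.ndimage.imread` and `scipy.misc.imresize`, so the cell above may fail on newer environments. A rough equivalent using PIL (already imported at the top of this notebook), assuming the same file path, could look like this:

# +
from PIL import Image

image = Image.open("images/thumbs_up.jpg")
my_image = np.array(image.resize((64, 64)))
plt.imshow(my_image)
# -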
Convolutional Neural Networks/Week 1/Convolution+model+-+Application+-+v1.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # Непараметрические криетрии # Критерий | Одновыборочный | Двухвыборочный | Двухвыборочный (связанные выборки) # ------------- | -------------| # **Знаков** | $\times$ | | $\times$ # **Ранговый** | $\times$ | $\times$ | $\times$ # **Перестановочный** | $\times$ | $\times$ | $\times$ # ## Mirrors as potential environmental enrichment for individually housed laboratory mice # (Sherwin, 2004): 16 лабораторных мышей были помещены в двухкомнатные клетки, в одной из комнат висело зеркало. С целью установить, есть ли у мышей какие-то предпочтения насчет зеркал, измерялась доля времени, которое каждая мышь проводила в каждой из своих двух клеток. # + import numpy as np import pandas as pd import itertools from scipy import stats from statsmodels.stats.descriptivestats import sign_test from statsmodels.stats.weightstats import zconfint # - # %pylab inline # ### Загрузка данных mouses_data = pd.read_csv('mirror_mouses.txt', header = None) mouses_data.columns = ['proportion_of_time'] mouses_data mouses_data.describe() pylab.hist(mouses_data.proportion_of_time) pylab.show() # ## Одновыборочные критерии print '95%% confidence interval for the median time: [%f, %f]' % zconfint(mouses_data) # ### Критерий знаков # $H_0\colon$ медиана доли времени, проведенного в клетке с зеркалом, равна 0.5 # # $H_1\colon$ медиана доли времени, проведенного в клетке с зеркалом, не равна 0.5 print "M: %d, p-value: %f" % sign_test(mouses_data, 0.5) # ### Критерий знаковых рангов Вилкоксона m0 = 0.5 stats.wilcoxon(mouses_data.proportion_of_time - m0) # ### Перестановочный критерий # $H_0\colon$ среднее равно 0.5 # # $H_1\colon$ среднее не равно 0.5 def permutation_t_stat_1sample(sample, mean): t_stat = sum(map(lambda x: x - mean, sample)) return t_stat permutation_t_stat_1sample(mouses_data.proportion_of_time, 0.5) def permutation_zero_distr_1sample(sample, mean, max_permutations = None): centered_sample = map(lambda x: x - mean, sample) if max_permutations: signs_array = set([tuple(x) for x in 2 * np.random.randint(2, size = (max_permutations, len(sample))) - 1 ]) else: signs_array = itertools.product([-1, 1], repeat = len(sample)) distr = [sum(centered_sample * np.array(signs)) for signs in signs_array] return distr pylab.hist(permutation_zero_distr_1sample(mouses_data.proportion_of_time, 0.5), bins = 15) pylab.show() def permutation_test(sample, mean, max_permutations = None, alternative = 'two-sided'): if alternative not in ('two-sided', 'less', 'greater'): raise ValueError("alternative not recognized\n" "should be 'two-sided', 'less' or 'greater'") t_stat = permutation_t_stat_1sample(sample, mean) zero_distr = permutation_zero_distr_1sample(sample, mean, max_permutations) if alternative == 'two-sided': return sum([1. if abs(x) >= abs(t_stat) else 0. for x in zero_distr]) / len(zero_distr) if alternative == 'less': return sum([1. if x <= t_stat else 0. for x in zero_distr]) / len(zero_distr) if alternative == 'greater': return sum([1. if x >= t_stat else 0. for x in zero_distr]) / len(zero_distr) print "p-value: %f" % permutation_test(mouses_data.proportion_of_time, 0.5) print "p-value: %f" % permutation_test(mouses_data.proportion_of_time, 0.5, 10000)
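# For reference, a hand-rolled version of the sign test used above (a sketch, assuming the same `mouses_data` frame): drop observations equal to $m_0$, count how many fall above it, and compare that count with a Binomial($n$, 0.5) null distribution; the resulting two-sided p-value should agree with the `sign_test` output.

# +
m0 = 0.5
values = mouses_data.proportion_of_time.values
n_plus = np.sum(values > m0)
n_minus = np.sum(values < m0)
n_nonzero = n_plus + n_minus                          # ties with m0 are discarded
p_value = stats.binom_test(n_plus, n_nonzero, 0.5)    # two-sided binomial test
print("greater: %d, less: %d, p-value: %f" % (n_plus, n_minus, p_value))
# -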
statistics/Одновыборочные непараметрические критерии stat.non_parametric_tests_1sample.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [default] # language: python # name: python3 # --- # # The Multivariate Gaussian distribution # # The density of a multivariate Gaussian with mean vector $\mu$ and covariance matrix $\Sigma$ is given as # # \begin{align} # \mathcal{N}(x; \mu, \Sigma) &= |2\pi \Sigma|^{-1/2} \exp\left( -\frac{1}{2} (x-\mu)^\top \Sigma^{-1} (x-\mu) \right) \\ # & = \exp\left(-\frac{1}{2} x^\top \Sigma^{-1} x + \mu^\top \Sigma^{-1} x - \frac{1}{2} \mu^\top \Sigma^{-1} \mu -\frac{1}{2}\log \det(2\pi \Sigma) \right) \\ # \end{align} # # Here, $|X|$ denotes the determinant of a square matrix. # # $\newcommand{\trace}{\mathop{Tr}}$ # # \begin{align} # {\cal N}(s; \mu, P) & = |2\pi P|^{-1/2} \exp\left(-\frac{1}2 (s-\mu)^\top P^{-1} (s-\mu) \right) # \\ # & = \exp\left( # -\frac{1}{2}s^\top{P^{-1}}s + \mu^\top P^{-1}s { -\frac{1}{2}\mu^\top{P^{-1}\mu -\frac12|2\pi P|}} # \right) \\ # \log {\cal N}(s; \mu, P) & = -\frac{1}{2}s^\top{P^{-1}}s + \mu^\top P^{-1}s + \text{ const} \\ # & = -\frac{1}{2}\trace {P^{-1}} s s^\top + \mu^\top P^{-1}s + \text{ const} \\ # \end{align} # # ## Special Cases # # To gain the intuition, we take a look to a few special cases # ### Bivariate Gaussian # # #### Example 1: Identity covariance matrix # # $ # x = \left(\begin{array}{c} x_1 \\ x_2 \end{array} \right) # $ # # $ # \mu = \left(\begin{array}{c} 0 \\ 0 \end{array} \right) # $ # # $ # \Sigma = \left(\begin{array}{cc} 1& 0 \\ 0 & 1 \end{array} \right) = I_2 # $ # # \begin{align} # \mathcal{N}(x; \mu, \Sigma) &= |2\pi I_{2}|^{-1/2} \exp\left( -\frac{1}{2} x^\top x \right) # = (2\pi)^{-1} \exp\left( -\frac{1}{2} \left( x_1^2 + x_2^2\right) \right) = (2\pi)^{-1/2} \exp\left( -\frac{1}{2} x_1^2 \right)(2\pi)^{-1/2} \exp\left( -\frac{1}{2} x_2^2 \right)\\ # & = \mathcal{N}(x; 0, 1) \mathcal{N}(x; 0, 1) # \end{align} # # #### Example 2: Diagonal covariance # $\newcommand{\diag}{\text{diag}}$ # # $ # x = \left(\begin{array}{c} x_1 \\ x_2 \end{array} \right) # $ # # $ # \mu = \left(\begin{array}{c} \mu_1 \\ \mu_2 \end{array} \right) # $ # # $ # \Sigma = \left(\begin{array}{cc} s_1 & 0 \\ 0 & s_2 \end{array} \right) = \diag(s_1, s_2) # $ # # \begin{eqnarray} # \mathcal{N}(x; \mu, \Sigma) &=& \left|2\pi \left(\begin{array}{cc} s_1 & 0 \\ 0 & s_2 \end{array} \right)\right|^{-1/2} \exp\left( -\frac{1}{2} \left(\begin{array}{c} x_1 - \mu_1 \\ x_2-\mu_2 \end{array} \right)^\top \left(\begin{array}{cc} 1/s_1 & 0 \\ 0 & 1/s_2 \end{array} \right) \left(\begin{array}{c} x_1 - \mu_1 \\ x_2-\mu_2 \end{array} \right) \right) \\ # &=& ((2\pi)^2 s_1 s_2 )^{-1/2} \exp\left( -\frac{1}{2} \left( \frac{(x_1-\mu_1)^2}{s_1} + \frac{(x_2-\mu_2)^2}{s_2}\right) \right) \\ # & = &\mathcal{N}(x; \mu_1, s_1) \mathcal{N}(x; \mu_2, s_2) # \end{eqnarray} # # #### Example 3: # $ # x = \left(\begin{array}{c} x_1 \\ x_2 \end{array} \right) # $ # # $ # \mu = \left(\begin{array}{c} \mu_1 \\ \mu_2 \end{array} \right) # $ # # $ # \Sigma = \left(\begin{array}{cc} 1 & \rho \\ \rho & 1 \end{array} \right) # $ # for $1<\rho<-1$. # # Need $K = \Sigma^{-1}$. When $|\Sigma| \neq 0$ we have $K\Sigma^{-1} = I$. 
# # $ # \left(\begin{array}{cc} 1 & \rho \\ \rho & 1 \end{array} \right) \left(\begin{array}{cc} k_{11} & k_{12} \\ k_{21} & k_{22} \end{array} \right) = \left(\begin{array}{cc} 1& 0 \\ 0 & 1 \end{array} \right) # $ # \begin{align} # k_{11} &+ \rho k_{21} & & &=1 \\ # \rho k_{11} &+ k_{21} & & &=0 \\ # && k_{12} &+ \rho k_{22} &=0 \\ # && \rho k_{12} &+ k_{22} &=1 \\ # \end{align} # Solving these equations leads to the solution # # $$ # \left(\begin{array}{cc} k_{11} & k_{12} \\ k_{21} & k_{22} \end{array} \right) = \frac{1}{1-\rho^2}\left(\begin{array}{cc} 1 & -\rho \\ -\rho & 1 \end{array} \right) # $$ # Plotting the Equal probability contours # + # %matplotlib inline import matplotlib.pyplot as plt import numpy as np from notes_utilities import pnorm_ball_points RHO = np.arange(-0.9,1,0.3) plt.figure(figsize=(20,20/len(RHO))) plt.rc('text', usetex=True) plt.rc('font', family='serif') for i,rho in enumerate(RHO): plt.subplot(1,len(RHO),i+1) plt.axis('equal') ax = plt.gca() ax.set_xlim(-4,4) ax.set_ylim(-4,4) S = np.mat([[1, rho],[rho,1]]) A = np.linalg.cholesky(S) dx,dy = pnorm_ball_points(3*A) plt.title(r'$\rho =$ '+str(rho if np.abs(rho)>1E-9 else 0), fontsize=16) ln = plt.Line2D(dx,dy,markeredgecolor='k', linewidth=1, color='b') ax.add_line(ln) ax.set_axis_off() #ax.set_visible(False) plt.show() # + from ipywidgets import interact, interactive, fixed import ipywidgets as widgets from IPython.display import clear_output, display, HTML from matplotlib import rc from notes_utilities import bmatrix, pnorm_ball_line rc('font',**{'family':'sans-serif','sans-serif':['Helvetica']}) ## for Palatino and other serif fonts use: #rc('font',**{'family':'serif','serif':['Palatino']}) rc('text', usetex=True) fig = plt.figure(figsize=(5,5)) S = np.array([[1,0],[0,1]]) dx,dy = pnorm_ball_points(S) ln = plt.Line2D(dx,dy,markeredgecolor='k', linewidth=1, color='b') dx,dy = pnorm_ball_points(np.eye(2)) ln2 = plt.Line2D(dx,dy,markeredgecolor='k', linewidth=1, color='k',linestyle=':') plt.xlabel('$x_1$') plt.ylabel('$x_2$') ax = fig.gca() ax.set_xlim((-4,4)) ax.set_ylim((-4,4)) txt = ax.text(-1,-3,'$\left(\right)$',fontsize=15) ax.add_line(ln) ax.add_line(ln2) plt.close(fig) def set_line(s_1, s_2, rho, p, a, q): S = np.array([[s_1**2, rho*s_1*s_2],[rho*s_1*s_2, s_2**2]]) A = np.linalg.cholesky(S) #S = A.dot(A.T) dx,dy = pnorm_ball_points(A,p=p) ln.set_xdata(dx) ln.set_ydata(dy) dx,dy = pnorm_ball_points(a*np.eye(2),p=q) ln2.set_xdata(dx) ln2.set_ydata(dy) txt.set_text(bmatrix(S)) display(fig) ax.set_axis_off() interact(set_line, s_1=(0.1,2,0.01), s_2=(0.1, 2, 0.01), rho=(-0.99, 0.99, 0.01), p=(0.1,4,0.1), a=(0.2,10,0.1), q=(0.1,4,0.1)) # - # %run plot_normballs.py # %run matrix_norm_sliders.py # Exercise: # # $ # x = \left(\begin{array}{c} x_1 \\ x_2 \end{array} \right) # $ # # $ # \mu = \left(\begin{array}{c} \mu_1 \\ \mu_2 \end{array} \right) # $ # # $ # \Sigma = \left(\begin{array}{cc} s_{11} & s_{12} \\ s_{12} & s_{22} \end{array} \right) # $ # # # Need $K = \Sigma^{-1}$. When $|\Sigma| \neq 0$ we have $K\Sigma^{-1} = I$. 
# # $ # \left(\begin{array}{cc} s_{11} & s_{12} \\ s_{12} & s_{22} \end{array} \right) \left(\begin{array}{cc} k_{11} & k_{12} \\ k_{21} & k_{22} \end{array} \right) = \left(\begin{array}{cc} 1& 0 \\ 0 & 1 \end{array} \right) # $ # # Derive the result # $$ # K = \left(\begin{array}{cc} k_{11} & k_{12} \\ k_{21} & k_{22} \end{array} \right) # $$ # # Step 1: Verify # # $$ # \left(\begin{array}{cc} s_{11} & s_{12} \\ s_{21} & s_{22} \end{array} \right) = \left(\begin{array}{cc} 1 & -s_{12}/s_{22} \\ 0 & 1 \end{array} \right) \left(\begin{array}{cc} s_{11}-s_{12}^2/s_{22} & 0 \\ 0 & s_{22} \end{array} \right) \left(\begin{array}{cc} 1 & 0 \\ -s_{12}/s_{22} & 1 \end{array} \right) # $$ # # Step 2: Show that # $$ # \left(\begin{array}{cc} 1 & a\\ 0 & 1 \end{array} \right)^{-1} = \left(\begin{array}{cc} 1 & -a\\ 0 & 1 \end{array} \right) # $$ # and # $$ # \left(\begin{array}{cc} 1 & 0\\ b & 1 \end{array} \right)^{-1} = \left(\begin{array}{cc} 1 & 0\\ -b & 1 \end{array} \right) # $$ # # Step 3: Using the fact $(A B)^{-1} = B^{-1} A^{-1}$ and $s_{12}=s_{21}$, show that and simplify # $$ # \left(\begin{array}{cc} s_{11} & s_{12} \\ s_{21} & s_{22} \end{array} \right)^{-1} = # \left(\begin{array}{cc} 1 & 0 \\ s_{12}/s_{22} & 1 \end{array} \right) # \left(\begin{array}{cc} 1/(s_{11}-s_{12}^2/s_{22}) & 0 \\ 0 & 1/s_{22} \end{array} \right) \left(\begin{array}{cc} 1 & s_{12}/s_{22} \\ 0 & 1 \end{array} \right) # $$ # # # ## Gaussian Processes Regression # # # In Bayesian machine learning, a frequent problem encountered is the regression problem where we are given a pairs of inputs $x_i \in \mathbb{R}^N$ and associated noisy observations $y_i \in \mathbb{R}$. We assume the following model # # \begin{eqnarray*} # y_i &\sim& {\cal N}(y_i; f(x_i), R) # \end{eqnarray*} # # The interesting thing about a Gaussian process is that the function $f$ is not specified in close form, but we assume that the function values # \begin{eqnarray*} # f_i & = & f(x_i) # \end{eqnarray*} # are jointly Gaussian distributed as # \begin{eqnarray*} # \left( # \begin{array}{c} # f_1 \\ # \vdots \\ # f_L \\ # \end{array} # \right) & = & f_{1:L} \sim {\cal N}(f_{1:L}; 0, \Sigma(x_{1:L})) # \end{eqnarray*} # Here, we define the entries of the covariance matrix $\Sigma(x_{1:L})$ as # \begin{eqnarray*} # \Sigma_{i,j} & = & K(x_i, x_j) # \end{eqnarray*} # for $i,j \in \{1, \dots, L\}$. Here, $K$ is a given covariance function. 
Now, if we wish to predict the value of $f$ for a new $x$, we simply form the following joint distribution: # \begin{eqnarray*} # \left( # \begin{array}{c} # f_1 \\ # f_2 \\ # \vdots \\ # f_L \\ # f \\ # \end{array} # \right) & \sim & {\cal N}\left( \left(\begin{array}{c} # 0 \\ # 0 \\ # \vdots \\ # 0 \\ # 0 \\ # \end{array}\right) # , \left(\begin{array}{cccccc} # K(x_1,x_1) & K(x_1,x_2) & \dots & K(x_1, x_L) & K(x_1, x) \\ # K(x_2,x_1) & K(x_2,x_2) & \dots & K(x_2, x_L) & K(x_2, x) \\ # \vdots &\\ # K(x_L,x_1) & K(x_L,x_2) & \dots & K(x_L, x_L) & K(x_L, x) \\ # K(x,x_1) & K(x,x_2) & \dots & K(x, x_L) & K(x, x) \\ # \end{array}\right) \right) \\ # \left( # \begin{array}{c} # f_{1:L} \\ # f # \end{array} # \right) & \sim & {\cal N}\left( \left(\begin{array}{c} # \mathbf{0} \\ # 0 \\ # \end{array}\right) # , \left(\begin{array}{cc} # \Sigma(x_{1:L}) & k(x_{1:L}, x) \\ # k(x_{1:L}, x)^\top & K(x, x) \\ # \end{array}\right) \right) \\ # \end{eqnarray*} # # Here, $k(x_{1:L}, x)$ is a $L \times 1$ vector with entries $k_i$ where # # \begin{eqnarray*} # k_i = K(x_i, x) # \end{eqnarray*} # # Popular choices of covariance functions to generate smooth regression functions include a Bell shaped one # \begin{eqnarray*} # K_1(x_i, x_j) & = & \exp\left(-\frac{1}2 \| x_i - x_j \|^2 \right) # \end{eqnarray*} # and a Laplacian # \begin{eqnarray*} # K_2(x_i, x_j) & = & \exp\left(-\frac{1}2 \| x_i - x_j \| \right) # \end{eqnarray*} # # where $\| x \| = \sqrt{x^\top x}$ is the Euclidian norm. # # ## Part 1 # Derive the expressions to compute the predictive density # \begin{eqnarray*} # p(\hat{y}| y_{1:L}, x_{1:L}, \hat{x}) # \end{eqnarray*} # # # \begin{eqnarray*} # p(y | y_{1:L}, x_{1:L}, x) &=& {\cal N}(y; m, S) \\ # m & = & \\ # S & = & # \end{eqnarray*} # # ## Part 2 # Write a program to compute the mean and covariance of $p(\hat{y}| y_{1:L}, x_{1:L}, \hat{x})$ to generate a for the following data: # # x = [-2 -1 0 3.5 4] # y = [4.1 0.9 2 12.3 15.8] # # Try different covariance functions $K_1$ and $K_2$ and observation noise covariances $R$ and comment on the nature of the approximation. # # ## Part 3 # Suppose we are using a covariance function parameterised by # \begin{eqnarray*} # K_\beta(x_i, x_j) & = & \exp\left(-\frac{1}\beta \| x_i - x_j \|^2 \right) # \end{eqnarray*} # Find the optimum regularisation parameter $\beta^*(R)$ as a function of observation noise variance via maximisation of the marginal likelihood, i.e. # \begin{eqnarray*} # \beta^* & = & \arg\max_{\beta} p(y_{1:N}| x_{1:N}, \beta, R) # \end{eqnarray*} # Generate a plot of $b^*(R)$ for $R = 0.01, 0.02, \dots, 1$ for the dataset given in 2. # # + def cov_fun_bell(x1,x2,delta=1): return np.exp(-0.5*np.abs(x1-x2)**2/delta) def cov_fun_exp(x1,x2): return np.exp(-0.5*np.abs(x1-x2)) def cov_fun(x1,x2): return cov_fun_bell(x1,x2,delta=0.1) R = 0.05 x = np.array([-2, -1, 0, 3.5, 4]); y = np.array([4.1, 0.9, 2, 12.3, 15.8]); Sig = cov_fun(x.reshape((len(x),1)),x.reshape((1,len(x)))) + R*np.eye(len(x)) SigI = np.linalg.inv(Sig) xx = np.linspace(-10,10,100) yy = np.zeros_like(xx) ss = np.zeros_like(xx) for i in range(len(xx)): z = np.r_[x,xx[i]] CrossSig = cov_fun(x,xx[i]) PriorSig = cov_fun(xx[i],xx[i]) + R yy[i] = np.dot(np.dot(CrossSig, SigI),y) ss[i] = PriorSig - np.dot(np.dot(CrossSig, SigI),CrossSig) plt.plot(x,y,'or') plt.plot(xx,yy,'b.') plt.plot(xx,yy+3*np.sqrt(ss),'b:') plt.plot(xx,yy-3*np.sqrt(ss),'b:') plt.show()
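# For reference, the code above is an instance of the standard Gaussian conditioning identity, so a sketch of the Part 1 answer (not the notebook author's own derivation) reads, with $\Sigma \equiv \Sigma(x_{1:L}) + R I$ and $k \equiv k(x_{1:L}, x)$:
#
# \begin{eqnarray*}
# p(y | y_{1:L}, x_{1:L}, x) &=& {\cal N}(y; m, S) \\
# m & = & k^\top \Sigma^{-1} y_{1:L} \\
# S & = & K(x, x) + R - k^\top \Sigma^{-1} k
# \end{eqnarray*}
#
# which is exactly what `yy[i]` and `ss[i]` compute in the loop above.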
MultivariateGaussian.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # Analysis of data from fault-scarp model runs. # # Start by setting up arrays to hold the data. # + import numpy as np N = 125 run_number = np.zeros(N) hill_length = np.zeros(N) dist_rate = np.zeros(N) uplint = np.zeros(N) max_ht = np.zeros(N) mean_slope = np.zeros(N) mean_ht = np.zeros(N) # - # Set up info on domain size and parameters. domain_lengths = np.array([58, 103, 183, 325, 579]) disturbance_rates = 10.0 ** np.array([-4, -3.5, -3, -2.5, -2]) uplift_intervals = 10.0 ** np.array([2, 2.5, 3, 3.5, 4]) for i in range(N): hill_length[i] = domain_lengths[i // 25] dist_rate[i] = disturbance_rates[i % 5] uplint[i] = uplift_intervals[(i // 5) % 5] # Read data from file. # + import csv i = -1 with open('./grain_hill_stats.csv', 'rb') as csvfile: myreader = csv.reader(csvfile) for row in myreader: if i > -1: run_number[i] = int(row[0]) max_ht[i] = float(row[1]) mean_ht[i] = float(row[2]) mean_slope[i] = float(row[3]) i += 1 csvfile.close() # - # Let's revisit the question of how to plot, now that I seem to have solved the "not run long enough" problem. If you take the primary variables, the scaling is something like # # $h = f( d, \tau, \lambda )$ # # There is just one dimension here: time. So our normalization becomes simply: # # $h = f( d\tau, \lambda )$ # # This suggests plotting $h$ versus $d\tau$ segregated by $\lambda$. max_ht # + import matplotlib.pyplot as plt # %matplotlib inline dtau = dist_rate * uplint # Here's our dimensionless disturbance rate syms = ['k+', 'k.', 'kv', 'k*', 'ko'] for i in range(5): idx = (i * 25) + np.arange(25) plt.loglog(dtau[idx], mean_ht[idx], syms[i]) plt.xlabel(r"Dimensionless disturbance rate, $d'$", {'fontsize' : 12}) plt.ylabel(r'Mean height, $h$', {'fontsize' : 12}) plt.legend([r'$\lambda = 58$', '$\lambda = 103$', '$\lambda = 183$', '$\lambda = 325$', '$\lambda = 579$']) #plt.savefig('mean_ht_vs_dist_rate.pdf') # UNCOMMENT TO GENERATE FIGURE FILE # - # Try Scott's idea of normalizing by lambda for i in range(5): idx = (i * 25) + np.arange(25) plt.loglog(dtau[idx], mean_ht[idx] / hill_length[idx], syms[i]) plt.xlabel(r"Dimensionless disturbance rate, $d'$", {'fontsize' : 12}) plt.ylabel(r'$h / \lambda$', {'fontsize' : 12}) plt.legend([r'$\lambda = 58$', '$\lambda = 103$', '$\lambda = 183$', '$\lambda = 325$', '$\lambda = 579$']) plt.savefig('h_over_lam_vs_dist_rate.pdf') # This (h vs d') is actually a fairly straightforward result. For any given hillslope width, there are three domains: (1) threshold, in which height is independent of disturbance or uplift rate; (2) linear, in which height is inversely proportional to $d\tau$, and (3) finite-size, where mean height is only one or two cells, and which is basically meaningless and can be ignored. # Next, let's look at the effective diffusivity. One method is to start with mean hillslope height, $H_m$. Diffusion theory predicts that mean height should be given by: # # $H = \frac{U}{3D}L^2$ # # Then simply invert this to solve for $D$: # # $D = \frac{U}{3H}L^2$ # # In the particle model, $H$ in real length units is equal to height in cells, $h$, times scale of a cell, $\delta$. Similarly, $L = \lambda \delta / 2$, and $U = \delta / I_u$, where $I_u$ is uplift interval in cells/time (the factor of 2 in $\lambda$ comes from the fact that hillslope length is half the domain length). 
Substituting, # # $D = \frac{4}{3I_u h} \lambda^2 \delta^2$ # # This of course requires defining cell size. We could also do it in terms of a disturbance rate, $d_{eff}$, equal to $D/\delta^2$, # # $d_{eff} = \frac{4}{3I_u h} \lambda^2$ # # Ok, here's a neat thing: we can define a dimensionless effective diffusivity as follows: # # $D' = \frac{D}{d \delta^2} = \frac{4}{3 d I_u h} \lambda^2$ # # This measures the actual diffusivity relative to the nominal value reflected by the disturbance rate. Here we'll plot it against slope gradient in both linear and log-log. # # + hill_halflen = hill_length / 2.0 D_prime = (hill_halflen * hill_halflen) / (12 * uplint * mean_ht * dist_rate) plt.plot(mean_slope, D_prime, 'k.') plt.xlabel('Mean slope gradient') plt.ylabel('Dimensionless diffusivity') # - syms = ['k+', 'k.', 'kv', 'k*', 'ko'] for i in range(0, 5): idx = (i * 25) + np.arange(25) plt.semilogy(mean_slope[idx], D_prime[idx], syms[i]) plt.xlabel('Mean slope gradient', {'fontsize' : 12}) plt.ylabel('Dimensionless diffusivity', {'fontsize' : 12}) plt.ylim([1.0e0, 1.0e5]) plt.legend(['L = 58', 'L = 103', 'L = 183', 'L = 325', 'L = 579']) idx1 = np.where(max_ht > 4)[0] idx2 = np.where(max_ht <= 4)[0] plt.semilogy(mean_slope[idx1], D_prime[idx1], 'ko', mfc='none') plt.semilogy(mean_slope[idx2], D_prime[idx2], '.', mfc='0.5') plt.xlabel('Mean slope gradient', {'fontsize' : 12}) plt.ylabel(r"Dimensionless diffusivity, $D_e'$", {'fontsize' : 12}) plt.legend(['Mean height > 4 cells', 'Mean height <= 4 cells']) plt.savefig('dimless_diff_vs_grad.pdf') # Just for fun, let's try to isolate the portion of $D_e$ that doesn't contain Furbish et al.'s $\cos^2\theta$ factor. In other words, we'll plot against $\cos^2 \theta S$. Remember that $S=\tan \theta$, so $\theta = \tan^{-1} S$ and $\cos \theta = \cos\tan^{-1}S$. theta = np.arctan(mean_slope) cos_theta = np.cos(theta) cos2_theta = cos_theta * cos_theta cos2_theta_S = cos2_theta * mean_slope idx1 = np.where(max_ht > 4)[0] idx2 = np.where(max_ht <= 4)[0] plt.semilogy(cos2_theta_S[idx1], D_prime[idx1], 'ko', mfc='none') plt.semilogy(cos2_theta_S[idx2], D_prime[idx2], '.', mfc='0.5') plt.xlabel(r'Mean slope gradient $\times \cos^2 \theta$', {'fontsize' : 12}) plt.ylabel(r"Dimensionless diffusivity, $D_e'$", {'fontsize' : 12}) plt.legend(['Mean height > 4 cells', 'Mean height <= 4 cells']) plt.savefig('dimless_diff_vs_grad_cos2theta.pdf') # Now let's try $D / (1 - (S/S_c)^2)$ and see if that collapses things... 
Sc = np.tan(np.pi * 30.0 / 180.0) de_with_denom = D_prime * (1.0 - (mean_slope / Sc) ** 2) # + plt.semilogy(mean_slope[idx1], D_prime[idx1], 'ko', mfc='none') plt.semilogy(mean_slope[idx2], D_prime[idx2], '.', mfc='0.5') plt.xlabel('Mean slope grad', {'fontsize' : 12}) plt.ylabel(r"Dimensionless diffusivity, $D_e'$", {'fontsize' : 12}) plt.legend(['Mean height > 3 cells', 'Mean height <= 3 cells']) # Now add analytical slope = np.arange(0, 0.6, 0.05) D_pred = 10.0 / (1.0 - (slope/Sc)**2) plt.plot(slope, D_pred, 'r') # - # Version of the De-S plot with lower end zoomed in to find the approximate asymptote: idx1 = np.where(max_ht > 4)[0] idx2 = np.where(max_ht <= 4)[0] plt.semilogy(mean_slope[idx1], D_prime[idx1], 'ko', mfc='none') plt.semilogy(mean_slope[idx2], D_prime[idx2], '.', mfc='0.5') plt.xlabel('Mean slope gradient', {'fontsize' : 12}) plt.ylabel(r"Dimensionless diffusivity, $D_e'$", {'fontsize' : 12}) plt.ylim(10, 100) # ==================================================== # OLDER STUFF BELOW HERE # Start with a plot of $D$ versus slope for given fixed values of everything but $I_u$. halflen = hill_length / 2.0 D = (halflen * halflen) / (2.0 * uplint * max_ht) print np.amin(max_ht) import matplotlib.pyplot as plt # %matplotlib inline # + idx = np.arange(0, 25, 5) plt.semilogy(mean_slope[idx], D[idx], '.') idx = np.arange(25, 50, 5) plt.plot(mean_slope[idx], D[idx], '.') idx = np.arange(50, 75, 5) plt.plot(mean_slope[idx], D[idx], '.') idx = np.arange(75, 100, 5) plt.plot(mean_slope[idx], D[idx], '.') idx = np.arange(100, 125, 5) plt.plot(mean_slope[idx], D[idx], '.') # + idx = np.arange(25, 50, 5) plt.plot(mean_slope[idx], D[idx], '.') idx = np.arange(100, 125, 5) plt.plot(mean_slope[idx], D[idx], '.') # - # Is there something we could do with integrated elevation? To reduce noise... # # $\int_0^L z(x) dx = A = \int_0^L \frac{U}{2D}(L^2 - x^2) dx$ # # $A= \frac{U}{2D}L^3 - \frac{U}{6D}L^3$ # # $A= \frac{U}{3D}L^3$ # # $A/L = H_{mean} = \frac{U}{3D}L^2$ # # Rearranging, # # $D = \frac{U}{3H_{mean}}L^2$ # # $D/\delta^2 = \frac{1}{3 I_u h_{mean}} \lambda^2$ # # This might be more stable, since it measures area (a cumulative metric). # First, a little nondimensionalization. We have an outcome, mean height, $h_m$, that is a function of three inputs: disturbance rate, $d$, system length $\lambda$, and uplift interval, $I_u$. If we treat cells as a kind of dimension, our dimensions are: C, C/T, C, T/C. This implies two dimensionless parameters: # # $h_m / \lambda = f( d I_u )$ # # So let's calculate these quantities: hmp = mean_ht / hill_length di = dist_rate * uplint plt.plot(di, hmp, '.') plt.loglog(di, hmp, '.') # I guess that's kind of a collapse? Need to split apart by different parameters. But first, let's try the idea of an effective diffusion coefficient: dd = (1.0 / (uplint * mean_ht)) * halflen * halflen plt.plot(mean_slope, dd, '.') # Ok, kind of a mess. Let's try holding everything but uplift interval constant. 
var_uplint = np.arange(0, 25, 5) + 2 for i in range(5): idx = (i * 25) + var_uplint plt.plot(mean_slope[idx], dd[idx], '.') plt.xlabel('Mean slope gradient') plt.ylabel('Effective diffusivity') plt.legend(['L = 58', 'L = 103', 'L = 183', 'L = 325', 'L = 579']) var_uplint = np.arange(0, 25, 5) + 3 for i in range(5): idx = (i * 25) + var_uplint plt.plot(mean_slope[idx], dd[idx], '.') plt.xlabel('Mean slope gradient') plt.ylabel('Effective diffusivity') plt.legend(['L = 58', 'L = 103', 'L = 183', 'L = 325', 'L = 579']) var_uplint = np.arange(0, 25, 5) + 4 for i in range(5): idx = (i * 25) + var_uplint plt.plot(mean_slope[idx], dd[idx], '.') plt.xlabel('Mean slope gradient') plt.ylabel('Effective diffusivity') plt.legend(['L = 58', 'L = 103', 'L = 183', 'L = 325', 'L = 579']) for i in range(5): idx = np.arange(5) + 100 + 5 * i plt.loglog(di[idx], hmp[idx], '.') plt.xlabel('d I_u') plt.ylabel('H_m / L') plt.grid('on') plt.legend(['I_u = 100', 'I_u = 316', 'I_u = 1000', 'I_u = 3163', 'I_u = 10,000']) hmp2 = (mean_ht + 0.5) / hill_length for i in range(5): idx = np.arange(5) + 0 + 5 * i plt.loglog(di[idx], hmp2[idx], '.') plt.xlabel('d I_u') plt.ylabel('H_m / L') plt.grid('on') plt.legend(['I_u = 100', 'I_u = 316', 'I_u = 1000', 'I_u = 3163', 'I_u = 10,000']) # They don't seem to segregate much by $I_u$. I suspect they segregate by $L$. So let's plot ALL the data, colored by $L$: for i in range(5): idx = i * 25 + np.arange(25) plt.loglog(di[idx], hmp2[idx], '.') plt.xlabel('d I_u') plt.ylabel('H_m / L') plt.grid('on') plt.legend(['L = 58', 'L = 103', 'L = 183', 'L = 325', 'L = 579']) # The above plot makes sense actually. Consider the end members: # # At low $d I_u$, you have angle-of-repose: # # $H_m = \tan (30^\circ) L/4$ (I think) # # or # # $H_m / L \approx 0.15$ # # At high $d I_u$, we have the diffusive case: # # $H_m = \frac{U}{3D} L^2$, or # # $H_m / L = \frac{U}{3D} L$ # # But wait a minute, that's backwards from what the plot shows. Could this be a finite-size effect? Let's suppose that finite-size effects mean that there's a minimum $H_m$ equal to $N$ times the size of one particle. Then, # # $H_m / L = N / L$ # # which has the right direction. What would $N$ actually be? From reading the graph above, estimate that (using $L$ as half-length, so half of the above numbers in legend): # # For $L=29$, $N/L \approx 0.02$ # # For $L=51.5$, $N/L \approx 0.015$ # # For $L=91.5$, $N/L \approx 0.009$ # # For $L=162.5$, $N/L \approx 0.0055$ # # For $L=289.5$, $N/L \approx 0.004$ nl = np.array([0.02, 0.015, 0.009, 0.0055, 0.004]) l = np.array([29, 51.5, 91.5, 162.5, 289.5]) n = nl * l n 1.0 / l # Ok, so these are all hovering around 1 cell! So, that could well explain the scatter at the right side. # # This is starting to make more sense. The narrow window around $d I_u \approx 10$ represents the diffusive regime. To the left, the angle-of-repose regime. To the right, the finite-size-effect regime. For good reasons, the diffusive regime is biggest with the biggest $L$ (more particles, so finite-size effect doesn't show up until larger $d I_u$). Within the diffusive regime, points separate according to scale, reflecting the $H_m / L \propto L$ effect. So, if we took points with varying $L$ but identical $d I_u$, ... 
idx = np.where(np.logical_and(di>9.0, di<11.0))[0] hmd = mean_ht[idx] ld = hill_length[idx] plt.plot(ld, hmd/ld, '.') idx2 = np.where(np.logical_and(di>0.9, di<1.1))[0] plt.plot(hill_length[idx2], mean_ht[idx2]/hill_length[idx2], 'o') plt.grid('on') plt.xlabel('L') plt.ylabel('Hm/L') plt.plot([0, 600.0], [0.5774/4.0, 0.5774/4.0]) plt.plot([0.0, 600.0], [0.0, 0.02 * 0.1/3.0 * 600.0]) # In the above plot, diffusive behavior is indicated by a slope of 1:1, whereas angle-of-repose is indicated by a flat trend. One thing this says is that, for a given $d I_u$, a longer slope is more likely to be influenced by the angle of repose. That makes sense I think...? plt.loglog(ld, hmd, '*') plt.grid('on') # Now, what if it's better to consider disturbance rate, $d$, to be in square cells / time? That is, when dimensionalized, to be $L^2/T$ rather than $L/T$? Let's see what happens when we do it this way: d2 = dist_rate * uplint / hill_length plt.loglog(hmp, d2, '.') # Not so great ... # # # Let's try another idea, based again on dimensional analysis. Start with dimensional quantities $U$ (uplift rate), $L$ (length), $H$ (mean height), and $\delta$ (cell size). Nondimensionalize in a somewhat surprising way: # # $\frac{UL^2}{Hd\delta^2} = f( d\delta / U, L/\delta )$ # # This is actually a dimensionless diffusivity: diffusivity relative to disturbance intensity. # # Now translate back: $H=h\delta$, $U=\delta / I_u$, and $L=\lambda \delta$: # # $\frac{\lambda^2}{hdI_u} = f( d I_u, \lambda )$ # # So what happens if we plot thus? diff_nd = halflen * halflen / (uplint * mean_ht * dist_rate) diu = dist_rate * uplint for i in range(5): idx = (i * 25) + np.arange(25) plt.loglog(diu[idx], diff_nd[idx], '+') plt.grid('on') plt.xlabel('Dimensionless disturbance rate (d I_u)') plt.ylabel('Dimensionless diffusivity (l^2/hdI_u)') plt.legend(['\lambda= 58', '\lambda = 103', '\lambda = 183', '\lambda = 325', '\lambda = 579']) # Now THAT'S a collapse. Good. Interpretation: as we go left to right, we go from faster uplift or lower disturbance to slower uplift or faster disturbance. That means the relief goes from high to low. At high relief, we get angle-of-repose behavior, for which the effective diffusivity increases with relief---hence, diffusivity decreases with increasing $dI_u$. Then we get to a realm that is presumably the diffusive regime, where the curve flattens out. This represents a constant diffusivity. Finally, we get to the far right side, where you hit finite-size effects: there will be a hill at least one particle high on average no matter how high $d I_u$, so diffusivity appears to drop again. # # There's a one-to-one relation between $D'$ and $\lambda$, at least in the steep regime. This reflects simple scaling. In the steep regime, $H = S_c L / 4$. By definition $D' = (U / 3H\delta^2 d) L^2$, or $H = (U / 3D'\delta^2 d) L^2$. Substituting, # # $(U / 3D'\delta^2 d) L^2 = S_c L / 4$ # # $D' = 4 U L / 3 \delta^2 d S_c$ # # in other words, we expect $D' \propto L$ in this regime. (If we translate back, this writes as # # $D' = 4 \lambda / 3 I_u d S_c$ # # Voila! # # Ok, but why does the scaling between $D'$ and $\lambda$ continue in the diffusive regime? My guess is as follows. To relate disturbance rate, $d$, to diffusivity, $D$, consider that disturbance acts over depth $\delta$ and length $L$. Therefore, one might scale diffusivity as follows: # # $D \propto dL\delta \propto d \lambda$ # # By that argument, $D$ should be proportional to $\lambda$. 
Another way to say this is that in order to preserve constant $D$, $d$ should be treated as a scale-dependent parameter: $d \propto D/\lambda$. # # A further thought: can we define diffusivity more carefully? One approach would be # # (frequency of disturbance events per unit time per unit length, $F$) [1/LT] # # x # # (cross-sectional area disturbed, $A$) # # x # # (characteristic displacement length,$\Lambda$) # # For the first, take the expected number of events across the whole in unit time and divide by the length of the slope: # # $F = \lambda d / L = ...$ # # hmm, this isn't going where I thought... #
ModelInputsAndRunScripts/DataAnalysis/analysis_of_grain_hill_data.ipynb
# # Beyond linear separation in classification
#
# As we saw in the regression section, the linear classification model
# expects the data to be linearly separable. When this assumption does not
# hold, the model is not expressive enough to properly fit the data.
# Therefore, we need to apply the same tricks as in regression: feature
# augmentation (potentially using expert knowledge) or using a
# kernel-based method.
#
# We will provide examples where we use a kernel support vector machine
# to perform classification on some toy datasets where it is impossible to
# find a perfect linear separation.
#
# We will generate a first dataset where the data are represented as two
# interlaced half circles. This dataset is generated using the function
# [`sklearn.datasets.make_moons`](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_moons.html).

# +
import numpy as np
import pandas as pd
from sklearn.datasets import make_moons

feature_names = ["Feature #0", "Feature #1"]
target_name = "class"
X, y = make_moons(n_samples=100, noise=0.13, random_state=42)

# We store both the data and target in a dataframe to ease plotting
moons = pd.DataFrame(np.concatenate([X, y[:, np.newaxis]], axis=1),
                     columns=feature_names + [target_name])
data_moons, target_moons = moons[feature_names], moons[target_name]
# -

# Since the dataset contains only two features, we can make a scatter plot to
# have a look at it.

# +
import matplotlib.pyplot as plt
import seaborn as sns

sns.scatterplot(data=moons, x=feature_names[0], y=feature_names[1],
                hue=target_moons, palette=["tab:red", "tab:blue"])
_ = plt.title("Illustration of the moons dataset")
# -

# From the intuitions that we built while studying linear models, it should be
# clear that a linear classifier will not be able to find a perfect decision
# function to separate the two classes.
#
# Let's look at the decision boundary of such a linear classifier. We will
# create a predictive model by standardizing the dataset and then applying a
# linear support vector machine classifier.

import sklearn
sklearn.set_config(display="diagram")

# +
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

linear_model = make_pipeline(StandardScaler(), SVC(kernel="linear"))
linear_model.fit(data_moons, target_moons)
# -

# <div class="admonition warning alert alert-danger">
# <p class="first admonition-title" style="font-weight: bold;">Warning</p>
# <p class="last">Be aware that we fit and check the decision boundary of the classifier
# on the same dataset, without splitting it into a training set and a
# testing set. While this is bad practice, we do it here for the sake of
# simplicity, to illustrate the model behavior. Always use cross-validation when
# you want to assess the generalization performance of a machine-learning model.</p>
# </div>

# Let's check the decision boundary of such a linear model on this dataset.

# +
from helpers.plotting import DecisionBoundaryDisplay

DecisionBoundaryDisplay.from_estimator(
    linear_model, data_moons, response_method="predict", cmap="RdBu", alpha=0.5
)
sns.scatterplot(data=moons, x=feature_names[0], y=feature_names[1],
                hue=target_moons, palette=["tab:red", "tab:blue"])
_ = plt.title("Decision boundary of a linear model")
# -

# As expected, a linear decision boundary is not flexible enough to split the
# two classes.
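# To put a number on this, we can check the accuracy of the linear model on the
# data it was trained on (keeping in mind the warning above: this is a training
# score, not an estimate of generalization performance). This is only a quick
# sanity check.

train_acc = linear_model.score(data_moons, target_moons)
print(f"Training accuracy of the linear model: {train_acc:.3f}")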
#
# To push this example to the limit, we will create another dataset where
# samples of one class are surrounded by samples from the other class.

# +
from sklearn.datasets import make_gaussian_quantiles

feature_names = ["Feature #0", "Feature #1"]
target_name = "class"
X, y = make_gaussian_quantiles(
    n_samples=100, n_features=2, n_classes=2, random_state=42)
gauss = pd.DataFrame(np.concatenate([X, y[:, np.newaxis]], axis=1),
                     columns=feature_names + [target_name])
data_gauss, target_gauss = gauss[feature_names], gauss[target_name]
# -

ax = sns.scatterplot(data=gauss, x=feature_names[0], y=feature_names[1],
                     hue=target_gauss, palette=["tab:red", "tab:blue"])
_ = plt.title("Illustration of the Gaussian quantiles dataset")

# Here, it is even more obvious that a linear decision function is not
# suited to the problem. We can check what decision function a linear support
# vector machine will find.

linear_model.fit(data_gauss, target_gauss)
DecisionBoundaryDisplay.from_estimator(
    linear_model, data_gauss, response_method="predict", cmap="RdBu", alpha=0.5
)
sns.scatterplot(data=gauss, x=feature_names[0], y=feature_names[1],
                hue=target_gauss, palette=["tab:red", "tab:blue"])
_ = plt.title("Decision boundary of a linear model")

# As expected, a linear separation cannot split the classes properly: the model
# underfits, since it makes errors even on the training set.
#
# In the section about linear regression, we saw that we could use several
# tricks to make a linear model more flexible, by augmenting the features or
# by using a kernel. Here, we will use the latter: a radial basis function (RBF)
# kernel together with a support vector machine classifier.
#
# We will repeat the two previous experiments and check the resulting decision
# functions.

kernel_model = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma=5))

kernel_model.fit(data_moons, target_moons)
DecisionBoundaryDisplay.from_estimator(
    kernel_model, data_moons, response_method="predict", cmap="RdBu", alpha=0.5
)
sns.scatterplot(data=moons, x=feature_names[0], y=feature_names[1],
                hue=target_moons, palette=["tab:red", "tab:blue"])
_ = plt.title("Decision boundary with a model using an RBF kernel")

# We see that the decision boundary is no longer a straight line. Indeed,
# an area is carved out around the red samples, and we can expect this
# classifier to generalize better on unseen data.
#
# Let's check the decision function on the second dataset.

kernel_model.fit(data_gauss, target_gauss)
DecisionBoundaryDisplay.from_estimator(
    kernel_model, data_gauss, response_method="predict", cmap="RdBu", alpha=0.5
)
ax = sns.scatterplot(data=gauss, x=feature_names[0], y=feature_names[1],
                     hue=target_gauss, palette=["tab:red", "tab:blue"])
_ = plt.title("Decision boundary with a model using an RBF kernel")

# We observe something similar to the previous case: the decision function
# is more flexible and no longer underfits.
#
# Thus, the kernel trick and feature expansion are the two ways to make a
# linear classifier more expressive, exactly as we saw in regression.
#
# Keep in mind that adding flexibility to a model also increases the risk of
# overfitting, by making the decision function sensitive to individual
# (possibly noisy) data points of the training set. Here we can observe that
# the decision functions remain smooth enough to preserve good generalization.
# If you are curious, you can try to repeat the above experiment with
# `gamma=100` and look at the decision functions.
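# If you want to try that right away, here is a minimal sketch of the `gamma=100`
# experiment on the moons dataset, reusing the objects defined above
# (`data_moons`, `DecisionBoundaryDisplay`, etc.). With such a large `gamma`, the
# boundary typically becomes much more irregular and starts wrapping individual
# points, which is a hint of overfitting.

# +
high_gamma_model = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma=100))
high_gamma_model.fit(data_moons, target_moons)

DecisionBoundaryDisplay.from_estimator(
    high_gamma_model, data_moons, response_method="predict", cmap="RdBu", alpha=0.5
)
sns.scatterplot(data=moons, x=feature_names[0], y=feature_names[1],
                hue=target_moons, palette=["tab:red", "tab:blue"])
_ = plt.title("Decision boundary with an RBF kernel and gamma=100")
# -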
notebooks/logistic_regression_non_linear.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Overfitting II # # Last time, we saw a theoretical example of *overfitting*, in which we fit a machine learning model that perfectly fit the data it saw, but performed extremely poorly on fresh, unseen data. In this lecture, we'll observe overfitting in a more practical context, using the Titanic data set again. We'll then begin to study *validation* techniques for finding models with "just the right amount" of flexibility. import numpy as np from matplotlib import pyplot as plt import pandas as pd # + # assumes that you have run the function retrieve_data() # from "Introduction to ML in Practice" in ML_3.ipynb titanic = pd.read_csv("data.csv") titanic # - # Recall that we diagnosed overfitting by testing our model against some new data. In this case, we don't have any more data. So, what we can do instead is *hold out* some data that we won't let our model see at first. This holdout data is called the *validation* or *testing* data, depending on the use to which we put it. In contrast, the data that we allow our model to see is called the *training* data. `sklearn` provides a convenient function for partitioning our data into training and holdout sets called `train_test_split`. The default and generally most useful behavior is to randomly select rows of the data frame to be in each set. # + from sklearn.model_selection import train_test_split np.random.seed(1234) train, test = train_test_split(titanic, test_size = 0.3) # hold out 30% of the data train.shape, test.shape # - # Now we have two data frames. As you may recall from a previous lecture, we need to do some data cleaning, and split them into predictor variables `X` and target variables `y`. # + from sklearn import preprocessing def prep_titanic_data(data_df): df = data_df.copy() # convert male/female to 1/0 le = preprocessing.LabelEncoder() df['Sex'] = le.fit_transform(df['Sex']) # don't need name column df = df.drop(['Name'], axis = 1) # split into X and y X = df.drop(['Survived'], axis = 1) y = df['Survived'] return(X, y) # - X_train, y_train = prep_titanic_data(train) X_test, y_test = prep_titanic_data(test) # Now we're able to train our model on the `train` data, and then evaluate its performance on the `val` data. This will help us to diagnose and avoid overfitting. # # Let's try using the decision tree classifier again. As you may remember, the `DecisionTreeClassifier()` class takes an argument `max_depth` that governs how many layers of decisions the tree is allowed to make. Larger `max_depth` values correspond to more complicated trees. In this way, `max_depth` is a model complexity parameter, similar to the `degree` when we did polynomial regression. # # For example, with a small `max_depth`, the model scores on the training and validation data are relatively close. # + from sklearn import tree T = tree.DecisionTreeClassifier(max_depth = 3) T.fit(X_train, y_train) T.score(X_train, y_train), T.score(X_test, y_test) # - # On the other hand, if we use a much higher `max_depth`, we can achieve a substantially better score on the training data, but our performance on the validation data has not improved by much, and might even suffer. # + T = tree.DecisionTreeClassifier(max_depth = 20) T.fit(X_train, y_train) T.score(X_train, y_train), T.score(X_test, y_test) # - # That looks like overfitting! 
The model achieves a near-perfect score on the training data, but a much lower one on the test data. # + fig, ax = plt.subplots(1, figsize = (10, 7)) for d in range(1, 30): T = tree.DecisionTreeClassifier(max_depth = d) T.fit(X_train, y_train) ax.scatter(d, T.score(X_train, y_train), color = "black") ax.scatter(d, T.score(X_test, y_test), color = "firebrick") ax.set(xlabel = "Complexity (depth)", ylabel = "Performance (score)") # - # Observe that the training score (black) always increases, while the test score (red) tops out around 83\% and then even begins to trail off slightly. It looks like the optimal depth might be around 5-7 or so, but there's some random noise that can prevent us from being able to determine exactly what the optimal depth is. # # Increasing performance on the training set combined with decreasing performance on the test set is the trademark of overfitting. # # This noise reflects the fact that we took a single, random subset of the data for testing. In a more systematic experiment, we would draw many different subsets of the data for each value of depth and average over them. This is what *cross-validation* does, and we'll talk about it in the next lecture.
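# As a small preview of that idea, `sklearn.model_selection.cross_val_score` refits
# the model on several different splits and returns one score per split. The sketch
# below reuses `X_train` and `y_train` from above and picks `max_depth = 6`
# arbitrarily from the promising 5-7 range; we'll treat cross-validation properly
# in the next lecture.

# +
from sklearn.model_selection import cross_val_score

T = tree.DecisionTreeClassifier(max_depth = 6)
cv_scores = cross_val_score(T, X_train, y_train, cv = 5)
cv_scores.mean(), cv_scores.std()
# -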
content/ML/ML_5.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # A Simple Autoencoder # # We'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data. # # <img src='notebook_ims/autoencoder_1.png' /> # # ### Compressed Representation # # A compressed representation can be great for saving and sharing any kind of data in a way that is more efficient than storing raw data. In practice, the compressed representation often holds key information about an input image and we can use it for denoising images or oher kinds of reconstruction and transformation! # # <img src='notebook_ims/denoising.png' width=60%/> # # In this notebook, we'll be build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset. # + import torch import numpy as np from torchvision import datasets import torchvision.transforms as transforms # convert data to torch.FloatTensor transform = transforms.ToTensor() # load the training and test datasets train_data = datasets.MNIST(root='data', train=True, download=True, transform=transform) test_data = datasets.MNIST(root='data', train=False, download=True, transform=transform) # + # Create training and test dataloaders # number of subprocesses to use for data loading num_workers = 0 # how many samples per batch to load batch_size = 20 # prepare data loaders train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size, num_workers=num_workers) test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size, num_workers=num_workers) # - # ### Visualize the Data # + import matplotlib.pyplot as plt # %matplotlib inline # obtain one batch of training images dataiter = iter(train_loader) images, labels = dataiter.next() images = images.numpy() # get one image from the batch img = np.squeeze(images[0]) fig = plt.figure(figsize = (5,5)) ax = fig.add_subplot(111) ax.imshow(img, cmap='gray') # - # --- # ## Linear Autoencoder # # We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building a simple autoencoder. The encoder and decoder should be made of **one linear layer**. The units that connect the encoder and decoder will be the _compressed representation_. # # Since the images are normalized between 0 and 1, we need to use a **sigmoid activation on the output layer** to get values that match this input value range. # # <img src='notebook_ims/simple_autoencoder.png' width=50% /> # # # #### TODO: Build the graph for the autoencoder in the cell below. # > The input images will be flattened into 784 length vectors. The targets are the same as the inputs. # > The encoder and decoder will be made of two linear layers, each. # > The depth dimensions should change as follows: 784 inputs > **encoding_dim** > 784 outputs. # > All layers will have ReLu activations applied except for the final output layer, which has a sigmoid activation. 
# # **The compressed representation should be a vector with dimension `encoding_dim=32`.** # + import torch.nn as nn import torch.nn.functional as F # define the NN architecture class Autoencoder(nn.Module): def __init__(self, encoding_dim): super(Autoencoder, self).__init__() ## encoder ## self.encoder1 = nn.Linear(784, 128) self.encoder2 = nn.Linear(128, encoding_dim) ## decoder ## self.decoder1 = nn.Linear(encoding_dim, 128) self.decoder2 = nn.Linear(128, 784) self.sigmoid = nn.Sigmoid() def forward(self, x): # define feedforward behavior # and scale the *output* layer with a sigmoid activation function x = F.relu(self.encoder1(x)) x = F.relu(self.encoder2(x)) x = F.relu(self.decoder1(x)) x = self.sigmoid(self.decoder2(x)) return x # initialize the NN encoding_dim = 32 model = Autoencoder(encoding_dim) print(model) # - # --- # ## Training # # Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss and the test loss afterwards. # # We are not concerned with labels in this case, just images, which we can get from the `train_loader`. Because we're comparing pixel values in input and output images, it will be best to use a loss that is meant for a regression task. Regression is all about comparing _quantities_ rather than probabilistic values. So, in this case, I'll use `MSELoss`. And compare output images and input images as follows: # ``` # loss = criterion(outputs, images) # ``` # # Otherwise, this is pretty straightfoward training with PyTorch. We flatten our images, pass them into the autoencoder, and record the training loss as we go. # + # specify loss function criterion = nn.MSELoss() # specify loss function optimizer = torch.optim.Adam(model.parameters(), lr=0.001) # + # number of epochs to train the model n_epochs = 20 for epoch in range(1, n_epochs+1): # monitor training loss train_loss = 0.0 ################### # train the model # ################### for data in train_loader: # _ stands in for labels, here images, _ = data # flatten images images = images.view(images.size(0), -1) # clear the gradients of all optimized variables optimizer.zero_grad() # forward pass: compute predicted outputs by passing inputs to the model outputs = model(images) # calculate the loss loss = criterion(outputs, images) # backward pass: compute gradient of the loss with respect to model parameters loss.backward() # perform a single optimization step (parameter update) optimizer.step() # update running training loss train_loss += loss.item()*images.size(0) # print avg training statistics train_loss = train_loss/len(train_loader) print('Epoch: {} \tTraining Loss: {:.6f}'.format( epoch, train_loss )) # - # ## Checking out the results # # Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts. 
# +
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = next(dataiter)  # use the built-in next() to pull one batch

images_flatten = images.view(images.size(0), -1)
# get sample outputs
output = model(images_flatten)
# prep images for display
images = images.numpy()

# output is resized into a batch of images
output = output.view(batch_size, 1, 28, 28)
# use detach when it's an output that requires_grad
output = output.detach().numpy()

# plot the first ten input images and then the reconstructed images
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(25,4))

# input images on top row, reconstructions on bottom
for images, row in zip([images, output], axes):
    for img, ax in zip(images, row):
        ax.imshow(np.squeeze(img), cmap='gray')
        ax.get_xaxis().set_visible(False)
        ax.get_yaxis().set_visible(False)
# -

# ## Up Next
#
# We're dealing with images here, so we can (usually) get better performance using convolution layers. So, next we'll build a better autoencoder with convolutional layers.
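# If you want to inspect the compressed representation itself, you can re-apply
# just the two encoder layers to a batch of flattened images. This is a small
# sketch that reuses `model` and `images_flatten` from above; the model has no
# separate `encode()` method, so we simply call its encoder layers directly.

# +
with torch.no_grad():
    # 784 -> 128 -> 32, matching the encoder half of the forward pass
    codes = F.relu(model.encoder2(F.relu(model.encoder1(images_flatten))))

print(codes.shape)  # expected: torch.Size([20, 32]) for batch_size=20, encoding_dim=32
# -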
part-3/autoencoder/linear-autoencoder/Simple_Autoencoder_Exercise.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.8.9 64-bit (''iguanas_os_dev'': venv)' # name: python3 # --- # # Classification Metrics Example # Classification metrics are used to calculate the performance of binary predictors based on a binary target. They are used extensively in other Iguanas modules. This example shows how they can be applied and how to create your own. # ## Requirements # To run, you'll need the following: # # * A dataset containing binary predictor columns and a binary target column. # ---- # ## Import packages # + from iguanas.metrics.classification import Precision, Recall, FScore, Revenue import pandas as pd import numpy as np from typing import Union # - # ## Create data # Let's create some dummy predictor columns and a binary target column. For this example, let's assume the dummy predictor columns represent rules that have been applied to a dataset. # + np.random.seed(0) y_pred = pd.Series(np.random.randint(0, 2, 1000), name = 'A') y_preds = pd.DataFrame(np.random.randint(0, 2, (1000, 5)), columns=[i for i in 'ABCDE']) y = pd.Series(np.random.randint(0, 2, 1000), name = 'label') amounts = pd.Series(np.random.randint(0, 1000, 1000), name = 'amounts') # - # ---- # ## Apply optimisation functions # There are currently four classification metrics available: # # * Precision score # * Recall score # * Fbeta score # * Revenue # # **Note that the *FScore*, *Precision* and *Recall* classes are ~100 times faster on larger datasets compared to the same functions from Sklearn's *metrics* module. They also work with Koalas DataFrames, whereas the Sklearn functions do not.** # ### Instantiate class and run fit method # We can run the `fit` method to calculate the optimisation metric for each column in the dataset. # #### Precision score precision = Precision() # Single predictor rule_precision = precision.fit(y_preds=y_pred, y_true=y, sample_weight=None) # Multiple predictors rule_precisions = precision.fit(y_preds=y_preds, y_true=y, sample_weight=None) # #### Recall score recall = Recall() # Single predictor rule_recall = recall.fit(y_preds=y_pred, y_true=y, sample_weight=None) # Multiple predictors rule_recalls = recall.fit(y_preds=y_preds, y_true=y, sample_weight=None) # #### Fbeta score (beta=1) f1 = FScore(beta=1) # Single predictor rule_f1 = f1.fit(y_preds=y_pred, y_true=y, sample_weight=None) # Multiple predictors rule_f1s = f1.fit(y_preds=y_preds, y_true=y, sample_weight=None) # #### Revenue rev = Revenue(y_type='Fraud', chargeback_multiplier=2) # Single predictor rule_rev = rev.fit(y_preds=y_pred, y_true=y, sample_weight=amounts) # Multiple predictors rule_revs = rev.fit(y_preds=y_preds, y_true=y, sample_weight=amounts) # ### Outputs # The `fit` method returns the optimisation metric defined by the class: rule_precision, rule_precisions rule_recall, rule_recalls rule_f1, rule_f1s rule_rev, rule_revs # The `fit` method can be fed into various Iguanas modules as an argument (wherever the `metric` parameter appears). For example, in the `RuleGeneratorOpt` module, you can set the metric used to optimise the rules using this methodology. # ---- # ## Creating your own optimisation function # Say we want to create a class which calculates the Positive likelihood ratio (TP rate/FP rate). 
# The main class structure involves having a `fit` method which has three arguments - the binary predictor(s), the binary target and any event specific weights to apply. This method should return a single numeric value. class PositiveLikelihoodRatio: def fit(self, y_preds: Union[pd.Series, pd.DataFrame], y_true: pd.Series, sample_weight: pd.Series) -> float: def _calc_plr(y_true, y_preds): # Calculate TPR tpr = (y_true * y_preds).sum() / y_true.sum() # Calculate FPR fpr = ((1 - y_true) * y_preds).sum()/(1 - y_true).sum() return 0 if tpr == 0 or fpr == 0 else tpr/fpr if y_preds.ndim == 1: return _calc_plr(y_true, y_preds) else: plrs = np.empty(y_preds.shape[1]) for i, col in enumerate(y_preds.columns): plrs[i] = _calc_plr(y_true, y_preds[col]) return plrs # We can then apply the `fit` method to the dataset to check it works: # + tags=[] plr = PositiveLikelihoodRatio() # Single predictor rule_plr = plr.fit(y_preds=y_pred, y_true=y, sample_weight=None) # Multiple predictors rule_plrs = plr.fit(y_preds=y_preds, y_true=y, sample_weight=None) # - rule_plr, rule_plrs # Finally, after instantiating the class, we can feed the `fit` method to a relevant Iguanas module (for example, we can feed the `fit` method to the `metric` parameter in the `BayesianOptimiser` class so that rules are generated which maximise the Positive Likelihood Ratio). # ----
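# Before feeding the new metric into an Iguanas module, it is worth sanity-checking
# it on a tiny example where the rates are easy to work out by hand. The values
# below are made up purely for this check.

# +
y_true_tiny = pd.Series([1, 1, 0, 0, 0])
y_pred_tiny = pd.Series([1, 0, 1, 0, 0])

# By hand: TPR = 1/2, FPR = 1/3, so the PLR should be (1/2)/(1/3) = 1.5
plr_check = PositiveLikelihoodRatio()
plr_check.fit(y_preds=y_pred_tiny, y_true=y_true_tiny, sample_weight=None)
# -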
iguanas/metrics/examples/classification_example.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] tags=["remove_cell"] # # The Atoms of Computation # - # # Programming a quantum computer is now something that anyone can do in the comfort of their own home. # # But what to create? What is a quantum program anyway? In fact, what is a quantum computer? # # # These questions can be answered by making comparisons to standard digital computers. Unfortunately, most people don’t actually understand how digital computers work either. In this article, we’ll look at the basics principles behind these devices. To help us transition over to quantum computing later on, we’ll do it using the same tools as we'll use for quantum. # + [markdown] tags=["contents"] # ## Contents # # 1. [Splitting information into bits](#bits) # 2. [Computation as a Diagram](#diagram) # 3. [Your First Quantum Circuit](#first-circuit) # 4. [Example: Adder Circuit](#adder) # 4.1 [Encoding an Input](#encoding) # 4.2 [Remembering how to Add](#remembering-add) # 4.3 [Adding with Qiskit](#adding-qiskit) # - # Below is some Python code we'll need to run if we want to use the code in this page: from qiskit import QuantumCircuit, execute, Aer from qiskit.visualization import plot_histogram # ## 1. Splitting information into bits <a id="bits"></a> # The first thing we need to know about is the idea of bits. These are designed to be the world’s simplest alphabet. With only two characters, 0 and 1, we can represent any piece of information. # # One example is numbers. You are probably used to representing a number through a string of the ten digits 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. In this string of digits, each digit represents how many times the number contains a certain power of ten. For example, when we write 9213, we mean # # # # $$ 9000 + 200 + 10 + 3 $$ # # # # or, expressed in a way that emphasizes the powers of ten # # # # $$ (9\times10^3) + (2\times10^2) + (1\times10^1) + (3\times10^0) $$ # # # # Though we usually use this system based on the number 10, we can just as easily use one based on any other number. The binary number system, for example, is based on the number two. This means using the two characters 0 and 1 to express numbers as multiples of powers of two. For example, 9213 becomes 10001111111101, since # # # # $$ 9213 = (1 \times 2^{13}) + (0 \times 2^{12}) + (0 \times 2^{11})+ (0 \times 2^{10}) +(1 \times 2^9) + (1 \times 2^8) + (1 \times 2^7) \\\\ \,\,\, + (1 \times 2^6) + (1 \times 2^5) + (1 \times 2^4) + (1 \times 2^3) + (1 \times 2^2) + (0 \times 2^1) + (1 \times 2^0) $$ # # # # In this we are expressing numbers as multiples of 2, 4, 8, 16, 32, etc. instead of 10, 100, 1000, etc. # <a id="binary_widget"></a> from qiskit_textbook.widgets import binary_widget binary_widget(nbits=5) # These strings of bits, known as binary strings, can be used to represent more than just numbers. For example, there is a way to represent any text using bits. For any letter, number, or punctuation mark you want to use, you can find a corresponding string of at most eight bits using [this table](https://www.ibm.com/support/knowledgecenter/en/ssw_aix_72/com.ibm.aix.networkcomm/conversion_table.htm). Though these are quite arbitrary, this is a widely agreed-upon standard. In fact, it's what was used to transmit this article to you through the internet. 
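# If you want to check a binary representation without the widget, plain Python
# can do it. This is just an aside: `bin` gives the binary form of an integer, and
# `format(ord(...), '08b')` gives the eight-bit pattern for a character's code.

print(bin(9213))                 # 0b10001111111101, matching the expansion above
print(format(ord('A'), '08b'))   # the eight-bit pattern that encodes the letter A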
# # This is how all information is represented in computers. Whether numbers, letters, images, or sound, it all exists in the form of binary strings. # # Like our standard digital computers, quantum computers are based on this same basic idea. The main difference is that they use *qubits*, an extension of the bit to quantum mechanics. In the rest of this textbook, we will explore what qubits are, what they can do, and how they do it. In this section, however, we are not talking about quantum at all. So, we just use qubits as if they were bits. # ### Quick Exercises # 1. Think of a number and try to write it down in binary. # 2. If you have $n$ bits, how many different states can they be in? # ## 2. Computation as a diagram <a id="diagram"></a> # # Whether we are using qubits or bits, we need to manipulate them in order to turn the inputs we have into the outputs we need. For the simplest programs with very few bits, it is useful to represent this process in a diagram known as a *circuit diagram*. These have inputs on the left, outputs on the right, and operations represented by arcane symbols in between. These operations are called 'gates', mostly for historical reasons. # # Here's an example of what a circuit looks like for standard, bit-based computers. You aren't expected to understand what it does. It should simply give you an idea of what these circuits look like. # # ![image1](images/classical_circuit.png) # # For quantum computers, we use the same basic idea but have different conventions for how to represent inputs, outputs, and the symbols used for operations. Here is the quantum circuit that represents the same process as above. # # ![image2](images/quantum_circuit.png) # # In the rest of this section, we will explain how to build circuits. At the end, you'll know how to create the circuit above, what it does, and why it is useful. # ## 3. Your first quantum circuit <a id="first-circuit"></a> # In a circuit, we typically need to do three jobs: First, encode the input, then do some actual computation, and finally extract an output. For your first quantum circuit, we'll focus on the last of these jobs. We start by creating a circuit with eight qubits and eight outputs. n = 8 n_q = n n_b = n qc_output = QuantumCircuit(n_q,n_b) # This circuit, which we have called `qc_output`, is created by Qiskit using `QuantumCircuit`. The number `n_q` defines the number of qubits in the circuit. With `n_b` we define the number of output bits we will extract from the circuit at the end. # # The extraction of outputs in a quantum circuit is done using an operation called `measure`. Each measurement tells a specific qubit to give an output to a specific output bit. The following code adds a `measure` operation to each of our eight qubits. The qubits and bits are both labelled by the numbers from 0 to 7 (because that’s how programmers like to do things). The command `qc.measure(j,j)` adds a measurement to our circuit `qc` that tells qubit `j` to write an output to bit `j`. for j in range(n): qc_output.measure(j,j) # Now that our circuit has something in it, let's take a look at it. qc_output.draw() # Qubits are always initialized to give the output ```0```. Since we don't do anything to our qubits in the circuit above, this is exactly the result we'll get when we measure them. We can see this by running the circuit many times and plotting the results in a histogram. We will find that the result is always ```00000000```: a ```0``` from each qubit. 
counts = execute(qc_output,Aer.get_backend('qasm_simulator')).result().get_counts() plot_histogram(counts) # The reason for running many times and showing the result as a histogram is because quantum computers may have some randomness in their results. In this case, since we aren’t doing anything quantum, we get just the ```00000000``` result with certainty. # # Note that this result comes from a quantum simulator, which is a standard computer calculating what an ideal quantum computer would do. Simulations are only possible for small numbers of qubits (~30 qubits), but they are nevertheless a very useful tool when designing your first quantum circuits. To run on a real device you simply need to replace ```Aer.get_backend('qasm_simulator')``` with the backend object of the device you want to use. # ## 4. Example: Creating an Adder Circuit <a id="adder"></a> # ### 4.1 Encoding an input <a id="encoding"></a> # # Now let's look at how to encode a different binary string as an input. For this, we need what is known as a NOT gate. This is the most basic operation that you can do in a computer. It simply flips the bit value: ```0``` becomes ```1``` and ```1``` becomes ```0```. For qubits, it is an operation called ```x``` that does the job of the NOT. # # Below we create a new circuit dedicated to the job of encoding and call it `qc_encode`. For now, we only specify the number of qubits. # + qc_encode = QuantumCircuit(n) qc_encode.x(7) qc_encode.draw() # - # Extracting results can be done using the circuit we have from before: `qc_output`. Adding the two circuits using `qc_encode + qc_output` creates a new circuit with everything needed to extract an output added at the end. qc = qc_encode + qc_output qc.draw() # Now we can run the combined circuit and look at the results. counts = execute(qc,Aer.get_backend('qasm_simulator')).result().get_counts() plot_histogram(counts) # Now our computer outputs the string ```10000000``` instead. # # The bit we flipped, which comes from qubit 7, lives on the far left of the string. This is because Qiskit numbers the bits in a string from right to left. Some prefer to number their bits the other way around, but Qiskit's system certainly has its advantages when we are using the bits to represent numbers. Specifically, it means that qubit 7 is telling us about how many $2^7$s we have in our number. So by flipping this bit, we’ve now written the number 128 in our simple 8-bit computer. # # Now try out writing another number for yourself. You could do your age, for example. Just use a search engine to find out what the number looks like in binary (if it includes a ‘0b’, just ignore it), and then add some 0s to the left side if you are younger than 64. # + qc_encode = QuantumCircuit(n) qc_encode.x(1) qc_encode.x(5) qc_encode.draw() # - # Now we know how to encode information in a computer. The next step is to process it: To take an input that we have encoded, and turn it into an output that we need. # ### 4.2 Remembering how to add <a id="remembering-add"></a> # To look at turning inputs into outputs, we need a problem to solve. Let’s do some basic maths. In primary school, you will have learned how to take large mathematical problems and break them down into manageable pieces. For example, how would you go about solving the following? # # ``` # 9213 # + 1854 # = ???? # ``` # # One way is to do it digit by digit, from right to left. 
So we start with 3+4 # ``` # 9213 # + 1854 # = ???7 # ``` # # And then 1+5 # ``` # 9213 # + 1854 # = ??67 # ``` # # Then we have 2+8=10. Since this is a two digit answer, we need to carry the one over to the next column. # # ``` # 9213 # + 1854 # = ?067 # ¹ # ``` # # Finally we have 9+1+1=11, and get our answer # # ``` # 9213 # + 1854 # = 11067 # ¹ # ``` # # This may just be simple addition, but it demonstrates the principles behind all algorithms. Whether the algorithm is designed to solve mathematical problems or process text or images, we always break big tasks down into small and simple steps. # # To run on a computer, algorithms need to be compiled down to the smallest and simplest steps possible. To see what these look like, let’s do the above addition problem again but in binary. # # # ``` # 10001111111101 # + 00011100111110 # # = ?????????????? # ``` # # Note that the second number has a bunch of extra 0s on the left. This just serves to make the two strings the same length. # # Our first task is to do the 1+0 for the column on the right. In binary, as in any number system, the answer is 1. We get the same result for the 0+1 of the second column. # # ``` # 10001111111101 # + 00011100111110 # # = ????????????11 # ``` # # Next, we have 1+1. As you’ll surely be aware, 1+1=2. In binary, the number 2 is written ```10```, and so requires two bits. This means that we need to carry the 1, just as we would for the number 10 in decimal. # # ``` # 10001111111101 # + 00011100111110 # = ???????????011 # ¹ # ``` # # The next column now requires us to calculate ```1+1+1```. This means adding three numbers together, so things are getting complicated for our computer. But we can still compile it down to simpler operations, and do it in a way that only ever requires us to add two bits together. For this, we can start with just the first two 1s. # # ``` # 1 # + 1 # = 10 # ``` # # Now we need to add this ```10``` to the final ```1``` , which can be done using our usual method of going through the columns. # # ``` # 10 # + 01 # = 11 # ``` # # The final answer is ```11``` (also known as 3). # # Now we can get back to the rest of the problem. With the answer of ```11```, we have another carry bit. # # ``` # 10001111111101 # + 00011100111110 # = ??????????1011 # ¹¹ # ``` # # So now we have another 1+1+1 to do. But we already know how to do that, so it’s not a big deal. # # In fact, everything left so far is something we already know how to do. This is because, if you break everything down into adding just two bits, there are only four possible things you’ll ever need to calculate. Here are the four basic sums (we’ll write all the answers with two bits to be consistent). # # ``` # 0+0 = 00 (in decimal, this is 0+0=0) # 0+1 = 01 (in decimal, this is 0+1=1) # 1+0 = 01 (in decimal, this is 1+0=1) # 1+1 = 10 (in decimal, this is 1+1=2) # ``` # # This is called a *half adder*. If our computer can implement this, and if it can chain many of them together, it can add anything. # ### 4.3 Adding with Qiskit <a id="adding-qiskit"></a> # Let's make our own half adder using Qiskit. This will include a part of the circuit that encodes the input, a part that executes the algorithm, and a part that extracts the result. The first part will need to be changed whenever we want to use a new input, but the rest will always remain the same. # ![half adder implemented on a quantum circuit](images/half-adder.svg) # # The two bits we want to add are encoded in the qubits 0 and 1. 
The above example encodes a ```1``` in both these qubits, and so it seeks to find the solution of ```1+1```. The result will be a string of two bits, which we will read out from the qubits 2 and 3. All that remains is to fill in the actual program, which lives in the blank space in the middle. # # The dashed lines in the image are just to distinguish the different parts of the circuit (although they can have more interesting uses too). They are made by using the `barrier` command. # # The basic operations of computing are known as logic gates. We’ve already used the NOT gate, but this is not enough to make our half adder. We could only use it to manually write out the answers. Since we want the computer to do the actual computing for us, we’ll need some more powerful gates. # # To see what we need, let’s take another look at what our half adder needs to do. # # ``` # 0+0 = 00 # 0+1 = 01 # 1+0 = 01 # 1+1 = 10 # ``` # # The rightmost bit in all four of these answers is completely determined by whether the two bits we are adding are the same or different. So for ```0+0``` and ```1+1```, where the two bits are equal, the rightmost bit of the answer comes out ```0```. For ```0+1``` and ```1+0```, where we are adding different bit values, the rightmost bit is ```1```. # # To get this part of our solution correct, we need something that can figure out whether two bits are different or not. Traditionally, in the study of digital computation, this is called an XOR gate. # # | Input 1 | Input 2 | XOR Output | # |:-------:|:-------:|:------:| # | 0 | 0 | 0 | # | 0 | 1 | 1 | # | 1 | 0 | 1 | # | 1 | 1 | 0 | # # In quantum computers, the job of the XOR gate is done by the controlled-NOT gate. Since that's quite a long name, we usually just call it the CNOT. In Qiskit its name is ```cx```, which is even shorter. In circuit diagrams, it is drawn as in the image below. qc_cnot = QuantumCircuit(2) qc_cnot.cx(0,1) qc_cnot.draw() # This is applied to a pair of qubits. One acts as the control qubit (this is the one with the little dot). The other acts as the *target qubit* (with the big circle). # # There are multiple ways to explain the effect of the CNOT. One is to say that it looks at its two input bits to see whether they are the same or different. Next, it overwrites the target qubit with the answer. The target becomes ```0``` if they are the same, and ```1``` if they are different. # # <img src="images/cnot_xor.svg"> # # Another way of explaining the CNOT is to say that it does a NOT on the target if the control is ```1```, and does nothing otherwise. This explanation is just as valid as the previous one (in fact, it’s the one that gives the gate its name). # # Try the CNOT out for yourself by trying each of the possible inputs. For example, here's a circuit that tests the CNOT with the input ```01```. qc = QuantumCircuit(2,2) qc.x(0) qc.cx(0,1) qc.measure(0,0) qc.measure(1,1) qc.draw() # If you execute this circuit, you’ll find that the output is ```11```. We can think of this happening because of either of the following reasons. # # - The CNOT calculates whether the input values are different and finds that they are, which means that it wants to output ```1```. It does this by writing over the state of qubit 1 (which, remember, is on the left of the bit string), turning ```01``` into ```11```. # # - The CNOT sees that qubit 0 is in state ```1```, and so applies a NOT to qubit 1. This flips the ```0``` of qubit 1 into a ```1```, and so turns ```01``` into ```11```. 
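# To see this output for yourself, you can run the circuit above with the same
# simulator pattern used earlier in this section.

counts = execute(qc,Aer.get_backend('qasm_simulator')).result().get_counts()
plot_histogram(counts)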
# # Here is a table showing all the possible inputs and corresponding outputs of the CNOT gate: # # | Input (q1 q0) | Output (q1 q0) | # |:-------------:|:--------------:| # | 00 | 00 | # | 01 | 11 | # | 10 | 10 | # | 11 | 01 | # # For our half adder, we don’t want to overwrite one of our inputs. Instead, we want to write the result on a different pair of qubits. For this, we can use two CNOTs. # + qc_ha = QuantumCircuit(4,2) # encode inputs in qubits 0 and 1 qc_ha.x(0) # For a=0, remove this line. For a=1, leave it. qc_ha.x(1) # For b=0, remove this line. For b=1, leave it. qc_ha.barrier() # use cnots to write the XOR of the inputs on qubit 2 qc_ha.cx(0,2) qc_ha.cx(1,2) qc_ha.barrier() # extract outputs qc_ha.measure(2,0) # extract XOR value qc_ha.measure(3,1) qc_ha.draw() # - # We are now halfway to a fully working half adder. We just have the other bit of the output left to do: the one that will live on qubit 3. # # If you look again at the four possible sums, you’ll notice that there is only one case for which this is ```1``` instead of ```0```: ```1+1```=```10```. It happens only when both the bits we are adding are ```1```. # # To calculate this part of the output, we could just get our computer to look at whether both of the inputs are ```1```. If they are — and only if they are — we need to do a NOT gate on qubit 3. That will flip it to the required value of ```1``` for this case only, giving us the output we need. # # For this, we need a new gate: like a CNOT but controlled on two qubits instead of just one. This will perform a NOT on the target qubit only when both controls are in state ```1```. This new gate is called the *Toffoli*. For those of you who are familiar with Boolean logic gates, it is basically an AND gate. # # In Qiskit, the Toffoli is represented with the `ccx` command. # + qc_ha = QuantumCircuit(4,2) # encode inputs in qubits 0 and 1 qc_ha.x(0) # For a=0, remove the this line. For a=1, leave it. qc_ha.x(1) # For b=0, remove the this line. For b=1, leave it. qc_ha.barrier() # use cnots to write the XOR of the inputs on qubit 2 qc_ha.cx(0,2) qc_ha.cx(1,2) # use ccx to write the AND of the inputs on qubit 3 qc_ha.ccx(0,1,3) qc_ha.barrier() # extract outputs qc_ha.measure(2,0) # extract XOR value qc_ha.measure(3,1) # extract AND value qc_ha.draw() # - # In this example, we are calculating ```1+1```, because the two input bits are both ```1```. Let's see what we get. counts = execute(qc_ha,Aer.get_backend('qasm_simulator')).result().get_counts() plot_histogram(counts) # The result is ```10```, which is the binary representation of the number 2. We have built a computer that can solve the famous mathematical problem of 1+1! # # Now you can try it out with the other three possible inputs, and show that our algorithm gives the right results for those too. # # The half adder contains everything you need for addition. With the NOT, CNOT, and Toffoli gates, we can create programs that add any set of numbers of any size. # # These three gates are enough to do everything else in computing too. In fact, we can even do without the CNOT. Additionally, the NOT gate is only really needed to create bits with value ```1```. The Toffoli gate is essentially the atom of mathematics. It is the simplest element, from which every other problem-solving technique can be compiled. # # As we'll see, in quantum computing we split the atom. import qiskit qiskit.__qiskit_version__
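# As an optional extra, here is one possible sketch of a *full adder* built from
# the same NOT, CNOT, and Toffoli gates. It also takes a carry-in bit: qubits 0 and
# 1 hold the two input bits, qubit 2 holds the carry-in, and the sum and carry-out
# are written onto qubits 3 and 4. With all three inputs set to ```1```, the
# expected output is ```11``` (the number 3).

# +
qc_fa = QuantumCircuit(5,2)

# encode inputs: a on qubit 0, b on qubit 1, carry-in on qubit 2
qc_fa.x(0)
qc_fa.x(1)
qc_fa.x(2)
qc_fa.barrier()

# write a XOR b onto qubit 3 with two CNOTs
qc_fa.cx(0,3)
qc_fa.cx(1,3)

# carry-out = (a AND b) XOR (carry-in AND (a XOR b)), written onto qubit 4
qc_fa.ccx(0,1,4)
qc_fa.ccx(2,3,4)

# finish the sum: qubit 3 becomes a XOR b XOR carry-in
qc_fa.cx(2,3)
qc_fa.barrier()

# extract outputs: sum bit and carry-out bit
qc_fa.measure(3,0)
qc_fa.measure(4,1)

counts = execute(qc_fa,Aer.get_backend('qasm_simulator')).result().get_counts()
plot_histogram(counts)
# -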
content/ch-states/atoms-computation.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: conda_python3 # language: python # name: conda_python3 # --- # # Roboschool simulations of physical robotics with Amazon SageMaker # # --- # ## Introduction # # Roboschool is an [open source](https://github.com/openai/roboschool/tree/master/roboschool) physics simulator that is commonly used to train RL policies for simulated robotic systems. Roboschool provides 3D visualization of physical systems with multiple joints in contact with each other and their environment. # # This notebook will show how to install Roboschool into the SageMaker RL container, and train pre-built robotics applications that are included with Roboschool. # ## Pick which Roboschool problem to solve # # Roboschool defines a [variety](https://github.com/openai/roboschool/blob/master/roboschool/__init__.py) of Gym environments that correspond to different robotics problems. Here we're highlighting a few of them at varying levels of difficulty: # # - **Reacher (easy)** - a very simple robot with just 2 joints reaches for a target # - **Hopper (medium)** - a simple robot with one leg and a foot learns to hop down a track # - **Humanoid (difficult)** - a complex 3D robot with two arms, two legs, etc. learns to balance without falling over and then to run on a track # # The simpler problems train faster with less computational resources. The more complex problems are more fun. # Uncomment the problem to work on roboschool_problem = 'reacher' #roboschool_problem = 'hopper' #roboschool_problem = 'humanoid' # ## Pre-requisites # # ### Imports # # To get started, we'll import the Python libraries we need, set up the environment with a few prerequisites for permissions and configurations. import sagemaker import boto3 import sys import os import glob import re import subprocess from IPython.display import HTML import time from time import gmtime, strftime sys.path.append("common") from misc import get_execution_role, wait_for_s3_object from docker_utils import build_and_push_docker_image from sagemaker.rl import RLEstimator, RLToolkit, RLFramework # ### Setup S3 bucket # # Set up the linkage and authentication to the S3 bucket that you want to use for checkpoint and the metadata. sage_session = sagemaker.session.Session() s3_bucket = sage_session.default_bucket() s3_output_path = 's3://{}/'.format(s3_bucket) print("S3 bucket path: {}".format(s3_output_path)) # ### Define Variables # # We define variables such as the job prefix for the training jobs *and the image path for the container (only when this is BYOC).* # create a descriptive job name job_name_prefix = 'rl-roboschool-'+roboschool_problem # ### Configure where training happens # # You can train your RL training jobs using the SageMaker notebook instance or local notebook instance. In both of these scenarios, you can run the following in either local or SageMaker modes. The local mode uses the SageMaker Python SDK to run your code in a local container before deploying to SageMaker. This can speed up iterative testing and debugging while using the same familiar Python SDK interface. You just need to set `local_mode = True`. # + # run in local_mode on this machine, or as a SageMaker TrainingJob? 
local_mode = False if local_mode: instance_type = 'local' else: # If on SageMaker, pick the instance type instance_type = "ml.c5.2xlarge" # - # ### Create an IAM role # # Either get the execution role when running from a SageMaker notebook instance `role = sagemaker.get_execution_role()` or, when running from local notebook instance, use utils method `role = get_execution_role()` to create an execution role. # + try: role = sagemaker.get_execution_role() except: role = get_execution_role() print("Using IAM role arn: {}".format(role)) # - # ### Install docker for `local` mode # # In order to work in `local` mode, you need to have docker installed. When running from you local machine, please make sure that you have docker and docker-compose (for local CPU machines) and nvidia-docker (for local GPU machines) installed. Alternatively, when running from a SageMaker notebook instance, you can simply run the following script to install dependenceis. # # Note, you can only run a single local notebook at one time. # only run from SageMaker notebook instance if local_mode: # !/bin/bash ./common/setup.sh # ## Build docker container # # We must build a custom docker container with Roboschool installed. This takes care of everything: # # 1. Fetching base container image # 2. Installing Roboschool and its dependencies # 3. Uploading the new container image to ECR # # This step can take a long time if you are running on a machine with a slow internet connection. If your notebook instance is in SageMaker or EC2 it should take 3-10 minutes depending on the instance type. # # + # %%time cpu_or_gpu = 'gpu' if instance_type.startswith('ml.p') else 'cpu' repository_short_name = "sagemaker-roboschool-ray-%s" % cpu_or_gpu docker_build_args = { 'CPU_OR_GPU': cpu_or_gpu, 'AWS_REGION': boto3.Session().region_name, } custom_image_name = build_and_push_docker_image(repository_short_name, build_args=docker_build_args) print("Using ECR image %s" % custom_image_name) # - # ## Write the Training Code # # The training code is written in the file “train-coach.py” which is uploaded in the /src directory. # First import the environment files and the preset files, and then define the main() function. # !pygmentize src/train-{roboschool_problem}.py # ## Train the RL model using the Python SDK Script mode # # If you are using local mode, the training will run on the notebook instance. When using SageMaker for training, you can select a GPU or CPU instance. The RLEstimator is used for training RL jobs. # # 1. Specify the source directory where the environment, presets and training code is uploaded. # 2. Specify the entry point as the training code # 3. Specify the choice of RL toolkit and framework. This automatically resolves to the ECR path for the RL Container. # 4. Define the training parameters such as the instance count, job name, S3 path for output and job name. # 5. Specify the hyperparameters for the RL agent algorithm. The RLCOACH_PRESET or the RLRAY_PRESET can be used to specify the RL agent algorithm you want to use. # 6. Define the metrics definitions that you are interested in capturing in your logs. These can also be visualized in CloudWatch and SageMaker Notebooks. 
# + # %%time metric_definitions = RLEstimator.default_metric_definitions(RLToolkit.RAY) estimator = RLEstimator(entry_point="train-%s.py" % roboschool_problem, source_dir='src', dependencies=["common/sagemaker_rl"], image_name=custom_image_name, role=role, train_instance_type=instance_type, train_instance_count=1, output_path=s3_output_path, base_job_name=job_name_prefix, metric_definitions=metric_definitions, hyperparameters={ # Attention scientists! You can override any Ray algorithm parameter here: #"rl.training.config.horizon": 5000, #"rl.training.config.num_sgd_iter": 10, } ) estimator.fit(wait=local_mode) job_name = estimator.latest_training_job.job_name print("Training job: %s" % job_name) # - # ## Visualization # # RL training can take a long time. So while it's running there are a variety of ways we can track progress of the running training job. Some intermediate output gets saved to S3 during training, so we'll set up to capture that. # + print("Job name: {}".format(job_name)) s3_url = "s3://{}/{}".format(s3_bucket,job_name) if local_mode: output_tar_key = "{}/output.tar.gz".format(job_name) else: output_tar_key = "{}/output/output.tar.gz".format(job_name) intermediate_folder_key = "{}/output/intermediate/".format(job_name) output_url = "s3://{}/{}".format(s3_bucket, output_tar_key) intermediate_url = "s3://{}/{}".format(s3_bucket, intermediate_folder_key) print("S3 job path: {}".format(s3_url)) print("Output.tar.gz location: {}".format(output_url)) print("Intermediate folder path: {}".format(intermediate_url)) tmp_dir = "/tmp/{}".format(job_name) os.system("mkdir {}".format(tmp_dir)) print("Create local folder {}".format(tmp_dir)) # - # ### Fetch videos of training rollouts # Videos of certain rollouts get written to S3 during training. Here we fetch the last 10 videos from S3, and render the last one. recent_videos = wait_for_s3_object( s3_bucket, intermediate_folder_key, tmp_dir, fetch_only=(lambda obj: obj.key.endswith(".mp4") and obj.size>0), limit=10, training_job_name=job_name) last_video = sorted(recent_videos)[-1] # Pick which video to watch os.system("mkdir -p ./src/tmp_render/ && cp {} ./src/tmp_render/last_video.mp4".format(last_video)) HTML('<video src="./src/tmp_render/last_video.mp4" controls autoplay></video>') # ### Plot metrics for training job # We can see the reward metric of the training as it's running, using algorithm metrics that are recorded in CloudWatch metrics. We can plot this to see the performance of the model over time. # + # %matplotlib inline from sagemaker.analytics import TrainingJobAnalytics df = TrainingJobAnalytics(job_name, ['episode_reward_mean']).dataframe() num_metrics = len(df) if num_metrics == 0: print("No algorithm metrics found in CloudWatch") else: plt = df.plot(x='timestamp', y='value', figsize=(12,5), legend=True, style='b-') plt.set_ylabel('Mean reward per episode') plt.set_xlabel('Training time (s)') # - # ### Monitor training progress # You can repeatedly run the visualization cells to get the latest videos or see the latest metrics as the training job proceeds.
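# ### Check training job status
# Besides polling S3, you can ask SageMaker directly for the status of the training
# job. This is a minimal sketch using the standard `describe_training_job` call from
# boto3; the 60-second polling interval is an arbitrary choice.

# +
sm_client = boto3.client("sagemaker")

status = sm_client.describe_training_job(TrainingJobName=job_name)["TrainingJobStatus"]
print("Training job status: {}".format(status))

# optionally block until the job leaves the InProgress/Stopping states
while status in ("InProgress", "Stopping"):
    time.sleep(60)
    status = sm_client.describe_training_job(TrainingJobName=job_name)["TrainingJobStatus"]
    print("Training job status: {}".format(status))
# -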
reinforcement_learning/rl_roboschool_ray/rl_roboschool_ray.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Self-Driving Car Engineer Nanodegree # # ## Deep Learning # # ## Project: Build a Traffic Sign Recognition Classifier # # In this notebook, a template is provided for you to implement your functionality in stages, which is required to successfully complete this project. If additional code is required that cannot be included in the notebook, be sure that the Python code is successfully imported and included in your submission if necessary. # # > **Note**: Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to \n", # "**File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission. # # In addition to implementing code, there is a writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-Traffic-Sign-Classifier-Project/blob/master/writeup_template.md) that can be used to guide the writing process. Completing the code template and writeup template will cover all of the [rubric points](https://review.udacity.com/#!/rubrics/481/view) for this project. # # The [rubric](https://review.udacity.com/#!/rubrics/481/view) contains "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. The stand out suggestions are optional. If you decide to pursue the "stand out suggestions", you can include the code in this Ipython notebook and also discuss the results in the writeup file. # # # >**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode. # --- # ## Step 0: Load The Data # + # Load pickled data import pickle # TODO: Fill this in based on where you saved the training and testing data training_file = '../data/train.p' validation_file= '../data/valid.p' testing_file = '../data/test.p' with open(training_file, mode='rb') as f: train = pickle.load(f) with open(validation_file, mode='rb') as f: valid = pickle.load(f) with open(testing_file, mode='rb') as f: test = pickle.load(f) X_train, y_train = train['features'], train['labels'] X_valid, y_valid = valid['features'], valid['labels'] X_test, y_test = test['features'], test['labels'] assert(len(X_train) == len(y_train)) assert(len(X_valid) == len(y_valid)) assert(len(X_test) == len(y_test)) # - # --- # # ## Step 1: Dataset Summary & Exploration # # The pickled data is a dictionary with 4 key/value pairs: # # - `'features'` is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels). # - `'labels'` is a 1D array containing the label/class id of the traffic sign. The file `signnames.csv` contains id -> name mappings for each id. # - `'sizes'` is a list containing tuples, (width, height) representing the original width and height the image. 
# - `'coords'` is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. **THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES**
#
# Complete the basic data summary below. Use python, numpy and/or pandas methods to calculate the data summary rather than hard coding the results. For example, the [pandas shape method](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.shape.html) might be useful for calculating some of the summary results.

# ### Provide a Basic Summary of the Data Set Using Python, Numpy and/or Pandas

# +
### Replace each question mark with the appropriate value.
### Use python, pandas or numpy methods rather than hard coding the results

# TODO: Number of training examples
n_train = len(X_train)

# TODO: Number of validation examples
n_validation = len(X_valid)

# TODO: Number of testing examples.
n_test = len(X_test)

# TODO: What's the shape of a traffic sign image?
image_shape = X_train[0].shape

# TODO: How many unique classes/labels there are in the dataset.
n_classes = len(set(y_train))

print("Number of validation examples =", n_validation)
print("Number of training examples =", n_train)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
# -

# ### Include an exploratory visualization of the dataset

# Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include: plotting traffic sign images, plotting the count of each sign, etc.
#
# The [Matplotlib](http://matplotlib.org/) [examples](http://matplotlib.org/examples/index.html) and [gallery](http://matplotlib.org/gallery.html) pages are a great resource for doing visualizations in Python.
#
# **NOTE:** It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections. It can be interesting to look at the distribution of classes in the training, validation and test set. Is the distribution the same? Are there more examples of some classes than others?

# +
### Data exploration visualization code goes here.
### Feel free to use as many code cells as needed.
import matplotlib.pyplot as plt
import random
import numpy as np
# Visualizations will be shown in the notebook.
# %matplotlib inline

# random.randint is inclusive on both ends, so subtract 1 to stay within the array bounds
index = random.randint(0, len(X_train) - 1)
image = X_train[index].squeeze()

plt.figure(figsize=(1,1))
plt.imshow(image)
print(y_train[index])
# -

# histogram our data with numpy
def histogram(y_train, n_classes):
    n, bins = np.histogram(y_train, n_classes)
    # get the center of each histogram bin
    width = 0.5*(bins[1] - bins[0])
    center = (bins[:-1] + bins[1:]) / 2
    plt.bar(center, n, align='center', width=width)
    plt.show()
histogram(y_train, n_classes)

# ----
#
# ## Step 2: Design and Test a Model Architecture
#
# Design and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the [German Traffic Sign Dataset](http://benchmark.ini.rub.de/?section=gtsrb&subsection=dataset).
#
# The LeNet-5 implementation shown in the [classroom](https://classroom.udacity.com/nanodegrees/nd013/parts/fbf77062-5703-404e-b60c-95b78b2f3f9e/modules/6df7ae49-c61c-4bb2-a23e-6527e69209ec/lessons/601ae704-1035-4287-8b11-e2c2716217ad/concepts/d4aca031-508f-4e0b-b493-e7b706120f81) at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play!
# # With the LeNet-5 solution from the lecture, you should expect a validation set accuracy of about 0.89. To meet specifications, the validation set accuracy will need to be at least 0.93. It is possible to get an even higher accuracy, but 0.93 is the minimum for a successful project submission. # # There are various aspects to consider when thinking about this problem: # # - Neural network architecture (is the network over or underfitting?) # - Play around preprocessing techniques (normalization, rgb to grayscale, etc) # - Number of examples per label (some have more than others). # - Generate fake data. # # Here is an example of a [published baseline model on this problem](http://yann.lecun.com/exdb/publis/pdf/sermanet-ijcnn-11.pdf). It's not required to be familiar with the approach used in the paper but, it's good practice to try to read papers like these. # ### Pre-process the Data Set (normalization, grayscale, etc.) # Minimally, the image data should be normalized so that the data has mean zero and equal variance. For image data, `(pixel - 128)/ 128` is a quick way to approximately normalize the data and can be used in this project. # # Other pre-processing steps are optional. You can try different techniques to see if it improves performance. # # Use the code cell (or multiple code cells, if necessary) to implement the first step of your project. # + ### Preprocess the data here. It is required to normalize the data. Other preprocessing steps could include ### converting to grayscale, etc. ### Feel free to use as many code cells as needed. from sklearn.utils import shuffle X_train = np.concatenate([X_train, X_train]) y_train = np.concatenate([y_train, y_train]) print(X_train.shape) print(y_train.shape) X_train_grey = np.sum(X_train/3, axis=3, keepdims=True) X_valid_grey = np.sum(X_valid/3, axis=3, keepdims=True) X_test_grey = np.sum(X_test/3, axis=3, keepdims=True) X_train_grey, y_train = shuffle(X_train_grey, y_train) print(X_train_grey.shape) histogram(y_train, n_classes) # - # ## Setup TensorFlow # The `EPOCH` and `BATCH_SIZE` values affect the training speed and model accuracy. # # You do not need to modify this section. # + import tensorflow as tf EPOCHS = 100 BATCH_SIZE = 128 # - # ### Model Architecture # + ### Define your architecture here. ### Feel free to use as many code cells as needed. 
from tensorflow.contrib.layers import flatten def conv2d(x, W, b, strides=1, padd='VALID', name='default'): x = tf.nn.conv2d(x, W, strides=[1, strides, strides, 1], padding=padd) x = tf.nn.bias_add(x, b) return tf.nn.relu(x, name=name) def maxpool(input_x, k_size=2, strides_size=2, padd ='VALID', name='default'): ksize = [1, k_size, k_size, 1] strides = [1, strides_size, strides_size, 1] return tf.nn.max_pool(input_x, ksize, strides, padding=padd, name=name) def fc_layer(input_x, W, b): return tf.add(tf.matmul(input_x, W), b) # Store layers weight & bias def LeNet(x): # Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer mu = 0 sigma = 0.1 weights = { 'wc1': tf.Variable(tf.truncated_normal(shape=(5, 5, 1, 6), mean = mu, stddev = sigma)), 'wc2': tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 16), mean = mu, stddev = sigma)), 'wd1': tf.Variable(tf.truncated_normal(shape=(400, 200), mean = mu, stddev = sigma)), 'wd2': tf.Variable(tf.truncated_normal(shape=(200, 120), mean = mu, stddev = sigma)), 'wd3': tf.Variable(tf.truncated_normal(shape=(120, 84), mean = mu, stddev = sigma)), 'out': tf.Variable(tf.truncated_normal(shape=(84, n_classes), mean = mu, stddev = sigma))} biases = { 'bc1': tf.Variable(tf.zeros(6)), 'bc2': tf.Variable(tf.zeros(16)), 'bd1': tf.Variable(tf.zeros(200)), 'bd2': tf.Variable(tf.zeros(120)), 'bd3': tf.Variable(tf.zeros(84)), 'out': tf.Variable(tf.zeros(n_classes)) } # Layer 1: Convolutional. Input = 32x32x1. Output = 28x28x6. # Activation. # Pooling. Input = 28x28x6. Output = 14x14x6. conv1 = conv2d(x, weights['wc1'], biases['bc1'], name='conv1') conv1 = maxpool(conv1, name='max1') #print (conv1.shape) # Layer 2: Convolutional. Output = 10x10x16. # Activation. # Pooling. Input = 10x10x16. Output = 5x5x16. conv2 = conv2d(conv1, weights['wc2'], biases['bc2'], name='conv2') conv2 = maxpool(conv2, name='max2') #print (conv2.shape) # TODO: Flatten. Input = 5x5x16. Output = 400. flat_op = tf.contrib.layers.flatten(conv2) # Layer 3: Fully Connected. Input = 400. Output = 200. fc1 = fc_layer(flat_op, weights['wd1'], biases['bd1']) # Activation. fc1 = tf.nn.relu(fc1) # Dropout fc1 = tf.nn.dropout(fc1, keep_prob) # Layer 4: Fully Connected. Input = 200. Output = 120. fc2 = fc_layer(fc1, weights['wd2'], biases['bd2']) # Activation. fc2 = tf.nn.relu(fc2) # Dropout fc2 = tf.nn.dropout(fc2, keep_prob) # Layer 5: Fully Connected. Input = 120. Output = 84.120 fc3 = fc_layer(fc2, weights['wd3'], biases['bd3']) # Activation. fc3 = tf.nn.relu(fc3) # Layer 6: Fully Connected. Input = 84. Output = 43. logits = fc_layer(fc3, weights['out'], biases['out']) return logits # - # ## Features and Labels # `x` is a placeholder for a batch of input images. # `y` is a placeholder for a batch of output labels. # # x = tf.placeholder(tf.float32, (None, 32, 32, 1)) y = tf.placeholder(tf.int32, (None)) keep_prob = tf.placeholder(tf.float32) one_hot_y = tf.one_hot(y, n_classes) # ### Train, Validate and Test the Model # A validation set can be used to assess how well the model is performing. A low accuracy on the training and validation # sets imply underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting. # + ### Train your model here. ### Calculate and report the accuracy on the training and validation set. ### Once a final model architecture is selected, ### the accuracy on the test set should be calculated and reported as well. ### Feel free to use as many code cells as needed. 
# - # # Train # + rate = 0.001 logits = LeNet(x) cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits) loss_operation = tf.reduce_mean(cross_entropy) optimizer = tf.train.AdamOptimizer(learning_rate = rate) training_operation = optimizer.minimize(loss_operation) # - # ## Model Evaluation # Evaluate how well the loss and accuracy of the model for a given dataset. # # + correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1)) accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) saver = tf.train.Saver() def evaluate(X_data, y_data): num_examples = len(X_data) total_accuracy = 0 sess = tf.get_default_session() for offset in range(0, num_examples, BATCH_SIZE): batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE] accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y, keep_prob: 1}) total_accuracy += (accuracy * len(batch_x)) return total_accuracy / num_examples # - # # # ## Train the Model # Run the training data through the training pipeline to train the model. # # Before each epoch, shuffle the training set. # # After each epoch, measure the loss and accuracy of the validation set. # # Save the model after training. with tf.Session() as sess: sess.run(tf.global_variables_initializer()) num_examples = len(X_train_grey) print("Training...") print() for i in range(EPOCHS): X_train_grey, y_train = shuffle(X_train_grey, y_train) for offset in range(0, num_examples, BATCH_SIZE): end = offset + BATCH_SIZE batch_x, batch_y = X_train_grey[offset:end], y_train[offset:end] sess.run(training_operation, feed_dict={x: batch_x, y: batch_y, keep_prob: 0.5}) validation_accuracy = evaluate(X_valid_grey, y_valid) print("EPOCH {} ...".format(i+1)) print("Validation Accuracy = {:.3f}".format(validation_accuracy)) print() saver.save(sess, './lenet') print("Model saved") # ## EVALUATE with tf.Session() as sess: saver.restore(sess, tf.train.latest_checkpoint('.')) test_accuracy = evaluate(X_test_grey, y_test) print("Test Accuracy = {:.3f}".format(test_accuracy)) training_accuracy = evaluate(X_train_grey, y_train) print("Training Accuracy = {:.3f}".format(training_accuracy)) # --- # # ## Step 3: Test a Model on New Images # # To give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type. # # You may find `signnames.csv` useful as it contains mappings from the class id (integer) to the actual sign name. # ### Load and Output the Images # #### If you disable the GPU while calculating this section please use the commented out img_labels # + ### Load the images and plot them here. ### Feel free to use as many code cells as needed. import matplotlib.pyplot as plt import matplotlib.image as mpimg import numpy as np import cv2 import glob saved_image_color = [] images = glob.glob('test_images_from_internet/*') print(images) for idx, fname in enumerate(images): img = cv2.imread(fname) saved_image_color.append(img) plt.imshow(img) plt.show() saved_image_color = np.asarray(saved_image_color) saved_images = np.sum(saved_image_color/3, axis=3, keepdims=True) #img_labels = [31, 25 , 18, 3, 11, 17, 35] img_labels = [3, 11, 18, 17, 25, 35, 31] # - # ### Predict the Sign Type for Each Image # + ### Run the predictions here and use the model to output the prediction for each image. ### Make sure to pre-process the images with the same pre-processing pipeline used earlier. 
### Feel free to use as many code cells as needed.
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('.'))

    test_accuracy = evaluate(saved_images, img_labels)
    print("Test Accuracy = {:.3f}".format(test_accuracy))
# -

# ### Analyze Performance

### Calculate the accuracy for these 5 new images.
### For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate on these new images.
test_accuracy *= 100
print("Test Accuracy = {:.3f} %".format(test_accuracy))

# ### Output Top 5 Softmax Probabilities For Each Image Found on the Web

# For each of the new images, print out the model's softmax probabilities to show the **certainty** of the model's predictions (limit the output to the top 5 probabilities for each image). [`tf.nn.top_k`](https://www.tensorflow.org/versions/r0.12/api_docs/python/nn.html#top_k) could prove helpful here.
#
# The example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image.
#
# `tf.nn.top_k` will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the corresponding class ids.
#
# Take this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes. `tf.nn.top_k` is used to choose the three classes with the highest probability:
#
# ```
# # (5, 6) array
# a = np.array([[ 0.24879643,  0.07032244,  0.12641572,  0.34763842,  0.07893497,
#          0.12789202],
#        [ 0.28086119,  0.27569815,  0.08594638,  0.0178669 ,  0.18063401,
#          0.15899337],
#        [ 0.26076848,  0.23664738,  0.08020603,  0.07001922,  0.1134371 ,
#          0.23892179],
#        [ 0.11943333,  0.29198961,  0.02605103,  0.26234032,  0.1351348 ,
#          0.16505091],
#        [ 0.09561176,  0.34396535,  0.0643941 ,  0.16240774,  0.24206137,
#          0.09155967]])
# ```
#
# Running it through `sess.run(tf.nn.top_k(tf.constant(a), k=3))` produces:
#
# ```
# TopKV2(values=array([[ 0.34763842,  0.24879643,  0.12789202],
#        [ 0.28086119,  0.27569815,  0.18063401],
#        [ 0.26076848,  0.23892179,  0.23664738],
#        [ 0.29198961,  0.26234032,  0.16505091],
#        [ 0.34396535,  0.24206137,  0.16240774]]), indices=array([[3, 0, 5],
#        [0, 1, 4],
#        [0, 5, 1],
#        [1, 3, 5],
#        [1, 4, 3]], dtype=int32))
# ```
#
# Looking just at the first row we get `[ 0.34763842,  0.24879643,  0.12789202]`, you can confirm these are the 3 largest probabilities in `a`. You'll also notice `[3, 0, 5]` are the corresponding indices.

# +
### Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web.
### Feel free to use as many code cells as needed.
softmax_logits = tf.nn.softmax(logits)
top_k = tf.nn.top_k(softmax_logits, k=5)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver.restore(sess, tf.train.latest_checkpoint('.'))
    sess_softmax_logits = sess.run(softmax_logits, feed_dict={x: saved_images, keep_prob: 1.0})
    sess_top_k = sess.run(top_k, feed_dict={x: saved_images, keep_prob: 1.0})
    print("softmax logits = \n {0}".format(sess_softmax_logits))
    print()
    print("top 5 predictions = \n {0}".format(sess_top_k))
# -

# ### Project Writeup
#
# Once you have completed the code implementation, document your results in a project writeup using this [template](https://github.com/udacity/CarND-Traffic-Sign-Classifier-Project/blob/master/writeup_template.md) as a guide. The writeup can be in a markdown or pdf file.
# > **Note**: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission.

# ---
#
# ## Step 4 (Optional): Visualize the Neural Network's State with Test Images
#
# This section is not required, but acts as an additional exercise for understanding the output of a neural network's weights. While neural networks can be a great learning device, they are often referred to as a black box. We can understand what the weights of a neural network look like better by plotting their feature maps. After successfully training your neural network you can see what its feature maps look like by plotting the output of the network's weight layers in response to a test stimulus image. From these plotted feature maps, it's possible to see what characteristics of an image the network finds interesting. For a sign, maybe the inner network feature maps react with high activation to the sign's boundary outline or to the contrast in the sign's painted symbol.
#
# Provided for you below is the function code that allows you to get the visualization output of any tensorflow weight layer you want. The inputs to the function should be a stimulus image, one used during training or a new one you provided, and then the tensorflow variable name that represents the layer's state during the training process; for instance, if you wanted to see what the [LeNet lab's](https://classroom.udacity.com/nanodegrees/nd013/parts/fbf77062-5703-404e-b60c-95b78b2f3f9e/modules/6df7ae49-c61c-4bb2-a23e-6527e69209ec/lessons/601ae704-1035-4287-8b11-e2c2716217ad/concepts/d4aca031-508f-4e0b-b493-e7b706120f81) feature maps looked like for its second convolutional layer you could enter conv2 as the tf_activation variable.
#
# For an example of what feature map outputs look like, check out NVIDIA's results in their paper [End-to-End Deep Learning for Self-Driving Cars](https://devblogs.nvidia.com/parallelforall/deep-learning-self-driving-cars/) in the section Visualization of internal CNN State. NVIDIA was able to show that their network's inner weights had high activations to road boundary lines by comparing feature maps from an image with a clear path to one without. Try experimenting with a similar test to show that your trained network's weights are looking for interesting features, whether it's looking at differences in feature maps from images with or without a sign, or even what feature maps look like in a trained network vs a completely untrained one on the same sign image.
#
# <figure>
#  <img src="visualize_cnn.png" width="380" alt="Combined Image" />
#  <figcaption>
#  <p></p>
#  <p style="text-align: center;"> Your output should look something like this (above)</p>
#  </figcaption>
# </figure>
#  <p></p>
#

# +
### Visualize your network's feature maps here.
### Feel free to use as many code cells as needed.
# image_input: the test image being fed into the network to produce the feature maps
# tf_activation: should be a tf variable name used during your training procedure that represents the calculated state of a specific weight layer
# activation_min/max: can be used to view the activation contrast in more detail; by default matplotlib sets min and max to the actual min and max values of the output
# plt_num: used to plot out multiple different weight feature map sets on the same block, just extend the plt number for each new feature map entry

def outputFeatureMap(image_input, tf_activation, activation_min=-1, activation_max=-1, plt_num=1):
    # Here make sure to preprocess your image_input in a way your network expects
    # with size, normalization, etc. if needed
    # image_input =
    # Note: x should be the same name as your network's tensorflow data placeholder variable
    # If you get an error tf_activation is not defined it may be having trouble accessing the variable from inside a function
    activation = tf_activation.eval(session=sess, feed_dict={x: image_input})
    featuremaps = activation.shape[3]
    plt.figure(plt_num, figsize=(15,15))
    for featuremap in range(featuremaps):
        plt.subplot(6, 8, featuremap+1) # sets the number of feature maps to show on each row and column
        plt.title('FeatureMap ' + str(featuremap)) # displays the feature map number
        # use `and` here: the bitwise `&` binds more tightly than `!=` and gives the wrong result
        if activation_min != -1 and activation_max != -1:
            plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin=activation_min, vmax=activation_max, cmap="gray")
        elif activation_max != -1:
            plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmax=activation_max, cmap="gray")
        elif activation_min != -1:
            plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin=activation_min, cmap="gray")
        else:
            plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", cmap="gray")
# -

with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('.'))
    conv1_s = sess.graph.get_tensor_by_name('conv1:0')
    outputFeatureMap(saved_images, conv1_s, plt_num=1)
    conv1_s = sess.graph.get_tensor_by_name('max1:0')
    outputFeatureMap(saved_images, conv1_s, plt_num=2)
    conv2_s = sess.graph.get_tensor_by_name('conv2:0')
    outputFeatureMap(saved_images, conv2_s, plt_num=3)
    conv2_s = sess.graph.get_tensor_by_name('max2:0')
    outputFeatureMap(saved_images, conv2_s, plt_num=4)
Traffic_Sign_Classifier.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # --- # + [markdown] origin_pos=0 # # 深度循环神经网络 # # :label:`sec_deep_rnn` # # 到目前为止,我们只讨论了具有一个单向隐藏层的循环神经网络。 # 其中,隐变量和观测值与具体的函数形式的交互方式是相当随意的。 # 只要交互类型建模具有足够的灵活性,这就不是一个大问题。 # 然而,对于一个单层来说,这可能具有相当的挑战性。 # 之前在线性模型中,我们通过添加更多的层来解决这个问题。 # 而在循环神经网络中,我们首先需要确定如何添加更多的层, # 以及在哪里添加额外的非线性,因此这个问题有点棘手。 # # 事实上,我们可以将多层循环神经网络堆叠在一起, # 通过对几个简单层的组合,产生了一个灵活的机制。 # 特别是,数据可能与不同层的堆叠有关。 # 例如,我们可能希望保持有关金融市场状况 # (熊市或牛市)的宏观数据可用, # 而微观数据只记录较短期的时间动态。 # # :numref:`fig_deep_rnn`描述了一个具有$L$个隐藏层的深度循环神经网络, # 每个隐状态都连续地传递到当前层的下一个时间步和下一层的当前时间步。 # # ![深度循环神经网络结构](../img/deep-rnn.svg) # :label:`fig_deep_rnn` # # ## 函数依赖关系 # # 我们可以将深度架构中的函数依赖关系形式化, # 这个架构是由 :numref:`fig_deep_rnn`中描述了$L$个隐藏层构成。 # 后续的讨论主要集中在经典的循环神经网络模型上, # 但是这些讨论也适应于其他序列模型。 # # 假设在时间步$t$有一个小批量的输入数据 # $\mathbf{X}_t \in \mathbb{R}^{n \times d}$ # (样本数:$n$,每个样本中的输入数:$d$)。 # 同时,将$l^\mathrm{th}$隐藏层($l=1,\ldots,L$) # 的隐状态设为$\mathbf{H}_t^{(l)} \in \mathbb{R}^{n \times h}$ # (隐藏单元数:$h$), # 输出层变量设为$\mathbf{O}_t \in \mathbb{R}^{n \times q}$ # (输出数:$q$)。 # 设置$\mathbf{H}_t^{(0)} = \mathbf{X}_t$, # 第$l$个隐藏层的隐状态使用激活函数$\phi_l$,则: # # $$\mathbf{H}_t^{(l)} = \phi_l(\mathbf{H}_t^{(l-1)} \mathbf{W}_{xh}^{(l)} + \mathbf{H}_{t-1}^{(l)} \mathbf{W}_{hh}^{(l)} + \mathbf{b}_h^{(l)}),$$ # :eqlabel:`eq_deep_rnn_H` # # 其中,权重$\mathbf{W}_{xh}^{(l)} \in \mathbb{R}^{h \times h}$, # $\mathbf{W}_{hh}^{(l)} \in \mathbb{R}^{h \times h}$和 # 偏置$\mathbf{b}_h^{(l)} \in \mathbb{R}^{1 \times h}$ # 都是第$l$个隐藏层的模型参数。 # # 最后,输出层的计算仅基于第$l$个隐藏层最终的隐状态: # # $$\mathbf{O}_t = \mathbf{H}_t^{(L)} \mathbf{W}_{hq} + \mathbf{b}_q,$$ # # 其中,权重$\mathbf{W}_{hq} \in \mathbb{R}^{h \times q}$和偏置$\mathbf{b}_q \in \mathbb{R}^{1 \times q}$都是输出层的模型参数。 # # 与多层感知机一样,隐藏层数目$L$和隐藏单元数目$h$都是超参数。 # 也就是说,它们可以由我们调整的。 # 另外,用门控循环单元或长短期记忆网络的隐状态 # 来代替 :eqref:`eq_deep_rnn_H`中的隐状态进行计算, # 可以很容易地得到深度门控循环神经网络或深度长短期记忆神经网络。 # # ## 简洁实现 # # 实现多层循环神经网络所需的许多逻辑细节在高级API中都是现成的。 # 简单起见,我们仅示范使用此类内置函数的实现方式。 # 以长短期记忆网络模型为例, # 该代码与之前在 :numref:`sec_lstm`中使用的代码非常相似, # 实际上唯一的区别是我们指定了层的数量, # 而不是使用单一层这个默认值。 # 像往常一样,我们从加载数据集开始。 # # + origin_pos=1 tab=["mxnet"] from mxnet import npx from mxnet.gluon import rnn from d2l import mxnet as d2l npx.set_np() batch_size, num_steps = 32, 35 train_iter, vocab = d2l.load_data_time_machine(batch_size, num_steps) # + [markdown] origin_pos=3 # 像选择超参数这类架构决策也跟 :numref:`sec_lstm`中的决策非常相似。 # 因为我们有不同的词元,所以输入和输出都选择相同数量,即`vocab_size`。 # 隐藏单元的数量仍然是$256$。 # 唯一的区别是,我们现在(**通过`num_layers`的值来设定隐藏层数**)。 # # + origin_pos=4 tab=["mxnet"] vocab_size, num_hiddens, num_layers = len(vocab), 256, 2 device = d2l.try_gpu() lstm_layer = rnn.LSTM(num_hiddens, num_layers) model = d2l.RNNModel(lstm_layer, len(vocab)) # + [markdown] origin_pos=6 # ## [**训练**]与预测 # # 由于使用了长短期记忆网络模型来实例化两个层,因此训练速度被大大降低了。 # # + origin_pos=7 tab=["mxnet"] num_epochs, lr = 500, 2 d2l.train_ch8(model, train_iter, vocab, lr, num_epochs, device) # + [markdown] origin_pos=8 # ## 小结 # # * 在深度循环神经网络中,隐状态的信息被传递到当前层的下一时间步和下一层的当前时间步。 # * 有许多不同风格的深度循环神经网络, # 如长短期记忆网络、门控循环单元、或经典循环神经网络。 # 这些模型在深度学习框架的高级API中都有涵盖。 # * 总体而言,深度循环神经网络需要大量的调参(如学习率和修剪) # 来确保合适的收敛,模型的初始化也需要谨慎。 # # ## 练习 # # 1. 基于我们在 :numref:`sec_rnn_scratch`中讨论的单层实现, # 尝试从零开始实现两层循环神经网络。 # 1. 在本节训练模型中,比较使用门控循环单元替换长短期记忆网络后模型的精确度和训练速度。 # 1. 如果增加训练数据,你能够将困惑度降到多低? # 1. 在为文本建模时,是否可以将不同作者的源数据合并?有何优劣呢? # # + [markdown] origin_pos=9 tab=["mxnet"] # [Discussions](https://discuss.d2l.ai/t/2771) #
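# As a complement to the high-level `rnn.LSTM(num_hiddens, num_layers)` call above, the sketch below spells out the stacked recurrence $\mathbf{H}_t^{(l)} = \phi_l(\mathbf{H}_t^{(l-1)} \mathbf{W}_{xh}^{(l)} + \mathbf{H}_{t-1}^{(l)} \mathbf{W}_{hh}^{(l)} + \mathbf{b}_h^{(l)})$ and $\mathbf{O}_t = \mathbf{H}_t^{(L)} \mathbf{W}_{hq} + \mathbf{b}_q$ in plain NumPy for a vanilla (non-gated) multilayer RNN. It is only an illustration of the forward pass with randomly initialized parameters, not part of the d2l training code; exercise 1 above asks for a full from-scratch implementation.

# +
import numpy as np

def deep_rnn_forward(X, Wxh, Whh, bh, Whq, bq, phi=np.tanh):
    """Forward pass of an L-layer vanilla RNN; X has shape (num_steps, n, d)."""
    L = len(Wxh)
    n = X.shape[1]
    H = [np.zeros((n, Whh[l].shape[0])) for l in range(L)]  # H_0^{(l)} = 0
    outputs = []
    for X_t in X:                          # iterate over time steps
        layer_input = X_t                  # H_t^{(0)} = X_t
        for l in range(L):                 # iterate over layers
            H[l] = phi(layer_input @ Wxh[l] + H[l] @ Whh[l] + bh[l])
            layer_input = H[l]             # the hidden state feeds the next layer
        outputs.append(H[-1] @ Whq + bq)   # the output uses only the top layer
    return np.stack(outputs), H

# Tiny smoke test: n=2 samples, d=5 inputs, h=4 hidden units, q=3 outputs, L=2 layers, 6 steps.
n, d, h, q, L, T = 2, 5, 4, 3, 2, 6
rng = np.random.default_rng(0)
Wxh = [rng.normal(scale=0.1, size=(d if l == 0 else h, h)) for l in range(L)]
Whh = [rng.normal(scale=0.1, size=(h, h)) for _ in range(L)]
bh = [np.zeros(h) for _ in range(L)]
Whq, bq = rng.normal(scale=0.1, size=(h, q)), np.zeros(q)
O, state = deep_rnn_forward(rng.normal(size=(T, n, d)), Wxh, Whh, bh, Whq, bq)
print(O.shape)  # (6, 2, 3)
# -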
d2l/mxnet/chapter_recurrent-modern/deep-rnn.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # <img src="static/pybofractal.png" alt="Pybonacci" style="width: 200px;"/> # <img src="static/cacheme_logo.png" alt="CAChemE" style="width: 300px;"/> # # The Transport Problem # # > **Note:** Adapted from https://github.com/Pyomo/PyomoGallery, see LICENSE.BSD # # ## Summary # # The goal of the Transport Problem is to select the quantities of an homogeneous good that has several production plants and several punctiform markets as to minimise the transportation costs. # # It is the default tutorial for the GAMS language, and GAMS equivalent code is inserted as single-dash comments. The original GAMS code needs slighly different ordering of the commands and it's available at http://www.gams.com/mccarl/trnsport.gms. # # ## Problem Statement # # The Transport Problem can be formulated mathematically as a linear programming problem using the following model. # # ### Sets # # $I$ = set of canning plants # $J$ = set of markets # # ### Parameters # # $a_i$ = capacity of plant $i$ in cases, $\forall i \in I$ <br /> # $b_j$ = demand at market $j$ in cases, $\forall j \in J$ <br /> # $d_{i,j}$ = distance in thousands of miles, $\forall i \in I, \forall j \in J$ <br /> # $f$ = freight in dollars per case per thousand miles <br /> # $c_{i,j}$ = transport cost in thousands of dollars per case # # $c_{i,j}$ is obtained exougenously to the optimisation problem as $c_{i,j} = f \cdot d_{i,j}$, $\forall i \in I, \forall j \in J$ # # ### Variables # $x_{i,j}$ = shipment quantities in cases <br /> # z = total transportation costs in thousands of dollars # # ### Objective # # Minimize the total cost of the shipments: <br /> # $\min_{x} z = \sum_{i \in I} \sum_{j \in J} c_{i,j} x_{i,j}$ # # ### Constraints # # # Observe supply limit at plant i: <br /> # $\sum_{i \in I} x_{i,j} \leq a_{i}$, $\forall i \in I$ # # Satisfy demand at market j: <br /> # $\sum_{j \in J} x_{i,j} \geq b_{j}$, $\forall j \in J$ # # Non-negative transportation quantities <br /> # $x_{i,j} \geq 0$, $\forall i \in I, \forall j \in J$ # ## Pyomo Formulation # ### Creation of the Model # # In pyomo everything is an object. The various components of the model (sets, parameters, variables, constraints, objective..) are all attributes of the main model object while being objects themselves. # # There are two type of models in pyomo: A `ConcreteModel` is one where all the data is defined at the model creation. We are going to use this type of model in this tutorial. Pyomo however supports also an `AbstractModel`, where the model structure is firstly generated and then particular instances of the model are generated with a particular set of data. # # The first thing to do in the script is to load the pyomo library and create a new `ConcreteModel` object. We have little imagination here, and we call our model "model". You can give it whatever name you want. However, if you give your model an other name, you also need to create a `model` object at the end of your script: # + # Import of the pyomo module from pyomo.environ import * # Creation of a Concrete Model model = ConcreteModel() # - # ### Set Definitions # # Sets are created as attributes object of the main model objects and all the information is given as parameter in the constructor function. 
Specifically, we are passing to the constructor the initial elements of the set and a documentation string to keep track on what our set represents: ## Define sets ## # Sets # i canning plants / seattle, san-diego / # j markets / new-york, chicago, topeka / ; model.i = Set(initialize=['seattle','san-diego'], doc='Canning plans') model.j = Set(initialize=['new-york','chicago', 'topeka'], doc='Markets') # ### Parameters # # Parameter objects are created specifying the sets over which they are defined and are initialised with either a python dictionary or a scalar: ## Define parameters ## # Parameters # a(i) capacity of plant i in cases # / seattle 350 # san-diego 600 / # b(j) demand at market j in cases # / new-york 325 # chicago 300 # topeka 275 / ; model.a = Param(model.i, initialize={'seattle':350,'san-diego':600}, doc='Capacity of plant i in cases') model.b = Param(model.j, initialize={'new-york':325,'chicago':300,'topeka':275}, doc='Demand at market j in cases') # Table d(i,j) distance in thousands of miles # new-york chicago topeka # seattle 2.5 1.7 1.8 # san-diego 2.5 1.8 1.4 ; dtab = { ('seattle', 'new-york') : 2.5, ('seattle', 'chicago') : 1.7, ('seattle', 'topeka') : 1.8, ('san-diego','new-york') : 2.5, ('san-diego','chicago') : 1.8, ('san-diego','topeka') : 1.4, } model.d = Param(model.i, model.j, initialize=dtab, doc='Distance in thousands of miles') # Scalar f freight in dollars per case per thousand miles /90/ ; model.f = Param(initialize=90, doc='Freight in dollars per case per thousand miles') # A third, powerful way to initialize a parameter is using a user-defined function. # # This function will be automatically called by pyomo with any possible (i,j) set. In this case pyomo will actually call `c_init()` six times in order to initialize the `model.c` parameter. # + # Parameter c(i,j) transport cost in thousands of dollars per case ; # c(i,j) = f * d(i,j) / 1000 ; def c_init(model, i, j): return model.f * model.d[i,j] / 1000 model.c = Param(model.i, model.j, initialize=c_init, doc='Transport cost in thousands of dollar per case') # - # ### Variables # # Similar to parameters, variables are created specifying their domain(s). For variables we can also specify the upper/lower bounds in the constructor. # # Differently from GAMS, we don't need to define the variable that is on the left hand side of the objective function. ## Define variables ## # Variables # x(i,j) shipment quantities in cases # z total transportation costs in thousands of dollars ; # Positive Variable x ; model.x = Var(model.i, model.j, bounds=(0.0,None), doc='Shipment quantities in case') # ### Constrains # # At this point, it should not be a surprise that constrains are again defined as model objects with the required information passed as parameter in the constructor function. # + ## Define contrains ## # supply(i) observe supply limit at plant i # supply(i) .. sum (j, x(i,j)) =l= a(i) def supply_rule(model, i): return sum(model.x[i,j] for j in model.j) <= model.a[i] model.supply = Constraint(model.i, rule=supply_rule, doc='Observe supply limit at plant i') # demand(j) satisfy demand at market j ; # demand(j) .. 
sum(i, x(i,j)) =g= b(j); def demand_rule(model, j): return sum(model.x[i,j] for i in model.i) >= model.b[j] model.demand = Constraint(model.j, rule=demand_rule, doc='Satisfy demand at market j') # - # The above code takes advantage of [list comprehensions](https://docs.python.org/2/tutorial/datastructures.html#list-comprehensions), a powerful feature of the python language that provides a concise way to loop over a list. If we take the supply_rule as example, this is actually called two times by pyomo (once for each of the elements of i). Without list comprehensions we would have had to write our function using a for loop, like: def supply_rule(model, i): supply = 0.0 for j in model.j: supply += model.x[i,j] return supply <= model.a[i] # Using list comprehension is however quicker to code and more readable. # ### Objective and Solving # # The definition of the objective is similar to those of the constrains, except that most solvers require a scalar objective function, hence a unique function, and we can specify the sense (direction) of the optimisation. # + ## Define Objective and solve ## # cost define objective function # cost .. z =e= sum((i,j), c(i,j)*x(i,j)) ; # Model transport /all/ ; # Solve transport using lp minimizing z ; def objective_rule(model): return sum(model.c[i,j]*model.x[i,j] for i in model.i for j in model.j) model.objective = Objective(rule=objective_rule, sense=minimize, doc='Define objective function') # - # As we are here looping over two distinct sets, we can see how list comprehension really simplifies the code. The objective function could have been written without list comprehension as: def objective_rule(model): obj = 0.0 for ki in model.i: for kj in model.j: obj += model.c[ki,kj]*model.x[ki,kj] return obj # ### Retrieving the Output # # We use the `pyomo_postprocess()` function to retrieve the output and do something with it. For example, we could display solution values (see below), plot a graph with [matplotlib](http://matplotlib.org/) or save it in a csv file. # # This function is called by pyomo after the solver has finished. ## Display of the output ## # Display x.l, x.m ; def pyomo_postprocess(options=None, instance=None, results=None): model.x.display() # We can print model structure information with `model.pprint()` (“pprint” stand for “pretty print”). # Results are also by default saved in a `results.json` file or, if PyYAML is installed in the system, in `results.yml`. # # ### Editing and Running the Script # # Differently from GAMS, you can use whatever editor environment you wish to code a pyomo script. If you don't need debugging features, a simple text editor like Notepad++ (in windows), gedit or kate (in Linux) will suffice. They already have syntax highlight for python. # # If you want advanced features and debugging capabilities you can use a dedicated Python IDE, like e.g. Spyder. # # You will normally run the script as `pyomo solve –solver=glpk transport.py`. You can output solver specific output adding the option `–stream-output`. 
If you want to run the script as `python transport.py` add the following lines at the end: # + # This emulates what the pyomo command-line tools does from pyomo.opt import SolverFactory import pyomo.environ opt = SolverFactory("glpk") results = opt.solve(model) # sends results to stdout results.write() print("\nDisplaying Solution\n" + '-'*60) pyomo_postprocess(None, None, results) # - # Finally, if you are very lazy and want to run the script with just `./transport.py` (and you are in Linux) add the following lines at the top: # + # #!/usr/bin/env python # -*- coding: utf-8 -*- # - # ## Complete script # # Here is the complete script: # !cat transport.py # ## Solutions # Running the model lead to the following output: # !pyomo solve --solver=glpk transport.py # By default, the optimization results are stored in the file `results.json`: # !cat results.json # This solution shows that the minimum transport costs is attained when 300 cases are sent from the Seattle plant to the Chicago market, 50 cases from Seattle to New-York and 275 cases each are sent from San-Diego plant to New-York and Topeka markets. # # The total transport costs will be $153,675. # ## References # # * Original problem formulation: # - Dantzig, <NAME>, Chapter 3.3. In Linear Programming and Extensions. Princeton University Press, Princeton, New Jersey, 1963. # * GAMS implementation: # - <NAME>, Chapter 2: A GAMS Tutorial. In GAMS: A User's Guide. The Scientific Press, Redwood City, California, 1988. # * Pyomo translation: <NAME>
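# ## Appendix: checking the reported optimum
#
# As a quick sanity check on the numbers above, we can recompute the objective value by hand from the reported optimal shipments and the unit costs $c_{i,j} = f \cdot d_{i,j} / 1000$. This is only a verification sketch, independent of Pyomo; the shipment quantities are the ones listed in the solution discussion.

# Optimal shipments (cases) and the distances (thousands of miles) defined earlier.
shipments = {('seattle', 'chicago'): 300, ('seattle', 'new-york'): 50,
             ('san-diego', 'new-york'): 275, ('san-diego', 'topeka'): 275}
distance = {('seattle', 'chicago'): 1.7, ('seattle', 'new-york'): 2.5,
            ('san-diego', 'new-york'): 2.5, ('san-diego', 'topeka'): 1.4}
freight = 90  # dollars per case per thousand miles

total = sum(qty * freight * distance[route] / 1000 for route, qty in shipments.items())
print("Total transport cost: {:.3f} thousand dollars".format(total))  # 153.675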
04_TransportProblem.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: conda_mxnet_p36
#     language: python
#     name: conda_mxnet_p36
# ---

# # Amazon SageMaker Object Detection using the Image and JSON format
#
# 1. [Introduction](#Introduction)
# 2. [Setup](#Setup)
# 3. [Data Preparation](#Data-Preparation)
#   1. [Download data](#Download-Data)
#   2. [Prepare Dataset](#Prepare-dataset)
#   3. [Upload to S3](#Upload-to-S3)
# 4. [Training](#Training)
# 5. [Hosting](#Hosting)
# 6. [Inference](#Inference)

# ## Introduction
#
# Object detection is the process of identifying and localizing objects in an image. A typical object detection solution takes in an image as input and provides a bounding box on the image where an object of interest is, along with identifying what object the box encapsulates. But before we have this solution, we need to acquire and process a training dataset, create and set up a training job for the algorithm so that it can learn about the dataset, and then host the algorithm as an endpoint to which we can supply the query image.
#
# This notebook is an end-to-end example introducing the Amazon SageMaker Object Detection algorithm. In this demo, we will demonstrate how to train and host an object detection model on the [COCO dataset](http://cocodataset.org/) using the Single Shot MultiBox Detector ([SSD](https://arxiv.org/abs/1512.02325)) algorithm. In doing so, we will also demonstrate how to construct a training dataset using the JSON format, as this is the format that the training job will consume. We also allow the RecordIO format, which is illustrated in the [RecordIO Notebook](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/introduction_to_amazon_algorithms/object_detection_pascalvoc_coco/object_detection_recordio_format.ipynb). We will also demonstrate how to host and validate this trained model.

# ## Setup
#
# To train the Object Detection algorithm on Amazon SageMaker, we need to set up and authenticate the use of AWS services. To begin with, we need an AWS account role with SageMaker access. This role, which is used to give SageMaker access to your data in S3, will automatically be obtained from the role used to start the notebook.

# +
# %%time
import sagemaker
from sagemaker import get_execution_role

role = get_execution_role()
print(role)
sess = sagemaker.Session()
# -

# We also need the S3 bucket that you want to use for training and to store the trained model artifacts. In this notebook, we require a custom bucket that exists, so as to keep the naming clean. You can end up using a default bucket that SageMaker comes with as well.

# + tags=["parameters"]
bucket = '<your_s3_bucket_name_here>' # custom bucket name.
# bucket = sess.default_bucket()
prefix = 'DEMO-ObjectDetection'

# +
from sagemaker.amazon.amazon_estimator import get_image_uri

training_image = get_image_uri(sess.boto_region_name, 'object-detection', repo_version="latest")
print (training_image)
# -

# ## Data Preparation
# [MS COCO](http://cocodataset.org/#download) is a large-scale dataset for multiple computer vision tasks, including object detection, segmentation, and captioning. In this notebook, we will use the object detection dataset. Since COCO is a relatively large dataset, we will only use the validation set from 2017 and split it into training and validation sets. The data set from 2017 contains 5000 images with objects from 80 categories.
# # ### Datset License # The annotations in this dataset belong to the COCO Consortium and are licensed under a Creative Commons Attribution 4.0 License. The COCO Consortium does not own the copyright of the images. Use of the images must abide by the Flickr Terms of Use. The users of the images accept full responsibility for the use of the dataset, including but not limited to the use of any copies of copyrighted images that they may create from the dataset. Before you use this data for any other purpose than this example, you should understand the data license, described at http://cocodataset.org/#termsofuse" # ### Download data # Let us download the 2017 validation datasets from COCO and then unpack them. # + import os import urllib.request def download(url): filename = url.split("/")[-1] if not os.path.exists(filename): urllib.request.urlretrieve(url, filename) # MSCOCO validation image files download('http://images.cocodataset.org/zips/val2017.zip') download('http://images.cocodataset.org/annotations/annotations_trainval2017.zip') # + language="bash" # unzip -qo val2017.zip # unzip -qo annotations_trainval2017.zip # rm val2017.zip annotations_trainval2017.zip # - # Before using this dataset, we need to perform some data cleaning. The algorithm expects the dataset in a particular JSON format. The COCO dataset, while containing annotations in JSON, does not follow our specifications. We will use this as an opportunity to introduce our JSON format by performing this convertion. To begin with we create appropriate directories for training images, validation images, as well as the annotation files for both. # + language="bash" # #Create folders to store the data and annotation files # mkdir generated train train_annotation validation validation_annotation # - # ### Prepare dataset # # Next, we should convert the annotation file from the COCO dataset into json annotation files. We will require one annotation for each image. # # The Amazon SageMaker Object Detection algorithm expects lables to be indexed from `0`. It also expects lables to be unique, successive and not skip any integers. For instance, if there are ten classes, the algorithm expects and the labels only be in the set `[0,1,2,3,4,5,6,7,8,9]`. # # In the COCO validation set unfortunately, the labels do not satistify this requirement. Some indices are skipped and the labels start from `1`. We therefore need a mapper that will convert this index system to our requirement. Let us create a generic mapper therefore that could also be used to other datasets that might have nonunique or even string labels. All we need in a dictionary that would create a key-value mapping where an original label is hashed to a label that we require. Consider the following method that returns such a dictionary for the COCO validation dataset. # + import json import logging def get_coco_mapper(): original_list = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 27, 28, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 67, 70, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 84, 85, 86, 87, 88, 89, 90] iter_counter = 0 COCO = {} for orig in original_list: COCO[orig] = iter_counter iter_counter += 1 return COCO # - # Let us use this dictionary, to create a look up method. Let us do so in a way that any dictionary could be used to create this method. 
# + def get_mapper_fn(map): def mapper(in_category): return map[in_category] return mapper fix_index_mapping = get_mapper_fn(get_coco_mapper()) # - # The method `fix_index_mapping` is essentially a look-up method, which we can use to convert lables. Let us now iterate over every annotation in the COCO dataset and prepare our data. Note how the keywords are created and a structure is established. For more information on the JSON format details, refer the [documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/object-detection.html). file_name = './annotations/instances_val2017.json' with open(file_name) as f: js = json.load(f) images = js['images'] categories = js['categories'] annotations = js['annotations'] for i in images: jsonFile = i['file_name'] jsonFile = jsonFile.split('.')[0]+'.json' line = {} line['file'] = i['file_name'] line['image_size'] = [{ 'width':int(i['width']), 'height':int(i['height']), 'depth':3 }] line['annotations'] = [] line['categories'] = [] for j in annotations: if j['image_id'] == i['id'] and len(j['bbox']) > 0: line['annotations'].append({ 'class_id':int(fix_index_mapping(j['category_id'])), 'top':int(j['bbox'][1]), 'left':int(j['bbox'][0]), 'width':int(j['bbox'][2]), 'height':int(j['bbox'][3]) }) class_name = '' for k in categories: if int(j['category_id']) == k['id']: class_name = str(k['name']) assert class_name is not '' line['categories'].append({ 'class_id':int(j['category_id']), 'name':class_name }) if line['annotations']: with open(os.path.join('generated', jsonFile),'w') as p: json.dump(line,p) # + import os import json jsons = os.listdir('generated') print ('There are {} images have annotation files'.format(len(jsons))) # - # After removing the images without annotations, we have 4952 annotated images. Let us split this dataset and create our training and validation datasets, with which our algorithm will train. To do so, we will simply split the dataset into training and validation data and move them to their respective folders. # + import shutil train_jsons = jsons[:4452] val_jsons = jsons[4452:] #Moving training files to the training folders for i in train_jsons: image_file = './val2017/'+i.split('.')[0]+'.jpg' shutil.move(image_file, './train/') shutil.move('./generated/'+i, './train_annotation/') #Moving validation files to the validation folders for i in val_jsons: image_file = './val2017/'+i.split('.')[0]+'.jpg' shutil.move(image_file, './validation/') shutil.move('./generated/'+i, './validation_annotation/') # - # ### Upload to S3 # Next step in this process is to upload the data to the S3 bucket, from which the algorithm can read and use the data. We do this using multiple channels. Channels are simply directories in the bucket that differentiate between training and validation data. Let us simply call these directories `train` and `validation`. We will therefore require four channels: two for the data and two for annotations, the annotations ones named with the suffixes `_annotation`. 
# + # %%time train_channel = prefix + '/train' validation_channel = prefix + '/validation' train_annotation_channel = prefix + '/train_annotation' validation_annotation_channel = prefix + '/validation_annotation' sess.upload_data(path='train', bucket=bucket, key_prefix=train_channel) sess.upload_data(path='validation', bucket=bucket, key_prefix=validation_channel) sess.upload_data(path='train_annotation', bucket=bucket, key_prefix=train_annotation_channel) sess.upload_data(path='validation_annotation', bucket=bucket, key_prefix=validation_annotation_channel) s3_train_data = 's3://{}/{}'.format(bucket, train_channel) s3_validation_data = 's3://{}/{}'.format(bucket, validation_channel) s3_train_annotation = 's3://{}/{}'.format(bucket, train_annotation_channel) s3_validation_annotation = 's3://{}/{}'.format(bucket, validation_annotation_channel) # - # Next we need to setup an output location at S3, where the model artifact will be dumped. These artifacts are also the output of the algorithm's traning job. s3_output_location = 's3://{}/{}/output'.format(bucket, prefix) # ## Training # Now that we are done with all the setup that is needed, we are ready to train our object detector. To begin, let us create a ``sageMaker.estimator.Estimator`` object. This estimator will launch the training job. od_model = sagemaker.estimator.Estimator(training_image, role, train_instance_count=1, train_instance_type='ml.p3.2xlarge', train_volume_size = 50, train_max_run = 360000, input_mode = 'File', output_path=s3_output_location, sagemaker_session=sess) # The object detection algorithm at its core is the [Single-Shot Multi-Box detection algorithm (SSD)](https://arxiv.org/abs/1512.02325). This algorithm uses a `base_network`, which is typically a [VGG](https://arxiv.org/abs/1409.1556) or a [ResNet](https://arxiv.org/abs/1512.03385). The Amazon SageMaker object detection algorithm supports VGG-16 and ResNet-50 now. It also has a lot of options for hyperparameters that help configure the training job. The next step in our training, is to setup these hyperparameters and data channels for training the model. Consider the following example definition of hyperparameters. See the SageMaker Object Detection [documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/object-detection.html) for more details on the hyperparameters. # # One of the hyperparameters here for instance is the `epochs`. This defines how many passes of the dataset we iterate over and determines that training time of the algorithm. For the sake of demonstration let us run only `30` epochs. od_model.set_hyperparameters(base_network='resnet-50', use_pretrained_model=1, num_classes=80, mini_batch_size=16, epochs=30, learning_rate=0.001, lr_scheduler_step='10', lr_scheduler_factor=0.1, optimizer='sgd', momentum=0.9, weight_decay=0.0005, overlap_threshold=0.5, nms_threshold=0.45, image_shape=512, label_width=600, num_training_samples=4452) # Now that the hyperparameters are setup, let us prepare the handshake between our data channels and the algorithm. To do this, we need to create the `sagemaker.session.s3_input` objects from our data channels. These objects are then put in a simple dictionary, which the algorithm consumes. Notice that here we use a `content_type` as `image/jpeg` for the image channels and the annoation channels. 
Notice how unlike the [RecordIO format](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/introduction_to_amazon_algorithms/object_detection_pascalvoc_coco/object_detection_recordio_format.ipynb), we use four channels here. # + train_data = sagemaker.session.s3_input(s3_train_data, distribution='FullyReplicated', content_type='image/jpeg', s3_data_type='S3Prefix') validation_data = sagemaker.session.s3_input(s3_validation_data, distribution='FullyReplicated', content_type='image/jpeg', s3_data_type='S3Prefix') train_annotation = sagemaker.session.s3_input(s3_train_annotation, distribution='FullyReplicated', content_type='image/jpeg', s3_data_type='S3Prefix') validation_annotation = sagemaker.session.s3_input(s3_validation_annotation, distribution='FullyReplicated', content_type='image/jpeg', s3_data_type='S3Prefix') data_channels = {'train': train_data, 'validation': validation_data, 'train_annotation': train_annotation, 'validation_annotation':validation_annotation} # - # We have our `Estimator` object, we have set the hyperparameters for this object and we have our data channels linked with the algorithm. The only remaining thing to do is to train the algorithm. The following cell will train the algorithm. Training the algorithm involves a few steps. Firstly, the instances that we requested while creating the `Estimator` classes are provisioned and are setup with the appropriate libraries. Then, the data from our channels are downloaded into the instance. Once this is done, the training job begins. The provisioning and data downloading will take time, depending on the size of the data. Therefore it might be a few minutes before we start getting data logs for our training jobs. The data logs will also print out Mean Average Precision (mAP) on the validation data, among other losses, for every run of the dataset once or one epoch. This metric is a proxy for the quality of the algorithm. # # Once the job has finished a "Job complete" message will be printed. The trained model can be found in the S3 bucket that was setup as `output_path` in the estimator. od_model.fit(inputs=data_channels, logs=True) # ## Hosting # Once the training is done, we can deploy the trained model as an Amazon SageMaker real-time hosted endpoint. This will allow us to make predictions (or inference) from the model. Note that we don't have to host on the same insantance (or type of instance) that we used to train. Training is a prolonged and compute heavy job that require a different of compute and memory requirements that hosting typically do not. We can choose any type of instance we want to host the model. In our case we chose the `ml.p3.2xlarge` instance to train, but we choose to host the model on the less expensive cpu instance, `ml.m4.xlarge`. The endpoint deployment can be accomplished as follows: object_detector = od_model.deploy(initial_instance_count = 1, instance_type = 'ml.m4.xlarge') # ## Inference # Now that the trained model is deployed at an endpoint that is up-and-running, we can use this endpoint for inference. To do this, let us download an image from [PEXELS](https://www.pexels.com/) which the algorithm has so-far not seen. # + # !wget -O test.jpg https://images.pexels.com/photos/980382/pexels-photo-980382.jpeg file_name = 'test.jpg' with open(file_name, 'rb') as image: f = image.read() b = bytearray(f) ne = open('n.txt','wb') ne.write(b) # - # Let us use our endpoint to try to detect objects within this image. 
Since the image is `jpeg`, we use the appropriate `content_type` to run the prediction job. The endpoint returns a JSON file that we can simply load and peek into. # + import json object_detector.content_type = 'image/jpeg' results = object_detector.predict(b) detections = json.loads(results) print (detections) # - # The results are in a format that is similar to the input .lst file (See [RecordIO Notebook](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/introduction_to_amazon_algorithms/object_detection_pascalvoc_coco/object_detection_recordio_format.ipynb) for more details on the .lst file definition. )with an addition of a confidence score for each detected object. The format of the output can be represented as `[class_index, confidence_score, xmin, ymin, xmax, ymax]`. Typically, we don't consider low-confidence predictions. # # We have provided additional script to easily visualize the detection outputs. You can visulize the high-confidence preditions with bounding box by filtering out low-confidence detections using the script below: def visualize_detection(img_file, dets, classes=[], thresh=0.6): """ visualize detections in one image Parameters: ---------- img : numpy.array image, in bgr format dets : numpy.array ssd detections, numpy.array([[id, score, x1, y1, x2, y2]...]) each row is one object classes : tuple or list of str class names thresh : float score threshold """ import random import matplotlib.pyplot as plt import matplotlib.image as mpimg img=mpimg.imread(img_file) plt.imshow(img) height = img.shape[0] width = img.shape[1] colors = dict() for det in dets: (klass, score, x0, y0, x1, y1) = det if score < thresh: continue cls_id = int(klass) if cls_id not in colors: colors[cls_id] = (random.random(), random.random(), random.random()) xmin = int(x0 * width) ymin = int(y0 * height) xmax = int(x1 * width) ymax = int(y1 * height) rect = plt.Rectangle((xmin, ymin), xmax - xmin, ymax - ymin, fill=False, edgecolor=colors[cls_id], linewidth=3.5) plt.gca().add_patch(rect) class_name = str(cls_id) if classes and len(classes) > cls_id: class_name = classes[cls_id] plt.gca().text(xmin, ymin - 2, '{:s} {:.3f}'.format(class_name, score), bbox=dict(facecolor=colors[cls_id], alpha=0.5), fontsize=12, color='white') plt.show() # For the sake of this notebook, we used a small portion of the COCO dataset for training and trained the model with only a few (30) epochs. This implies that the results might not be optimal. To achieve better detection results, you can try to use the more data from COCO dataset and train the model for more epochs. Tuning the hyperparameters, such as `mini_batch_size`, `learning_rate`, and `optimizer`, also helps to get a better detector. 
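# Before plotting, it can also be useful to summarize the raw predictions numerically. The small sketch below counts, per class index, how many detections clear a confidence threshold, relying only on the `[class_index, confidence_score, xmin, ymin, xmax, ymax]` output format described above (the class indices are mapped to names in the next cell).

# +
from collections import Counter

def count_confident_detections(dets, thresh=0.2):
    """Return {class_index: number of detections with confidence >= thresh}."""
    return dict(Counter(int(klass) for klass, score, *_ in dets if score >= thresh))

print(count_confident_detections(detections['prediction'], thresh=0.2))
# -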
# + object_categories = ['person', 'bicycle', 'car', 'motorbike', 'aeroplane', 'bus', 'train', 'truck', 'boat', 'traffic light', 'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow', 'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard', 'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'sofa', 'pottedplant', 'bed', 'diningtable', 'toilet', 'tvmonitor', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone', 'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear', 'hair drier', 'toothbrush'] # Setting a threshold 0.20 will only plot detection results that have a confidence score greater than 0.20. threshold = 0.20 # Visualize the detections. visualize_detection(file_name, detections['prediction'], object_categories, threshold) # - # ## Delete the Endpoint # Having an endpoint running will incur some costs. Therefore as a clean-up job, we should delete the endpoint. sagemaker.Session().delete_endpoint(object_detector.endpoint)
introduction_to_amazon_algorithms/object_detection_pascalvoc_coco/object_detection_image_json_format.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # As we discovered in the [Introduction](Introduction.ipynb), HoloViews allows plotting a variety of data types. Here we will use the sample data module and load the pandas and dask hvPlot API: import intake import numpy as np import hvplot.pandas import hvplot.dask # As we learned The hvPlot API closely mirrors the [Pandas plotting API](https://pandas.pydata.org/pandas-docs/stable/visualization.html), but instead of generating static images when used in a notebook, it uses HoloViews to generate either static or dynamically streaming Bokeh plots. Static plots can be used in any context, while streaming plots require a live [Jupyter notebook](http://jupyter.org) or a deployed [Bokeh Server app](https://bokeh.pydata.org/en/latest/docs/user_guide/server.html). # # HoloViews provides an extensive, very rich set of objects along with a powerful set of operations to apply, as you can find out in the [HoloViews User Guide](http://holoviews.org/user_guide/index.html). But here we will focus on the most essential mechanisms needed to make your data visualizable, without having to worry about the mechanics going on behind the scenes. # # We will be focusing on two different datasets: # # - A small CSV file of US crime data, broken down by state # - A larger Parquet-format file of airline flight data # # The ``hvplot.sample_data`` module makes these datasets Intake data catalogue, which we can load either using pandas: # + from hvplot.sample_data import us_crime, airline_flights crime = us_crime.read() print(type(crime)) crime.head() # - # Or using dask as a ``dask.DataFrame``: flights = airline_flights.to_dask().persist() print(type(flights)) flights.head() # ## The plot interface # The ``dask.dataframe.DataFrame.hvplot``, ``pandas.DataFrame.hvplot`` and ``intake.DataSource.plot`` interfaces (and Series equivalents) from HvPlot provide a powerful high-level API to generate complex plots. The ``.hvplot`` API can be called directly or used as a namespace to generate specific plot types. # ### The plot method # The most explicit way to use the plotting API is to specify the names of columns to plot on the ``x``- and ``y``-axis respectively: crime.hvplot.line(x='Year', y='Violent Crime rate') # As you'll see in more detail below, you can choose which kind of plot you want to use for the data: crime.hvplot(x='Year', y='Violent Crime rate', kind='scatter') # An additional convenience on top of this explicit API is to specify an additional ``by`` variable, which groups the data by one or more additional columns. As an example here we will plot the departure delay ('depdelay') as a function of 'distance', grouping the data by the 'carrier'. There are many available carriers, so we will select only two of them so that the plot is readable: flight_subset = flights[flights.carrier.isin([b'OH', b'F9'])] flight_subset.hvplot(x='distance', y='depdelay', by='carrier', kind='scatter', alpha=0.2, persist=True) # Here we have specified the `x` axis explicitly, which can be omitted if the Pandas index column is already set to what you want on the x axis. Similarly, here we specified the `y` axis; by default all of the non-index columns would be plotted (which would be a lot of data in this case). 
# If you don't specify the 'y' axis, it will have a default label named 'value', but you can then provide a y axis label explicitly using the ``value_label`` option.
#
# Putting all of this together, we will plot violent crime, robbery, and burglary rates on the y-axis, specifying 'Year' as the x-axis, and relabel the y-axis to display the 'Rate'.

crime.hvplot(x='Year', y=['Violent Crime rate', 'Robbery rate', 'Burglary rate'],
             value_label='Rate (per 100k people)')

# ### The hvplot namespace
#
# Instead of using the ``kind`` argument to the plot call, we can use the ``hvplot`` namespace, which lets us easily discover the range of plot types that are supported. Plot types available include:
#
# * <a href="#Area">``.area()``</a>: Plots an area chart similar to a line chart, except that the area under the curve is filled and can optionally be stacked
# * <a href="#Bars">``.bar()``</a>: Plots a bar chart that can be stacked or grouped
# * <a href="#Bivariate">``.bivariate()``</a>: Plots the 2D density of a set of points
# * <a href="#Box-Whisker-Plots">``.box()``</a>: Plots a box-whisker chart comparing the distribution of one or more variables
# * <a href="#HeatMap">``.heatmap()``</a>: Plots a heatmap to visualize a variable across two independent dimensions
# * <a href="#HexBins">``.hexbin()``</a>: Plots hex bins
# * <a href="#Histogram">``.hist()``</a>: Plots the distribution of one or more variables as a set of bins
# * <a href="#KDE">``.kde()``</a>: Plots the kernel density estimate of one or more variables
# * <a href="#The-plot-method">``.line()``</a>: Plots a line chart (such as for a time series)
# * <a href="#Scatter">``.scatter()``</a>: Plots a scatter chart comparing two variables
# * <a href="#Step">``.step()``</a>: Plots a step chart akin to a line plot
# * <a href="#Tables">``.table()``</a>: Generates a SlickGrid DataTable
# * <a href="#Violin-Plots">``.violin()``</a>: Plots a violin plot comparing the distribution of one or more variables using the kernel density estimate

# #### Area
#
# Like most other plot types, the ``area`` chart supports the three ways of defining a plot outlined above. An area chart is most useful when plotting multiple variables in a stacked chart. This can be achieved by specifying ``x``, ``y``, and ``by`` columns or using the ``columns`` and ``index``/``use_index`` (equivalent to ``x``) options:

crime.hvplot.area(x='Year', y=['Robbery', 'Aggravated assault'], stacked=True)

# We can also explicitly set ``stacked`` to False and define an ``alpha`` value to compare the values directly:

crime.hvplot.area(x='Year', y=['Aggravated assault', 'Robbery'], stacked=False, alpha=0.4)

# Another use for an area plot is to visualize the spread of a value. For instance, using the flights dataset we may want to see the spread in mean delay values across carriers. For that purpose we compute the mean delay by day and carrier, and then the min/max mean delay across all carriers:

flights.groupby(['day', 'carrier'])['carrier_delay'].mean().groupby('day').agg([np.min, np.max]).hvplot.area('day', 'amin', 'amax', alpha=0.2) *\
flights.groupby('day')['carrier_delay'].mean().hvplot()

# #### Bars
#
# In the simplest case we can use ``source.plot.bar`` to plot ``x`` against ``y``:

crime.hvplot.bar('Year', 'Violent Crime rate', rot=90)

# If we want to compare multiple columns instead, we can again use the ``index`` option to treat the 'Year' column as the index and then compare the specific columns.
Using the ``stacked`` option we can then compare the column values more easily: crime.hvplot.bar('Year', ['Violent crime total', 'Property crime total'], stacked=True, rot=90, width=800) # #### Scatter # # The scatter plot supports all the same features as the other chart types we have seen so far but can also be colored by another variable using the ``c`` option and allows declaring a ``cmap``. crime.hvplot.scatter('Violent Crime rate', 'Burglary rate', c='Year', cmap='viridis', size=12, colorbar=True) # #### Step # # A step chart is very similar to a line chart but instead of linearly interpolating between samples the step chart visualizes discrete steps. The point at which to step can be controlled via the ``where`` keyword allowing 'pre', 'mid' (default) and 'post' values: crime.hvplot.step(x='Year', y=['Robbery', 'Aggravated assault'], stacked=True) # #### HexBins # # You can create hexagonal bin plots with the ``hexbin`` method. Hexbin plots can be a useful alternative to scatter plots if your data are too dense to plot each point individually. flights.hvplot.hexbin(x='airtime', y='arrdelay', width=600, height=500) # #### Bivariate # # You can create a 2D density plot with the ``bivariate`` method. Bivariate plots can be a useful alternative to scatter plots if your data are too dense to plot each point individually. crime.hvplot.bivariate('Violent Crime rate', 'Burglary rate', colorbar=True, width=600, height=500) # #### HeatMap # # A ``HeatMap`` lets us view the relationship between three variables, so we specify the 'x' and 'y' variables and an additional 'C' variable. Additionally we can define a ``reduce_function`` that computes the values for each bin from the samples that fall into it. Here we plot the 'depdelay' (i.e. departure delay) for each day of the month and carrier in the dataset: flights.hvplot.heatmap(x='day', y='carrier', C='depdelay', reduce_function=np.mean, colorbar=True) # #### Tables # # Unlike all other plot types, a table only supports one signature: either all columns are plotted, or a subset of columns can be selected by defining the ``columns`` explicitly: crime.hvplot.table(columns=['Year', 'Population', 'Violent Crime rate'], width=400) # ### Distributions # # Plotting distributions differs slightly from other plots since they plot only one variable in the simple case rather than plotting two or more variables against each other. Therefore when plotting these plot types no ``index`` or ``x`` value needs to be supplied. Instead: # # 1. Declare a single ``y`` variable, e.g. ``source.plot.hist(variable)``, or # 2. Declare a ``y`` variable and ``by`` variable, e.g. ``source.plot.hist(variable, by='Group')``, or # 3. Declare columns or plot all columns, e.g. ``source.plot.hist()`` or ``source.plot.hist(columns=['A', 'B', 'C'])`` # # #### Histogram # # The Histogram is the simplest example of a distribution; often we simply plot the distribution of a single variable, in this case the 'Violent Crime rate'. 
Additionally we can define a range over which to compute the histogram and the number of bins using the ``bin_range`` and ``bins`` arguments respectively: crime.hvplot.hist('Violent Crime rate') # Or we can plot the distribution of multiple columns: columns = ['Violent Crime rate', 'Property crime rate', 'Burglary rate'] crime.hvplot.hist(y=columns, bins=50, alpha=0.5) # We can also group the data by another variable: flights[flights.carrier.isin([b'AA', b'US', b'OH'])].hvplot.hist( 'depdelay', by='carrier', bins=20, bin_range=(-20, 100), alpha=0.3) # #### KDE # # You can also create density plots using ``hvplot.kde()`` method: crime.hvplot.kde('Violent Crime rate') # Comparing the distribution of multiple columns is also possible: columns=['Violent Crime rate', 'Property crime rate', 'Burglary rate'] crime.hvplot.kde(y=columns, alpha=0.5, value_label='Rate') # The ``DataSource.plot.kde`` also supports the ``by`` keyword: flights[flights.carrier.isin([b'AA', b'US', b'OH'])].hvplot.kde('depdelay', by='carrier', alpha=0.3, xlim=(-20, 70)) # #### Box-Whisker Plots # # Just like the other distribution-based plot types, the box-whisker plot supports plotting a single column: crime.hvplot.box('Violent Crime rate') # It also supports multiple columns: columns=['Burglary rate', 'Larceny-theft rate', 'Motor vehicle theft rate', 'Property crime rate', 'Violent Crime rate'] crime.hvplot.box(y=columns, group_label='Crime', legend=False, value_label='Rate (per 100k)', invert=True) # Lastly, it also supports using the ``by`` keyword to split the data into multiple subsets: flights[flights.carrier.isin([b'AA', b'US', b'OH'])].hvplot.box('depdelay', by='carrier', ylim=(-10, 70)) # ## Composing Plots # # One of the core strengths of HoloViews is the ease of composing # different plots. Individual plots can be composed using the ``*`` and # ``+`` operators, which overlay and compose plots into layouts # respectively. For more information on composing objects, see the # HoloViews [User Guide](http://holoviews.org/user_guide/Composing_Elements.html). # # By using these operators we can combine multiple plots into composite plots. A simple example is overlaying two plot types: crime.hvplot('Year', 'Violent Crime rate') * crime.hvplot.scatter('Year', 'Violent Crime rate', size=30) # We can also lay out different plots and tables together: (crime.hvplot.bar('Year', 'Violent Crime rate', rot=90, width=550) + crime.hvplot.table(['Year', 'Population', 'Violent Crime rate'], width=420)) # ## Large data # # The previous examples summarized the fairly large airline dataset using statistical plot types that aggregate the data into a feasible subset for plotting. We can instead aggregate the data directly into the viewable image using [datashader](http://datashader.org), which provides a rendering of the entire set of raw data available (as far as the resolution of the screen allows). Here we plot the 'airtime' against the 'distance': flights.hvplot.scatter('distance', 'airtime', datashade=True) # ## Groupby # # Thanks to the ability of HoloViews to explore a parameter space with a set of widgets we can apply a groupby along a particular column or dimension. 
# For example, we can view the distribution of departure delays by carrier, grouped by day of the week, allowing the user to choose which day to display:

flights.hvplot.violin('depdelay', by='carrier', groupby='dayofweek', ylim=(-20, 60), height=500)

# This user guide merely provided an overview of the available plot types; to see a detailed description of how to customize plots, see the [Customization](Customization.ipynb) user guide.
examples/user_guide/Plotting.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # MNIST Image Classification with TensorFlow on Vertex AI # # This notebook demonstrates how to implement different image models on MNIST using the [tf.keras API](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras). # # ## Learning Objectives # 1. Understand how to build a Dense Neural Network (DNN) for image classification # 2. Understand how to use dropout (DNN) for image classification # 3. Understand how to use Convolutional Neural Networks (CNN) # 4. Know how to deploy and use an image classifcation model using Google Cloud's [Vertex AI](https://cloud.google.com/vertex-ai/) # # First things first. Configure the parameters below to match your own Google Cloud project details. # + from datetime import datetime import os REGION = 'us-central1' PROJECT = !(gcloud config get-value core/project) PROJECT = PROJECT[0] BUCKET = PROJECT MODEL_TYPE = "cnn" # "linear", "dnn", "dnn_dropout", or "dnn" # Do not change these os.environ["PROJECT"] = PROJECT os.environ["BUCKET"] = BUCKET os.environ["REGION"] = REGION os.environ["MODEL_TYPE"] = MODEL_TYPE # - # ## Building a dynamic model # # In the previous notebook, <a href="mnist_linear.ipynb">mnist_linear.ipynb</a>, we ran our code directly from the notebook. In order to run it on Vertex AI, it needs to be packaged as a python module. # # The boilerplate structure for this module has already been set up in the folder `mnist_models`. The module lives in the sub-folder, `trainer`, and is designated as a python package with the empty `__init__.py` (`mnist_models/trainer/__init__.py`) file. It still needs the model and a trainer to run it, so let's make them. # # Let's start with the trainer file first. This file parses command line arguments to feed into the model. # + # %%writefile mnist_models/trainer/task.py import argparse import json import os import sys from . import model def _parse_arguments(argv): """Parses command-line arguments.""" parser = argparse.ArgumentParser() parser.add_argument( '--model_type', help='Which model type to use', type=str, default='linear') parser.add_argument( '--epochs', help='The number of epochs to train', type=int, default=10) parser.add_argument( '--steps_per_epoch', help='The number of steps per epoch to train', type=int, default=100) parser.add_argument( '--job-dir', help='Directory where to save the given model', type=str, default='mnist_models/') return parser.parse_known_args(argv) def main(): """Parses command line arguments and kicks off model training.""" args = _parse_arguments(sys.argv[1:])[0] # Configure path for hyperparameter tuning. trial_id = json.loads( os.environ.get('TF_CONFIG', '{}')).get('task', {}).get('trial', '') output_path = args.job_dir if not trial_id else args.job_dir + '/' model_layers = model.get_layers(args.model_type) image_model = model.build_model(model_layers, args.job_dir) model_history = model.train_and_evaluate( image_model, args.epochs, args.steps_per_epoch, args.job_dir) if __name__ == '__main__': main() # - # Next, let's group non-model functions into a util file to keep the model file simple. We'll copy over the `scale` and `load_dataset` functions from the previous lab. 
# + # %%writefile mnist_models/trainer/util.py import tensorflow as tf def scale(image, label): """Scales images from a 0-255 int range to a 0-1 float range""" image = tf.cast(image, tf.float32) image /= 255 image = tf.expand_dims(image, -1) return image, label def load_dataset( data, training=True, buffer_size=5000, batch_size=100, nclasses=10): """Loads MNIST dataset into a tf.data.Dataset""" (x_train, y_train), (x_test, y_test) = data x = x_train if training else x_test y = y_train if training else y_test # One-hot encode the classes y = tf.keras.utils.to_categorical(y, nclasses) dataset = tf.data.Dataset.from_tensor_slices((x, y)) dataset = dataset.map(scale).batch(batch_size) if training: dataset = dataset.shuffle(buffer_size).repeat() return dataset # - # Finally, let's code the models! The [tf.keras API](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras) accepts an array of [layers](https://www.tensorflow.org/api_docs/python/tf/keras/layers) into a [model object](https://www.tensorflow.org/api_docs/python/tf/keras/Model), so we can create a dictionary of layers based on the different model types we want to use. The below file has two functions: `get_layers` and `create_and_train_model`. We will build the structure of our model in `get_layers`. Last but not least, we'll copy over the training code from the previous lab into `train_and_evaluate`. # # **TODO 1**: Define the Keras layers for a DNN model # **TODO 2**: Define the Keras layers for a dropout model # **TODO 3**: Define the Keras layers for a CNN model # # Hint: These models progressively build on each other. Look at the imported `tensorflow.keras.layers` modules and the default values for the variables defined in `get_layers` for guidance. # + # %%writefile mnist_models/trainer/model.py import os import shutil import matplotlib.pyplot as plt import numpy as np import tensorflow as tf from tensorflow.keras import Sequential from tensorflow.keras.callbacks import TensorBoard from tensorflow.keras.layers import ( Conv2D, Dense, Dropout, Flatten, MaxPooling2D, Softmax) from . 
import util # Image Variables WIDTH = 28 HEIGHT = 28 def get_layers( model_type, nclasses=10, hidden_layer_1_neurons=400, hidden_layer_2_neurons=100, dropout_rate=0.25, num_filters_1=64, kernel_size_1=3, pooling_size_1=2, num_filters_2=32, kernel_size_2=3, pooling_size_2=2): """Constructs layers for a keras model based on a dict of model types.""" model_layers = { 'linear': [ Flatten(), Dense(nclasses), Softmax() ], 'dnn': [ Flatten(), Dense(hidden_layer_1_neurons, activation='relu'), Dense(hidden_layer_2_neurons, activation='relu'), Dense(nclasses), Softmax() ], 'dnn_dropout': [ # TODO Flatten(), Dense(hidden_layer_1_neurons, activation='relu'), Dense(hidden_layer_2_neurons, activation='relu'), Dropout(dropout_rate), Dense(nclasses), Softmax() ], 'cnn': [ # TODO Conv2D(num_filters_1, kernel_size=kernel_size_1, activation='relu', input_shape=(WIDTH, HEIGHT, 1)), MaxPooling2D(pooling_size_1), Conv2D(num_filters_2, kernel_size=kernel_size_2, activation='relu'), MaxPooling2D(pooling_size_2), Flatten(), Dense(hidden_layer_1_neurons, activation='relu'), Dense(hidden_layer_2_neurons, activation='relu'), Dropout(dropout_rate), Dense(nclasses), Softmax() ] } return model_layers[model_type] def build_model(layers, output_dir): """Compiles keras model for image classification.""" model = Sequential(layers) model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy']) return model def train_and_evaluate(model, num_epochs, steps_per_epoch, output_dir): """Compiles keras model and loads data into it for training.""" mnist = tf.keras.datasets.mnist.load_data() train_data = util.load_dataset(mnist) validation_data = util.load_dataset(mnist, training=False) callbacks = [] if output_dir: tensorboard_callback = TensorBoard(log_dir=output_dir) callbacks = [tensorboard_callback] history = model.fit( train_data, validation_data=validation_data, epochs=num_epochs, steps_per_epoch=steps_per_epoch, verbose=2, callbacks=callbacks) if output_dir: export_path = os.path.join(output_dir, 'keras_export') model.save(export_path, save_format='tf') return history # - # ## Local Training # # With everything set up, let's run locally to test the code. Some of the previous tests have been copied over into a testing script `mnist_models/trainer/test.py` to make sure the model still passes our previous checks. On `line 13`, you can specify which model types you would like to check. `line 14` and `line 15` has the number of epochs and steps per epoch respectively. # # Moment of truth! Run the code below to check your models against the unit tests. If you see "OK" at the end when it's finished running, congrats! You've passed the tests! # !python3 -m mnist_models.trainer.test # Now that we know that our models are working as expected, let's run it on Google Cloud within Vertex AI. We can run it as a python module locally first using the command line. # # The below cell transfers some of our variables to the command line as well as create a job directory including a timestamp. # + current_time = datetime.now().strftime("%Y%m%d_%H%M%S") model_type = 'cnn' os.environ["MODEL_TYPE"] = model_type os.environ["JOB_DIR"] = "mnist_models/models/{}_{}/".format( model_type, current_time) # - # The cell below runs the local version of the code. The epochs and steps_per_epoch flag can be changed to run for longer or shorther, as defined in our `mnist_models/trainer/task.py` file. 
# + language="bash" # python3 -m mnist_models.trainer.task \ # --job-dir=$JOB_DIR \ # --epochs=5 \ # --steps_per_epoch=50 \ # --model_type=$MODEL_TYPE # - # ## Training on the cloud # # For this model, we will be able to use a Tensorflow pre-built container on Vertex AI, as we do not have any particular additional prerequisites. As before, we use `setuptools` for this, and store the created source distribution on Cloud Storage. # + # %%writefile mnist_models/setup.py from setuptools import find_packages from setuptools import setup setup( name='mnist_trainer', version='0.1', packages=find_packages(), include_package_data=True, description='MNIST model training application.' ) # + language="bash" # cd mnist_models # python ./setup.py sdist --formats=gztar # cd .. # gsutil cp mnist_models/dist/mnist_trainer-0.1.tar.gz gs://${BUCKET}/mnist/ # - # Then, we can kickoff the Vertex AI Custom Job using the pre-built container. We can pass our source distribution URI using the `--python-package-uris` flag. # + current_time = datetime.now().strftime("%Y%m%d_%H%M%S") model_type = 'cnn' os.environ["MODEL_TYPE"] = model_type os.environ["JOB_DIR"] = "gs://{}/mnist_{}_{}/".format( BUCKET, model_type, current_time) os.environ["JOB_NAME"] = "mnist_{}_{}".format( model_type, current_time) # + language="bash" # echo $JOB_DIR $REGION $JOB_NAME # # PYTHON_PACKAGE_URIS=gs://${BUCKET}/mnist/mnist_trainer-0.1.tar.gz # MACHINE_TYPE=n1-standard-4 # REPLICA_COUNT=1 # PYTHON_PACKAGE_EXECUTOR_IMAGE_URI="us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-3:latest" # PYTHON_MODULE=trainer.task # # WORKER_POOL_SPEC="machine-type=$MACHINE_TYPE,\ # replica-count=$REPLICA_COUNT,\ # executor-image-uri=$PYTHON_PACKAGE_EXECUTOR_IMAGE_URI,\ # python-module=$PYTHON_MODULE" # # gcloud ai custom-jobs create \ # --region=${REGION} \ # --display-name=$JOB_NAME \ # --python-package-uris=$PYTHON_PACKAGE_URIS \ # --worker-pool-spec=$WORKER_POOL_SPEC \ # --args="--job-dir=$JOB_DIR,--model_type=$MODEL_TYPE" # + language="bash" # SAVEDMODEL_DIR=${JOB_DIR}keras_export # echo $SAVEDMODEL_DIR # gsutil ls $SAVEDMODEL_DIR # - # ## Deploying and predicting with model # # Once you have a model you're proud of, let's deploy it! All we need to do is to upload the created model artifact from Cloud Storage to Vertex AI as a model, create a new endpoint, and deploy the model to the endpoint. 
# + language="bash" # TIMESTAMP=$(date -u +%Y%m%d_%H%M%S) # MODEL_DISPLAYNAME=mnist_$TIMESTAMP # ENDPOINT_DISPLAYNAME=mnist_endpoint_$TIMESTAMP # IMAGE_URI="us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-3:latest" # SAVEDMODEL_DIR=${JOB_DIR}keras_export # echo $SAVEDMODEL_DIR # # # Model # MODEL_RESOURCENAME=$(gcloud ai models upload \ # --region=$REGION \ # --display-name=$MODEL_DISPLAYNAME \ # --container-image-uri=$IMAGE_URI \ # --artifact-uri=$SAVEDMODEL_DIR \ # --format="value(model)") # # echo "MODEL_DISPLAYNAME=${MODEL_DISPLAYNAME}" # echo "MODEL_RESOURCENAME=${MODEL_RESOURCENAME}" # # # Endpoint # ENDPOINT_RESOURCENAME=$(gcloud ai endpoints create \ # --region=$REGION \ # --display-name=$ENDPOINT_DISPLAYNAME \ # --format="value(name)") # # echo "ENDPOINT_DISPLAYNAME=${ENDPOINT_DISPLAYNAME}" # echo "ENDPOINT_RESOURCENAME=${ENDPOINT_RESOURCENAME}" # # # Deployment # DEPLOYED_MODEL_DISPLAYNAME=${MODEL_DISPLAYNAME}_deployment # MACHINE_TYPE=n1-standard-2 # # gcloud ai endpoints deploy-model $ENDPOINT_RESOURCENAME \ # --region=$REGION \ # --model=$MODEL_RESOURCENAME \ # --display-name=$DEPLOYED_MODEL_DISPLAYNAME \ # --machine-type=$MACHINE_TYPE \ # --min-replica-count=1 \ # --max-replica-count=1 \ # --traffic-split=0=100 # - # To predict with the model, let's take one of the example images. # # **TODO 4**: Write a `.json` file with image data to send to a Vertex AI deployed model # + import json, codecs import tensorflow as tf import matplotlib.pyplot as plt HEIGHT = 28 WIDTH = 28 IMGNO = 12 mnist = tf.keras.datasets.mnist.load_data() (x_train, y_train), (x_test, y_test) = mnist test_image = x_test[IMGNO] jsondata = {"instances": [ test_image.reshape(HEIGHT, WIDTH, 1).tolist() ]} json.dump(jsondata, codecs.open("test.json", "w", encoding = "utf-8")) plt.imshow(test_image.reshape(HEIGHT, WIDTH)); # - # !cat test.json # Finally, we can send it to the prediction service. The output will have a 1 in the index of the corresponding digit it is predicting. Congrats! You've completed the lab! # + language="bash" # ENDPOINT_RESOURCENAME="projects/432069008306/locations/us-central1/endpoints/7342002088613773312" # TODO: insert ENDPOINT_RESOURCENAME from above # # gcloud ai endpoints predict $ENDPOINT_RESOURCENAME \ # --region=$REGION \ # --json-request=test.json # - # Copyright 2021 Google Inc. # Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at # http://www.apache.org/licenses/LICENSE-2.0 # Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
notebooks/image_models/labs/2_mnist_models_vertex.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# +
import numpy as np
import matplotlib.pyplot as plt
from IPython.display import display, HTML, IFrame
from ipywidgets import interact
import pandas as pd

from numpy import cos, sin, pi, dot, arccos
from numpy.linalg import norm  # this is the magnitude function

from mpl_toolkits.mplot3d import axes3d
from itertools import combinations

plt.rcParams["figure.figsize"] = [8, 8]

# Uncomment the one that corresponds to your Jupyter theme
plt.style.use('dark_background')
# plt.style.use('fivethirtyeight')
# plt.style.use('Solarize_Light2')
# -

# + [markdown] slideshow={"slide_type": "skip"}
# <style>
# td {
#   font-size: 20px;
# }
# </style>

# + [markdown] slideshow={"slide_type": "notes"}
# $\newcommand{\RR}{\mathbb{R}}$
# $\newcommand{\bv}[1]{\begin{bmatrix} #1 \end{bmatrix}}$
# $\renewcommand{\vec}{\mathbf}$
#

# + [markdown] slideshow={"slide_type": "slide"}
#
# ### Example
#
# 1. Find a unit vector perpendicular to $\langle 1,2,-1\rangle$ and $\langle 3,0,1\rangle$. Is there only one?
# -

v, w = np.array(((1, 2, -1), (3, 0, 1)))
u = np.cross(w, v)
uu = u / norm(u)

norm(uu)

np.dot(uu, w)

#
# ### Quick exercise
#
# Write a parametric form for a line containing position vectors $\vec p$ and $\vec q$.
#
# $$\vec p + t (\vec q - \vec p) = (1 - t)\vec p + t \vec q$$
#

# + [markdown] slideshow={"slide_type": "subslide"}
# ## Exercises
#
# 1. Where does the line through $(2,4,-2)$ and $(3,1,-1)$ cross the $xy$-plane?
# -

# **Solution**
#
# Use the vector connecting the positions as the direction $\vec v = \langle 3 - 2, 1 - 4, -1 - (-2) \rangle = \langle 1, -3, 1 \rangle$. Use either point as the initial position, so one possibility is
# $$ \vec r(t) = \langle 2,4,-2 \rangle + t \langle 1, -3, 1 \rangle$$
#
# To find the intersection with the $xy$-plane, set the $z$-coordinate to $0$ and solve for $t$:
#
# $$ -2 + t = 0,$$
#
# which gives $t = 2$, so the line intersects the $xy$-plane at $\vec r(2) = \langle 4, -2, 0\rangle$.

# + [markdown] slideshow={"slide_type": "subslide"}
# 2. Is the line through $(2,4,0)$ and $(1,1,1)$ perpendicular to the line through $(3,3,4)$ and $(3,-1,-8)$?
# -

# **Solution**
#
# CAREFUL. This depends on what we mean by lines being perpendicular in $\mathbb{R}^3$.
#
# First, we compare directions.
#
# $$\vec v_1 = \langle 2-1, 4 - 1, 0 - 1 \rangle = \langle 1, 3, -1 \rangle$$
# $$\vec v_2 = \langle 3 - 3, 3 - -1, 4 - -8 \rangle = \langle 0, 4, 12 \rangle$$
#
# Thus, $\vec v_1 \cdot \vec v_2 = 0$, so the _directions_ are perpendicular, but we will require further that for lines to be perpendicular they **must intersect**.
#
# For this, we must check if there is a solution to the system of equations
#
# $$\bv{2 \\ 4 \\ 0} + t \bv{1 \\ 3 \\ -1 } = \bv{3 \\ 3 \\ 4} + s \bv{0 \\ 4 \\ 12}$$
# where each side is the parametric form of one of the lines above. This has 3 equations and 2 unknowns. The first (top) equation says $2 + t = 3$, so $t = 1$.
#
# The second component then says $7 = 3 + 4s$, so $s = 1$ as well, which on the third line yields
# $$ -(1) = 4 + 12(1),$$
# which is false, so this system has no solution. The lines do **not** intersect, and thus they are **not** perpendicular.
#
# We say they are **skew lines**.
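# We can double-check this algebra numerically. The cell below is a small added sketch (reusing the numpy import from above): it solves the first two component equations for $t$ and $s$ via least squares and confirms that the full 3-equation system has a nonzero residual, so the lines indeed do not intersect.

# +
# Rewrite (2,4,0) + t(1,3,-1) = (3,3,4) + s(0,4,12) as A @ [t, s] = b
A = np.array([[1, 0], [3, -4], [-1, -12]])
b = np.array([3 - 2, 3 - 4, 4 - 0])
ts, residual, rank, _ = np.linalg.lstsq(A, b, rcond=None)
print(ts)        # best-fit t, s (t = 1, s = 1)
print(residual)  # nonzero residual => the 3x2 system has no exact solution
# -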
# + jupyter={"source_hidden": true}
@interact
def _(angle=(-96, 108, 6)):
    fig = plt.figure()
    ax = fig.gca(projection='3d')
    ax.view_init(30, angle)
    plt.plot([2, 1], [4, 1], [0, 1])
    plt.plot([3, 3], [3, -1], [4, -8])
# -

#
#

# + [markdown] slideshow={"slide_type": "fragment"}
# #### Quick exercise
#
# What is a normal vector to the plane given by $$ x+2y = 16-8z?$$
# Find a point on this plane.
# -

# **Solution**
#
# Don't forget to move the $z$ term over. $$\vec n = \langle 1, 2, 8 \rangle$$
#
# A point on this plane is $(16, 0, 0)$ or $(0, 8, 0)$ or $(0,0,2)$.
#

# + [markdown] slideshow={"slide_type": "slide"}
# ### Exercises
#
# 1. Find an equation of the plane through $(1,0,0)$, $(0,1,0)$, and $(0,0,1)$.
# -

# **Solution**
#
# Find two directions in the plane and cross them.
#
# $$\vec n = (\vec j - \vec i)\times(\vec k - \vec i) = \vec j \times \vec k - \vec j\times \vec i - \vec i \times \vec k = \vec i + \vec j +\vec k$$
#
# Put this into the equation of the plane $\vec n \cdot \vec x = \vec n \cdot \vec p$ (where $\vec p$ is any of the given points) to get $$x + y + z = 1$$

# + [markdown] slideshow={"slide_type": "slide"}
# 2. Find a parametric form for the line of intersection of the planes given by $x+y-z = 2$ and $2x - y + 3z = 1$.
# -

# **Solution**
#
# We find a point of intersection (i.e., solve a system of the two equations). Start by just adding the two equations to get $$ 3x + 2z = 3,$$ which has, for example, the solution $x = 1, z = 0$. Plug these back into either of the original equations to get $y = 1$, so a point is $(1,1,0)$.
#
# More interestingly, the direction of the line is parallel to both planes, so it is orthogonal to both normals; thus we use a cross product
# $$\vec v = \vec n_1 \times \vec n_2 = \langle 1, 1, -1 \rangle \times \langle 2, -1, 3 \rangle$$

np.cross([1, 1, -1], [2, -1, 3])

# Thus, a parametric form of the line is $$\vec r(t) = \langle 1,1,0\rangle + t\langle 2, -5, -3\rangle $$

# + jupyter={"source_hidden": true}
t = np.array([-2, 2])
p = np.array([1, 1, 0])
v = np.array([2, -5, -3])

x = y = np.linspace(-2, 2, 10)
x, y = np.meshgrid(x, y)

@interact
def _(angle=(-96, 108, 6)):
    fig = plt.figure()
    ax = fig.gca(projection='3d')
    ax.view_init(30, angle)
    plt.plot(p[0] + t*v[0], p[1] + t*v[1], p[2] + t*v[2])
    ax.plot_surface(x, y, x + y - 2, alpha=.5)
    ax.plot_surface(x, y, (1 - 2*x + y)/3, alpha=.8)
# -
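# As a quick sanity check on the solution above (an added sketch, reusing numpy), we can verify that the point $(1,1,0)$ satisfies both plane equations and that the direction $\langle 2,-5,-3\rangle$ is orthogonal to both normal vectors:

# +
point = np.array([1, 1, 0])
direction = np.array([2, -5, -3])
n1, n2 = np.array([1, 1, -1]), np.array([2, -1, 3])

print(np.dot(n1, point), np.dot(n2, point))          # should give 2 and 1, matching the right-hand sides
print(np.dot(direction, n1), np.dot(direction, n2))  # both should be 0
# -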
exercises/L03-Exercises-Solutions.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # --- # # _You are currently looking at **version 1.1** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-machine-learning/resources/bANLa) course resource._ # # --- # # Applied Machine Learning: Module 2 (Supervised Learning, Part I) # ## Preamble and Review # + # %matplotlib notebook import numpy as np import pandas as pd import seaborn as sn import matplotlib.pyplot as plt from sklearn.model_selection import train_test_split from sklearn.neighbors import KNeighborsClassifier np.set_printoptions(precision=2) fruits = pd.read_table('readonly/fruit_data_with_colors.txt') feature_names_fruits = ['height', 'width', 'mass', 'color_score'] X_fruits = fruits[feature_names_fruits] y_fruits = fruits['fruit_label'] target_names_fruits = ['apple', 'mandarin', 'orange', 'lemon'] X_fruits_2d = fruits[['height', 'width']] y_fruits_2d = fruits['fruit_label'] X_train, X_test, y_train, y_test = train_test_split(X_fruits, y_fruits, random_state=0) from sklearn.preprocessing import MinMaxScaler scaler = MinMaxScaler() X_train_scaled = scaler.fit_transform(X_train) # we must apply the scaling to the test set that we computed for the training set X_test_scaled = scaler.transform(X_test) knn = KNeighborsClassifier(n_neighbors = 5) knn.fit(X_train_scaled, y_train) print('Accuracy of K-NN classifier on training set: {:.2f}' .format(knn.score(X_train_scaled, y_train))) print('Accuracy of K-NN classifier on test set: {:.2f}' .format(knn.score(X_test_scaled, y_test))) example_fruit = [[5.5, 2.2, 10, 0.70]] example_fruit_scaled = scaler.transform(example_fruit) print('Predicted fruit type for ', example_fruit, ' is ', target_names_fruits[knn.predict(example_fruit_scaled)[0]-1]) # - # ## Datasets # + from sklearn.datasets import make_classification, make_blobs from matplotlib.colors import ListedColormap from sklearn.datasets import load_breast_cancer from adspy_shared_utilities import load_crime_dataset cmap_bold = ListedColormap(['#FFFF00', '#00FF00', '#0000FF','#000000']) # synthetic dataset for simple regression from sklearn.datasets import make_regression plt.figure() plt.title('Sample regression problem with one input variable') X_R1, y_R1 = make_regression(n_samples = 100, n_features=1, n_informative=1, bias = 150.0, noise = 30, random_state=0) plt.scatter(X_R1, y_R1, marker= 'o', s=50) plt.show() # synthetic dataset for more complex regression from sklearn.datasets import make_friedman1 plt.figure() plt.title('Complex regression problem with one input variable') X_F1, y_F1 = make_friedman1(n_samples = 100, n_features = 7, random_state=0) plt.scatter(X_F1[:, 2], y_F1, marker= 'o', s=50) plt.show() # synthetic dataset for classification (binary) plt.figure() plt.title('Sample binary classification problem with two informative features') X_C2, y_C2 = make_classification(n_samples = 100, n_features=2, n_redundant=0, n_informative=2, n_clusters_per_class=1, flip_y = 0.1, class_sep = 0.5, random_state=0) plt.scatter(X_C2[:, 0], X_C2[:, 1], c=y_C2, marker= 'o', s=50, cmap=cmap_bold) plt.show() # more difficult synthetic dataset for classification (binary) # with classes that are not linearly separable X_D2, y_D2 = make_blobs(n_samples = 100, n_features = 2, 
centers = 8, cluster_std = 1.3, random_state = 4) y_D2 = y_D2 % 2 plt.figure() plt.title('Sample binary classification problem with non-linearly separable classes') plt.scatter(X_D2[:,0], X_D2[:,1], c=y_D2, marker= 'o', s=50, cmap=cmap_bold) plt.show() # Breast cancer dataset for classification cancer = load_breast_cancer() (X_cancer, y_cancer) = load_breast_cancer(return_X_y = True) # Communities and Crime dataset (X_crime, y_crime) = load_crime_dataset() # - # ## K-Nearest Neighbors # ### Classification # + from adspy_shared_utilities import plot_two_class_knn X_train, X_test, y_train, y_test = train_test_split(X_C2, y_C2, random_state=0) plot_two_class_knn(X_train, y_train, 1, 'uniform', X_test, y_test) plot_two_class_knn(X_train, y_train, 3, 'uniform', X_test, y_test) plot_two_class_knn(X_train, y_train, 11, 'uniform', X_test, y_test) # - # ### Regression # + from sklearn.neighbors import KNeighborsRegressor X_train, X_test, y_train, y_test = train_test_split(X_R1, y_R1, random_state = 0) knnreg = KNeighborsRegressor(n_neighbors = 5).fit(X_train, y_train) print(knnreg.predict(X_test)) print('R-squared test score: {:.3f}' .format(knnreg.score(X_test, y_test))) # + fig, subaxes = plt.subplots(1, 2, figsize=(8,4)) X_predict_input = np.linspace(-3, 3, 50).reshape(-1,1) X_train, X_test, y_train, y_test = train_test_split(X_R1[0::5], y_R1[0::5], random_state = 0) for thisaxis, K in zip(subaxes, [1, 3]): knnreg = KNeighborsRegressor(n_neighbors = K).fit(X_train, y_train) y_predict_output = knnreg.predict(X_predict_input) thisaxis.set_xlim([-2.5, 0.75]) thisaxis.plot(X_predict_input, y_predict_output, '^', markersize = 10, label='Predicted', alpha=0.8) thisaxis.plot(X_train, y_train, 'o', label='True Value', alpha=0.8) thisaxis.set_xlabel('Input feature') thisaxis.set_ylabel('Target value') thisaxis.set_title('KNN regression (K={})'.format(K)) thisaxis.legend() plt.tight_layout() # - # ### Regression model complexity as a function of K # + # plot k-NN regression on sample dataset for different values of K fig, subaxes = plt.subplots(5, 1, figsize=(5,20)) X_predict_input = np.linspace(-3, 3, 500).reshape(-1,1) X_train, X_test, y_train, y_test = train_test_split(X_R1, y_R1, random_state = 0) for thisaxis, K in zip(subaxes, [1, 3, 7, 15, 55]): knnreg = KNeighborsRegressor(n_neighbors = K).fit(X_train, y_train) y_predict_output = knnreg.predict(X_predict_input) train_score = knnreg.score(X_train, y_train) test_score = knnreg.score(X_test, y_test) thisaxis.plot(X_predict_input, y_predict_output) thisaxis.plot(X_train, y_train, 'o', alpha=0.9, label='Train') thisaxis.plot(X_test, y_test, '^', alpha=0.9, label='Test') thisaxis.set_xlabel('Input feature') thisaxis.set_ylabel('Target value') thisaxis.set_title('KNN Regression (K={})\n\ Train $R^2 = {:.3f}$, Test $R^2 = {:.3f}$' .format(K, train_score, test_score)) thisaxis.legend() plt.tight_layout(pad=0.4, w_pad=0.5, h_pad=1.0) # - # ## Linear models for regression # ### Linear regression # + from sklearn.linear_model import LinearRegression X_train, X_test, y_train, y_test = train_test_split(X_R1, y_R1, random_state = 0) linreg = LinearRegression().fit(X_train, y_train) print('linear model coeff (w): {}' .format(linreg.coef_)) print('linear model intercept (b): {:.3f}' .format(linreg.intercept_)) print('R-squared score (training): {:.3f}' .format(linreg.score(X_train, y_train))) print('R-squared score (test): {:.3f}' .format(linreg.score(X_test, y_test))) # - # ### Linear regression: example plot plt.figure(figsize=(5,4)) plt.scatter(X_R1, y_R1, 
marker= 'o', s=50, alpha=0.8) plt.plot(X_R1, linreg.coef_ * X_R1 + linreg.intercept_, 'r-') plt.title('Least-squares linear regression') plt.xlabel('Feature value (x)') plt.ylabel('Target value (y)') plt.show() # + X_train, X_test, y_train, y_test = train_test_split(X_crime, y_crime, random_state = 0) linreg = LinearRegression().fit(X_train, y_train) print('Crime dataset') print('linear model intercept: {}' .format(linreg.intercept_)) print('linear model coeff:\n{}' .format(linreg.coef_)) print('R-squared score (training): {:.3f}' .format(linreg.score(X_train, y_train))) print('R-squared score (test): {:.3f}' .format(linreg.score(X_test, y_test))) # - # ### Ridge regression # + from sklearn.linear_model import Ridge X_train, X_test, y_train, y_test = train_test_split(X_crime, y_crime, random_state = 0) linridge = Ridge(alpha=20.0).fit(X_train, y_train) print('Crime dataset') print('ridge regression linear model intercept: {}' .format(linridge.intercept_)) print('ridge regression linear model coeff:\n{}' .format(linridge.coef_)) print('R-squared score (training): {:.3f}' .format(linridge.score(X_train, y_train))) print('R-squared score (test): {:.3f}' .format(linridge.score(X_test, y_test))) print('Number of non-zero features: {}' .format(np.sum(linridge.coef_ != 0))) # - # #### Ridge regression with feature normalization # + from sklearn.preprocessing import MinMaxScaler scaler = MinMaxScaler() from sklearn.linear_model import Ridge X_train, X_test, y_train, y_test = train_test_split(X_crime, y_crime, random_state = 0) X_train_scaled = scaler.fit_transform(X_train) X_test_scaled = scaler.transform(X_test) linridge = Ridge(alpha=20.0).fit(X_train_scaled, y_train) print('Crime dataset') print('ridge regression linear model intercept: {}' .format(linridge.intercept_)) print('ridge regression linear model coeff:\n{}' .format(linridge.coef_)) print('R-squared score (training): {:.3f}' .format(linridge.score(X_train_scaled, y_train))) print('R-squared score (test): {:.3f}' .format(linridge.score(X_test_scaled, y_test))) print('Number of non-zero features: {}' .format(np.sum(linridge.coef_ != 0))) # - # #### Ridge regression with regularization parameter: alpha print('Ridge regression: effect of alpha regularization parameter\n') for this_alpha in [0, 1, 10, 20, 50, 100, 1000]: linridge = Ridge(alpha = this_alpha).fit(X_train_scaled, y_train) r2_train = linridge.score(X_train_scaled, y_train) r2_test = linridge.score(X_test_scaled, y_test) num_coeff_bigger = np.sum(abs(linridge.coef_) > 1.0) print('Alpha = {:.2f}\nnum abs(coeff) > 1.0: {}, \ r-squared training: {:.2f}, r-squared test: {:.2f}\n' .format(this_alpha, num_coeff_bigger, r2_train, r2_test)) # ### Lasso regression # + from sklearn.linear_model import Lasso from sklearn.preprocessing import MinMaxScaler scaler = MinMaxScaler() X_train, X_test, y_train, y_test = train_test_split(X_crime, y_crime, random_state = 0) X_train_scaled = scaler.fit_transform(X_train) X_test_scaled = scaler.transform(X_test) linlasso = Lasso(alpha=2.0, max_iter = 10000).fit(X_train_scaled, y_train) print('Crime dataset') print('lasso regression linear model intercept: {}' .format(linlasso.intercept_)) print('lasso regression linear model coeff:\n{}' .format(linlasso.coef_)) print('Non-zero features: {}' .format(np.sum(linlasso.coef_ != 0))) print('R-squared score (training): {:.3f}' .format(linlasso.score(X_train_scaled, y_train))) print('R-squared score (test): {:.3f}\n' .format(linlasso.score(X_test_scaled, y_test))) print('Features with non-zero weight (sorted 
by absolute magnitude):') for e in sorted (list(zip(list(X_crime), linlasso.coef_)), key = lambda e: -abs(e[1])): if e[1] != 0: print('\t{}, {:.3f}'.format(e[0], e[1])) # - # #### Lasso regression with regularization parameter: alpha # + print('Lasso regression: effect of alpha regularization\n\ parameter on number of features kept in final model\n') for alpha in [0.5, 1, 2, 3, 5, 10, 20, 50]: linlasso = Lasso(alpha, max_iter = 10000).fit(X_train_scaled, y_train) r2_train = linlasso.score(X_train_scaled, y_train) r2_test = linlasso.score(X_test_scaled, y_test) print('Alpha = {:.2f}\nFeatures kept: {}, r-squared training: {:.2f}, \ r-squared test: {:.2f}\n' .format(alpha, np.sum(linlasso.coef_ != 0), r2_train, r2_test)) # - # ### Polynomial regression # + from sklearn.linear_model import LinearRegression from sklearn.linear_model import Ridge from sklearn.preprocessing import PolynomialFeatures X_train, X_test, y_train, y_test = train_test_split(X_F1, y_F1, random_state = 0) linreg = LinearRegression().fit(X_train, y_train) print('linear model coeff (w): {}' .format(linreg.coef_)) print('linear model intercept (b): {:.3f}' .format(linreg.intercept_)) print('R-squared score (training): {:.3f}' .format(linreg.score(X_train, y_train))) print('R-squared score (test): {:.3f}' .format(linreg.score(X_test, y_test))) print('\nNow we transform the original input data to add\n\ polynomial features up to degree 2 (quadratic)\n') poly = PolynomialFeatures(degree=2) X_F1_poly = poly.fit_transform(X_F1) X_train, X_test, y_train, y_test = train_test_split(X_F1_poly, y_F1, random_state = 0) linreg = LinearRegression().fit(X_train, y_train) print('(poly deg 2) linear model coeff (w):\n{}' .format(linreg.coef_)) print('(poly deg 2) linear model intercept (b): {:.3f}' .format(linreg.intercept_)) print('(poly deg 2) R-squared score (training): {:.3f}' .format(linreg.score(X_train, y_train))) print('(poly deg 2) R-squared score (test): {:.3f}\n' .format(linreg.score(X_test, y_test))) print('\nAddition of many polynomial features often leads to\n\ overfitting, so we often use polynomial features in combination\n\ with regression that has a regularization penalty, like ridge\n\ regression.\n') X_train, X_test, y_train, y_test = train_test_split(X_F1_poly, y_F1, random_state = 0) linreg = Ridge().fit(X_train, y_train) print('(poly deg 2 + ridge) linear model coeff (w):\n{}' .format(linreg.coef_)) print('(poly deg 2 + ridge) linear model intercept (b): {:.3f}' .format(linreg.intercept_)) print('(poly deg 2 + ridge) R-squared score (training): {:.3f}' .format(linreg.score(X_train, y_train))) print('(poly deg 2 + ridge) R-squared score (test): {:.3f}' .format(linreg.score(X_test, y_test))) # - # ## Linear models for classification # ### Logistic regression # #### Logistic regression for binary classification on fruits dataset using height, width features (positive class: apple, negative class: others) # + from sklearn.linear_model import LogisticRegression from adspy_shared_utilities import ( plot_class_regions_for_classifier_subplot) fig, subaxes = plt.subplots(1, 1, figsize=(7, 5)) y_fruits_apple = y_fruits_2d == 1 # make into a binary problem: apples vs everything else X_train, X_test, y_train, y_test = ( train_test_split(X_fruits_2d.as_matrix(), y_fruits_apple.as_matrix(), random_state = 0)) clf = LogisticRegression(C=100).fit(X_train, y_train) plot_class_regions_for_classifier_subplot(clf, X_train, y_train, None, None, 'Logistic regression \ for binary classification\nFruit dataset: Apple vs others', subaxes) h 
= 6 w = 8 print('A fruit with height {} and width {} is predicted to be: {}' .format(h,w, ['not an apple', 'an apple'][clf.predict([[h,w]])[0]])) h = 10 w = 7 print('A fruit with height {} and width {} is predicted to be: {}' .format(h,w, ['not an apple', 'an apple'][clf.predict([[h,w]])[0]])) subaxes.set_xlabel('height') subaxes.set_ylabel('width') print('Accuracy of Logistic regression classifier on training set: {:.2f}' .format(clf.score(X_train, y_train))) print('Accuracy of Logistic regression classifier on test set: {:.2f}' .format(clf.score(X_test, y_test))) # - # #### Logistic regression on simple synthetic dataset # + from sklearn.linear_model import LogisticRegression from adspy_shared_utilities import ( plot_class_regions_for_classifier_subplot) X_train, X_test, y_train, y_test = train_test_split(X_C2, y_C2, random_state = 0) fig, subaxes = plt.subplots(1, 1, figsize=(7, 5)) clf = LogisticRegression().fit(X_train, y_train) title = 'Logistic regression, simple synthetic dataset C = {:.3f}'.format(1.0) plot_class_regions_for_classifier_subplot(clf, X_train, y_train, None, None, title, subaxes) print('Accuracy of Logistic regression classifier on training set: {:.2f}' .format(clf.score(X_train, y_train))) print('Accuracy of Logistic regression classifier on test set: {:.2f}' .format(clf.score(X_test, y_test))) # - # #### Logistic regression regularization: C parameter # + X_train, X_test, y_train, y_test = ( train_test_split(X_fruits_2d.as_matrix(), y_fruits_apple.as_matrix(), random_state=0)) fig, subaxes = plt.subplots(3, 1, figsize=(4, 10)) for this_C, subplot in zip([0.1, 1, 100], subaxes): clf = LogisticRegression(C=this_C).fit(X_train, y_train) title ='Logistic regression (apple vs rest), C = {:.3f}'.format(this_C) plot_class_regions_for_classifier_subplot(clf, X_train, y_train, X_test, y_test, title, subplot) plt.tight_layout() # - # #### Application to real dataset # + from sklearn.linear_model import LogisticRegression X_train, X_test, y_train, y_test = train_test_split(X_cancer, y_cancer, random_state = 0) clf = LogisticRegression().fit(X_train, y_train) print('Breast cancer dataset') print('Accuracy of Logistic regression classifier on training set: {:.2f}' .format(clf.score(X_train, y_train))) print('Accuracy of Logistic regression classifier on test set: {:.2f}' .format(clf.score(X_test, y_test))) # - # ### Support Vector Machines # #### Linear Support Vector Machine # + from sklearn.svm import SVC from adspy_shared_utilities import plot_class_regions_for_classifier_subplot X_train, X_test, y_train, y_test = train_test_split(X_C2, y_C2, random_state = 0) fig, subaxes = plt.subplots(1, 1, figsize=(7, 5)) this_C = 1.0 clf = SVC(kernel = 'linear', C=this_C).fit(X_train, y_train) title = 'Linear SVC, C = {:.3f}'.format(this_C) plot_class_regions_for_classifier_subplot(clf, X_train, y_train, None, None, title, subaxes) # - # #### Linear Support Vector Machine: C parameter # + from sklearn.svm import LinearSVC from adspy_shared_utilities import plot_class_regions_for_classifier X_train, X_test, y_train, y_test = train_test_split(X_C2, y_C2, random_state = 0) fig, subaxes = plt.subplots(1, 2, figsize=(8, 4)) for this_C, subplot in zip([0.00001, 100], subaxes): clf = LinearSVC(C=this_C).fit(X_train, y_train) title = 'Linear SVC, C = {:.5f}'.format(this_C) plot_class_regions_for_classifier_subplot(clf, X_train, y_train, None, None, title, subplot) plt.tight_layout() # - # #### Application to real dataset # + from sklearn.svm import LinearSVC X_train, X_test, y_train, y_test = 
train_test_split(X_cancer, y_cancer, random_state = 0) clf = LinearSVC().fit(X_train, y_train) print('Breast cancer dataset') print('Accuracy of Linear SVC classifier on training set: {:.2f}' .format(clf.score(X_train, y_train))) print('Accuracy of Linear SVC classifier on test set: {:.2f}' .format(clf.score(X_test, y_test))) # - # ### Multi-class classification with linear models # #### LinearSVC with M classes generates M one vs rest classifiers. # + from sklearn.svm import LinearSVC X_train, X_test, y_train, y_test = train_test_split(X_fruits_2d, y_fruits_2d, random_state = 0) clf = LinearSVC(C=5, random_state = 67).fit(X_train, y_train) print('Coefficients:\n', clf.coef_) print('Intercepts:\n', clf.intercept_) # - # #### Multi-class results on the fruit dataset # + plt.figure(figsize=(6,6)) colors = ['r', 'g', 'b', 'y'] cmap_fruits = ListedColormap(['#FF0000', '#00FF00', '#0000FF','#FFFF00']) plt.scatter(X_fruits_2d[['height']], X_fruits_2d[['width']], c=y_fruits_2d, cmap=cmap_fruits, edgecolor = 'black', alpha=.7) x_0_range = np.linspace(-10, 15) for w, b, color in zip(clf.coef_, clf.intercept_, ['r', 'g', 'b', 'y']): # Since class prediction with a linear model uses the formula y = w_0 x_0 + w_1 x_1 + b, # and the decision boundary is defined as being all points with y = 0, to plot x_1 as a # function of x_0 we just solve w_0 x_0 + w_1 x_1 + b = 0 for x_1: plt.plot(x_0_range, -(x_0_range * w[0] + b) / w[1], c=color, alpha=.8) plt.legend(target_names_fruits) plt.xlabel('height') plt.ylabel('width') plt.xlim(-2, 12) plt.ylim(-2, 15) plt.show() # - # ## Kernelized Support Vector Machines # ### Classification # + from sklearn.svm import SVC from adspy_shared_utilities import plot_class_regions_for_classifier X_train, X_test, y_train, y_test = train_test_split(X_D2, y_D2, random_state = 0) # The default SVC kernel is radial basis function (RBF) plot_class_regions_for_classifier(SVC().fit(X_train, y_train), X_train, y_train, None, None, 'Support Vector Classifier: RBF kernel') # Compare decision boundries with polynomial kernel, degree = 3 plot_class_regions_for_classifier(SVC(kernel = 'poly', degree = 3) .fit(X_train, y_train), X_train, y_train, None, None, 'Support Vector Classifier: Polynomial kernel, degree = 3') # - # #### Support Vector Machine with RBF kernel: gamma parameter # + from adspy_shared_utilities import plot_class_regions_for_classifier X_train, X_test, y_train, y_test = train_test_split(X_D2, y_D2, random_state = 0) fig, subaxes = plt.subplots(3, 1, figsize=(4, 11)) for this_gamma, subplot in zip([0.01, 1.0, 10.0], subaxes): clf = SVC(kernel = 'rbf', gamma=this_gamma).fit(X_train, y_train) title = 'Support Vector Classifier: \nRBF kernel, gamma = {:.2f}'.format(this_gamma) plot_class_regions_for_classifier_subplot(clf, X_train, y_train, None, None, title, subplot) plt.tight_layout() # - # #### Support Vector Machine with RBF kernel: using both C and gamma parameter # + from sklearn.svm import SVC from adspy_shared_utilities import plot_class_regions_for_classifier_subplot from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X_D2, y_D2, random_state = 0) fig, subaxes = plt.subplots(3, 4, figsize=(15, 10), dpi=50) for this_gamma, this_axis in zip([0.01, 1, 5], subaxes): for this_C, subplot in zip([0.1, 1, 15, 250], this_axis): title = 'gamma = {:.2f}, C = {:.2f}'.format(this_gamma, this_C) clf = SVC(kernel = 'rbf', gamma = this_gamma, C = this_C).fit(X_train, y_train) plot_class_regions_for_classifier_subplot(clf, 
X_train, y_train, X_test, y_test, title, subplot) plt.tight_layout(pad=0.4, w_pad=0.5, h_pad=1.0) # - # ### Application of SVMs to a real dataset: unnormalized data # + from sklearn.svm import SVC X_train, X_test, y_train, y_test = train_test_split(X_cancer, y_cancer, random_state = 0) clf = SVC(C=10).fit(X_train, y_train) print('Breast cancer dataset (unnormalized features)') print('Accuracy of RBF-kernel SVC on training set: {:.2f}' .format(clf.score(X_train, y_train))) print('Accuracy of RBF-kernel SVC on test set: {:.2f}' .format(clf.score(X_test, y_test))) # - # ### Application of SVMs to a real dataset: normalized data with feature preprocessing using minmax scaling # + from sklearn.preprocessing import MinMaxScaler scaler = MinMaxScaler() X_train_scaled = scaler.fit_transform(X_train) X_test_scaled = scaler.transform(X_test) clf = SVC(C=10).fit(X_train_scaled, y_train) print('Breast cancer dataset (normalized with MinMax scaling)') print('RBF-kernel SVC (with MinMax scaling) training set accuracy: {:.2f}' .format(clf.score(X_train_scaled, y_train))) print('RBF-kernel SVC (with MinMax scaling) test set accuracy: {:.2f}' .format(clf.score(X_test_scaled, y_test))) # - # ## Cross-validation # ### Example based on k-NN classifier with fruit dataset (2 features) # + from sklearn.model_selection import cross_val_score clf = KNeighborsClassifier(n_neighbors = 5) X = X_fruits_2d.as_matrix() y = y_fruits_2d.as_matrix() cv_scores = cross_val_score(clf, X, y) print('Cross-validation scores (3-fold):', cv_scores) print('Mean cross-validation score (3-fold): {:.3f}' .format(np.mean(cv_scores))) # - # ### A note on performing cross-validation for more advanced scenarios. # # In some cases (e.g. when feature values have very different ranges), we've seen the need to scale or normalize the training and test sets before use with a classifier. The proper way to do cross-validation when you need to scale the data is *not* to scale the entire dataset with a single transform, since this will indirectly leak information into the training data about the whole dataset, including the test data (see the lecture on data leakage later in the course). Instead, scaling/normalizing must be computed and applied for each cross-validation fold separately. To do this, the easiest way in scikit-learn is to use *pipelines*. While these are beyond the scope of this course, further information is available in the scikit-learn documentation here: # # http://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html # # or the Pipeline section in the recommended textbook: Introduction to Machine Learning with Python by <NAME> and <NAME> (O'Reilly Media). 
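# Although pipelines are beyond the scope of this course, a minimal sketch of the idea is shown below (added here for illustration, reusing the breast cancer data loaded earlier): bundling the scaler and the classifier in a `Pipeline` means the scaler is re-fit on each training fold inside `cross_val_score`, so no information from the held-out fold leaks into the preprocessing.

# +
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# The scaler is fit only on each fold's training portion, then applied to that fold's test portion.
pipe = Pipeline([('scaler', MinMaxScaler()), ('svc', SVC(C=10))])
pipe_scores = cross_val_score(pipe, X_cancer, y_cancer, cv=5)
print('Cross-validation scores (5-fold, scaling inside each fold):', pipe_scores)
print('Mean cross-validation score: {:.3f}'.format(np.mean(pipe_scores)))
# -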
# ## Validation curve example # + from sklearn.svm import SVC from sklearn.model_selection import validation_curve param_range = np.logspace(-3, 3, 4) train_scores, test_scores = validation_curve(SVC(), X, y, param_name='gamma', param_range=param_range, cv=3) # - print(train_scores) print(test_scores) # + # This code based on scikit-learn validation_plot example # See: http://scikit-learn.org/stable/auto_examples/model_selection/plot_validation_curve.html plt.figure() train_scores_mean = np.mean(train_scores, axis=1) train_scores_std = np.std(train_scores, axis=1) test_scores_mean = np.mean(test_scores, axis=1) test_scores_std = np.std(test_scores, axis=1) plt.title('Validation Curve with SVM') plt.xlabel('$\gamma$ (gamma)') plt.ylabel('Score') plt.ylim(0.0, 1.1) lw = 2 plt.semilogx(param_range, train_scores_mean, label='Training score', color='darkorange', lw=lw) plt.fill_between(param_range, train_scores_mean - train_scores_std, train_scores_mean + train_scores_std, alpha=0.2, color='darkorange', lw=lw) plt.semilogx(param_range, test_scores_mean, label='Cross-validation score', color='navy', lw=lw) plt.fill_between(param_range, test_scores_mean - test_scores_std, test_scores_mean + test_scores_std, alpha=0.2, color='navy', lw=lw) plt.legend(loc='best') plt.show() # - # ## Decision Trees # + from sklearn.datasets import load_iris from sklearn.tree import DecisionTreeClassifier from adspy_shared_utilities import plot_decision_tree from sklearn.model_selection import train_test_split iris = load_iris() X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, random_state = 3) clf = DecisionTreeClassifier().fit(X_train, y_train) print('Accuracy of Decision Tree classifier on training set: {:.2f}' .format(clf.score(X_train, y_train))) print('Accuracy of Decision Tree classifier on test set: {:.2f}' .format(clf.score(X_test, y_test))) # - # #### Setting max decision tree depth to help avoid overfitting # + clf2 = DecisionTreeClassifier(max_depth = 3).fit(X_train, y_train) print('Accuracy of Decision Tree classifier on training set: {:.2f}' .format(clf2.score(X_train, y_train))) print('Accuracy of Decision Tree classifier on test set: {:.2f}' .format(clf2.score(X_test, y_test))) # - # #### Visualizing decision trees plot_decision_tree(clf, iris.feature_names, iris.target_names) # #### Pre-pruned version (max_depth = 3) plot_decision_tree(clf2, iris.feature_names, iris.target_names) # #### Feature importance # + from adspy_shared_utilities import plot_feature_importances plt.figure(figsize=(10,4), dpi=80) plot_feature_importances(clf, iris.feature_names) plt.show() print('Feature importances: {}'.format(clf.feature_importances_)) # + from sklearn.tree import DecisionTreeClassifier from adspy_shared_utilities import plot_class_regions_for_classifier_subplot X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, random_state = 0) fig, subaxes = plt.subplots(6, 1, figsize=(6, 32)) pair_list = [[0,1], [0,2], [0,3], [1,2], [1,3], [2,3]] tree_max_depth = 4 for pair, axis in zip(pair_list, subaxes): X = X_train[:, pair] y = y_train clf = DecisionTreeClassifier(max_depth=tree_max_depth).fit(X, y) title = 'Decision Tree, max_depth = {:d}'.format(tree_max_depth) plot_class_regions_for_classifier_subplot(clf, X, y, None, None, title, axis, iris.target_names) axis.set_xlabel(iris.feature_names[pair[0]]) axis.set_ylabel(iris.feature_names[pair[1]]) plt.tight_layout() plt.show() # - # #### Decision Trees on a real-world dataset # + from sklearn.tree import 
DecisionTreeClassifier from adspy_shared_utilities import plot_decision_tree from adspy_shared_utilities import plot_feature_importances X_train, X_test, y_train, y_test = train_test_split(X_cancer, y_cancer, random_state = 0) clf = DecisionTreeClassifier(max_depth = 4, min_samples_leaf = 8, random_state = 0).fit(X_train, y_train) plot_decision_tree(clf, cancer.feature_names, cancer.target_names) # + print('Breast cancer dataset: decision tree') print('Accuracy of DT classifier on training set: {:.2f}' .format(clf.score(X_train, y_train))) print('Accuracy of DT classifier on test set: {:.2f}' .format(clf.score(X_test, y_test))) plt.figure(figsize=(10,6),dpi=80) plot_feature_importances(clf, cancer.feature_names) plt.tight_layout() plt.show()
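# #### Tuning tree depth with a grid search (illustrative)
#
# The cell below is an added sketch rather than part of the original assignment code: it shows one way to pick `max_depth` and `min_samples_leaf` for the breast cancer decision tree above using `GridSearchCV`, instead of fixing them by hand. It reuses the `X_train`/`X_test` split created in the previous cell.

# +
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

param_grid = {'max_depth': [2, 3, 4, 5, 6],
              'min_samples_leaf': [2, 4, 8, 16]}
grid = GridSearchCV(DecisionTreeClassifier(random_state=0), param_grid, cv=3)
grid.fit(X_train, y_train)

print('Best parameters:', grid.best_params_)
print('Best cross-validated accuracy: {:.2f}'.format(grid.best_score_))
print('Accuracy of tuned DT classifier on test set: {:.2f}'
      .format(grid.score(X_test, y_test)))
# -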
Applied_ML_with_Python/Module_2.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Lecture 2: Story Proofs, Axioms of Probability # # # ## Stat 110, Prof. <NAME>, Harvard University # # ---- # ## Sampling, continued # # Choose $k$ objects out of $n$ # # | | ordered | unordered | # |-----------|:---------:|:-----------:| # | __w/ replacement__ | $n^k$ | ??? | # | __w/o replacement__ | $n(n-1)(n-2) \ldots (n-k+1)$ | $\binom{n}{k}$ | # # # * __ordered, w/ replacement__: there are $n$ choices for each $k$, so this follows from the multiplication rule. # * __ordered, w/out replacement__: there are $n$ choices for the 1<sup>st</sup> position; $n-1$ for the 2<sup>nd</sup>; $n-2$ for the 3<sup>rd</sup>; and $n-k+1$ for the $k$<sup>th</sup>. # * __unordered, w/ replacement__: _we will get to this shortly..._ # * __unordered, w/out replacement__: the binomial coefficient; think of choosing a hand from a deck of cards. # # To complete our discussion of sampling, recall that of the four ways of sampling as shown above, all except the case of __unordered, with replacement__ follow immediately from the multiplication rule. # # Now the solution is $\binom{n+k-1}{k}$, but let's see if we can prove this. # ### A simple proof # # We start off with some simple edge cases. # # If we let $k=0$, then we are not choosing anything, and so there is only one solution to this case: the empty set. # \begin\{align\} # \text{let }k = 0 \Rightarrow \binom{n+0-1}{0} &= \binom{n-1}{0} \\\\ # &= 1 # \end\{align\} # # If we let $k=1$, then there are $n$ ways we could select a single item out of a total of $n$. # \begin\{align\} # \text{let }k = 1 \Rightarrow \binom{n+1-1}{1} &= \binom{n}{1} \\\\ # &= n # \end\{align\} # # Now let's consider a simple but non-trivial case. If we let $n=2$, then # \begin\{align\} # \text{let }n = 2 \Rightarrow \binom{2+k-1}{k} &= \binom{k+1}{k} \\\\ # &= \binom{k+1}{1} \\\\ # &= k+1 # \end\{align\} # # Here's an example of $n=2, k=7$: # # ![title](images/L0201.png) # # But notice that we are really doing here is placing $n-1$ dividers between $k$ elements. Or in other words, we are choosing $k$ slots for the elements out of $n+k-1$ slots in total. # # ![title](images/L0202.png) # # And we can easily build on this understanding to other values of $n$ and $k$. # # ![title](images/L0203.png) # # And the number of ways to select $k$ items out of $n$, unordered and with replacement, is: # # \begin\{align\} # \text{choose k out n items, unordered, with replacement} &= \binom{n+k-1}{k} \\\\ # &= \binom{n+k-1}{n-1} # \end\{align\} # ## Story Proof # A story proof is a proof by _interpretation_. No algebra needed, just intuition. # # Here are some examples that we have already come across. # # ### Ex. 1 # $$ \binom{n}{k} = \binom{n}{n-k} $$ # # Choosing $k$ elements out of $n$ is the same as choosing $n-k$ elements out of $n$. We've just seen this above! # # ### Ex. 2 # $$ n \binom{n-1}{k-1} = k \binom{n}{k} $$ # # Imagine picking $k$ people out of $n$, and then designating of the $k$ as president. You can either select all $k$ people, and then choose 1 from among those $k$. Or, you can select a president, and then choose the remaining $k-1$ out of the $n-1$ people. # # ### Ex. 3 # $$ \binom{m+n}{k} = \sum_{j=0}^{k} \binom{m}{j} \binom{n}{k-j} $$ # # Suppose you had $m$ boys and $n$ girls, and you needed to select $k$ children out of them all. 
You could do this by first choosing $j$ out of the $m$ boys, and then choosing $k-j$ of the girls. You would have to apply the multiplication rule to get the total number of combinations, and then sum them all up. This is known as [Vandermonde's identity](https://en.wikipedia.org/wiki/Vandermonde%27s_identity). # # ---- # ## Non-na&iuml;ve Definition of Probability # # Now we move from the na&iuml;ve definition of probability into the more abstract and general. # # #### Definition: non-na&iuml;ve definition of probability # > Let $S$ be a sample space, the set of all possible outcomes of some experiment. $S$ might not be _finite_ anymore, and all outcomes might not be _equally probable_, either. # > # > Let $A$ be an event in, or a subset of, $S$. # > # > Let $P$ be a function that maps an event $A$ to some value from $0$ to $1$. # # And we have the following axioms: # # ### Axiom 1 # # > \begin\{align\} # > P(\emptyset) = 0 \\\\ # > P(\Omega) = 1 # > \end\{align\} # # The probability of the empty set, or a null event, is by definition $0$. # # The probability of the entire space is by definition $1$. # # These are the 2 extremes, and this is why <NAME> lumps them together in one rule. # # ### Axiom 2 # # > $$ P(\bigcup_{n=1}^{\infty} A_{n}) = \sum_{n=1}^{\infty} P(A_{n}) \iff A_1, A_2, ... A_n \text{ are disjoint (non-overlapping)} $$ # # Every theorem about probability follows from these 2 rules. You might want to have a look at [Kolmogorov's axioms](http://mathworld.wolfram.com/KolmogorovsAxioms.html). # # ---- # View [Lecture 2: Story Proofs, Axioms of Probability | Statistics 110](http://bit.ly/2nOw0JV) on YouTube.
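# ----
# ## Checking the counting results numerically
#
# As a quick sanity check (this cell is an addition to the lecture notes, not part of them), the snippet below brute-forces the number of ways to choose $k$ objects out of $n$ unordered and with replacement, compares it with $\binom{n+k-1}{k}$, and spot-checks Vandermonde's identity for one choice of $m$, $n$, $k$.

# +
from itertools import combinations_with_replacement
from math import comb

# unordered sampling with replacement: brute-force count vs. the binomial formula
for n in range(1, 6):
    for k in range(0, 6):
        brute = sum(1 for _ in combinations_with_replacement(range(n), k))
        assert brute == comb(n + k - 1, k)

# Vandermonde's identity: C(m+n, k) == sum_j C(m, j) * C(n, k-j)
m, n, k = 4, 6, 5
assert comb(m + n, k) == sum(comb(m, j) * comb(n, k - j) for j in range(k + 1))

print('Both counting identities hold for the values checked.')
# -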
Lecture_02.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: TensorFlow 2.4 on Python 3.8 & CUDA 11.1 # language: python # name: python3 # --- # **Equations** # # *This notebook lists all the equations in the book. If you decide to print them on a T-Shirt, I definitely want a copy! ;-)* # # **Warning**: GitHub's notebook viewer does not render equations properly. You should either view this notebook within Jupyter itself or use [Jupyter's online viewer](http://nbviewer.jupyter.org/github/ageron/handson-ml/blob/master/book_equations.ipynb). # # Chapter 1 # **Equation 1-1: A simple linear model** # # $ # \text{life_satisfaction} = \theta_0 + \theta_1 \times \text{GDP_per_capita} # $ # # # # Chapter 2 # **Equation 2-1: Root Mean Square Error (RMSE)** # # $ # \text{RMSE}(\mathbf{X}, h) = \sqrt{\frac{1}{m}\sum\limits_{i=1}^{m}\left(h(\mathbf{x}^{(i)}) - y^{(i)}\right)^2} # $ # # # **Notations (page 38):** # # $ # \mathbf{x}^{(1)} = \begin{pmatrix} # -118.29 \\ # 33.91 \\ # 1,416 \\ # 38,372 # \end{pmatrix} # $ # # # $ # y^{(1)}=156,400 # $ # # # $ # \mathbf{X} = \begin{pmatrix} # (\mathbf{x}^{(1)})^T \\ # (\mathbf{x}^{(2)})^T\\ # \vdots \\ # (\mathbf{x}^{(1999)})^T \\ # (\mathbf{x}^{(2000)})^T # \end{pmatrix} = \begin{pmatrix} # -118.29 & 33.91 & 1,416 & 38,372 \\ # \vdots & \vdots & \vdots & \vdots \\ # \end{pmatrix} # $ # # # **Equation 2-2: Mean Absolute Error** # # $ # \text{MAE}(\mathbf{X}, h) = \frac{1}{m}\sum\limits_{i=1}^{m}\left| h(\mathbf{x}^{(i)}) - y^{(i)} \right| # $ # # **$\ell_k$ norms (page 39):** # # $ \left\| \mathbf{v} \right\| _k = (\left| v_0 \right|^k + \left| v_1 \right|^k + \dots + \left| v_n \right|^k)^{\frac{1}{k}} $ # # # Chapter 3 # **Equation 3-1: Precision** # # $ # \text{precision} = \cfrac{TP}{TP + FP} # $ # # # **Equation 3-2: Recall** # # $ # \text{recall} = \cfrac{TP}{TP + FN} # $ # # # **Equation 3-3: $F_1$ score** # # $ # F_1 = \cfrac{2}{\cfrac{1}{\text{precision}} + \cfrac{1}{\text{recall}}} = 2 \times \cfrac{\text{precision}\, \times \, \text{recall}}{\text{precision}\, + \, \text{recall}} = \cfrac{TP}{TP + \cfrac{FN + FP}{2}} # $ # # # # Chapter 4 # **Equation 4-1: Linear Regression model prediction** # # $ # \hat{y} = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \dots + \theta_n x_n # $ # # # **Equation 4-2: Linear Regression model prediction (vectorized form)** # # $ # \hat{y} = h_{\boldsymbol{\theta}}(\mathbf{x}) = \boldsymbol{\theta} \cdot \mathbf{x} # $ # # # **Equation 4-3: MSE cost function for a Linear Regression model** # # $ # \text{MSE}(\mathbf{X}, h_{\boldsymbol{\theta}}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{(\boldsymbol{\theta}^T \mathbf{x}^{(i)} - y^{(i)})^2} # $ # # # **Equation 4-4: Normal Equation** # # $ # \hat{\boldsymbol{\theta}} = (\mathbf{X}^T \mathbf{X})^{-1} \mathbf{X}^T \mathbf{y} # $ # # # ** Partial derivatives notation (page 114):** # # $\frac{\partial}{\partial \theta_j} \text{MSE}(\boldsymbol{\theta})$ # # # **Equation 4-5: Partial derivatives of the cost function** # # $ # \dfrac{\partial}{\partial \theta_j} \text{MSE}(\boldsymbol{\theta}) = \dfrac{2}{m}\sum\limits_{i=1}^{m}(\boldsymbol{\theta}^T \mathbf{x}^{(i)} - y^{(i)})\, x_j^{(i)} # $ # # # **Equation 4-6: Gradient vector of the cost function** # # $ # \nabla_{\boldsymbol{\theta}}\, \text{MSE}(\boldsymbol{\theta}) = # \begin{pmatrix} # \frac{\partial}{\partial \theta_0} \text{MSE}(\boldsymbol{\theta}) \\ # \frac{\partial}{\partial \theta_1} 
\text{MSE}(\boldsymbol{\theta}) \\ # \vdots \\ # \frac{\partial}{\partial \theta_n} \text{MSE}(\boldsymbol{\theta}) # \end{pmatrix} # = \dfrac{2}{m} \mathbf{X}^T (\mathbf{X} \boldsymbol{\theta} - \mathbf{y}) # $ # # # **Equation 4-7: Gradient Descent step** # # $ # \boldsymbol{\theta}^{(\text{next step})} = \boldsymbol{\theta} - \eta \nabla_{\boldsymbol{\theta}}\, \text{MSE}(\boldsymbol{\theta}) # $ # # # $ O(\frac{1}{\text{iterations}}) $ # # # $ \hat{y} = 0.56 x_1^2 + 0.93 x_1 + 1.78 $ # # # $ y = 0.5 x_1^2 + 1.0 x_1 + 2.0 + \text{Gaussian noise} $ # # # $ \dfrac{(n+d)!}{d!\,n!} $ # # # $ \alpha \sum_{i=1}^{n}{{\theta_i}^2}$ # # # **Equation 4-8: Ridge Regression cost function** # # $ # J(\boldsymbol{\theta}) = \text{MSE}(\boldsymbol{\theta}) + \alpha \dfrac{1}{2}\sum\limits_{i=1}^{n}{\theta_i}^2 # $ # # # **Equation 4-9: Ridge Regression closed-form solution** # # $ # \hat{\boldsymbol{\theta}} = (\mathbf{X}^T \mathbf{X} + \alpha \mathbf{A})^{-1} \mathbf{X}^T \mathbf{y} # $ # # # **Equation 4-10: Lasso Regression cost function** # # $ # J(\boldsymbol{\theta}) = \text{MSE}(\boldsymbol{\theta}) + \alpha \sum\limits_{i=1}^{n}\left| \theta_i \right| # $ # # # **Equation 4-11: Lasso Regression subgradient vector** # # $ # g(\boldsymbol{\theta}, J) = \nabla_{\boldsymbol{\theta}}\, \text{MSE}(\boldsymbol{\theta}) + \alpha # \begin{pmatrix} # \operatorname{sign}(\theta_1) \\ # \operatorname{sign}(\theta_2) \\ # \vdots \\ # \operatorname{sign}(\theta_n) \\ # \end{pmatrix} \quad \text{where } \operatorname{sign}(\theta_i) = # \begin{cases} # -1 & \text{if } \theta_i < 0 \\ # 0 & \text{if } \theta_i = 0 \\ # +1 & \text{if } \theta_i > 0 # \end{cases} # $ # # # **Equation 4-12: Elastic Net cost function** # # $ # J(\boldsymbol{\theta}) = \text{MSE}(\boldsymbol{\theta}) + r \alpha \sum\limits_{i=1}^{n}\left| \theta_i \right| + \dfrac{1 - r}{2} \alpha \sum\limits_{i=1}^{n}{{\theta_i}^2} # $ # # # **Equation 4-13: Logistic Regression model estimated probability (vectorized form)** # # $ # \hat{p} = h_{\boldsymbol{\theta}}(\mathbf{x}) = \sigma(\boldsymbol{\theta}^T \mathbf{x}) # $ # # # **Equation 4-14: Logistic function** # # $ # \sigma(t) = \dfrac{1}{1 + \exp(-t)} # $ # # # **Equation 4-15: Logistic Regression model prediction** # # $ # \hat{y} = # \begin{cases} # 0 & \text{if } \hat{p} < 0.5, \\ # 1 & \text{if } \hat{p} \geq 0.5. # \end{cases} # $ # # # **Equation 4-16: Cost function of a single training instance** # # $ # c(\boldsymbol{\theta}) = # \begin{cases} # -\log(\hat{p}) & \text{if } y = 1, \\ # -\log(1 - \hat{p}) & \text{if } y = 0. 
# \end{cases} # $ # # # **Equation 4-17: Logistic Regression cost function (log loss)** # # $ # J(\boldsymbol{\theta}) = -\dfrac{1}{m} \sum\limits_{i=1}^{m}{\left[ y^{(i)} log\left(\hat{p}^{(i)}\right) + (1 - y^{(i)}) log\left(1 - \hat{p}^{(i)}\right)\right]} # $ # # # **Equation 4-18: Logistic cost function partial derivatives** # # $ # \dfrac{\partial}{\partial \theta_j} \text{J}(\boldsymbol{\theta}) = \dfrac{1}{m}\sum\limits_{i=1}^{m}\left(\mathbf{\sigma(\boldsymbol{\theta}}^T \mathbf{x}^{(i)}) - y^{(i)}\right)\, x_j^{(i)} # $ # # # **Equation 4-19: Softmax score for class k** # # $ # s_k(\mathbf{x}) = ({\boldsymbol{\theta}^{(k)}})^T \mathbf{x} # $ # # # **Equation 4-20: Softmax function** # # $ # \hat{p}_k = \sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}} # $ # # # **Equation 4-21: Softmax Regression classifier prediction** # # $ # \hat{y} = \underset{k}{\operatorname{argmax}} \, \sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \underset{k}{\operatorname{argmax}} \, s_k(\mathbf{x}) = \underset{k}{\operatorname{argmax}} \, \left( ({\boldsymbol{\theta}^{(k)}})^T \mathbf{x} \right) # $ # # # **Equation 4-22: Cross entropy cost function** # # $ # J(\boldsymbol{\Theta}) = - \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)} # $ # # **Cross entropy between two discrete probability distributions $p$ and $q$ (page 141):** # $ H(p, q) = -\sum\limits_{x}p(x) \log q(x) $ # # # **Equation 4-23: Cross entropy gradient vector for class _k_** # # $ # \nabla_{\boldsymbol{\theta}^{(k)}} \, J(\boldsymbol{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}} # $ # # # Chapter 5 # **Equation 5-1: Gaussian RBF** # # $ # {\displaystyle \phi_{\gamma}(\mathbf{x}, \boldsymbol{\ell})} = {\displaystyle \exp({\displaystyle -\gamma \left\| \mathbf{x} - \boldsymbol{\ell} \right\|^2})} # $ # # # **Equation 5-2: Linear SVM classifier prediction** # # $ # \hat{y} = \begin{cases} # 0 & \text{if } \mathbf{w}^T \mathbf{x} + b < 0, \\ # 1 & \text{if } \mathbf{w}^T \mathbf{x} + b \geq 0 # \end{cases} # $ # # # **Equation 5-3: Hard margin linear SVM classifier objective** # # $ # \begin{split} # &\underset{\mathbf{w}, b}{\operatorname{minimize}}\quad{\frac{1}{2}\mathbf{w}^T \mathbf{w}} \\ # &\text{subject to} \quad t^{(i)}(\mathbf{w}^T \mathbf{x}^{(i)} + b) \ge 1 \quad \text{for } i = 1, 2, \dots, m # \end{split} # $ # # # **Equation 5-4: Soft margin linear SVM classifier objective** # # $ # \begin{split} # &\underset{\mathbf{w}, b, \mathbf{\zeta}}{\operatorname{minimize}}\quad{\dfrac{1}{2}\mathbf{w}^T \mathbf{w} + C \sum\limits_{i=1}^m{\zeta^{(i)}}}\\ # &\text{subject to} \quad t^{(i)}(\mathbf{w}^T \mathbf{x}^{(i)} + b) \ge 1 - \zeta^{(i)} \quad \text{and} \quad \zeta^{(i)} \ge 0 \quad \text{for } i = 1, 2, \dots, m # \end{split} # $ # # # **Equation 5-5: Quadratic Programming problem** # # $ # \begin{split} # \underset{\mathbf{p}}{\text{Minimize}} \quad & \dfrac{1}{2} \mathbf{p}^T \mathbf{H} \mathbf{p} \quad + \quad \mathbf{f}^T \mathbf{p} \\ # \text{subject to} \quad & \mathbf{A} \mathbf{p} \le \mathbf{b} \\ # \text{where } & # \begin{cases} # \mathbf{p} & \text{ is an }n_p\text{-dimensional vector (} n_p = \text{number of parameters),}\\ # \mathbf{H} & \text{ is an }n_p \times n_p \text{ matrix,}\\ # \mathbf{f} & \text{ is an }n_p\text{-dimensional vector,}\\ # \mathbf{A} & \text{ is an } n_c \times n_p \text{ matrix (}n_c = 
\text{number of constraints),}\\ # \mathbf{b} & \text{ is an }n_c\text{-dimensional vector.} # \end{cases} # \end{split} # $ # # # **Equation 5-6: Dual form of the linear SVM objective** # # $ # \begin{split} # \underset{\mathbf{\alpha}}{\operatorname{minimize}} # \dfrac{1}{2}\sum\limits_{i=1}^{m}{ # \sum\limits_{j=1}^{m}{ # \alpha^{(i)} \alpha^{(j)} t^{(i)} t^{(j)} {\mathbf{x}^{(i)}}^T \mathbf{x}^{(j)} # } # } \quad - \quad \sum\limits_{i=1}^{m}{\alpha^{(i)}}\\ # \text{subject to}\quad \alpha^{(i)} \ge 0 \quad \text{for }i = 1, 2, \dots, m # \end{split} # $ # # # **Equation 5-7: From the dual solution to the primal solution** # # $ # \begin{split} # &\hat{\mathbf{w}} = \sum_{i=1}^{m}{\hat{\alpha}}^{(i)}t^{(i)}\mathbf{x}^{(i)}\\ # &\hat{b} = \dfrac{1}{n_s}\sum\limits_{\scriptstyle i=1 \atop {\scriptstyle {\hat{\alpha}}^{(i)} > 0}}^{m}{\left(t^{(i)} - ({\hat{\mathbf{w}}}^T \mathbf{x}^{(i)})\right)} # \end{split} # $ # # # **Equation 5-8: Second-degree polynomial mapping** # # $ # \phi\left(\mathbf{x}\right) = \phi\left( \begin{pmatrix} # x_1 \\ # x_2 # \end{pmatrix} \right) = \begin{pmatrix} # {x_1}^2 \\ # \sqrt{2} \, x_1 x_2 \\ # {x_2}^2 # \end{pmatrix} # $ # # # **Equation 5-9: Kernel trick for a 2^nd^-degree polynomial mapping** # # $ # \begin{split} # \phi(\mathbf{a})^T \phi(\mathbf{b}) & \quad = \begin{pmatrix} # {a_1}^2 \\ # \sqrt{2} \, a_1 a_2 \\ # {a_2}^2 # \end{pmatrix}^T \begin{pmatrix} # {b_1}^2 \\ # \sqrt{2} \, b_1 b_2 \\ # {b_2}^2 # \end{pmatrix} = {a_1}^2 {b_1}^2 + 2 a_1 b_1 a_2 b_2 + {a_2}^2 {b_2}^2 \\ # & \quad = \left( a_1 b_1 + a_2 b_2 \right)^2 = \left( \begin{pmatrix} # a_1 \\ # a_2 # \end{pmatrix}^T \begin{pmatrix} # b_1 \\ # b_2 # \end{pmatrix} \right)^2 = (\mathbf{a}^T \mathbf{b})^2 # \end{split} # $ # # **In the text about the kernel trick (page 162):** # [...], then you can replace this dot product of transformed vectors simply by $ ({\mathbf{x}^{(i)}}^T \mathbf{x}^{(j)})^2 $ # # # **Equation 5-10: Common kernels** # # $ # \begin{split} # \text{Linear:} & \quad K(\mathbf{a}, \mathbf{b}) = \mathbf{a}^T \mathbf{b} \\ # \text{Polynomial:} & \quad K(\mathbf{a}, \mathbf{b}) = \left(\gamma \mathbf{a}^T \mathbf{b} + r \right)^d \\ # \text{Gaussian RBF:} & \quad K(\mathbf{a}, \mathbf{b}) = \exp({\displaystyle -\gamma \left\| \mathbf{a} - \mathbf{b} \right\|^2}) \\ # \text{Sigmoid:} & \quad K(\mathbf{a}, \mathbf{b}) = \tanh\left(\gamma \mathbf{a}^T \mathbf{b} + r\right) # \end{split} # $ # # **Equation 5-11: Making predictions with a kernelized SVM** # # $ # \begin{split} # h_{\hat{\mathbf{w}}, \hat{b}}\left(\phi(\mathbf{x}^{(n)})\right) & = \,\hat{\mathbf{w}}^T \phi(\mathbf{x}^{(n)}) + \hat{b} = \left(\sum_{i=1}^{m}{\hat{\alpha}}^{(i)}t^{(i)}\phi(\mathbf{x}^{(i)})\right)^T \phi(\mathbf{x}^{(n)}) + \hat{b}\\ # & = \, \sum_{i=1}^{m}{\hat{\alpha}}^{(i)}t^{(i)}\left(\phi(\mathbf{x}^{(i)})^T \phi(\mathbf{x}^{(n)})\right) + \hat{b}\\ # & = \sum\limits_{\scriptstyle i=1 \atop {\scriptstyle {\hat{\alpha}}^{(i)} > 0}}^{m}{\hat{\alpha}}^{(i)}t^{(i)} K(\mathbf{x}^{(i)}, \mathbf{x}^{(n)}) + \hat{b} # \end{split} # $ # # # **Equation 5-12: Computing the bias term using the kernel trick** # # $ # \begin{split} # \hat{b} & = \dfrac{1}{n_s}\sum\limits_{\scriptstyle i=1 \atop {\scriptstyle {\hat{\alpha}}^{(i)} > 0}}^{m}{\left(t^{(i)} - {\hat{\mathbf{w}}}^T \phi(\mathbf{x}^{(i)})\right)} = \dfrac{1}{n_s}\sum\limits_{\scriptstyle i=1 \atop {\scriptstyle {\hat{\alpha}}^{(i)} > 0}}^{m}{\left(t^{(i)} - { # \left(\sum_{j=1}^{m}{\hat{\alpha}}^{(j)}t^{(j)}\phi(\mathbf{x}^{(j)})\right) # }^T 
\phi(\mathbf{x}^{(i)})\right)}\\ # & = \dfrac{1}{n_s}\sum\limits_{\scriptstyle i=1 \atop {\scriptstyle {\hat{\alpha}}^{(i)} > 0}}^{m}{\left(t^{(i)} - # \sum\limits_{\scriptstyle j=1 \atop {\scriptstyle {\hat{\alpha}}^{(j)} > 0}}^{m}{ # {\hat{\alpha}}^{(j)} t^{(j)} K(\mathbf{x}^{(i)},\mathbf{x}^{(j)}) # } # \right)} # \end{split} # $ # # # **Equation 5-13: Linear SVM classifier cost function** # # $ # J(\mathbf{w}, b) = \dfrac{1}{2} \mathbf{w}^T \mathbf{w} \quad + \quad C {\displaystyle \sum\limits_{i=1}^{m}max\left(0, t^{(i)} - (\mathbf{w}^T \mathbf{x}^{(i)} + b) \right)} # $ # # # # # Chapter 6 # **Equation 6-1: Gini impurity** # # $ # G_i = 1 - \sum\limits_{k=1}^{n}{{p_{i,k}}^2} # $ # # # **Equation 6-2: CART cost function for classification** # # $ # \begin{split} # &J(k, t_k) = \dfrac{m_{\text{left}}}{m}G_\text{left} + \dfrac{m_{\text{right}}}{m}G_{\text{right}}\\ # &\text{where }\begin{cases} # G_\text{left/right} \text{ measures the impurity of the left/right subset,}\\ # m_\text{left/right} \text{ is the number of instances in the left/right subset.} # \end{cases} # \end{split} # $ # # **Entropy computation example (page 173):** # # $ -\frac{49}{54}\log_2(\frac{49}{54}) - \frac{5}{54}\log_2(\frac{5}{54}) $ # # # **Equation 6-3: Entropy** # # $ # H_i = -\sum\limits_{k=1 \atop p_{i,k} \ne 0}^{n}{{p_{i,k}}\log_2(p_{i,k})} # $ # # # **Equation 6-4: CART cost function for regression** # # $ # J(k, t_k) = \dfrac{m_{\text{left}}}{m}\text{MSE}_\text{left} + \dfrac{m_{\text{right}}}{m}\text{MSE}_{\text{right}} \quad # \text{where } # \begin{cases} # \text{MSE}_{\text{node}} = \dfrac{1}{m_{\text{node}}} \sum\limits_{\scriptstyle i \in \text{node}}(\hat{y}_{\text{node}} - y^{(i)})^2\\ # \hat{y}_\text{node} = \dfrac{1}{m_{\text{node}}}\sum\limits_{\scriptstyle i \in \text{node}}y^{(i)} # \end{cases} # $ # # # Chapter 7 # # **Equation 7-1: Weighted error rate of the $j^\text{th}$ predictor** # # $ # r_j = \dfrac{\displaystyle \sum\limits_{\textstyle {i=1 \atop \hat{y}_j^{(i)} \ne y^{(i)}}}^{m}{w^{(i)}}}{\displaystyle \sum\limits_{i=1}^{m}{w^{(i)}}} \quad # \text{where }\hat{y}_j^{(i)}\text{ is the }j^{\text{th}}\text{ predictor's prediction for the }i^{\text{th}}\text{ instance.} # $ # # **Equation 7-2: Predictor weight** # # $ # \begin{split} # \alpha_j = \eta \log{\dfrac{1 - r_j}{r_j}} # \end{split} # $ # # # **Equation 7-3: Weight update rule** # # $ # \begin{split} # & \text{ for } i = 1, 2, \dots, m \\ # & w^{(i)} \leftarrow # \begin{cases} # w^{(i)} & \text{if }\hat{y_j}^{(i)} = y^{(i)}\\ # w^{(i)} \exp(\alpha_j) & \text{if }\hat{y_j}^{(i)} \ne y^{(i)} # \end{cases} # \end{split} # $ # # **In the text page 194:** # # Then all the instance weights are normalized (i.e., divided by $ \sum_{i=1}^{m}{w^{(i)}} $). 
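# *Added illustration (not from the book): a small NumPy rendering of Equations 7-1 to 7-3 — the weighted error rate, the predictor weight, and the instance-weight update followed by the normalization step described just above. The label and prediction arrays are made up for the example.*

# +
import numpy as np

eta = 1.0                                    # learning rate
y      = np.array([ 1, -1,  1,  1, -1])      # true labels (toy data)
y_pred = np.array([ 1, -1, -1,  1,  1])      # j-th predictor's predictions
w      = np.full(len(y), 1 / len(y))         # instance weights, initially uniform

misclassified = (y_pred != y)
r_j     = w[misclassified].sum() / w.sum()   # Equation 7-1: weighted error rate
alpha_j = eta * np.log((1 - r_j) / r_j)      # Equation 7-2: predictor weight
w[misclassified] *= np.exp(alpha_j)          # Equation 7-3: weight update rule
w /= w.sum()                                 # normalization (page 194)

print(r_j, alpha_j, w)
# -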
# # # **Equation 7-4: AdaBoost predictions** # # $ # \hat{y}(\mathbf{x}) = \underset{k}{\operatorname{argmax}}{\sum\limits_{\scriptstyle j=1 \atop \scriptstyle \hat{y}_j(\mathbf{x}) = k}^{N}{\alpha_j}} \quad \text{where }N\text{ is the number of predictors.} # $ # # # # # Chapter 8 # # **Equation 8-1: Principal components matrix** # # $ # \mathbf{V}^T = # \begin{pmatrix} # \mid & \mid & & \mid \\ # \mathbf{c_1} & \mathbf{c_2} & \cdots & \mathbf{c_n} \\ # \mid & \mid & & \mid # \end{pmatrix} # $ # # # **Equation 8-2: Projecting the training set down to _d_ dimensions** # # $ # \mathbf{X}_{d\text{-proj}} = \mathbf{X} \mathbf{W}_d # $ # # # **Equation 8-3: PCA inverse transformation, back to the original number of dimensions** # # $ # \mathbf{X}_{\text{recovered}} = \mathbf{X}_{d\text{-proj}} {\mathbf{W}_d}^T # $ # # # $ \sum_{j=1}^{m}{w_{i,j}\mathbf{x}^{(j)}} $ # # # **Equation 8-4: LLE step 1: linearly modeling local relationships** # # $ # \begin{split} # & \hat{\mathbf{W}} = \underset{\mathbf{W}}{\operatorname{argmin}}{\displaystyle \sum\limits_{i=1}^{m}} \left\|\mathbf{x}^{(i)} - \sum\limits_{j=1}^{m}{w_{i,j}}\mathbf{x}^{(j)}\right\|^2\\ # & \text{subject to } # \begin{cases} # w_{i,j}=0 & \text{if }\mathbf{x}^{(j)} \text{ is not one of the }k\text{ c.n. of }\mathbf{x}^{(i)}\\ # \sum\limits_{j=1}^{m}w_{i,j} = 1 & \text{for }i=1, 2, \dots, m # \end{cases} # \end{split} # $ # # **In the text page 223:** # # [...] then we want the squared distance between $\mathbf{z}^{(i)}$ and $ \sum_{j=1}^{m}{\hat{w}_{i,j}\mathbf{z}^{(j)}} $ to be as small as possible. # # # **Equation 8-5: LLE step 2: reducing dimensionality while preserving relationships** # # $ # \hat{\mathbf{Z}} = \underset{\mathbf{Z}}{\operatorname{argmin}}{\displaystyle \sum\limits_{i=1}^{m}} \left\|\mathbf{z}^{(i)} - \sum\limits_{j=1}^{m}{\hat{w}_{i,j}}\mathbf{z}^{(j)}\right\|^2 # $ # # # Chapter 9 # # **Equation 9-1: Rectified linear unit** # # $ # h_{\mathbf{w}, b}(\mathbf{X}) = \max(\mathbf{X} \mathbf{w} + b, 0) # $ # # Chapter 10 # # **Equation 10-1: Common step functions used in Perceptrons** # # $ # \begin{split} # \operatorname{heaviside}(z) = # \begin{cases} # 0 & \text{if }z < 0\\ # 1 & \text{if }z \ge 0 # \end{cases} & \quad\quad # \operatorname{sgn}(z) = # \begin{cases} # -1 & \text{if }z < 0\\ # 0 & \text{if }z = 0\\ # +1 & \text{if }z > 0 # \end{cases} # \end{split} # $ # # # **Equation 10-2: Perceptron learning rule (weight update)** # # $ # {w_{i,j}}^{(\text{next step})} = w_{i,j} + \eta (y_j - \hat{y}_j) x_i # $ # # # **In the text page 266:** # # It will be initialized randomly, using a truncated normal (Gaussian) distribution with a standard deviation of $ 2 / \sqrt{\text{n}_\text{inputs}} $. # # # Chapter 11 # **Equation 11-1: Xavier initialization (when using the logistic activation function)** # # $ # \begin{split} # & \text{Normal distribution with mean 0 and standard deviation } # \sigma = \sqrt{\dfrac{2}{n_\text{inputs} + n_\text{outputs}}}\\ # & \text{Or a uniform distribution between -r and +r, with } # r = \sqrt{\dfrac{6}{n_\text{inputs} + n_\text{outputs}}} # \end{split} # $ # # **In the text page 278:** # # When the number of input connections is roughly equal to the number of output # connections, you get simpler equations (e.g., $ \sigma = 1 / \sqrt{n_\text{inputs}} $ or $ r = \sqrt{3} / \sqrt{n_\text{inputs}} $). 
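# *Added illustration (not from the book): a minimal NumPy sketch of Equation 11-1, drawing Xavier-initialized weights for a layer with a logistic activation, in both the normal and the uniform variant. The layer sizes are arbitrary.*

# +
import numpy as np

def xavier_init(n_inputs, n_outputs, distribution='normal', seed=42):
    rng = np.random.default_rng(seed)
    if distribution == 'normal':
        sigma = np.sqrt(2.0 / (n_inputs + n_outputs))
        return rng.normal(0.0, sigma, size=(n_inputs, n_outputs))
    r = np.sqrt(6.0 / (n_inputs + n_outputs))
    return rng.uniform(-r, r, size=(n_inputs, n_outputs))

W = xavier_init(300, 100)
print(W.shape, W.std())   # std should be close to sqrt(2 / (300 + 100)) ≈ 0.0707
# -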
# # **Table 11-1: Initialization parameters for each type of activation function** # # * Logistic uniform: $ r = \sqrt{\dfrac{6}{n_\text{inputs} + n_\text{outputs}}} $ # * Logistic normal: $ \sigma = \sqrt{\dfrac{2}{n_\text{inputs} + n_\text{outputs}}} $ # * Hyperbolic tangent uniform: $ r = 4 \sqrt{\dfrac{6}{n_\text{inputs} + n_\text{outputs}}} $ # * Hyperbolic tangent normal: $ \sigma = 4 \sqrt{\dfrac{2}{n_\text{inputs} + n_\text{outputs}}} $ # * ReLU (and its variants) uniform: $ r = \sqrt{2} \sqrt{\dfrac{6}{n_\text{inputs} + n_\text{outputs}}} $ # * ReLU (and its variants) normal: $ \sigma = \sqrt{2} \sqrt{\dfrac{2}{n_\text{inputs} + n_\text{outputs}}} $ # # **Equation 11-2: ELU activation function** # # $ # \operatorname{ELU}_\alpha(z) = # \begin{cases} # \alpha(\exp(z) - 1) & \text{if } z < 0\\ # z & if z \ge 0 # \end{cases} # $ # # # **Equation 11-3: Batch Normalization algorithm** # # $ # \begin{split} # 1.\quad & \mathbf{\mu}_B = \dfrac{1}{m_B}\sum\limits_{i=1}^{m_B}{\mathbf{x}^{(i)}}\\ # 2.\quad & {\mathbf{\sigma}_B}^2 = \dfrac{1}{m_B}\sum\limits_{i=1}^{m_B}{(\mathbf{x}^{(i)} - \mathbf{\mu}_B)^2}\\ # 3.\quad & \hat{\mathbf{x}}^{(i)} = \dfrac{\mathbf{x}^{(i)} - \mathbf{\mu}_B}{\sqrt{{\mathbf{\sigma}_B}^2 + \epsilon}}\\ # 4.\quad & \mathbf{z}^{(i)} = \gamma \hat{\mathbf{x}}^{(i)} + \beta # \end{split} # $ # # **In the text page 285:** # # [...] given a new value $v$, the running average $v$ is updated through the equation: # # $ \hat{v} \gets \hat{v} \times \text{momentum} + v \times (1 - \text{momentum}) $ # # **Equation 11-4: Momentum algorithm** # # 1. $\mathbf{m} \gets \beta \mathbf{m} - \eta \nabla_\boldsymbol{\theta}J(\boldsymbol{\theta})$ # 2. $\boldsymbol{\theta} \gets \boldsymbol{\theta} + \mathbf{m}$ # # **In the text page 296:** # # You can easily verify that if the gradient remains constant, the terminal velocity (i.e., the maximum size of the weight updates) is equal to that gradient multiplied by the learning rate η multiplied by $ \frac{1}{1 - \beta} $. # # # **Equation 11-5: Nesterov Accelerated Gradient algorithm** # # 1. $\mathbf{m} \gets \beta \mathbf{m} - \eta \nabla_\boldsymbol{\theta}J(\boldsymbol{\theta} + \beta \mathbf{m})$ # 2. $\boldsymbol{\theta} \gets \boldsymbol{\theta} + \mathbf{m}$ # # **Equation 11-6: AdaGrad algorithm** # # 1. $\mathbf{s} \gets \mathbf{s} + \nabla_\boldsymbol{\theta}J(\boldsymbol{\theta}) \otimes \nabla_\boldsymbol{\theta}J(\boldsymbol{\theta})$ # 2. $\boldsymbol{\theta} \gets \boldsymbol{\theta} - \eta \, \nabla_\boldsymbol{\theta}J(\boldsymbol{\theta}) \oslash {\sqrt{\mathbf{s} + \epsilon}}$ # # **In the text page 298-299:** # # This vectorized form is equivalent to computing $s_i \gets s_i + \left( \dfrac{\partial J(\boldsymbol{\theta})}{\partial \theta_i} \right)^2$ for each element $s_i$ of the vector $\mathbf{s}$. # # **In the text page 299:** # # This vectorized form is equivalent to computing $ \theta_i \gets \theta_i - \eta \, \dfrac{\partial J(\boldsymbol{\theta})}{\partial \theta_i} \dfrac{1}{\sqrt{s_i + \epsilon}} $ for all parameters $\theta_i$ (simultaneously). # # # **Equation 11-7: RMSProp algorithm** # # 1. $\mathbf{s} \gets \beta \mathbf{s} + (1 - \beta ) \nabla_\boldsymbol{\theta}J(\boldsymbol{\theta}) \otimes \nabla_\boldsymbol{\theta}J(\boldsymbol{\theta})$ # 2. $\boldsymbol{\theta} \gets \boldsymbol{\theta} - \eta \, \nabla_\boldsymbol{\theta}J(\boldsymbol{\theta}) \oslash {\sqrt{\mathbf{s} + \epsilon}}$ # # # **Equation 11-8: Adam algorithm** # # 1. 
$\mathbf{m} \gets \beta_1 \mathbf{m} - (1 - \beta_1) \nabla_\boldsymbol{\theta}J(\boldsymbol{\theta})$ # 2. $\mathbf{s} \gets \beta_2 \mathbf{s} + (1 - \beta_2) \nabla_\boldsymbol{\theta}J(\boldsymbol{\theta}) \otimes \nabla_\boldsymbol{\theta}J(\boldsymbol{\theta})$ # 3. $\hat{\mathbf{m}} \gets \left(\dfrac{\mathbf{m}}{1 - {\beta_1}^T}\right)$ # 4. $\hat{\mathbf{s}} \gets \left(\dfrac{\mathbf{s}}{1 - {\beta_2}^T}\right)$ # 5. $\boldsymbol{\theta} \gets \boldsymbol{\theta} + \eta \, \hat{\mathbf{m}} \oslash {\sqrt{\hat{\mathbf{s}} + \epsilon}}$ # # **In the text page 309:** # # We typically implement this constraint by computing $\left\| \mathbf{w} \right\|_2$ after each training step # and clipping $\mathbf{w}$ if needed $ \left( \mathbf{w} \gets \mathbf{w} \dfrac{r}{\left\| \mathbf{w} \right\|_2} \right) $. # # # # # Chapter 13 # # **Equation 13-1: Computing the output of a neuron in a convolutional layer** # # $ # z_{i,j,k} = b_k + \sum\limits_{u = 0}^{f_h - 1} \, \, \sum\limits_{v = 0}^{f_w - 1} \, \, \sum\limits_{k' = 0}^{f_{n'} - 1} \, \, x_{i', j', k'} \times w_{u, v, k', k} # \quad \text{with } # \begin{cases} # i' = i \times s_h + u \\ # j' = j \times s_w + v # \end{cases} # $ # # **Equation 13-2: Local response normalization** # # $ # b_i = a_i \left(k + \alpha \sum\limits_{j=j_\text{low}}^{j_\text{high}}{{a_j}^2} \right)^{-\beta} \quad \text{with } # \begin{cases} # j_\text{high} = \min\left(i + \dfrac{r}{2}, f_n-1\right) \\ # j_\text{low} = \max\left(0, i - \dfrac{r}{2}\right) # \end{cases} # $ # # # # # Chapter 14 # # **Equation 14-1: Output of a recurrent layer for a single instance** # # $ # \mathbf{y}_{(t)} = \phi\left({\mathbf{W}_x}^T{\mathbf{x}_{(t)}} + {{\mathbf{W}_y}^T\mathbf{y}_{(t-1)}} + \mathbf{b} \right) # $ # # # **Equation 14-2: Outputs of a layer of recurrent neurons for all instances in a mini-batch** # # $ # \begin{split} # \mathbf{Y}_{(t)} & = \phi\left(\mathbf{X}_{(t)} \mathbf{W}_{x} + \mathbf{Y}_{(t-1)} \mathbf{W}_{y} + \mathbf{b} \right) \\ # & = \phi\left( # \left[\mathbf{X}_{(t)} \quad \mathbf{Y}_{(t-1)} \right] # \mathbf{W} + \mathbf{b} \right) \text{ with } \mathbf{W}= # \left[ \begin{matrix} # \mathbf{W}_x\\ # \mathbf{W}_y # \end{matrix} \right] # \end{split} # $ # # **In the text page 391:** # # Just like in regular backpropagation, there is a first forward pass through the unrolled network (represented by the dashed arrows); then the output sequence is evaluated using a cost function $ C(\mathbf{Y}_{(t_\text{min})}, \mathbf{Y}_{(t_\text{min}+1)}, \dots, \mathbf{Y}_{(t_\text{max})}) $ (where $t_\text{min}$ and $t_\text{max}$ are the first and last output time steps, not counting the ignored outputs)[...] 
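# *Added illustration (not from the book): a minimal NumPy sketch of Equation 14-2, computing the outputs of a layer of recurrent neurons for a whole mini-batch, one time step at a time, with $\phi$ taken to be tanh. All sizes and inputs are arbitrary toy values.*

# +
import numpy as np

rng = np.random.default_rng(0)
m, n_inputs, n_neurons, n_steps = 4, 3, 5, 2     # mini-batch size, input size, layer size, time steps

Wx = rng.normal(size=(n_inputs, n_neurons))
Wy = rng.normal(size=(n_neurons, n_neurons))
b  = np.zeros(n_neurons)

X_seq = rng.normal(size=(n_steps, m, n_inputs))  # X_(t) for t = 0, 1, ...
Y = np.zeros((m, n_neurons))                     # Y_(t-1), initialized to zeros

for t in range(n_steps):
    Y = np.tanh(X_seq[t] @ Wx + Y @ Wy + b)      # Equation 14-2
print(Y.shape)                                   # (m, n_neurons)
# -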
# # # **Equation 14-3: LSTM computations** # # $ # \begin{split} # \mathbf{i}_{(t)}&=\sigma({\mathbf{W}_{xi}}^T \mathbf{x}_{(t)} + {\mathbf{W}_{hi}}^T \mathbf{h}_{(t-1)} + \mathbf{b}_i)\\ # \mathbf{f}_{(t)}&=\sigma({\mathbf{W}_{xf}}^T \mathbf{x}_{(t)} + {\mathbf{W}_{hf}}^T \mathbf{h}_{(t-1)} + \mathbf{b}_f)\\ # \mathbf{o}_{(t)}&=\sigma({\mathbf{W}_{xo}}^T \mathbf{x}_{(t)} + {\mathbf{W}_{ho}}^T \mathbf{h}_{(t-1)} + \mathbf{b}_o)\\ # \mathbf{g}_{(t)}&=\operatorname{tanh}({\mathbf{W}_{xg}}^T \mathbf{x}_{(t)} + {\mathbf{W}_{hg}}^T \mathbf{h}_{(t-1)} + \mathbf{b}_g)\\ # \mathbf{c}_{(t)}&=\mathbf{f}_{(t)} \otimes \mathbf{c}_{(t-1)} \, + \, \mathbf{i}_{(t)} \otimes \mathbf{g}_{(t)}\\ # \mathbf{y}_{(t)}&=\mathbf{h}_{(t)} = \mathbf{o}_{(t)} \otimes \operatorname{tanh}(\mathbf{c}_{(t)}) # \end{split} # $ # # # **Equation 14-4: GRU computations** # # $ # \begin{split} # \mathbf{z}_{(t)}&=\sigma({\mathbf{W}_{xz}}^T \mathbf{x}_{(t)} + {\mathbf{W}_{hz}}^T \mathbf{h}_{(t-1)}) \\ # \mathbf{r}_{(t)}&=\sigma({\mathbf{W}_{xr}}^T \mathbf{x}_{(t)} + {\mathbf{W}_{hr}}^T \mathbf{h}_{(t-1)}) \\ # \mathbf{g}_{(t)}&=\operatorname{tanh}\left({\mathbf{W}_{xg}}^T \mathbf{x}_{(t)} + {\mathbf{W}_{hg}}^T (\mathbf{r}_{(t)} \otimes \mathbf{h}_{(t-1)})\right) \\ # \mathbf{h}_{(t)}&=(1-\mathbf{z}_{(t)}) \otimes \mathbf{h}_{(t-1)} + \mathbf{z}_{(t)} \otimes \mathbf{g}_{(t)} # \end{split} # $ # # # # # Chapter 15 # # **Equation 15-1: Kullback–Leibler divergence** # # $ # D_{\mathrm{KL}}(P\|Q) = \sum\limits_{i} P(i) \log \dfrac{P(i)}{Q(i)} # $ # # # **Equation: KL divergence between the target sparsity _p_ and the actual sparsity _q_** # # $ # D_{\mathrm{KL}}(p\|q) = p \, \log \dfrac{p}{q} + (1-p) \log \dfrac{1-p}{1-q} # $ # # **In the text page 433:** # # One common variant is to train the encoder to output $\gamma = \log\left(\sigma^2\right)$ rather than $\sigma$. # Wherever we need $\sigma$ we can just compute $ \sigma = \exp\left(\dfrac{\gamma}{2}\right) $. # # # # # Chapter 16 # # **Equation 16-1: Bellman Optimality Equation** # # $ # V^*(s) = \underset{a}{\max}\sum\limits_{s'}{T(s, a, s') [R(s, a, s') + \gamma . V^*(s')]} \quad \text{for all }s # $ # # **Equation 16-2: Value Iteration algorithm** # # $ # V_{k+1}(s) \gets \underset{a}{\max}\sum\limits_{s'}{T(s, a, s') [R(s, a, s') + \gamma . V_k(s')]} \quad \text{for all }s # $ # # # **Equation 16-3: Q-Value Iteration algorithm** # # $ # Q_{k+1}(s, a) \gets \sum\limits_{s'}{T(s, a, s') [R(s, a, s') + \gamma . \underset{a'}{\max}\,{Q_k(s',a')}]} \quad \text{for all } (s,a) # $ # # **In the text page 458:** # # Once you have the optimal Q-Values, defining the optimal policy, noted $\pi^{*}(s)$, is trivial: when the agent is in state $s$, it should choose the action with the highest Q-Value for that state: $ \pi^{*}(s) = \underset{a}{\operatorname{argmax}} \, Q^*(s, a) $. # # # **Equation 16-4: TD Learning algorithm** # # $ # V_{k+1}(s) \gets (1-\alpha)V_k(s) + \alpha\left(r + \gamma . V_k(s')\right) # $ # # # **Equation 16-5: Q-Learning algorithm** # # $ # Q_{k+1}(s, a) \gets (1-\alpha)Q_k(s,a) + \alpha\left(r + \gamma . 
\underset{a'}{\max} \, Q_k(s', a')\right) # $ # # # **Equation 16-6: Q-Learning using an exploration function** # # $ # Q(s, a) \gets (1-\alpha)Q(s,a) + \alpha\left(r + \gamma \, \underset{a'}{\max}f(Q(s', a'), N(s', a'))\right) # $ # # **Equation 16-7: Target Q-Value** # # $ # y(s,a)=r+\gamma\,\max_{a'}\,Q_\boldsymbol\theta(s',a') # $ # # Appendix A # # Equations that appear in the text: # # $ # \mathbf{H} = # \begin{pmatrix} # \mathbf{H'} & 0 & \cdots\\ # 0 & 0 & \\ # \vdots & & \ddots # \end{pmatrix} # $ # # # $ # \mathbf{A} = # \begin{pmatrix} # \mathbf{A'} & \mathbf{I}_m \\ # \mathbf{0} & -\mathbf{I}_m # \end{pmatrix} # $ # # # $ 1 - \frac{1}{5}^2 - \frac{4}{5}^2 $ # # # $ 1 - \frac{1}{2}^2 - \frac{1}{2}^2 $ # # # $ \frac{2}{5} \times $ # # # $ \frac{3}{5} \times 0 $ # # Appendix C # Equations that appear in the text: # # $ (\hat{x}, \hat{y}) $ # # # $ \hat{\alpha} $ # # # $ (\hat{x}, \hat{y}, \hat{\alpha}) $ # # # $ # \begin{cases} # \frac{\partial}{\partial x}g(x, y, \alpha) = 2x - 3\alpha\\ # \frac{\partial}{\partial y}g(x, y, \alpha) = 2 - 2\alpha\\ # \frac{\partial}{\partial \alpha}g(x, y, \alpha) = -3x - 2y - 1\\ # \end{cases} # $ # # # $ 2\hat{x} - 3\hat{\alpha} = 2 - 2\hat{\alpha} = -3\hat{x} - 2\hat{y} - 1 = 0 $ # # # $ \hat{x} = \frac{3}{2} $ # # # $ \hat{y} = -\frac{11}{4} $ # # # $ \hat{\alpha} = 1 $ # # # **Equation C-1: Generalized Lagrangian for the hard margin problem** # # $ # \begin{split} # \mathcal{L}(\mathbf{w}, b, \mathbf{\alpha}) = \frac{1}{2}\mathbf{w}^T \mathbf{w} - \sum\limits_{i=1}^{m}{\alpha^{(i)} \left(t^{(i)}(\mathbf{w}^T \mathbf{x}^{(i)} + b) - 1\right)} \\ # \text{with}\quad \alpha^{(i)} \ge 0 \quad \text{for }i = 1, 2, \dots, m # \end{split} # $ # # **More equations in the text:** # # $ (\hat{\mathbf{w}}, \hat{b}, \hat{\mathbf{\alpha}}) $ # # # $ t^{(i)}(\hat{\mathbf{w}}^T \mathbf{x}^{(i)} + \hat{b}) \ge 1 \quad \text{for } i = 1, 2, \dots, m $ # # # $ {\hat{\alpha}}^{(i)} \ge 0 \quad \text{for } i = 1, 2, \dots, m $ # # # $ {\hat{\alpha}}^{(i)} = 0 $ # # # $ t^{(i)}((\hat{\mathbf{w}})^T \mathbf{x}^{(i)} + \hat{b}) = 1 $ # # # $ {\hat{\alpha}}^{(i)} = 0 $ # # # **Equation C-2: Partial derivatives of the generalized Lagrangian** # # $ # \begin{split} # \nabla_{\mathbf{w}}\mathcal{L}(\mathbf{w}, b, \mathbf{\alpha}) = \mathbf{w} - \sum\limits_{i=1}^{m}\alpha^{(i)}t^{(i)}\mathbf{x}^{(i)}\\ # \dfrac{\partial}{\partial b}\mathcal{L}(\mathbf{w}, b, \mathbf{\alpha}) = -\sum\limits_{i=1}^{m}\alpha^{(i)}t^{(i)} # \end{split} # $ # # # **Equation C-3: Properties of the stationary points** # # $ # \begin{split} # \hat{\mathbf{w}} = \sum_{i=1}^{m}{\hat{\alpha}}^{(i)}t^{(i)}\mathbf{x}^{(i)}\\ # \sum_{i=1}^{m}{\hat{\alpha}}^{(i)}t^{(i)} = 0 # \end{split} # $ # # # **Equation C-4: Dual form of the SVM problem** # # $ # \begin{split} # \mathcal{L}(\hat{\mathbf{w}}, \hat{b}, \mathbf{\alpha}) = \dfrac{1}{2}\sum\limits_{i=1}^{m}{ # \sum\limits_{j=1}^{m}{ # \alpha^{(i)} \alpha^{(j)} t^{(i)} t^{(j)} {\mathbf{x}^{(i)}}^T \mathbf{x}^{(j)} # } # } \quad - \quad \sum\limits_{i=1}^{m}{\alpha^{(i)}}\\ # \text{with}\quad \alpha^{(i)} \ge 0 \quad \text{for }i = 1, 2, \dots, m # \end{split} # $ # # **Some more equations in the text:** # # $ \hat{\mathbf{\alpha}} $ # # # $ {\hat{\alpha}}^{(i)} \ge 0 $ # # # $ \hat{\mathbf{\alpha}} $ # # # $ \hat{\mathbf{w}} $ # # # $ \hat{b} $ # # # $ \hat{b} = t^{(k)} - {\hat{\mathbf{w}}}^T \mathbf{x}^{(k)} $ # # # **Equation C-5: Bias term estimation using the dual form** # # $ # \hat{b} = \dfrac{1}{n_s}\sum\limits_{\scriptstyle i=1 \atop 
{\scriptstyle {\hat{\alpha}}^{(i)} > 0}}^{m}{\left[t^{(i)} - {\hat{\mathbf{w}}}^T \mathbf{x}^{(i)}\right]} # $ # # Appendix D # **Equation D-1: Partial derivatives of $f(x,y)$** # # $ # \begin{split} # \dfrac{\partial f}{\partial x} & = \dfrac{\partial(x^2y)}{\partial x} + \dfrac{\partial y}{\partial x} + \dfrac{\partial 2}{\partial x} = y \dfrac{\partial(x^2)}{\partial x} + 0 + 0 = 2xy \\ # \dfrac{\partial f}{\partial y} & = \dfrac{\partial(x^2y)}{\partial y} + \dfrac{\partial y}{\partial y} + \dfrac{\partial 2}{\partial y} = x^2 + 1 + 0 = x^2 + 1 \\ # \end{split} # $ # # **In the text:** # # $ \frac{\partial g}{\partial x} = 0 + (0 \times x + y \times 1) = y $ # # # $ \frac{\partial x}{\partial x} = 1 $ # # # $ \frac{\partial y}{\partial x} = 0 $ # # # $ \frac{\partial (u \times v)}{\partial x} = \frac{\partial v}{\partial x} \times u + \frac{\partial u}{\partial x} \times u $ # # # $ \frac{\partial g}{\partial x} = 0 + (0 \times x + y \times 1) $ # # # $ \frac{\partial g}{\partial x} = y $ # # # **Equation D-2: Derivative of a function _h_(_x_) at point _x_~0~** # # $ # \begin{split} # h'(x) & = \underset{\textstyle x \to x_0}{\lim}\dfrac{h(x) - h(x_0)}{x - x_0}\\ # & = \underset{\textstyle \epsilon \to 0}{\lim}\dfrac{h(x_0 + \epsilon) - h(x_0)}{\epsilon} # \end{split} # $ # # # **Equation D-3: A few operations with dual numbers** # # $ # \begin{split} # &\lambda(a + b\epsilon) = \lambda a + \lambda b \epsilon\\ # &(a + b\epsilon) + (c + d\epsilon) = (a + c) + (b + d)\epsilon \\ # &(a + b\epsilon) \times (c + d\epsilon) = ac + (ad + bc)\epsilon + (bd)\epsilon^2 = ac + (ad + bc)\epsilon\\ # \end{split} # $ # # **In the text:** # # $ \frac{\partial f}{\partial x}(3, 4) $ # # # $ \frac{\partial f}{\partial y}(3, 4) $ # # # **Equation D-4: Chain rule** # # $ # \dfrac{\partial f}{\partial x} = \dfrac{\partial f}{\partial n_i} \times \dfrac{\partial n_i}{\partial x} # $ # # **In the text:** # # $ \frac{\partial f}{\partial n_7} = 1 $ # # # $ \frac{\partial f}{\partial n_5} = \frac{\partial f}{\partial n_7} \times \frac{\partial n_7}{\partial n_5} $ # # # $ \frac{\partial f}{\partial n_7} = 1 $ # # # $ \frac{\partial n_7}{\partial n_5} $ # # # $ \frac{\partial n_7}{\partial n_5} = 1 $ # # # $ \frac{\partial f}{\partial n_5} = 1 \times 1 = 1 $ # # # $ \frac{\partial f}{\partial n_4} = \frac{\partial f}{\partial n_5} \times \frac{\partial n_5}{\partial n_4} $ # # # $ \frac{\partial n_5}{\partial n_4} = n_2 $ # # # $ \frac{\partial f}{\partial n_4} = 1 \times n_2 = 4 $ # # # $ \frac{\partial f}{\partial x} = 24 $ # # # $ \frac{\partial f}{\partial y} = 10 $ # # Appendix E # **Equation E-1: Probability that the i^th^ neuron will output 1** # # $ # p\left(s_i^{(\text{next step})} = 1\right) \, = \, \sigma\left(\frac{\textstyle \sum\limits_{j = 1}^N{w_{i,j}s_j + b_i}}{\textstyle T}\right) # $ # # **In the text:** # # $ \dot{\mathbf{x}} $ # # # $ \dot{\mathbf{h}} $ # # # **Equation E-2: Contrastive divergence weight update** # # $ # w_{i,j}^{(\text{next step})} = w_{i,j} + \eta(\mathbf{x}\mathbf{h}^T - \dot{\mathbf{x}} \dot {\mathbf{h}}^T) # $ # # Glossary # # In the text: # # $\ell _1$ # # # $\ell _2$ # # # $\ell _k$ # # # $ \chi^2 $ # # Just in case your eyes hurt after all these equations, let's finish with the single most beautiful equation in the world. No, it's not $E = mc²$, it's obviously Euler's identity: # $e^{i\pi}+1=0$
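# *One last added illustration (not from the book): the dual-number arithmetic of Equation D-3 is enough to recover the partial derivatives of $f(x, y) = x^2 y + y + 2$ from Appendix D by forward-mode autodiff; the text gives $\frac{\partial f}{\partial x}(3, 4) = 24$ and $\frac{\partial f}{\partial y}(3, 4) = 10$.*

# +
class Dual:
    """A dual number a + b*epsilon, supporting the operations of Equation D-3."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.a + other.a, self.b + other.b)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)
    __rmul__ = __mul__

def f(x, y):
    return x * x * y + y + 2

print(f(Dual(3, 1), Dual(4, 0)).b)   # df/dx at (3, 4) -> 24.0
print(f(Dual(3, 0), Dual(4, 1)).b)   # df/dy at (3, 4) -> 10.0
# -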
book_equations.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # --- # # Annotating Your Data import holoviews as hv import holoviews.util hv.extension('bokeh') # As introduced in the [Getting Started guide](../getting_started/1-Introduction.ipynb), HoloViews relies heavily on semantic *annotations*, i.e., metadata you declare that lets HoloViews interpret what your data represents. With these annotations, HoloViews can perform complex tasks like visualization automatically. # # There are three main kinds of annotation that can be associated with each element: # 1. **Type**, used to declare the sort of data you have, which is required before it can be visualized, # 2. **Dimensions**, used to specify the abstract space in which the data resides, allowing axis labeling and indexing, and # 3. **Group/Label**, used to declare a meaningful category and human-readable description of the element, allowing plot labeling and selecting related sets of elements. # # This user guide explains each of these three types of annotation, describing why you would need or want to use them. # ## 1. Specifying element type # # Basic Python data structures like dataframes, arrays, lists, and dictionaries can be used to represent an infinite variety of different types of data, and thus they cannot be visualized as any particular type of graphical representation without some additional information from the user that says what sort of data it is meant to be. The user can declare this information by selecting a suitable HoloViews element type from the many different ones available (see the [Reference Gallery](http://holoviews.org/reference/index.html)). # # For instance, let's say you have two lists of numbers: xs = range(-10,11) ys = [100-x**2 for x in xs] # As far as Python is concerned, ``xs`` and ``ys`` are just two arbitrary lists, which could represent nearly anything imaginable. But we as humans can see that each of the ``ys`` is a value computed from one of the ``xs`` by evaluating the function $y=100-x^2$. We can convey some of that information to HoloViews by choosing a ``Curve`` element type, which is a convenient shorthand for "a discrete set of real-valued samples from a continuous function of one real-valued variable": curve = hv.Curve((xs, ys)) curve # As you can see, declaring the element type is the only *required* bit of annotation, instantly making your data visualizable. However, this initial visualization relies on various defaults that may not be appropriate for your data, and you can override these defaults by declaring additional annotations as described below. # ## 2. Specifying element dimensionality # # Each element type can process a certain number and type of *dimensions*, i.e., ways in which the data can vary. For instance, the ``Curve`` object above has two dimensions, $x$ and $y$. If you look at how we generated the data, you can see that these two dimensions are semantically different -- we chose an arbitrary set of values for the ``xs``, and then calculated a corresponding value to make each of the ``ys``. In mathematical terms, $x$ is thus an independent variable (selected by the creator of the data), and $y$ is a dependent variable (typically measured or calculated from the independent variable(s)). # # HoloViews elements call these two different types of variables *key dimensions* (``kdims``) and *value dimensions* (``vdims``). 
The *key dimensions* are the dimensions you can index *by* to get the values corresponding to the *value* dimensions. You can learn more about indexing data in the later [Indexing and Selecting Data](./09-Indexing_and_Selecting_Data.ipynb) user guide. # # Different elements have different numbers of required key dimensions and value dimensions. For instance, a ``Curve`` always has one key dimension and one value dimension. As we did not explicitly specify anything regarding dimensions when declaring the curve above, the ``kdims`` and ``vidms`` use their default names 'x' and 'y': "Object 'curve' has kdims {kdims} and vdims {vdims}".format(kdims=curve.kdims, vdims=curve.vdims) # The easiest way to override the default dimension names is to provide strings for the dimensions, where the second argument in the Element constructor will always be the ``kdims``, and the third will always be the ``vdims``: trajectory = hv.Curve((xs, ys), 'distance', 'height') trajectory "Object 'trajectory' has kdims {kdims} and vdims {vdims} ".format(kdims=trajectory.kdims, vdims=trajectory.vdims) # We can see that the strings we provided have been 'promoted' to dimension objects. The ``kdims`` and ``vdims`` *always* contain instances of the ``Dimension`` class, described in the following section. Here, the immediate effect is to use the new names for the displayed axis labels. # ### Dimension parameters # # ``Dimensions`` are not just names, they are rich objects with numerous parameters that can be used to describe the space in which the data resides. Only two of these are considered *core* parameters that uniquely identify the dimension object; the rest are auxilliary metadata. The most important parameters are: # # <br> # <dl class="dl-horizontal"> # <dt>``name``</dt><dd>(core) A concise name for the dimension, which for convenient usage as a keyword argument should usually be a legal Python identifier.</dd> # <dt>``label`` <dd>(core) A optional longer description of the dimension, which is convenient if you want the displayed label to contain arbitrary spaces, symbols, or unicode.</dd> # <dt>``range`` <dd>The minimum and maximum allowable values for the dimension, for error checking and generating widgets when needed.</dd> # <dt>``soft_range`` <dd>Suggested minimum and maximum values within the allowed range, used to specify a useful portion of the range for widgets and animations.</dd> # <dt>``step`` <dd>Suggested interval for sampling a continuous range, if needed for a widget or animation.</dd> # <dt>``unit`` <dd>The name of the unit to be associated with the dimension, if any, for labelling.</dd> # <dt>``values`` <dd>Explicit list of allowed dimension values, for error checking, widgets, and animations.</dd> # </dl> # # # For the full list of parameters, you can call ``hv.help(hv.Dimension)``. 
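# For instance, a dimension carrying several of these parameters can be declared explicitly and passed to an element in place of a plain string (an added illustration; the range values here are made up for the example):

# +
height_dim = hv.Dimension('height', label='Height above sea level',
                          unit='m', range=(0, 120))
hv.Curve((xs, ys), 'distance', height_dim)
# -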
# # Similar to how you can just use a string if all you want to specify is the name, you can provide a ``(name,label)`` tuple if you want to specify the ``name`` and the ``label`` to ``kdims`` and ``vdims`` without building an explicit ``Dimension``: # + wo_unit = hv.Curve((xs, ys), ('distance','Horizontal distance'), ('height','Height above sea level')) distance = hv.Dimension('distance', label='Horizontal distance', unit='m') height = hv.Dimension(('height','Height above sea level'), unit='m') with_unit = hv.Curve((xs, ys), distance, height) # (using + to compose elements is described in the next guide) wo_unit + with_unit # - # Note that after supplying the longer labels, you can still use the short name to specify the dimension in keyword arguments. For instance, try using ``with_unit.select(distance=(5,8))`` in the cell above. # ### Setting properties with redim # # Declaring dimension objects with appropriate parameters can be awkward and verbose if you only want to set a few specific parameters. You can often avoid declaring explicit dimension objects using the ``redim`` method, which returns a *clone* of the element: the same data, wrapped in a new instance of the same element type with the new dimension settings. # # Let's use ``redim`` to swap out the 'height' dimension for an 'altitude' dimension: renamed_height = trajectory.redim(height='altitude') renamed_height # The ``redim`` "method" is actually a utility that can be used to set any of the dimension parameters, such as the label, unit, range, or values. For instance, the label can be updated on an existing object by specifying the dimension name and then the new value for that parameter: renamed_height.redim.label(altitude='Altitude above sea-level', distance='Horizontal distance') # ## 3. Organizing your elements with groups and labels # # A complex visualization you build with HoloViews may include many instances of the same element type, each built from different bits of data and potentially representing categorically distinct types of information to you. To help you keep track of these distinctions when you need to, HoloViews provides a ``group`` parameter you can use to declare semantically distinct categories for elements, and a ``label`` parameter you can use to identify which specific item the element represents within that category: low_ys = [25-(0.5*el)**2 for el in xs] hv.Curve((xs, low_ys), group='Trajectory', label='Shallow') + \ hv.Curve((xs, ys), group='Trajectory', label='Medium') # As you can see, the ``group`` and ``label`` information will be used to generate sensible titles, here indicating that both sets of data represent trajectories, and that there are two different specific trajectories being shown. Once the group and/or label have been specified, they can be used for [Customizing Plots](./03-Customizing_Plots.ipynb) (e.g. to make all trajectories have the same line width and style, or to customize one particular plot out of many of the same type). The group and label are also used for indexing, as we will see in the following [Composing_Elements](./02-Composing_Elements.ipynb) guide.
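# As a brief preview of that indexing (this example is an addition to the guide), the group and label become attribute names on the composed object, so an individual element can be retrieved back out of a layout:

# +
shallow = hv.Curve((xs, low_ys), group='Trajectory', label='Shallow')
medium  = hv.Curve((xs, ys), group='Trajectory', label='Medium')
layout = shallow + medium
layout.Trajectory.Shallow   # retrieve one curve by its group, then its label
# -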
examples/user_guide/01-Annotating_Data.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # <img src="https://hms.harvard.edu/themes/harvardmedical/logo.svg" width= "250px"> # # --- # # <img src="https://hail.is/docs/devel/hail-logo-cropped.png" width= "50px"> **Workshop** # This notebook is designed to provide a broad overview of Hail's functionality, with emphasis on the functionality to manipulate and query a genetic dataset. Please refer to <https://hail.is/docs/0.2/index.html> for additional information. This sample notebook was generated based on the following: <https://hail.is/docs/0.2/tutorials/01-genome-wide-association-study.html>. Note the additional functionality of library `plotting.py`, also part of the development tools from the Hail team. Additional information on `Jupyter Lab`: <https://jupyterlab.readthedocs.io/en/stable/getting_started/overview.html> # # **Module 1** # # ## Introduction to `Hail` # Load HAIL and packages import hail as hl import hail.expr.aggregators as agg hl.init() from pprint import pprint from bokeh.io import output_notebook, show from bokeh.layouts import gridplot from bokeh.models import Span from bokeh.plotting import figure, show, output_file import pandas as pd import os , sys, time import numpy as np output_notebook() # To learn more about bokeh, look at https://bokeh.pydata.org/en/latest/ local_path=os.getcwd() sys.path.append(local_path) import plotting # --- # Load data from the 1K-Genome project # hl.utils.get_1kg('data/') # Read genetic data into a matrix table. mt = hl.read_matrix_table('data/1kg.mt/') # Hail has its own internal data representation, called a [MatrixTable](https://hail.is/docs/0.2/tutorials/09-matrixtable.html) # # Dividing mt into partitions. See additional details: <https://hail.is/docs/0.2/hail.MatrixTable.html?highlight=partition#hail.MatrixTable.repartition> CPU = 4 nodes = 1 # 1 node = 1 machine mt = mt.repartition( 4 * CPU * nodes) type(mt) # The `MatrixTable.describe()` method prints all fields in the table and their types, as well as the keys. mt.describe() list(mt.row) print('Samples: %d Variants: %d' % (mt.count_cols(), mt.count_rows())) # To know exactly the number of variants per chromosome and the nature of our SNPs, we can use `summarize_variants()`. hl.summarize_variants(mt) mt.qual.show() # The [rows](https://hail.is/docs/devel/hail.MatrixTable.html#hail.MatrixTable.rows) method can be used to get a table with all the row fields in our MatrixTable. # You can use the `show` method to display the variants. mt.AD.show() # To look at the first few genotype calls, we can use [entries](https://hail.is/docs/devel/hail.MatrixTable.html#hail.MatrixTable.entries) along with `select` and `take`. The `take` method collects the first n rows into a list. Alternatively, we can use the `show` method, which prints the first n rows to the console in a table format. # # Try changing `take` to `show` in the cell below. mt.entry.show(5) mt.aggregate_rows(hl.agg.count_where(mt.alleles==['A','T'])) snp_counts = mt.aggregate_rows( hl.array(hl.agg.counter(mt.alleles))) snp_counts type(snp_counts) sorted(snp_counts, key=lambda x: x[1]) mt.aggregate_entries(hl.agg.stats(mt.GQ)) mt.aggregate_entries( hl.agg.filter(mt.GT.is_hom_ref(),hl.agg.stats(mt.GQ))) # + # hl.agg.stats? 
# - mt.aggregate_entries( hl.agg.filter(~mt.GT.is_hom_ref(),hl.agg.stats(mt.GQ))) mt.aggregate_entries( hl.agg.filter(mt.GT.is_het(),hl.agg.stats(mt.GQ))) p=hl.plot.histogram(mt.GQ, bins=100) show(p) p=hl.plot.histogram(mt.filter_entries(mt.GT.is_hom_ref()).GQ, bins=100) show(p) p=hl.plot.histogram( mt.filter_entries(mt.GT.is_het_ref()).GQ, bins=100) show(p) p=hl.plot.histogram( mt.filter_entries((mt.DP == 10 ) & mt.GT.is_het_ref()).GQ, bins=100) show(p) # # **Module 2** # # ## GWAS in 5 steps # Load phenotypic data as table table = (hl.import_table('data/1kg_annotations.txt', impute=True) .key_by('Sample')) # Annotations are important in any genetic study. Column fields are where you will store information about sample like phenotypes, ancestry, sex, and covariates. Let's annotate the columns in our MatrixTable. # Show the first 10 rows of the table table.show(10) # Notice that the show command only works this way in tables. In matrix tables it is necessary to specify which of the 3 tables we want to show: rows, columns or entries: # # `table.show()` --> Table # # `mt.row.alles.show()` --> Matrix Table # # This is a magic function from Python. Not very common, but one can preview local data using a shell command # %%sh head plotting.py # We use the `annotate_cols` method to join the table with the MatrixTable containing our dataset. mt = mt.annotate_cols(pheno = table[mt.s]) # The information from the table is added to the column field of the matrixtable under "pheno". # ### 1. QC: mt = hl.variant_qc(mt) mt.row.describe() # The hardy-weinberg equilibrium (HWE) states that the allele frequency should remain unchanged within a population. # Outliers from hwe are identified by a p-value larger than 1e-6. mt = mt.filter_rows(mt.variant_qc.p_value_hwe > 1e-6) mt = hl.sample_qc(mt) mt.col.describe() # Control for allele depth (dp_stats) and missingness (call_rate). mt = mt.filter_cols((mt.sample_qc.dp_stats.mean >= 4) & (mt.sample_qc.call_rate >= 0.97)) # Check whether the labels homozygous to reference(hom_ref), heterozygous (het), or homozygous variants (hom_var) are indeed correct. # Calculate number of alternates : ab = mt.AD[1] / hl.sum(mt.AD) # + filter_condition_ab = ((mt.GT.is_hom_ref() & (ab <= 0.1)) | (mt.GT.is_het() & (ab >= 0.25) & (ab <= 0.75)) | (mt.GT.is_hom_var() & (ab >= 0.9))) mt = mt.filter_entries(filter_condition_ab) # - # For each of the statistics you can filter for outliers. In this example number of singletons (variants that occur once in the dataset). stats_singleton = mt.aggregate_cols(hl.agg.stats(mt.sample_qc.n_singleton)) mt = mt.filter_cols(mt.sample_qc.n_singleton < (stats_singleton.mean + (3 * stats_singleton.stdev))) mt = mt.filter_cols(mt.sample_qc.n_singleton > (stats_singleton.mean - (3 * stats_singleton.stdev))) # ### 2. Population stratification by genetic ancestry # The primary confounder of single‐nucleotide poly- morphism (SNP) to phenotype associations is genetic ancestry. To control for this, we estimate the principal components (PCs) that summarize genetic ancestry to include as covariates in all analyses. # Filter for common variants to preserve power in the principal component analysis. mt_common = mt.filter_rows(mt.variant_qc.AF[1] > 0.05) # The `pca` method produces eigenvalues as a list and sample PCs as a Table, and can also produce variant loadings when asked. The `hwe_normalized_pca` method does the same, using HWE-normalized genotypes for the PCA. 
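# Before running the PCA, it can be worth a quick check (an optional sketch, not part of the original workshop flow) of how much the common-variant filter reduced the dataset:

print('Variants before filtering: %d, after filtering: %d' % (mt.count_rows(), mt_common.count_rows()))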
eigenvalues, scores, loadings = hl.hwe_normalized_pca(mt_common.GT, k = 10, compute_loadings = True) pprint(eigenvalues) scores.show(5, width = 100) # Project the scores from the common variants onto the rare variants. The scores will be used to correct for the population stratification in the following analyses. mt = mt.annotate_cols(scores = scores[mt.s].scores) mt.scores.dtype # Plot the first two PCs # After plotting the PCA, try to click on the population labels on the left. The plot is interactive, this is done through the `plotting.py` library. # + pca = plotting.scatter_plot(mt.scores[0], mt.scores[1], label_fields={ 'Population': mt.pheno.SuperPopulation, 'Caffeine': mt.pheno.CaffeineConsumption}, title='PCA, first two principal components', xlabel='PC1', ylabel='PC2') show(pca) # - # ### 3. Linear regression # Perform linear regression on caffeine consumption and the variants (that are not equal to reference, thus alternates) with covariates: # - 1.0 is input variable number of alternate alleles, with input variable the genotype dosage derived from the PL field. # - Gender # - Population stratification (population structure) with 10 PCs for genetic ancestry. gwas = hl.linear_regression_rows( y = mt.pheno.CaffeineConsumption, x = mt.GT.n_alt_alleles(), covariates = [1, mt.pheno.isFemale, mt.scores[0], mt.scores[1], mt.scores[2], mt.scores[3], mt.scores[4], mt.scores[5], mt.scores[6], mt.scores[7], mt.scores[8], mt.scores[9]]) # Idenitify your top hits: gwas_ordered = gwas.order_by(gwas.p_value) gwas_ordered.show(10) # ### 4. Visualization # Quantile-quantile plot: # # Observed against expected p-value to assess inflation. A successful correction for population stratification should bring the observed p-values closer to expected p-values, visualized as a diagonal line. qqplot = hl.plot.qq(gwas.p_value) show(qqplot) # Manhattan Plot manh = hl.plot.manhattan(gwas.p_value, title = "Manhattan Plot", size = 4) show(manh) # ### 5. Multiple testing correction (Bonferroni) # Calculate the Bonferroni corrected P-value cut off. signlevel = 0.05 N = mt.count_rows() Bonferroni_line = -np.log10(signlevel / N) line = Span(location = Bonferroni_line, dimension = "width", line_color = "red", line_width = 1) manh.renderers.extend([line]) show(manh) # --- # # **Module 3** # # ## Variant discovery # The `aggregate` method can be used to aggregate over rows of the table. # `counter` is an aggregation function that counts the number of occurrences of each unique element. pprint(mt.aggregate_cols(hl.agg.counter(mt.pheno.SuperPopulation))) mt.aggregate_cols(hl.agg.count_where(hl.is_missing(mt.pheno))) # `stats` is an aggregation function that produces some useful statistics about numeric collections. 
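# For example, a one-line summary of the caffeine-consumption phenotype across all samples (a sketch that assumes the phenotype annotations added in Module 2 are still attached to `mt`):

mt.aggregate_cols(hl.agg.stats(mt.pheno.CaffeineConsumption))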
# Extract the entries table
entries = mt.entries()

# Group by super population and chromosome, then count the heterozygous genotype calls
results = (entries.group_by(pop = entries.pheno.SuperPopulation, chromosome = entries.locus.contig)
      .aggregate(n_het = hl.agg.count_where(entries.GT.is_het())))

results.show(40)

# ### Rare variants
# Compute the minor allele frequency and generate an annotation column labelling rare, low-frequency and common variants

entries = entries.annotate(maf = hl.cond(entries.info.AF[0]<0.01, "<1%",
                        hl.cond(entries.info.AF[0]<0.05, "1%-5%", ">5%")))

# Group by minor allele frequency bin and hair color
results2 = (entries.group_by(af_bin = entries.maf, purple_hair = entries.pheno.PurpleHair)
      .aggregate(mean_gq = hl.agg.stats(entries.GQ).mean,
                 mean_dp = hl.agg.stats(entries.DP).mean))

results2.show()

# Keep only the rare variants
rare_vars = entries.filter(entries.maf=="<1%")

rare_vars.count()

# Why does this aggregation work...
rare_vars.aggregate(hl.agg.stats(rare_vars.DP))

# ...while this one raises an error? (hl.agg.stats expects a numeric expression, but `s` is the string sample ID)
rare_vars.aggregate(hl.agg.stats(rare_vars.s))

# For a string field, a counter aggregation is the appropriate tool
rare_count_per_sample = rare_vars.aggregate((hl.agg.counter(rare_vars.s)))
rare_count_per_sample

count_per_sample = entries.aggregate((hl.agg.counter(entries.s)))
print(type(count_per_sample))
print(str(len(count_per_sample)) + " samples")
count_per_sample
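# As an optional follow-up (a sketch; `pd` was imported at the top of this notebook), the per-sample counters can be loaded into a pandas Series for easier sorting and inspection:

pd.Series(count_per_sample).sort_values(ascending=False).head()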
notebook/GWAS_tutorial_with_HMS_additions.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Continuous Control # # --- # # Congratulations for completing the second project of the [Deep Reinforcement Learning Nanodegree](https://www.udacity.com/course/deep-reinforcement-learning-nanodegree--nd893) program! In this notebook, you will learn how to control an agent in a more challenging environment, where the goal is to train a creature with four arms to walk forward. **Note that this exercise is optional!** # # ### 1. Start the Environment # # We begin by importing the necessary packages. If the code cell below returns an error, please revisit the project instructions to double-check that you have installed [Unity ML-Agents](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Installation.md) and [NumPy](http://www.numpy.org/). import torch device = torch.cuda.set_device(0) print(device) from unityagents import UnityEnvironment import numpy as np # Next, we will start the environment! **_Before running the code cell below_**, change the `file_name` parameter to match the location of the Unity environment that you downloaded. # # - **Mac**: `"path/to/Crawler.app"` # - **Windows** (x86): `"path/to/Crawler_Windows_x86/Crawler.exe"` # - **Windows** (x86_64): `"path/to/Crawler_Windows_x86_64/Crawler.exe"` # - **Linux** (x86): `"path/to/Crawler_Linux/Crawler.x86"` # - **Linux** (x86_64): `"path/to/Crawler_Linux/Crawler.x86_64"` # - **Linux** (x86, headless): `"path/to/Crawler_Linux_NoVis/Crawler.x86"` # - **Linux** (x86_64, headless): `"path/to/Crawler_Linux_NoVis/Crawler.x86_64"` # # For instance, if you are using a Mac, then you downloaded `Crawler.app`. If this file is in the same folder as the notebook, then the line below should appear as follows: # ``` # env = UnityEnvironment(file_name="Crawler.app") # ``` env = UnityEnvironment(file_name='../../crawler/Crawler.app') # Environments contain **_brains_** which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python. # get the default brain brain_name = env.brain_names[0] brain = env.brains[brain_name] # ### 2. Examine the State and Action Spaces # # Run the code cell below to print some information about the environment. # + # reset the environment env_info = env.reset(train_mode=True)[brain_name] # number of agents num_agents = len(env_info.agents) print('Number of agents:', num_agents) # size of each action action_size = brain.vector_action_space_size print('Size of each action:', action_size) # examine the state space states = env_info.vector_observations state_size = states.shape[1] print('There are {} agents. Each observes a state with length: {}'.format(states.shape[0], state_size)) print('The state for the first agent looks like:', states[0]) # - # ### 3. Take Random Actions in the Environment # # In the next code cell, you will learn how to use the Python API to control the agent and receive feedback from the environment. # # Once this cell is executed, you will watch the agent's performance, if it selects an action at random with each time step. A window should pop up that allows you to observe the agent, as it moves through the environment. 
# # Of course, as part of the project, you'll have to change the code so that the agent is able to use its experience to gradually choose better actions when interacting with the environment! env_info = env.reset(train_mode=False)[brain_name] # reset the environment states = env_info.vector_observations # get the current state (for each agent) scores = np.zeros(num_agents) # initialize the score (for each agent) while True: actions = np.random.randn(num_agents, action_size) # select an action (for each agent) actions = np.clip(actions, -1, 1) # all actions between -1 and 1 env_info = env.step(actions)[brain_name] # send all actions to tne environment next_states = env_info.vector_observations # get next state (for each agent) rewards = env_info.rewards # get reward (for each agent) dones = env_info.local_done # see if episode finished scores += env_info.rewards # update the score (for each agent) states = next_states # roll over states to next time step if np.any(dones): # exit loop if episode finished break print('Total score (averaged over agents) this episode: {}'.format(np.mean(scores))) # When finished, you can close the environment. env.close() # ### 4. It's Your Turn! # # Now it's your turn to train your own agent to solve the environment! When training the environment, set `train_mode=True`, so that the line for resetting the environment looks like the following: # ```python # env_info = env.reset(train_mode=True)[brain_name] # ```
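# One possible starting point is sketched below. It assumes a hypothetical `agent` object that exposes `act(states)` and `step(states, actions, rewards, next_states, dones)` methods (for example an actor-critic agent you implement yourself); only the environment calls already shown in this notebook are used.

# +
def run_episode(env, brain_name, agent, num_agents, train_mode=True):
    """Roll out one episode and let the (hypothetical) agent learn from every transition."""
    env_info = env.reset(train_mode=train_mode)[brain_name]      # reset the environment
    states = env_info.vector_observations                        # initial states (for each agent)
    scores = np.zeros(num_agents)                                 # scores (for each agent)
    while True:
        actions = agent.act(states)                               # query the policy (hypothetical API)
        env_info = env.step(actions)[brain_name]                  # send the actions to the environment
        next_states = env_info.vector_observations                # next states (for each agent)
        rewards = env_info.rewards                                # rewards (for each agent)
        dones = env_info.local_done                               # episode-finished flags
        agent.step(states, actions, rewards, next_states, dones)  # learning update (hypothetical API)
        scores += rewards                                         # accumulate the scores
        states = next_states                                      # roll over to the next time step
        if np.any(dones):                                         # stop when the episode finishes
            break
    return np.mean(scores)
# -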
Exercise_II_Continuous_Control/.ipynb_checkpoints/Crawler-checkpoint.ipynb
# --- # jupyter: # jupytext: # formats: ipynb,md:myst # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="kORMl5KmfByI" # # Advanced Auomatic Differentiation in JAX # # [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/google/jax/blob/master/docs/jax-101/04-advanced-autodiff.ipynb) # # *Authors: <NAME> & <NAME>* # # Computing gradients is a critical part of modern machine learning methods. This section considers a few advanced topics in the areas of automatic differentiation as it relates to modern machine learning. # # While understanding how automatic differentiation works under the hood isn't crucial for using JAX in most contexts, we encourage the reader to check out this quite accessible [video](https://www.youtube.com/watch?v=wG_nF1awSSY) to get a deeper sense of what's going on. # # [The Autodiff Cookbook](https://jax.readthedocs.io/en/latest/notebooks/autodiff_cookbook.html) is a more advanced and more detailed explanation of how these ideas are implemented in the JAX backend. It's not necessary to understand this to do most things in JAX. However, some features (like defining [custom derivatives](https://jax.readthedocs.io/en/latest/notebooks/Custom_derivative_rules_for_Python_code.html)) depend on understanding this, so it's worth knowing this explanation exists if you ever need to use them. # + [markdown] id="bYLCEa0Jt-n3" # ## Imports # + id="-JJ8sMDcelto" import jax import jax.numpy as jnp # + [markdown] id="qx50CO1IorCc" # ## Higher-order derivatives # # JAX's autodiff makes it easy to compute higher-order derivatives, because the functions that compute derivatives are themselves differentiable. Thus, higher-order derivatives are as easy as stacking transformations. # # We illustrate this in the single-variable case: # # The derivative of $f(x) = x^3 + 2x^2 - 3x + 1$ can be computed as: # + id="Kqsbj98UTVdi" f = lambda x: x**3 + 2*x**2 - 3*x + 1 dfdx = jax.grad(f) # + [markdown] id="ItEt15OGiiAF" # The higher-order derivatives of $f$ are: # # $$ # \begin{array}{l} # f'(x) = 3x^2 + 4x -3\\ # f''(x) = 6x + 4\\ # f'''(x) = 6\\ # f^{iv}(x) = 0 # \end{array} # $$ # # Computing any of these in JAX is as easy as chaining the `grad` function: # + id="5X3yQqLgimqH" d2fdx = jax.grad(dfdx) d3fdx = jax.grad(d2fdx) d4fdx = jax.grad(d3fdx) # + [markdown] id="fVL2P_pcj8T1" # Evaluating the above in $x=1$ would give us: # # $$ # \begin{array}{l} # f'(1) = 4\\ # f''(1) = 10\\ # f'''(1) = 6\\ # f^{iv}(1) = 0 # \end{array} # $$ # # Using JAX: # + id="tJkIp9wFjxL3" outputId="581ecf87-2d20-4c83-9443-5befc1baf51d" print(dfdx(1.)) print(d2fdx(1.)) print(d3fdx(1.)) print(d4fdx(1.)) # + [markdown] id="3-fTelU7LHRr" # In the multivariable case, higher-order derivatives are more complicated. The second-order derivative of a function is represented by its [Hessian matrix](https://en.wikipedia.org/wiki/Hessian_matrix), defined according to # # $$(\mathbf{H}f)_{i,j} = \frac{\partial^2 f}{\partial_i\partial_j}.$$ # # The Hessian of a real-valued function of several variables, $f: \mathbb R^n\to\mathbb R$, can be identified with the Jacobian of its gradient. JAX provides two transformations for computing the Jacobian of a function, `jax.jacfwd` and `jax.jacrev`, corresponding to forward- and reverse-mode autodiff. 
They give the same answer, but one can be more efficient than the other in different circumstances – see the [video about autodiff](https://www.youtube.com/watch?v=wG_nF1awSSY) linked above for an explanation. # + id="ILhkef1rOB6_" def hessian(f): return jax.jacfwd(jax.grad(f)) # + [markdown] id="xaENwADXOGf_" # Let's double check this is correct on the dot-product $f: \mathbf{x} \mapsto \mathbf{x} ^\top \mathbf{x}$. # # if $i=j$, $\frac{\partial^2 f}{\partial_i\partial_j}(\mathbf{x}) = 2$. Otherwise, $\frac{\partial^2 f}{\partial_i\partial_j}(\mathbf{x}) = 0$. # + id="Xm3A0QdWRdJl" outputId="e1e8cba9-b567-439b-b8fc-34b21497e67f" def f(x): return jnp.dot(x, x) hessian(f)(jnp.array([1., 2., 3.])) # + [markdown] id="7_gbi34WSUsD" # Often, however, we aren't interested in computing the full Hessian itself, and doing so can be very inefficient. [The Autodiff Cookbook](https://jax.readthedocs.io/en/latest/notebooks/autodiff_cookbook.html) explains some tricks, like the Hessian-vector product, that allow to use it without materialising the whole matrix. # # If you plan to work with higher-order derivatives in JAX, we strongly recommend reading the Autodiff Cookbook. # + [markdown] id="zMT2qAi-SvcK" # ## Higher order optimization # # Some meta-learning techniques, such as Model-Agnostic Meta-Learning ([MAML](https://arxiv.org/abs/1703.03400)), require differentiating through gradient updates. In other frameworks this can be quite cumbersome, but in JAX it's much easier: # # ```python # def meta_loss_fn(params, data): # """Computes the loss after one step of SGD.""" # grads = jax.grad(loss_fn)(params, data) # return loss_fn(params - lr * grads, data) # # meta_grads = jax.grad(meta_loss_fn)(params, data) # ``` # + [markdown] id="3h9Aj3YyuL6P" # ## Stopping gradients # # Auto-diff enables automatic computation of the gradient of a function with respect to its inputs. Sometimes, however, we might want some additional control: for instance, we might want to avoid back-propagating gradients through some subset of the computational graph. # # Consider for instance the TD(0) ([temporal difference](https://en.wikipedia.org/wiki/Temporal_difference_learning)) reinforcement learning update. This is used to learn to estimate the *value* of a state in an environment from experience of interacting with the environment. Let's assume the value estimate $v_{\theta}(s_{t-1}$) in a state $s_{t-1}$ is parameterised by a linear function. # + id="fjLqbCb6SiOm" # Value function and initial parameters value_fn = lambda theta, state: jnp.dot(theta, state) theta = jnp.array([0.1, -0.1, 0.]) # + [markdown] id="85S7HBo1tBzt" # Consider a transition from a state $s_{t-1}$ to a state $s_t$ during which we observed the reward $r_t$ # + id="T6cRPau6tCSE" # An example transition. s_tm1 = jnp.array([1., 2., -1.]) r_t = jnp.array(1.) s_t = jnp.array([2., 1., 0.]) # + [markdown] id="QO5CHA9_Sk01" # The TD(0) update to the network parameters is: # # $$ # \Delta \theta = (r_t + v_{\theta}(s_t) - v_{\theta}(s_{t-1})) \nabla v_{\theta}(s_{t-1}) # $$ # # This update is not the gradient of any loss function. # # However it can be **written** as the gradient of the pseudo loss function # # $$ # L(\theta) = [r_t + v_{\theta}(s_t) - v_{\theta}(s_{t-1})]^2 # $$ # # if the dependency of the target $r_t + v_{\theta}(s_t)$ on the parameter $\theta$ is ignored. # # How can we implement this in JAX? 
If we write the pseudo loss naively we get: # + id="uMcFny2xuOwz" outputId="79c10af9-10b8-4e18-9753-a53918b9d72d" def td_loss(theta, s_tm1, r_t, s_t): v_tm1 = value_fn(theta, s_tm1) target = r_t + value_fn(theta, s_t) return (target - v_tm1) ** 2 td_update = jax.grad(td_loss) delta_theta = td_update(theta, s_tm1, r_t, s_t) delta_theta # + [markdown] id="CPnjm59GG4Gq" # But `td_update` will **not** compute a TD(0) update, because the gradient computation will include the dependency of `target` on $\theta$. # # We can use `jax.lax.stop_gradient` to force JAX to ignore the dependency of the target on $\theta$: # + id="WCeq7trKPS4V" outputId="0f38d754-a871-4c47-8e3a-a961418a24cc" def td_loss(theta, s_tm1, r_t, s_t): v_tm1 = value_fn(theta, s_tm1) target = r_t + value_fn(theta, s_t) return (jax.lax.stop_gradient(target) - v_tm1) ** 2 td_update = jax.grad(td_loss) delta_theta = td_update(theta, s_tm1, r_t, s_t) delta_theta # + [markdown] id="TNF0CkwOTKpD" # This will treat `target` as if it did **not** depend on the parameters $\theta$ and compute the correct update to the parameters. # # The `jax.lax.stop_gradient` may also be useful in other settings, for instance if you want the gradient from some loss to only affect a subset of the parameters of the neural network (because, for instance, the other parameters are trained using a different loss). # # ## Straight-through estimator using `stop_gradient` # # The straight-through estimator is a trick for defining a 'gradient' of a function that is otherwise non-differentiable. Given a non-differentiable function $f : \mathbb{R}^n \to \mathbb{R}^n$ that is used as part of a larger function that we wish to find a gradient of, we simply pretend during the backward pass that $f$ is the identity function. This can be implemented neatly using `jax.lax.stop_gradient`: # + id="hdORJENmVHvX" outputId="f0839541-46a4-45a9-fce7-ead08f20046b" def f(x): return jnp.round(x) # non-differentiable def straight_through_f(x): return x + jax.lax.stop_gradient(f(x) - x) print("f(x): ", f(3.2)) print("straight_through_f(x):", straight_through_f(3.2)) print("grad(f)(x):", jax.grad(f)(3.2)) print("grad(straight_through_f)(x):", jax.grad(straight_through_f)(3.2)) # + [markdown] id="Wx3RNE0Sw5mn" # ## Per-example gradients # # While most ML systems compute gradients and updates from batches of data, for reasons of computational efficiency and/or variance reduction, it is sometimes necessary to have access to the gradient/update associated with each specific sample in the batch. # # For instance, this is needed to prioritise data based on gradient magnitude, or to apply clipping / normalisations on a sample by sample basis. # # In many frameworks (PyTorch, TF, Theano) it is often not trivial to compute per-example gradients, because the library directly accumulates the gradient over the batch. Naive workarounds, such as computing a separate loss per example and then aggregating the resulting gradients are typically very inefficient. # # In JAX we can define the code to compute the gradient per-sample in an easy but efficient way. 
# # Just combine the `jit`, `vmap` and `grad` transformations together: # + id="tFLyd9ifw4GG" outputId="bf3ad4a3-102d-47a6-ece0-f4a8c9e5d434" perex_grads = jax.jit(jax.vmap(jax.grad(td_loss), in_axes=(None, 0, 0, 0))) # Test it: batched_s_tm1 = jnp.stack([s_tm1, s_tm1]) batched_r_t = jnp.stack([r_t, r_t]) batched_s_t = jnp.stack([s_t, s_t]) perex_grads(theta, batched_s_tm1, batched_r_t, batched_s_t) # + [markdown] id="VxvYVEYQYiS_" # Let's walk through this one transformation at a time. # # First, we apply `jax.grad` to `td_loss` to obtain a function that computes the gradient of the loss w.r.t. the parameters on single (unbatched) inputs: # + id="rPO67QQrY5Bk" outputId="fbb45b98-2dbf-4865-e6e5-87dc3eef5560" dtdloss_dtheta = jax.grad(td_loss) dtdloss_dtheta(theta, s_tm1, r_t, s_t) # + [markdown] id="cU36nVAlcnJ0" # This function computes one row of the array above. # + [markdown] id="c6DQF0b3ZA5u" # Then, we vectorise this function using `jax.vmap`. This adds a batch dimension to all inputs and outputs. Now, given a batch of inputs, we produce a batch of outputs -- each output in the batch corresponds to the gradient for the corresponding member of the input batch. # + id="5agbNKavaNDM" outputId="ab081012-88ab-4904-a367-68e9f81445f0" almost_perex_grads = jax.vmap(dtdloss_dtheta) batched_theta = jnp.stack([theta, theta]) almost_perex_grads(batched_theta, batched_s_tm1, batched_r_t, batched_s_t) # + [markdown] id="K-v34yLuan7k" # This isn't quite what we want, because we have to manually feed this function a batch of `theta`s, whereas we actually want to use a single `theta`. We fix this by adding `in_axes` to the `jax.vmap`, specifying theta as `None`, and the other args as `0`. This makes the resulting function add an extra axis only to the other arguments, leaving `theta` unbatched, as we want: # + id="S6kd5MujbGrr" outputId="d3d731ef-3f7d-4a0a-ce91-7df57627ddbd" inefficient_perex_grads = jax.vmap(dtdloss_dtheta, in_axes=(None, 0, 0, 0)) inefficient_perex_grads(theta, batched_s_tm1, batched_r_t, batched_s_t) # + [markdown] id="O0hbsm70be5T" # Almost there! This does what we want, but is slower than it has to be. Now, we wrap the whole thing in a `jax.jit` to get the compiled, efficient version of the same function: # + id="Fvr709FcbrSW" outputId="627db899-5620-4bed-8d34-cd1364d3d187" perex_grads = jax.jit(inefficient_perex_grads) perex_grads(theta, batched_s_tm1, batched_r_t, batched_s_t) # + id="FH42yzbHcNs2" outputId="c8e52f93-615a-4ce7-d8ab-fb6215995a39" # %timeit inefficient_perex_grads(theta, batched_s_tm1, batched_r_t, batched_s_t).block_until_ready() # %timeit perex_grads(theta, batched_s_tm1, batched_r_t, batched_s_t).block_until_ready()
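# + [markdown]
# As a final, optional sanity check (a sketch), the vectorised per-example gradients should match a plain Python loop that applies the unbatched gradient function to each example separately:

# +
manual_grads = jnp.stack([dtdloss_dtheta(theta, s, r, s_next)
                          for s, r, s_next in zip(batched_s_tm1, batched_r_t, batched_s_t)])
print(jnp.allclose(manual_grads, perex_grads(theta, batched_s_tm1, batched_r_t, batched_s_t)))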
docs/jax-101/04-advanced-autodiff.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Project: Part of Speech Tagging with Hidden Markov Models # --- # ### Introduction # # Part of speech tagging is the process of determining the syntactic category of a word from the words in its surrounding context. It is often used to help disambiguate natural language phrases because it can be done quickly with high accuracy. Tagging can be used for many NLP tasks like determining correct pronunciation during speech synthesis (for example, _dis_-count as a noun vs dis-_count_ as a verb), for information retrieval, and for word sense disambiguation. # # In this notebook, you'll use the [Pomegranate](http://pomegranate.readthedocs.io/) library to build a hidden Markov model for part of speech tagging using a "universal" tagset. Hidden Markov models have been able to achieve [>96% tag accuracy with larger tagsets on realistic text corpora](http://www.coli.uni-saarland.de/~thorsten/publications/Brants-ANLP00.pdf). Hidden Markov models have also been used for speech recognition and speech generation, machine translation, gene recognition for bioinformatics, and human gesture recognition for computer vision, and more. # # ![](_post-hmm.png) # # The notebook already contains some code to get you started. You only need to add some new functionality in the areas indicated to complete the project; you will not need to modify the included code beyond what is requested. Sections that begin with **'IMPLEMENTATION'** in the header indicate that you must provide code in the block that follows. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully! # <div class="alert alert-block alert-info"> # **Note:** Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You must then **export the notebook** by running the last cell in the notebook, or by using the menu above and navigating to **File -> Download as -> HTML (.html)** Your submissions should include both the `html` and `ipynb` files. # </div> # <div class="alert alert-block alert-info"> # **Note:** Code and Markdown cells can be executed using the `Shift + Enter` keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode. # </div> # ### The Road Ahead # You must complete Steps 1-3 below to pass the project. The section on Step 4 includes references & resources you can use to further explore HMM taggers. 
# # - [Step 1](#Step-1:-Read-and-preprocess-the-dataset): Review the provided interface to load and access the text corpus # - [Step 2](#Step-2:-Build-a-Most-Frequent-Class-tagger): Build a Most Frequent Class tagger to use as a baseline # - [Step 3](#Step-3:-Build-an-HMM-tagger): Build an HMM Part of Speech tagger and compare to the MFC baseline # - [Step 4](#Step-4:-[Optional]-Improving-model-performance): (Optional) Improve the HMM tagger # <div class="alert alert-block alert-warning"> # **Note:** Make sure you have selected a **Python 3** kernel in Workspaces or the hmm-tagger conda environment if you are running the Jupyter server on your own machine. # </div> # Jupyter "magic methods" -- only need to be run once per kernel restart # %load_ext autoreload # %aimport helpers, tests # %autoreload 1 # + # import python modules -- this cell needs to be run again if you make changes to any of the files import matplotlib.pyplot as plt import numpy as np from IPython.core.display import HTML from itertools import chain from collections import Counter, defaultdict from helpers import show_model, Dataset from pomegranate import State, HiddenMarkovModel, DiscreteDistribution # - # ## Step 1: Read and preprocess the dataset # --- # We'll start by reading in a text corpus and splitting it into a training and testing dataset. The data set is a copy of the [Brown corpus](https://en.wikipedia.org/wiki/Brown_Corpus) (originally from the [NLTK](https://www.nltk.org/) library) that has already been pre-processed to only include the [universal tagset](https://arxiv.org/pdf/1104.2086.pdf). You should expect to get slightly higher accuracy using this simplified tagset than the same model would achieve on a larger tagset like the full [Penn treebank tagset](https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html), but the process you'll follow would be the same. # # The `Dataset` class provided in helpers.py will read and parse the corpus. You can generate your own datasets compatible with the reader by writing them to the following format. The dataset is stored in plaintext as a collection of words and corresponding tags. Each sentence starts with a unique identifier on the first line, followed by one tab-separated word/tag pair on each following line. Sentences are separated by a single blank line. # # Example from the Brown corpus. # ``` # b100-38532 # Perhaps ADV # it PRON # was VERB # right ADJ # ; . # ; . # # b100-35577 # ... # ``` # + data = Dataset("tags-universal.txt", "brown-universal.txt", train_test_split=0.8) print("There are {} sentences in the corpus.".format(len(data))) print("There are {} sentences in the training set.".format(len(data.training_set))) print("There are {} sentences in the testing set.".format(len(data.testing_set))) assert len(data) == len(data.training_set) + len(data.testing_set), \ "The number of sentences in the training set + testing set should sum to the number of sentences in the corpus" # - # ### The Dataset Interface # # You can access (mostly) immutable references to the dataset through a simple interface provided through the `Dataset` class, which represents an iterable collection of sentences along with easy access to partitions of the data for training & testing. Review the reference below, then run and review the next few cells to make sure you understand the interface before moving on to the next step. 
# # ``` # Dataset-only Attributes: # training_set - reference to a Subset object containing the samples for training # testing_set - reference to a Subset object containing the samples for testing # # Dataset & Subset Attributes: # sentences - a dictionary with an entry {sentence_key: Sentence()} for each sentence in the corpus # keys - an immutable ordered (not sorted) collection of the sentence_keys for the corpus # vocab - an immutable collection of the unique words in the corpus # tagset - an immutable collection of the unique tags in the corpus # X - returns an array of words grouped by sentences ((w11, w12, w13, ...), (w21, w22, w23, ...), ...) # Y - returns an array of tags grouped by sentences ((t11, t12, t13, ...), (t21, t22, t23, ...), ...) # N - returns the number of distinct samples (individual words or tags) in the dataset # # Methods: # stream() - returns an flat iterable over all (word, tag) pairs across all sentences in the corpus # __iter__() - returns an iterable over the data as (sentence_key, Sentence()) pairs # __len__() - returns the nubmer of sentences in the dataset # ``` # # For example, consider a Subset, `subset`, of the sentences `{"s0": Sentence(("See", "Spot", "run"), ("VERB", "NOUN", "VERB")), "s1": Sentence(("Spot", "ran"), ("NOUN", "VERB"))}`. The subset will have these attributes: # # ``` # subset.keys == {"s1", "s0"} # unordered # subset.vocab == {"See", "run", "ran", "Spot"} # unordered # subset.tagset == {"VERB", "NOUN"} # unordered # subset.X == (("Spot", "ran"), ("See", "Spot", "run")) # order matches .keys # subset.Y == (("NOUN", "VERB"), ("VERB", "NOUN", "VERB")) # order matches .keys # subset.N == 7 # there are a total of seven observations over all sentences # len(subset) == 2 # because there are two sentences # ``` # # <div class="alert alert-block alert-info"> # **Note:** The `Dataset` class is _convenient_, but it is **not** efficient. It is not suitable for huge datasets because it stores multiple redundant copies of the same data. # </div> # #### Sentences # # `Dataset.sentences` is a dictionary of all sentences in the training corpus, each keyed to a unique sentence identifier. Each `Sentence` is itself an object with two attributes: a tuple of the words in the sentence named `words` and a tuple of the tag corresponding to each word named `tags`. key = 'b100-38532' print("Sentence: {}".format(key)) print("words:\n\t{!s}".format(data.sentences[key].words)) print("tags:\n\t{!s}".format(data.sentences[key].tags)) # <div class="alert alert-block alert-info"> # **Note:** The underlying iterable sequence is **unordered** over the sentences in the corpus; it is not guaranteed to return the sentences in a consistent order between calls. Use `Dataset.stream()`, `Dataset.keys`, `Dataset.X`, or `Dataset.Y` attributes if you need ordered access to the data. # </div> # # #### Counting Unique Elements # # You can access the list of unique words (the dataset vocabulary) via `Dataset.vocab` and the unique list of tags via `Dataset.tagset`. # + print("There are a total of {} samples of {} unique words in the corpus." .format(data.N, len(data.vocab))) print("There are {} samples of {} unique words in the training set." .format(data.training_set.N, len(data.training_set.vocab))) print("There are {} samples of {} unique words in the testing set." .format(data.testing_set.N, len(data.testing_set.vocab))) print("There are {} words in the test set that are missing in the training set." 
.format(len(data.testing_set.vocab - data.training_set.vocab))) assert data.N == data.training_set.N + data.testing_set.N, \ "The number of training + test samples should sum to the total number of samples" # - # #### Accessing word and tag Sequences # The `Dataset.X` and `Dataset.Y` attributes provide access to ordered collections of matching word and tag sequences for each sentence in the dataset. # accessing words with Dataset.X and tags with Dataset.Y for i in range(2): print("Sentence {}:".format(i + 1), data.X[i]) print() print("Labels {}:".format(i + 1), data.Y[i]) print() # #### Accessing (word, tag) Samples # The `Dataset.stream()` method returns an iterator that chains together every pair of (word, tag) entries across all sentences in the entire corpus. # use Dataset.stream() (word, tag) samples for the entire corpus print("\nStream (word, tag) pairs:\n") for i, pair in enumerate(data.stream()): print("\t", pair) if i > 5: break # # For both our baseline tagger and the HMM model we'll build, we need to estimate the frequency of tags & words from the frequency counts of observations in the training corpus. In the next several cells you will complete functions to compute the counts of several sets of counts. # ## Step 2: Build a Most Frequent Class tagger # --- # # Perhaps the simplest tagger (and a good baseline for tagger performance) is to simply choose the tag most frequently assigned to each word. This "most frequent class" tagger inspects each observed word in the sequence and assigns it the label that was most often assigned to that word in the corpus. # ### IMPLEMENTATION: Pair Counts # # Complete the function below that computes the joint frequency counts for two input sequences. # + def pair_counts(sequence_A, sequence_B): """Return a dictionary keyed to each unique value in the first sequence list that counts the number of occurrences of the corresponding value from the second sequences list. For example, if sequences_A is tags and sequences_B is the corresponding words, then if 1244 sequences contain the word "time" tagged as a NOUN, then you should return a dictionary such that pair_counts[NOUN][time] == 1244 """ # TODO: Finish this function! # tag_dic = {} # count_seq = len(sequence_A) # print(count_seq) # for i in range(count_seq): # for j in range(len(sequence_A[i])): # if sequence_A[i][j] in tag_dic.keys(): # if sequence_B[i][j] in tag_dic[sequence_A[i][j]].keys(): # tag_dic[sequence_A[i][j]][sequence_B[i][j]] = tag_dic[sequence_A[i][j]][sequence_B[i][j]] + 1 # else: # tag_dic[sequence_A[i][j]][sequence_B[i][j]] = 0 # else: # tag_dic[sequence_A[i][j]] = {sequence_B[i][j]:0} # return tag_dic d = defaultdict(Counter) # Loop all over the sentences for i in range(len(sequence_A)): # Loop all over the 2 sequences for a, b in zip(sequence_A[i],sequence_B[i]): # Insert the tag and word pair if needed + increment their counter d[a][b] += 1 return d raise NotImplementedError # Calculate C(t_i, w_i) emission_counts = pair_counts(data.Y, data.X) assert len(emission_counts) == 12, \ "Uh oh. There should be 12 tags in your dictionary." assert max(emission_counts["NOUN"], key=emission_counts["NOUN"].get) == 'time', \ "Hmmm...'time' is expected to be the most common NOUN." 
HTML('<div class="alert alert-block alert-success">Your emission counts look good!</div>') # - test_counts = pair_counts(data.X, data.Y) test_counts emission_counts data.Y # ### IMPLEMENTATION: Most Frequent Class Tagger # # Use the `pair_counts()` function and the training dataset to find the most frequent class label for each word in the training data, and populate the `mfc_table` below. The table keys should be words, and the values should be the appropriate tag string. # # The `MFCTagger` class is provided to mock the interface of Pomegranite HMM models so that they can be used interchangeably. # + # Create a lookup table mfc_table where mfc_table[word] contains the tag label most frequently assigned to that word from collections import namedtuple FakeState = namedtuple("FakeState", "name") class MFCTagger: # NOTE: You should not need to modify this class or any of its methods missing = FakeState(name="<MISSING>") def __init__(self, table): self.table = defaultdict(lambda: MFCTagger.missing) self.table.update({word: FakeState(name=tag) for word, tag in table.items()}) def viterbi(self, seq): """This method simplifies predictions by matching the Pomegranate viterbi() interface""" return 0., list(enumerate(["<start>"] + [self.table[w] for w in seq] + ["<end>"])) #def getKeys(dict): # return "".join([*dict]) # TODO: calculate the frequency of each tag being assigned to each word (hint: similar, but not # the same as the emission probabilities) and use it to fill the mfc_table word_counts = pair_counts(data.training_set.X, data.training_set.Y) # def get_word_counts(): # word_counts = {} # for k in data.training_set.vocab mfc_table = {k:list(word_counts[k].keys())[0] for k in word_counts.keys()} #mfc_table = {k:getKeys(word_counts[k].keys()) for k in word_counts.keys()} # mfc_table = defaultdict() # # Loop over the words # for word, tags in word_counts.items(): # # Select the corresponding tag with highest count value # tag, _ = max(tags.items(), key=lambda item: item[1]) # mfc_table[word]=tag # DO NOT MODIFY BELOW THIS LINE mfc_model = MFCTagger(mfc_table) # Create a Most Frequent Class tagger instance assert len(mfc_table) == len(data.training_set.vocab), "" assert all(k in data.training_set.vocab for k in mfc_table.keys()), "" assert sum(int(k not in mfc_table) for k in data.testing_set.vocab) == 5521, "" HTML('<div class="alert alert-block alert-success">Your MFC tagger has all the correct words!</div>') # - # ### Making Predictions with a Model # The helper functions provided below interface with Pomegranate network models & the mocked MFCTagger to take advantage of the [missing value](http://pomegranate.readthedocs.io/en/latest/nan.html) functionality in Pomegranate through a simple sequence decoding function. Run these functions, then run the next cell to see some of the predictions made by the MFC tagger. # + def replace_unknown(sequence): """Return a copy of the input sequence where each unknown word is replaced by the literal string value 'nan'. Pomegranate will ignore these values during computation. 
""" return [w if w in data.training_set.vocab else 'nan' for w in sequence] def simplify_decoding(X, model): """X should be a 1-D sequence of observations for the model to predict""" _, state_path = model.viterbi(replace_unknown(X)) return [state[1].name for state in state_path[1:-1]] # do not show the start/end state predictions # - # ### Example Decoding Sequences with MFC Tagger for key in data.testing_set.keys[:3]: print("Sentence Key: {}\n".format(key)) print("Predicted labels:\n-----------------") print(simplify_decoding(data.sentences[key].words, mfc_model)) print() print("Actual labels:\n--------------") print(data.sentences[key].tags) print("\n") # ### Evaluating Model Accuracy # # The function below will evaluate the accuracy of the MFC tagger on the collection of all sentences from a text corpus. def accuracy(X, Y, model): """Calculate the prediction accuracy by using the model to decode each sequence in the input X and comparing the prediction with the true labels in Y. The X should be an array whose first dimension is the number of sentences to test, and each element of the array should be an iterable of the words in the sequence. The arrays X and Y should have the exact same shape. X = [("See", "Spot", "run"), ("Run", "Spot", "run", "fast"), ...] Y = [(), (), ...] """ correct = total_predictions = 0 for observations, actual_tags in zip(X, Y): # The model.viterbi call in simplify_decoding will return None if the HMM # raises an error (for example, if a test sentence contains a word that # is out of vocabulary for the training set). Any exception counts the # full sentence as an error (which makes this a conservative estimate). try: most_likely_tags = simplify_decoding(observations, model) correct += sum(p == t for p, t in zip(most_likely_tags, actual_tags)) except: pass total_predictions += len(observations) return correct / total_predictions # #### Evaluate the accuracy of the MFC tagger # Run the next cell to evaluate the accuracy of the tagger on the training and test corpus. # + mfc_training_acc = accuracy(data.training_set.X, data.training_set.Y, mfc_model) print("training accuracy mfc_model: {:.2f}%".format(100 * mfc_training_acc)) mfc_testing_acc = accuracy(data.testing_set.X, data.testing_set.Y, mfc_model) print("testing accuracy mfc_model: {:.2f}%".format(100 * mfc_testing_acc)) assert mfc_training_acc >= 0.9, "Uh oh. Your MFC accuracy on the training set doesn't look right." assert mfc_testing_acc >= 0.9, "Uh oh. Your MFC accuracy on the testing set doesn't look right." HTML('<div class="alert alert-block alert-success">Your MFC tagger accuracy looks correct!</div>') # - # ## Step 3: Build an HMM tagger # --- # The HMM tagger has one hidden state for each possible tag, and parameterized by two distributions: the emission probabilties giving the conditional probability of observing a given **word** from each hidden state, and the transition probabilities giving the conditional probability of moving between **tags** during the sequence. # # We will also estimate the starting probability distribution (the probability of each **tag** being the first tag in a sequence), and the terminal probability distribution (the probability of each **tag** being the last tag in a sequence). # # The maximum likelihood estimate of these distributions can be calculated from the frequency counts as described in the following sections where you'll implement functions to count the frequencies, and finally build the model. 
The HMM model will make predictions according to the formula: # # $$t_i^n = \underset{t_i^n}{\mathrm{argmax}} \prod_{i=1}^n P(w_i|t_i) P(t_i|t_{i-1})$$ # # Refer to Speech & Language Processing [Chapter 10](https://web.stanford.edu/~jurafsky/slp3/10.pdf) for more information. # ### IMPLEMENTATION: Unigram Counts # # Complete the function below to estimate the co-occurrence frequency of each symbol over all of the input sequences. The unigram probabilities in our HMM model are estimated from the formula below, where N is the total number of samples in the input. (You only need to compute the counts for now.) # # $$P(tag_1) = \frac{C(tag_1)}{N}$$ # + def unigram_counts(sequences): """ Return a dictionary keyed to each unique value in the input sequence list that counts the number of occurrences of the value in the sequences list. The sequences collection should be a 2-dimensional array. For example, if the tag NOUN appears 275558 times over all the input sequences, then you should return a dictionary such that your_unigram_counts[NOUN] == 275558. """ uni_counts = {} for tag in data.training_set.tagset: uni_counts[tag] = 0 for tag in data.training_set.tagset: for sequence in sequences: for i in sequence: uni_counts[i] = uni_counts[i]+1 return uni_counts # TODO: Finish this function! raise NotImplementedError # TODO: call unigram_counts with a list of tag sequences from the training set tag_unigrams = unigram_counts(data.training_set.Y) assert set(tag_unigrams.keys()) == data.training_set.tagset, \ "Uh oh. It looks like your tag counts doesn't include all the tags!" assert min(tag_unigrams, key=tag_unigrams.get) == 'X', \ "Hmmm...'X' is expected to be the least common class" assert max(tag_unigrams, key=tag_unigrams.get) == 'NOUN', \ "Hmmm...'NOUN' is expected to be the most common class" HTML('<div class="alert alert-block alert-success">Your tag unigrams look good!</div>') # - # ### IMPLEMENTATION: Bigram Counts # # Complete the function below to estimate the co-occurrence frequency of each pair of symbols in each of the input sequences. These counts are used in the HMM model to estimate the bigram probability of two tags from the frequency counts according to the formula: $$P(tag_2|tag_1) = \frac{C(tag_2|tag_1)}{C(tag_2)}$$ # # + def bigram_counts(sequences): """Return a dictionary keyed to each unique PAIR of values in the input sequences list that counts the number of occurrences of pair in the sequences list. The input should be a 2-dimensional array. For example, if the pair of tags (NOUN, VERB) appear 61582 times, then you should return a dictionary such that your_bigram_counts[(NOUN, VERB)] == 61582 """ bigram_counts = {} l = len(sequences) for tag_tuple in (sequences): for i in range (len(tag_tuple)): if i > len(tag_tuple)-2: break key = (tag_tuple[i], tag_tuple[i+1]) if key in bigram_counts.keys(): bigram_counts[key] = bigram_counts[key] + 1 else: bigram_counts[key] = 1 # TODO: Finish this function! return bigram_counts raise NotImplementedError # TODO: call bigram_counts with a list of tag sequences from the training set tag_bigrams = bigram_counts(data.training_set.Y) assert len(tag_bigrams) == 144, \ "Uh oh. There should be 144 pairs of bigrams (12 tags x 12 tags)" assert min(tag_bigrams, key=tag_bigrams.get) in [('X', 'NUM'), ('PRON', 'X')], \ "Hmmm...The least common bigram should be one of ('X', 'NUM') or ('PRON', 'X')." assert max(tag_bigrams, key=tag_bigrams.get) in [('DET', 'NOUN')], \ "Hmmm...('DET', 'NOUN') is expected to be the most common bigram." 
HTML('<div class="alert alert-block alert-success">Your tag bigrams look good!</div>') # - # ### IMPLEMENTATION: Sequence Starting Counts # Complete the code below to estimate the bigram probabilities of a sequence starting with each tag. # + def starting_counts(sequences): """Return a dictionary keyed to each unique value in the input sequences list that counts the number of occurrences where that value is at the beginning of a sequence. For example, if 8093 sequences start with NOUN, then you should return a dictionary such that your_starting_counts[NOUN] == 8093 """ starting_count = {} for tag_tuple in sequences: if tag_tuple[0] in starting_count.keys(): starting_count[tag_tuple[0]] = starting_count[tag_tuple[0]] + 1 else: starting_count[tag_tuple[0]] = 1 # TODO: Finish this function! return starting_count raise NotImplementedError # TODO: Calculate the count of each tag starting a sequence tag_starts = starting_counts(data.training_set.Y) assert len(tag_starts) == 12, "Uh oh. There should be 12 tags in your dictionary." assert min(tag_starts, key=tag_starts.get) == 'X', "Hmmm...'X' is expected to be the least common starting bigram." assert max(tag_starts, key=tag_starts.get) == 'DET', "Hmmm...'DET' is expected to be the most common starting bigram." HTML('<div class="alert alert-block alert-success">Your starting tag counts look good!</div>') # - # ### IMPLEMENTATION: Sequence Ending Counts # Complete the function below to estimate the bigram probabilities of a sequence ending with each tag. # + def ending_counts(sequences): """Return a dictionary keyed to each unique value in the input sequences list that counts the number of occurrences where that value is at the end of a sequence. For example, if 18 sequences end with DET, then you should return a dictionary such that your_starting_counts[DET] == 18 """ ending_count = {} for tag_tuple in sequences: if tag_tuple[-1] in ending_count.keys(): ending_count[tag_tuple[-1]] = ending_count[tag_tuple[-1]] + 1 else: ending_count[tag_tuple[-1]] = 1 # TODO: Finish this function! return ending_count # TODO: Finish this function! raise NotImplementedError # TODO: Calculate the count of each tag ending a sequence tag_ends = ending_counts(data.training_set.Y) assert len(tag_ends) == 12, "Uh oh. There should be 12 tags in your dictionary." assert min(tag_ends, key=tag_ends.get) in ['X', 'CONJ'], "Hmmm...'X' or 'CONJ' should be the least common ending bigram." assert max(tag_ends, key=tag_ends.get) == '.', "Hmmm...'.' is expected to be the most common ending bigram." HTML('<div class="alert alert-block alert-success">Your ending tag counts look good!</div>') # - # ### IMPLEMENTATION: Basic HMM Tagger # Use the tag unigrams and bigrams calculated above to construct a hidden Markov tagger. 
# # - Add one state per tag # - The emission distribution at each state should be estimated with the formula: $P(w|t) = \frac{C(t, w)}{C(t)}$ # - Add an edge from the starting state `basic_model.start` to each tag # - The transition probability should be estimated with the formula: $P(t|start) = \frac{C(start, t)}{C(start)}$ # - Add an edge from each tag to the end state `basic_model.end` # - The transition probability should be estimated with the formula: $P(end|t) = \frac{C(t, end)}{C(t)}$ # - Add an edge between _every_ pair of tags # - The transition probability should be estimated with the formula: $P(t_2|t_1) = \frac{C(t_1, t_2)}{C(t_1)}$ # + basic_model = HiddenMarkovModel(name="base-hmm-tagger") # TODO: create states with emission probability distributions P(word | tag) and add to the model # (Hint: you may need to loop & create/add new states) states = {} # for tag in data.training_set.tagset: # emission = DiscreteDistribution(p_dist(tag)) # state = State(emission, name=tag) # states[tag] = state # basic_model.add_state(states[tag]) # for tag in emission_counts: # total = tag_unigrams[tag] # dist_distr = {key: v/total for k, v in emission_counts[tag].items()} # emission = DiscreteDistribution(dist_distr) # state = State(emission, name=tag) # states[tag] = state # basic_model.add_state(states[tag]) # # TODO: add edges between states for the observed transition frequencies P(tag_i | tag_i-1) # # (Hint: you may need to loop & add transitions # total_tags = len(data.training_set.Y) # def get_tag_seq_prob(bigram_tuple): # bigram_count = bigram_counts(bigram_tuple) # return tag_bigrams[bigram_tuple]/total_tags # for tag_tuple in tag_bigrams.keys(): # t0 = tag_tuple[0] # t1 = tag_tuple[1] # prob_start = tag_starts[tag]/total_tags # prob_end = tag_ends[tag]/total_tags # basic_model.add_transition(basic_model.start, states[t0], prob_start) # tag_seq_prob = get_tag_seq_prob((t0, t1)) # basic_model.add_transition(states[t0], states[t1], get_tag_seq_prob((t0, t1))) # basic_model.add_transition(states[t0], basic_model.end, prob_end) # # NOTE: YOU SHOULD NOT NEED TO MODIFY ANYTHING BELOW THIS LINE # # finalize the model # basic_model.bake() # assert all(tag in set(s.name for s in basic_model.states) for tag in data.training_set.tagset), \ # "Every state in your network should use the name of the associated tag, which must be one of the training set tags." 
# assert basic_model.edge_count() == 168, \ # ("Your network should have an edge from the start node to each state, one edge between every " + # "pair of tags (states), and an edge from each state to the end node.") # HTML('<div class="alert alert-block alert-success">Your HMM network topology looks good!</div>') # + basic_model = HiddenMarkovModel(name="base-hmm-tagger") # TODO: create states with emission probability distributions P(word | tag) and add to the model # (Hint: you may need to loop & create/add new states) #basic_model.add_states() # Data preparation : compute pair_counts C(t,w) # ctw is constructed such as to get ctw[NOUN]['time']==1275 ctw = pair_counts(data.training_set.Y , data.training_set.X) # Initialize a dictionary to store the states object by tag key states = dict() # Loop over all the tagset and create a state for each tags for tag in data.training_set.tagset: # Create a dict to store the emission distribution for the tag emissions_distribution = dict() # Compute the emission distribution P(w|t)=C(t,w) / C(t) and store it in a dictionary for word in ctw[tag]: emissions_distribution[word] = ctw[tag][word] / tag_unigrams[tag] # Create a discrete distribution and a state for the current tag tag_emissions = DiscreteDistribution(emissions_distribution) tag_state = State(tag_emissions, name=tag) # Store the created state in a dictionary states[tag]=tag_state # Debug info #show_model(basic_model, figsize=(10, 10), filename="example.png", overwrite=True, show_ends=False) # Add all the created states to the model # Not clear in pomegranate documentation, but add_states requires to convert dict.values into list type to avoid errors basic_model.add_states([elt for elt in states.values()]) # TODO: add edges between states for the observed transition frequencies P(tag_i | tag_i-1) # (Hint: you may need to loop & add transitions #basic_model.add_transition() ## Add an edge from the starting state to each tag # Not clear in pomegranate documentation, but add_transition requires a State object in arguments, not a tag name (str) for tag in data.training_set.tagset: # Compute the transition probability P(t|start)=C(start,t) / C(start) tp = tag_starts[tag] / len(tag_starts) basic_model.add_transition(basic_model.start, states[tag], tp) ## Add an edge from each tag to the end state for tag in data.training_set.tagset: # Compute the transition probability P(end|t)=C(t,end) / C(t) tp = tag_ends[tag] / tag_unigrams[tag] basic_model.add_transition(states[tag], basic_model.end , tp) ## Add an edge between every pair of tags for t1, t2 in tag_bigrams.keys(): # Compute the transition probability P(t2|t1)=C(t1,t2) / C(t1) tp = tag_bigrams[(t1,t2)] / tag_unigrams[t1] basic_model.add_transition(states[t1], states[t2] , tp) # Debug info #show_model(basic_model, figsize=(10, 10), filename="example.png", overwrite=True, show_ends=False) # NOTE: YOU SHOULD NOT NEED TO MODIFY ANYTHING BELOW THIS LINE # finalize the modelstate basic_model.bake() assert all(tag in set(s.name for s in basic_model.states) for tag in data.training_set.tagset), \ "Every state in your network should use the name of the associated tag, which must be one of the training set tags." 
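# The expected total is 12 edges out of the start state, 12 * 12 = 144 edges between the
# tag states, and 12 edges into the end state, i.e. the 168 edges checked below.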
assert basic_model.edge_count() == 168, \ ("Your network should have an edge from the start node to each state, one edge between every " + "pair of tags (states), and an edge from each state to the end node.") HTML('<div class="alert alert-block alert-success">Your HMM network topology looks good!</div>') # + hmm_training_acc = accuracy(data.training_set.X, data.training_set.Y, basic_model) print("training accuracy basic hmm model: {:.2f}%".format(100 * hmm_training_acc)) hmm_testing_acc = accuracy(data.testing_set.X, data.testing_set.Y, basic_model) print("testing accuracy basic hmm model: {:.2f}%".format(100 * hmm_testing_acc)) assert hmm_training_acc > 0.97, "Uh oh. Your HMM accuracy on the training set doesn't look right." assert hmm_testing_acc > 0.955, "Uh oh. Your HMM accuracy on the testing set doesn't look right." HTML('<div class="alert alert-block alert-success">Your HMM tagger accuracy looks correct! Congratulations, you\'ve finished the project.</div>') # - # ### Example Decoding Sequences with the HMM Tagger for key in data.testing_set.keys[:3]: print("Sentence Key: {}\n".format(key)) print("Predicted labels:\n-----------------") print(simplify_decoding(data.sentences[key].words, basic_model)) print() print("Actual labels:\n--------------") print(data.sentences[key].tags) print("\n") # # ## Finishing the project # --- # # <div class="alert alert-block alert-info"> # **Note:** **SAVE YOUR NOTEBOOK**, then run the next cell to generate an HTML copy. You will zip & submit both this file and the HTML copy for review. # </div> !!jupyter nbconvert *.ipynb # ## Step 4: [Optional] Improving model performance # --- # There are additional enhancements that can be incorporated into your tagger that improve performance on larger tagsets where the data sparsity problem is more significant. The data sparsity problem arises because the same amount of data split over more tags means there will be fewer samples in each tag, and there will be more missing data tags that have zero occurrences in the data. The techniques in this section are optional. # # - [Laplace Smoothing](https://en.wikipedia.org/wiki/Additive_smoothing) (pseudocounts) # Laplace smoothing is a technique where you add a small, non-zero value to all observed counts to offset for unobserved values. # # - Backoff Smoothing # Another smoothing technique is to interpolate between n-grams for missing data. This method is more effective than Laplace smoothing at combatting the data sparsity problem. Refer to chapters 4, 9, and 10 of the [Speech & Language Processing](https://web.stanford.edu/~jurafsky/slp3/) book for more information. # # - Extending to Trigrams # HMM taggers have achieved better than 96% accuracy on this dataset with the full Penn treebank tagset using an architecture described in [this](http://www.coli.uni-saarland.de/~thorsten/publications/Brants-ANLP00.pdf) paper. Altering your HMM to achieve the same performance would require implementing deleted interpolation (described in the paper), incorporating trigram probabilities in your frequency tables, and re-implementing the Viterbi algorithm to consider three consecutive states instead of two. # # ### Obtain the Brown Corpus with a Larger Tagset # Run the code below to download a copy of the brown corpus with the full NLTK tagset. You will need to research the available tagset information in the NLTK docs and determine the best way to extract the subset of NLTK tags you want to explore. 
If you write the data out following the format specified in Step 1, then you can reload it using all of the code above for comparison.
#
# Refer to [Chapter 5](http://www.nltk.org/book/ch05.html) of the NLTK book for more information on the available tagsets.

# +
import nltk
from nltk import pos_tag, word_tokenize
from nltk.corpus import brown

nltk.download('brown')

training_corpus = nltk.corpus.brown
training_corpus.tagged_sents()[0]
# -
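
# One possible starting point for the tag extraction (a sketch that is not part of the original project code): NLTK can map the Brown corpus tags onto the simplified 12-tag universal tagset via the `tagset` keyword. The `universal_tagset` resource is an extra download; converting the result into the Step 1 data format is left open.

# +
nltk.download('universal_tagset')

# each sentence becomes a list of (word, tag) pairs using the universal tagset
universal_sents = brown.tagged_sents(tagset='universal')
print(universal_sents[0][:10])

# the reduced tag inventory you would be working with
print(sorted({tag for sent in universal_sents[:2000] for _, tag in sent}))
# -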
HMM Tagger.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3 (ipykernel)
#     language: python
#     name: python3
# ---

# We will build a linear regression model for the Medical Cost dataset. The dataset consists of the age, sex, BMI (body mass index), children, smoker and region features, which are independent, and charges as the dependent feature. We will predict individual medical costs billed by health insurance.
#
# # Definition & Working principle
# Let's build a model using **Linear regression**.
#
# Linear regression is a **supervised learning** algorithm used when the target / dependent variable is a **continuous** real number. It establishes a relationship between the dependent variable $y$ and one or more independent variables $x$ using a best fit line. It works on the principle of ordinary least squares $(OLS)$ / mean square error $(MSE)$. In statistics, OLS is a method to estimate the unknown parameters of the linear regression function; its goal is to minimize the sum of squared differences between the observed dependent variable in the given data set and the values predicted by the linear regression function.
#
# ## Hypothesis representation
#
# We will use $\mathbf{x_i}$ to denote the independent variable and $\mathbf{y_i}$ to denote the dependent variable. A pair $\mathbf{(x_i,y_i)}$ is called a training example. The subscript $\mathbf{i}$ in the notation is simply an index into the training set. With $\mathbf{m}$ training examples, $\mathbf{i = 1,2,3,...m}$.
#
# The goal of supervised learning is to learn a *hypothesis function $\mathbf{h}$* for a given training set that can be used to estimate $\mathbf{y}$ based on $\mathbf{x}$. So the hypothesis function is represented as
#
# $$\mathbf{ h_\theta(x_{i}) = \theta_0 + \theta_1x_i }$$
# $\mathbf{\theta_0,\theta_1}$ are the parameters of the hypothesis. This is the equation for **Simple / Univariate Linear regression**.
#
# For **Multiple Linear regression** more than one independent variable exists, so we will use $\mathbf{x_{ij}}$ to denote the independent variables and $\mathbf{y_{i}}$ to denote the dependent variable. With $\mathbf{n}$ independent variables, $\mathbf{j=1,2,3 ..... n}$. The hypothesis function is represented as
#
# $$\mathbf{h_\theta(x_{i}) = \theta_0 + \theta_1x_{i1} + \theta_2 x_{i2} + ..... \theta_j x_{ij} ...... \theta_n x_{in} }$$
# $\mathbf{\theta_0,\theta_1,....\theta_j....\theta_n }$ are the parameters of the hypothesis, $\mathbf{m}$ is the number of training examples, $\mathbf{n}$ is the number of independent variables, and $\mathbf{x_{ij}}$ is the $\mathbf{j^{th}}$ feature of the $\mathbf{i^{th}}$ training example.
#
# ## Import Library and Dataset
# Now we will import a couple of Python libraries required for our analysis and import the dataset

# Import library
import pandas as pd #Data manipulation
import numpy as np #Data manipulation
import matplotlib.pyplot as plt # Visualization
import seaborn as sns #Visualization
plt.rcParams['figure.figsize'] = [8,5]
plt.rcParams['font.size'] =14
plt.rcParams['font.weight']= 'bold'
plt.style.use('seaborn-whitegrid')

# +
# Import dataset
#path ='dataset/'
df = pd.read_csv('Parker.csv')
print('\nNumber of rows and columns in the data set: ',df.shape)
print('')

#Lets look into top few rows and columns in the dataset
df.head()
# -

# Now we have imported the dataset. When we look at the shape of the dataset it returns (1338,7), so there are $\mathbf{m=1338}$ training examples and 7 columns, i.e. $\mathbf{n=6}$ independent variables plus one dependent variable.
The target variable here is charges, and the remaining six variables (age, sex, bmi, children, smoker, region) are the independent variables. Since there are multiple independent variables, we need to fit a Multiple linear regression. Then the hypothesis function looks like
#
# $$\mathbf{ h_\theta(x_{i}) = \theta_0+\theta_1 age + \theta_2 sex + \theta_3 bmi + \theta_4 children + \theta_5 smoker + \theta_6 region }$$
#
# This is the multiple linear regression equation for the given dataset.
# If $\mathbf{i=1}$ then
# $$\mathbf{h_\theta(x_{1}) = \theta_0+\theta_1 19 + \theta_2 female + \theta_3 27.900 + \theta_4 1 + \theta_5 yes + \theta_6 southwest}$$
# $$\mathbf{y_1 = 16884.92400}$$
# If $\mathbf{i=3}$ then $$\mathbf{h_\theta(x_{3}) = \theta_0+\theta_1 28 + \theta_2 male + \theta_3 33.000 + \theta_4 3 + \theta_5 no + \theta_6 northwest}$$
# $$\mathbf{y_3 = 4449.46200}$$
# *Note*: In python the index starts from 0.
# $$\mathbf{x_1 = \left(\begin{matrix} x_{11} & x_{12} & x_{13} & x_{14} & x_{15} & x_{16}\end{matrix}\right) = \left(\begin{matrix} 19 & female & 27.900 & 1 & yes & southwest\end{matrix}\right) }$$
# ## Matrix Formulation
#
# In general we can write the above vector as $$ \mathbf{ x_{i}} = \left( \begin{smallmatrix} \mathbf{x_{i1}} & \mathbf{x_{i2}} &.&.&.& \mathbf{x_{in}} \end{smallmatrix} \right)$$
#
# Now we combine all available individual vectors into a single input matrix of size $(m,n)$, denoted by $\mathbf{X}$, the input matrix, which consists of all training examples,
# $$\mathbf{X} = \left( \begin{smallmatrix} x_{11} & x_{12} &.&.&.&.& x_{1n}\\
# x_{21} & x_{22} &.&.&.&.& x_{2n}\\
# x_{31} & x_{32} &.&.&.&.& x_{3n}\\
# .&.&.&. &.&.&.& \\
# .&.&.&. &.&.&.& \\
# x_{m1} & x_{m2} &.&.&.&.& x_{mn}\\
# \end{smallmatrix} \right)_{(m,n)}$$
#
# We represent the parameters of the function and the dependent variable in vector form as
# $$\theta = \left (\begin{matrix} \theta_0 \\ \theta_1 \\ .\\.\\ \theta_j\\.\\.\\ \theta_n \end {matrix}\right)_{(n+1,1)}
# \mathbf{ y } = \left (\begin{matrix} y_1\\ y_2\\. \\. \\ y_i \\. \\. \\ y_m \end{matrix} \right)_{(m,1)}$$
#
# So we can represent the hypothesis function in vectorized form as $$\mathbf{ h_\theta{(x)} = X\theta}$$.
#
#

# +
"""
for our visualization purpose we will fit a line using the seaborn library,
only for bmi as the independent variable and charges as the dependent variable
"""
sns.lmplot(x='Income_Range',y='After_FBS',data=df,aspect=2,height=6)
plt.xlabel('Body Mass Index$(kg/m^2)$: as Independent variable')
plt.ylabel('Insurance Charges: as Dependent variable')
plt.title('Charge Vs BMI');
# -

# In the above plot we fit a regression line to the variables.

# ## Cost function
#
# A cost function measures how much error the model makes in estimating the relationship between $x$ and $y$.
# We can measure the accuracy of our hypothesis function by using a cost function. It takes the average of the squared differences between the observed dependent variable in the given dataset and the values predicted by the hypothesis function.
#
# $$\mathbf{ J(\theta) = \frac{1}{m} \sum_{i=1}^{m}(\hat{y}_i - y_i)^2}$$
# $$\mathbf{J(\theta) = \frac{1}{m} \sum_{i=1}^{m}(h_\theta(x_i) - y_i)^2}$$
# To implement linear regression, we add an extra column to each training example, the $x_0$ feature, where $\mathbf{x_0=1}$.
$\mathbf{x_{i}} = \left( \begin{smallmatrix} x_{i0} & x_{i1} & x_{i2} &.&.&.& x_{in} \end{smallmatrix} \right)$, where $\mathbf{x_{i0} =1}$, and the input matrix becomes
#
# $$\mathbf{X} = \left( \begin{smallmatrix} x_{10} & x_{11} & x_{12} &.&.&.&.& x_{1n}\\
# x_{20} & x_{21} & x_{22} &.&.&.&.& x_{2n}\\
# x_{30} & x_{31} & x_{32} &.&.&.&.& x_{3n}\\
# .&.&.&.&. &.&.&.& \\
# .&.&.&.&. &.&.&.& \\
# x_{m0} & x_{m1} & x_{m2} &.&.&.&.& x_{mn}\\
# \end{smallmatrix} \right)_{(m,n+1)}$$
# Each of the m input samples is now a vector with n+1 entries, with $x_0$ being 1 for our convenience, that is $\mathbf{x_{10},x_{20},x_{30} .... x_{m0} =1}$. Now we rewrite the ordinary least squares cost function in matrix form as
# $$\mathbf{J(\theta) = \frac{1}{m} (X\theta - y)^T(X\theta - y)}$$
#
# Let's recall the rule for matrix multiplication: two matrices can be multiplied only if the number of columns of the first matrix equals the number of rows of the second matrix. Here the input matrix $\mathbf{X}$ is of size $\mathbf{(m,n+1)}$, the parameter vector is of size $(n+1,1)$ and the dependent variable vector is of size $\mathbf{(m,1)}$. The product $\mathbf{X_{(m,n+1)}\theta_{(n+1,1)}}$ returns a vector of size $\mathbf{(m,1)}$, and the product $\mathbf{(X\theta - y)^T_{(1,m)}(X\theta - y)_{(m,1)}}$ returns a scalar (a $1\times1$ matrix).
# ## Normal Equation
# The normal equation is an analytical solution to the linear regression problem with an ordinary least squares cost function. To minimize our cost function, we take the partial derivative of $\mathbf{J(\theta)}$ with respect to $\theta$ and set it equal to $0$. The derivative of a function tells us how much the output changes for a small change in the input.
# $$\mathbf{min_{\theta_0,\theta_1..\theta_n} J({\theta_0,\theta_1..\theta_n})}$$
# $$\mathbf{\frac{\partial J(\theta_j)}{\partial\theta_j} =0}$$
# where $\mathbf{j = 0,1,2,....n}$
#
# Now we apply the partial derivative to our cost function,
# $$\mathbf{\frac{\partial J(\theta_j)}{\partial\theta_j} = \frac{\partial }{\partial \theta} \frac{1}{m}(X\theta - y)^T(X\theta - y) }$$
# We can drop the $\mathbf{\frac {1}{m}}$ factor, since we are going to set the derivative equal to $0$. Expanding $\mathbf{J(\theta)}$,
#
# $$\mathbf{J(\theta) = (X\theta -y)^T(X\theta - y)}$$
# $$\mathbf{= ((X\theta)^T - y^T)(X\theta -y)}$$
# $$\mathbf{= (\theta^T X^T - y^T)(X\theta - y)}$$
# $$\mathbf{= \theta^T X^T X \theta - y^T X \theta - \theta^T X^T y + y^T y}$$
# $$\mathbf{ = \theta^T X^T X \theta - 2\theta^T X^T y + y^T y}$$
#
# Here $\mathbf{y^T_{(1,m)} X_{(m,n+1)} \theta_{(n+1,1)} = \theta^T_{(1,n+1)} X^T_{(n+1,m)} y_{(m,1)}}$ because this product is a scalar, and a scalar is equal to its own transpose.
#
# $$\mathbf{\frac{\partial J(\theta)}{\partial \theta} = \frac{\partial}{\partial \theta} (\theta^T X^T X \theta - 2\theta^T X^T y + y^T y )}$$
# $$\mathbf{ = X^T X \frac {\partial \theta^T \theta}{\partial\theta} - 2 X^T y \frac{\partial \theta^T}{\partial\theta} + \frac {\partial y^T y}{\partial\theta}}$$
# Using the partial derivative rules $\mathbf{\frac {\partial x^2}{\partial x} = 2x}$, $\mathbf{\frac {\partial kx}{\partial x} = k}$ and
# $\mathbf{\frac {\partial \text{constant}}{\partial x} = 0}$,
#
# $$\mathbf{\frac{\partial J(\theta)}{\partial\theta} = X^T X 2\theta - 2X^T y +0}$$
# $$\mathbf{ 0 = 2X^T X \theta - 2X^T y}$$
# $$\mathbf{ X^T X \theta = X^T y }$$
# $$\mathbf{ \theta = (X^TX)^{-1} X^Ty }$$
# This is the normal equation for linear regression.

# ## Exploratory data analysis

df.describe()

# ### Check for missing value

plt.figure(figsize=(12,4))
sns.heatmap(df.isnull(),cbar=False,cmap='viridis',yticklabels=False)
plt.title('Missing value in the dataset');

# There is no missing value in the data set.

# ### Plots

# correlation plot
corr = df.corr()
sns.heatmap(corr, cmap = 'Wistia', annot= True);

# There is no strong correlation among the variables.

# +
f= plt.figure(figsize=(12,4))

ax=f.add_subplot(121)
sns.distplot(df["Before_FBS"],bins=50,color='r',ax=ax)
ax.set_title('Distribution of Before_FBS')

ax=f.add_subplot(122)
sns.distplot(np.log10(df['After_FBS']),bins=40,color='b',ax=ax)
ax.set_title('Distribution of After_FBS')
ax.set_xscale('log');
# -

# If we look at the left plot, the charges vary from 1120 to 63500 and the distribution is right skewed. In the right plot we apply a log transform, and the distribution becomes approximately normal. For further analysis we will apply the log to the target variable charges.

# +
f = plt.figure(figsize=(14,6))
ax = f.add_subplot(121)
sns.violinplot(x='Income_Range', y='After_FBS',data=df,palette='Wistia',ax=ax)
ax.set_title('Violin plot Income')

ax = f.add_subplot(122)
sns.violinplot(x="Before_FBS", y='After_FBS',data=df,palette='magma',ax=ax)
ax.set_title('Violin plot of FBS');
# -

# From the left plot, the insurance charges for male and female are in approximately the same range, averaging around 5000 bucks. In the right plot, the insurance charges for smokers cover a much wider range compared to non-smokers; the average charge for non-smokers is approximately 5000 bucks, while for smokers the minimum charge is itself about 5000 bucks.

df.groupby('Income_Range').agg(['mean','min','max'])['After_FBS']

# >From the left plot, the minimum age of an insured person is 18 years. There are slabs in the policy: most non-smokers take the $1^{st}$ and $2^{nd}$ slab, while for smokers the policy starts at the $2^{nd}$ and $3^{rd}$ slab.
#
# >Body mass index (BMI) is a measure of body fat based on height and weight that applies to adult men and women. The minimum BMI is 16$kg/m^2$ and the maximum goes up to 54$kg/m^2$.

# ## Data Preprocessing
# ### Encoding
# Machine learning algorithms cannot work with categorical data directly; categorical data must be converted to numbers.
# 1. Label Encoding
# 2. One hot encoding
# 3. Dummy variable trap
#
# **Label encoding** refers to transforming the word labels into numerical form so that the algorithms can understand how to operate on them.
#
# A **One hot encoding** is a representation of a categorical variable as binary vectors. It allows the representation of categorical data to be more expressive. This first requires that the categorical values be mapped to integer values, that is label encoding. Then, each integer value is represented as a binary vector that is all zero values except the index of the integer, which is marked with a 1.
#
# The **Dummy variable trap** is a scenario in which the independent variables are multicollinear, i.e. two or more variables are highly correlated; in simple terms, one variable can be predicted from the others.
#
# By using the *pandas get_dummies* function we can do all three steps above in one line of code. We will use this function to get dummy variables for the sex, children, smoker and region features. By setting *drop_first=True* the function avoids the dummy variable trap by dropping one of the dummy columns derived from each original variable. Pandas makes our life easy.

# ### Box-Cox transformation
# A Box-Cox transformation is a way to transform non-normal dependent variables into a normal shape. Normality is an important assumption for many statistical techniques; if your data isn't normal, applying a Box-Cox transformation means that you are able to run a broader number of tests. All that we need to perform this transformation is to find the lambda value and apply the rule shown below to your variable.
# $$\mathbf{ y_i^{(\lambda)} = \begin {cases}\dfrac {y_i^\lambda - 1}{\lambda}, & \lambda \neq 0 \\
# \log(y_i), & \lambda = 0 \end{cases}}$$
# The trick of the Box-Cox transformation is to find the lambda value; however, in practice this is quite affordable. The following function returns the transformed variable, the lambda value and a confidence interval.

# The original categorical variables are removed, and one of the one-hot encoded columns for each categorical variable is also dropped. So we have completed all three encoding steps by using the get_dummies function.

# ## Train Test split

# +
from sklearn.model_selection import train_test_split

X = df_encode.drop('charges',axis=1) # Independent variables
y = df_encode['charges'] # dependent variable

X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.3,random_state=23)
# -

# ## Model building
# In this step we build the model using our linear regression equation $\mathbf{\theta = (X^T X)^{-1} X^Ty}$. First we need to add a feature $\mathbf{x_0 =1}$ to our original data set.

# +
# Step 1: add x0 =1 to dataset
X_train_0 = np.c_[np.ones((X_train.shape[0],1)),X_train]
X_test_0 = np.c_[np.ones((X_test.shape[0],1)),X_test]

# Step 2: build model
theta = np.matmul(np.linalg.inv( np.matmul(X_train_0.T,X_train_0) ), np.matmul(X_train_0.T,y_train))
# -

# The parameters for the linear regression model
parameter = ['theta_'+str(i) for i in range(X_train_0.shape[1])]
columns = ['intercept:x_0=1'] + list(X.columns.values)
parameter_df = pd.DataFrame({'Parameter':parameter,'Columns':columns,'theta':theta})

# +
# Scikit Learn module
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(X_train,y_train) # Note: there is no need to add x_0 = 1, sklearn will take care of it.

#Parameter
sk_theta = [lin_reg.intercept_]+list(lin_reg.coef_)
parameter_df = parameter_df.join(pd.Series(sk_theta, name='Sklearn_theta'))
parameter_df
# -

# The parameters obtained from both models are the same. So we successfully built our model using the normal equation and verified it using the sklearn linear regression module. Let's move ahead; the next step is prediction and model evaluation.

# ## Model evaluation
# We will predict values for the target variable by using our model parameters on the test data set, and then compare the predicted values with the actual values in the test set. We compute the **Mean Square Error** using the formula
# $$\mathbf{ J(\theta) = \frac{1}{m} \sum_{i=1}^{m}(\hat{y}_i - y_i)^2}$$
#
# $\mathbf{R^2}$ is a statistical measure of how close the data are to the fitted regression line. $\mathbf{R^2}$ is always between 0 and 100%.
0% indicates that the model explains none of the variability of the response data around its mean; 100% indicates that the model explains all the variability of the response data around the mean.
#
# $$\mathbf{R^2 = 1 - \frac{SSE}{SST}}$$
# **SSE = Sum of Squared Errors**
# **SST = Sum of Squares Total**
# $$\mathbf{SSE = \sum_{i=1}^{m}(\hat{y}_i - y_i)^2}$$
# $$\mathbf{SST = \sum_{i=1}^{m}(y_i - \bar{y}_i)^2}$$
# Here $\mathbf{\hat{y}}$ is the predicted value and $\mathbf{\bar{y}}$ is the mean value of $\mathbf{y}$.

# +
# Normal equation
y_pred_norm =  np.matmul(X_test_0,theta)

#Evaluation: MSE
J_mse = np.sum((y_pred_norm - y_test)**2)/ X_test_0.shape[0]

# R_square
sse = np.sum((y_pred_norm - y_test)**2)
sst = np.sum((y_test - y_test.mean())**2)
R_square = 1 - (sse/sst)
print('The Mean Square Error(MSE) or J(theta) is: ',J_mse)
print('R square obtained for the normal equation method is :',R_square)

# +
# sklearn regression module
y_pred_sk = lin_reg.predict(X_test)

#Evaluation: MSE
from sklearn.metrics import mean_squared_error
J_mse_sk = mean_squared_error(y_pred_sk, y_test)

# R_square
R_square_sk = lin_reg.score(X_test,y_test)
print('The Mean Square Error(MSE) or J(theta) is: ',J_mse_sk)
print('R square obtained for the scikit learn library is :',R_square_sk)
# -

# The model returns an $R^2$ value of 77.95%, so it fits our test data quite well, but we can still improve the performance with different techniques. Please note that we transformed the target variable by applying a log; when we put the model into production, the antilog has to be applied to the predictions.

# ## Model Validation
# In order to validate the model we need to check a few assumptions of the linear regression model. The common assumptions for a *Linear Regression* model are the following:
# 1. Linear Relationship: In linear regression the relationship between the dependent and independent variables is assumed to be *linear*. This can be checked by scatter plotting Actual vs Predicted values.
# 2. The residual error plot should be *normally* distributed.
# 3. The *mean* of the *residual error* should be 0, or as close to 0 as possible.
# 4. Linear regression requires all variables to be multivariate normal. This assumption can best be checked with a Q-Q plot.
# 5. Linear regression assumes that there is little or no *multicollinearity* in the data. Multicollinearity occurs when the independent variables are too highly correlated with each other. The variance inflation factor *VIF* identifies correlation between independent variables and the strength of that correlation. $\mathbf{VIF = \frac {1}{1-R^2}}$; if 1 < VIF < 5 there is moderate correlation, and VIF > 5 indicates a critical level of multicollinearity.
# 6. Homoscedasticity: The data are homoscedastic, meaning the residuals are equal across the regression line. We can look at a residual vs fitted value scatter plot; if heteroscedastic, the plot will exhibit a funnel shape pattern.
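
# As a complement to point 5 above: besides the single aggregate VIF computed from $R^2$ further below, per-feature VIF values can also be inspected. The following cell is only a sketch, not part of the original notebook; it assumes the encoded `X_train` frame from the train/test split above is available and that `statsmodels` is installed.

# +
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# add the intercept column expected by the VIF computation
X_vif = sm.add_constant(X_train)

# one VIF value per column (the constant's VIF is dropped afterwards, it is not informative)
vif_per_feature = pd.Series(
    [variance_inflation_factor(X_vif.values.astype(float), i) for i in range(X_vif.shape[1])],
    index=X_vif.columns, name='VIF')
vif_per_feature.drop('const')
# -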
# +
# Check for Linearity
f = plt.figure(figsize=(14,5))
ax = f.add_subplot(121)
sns.scatterplot(y_test,y_pred_sk,ax=ax,color='r')
ax.set_title('Check for Linearity:\n Actual Vs Predicted value')

# Check for Residual normality & mean
ax = f.add_subplot(122)
sns.distplot((y_test - y_pred_sk),ax=ax,color='b')
ax.axvline((y_test - y_pred_sk).mean(),color='k',linestyle='--')
ax.set_title('Check for Residual normality & mean: \n Residual error');

# +
# Check for Multivariate Normality
# Quantile-Quantile plot
f,ax = plt.subplots(1,2,figsize=(14,6))
import scipy as sp
_,(_,_,r)= sp.stats.probplot((y_test - y_pred_sk),fit=True,plot=ax[0])
ax[0].set_title('Check for Multivariate Normality: \nQ-Q Plot')

#Check for Homoscedasticity
sns.scatterplot(y = (y_test - y_pred_sk), x= y_pred_sk, ax = ax[1],color='r')
ax[1].set_title('Check for Homoscedasticity: \nResidual Vs Predicted');
# -

# Check for Multicollinearity
#Variance Inflation Factor
VIF = 1/(1- R_square_sk)
VIF

# The linear regression assumption checks turn out as follows:
# 1. In our model the actual vs predicted plot is curved, so the linearity assumption fails.
# 2. The residual mean is zero, but the residual error plot is right skewed.
# 3. The Q-Q plot shows that log values greater than about 1.5 tend to deviate upwards from the line.
# 4. The plot exhibits heteroscedasticity; the error increases after a certain point.
# 5. The variance inflation factor value is less than 5, so there is no multicollinearity.
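
# A closing remark on reproducibility: the `df_encode` frame used in the train/test split above is created by an encoding cell that does not appear in this notebook. Below is a minimal sketch of what that step could look like, based only on the description in the text (get_dummies with `drop_first=True` on the categorical features and a log transform of the skewed target); the column names are assumptions and should be adapted to the actual dataset.

# +
import numpy as np
import pandas as pd

# hypothetical reconstruction of the omitted encoding step (column names assumed)
categorical_columns = ['sex', 'children', 'smoker', 'region']
df_encode = pd.get_dummies(df, columns=categorical_columns, drop_first=True)

# log-transform the right-skewed target, as discussed in the EDA section
df_encode['charges'] = np.log(df_encode['charges'])
df_encode.head()
# -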
third_project/linear-regression-tutorial (1).ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # How to fit a rise time to an exponential instability with FITX # # FITX is a small library to help isolate and fit exponential rise times in unstable systems with saturation. # # In the following we show how to use the libary with the example of a dynamical instability in a particle accelerator which stops due to machine non-linearities. # # Copyright CERN, <NAME>, 2019 # + import numpy as np import matplotlib.pyplot as plt # %matplotlib inline import seaborn as sns sns.set_context('talk', font_scale=1.4, rc={'lines.linewidth': 3}) sns.set_style('whitegrid', {'grid.linestyle': ':', 'grid.color': 'red', 'axes.edgecolor': '0.5', 'axes.linewidth': 1.2, 'legend.frameon': True}) # - # ## The Data # # We have stored the centroid motion of the unstable beam. Here we use both the $x$ and $x'$ data to obtain a purely positive signal to be fit exponentially. The quadrature signal $x'$ can also be obtained by using a `Hilbert` filter on $x$ (see e.g. `scipy.signal.Hilbert` with its imaginary part), e.g. for measurement data in a particle accelerator. In the present example, the data comes from a simulation with octupole amplitude detuning leading to a saturation effect. mean_x = np.loadtxt('./example_mean_x.dat') mean_xp = np.loadtxt('./example_mean_xp.dat') plt.plot(mean_x) plt.xlabel('Turns') plt.ylabel('Horizontal centroid position'); # Let's construct the envelope or amplitude signal by using the quadrature signal $x'$: beta_x = 92.759 signal_x = np.sqrt((mean_x)**2 + (beta_x * mean_xp)**2) plt.plot(signal_x) plt.xlabel('Turns') plt.ylabel("Horizontal amplitude\n" + r"$\sqrt{\langle x \rangle^2 + \beta_x^2\langle x'\rangle^2}$"); # ## The Instability Fit # # Now let's use FITX in order to isolate the pure exponential instability from this positive signal and fit the rise time: from FITX import fit_risetime # + # numerical parameters smoothing_window_size = 2000 plt.figure(figsize=(8, 5)) # set a minimum level below which the instability is not fit min_level = 5 * np.max(signal_x[:1000]) # FITX me! --> returns the rise time in turns rx = fit_risetime( signal_x, min_level=min_level, smoothing_window_size=smoothing_window_size, matplotlib_axis=plt.gca() ) # plotting plt.title(r"Horizontal amplitude $\sqrt{\langle x \rangle^2 + \beta_x^2\langle x'\rangle^2}$" + "\n" + r"exponential fit: {:.1f} turns".format(rx)); plt.ylabel(r'Centroid amplitude $J_{\langle x \rangle}$') plt.xlabel('Turns') for l in plt.gca().xaxis.get_ticklabels(): l.set_rotation(15) l.set_horizontalalignment('center') plt.plot(signal_x[:500000], ls=':', color='green', zorder=-10) plt.savefig('fitting.png', bbox_inches='tight') # - # Note the isolated region between the red bars. The algorithm fits starting from a positive curvature point and stops when the curvature turns negative. Through this solid approach we avoid fitting the saturated part of the instability. The original signal is plotted in green. The fit itself is plotted with the orange broken line. # # We obtain an exponential rise time of... print ('... {:.1f} turns!'.format(rx))
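
# As noted at the top of this example, when only the centroid position $x$ is available (e.g. from measurements), the quadrature signal can be obtained from a Hilbert transform instead of $x'$. The cell below is a small sketch added for illustration (not part of the original example); it reuses `mean_x` and `signal_x` from above and ignores any filtering one might apply to noisy data.

# +
from scipy.signal import hilbert

# the magnitude of the analytic signal is the envelope of the centroid oscillation;
# its imaginary part is the quadrature component of mean_x
analytic_signal = hilbert(mean_x)
signal_x_hilbert = np.abs(analytic_signal)

plt.plot(signal_x, label='from x and xp')
plt.plot(signal_x_hilbert, ls='--', label='from Hilbert transform of x')
plt.xlabel('Turns')
plt.ylabel('Horizontal amplitude')
plt.legend();
# -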
HowTo.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Assignment 2 # # (๑• .̫ •๑) # # Your last pokemon adventure went well, but you aren't quite the very best like no one ever was. Faithful to your data scientist ways, you decide to further analyse your pokedex to improve your training. # # The data can be found under `pokedex/pokemons.csv`, and is the same as assignment 1. Run the cell below to get an overview of the dataset: # + import pandas as pd import numpy as np df = pd.read_csv('pokedex/pokemons.csv') df.head() # - # ## Problem 1 # # Analysing and grouping "smart" pokemons by `Type 1` wasn't very successful last assignment: we got a headache from trying to train a Psyduck. Since then however, we learnt a powerful unsupervised learning method for analysing **clusters** in our datasets. # # 💪 **Task: Use k-Means clustering to find 4 clusters in the pokemon dataset, and store the predictions in a vector called `y_kmeans`.** # Pro-tip 1: You should only take into account the `Attack`, `Defense`, `Sp. Atk`, `Sp. Def`, `Speed`, and `HP` columns. # Pro-tip 2: Please use the `random_state=42` argument when constructing your sklearn class, to make sure your results are reproducible. Marks won't be taken off for using the wrong random seed, but the unit tests won't pass! # Pro-tip 3: We have seen in lectures that sklearn expects NumPy `ndarray`s as argument to its training and prediction methods. Whilst that is true, it can also accept pandas `DataFrame`s directly, since these are `ndarray` wrappers. You can use whichever you prefer. # + # INSERT YOUR CODE HERE # + def test_kmeans(): assert len(y_kmeans) == 800, f'The size of your prediction vector is wrong: {len(y_kmeans)}. There should be 800, one per pokemon.' unique_clusters = len(np.unique(y_kmeans)) assert unique_clusters == 4, f'There should 4 unique clusters, your prediction vector has {unique_clusters}' assert y_kmeans.mean() == 1.5025, f'Something is not quite right with your prediction vector. Have you used a random seed of 42?' print('Success! 🎉') return test_kmeans() # - # ## Problem 2 # # Now that we have clustered our pokemons, we'd like to explore these groups. Specifically, we'd like to know the mean stats of each cluster, so we can compare their average strengths and weaknesses. # # # 💪 **Task: Group the pokemons by cluster, and calculate the mean statistics of each group. Save this in a `DataFrame` called `cluster_means`. For example, you should be able to clearly read the average `Defense` of cluster 2 in your `cluster_means` `DataFrame`.** # Pro-tip 1: Adding a `Cluster` column to `df` will allow you to work on a single `DataFrame` and make the task much easier 🙃 # Pro-tip 2: You should only expect numerical columns in `cluster_means`, since the mean of a string is undefined. # + # INSERT YOUR CODE HERE # + import math def test_cluster_means(): assert len(cluster_means) == 4, f'Your dataframe has {len(cluster_means)} rows, but 4 are expected: one per cluster' assert 'Attack' in cluster_means.columns, f'Your dataframe should contain the Attack column' assert math.isclose(cluster_means.values.sum(), 5276.0872, rel_tol=1e-5), f'Something is not quite right with your cluster means. Have you used a random seed of 42?' print('Success! 🎉') return cluster_means test_cluster_means() # - # 🧠 **Bonus Question: Inspect the clusters and their traits. 
What do you think the clusters represent? Try to identify what makes each cluster stand out and qualitatively describe the "identity" of each cluster.** # # ℹ️ Notice how building these kinds of clustered "profiles" is beyond anything we could have done just by manipulating the `DataFrame`. Last assignment, we split the pokemons by types, but k-Means takes into account the _density_ of the dataset to create more natural groupings. # ## Problem 3 # # We're getting an idea of what our clusters represent, and how their distributions vary. However, we have recently acquired data visualization powers ⚡️, so we'd like to visualize these differences. # # 💪 **Task: Visualize some aspect of `cluster_means`. Feel free to focus on a particular column, or to aggregate some of the data. The graph should show some differences between the clusters. Be creative!** # Pro-tip 1: Don't overthink the chart content, you will mostly be graded on healthy visualization practices. # Pro-tip 2: Try to use the matplotlib api instead of the `Dataframe.plot` built in pandas. This should give you more control and allow you to create a more effective visualization. # + # INSERT YOUR CODE HERE # - # 🧠 **Bonus Question: Why you chose this data to plot? Why did you represent it in this particular way?** # ## Problem 4 # # We have shown differences in the cluster average statistics with a beautiful graph. Now, we want to visualize the cluster assignments of ALL of the data. However, we have six "stats" columns, and even the world of pokemon is only three dimensional... Prepare for trouble, and make it double, it's time for dimensionality reduction! # # 💪 **Task: Reduce the dimensions of the pokemon dataset using PCA. Store the principal components in a NumPy `ndarray` called `components`. The unit test will call a `.plot_PCA()` method to display the data points, and their color coded cluster assignments.** # Pro-tip 1: You should only use the numerical columns: `Attack`, `Defense`, `Sp. Atk`, `Sp. Def`, `Speed`, and `HP`. # Pro-tip 2: Think of how many dimensions you must reduce the dataset to, so that we are able to visualize it. It's the same as we did in class! # Pro-tip 3: Please use the `random_state=42` argument when constructing your sklearn class, to make sure your results are reproducible. Marks won't be taken off for using the wrong random seed, but the unit tests won't pass! # Pro-tip 4: We have seen in lectures that sklearn expects NumPy `ndarrays` as argument to its training and prediction methods. Whilst that is true, it can also accept pandas `DataFrames` directly, since these are `ndarray` wrappers. You can use whichever you prefer. # Pro-tip 5: The `plot_PCA()` method uses the `y_kmeans` predictions to pick marker colors. Make sure you have finished problem 1 and run the cells to make it available here. 
# # + # INSERT YOUR CODE HERE # + import matplotlib.pyplot as plt from matplotlib.lines import Line2D def plot_PCA(components): # assign a color to each prediction colors = ['blue', 'red', 'green', 'orange'] features_colors = [colors[y] for y in y_kmeans] # plot the PCA components fig = plt.figure() ax = fig.add_subplot('111') ax.scatter(components[:, 0], components[:, 1], c=features_colors, marker='o', alpha=0.4) ax.set_title('PCA visualization of pokemon k-Means clusters') legends = [legend(i, c) for i, c in enumerate(colors)] ax.legend(handles=legends, loc='upper left') plt.show() def legend(i, color): return Line2D([0], [0], marker='o', color='w', label=f'Cluster {i}',markerfacecolor=color, markersize=8) def test_pca(): rows, columns = components.shape assert columns == 2, f'Your components have {columns} dimensions. In order to visualise the data, we expect 2 dimensions.' assert rows == 800, f'Your components have {rows} data points, but 800 are expected, one per pokemon.' assert math.isclose(components[42, 1], -18.321118, rel_tol=1e-5), f'Something is not quite right with your dimensional reduction. Have you used a random seed of 42?' print('Success! 🎉') plot_PCA(components) test_pca() # - # 🧠 **Bonus Question: Do you think this matches the results of problem 2? Why? What do the 2 principal axes seem to represent?** # ## Problem 5 # # An Old man once told you how to catch Weedles. 🐛 But he also said that winning battles comes down to unique fighting styles. We want to find the pokemons that stand out the most from the rest. # # 💪 **Task: Use gaussian distribution anomaly detection to identify the top 1% of most unique pokemons. Use the resulting predictions vector to filter our `df` `DataFrame`, and save the outlier pokemons in a new `DataFrame` called `outliers`.** # Pro-tip 1: You should only use the numerical columns: `Attack`, `Defense`, `Sp. Atk`, `Sp. Def`, `Speed`, and `HP`. # Pro-tip 2: Please use the `random_state=42` argument when constructing your sklearn class, to make sure your results are reproducible. Marks won't be taken off for using the wrong random seed, but the unit tests won't pass! # Pro-tip 3: We have seen in lectures that sklearn expects NumPy ndarrays as argument to its training and prediction methods. Whilst that is true, it can also accept pandas DataFrames directly, since these are ndarray wrappers. So use whichever you prefer. # Pro-tip 4: Remember that the `contamination` argument changes the percentage of our dataset we expect to be outliers. # Pro-tip 5: It could help to add the predictions in an `Outlier` column to the original `df`, to make the filtering of the anomalous pokemons easier 🙃 # # + # INSERT YOUR CODE HERE # + def test_anomaly_detection(): assert len(outliers) == 8, f'You found {len(outliers)} outliers, but we expected 800 * 1% = 8' assert outliers['Total'].sum() == 4284, f'Something is not quite right with your anomaly detection. Have you used a random seed of 42?' print('Success! 🎉') return outliers test_anomaly_detection() # - # 🧠 **Bonus Question: Is this what you expected? Can you explain why these pokemons are outliers? Can you spot a pattern?**
assignments/assignment2/assignment2.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Developing an AI application # # Going forward, AI algorithms will be incorporated into more and more everyday applications. For example, you might want to include an image classifier in a smart phone app. To do this, you'd use a deep learning model trained on hundreds of thousands of images as part of the overall application architecture. A large part of software development in the future will be using these types of models as common parts of applications. # # In this project, you'll train an image classifier to recognize different species of flowers. You can imagine using something like this in a phone app that tells you the name of the flower your camera is looking at. In practice you'd train this classifier, then export it for use in your application. We'll be using [this dataset](http://www.robots.ox.ac.uk/~vgg/data/flowers/102/index.html) of 102 flower categories, you can see a few examples below. # # <img src='assets/Flowers.png' width=500px> # # The project is broken down into multiple steps: # # * Load and preprocess the image dataset # * Train the image classifier on your dataset # * Use the trained classifier to predict image content # # We'll lead you through each part which you'll implement in Python. # # When you've completed this project, you'll have an application that can be trained on any set of labeled images. Here your network will be learning about flowers and end up as a command line application. But, what you do with your new skills depends on your imagination and effort in building a dataset. For example, imagine an app where you take a picture of a car, it tells you what the make and model is, then looks up information about it. Go build your own dataset and make something new. # # First up is importing the packages you'll need. It's good practice to keep all the imports at the beginning of your code. As you work through this notebook and find you need to import a package, make sure to add the import up here. # # Please make sure if you are running this notebook in the workspace that you have chosen GPU rather than CPU mode. # Import required modules import torch from torch import nn, optim import torch.nn.functional as F from torchvision import datasets, transforms, models from PIL import Image import pandas as pd import numpy as np import matplotlib.pyplot as plt # %matplotlib inline import json from collections import OrderedDict # ## Load the data # # Here you'll use `torchvision` to load the data ([documentation](http://pytorch.org/docs/0.3.0/torchvision/index.html)). The data should be included alongside this notebook, otherwise you can [download it here](https://s3.amazonaws.com/content.udacity-data.com/nd089/flower_data.tar.gz). The dataset is split into three parts, training, validation, and testing. For the training, you'll want to apply transformations such as random scaling, cropping, and flipping. This will help the network generalize leading to better performance. You'll also need to make sure the input data is resized to 224x224 pixels as required by the pre-trained networks. # # The validation and testing sets are used to measure the model's performance on data it hasn't seen yet. For this you don't want any scaling or rotation transformations, but you'll need to resize then crop the images to the appropriate size. 
# # The pre-trained networks you'll use were trained on the ImageNet dataset where each color channel was normalized separately. For all three sets you'll need to normalize the means and standard deviations of the images to what the network expects. For the means, it's `[0.485, 0.456, 0.406]` and for the standard deviations `[0.229, 0.224, 0.225]`, calculated from the ImageNet images. These values will shift each color channel to be centered at 0 and range from -1 to 1. # # define directories data_dir = 'flowers' train_dir = data_dir + '/train' valid_dir = data_dir + '/valid' test_dir = data_dir + '/test' # + # Define transforms for the training set train_transforms = transforms.Compose([transforms.RandomResizedCrop(224), transforms.RandomRotation(40), transforms.RandomHorizontalFlip(), transforms.RandomVerticalFlip(), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])]) # Define transforms for the validation set valid_transforms = transforms.Compose([transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])]) # Define your transforms for the testing set test_transforms = transforms.Compose([transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])]) # Load the datasets with ImageFolder train_dataset = datasets.ImageFolder(train_dir, transform=train_transforms) valid_dataset = datasets.ImageFolder(valid_dir, transform=valid_transforms) test_dataset = datasets.ImageFolder(test_dir, transform=test_transforms) # Using the image datasets and the transforms, define the dataloaders train_loader = torch.utils.data.DataLoader(train_dataset,batch_size=64, shuffle=True) valid_loader = torch.utils.data.DataLoader(valid_dataset,batch_size=32, shuffle=True) test_loader = torch.utils.data.DataLoader(test_dataset,batch_size=32, shuffle=True) # - # ### Label mapping # # You'll also need to load in a mapping from category label to category name. You can find this in the file `cat_to_name.json`. It's a JSON object which you can read in with the [`json` module](https://docs.python.org/2/library/json.html). This will give you a dictionary mapping the integer encoded categories to the actual names of the flowers. with open('cat_to_name.json', 'r') as f: cat_to_name = json.load(f) # # Building and training the classifier # # Now that the data is ready, it's time to build and train the classifier. As usual, you should use one of the pretrained models from `torchvision.models` to get the image features. Build and train a new feed-forward classifier using those features. # # We're going to leave this part up to you. Refer to [the rubric](https://review.udacity.com/#!/rubrics/1663/view) for guidance on successfully completing this section. Things you'll need to do: # # * Load a [pre-trained network](http://pytorch.org/docs/master/torchvision/models.html) (If you need a starting point, the VGG networks work great and are straightforward to use) # * Define a new, untrained feed-forward network as a classifier, using ReLU activations and dropout # * Train the classifier layers using backpropagation using the pre-trained network to get the features # * Track the loss and accuracy on the validation set to determine the best hyperparameters # # We've left a cell open for you below, but use as many as you need. Our advice is to break the problem up into smaller parts you can run separately. 
Check that each part is doing what you expect, then move on to the next. You'll likely find that as you work through each part, you'll need to go back and modify your previous code. This is totally normal! # # When training make sure you're updating only the weights of the feed-forward network. You should be able to get the validation accuracy above 70% if you build everything right. Make sure to try different hyperparameters (learning rate, units in the classifier, epochs, etc) to find the best model. Save those hyperparameters to use as default values in the next part of the project. # # One last important tip if you're using the workspace to run your code: To avoid having your workspace disconnect during the long-running tasks in this notebook, please read in the earlier page in this lesson called Intro to GPU Workspaces about Keeping Your Session Active. You'll want to include code from the workspace_utils.py module. # + # Build and train your network # - # define model parameters (choose model network, dropout rate, nodes of hidden layer 1, 2 and 3 as well as learning rate) # As recommended, we us the pretrained VGG networks network = "vgg11" dropout = 0.2 hidden_layer1 = 512 hidden_layer2 = 256 hidden_layer3 = 128 lr = 0.001 def nn_create(network, dropout = 0.2, hidden_layer1 = 512, hidden_layer2 = 256, hidden_layer3 = 128, lr = 0.001): """ This function creates a neural network by using the feauteres of a pretrained network, which are fed into a defined classifier. The network consists of three hidden layers and has 102 output classes. INPUT: network - string of pretrained model (vggXX) dropout - dropout rate hidden_layer1 - nodes of first hidden layer hidden_layer2 - nodes of second hidden layer hidden_layer3 - nodes of third hidden layer lr - learning rate OUTPUT: model - classifier optimizer - optimizer of model parameters criterion - defined loss """ model_creatable = True if network == 'vgg11': network = models.vgg11(pretrained = True) elif network == 'vgg13': network = models.vgg13(pretrained = True) elif network == 'vgg16': network = models.vgg16(pretrained = True) elif network == 'vgg19': network = models.vgg19(pretrained = True) else: print("Please enter an existing VGG network (vgg11, vgg13, vgg16, vgg19)") model_creatable = False if model_creatable: # get in_features of model and define output classes num_features = network.classifier[0].in_features num_outputs = 102 # turn off gradients, since they are not required for param in network.parameters(): param.requires_grad = False # define classifier classifier = nn.Sequential(OrderedDict([('dropout', nn.Dropout(dropout)), ('fc1', nn.Linear(num_features, hidden_layer1)), ('relu1', nn.ReLU()), ('fc2', nn.Linear(hidden_layer1, hidden_layer2)), ('relu2', nn.ReLU()), ('fc3', nn.Linear(hidden_layer2, hidden_layer3)), ('relu3', nn.ReLU()), ('fc4', nn.Linear(hidden_layer3, num_outputs)), ('output', nn.LogSoftmax(dim=1)) ])) # replace classifier network.classifier = classifier # move model to GPU network.cuda() # define optimizer and criterion optimizer = optim.Adam(network.classifier.parameters(), lr = lr) criterion = nn.NLLLoss() return network, optimizer, criterion try: # create deep learning model model, optimizer, criterion = nn_create(network, dropout, hidden_layer1, hidden_layer2, hidden_layer3, lr) # Show model print(model) except TypeError: print("Non-existing network. 
Try again!") # + ## Perform training # define training parameters epochs = 10 steps = 0 print_every = 5 running_loss = 0 model.to('cuda') train_losses, valid_losses = [], [] for epoch in range(epochs): for inputs, labels in train_loader: steps += 1 # move inputs and labels tensors to default devices inputs,labels = inputs.to('cuda'), labels.to('cuda') # clear gradient optimizer.zero_grad() # Forward pass outputs = model.forward(inputs) # calculate loss loss = criterion(outputs, labels) # backward pass loss.backward() # perform optimization step optimizer.step() # cumulate loss running_loss += loss.item() # validation pass if steps % print_every == 0: model.eval() valid_loss = 0 accuracy=0 # turn off gradients, since no back propagation on validation pass required with torch.no_grad(): for inputs_valid,labels_valid in valid_loader: # move everything to GPU inputs_valid, labels_valid = inputs_valid.to('cuda:0') , labels_valid.to('cuda:0') model.to('cuda:0') # Forward pass outputs_valid = model.forward(inputs_valid) # calculate validation loss valid_loss = criterion(outputs_valid,labels_valid) # calculate accuracy by checking if the predicted classes match the labels ps = torch.exp(outputs_valid) top_p, top_class = ps.topk(1,dim=1) equals = top_class == labels_valid.view(*top_class.shape) accuracy += torch.mean(equals.type(torch.FloatTensor)).item() train_losses.append(running_loss/len(train_loader)) valid_losses.append(valid_loss/len(valid_loader)) print(f"Epoch {epoch+1}/{epochs}.. " f"Training Loss: {running_loss/len(train_loader):.3f}.. " f"Validation loss: {valid_loss/len(valid_loader):.3f}.. " f"Validation accuracy: {accuracy/len(valid_loader):.3f}") running_loss = 0 model.train() # + # Plot Training and Validation losses over steps to validate the training behaviour and avoid overfitting # %matplotlib inline # %config InlineBackend.figure_format = 'retina' plt.plot(train_losses, label="Training loss") plt.plot(valid_losses, label="Validation loss") plt.legend() # - # ## Testing your network # # It's good practice to test your trained network on test data, images the network has never seen either in training or validation. This will give you a good estimate for the model's performance on completely new images. Run the test images through the network and measure the accuracy, the same way you did validation. You should be able to reach around 70% accuracy on the test set if the model has been trained well. 
# +
# Do validation on the test set
def calc_accuracy_test(test_loader):
    """
    Function calculates the accuracy of the trained model on the test dataset loaded into the test_loader
    INPUT:
        test_loader - test dataset loaded into the test_loader
    OUTPUT:
        accuracy - accumulated per-batch accuracy (the printed value is the average over all batches)
    """
    # initialize accuracy
    accuracy = 0

    # move model to GPU
    model.to('cuda:0')

    # turn off gradients, since only a forward pass is required
    with torch.no_grad():
        # loop images and labels of test dataset
        for images, labels in test_loader:
            # move everything to GPU
            images, labels = images.to('cuda:0') , labels.to('cuda:0')

            # Forward pass
            outputs = model(images)

            # calculate accuracy by checking if the predicted classes match the labels
            ps = torch.exp(outputs)
            top_p, top_class = ps.topk(1,dim=1)
            equals = top_class == labels.view(*top_class.shape)
            accuracy += torch.mean(equals.type(torch.FloatTensor)).item()

    print('Accuracy on Test dataset: {:.2f}%'.format(accuracy/len(test_loader)*100))

    return accuracy

test_acc = calc_accuracy_test(test_loader)
# -

# ## Save the checkpoint
#
# Now that your network is trained, save the model so you can load it later for making predictions. You probably want to save other things such as the mapping of classes to indices which you get from one of the image datasets: `image_datasets['train'].class_to_idx`. You can attach this to the model as an attribute which makes inference easier later on.
#
# ```model.class_to_idx = image_datasets['train'].class_to_idx```
#
# Remember that you'll want to completely rebuild the model later so you can use it for inference. Make sure to include any information you need in the checkpoint. If you want to load the model and keep training, you'll want to save the number of epochs as well as the optimizer state, `optimizer.state_dict`. You'll likely want to use this trained model in the next part of the project, so best to save it now.

# TODO: Save the checkpoint
def save_checkpoint(model, network, hidden_layer1, hidden_layer2, hidden_layer3, dropout, lr, filepath, train_dataset):
    """
    save the checkpoint of a deep learning model
    INPUT:
        model - deep learning model
        network - structure of deep learning model
        hidden_layer1 - nodes in first hidden layer
        hidden_layer2 - nodes in second hidden layer
        hidden_layer3 - nodes in third hidden layer
        dropout - dropout rate
        lr - learning rate
        filepath - destination of the checkpoint
        train_dataset - training dataset
    """
    model.class_to_idx = train_dataset.class_to_idx
    model.cpu()
    torch.save({'network' : network,
                'hidden_layer1' : hidden_layer1,
                'hidden_layer2' : hidden_layer2,
                'hidden_layer3' : hidden_layer3,
                'dropout' : dropout,
                'learning_rate' : lr,
                'state_dict' : model.state_dict(),
                'class_to_idx':model.class_to_idx},
               filepath)

save_dir = 'checkpoint.pth'
save_checkpoint(model, network, hidden_layer1, hidden_layer2, hidden_layer3, dropout, lr, save_dir, train_dataset)

# ## Loading the checkpoint
#
# At this point it's good to write a function that can load a checkpoint and rebuild the model. That way you can come back to this project and keep working on it without having to retrain the network.
# TODO: Write a function that loads a checkpoint and rebuilds the model def load_checkpoint(filepath): ''' Load previously saved checkpoint INPUT: filepath - path of checkpoint file OUTPUT: loaded_model - model created from loaded checkpoint data ''' # load checkpoint data (use CPU) checkpoint = torch.load(filepath, map_location=lambda storage, loc: storage) # read out model properties network = checkpoint['network'] hidden_layer1 = checkpoint['hidden_layer1'] hidden_layer2 = checkpoint['hidden_layer2'] hidden_layer3 = checkpoint['hidden_layer3'] dropout = checkpoint['dropout'] lr = checkpoint['learning_rate'] # create model from properties using nn_create and pass state_dict loaded_model,_,_ = nn_create(network, dropout, hidden_layer1, hidden_layer2, hidden_layer3, lr) loaded_model.class_to_idx = checkpoint['class_to_idx'] loaded_model.load_state_dict(checkpoint['state_dict']) return loaded_model # load model from checkpoint file model = load_checkpoint('checkpoint.pth') print(model) # # Inference for classification # # Now you'll write a function to use a trained network for inference. That is, you'll pass an image into the network and predict the class of the flower in the image. Write a function called `predict` that takes an image and a model, then returns the top $K$ most likely classes along with the probabilities. It should look like # # ```python # probs, classes = predict(image_path, model) # print(probs) # print(classes) # > [ 0.01558163 0.01541934 0.01452626 0.01443549 0.01407339] # > ['70', '3', '45', '62', '55'] # ``` # # First you'll need to handle processing the input image such that it can be used in your network. # # ## Image Preprocessing # # You'll want to use `PIL` to load the image ([documentation](https://pillow.readthedocs.io/en/latest/reference/Image.html)). It's best to write a function that preprocesses the image so it can be used as input for the model. This function should process the images in the same manner used for training. # # First, resize the images where the shortest side is 256 pixels, keeping the aspect ratio. This can be done with the [`thumbnail`](http://pillow.readthedocs.io/en/3.1.x/reference/Image.html#PIL.Image.Image.thumbnail) or [`resize`](http://pillow.readthedocs.io/en/3.1.x/reference/Image.html#PIL.Image.Image.thumbnail) methods. Then you'll need to crop out the center 224x224 portion of the image. # # Color channels of images are typically encoded as integers 0-255, but the model expected floats 0-1. You'll need to convert the values. It's easiest with a Numpy array, which you can get from a PIL image like so `np_image = np.array(pil_image)`. # # As before, the network expects the images to be normalized in a specific way. For the means, it's `[0.485, 0.456, 0.406]` and for the standard deviations `[0.229, 0.224, 0.225]`. You'll want to subtract the means from each color channel, then divide by the standard deviation. # # And finally, PyTorch expects the color channel to be the first dimension but it's the third dimension in the PIL image and Numpy array. You can reorder dimensions using [`ndarray.transpose`](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.ndarray.transpose.html). The color channel needs to be first and retain the order of the other two dimensions. 
def process_image(image): ''' Scales, crops, and normalizes a PIL image for a PyTorch model, returns an Numpy array INPUT: image - path to image file OUTPUT: img_np - image as Numpy array ''' # TODO: Process a PIL image for use in a PyTorch model # open image as PIL image img_pil = Image.open(image) # define transformations transform = transforms.Compose([transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])]) # perform transformation img_tensor = transform(img_pil) # convert PyTorch Tensor to Numpy array and return array return img_tensor.numpy() # + # load image and process image image_path = data_dir + '/test' + '/11/' + 'image_03098.jpg' img = process_image(image_path) # show dimensions of img to check whether the color channel is within the first dimension # (required result (3, 224, 224)) print(img.shape) # - # To check your work, the function below converts a PyTorch tensor and displays it in the notebook. If your `process_image` function works, running the output through this function should return the original image (except for the cropped out portions). def imshow(image, ax=None, title=None): if ax is None: fig, ax = plt.subplots() # PyTorch tensors assume the color channel is the first dimension # but matplotlib assumes is the third dimension image = image.transpose((1, 2, 0)) # Undo preprocessing mean = np.array([0.485, 0.456, 0.406]) std = np.array([0.229, 0.224, 0.225]) image = std * image + mean # Image needs to be clipped between 0 and 1 or it looks like noise when displayed image = np.clip(image, 0, 1) ax.imshow(image) return ax # Test process_image using imshow and the defined filepath imshow(img) # ## Class Prediction # # Once you can get images in the correct format, it's time to write a function for making predictions with your model. A common practice is to predict the top 5 or so (usually called top-$K$) most probable classes. You'll want to calculate the class probabilities then find the $K$ largest values. # # To get the top $K$ largest values in a tensor use [`x.topk(k)`](http://pytorch.org/docs/master/torch.html#torch.topk). This method returns both the highest `k` probabilities and the indices of those probabilities corresponding to the classes. You need to convert from these indices to the actual class labels using `class_to_idx` which hopefully you added to the model or from an `ImageFolder` you used to load the data ([see here](#Save-the-checkpoint)). Make sure to invert the dictionary so you get a mapping from index to class as well. # # Again, this method should take a path to an image and a model checkpoint, then return the probabilities and classes. # # ```python # probs, classes = predict(image_path, model) # print(probs) # print(classes) # > [ 0.01558163 0.01541934 0.01452626 0.01443549 0.01407339] # > ['70', '3', '45', '62', '55'] # ``` def predict(image_path, model, topk=5): ''' Predict the class (or classes) of an image using a trained deep learning model. 
INPUT: image_path - path to image file model - deep learning model defined by checkpoint file topk - top k classes of the prediction OUTPUT: probs_topk_list - list of probabilities of top k classes classes_topk_list - list of classes with the k highest probabilites ''' # TODO: Implement the code to predict the class from an image file # process image, convert image to torch.FloatTensor and unsqueeze tensor to comply with model input img = process_image(image_path) img_tensor = torch.from_numpy(img).type(torch.FloatTensor) img_tensor = img_tensor.unsqueeze_(0) # load deep learning model and move to CPU model = load_checkpoint(model).cpu() # set model to evaluation mode model.eval() # turn off gradients - not required for predicting with torch.no_grad(): # forward pass for predictions outputs = model.forward(img_tensor) # calculate output probabilities and get the top k classes with indices - save as lists probs = torch.exp(outputs) probs_topk = probs.topk(topk)[0] idx_topk = probs.topk(topk)[1] probs_topk_list = np.array(probs_topk)[0] idx_topk_list = np.array(idx_topk[0]) # map indices to classes idx_to_class = {x: y for y, x in model.class_to_idx.items()} # create class list classes_topk_list = [] for i in idx_topk_list: classes_topk_list += [idx_to_class[i]] return probs_topk_list, classes_topk_list # + # perform prediction model_path = 'checkpoint.pth' image_path = data_dir + '/test' + '/11/' + 'image_03098.jpg' probs, classes = predict(image_path, model_path) print(probs) print(classes) # - # ## Sanity Checking # # Now that you can use a trained model for predictions, check to make sure it makes sense. Even if the testing accuracy is high, it's always good to check that there aren't obvious bugs. Use `matplotlib` to plot the probabilities for the top 5 classes as a bar graph, along with the input image. It should look like this: # # <img src='assets/inference_example.png' width=300px> # # You can convert from the class integer encoding to actual flower names with the `cat_to_name.json` file (should have been loaded earlier in the notebook). To show a PyTorch tensor as an image, use the `imshow` function defined above. # TODO: Display an image along with the top 5 classes def sanity_check(image_path, model): # get probabilities and classes of top 5 predictions probs, classes = predict(image_path, model) # get class names class_names = [] for c in classes: class_names.append(cat_to_name[c]) # process image (resize, center, normalize) img = process_image(image_path) # define figure size and structure plt.figure(figsize=(4,11)) plt.subplot(211) # show image ax = imshow(img, ax = plt) ax.axis('off') ax.title(class_names[0]) # plot predicted probabilities using barh() and invert the yaxis plt.subplot(212) plt.grid(linestyle = '--', linewidth = 0.5) plt.barh(np.arange(len(class_names)), probs, color = 'blue') plt.yticks(np.arange(len(class_names)), class_names) plt.gca().invert_yaxis() # plot plt.show() # + # perform sanity checking model_path = 'checkpoint.pth' image_path = data_dir + '/test' + '/17/' + 'image_03872.jpg' sanity_check(image_path, model_path)
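
# As a final usage example (a small sketch, not required by the project): the class IDs returned by `predict` can be turned into a plain-text summary by mapping them through the `cat_to_name` dictionary loaded earlier.

# +
# print the top-5 predictions for the same test image as human-readable flower names
probs, classes = predict(image_path, model_path)
for p, c in zip(probs, classes):
    print('{:<25s} {:.3f}'.format(cat_to_name[c], p))
# -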
Image Classifier Project.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # **Chapter 11 – Training Deep Neural Networks** # _This notebook contains all the sample code and solutions to the exercises in chapter 11._ # <table align="left"> # <td> # <a href="https://colab.research.google.com/github/ageron/handson-ml2/blob/master/11_training_deep_neural_networks.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # </td> # <td> # <a target="_blank" href="https://kaggle.com/kernels/welcome?src=https://github.com/ageron/handson-ml2/blob/master/11_training_deep_neural_networks.ipynb"><img src="https://kaggle.com/static/images/open-in-kaggle.svg" /></a> # </td> # </table> # # Setup # First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20 and TensorFlow ≥2.0. # + # Python ≥3.5 is required import sys assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required import sklearn assert sklearn.__version__ >= "0.20" try: # # %tensorflow_version only exists in Colab. # %tensorflow_version 2.x except Exception: pass # TensorFlow ≥2.0 is required import tensorflow as tf from tensorflow import keras assert tf.__version__ >= "2.0" # %load_ext tensorboard # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures # %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "deep" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." 
+ fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) # - # # Vanishing/Exploding Gradients Problem def logit(z): return 1 / (1 + np.exp(-z)) # + z = np.linspace(-5, 5, 200) plt.plot([-5, 5], [0, 0], 'k-') plt.plot([-5, 5], [1, 1], 'k--') plt.plot([0, 0], [-0.2, 1.2], 'k-') plt.plot([-5, 5], [-3/4, 7/4], 'g--') plt.plot(z, logit(z), "b-", linewidth=2) props = dict(facecolor='black', shrink=0.1) plt.annotate('Saturating', xytext=(3.5, 0.7), xy=(5, 1), arrowprops=props, fontsize=14, ha="center") plt.annotate('Saturating', xytext=(-3.5, 0.3), xy=(-5, 0), arrowprops=props, fontsize=14, ha="center") plt.annotate('Linear', xytext=(2, 0.2), xy=(0, 0.5), arrowprops=props, fontsize=14, ha="center") plt.grid(True) plt.title("Sigmoid activation function", fontsize=14) plt.axis([-5, 5, -0.2, 1.2]) save_fig("sigmoid_saturation_plot") plt.show() # - # ## Xavier and He Initialization [name for name in dir(keras.initializers) if not name.startswith("_")] keras.layers.Dense(10, activation="relu", kernel_initializer="he_normal") init = keras.initializers.VarianceScaling(scale=2., mode='fan_avg', distribution='uniform') keras.layers.Dense(10, activation="relu", kernel_initializer=init) # ## Nonsaturating Activation Functions # ### Leaky ReLU def leaky_relu(z, alpha=0.01): return np.maximum(alpha*z, z) # + plt.plot(z, leaky_relu(z, 0.05), "b-", linewidth=2) plt.plot([-5, 5], [0, 0], 'k-') plt.plot([0, 0], [-0.5, 4.2], 'k-') plt.grid(True) props = dict(facecolor='black', shrink=0.1) plt.annotate('Leak', xytext=(-3.5, 0.5), xy=(-5, -0.2), arrowprops=props, fontsize=14, ha="center") plt.title("Leaky ReLU activation function", fontsize=14) plt.axis([-5, 5, -0.5, 4.2]) save_fig("leaky_relu_plot") plt.show() # - [m for m in dir(keras.activations) if not m.startswith("_")] [m for m in dir(keras.layers) if "relu" in m.lower()] # Let's train a neural network on Fashion MNIST using the Leaky ReLU: (X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data() X_train_full = X_train_full / 255.0 X_test = X_test / 255.0 X_valid, X_train = X_train_full[:5000], X_train_full[5000:] y_valid, y_train = y_train_full[:5000], y_train_full[5000:] # + tf.random.set_seed(42) np.random.seed(42) model = keras.models.Sequential([ keras.layers.Flatten(input_shape=[28, 28]), keras.layers.Dense(300, kernel_initializer="he_normal"), keras.layers.LeakyReLU(), keras.layers.Dense(100, kernel_initializer="he_normal"), keras.layers.LeakyReLU(), keras.layers.Dense(10, activation="softmax") ]) # - model.compile(loss="sparse_categorical_crossentropy", optimizer=keras.optimizers.SGD(learning_rate=1e-3), metrics=["accuracy"]) history = model.fit(X_train, y_train, epochs=10, validation_data=(X_valid, y_valid)) # Now let's try PReLU: # + tf.random.set_seed(42) np.random.seed(42) model = keras.models.Sequential([ keras.layers.Flatten(input_shape=[28, 28]), keras.layers.Dense(300, kernel_initializer="he_normal"), keras.layers.PReLU(), keras.layers.Dense(100, kernel_initializer="he_normal"), keras.layers.PReLU(), keras.layers.Dense(10, activation="softmax") ]) # - model.compile(loss="sparse_categorical_crossentropy", optimizer=keras.optimizers.SGD(learning_rate=1e-3), metrics=["accuracy"]) history = model.fit(X_train, y_train, epochs=10, validation_data=(X_valid, y_valid)) # ### ELU def elu(z, alpha=1): return np.where(z < 0, alpha * (np.exp(z) - 1), z) # + plt.plot(z, elu(z), "b-", linewidth=2) plt.plot([-5, 5], [0, 0], 'k-') plt.plot([-5, 
5], [-1, -1], 'k--') plt.plot([0, 0], [-2.2, 3.2], 'k-') plt.grid(True) plt.title(r"ELU activation function ($\alpha=1$)", fontsize=14) plt.axis([-5, 5, -2.2, 3.2]) save_fig("elu_plot") plt.show() # - # Implementing ELU in TensorFlow is trivial, just specify the activation function when building each layer: keras.layers.Dense(10, activation="elu") # ### SELU # This activation function was proposed in this [great paper](https://arxiv.org/pdf/1706.02515.pdf) by <NAME>, <NAME> and <NAME>, published in June 2017. During training, a neural network composed exclusively of a stack of dense layers using the SELU activation function and LeCun initialization will self-normalize: the output of each layer will tend to preserve the same mean and variance during training, which solves the vanishing/exploding gradients problem. As a result, this activation function outperforms the other activation functions very significantly for such neural nets, so you should really try it out. Unfortunately, the self-normalizing property of the SELU activation function is easily broken: you cannot use ℓ<sub>1</sub> or ℓ<sub>2</sub> regularization, regular dropout, max-norm, skip connections or other non-sequential topologies (so recurrent neural networks won't self-normalize). However, in practice it works quite well with sequential CNNs. If you break self-normalization, SELU will not necessarily outperform other activation functions. # + from scipy.special import erfc # alpha and scale to self normalize with mean 0 and standard deviation 1 # (see equation 14 in the paper): alpha_0_1 = -np.sqrt(2 / np.pi) / (erfc(1/np.sqrt(2)) * np.exp(1/2) - 1) scale_0_1 = (1 - erfc(1 / np.sqrt(2)) * np.sqrt(np.e)) * np.sqrt(2 * np.pi) * (2 * erfc(np.sqrt(2))*np.e**2 + np.pi*erfc(1/np.sqrt(2))**2*np.e - 2*(2+np.pi)*erfc(1/np.sqrt(2))*np.sqrt(np.e)+np.pi+2)**(-1/2) # - def selu(z, scale=scale_0_1, alpha=alpha_0_1): return scale * elu(z, alpha) # + plt.plot(z, selu(z), "b-", linewidth=2) plt.plot([-5, 5], [0, 0], 'k-') plt.plot([-5, 5], [-1.758, -1.758], 'k--') plt.plot([0, 0], [-2.2, 3.2], 'k-') plt.grid(True) plt.title("SELU activation function", fontsize=14) plt.axis([-5, 5, -2.2, 3.2]) save_fig("selu_plot") plt.show() # - # By default, the SELU hyperparameters (`scale` and `alpha`) are tuned in such a way that the mean output of each neuron remains close to 0, and the standard deviation remains close to 1 (assuming the inputs are standardized with mean 0 and standard deviation 1 too). 
Using this activation function, even a 1,000 layer deep neural network preserves roughly mean 0 and standard deviation 1 across all layers, avoiding the exploding/vanishing gradients problem: np.random.seed(42) Z = np.random.normal(size=(500, 100)) # standardized inputs for layer in range(1000): W = np.random.normal(size=(100, 100), scale=np.sqrt(1 / 100)) # LeCun initialization Z = selu(np.dot(Z, W)) means = np.mean(Z, axis=0).mean() stds = np.std(Z, axis=0).mean() if layer % 100 == 0: print("Layer {}: mean {:.2f}, std deviation {:.2f}".format(layer, means, stds)) # Using SELU is easy: keras.layers.Dense(10, activation="selu", kernel_initializer="lecun_normal") # Let's create a neural net for Fashion MNIST with 100 hidden layers, using the SELU activation function: np.random.seed(42) tf.random.set_seed(42) model = keras.models.Sequential() model.add(keras.layers.Flatten(input_shape=[28, 28])) model.add(keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal")) for layer in range(99): model.add(keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal")) model.add(keras.layers.Dense(10, activation="softmax")) model.compile(loss="sparse_categorical_crossentropy", optimizer=keras.optimizers.SGD(learning_rate=1e-3), metrics=["accuracy"]) # Now let's train it. Do not forget to scale the inputs to mean 0 and standard deviation 1: pixel_means = X_train.mean(axis=0, keepdims=True) pixel_stds = X_train.std(axis=0, keepdims=True) X_train_scaled = (X_train - pixel_means) / pixel_stds X_valid_scaled = (X_valid - pixel_means) / pixel_stds X_test_scaled = (X_test - pixel_means) / pixel_stds history = model.fit(X_train_scaled, y_train, epochs=5, validation_data=(X_valid_scaled, y_valid)) # Now look at what happens if we try to use the ReLU activation function instead: np.random.seed(42) tf.random.set_seed(42) model = keras.models.Sequential() model.add(keras.layers.Flatten(input_shape=[28, 28])) model.add(keras.layers.Dense(300, activation="relu", kernel_initializer="he_normal")) for layer in range(99): model.add(keras.layers.Dense(100, activation="relu", kernel_initializer="he_normal")) model.add(keras.layers.Dense(10, activation="softmax")) model.compile(loss="sparse_categorical_crossentropy", optimizer=keras.optimizers.SGD(learning_rate=1e-3), metrics=["accuracy"]) history = model.fit(X_train_scaled, y_train, epochs=5, validation_data=(X_valid_scaled, y_valid)) # Not great at all, we suffered from the vanishing/exploding gradients problem. # # Batch Normalization model = keras.models.Sequential([ keras.layers.Flatten(input_shape=[28, 28]), keras.layers.BatchNormalization(), keras.layers.Dense(300, activation="relu"), keras.layers.BatchNormalization(), keras.layers.Dense(100, activation="relu"), keras.layers.BatchNormalization(), keras.layers.Dense(10, activation="softmax") ]) model.summary() bn1 = model.layers[1] [(var.name, var.trainable) for var in bn1.variables] # + #bn1.updates #deprecated # - model.compile(loss="sparse_categorical_crossentropy", optimizer=keras.optimizers.SGD(learning_rate=1e-3), metrics=["accuracy"]) history = model.fit(X_train, y_train, epochs=10, validation_data=(X_valid, y_valid)) # Sometimes applying BN before the activation function works better (there's a debate on this topic). 
# Moreover, the layer before a `BatchNormalization` layer does not need to have bias terms, since the `BatchNormalization` layer has its own offset parameters; keeping both would be a waste of parameters, so you can set `use_bias=False` when creating those layers:

model = keras.models.Sequential([
    keras.layers.Flatten(input_shape=[28, 28]),
    keras.layers.BatchNormalization(),
    keras.layers.Dense(300, use_bias=False),
    keras.layers.BatchNormalization(),
    keras.layers.Activation("relu"),
    keras.layers.Dense(100, use_bias=False),
    keras.layers.BatchNormalization(),
    keras.layers.Activation("relu"),
    keras.layers.Dense(10, activation="softmax")
])

model.compile(loss="sparse_categorical_crossentropy",
              optimizer=keras.optimizers.SGD(learning_rate=1e-3),
              metrics=["accuracy"])

history = model.fit(X_train, y_train, epochs=10,
                    validation_data=(X_valid, y_valid))

# ## Gradient Clipping

# All Keras optimizers accept `clipnorm` or `clipvalue` arguments:

optimizer = keras.optimizers.SGD(clipvalue=1.0)

optimizer = keras.optimizers.SGD(clipnorm=1.0)

# ## Reusing Pretrained Layers

# ### Reusing a Keras model

# Let's split the fashion MNIST training set in two:
# * `X_train_A`: all images of all items except for sandals and shirts (classes 5 and 6).
# * `X_train_B`: a much smaller training set of just the first 200 images of sandals or shirts.
#
# The validation set and the test set are also split this way, but without restricting the number of images.
#
# We will train a model on set A (classification task with 8 classes), and try to reuse it to tackle set B (binary classification). We hope to transfer a little bit of knowledge from task A to task B, since classes in set A (sneakers, ankle boots, coats, t-shirts, etc.) are somewhat similar to classes in set B (sandals and shirts). However, since we are using `Dense` layers, only patterns that occur at the same location can be reused (in contrast, convolutional layers will transfer much better, since learned patterns can be detected anywhere on the image, as we will see in the CNN chapter).

# +
def split_dataset(X, y):
    y_5_or_6 = (y == 5) | (y == 6)  # sandals or shirts
    y_A = y[~y_5_or_6]
    y_A[y_A > 6] -= 2  # class indices 7, 8, 9 should be moved to 5, 6, 7
    y_B = (y[y_5_or_6] == 6).astype(np.float32)  # binary classification task: is it a shirt (class 6)?
return ((X[~y_5_or_6], y_A), (X[y_5_or_6], y_B)) (X_train_A, y_train_A), (X_train_B, y_train_B) = split_dataset(X_train, y_train) (X_valid_A, y_valid_A), (X_valid_B, y_valid_B) = split_dataset(X_valid, y_valid) (X_test_A, y_test_A), (X_test_B, y_test_B) = split_dataset(X_test, y_test) X_train_B = X_train_B[:200] y_train_B = y_train_B[:200] # - X_train_A.shape X_train_B.shape y_train_A[:30] y_train_B[:30] tf.random.set_seed(42) np.random.seed(42) model_A = keras.models.Sequential() model_A.add(keras.layers.Flatten(input_shape=[28, 28])) for n_hidden in (300, 100, 50, 50, 50): model_A.add(keras.layers.Dense(n_hidden, activation="selu")) model_A.add(keras.layers.Dense(8, activation="softmax")) model_A.compile(loss="sparse_categorical_crossentropy", optimizer=keras.optimizers.SGD(learning_rate=1e-3), metrics=["accuracy"]) history = model_A.fit(X_train_A, y_train_A, epochs=20, validation_data=(X_valid_A, y_valid_A)) model_A.save("my_model_A.h5") model_B = keras.models.Sequential() model_B.add(keras.layers.Flatten(input_shape=[28, 28])) for n_hidden in (300, 100, 50, 50, 50): model_B.add(keras.layers.Dense(n_hidden, activation="selu")) model_B.add(keras.layers.Dense(1, activation="sigmoid")) model_B.compile(loss="binary_crossentropy", optimizer=keras.optimizers.SGD(learning_rate=1e-3), metrics=["accuracy"]) history = model_B.fit(X_train_B, y_train_B, epochs=20, validation_data=(X_valid_B, y_valid_B)) model_B.summary() model_A = keras.models.load_model("my_model_A.h5") model_B_on_A = keras.models.Sequential(model_A.layers[:-1]) model_B_on_A.add(keras.layers.Dense(1, activation="sigmoid")) # Note that `model_B_on_A` and `model_A` actually share layers now, so when we train one, it will update both models. If we want to avoid that, we need to build `model_B_on_A` on top of a *clone* of `model_A`: model_A_clone = keras.models.clone_model(model_A) model_A_clone.set_weights(model_A.get_weights()) model_B_on_A = keras.models.Sequential(model_A_clone.layers[:-1]) model_B_on_A.add(keras.layers.Dense(1, activation="sigmoid")) # + for layer in model_B_on_A.layers[:-1]: layer.trainable = False model_B_on_A.compile(loss="binary_crossentropy", optimizer=keras.optimizers.SGD(learning_rate=1e-3), metrics=["accuracy"]) # + history = model_B_on_A.fit(X_train_B, y_train_B, epochs=4, validation_data=(X_valid_B, y_valid_B)) for layer in model_B_on_A.layers[:-1]: layer.trainable = True model_B_on_A.compile(loss="binary_crossentropy", optimizer=keras.optimizers.SGD(learning_rate=1e-3), metrics=["accuracy"]) history = model_B_on_A.fit(X_train_B, y_train_B, epochs=16, validation_data=(X_valid_B, y_valid_B)) # - # So, what's the final verdict? model_B.evaluate(X_test_B, y_test_B) model_B_on_A.evaluate(X_test_B, y_test_B) # Great! We got quite a bit of transfer: the error rate dropped by a factor of 4.9! 
(100 - 97.05) / (100 - 99.40) # # Faster Optimizers # ## Momentum optimization optimizer = keras.optimizers.SGD(learning_rate=0.001, momentum=0.9) # ## Nesterov Accelerated Gradient optimizer = keras.optimizers.SGD(learning_rate=0.001, momentum=0.9, nesterov=True) # ## AdaGrad optimizer = keras.optimizers.Adagrad(learning_rate=0.001) # ## RMSProp optimizer = keras.optimizers.RMSprop(learning_rate=0.001, rho=0.9) # ## Adam Optimization optimizer = keras.optimizers.Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.999) # ## Adamax Optimization optimizer = keras.optimizers.Adamax(learning_rate=0.001, beta_1=0.9, beta_2=0.999) # ## Nadam Optimization optimizer = keras.optimizers.Nadam(learning_rate=0.001, beta_1=0.9, beta_2=0.999) # ## Learning Rate Scheduling # ### Power Scheduling # ```lr = lr0 / (1 + steps / s)**c``` # * Keras uses `c=1` and `s = 1 / decay` optimizer = keras.optimizers.SGD(learning_rate=0.01, decay=1e-4) model = keras.models.Sequential([ keras.layers.Flatten(input_shape=[28, 28]), keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"), keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"), keras.layers.Dense(10, activation="softmax") ]) model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"]) n_epochs = 25 history = model.fit(X_train_scaled, y_train, epochs=n_epochs, validation_data=(X_valid_scaled, y_valid)) # + import math learning_rate = 0.01 decay = 1e-4 batch_size = 32 n_steps_per_epoch = math.ceil(len(X_train) / batch_size) epochs = np.arange(n_epochs) lrs = learning_rate / (1 + decay * epochs * n_steps_per_epoch) plt.plot(epochs, lrs, "o-") plt.axis([0, n_epochs - 1, 0, 0.01]) plt.xlabel("Epoch") plt.ylabel("Learning Rate") plt.title("Power Scheduling", fontsize=14) plt.grid(True) plt.show() # - # ### Exponential Scheduling # ```lr = lr0 * 0.1**(epoch / s)``` def exponential_decay_fn(epoch): return 0.01 * 0.1**(epoch / 20) # + def exponential_decay(lr0, s): def exponential_decay_fn(epoch): return lr0 * 0.1**(epoch / s) return exponential_decay_fn exponential_decay_fn = exponential_decay(lr0=0.01, s=20) # - model = keras.models.Sequential([ keras.layers.Flatten(input_shape=[28, 28]), keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"), keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"), keras.layers.Dense(10, activation="softmax") ]) model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"]) n_epochs = 25 lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn) history = model.fit(X_train_scaled, y_train, epochs=n_epochs, validation_data=(X_valid_scaled, y_valid), callbacks=[lr_scheduler]) plt.plot(history.epoch, history.history["lr"], "o-") plt.axis([0, n_epochs - 1, 0, 0.011]) plt.xlabel("Epoch") plt.ylabel("Learning Rate") plt.title("Exponential Scheduling", fontsize=14) plt.grid(True) plt.show() # The schedule function can take the current learning rate as a second argument: def exponential_decay_fn(epoch, lr): return lr * 0.1**(1 / 20) # If you want to update the learning rate at each iteration rather than at each epoch, you must write your own callback class: # + K = keras.backend class ExponentialDecay(keras.callbacks.Callback): def __init__(self, s=40000): super().__init__() self.s = s def on_batch_begin(self, batch, logs=None): # Note: the `batch` argument is reset at each epoch lr = K.get_value(self.model.optimizer.learning_rate) 
K.set_value(self.model.optimizer.learning_rate, lr * 0.1**(1 / self.s)) def on_epoch_end(self, epoch, logs=None): logs = logs or {} logs['lr'] = K.get_value(self.model.optimizer.learning_rate) model = keras.models.Sequential([ keras.layers.Flatten(input_shape=[28, 28]), keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"), keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"), keras.layers.Dense(10, activation="softmax") ]) lr0 = 0.01 optimizer = keras.optimizers.Nadam(learning_rate=lr0) model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"]) n_epochs = 25 s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32) exp_decay = ExponentialDecay(s) history = model.fit(X_train_scaled, y_train, epochs=n_epochs, validation_data=(X_valid_scaled, y_valid), callbacks=[exp_decay]) # - n_steps = n_epochs * len(X_train) // 32 steps = np.arange(n_steps) lrs = lr0 * 0.1**(steps / s) plt.plot(steps, lrs, "-", linewidth=2) plt.axis([0, n_steps - 1, 0, lr0 * 1.1]) plt.xlabel("Batch") plt.ylabel("Learning Rate") plt.title("Exponential Scheduling (per batch)", fontsize=14) plt.grid(True) plt.show() # ### Piecewise Constant Scheduling def piecewise_constant_fn(epoch): if epoch < 5: return 0.01 elif epoch < 15: return 0.005 else: return 0.001 # + def piecewise_constant(boundaries, values): boundaries = np.array([0] + boundaries) values = np.array(values) def piecewise_constant_fn(epoch): return values[np.argmax(boundaries > epoch) - 1] return piecewise_constant_fn piecewise_constant_fn = piecewise_constant([5, 15], [0.01, 0.005, 0.001]) # + lr_scheduler = keras.callbacks.LearningRateScheduler(piecewise_constant_fn) model = keras.models.Sequential([ keras.layers.Flatten(input_shape=[28, 28]), keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"), keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"), keras.layers.Dense(10, activation="softmax") ]) model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"]) n_epochs = 25 history = model.fit(X_train_scaled, y_train, epochs=n_epochs, validation_data=(X_valid_scaled, y_valid), callbacks=[lr_scheduler]) # - plt.plot(history.epoch, [piecewise_constant_fn(epoch) for epoch in history.epoch], "o-") plt.axis([0, n_epochs - 1, 0, 0.011]) plt.xlabel("Epoch") plt.ylabel("Learning Rate") plt.title("Piecewise Constant Scheduling", fontsize=14) plt.grid(True) plt.show() # ### Performance Scheduling tf.random.set_seed(42) np.random.seed(42) # + lr_scheduler = keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=5) model = keras.models.Sequential([ keras.layers.Flatten(input_shape=[28, 28]), keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"), keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"), keras.layers.Dense(10, activation="softmax") ]) optimizer = keras.optimizers.SGD(learning_rate=0.02, momentum=0.9) model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"]) n_epochs = 25 history = model.fit(X_train_scaled, y_train, epochs=n_epochs, validation_data=(X_valid_scaled, y_valid), callbacks=[lr_scheduler]) # + plt.plot(history.epoch, history.history["lr"], "bo-") plt.xlabel("Epoch") plt.ylabel("Learning Rate", color='b') plt.tick_params('y', colors='b') plt.gca().set_xlim(0, n_epochs - 1) plt.grid(True) ax2 = plt.gca().twinx() ax2.plot(history.epoch, history.history["val_loss"], "r^-") 
ax2.set_ylabel('Validation Loss', color='r') ax2.tick_params('y', colors='r') plt.title("Reduce LR on Plateau", fontsize=14) plt.show() # - # ### tf.keras schedulers model = keras.models.Sequential([ keras.layers.Flatten(input_shape=[28, 28]), keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"), keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"), keras.layers.Dense(10, activation="softmax") ]) s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32) learning_rate = keras.optimizers.schedules.ExponentialDecay(0.01, s, 0.1) optimizer = keras.optimizers.SGD(learning_rate) model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"]) n_epochs = 25 history = model.fit(X_train_scaled, y_train, epochs=n_epochs, validation_data=(X_valid_scaled, y_valid)) # For piecewise constant scheduling, try this: learning_rate = keras.optimizers.schedules.PiecewiseConstantDecay( boundaries=[5. * n_steps_per_epoch, 15. * n_steps_per_epoch], values=[0.01, 0.005, 0.001]) # ### 1Cycle scheduling # + K = keras.backend class ExponentialLearningRate(keras.callbacks.Callback): def __init__(self, factor): self.factor = factor self.rates = [] self.losses = [] def on_batch_end(self, batch, logs): self.rates.append(K.get_value(self.model.optimizer.learning_rate)) self.losses.append(logs["loss"]) K.set_value(self.model.optimizer.learning_rate, self.model.optimizer.learning_rate * self.factor) def find_learning_rate(model, X, y, epochs=1, batch_size=32, min_rate=10**-5, max_rate=10): init_weights = model.get_weights() iterations = math.ceil(len(X) / batch_size) * epochs factor = np.exp(np.log(max_rate / min_rate) / iterations) init_lr = K.get_value(model.optimizer.learning_rate) K.set_value(model.optimizer.learning_rate, min_rate) exp_lr = ExponentialLearningRate(factor) history = model.fit(X, y, epochs=epochs, batch_size=batch_size, callbacks=[exp_lr]) K.set_value(model.optimizer.learning_rate, init_lr) model.set_weights(init_weights) return exp_lr.rates, exp_lr.losses def plot_lr_vs_loss(rates, losses): plt.plot(rates, losses) plt.gca().set_xscale('log') plt.hlines(min(losses), min(rates), max(rates)) plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 2]) plt.xlabel("Learning rate") plt.ylabel("Loss") # - # **Warning**: In the `on_batch_end()` method, `logs["loss"]` used to contain the batch loss, but in TensorFlow 2.2.0 it was replaced with the mean loss (since the start of the epoch). This explains why the graph below is much smoother than in the book (if you are using TF 2.2 or above). It also means that there is a lag between the moment the batch loss starts exploding and the moment the explosion becomes clear in the graph. So you should choose a slightly smaller learning rate than you would have chosen with the "noisy" graph. 
Alternatively, you can tweak the `ExponentialLearningRate` callback above so it computes the batch loss (based on the current mean loss and the previous mean loss): # # ```python # class ExponentialLearningRate(keras.callbacks.Callback): # def __init__(self, factor): # self.factor = factor # self.rates = [] # self.losses = [] # def on_epoch_begin(self, epoch, logs=None): # self.prev_loss = 0 # def on_batch_end(self, batch, logs=None): # batch_loss = logs["loss"] * (batch + 1) - self.prev_loss * batch # self.prev_loss = logs["loss"] # self.rates.append(K.get_value(self.model.optimizer.learning_rate)) # self.losses.append(batch_loss) # K.set_value(self.model.optimizer.learning_rate, self.model.optimizer.learning_rate * self.factor) # ``` # + tf.random.set_seed(42) np.random.seed(42) model = keras.models.Sequential([ keras.layers.Flatten(input_shape=[28, 28]), keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"), keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"), keras.layers.Dense(10, activation="softmax") ]) model.compile(loss="sparse_categorical_crossentropy", optimizer=keras.optimizers.SGD(learning_rate=1e-3), metrics=["accuracy"]) # - batch_size = 128 rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size) plot_lr_vs_loss(rates, losses) class OneCycleScheduler(keras.callbacks.Callback): def __init__(self, iterations, max_rate, start_rate=None, last_iterations=None, last_rate=None): self.iterations = iterations self.max_rate = max_rate self.start_rate = start_rate or max_rate / 10 self.last_iterations = last_iterations or iterations // 10 + 1 self.half_iteration = (iterations - self.last_iterations) // 2 self.last_rate = last_rate or self.start_rate / 1000 self.iteration = 0 def _interpolate(self, iter1, iter2, rate1, rate2): return ((rate2 - rate1) * (self.iteration - iter1) / (iter2 - iter1) + rate1) def on_batch_begin(self, batch, logs): if self.iteration < self.half_iteration: rate = self._interpolate(0, self.half_iteration, self.start_rate, self.max_rate) elif self.iteration < 2 * self.half_iteration: rate = self._interpolate(self.half_iteration, 2 * self.half_iteration, self.max_rate, self.start_rate) else: rate = self._interpolate(2 * self.half_iteration, self.iterations, self.start_rate, self.last_rate) self.iteration += 1 K.set_value(self.model.optimizer.learning_rate, rate) n_epochs = 25 onecycle = OneCycleScheduler(math.ceil(len(X_train) / batch_size) * n_epochs, max_rate=0.05) history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size, validation_data=(X_valid_scaled, y_valid), callbacks=[onecycle]) # # Avoiding Overfitting Through Regularization # ## $\ell_1$ and $\ell_2$ regularization layer = keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal", kernel_regularizer=keras.regularizers.l2(0.01)) # or l1(0.1) for ℓ1 regularization with a factor of 0.1 # or l1_l2(0.1, 0.01) for both ℓ1 and ℓ2 regularization, with factors 0.1 and 0.01 respectively model = keras.models.Sequential([ keras.layers.Flatten(input_shape=[28, 28]), keras.layers.Dense(300, activation="elu", kernel_initializer="he_normal", kernel_regularizer=keras.regularizers.l2(0.01)), keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal", kernel_regularizer=keras.regularizers.l2(0.01)), keras.layers.Dense(10, activation="softmax", kernel_regularizer=keras.regularizers.l2(0.01)) ]) model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", 
metrics=["accuracy"]) n_epochs = 2 history = model.fit(X_train_scaled, y_train, epochs=n_epochs, validation_data=(X_valid_scaled, y_valid)) # + from functools import partial RegularizedDense = partial(keras.layers.Dense, activation="elu", kernel_initializer="he_normal", kernel_regularizer=keras.regularizers.l2(0.01)) model = keras.models.Sequential([ keras.layers.Flatten(input_shape=[28, 28]), RegularizedDense(300), RegularizedDense(100), RegularizedDense(10, activation="softmax") ]) model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"]) n_epochs = 2 history = model.fit(X_train_scaled, y_train, epochs=n_epochs, validation_data=(X_valid_scaled, y_valid)) # - # ## Dropout model = keras.models.Sequential([ keras.layers.Flatten(input_shape=[28, 28]), keras.layers.Dropout(rate=0.2), keras.layers.Dense(300, activation="elu", kernel_initializer="he_normal"), keras.layers.Dropout(rate=0.2), keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal"), keras.layers.Dropout(rate=0.2), keras.layers.Dense(10, activation="softmax") ]) model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"]) n_epochs = 2 history = model.fit(X_train_scaled, y_train, epochs=n_epochs, validation_data=(X_valid_scaled, y_valid)) # ## Alpha Dropout tf.random.set_seed(42) np.random.seed(42) model = keras.models.Sequential([ keras.layers.Flatten(input_shape=[28, 28]), keras.layers.AlphaDropout(rate=0.2), keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"), keras.layers.AlphaDropout(rate=0.2), keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"), keras.layers.AlphaDropout(rate=0.2), keras.layers.Dense(10, activation="softmax") ]) optimizer = keras.optimizers.SGD(learning_rate=0.01, momentum=0.9, nesterov=True) model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"]) n_epochs = 20 history = model.fit(X_train_scaled, y_train, epochs=n_epochs, validation_data=(X_valid_scaled, y_valid)) model.evaluate(X_test_scaled, y_test) model.evaluate(X_train_scaled, y_train) history = model.fit(X_train_scaled, y_train) # ## MC Dropout tf.random.set_seed(42) np.random.seed(42) y_probas = np.stack([model(X_test_scaled, training=True) for sample in range(100)]) y_proba = y_probas.mean(axis=0) y_std = y_probas.std(axis=0) np.round(model.predict(X_test_scaled[:1]), 2) np.round(y_probas[:, :1], 2) np.round(y_proba[:1], 2) y_std = y_probas.std(axis=0) np.round(y_std[:1], 2) y_pred = np.argmax(y_proba, axis=1) accuracy = np.sum(y_pred == y_test) / len(y_test) accuracy # + class MCDropout(keras.layers.Dropout): def call(self, inputs): return super().call(inputs, training=True) class MCAlphaDropout(keras.layers.AlphaDropout): def call(self, inputs): return super().call(inputs, training=True) # - tf.random.set_seed(42) np.random.seed(42) mc_model = keras.models.Sequential([ MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer for layer in model.layers ]) mc_model.summary() optimizer = keras.optimizers.SGD(learning_rate=0.01, momentum=0.9, nesterov=True) mc_model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"]) mc_model.set_weights(model.get_weights()) # Now we can use the model with MC Dropout: np.round(np.mean([mc_model.predict(X_test_scaled[:1]) for sample in range(100)], axis=0), 2) # ## Max norm layer = keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal", 
kernel_constraint=keras.constraints.max_norm(1.)) # + MaxNormDense = partial(keras.layers.Dense, activation="selu", kernel_initializer="lecun_normal", kernel_constraint=keras.constraints.max_norm(1.)) model = keras.models.Sequential([ keras.layers.Flatten(input_shape=[28, 28]), MaxNormDense(300), MaxNormDense(100), keras.layers.Dense(10, activation="softmax") ]) model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"]) n_epochs = 2 history = model.fit(X_train_scaled, y_train, epochs=n_epochs, validation_data=(X_valid_scaled, y_valid)) # - # # Exercises # ## 1. to 7. # See appendix A. # ## 8. Deep Learning on CIFAR10 # ### a. # *Exercise: Build a DNN with 20 hidden layers of 100 neurons each (that's too many, but it's the point of this exercise). Use He initialization and the ELU activation function.* # + keras.backend.clear_session() tf.random.set_seed(42) np.random.seed(42) model = keras.models.Sequential() model.add(keras.layers.Flatten(input_shape=[32, 32, 3])) for _ in range(20): model.add(keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal")) # - # ### b. # *Exercise: Using Nadam optimization and early stopping, train the network on the CIFAR10 dataset. You can load it with `keras.datasets.cifar10.load_data()`. The dataset is composed of 60,000 32 × 32–pixel color images (50,000 for training, 10,000 for testing) with 10 classes, so you'll need a softmax output layer with 10 neurons. Remember to search for the right learning rate each time you change the model's architecture or hyperparameters.* # Let's add the output layer to the model: model.add(keras.layers.Dense(10, activation="softmax")) # Let's use a Nadam optimizer with a learning rate of 5e-5. I tried learning rates 1e-5, 3e-5, 1e-4, 3e-4, 1e-3, 3e-3 and 1e-2, and I compared their learning curves for 10 epochs each (using the TensorBoard callback, below). The learning rates 3e-5 and 1e-4 were pretty good, so I tried 5e-5, which turned out to be slightly better. optimizer = keras.optimizers.Nadam(learning_rate=5e-5) model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"]) # Let's load the CIFAR10 dataset. We also want to use early stopping, so we need a validation set. Let's use the first 5,000 images of the original training set as the validation set: # + (X_train_full, y_train_full), (X_test, y_test) = keras.datasets.cifar10.load_data() X_train = X_train_full[5000:] y_train = y_train_full[5000:] X_valid = X_train_full[:5000] y_valid = y_train_full[:5000] # - # Now we can create the callbacks we need and train the model: early_stopping_cb = keras.callbacks.EarlyStopping(patience=20) model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_model.h5", save_best_only=True) run_index = 1 # increment every time you train the model run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_{:03d}".format(run_index)) tensorboard_cb = keras.callbacks.TensorBoard(run_logdir) callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb] # %tensorboard --logdir=./my_cifar10_logs --port=6006 model.fit(X_train, y_train, epochs=100, validation_data=(X_valid, y_valid), callbacks=callbacks) model = keras.models.load_model("my_cifar10_model.h5") model.evaluate(X_valid, y_valid) # The model with the lowest validation loss gets about 47.6% accuracy on the validation set. It took 27 epochs to reach the lowest validation loss, with roughly 8 seconds per epoch on my laptop (without a GPU). 
Let's see if we can improve performance using Batch Normalization. # ### c. # *Exercise: Now try adding Batch Normalization and compare the learning curves: Is it converging faster than before? Does it produce a better model? How does it affect training speed?* # The code below is very similar to the code above, with a few changes: # # * I added a BN layer after every Dense layer (before the activation function), except for the output layer. I also added a BN layer before the first hidden layer. # * I changed the learning rate to 5e-4. I experimented with 1e-5, 3e-5, 5e-5, 1e-4, 3e-4, 5e-4, 1e-3 and 3e-3, and I chose the one with the best validation performance after 20 epochs. # * I renamed the run directories to run_bn_* and the model file name to my_cifar10_bn_model.h5. # + keras.backend.clear_session() tf.random.set_seed(42) np.random.seed(42) model = keras.models.Sequential() model.add(keras.layers.Flatten(input_shape=[32, 32, 3])) model.add(keras.layers.BatchNormalization()) for _ in range(20): model.add(keras.layers.Dense(100, kernel_initializer="he_normal")) model.add(keras.layers.BatchNormalization()) model.add(keras.layers.Activation("elu")) model.add(keras.layers.Dense(10, activation="softmax")) optimizer = keras.optimizers.Nadam(learning_rate=5e-4) model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"]) early_stopping_cb = keras.callbacks.EarlyStopping(patience=20) model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_bn_model.h5", save_best_only=True) run_index = 1 # increment every time you train the model run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_bn_{:03d}".format(run_index)) tensorboard_cb = keras.callbacks.TensorBoard(run_logdir) callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb] model.fit(X_train, y_train, epochs=100, validation_data=(X_valid, y_valid), callbacks=callbacks) model = keras.models.load_model("my_cifar10_bn_model.h5") model.evaluate(X_valid, y_valid) # - # * *Is the model converging faster than before?* Much faster! The previous model took 27 epochs to reach the lowest validation loss, while the new model achieved that same loss in just 5 epochs and continued to make progress until the 16th epoch. The BN layers stabilized training and allowed us to use a much larger learning rate, so convergence was faster. # * *Does BN produce a better model?* Yes! The final model is also much better, with 54.0% accuracy instead of 47.6%. It's still not a very good model, but at least it's much better than before (a Convolutional Neural Network would do much better, but that's a different topic, see chapter 14). # * *How does BN affect training speed?* Although the model converged much faster, each epoch took about 12s instead of 8s, because of the extra computations required by the BN layers. But overall the training time (wall time) was shortened significantly! # ### d. 
# *Exercise: Try replacing Batch Normalization with SELU, and make the necessary adjustements to ensure the network self-normalizes (i.e., standardize the input features, use LeCun normal initialization, make sure the DNN contains only a sequence of dense layers, etc.).* # + keras.backend.clear_session() tf.random.set_seed(42) np.random.seed(42) model = keras.models.Sequential() model.add(keras.layers.Flatten(input_shape=[32, 32, 3])) for _ in range(20): model.add(keras.layers.Dense(100, kernel_initializer="lecun_normal", activation="selu")) model.add(keras.layers.Dense(10, activation="softmax")) optimizer = keras.optimizers.Nadam(learning_rate=7e-4) model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"]) early_stopping_cb = keras.callbacks.EarlyStopping(patience=20) model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_selu_model.h5", save_best_only=True) run_index = 1 # increment every time you train the model run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_selu_{:03d}".format(run_index)) tensorboard_cb = keras.callbacks.TensorBoard(run_logdir) callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb] X_means = X_train.mean(axis=0) X_stds = X_train.std(axis=0) X_train_scaled = (X_train - X_means) / X_stds X_valid_scaled = (X_valid - X_means) / X_stds X_test_scaled = (X_test - X_means) / X_stds model.fit(X_train_scaled, y_train, epochs=100, validation_data=(X_valid_scaled, y_valid), callbacks=callbacks) model = keras.models.load_model("my_cifar10_selu_model.h5") model.evaluate(X_valid_scaled, y_valid) # - model = keras.models.load_model("my_cifar10_selu_model.h5") model.evaluate(X_valid_scaled, y_valid) # We get 47.9% accuracy, which is not much better than the original model (47.6%), and not as good as the model using batch normalization (54.0%). However, convergence was almost as fast as with the BN model, plus each epoch took only 7 seconds. So it's by far the fastest model to train so far. # ### e. # *Exercise: Try regularizing the model with alpha dropout. 
Then, without retraining your model, see if you can achieve better accuracy using MC Dropout.* # + keras.backend.clear_session() tf.random.set_seed(42) np.random.seed(42) model = keras.models.Sequential() model.add(keras.layers.Flatten(input_shape=[32, 32, 3])) for _ in range(20): model.add(keras.layers.Dense(100, kernel_initializer="lecun_normal", activation="selu")) model.add(keras.layers.AlphaDropout(rate=0.1)) model.add(keras.layers.Dense(10, activation="softmax")) optimizer = keras.optimizers.Nadam(learning_rate=5e-4) model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"]) early_stopping_cb = keras.callbacks.EarlyStopping(patience=20) model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_alpha_dropout_model.h5", save_best_only=True) run_index = 1 # increment every time you train the model run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_alpha_dropout_{:03d}".format(run_index)) tensorboard_cb = keras.callbacks.TensorBoard(run_logdir) callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb] X_means = X_train.mean(axis=0) X_stds = X_train.std(axis=0) X_train_scaled = (X_train - X_means) / X_stds X_valid_scaled = (X_valid - X_means) / X_stds X_test_scaled = (X_test - X_means) / X_stds model.fit(X_train_scaled, y_train, epochs=100, validation_data=(X_valid_scaled, y_valid), callbacks=callbacks) model = keras.models.load_model("my_cifar10_alpha_dropout_model.h5") model.evaluate(X_valid_scaled, y_valid) # - # The model reaches 48.9% accuracy on the validation set. That's very slightly better than without dropout (47.6%). With an extensive hyperparameter search, it might be possible to do better (I tried dropout rates of 5%, 10%, 20% and 40%, and learning rates 1e-4, 3e-4, 5e-4, and 1e-3), but probably not much better in this case. # Let's use MC Dropout now. We will need the `MCAlphaDropout` class we used earlier, so let's just copy it here for convenience: class MCAlphaDropout(keras.layers.AlphaDropout): def call(self, inputs): return super().call(inputs, training=True) # Now let's create a new model, identical to the one we just trained (with the same weights), but with `MCAlphaDropout` dropout layers instead of `AlphaDropout` layers: mc_model = keras.models.Sequential([ MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer for layer in model.layers ]) # Then let's add a couple utility functions. The first will run the model many times (10 by default) and it will return the mean predicted class probabilities. The second will use these mean probabilities to predict the most likely class for each instance: # + def mc_dropout_predict_probas(mc_model, X, n_samples=10): Y_probas = [mc_model.predict(X) for sample in range(n_samples)] return np.mean(Y_probas, axis=0) def mc_dropout_predict_classes(mc_model, X, n_samples=10): Y_probas = mc_dropout_predict_probas(mc_model, X, n_samples) return np.argmax(Y_probas, axis=1) # - # Now let's make predictions for all the instances in the validation set, and compute the accuracy: # + keras.backend.clear_session() tf.random.set_seed(42) np.random.seed(42) y_pred = mc_dropout_predict_classes(mc_model, X_valid_scaled) accuracy = np.mean(y_pred == y_valid[:, 0]) accuracy # - # We get no accuracy improvement in this case (we're still at 48.9% accuracy). # # So the best model we got in this exercise is the Batch Normalization model. # ### f. 
# *Exercise: Retrain your model using 1cycle scheduling and see if it improves training speed and model accuracy.* # + keras.backend.clear_session() tf.random.set_seed(42) np.random.seed(42) model = keras.models.Sequential() model.add(keras.layers.Flatten(input_shape=[32, 32, 3])) for _ in range(20): model.add(keras.layers.Dense(100, kernel_initializer="lecun_normal", activation="selu")) model.add(keras.layers.AlphaDropout(rate=0.1)) model.add(keras.layers.Dense(10, activation="softmax")) optimizer = keras.optimizers.SGD(learning_rate=1e-3) model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"]) # - batch_size = 128 rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size) plot_lr_vs_loss(rates, losses) plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 1.4]) # + keras.backend.clear_session() tf.random.set_seed(42) np.random.seed(42) model = keras.models.Sequential() model.add(keras.layers.Flatten(input_shape=[32, 32, 3])) for _ in range(20): model.add(keras.layers.Dense(100, kernel_initializer="lecun_normal", activation="selu")) model.add(keras.layers.AlphaDropout(rate=0.1)) model.add(keras.layers.Dense(10, activation="softmax")) optimizer = keras.optimizers.SGD(learning_rate=1e-2) model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"]) # - n_epochs = 15 onecycle = OneCycleScheduler(math.ceil(len(X_train_scaled) / batch_size) * n_epochs, max_rate=0.05) history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size, validation_data=(X_valid_scaled, y_valid), callbacks=[onecycle]) # One cycle allowed us to train the model in just 15 epochs, each taking only 2 seconds (thanks to the larger batch size). This is several times faster than the fastest model we trained so far. Moreover, we improved the model's performance (from 47.6% to 52.0%). The batch normalized model reaches a slightly better performance (54%), but it's much slower to train.
11_training_deep_neural_networks.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# ## Import libraries
# * **Numpy** is the fundamental package for scientific computing with Python.
# * **Pandas** is for data manipulation and analysis.
# > pip install pandas
# * **Matplotlib** is a Python 2D plotting library which produces publication-quality figures.
# > pip install matplotlib
# * **Scikit-learn** is a machine learning library, written in Python. It contains numerous algorithms, datasets, utilities, and frameworks for performing machine learning.
# > pip3 install -U scikit-learn
# * **Warning** messages are typically issued in situations where it is useful to alert the user of some condition in a program; here we suppress them to keep the notebook output clean.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings('ignore')

# # Example 1

# ## Import data set

dataset = pd.read_csv('WorldCupMatches.csv')
dataset.head(5)

# ### Print data set properties

print(dataset.shape)
print()
print(dataset.index)
print()
print(dataset.columns)

dataset.loc[0]

# ## Data Cleaning

# ### a. Check out missing data

dataset.isnull()

dataset.isnull().sum()

# ### 1. Ignore the tuple with missing data
# This method is advised only when there are enough samples in the data set. One has to make sure that after we have deleted the data, there is no addition of bias. Removing the data will lead to loss of information which will not give the expected results while predicting the output.

dataset2 = dataset.dropna(inplace=False)
dataset2.isnull().sum()

dataset2.shape

# ### 2. Replace missing values with **mean, median or mode**
# This is an approximation which can add variance to the data set. But the loss of the data can be negated by this method which yields better results compared to removal of rows and columns.

dataset['Year'].tail(10)

int(dataset['Year'].mean())

dataset['Year'] = dataset['Year'].replace(np.NaN, int(dataset['Year'].mean().round()))
dataset['Year'].tail(10)

dataset['City'].mode().item()

dataset['City'] = dataset['City'].replace(np.NaN, dataset['City'].mode().item())
dataset

# ## Data Transformation

# ### 1. Handling Categorical Data

dataset.dropna(inplace=True)
dataset

# +
from sklearn.preprocessing import LabelEncoder

labelencoder = LabelEncoder()  # creating instance of labelencoder
dataset['Stadium'] = labelencoder.fit_transform(dataset['Stadium'])
dataset['Home Team Name'] = labelencoder.fit_transform(dataset['Home Team Name'])
dataset['Away Team Name'] = labelencoder.fit_transform(dataset['Away Team Name'])
dataset
# -

# ## Data Reduction

# ### 1. Attribute Subset Selection

dataset = dataset[['Year', 'Stadium', 'Home Team Name', 'Away Team Name', 'Home Team Goals', 'Away Team Goals']]
dataset

# ## Splitting the data set into Training and Test Set

from sklearn.model_selection import train_test_split
train, test = train_test_split(dataset, test_size=0.2)

train

test

# # Example 2

# ## Import data and remove missing values

dataframe = pd.read_csv('diabetes_null.csv', na_values=['#NAME?'])
dataframe = dataframe.dropna(axis=0)
dataframe.head(10)

# ## Handling Noisy Data

# ### Function to find outliers

def find_outliers_tukey(data):
    q1 = data.quantile(.25)
    q3 = data.quantile(.75)
    iqr = q3 - q1
    floor = q1 - 1.5*iqr
    ceiling = q3 + 1.5*iqr
    outlier_indices = list(data.index[(data < floor) | (data > ceiling)])
    outlier_values = list(data[outlier_indices])
    return outlier_indices, outlier_values

# ### Finding outliers

glucose_indices, glucose_values = find_outliers_tukey(dataframe['Glucose'])
print("Outliers for Glucose")
print(np.sort(glucose_values))

bmi_indices, bmi_values = find_outliers_tukey(dataframe['BMI'])
print("Outliers for BMI")
print(np.sort(bmi_values))

# ### Deleting rows with outliers

dataframe = dataframe.drop(bmi_indices)
dataframe.head(10)

# # Visualization

# ## Histogram

import matplotlib.pyplot as plt
dataframe.hist()
plt.show()

# ## Density Plot

dataframe.plot(kind='density', subplots=True, layout=(3,3), sharex=False)
plt.show()

# ## Box Plot

dataframe.plot(kind='box', subplots=True, layout=(3,3), sharex=False, sharey=False)
plt.show()

# ## Scatter Plot

from pandas.plotting import scatter_matrix
scatter_matrix(dataframe)
plt.show()
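
# The missing-value section in Example 1 above shows mean and mode replacement but not the median, which is often the safer choice for skewed numeric columns. Below is a minimal sketch of the same pattern using `fillna`; the `'Attendance'` column name is an assumption about the CSV and may need adjusting.

# +
import pandas as pd

# re-load the raw data so this cell stands on its own
raw = pd.read_csv('WorldCupMatches.csv')

# median imputation for a (typically skewed) numeric column;
# 'Attendance' is only an illustrative column name
raw['Attendance'] = raw['Attendance'].fillna(raw['Attendance'].median())
print(raw['Attendance'].isnull().sum())  # 0 once the missing values are filled
# -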
Session 2/Data Preprocessing Code/Data Preprocessing.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Project: Identify Customer Segments # # In this project, you will apply unsupervised learning techniques to identify segments of the population that form the core customer base for a mail-order sales company in Germany. These segments can then be used to direct marketing campaigns towards audiences that will have the highest expected rate of returns. The data that you will use has been provided by our partners at Bertelsmann Arvato Analytics, and represents a real-life data science task. # # This notebook will help you complete this task by providing a framework within which you will perform your analysis steps. In each step of the project, you will see some text describing the subtask that you will perform, followed by one or more code cells for you to complete your work. **Feel free to add additional code and markdown cells as you go along so that you can explore everything in precise chunks.** The code cells provided in the base template will outline only the major tasks, and will usually not be enough to cover all of the minor tasks that comprise it. # # It should be noted that while there will be precise guidelines on how you should handle certain tasks in the project, there will also be places where an exact specification is not provided. **There will be times in the project where you will need to make and justify your own decisions on how to treat the data.** These are places where there may not be only one way to handle the data. In real-life tasks, there may be many valid ways to approach an analysis task. One of the most important things you can do is clearly document your approach so that other scientists can understand the decisions you've made. # # At the end of most sections, there will be a Markdown cell labeled **Discussion**. In these cells, you will report your findings for the completed section, as well as document the decisions that you made in your approach to each subtask. **Your project will be evaluated not just on the code used to complete the tasks outlined, but also your communication about your observations and conclusions at each stage.** # + # import libraries here; add more as necessary import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns from sklearn.preprocessing import StandardScaler from sklearn.decomposition import PCA from sklearn.cluster import KMeans # magic word for producing visualizations in notebook # %matplotlib inline ''' Import note: The classroom currently uses sklearn version 0.19. If you need to use an imputer, it is available in sklearn.preprocessing.Imputer, instead of sklearn.impute as in newer versions of sklearn. ''' # - # ### Step 0: Load the Data # # There are four files associated with this project (not including this one): # # - `Udacity_AZDIAS_Subset.csv`: Demographics data for the general population of Germany; 891211 persons (rows) x 85 features (columns). # - `Udacity_CUSTOMERS_Subset.csv`: Demographics data for customers of a mail-order company; 191652 persons (rows) x 85 features (columns). # - `Data_Dictionary.md`: Detailed information file about the features in the provided datasets. 
# - `AZDIAS_Feature_Summary.csv`: Summary of feature attributes for demographics data; 85 features (rows) x 4 columns # # Each row of the demographics files represents a single person, but also includes information outside of individuals, including information about their household, building, and neighborhood. You will use this information to cluster the general population into groups with similar demographic properties. Then, you will see how the people in the customers dataset fit into those created clusters. The hope here is that certain clusters are over-represented in the customers data, as compared to the general population; those over-represented clusters will be assumed to be part of the core userbase. This information can then be used for further applications, such as targeting for a marketing campaign. # # To start off with, load in the demographics data for the general population into a pandas DataFrame, and do the same for the feature attributes summary. Note for all of the `.csv` data files in this project: they're semicolon (`;`) delimited, so you'll need an additional argument in your [`read_csv()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html) call to read in the data properly. Also, considering the size of the main dataset, it may take some time for it to load completely. # # Once the dataset is loaded, it's recommended that you take a little bit of time just browsing the general structure of the dataset and feature summary file. You'll be getting deep into the innards of the cleaning in the first major step of the project, so gaining some general familiarity can help you get your bearings. # + # Load in the general demographics data. azdias = pd.read_csv('Udacity_AZDIAS_Subset.csv', error_bad_lines=False, sep=';') # Load in the feature summary file. feat_info = pd.read_csv('AZDIAS_Feature_Summary.csv', error_bad_lines=False, sep=';') # - # Check the structure of the data after it's loaded (e.g. print the number of # rows and columns, print the first few rows). azdias.shape azdias.head(10) feat_info.shape feat_info.head(5) # > **Tip**: Add additional cells to keep everything in reasonably-sized chunks! Keyboard shortcut `esc --> a` (press escape to enter command mode, then press the 'A' key) adds a new cell before the active cell, and `esc --> b` adds a new cell after the active cell. If you need to convert an active cell to a markdown cell, use `esc --> m` and to convert to a code cell, use `esc --> y`. # # ## Step 1: Preprocessing # # ### Step 1.1: Assess Missing Data # # The feature summary file contains a summary of properties for each demographics data column. You will use this file to help you make cleaning decisions during this stage of the project. First of all, you should assess the demographics data in terms of missing data. Pay attention to the following points as you perform your analysis, and take notes on what you observe. Make sure that you fill in the **Discussion** cell with your findings and decisions at the end of each step that has one! # # #### Step 1.1.1: Convert Missing Value Codes to NaNs # The fourth column of the feature attributes summary (loaded in above as `feat_info`) documents the codes from the data dictionary that indicate missing or unknown data. While the file encodes this as a list (e.g. `[-1,0]`), this will get read in as a string object. You'll need to do a little bit of parsing to make use of it to identify and clean the data. 
# Convert data that matches a 'missing' or 'unknown' value code into a numpy NaN value. You might want to see how much data takes on a 'missing' or 'unknown' code, and how much data is naturally missing, as a point of interest.
#
# **As one more reminder, you are encouraged to add additional cells to break up your analysis into manageable chunks.**

# Identify missing or unknown data values and convert them to NaNs.
# First let's convert the strings in missing_or_unknown into lists.
feat_info['missing_or_unknown'] = feat_info.missing_or_unknown.apply(lambda x: x[1:-1].split(','))

# Now, using the loop from this question https://knowledge.udacity.com/questions/113144, we convert all missing or unknown values
# to NaNs
for attribute, missing_or_unknown in zip(feat_info['attribute'], feat_info['missing_or_unknown']):
    if missing_or_unknown[0] != '':
        for val in missing_or_unknown:
            if val.isnumeric() or val.lstrip('-').isnumeric():
                azdias.loc[azdias[attribute] == int(val), attribute] = np.nan
            else:
                azdias.loc[azdias[attribute] == val, attribute] = np.nan

# Updated table
azdias.head(10)

# #### Step 1.1.2: Assess Missing Data in Each Column
#
# How much missing data is present in each column? There are a few columns that are outliers in terms of the proportion of values that are missing. You will want to use matplotlib's [`hist()`](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.hist.html) function to visualize the distribution of missing value counts to find these columns. Identify and document these columns. While some of these columns might have justifications for keeping or re-encoding the data, for this project you should just remove them from the dataframe. (Feel free to make remarks about these outlier columns in the discussion, however!)
#
# For the remaining features, are there any patterns in which columns have, or share, missing data?

# Perform an assessment of how much missing data there is in each column of the
# dataset.
missing_count = azdias.isnull().sum()/len(azdias)
plt.figure(figsize=(16,8))
plt.xticks(np.arange(len(missing_count))+0.5, missing_count.index, rotation='vertical')
plt.ylabel("Fraction of missing data")
plt.bar(np.arange(len(missing_count)), missing_count)
plt.show()

# Investigate patterns in the amount of missing data in each column.
# The AGER_TYP, GEBURTSJAHR, TITEL_KZ, ALTER_HH, KK_KUNDENTYP and KBA05_BAUMAX columns have a high amount of missing data.
# Let's remove them from the data set
azdias_new = azdias.drop(columns=['AGER_TYP', 'GEBURTSJAHR', 'TITEL_KZ', 'ALTER_HH', 'KK_KUNDENTYP', 'KBA05_BAUMAX'], axis=1)

# New df without the outlier columns
azdias_new.head()

# How much data is missing in each column of the dataset?
azdias_new.isnull().sum()

# #### Discussion 1.1.2: Assess Missing Data in Each Column
#
# After looking at the missing-data bar plot, I decided that there are 6 outlier columns in the data.
# These columns are 'AGER_TYP', 'GEBURTSJAHR', 'TITEL_KZ', 'ALTER_HH', 'KK_KUNDENTYP', 'KBA05_BAUMAX'.
#
# In the remaining data, each column has either close to zero missing values or around 10% missing data.

# #### Step 1.1.3: Assess Missing Data in Each Row
#
# Now, you'll perform a similar assessment for the rows of the dataset. How much data is missing in each row? As with the columns, you should see some groups of points that have very different numbers of missing values. Divide the data into two subsets: one for data points that are above some threshold for missing values, and a second subset for points below that threshold.
#
# In order to know what to do with the outlier rows, we should see if the distribution of data values on columns that are not missing data (or are missing very little data) are similar or different between the two groups. Select at least five of these columns and compare the distribution of values.
# - You can use seaborn's [`countplot()`](https://seaborn.pydata.org/generated/seaborn.countplot.html) function to create a bar chart of code frequencies and matplotlib's [`subplot()`](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.subplot.html) function to put bar charts for the two subplots side by side.
# - To reduce repeated code, you might want to write a function that can perform this comparison, taking as one of its arguments a column to be compared.
#
# Depending on what you observe in your comparison, this will have implications on how you approach your conclusions later in the analysis. If the distributions of non-missing features look similar between the data with many missing values and the data with few or no missing values, then we could argue that simply dropping those points from the analysis won't present a major issue. On the other hand, if the data with many missing values looks very different from the data with few or no missing values, then we should make a note on those data as special. We'll revisit these data later on. **Either way, you should continue your analysis for now using just the subset of the data with few or no missing values.**

# How much data is missing in each row of the dataset?
# I printed only the first 20 rows because graphing or printing all of them takes too much time
#for i in range(len(azdias_new.index)) :
#    missing_count_row = azdias_new.iloc[i].isnull().sum()/len(azdias_new.iloc[i])
for i in range(0, 20):
    print("Nan in row ", i, " : ", azdias_new.iloc[i].isnull().sum())

# Write code to divide the data into two subsets based on the number of missing
# values in each row.
# azdias_clean consists of rows without null values
azdias_clean = azdias_new.dropna()
# azdias_null consists of rows containing any amount of null values
azdias_null = azdias_new.loc[~azdias_new.index.isin(azdias_new.dropna().index)]

# +
# Compare the distribution of values for at least five columns where there are
# no or few missing values, between the two subsets.

test_columns = ['FINANZ_MINIMALIST', 'GREEN_AVANTGARDE', 'SEMIO_SOZ', 'ALTERSKATEGORIE_GROB', 'ONLINE_AFFINITAET']

def plotColumns(dataset1, dataset2, columnlist):
    sns.set(style="darkgrid")
    fig, axs = plt.subplots(2, len(columnlist), figsize=(35,20))
    for i in range(0, len(columnlist)):
        sns.countplot(dataset1[columnlist[i]], ax=axs[0, i])
        sns.countplot(dataset2[columnlist[i]], ax=axs[1, i])

plotColumns(azdias_clean, azdias_null, test_columns)
# -

# #### Discussion 1.1.3: Assess Missing Data in Each Row
# The first 3 columns in the plot above have zero null values. The data patterns are similar and the most frequent values are the same in both dataframes. Even where the two subsets differ somewhat, we can treat the removed rows as noise and continue the analysis with the subset that has no missing values.

# ### Step 1.2: Select and Re-Encode Features
#
# Checking for missing data isn't the only way in which you can prepare a dataset for analysis. Since the unsupervised learning techniques to be used will only work on data that is encoded numerically, you need to make a few encoding changes or additional assumptions to be able to make progress. In addition, while almost all of the values in the dataset are encoded using numbers, not all of them represent numeric values.
Check the third column of the feature summary (`feat_info`) for a summary of types of measurement. # - For numeric and interval data, these features can be kept without changes. # - Most of the variables in the dataset are ordinal in nature. While ordinal values may technically be non-linear in spacing, make the simplifying assumption that the ordinal variables can be treated as being interval in nature (that is, kept without any changes). # - Special handling may be necessary for the remaining two variable types: categorical, and 'mixed'. # # In the first two parts of this sub-step, you will perform an investigation of the categorical and mixed-type features and make a decision on each of them, whether you will keep, drop, or re-encode each. Then, in the last part, you will create a new data frame with only the selected and engineered columns. # # Data wrangling is often the trickiest part of the data analysis process, and there's a lot of it to be done here. But stick with it: once you're done with this step, you'll be ready to get to the machine learning parts of the project! # How many features are there of each data type? typeArray = feat_info['type'].unique() for tp in typeArray: count = len(feat_info[feat_info.type == tp]) print("Type =",tp," | Count =", count) # #### Step 1.2.1: Re-Encode Categorical Features # # For categorical data, you would ordinarily need to encode the levels as dummy variables. Depending on the number of categories, perform one of the following: # - For binary (two-level) categoricals that take numeric values, you can keep them without needing to do anything. # - There is one binary variable that takes on non-numeric values. For this one, you need to re-encode the values as numbers or create a dummy variable. # - For multi-level categoricals (three or more values), you can choose to encode the values using multiple dummy variables (e.g. via [OneHotEncoder](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html)), or (to keep things straightforward) just drop them from the analysis. As always, document your choices in the Discussion section. #First lets make list of previously dropped attributes dropped_names = ['AGER_TYP', 'GEBURTSJAHR', 'TITEL_KZ', 'ALTER_HH', 'KK_KUNDENTYP', 'KBA05_BAUMAX'] #Then we take copy of original frame cat_attr = feat_info.copy() #Then drop attributes from copy for name in dropped_names: cat_attr.drop(feat_info[feat_info['attribute'] == name ].index , inplace=True) # + # Assess categorical variables: which are binary, which are multi-level, and # which one needs to be re-encoded? # In order to assess categorical values we create new frame with only categorical attributes df_clean_copy = azdias_clean.copy() cat_attr = cat_attr[cat_attr['type'].isin(['categorical'])] df_clean_copy = df_clean_copy.filter(cat_attr['attribute']) df_clean_copy.head(10) #DF_CLEAN_COPY ONLY CONTAINS CATEGORICAL FEATURES MAKE CHANGES IN AZDIAS_CLEAN # - df_clean_copy.info() #We have got three object type. First lets start with OST_WEST_KZ print(df_clean_copy['OST_WEST_KZ'].value_counts()) #It is non numeric binary variable. We replace them with integers. 
replace_map = {'OST_WEST_KZ': {'W': 1, 'O': 2}}
azdias_clean.replace(replace_map, inplace=True)

# Let's look at the CAMEO_DEUG_2015 object-type attribute
print(df_clean_copy['CAMEO_DEUG_2015'].value_counts())

# It only contains integer values, so we can convert the column to a numeric type
azdias_clean["CAMEO_DEUG_2015"] = pd.to_numeric(azdias_clean["CAMEO_DEUG_2015"])

# Let's analyze our final object attribute
print(df_clean_copy['CAMEO_DEU_2015'].value_counts())

# It contains categorical string values. We can use get_dummies
azdias_clean = pd.get_dummies(azdias_clean, columns=['CAMEO_DEU_2015'], drop_first=True)

# Now we check our frame
azdias_clean.shape

azdias_clean.head(10)

# #### Discussion 1.2.1: Re-Encode Categorical Features
#
# In our cleaned data frame we have 18 categorical attributes. 15 of them were float values and required no handling.
# * OST_WEST_KZ = String binary value. Replaced with the integers 1, 2
# * CAMEO_DEUG_2015 = Integers stored with object type. Cast to a numeric type
# * CAMEO_DEU_2015 = String attribute with a lot of unique values. Used get_dummies to encode
#
# I kept all of the non-null attributes.

# #### Step 1.2.2: Engineer Mixed-Type Features
#
# There are a handful of features that are marked as "mixed" in the feature summary that require special treatment in order to be included in the analysis. There are two in particular that deserve attention; the handling of the rest are up to your own choices:
# - "PRAEGENDE_JUGENDJAHRE" combines information on three dimensions: generation by decade, movement (mainstream vs. avantgarde), and nation (east vs. west). While there aren't enough levels to disentangle east from west, you should create two new variables to capture the other two dimensions: an interval-type variable for decade, and a binary variable for movement.
# - "CAMEO_INTL_2015" combines information on two axes: wealth and life stage. Break up the two-digit codes by their 'tens'-place and 'ones'-place digits into two new ordinal variables (which, for the purposes of this project, is equivalent to just treating them as their raw numeric values).
# - If you decide to keep or engineer new features around the other mixed-type features, make sure you note your steps in the Discussion section.
#
# Be sure to check `Data_Dictionary.md` for the details needed to finish these tasks.

# Investigate "PRAEGENDE_JUGENDJAHRE" and engineer two new variables.
print(azdias_clean['PRAEGENDE_JUGENDJAHRE'].value_counts())

# First we make lists of the movement types
mainstream_lst = [1, 3, 5, 8, 10, 12, 14]
avantgarde_lst = [2, 4, 6, 7, 9, 11, 13, 15]
# and generation lists
gen_40s_lst = [1, 2]
gen_50s_lst = [3, 4]
gen_60s_lst = [5, 6, 7]
gen_70s_lst = [8, 9]
gen_80s_lst = [10, 11, 12, 13]
gen_90s_lst = [14, 15]

# +
# TODO : ASSIGN NEW LIST AT THE END

# New column values to insert
insert_mainstream = []
insert_avantgarde = []
insert_gen40 = []
insert_gen50 = []
insert_gen60 = []
insert_gen70 = []
insert_gen80 = []
insert_gen90 = []

# Fill insert lists
for i in azdias_clean["PRAEGENDE_JUGENDJAHRE"].tolist():
    insert_mainstream.append(1) if i in mainstream_lst else insert_mainstream.append(0)
    insert_avantgarde.append(1) if i in avantgarde_lst else insert_avantgarde.append(0)
    insert_gen40.append(1) if i in gen_40s_lst else insert_gen40.append(0)
    insert_gen50.append(1) if i in gen_50s_lst else insert_gen50.append(0)
    insert_gen60.append(1) if i in gen_60s_lst else insert_gen60.append(0)
    insert_gen70.append(1) if i in gen_70s_lst else insert_gen70.append(0)
    insert_gen80.append(1) if i in gen_80s_lst else insert_gen80.append(0)
    insert_gen90.append(1) if i in gen_90s_lst else insert_gen90.append(0)

# Insert new columns
azdias_clean["MAINSTREAM"] = insert_mainstream
azdias_clean["AVANTGARDE"] = insert_avantgarde
azdias_clean["GEN40"] = insert_gen40
azdias_clean["GEN50"] = insert_gen50
azdias_clean["GEN60"] = insert_gen60
azdias_clean["GEN70"] = insert_gen70
azdias_clean["GEN80"] = insert_gen80
azdias_clean["GEN90"] = insert_gen90

# Drop original column
azdias_clean = azdias_clean.drop(['PRAEGENDE_JUGENDJAHRE'], axis=1)
azdias_clean.head(5)

# +
# Investigate "CAMEO_INTL_2015" and engineer two new variables.
insert_wealth = []
insert_life_stage = []

for i in azdias_clean["CAMEO_INTL_2015"].tolist():
    insert_wealth.append(int(str(i)[0]))
    insert_life_stage.append(int(str(i)[1]))

azdias_clean["WEALTH"] = insert_wealth
azdias_clean["LIFE_STAGE"] = insert_life_stage
azdias_clean = azdias_clean.drop(['CAMEO_INTL_2015'], axis=1)
azdias_clean.head(5)
# -

# #### Discussion 1.2.2: Engineer Mixed-Type Features
#
# PRAEGENDE_JUGENDJAHRE contains data in three dimensions. I ignored the nation part as instructed and focused on the remaining dimensions. I created separate lists for each movement and generation, filled them with the labels from the Data Dictionary, created new columns for each of them, and filled those columns according to the value in the original column. Finally, I dropped the original column.
# I applied the same process for CAMEO_INTL_2015, but instead of using binary values I used the digits in the tens place and ones place.

# #### Step 1.2.3: Complete Feature Selection
#
# In order to finish this step up, you need to make sure that your data frame now only has the columns that you want to keep. To summarize, the dataframe should consist of the following:
# - All numeric, interval, and ordinal type columns from the original dataset.
# - Binary categorical features (all numerically-encoded).
# - Engineered features from other multi-level categorical features and mixed features.
#
# Make sure that for any new columns that you have engineered, that you've excluded the original columns from the final dataset. Otherwise, their values will interfere with the analysis later on the project. For example, you should not keep "PRAEGENDE_JUGENDJAHRE", since its values won't be useful for the algorithm: only the values derived from it in the engineered features you created should be retained.
# As a reminder, your data should only be from **the subset with few or no missing values**.

# If there are other re-engineering tasks you need to perform, make sure you
# take care of them here. (Dealing with missing data will come in step 2.1.)
azdias_clean.info()

# All values are numeric, interval and ordinal type.
# Categorical features are numerically encoded.
# Mixed features are separated into new single-level features and the original columns are dropped.

# ### Step 1.3: Create a Cleaning Function
#
# Even though you've finished cleaning up the general population demographics data, it's important to look ahead to the future and realize that you'll need to perform the same cleaning steps on the customer demographics data. In this substep, complete the function below to execute the main feature selection, encoding, and re-engineering steps you performed above. Then, when it comes to looking at the customer data in Step 3, you can just run this function on that DataFrame to get the trimmed dataset in a single step.

def clean_data(df):
    """
    Perform feature trimming, re-encoding, and engineering for demographics data

    INPUT: Demographics DataFrame
    OUTPUT: Trimmed and cleaned demographics DataFrame
    """

    mainstream_lst = [1, 3, 5, 8, 10, 12, 14]
    avantgarde_lst = [2, 4, 6, 7, 9, 11, 13, 15]

    gen_40s_lst = [1, 2]
    gen_50s_lst = [3, 4]
    gen_60s_lst = [5, 6, 7]
    gen_70s_lst = [8, 9]
    gen_80s_lst = [10, 11, 12, 13]
    gen_90s_lst = [14, 15]

    insert_mainstream = []
    insert_avantgarde = []
    insert_gen40 = []
    insert_gen50 = []
    insert_gen60 = []
    insert_gen70 = []
    insert_gen80 = []
    insert_gen90 = []

    insert_wealth = []
    insert_life_stage = []

    # Put in code here to execute all main cleaning steps:
    # convert missing value codes into NaNs, ...
    for attribute, missing_or_unknown in zip(feat_info['attribute'], feat_info['missing_or_unknown']):
        if missing_or_unknown[0] != '':
            for val in missing_or_unknown:
                if val.isnumeric() or val.lstrip('-').isnumeric():
                    df.loc[df[attribute] == int(val), attribute] = np.nan
                else:
                    df.loc[df[attribute] == val, attribute] = np.nan
    print("NaN conversion complete")

    # remove selected columns and rows, ...
    df = df.drop(columns=['AGER_TYP', 'GEBURTSJAHR', 'TITEL_KZ', 'ALTER_HH', 'KK_KUNDENTYP', 'KBA05_BAUMAX'], axis=1)
    df = df.dropna()
    print("Columns containing a high amount of null values are dropped and rows containing null values are dropped")

    # select, re-encode, and engineer column values.
    replace_map = {'OST_WEST_KZ': {'W': 1, 'O': 2}}
    df.replace(replace_map, inplace=True)
    df["CAMEO_DEUG_2015"] = pd.to_numeric(df["CAMEO_DEUG_2015"])
    df = pd.get_dummies(df, columns=['CAMEO_DEU_2015'], drop_first=True)
    print("Non numeric categorical features converted")

    for i in df["PRAEGENDE_JUGENDJAHRE"].tolist():
        insert_mainstream.append(1) if i in mainstream_lst else insert_mainstream.append(0)
        insert_avantgarde.append(1) if i in avantgarde_lst else insert_avantgarde.append(0)
        insert_gen40.append(1) if i in gen_40s_lst else insert_gen40.append(0)
        insert_gen50.append(1) if i in gen_50s_lst else insert_gen50.append(0)
        insert_gen60.append(1) if i in gen_60s_lst else insert_gen60.append(0)
        insert_gen70.append(1) if i in gen_70s_lst else insert_gen70.append(0)
        insert_gen80.append(1) if i in gen_80s_lst else insert_gen80.append(0)
        insert_gen90.append(1) if i in gen_90s_lst else insert_gen90.append(0)

    df["MAINSTREAM"] = insert_mainstream
    df["AVANTGARDE"] = insert_avantgarde
    df["GEN40"] = insert_gen40
    df["GEN50"] = insert_gen50
    df["GEN60"] = insert_gen60
    df["GEN70"] = insert_gen70
    df["GEN80"] = insert_gen80
    df["GEN90"] = insert_gen90

    df = df.drop(['PRAEGENDE_JUGENDJAHRE'], axis=1)
    df.head(5)

    for i in df["CAMEO_INTL_2015"].tolist():
        insert_wealth.append(int(str(i)[0]))
        insert_life_stage.append(int(str(i)[1]))

    df["WEALTH"] = insert_wealth
    df["LIFE_STAGE"] = insert_life_stage
    df = df.drop(['CAMEO_INTL_2015'], axis=1)
    df.head(5)
    print("Multi level attributes re-engineered to low level attributes")

    # Return the cleaned dataframe.
    print("Data clean completed")
    return df

# ## Step 2: Feature Transformation
#
# ### Step 2.1: Apply Feature Scaling
#
# Before we apply dimensionality reduction techniques to the data, we need to perform feature scaling so that the principal component vectors are not influenced by the natural differences in scale for features. Starting from this part of the project, you'll want to keep an eye on the [API reference page for sklearn](http://scikit-learn.org/stable/modules/classes.html) to help you navigate to all of the classes and functions that you'll need. In this substep, you'll need to check the following:
#
# - sklearn requires that data not have missing values in order for its estimators to work properly. So, before applying the scaler to your data, make sure that you've cleaned the DataFrame of the remaining missing values. This can be as simple as just removing all data points with missing data, or applying an [Imputer](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.Imputer.html) to replace all missing values. You might also try a more complicated procedure where you temporarily remove missing values in order to compute the scaling parameters before re-introducing those missing values and applying imputation. Think about how much missing data you have and what possible effects each approach might have on your analysis, and justify your decision in the discussion section below.
# - For the actual scaling function, a [StandardScaler](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html) instance is suggested, scaling each feature to mean 0 and standard deviation 1.
# - For these classes, you can make use of the `.fit_transform()` method to both fit a procedure to the data as well as apply the transformation to the data at the same time. Don't forget to keep the fit sklearn objects handy, since you'll be applying them to the customer demographics data towards the end of the project.
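# Since the rows with missing values were already dropped back in Step 1.1.3, `azdias_clean` has no NaNs left and no imputation is needed in this notebook. Purely as an illustration of the Imputer route described in the bullet points above (assuming the classroom's sklearn 0.19, where the class lives in `sklearn.preprocessing`; newer versions use `sklearn.impute.SimpleImputer` instead), a minimal toy sketch of that alternative could look like this. The tiny frame and the 'median' strategy are made-up, illustrative choices.

# +
# Illustration only: fill NaNs with column medians before scaling.
from sklearn.preprocessing import Imputer  # sklearn <= 0.19 API

toy = pd.DataFrame({'a': [1.0, np.nan, 3.0], 'b': [0.0, 1.0, np.nan]})
toy_imputed = pd.DataFrame(Imputer(strategy='median').fit_transform(toy), columns=toy.columns)
print(toy_imputed)
# -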
# Apply feature scaling to the general population demographics data.
sc = StandardScaler()
azdias_scaled = sc.fit_transform(azdias_clean)

# ### Discussion 2.1: Apply Feature Scaling
#
# Since I am working with the subset of the data that has no missing values, I left the data as it is. For feature scaling I imported StandardScaler and used fit_transform.

# ### Step 2.2: Perform Dimensionality Reduction
#
# On your scaled data, you are now ready to apply dimensionality reduction techniques.
#
# - Use sklearn's [PCA](http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html) class to apply principal component analysis on the data, thus finding the vectors of maximal variance in the data. To start, you should not set any parameters (so all components are computed) or set a number of components that is at least half the number of features (so there's enough features to see the general trend in variability).
# - Check out the ratio of variance explained by each principal component as well as the cumulative variance explained. Try plotting the cumulative or sequential values using matplotlib's [`plot()`](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.plot.html) function. Based on what you find, select a value for the number of transformed features you'll retain for the clustering part of the project.
# - Once you've made a choice for the number of components to keep, make sure you re-fit a PCA instance to perform the decided-on transformation.

# Apply PCA to the data.
pca = PCA()
pca_fit = pca.fit(azdias_scaled)
plt.plot(np.cumsum(pca_fit.explained_variance_ratio_))
plt.xlabel('number of components')
plt.ylabel('cumulative explained variance');
plt.show()

# Re-apply PCA to the data while selecting for number of components to retain.
pca = PCA(n_components=80)
principalComponents = pca.fit_transform(azdias_scaled)

# ### Discussion 2.2: Perform Dimensionality Reduction
#
# First of all, I ran PCA without any parameters. After that I plotted the cumulative explained variance against the number of components.
# At 80 components the cumulative explained variance stays around 92%, while we reduce the number of components by about 40%.

# ### Step 2.3: Interpret Principal Components
#
# Now that we have our transformed principal components, it's a nice idea to check out the weight of each variable on the first few components to see if they can be interpreted in some fashion.
#
# As a reminder, each principal component is a unit vector that points in the direction of highest variance (after accounting for the variance captured by earlier principal components). The further a weight is from zero, the more the principal component is in the direction of the corresponding feature. If two features have large weights of the same sign (both positive or both negative), then increases in one tend to be associated with increases in the other. To contrast, features with different signs can be expected to show a negative correlation: increases in one variable should result in a decrease in the other.
#
# - To investigate the features, you should map each weight to their corresponding feature name, then sort the features according to weight. The most interesting features for each principal component, then, will be those at the beginning and end of the sorted list. Use the data dictionary document to help you understand these most prominent features, their relationships, and what a positive or negative value on the principal component might indicate.
# - You should investigate and interpret feature associations from the first three principal components in this substep. To help facilitate this, you should write a function that you can call at any time to print the sorted list of feature weights, for the *i*-th principal component. This might come in handy in the next step of the project, when you interpret the tendencies of the discovered clusters.
# The weights (loadings) of each principal component live in the fitted PCA's
# `components_` array: row i holds the weights of the (i+1)-th component, one entry per
# original (scaled) feature. Labelling them with the column names of azdias_clean lets
# us sort the features by weight, as the task above asks for.
def sorted_weights(component_num, fitted_pca, feature_names):
    weights = pd.Series(fitted_pca.components_[component_num - 1], index=feature_names)
    return weights.sort_values(ascending=False)

# Map weights for the first principal component to corresponding feature names
# and then print the linked values, sorted by weight.
# HINT: Try defining a function here or in a new cell that you can reuse in the
# other cells.
c1_list = sorted_weights(1, pca, azdias_clean.columns)
print(c1_list[:10])
print(c1_list[-10:])

# Map weights for the second principal component to corresponding feature names
# and then print the linked values, sorted by weight.
c2_list = sorted_weights(2, pca, azdias_clean.columns)
print(c2_list[:10])
print(c2_list[-10:])

# Map weights for the third principal component to corresponding feature names
# and then print the linked values, sorted by weight.
c3_list = sorted_weights(3, pca, azdias_clean.columns)
print(c3_list[:10])
print(c3_list[-10:])

# ### Discussion 2.3: Interpret Principal Components
#
# With the weights mapped to feature names, the most informative entries for each of the first three components are the ones at the two ends of each sorted list: features with large weights of the same sign tend to move together along that component, while features at opposite ends of the list are negatively associated. Reading those top and bottom features off against the Data Dictionary is how each component gets its interpretation, and the same sorted lists come in handy again when interpreting the clusters in Step 3.

# ## Step 3: Clustering
#
# ### Step 3.1: Apply Clustering to General Population
#
# You've assessed and cleaned the demographics data, then scaled and transformed them. Now, it's time to see how the data clusters in the principal components space. In this substep, you will apply k-means clustering to the dataset and use the average within-cluster distances from each point to their assigned cluster's centroid to decide on a number of clusters to keep.
#
# - Use sklearn's [KMeans](http://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html#sklearn.cluster.KMeans) class to perform k-means clustering on the PCA-transformed data.
# - Then, compute the average difference from each point to its assigned cluster's center. **Hint**: The KMeans object's `.score()` method might be useful here, but note that in sklearn, scores tend to be defined so that larger is better. Try applying it to a small, toy dataset, or use an internet search to help your understanding.
# - Perform the above two steps for a number of different cluster counts. You can then see how the average distance decreases with an increasing number of clusters. However, each additional cluster provides a smaller net benefit. Use this fact to select a final number of clusters in which to group the data. **Warning**: because of the large size of the dataset, it can take a long time for the algorithm to resolve. The more clusters to fit, the longer the algorithm will take. You should test for cluster counts through at least 10 clusters to get the full picture, but you shouldn't need to test for a number of clusters above about 30.
# - Once you've selected a final number of clusters to use, re-fit a KMeans instance to perform the clustering operation. Make sure that you also obtain the cluster assignments for the general demographics data, since you'll be using them in the final Step 3.3.
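# Before running the full loop below, a quick toy check of the `.score()` hint above (this cell is purely illustrative and uses a made-up four-point array): for KMeans, `score()` returns the *negative* of the within-cluster sum of squared distances (the inertia), which is why larger scores, i.e. values closer to zero, are better.

# +
# Toy example: KMeans.score() is simply -inertia_ when evaluated on the same data.
toy = np.array([[0, 0], [0, 1], [10, 10], [10, 11]])
toy_km = KMeans(n_clusters=2, random_state=0).fit(toy)
print(toy_km.score(toy), toy_km.inertia_)  # e.g. -1.0 and 1.0 for this toy data
# -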
# Over a number of different cluster counts run k-means clustering on the data and
# compute the average within-cluster distances.
wcss = []
for i in range(2, 22):
    print('Calculating Model with ' + str(i) + ' Centers')
    kmeans = KMeans(n_clusters=i, init='k-means++', max_iter=300, n_init=10, random_state=0, precompute_distances=True)
    cluster_labels_KM = kmeans.fit_predict(principalComponents)
    print('Calculating WCSS Score')
    wcss.append(kmeans.inertia_)
    print('Model with ' + str(i) + ' Centers Done')

# +
# Investigate the change in within-cluster distance across number of clusters.
# HINT: Use matplotlib's plot function to visualize this relationship.
plt.plot(range(2, 22), wcss)
plt.title('WCSS For Elbow Analysis')
plt.xlabel('Number of clusters')
plt.ylabel('WCSS(kmeans.inertia_)')
plt.show()
# -

# Re-fit the k-means model with the selected number of clusters and obtain
# cluster predictions for the general population demographics data.
kmeans_final = KMeans(n_clusters=13, init='k-means++', max_iter=300, n_init=10, random_state=0)
azdias_labels_KM = kmeans_final.fit_predict(principalComponents)

# ### Discussion 3.1: Apply Clustering to General Population
#
# If we look at the WCSS graph we can see that the curve has its elbow at around 13 clusters; after that point the slope of the curve flattens out. Because of that I picked 13 clusters for the final KMeans run.

# ### Step 3.2: Apply All Steps to the Customer Data
#
# Now that you have clusters and cluster centers for the general population, it's time to see how the customer data maps on to those clusters. Take care to not confuse this for re-fitting all of the models to the customer data. Instead, you're going to use the fits from the general population to clean, transform, and cluster the customer data. In the last step of the project, you will interpret how the general population fits apply to the customer data.
#
# - Don't forget when loading in the customers data, that it is semicolon (`;`) delimited.
# - Apply the same feature wrangling, selection, and engineering steps to the customer demographics using the `clean_data()` function you created earlier. (You can assume that the customer demographics data has similar meaning behind missing data patterns as the general demographics data.)
# - Use the sklearn objects from the general demographics data, and apply their transformations to the customers data. That is, you should not be using a `.fit()` or `.fit_transform()` method to re-fit the old objects, nor should you be creating new sklearn objects! Carry the data through the feature scaling, PCA, and clustering steps, obtaining cluster assignments for all of the data in the customer demographics data.

# Load in the customer demographics data.
customers = pd.read_csv('Udacity_CUSTOMERS_Subset.csv', error_bad_lines=False, sep=';')

# Apply preprocessing, feature transformation, and clustering from the general
# demographics onto the customer data, obtaining cluster predictions for the
# customer demographics data.
customers = clean_data(customers)
# Align the customer columns with the general-population frame so that the already
# fitted scaler, PCA and KMeans objects can be applied directly (any dummy columns
# missing from the customer data are filled with 0); this guards against the customer
# data producing a different set of CAMEO_DEU_2015 dummy columns.
customers = customers.reindex(columns=azdias_clean.columns, fill_value=0)
# Use transform()/predict() only: the sklearn objects were fit on the general
# population data and must not be re-fit on the customer data.
customers_scaled = sc.transform(customers)
print("Data Scaling completed")
principalComponents_customers = pca.transform(customers_scaled)
print("Dimensionality reduction complete")
customers_labels_KM = kmeans_final.predict(principalComponents_customers)
print("KMeans calculation complete")

# ### Step 3.3: Compare Customer Data to Demographics Data
#
# At this point, you have clustered data based on demographics of the general population of Germany, and seen how the customer data for a mail-order sales company maps onto those demographic clusters. In this final substep, you will compare the two cluster distributions to see where the strongest customer base for the company is.
#
# Consider the proportion of persons in each cluster for the general population, and the proportions for the customers. If we think the company's customer base to be universal, then the cluster assignment proportions should be fairly similar between the two. If there are only particular segments of the population that are interested in the company's products, then we should see a mismatch from one to the other. If there is a higher proportion of persons in a cluster for the customer data compared to the general population (e.g. 5% of persons are assigned to a cluster for the general population, but 15% of the customer data is closest to that cluster's centroid) then that suggests the people in that cluster to be a target audience for the company. On the other hand, the proportion of the data in a cluster being larger in the general population than the customer data (e.g. only 2% of customers closest to a population centroid that captures 6% of the data) suggests that group of persons to be outside of the target demographics.
#
# Take a look at the following points in this step:
#
# - Compute the proportion of data points in each cluster for the general population and the customer data. Visualizations will be useful here: both for the individual dataset proportions, but also to visualize the ratios in cluster representation between groups. Seaborn's [`countplot()`](https://seaborn.pydata.org/generated/seaborn.countplot.html) or [`barplot()`](https://seaborn.pydata.org/generated/seaborn.barplot.html) function could be handy.
# - Recall the analysis you performed in step 1.1.3 of the project, where you separated out certain data points from the dataset if they had more than a specified threshold of missing values. If you found that this group was qualitatively different from the main bulk of the data, you should treat this as an additional data cluster in this analysis. Make sure that you account for the number of data points in this subset, for both the general population and customer datasets, when making your computations!
# - Which cluster or clusters are overrepresented in the customer dataset compared to the general population? Select at least one such cluster and infer what kind of people might be represented by that cluster. Use the principal component interpretations from step 2.3 or look at additional components to help you make this inference. Alternatively, you can use the `.inverse_transform()` method of the PCA and StandardScaler objects to transform centroids back to the original data space and interpret the retrieved values directly.
# - Perform a similar investigation for the underrepresented clusters. Which cluster or clusters are underrepresented in the customer dataset compared to the general population, and what kinds of people are typified by these clusters?
# Convert general demographic cluster info to a DF
general_data_labels = np.array(azdias_labels_KM)
unique, counts = np.unique(general_data_labels, return_counts=True)
general_data_labels_unq = dict(zip(unique, counts))
gen_df = pd.DataFrame(list(general_data_labels_unq.items()), columns=['clusters', 'count'])
gen_df['percentage'] = (gen_df['count']/gen_df['count'].sum()) * 100

# Convert customer cluster info to a DF
customer_data_labels = np.array(customers_labels_KM)
unique, counts = np.unique(customer_data_labels, return_counts=True)
customer_data_labels_unq = dict(zip(unique, counts))
cust_df = pd.DataFrame(list(customer_data_labels_unq.items()), columns=['clusters', 'count'])
cust_df['percentage'] = (cust_df['count']/cust_df['count'].sum()) * 100

# Compare the proportion of data in each cluster for the customer data to the
# proportion of data in each cluster for the general population.
sns.set(style="darkgrid")
fig, ax = plt.subplots(1, 2, figsize=(35,20))
sns.barplot(x=gen_df['clusters'], y=gen_df['percentage'], ax=ax[0])
sns.barplot(x=cust_df['clusters'], y=cust_df['percentage'], ax=ax[1])

gen_df

cust_df

azdias_clean['clusters'] = general_data_labels
customers['clusters'] = customer_data_labels

# +
# What kinds of people are part of a cluster that is overrepresented in the
# customer data compared to the general population?

# If we look at the plots above we can see that cluster 0 is overrepresented compared to the general population
cust_cl0 = customers.loc[customers['clusters'] == 0]
# -

cust_cl0.head()

print(cust_cl0['MAINSTREAM'].value_counts())
print(cust_cl0['GEN40'].value_counts())
print(cust_cl0['GEN50'].value_counts())
print(cust_cl0['GEN60'].value_counts())
print(cust_cl0['GEN70'].value_counts())
print(cust_cl0['GEN80'].value_counts())
print(cust_cl0['GEN90'].value_counts())
print(cust_cl0['WEALTH'].value_counts())
print(cust_cl0['LIFE_STAGE'].value_counts())

# What kinds of people are part of a cluster that is underrepresented in the
# customer data compared to the general population?
# If we look at the plots above we can see that cluster 3 is underrepresented compared to the general population
cust_cl3 = customers.loc[customers['clusters'] == 3]
print(cust_cl3['MAINSTREAM'].value_counts())
print(cust_cl3['GEN40'].value_counts())
print(cust_cl3['GEN50'].value_counts())
print(cust_cl3['GEN60'].value_counts())
print(cust_cl3['GEN70'].value_counts())
print(cust_cl3['GEN80'].value_counts())
print(cust_cl3['GEN90'].value_counts())
print(cust_cl3['WEALTH'].value_counts())
print(cust_cl3['LIFE_STAGE'].value_counts())

# ### Discussion 3.3: Compare Customer Data to Demographics Data
#
# Cluster 0 is overrepresented in the customer dataset compared to the general demographic set, so this cluster is one of our target customer clusters.
# When we analyze the customers in this cluster:
# * They are more likely to be AVANTGARDE
# * They are more likely to belong to GEN50/60
# * Most of them belong to wealthy and prosperous households
# * Their life stage is most probably older families & mature couples
#
# Cluster 3 is underrepresented in the customer dataset compared to the general demographic set.
# So this cluster is outside of our target demographics.
# When we analyze the customers in this cluster:
# * They are more likely to be MAINSTREAM
# * They are more likely to belong to GEN40/50
# * Most of them belong to poorer households
# * Their life stage is most probably elders in retirement

# > Congratulations on making it this far in the project! Before you finish, make sure to check through the entire notebook from top to bottom to make sure that your analysis follows a logical flow and all of your findings are documented in **Discussion** cells. Once you've checked over all of your work, you should export the notebook as an HTML document to submit for evaluation. You can do this from the menu, navigating to **File -> Download as -> HTML (.html)**. You will submit both that document and this notebook for your project submission.
Project_3/Identify_Customer_Segments.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# ## Simple Linear Regression with NumPy

# In school, students are taught to draw lines like the following.
#
# $y=2x+1$
# <br>
#
# They're taught to pick two values for x and calculate the corresponding values for y using the equation. Then they draw a set of axes, plot the points, and then draw a line extending through the two dots on their axes.

# +
# numpy efficiently deals with numerical multi-dimensional arrays.
import numpy as np

# matplotlib is a plotting library, and pyplot is its easy-to-use module.
import matplotlib.pyplot as plt

# This just sets the default plot size to be bigger.
plt.rcParams['figure.figsize'] = (8, 6)

# +
# Draw some axes.
plt.plot([-1, 10], [0, 0], 'k-')
plt.plot([0, 0], [-1, 10], 'k-')

# Plot the blue and red dotted guide lines.
plt.plot([1, 1], [-1, 3], 'b:')
plt.plot([-1, 1], [3, 3], 'r:')

# Plot the two points (1,3) and (2,5).
plt.plot([1, 2], [3, 5], 'ko')

# Join them with an (extending) green line.
plt.plot([-1, 10], [-1, 21], 'g-')

# Set some reasonable plot limits.
plt.xlim([-1, 10])
plt.ylim([-1, 10])

# Show the plot.
plt.show()
# -

# Simple linear regression is about the opposite problem - what if you have some points and are looking for the equation? It's easy when the points are perfectly on a line already, but usually real-world data has some noise. The data might still look roughly linear, but isn't exactly so.
#
# ***

# ## Example (contrived and simulated)

# ### Scenario
# Suppose you are trying to weigh your suitcase to avoid an airline's extra charges. You don't have a weighing scales, but you do have a spring and some gym-style weights of masses 7KG, 14KG and 21KG. You attach the spring to the wall hook, and mark where the bottom of it hangs. You then hang the 7KG weight on the end and mark where the bottom of the spring is. You repeat this with the 14KG weight and the 21KG weight. Finally, you place your case hanging on the spring, and the spring hangs down halfway between the 7KG mark and the 14KG mark. Is your case over the 10KG limit set by the airline?
#
# ### Hypothesis
# When you look at the marks on the wall, it seems that the 0KG, 7KG, 14KG and 21KG marks are evenly spaced. You wonder if that means your case weighs 10.5KG. That is, you wonder if there is a linear relationship between the distance the spring's hook is from its resting position, and the mass on the end of it.
#
# ### Experiment
# You decide to experiment. You buy some new weights - a 1KG, a 2KG, a 3KG, all the way up to 20KG. You place them each in turn on the spring and measure the distance the spring moves from the resting position. You tabulate the data and plot them.
#
# ### Analysis
# Here we'll import the Python libraries we need for our investigations below.
#
# # numpy efficiently deals with numerical multi-dimensional arrays.
# import numpy as np
#
# # matplotlib is a plotting library, and pyplot is its easy-to-use module.
# import matplotlib.pyplot as plt
#
# # This just sets the default plot size to be bigger.
# plt.rcParams['figure.figsize'] = (8, 6)

# Ignore the next couple of lines where I fake up some data. I'll use the fact that I faked the data to explain some results later. Just pretend that w is an array containing the weight values and d are the corresponding distance measurements.
w = np.arange(0.0, 21.0, 1.0)
d = 5.0 * w + 10.0 + np.random.normal(0.0, 5.0, w.size)

# Let's have a look at w.
w

# Let's have a look at d.
d

# +
# Create the plot.
plt.plot(w, d, 'k.')

# Set some properties for the plot.
plt.xlabel('Weight (KG)')
plt.ylabel('Distance (CM)')

# Show the plot.
plt.show()
# -

# #### Model
# It looks like the data might indeed be linear. The points don't exactly fit on a straight line, but they are not far off it. We might put that down to some other factors, such as the air density, or errors, such as in our tape measure. Then we can go ahead and see what would be the best line to fit the data.

# #### Straight lines
# All straight lines can be expressed in the form $y = mx + c$. The number $m$ is the slope of the line. The slope is how much $y$ increases by when $x$ is increased by 1.0. The number $c$ is the y-intercept of the line. It's the value of $y$ when $x$ is 0.

# #### Fitting the model
# To fit a straight line to the data, we just need to pick values for $m$ and $c$. These are called the parameters of the model, and we want to pick the best values possible for the parameters. That is, the best parameter values given the data observed. Below we show various lines plotted over the data, with different values for $m$ and $c$.

# +
# Plot w versus d with black dots.
plt.plot(w, d, 'k.', label="Data")

# Overlay some lines on the plot.
x = np.arange(0.0, 21.0, 1.0)
plt.plot(x, 5.0 * x + 10.0, 'r-', label=r"$5x + 10$")
plt.plot(x, 6.0 * x + 5.0, 'g-', label=r"$6x + 5$")
plt.plot(x, 5.0 * x + 15.0, 'b-', label=r"$5x + 15$")

# Add a legend.
plt.legend()

# Add axis labels.
plt.xlabel('Weight (KG)')
plt.ylabel('Distance (CM)')

# Show the plot.
plt.show()
# -

# #### Calculating the cost
# You can see that each of these lines roughly fits the data. Which one is best, and is there another line that is better than all three? Is there a "best" line?
#
# It depends how you define the word best. Luckily, everyone seems to have settled on what best means. The best line is the one that minimises the following calculated value.
#
# $$\sum_i (y_i - mx_i - c)^2 $$
#
# Here $(x_i, y_i)$ is the $i^{th}$ point in the data set and $\sum_i$ means to sum over all points. The values of $m$ and $c$ are to be determined.
# We usually denote the above as $Cost(m, c)$.
#
# Where does the above calculation come from? It's easy to explain the part in the brackets $(y_i - mx_i - c)$. The corresponding value to $x_i$ in the dataset is $y_i$. These are the measured values. The value $m x_i + c$ is what the model says $y_i$ should have been. The difference between the value that was observed ($y_i$) and the value that the model gives ($m x_i + c$) is $y_i - mx_i - c$.
#
# Why square that value? Well, note that the value could be positive or negative, and you sum over all of these values. If we allow the values to be positive or negative, then the positives could cancel the negatives. So, the natural thing to do is to take the absolute value $\mid y_i - m x_i - c \mid$. Well, it turns out that absolute values are a pain to deal with, and it was decided to just square the quantity instead, as the square of a number is always positive. There are pros and cons to using the square instead of the absolute value, but the square is used. This is usually called least squares fitting.

# +
# Calculate the cost of the lines above for the data above.
cost = lambda m,c: np.sum([(d[i] - m * w[i] - c)**2 for i in range(w.size)])

print("Cost with m = %5.2f and c = %5.2f: %8.2f" % (5.0, 10.0, cost(5.0, 10.0)))
print("Cost with m = %5.2f and c = %5.2f: %8.2f" % (6.0, 5.0, cost(6.0, 5.0)))
print("Cost with m = %5.2f and c = %5.2f: %8.2f" % (5.0, 15.0, cost(5.0, 15.0)))
# -

# #### Minimising the cost
# We want to calculate values of $m$ and $c$ that give the lowest value for the cost above.
# For our given data set we can plot the cost value/function.
# Recall that the cost is:
#
# $$ Cost(m, c) = \sum_i (y_i - mx_i - c)^2 $$
#
# This is a function of two variables, $m$ and $c$, so a plot of it is three dimensional. See the **Advanced** section below for the plot.
#
# In the case of fitting a two-dimensional line to a few data points, we can easily calculate exactly the best values of $m$ and $c$.
# Some of the details are discussed in the **Advanced** section, as they involve calculus, but the resulting code is straightforward.
# We first calculate the mean (average) values of our $x$ values and that of our $y$ values. Then we subtract the mean of $x$ from each of the $x$ values, and the mean of $y$ from each of the $y$ values.
# Then we take the *dot product* of the new $x$ values and the new $y$ values and divide it by the dot product of the new $x$ values with themselves. That gives us $m$, and we use $m$ to calculate $c$.
#
# Remember that in our dataset $x$ is called $w$ (for weight) and $y$ is called $d$ (for distance). We calculate $m$ and $c$ below.

# ***Calculus is used to find the values of $m$ and $c$ that give the minimum value of the cost for the given data.***

# +
# Calculate the best values for m and c.

# First calculate the means (a.k.a. averages) of w and d.
w_avg = np.mean(w)
d_avg = np.mean(d)

# Subtract means from w and d.
w_zero = w - w_avg
d_zero = d - d_avg

# The best m is found by the following calculation.
m = np.sum(w_zero * d_zero) / np.sum(w_zero * w_zero)

# Use m from above to calculate the best c.
c = d_avg - m * w_avg

print("m is %8.6f and c is %6.6f." % (m, c))
# -

# Note that numpy has a function that will perform this calculation for us, called polyfit. It can also be used to fit polynomials of higher degree.

np.polyfit(w, d, 1)

# #### Best fit line
# So, the best values for $m$ and $c$ given our data and using least squares fitting are about 4.95 for $m$ and about 11.13 for $c$. We plot this line on top of the data below.

# +
# Plot the best fit line.
plt.plot(w, d, 'k.', label='Original data')
plt.plot(w, m * w + c, 'b-', label='Best fit line')

# Add axis labels and a legend.
plt.xlabel('Weight (KG)')
plt.ylabel('Distance (CM)')
plt.legend()

# Show the plot.
plt.show()
# -

# Note that the $Cost$ of the best $m$ and best $c$ is not zero in this case.

print("Cost with m = %5.2f and c = %5.2f: %8.2f" % (m, c, cost(m, c)))

# #### Summary
# In this notebook we did:
#
# 1. Investigated the data.
# 2. Picked a model.
# 3. Picked a cost function.
# 4. Estimated the model parameter values that minimised our cost function.
#
# ***
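# #### Appendix: where the formulas for $m$ and $c$ come from
# This is only a brief summary of the standard least-squares derivation (the **Advanced** section referred to above is not part of this notebook). Setting the partial derivatives of the cost to zero,
#
# $$ \frac{\partial Cost}{\partial m} = -2 \sum_i x_i (y_i - m x_i - c) = 0, \qquad \frac{\partial Cost}{\partial c} = -2 \sum_i (y_i - m x_i - c) = 0, $$
#
# and solving the pair of equations gives exactly the quantities the code computes:
#
# $$ m = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sum_i (x_i - \bar{x})^2}, \qquad c = \bar{y} - m \bar{x}, $$
#
# where $\bar{x}$ and $\bar{y}$ are the means of the $x$ and $y$ values.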
myExercises2/simple-linear-regression.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/parekhakhil/pyImageSearch/blob/main/1002_hyperparameter_tuning_cv_Search.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + [markdown] id="8ntZ1AkXZIxY" # # # This notebook is associated with the [Grid search hyperparameter tuning with scikit-learn ( GridSearchCV )](https://www.pyimagesearch.com/2021/05/24/grid-search-hyperparameter-tuning-with-scikit-learn-gridsearchcv/) blog post published on 2021-05-24. # # Only the code for the blog post is here. Most codeblocks have a 1:1 relationship with what you find in the blog post with two exceptions: (1) Python classes are not separate files as they are typically organized with PyImageSearch projects, and (2) Command Line Argument parsing is replaced with an `args` dictionary that you can manipulate as needed. # # We recommend that you execute (press ▶️) the code block-by-block, as-is, before adjusting parameters and `args` inputs. Once you've verified that the code is working, you are welcome to hack with it and learn from manipulating inputs, settings, and parameters. For more information on using Jupyter and Colab, please refer to these resources: # # * [Jupyter Notebook User Interface](https://jupyter-notebook.readthedocs.io/en/stable/notebook.html#notebook-user-interface) # * [Overview of Google Colaboratory Features](https://colab.research.google.com/notebooks/basic_features_overview.ipynb) # # # Happy hacking! 
# # # <hr> # # # + [markdown] id="NFhAzQB3aNMa" # ### Download the code zip file # + id="7y0LG1EuaRlB" # !wget https://pyimagesearch-code-downloads.s3-us-west-2.amazonaws.com/hyperparameter-tuning-cv/hyperparameter-tuning-cv.zip # !unzip -qq hyperparameter-tuning-cv.zip # %cd hyperparameter-tuning-cv # + [markdown] id="_SgTVT3HagGZ" # ## Blog Post Code # + [markdown] id="wcrOk6pURp50" # ### Import Packages # + id="VJaCNlDDRz6d" # import the necessary packages from sklearn.model_selection import GridSearchCV from sklearn.metrics import classification_report from sklearn.svm import SVC from sklearn.model_selection import train_test_split from skimage import feature from imutils import paths import numpy as np import time import cv2 import os # + [markdown] id="VZ3bW6aU2a6v" # ### Our Local Binary Pattern (LBP) descriptor # + id="bS9nQNo02dIs" class LocalBinaryPatterns: def __init__(self, numPoints, radius): # store the number of points and radius self.numPoints = numPoints self.radius = radius def describe(self, image, eps=1e-7): # compute the Local Binary Pattern representation # of the image, and then use the LBP representation # to build the histogram of patterns lbp = feature.local_binary_pattern(image, self.numPoints, self.radius, method="uniform") (hist, _) = np.histogram(lbp.ravel(), bins=np.arange(0, self.numPoints + 3), range=(0, self.numPoints + 2)) # normalize the histogram hist = hist.astype("float") hist /= (hist.sum() + eps) # return the histogram of Local Binary Patterns return hist # + [markdown] id="Jppw5-Bd56H-" # ### Implementing our grid search for hyperparameter tuning # + id="okM7Bpyeq8Kc" # # construct the argument parser and parse the arguments # ap = argparse.ArgumentParser() # ap.add_argument("-d", "--dataset", required=True, # help="path to input dataset") # args = vars(ap.parse_args()) # since we are using Jupyter Notebooks we can replace our argument # parsing code with *hard coded* arguments and values args = { "dataset": "texture_dataset" } # + id="J43Rixrm3Duo" # grab the image paths in the input dataset directory imagePaths = list(paths.list_images(args["dataset"])) # initialize the local binary patterns descriptor along with # the data and label lists print("[INFO] extracting features...") desc = LocalBinaryPatterns(24, 8) data = [] labels = [] # + id="zD54-goR3IJV" # loop over the dataset of images for imagePath in imagePaths: # load the image, convert it to grayscale, and quantify it # using LBPs image = cv2.imread(imagePath) gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) hist = desc.describe(gray) # extract the label from the image path, then update the # label and data lists labels.append(imagePath.split(os.path.sep)[-2]) data.append(hist) # partition the data into training and testing splits using 75% of # the data for training and the remaining 25% for testing print("[INFO] constructing training/testing split...") (trainX, testX, trainY, testY) = train_test_split(data, labels, random_state=22, test_size=0.25) # + id="_A3Hycjs3NkY" # construct the set of hyperparameters to tune parameters = [ {"kernel": ["linear"], "C": [0.0001, 0.001, 0.1, 1, 10, 100, 1000]}, {"kernel": ["poly"], "degree": [2, 3, 4], "C": [0.0001, 0.001, 0.1, 1, 10, 100, 1000]}, {"kernel": ["rbf"], "gamma": ["auto", "scale"], "C": [0.0001, 0.001, 0.1, 1, 10, 100, 1000]} ] # + id="E1iCNr2K3WEI" # tune the hyperparameters via a cross-validated grid search print("[INFO] tuning hyperparameters via grid search") grid = GridSearchCV(estimator=SVC(), param_grid=parameters, n_jobs=-1) start = 
time.time() grid.fit(trainX, trainY) end = time.time() # show the grid search information print("[INFO] grid search took {:.2f} seconds".format( end - start)) print("[INFO] grid search best score: {:.2f}%".format( grid.best_score_ * 100)) print("[INFO] grid search best parameters: {}".format( grid.best_params_)) # + id="NbkALIBP3Z-p" # grab the best model and evaluate it print("[INFO] evaluating...") model = grid.best_estimator_ predictions = model.predict(testX) print(classification_report(testY, predictions)) # + [markdown] id="4ogkNauArL6u" # For a detailed walkthrough of the concepts and code, be sure to refer to the full tutorial, [*Grid search hyperparameter tuning with scikit-learn ( GridSearchCV )*](https://www.pyimagesearch.com/2021/05/24/grid-search-hyperparameter-tuning-with-scikit-learn-gridsearchcv/) published on 2021-05-24.
1002_hyperparameter_tuning_cv_Search.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Tutorial for AQCEL # ・Quantum Gate Pattern recognition (level = 1, 2, 3) # <br> # ・Circuit Optimization (level = 1, 2) # <br> # + import sys sys.path.append('..') import warnings warnings.simplefilter('ignore') from qiskit import(QuantumCircuit, QuantumRegister, ClassicalRegister) # - # ## Example Circuit # + q = QuantumRegister(6, 'q') c = ClassicalRegister( 6, 'c') circ = QuantumCircuit(q,c) circ.toffoli(0,4,1) circ.toffoli(2,3,0) circ.x(2) circ.x(3) circ.x(2) circ.x(3) circ.toffoli(2,3,0) circ.toffoli(0,4,1) circ.x(4) circ.x(1) circ.toffoli(0,4,1) circ.toffoli(2,3,0) circ.h(2) circ.h(3) circ.toffoli(2,3,0) circ.toffoli(0,4,1) circ.x(5) circ.cry(1,2,0) circ.toffoli(1,5,2) circ.toffoli(0,4,1) circ.toffoli(2,3,0) circ.h(2) circ.h(3) circ.toffoli(2,3,0) circ.toffoli(0,4,1) circ.toffoli(3,4,1) circ.toffoli(1,5,2) circ.cry(1,2,0) circ.toffoli(1,5,2) circ.toffoli(3,4,1) circ.x(2) circ.ry(1,0) circ.x(2) circ.toffoli(3,4,1) circ.toffoli(1,5,2) circ.cry(2,2,0) circ.toffoli(1,5,2) circ.toffoli(3,4,1) circ.h(1) circ.cry(2,2,0) circ.toffoli(1,5,2) circ.cx(3,4) circ.cx(1,5) circ.cry(1,2,0) circ.cx(1,5) circ.cx(3,4) circ.x(2) circ.cx(1,0) circ.toffoli(2,3,0) circ.x(2) circ.cx(3,4) circ.cx(1,5) circ.cry(2,2,0) circ.cx(1,5) circ.cx(3,4) circ.toffoli(0,4,1) circ.toffoli(2,3,0) circ.x(2) circ.x(3) circ.x(2) circ.x(3) circ.toffoli(2,3,0) circ.toffoli(0,4,1) for i in range(6): circ.measure(i,i) circ.draw(output='mpl',fold=100) # - # ## Pattern recognition from aqcel.optimization import pattern example2 = pattern.recognition( circuit=circ, level=2, n_patterns=4, min_num_nodes=4, max_num_nodes=8, min_n_repetition=2) circ_max, designated_gates2 = example2.quantum_pattern() circ_max.draw(output='mpl') # ## Circuit Optimization level1 from aqcel.optimization import slim print(circ.depth(), ',', circ.__len__()) print('Gate counts:', circ.count_ops()) example1 = slim.circuit_optimization( circuit=circ, threshold=None) circ_op = example1.slim() print(circ_op.depth(), ',', circ_op.__len__()) print('Gate counts:', circ_op.count_ops()) circ_op.draw(output='mpl',fold=100) # ## Circuit Optimization level 2 from aqcel.optimization import optimization example3 = optimization.optimizer( circuit=circ, slim_level=2, pattern_level =2, n_patterns=4, min_num_nodes=4, max_num_nodes=8, min_n_repetition=2) circ_op2 = example3.slimer() circ_op2.draw(output='mpl',fold=30)
demo/tutorial.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:py3-TF2.0] # language: python # name: conda-env-py3-TF2.0-py # --- # # Practical example. Audiobooks # ## Problem # # You are given data from an Audiobook app. Logically, it relates only to the audio versions of books. Each customer in the database has made a purchase at least once, that's why he/she is in the database. We want to create a machine learning algorithm based on our available data that can predict if a customer will buy again from the Audiobook company. # # The main idea is that if a customer has a low probability of coming back, there is no reason to spend any money on advertizing to him/her. If we can focus our efforts ONLY on customers that are likely to convert again, we can make great savings. Moreover, this model can identify the most important metrics for a customer to come back again. Identifying new customers creates value and growth opportunities. # # You have a .csv summarizing the data. There are several variables: Customer ID, Book length in mins_avg (average of all purchases), Book length in minutes_sum (sum of all purchases), Price Paid_avg (average of all purchases), Price paid_sum (sum of all purchases), Review (a Boolean variable), Review (out of 10), Total minutes listened, Completion (from 0 to 1), Support requests (number), and Last visited minus purchase date (in days). # # So these are the inputs (excluding customer ID, as it is completely arbitrary. It's more like a name, than a number). # # The targets are a Boolean variable (so 0, or 1). We are taking a period of 2 years in our inputs, and the next 6 months as targets. So, in fact, we are predicting if: based on the last 2 years of activity and engagement, a customer will convert in the next 6 months. 6 months sounds like a reasonable time. If they don't convert after 6 months, chances are they've gone to a competitor or didn't like the Audiobook way of digesting information. # # The task is simple: create a machine learning algorithm, which is able to predict if a customer will buy again. # # This is a classification problem with two classes: won't buy and will buy, represented by 0s and 1s. # # Good luck! 
# ## Create the machine learning algorithm # # # ### Import the relevant libraries # we must import the libraries once again since we haven't imported them in this file import numpy as np import tensorflow as tf # ### Data # + # let's create a temporary variable npz, where we will store each of the three Audiobooks datasets npz = np.load('Audiobooks_data_train.npz') # we extract the inputs using the keyword under which we saved them # to ensure that they are all floats, let's also take care of that train_inputs = npz['inputs'].astype(float) # targets must be int because of sparse_categorical_crossentropy (we want to be able to smoothly one-hot encode them) train_targets = npz['targets'].astype(int) # we load the validation data in the temporary variable npz = np.load('Audiobooks_data_validation.npz') # we can load the inputs and the targets in the same line validation_inputs, validation_targets = npz['inputs'].astype(float), npz['targets'].astype(int) # we load the test data in the temporary variable npz = np.load('Audiobooks_data_test.npz') # we create 2 variables that will contain the test inputs and the test targets test_inputs, test_targets = npz['inputs'].astype(float), npz['targets'].astype(int) # -
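# ### Model (illustrative sketch)
# This part of the notebook stops at data loading; the cell below is only a minimal sketch of the kind of classifier this data could feed, assuming two hidden layers with an arbitrary width of 50 units and the 2 output classes described above.

output_size = 2
hidden_layer_size = 50  # arbitrary choice for this sketch

model = tf.keras.Sequential([
    tf.keras.layers.Dense(hidden_layer_size, activation='relu'),
    tf.keras.layers.Dense(hidden_layer_size, activation='relu'),
    tf.keras.layers.Dense(output_size, activation='softmax'),
])

# sparse_categorical_crossentropy matches the integer targets loaded above
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

model.fit(train_inputs, train_targets,
          batch_size=100, epochs=5,
          validation_data=(validation_inputs, validation_targets),
          verbose=2)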
Resources/Data-Science/Deep-Learning/AudioBooks NN/TensorFlow_Audiobooks_Machine_Learning_Part1_with_comments.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: srs # language: python # name: srs # --- # # Table of contents # # 1. [Load the dataset](#load_the_dataset) # 2. [Split the dataset](#split_the_dataset) # 3. [Fitting the recommender](#fitting) # 4. [Sequential evaluation](#seq_evaluation) # 4.1 [Evaluation with sequentially revaeled user profiles](#eval_seq_rev) # 4.2 [Evaluation with "static" user profiles](#eval_static) # 5. [Analysis of next-item recommendation](#next-item) # 5.1 [Evaluation with different recommendation list lengths](#next-item_list_length) # 5.2 [Evaluation with different user profile lengths](#next-item_profile_length) import numpy as np import pandas as pd import matplotlib.pyplot as plt # %matplotlib inline from util.data_utils import create_seq_db_filter_top_k, sequences_to_spfm_format from util.split import last_session_out_split from util.metrics import precision, recall, mrr from util import evaluation from recommenders.FPMCRecommender import FPMCRecommender import datetime def get_test_sequences_and_users(test_data, given_k, train_users): # we can run evaluation only over sequences longer than abs(LAST_K) mask = test_data['sequence'].map(len) > abs(given_k) mask &= test_data['user_id'].isin(train_users) test_sequences = test_data.loc[mask, 'sequence'].values test_users = test_data.loc[mask, 'user_id'].values return test_sequences, test_users # <a id='load_the_dataset'></a> # # 1. Load the dataset # # For this hands-on session we will use a dataset of user-listening sessions crawled from [last.fm](https://www.last.fm/). In detail, we will use a subset of the following dataset: # # * 30Music listening and playlists dataset, Turrin et al., ACM RecSys 2015 ([paper](https://home.deib.polimi.it/pagano/portfolio/papers/30Musiclisteningandplaylistsdataset.pdf)) # + # unzip the dataset, if you haven't already done it # # ! unzip datasets/sessions.zip -d datasets # + dataset_path = 'datasets/sessions.csv' # load this sample if you experience a severe slowdown with the previous dataset #dataset_path = 'datasets/sessions_sample_10.csv' # for the sake of speed, let's keep only the top-1k most popular items in the last month dataset = create_seq_db_filter_top_k(path=dataset_path, topk=1000, last_months=1) # - # Let's see at how the dataset looks like dataset.head() # Let's show some statistics about the dataset from collections import Counter cnt = Counter() dataset.sequence.map(cnt.update); # + sequence_length = dataset.sequence.map(len).values n_sessions_per_user = dataset.groupby('user_id').size() print('Number of items: {}'.format(len(cnt))) print('Number of users: {}'.format(dataset.user_id.nunique())) print('Number of sessions: {}'.format(len(dataset)) ) print('\nSession length:\n\tAverage: {:.2f}\n\tMedian: {}\n\tMin: {}\n\tMax: {}'.format( sequence_length.mean(), np.quantile(sequence_length, 0.5), sequence_length.min(), sequence_length.max())) print('Sessions per user:\n\tAverage: {:.2f}\n\tMedian: {}\n\tMin: {}\n\tMax: {}'.format( n_sessions_per_user.mean(), np.quantile(n_sessions_per_user, 0.5), n_sessions_per_user.min(), n_sessions_per_user.max())) # - print('Most popular items: {}'.format(cnt.most_common(5))) # <a id='split_the_dataset'></a> # # 2. Split the dataset # For simplicity, let's split the dataset by assigning the **last session** of every user to the **test set**, and **all the previous** ones to the **training set**. 
train_data, test_data = last_session_out_split(dataset) print("Train sessions: {} - Test sessions: {}".format(len(train_data), len(test_data))) # <a id='fitting'></a> # # 3. Fitting the recommender # # Here we fit the recommendation algorithm over the sessions in the training set. # This recommender is based on the following paper: # # _<NAME>., <NAME>., & <NAME>. (2010). Factorizing personalized Markov chains for next-basket recommendation. Proceedings of the 19th International Conference on World Wide Web - WWW ’10, 811_ # # In short, FPMC factorizes a personalized order-1 transition tensor using Tensor Factorization with a pairwise loss function akin to BPR (Bayesian Personalized Ranking). # # <img src="images/fpmc.png" width="200px" /> # # TF allows imputing values for the missing transitions between items for each user. For this reason, FPMC can be used for generating _personalized_ recommendations in session-aware recommenders as well. # # In this notebook, you will be able to change the number of latent factors and a few other learning hyper-parameters and see the impact on the recommendation quality. # # The class `FPMCRecommender` has the following initialization hyper-parameters: # * `n_factor`: (optional) the number of latent factors # * `learn_rate`: (optional) the learning rate # * `regular`: (optional) the L2 regularization coefficient # * `n_epoch`: (optional) the number of training epochs # * `n_neg`: (optional) the number of negative samples used in BPR learning # recommender = FPMCRecommender(n_factor=16, n_epoch=5) recommender.fit(train_data) # <a id='seq_evaluation'></a> # # # 4. Sequential evaluation # # In the evaluation of sequence-aware recommenders, each sequence in the test set is split into: # - the _user profile_, used to compute recommendations, is composed of the first *k* events in the sequence; # - the _ground truth_, used for performance evaluation, is composed of the remainder of the sequence. # # In the cells below, you can control the dimension of the _user profile_ by assigning a **positive** value to `GIVEN_K`, which corresponds to the number of events from the beginning of the sequence that will be assigned to the initial user profile. This ensures that each user profile in the test set will have exactly the same initial size, but the size of the ground truth will change for every sequence. # # Alternatively, by assigning a **negative** value to `GIVEN_K`, you will set the initial size of the _ground truth_. In this way the _ground truth_ will have the same size for all sequences, but the dimension of the user profile will differ. METRICS = {'precision':precision, 'recall':recall, 'mrr': mrr} TOPN = 10 # length of the recommendation list # <a id='eval_seq_rev'></a> # ## 4.1 Evaluation with sequentially revealed user-profiles # # Here we evaluate the quality of the recommendations in a setting in which user profiles are revealed _sequentially_. # # The _user profile_ starts from the first `GIVEN_K` events (or, alternatively, from the last `-GIVEN_K` events if `GIVEN_K<0`). # The recommendations are evaluated against the next `LOOK_AHEAD` events (the _ground truth_). # The _user profile_ is next expanded to the next `STEP` events, the ground truth is scrolled forward accordingly, and the evaluation continues until the sequence ends.
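# To make the scrolling concrete, here is a small illustrative sketch (not the library implementation) showing how a single toy sequence is carved into profile and ground truth at each evaluation step:

# toy example of how GIVEN_K, LOOK_AHEAD and STEP slice one test sequence
toy_seq = ['a', 'b', 'c', 'd', 'e']
given_k, look_ahead, step = 1, 1, 1
pos = given_k
while pos + look_ahead <= len(toy_seq):
    profile = toy_seq[:pos]                       # revealed user profile
    ground_truth = toy_seq[pos:pos + look_ahead]  # events the recommendations are scored against
    print(profile, '->', ground_truth)
    pos += step                                   # scroll forward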
# # In typical **next-item recommendation**, we start with `GIVEN_K=1`, generate a set of **alternatives** that will be evaluated against the next event in the sequence (`LOOK_AHEAD=1`), move forward by one step (`STEP=1`) and repeat until the sequence ends. # # You can set the `LOOK_AHEAD='all'` to see what happens if you had to recommend a **whole sequence** instead of a set of alternatives to a user. # # NOTE: Metrics are averaged over each sequence first, then averaged over all test sequences. # # ** (TODO) Try out with different evaluation settings to see how the recommendation quality changes. ** # # # ![](gifs/sequential_eval.gif) # GIVEN_K=1, LOOK_AHEAD=1, STEP=1 corresponds to the classical next-item evaluation GIVEN_K = 1 LOOK_AHEAD = 1 STEP=1 # + test_sequences, test_users = get_test_sequences_and_users(test_data, GIVEN_K, train_data['user_id'].values) # we need user ids now! print('{} sequences available for evaluation ({} users)'.format(len(test_sequences), len(np.unique(test_users)))) results = evaluation.sequential_evaluation(recommender, test_sequences=test_sequences, users=test_users, given_k=GIVEN_K, look_ahead=LOOK_AHEAD, evaluation_functions=METRICS.values(), top_n=TOPN, scroll=True, # scrolling averages metrics over all profile lengths step=STEP) # - print('Sequential evaluation (GIVEN_K={}, LOOK_AHEAD={}, STEP={})'.format(GIVEN_K, LOOK_AHEAD, STEP)) for mname, mvalue in zip(METRICS.keys(), results): print('\t{}@{}: {:.4f}'.format(mname, TOPN, mvalue)) # <a id='eval_static'></a> # ## 4.2 Evaluation with "static" user-profiles # # Here we evaluate the quality of the recommendations in a setting in which user profiles are instead _static_. # # The _user profile_ starts from the first `GIVEN_K` events (or, alternatively, from the last `-GIVEN_K` events if `GIVEN_K<0`). # The recommendations are evaluated against the next `LOOK_AHEAD` events (the _ground truth_). # # The user profile is *not extended* and the ground truth *doesn't move forward*. # This allows us to obtain "snapshots" of the recommendation performance for different user profile and ground truth lengths. # # Also here you can set the `LOOK_AHEAD='all'` to see what happens if you had to recommend a **whole sequence** instead of a set of alternatives to a user. # # **(TODO) Try out with different evaluation settings to see how the recommendation quality changes.** GIVEN_K = 1 LOOK_AHEAD = 'all' STEP=1 # + test_sequences, test_users = get_test_sequences_and_users(test_data, GIVEN_K, train_data['user_id'].values) # we need user ids now! print('{} sequences available for evaluation ({} users)'.format(len(test_sequences), len(np.unique(test_users)))) results = evaluation.sequential_evaluation(recommender, test_sequences=test_sequences, users=test_users, given_k=GIVEN_K, look_ahead=LOOK_AHEAD, evaluation_functions=METRICS.values(), top_n=TOPN, scroll=False # notice that scrolling is disabled! ) # - print('Sequential evaluation (GIVEN_K={}, LOOK_AHEAD={}, STEP={})'.format(GIVEN_K, LOOK_AHEAD, STEP)) for mname, mvalue in zip(METRICS.keys(), results): print('\t{}@{}: {:.4f}'.format(mname, TOPN, mvalue)) # <a id='next-item'></a> # ## 5. Analysis of next-item recommendation # # Here we propose to analyse the performance of the recommender system in the scenario of *next-item recommendation* over the following dimensions: # # * the *length* of the **recommendation list**, and # * the *length* of the **user profile**.
# # NOTE: This evaluation is by no means exhaustive, as different the hyper-parameters of the recommendation algorithm should be *carefully tuned* before drawing any conclusions. Unfortunately, given the time constraints for this tutorial, we had to leave hyper-parameter tuning out. A very useful reference about careful evaluation of (session-based) recommenders can be found at: # # * Evaluation of Session-based Recommendation Algorithms, Ludewig and Jannach, 2018 ([paper](https://arxiv.org/abs/1803.09587)) # <a id='next-item_list_length'></a> # ### 5.1 Evaluation for different recommendation list lengths GIVEN_K = 1 LOOK_AHEAD = 1 STEP = 1 topn_list = [1, 5, 10, 20, 50, 100] # ensure that all sequences have the same minimum length test_sequences, test_users = get_test_sequences_and_users(test_data, GIVEN_K, train_data['user_id'].values) # we need user ids now! print('{} sequences available for evaluation ({} users)'.format(len(test_sequences), len(np.unique(test_users)))) # + res_list = [] for topn in topn_list: print('Evaluating recommendation lists with length: {}'.format(topn)) res_tmp = evaluation.sequential_evaluation(recommender, test_sequences=test_sequences, users=test_users, given_k=GIVEN_K, look_ahead=LOOK_AHEAD, evaluation_functions=METRICS.values(), top_n=topn, scroll=True, # here we average over all profile lengths step=STEP) mvalues = list(zip(METRICS.keys(), res_tmp)) res_list.append((topn, mvalues)) # - # show separate plots per metric fig, axes = plt.subplots(nrows=1, ncols=len(METRICS), figsize=(15,5)) res_list_t = list(zip(*res_list)) for midx, metric in enumerate(METRICS): mvalues = [res_list_t[1][j][midx][1] for j in range(len(res_list_t[1]))] ax = axes[midx] ax.plot(topn_list, mvalues) ax.set_title(metric) ax.set_xticks(topn_list) ax.set_xlabel('List length') # <a id='next-item_profile_length'></a> # ### 5.2 Evaluation for different user profile lengths given_k_list = [1, 2, 3, 4] LOOK_AHEAD = 1 STEP = 1 TOPN = 20 # + res_list = [] for gk in given_k_list: print('Evaluating profiles having length: {}'.format(gk)) res_tmp = evaluation.sequential_evaluation(recommender, test_sequences=test_sequences, users=test_users, given_k=gk, look_ahead=LOOK_AHEAD, evaluation_functions=METRICS.values(), top_n=TOPN, scroll=False, # here we stop at each profile length step=STEP) mvalues = list(zip(METRICS.keys(), res_tmp)) res_list.append((gk, mvalues)) # - # show separate plots per metric fig, axes = plt.subplots(nrows=1, ncols=len(METRICS), figsize=(15,5)) res_list_t = list(zip(*res_list)) for midx, metric in enumerate(METRICS): mvalues = [res_list_t[1][j][midx][1] for j in range(len(res_list_t[1]))] ax = axes[midx] ax.plot(given_k_list, mvalues) ax.set_title(metric) ax.set_xticks(given_k_list) ax.set_xlabel('Profile length')
03_FPMC.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python (reco_base) # language: python # name: reco_base # --- # <i>Copyright (c) Microsoft Corporation. All rights reserved.</i> # # <i>Licensed under the MIT License.</i> # # LightFM - hybrid matrix factorisation on MovieLens (Python, CPU) # This notebook explains the concept of a hybrid matrix factorisation based model for recommendation, it also outlines the steps to construct a pure matrix factorisation and a hybrid models using the [LightFM](https://github.com/lyst/lightfm) package. It also demonstrates how to extract both user and item affinity from a fitted hybrid model. # # ## 1. Hybrid matrix factorisation model # # ### 1.1 Background # # In general, most recommendation models can be divided into two categories: # - Content based model, # - Collaborative filtering model. # # The content-based model recommends based on similarity of the items and/or users using their description/metadata/profile. On the other hand, collaborative filtering model (discussion is limited to matrix factorisation approach in this notebook) computes the latent factors of the users and items. It works based on the assumption that if a group of people expressed similar opinions on an item, these peole would tend to have similar opinions on other items. For further background and detailed explanation between these two approaches, the reader can refer to machine learning literatures [3, 4]. # # The choice between the two models is largely based on the data availability. For example, the collaborative filtering model is usually adopted and effective when sufficient ratings/feedbacks have been recorded for a group of users and items. # # However, if there is a lack of ratings, content based model can be used provided that the metadata of the users and items are available. This is also the common approach to address the cold-start issues, where there are insufficient historical collaborative interactions available to model new users and/or items. # # <!-- In addition, most collaborative filtering models only consume explicit ratings e.g. movie # # **NOTE** add stuff about implicit and explicit ratings --> # # ### 1.2 Hybrid matrix factorisation algorithm # # In view of the above problems, there have been a number of proposals to address the cold-start issues by combining both content-based and collaborative filtering approaches. The hybrid matrix factorisation model is among one of the solutions proposed [1]. # # In general, most hybrid approaches proposed different ways of assessing and/or combining the feature data in conjunction with the collaborative information. # # ### 1.3 LightFM package # # LightFM is a Python implementation of a hybrid recommendation algorithms for both implicit and explicit feedbacks [1]. # # It is a hybrid content-collaborative model which represents users and items as linear combinations of their content features’ latent factors. The model learns **embeddings or latent representations of the users and items in such a way that it encodes user preferences over items**. These representations produce scores for every item for a given user; items scored highly are more likely to be interesting to the user. # # The user and item embeddings are estimated for every feature, and these features are then added together to be the final representations for users and items. 
# # For example, for user i, the model retrieves the i-th row of the feature matrix to find the features with non-zero weights. The embeddings for these features will then be added together to become the user representation e.g. if user 10 has weight 1 in the 5th column of the user feature matrix, and weight 3 in the 20th column, user 10’s representation is the sum of the embeddings for the 5th and the 20th features, each multiplied by its corresponding weight. The representation for each item is computed in the same way. # # #### 1.3.1 Modelling approach # # Let $U$ be the set of users and $I$ be the set of items, and each user can be described by a set of user features $f_{u} \subset F^{U}$ whilst each item can be described by item features $f_{i} \subset F^{I}$. Both $F^{U}$ and $F^{I}$ are all the features which fully describe all users and items. # # The LightFM model operates on binary feedback, so the ratings are normalised into two groups. The user-item interaction pairs $(u,i) \in U\times I$ are the union of positive (favourable reviews) $S^+$ and negative interactions (negative reviews) $S^-$ for explicit ratings. For implicit feedback, these can be the observed and not observed interactions respectively. # # For each user and item feature, their embeddings are $e_{f}^{U}$ and $e_{f}^{I}$ respectively. Furthermore, each feature also has a scalar bias term ($b_U^f$ for user and $b_I^f$ for item features). The embeddings (latent representations) of user $u$ and item $i$ are the sums of their respective features’ latent vectors: # # $$ # q_{u} = \sum_{j \in f_{u}} e_{j}^{U} # $$ # # $$ # p_{i} = \sum_{j \in f_{i}} e_{j}^{I} # $$ # # Similarly, the biases for user $u$ and item $i$ are the sums of their respective bias vectors. These variables capture the variation in behaviour across users and items: # # $$ # b_{u} = \sum_{j \in f_{u}} b_{j}^{U} # $$ # # $$ # b_{i} = \sum_{j \in f_{i}} b_{j}^{I} # $$ # # In LightFM, the representation for each user/item is a linear weighted sum of its feature vectors. # # The prediction for user $u$ and item $i$ can be modelled as the sigmoid of the dot product of the user and item vectors, adjusted by their feature biases, as follows: # # $$ # \hat{r}_{ui} = \sigma (q_{u} \cdot p_{i} + b_{u} + b_{i}) # $$ # # As LightFM is constructed to predict binary outcomes e.g. $S^+$ and $S^-$, the function $\sigma()$ is based on the [sigmoid function](https://mathworld.wolfram.com/SigmoidFunction.html). # # The LightFM algorithm estimates interaction latent vectors and bias terms for the features. For model fitting, the cost function of the model consists of maximising the likelihood of the data conditional on the parameters described above using stochastic gradient descent. The likelihood can be expressed as follows: # # $$ # L = \prod_{(u,i) \in S^+}\hat{r}_{ui} \times \prod_{(u,i) \in S^-}(1 - \hat{r}_{ui}) # $$ # # Note that if the feature latent vectors are not available, the algorithm will behave like a [logistic matrix factorisation model](http://stanford.edu/~rezab/nips2014workshop/submits/logmat.pdf). # ## 2.
Movie recommender with LightFM using only explicit feedbacks # ### 2.1 Import libraries # + import sys import os import itertools import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns import lightfm from lightfm import LightFM from lightfm.data import Dataset from lightfm import cross_validation # Import LightFM's evaluation metrics from lightfm.evaluation import precision_at_k as lightfm_prec_at_k from lightfm.evaluation import recall_at_k as lightfm_recall_at_k # Import repo's evaluation metrics from reco_utils.evaluation.python_evaluation import ( precision_at_k, recall_at_k) from reco_utils.common.timer import Timer from reco_utils.dataset import movielens from reco_utils.recommender.lightfm.lightfm_utils import ( track_model_metrics, prepare_test_df, prepare_all_predictions, compare_metric, similar_users, similar_items) print("System version: {}".format(sys.version)) print("LightFM version: {}".format(lightfm.__version__)) # - # ### 2.2 Defining variables # + tags=["parameters"] # Select MovieLens data size MOVIELENS_DATA_SIZE = '100k' # default number of recommendations K = 10 # percentage of data used for testing TEST_PERCENTAGE = 0.25 # model learning rate LEARNING_RATE = 0.25 # no of latent factors NO_COMPONENTS = 20 # no of epochs to fit model NO_EPOCHS = 20 # no of threads to fit model NO_THREADS = 32 # regularisation for both user and item features ITEM_ALPHA=1e-6 USER_ALPHA=1e-6 # seed for pseudonumber generations SEEDNO = 42 # - # ### 2.2 Retrieve data data = movielens.load_pandas_df( size=MOVIELENS_DATA_SIZE, genres_col='genre', header=["userID", "itemID", "rating"] ) # quick look at the data data.sample(5) # ### 2.3 Prepare data # Before fitting the LightFM model, we need to create an instance of `Dataset` which holds the interaction matrix. dataset = Dataset() # The `fit` method creates the user/item id mappings. # + dataset.fit(users=data['userID'], items=data['itemID']) # quick check to determine the number of unique users and items in the data num_users, num_topics = dataset.interactions_shape() print(f'Num users: {num_users}, num_topics: {num_topics}.') # - # Next is to build the interaction matrix. The `build_interactions` method returns 2 COO sparse matrices, namely the `interactions` and `weights` matrices. (interactions, weights) = dataset.build_interactions(data.iloc[:, 0:3].values) # LightLM works slightly differently compared to other packages as it expects the train and test sets to have same dimension. Therefore the conventional train test split will not work. # # The package has included the `cross_validation.random_train_test_split` method to split the interaction data and splits it into two disjoint training and test sets. # # However, note that **it does not validate the interactions in the test set to guarantee all items and users have historical interactions in the training set**. Therefore this may result into a partial cold-start problem in the test set. train_interactions, test_interactions = cross_validation.random_train_test_split( interactions, test_percentage=TEST_PERCENTAGE, random_state=np.random.RandomState(SEEDNO)) # Double check the size of both the train and test sets. print(f"Shape of train interactions: {train_interactions.shape}") print(f"Shape of test interactions: {test_interactions.shape}") # ### 2.4 Fit the LightFM model # In this notebook, the LightFM model will be using the weighted Approximate-Rank Pairwise (WARP) as the loss. 
Further explanation on the topic can be found [here](https://making.lyst.com/lightfm/docs/examples/warp_loss.html#learning-to-rank-using-the-warp-loss). # # # In general, it maximises the rank of positive examples by repeatedly sampling negative examples until a rank violation has been located. This approach is recommended when only positive interactions are present. model1 = LightFM(loss='warp', no_components=NO_COMPONENTS, learning_rate=LEARNING_RATE, random_state=np.random.RandomState(SEEDNO)) # The LightFM model can be fitted with the following code: model1.fit(interactions=train_interactions, epochs=NO_EPOCHS); # ### 2.5 Prepare model evaluation data # Before we can evaluate the fitted model and to get the data into a format which is compatible with the existing evaluation methods within this repo, the data needs to be massaged slightly. # # First the train/test indices need to be extracted from the `lightfm.cross_validation` method as follows: # + uids, iids, interaction_data = cross_validation._shuffle( interactions.row, interactions.col, interactions.data, random_state=np.random.RandomState(SEEDNO)) cutoff = int((1.0 - TEST_PERCENTAGE) * len(uids)) test_idx = slice(cutoff, None) # - # Then the the mapping between internal and external representation of the user and item are extracted as follows: uid_map, ufeature_map, iid_map, ifeature_map = dataset.mapping() # Once the train/test indices and mapping are ready, the test dataframe can be constructed as follows: with Timer() as test_time: test_df = prepare_test_df(test_idx, uids, iids, uid_map, iid_map, weights) print(f"Took {test_time.interval:.1f} seconds for prepare and predict test data.") time_reco1 = test_time.interval # And samples of the test dataframe: test_df.sample(5) # In addition, the predictions of all unseen user-item pairs (e.g. removing those seen in the training data) can be prepared as follows: with Timer() as test_time: all_predictions = prepare_all_predictions(data, uid_map, iid_map, interactions=train_interactions, model=model1, num_threads=NO_THREADS) print(f"Took {test_time.interval:.1f} seconds for prepare and predict all data.") time_reco2 = test_time.interval # Samples of the `all_predictions` dataframe: all_predictions.sample(5) # Note that the **raw prediction values from the LightFM model are for ranking purposes only**, they should not be used directly. The magnitude and sign of these values do not have any specific interpretation. # ### 2.6 Model evaluation # Once the evaluation data are ready, they can be passed into to the repo's evaluation methods as follows. The performance of the model will be tracked using both Precision@K and Recall@K. # # In addition, the results have also being compared with those computed from LightFM's own evaluation methods to ensure accuracy. # + with Timer() as test_time: eval_precision = precision_at_k(rating_true=test_df, rating_pred=all_predictions, k=K) eval_recall = recall_at_k(test_df, all_predictions, k=K) time_reco3 = test_time.interval with Timer() as test_time: eval_precision_lfm = lightfm_prec_at_k(model1, test_interactions, train_interactions, k=K).mean() eval_recall_lfm = lightfm_recall_at_k(model1, test_interactions, train_interactions, k=K).mean() time_lfm = test_time.interval print( "------ Using Repo's evaluation methods ------", f"Precision@K:\t{eval_precision:.6f}", f"Recall@K:\t{eval_recall:.6f}", "\n------ Using LightFM evaluation methods ------", f"Precision@K:\t{eval_precision_lfm:.6f}", f"Recall@K:\t{eval_recall_lfm:.6f}", sep='\n') # - # ## 3. 
Movie recommender with LightFM using explicit feedbacks and additional item and user features # As the LightFM was designed to incorporates both user and item metadata, the model can be extended to include additional features such as movie genres and user occupations. # ### 3.1 Extract and prepare movie genres # In this notebook, the movie's genres will be used as the item metadata. As the genres have already been loaded during the initial data import, it can be processed directly as follows: # split the genre based on the separator movie_genre = [x.split('|') for x in data['genre']] # retrieve the all the unique genres in the data all_movie_genre = sorted(list(set(itertools.chain.from_iterable(movie_genre)))) # quick look at the all the genres within the data all_movie_genre # ### 3.2 Retrieve and prepare movie genres # Further user features can be included as part of the model fitting process. In this notebook, **only the occupation of each user will be included** but the feature list can be extended easily. # # #### 3.2.1 Retrieve and merge data # The user features can be retrieved directly from the grouplens website and merged with the existing data as follows: # + user_feature_URL = 'http://files.grouplens.org/datasets/movielens/ml-100k/u.user' user_data = pd.read_table(user_feature_URL, sep='|', header=None) user_data.columns = ['userID','age','gender','occupation','zipcode'] # merging user feature with existing data new_data = data.merge(user_data[['userID','occupation']], left_on='userID', right_on='userID') # quick look at the merged data new_data.sample(5) # - # #### 3.2.2 Extract and prepare user occupations # retrieve all the unique occupations in the data all_occupations = sorted(list(set(new_data['occupation']))) # ### 3.3 Prepare data and features # Similar to the previous model, the data is required to be converted into a `Dataset` instance and then create a user/item id mapping with the `fit` method. dataset2 = Dataset() dataset2.fit(data['userID'], data['itemID'], item_features=all_movie_genre, user_features=all_occupations) # The movie genres are then converted into a item feature matrix using the `build_item_features` method as follows: item_features = dataset2.build_item_features( (x, y) for x,y in zip(data.itemID, movie_genre)) # The user occupations are then converted into an user feature matrix using the `build_user_features` method as follows: user_features = dataset2.build_user_features( (x, [y]) for x,y in zip(new_data.userID, new_data['occupation'])) # Once the item and user features matrices have been completed, the next steps are similar as before, which is to build the interaction matrix and split the interactions into train and test sets as follows: # + (interactions2, weights2) = dataset2.build_interactions(data.iloc[:, 0:3].values) train_interactions2, test_interactions2 = cross_validation.random_train_test_split( interactions2, test_percentage=TEST_PERCENTAGE, random_state=np.random.RandomState(SEEDNO)) # - # ### 3.3 Fit the LightFM model with additional user and item features # The parameters of the second model will be similar to the first model to facilitates comparison. # # The model performance at each epoch is also tracked by the same metrics as before. 
model2 = LightFM(loss='warp', no_components=NO_COMPONENTS, learning_rate=LEARNING_RATE, item_alpha=ITEM_ALPHA, user_alpha=USER_ALPHA, random_state=np.random.RandomState(SEEDNO)) # The LightFM model can then be fitted: model2.fit(interactions=train_interactions2, user_features=user_features, item_features=item_features, epochs=NO_EPOCHS); # ### 3.4 Prepare model evaluation data # Similar to the previous model, the evaluation data needs to be prepared in order to get them into a format consumable with this repo's evaluation methods. # # Firstly the train/test indices and id mappings are extracted using the new interations matrix as follows: # + uids, iids, interaction_data = cross_validation._shuffle( interactions2.row, interactions2.col, interactions2.data, random_state=np.random.RandomState(SEEDNO)) uid_map, ufeature_map, iid_map, ifeature_map = dataset2.mapping() # - # The test dataframe is then constructed as follows: with Timer() as test_time: test_df2 = prepare_test_df(test_idx, uids, iids, uid_map, iid_map, weights2) print(f"Took {test_time.interval:.1f} seconds for prepare and predict test data.") # The predictions of all unseen user-item pairs can be prepared as follows: # + with Timer() as test_time: all_predictions2 = prepare_all_predictions(data, uid_map, iid_map, interactions=train_interactions2, user_features=user_features, item_features=item_features, model=model2, num_threads=NO_THREADS) print(f"Took {test_time.interval:.1f} seconds for prepare and predict all data.") # - # ### 3.5 Model evaluation and comparison # The predictive performance of the new model can be computed and compared with the previous model (which used only the explicit rating) as follows: # + eval_precision2 = precision_at_k(rating_true=test_df2, rating_pred=all_predictions2, k=K) eval_recall2 = recall_at_k(test_df2, all_predictions2, k=K) print( "------ Using only explicit ratings ------", f"Precision@K:\t{eval_precision:.6f}", f"Recall@K:\t{eval_recall:.6f}", "\n------ Using both implicit and explicit ratings ------", f"Precision@K:\t{eval_precision2:.6f}", f"Recall@K:\t{eval_recall2:.6f}", sep='\n') # - # The new model which used both implicit and explicit data performed consistently better than the previous model which used only the explicit data, thus highlighting the benefits of including such additional features to the model. # ### 3.6 Evaluation metrics comparison # Note that the evaluation approaches here are solely for demonstration purposes only. # # If the reader were using the LightFM package and/or its models, the LightFM's built-in evaluation methods are much more efficient and are the recommended approach for production usage as they are designed and optimised to work with the package. # # As a comparison, the times recorded to compute Precision@K and Recall@K for model1 are shown as follows: print( "------ Using Repo's evaluation methods ------", f"Time [sec]:\t{(time_reco1+time_reco2+time_reco3):.1f}", "\n------ Using LightFM evaluation methods ------", f"Time [sec]:\t{time_lfm:.1f}", sep='\n') # ## 4. Evaluate model fitting process # In addition to the inclusion of both implicit and explicit data, the model fitting process can also be monitored in order to determine whether the model is being trained properly. # # This notebook also includes a `track_model_metrics` method which plots the model's metrics e.g. Precision@K and Recall@K as model fitting progresses. 
# # For the first model (using only explicit data), the model fitting progress is shown as follows: output1, _ = track_model_metrics(model=model1, train_interactions=train_interactions, test_interactions=test_interactions, k=K, no_epochs=NO_EPOCHS, no_threads=NO_THREADS) # The second model (with both implicit and explicit data) fitting progress: output2, _ = track_model_metrics(model=model2, train_interactions=train_interactions2, test_interactions=test_interactions2, k=K, no_epochs=NO_EPOCHS, no_threads=NO_THREADS, item_features=item_features, user_features=user_features) # These show slightly different behaviour with the two approaches, the reader can then tune the hyperparameters to improve the model fitting process. # # ### 4.1 Performance comparison # In addition, the model's performance metrics (based on the test dataset) can be plotted together to facilitate easier comparison as follows: for i in ['Precision', 'Recall']: sns.set_palette("Set2") plt.figure() sns.scatterplot(x="epoch", y="value", hue='data', data=compare_metric(df_list = [output1, output2], metric=i) ).set_title(f'{i} comparison using test set'); # Referring to the figures above, it is rather obvious that the number of epochs is too low as the model's performances have not stabilised. Reader can decide on the number of epochs and other hyperparameters to adjust suit the application. # # As stated previously, it is interesting to see model2 (using both implicit and explicit data) performed consistently better than model1 (using only explicit ratings). # ## 5. Similar users and items # As the LightFM package operates based on latent embeddings, these can be retrieved once the model has been fitted to assess user-user and/or item-item affinity. # ### 5.1 User affinity # The user-user affinity can be retrieved with the `get_user_representations` method from the fitted model as follows: _, user_embeddings = model2.get_user_representations(features=user_features) user_embeddings # In order to retrieve the top N similar users, we can use the `similar_users` from `reco_utils`. For example, if we want to choose top 10 users most similar to the user 1: similar_users(user_id=1, user_features=user_features, model=model2) # ### 5.2 Item affinity # Similar to the user affinity, the item-item affinity can be retrieved with the `get_item_representations` method using the fitted model. _, item_embeddings = model2.get_item_representations(features=item_features) item_embeddings # The function to retrieve the top N similar items is similar to similar_users() above. For example, if we want to choose top 10 items most similar to the item 10: similar_items(item_id=10, item_features=item_features, model=model2) # ## 6. Conclusion # In this notebook, the background of hybrid matrix factorisation model has been explained together with a detailed example of LightFM's implementation. # # The process of incorporating additional user and item metadata has also been demonstrated with performance comparison. Furthermore, the calculation of both user and item affinity scores have also been demonstrated and extracted from the fitted model. # # This notebook remains a fairly simple treatment on the subject and hopefully could serve as a good foundation for the reader. # ## References # - [[1](https://arxiv.org/abs/1507.08439)]. <NAME> - Metadata Embeddings for User and Item Cold-start Recommendations, 2015. arXiv:1507.08439 # - [[2](https://making.lyst.com/lightfm/docs/home.html)]. LightFM documentation, # - [3]. 
<NAME> - Recommender Systems: The Textbook, Springer, April 2016. ISBN 978-3-319-29659-3 # - [4]. <NAME>, <NAME> - Statistical Methods for Recommender Systems, 2016. ISBN: 9781107036079 #
examples/02_model_hybrid/lightfm_deep_dive.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: conda_python3 # language: python # name: conda_python3 # --- # # Debugging XGBoost training jobs in real time with Amazon SageMaker Debugger # # This notebook uses the MNIST dataset to demonstrate real-time analysis of XGBoost training jobs while the training jobs are running. # # This notebook was created and tested on an ml.m5.4xlarge notebook instance using 100GB instance volume. # # ## Overview # # Amazon SageMaker Debugger allows debugging machine learning training. # SageMaker Debugger helps you to monitor your training in near real time using rules and provides alerts if it detects issues in training. # # Using SageMaker Debugger is a two step process: Saving model parameters and analysis. # Let's look at each one of them closely. # # ### Saving model parameters # # In machine learning process, model parameters are updated every forward and backward pass and can describe the state of the training job at any particular instant in an ML lifecycle. # Amazon SageMaker Debugger allows you to capture the model parameters and save them for analysis. # Although XGBoost is not a deep learning algorithm, Amazon SageMaker Debugger is highly customizable and can help you interpret results by saving insightful metrics. For example, performance metrics or the importance of features at different frequencies. # Refer to [SageMaker Debugger documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/debugger-configuration.html) for details on how to save the metrics you want. # # # ### Analysis # # There are two ways to get to model parameters and run analysis on them. # # One way is to use concept called ***Rules***. On a very broad level, a rule is Python code used to detect certain conditions during training. # Some of the conditions that a data scientist training an algorithm may care about are monitoring for gradients getting too large or too small, detecting overfitting, and so on. # Amazon SageMaker Debugger comes pre-packaged with certain built-in rules that can be invoked on Amazon SageMaker. You can also write your own rules using the Amazon SageMaker Debugger APIs. # For more details about automatic analysis using rules, see [Configure Debugger Built-in Rules](https://docs.aws.amazon.com/sagemaker/latest/dg/use-debugger-built-in-rules.html) and [List of Debugger Built-in Rules](https://docs.aws.amazon.com/sagemaker/latest/dg/debugger-built-in-rules.html). # # This notebook also walk you through how to use the SMDebug client library for analysis in real time while training jobs are running. The SMDebug client library enables you to retrieve model parameters and scalars saved during training job via few lines of code. # # Through the model parameter analysis, you can drill down into training issues you're running into. You save raw model parameter data in order to understand your model better, and figure out the root cause of training problems. # # ![Animated confusion matrix](cm.gif) # ## Import SageMaker Python SDK and the SMDebug client library # <font color='red'>**Important**</font>: To use the new Debugger features, you need to upgrade the SageMaker Python SDK and the SMDebug libary. In the following cell, change the third line to `install_needed=True` and run to upgrade the libraries. 
import sys import IPython install_needed = False # Set to True to upgrade if install_needed: print("installing deps and restarting kernel") # !{sys.executable} -m pip install -U sagemaker # !{sys.executable} -m pip install -U smdebug IPython.Application.instance().kernel.do_shutdown(True) import boto3 import sagemaker # Amazon SageMaker Debugger is available in Amazon SageMaker XGBoost container version `0.90-2` or later. The following cell retrieves the SageMaker XGBoost 0.90-2 container. # + from sagemaker import image_uris # Below changes the region to be one where this notebook is running region = boto3.Session().region_name container = sagemaker.image_uris.retrieve("xgboost", region, "0.90-2") # - # ## Training XGBoost models in Amazon SageMaker with Amazon SageMaker Debugger # # In this section you learn to train an XGBoost model with Amazon SageMaker Debugger enabled and monitor the training jobs. # This is done using the SageMaker [Estimator API](https://sagemaker.readthedocs.io/en/stable/estimators.html#sagemaker.estimator.Estimator). # While training job is running, use the SageMaker Debugger API to access saved model parameters in real time and visualize them. # You can also download a fresh set of model parameters every time you query for using the SMDebug library. # # This notebook is adapted from [XGBoost for Classification](https://github.com/aws/amazon-sagemaker-examples/blob/master/introduction_to_amazon_algorithms/xgboost_mnist/xgboost_mnist.ipynb). # # ### Data preparation # # Use the [MNIST data](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/multiclass.html) stored in [LIBSVM](https://www.csie.ntu.edu.tw/~cjlin/libsvm/) format. # + from data_utils import load_mnist, upload_to_s3 bucket = sagemaker.Session().default_bucket() prefix = "DEMO-smdebug-xgboost-mnist" # + # %%time train_file, validation_file = load_mnist() upload_to_s3(train_file, bucket, f"{prefix}/train/mnist.train.libsvm") upload_to_s3(validation_file, bucket, f"{prefix}/validation/mnist.validation.libsvm") # - # ### Enabling Amazon SageMaker Debugger in the estimator object # # Enabling Amazon SageMaker Debugger in a training job can be accomplished by adding its configuration into an Estimator object constructor: # # ``` # from sagemaker.debugger import DebuggerHookConfig # # estimator = Estimator( # ..., # debugger_hook_config = DebuggerHookConfig( # s3_output_path="s3://{bucket_name}/{location_in_bucket}", # Optional # collection_configs=[ # CollectionConfig( # name="metrics", # parameters={ # "save_interval": "10" # } # ) # ] # ) # ) # ``` # Here, the `DebuggerHookConfig` object configures which data `Estimator` should save for the real-time visualization. Provide two parameters: # # - `s3_output_path`: Points to an S3 bucket where you intend to store model parameters. Amount of data saved depends on multiple factors, major ones are training job, data set, model, frequency of saving model parameters. This S3 bucket should be in your AWS account so that you have full access to control over the stored data. **Note**: The S3 bucket should be originally created in the same Region where your training job is running, otherwise you might run into problems with cross-Region access. # # - `collection_configs`: It enumerates named collections of model parameters to save. Collections are a convenient way to organize relevant model parameters under same umbrella to make it easy to navigate them during analysis. In this particular example, you are interested in a single collection named `metrics`. 
You also configured Amazon SageMaker Debugger to save metrics every 10 iterations. For all parameters that are supported by Collections and DebuggerConfig, see [Collection documentation](https://github.com/awslabs/sagemaker-debugger/blob/master/docs/api.md). # ### Using Amazon SageMaker Debugger with XGBoost Classification # # Import the libraries for the demo of Amazon SageMaker Debugger. # + from sagemaker import get_execution_role role = get_execution_role() base_job_name = "demo-smdebug-xgboost-classification" bucket_path = "s3://{}".format(bucket) num_round = 25 save_interval = 3 hyperparameters = { "max_depth": "5", "eta": "0.1", "gamma": "4", "min_child_weight": "6", "silent": "0", "objective": "multi:softmax", "num_class": "10", # num_class is required for 'multi:*' objectives "num_round": num_round, } # + from sagemaker.estimator import Estimator from sagemaker.debugger import DebuggerHookConfig, CollectionConfig xgboost_algorithm_mode_estimator = Estimator( role=role, base_job_name=base_job_name, instance_count=1, instance_type="ml.m5.xlarge", image_uri=container, hyperparameters=hyperparameters, max_run=1800, debugger_hook_config=DebuggerHookConfig( s3_output_path=bucket_path, # Required collection_configs=[ CollectionConfig(name="metrics", parameters={"save_interval": str(save_interval)}), CollectionConfig(name="predictions", parameters={"save_interval": str(save_interval)}), CollectionConfig(name="labels", parameters={"save_interval": str(save_interval)}), ], ), ) # + [markdown] pycharm={"name": "#%% md\n"} # With the next step you are going to actually start a training job using the Estimator object you created above. This job is started in asynchronous, non-blocking way. This means that control is passed back to the notebook and further commands can be run while training job is progressing. # + from sagemaker.session import TrainingInput train_s3_input = TrainingInput( "s3://{}/{}/{}".format(bucket, prefix, "train"), content_type="libsvm" ) validation_s3_input = TrainingInput( "s3://{}/{}/{}".format(bucket, prefix, "validation"), content_type="libsvm" ) # This is a fire and forget event. By setting wait=False, you just submit the job to run in the background. # Amazon SageMaker will start one training job and release control to next cells in the notebook. # Follow this notebook to see status of the training job. xgboost_algorithm_mode_estimator.fit( {"train": train_s3_input, "validation": validation_s3_input}, wait=False ) # - # ### Result # # As a result of the above command, Amazon SageMaker starts one training job for you and it produces model parameters to be analyzed. # This job will run in a background without you having to wait for it to complete in order to continue with the rest of the notebook. # Because of this asynchronous nature of a training job, you need to monitor its status so that you don't start to request debugging too early. # # # ## Analysis and Visualization # # ### Checking on the training job status # # Check the status of the training job by running the following code. # It checks on the status of an Amazon SageMaker training job every 15 seconds. # Once a training job has started its training cycle, it proceeds to the next cells in the notebook. # That means training job started to tune the model and, in parallel, emit model parameters. 
# + import time from time import gmtime, strftime # Below command will give the status of training job job_name = xgboost_algorithm_mode_estimator.latest_training_job.name client = xgboost_algorithm_mode_estimator.sagemaker_session.sagemaker_client description = client.describe_training_job(TrainingJobName=job_name) print("Training job name: " + job_name) if description["TrainingJobStatus"] != "Completed": while description["SecondaryStatus"] not in ["Training", "Completed"]: description = client.describe_training_job(TrainingJobName=job_name) primary_status = description["TrainingJobStatus"] secondary_status = description["SecondaryStatus"] print("{}: {}, {}".format(strftime("%X", gmtime()), primary_status, secondary_status)) time.sleep(15) # - # ### Retrieving and analyzing model parameters # # Before getting to analysis, here are some notes on concepts being used in Amazon SageMaker Debugger that help with analysis. # - ***Trial*** - Object that is a centerpiece of the SageMaker Debugger API when it comes to getting access to model parameters. It is a top level abstract that represents a single run of a training job. All model parameters emitted by a training job are associated with its *trial*. # - ***Tensor*** - Object that represents model parameters, such as weights, gradients, accuracy, and loss, that are saved during training job. # # For more details on aforementioned concepts as well as on SageMaker Debugger API and examples, see [SageMaker Debugger Analysis API](https://github.com/awslabs/sagemaker-debugger/blob/master/docs/analysis.md) documentation. # # In the following code cell, use a ***Trial*** to access model parameters. You can do that by inspecting currently running training job and extract necessary parameters from its debug configuration to instruct SageMaker Debugger where the data you are looking for is located. Keep in mind the following: # - Model parameters are being stored in your own S3 bucket to which you can navigate and manually inspect its content if desired. # - You might notice a slight delay before trial object is created. This is normal as SageMaker Debugger monitors the corresponding bucket and waits until model parameters data to appear. The delay is introduced by less than instantaneous upload of model parameters from a training container to your S3 bucket. # + from smdebug.trials import create_trial description = client.describe_training_job(TrainingJobName=job_name) s3_output_path = xgboost_algorithm_mode_estimator.latest_job_debugger_artifacts_path() # This is where we create a Trial object that allows access to saved model parameters. trial = create_trial(s3_output_path) # + import numpy as np import matplotlib.pyplot as plt import seaborn as sns from sklearn.metrics import confusion_matrix from IPython.display import display, clear_output def plot_confusion_for_one_step(trial, step, ax=None): if ax is None: fig, ax = plt.subplots() cm = confusion_matrix( trial.tensor("labels").value(step), trial.tensor("predictions").value(step) ) normalized_cm = cm.astype("float") / cm.sum(axis=1)[:, np.newaxis] sns.heatmap(normalized_cm, cmap="bone", ax=ax, cbar=False, annot=cm, fmt="") print(f"iteration: {step}") def plot_and_update_confusion_for_all_steps(trial): fig, ax = plt.subplots() rendered_steps = [] # trial.loaded_all_steps is a way to keep monitoring for a state of a training job # as seen by Amazon SageMaker Debugger. # When training job is completed Trial becomes aware of it. 
while not rendered_steps or not trial.loaded_all_steps: steps = trial.steps() # quick way to get diff between two lists steps_to_render = list(set(steps).symmetric_difference(set(rendered_steps))) # plot only from newer chunk for step in steps_to_render: clear_output(wait=True) plot_confusion_for_one_step(trial, step, ax=ax) display(fig) plt.pause(5) ax.clear() rendered_steps.extend(steps_to_render) fig.clear() plt.close() # - # ### Visualizing confusion matrix of a running training job # # Finally, wait until Amazon SageMaker Debugger has downloaded initial collection of model parameters to look at. Once that collection is ready you keep getting new model parameters every five seconds and plot them correspondingly one under another. plot_and_update_confusion_for_all_steps(trial)
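# ### (Optional) Plotting other saved tensors
# Beyond the confusion matrix, any tensor captured by the configured collections can be retrieved from the same trial object and plotted. The metric name below ("train-merror") is an assumption for this multi-class XGBoost job -- list the names actually saved in your run and pick one of those.

# list everything Amazon SageMaker Debugger captured for this training job
print(trial.tensor_names())

metric = "train-merror"  # hypothetical name; replace with one of the names printed above
if metric in trial.tensor_names():
    steps = trial.tensor(metric).steps()
    values = [trial.tensor(metric).value(s) for s in steps]
    plt.plot(steps, values)
    plt.xlabel("iteration")
    plt.ylabel(metric)
    plt.show()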
sagemaker-debugger/xgboost_realtime_analysis/xgboost-realtime-analysis.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [default] # language: python # name: python2 # --- # + # Visualization of the KO+ChIP Gold Standard from: # Miraldi et al. (2018) "Leveraging chromatin accessibility for transcriptional regulatory network inference in Th17 Cells" # TO START: In the menu above, choose "Cell" --> "Run All", and network + heatmap will load # NOTE: Default limits networks to TF-TF edges in top 1 TF / gene model (.93 quantile), to see the full # network hit "restore" (in the drop-down menu in cell below) and set threshold to 0 and hit "threshold" # You can search for gene names in the search box below the network (hit "Match"), and find regulators ("targeted by") # Change "canvas" to "SVG" (drop-down menu in cell below) to enable drag interactions with nodes & labels # Change "SVG" to "canvas" to speed up layout operations # More info about jp_gene_viz and user interface instructions are available on Github: # https://github.com/simonsfoundation/jp_gene_viz/blob/master/doc/dNetwork%20widget%20overview.ipynb # directory containing gene expression data and network folder directory = "." # folder containing networks netPath = 'Networks' # network file name networkFile = 'ChIP_A17_bias50_TFmRNA_sp.tsv' # title for network figure netTitle = 'ChIP/sA(Th17), bias = 50_TFmRNA, TFA = TF mRNA' # name of gene expression file expressionFile = 'Th0_Th17_48hTh.txt' # column of gene expression file to color network nodes rnaSampleOfInt = 'Th17(48h)' # edge cutoff -- for Inferelator TRNs, corresponds to signed quantile (rank of edges in 15 TFs / gene models), # increase from 0 --> 1 to get more significant edges (e.g., .33 would correspond to edges only in 10 TFs / gene # models) edgeCutoff = .93 # - import sys if ".." not in sys.path: sys.path.append("..") from jp_gene_viz import dNetwork dNetwork.load_javascript_support() # from jp_gene_viz import multiple_network from jp_gene_viz import LExpression LExpression.load_javascript_support() # Load network linked to gene expression data L = LExpression.LinkedExpressionNetwork() L.show() # + # Load Network and Heatmap L.load_network(directory + '/' + netPath + '/' + networkFile) L.load_heatmap(directory + '/' + expressionFile) N = L.network N.set_title(netTitle) N.threshhold_slider.value = edgeCutoff N.apply_click(None) N.draw() # Add labels to nodes N.labels_button.value=True # Limit to TFs only, remove unconnected TFs, choose and set network layout N.restore_click() N.tf_only_click() N.connected_only_click() N.layout_dropdown.value = 'fruchterman_reingold' N.layout_click() # Interact with Heatmap # Limit genes in heatmap to network genes L.gene_click(None) # Z-score heatmap values L.expression.transform_dropdown.value = 'Z score' L.expression.apply_transform() # Choose a column in the heatmap (e.g., 48h Th17) to color nodes L.expression.col = rnaSampleOfInt L.condition_click(None) # Switch SVG layout to get line colors, then switch back to faster canvas mode N.force_svg(None) # -
TRN_Notebooks/ChIP_Atac17_bias50_TFmRNA_TFmRNA.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] slideshow={"slide_type": "slide"} # <div align="right"><i>COM418 - Computers and Music</i></div> # <div align="right"><a href="https://people.epfl.ch/paolo.prandoni"><NAME></a>, <a href="https://www.epfl.ch/labs/lcav/">LCAV, EPFL</a></div> # # <p style="font-size: 30pt; font-weight: bold; color: #B51F1F;">Hearing the phase of a sound </p> # + [markdown] slideshow={"slide_type": "skip"} # In this notebook we will investigate the effect of phase on the perceptual quality of a sound. It is often said that the human ear is insensitive to phase and that's why most of the equalization in commercial-grade audio equipment takes place in the magnitude domain only. # # But is it really so? Let's find out. # + slideshow={"slide_type": "skip"} # %matplotlib inline import matplotlib import matplotlib.pyplot as plt import numpy as np import IPython from scipy.io import wavfile # + slideshow={"slide_type": "skip"} plt.rcParams["figure.figsize"] = (14,4) # + [markdown] slideshow={"slide_type": "skip"} # # Helper functions # # We will be synthesizing audio clips so let's set the sampling rate for the rest of the notebook: # + slideshow={"slide_type": "skip"} Fs = 16000 # sampling freqency TWOPI = 2 * np.pi # + slideshow={"slide_type": "skip"} import ipywidgets as widgets def multiplay(clips, rate=Fs, title=None): outs = [widgets.Output() for c in clips] for ix, item in enumerate(clips): with outs[ix]: print(title[ix] if title is not None else "") display(IPython.display.Audio(prepare(item), rate=rate)) return widgets.HBox(outs) # + [markdown] slideshow={"slide_type": "skip"} # Let's also define a helper function that plays our synthesized clips a bit more gracefully: basically, we want a gentle fade-in and fade-out to avoid the abrupt "clicks" that occur when the data file begins and ends. # # Also, there is a "bug" in the some versions of IPython whereby audio data is forcibly normalized prior to playing (see [here](https://github.com/ipython/ipython/issues/8608) for details; this may have been solved in the meantime). We want to avoid normalization so that we keep control over the volume of the sound. A way to do so is to make sure that all audio clips have at least one sample at a pre-defined maximum value, and this value is the same for all clips; to achieve this we add a slow "tail" to the data which will not result in an audible sound but will set a common maximum value to all clips. # + slideshow={"slide_type": "skip"} def prepare(x, max_value = 3): N = len(x) # fade-in and fade-out times max 0.2 seconds tf = min(int(0.2 * Fs), int(0.1 * N)) for n in range(0, int(tf)): s = float(n) / float(tf) x[n] = x[n] * s x[N-n-1] *= s # let's append an anti-normalization tail; drawback is one second of silence in the end x = np.concatenate((x, np.linspace(0, max_value, int(Fs/2)), np.linspace(max_value, 0, int(Fs/2)))) return x # + [markdown] slideshow={"slide_type": "slide"} # # Sustained sounds # # The first experiment will use sustained sounds, i.e. 
sounds where the "shape" of the waveform does not change over time: # # * a periodic sustained waveform is the sum of harmonically-related sinusoidal components # * frequency of first component determines pitch # * relative amplitude of harmonic overtones determines timbre # # + [markdown] slideshow={"slide_type": "slide"} # ## A simple clarinet model # # <img src="img/clarinet.png" style="float: right; width: 400px; margin: 20px 30px;"/> # # # * simple additive synthesis # * only odd multiples of the fundamental (see [here](http://www.phy.mtu.edu/~suits/clarinet.html) # * we will use just five components # + slideshow={"slide_type": "slide"} def clarinet(f, phase = []): # length in seconds of audio clips T = 3 # we will keep 5 harmonics and the fundamental # amplitude of components: ha = [0.75, 0.5, 0.14, 0.5, 0.12, 0.17] # phase phase = np.concatenate((phase, np.zeros(len(ha)-len(phase)))) x = np.zeros((T * Fs)) # clarinet has only odd harmonics n = np.arange(len(x)) for k, h in enumerate(ha): x += h * np.sin(phase[k] + TWOPI * (2*k + 1) * (float(f)/Fs) * n) return x # + slideshow={"slide_type": "slide"} # fundamental frequency: D4 D4 = 293.665 x = clarinet(D4) # let's look at the waveform, nice odd-harmonics shape: plt.plot(x[0:300]) plt.show() # and of course we can play it (using our preparing function): IPython.display.Audio(prepare(x), rate=Fs) # + [markdown] slideshow={"slide_type": "skip"} # Ok, so it's not the best clarinet sound in the universe but it's not bad for just a few lines of code! # + [markdown] slideshow={"slide_type": "slide"} # ## Changing the phase # # * random phase offsets for each component # * waveform completely different in time domain # * can you hear the difference? # + xrp = clarinet(D4, [3.84, 0.90, 3.98, 4.50, 4.80, 2.96]) plt.plot(xrp[0:300]) plt.show() # + slideshow={"slide_type": "slide"} multiplay([xrp, x], title=['random phase', 'original']) # + [markdown] slideshow={"slide_type": "skip"} # OK, so it seems that phase is not important after all. To check once again, run the following notebook cell as many times as you want and see if you can tell the difference between the original zero-phase and a random-phase sustained note (the phases will be different every time you run the cell): # + slideshow={"slide_type": "slide"} xrp = clarinet(D4, np.random.rand(6) * TWOPI) plt.plot(xrp[0:300]) plt.show() multiplay([xrp, x], title=['random phase', 'original']) # + [markdown] slideshow={"slide_type": "slide"} # # Dynamic sounds # # <img src="img/piano.jpg" style="float: right; width: 400px; margin: 20px 30px;"/> # # In the second experiment we will use real-world dynamic sounds, i.e. sounds that display time-varying characteristics. Typically, a physical musical instrument will produce sounds whose envelope displays four subsequent portions: # # * the **attack** time is the time taken for the sound to go from silence to max amplitude # * the **decay** time is the time taken for the sound to decrease to sustain level # * the **sustain** time is the time during the sound is kept at the same amplitude # * the **release** time is the time taken for sound to go to zero after the stimulation is stopped. # + [markdown] slideshow={"slide_type": "skip"} # Consider for instance a piano note: the attack time is very quick (the hammer hits the string); the decay is quite rapid as the string settles into harmonic equilibrium but there is no sustain since once the hammer hits, the stimulation ends. 
So a piano note has a distinct volume envelope that rises very fast and then releases slowly: # + slideshow={"slide_type": "slide"} from scipy.io import wavfile Fs, x = wavfile.read("snd/piano.wav") plt.plot(x) plt.show() IPython.display.Audio(x, rate=Fs) # + [markdown] slideshow={"slide_type": "slide"} # ## Changing the phase # # The "shape" of a waveform in time is determined by the phase as we saw with the clarinet. # # To alter the phase of the real piano sound: # * compute the DFT of the sound # * set the phase to arbitrary values # * compute the inverse DFT # + slideshow={"slide_type": "skip"} # first some prep work; let's make sure that # the length of the signal is even # (it will be useful later) if len(x) % 2 != 0: x = x[:-1] # let's also store the maximum value for our # "prepare" function mv = int(max(abs(x)) * 1.2) # + slideshow={"slide_type": "slide"} # Let's take the Fourier transform X = np.fft.fft(x) # we can plot the DFT and verify we have a nice # harmonic spectrum plt.plot(np.abs(X[0:int(len(X)/2)])) plt.show() # + slideshow={"slide_type": "slide"} # now we set the phase to zero; we just need to # take the magnitude of the DFT xzp = np.fft.ifft(np.abs(X)) # in theory, xzp should be real; however, because # of numerical imprecision, we're left with some imaginary crumbs: print (max(np.imag(xzp)) / max(np.abs(xzp))) # + slideshow={"slide_type": "slide"} # the imaginary part is negligible, as expected, # so let's just get rid of it xzp = np.real(xzp) # and now we can plot: plt.plot(xzp) plt.show() # - IPython.display.Audio(prepare(xzp, mv), rate=Fs) # + [markdown] slideshow={"slide_type": "slide"} # Gee, what happened?!? Well, by removing the phase, we have destroyed the timing information that, for instance, made the sharp attack possible (mathematically, note that by creating a zero-phase spectrum we did obtain a symmetric signal in the time domain!). # # If we play the waveform, we can hear that the pitch and some of the timbral quality have been preserved (after all, the magnitude spectrum is the same), but the typical piano-like envelope has been lost. # + [markdown] slideshow={"slide_type": "slide"} # We can amuse ourselves with even more brutal phase mangling: let's for instance set a random phase for each DFT component. The only tricky thing here is that we need to preserve the Hermitian symmetry of the DFT in order to have a real-valued time-domain signal: # + # we know the signal is even-length so we need to build # a phase vector of the form [0 p1 p2 ... pM -pM ... -p2 -p1] # where M = len(x)/2 ph = np.random.rand(int(len(x) / 2) ) * TWOPI * 1j # tricky but cute Python slicing syntax... ph = np.concatenate(([0], ph, -ph[-2::-1])) # now let's add the phase offset and take the IDFT xrp = np.fft.ifft(X * np.exp(ph)) # always verify that the imaginary part is only roundoff error print (max(np.imag(xrp))/max(np.abs(xrp))) # + slideshow={"slide_type": "slide"} xrp = np.real(xrp) plt.plot(xrp) plt.show() IPython.display.Audio(prepare(xrp, mv), rate=Fs) # + [markdown] slideshow={"slide_type": "skip"} # Pretty bad, eh? So, in conclusion, phase is very important to the temporal aspects of the sound, but not so important for sustained sounds. In fact, the brain processes the temporal and spectral cues of sound very differently: when we concentrate on attacks and sound envelope, the brain uses time-domain processing, whereas for pitch and timbre, it uses primarily the magnitude of the spectrum!
AudioPhase/AudioPhase.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Tutorial 06: Setting spatially varying fields # # > Interactive online tutorial: # > [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/ubermag/discretisedfield/master?filepath=docs%2Fipynb%2Findex.ipynb) # # There are several different ways how a spatially varying field can be defined. Let us first define a mesh we are going to use to define the fields. # + import discretisedfield as df p1 = (-50, -50, -50) p2 = (50, 50, 50) n = (2, 2, 2) mesh = df.Mesh(p1=p1, p2=p2, n=n) # - # ## Using a Python function # # One of the ways how a spatially varying field can be defined is by using a Python function, which can be passed as `value` argument to `discretisedfield.Field`. It should satisfy three main criteria: # 1. It takes one argument. `discretisedfield.Field` is going to pass the coordinates of discertisation cells as tuples of length 3 to this argument. # 2. Function should be able to return a value for any coordinate in the mesh # 3. The value returned must be of the same dimension as the dimension of the field. # # Let us assume we want to have a scalar field which has a value 0 for all points with negative $x$ coordinate and value 1 otherwise. # # $$ # f(x, y, z)= # \begin{cases} # 0, & \text{if}\ x<0 \\ # 1, & \text{otherwise} # \end{cases} # $$ # # The Python function is then: def my_value_function(pos): x, y, z = pos if x < 0: return 0 else: return 1 # After defining the value function, we can define the field. field = df.Field(mesh, dim=1, value=my_value_function) # If we sample the field at a point with negative value of $x$ field((-10, 5, 5)) # If the $x$ coordinate is positive, we get 1. field((25, -3, 14)) # The array now has different values field.array # ### Value property # # It is not very informative to look at `discretisedfield.Field.array` to understand what is the actual value of the field. Therefore, if a unique representation value exists, `discretisedfield.Field.value` is going to return it. For instance: field.value # The source code of this function can be seen as import inspect print(inspect.getsource(field.value)) # Now, if we change the value of the field as field.value = 5 # the value of the field is changed field.value # as well as the underlying array field.array # If we violently change the value of a single discretisation cell via array field.array[0, 0, 0, 0] = 1 field.array # no unique representation exists and `field.value` returns an array. field.value # Similar to scalar fields, a Python function can be used to set the value of a vector field. This time, the function should return three-dimensional values. def vector_value_function(pos): x, y, z = pos vx = x vy = x*y vz = x*y*z return (vx, vy, vz) # This function can now be used at the definition of the field: field = df.Field(mesh, dim=3, value=vector_value_function) # Its value is now: field.value field.array # ## Using mesh regions # # If regions were defined as a part of the mesh, and we want to set the value of the field differently in those regions, we can employ some of the functionality of regions. Let us assume that in the mesh we defined we want to have two regions. Region 1 is going to include all cells with negative $y$ coordinate and region 2 cells with positive $y$ coordinate. 
# Our mesh would be:

regions = {'region1': df.Region(p1=(-50, -50, -50), p2=(50, 0, 50)),
           'region2': df.Region(p1=(-50, 0, -50), p2=(50, 50, 50))}
mesh = df.Mesh(p1=p1, p2=p2, n=n, regions=regions)

# A Python function employing these regions can now be defined as:

def regions_function(pos):
    if pos in mesh.regions['region1']:
        return (1, 0, 0)
    elif pos in mesh.regions['region2']:
        return (0, 1, 0)
    else:
        return (0, 0, 0)

# We can now pass this function to the `discretisedfield.Field` class.

field = df.Field(mesh, dim=3, value=regions_function)

# For a negative value of $y$, we get:

field((10, -10, 10))

# And for a positive one:

field((10, 30, 10))

# Another way of setting the field is passing a dictionary as the value of the field. However, there are two caveats to be aware of:
# 1. Region names must match the regions defined in `discretisedfield.Mesh`.
# 2. Only those points in the mesh which belong to one of the regions will be set. If a point is not in any of the regions, its value will remain unchanged.

region_values = {'region1': (1, 1, 1), 'region2': (2, 2, 2)}
field.value = region_values

# Now, we can sample points in the two regions.

field((-10, -10, -10))

field((10, 10, 10))

# ## Using another Field object
#
# Sometimes it is necessary to "resample" the field using a different mesh. Another field can be passed as a value to the new field. If our new mesh is:

p1 = (-10, -10, -10)
p2 = (10, 10, 10)
cell = (5, 5, 5)
new_mesh = df.Mesh(p1=p1, p2=p2, cell=cell)

# The field we initialised previously has the value

field.array

# We can now resample that field as

new_field = df.Field(new_mesh, dim=3, value=field)

# The values are now

new_field.array.shape

new_field((-5, -5, -5))

new_field((5, 5, 5))

# ## Other
#
# A full description of all existing functionality can be found in the [API Reference](https://discretisedfield.readthedocs.io/en/latest/api_documentation.html).
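# As one final, hedged illustration that is not part of the original tutorial: a spatially varying vector field whose in-plane direction rotates with the $x$ coordinate, defined with a Python function. The `norm` argument used to fix the field magnitude is an assumption about the installed `discretisedfield` version.

# +
import numpy as np

def rotating_value(pos):
    x, y, z = pos
    angle = np.pi * x / 100  # rotate the in-plane direction as x runs from -50 to 50
    return (np.cos(angle), np.sin(angle), 0)

rotating_field = df.Field(mesh, dim=3, value=rotating_value, norm=1e6)  # norm assumed available
rotating_field((10, 10, 10))
# -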
docs/ipynb/06-tutorial-spatially-varying-field.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + active="" # Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved BSD-3 license. (c) <NAME>, <NAME> 2017. Thanks to NSF for support via CAREER award #1149784. # - # [@LorenaABarba](https://twitter.com/LorenaABarba) # 12 steps to Navier–Stokes # ====== # *** # This Jupyter notebook continues the presentation of the **12 steps to Navier–Stokes**, the practical module taught in the interactive CFD class of [Prof. <NAME>](http://lorenabarba.com). You should have completed [Step 1](./01_Step_1.ipynb) before continuing, having written your own Python script or notebook and having experimented with varying the parameters of the discretization and observing what happens. # # Step 2: Nonlinear Convection # ----- # *** # Now we're going to implement nonlinear convection using the same methods as in step 1. The 1D convection equation is: # # $$\frac{\partial u}{\partial t} + u \frac{\partial u}{\partial x} = 0$$ # # Instead of a constant factor $c$ multiplying the second term, now we have the solution $u$ multiplying it. Thus, the second term of the equation is now *nonlinear*. We're going to use the same discretization as in Step 1 — forward difference in time and backward difference in space. Here is the discretized equation. # # $$\frac{u_i^{n+1}-u_i^n}{\Delta t} + u_i^n \frac{u_i^n-u_{i-1}^n}{\Delta x} = 0$$ # # Solving for the only unknown term, $u_i^{n+1}$, yields: # # $$u_i^{n+1} = u_i^n - u_i^n \frac{\Delta t}{\Delta x} (u_i^n - u_{i-1}^n)$$ # As before, the Python code starts by loading the necessary libraries. Then, we declare some variables that determine the discretization in space and time (you should experiment by changing these parameters to see what happens). Then, we create the initial condition $u_0$ by initializing the array for the solution using $u = 2\ @\ 0.5 \leq x \leq 1$ and $u = 1$ everywhere else in $(0,2)$ (i.e., a hat function). # + import numpy # we're importing numpy from matplotlib import pyplot # and our 2D plotting library # %matplotlib inline nx = 41 dx = 2 / (nx - 1) nt = 2 #nt is the number of timesteps we want to calculate dt = .025 #dt is the amount of time each timestep covers (delta t) u = numpy.ones(nx) #as before, we initialize u with every value equal to 1. u[int(.5 / dx) : int(1 / dx + 1)] = 2 #then set u = 2 between 0.5 and 1 as per our I.C.s un = numpy.ones(nx) #initialize our placeholder array un, to hold the time-stepped solution # - # The code snippet below is *unfinished*. We have copied over the line from [Step 1](./01_Step_1.ipynb) that executes the time-stepping update. Can you edit this code to execute the nonlinear convection instead? # + for n in range(nt): #iterate through time un = u.copy() ##copy the existing values of u into un for i in range(1, nx): ##now we'll iterate through the u array ###This is the line from Step 1, copied exactly. Edit it for our new equation. ###then uncomment it and run the cell to evaluate Step 2 u[i] = un[i] - un[i] * dt / dx * (un[i] - un[i-1]) pyplot.plot(numpy.linspace(0, 2, nx), u) ##Plot the results # - # What do you observe about the evolution of the hat function under the nonlinear convection equation? What happens when you change the numerical parameters and run again? 
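# As a hedged hint for the question above (this cell is not part of the original lesson): the largest wave speed in the hat function is $\max(u) = 2$, so the ratio $\sigma = \max(u) \, \Delta t / \Delta x$, often called the Courant (CFL) number and treated more carefully later in this module, tells you how close the scheme is to its stability limit. Increasing `dt` or `nx` so that this ratio grows well beyond 1 will typically make the solution blow up.

# +
u_max = 2.0
sigma = u_max * dt / dx  # Courant number for the current parameter choices
print('dx = {:.4f}, dt = {:.4f}, CFL number = {:.2f}'.format(dx, dt, sigma))
# -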
# ## Learn More # For a careful walk-through of the discretization of the convection equation with finite differences (and all steps from 1 to 4), watch **Video Lesson 4** by <NAME> on YouTube. from IPython.display import YouTubeVideo YouTubeVideo('y2WaK7_iMRI') from IPython.core.display import HTML def css_styling(): styles = open("../styles/custom.css", "r").read() return HTML(styles) css_styling() # > (The cell above executes the style for this notebook.)
lessons/02_Step_2.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- import tensorflow as tf from tensorflow import keras import keras_tuner as kt (img_train, label_train), (img_test, label_test) = keras.datasets.fashion_mnist.load_data() # Normalize pixel values between 0 and 1 img_train = img_train.astype('float32') / 255.0 img_test = img_test.astype('float32') / 255.0 def model_builder(hp): model = keras.Sequential() model.add(keras.layers.Flatten(input_shape=(28, 28))) # Tune the number of units in the first Dense layer # Choose an optimal value between 32-512 hp_units = hp.Int('units', min_value=32, max_value=512, step=32) model.add(keras.layers.Dense(units=hp_units, activation='relu')) model.add(keras.layers.Dense(10)) # Tune the learning rate for the optimizer # Choose an optimal value from 0.01, 0.001, or 0.0001 hp_learning_rate = hp.Choice('learning_rate', values=[1e-2, 1e-3, 1e-4]) model.compile(optimizer=keras.optimizers.Adam(learning_rate=hp_learning_rate), loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) return model tuner = kt.Hyperband(model_builder, objective='val_accuracy', max_epochs=10, factor=3, directory='my_dir', project_name='intro_to_kt') #Create a callback to stop training early after reaching a certain value for the validation loss. stop_early = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=5) # + tuner.search(img_train, label_train, epochs=50, validation_split=0.2, callbacks=[stop_early]) # Get the optimal hyperparameters best_hps=tuner.get_best_hyperparameters(num_trials=1)[0] print(f""" The hyperparameter search is complete. The optimal number of units in the first densely-connected layer is {best_hps.get('units')} and the optimal learning rate for the optimizer is {best_hps.get('learning_rate')}. """) # + # Build the model with the optimal hyperparameters and train it on the data for 50 epochs model = tuner.hypermodel.build(best_hps) history = model.fit(img_train, label_train, epochs=50, validation_split=0.2) val_acc_per_epoch = history.history['val_accuracy'] best_epoch = val_acc_per_epoch.index(max(val_acc_per_epoch)) + 1 print('Best epoch: %d' % (best_epoch,)) # + hypermodel = tuner.hypermodel.build(best_hps) # Retrain the model hypermodel.fit(img_train, label_train, epochs=best_epoch, validation_split=0.2) # - eval_result = hypermodel.evaluate(img_test, label_test) print("[test loss, test accuracy]:", eval_result)
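# As an optional, hedged variation that is not in the original notebook: Keras Tuner can also return the best model found during the search itself via `get_best_models`, without rebuilding and retraining. Note that this model was trained inside `tuner.search` with early stopping, so the retraining done above will usually give a slightly better final model.

# +
best_model = tuner.get_best_models(num_models=1)[0]
best_model.summary()
print("[search-time best model - test loss, test accuracy]:",
      best_model.evaluate(img_test, label_test))
# -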
01_ML_basics_with_Keras/07. Tuning hyperparams with the Keras tuner.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # <a id="1"></a> <br> # ## Step 1 : Reading and Understanding Data # + import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import datetime as dt import sklearn from sklearn.preprocessing import StandardScaler from sklearn.cluster import KMeans # + # Source: # Dr <NAME>, Director: Public Analytics group. chend '@' lsbu.ac.uk, School of Engineering, London South Bank University, London SE1 0AA, UK. # Data Set Information: # This is a transnational data set which contains all the transactions occurring between 01/12/2010 and 09/12/2011 for a UK-based and registered non-store online retail.The company mainly sells unique all-occasion gifts. Many customers of the company are wholesalers. # Attribute Information: # InvoiceNo: Invoice number. Nominal, a 6-digit integral number uniquely assigned to each transaction. If this code starts with letter 'c', it indicates a cancellation. # StockCode: Product (item) code. Nominal, a 5-digit integral number uniquely assigned to each distinct product. # Description: Product (item) name. Nominal. # Quantity: The quantities of each product (item) per transaction. Numeric. # InvoiceDate: Invice Date and time. Numeric, the day and time when each transaction was generated. # UnitPrice: Unit price. Numeric, Product price per unit in sterling. # CustomerID: Customer number. Nominal, a 5-digit integral number uniquely assigned to each customer. # Country: Country name. Nominal, the name of the country where each customer resides. # + # Reading the data on which analysis needs to be done retail = retail = pd.read_csv("./datasets/OnlineRetail.csv", encoding= 'unicode_escape') retail.tail(200) # + # shape of df retail.shape # + # df info retail.info() # + # df description retail.describe() # - # <a id="2"></a> <br> # ## Step 2 : Data Cleansing # + # Calculating the Missing Values % contribution in DF df_null = round(100*(retail.isnull().sum())/len(retail), 2) df_null # + # Droping rows having missing values retail = retail.dropna() retail.shape # + # Changing the datatype of Customer Id as per Business understanding retail['CustomerID'] = retail['CustomerID'].astype(str) # - # <a id="3"></a> <br> # ## Step 3 : Data Preparation # #### We are going to analysis the Customers based on below 3 factors: # - R (Recency): Number of days since last purchase # - F (Frequency): Number of tracsactions # - M (Monetary): Total amount of transactions (revenue contributed) # + # New Attribute : Monetary retail['Amount'] = retail['Quantity']*retail['UnitPrice'] rfm_m = retail.groupby('CustomerID')['Amount'].sum() rfm_m = rfm_m.reset_index() rfm_m.head() # + # New Attribute : Frequency rfm_f = retail.groupby('CustomerID')['InvoiceNo'].count() rfm_f = rfm_f.reset_index() rfm_f.columns = ['CustomerID', 'Frequency'] rfm_f.head() # + # Merging the two dfs rfm = pd.merge(rfm_m, rfm_f, on='CustomerID', how='inner') rfm.head() # + # New Attribute : Recency # Convert to datetime to proper datatype retail['InvoiceDate'] = pd.to_datetime(retail['InvoiceDate'],format='%m/%d/%Y %H:%M') # + # Compute the maximum date to know the last transaction date max_date = max(retail['InvoiceDate']) max_date # + # Compute the difference between max date and transaction date retail['Diff'] = max_date - retail['InvoiceDate'] retail.head() # + # Compute 
last transaction date to get the recency of customers rfm_p = retail.groupby('CustomerID')['Diff'].min() rfm_p = rfm_p.reset_index() rfm_p.head() # + # Extract number of days only rfm_p['Diff'] = rfm_p['Diff'].dt.days rfm_p.head() # - rfm.head() rfm_p.head() rfm_p['CustomerID'] = rfm_p.CustomerID.astype(str) # + # Merge tha dataframes to get the final RFM dataframe rfm = pd.merge(rfm, rfm_p, on='CustomerID', how='inner') rfm.columns = ['CustomerID', 'Amount', 'Frequency', 'Recency'] rfm.head() # - # #### There are 2 types of outliers and we will treat outliers as it can skew our dataset # - Statistical # - Domain specific # + # Outlier Analysis of Amount Frequency and Recency attributes = ['Amount','Frequency','Recency'] plt.rcParams['figure.figsize'] = [10,8] sns.boxplot(data = rfm[attributes], orient="v", palette="Set2" ,whis=1.5,saturation=1, width=0.7) plt.title("Outliers Variable Distribution", fontsize = 14, fontweight = 'bold') plt.ylabel("Range", fontweight = 'bold') plt.xlabel("Attributes", fontweight = 'bold') # + # Removing (statistical) outliers for Amount Q1 = rfm.Amount.quantile(0.05) Q3 = rfm.Amount.quantile(0.95) IQR = Q3 - Q1 rfm = rfm[(rfm.Amount >= Q1 - 1.5*IQR) & (rfm.Amount <= Q3 + 1.5*IQR)] # Removing (statistical) outliers for Recency Q1 = rfm.Recency.quantile(0.05) Q3 = rfm.Recency.quantile(0.95) IQR = Q3 - Q1 rfm = rfm[(rfm.Recency >= Q1 - 1.5*IQR) & (rfm.Recency <= Q3 + 1.5*IQR)] # Removing (statistical) outliers for Frequency Q1 = rfm.Frequency.quantile(0.05) Q3 = rfm.Frequency.quantile(0.95) IQR = Q3 - Q1 rfm = rfm[(rfm.Frequency >= Q1 - 1.5*IQR) & (rfm.Frequency <= Q3 + 1.5*IQR)] # - # ### Rescaling the Attributes # # It is extremely important to rescale the variables so that they have a comparable scale.| # There are two common ways of rescaling: # # 1. Min-Max scaling # 2. Standardisation (mean-0, sigma-1) # # Here, we will use Standardisation Scaling. # + # Rescaling the attributes rfm_df = rfm[['Amount', 'Frequency', 'Recency']] # Instantiate scaler = StandardScaler() # fit_transform rfm_df_scaled = scaler.fit_transform(rfm_df) rfm_df_scaled.shape # - rfm_df_scaled = pd.DataFrame(rfm_df_scaled) rfm_df_scaled.columns = ['Amount', 'Frequency', 'Recency'] rfm_df_scaled.head() # <a id="4"></a> <br> # ## Step 4 : Building the Model # ### K-Means Clustering # + # k-means with some arbitrary k kmeans = KMeans(n_clusters=4, max_iter=50) kmeans.fit(rfm_df_scaled) # - kmeans.labels_ # ### Finding the Optimal Number of Clusters # #### Elbow Curve to get the right number of Clusters # A fundamental step for any unsupervised algorithm is to determine the optimal number of clusters into which the data may be clustered. The Elbow Method is one of the most popular methods to determine this optimal value of k. # + # Elbow-curve/SSD # inertia # Sum of squared distances of samples to their closest cluster center. 
ssd = []
range_n_clusters = [2, 3, 4, 5, 6, 7, 8]
for num_clusters in range_n_clusters:
    kmeans = KMeans(n_clusters=num_clusters, max_iter=50)
    kmeans.fit(rfm_df_scaled)

    ssd.append(kmeans.inertia_)

# plot the SSD for each value of n_clusters
plt.plot(range_n_clusters, ssd)
plt.xlabel('Number of clusters (k)')
plt.ylabel('Sum of squared distances (inertia)')
# -

# Final model with k=3
kmeans = KMeans(n_clusters=3, max_iter=50)
kmeans.fit(rfm_df_scaled)

kmeans.labels_

# assign the label
rfm['Cluster_Id'] = kmeans.labels_
rfm.head()

# +
# Box plot to visualize Cluster Id vs Amount
sns.boxplot(x='Cluster_Id', y='Amount', data=rfm)

# +
# Box plot to visualize Cluster Id vs Frequency
sns.boxplot(x='Cluster_Id', y='Frequency', data=rfm)

# +
# Box plot to visualize Cluster Id vs Recency
sns.boxplot(x='Cluster_Id', y='Recency', data=rfm)
# -

# ### Inference:
# K-Means Clustering with 3 Cluster Ids (a per-cluster summary sketch follows below)
# - Customers with Cluster Id 1 have a high transaction amount compared to other customers.
# - Customers with Cluster Id 1 are frequent buyers.
# - Customers with Cluster Id 2 are not recent buyers and are hence of least importance from a business point of view.
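# A short supporting sketch that is not part of the original notebook: numeric per-cluster averages that back up the box plots and the inference above (remember that the cluster ids themselves are arbitrary labels and can change between runs).

# +
rfm.groupby('Cluster_Id')[['Amount', 'Frequency', 'Recency']].mean().round(1)
# -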
6_K_Means_Clustering.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### Recommendations with MovieTweetings: Getting to Know The Data # # Throughout this lesson, you will be working with the [MovieTweetings Data](https://github.com/sidooms/MovieTweetings/tree/master/recsyschallenge2014). To get started, you can read more about this project and the dataset from the [publication here](http://crowdrec2013.noahlab.com.hk/papers/crowdrec2013_Dooms.pdf). # # **Note:** There are solutions to each of the notebooks available by hitting the orange jupyter logo in the top left of this notebook. Additionally, you can watch me work through the solutions on the screencasts that follow each workbook. # # To get started, read in the libraries and the two datasets you will be using throughout the lesson using the code below. # # # + import numpy as np import pandas as pd import matplotlib.pyplot as plt import tests as t # %matplotlib inline # Read in the datasets movies = pd.read_csv('https://raw.githubusercontent.com/sidooms/MovieTweetings/master/latest/movies.dat', delimiter='::', header=None, names=['movie_id', 'movie', 'genre'], dtype={'movie_id': object}, engine='python') reviews = pd.read_csv('https://raw.githubusercontent.com/sidooms/MovieTweetings/master/latest/ratings.dat', delimiter='::', header=None, names=['user_id', 'movie_id', 'rating', 'timestamp'], dtype={'movie_id': object, 'user_id': object, 'timestamp': object}, engine='python') # - # #### 1. Take a Look At The Data # # Take a look at the data and use your findings to fill in the dictionary below with the correct responses to show your understanding of the data. # + # number of movies print("The number of movies is {}.".format(movies.shape[0])) # number of ratings print("The number of ratings is {}.".format(reviews.shape[0])) # unique users print("The number of unique users is {}.".format(reviews.user_id.nunique())) # missing ratings print("The number of missing reviews is {}.".format(int(reviews.rating.isnull().mean()*reviews.shape[0]))) # the average, min, and max ratings given print("The average, minimum, and max ratings given are {}, {}, and {}, respectively.".format(np.round(reviews.rating.mean(), 0), reviews.rating.min(), reviews.rating.max())) # + # number of different genres genres = [] for val in movies.genre: try: genres.extend(val.split('|')) except AttributeError: pass # we end up needing this later genres = set(genres) print("The number of genres is {}.".format(len(genres))) # + # Use your findings to match each variable to the correct statement in the dictionary a = 53968 b = 10 c = 7 d = 31245 e = 15 f = 0 g = 4 h = 712337 i = 28 dict_sol1 = { 'The number of movies in the dataset': d, 'The number of ratings in the dataset': h, 'The number of different genres': i, 'The number of unique users in the dataset': a, 'The number missing ratings in the reviews dataset': f, 'The average rating given across all ratings': c, 'The minimum rating given across all ratings': f, 'The maximum rating given across all ratings': b } # Check your solution t.q1_check(dict_sol1) # - # #### 2. Data Cleaning # # Next, we need to pull some additional relevant information out of the existing columns. 
# # For each of the datasets, there are a couple of cleaning steps we need to take care of: # # #### Movies # * Pull the date from the title and create new column # * Dummy the date column with 1's and 0's for each century of a movie (1800's, 1900's, and 2000's) # * Dummy column the genre with 1's and 0's for each genre # # #### Reviews # * Create a date out of time stamp # # You can check your results against the header of my solution by running the cell below with the **show_clean_dataframes** function. # + # pull date if it exists create_date = lambda val: val[-5:-1] if val[-1] == ')' else np.nan # apply the function to pull the date movies['date'] = movies['movie'].apply(create_date) # Return century of movie as a dummy column def add_movie_year(val): if val[:2] == yr: return 1 else: return 0 # Apply function for yr in ['18', '19', '20']: movies[str(yr) + "00's"] = movies['date'].apply(add_movie_year) # + # Function to split and return values for columns def split_genres(val): try: if val.find(gene) >-1: return 1 else: return 0 except AttributeError: return 0 # Apply function for each genre for gene in genres: movies[gene] = movies['genre'].apply(split_genres) # - movies.head() #Check what it looks like # + import datetime change_timestamp = lambda val: datetime.datetime.fromtimestamp(int(val)).strftime('%Y-%m-%d %H:%M:%S') reviews['date'] = reviews['timestamp'].apply(change_timestamp) # - # now reviews and movies are the final dataframes with the necessary columns reviews.to_csv('./reviews_clean.csv') movies.to_csv('./movies_clean.csv')
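# A hedged sanity check that is not part of the original solution: every movie whose year was parsed should fall into exactly one century dummy column, and the century columns should contain only 0/1 values.

# +
century_cols = ["1800's", "1900's", "2000's"]
with_year = movies[movies['date'].notnull()]

assert set(np.unique(movies[century_cols].values)) <= {0, 1}
assert (with_year[century_cols].sum(axis=1) == 1).all()
print('Century dummies look consistent for', len(with_year), 'movies with a parsed year.')
# -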
lessons/Recommendations/1_Intro_to_Recommendations/.ipynb_checkpoints/1_Introduction to the Recommendation Data - Solution-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # <img src="https://maltem.com/wp-content/uploads/2020/04/LOGO_MALTEM.png" style="float: left; margin: 20px; height: 55px"> # # <br> # <br> # <br> # <br> # # # Introduction to Logistic Regression # # _Authors: <NAME>, <NAME>, <NAME>_ # # --- # # ### Learning Objectives # - Distinguish between regression and classification problems. # - Understand how logistic regression is similar to and different from linear regression. # - Fit, generate predictions from, and evaluate a logistic regression model in `sklearn`. # - Understand how to interpret the coefficients of logistic regression. # - Know the benefits of logistic regression as a classifier. # <a id='introduction'></a> # # ## Introduction # # --- # # Logistic regression is a natural bridge to connect regression and classification. # - Logistic regression is the most common binary classification algorithm. # - Because it is a regression model, logistic regression will predict continuous values. # - Logistic regression will predict continuous probabilities between 0 and 1. # - Example: What is the probability that someone shows up to vote? # - However, logistic regression almost always operates as a classification model. # - Logistic regression will use these continuous predictions to classify something as 0 or 1. # - Example: Based on the predicted probability, do we predict that someone votes? # # In this lecture, we'll only be reviewing the binary outcome case with two classes, but logistic regression can be generalized to predicting outcomes with 3 or more classes. # # **Some examples of when logistic regression could be used:** # - Will a user will purchase a product, given characteristics like income, age, and number of family members? # - Does this patient have a specific disease based on their symptoms? # - Will a person default on their loan? # - Is the iris flower in front of me an "*Iris versicolor*?" # - Given one's GPA and the prestige of a college, will a student be admitted to a specific graduate program? # # And many more. # + # imports import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns # Import train_test_split. from sklearn.model_selection import train_test_split # Import logistic regression from sklearn.linear_model import LogisticRegression # - # ## Graduate School Admissions # # --- # # Today, we'll be applying logistic regression to solve the following problem: "Given one's GPA, will a student be admitted to a specific graduate program?" # Read in the data. admissions = pd.read_csv('data/grad_admissions.csv') # Check first five rows. admissions.head() admissions.shape # The columns are: # - `admit`: A binary 0/1 variable indicating whether or not a student was admitted, where 1 means admitted and 0 means not admitted. # - `gre`: The student's [GRE (Graduate Record Exam)](https://en.wikipedia.org/wiki/Graduate_Record_Examinations) score. # - `gpa`: The student's GPA. # How many missing values do we have in each column? admissions.info() admissions.isnull().sum() # Drop every row that has an NA. admissions.dropna(inplace=True) admissions.shape # <details><summary>What assumption are we making when we drop rows that have at least one NA in it?</summary> # # - We assume that what we drop looks like what we have observed. 
That is, there's nothing special about the rows we happened to drop. # - We might say that what we dropped is a random sample of our whole data. # - It's not important to know this now, but the formal term is that our data is missing completely at random. # </details> # ## Recap of Notation # # You're quite familiar with **linear** regression: # # $$ # \begin{eqnarray*} # \hat{\mathbf{y}} &=& \hat{\beta}_0 + \hat{\beta}_1x_1 + \hat{\beta}_2x_2 + \cdots + \hat{\beta}_px_p \\ # &=& \hat{\beta}_0 + \sum_{j=1}^p\hat{\beta}_jX_j # \end{eqnarray*} # $$ # # Where: # - $\hat{\mathbf{y}}$ is the predicted values of $\mathbf{y}$ based on all of the inputs $x_j$. # - $x_1$, $x_2$, $\ldots$, $x_p$ are the predictors. # - $\hat{\beta}_0$ is the estimated intercept. # - $\hat{\beta}_j$ is the estimated coefficient for the predictor $x_j$, the $j$th column in variable matrix $X$. # # <a id='plot-reg'></a> # ### What if we predicted `admit` with `gpa` using Linear Regression? # # Looking at the plot below, what are problems with using a regression? # plot admissions vs. gpa and line of best fit plt.figure(figsize = (12, 5)) sns.regplot(admissions['gpa'], admissions['admit'], admissions, ci = False, scatter_kws = {'s': 2}, line_kws = {'color': 'orange'}) plt.ylim(-0.1, 1.1); # <a id='pred-binary'></a> # # ## Predicting a Binary Class # # --- # # In our case we have two classes: `1=admitted` and `0=rejected`. # # The logistic regression is still solving for $\hat{y}$. However, in our binary classification case, $\hat{y}$ will be the probability of $y$ being one of the classes. # # $$ # \hat{y} = P(y = 1) # $$ # # We'll still try to fit a "line" of best fit to this... except it won't be perfectly linear. We need to *guarantee* that the right-hand side of the regression equation will evaluate to a probability. (That is, some number between 0 and 1!) # ## The Logit Link Function (advanced) # # --- # # We will use something called a **link function** to effectively "bend" our line of best fit so that it is a curve of best fit that matches the range or set of values in which we're interested. # # For logistic regression, that specific link function that transforms ("bends") our line is known as the **logit** link. # # $$ # \text{logit}\left(P(y = 1)\right) = \beta_0 + \beta_1x_1 + \beta_2x_2 + \cdots + \beta_px_p # $$ # # $$ # \log\left(\frac{P(y = 1)}{1 - P(y = 1)}\right) = \beta_0 + \beta_1x_1 + \beta_2x_2 + \cdots + \beta_px_p # $$ # # Equivalently, we assume that each independent variable $x_i$ is linearly related to the **log of the odds of success**. # # Remember, the purpose of the link function is to bend our line of best fit. # - This is convenient because we can have any values of $X$ inputs that we want, and we'll only ever predict between 0 and 1! # - However, interpreting a one-unit change gets a little harder. (More on this later.) # <img src="./images/logregmeme.png" style="height: 400px"> # # [*image source*](https://twitter.com/ChelseaParlett/status/1279111984433127425?s=20) # ## Fitting and making predictions with the logistic regression model. # # We can follow the same steps to build a logistic regression model that we follow to build a linear regression model. # # 1. Define X & y # 2. Instantiate the model. # 3. Fit the model. # 4. Generate predictions. # 5. Evaluate model. admissions.head() # + # Step 1: Split into training & testing sets X = admissions[['gpa']] y = admissions['admit'] X_train,X_test,y_train,y_test = train_test_split(X,y,random_state=50) # + # Step 2: Instantiate our model. 
logreg = LogisticRegression() # Step 3: Fit our model. logreg.fit(X_train,y_train) # - logreg.intercept_ logreg.coef_ # There are two methods in `sklearn` to be aware of when using logistic regression: # - `.predict()` # - `.predict_proba()` # Step 4 (part 1): Generate predicted values. logreg.predict(X_test)[:10] # Step 4 (part 2): Generate predicted probabilities. np.round(logreg.predict_proba(X_test),3) # <details><summary>How would you interpret the predict_proba() output?</summary> # # - This shows the probability of being rejected ($P(Y=0)$) and the probability of being admitted ($P(Y=1)$) for each observation in the testing dataset. # - The first array, corresponds to the first testing observation. # - The `.predict()` value for this observation is 0. This is because $P(Y=0) > P(Y=1)$. # - The second array, corresponds to the second testing observation. # - The `.predict()` value for this observation is 0. This is because $P(Y=0) > P(Y=1)$. # </details> # + # Visualizing logistic regression probabilities. plt.figure(figsize = (10, 5)) plt.scatter(X_test, y_test, s = 10); plt.plot(X_test.sort_values('gpa'), logreg.predict_proba(X_test.sort_values('gpa'))[:,1], color = 'grey', alpha = 0.8, lw = 3) plt.xlabel('GPA') plt.ylabel('Admit') plt.title('Predicting Admission from GPA'); # + # Step 5: Evaluate model. logreg.score(X_train,y_train) # - logreg.score(X_test,y_test) # By default, the `.score()` method for classification models gives us the accuracy score. # # $$ # \begin{eqnarray*} # \text{Accuracy} = \frac{\text{number of correct predictions}}{\text{number of total predictions}} # \end{eqnarray*} # $$ # <details><summary>Remind me: what does .score() tell me for a regression model?</summary> # # - The $R^2$ score. # - Remember that $R^2$ is the proportion of variance in our $Y$ values that are explained by our model. # </details> # ### Using the log-odds —the natural logarithm of the odds. # # The combination of converting the "probability of success" to "odds of success," then taking the logarithm of that is called the **logit link function**. # # $$ # \text{logit}\big(P(y=1)\big) = \log\bigg(\frac{P(y=1)}{1-P(y=1)}\bigg) = \beta_0 + \beta_1x_1 + \beta_2x_2 + \cdots + \beta_px_p # $$ # # We've bent our line how we want... but how do we interpret our coefficients? # ### Odds # # Probabilities and odds represent the same thing in different ways. The odds for probability **p** is defined as: # # $$ # \text{odds}(p) = \frac{p}{1-p} # $$ # # The odds of a probability is a measure of how many times as likely an event is to happen than it is to not happen. # # **Example**: Suppose I'm looking at the probability and odds of a specific horse, "Secretariat," winning a race. # # - When **`p = 0.5`**: **`odds = 1`** # - The horse Secretariat is as likely to win as it is to lose. # - When **`p = 0.75`**: **`odds = 3`** # - The horse Secretariat is three times as likely to win as it is to lose. # - When **`p = 0.40`**: **`odds = 0.666..`** # - The horse Secretariat is two-thirds as likely to win as it is to lose. # ## Interpreting a one-unit change in $x_i$. # # $$\log\bigg(\frac{P(y=1)}{1-P(y=1)}\bigg) = \beta_0 + \beta_1x_1 + \beta_2x_2 + \cdots + \beta_px_p$$ # # Given this model, a one-unit change in $x_i$ implies a $\beta_i$ unit change in the log odds of success. # # **This is annoying**. # # We often convert log-odds back to "regular odds" when interpreting our coefficient... our mind understands odds better than the log of odds. 
# # **(BONUS)** So, let's get rid of the log on the left-hand side. Mathematically, we do this by "exponentiating" each side. # $$ # \begin{eqnarray*} # \log\bigg(\frac{P(y=1)}{1-P(y=1)}\bigg) &=& \beta_0 + \beta_1x_1 + \beta_2x_2 + \cdots + \beta_px_p \\ # \Rightarrow e^{\Bigg(\log\bigg(\frac{P(y=1)}{1-P(y=1)}\bigg)\Bigg)} &=& e^{\Bigg(\beta_0 + \beta_1x_1 + \beta_2x_2 + \cdots + \beta_px_p\Bigg)} \\ # \Rightarrow \frac{P(y=1)}{1-P(y=1)} &=& e^{\Bigg(\beta_0 + \beta_1x_1 + \beta_2x_2 + \cdots + \beta_px_p\Bigg)} \\ # \end{eqnarray*} # $$ # # **Interpretation**: A one-unit change in $x_i$ means that success is $e^{\beta_i}$ times as likely. logreg.coef_ # <details><summary> I want to interpret the coefficient $\hat{\beta}_1$ for my logistic regression model. How would I interpret this coefficient?</summary> # # - Our model is that $\log\bigg(\frac{P(admit=1)}{1-P(admit=1)}\bigg) = \beta_0 + \beta_1\text{GPA}$. # - As GPA increases by 1, the log-odds of someone being admitted increases by 4.92. # - As GPA increases by 1, someone is $e^{4.92}$ times as likely to be admitted. # - As GPA increases by 1, someone is about 137.06 times as likely to be admitted to grad school. # </details> # # > Hint: Use the [np.exp](https://docs.scipy.org/doc/numpy/reference/generated/numpy.exp.html) function. # + #exponentiate the coefficient np.exp(logreg.coef_) # - # ## Conclusion # # The goal of logistic regression is to find the best-fitting model to describe the relationship between a binary outcome and a set of independent variables. # # Logistic regression generates the coefficients of a formula to predict a logit transformation of the probability that the characteristic of interest is present. # ## Interview Questions # <details><summary>What is the difference between a classification and a regression problem?</summary> # # - A classification problem has a categorical $Y$ variable. A regression problem has a numeric $Y$ variable. # </details> # <details><summary>What are some of the benefits of logistic regression as a classifier?</summary> # # (Answers may vary; this is not an exhaustive list!) # - Logistic regression is a classification algorithm that shares similar properties to linear regression. # - The coefficients in a logistic regression model are interpretable. (They represent the change in log-odds caused by the input variables.) # - Logistic regression is a very fast model to fit and generate predictions from. # - It is by far the most common classification algorithm. # # **Note**: The original interview question was "If you're comparing decision trees and logistic regression, what are the pros and cons of each?" # </details> # ## (BONUS) Solving for the Beta Coefficients # # Logistic regression minimizes the "deviance," which is similar to the residual sum of squares in linear regression, but is a more general form. # # **There's no closed-form solution to the beta coefficients like in linear regression, and the betas are found through optimization procedures.** # - We can't just do $\hat{\beta} = (X^TX)^{-1}X^Ty$ like we can in linear regression! # # The `solver` hyperparameter in sklearn's LogisticRegression class specifies which method should be used to solve for the optimal beta coefficients (the coefficients that minimize our cost function). A former DC DSI instructor <NAME> has a great blog post about which solver to choose [here](https://towardsdatascience.com/dont-sweat-the-solver-stuff-aea7cddc3451). 
# # If you're particularly interested in the math, here are two helpful resources: # - [A good blog post](http://www.win-vector.com/blog/2011/09/the-simpler-derivation-of-logistic-regression/) on the logistic regression beta coefficient derivation. # - [This paper](https://www.stat.cmu.edu/~cshalizi/402/lectures/14-logistic-regression/lecture-14.pdf) is also a good reference. # ## (BONUS) The Logistic Function # # The inverse function of the logit is called the **logistic function**. # # By inverting the logit, we can have the right side of our regression equation solve explicitly for $P(y = 1)$: # # $$ # P(y=1) = logit^{-1}\left(\beta_0 + \sum_{j}^p\beta_jx_j\right) # $$ # # Where: # # $$ # logit^{-1}(a) = logistic(a) = \frac{e^{a}}{e^{a} + 1} # $$ # # Giving us: # # $$ # P(y=1) = \frac{e^{\left(\beta_0 + \beta_1x_1 + \beta_2x_2 + \cdots + \beta_px_p\right)}}{e^{\left(\beta_0 + \beta_1x_1 + \beta_2x_2 + \cdots + \beta_px_p\right)}+1} # $$
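# A hedged verification sketch that is not part of the original lesson: computing the logistic function by hand from the fitted intercept and GPA coefficient and checking that it matches sklearn's `predict_proba` output on the test set. It relies on the `logreg` and `X_test` objects created above.

# +
beta_0 = logreg.intercept_[0]
beta_1 = logreg.coef_[0][0]

# e^a / (e^a + 1) is the same as 1 / (1 + e^(-a))
manual_p = 1 / (1 + np.exp(-(beta_0 + beta_1 * X_test['gpa'])))
sklearn_p = logreg.predict_proba(X_test)[:, 1]

print('Maximum absolute difference:', np.max(np.abs(manual_p.values - sklearn_p)))
# -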
Notebook/Lesson-logistic-regression/starter-code.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Ray RLlib Multi-Armed Bandits - Exploration-Exploitation Strategies # # © 2019-2020, Anyscale. All Rights Reserved # # ![Anyscale Academy](../../images/AnyscaleAcademyLogo.png) # What strategy should we follow for selecting actions that balance the exploration-exploitation tradeoff, yielding the maximum average reward over time? This is the core challenge of RL/bandit algorithms. # # This lesson has two goals, to give you an intuitive sense of what makes a good algorithm and to introduce several popular examples. # # > **Tip:** For the first time through this material, you may wish to focus on the first goal, developing an intuitive sense of the requirements for a good algorithm. Come back later to explore the details of the algorithms discussed. # # So, at least read through the first sections, stopping at _UCB in More Detail_ under _Upper Confidence Bound_. # ## What Makes a Good Exploration-Exploitation Algorithm? # # Let's first assume we are considered only stationary bandits. The ideal algorithm achieves these properties: # # 1. It explores all the actions reasonably aggressively. # 2. When exploring, it picks the action most likely to produce an optimal reward, rather than making random choices. # 3. It converges quickly to the action that optimizes the mean reward. # 4. It stops exploration once the optimal action is known and just exploits! # # For non-stationary and context bandits, the optimal action will likely change over time, so some exploration may always be needed. # ## Popular Algorithms # # With these properties in mind, let's briefly discuss four algorithms. We'll use two of them in examples over several subsequent lessons. # ### $\epsilon$-Greedy # # One possible strategy is quite simple, called $\epsilon$-Greedy, where $\epsilon$ is small number that determines how frequently exploration is done. The best-known action is exploited most of the time ("greedily"), governed by probability $1 - \epsilon$ (i.e., in percentage terms $100*(1 - \epsilon)$%). With probability $\epsilon$, an action is picked at random in the hopes of finding a new action that provides even better rewards. # # Typical values of $\epsilon$ are between 0.01 and 0.1. A larger value, like 0.1, explores more aggressively and finds the optimal policy more quickly, but afterwards the aggressive exploration strategy becomes a liability, as it only selects the optimal action ~90% of the time, continuing excessive exploration that is now counterproductive. In contrast, smaller values, like 0.01, are slower to find the optimal policy, but once found continue to select it ~99% of the time, so over time the mean reward is _higher_ for _smaller_ $\epsilon$ values, as the optimal action is selected more often. # # How does $\epsilon$-Greedy stack up against our desired properties? # # 1. The higher the $\epsilon$ value, the more quickly the action space is explored. # 2. It randomly picks the next action, so there is no "intelligence" involved in optimizing the choice. # 3. The higher the $\epsilon$ value, the more quickly the optimal action is found. # 4. Just as this algorithm makes no attempt to optimize the choice of action during exploration, it makes no attempt to throttle back exploration when the optimal value is found. 
# # To address point 4, you could adopt an enhancement that decays the $\epsilon$ value over time, rather than keeping it fixed. # # See [Wikipedia - MAB Approximate Solutions](https://en.wikipedia.org/wiki/Multi-armed_bandit) and [Sutton 2018](https://mitpress.mit.edu/books/reinforcement-learning-second-edition) for more information. # ### Upper Confidence Bound # # A limitation about $\epsilon$-greedy is that exploration is done indiscriminately. Is it possible to make a more informed choice about which alternative actions are more likely to yield a good result, so we preferentially pick one of them? That's what the Upper Confidence Bound (UCB) algorithm attempts to do. It weights some choices over others. # # It's worth looking at the formula that governs the choice for the next action at time $t$: # # $$A_t \doteq \frac{argmax}{a}\bigg[ Q_t(a) + c\sqrt{ \dfrac{\ln(t)}{N_t(a)} }\bigg]$$ # # It's not essential to fully understand all the details, but here is the gist of it; the best action to take at time $t$, $A_t$, is decided by picking the best known action for returning the highest value (the $Q_t(a)$ term in the brackets [...] computes this), but with a correction that encourages exploration, especially for smaller $t$, but penalizing particular actions $a$ if we've already picked them a lot previously (the second term starting with a constant $c$ that governs the "strength" of this correction). # # UCB is one of the best performing algorithms [Sutton 2018](https://mitpress.mit.edu/books/reinforcement-learning-second-edition). How does it stack up against our desired properties? # # 1. Exploration is reasonably quick, governed by the $c$ hyperparameter for the "correction term". # 2. It attempts to pick a good action when exploring, rather than randomly. # 3. Finding the optimal action occurs efficiently, governed by the constant $c$. # 4. The $ln(t)$ factor in the correction term grows more slowly over time relative to the counts $N_t(a)$, so exploration occurs less frequently at longer time scales. # # Because UCB is based on prior measured results, it is an example of a _Frequentist_ approach that is _model free_, meaning we just measure outcomes, we don't build a model to explain the environment. # #### UCB in More Detail # # Let's explain the equation in more detail. If you are just interested in developing an intuition about strategies, this is a good place to stop and go to the next lesson, [Simple Multi-Armed-Bandit](03-Simple-Multi-Armed-Bandit.ipynb). # # * $A_t$ is the action we want to select at time $t$, the action that is most likely to produce the best reward or most likely to be worth exploring. # * For all the actions we can choose from, we pick the action $a$ that maximizes the formula in the brackets [...]. # * $Q_t(a)$ is any equation we're using to measure the "value" received at time $t$ for action $a$. This is the greedy choice, i.e., the equation that tells us which action $a$ we currently know will give us the highest value. If we never wanted to explore, the second term in the brackets wouldn't exist. $Q_t(a)$ alone would always tell us to pick the best action we already know about. (The use of $Q$ comes from an early RL algorithm called _Q learning_ that models the _value_ returned from actions over time.) # * The second term in the brackets is the correction that UCB gives us. As time $t$ increases, the natural log of $t$ also increases, but slower and slower for larger $t$. 
This is good because we hope we will find the optimal action at some earlier time $t$, so exploration at large $t$ is less useful (as long as the bandit is stationary or slowly changing). However, the denominator, $N_t(a)$, is the number of times we've selected $a$ already. The more times we've already tried $a$, the less "interesting" it is to try again, so this term penalizes choosing $a$. Finally, $c$ is a constant, a "knob" or _hyperparameter_ that determines how much we weight exploration vs. exploitation. # # # When we use UCB in subsequent lessons, we'll use a simple _linear_ equation for $Q_t(a)$, i.e., something of the form $z = ax + by + c$. # # See [Wikipedia - MAB Approximate solutions for contextual bandit](https://en.wikipedia.org/wiki/Multi-armed_bandit), [these references](../06-RL-References.ipynb#Upper-Confidence-Bound), and the [RLlib documentation](https://docs.ray.io/en/latest/rllib-algorithms.html?highlight=greedy#linear-upper-confidence-bound-contrib-linucb) for more information. # ### Thompson Sampling # # Thompson sampling, developed in the 1930s, is similar to UCB in that it picks the action that is believed to have the highest potential for maximum reward. It is a _Bayesian, model-based_ approach, where the model is the posterior distribution and may incorporate prior belief about the environment. # # The agent samples weights for each action, using their posterior distributions, and chooses the action that produces the highest reward. Calculating the exact posterior is intractable in most cases, so posteriors are usually approximated. Hence, the algorithm models beliefs about the problem. Then, during each iteration, the agent initializes with a random belief and acts optimally based on it. # # One trade-off is that Thompson Sampling requires an accurate model of the past policy and may suffer from large variance when the past policy differs significantly from a policy being evaluated. You may observe this if you rerun experiments in subsequent lessons that use Thompson Sampling. The graphs of rewards, and especially the ranges from high to low, may change significantly from run to run. # # Relatively speaking, the Thompson Sampling exploration strategies are newer than UCB and tend to perform better (as we'll see in subsequent lessons), although the math for their theoretical performance is less rigorous than for UCB. # # For more information, see [Wikipedia](https://en.wikipedia.org/wiki/Thompson_sampling), [A Tutorial on Thompson Sampling](https://web.stanford.edu/~bvr/pubs/TS_Tutorial.pdf), the [RLlib documentation](https://docs.ray.io/en/latest/rllib-algorithms.html?highlight=greedy#linear-thompson-sampling-contrib-lints), and other references in [RL References](../References-Reinforcement-Learning.ipynb). # ### Gradient Bandit Algorithms # # Focusing explicitly on rewards isn't the only approach. What if we use a more general measure, a _preference_, for selecting an action $a$ at time $t$? We'll use $H_t(a)$ to represent this preference at time $t$ for action $a$. We need to model this so we have a probability of selecting an action $a$. Using the _soft-max distribution_ (also known as the Gibbs or Boltzmann distribution) works: # # $Pr\{A_t = a\} \doteq \frac{e^{H_t(a)}}{\sum^{k}_{b=1}e^{H_t(b)}} \doteq \pi_t(a)$ # # $\pi_t(a)$ is defined to encapsulate this formula for the probability of taking action $a$ at time $t$.
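# To make the soft-max formula concrete, here is a small NumPy sketch (the preference values are made up for illustration) that converts a vector of preferences $H_t(a)$ into action probabilities $\pi_t(a)$ and samples an action from them.

# +
import numpy as np

rng = np.random.default_rng(0)
H = np.array([0.5, 1.5, -0.3])       # hypothetical preferences H_t(a) for three arms

pi = np.exp(H - H.max())             # subtract the max for numerical stability
pi /= pi.sum()                       # soft-max: pi_t(a) = e^{H_t(a)} / sum_b e^{H_t(b)}

action = rng.choice(len(H), p=pi)    # sample an action according to pi_t
pi, action
# -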
# # The term _gradient_ is used for this algorithm because the training update formula for $H_t(a)$ is very similar to the _stochastic gradient descent_ formula used in other ML problems. # # After an action $A_t$ is selected at time $t$ and reward $R_t$ is received, the action preferences are updated as follows: # # $ H_{t+1}(A_t) \doteq H_t(A_t) + \alpha(R_t - \bar{R}_t)(1 - \pi_t(A_t))$, and # # $ H_{t+1}(a) \doteq H_t(a) - \alpha(R_t - \bar{R}_t)(\pi_t(a))$, for all $a \ne A_t$ # # where the $H_0(a)$ values are initialized to zero, $\alpha > 0$ is a step size parameter, and $\bar{R}_t$ is the average of all the rewards up through and including time $t$. Note that if $R_t - \bar{R}_t$ is positive, meaning the current reward is larger than the average, the preference $H(A_t)$ increases. Otherwise, it decreases. # # Note the plus vs. minus signs in the two equations before the $\alpha$ term. If our preference for $A_t$ increases, our preferences for the other actions should decrease. (A minimal code sketch of these updates appears at the end of this section.) # # How do Gradient Bandit algorithms satisfy our desired properties? # # 1. As shown, this algorithm doesn't have tuning parameters (beyond the step size $\alpha$) to control the rate of exploration or convergence to the optimal solution. However, the convergence is reasonably quick if the variance in reward values is relatively high, so that the difference $R_t - \bar{R}_t$ is also relatively large for low $t$ values. # 2. It attempts to pick a good action when exploring, rather than randomly. # 3. See 1. # 4. As $\bar{R}_t$ converges to a maximum, the difference $R_t - \bar{R}_t$ and hence all the preference values $H_t(a)$ will become relatively stationary, with the optimal action having the highest $H$. Since the $H_t(a)$ values govern the probability of being selected, based on the _soft-max distribution_, if the optimal action has a significantly higher $H_t(a)$ than the other actions, it will be chosen most frequently. If the differences between $H_t(a)$ values are not large, then several will be chosen frequently, but that also means their rewards are relatively close. Hence, in either case, the average reward over time will still be close to optimal. # # There are many more details about Gradient Bandit algorithms, but we won't discuss them further here. See [Sutton 2018](https://mitpress.mit.edu/books/reinforcement-learning-second-edition) for the details.
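# Here is a minimal NumPy sketch of the preference updates above (the reward distributions, step size, and variable names are made up for illustration; RLlib's implementations are more involved). Each step samples an action from the soft-max over preferences, then nudges the preferences using the running average reward as a baseline.

# +
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.2, 0.5, 0.8])   # hidden mean reward per arm (made up)
H = np.zeros(3)                          # preferences H_t(a), initialized to zero
alpha = 0.1                              # step size
avg_reward = 0.0                         # running average of all rewards (the baseline)

for t in range(1, 1001):
    pi = np.exp(H - H.max())
    pi /= pi.sum()                       # soft-max probabilities pi_t(a)
    a = rng.choice(3, p=pi)              # sample action A_t
    r = rng.normal(true_means[a], 1.0)   # observe reward R_t
    avg_reward += (r - avg_reward) / t   # update the baseline
    gap = r - avg_reward
    H -= alpha * gap * pi                # every preference moves down by alpha*gap*pi(a)...
    H[a] += alpha * gap                  # ...except A_t, whose net change is alpha*gap*(1 - pi(A_t))

H, int(np.argmax(H))
# -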
ray-rllib/multi-armed-bandits/02-Exploration-vs-Exploitation-Strategies.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Exercise 3.2 Learning From Data # + pycharm={"is_executing": true} from matplotlib import pyplot as plt import numpy as np import random from random import seed # - thk = 5 rad = 10 s = np.linspace(0.2, 5, 25) misrepresented_list_count = [] # + for sep in s: xs_red = [] ys_red = [] for x_coord in np.arange(-(rad + thk), rad + thk, 0.6): for y_coord in np.arange(0, rad + thk, 0.6): if rad ** 2 <= (x_coord - 0) ** 2 + (y_coord - 0) ** 2 <= (rad + thk) ** 2: xs_red.append(x_coord) ys_red.append(y_coord) xs_blue = [] ys_blue = [] for x_coord in np.arange(-(thk / 2), (thk / 2 + (2 * rad) + thk), 0.6): for y_coord in np.arange(-sep, -(rad + +sep + thk), -0.6): if rad ** 2 <= (x_coord - ((thk / 2) + rad)) ** 2 + (y_coord - (-sep)) ** 2 <= (rad + thk) ** 2: xs_blue.append(x_coord) ys_blue.append(y_coord) """ A function for prediction of Y """ def Y_predict(x_vector, w): x_new = [1] for i in x_vector: x_new.append(i) x_new = np.array((x_new)) res = (np.dot(x_new, w)) if res > 0: Y = 1 return Y elif res < 0: Y = -1 return Y elif res == 0: Y = 0 return Y count = 0 """ The main training function for the data, with the Attributes ---------- X - The data set iterations - the number of times the weights are iterated eta - the learning rate """ misrepresented_list = [] def train(X, iterations, eta): global count global w global all_combined_targets for y_idx in range(len(X)): ran_num = random.randint(0, len(X) - 1) x_train = X[ran_num] y_t = Y_predict(x_train, w) misrepresented_list = [] for i, j in enumerate(all_combined_targets): if j != y_t: misrepresented_list.append(i) if len(misrepresented_list) == 0: print('Full accuracy achieved') break random_selection = random.randint(0, len(misrepresented_list) - 1) random_index = misrepresented_list[random_selection] x_selected = X[random_index] y_selected = all_combined_targets[random_index] x_with1 = [1] for i in x_selected: x_with1.append(i) x_with1 = np.array((x_with1)) s_t = np.matmul(w, x_with1) if (y_selected * s_t) <= 1: w = w + (eta * (y_selected - s_t) * x_with1) count += 1 if (count == iterations): break xs_red = np.array(xs_red) ys_red = np.array(ys_red) xs_blue = np.array(xs_blue) ys_blue = np.array(ys_blue) points_1 = [] res1 = [] for i in range(len(xs_red)): points_1.append([xs_red[i], ys_red[i]]) res1.append(-1) points_1 = np.array(points_1) points_2 = [] res2 = [] for i in range(len(xs_blue)): points_2.append([xs_blue[i], ys_blue[i]]) res2.append(1) points_2 = np.array(points_2) all_input = np.concatenate((points_1, points_2)) # creating a combined dataset all_d = np.concatenate((res2, res1)) # Visualizing the linearly separable dataset length_dataset = len(xs_red) d1 = -1 * (np.ones(int(length_dataset / 2))) d2 = np.ones(int(length_dataset / 2)) all_combined_targets = np.concatenate((d2, d1)) # initializing all parameters count = 0 w0, w1, w2 = 0, 0, 0 w = np.array((w0, w1, w2)) weight = 0 iterations = 100 eta = 0.01 # calling the function train(all_input, iterations, eta) iter_list.append(count) plt.plot(iter_list) plt.show()
.ipynb_checkpoints/Exercise_3_2-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .jl # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Julia 1.4.1 # language: julia # name: julia-1.4 # --- # Encode MNIST images as compressed vectors that can later be decoded back into # images. using Flux, Flux.Data.MNIST using Flux: @epochs, onehotbatch, mse, throttle using Base.Iterators: partition using Parameters: @with_kw using CUDAapi if has_cuda() @info "CUDA is on" import CuArrays CuArrays.allowscalar(false) end @with_kw mutable struct Args lr::Float64 = 1e-3 # Learning rate epochs::Int = 10 # Number of epochs N::Int = 32 # Size of the encoding batchsize::Int = 1000 # Batch size for training sample_len::Int = 20 # Number of random digits in the sample image throttle::Int = 5 # Throttle timeout end function get_processed_data(args) # Loading Images imgs = MNIST.images() #Converting image of type RGB to float imgs = channelview.(imgs) # Partition into batches of size 1000 train_data = [float(hcat(vec.(imgs)...)) for imgs in partition(imgs, args.batchsize)] train_data = gpu.(train_data) return train_data end function train(; kws...) args = Args(; kws...) train_data = get_processed_data(args) @info("Constructing model......") # You can try to make the encoder/decoder network larger # Also, the output of encoder is a coding of the given input. # In this case, the input dimension is 28^2 and the output dimension of # encoder is 32. This implies that the coding is a compressed representation. # We can make lossy compression via this `encoder`. encoder = Dense(28^2, args.N, leakyrelu) |> gpu decoder = Dense(args.N, 28^2, leakyrelu) |> gpu # Defining main model as a Chain of encoder and decoder models m = Chain(encoder, decoder) @info("Training model.....") loss(x) = mse(m(x), x) ## Training evalcb = throttle(() -> @show(loss(train_data[1])), args.throttle) opt = ADAM(args.lr) @epochs args.epochs Flux.train!(loss, params(m), zip(train_data), opt, cb = evalcb) return m, args end # + using Images img(x::Vector) = Gray.(reshape(clamp.(x, 0, 1), 28, 28)) function sample(m, args) imgs = MNIST.images() #Converting image of type RGB to float imgs = channelview.(imgs) # `args.sample_len` random digits before = [imgs[i] for i in rand(1:length(imgs), args.sample_len)] # Before and after images after = img.(map(x -> cpu(m)(float(vec(x))), before)) # Stack them all together hcat(vcat.(before, after)...) end # - cd(@__DIR__) m, args= train() # Sample output @info("Saving image sample as sample_ae.png") save("test_flux_autoencoder.png", sample(m, args)) # + img(x::Vector) = Gray.(reshape(clamp.(x, 0, 1), 28, 28)) function sample_encoder(m, args) imgs = MNIST.images() #Converting image of type RGB to float imgs = channelview.(imgs) # `args.sample_len` random digits before = [imgs[i] for i in rand(1:length(imgs), args.sample_len)] # Before and after images after = img.(map(x -> cpu(m(1))(float(vec(x))), before)) # Stack them all together hcat(vcat.(before, after)...) end
flux/test_flux_autoencoder.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Linear Regression with Python # # Your neighbor is a real estate agent and wants help predicting housing prices for regions in the US. It would be great if you could somehow create a model for her that lets her put in a few features of a house and returns an estimate of the price the house would sell for. # # She has asked whether you could help her with your new data science skills. You say yes and decide that Linear Regression might be a good path to solve this problem! # # Your neighbor then gives you information about a group of houses in regions of the United States; it is all in the dataset: USA_Housing.csv. # # The data contains the following columns: # # * 'Avg. Area Income': Average income of the residents of the city where the house is located. # * 'Avg. Area House Age': Average age of the houses in the same city # * 'Avg. Area Number of Rooms': Average number of rooms in the houses in the same city # * 'Avg. Area Number of Bedrooms': Average number of bedrooms in the houses in the same city # * 'Area Population': Population of the city where the house is located # * 'Price': Price the house sold for # * 'Address': Address of the house # **Let's get started!** # ## Let's check out the data # We have been able to get some housing price data as a csv file; let's set up our environment with the libraries we will need and then import the data! # ### Importing libraries import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns # %matplotlib inline # ### Let's check out the data USAhousing = pd.read_csv('USA_Housing.csv') USAhousing.head(10) USAhousing.tail() USAhousing.info() USAhousing.describe() USAhousing.columns # # Exploratory Data Analysis # # Let's create some simple plots to check out the data! sns.pairplot(USAhousing) sns.distplot(USAhousing['Price']) sns.heatmap(USAhousing.corr()) # ## Training the Linear Regression Model # # Let's now begin to train the regression model! We will first need to split our data into an X array that contains the features to train on, and a y array with the target variable, in this case the Price column. We discard the 'Address' column because it only has text information that the linear regression model cannot use. # # ### X and y arrays X = USAhousing[['Avg. Area Income', 'Avg. Area House Age', 'Avg. Area Number of Rooms', 'Avg. Area Number of Bedrooms', 'Area Population']] y = USAhousing['Price'] # ## Train Test Split # # Now let's split the data into a training set and a test set. We will train the model on the training set and then use the test set to evaluate the model. from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=101) # ## Creating and Training the Model from sklearn.linear_model import LinearRegression lm = LinearRegression() lm.fit(X_train,y_train) # ## Model Evaluation # # Let's evaluate the model by checking out its coefficients and how we can interpret them.
# print the intercept print(lm.intercept_) coeff_df = pd.DataFrame(lm.coef_,X.columns,columns=['Coeficiente']) coeff_df # Interpreting the coefficients: # # - Holding all other features fixed, a 1 unit increase in **Avg. Area Income** is associated with an **increase of \$21.52**. # - Holding all other features fixed, a 1 unit increase in **Avg. Area House Age** is associated with an **increase of \$164883.28**. # - Holding all other features fixed, a 1 unit increase in **Avg. Area Number of Rooms** is associated with an **increase of \$122368.67**. # - Holding all other features fixed, a 1 unit increase in **Avg. Area Number of Bedrooms** is associated with an **increase of \$2233.80**. # - Holding all other features fixed, a 1 unit increase in **Area Population** is associated with an **increase of \$15.15**. # # Does this make sense? Probably not, because this data is made up. If you want real data to repeat this type of analysis, check out the [boston](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_boston.html) dataset: # # # from sklearn.datasets import load_boston # boston = load_boston() # print(boston.DESCR) # boston_df = boston.data # ## Predictions from our Model # # Let's grab predictions from our test set and see how well it did! X_test.head() predictions = lm.predict(X_test) predictions plt.scatter(y_test,predictions) # **Residual Histogram** sns.distplot((y_test-predictions),bins=50); # ## Regression Evaluation Metrics # # # Here are three common evaluation metrics for regression problems: # # **Mean Absolute Error** (MAE) is the mean of the absolute value of the errors: # # $$\frac 1n\sum_{i=1}^n|y_i-\hat{y}_i|$$ # # **Mean Squared Error** (MSE) is the mean of the squared errors: # # $$\frac 1n\sum_{i=1}^n(y_i-\hat{y}_i)^2$$ # # **Root Mean Squared Error** (RMSE) is the square root of the mean of the squared errors: # # $$\sqrt{\frac 1n\sum_{i=1}^n(y_i-\hat{y}_i)^2}$$ # # Comparing these metrics: # # - **MAE** is the easiest to understand, because it is the average error. # - **MSE** is more popular than MAE, because MSE "punishes" larger errors, which tends to be useful in the real world. # - **RMSE** is even more popular than MSE, because RMSE is interpretable in the units of "y". # # All of these are **loss functions**, and we want to minimize them. from sklearn import metrics print('MAE:', metrics.mean_absolute_error(y_test, predictions)) print('MSE:', metrics.mean_squared_error(y_test, predictions)) print('RMSE:', np.sqrt(metrics.mean_squared_error(y_test, predictions))) # Explore the Boston dataset mentioned above. # # Next, solve the proposed exercise! # # ## Great job!
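# As a quick check on the formulas above, here is a small sketch (assuming the `y_test` and `predictions` arrays from the cells above) that computes MAE, MSE, and RMSE directly with NumPy; the values should match the `sklearn.metrics` output.

# +
errors = np.asarray(y_test) - np.asarray(predictions)

mae = np.mean(np.abs(errors))   # mean absolute error
mse = np.mean(errors ** 2)      # mean squared error
rmse = np.sqrt(mse)             # root mean squared error

print('MAE:', mae)
print('MSE:', mse)
print('RMSE:', rmse)
# -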
02RegresionLineal/01RegresionLinealConPython.ipynb
# -*- coding: utf-8 -*- # --- # jupyter: # jupytext: # text_representation: # extension: .jl # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Julia 1.3.1 # language: julia # name: julia-1.3 # --- using Plots, LaTeXStrings using STMO # ![](Figures/logo.png) # # # Motivation # # Up to now, we confidently assumed that we would always be able to compute the derivative or gradient of any function. Despite differentiation being a relatively easy operation, it is frequenty not feasible (or desirable) to compute this by hand. *Numerical differentiation* can provide approximations of th derivate or gradient at a particular point. *Automatic differentiation* directly manipulates the computational graph to generate a function that computes the (exact) derivate. Such methods have advanced greatly in the last years and it is no exageration that their easy use in popular software libraries such as TenserFlow and PyTorch are a cornerstone of deep learning and other machine learning and scientific computing fields. # # # Definition of a derivative using Plots, BenchmarkTools using STMO # # Definition of a derivative # # $$ # \frac{\text{d}f(x)}{\text{d}x} = f'(x) = \lim _{h\to 0}{\frac {f(x+h)-f(x)}{h}}. # $$ # # Derivation is in essence a mechanical process, following the rules below. # # ![](Figures/derivatives.jpeg) # # When we work with function of several variables, we use *partial derivatives* (e.g. $\frac{\partial f(x, y)}{\partial x}$), indicating we keep all variables but $x$ fixed. # # Our running example: # # $$ # f(x) = \log x + \frac{\sin x}{x} # $$ f(x) = log(x) + sin(x) / x; # # Symbolic differentiation # # Computing derivatives, as you have seen in basic calculus courses. # # By hand or automatically: # - Maple # - Sympy (python) # - Mathematica # - Maxima # # Differentiation is *easy* compared to *integration* or *sampling*. # # Advantages: # - exact derivatives! # - gives the formula for different evaluations. # - insight in the system # - in some cases, closed-form solution extrema by solving $\frac{\text{d}f(x)}{\text{d}x}=0$ # - no hyperparameters or tweaking: just works! # # Disadvantages: # - some software not flexible enough (gradients, arrays, for-loops,...) # - sometimes explosion of terms: *expression swell* # - not always numerically optimal! using SymEngine @vars x # define variable df = diff(f(x), x) df(2.0) plot(f, 1, 5, label="\$f(x)\$", xlabel="\$x\$", lw=2, color=mygreen) plot!(df, 1, 5, label="\$f'(x)\$", lw=2, color=myorange) # # Numerical differentiation # # Finite difference approximation of the derivative/gradient based on a number of function evaluations. # # Often based on the limit definition of a derivative. Theoretical analysis using Taylor approximation: # # $$ # f(x + h) = f(x) + \frac{h}{1!}f'(x) + \frac{h^2}{2!}f''(x) + \frac{h^3}{3!}f^{(3)}(x)+\ldots # $$ # # **Forward difference** # # $$ # f'(x)\approx \frac{f(x+h) - f(x)}{h} # $$ # # **Central difference** # # $$ # f'(x)\approx \frac{f(x+h) - f(x-h)}{2h} # $$ # # **Complex step method** # # $$ # f'(x)\approx \frac{\text{Im}(f(x +ih))}{h} # $$ diff_fordiff(f, x; h=1e-10) = (f(x + h) - f(x)) / h; diff_centrdiff(f, x; h=1e-10) = (f(x + h) - f(x - h)) / 2h; diff_complstep(f, x; h=1e-10) = imag(f(x + im * h)) / h; diff_fordiff(f, 2.0) diff_centrdiff(f, 2.0) diff_complstep(f, 2.0) # ## Intermezzo: floats # # Real numbers are always represented as floating point numbers in a computer. 
# # ![Encoding of a real number using a `Float32`.](Figures/floats.png) # # By default, Julia uses double precision floats (`Float64`). For brevity, let us take a look at the bit representation of a float. We use `Float32` for brevity's sake. num = Float32(10.789) bitstring(num) # The first bit encodes the *sign*, here positive. sign(num) # The next eight bits specify the *exponent*, the magnitude of the number. exponent(num) # While the final 23 bits specify the *mantissa*, a number between $[1,2]$ representing the precision. significand(num) # These can be used to reconstrunct the number. significand(num) * 2^exponent(num) # The *machine precision* of a number can be retained using `eps`. This is the relative error. eps(num) # not very high because it is only Float32 # This means that larger numbers have a larger absolute error compared to small numbers. eps(1.2) eps(1.2e10) eps(1.2e-10) # This brings us with numerical issues we might encounter using numerical differentiation. # # **First sin of numerical analysis**: # # > *thou shalt not add small numbers to big numbers* # # **second sin of numerical analysis**: # # > *thou shalt not subtract numbers which are approximately equal* # # ## Back to numerical differentiation fexamp(x) = 64x*(1-x)*(1-2x)^2*(1-8x+8x^2)^2 dfexamp = diff(fexamp(x), x) error(diff, h; x=1.0) = max(abs(Float64(dfexamp(x)) - diff(fexamp, x, h=h)), 1e-50); stepsizes = map(t->10.0^t, -20:0.1:-1); plot(stepsizes, error.(diff_fordiff, stepsizes), label="forward difference", xscale=:log10, yscale=:log10, lw=2, legend=:bottomright, color=myblue) plot!(stepsizes, error.(diff_centrdiff, stepsizes), label="central difference", lw=2, color=myred) plot!(stepsizes, error.(diff_complstep, stepsizes), label="complex step", lw=2, color=myyellow) #xlims!(1e-15, 1e-1) xlabel!("\$h\$") ylabel!("absolute error") # Advantages of numerical differentiation: # - easy to implement # - general, no assumptions needed # # Disadvantages: # - not numerically stable (round-off errors) # - not efficient for gradients ($\mathcal{O}(n)$ evaluations for $n$-dimensional vectors) # # # ## Approximations of multiplications with gradients # # **Gradient-vector approximation** # # $$ # \nabla f(\mathbf{x})^\intercal \mathbf{d} \approx \frac{f(\mathbf{x}+h\cdot\mathbf{d}) - f(\mathbf{x}-h\cdot\mathbf{d})}{2h} # $$ # # **Hessian-vector approximation** # # $$ # \nabla^2 f(\mathbf{x}) \mathbf{d} \approx \frac{\nabla f(\mathbf{x}+h\cdot\mathbf{d}) - \nabla f(\mathbf{x}-h\cdot\mathbf{d})}{2h} # $$ grad_vect(f, x, d; h=1e-10) = (f(x + h * d) - f(x - h * d)) / (2h) # + dvect = randn(10) / 10 xvect = 2rand(10) A = randn(10, 10) A = A * A' / 100 #g(x) = exp(- x' * A * x) # adjoint does not play with Zygote g(x) = exp(- sum(x .* (A * x))) # correct gradient and Hessian (by hand) Dg(x) = -2g(x) * A * x D²g(x) = -2g(x) * A - 2A * x * Dg(x)' # - g(xvect) Dg(xvect) Dg(xvect)' * dvect grad_vect(g, xvect, dvect) D²g(xvect) * dvect h = 1e-10 (Dg(xvect + h * dvect) - Dg(xvect - h * dvect)) / 2h # # Forward differentiation # # Accumulation of the gradients along the *computational graph*. # # <img src="Figures/compgraph.png" alt="drawing" width="400"/> # # Forward differentiation computes the gradient from the inputs to the outputs. 
# # ## Differentiation rules # # **Sum rule**: # # $$ # \frac{\partial (f(x)+g(x))}{\partial x} = \frac{\partial f(x)}{\partial x} + \frac{\partial g(x)}{\partial x} # $$ # # **Product rule**: # # $$ # \frac{\partial (f(x)g(x))}{\partial x} = f(x)\frac{\partial g(x)}{\partial x} + g(x)\frac{\partial f(x)}{\partial x} # $$ # # **Chain rule**: # # $$ # \frac{\partial g(f(x))}{\partial x} = \frac{\partial g(u)}{\partial u}\mid_{u=f(x)} \frac{\partial f(x)}{\partial x} # $$ # # ## Example of the forward differentiation # # <img src="Figures/forwarddiff.png" alt="drawing" width="600"/> # # ## Dual numbers # # Forward differentiation can be viewed as evaluating a function using *dual numbers*, which can be viewed as truncated Taylor series: # # $$ # v + \dot{v}\epsilon\,, # $$ # # where $v,\dot{v}\in\mathbb{R}$ and $\epsilon$ is a nilpotent number, i.e. $\epsilon^2=0$. For example, we have # # $$ # (v + \dot{v}\epsilon) + (u + \dot{u}\epsilon) = (v+u) + (\dot{v} +\dot{u})\epsilon # $$ # # # $$ # (v + \dot{v}\epsilon)(u + \dot{u}\epsilon) = (vu) + (v\dot{u} +\dot{v}u)\epsilon\,. # $$ # # # These dual numbers can be used as # # $$ # f(v+\dot{v}\epsilon) = f(v) + f'(v)\dot{v}\epsilon\,. # $$ struct Dual{T} v::T vdot::T end # Let's implement some basic rules showing linearity. Base.:+(a::Dual, b::Dual) = Dual(a.v + b.v, a.vdot + b.vdot) Base.:*(a::Dual, b::Dual) = Dual(a.v * b.v, a.v * b.vdot + b.v * a.vdot) Base.:+(c::Real, b::Dual) = Dual(c + b.v, b.vdot) Base.:*(v::Real, b::Dual) = Dual(v, 0.0) * b # And some more advanced ones, based on differentiation. Base.:sin(a::Dual) = Dual(sin(a.v), cos(a.v) * a.vdot) Base.:exp(a::Dual) = Dual(exp(a.v), exp(a.v) * a.vdot) Base.:log(a::Dual) = Dual(log(a.v), 1.0 / a.v * a.vdot) Base.:/(a::Dual, b::Dual) = Dual(a.v / b.v, (a.vdot * b.v - a.v * b.vdot) / b.v^2) f(Dual(2.0, 1.0)) # + myforwarddiff(f, x) = f(Dual(x, 1.0)).vdot myforwarddiff(f, 2.0) # - # This directly works for vectors! q(x) = 10.0 * x[1] * x[2] + x[1] * x[1] + sin(x[1]) / x[2] q([1, 2]) q(Dual.([1, 2], [1, 0])) # partial wrt x1 q(Dual.([1, 2], [0, 1])) # partial wrt x2 # In practice, we prefer to use a package to do this. using ForwardDiff ForwardDiff.derivative(f, 2.0) ForwardDiff.gradient(g, xvect) ForwardDiff.gradient(q, [1, 2]) # Forward differentiation: # # - exact gradients! # - computational complexity scales with **number of inputs** # - used when you have more outputs than inputs # # # Reverse differentiation # # Compute the gradient from the output toward the inputs using the chain rule. # # <img src="Figures/reversediff.png" alt="drawing" width="600"/> # # Reverse differentiation: # # - also exact! # - main workhorse for training artificial neural networks. # - efficient when more inputs than outputs (machine learning: thousands of parameters vs. one loss) using Zygote f'(2.0) # that's it # Works as well: Zygote.gradient(f, 2.0) # Functions with more than one variable. g'(xvect) # Finding the Hessian: Zygote.hessian(g, xvect) # ## Artificial neural networks # # Multi-layer perceptron. # # <img src="Figures/ANN_example.png" alt="drawing" width="200"/> # # Forward differentiation. # # <img src="Figures/Forwardprop.png" alt="drawing" width="500"/> # # # Reverse differentiation or backpropagation. # # <img src="Figures/Backprop.png" alt="drawing" width="500"/> # # Returns effect of changing layer output on the loss. Can be related directly to the parameters!
# # ## Exercise: logistic regression # # Recall logistic regression on a training set $S=\{(\mathbf{x}_i, y_i)\mid i=1,\ldots,n\}$ with $y\in\{0,1\}$. # # Prediction: # # $$ # f(\mathbf{x}) = \sigma(\mathbf{w}^\intercal\mathbf{x})\,, # $$ # # with $\sigma(t) = 1 /(1+\exp(-t))$. # # To find the parameter vector $\mathbf{w}$, we minimize the cross-entropy: # # $$ # L(\mathbf{w};S)= \sum_{i=1}^n \left[ - y_i \log(f(\mathbf{x}_i)) - (1-y_i)\log(1-f(\mathbf{x}_i)) \right]\,. # $$ # + # artificial data X = [randn(50, 2); randn(50, 2) .+ [-1.0 2.4]]; y = [i <= 50 ? 0 : 1 for i in 1:100]; n = length(y); scatter(X[:,1], X[:,2], color=y) # - σ(t) = 1.0 / (1.0 + exp(-t)) f(x, w) = σ(sum(x .* w)) L(w; X=X, y=y) = sum(- y .* log.(σ.(X * w)) - (1.0 .- y) .* log.(1. .- σ.( X * w))) w = [0.1, 0.1] L(w) # **Assignments** # # 1. Compute the gradient of $L$ w.r.t. $\mathbf{w}$ using # - numerical method # - forward differentiation # - backward differentiation # 2. (optional) Implement a simple gradient descent to find $\mathbf{w}^\star$. # 3. Add a bias to the prediction function. Use `Zygote` to compute the gradients w.r.t. both parameters. # # Differentiating ODE # # Automatic differentiation can be used beyond machine learning and optimization: # # - [physics engines](https://arxiv.org/abs/1611.01652) to learn robot control # - differentiating [protein](https://github.com/lupoglaz/TorchProteinLibrary) [structures](https://www.cell.com/cell-systems/fulltext/S2405-4712(19)30076-6) # - Sinkhorn algorithm # - [dynamic programming](https://arxiv.org/abs/1802.03676) # - [differential equations](https://julialang.org/blog/2019/01/fluxdiffeq) # # Everything is computed by some straightforward and differentiable functions! # # # Exercise # # Consider the *Wheeler's Ridge* function: # # $$ # f(\mathbf{x}) = -\exp(-(x_1 x_2 - a)^2 -(x_2 -a)^2)\,, # $$ # # at the point $\mathbf{x}_0=[1.5, 1.5]^T$. We set $a=1.5$. # # Implement this function. # Compute the gradient by hand. # # Find the gradient and Hessian at $\mathbf{x}_0$ by numerical differentiation. # Compute the gradient and Hessian at $\mathbf{x}_0$ using automatic differentiation. # # References # # - Baydin, A. G., et al. (2015) *Automatic differentiation in machine learning: a survey* # - Kochenderfer, M. J. and Wheeler, T. A., *Algorithms for Optimization*. MIT Press (2019)
chapters/03.AutoDiff/autodiff.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Spectral Embedding Methods # One of the primary embedding tools we'll use in this book is a set of methods called *spectral embedding* {cite:t}`spectraltutorial`. You'll see spectral embedding and variations on it repeatedly, both throughout this section and when we get into applications, so it's worth taking the time to understand spectral embedding deeply. If you're familiar with Principal Component Analysis (PCA), this method has a lot of similarities. We'll need to get into a bit of linear algebra to understand how it works. # # Remember that the basic idea behind any network embedding method is to take the network and put it into Euclidean space - meaning, a nice data table with rows as observations and columns as features (or dimensions), which you can then plot on an x-y axis. In this section, you'll see the linear algebra-centric approach that spectral embedding uses to do this. # # Spectral methods are based on a bit of linear algebra, but hopefully a small enough amount to still be understandable. The overall idea has to do with eigenvectors, and more generally, something called "singular vectors" - a generalization of eigenvectors. It turns out that the biggest singular vectors of a network's adjacency matrix contain the most information about that network - and as the singular vectors get smaller, they contain less information about the network (we're glossing over what 'information' means a bit here, so just think about this as a general intuition). So if you represent a network in terms of its singular vectors, you can drop the smaller ones and still retain most of the information. This is the essence of what spectral embedding is about (here "biggest" means "the singular vector corresponding to the largest singular value"). # # ```{admonition} Singular Values and Singular Vectors # If you don't know what singular values and singular vectors are, don't worry about it. You can think of them as a generalization of eigenvalues/vectors (it's also ok if you don't know what those are): all matrices have singular values and singular vectors, but not all matrices have eigenvalues and eigenvectors. In the case of square, symmetric matrices with positive eigenvalues, the eigenvalues/vectors and singular values/vectors are the same thing. # # If you want some more background information on eigenstuff and singularstuff, there are some explanations in the Math Refresher section in the introduction. They're an important set of vectors associated with matrices with a bunch of interesting properties. A lot of linear algebra is built around exploring those properties. # ``` # # You can see visually how Spectral Embedding works below. We start with a 20-node Stochastic Block Model with two communities, and then found its singular values and vectors. It turns out that because there are only two communities, only the first two singular vectors contain information -- the rest are just noise! (you can see this if you look carefully at the first two columns of the eigenvector matrix). So, we took these two columns and scaled them by the first two singular vectors of the singular value matrix $D$. The final embedding is that scaled matrix, and the plot you see takes the rows of that matrix and puts them into Euclidean space (an x-y axis) as points. 
This matrix is called the *latent position matrix*, and the embeddings for the nodes are called the *latent positions*. Underneath the figure is a list that explains how the algorithm works, step-by-step. # + tags=["hide-input"] from graspologic.simulations import sbm from graphbook_code import heatmap, cmaps, plot_latents from graspologic.utils import to_laplacian from scipy.linalg import svd import seaborn as sns import numpy as np import matplotlib.pyplot as plt import matplotlib.patches as patches def rm_ticks(ax, x=False, y=False, **kwargs): if x is not None: ax.axes.xaxis.set_visible(x) if y is not None: ax.axes.yaxis.set_visible(y) sns.despine(ax=ax, **kwargs) # Make network B = np.array([[0.8, 0.1], [0.1, 0.8]]) n = [10, 10] A, labels = sbm(n=n, p=B, return_labels=True) L = to_laplacian(A) U, E, Ut = svd(L) n_components = 2 Uc = U[:, :n_components] Ec = E[:n_components] latents = Uc @ np.diag(Ec) fig = plt.figure(); ax = fig.add_axes([.06, -.06, .8, .8]) ax = heatmap(L, ax=ax, cbar=False) ax.set_title("Network Representation", loc="left", fontsize=16) # add arrow arrow_ax = fig.add_axes([.8, .3, .3, .1]) rm_ticks(arrow_ax, left=True, bottom=True) plt.arrow(x=0, y=0, dx=1, dy=0, width=.1, color="black") # add joint matrix ax = fig.add_axes([1, -.02*3, .8, .8]) ax = heatmap(U, ax=ax, cbar=False) ax.set_title("Left Singular vector matrix $U$", loc="left") ax = fig.add_axes([1.55, -.06, .8, .8]) ax = heatmap(np.diag(E), ax=ax, cbar=False) ax.set_title("Singular value matrix $S$", loc="left") ax = fig.add_axes([2.1, -.06, .8, .8]) ax = heatmap(Ut, ax=ax, cbar=False) ax.set_title("Right singular vector matrix $V^T$", loc="left") # add second arrow arrow_ax = fig.add_axes([1.5, -1.2, 1.2, 1]) rm_ticks(arrow_ax, left=True, bottom=True) style = "Simple, tail_width=10, head_width=40, head_length=20" kw = dict(arrowstyle=style, color="k", alpha=1) text_arrow = patches.FancyArrowPatch((0.33, .9), (.1, .5), connectionstyle="arc3, rad=-.55", **kw) arrow_ax.add_patch(text_arrow) # Embedding ax = fig.add_axes([.185, -1.2, .4, .8]) cmap = cmaps["sequential"] ax = sns.heatmap(latents, cmap=cmap, ax=ax, cbar=False, xticklabels=False, yticklabels=False) ax.set_title("Latent Positions \n(matrix representation)", loc="left") ax.set_xlabel("First two scaled columns of $U$") ax = fig.add_axes([.185+.45, -1.2, .8, .8]) plot_latents(latents, ax=ax, labels=labels) ax.set_title("Latent Positions (Euclidean representation)", loc="left") ax.set_xlabel("Plotting the rows of U as points in space") fig.suptitle("The Spectral Embedding Algorithm", fontsize=32, x=1.5); # - # ```{admonition} The Spectral Embedding Algorithm # 1. Take a network's adjacency matrix. Optionally take its Laplacian as a network representation. # 2. Decompose it into a a singular vector matrix, a singular value matrix, and the singular vector matrix's transpose. # 3. Remove every column of the singular vector matrix except for the first $k$ vectors, corresponding to the $k$ largest singular values. # 4. Scale the $k$ remaining columns by their corresponding singular values to create the embedding. # 5. The rows of this embedding matrix are the locations in Euclidean space for the nodes of the network (called the latent positions). The embedding matrix is an estimate of the latent position matrix (which we talked about in the 'why embed networks' section) # ``` # We need to dive into a few specifics to understand spectral embedding better. 
We need to figure out how to find our network's singular vectors, for instance, and we also need to understand why those singular vectors can be used to form a representation of our network. To do this, we'll explore a few concepts from linear algebra like matrix rank, and we'll see how understanding these concepts connects to understanding spectral embedding. # # Let's scale down and make a simple network, with only six nodes. We'll take its Laplacian just to show what that optional step looks like, and then we'll find its singular vectors with a technique we'll explore called Singular Value Decomposition. Then, we'll explore why we can use the first $k$ singular values and vectors to find an embedding. Let's start with creating the simple network. # ## A Simple Network # Say we have the simple network below. There are six nodes total, numbered 0 through 5, and there are two distinct connected groups (called "connected components" in network theory land). Nodes 0 through 2 are all connected to each other, and nodes 3 through 5 are also all connected to each other. # + from itertools import combinations import numpy as np def add_edge(A, edge: tuple): """ Add an edge to an undirected graph. """ i, j = edge A[i, j] = 1 A[j, i] = 1 return A A = np.zeros((6, 6)) for edge in combinations([0, 1, 2], 2): add_edge(A, edge) for edge in combinations([3, 4, 5], 2): add_edge(A, edge) # - # You can see the adjacency matrix and network below. Notice that there are two distrinct blocks in the adjacency matrix: in its upper-left, you can see the edges between the first three nodes, and in the bottom right, you can see the edges between the second three nodes. # + tags=["hide-input"] from graphbook_code import draw_multiplot import networkx as nx draw_multiplot(A, pos=nx.kamada_kawai_layout, title="Our Simple Network"); # - # ## The Laplacian Matrix # With spectral embedding, we'll either find the singular vectors of the Laplacian or the singular vectors of the Adjacency Matrix itself (For undirected Laplacians, the singular vectors are the same thing as the eigenvectors). Since we already have the adjacency matrix, let's take the Laplacian just to see what that looks like. # # Remember from chapter four that there are a few different types of Laplacian matrices. By default, for undirected networks, Graspologic uses the normalized Laplacian $L = D^{-1/2} A D^{-1/2}$, where $D$ is the degree matrix. Remember that the degree matrix has the degree, or number of edges, of each node along the diagonals. Variations on the normalized Laplacian are generally what we use in practice, but for simplicity and illustration, we'll just use the basic, cookie-cutter version of the Laplacian $L = D - A$. # Here's the degree matrix $D$. # Build the degree matrix D degrees = np.count_nonzero(A, axis=0) D = np.diag(degrees) D # And here's the Laplacian matrix, written out in full. # Build the Laplacian matrix L L = D-A L # Below, you can see these matrices visually. 
# + tags=["hide-input"] from graphbook_code import heatmap import seaborn as sns from matplotlib.colors import Normalize from graphbook_code import GraphColormap import matplotlib.cm as cm import matplotlib.pyplot as plt fig, axs = plt.subplots(1, 5, figsize=(25, 5)) # First axis (Degree) heatmap(D, ax=axs[0], cbar=False, title="Degree Matrix $D$") # Second axis (-) axs[1].text(x=.5, y=.5, s="-", fontsize=200, va='center', ha='center') axs[1].get_xaxis().set_visible(False) axs[1].get_yaxis().set_visible(False) sns.despine(ax=axs[1], left=True, bottom=True) # Third axis (Adjacency matrix) heatmap(A, ax=axs[2], cbar=False, title="Adjacency Matrix $A$") # Third axis (=) axs[3].text(x=.5, y=.5, s="=", fontsize=200, va='center', ha='center') axs[3].get_xaxis().set_visible(False) axs[3].get_yaxis().set_visible(False) sns.despine(ax=axs[3], left=True, bottom=True) # Fourth axis heatmap(L, ax=axs[4], cbar=False, title="Laplacian Matrix $L$") # Colorbar vmin, vmax = np.array(L).min(), np.array(L).max() norm = Normalize(vmin=vmin, vmax=vmax) im = cm.ScalarMappable(cmap=GraphColormap("sequential").color, norm=norm) fig.colorbar(im, ax=axs, shrink=0.8, aspect=10); fig.suptitle("The Laplacian is just a function of the adjacency matrix", fontsize=24); # - # ## Finding Singular Vectors With Singular Value Decomposition # + [markdown] tags=["hide-input"] # Now that we have a Laplacian matrix, we'll want to find its singular vectors. To do this, we'll need to use a technique called *Singular Value Decomposition*, or SVD. # # SVD is a way to break a single matrix apart (also known as factorizing) into three distinct new matrices -- In our case, the matrix will be the Laplacian we just built. These three new matrices correspond to the singular vectors and singular values of the original matrix: the algorithm will collect all of the singular vectors as columns of one matrix, and the singular values as the diagonals of another matrix. # # In the case of the Laplacian (as with all symmetric matrices that have real, positive eigenvalues), remember that the singular vectors/values and the eigenvectors/values are the same thing. For more technical and generalized details on how SVD works, or for explicit proofs, we would recommend a Linear Algebra textbook [Trefethan, LADR]. Here, we'll look at the SVD with a bit more detail here in the specific case where we start with a matrix which is square, symmetric, and has real eigenvalues. # # **Singular Value Decomposition** Suppose you have a square, symmetrix matrix $X$ with real eigenvalues. In our case, $X$ corresponds to the Laplacian $L$ (or the adjacency matrix $A$). # # \begin{align*} # \begin{bmatrix} # x_{11} & & & " \\ # & x_{22} & & \\ # & & \ddots & \\ # " & & & x_{nn} # \end{bmatrix} # \end{align*} # # Then, you can find three matrices - one which rotates vectors in space, one which scales them along each coordinate axis, and another which rotates them back - which, when you multiply them all together, recreate the original matrix $X$. This is the essence of singular value decomposition: you can break down any linear transformation into a rotation, a scaling, and another rotation. Let's call the matrix which rotates $U$ (this type of matrix is called "orthogonal"), and the matrix that scales $S$. # # \begin{align*} # X &= U S V^T # \end{align*} # # Since $U$ is a matrix that just rotates any vector, all of its column-vectors are orthogonal (all at right angles) from each other and they all have the unit length of 1. 
These columns are more generally called the **singular vectors** of X. In some specific cases, these are also called the eigenvectors. Since $S$ just scales, it's a diagonal matrix: there are values on the diagonals, but nothing (0) on the off-diagonals. The amount that each coordinate axis is scaled are the values on the diagonal entries of $S$, $\sigma_{i}$. These are **singular values** of the matrix $X$, and, also when some conditions are met, these are also the eigenvalues. Assuming our network is undirected, this will be the case with the Laplacian matrix, but not necessarily the adjacency matrix. # # \begin{align*} # X &= \begin{bmatrix} # \uparrow & \uparrow & & \uparrow \\ # u_1 & \vec u_2 & ... & \vec u_n \\ # \downarrow & \downarrow & & \downarrow # \end{bmatrix}\begin{bmatrix} # \sigma_1 & & & \\ # & \sigma_2 & & \\ # & & \ddots & \\ # & & & \sigma_n # \end{bmatrix}\begin{bmatrix} # \leftarrow & \vec u_1^T & \rightarrow \\ # \leftarrow & \vec u_2^T & \rightarrow \\ # & \vdots & \\ # \leftarrow & \vec u_n^T & \rightarrow \\ # \end{bmatrix} # \end{align*} # - # ## Breaking Down Our Network's Laplacian matrix # Now we know how to break down any random matrix into singular vectors and values with SVD, so let's apply it to our toy network. We'll break down our Laplacian matrix into $U$, $S$, and $V^\top$. The Laplacian is a special case where the singular values and singular vectors are the same as the eigenvalues and eigenvectors, so we'll just refer to them as eigenvalues and eigenvectors from here on, since those terms are more common. For similar (actually the same) reasons, in this case $V^\top = U^\top$. # # Here, the leftmost column of $U$ (and the leftmost eigenvalue in $S$) correspond to the eigenvector with the highest eigenvalue, and they're organized in descending order (this is standard for Singular Value Decomposition). from scipy.linalg import svd U, S, Vt = svd(L) # + tags=["hide-input"] fig, axs = plt.subplots(1, 5, figsize=(25, 5)) # First axis (Laplacian) heatmap(L, ax=axs[0], cbar=False, title="$L$") # Second axis (=) axs[1].text(x=.5, y=.5, s="=", fontsize=200, va='center', ha='center') axs[1].get_xaxis().set_visible(False) axs[1].get_yaxis().set_visible(False) sns.despine(ax=axs[1], left=True, bottom=True) # Third axis (U) U_ax = heatmap(U, ax=axs[2], cbar=False, title="$U$") U_ax.set_xlabel("Columns of eigenvectors") # Third axis (S) E_ax = heatmap(np.diag(S), ax=axs[3], cbar=False, title="$S$") E_ax.set_xlabel("Eigenvalues on diagonal") # Fourth axis (V^T) Ut_ax = heatmap(Vt, ax=axs[4], cbar=False, title="$V^T$") Ut_ax.set_xlabel("Rows of eigenvectors") # Colorbar vmin, vmax = np.array(L).min(), np.array(L).max() norm = Normalize(vmin=vmin, vmax=vmax) im = cm.ScalarMappable(cmap=GraphColormap("sequential").color, norm=norm) fig.colorbar(im, ax=axs, shrink=0.8, aspect=10); fig.suptitle("Decomposing our simple Laplacian into eigenvectors and eigenvalues with SVD", fontsize=24); # - # So now we have a collection of eigenvectors organized into a matrix with $U$, and a collection of their corresponding eigenvalues organized into a matrix with $S$. Remember that with Spectral Embedding, we keep only the largest eigenvalues/vectors and "clip" columns off of $U$. # # Why exactly do these matrices reconstruct our Laplacian when multiplied together? Why does the clipped version of $U$ give us a lower-dimensional representation of our network? To answer that question, we'll need to start talking about a concept in linear algebra called the *rank* of a matrix. 
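# Before we do, here is a quick sanity check, a sketch using the `U`, `S`, and `Vt` arrays we just computed, showing that multiplying the three factors back together reproduces the Laplacian and that the singular vectors are orthonormal.

# +
# Multiplying the factors back together recovers L (up to floating-point error).
L_reconstructed = U @ np.diag(S) @ Vt
print(np.allclose(L_reconstructed, L))

# The columns of U are orthonormal: U^T U is the identity matrix.
print(np.allclose(U.T @ U, np.eye(U.shape[0])))
# -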
# # The essential idea is that you can turn each eigenvector/eigenvalue pair into a low-information matrix instead of a vector and number. Summing all of these matrices lets you reconstruct $L$. Summing only a few of these matrices lets you get *close* to $L$. In fact, if you were to unwrap the two matrices into single vectors, the vector you get from summing is as close in Euclidean space as you possibly can get to $L$ given the information you deleted when you removed the smaller eigenvectors. # # Let's dive into it! # ## Why We Care About Taking Eigenvectors: Matrix Rank # When we embed anything to create a new representation, we're essentially trying to find a simpler version of that thing which preserves as much information as possible. This leads us to the concept of **matrix rank**. # # **Matrix Rank**: The rank of a matrix $X$, defined $rank(X)$, is the number of linearly independent rows and columns of $X$. # # At a very high level, we can think of the matrix rank as telling us just how "simple" $X$ is. A matrix which is rank $1$ is very simple: all of its rows or columns can be expressed as a weighted sum of just a single vector. On the other hand, a matrix which has "full rank", or a rank equal to the number of rows (or columns, whichever is smaller), is a bit more complex: no row nor column can be expressed as a weighted sum of other rows or columns. # # There are a couple ways that the rank of a matrix and the singular value decomposition interact which are critical to understand: First, you can make a matrix from your singular vectors and values (eigenvectors and values, in our Laplacian's case), and summing all of them recreates your original, full-rank matrix. Each matrix that you add to the sum increases the rank of the result by one. Second, summing only a few of them gets you to the best estimation of the original matrix that you can get to, given the low-rank result. Let's explore this with a bit more depth. # # We'll be using the Laplacian as our examples, which has the distinctive quality of having its eigenvectors be the same as its singular vectors. For the adjacency matrix, this theory all still works, but you'd just have to replace $\vec u_i \vec u_i^\top$ with $\vec u_i \vec v_i^\top$ throughout (the adjacency matrices' SVD is $A = U S V^\top$, since the right singular vectors might be different than the left singular vectors). # ### Summing Rank 1 Matrices Recreates The Original Matrix # You can actually create an $n \times n$ matrix using any one of the original Laplacian's eigenvectors $\vec u_i$ by taking its outer product $\vec{u_i} \vec{u_i}^T$. This creates a rank one matrix which only contains the information stored in the first eigenvector. Scale it by its eigenvalue $\sigma_i$ and you have something that feels suspiciously similar to how we take the first few singular vectors of $U$ and scale them in the spectral embedding algorithm. # # It turns out that we can express any matrix $X$ as the sum of all of these rank one matrices. # Take the $i^{th}$ column of $U$. Remember that we've been calling this $\vec u_i$: the $i^{th}$ eigenvector of our Laplacian. Its corresponding eigenvalue is the $i^{th}$ element of the diagonal eigenvalue matrix $E$. You can make a rank one matrix from this eigenvalue/eigenvector pair by taking the outer product and scaling the result by the eigenvalue: $\sigma_i \vec u_i \vec u_i^T$. 
# # It turns out that when we take the sum of all of these rank $1$ matrices--each one corresponding to a particular eigenvalue/eigenvector pair--we'll recreate the original matrix. # # \begin{align*} # X &= \sum_{i = 1}^n \sigma_i \vec u_i \vec u_i^T = \sigma_1 \begin{bmatrix}\uparrow \\ \vec u_1 \\ \downarrow\end{bmatrix}\begin{bmatrix}\leftarrow & \vec u_1^T & \rightarrow \end{bmatrix} + # \sigma_2 \begin{bmatrix}\uparrow \\ \vec u_2 \\ \downarrow\end{bmatrix}\begin{bmatrix}\leftarrow & \vec u_2^T & \rightarrow \end{bmatrix} + # ... + # \sigma_n \begin{bmatrix}\uparrow \\ \vec u_n \\ \downarrow\end{bmatrix}\begin{bmatrix}\leftarrow & \vec u_n^T & \rightarrow \end{bmatrix} # \end{align*} # # Here are all of the $\sigma_i \vec u_i \vec u_i^T$ for our Laplacian L. Since there were six nodes in the original network, there are six eigenvalue/vector pairs, and six rank 1 matrices. # + n_nodes = U.shape[0] # For each eigenvector/value, # find its outer product, # and append it to a list. low_rank_matrices = [] for node in range(n_nodes): ui = np.atleast_2d(U[:, node]).T vi = np.atleast_2d(Vt.T[:, node]).T low_rank_matrix = S[node] * ui @ vi.T low_rank_matrices.append(low_rank_matrix) # Take the elementwise sum of every matrix in the list. laplacian_sum = np.array(low_rank_matrices).sum(axis=0) # - # You can see the result of the sum below. On the left are all of the low-rank matrices - one corresponding to each eigenvector - and on the right is the sum of all of them. You can see that the sum is just our Laplacian! # + tags=["hide-input"] from matplotlib.gridspec import GridSpec import warnings fig = plt.figure(figsize=(10, 6)) gs = GridSpec(3, 5) ax_laplacian = fig.add_subplot(gs[:, 2:]) # Plot low-rank matrices i = 0 for row in range(3): for col in range(2): ax = fig.add_subplot(gs[row, col]) title = f"$\sigma_{i+1} u_{i+1} v_{i+1}^T$" heatmap(low_rank_matrices[i], ax=ax, cbar=False, title=title) i += 1 # Plot Laplacian heatmap(laplacian_sum, ax=ax_laplacian, cbar=False, title="$L = \sum_{i = 1}^n \sigma_i u_i v_i^T$") # # Colorbar cax = fig.add_axes([1, 0, .04, .8]) vmin, vmax = np.array(laplacian_sum).min(), np.array(laplacian_sum).max() norm = Normalize(vmin=vmin, vmax=vmax) im = cm.ScalarMappable(cmap=GraphColormap("sequential").color, norm=norm) fig.colorbar(im, cax=cax, use_gridspec=False); fig.suptitle("We can recreate our simple Laplacian by summing all the low-rank matrices", fontsize=24) with warnings.catch_warnings(): warnings.simplefilter("ignore") plt.tight_layout(); # - # Next up, we'll estimate the Laplacian by only taking a few of these matrices. You can already kind of see in the figure above that this'll work - the last two matrices don't even have anything in them (they're just 0)! # ### We can approximate our simple Laplacian by only summing a few of the low-rank matrices # When you sum the first few of these low-rank $\sigma_i u_i u_i^T$, you can *approximate* your original matrix. # # This tells us something interesting about Spectral Embedding: the information in the first few eigenvectors of a high rank matrix lets us find a more simple approximation to it. You can take a matrix that's extremely complicated (high-rank) and project it down to something which is much less complicated (low-rank). # # Look below. In each plot, we're summing more and more of these low-rank matrices. By the time we get to the fourth sum, we've totally recreated the original Laplacian. 
# + tags=["hide-input"] fig, axs = plt.subplots(2, 3, figsize=(9,6)) current = np.zeros(L.shape) for i, ax in enumerate(axs.flat): new = low_rank_matrices[i] current += new heatmap(current, ax=ax, cbar=False, title=f"$\sum_{{i = 1}}^{i+1} \sigma_i u_i u_i^T$") fig.suptitle("Each of these is the sum of an \nincreasing number of low-rank matrices", fontsize=16) plt.tight_layout() # - # ### Approximating becomes extremely useful when we have a bigger (now regularized) Laplacian # This becomes even more useful when we have huge networks with thousands of nodes, but only a few communities. It turns out, especially in this situation, we can usually sum a very small number of low-rank matrices and get to an excellent approximation for our network that uses much less information. # # Take the network below, for example. It's generated from a Stochastic Block Model with 1000 nodes total (500 in one community, 500 in another). We took its normalized Laplacian (remember that this means $L = D^{-1/2} A D^{-1/2}$), decomposed it, and summed the first two low-rank matrices that we generated from the eigenvector columns. # # The result is not exact, but it looks pretty close. And we only needed the information from the first two singular vectors instead of all of the information in our full $n \times n$ matrix! # + from graspologic.simulations import sbm from graspologic.utils import to_laplacian # Make network B = np.array([[0.8, 0.1], [0.1, 0.8]]) n = [25, 25] A2, labels2 = sbm(n=n, p=B, return_labels=True) # Form new laplacian L2 = to_laplacian(A2) # decompose k = 2 U2, E2, Ut2 = svd(L2) k_matrices = U2[:, k] low_rank_approximation = U2[:,0:k] @ (np.diag(E2[0:k]) @ Ut2[0:k, :]) # Plotting fig, axs = plt.subplots(1, 2, figsize=(12, 6)) l2_hm = heatmap(L2, ax=axs[0], cbar=False, title="$L$") l2approx_hm = heatmap(low_rank_approximation, ax=axs[1], cbar=False, title="$\sum_{{i = 1}}^{2} \sigma_i u_i u_i^T$") l2_hm.set_xlabel("Full-rank Laplacian for a 50-node matrix", fontdict={'size': 15}) l2approx_hm.set_xlabel("Sum of only two low-rank matrices", fontdict={'size': 15}); fig.suptitle("Summing only two low-rank matrices approximates the normalized Laplacian pretty well!", fontsize=24) plt.tight_layout() # - # This is where a lot of the power of an SVD comes from: you can approximate extremely complicated (high-rank) matrices with extremely simple (low-rank) matrices. # ## How This Matrix Rank Stuff Helps Us Understand Spectral Embedding # Remember the actual spectral embedding algorithm: we take a network, decompose it with Singular Value Decomposition into its singular vectors and values, and then cut out everything but the top $k$ singular vector/value pairs. Once we scale the columns of singular vectors by their corresponding values, we have our embedding. That embedding is called the latent position matrix, and the locations in space for each of our nodes are called the latent positions. # # Let's go back to our original, small (six-node) network and make an estimate of the latent position matrix from it. We'll embed down to three dimensions. 
# + k = 3 U_cut = U[:, :k] E_cut = E[:k] latents_small = U_cut @ np.diag(E_cut) # + tags=["hide-input"] fig, ax = plt.subplots(figsize=(4, 8)) cmap = cmaps["sequential"] ax = sns.heatmap(latents_small, cmap=cmap, ax=ax, cbar=False, xticklabels=1, yticklabels=1) ax.set_xlabel("Eigenvector") ax.set_ylabel("Node") ax.set_title("Latent Position Matrix", fontsize=22, y=1.01) plt.tight_layout(); # - # How does what we just talked about help us understand spectral embedding? # # Well, each column of the latent position matrix is the $i^{th}$ eigenvector scaled by the $i^{th}$ eigenvalue: $\sigma_i \vec{u_i}$. If we right-multiplied one of those columns by its unscaled transpose $\vec{u_i}^\top$, we'd have one of our rank one matrices. This means that you can think of our rank-one matrices as essentially just fancy versions of the columns of a latent position matrix (our embedding). They contain all the same information - they're just matrices instead of vectors! # + tags=["hide-input"] fig, axs = plt.subplots(1, 4, figsize=(20, 5)) # First axis (Degree) first_col = E[0] * latents_small[:, 0, None] first_mat = first_col @ first_col.T ax = sns.heatmap(first_col, cmap=cmap, ax=axs[0], cbar=False, xticklabels=1, yticklabels=1) ax.set_aspect(1.5) ax.set_xlabel("First Eigenvector") ax.set_ylabel("Node") ax.set_title("First column of \nlatent position matrix $u_0$", fontsize=12, y=1.01) # Third axis (Adjacency matrix) ax = sns.heatmap(first_col.T, cmap=cmap, ax=axs[1], cbar=False, xticklabels=1, yticklabels=1, square=False) ax.set_aspect(1) ax.set_xlabel("Node") ax.set_title("First column of latent position matrix $u_0^T$", fontsize=12, y=1.01) # Third axis (=) axs[2].text(x=.5, y=.5, s="=", fontsize=200, va='center', ha='center') axs[2].get_xaxis().set_visible(False) axs[2].get_yaxis().set_visible(False) sns.despine(ax=axs[2], left=True, bottom=True) # Fourth axis heatmap(first_mat, ax=axs[3], cbar=False, title="First low-rank \nmatrix $\sigma_0 u_0 u_0^T$") # Colorbar vmin, vmax = np.array(L).min(), np.array(L).max() norm = Normalize(vmin=vmin, vmax=vmax) im = cm.ScalarMappable(cmap=GraphColormap("sequential").color, norm=norm) fig.colorbar(im, ax=axs, shrink=0.8, aspect=10); fig.suptitle("Our low-rank matrices contain the same information\n as the columns of the latent position matrix", fontsize=22, y=1.1); # - # In fact, you can express the sum we did earlier - our lower-rank estimation of L - with just our latent position matrix! Remember that $U_k$ is the first $k$ eigenvectors of our Laplacian, and $S_k$ is the diagonal matrix with the first $k$ eigenvalues (and that we named them $\sigma_1$ through $\sigma_k$). 
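# Before looking at the picture below, we can check this identity numerically. This is a minimal sanity check (it assumes `U`, `E`, and `low_rank_matrices` from the earlier cells are still in memory, with `E` holding the singular values): $U_k S_k U_k^T$ should match the sum of the first $k$ low-rank matrices, because our Laplacian is symmetric and so its left and right singular vectors coincide.

# +
k = 2
U_k = U[:, :k]
S_k = np.diag(E[:k])

# Sum of the first k rank-one matrices computed earlier
partial_sum = np.array(low_rank_matrices[:k]).sum(axis=0)

# The two expressions should agree up to floating-point error
print(np.allclose(U_k @ S_k @ U_k.T, partial_sum))
# -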
#

# + tags=["hide-input"]
fig, axs = plt.subplots(1, 5, figsize=(20, 5))

from matplotlib.transforms import Affine2D
import mpl_toolkits.axisartist.floating_axes as floating_axes

# First axis (sum matrix)
current = np.zeros(L.shape)
for i in range(2):
    new = low_rank_matrices[i]
    current += new

heatmap(current, ax=axs[0], cbar=False, title="$\sum_{i=1}^2 \sigma_i u_i u_i^T$")

# Second axis (=)
axs[1].text(x=.5, y=.5, s="=", fontsize=200, va='center', ha='center')
axs[1].get_xaxis().set_visible(False)
axs[1].get_yaxis().set_visible(False)
sns.despine(ax=axs[1], left=True, bottom=True)

# Third axis (Uk)
k = 2
Uk = U[:, :k]
Ek = np.diag(E)[:k, :k]
ax = sns.heatmap(Uk, cmap=cmap, ax=axs[2], cbar=False, xticklabels=1, yticklabels=1)
ax.set_box_aspect(2)
ax.set_xlabel("Eigenvector")
ax.set_title("$U_k$", fontsize=12, y=1.01)

# Ek
ax = sns.heatmap(Ek, cmap=cmap, ax=axs[3], cbar=False, xticklabels=1, yticklabels=1, square=True)
ax.set_title("$S_k$", fontsize=12, y=1.01)
sns.despine(bottom=False, top=False, right=False, left=False, ax=ax)

# Uk^T
# TODO: make this the same size as Uk, just rotated (currently too small)
# Will probably involve revamping all this code to make subplots differently,
# because the reason it's that size is that the dimensions are constrained by the `plt.subplots` call.
transform = Affine2D().rotate_deg(90)
axs[4].set_transform(transform)
ax = sns.heatmap(Uk.T, cmap=cmap, ax=axs[4], cbar=False, xticklabels=1, yticklabels=1)
ax.set_box_aspect(.5)
ax.set_title("$U_k^T$", fontsize=12, y=1.01)
sns.despine(bottom=False, top=False, right=False, left=False, ax=ax)
# -

# This helps give an intuition for why our latent position matrix gives a representation of our network. You can take columns of it, turn those columns into matrices, sum those matrices, and then estimate the Laplacian for the network. That means the columns of our embedding contain all of the information necessary to estimate the network!

# ## Figuring Out How Many Dimensions To Embed Your Network Into

# One thing we haven't addressed is how to figure out how many dimensions to embed down to. We've generally been embedding into two dimensions throughout this chapter (mainly because it's easier to visualize), but you can embed into as many dimensions as you want.
#
# If you don't have any prior information about the "true" dimensionality of your latent positions, by default you'd just be stuck guessing. Fortunately, there are some rules-of-thumb to make your guess better, and some methods people have developed to make fairly decent guesses automatically.
#
# The most common way to pick the number of embedding dimensions is with something called a scree plot. Essentially, the intuition is this: the top singular vectors of an adjacency matrix contain the most useful information about your network, and as the singular values get smaller and smaller, the corresponding singular vectors contain less important information and so matter less (this is why we're allowed to cut out the smallest $n-k$ singular vectors in the spectral embedding algorithm).
#
# The scree plot just plots the singular values by their indices: the first (biggest) singular value is in the beginning, and the last (smallest) singular value is at the end.
#
# You can see the scree plot for the Laplacian we made earlier below. We're only plotting the first ten singular values for demonstration purposes.
# + tags=["hide-input"] # from graspologic.plot import screeplot from matplotlib.patches import Circle from matplotlib.patheffects import withStroke from mpl_toolkits.axes_grid1.anchored_artists import AnchoredDrawingArea from scipy.linalg import svdvals fig, ax = plt.subplots(figsize=(8, 5)) # eigval plot D = svdvals(L2) ax.plot(D[:10]) ax.set_xlabel("Singular value index") ax.set_ylabel("Singular value") # plot circle x, y = .15, .15 radius = .15 ada = AnchoredDrawingArea(150, 150, 0, 0, loc='lower left', pad=0., frameon=False) circle = Circle((105, 35), 20, clip_on=False, zorder=10, linewidth=1, edgecolor='black', facecolor=(0, 0, 0, .0125), path_effects=[withStroke(linewidth=5, foreground='w')]) ada.da.add_artist(circle) ax.add_artist(ada) # add text def text(x, y, text): ax.text(x, y, text, backgroundcolor="white", ha='center', va='top', color='blue') text(2, .19, "Elbow") # - # You'll notice that there's a marked area called the "elbow". This is an area where singular values stop changing in magnitude as much when they get smaller: before the elbow, singular values change rapidly, and after the elbow, singular values barely change at all. (It's called an elbow because the plot kind of looks like an arm, viewed from the side!) # # The location of this elbow gives you a rough indication for how many "true" dimensions your latent positions have. The singular values after the elbow are quite close to each other and have singular vectors which are largely noise, and don't tell you very much about your data. It looks from the scree plot that we should be embedding down to two dimensions, and that adding more dimensions would probably just mean adding noise to our embedding. # # One drawback to this method is that a lot of the time, the elbow location is pretty subjective - real data will rarely have a nice, pretty elbow like the one you see above. The advantage is that it still generally works pretty well; embedding into a few more dimensions than you need isn't too bad, since you'll only have a few noies dimensions and there still may be *some* signal there. # # In any case, Graspologic automates the process of finding an elbow using a popular method developed in 2006 by <NAME> and <NAME> at the University of Waterloo. We won't get into the specifics of how it works here, but you can usually find fairly good elbows automatically. # ## Using Graspologic to embed networks # It's pretty straightforward to use graspologic's API to embed a network. The setup works like an SKlearn class: you instantiate an AdjacencySpectralEmbed class, and then you use it to transform data. You set the number of dimensions to embed to (the number of eigenvector columns to keep!) with `n_components`. # ### Adjacency Spectral Embedding # + from graspologic.embed import AdjacencySpectralEmbed as ASE # Generate a network from an SBM B = np.array([[0.8, 0.1], [0.1, 0.8]]) n = [25, 25] A, labels = sbm(n=n, p=B, return_labels=True) # Instantiate an ASE model and find the embedding ase = ASE(n_components=2) embedding = ase.fit_transform(A) # - plot_latents(embedding, labels=labels, title="Adjacency Spectral Embedding"); # ### Laplacian Spectral Embedding # + from graspologic.embed import LaplacianSpectralEmbed as LSE embedding = LSE(n_components=2).fit_transform(A) # - plot_latents(embedding, labels=labels, title="Laplacian Spectral Embedding") # + [markdown] tags=[] # ## When should you use ASE and when should you use LSE? 
# - # Throughout this article, we've primarily used LSE, since Laplacians have some nice properties (such as having singular values being the same as eigenvalues) that make stuff like SVD easier to explain. However, you can embed the same network with either ASE or LSE, and you'll get two different (but equally true) embeddings. # # Since both embeddings will give you a reasonable clustering, how are they different? When should you use one compared to the other? # # Well, it turns out that LSE and ASE capture different notions of "clustering". <NAME> and collaborators at Johns Hopkins University investigated this recently - in 2018 - and discovered that LSE lets you capture "affinity" structure, whereas ASE lets you capture "core-periphery" structure (their paper is called "On a two-truths phenomenon in spectral graph clustering" - it's an interesting read for the curious). The difference between the two types of structure is shown in the image below. # # # ```{figure} ../../Images/two-truths.jpeg # --- # height: 400px # name: two-truths # --- # Affinity vs. Core-periphery Structure # ``` # The "affinity" structure - the one that LSE is good at finding - means that you have two groups of nodes which are well-connected within the groups, and aren't very connected with each other. Think of a friend network in two schools, where people within the same school are much more likely to be friends than people in different schools. This is a type of structure we've seen a lot in this book in our Stochastic Block Model examples. If you think the communities in your data look like this, you should apply LSE to your network. # # The name "core-periphery" is a good description for this type of structure (which ASE is good at finding). In this notion of clustering, you have a core group of well-connected nodes surrounded by a bunch of "outlier" nodes which just don't have too many edges with anything in general. Think of a core of popular, well-liked, and charismatic kids at a high school, with a periphery of loners or people who prefer not to socialize as much.
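# To see the two methods side by side on the same data, you can simply embed the network from the previous section with both of them. This is a quick sketch: it reuses `A` and `labels` from the cells above, the `ASE` and `LSE` classes already imported, and the same `plot_latents` helper.

# +
ase_embedding = ASE(n_components=2).fit_transform(A)
lse_embedding = LSE(n_components=2).fit_transform(A)
# -

plot_latents(ase_embedding, labels=labels, title="ASE on the same two-block network");

plot_latents(lse_embedding, labels=labels, title="LSE on the same two-block network")

# For a network sampled from this simple two-block SBM, both embeddings should separate the communities; the "two truths" difference only becomes visible on networks that actually contain both affinity and core-periphery structure.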
network_machine_learning_in_python/representations/ch6/spectral-embedding.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="WBk0ZDWY-ff8" # <table align="center"> # <td align="center"><a target="_blank" href="http://introtodeeplearning.com"> # <img src="https://i.ibb.co/Jr88sn2/mit.png" style="padding-bottom:5px;" /> # Visit MIT Deep Learning</a></td> # <td align="center"><a target="_blank" href="https://colab.research.google.com/github/aamini/introtodeeplearning/blob/master/lab1/Part1_TensorFlow.ipynb"> # <img src="https://i.ibb.co/2P3SLwK/colab.png" style="padding-bottom:5px;" />Run in Google Colab</a></td> # <td align="center"><a target="_blank" href="https://github.com/aamini/introtodeeplearning/blob/master/lab1/Part1_TensorFlow.ipynb"> # <img src="https://i.ibb.co/xfJbPmL/github.png" height="70px" style="padding-bottom:5px;" />View Source on GitHub</a></td> # </table> # # # # Copyright Information # # + id="3eI6DUic-6jo" # Copyright 2022 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved. # # Licensed under the MIT License. You may not use this file except in compliance # with the License. Use and/or modification of this code outside of 6.S191 must # reference: # # © MIT 6.S191: Introduction to Deep Learning # http://introtodeeplearning.com # # + [markdown] id="57knM8jrYZ2t" # # Lab 1: Intro to TensorFlow and Music Generation with RNNs # # In this lab, you'll get exposure to using TensorFlow and learn how it can be used for solving deep learning tasks. Go through the code and run each cell. Along the way, you'll encounter several ***TODO*** blocks -- follow the instructions to fill them out before running those cells and continuing. # # # # Part 1: Intro to TensorFlow # # ## 0.1 Install TensorFlow # # TensorFlow is a software library extensively used in machine learning. Here we'll learn how computations are represented and how to define a simple neural network in TensorFlow. For all the labs in 6.S191 2022, we'll be using the latest version of TensorFlow, TensorFlow 2, which affords great flexibility and the ability to imperatively execute operations, just like in Python. You'll notice that TensorFlow 2 is quite similar to Python in its syntax and imperative execution. Let's install TensorFlow and a couple of dependencies. # # + id="LkaimNJfYZ2w" outputId="ad23af07-8721-47bb-f66c-d256b2c682c5" import tensorflow as tf # Download and import the MIT 6.S191 package # !pip install mitdeeplearning import mitdeeplearning as mdl import numpy as np import matplotlib.pyplot as plt # + [markdown] id="2QNMcdP4m3Vs" # ## 1.1 Why is TensorFlow called TensorFlow? # # TensorFlow is called 'TensorFlow' because it handles the flow (node/mathematical operation) of Tensors, which are data structures that you can think of as multi-dimensional arrays. Tensors are represented as n-dimensional arrays of base dataypes such as a string or integer -- they provide a way to generalize vectors and matrices to higher dimensions. # # The ```shape``` of a Tensor defines its number of dimensions and the size of each dimension. The ```rank``` of a Tensor provides the number of dimensions (n-dimensions) -- you can also think of this as the Tensor's order or degree. 
# # Let's first look at 0-d Tensors, of which a scalar is an example: # + id="tFxztZQInlAB" outputId="56cbaf8d-1391-4c8a-aec0-055fd1d0c609" sport = tf.constant("Tennis", tf.string) number = tf.constant(1.41421356237, tf.float64) print("`sport` is a {}-d Tensor".format(tf.rank(sport).numpy())) print("`number` is a {}-d Tensor".format(tf.rank(number).numpy())) # + [markdown] id="-dljcPUcoJZ6" # Vectors and lists can be used to create 1-d Tensors: # + id="oaHXABe8oPcO" outputId="ce718788-29bd-49ea-e99b-4f1e3207deb1" sports = tf.constant(["Tennis", "Basketball"], tf.string) numbers = tf.constant([3.141592, 1.414213, 2.71821], tf.float64) print("`sports` is a {}-d Tensor with shape: {}".format(tf.rank(sports).numpy(), tf.shape(sports))) print("`numbers` is a {}-d Tensor with shape: {}".format(tf.rank(numbers).numpy(), tf.shape(numbers))) # + [markdown] id="gvffwkvtodLP" # Next we consider creating 2-d (i.e., matrices) and higher-rank Tensors. For examples, in future labs involving image processing and computer vision, we will use 4-d Tensors. Here the dimensions correspond to the number of example images in our batch, image height, image width, and the number of color channels. # + id="tFeBBe1IouS3" ### Defining higher-order Tensors ### '''TODO: Define a 2-d Tensor''' matrix = tf.constant([[1,2,3], [7,8,9]]) assert isinstance(matrix, tf.Tensor), "matrix must be a tf Tensor object" assert tf.rank(matrix).numpy() == 2 # + id="Zv1fTn_Ya_cz" '''TODO: Define a 4-d Tensor.''' # Use tf.zeros to initialize a 4-d Tensor of zeros with size 10 x 256 x 256 x 3. # You can think of this as 10 images where each image is RGB 256 x 256. images = tf.zeros([10,256,256,3]) assert isinstance(images, tf.Tensor), "matrix must be a tf Tensor object" assert tf.rank(images).numpy() == 4, "matrix must be of rank 4" assert tf.shape(images).numpy().tolist() == [10, 256, 256, 3], "matrix is incorrect shape" # + [markdown] id="wkaCDOGapMyl" # As you have seen, the ```shape``` of a Tensor provides the number of elements in each Tensor dimension. The ```shape``` is quite useful, and we'll use it often. You can also use slicing to access subtensors within a higher-rank Tensor: # + id="FhaufyObuLEG" outputId="47aee9ef-1f50-4183-e876-dcf8050841e0" row_vector = matrix[1] column_vector = matrix[:,2] scalar = matrix[1, 2] print("`row_vector`: {}".format(row_vector.numpy())) print("`column_vector`: {}".format(column_vector.numpy())) print("`scalar`: {}".format(scalar.numpy())) # + [markdown] id="iD3VO-LZYZ2z" # ## 1.2 Computations on Tensors # # A convenient way to think about and visualize computations in TensorFlow is in terms of graphs. We can define this graph in terms of Tensors, which hold data, and the mathematical operations that act on these Tensors in some order. Let's look at a simple example, and define this computation using TensorFlow: # # ![alt text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab1/img/add-graph.png) # + id="X_YJrZsxYZ2z" outputId="d029ec1b-ff6a-411d-cfef-c404c5ad873b" # Create the nodes in the graph, and initialize values a = tf.constant(15) b = tf.constant(61) # Add them! c1 = tf.add(a,b) c2 = a + b # TensorFlow overrides the "+" operation so that it is able to act on Tensors print(c1) print(c2) # + [markdown] id="Mbfv_QOiYZ23" # Notice how we've created a computation graph consisting of TensorFlow operations, and how the output is a Tensor with value 76 -- we've just created a computation graph consisting of operations, and it's executed them and given us back the result. 
# # Now let's consider a slightly more complicated example: # # ![alt text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab1/img/computation-graph.png) # # Here, we take two inputs, `a, b`, and compute an output `e`. Each node in the graph represents an operation that takes some input, does some computation, and passes its output to another node. # # Let's define a simple function in TensorFlow to construct this computation function: # + id="PJnfzpWyYZ23" ### Defining Tensor computations ### # Construct a simple computation function def func(a,b): '''TODO: Define the operation for c, d, e (use tf.add, tf.subtract, tf.multiply).''' c = tf.add(a,b) d = tf.subtract(b,1) e = tf.multiply(c,d) return e # + [markdown] id="AwrRfDMS2-oy" # Now, we can call this function to execute the computation graph given some inputs `a,b`: # + id="pnwsf8w2uF7p" outputId="029cfd71-915b-4f55-cd99-7d1e452c84f3" # Consider example values for a,b a, b = 1.5, 2.5 # Execute the computation e_out = func(a,b) print(e_out) # + [markdown] id="6HqgUIUhYZ29" # Notice how our output is a Tensor with value defined by the output of the computation, and that the output has no shape as it is a single scalar value. # + [markdown] id="1h4o9Bb0YZ29" # ## 1.3 Neural networks in TensorFlow # We can also define neural networks in TensorFlow. TensorFlow uses a high-level API called [Keras](https://www.tensorflow.org/guide/keras) that provides a powerful, intuitive framework for building and training deep learning models. # # Let's first consider the example of a simple perceptron defined by just one dense layer: $ y = \sigma(Wx + b)$, where $W$ represents a matrix of weights, $b$ is a bias, $x$ is the input, $\sigma$ is the sigmoid activation function, and $y$ is the output. We can also visualize this operation using a graph: # # ![alt text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab1/img/computation-graph-2.png) # # Tensors can flow through abstract types called [```Layers```](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Layer) -- the building blocks of neural networks. ```Layers``` implement common neural networks operations, and are used to update weights, compute losses, and define inter-layer connectivity. We will first define a ```Layer``` to implement the simple perceptron defined above. # + id="HutbJk-1kHPh" outputId="cd5c70ed-c296-4da6-ab73-64d8ebc7234b" ### Defining a network Layer ### # n_output_nodes: number of output nodes # input_shape: shape of the input # x: input to the layer class OurDenseLayer(tf.keras.layers.Layer): def __init__(self, n_output_nodes): super(OurDenseLayer, self).__init__() self.n_output_nodes = n_output_nodes def build(self, input_shape): d = int(input_shape[-1]) # Define and initialize parameters: a weight matrix W and bias b # Note that parameter initialization is random! self.W = self.add_weight("weight", shape=[d, self.n_output_nodes]) # note the dimensionality self.b = self.add_weight("bias", shape=[1, self.n_output_nodes]) # note the dimensionality def call(self, x): '''TODO: define the operation for z (hint: use tf.matmul)''' z = tf.matmul(x, self.W) + self.b '''TODO: define the operation for out (hint: use tf.sigmoid)''' y = tf.sigmoid(z) return y # Since layer parameters are initialized randomly, we will set a random seed for reproducibility tf.random.set_seed(1) layer = OurDenseLayer(3) layer.build((1,2)) x_input = tf.constant([[1,2.]], shape=(1,2)) y = layer.call(x_input) # test the output! 
print(y.numpy())
mdl.lab1.test_custom_dense_layer_output(y)

# + [markdown] id="Jt1FgM7qYZ3D"
# Conveniently, TensorFlow has defined a number of ```Layers``` that are commonly used in neural networks, for example a [```Dense```](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense?version=stable). Now, instead of using a single ```Layer``` to define our simple neural network, we'll use the [`Sequential`](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/Sequential) model from Keras and a single [`Dense`](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers/Dense) layer to define our network. With the `Sequential` API, you can readily create neural networks by stacking together layers like building blocks.

# + id="7WXTpmoL6TDz"
### Defining a neural network using the Sequential API ###

# Import relevant packages
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense

# Define the number of outputs
n_output_nodes = 3

# First define the model
model = Sequential()

'''TODO: Define a dense (fully connected) layer to compute z'''
# Remember: dense layers are defined by the parameters W and b!
# You can read more about the initialization of W and b in the TF documentation :)
# https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense?version=stable
dense_layer = Dense(n_output_nodes, activation='sigmoid')

# Add the dense layer to the model
model.add(dense_layer)

# + [markdown] id="HDGcwYfUyR-U"
# That's it! We've defined our model using the Sequential API. Now, we can test it out using an example input:

# + id="sg23OczByRDb"
# Test model with example input
x_input = tf.constant([[1,2.]], shape=(1,2))

'''TODO: feed input into the model and predict the output!'''
model_output = model(x_input)
print(model_output)

# + [markdown] id="596NvsOOtr9F"
# In addition to defining models using the `Sequential` API, we can also define neural networks by directly subclassing the [`Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model?version=stable) class, which groups layers together to enable model training and inference. The `Model` class captures what we refer to as a "model" or as a "network". Using Subclassing, we can create a class for our model, and then define the forward pass through the network using the `call` function. Subclassing affords the flexibility to define custom layers, custom training loops, custom activation functions, and custom models. Let's define the same neural network as above now using Subclassing rather than the `Sequential` model.

# + id="K4aCflPVyViD"
### Defining a model using subclassing ###

from tensorflow.keras import Model
from tensorflow.keras.layers import Dense

class SubclassModel(tf.keras.Model):

    # In __init__, we define the Model's layers
    def __init__(self, n_output_nodes):
        super(SubclassModel, self).__init__()
        '''TODO: Our model consists of a single Dense layer. Define this layer.'''
        self.dense_layer = Dense(n_output_nodes, activation='sigmoid')

    # In the call function, we define the Model's forward pass.
    def call(self, inputs):
        return self.dense_layer(inputs)

# + [markdown] id="U0-lwHDk4irB"
# Just like the model we built using the `Sequential` API, let's test out our `SubclassModel` using an example input.
#
#

# + id="LhB34RA-4gXb"
n_output_nodes = 3
model = SubclassModel(n_output_nodes)

x_input = tf.constant([[1,2.]], shape=(1,2))

print(model.call(x_input))

# + [markdown] id="HTIFMJLAzsyE"
# Importantly, Subclassing affords us a lot of flexibility to define custom models.
For example, we can use boolean arguments in the `call` function to specify different network behaviors, for example different behaviors during training and inference. Let's suppose under some instances we want our network to simply output the input, without any perturbation. We define a boolean argument `isidentity` to control this behavior: # + id="P7jzGX5D1xT5" ### Defining a model using subclassing and specifying custom behavior ### from tensorflow.keras import Model from tensorflow.keras.layers import Dense class IdentityModel(tf.keras.Model): # As before, in __init__ we define the Model's layers # Since our desired behavior involves the forward pass, this part is unchanged def __init__(self, n_output_nodes): super(IdentityModel, self).__init__() self.dense_layer = tf.keras.layers.Dense(n_output_nodes, activation='sigmoid') '''TODO: Implement the behavior where the network outputs the input, unchanged, under control of the isidentity argument.''' def call(self, inputs, isidentity=False): x = self.dense_layer(inputs) '''TODO: Implement identity behavior''' if isidentity: return inputs return x # + [markdown] id="Ku4rcCGx5T3y" # Let's test this behavior: # + id="NzC0mgbk5dp2" outputId="c1f430d7-6a60-4e04-9d18-ca4e0fad63de" n_output_nodes = 3 model = IdentityModel(n_output_nodes) x_input = tf.constant([[1,2.]], shape=(1,2)) '''TODO: pass the input into the model and call with and without the input identity option.''' out_activate = model.call(x_input) out_identity = model.call(x_input, isidentity = True) print("Network output with activation: {}; network identity output: {}".format(out_activate.numpy(), out_identity.numpy())) # + [markdown] id="7V1dEqdk6VI5" # Now that we have learned how to define `Layers` as well as neural networks in TensorFlow using both the `Sequential` and Subclassing APIs, we're ready to turn our attention to how to actually implement network training with backpropagation. # + [markdown] id="dQwDhKn8kbO2" # ## 1.4 Automatic differentiation in TensorFlow # # [Automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation) # is one of the most important parts of TensorFlow and is the backbone of training with # [backpropagation](https://en.wikipedia.org/wiki/Backpropagation). We will use the TensorFlow GradientTape [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape?version=stable) to trace operations for computing gradients later. # # When a forward pass is made through the network, all forward-pass operations get recorded to a "tape"; then, to compute the gradient, the tape is played backwards. By default, the tape is discarded after it is played backwards; this means that a particular `tf.GradientTape` can only # compute one gradient, and subsequent calls throw a runtime error. However, we can compute multiple gradients over the same computation by creating a ```persistent``` gradient tape. # # First, we will look at how we can compute gradients using GradientTape and access them for computation. 
We define the simple function $ y = x^2$ and compute the gradient: # + id="tdkqk8pw5yJM" ### Gradient computation with GradientTape ### # y = x^2 # Example: x = 3.0 x = tf.Variable(3.0) # Initiate the gradient tape with tf.GradientTape() as tape: # Define the function y = x * x # Access the gradient -- derivative of y with respect to x dy_dx = tape.gradient(y, x) assert dy_dx.numpy() == 6.0 # + [markdown] id="JhU5metS5xF3" # In training neural networks, we use differentiation and stochastic gradient descent (SGD) to optimize a loss function. Now that we have a sense of how `GradientTape` can be used to compute and access derivatives, we will look at an example where we use automatic differentiation and SGD to find the minimum of $L=(x-x_f)^2$. Here $x_f$ is a variable for a desired value we are trying to optimize for; $L$ represents a loss that we are trying to minimize. While we can clearly solve this problem analytically ($x_{min}=x_f$), considering how we can compute this using `GradientTape` sets us up nicely for future labs where we use gradient descent to optimize entire neural network losses. # + attributes={"classes": ["py"], "id": ""} id="7g1yWiSXqEf-" outputId="b277e685-51c4-4b37-<PASSWORD>-c<PASSWORD>" ### Function minimization with automatic differentiation and SGD ### # Initialize a random value for our initial x x = tf.Variable([tf.random.normal([1])]) print("Initializing x={}".format(x.numpy())) learning_rate = 1e-2 # learning rate for SGD history = [] # Define the target value x_f = 4 # We will run SGD for a number of iterations. At each iteration, we compute the loss, # compute the derivative of the loss with respect to x, and perform the SGD update. for i in range(500): with tf.GradientTape() as tape: '''TODO: define the loss as described above''' loss = (x - x_f)**2 # loss minimization using gradient tape grad = tape.gradient(loss, x) # compute the derivative of the loss with respect to x new_x = x - learning_rate*grad # sgd update x.assign(new_x) # update the value of x history.append(x.numpy()[0]) # Plot the evolution of x as we optimize towards x_f! plt.plot(history) plt.plot([0, 500],[x_f,x_f]) plt.legend(('Predicted', 'True')) plt.xlabel('Iteration') plt.ylabel('x value') # + [markdown] id="pC7czCwk3ceH" # `GradientTape` provides an extremely flexible framework for automatic differentiation. In order to back propagate errors through a neural network, we track forward passes on the Tape, use this information to determine the gradients, and then use these gradients for optimization using SGD. # + id="Gfj2AobCmYQj"
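# -

# + [markdown]
# The text above mentions that a `tf.GradientTape` is discarded after a single `gradient` call unless it is created as persistent. As a small illustrative addition (not part of the original lab), here is a minimal sketch of a persistent tape used to take two different gradients from the same recorded computation:

# +
x = tf.Variable(2.0)

with tf.GradientTape(persistent=True) as tape:
    y = x * x      # y = x^2
    z = y * y      # z = x^4

# With persistent=True the tape can be queried more than once
dy_dx = tape.gradient(y, x)   # expected: 2 * x   = 4.0
dz_dx = tape.gradient(z, x)   # expected: 4 * x^3 = 32.0
print(dy_dx.numpy(), dz_dx.numpy())

# Release the resources held by the persistent tape once we are done
del tape
# -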
Part1_TensorFlow.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # Example-2-GP-BS-Derivatives # Author: <NAME> # Version: 1.0 (28.4.2020) # License: MIT # Email: <EMAIL> # Notes: tested on Mac OS X running Python 3.6.9 with the following packages: # scikit-learn=0.22.1, numpy=1.18.1, matplotlib=3.1.3 # Citation: Please cite the following reference if this notebook is used for research purposes: # <NAME>. and <NAME>, Machine Learning in Finance: From Theory to Practice, Springer Graduate textbook Series, 2020. # - # # Calculating the Greeks # # Overview # The purpose of this notebook is to demonstrate the derivation of the greeks in a Gaussian Process Regression model (GP), fitted to option price data. # # In this notebook, European option prices are generated from the Black-Scholes model. The notebook begins by building a GP call model, where the input is the underlying price. The delta is then derived and compared with the Black-Scholes (BS) # delta. The exercise is repeated, but using the volatility as the input instead of the underlying price. The vega of the GP is then derived and compared with the BS vega. # + from BlackScholes import bsformula import numpy as np import scipy as sp from sklearn import gaussian_process from sklearn.gaussian_process.kernels import RBF import matplotlib.pyplot as plt # %matplotlib inline # - # ## Black-Scholes Model # First, set the model parameters KC = 130 # Call strike KP = 70 # Put strike r = 0.002 # risk-free rate sigma = 0.4 # implied volatility T = 2.0 # Time to maturity S0 = 100 # Underlying spot lb = 0 # lower bound on domain ub = 300 # upper bound on domain training_number = 100 # Number of training samples testing_number = 50 # Number of testing samples sigma_n = 1e-8 # additive noise in GP # Define the call and put prices using the BS model call = lambda x, y: bsformula(1, lb+(ub-lb)*x, KC, r, T, y, 0)[0] put = lambda x, y: bsformula(-1, lb+(ub-lb)*x, KP, r, T, y, 0)[0] # # Delta # Generate the training and testing data, where the input is the gridded underlying and the output are the option prices. 
# +
x_train = np.array(np.linspace(0.01, 1.2, training_number), dtype='float32').reshape(training_number, 1)
x_test = np.array(np.linspace(0.01, 1.0, testing_number), dtype='float32').reshape(testing_number, 1)

y_train = []
for idx in range(len(x_train)):
    y_train.append(call(x_train[idx], sigma))
y_train = np.array(y_train)
# -

# Fit the GP model to the generated data

sk_kernel = RBF(length_scale=1.0, length_scale_bounds=(0.01, 10000.0))
gp = gaussian_process.GaussianProcessRegressor(kernel=sk_kernel, n_restarts_optimizer=20)
gp.fit(x_train, y_train)

# Get the model's predicted outputs for each of the test inputs

y_pred, sigma_hat = gp.predict(x_test, return_std=True)

# Derive the GP delta

# +
l = gp.kernel_.length_scale
rbf = gaussian_process.kernels.RBF(length_scale=l)
Kernel = rbf(x_train, x_train)

K_y = Kernel + np.eye(training_number) * sigma_n
L = sp.linalg.cho_factor(K_y)
alpha_p = sp.linalg.cho_solve(np.transpose(L), y_train)
k_s = rbf(x_test, x_train)
k_s_prime = (x_train.T - x_test) * k_s / l**2
f_prime = np.dot(k_s_prime, alpha_p) / (ub - lb)
# -

# Calculate the BS delta

delta = lambda x, y: bsformula(1, lb+(ub-lb)*x, KC, r, T, y, 0)[1]
delta(x_test, sigma) - f_prime

# Compare the GP delta with the BS delta

plt.figure(figsize = (10,6),facecolor='white', edgecolor='black')
plt.plot(lb+(ub-lb)*x_test, delta(x_test,sigma), color = 'black', label = 'Exact')
plt.plot(lb+(ub-lb)*x_test, f_prime, color = 'red', label = 'GP')
plt.grid(True)
plt.xlabel('S')
plt.ylabel('$\Delta$')
plt.legend(loc = 'best', prop={'size':10});

# Show the error between the GP delta and the BS delta

plt.figure(figsize = (10,6),facecolor='white', edgecolor='black')
plt.plot(lb+(ub-lb)*x_test, delta(x_test,sigma) - f_prime, color = 'black', label = 'GP Error')
plt.grid(True)
plt.xlabel('S')
plt.ylabel('Error in $\Delta$')
plt.legend(loc = 'best', prop={'size':10});

# ## Vega
# Generate the training and testing data, where the input is now the gridded implied volatility (with the underlying held fixed at its scaled value $(S_0-lb)/(ub-lb)$) and the outputs are the option prices.
# +
x_train = np.array(np.linspace(0.01, 1.2, training_number), dtype='float32').reshape(training_number, 1)
x_test = np.array(np.linspace(0.01, 1.0, testing_number), dtype='float32').reshape(testing_number, 1)

y_train = []
for idx in range(len(x_train)):
    y_train.append(call((S0-lb)/(ub-lb), x_train[idx]))
y_train = np.array(y_train)
# -

# Fit the GP model to the generated data

sk_kernel = RBF(length_scale=1.0, length_scale_bounds=(0.01, 10000.0))
gp = gaussian_process.GaussianProcessRegressor(kernel=sk_kernel, n_restarts_optimizer=20)
gp.fit(x_train, y_train)

# Get the model's predicted outputs for each of the test inputs

y_pred, sigma_hat = gp.predict(x_test, return_std=True)

# Derive the GP vega

# +
l = gp.kernel_.length_scale
rbf = gaussian_process.kernels.RBF(length_scale=l)
Kernel = rbf(x_train, x_train)

K_y = Kernel + np.eye(training_number) * sigma_n
L = sp.linalg.cho_factor(K_y)
alpha_p = sp.linalg.cho_solve(np.transpose(L), y_train)
k_s = rbf(x_test, x_train)

k_s_prime = np.zeros([len(x_test), len(x_train)])
for i in range(len(x_test)):
    for j in range(len(x_train)):
        k_s_prime[i, j] = (1.0/l**2) * (x_train[j] - x_test[i]) * k_s[i, j]

f_prime = np.dot(k_s_prime, alpha_p)
# -

# Calculate the BS vega

vega = lambda x, y: bsformula(1, lb + (ub-lb) * x, KC, r, T, y, 0)[2]
vega((S0-lb)/(ub-lb), x_test) - f_prime

# Compare the GP vega with the BS vega
#

plt.figure(figsize = (10,6), facecolor='white', edgecolor='black')
plt.plot(x_test, vega((S0-lb)/(ub-lb), x_test), color = 'black', label = 'Exact')
plt.plot(x_test, f_prime, color = 'red', label = 'GP')
plt.grid(True)
plt.xlabel('$\\sigma$')
plt.ylabel('$\\nu$')
plt.legend(loc = 'best', prop={'size':10});

# Plot the error between the GP vega and the BS vega
#

plt.figure(figsize = (10,6), facecolor='white', edgecolor='black')
plt.plot(x_test, vega((S0-lb)/(ub-lb), x_test)-f_prime, color = 'black', label = 'GP Error')
plt.grid(True)
plt.xlabel('$\\sigma$')
plt.ylabel('Error in $\\nu$')
plt.legend(loc = 'best', prop={'size':10});

# # Idea: Calculate Gamma and then explain what a Gamma Squeeze is
#
# https://www.fool.com/investing/2021/01/28/what-is-a-gamma-squeeze/
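# Following up on the idea above, here is a rough sketch (not from the original notebook) of how the GP gamma could be obtained with the same machinery: refit the GP on the scaled underlying, take the second derivative of the RBF posterior mean, and compare against the closed-form Black-Scholes gamma. All `_g`-suffixed names are new, and the cell should be read as an illustration under those assumptions rather than a reference implementation.

# +
from scipy.stats import norm

# Rebuild the Delta-section training set: input is the scaled underlying, output the call price
x_train_g = np.array(np.linspace(0.01, 1.2, training_number), dtype='float32').reshape(training_number, 1)
x_test_g = np.array(np.linspace(0.01, 1.0, testing_number), dtype='float32').reshape(testing_number, 1)
y_train_g = np.array([call(x, sigma) for x in x_train_g])

gp_g = gaussian_process.GaussianProcessRegressor(
    kernel=RBF(length_scale=1.0, length_scale_bounds=(0.01, 10000.0)),
    n_restarts_optimizer=20)
gp_g.fit(x_train_g, y_train_g)

l_g = gp_g.kernel_.length_scale
rbf_g = gaussian_process.kernels.RBF(length_scale=l_g)
K_y_g = rbf_g(x_train_g, x_train_g) + np.eye(training_number) * sigma_n
L_g = sp.linalg.cho_factor(K_y_g)
alpha_g = sp.linalg.cho_solve(L_g, y_train_g)
k_s_g = rbf_g(x_test_g, x_train_g)

# Second derivative of the RBF kernel with respect to the test input:
# d^2 k / dx*^2 = ((x_j - x*)^2 / l^4 - 1 / l^2) * k(x*, x_j)
k_s_dprime = (((x_train_g.T - x_test_g) ** 2) / l_g**4 - 1.0 / l_g**2) * k_s_g

# Chain rule: the GP input is the scaled underlying, so divide by (ub - lb)^2
gp_gamma = np.dot(k_s_dprime, alpha_g) / (ub - lb) ** 2

# Closed-form Black-Scholes gamma of a call with no dividends, for comparison
S = lb + (ub - lb) * x_test_g
d1 = (np.log(S / KC) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
bs_gamma = norm.pdf(d1) / (S * sigma * np.sqrt(T))

plt.figure(figsize = (10,6), facecolor='white', edgecolor='black')
plt.plot(S, bs_gamma, color = 'black', label = 'Exact')
plt.plot(S, gp_gamma, color = 'red', label = 'GP')
plt.grid(True)
plt.xlabel('S')
plt.ylabel('$\\Gamma$')
plt.legend(loc = 'best', prop={'size':10});
# -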
Chapter3-GPs/.ipynb_checkpoints/Example-2-GP-BS-Derivatives-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import pprint import sys if "../" not in sys.path: sys.path.append("../") from lib.envs.gridworld import GridworldEnv pp = pprint.PrettyPrinter(indent=2) env = GridworldEnv() def value_iteration(env, theta=0.0001, discount_factor=1.0): """ Value Iteration Algorithm. Args: env: OpenAI env. env.P represents the transition probabilities of the environment. env.P[s][a] is a list of transition tuples (prob, next_state, reward, done). env.nS is a number of states in the environment. env.nA is a number of actions in the environment. theta: We stop evaluation once our value function change is less than theta for all states. discount_factor: Gamma discount factor. Returns: A tuple (policy, V) of the optimal policy and the optimal value function. """ def one_step_lookahead(state, V): """ Helper function to calculate the value for all action in a given state. Args: state: The state to consider (int) V: The value to use as an estimator, Vector of length env.nS Returns: A vector of length env.nA containing the expected value of each action. """ A = np.zeros(env.nA) for a in range(env.nA): for prob, next_state, reward, done in env.P[state][a]: A[a] += prob * (reward + discount_factor * V[next_state]) return A V = np.zeros(env.nS) while True: # Stopping condition delta = 0 # Update each state... for s in range(env.nS): # Do a one-step lookahead to find the best action A = one_step_lookahead(s, V) best_action_value = np.max(A) # Calculate delta across all states seen so far delta = max(delta, np.abs(best_action_value - V[s])) # Update the value function. Ref: Sutton book eq. 4.10. V[s] = best_action_value # Check if we can stop if delta < theta: break # Create a deterministic policy using the optimal value function policy = np.zeros([env.nS, env.nA]) for s in range(env.nS): # One step lookahead to find the best action for this state A = one_step_lookahead(s, V) best_action = np.argmax(A) # Always take the best action policy[s, best_action] = 1.0 return policy, V # + policy, v = value_iteration(env) print("Policy Probability Distribution:") print(policy) print("") print("Reshaped Grid Policy (0=up, 1=right, 2=down, 3=left):") print(np.reshape(np.argmax(policy, axis=1), env.shape)) print("") print("Value Function:") print(v) print("") print("Reshaped Grid Value Function:") print(v.reshape(env.shape)) print("") # - # Test the value function expected_v = np.array([ 0, -1, -2, -3, -1, -2, -3, -2, -2, -3, -2, -1, -3, -2, -1, 0]) np.testing.assert_array_almost_equal(v, expected_v, decimal=2)
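# As a small readability aid (an addition, not part of the original solution notebook), the greedy policy can also be printed with arrows instead of integer action codes, using the same 0=up, 1=right, 2=down, 3=left convention:

# +
action_symbols = np.array(['↑', '→', '↓', '←'])
print(action_symbols[np.argmax(policy, axis=1)].reshape(env.shape))
# -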
DP/Value Iteration Solution.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# ## Configurations for Colab

# +
import sys

IN_COLAB = 'google.colab' in sys.modules

if IN_COLAB:
    # !apt-get install -y xvfb python-opengl > /dev/null 2>&1
    # !pip install gym pyvirtualdisplay > /dev/null 2>&1
    # !pip install JSAnimation==0.1
    # !pip install pyglet==1.3.2

    from pyvirtualdisplay import Display

    # Start virtual display
    dis = Display(visible=0, size=(400, 400))
    dis.start()
# -

# # 04. Dueling Network
#
# [<NAME> al., "Dueling Network Architectures for Deep Reinforcement Learning." arXiv preprint arXiv:1511.06581, 2015.](https://arxiv.org/pdf/1511.06581.pdf)
#
# The proposed network architecture, which is named *dueling architecture*, explicitly separates the representation of state values and (state-dependent) action advantages.
#
# ![fig1](https://user-images.githubusercontent.com/14961526/60322956-c2f0b600-99bb-11e9-9ed4-443bd14bc3b0.png)
#
# The dueling network automatically produces separate estimates of the state value function and advantage function, without any extra supervision. Intuitively, the dueling architecture can learn which states are (or are not) valuable, without having to learn the effect of each action for each state. This is particularly useful in states where the choice of action does not affect the environment in any relevant way.
#
# The dueling architecture represents both the value $V(s)$ and advantage $A(s, a)$ functions with a single deep model whose output combines the two to produce a state-action value $Q(s, a)$. Unlike in advantage updating, the representation and algorithm are decoupled by construction.
#
# $$A^\pi (s, a) = Q^\pi (s, a) - V^\pi (s).$$
#
# The value function $V$ measures how good it is to be in a particular state $s$. The $Q$ function, however, measures the value of choosing a particular action when in this state. Now, using the definition of advantage, we might be tempted to construct the aggregating module as follows:
#
# $$Q(s, a; \theta, \alpha, \beta) = V (s; \theta, \beta) + A(s, a; \theta, \alpha),$$
#
# where $\theta$ denotes the parameters of the convolutional layers, while $\alpha$ and $\beta$ are the parameters of the two streams of fully-connected layers.
#
# Unfortunately, the above equation is unidentifiable in the sense that given $Q$ we cannot recover $V$ and $A$ uniquely; for example, adding a constant to $V$ and subtracting the same constant from $A$ leaves $Q$ unchanged, so there are uncountably many pairs of $V$ and $A$ that produce the same $Q$. To address this issue of identifiability, we can force the advantage function estimator to have zero advantage at the chosen action. That is, we let the last module of the network implement the forward mapping:
#
# $$
# Q(s, a; \theta, \alpha, \beta) = V (s; \theta, \beta) + \big( A(s, a; \theta, \alpha) - \max_{a' \in |\mathcal{A}|} A(s, a'; \theta, \alpha) \big).
# $$
#
# This formula guarantees that we can recover the unique $V$ and $A$, but the optimization is not so stable because the advantages have to compensate for any change to the optimal action's advantage. For this reason, an alternative module that replaces the max operator with an average is proposed:
#
# $$
# Q(s, a; \theta, \alpha, \beta) = V (s; \theta, \beta) + \big( A(s, a; \theta, \alpha) - \frac{1}{|\mathcal{A}|} \sum_{a'} A(s, a'; \theta, \alpha) \big).
# $$ # # Unlike the max advantage form, in this formula, the advantages only need to change as fast as the mean, so it increases the stability of optimization. # + import os from typing import Dict, List, Tuple import gym import matplotlib.pyplot as plt import numpy as np import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim from IPython.display import clear_output from torch.nn.utils import clip_grad_norm_ # - # ## Replay buffer # # Please see *01.dqn.ipynb* for detailed description. class ReplayBuffer: """A simple numpy replay buffer.""" def __init__(self, obs_dim: int, size: int, batch_size: int = 32): self.obs_buf = np.zeros([size, obs_dim], dtype=np.float32) self.next_obs_buf = np.zeros([size, obs_dim], dtype=np.float32) self.acts_buf = np.zeros([size], dtype=np.float32) self.rews_buf = np.zeros([size], dtype=np.float32) self.done_buf = np.zeros(size, dtype=np.float32) self.max_size, self.batch_size = size, batch_size self.ptr, self.size, = 0, 0 def store( self, obs: np.ndarray, act: np.ndarray, rew: float, next_obs: np.ndarray, done: bool, ): self.obs_buf[self.ptr] = obs self.next_obs_buf[self.ptr] = next_obs self.acts_buf[self.ptr] = act self.rews_buf[self.ptr] = rew self.done_buf[self.ptr] = done self.ptr = (self.ptr + 1) % self.max_size self.size = min(self.size + 1, self.max_size) def sample_batch(self) -> Dict[str, np.ndarray]: idxs = np.random.choice(self.size, size=self.batch_size, replace=False) return dict(obs=self.obs_buf[idxs], next_obs=self.next_obs_buf[idxs], acts=self.acts_buf[idxs], rews=self.rews_buf[idxs], done=self.done_buf[idxs]) def __len__(self) -> int: return self.size # ## Dueling Network # # Carefully take a look at advantage and value layers separated from feature layer. class Network(nn.Module): def __init__(self, in_dim: int, out_dim: int): """Initialization.""" super(Network, self).__init__() # set common feature layer self.feature_layer = nn.Sequential( nn.Linear(in_dim, 128), nn.ReLU(), ) # set advantage layer self.advantage_layer = nn.Sequential( nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, out_dim), ) # set value layer self.value_layer = nn.Sequential( nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, 1), ) def forward(self, x: torch.Tensor) -> torch.Tensor: """Forward method implementation.""" feature = self.feature_layer(x) value = self.value_layer(feature) advantage = self.advantage_layer(feature) q = value + advantage - advantage.mean(dim=-1, keepdim=True) return q # ## DQN + DuelingNet Agent (w/o Double-DQN & PER) # # Here is a summary of DQNAgent class. # # | Method | Note | # | --- | --- | # |select_action | select an action from the input state. | # |step | take an action and return the response of the env. | # |compute_dqn_loss | return dqn loss. | # |update_model | update the model by gradient descent. | # |target_hard_update| hard update from the local model to the target model.| # |train | train the agent during num_frames. | # |test | test the agent (1 episode). | # |plot | plot the training progresses. | # # # Aside from the dueling network architecture, the authors suggest to use Double-DQN and Prioritized Experience Replay as extra components for better performance. However, we don't implement them to simplify the tutorial. Here, DQNAgent is totally same as the one from *01.dqn.ipynb*. class DQNAgent: """DQN Agent interacting with environment. 
Attribute: env (gym.Env): openAI Gym environment memory (ReplayBuffer): replay memory to store transitions batch_size (int): batch size for sampling epsilon (float): parameter for epsilon greedy policy epsilon_decay (float): step size to decrease epsilon max_epsilon (float): max value of epsilon min_epsilon (float): min value of epsilon target_update (int): period for target model's hard update gamma (float): discount factor dqn (Network): model to train and select actions dqn_target (Network): target model to update optimizer (torch.optim): optimizer for training dqn transition (list): transition information including state, action, reward, next_state, done """ def __init__( self, env: gym.Env, memory_size: int, batch_size: int, target_update: int, epsilon_decay: float, max_epsilon: float = 1.0, min_epsilon: float = 0.1, gamma: float = 0.99, ): """Initialization. Args: env (gym.Env): openAI Gym environment memory_size (int): length of memory batch_size (int): batch size for sampling target_update (int): period for target model's hard update epsilon_decay (float): step size to decrease epsilon lr (float): learning rate max_epsilon (float): max value of epsilon min_epsilon (float): min value of epsilon gamma (float): discount factor """ obs_dim = env.observation_space.shape[0] action_dim = env.action_space.n self.env = env self.memory = ReplayBuffer(obs_dim, memory_size, batch_size) self.batch_size = batch_size self.epsilon = max_epsilon self.epsilon_decay = epsilon_decay self.max_epsilon = max_epsilon self.min_epsilon = min_epsilon self.target_update = target_update self.gamma = gamma # device: cpu / gpu self.device = torch.device( "cuda" if torch.cuda.is_available() else "cpu" ) print(self.device) # networks: dqn, dqn_target self.dqn = Network(obs_dim, action_dim).to(self.device) self.dqn_target = Network(obs_dim, action_dim).to(self.device) self.dqn_target.load_state_dict(self.dqn.state_dict()) self.dqn_target.eval() # optimizer self.optimizer = optim.Adam(self.dqn.parameters()) # transition to store in memory self.transition = list() # mode: train / test self.is_test = False def select_action(self, state: np.ndarray) -> np.ndarray: """Select an action from the input state.""" # epsilon greedy policy if self.epsilon > np.random.random(): selected_action = self.env.action_space.sample() else: selected_action = self.dqn( torch.FloatTensor(state).to(self.device) ).argmax() selected_action = selected_action.detach().cpu().numpy() if not self.is_test: self.transition = [state, selected_action] return selected_action def step(self, action: np.ndarray) -> Tuple[np.ndarray, np.float64, bool]: """Take an action and return the response of the env.""" next_state, reward, done, _ = self.env.step(action) if not self.is_test: self.transition += [reward, next_state, done] self.memory.store(*self.transition) return next_state, reward, done def update_model(self) -> torch.Tensor: """Update the model by gradient descent.""" samples = self.memory.sample_batch() loss = self._compute_dqn_loss(samples) self.optimizer.zero_grad() loss.backward() # we clip the gradients to have their norm less than or equal to 10. 
clip_grad_norm_(self.dqn.parameters(), 10.0) self.optimizer.step() return loss.item() def train(self, num_frames: int, plotting_interval: int = 200): """Train the agent.""" self.is_test = False state = self.env.reset() update_cnt = 0 epsilons = [] losses = [] scores = [] score = 0 for frame_idx in range(1, num_frames + 1): action = self.select_action(state) next_state, reward, done = self.step(action) state = next_state score += reward # if episode ends if done: state = env.reset() scores.append(score) score = 0 # if training is ready if len(self.memory) >= self.batch_size: loss = self.update_model() losses.append(loss) update_cnt += 1 # linearly decrease epsilon self.epsilon = max( self.min_epsilon, self.epsilon - ( self.max_epsilon - self.min_epsilon ) * self.epsilon_decay ) epsilons.append(self.epsilon) # if hard update is needed if update_cnt % self.target_update == 0: self._target_hard_update() # plotting if frame_idx % plotting_interval == 0: self._plot(frame_idx, scores, losses, epsilons) self.env.close() def test(self) -> List[np.ndarray]: """Test the agent.""" self.is_test = True state = self.env.reset() done = False score = 0 frames = [] while not done: frames.append(self.env.render(mode="rgb_array")) action = self.select_action(state) next_state, reward, done = self.step(action) state = next_state score += reward print("score: ", score) self.env.close() return frames def _compute_dqn_loss(self, samples: Dict[str, np.ndarray]) -> torch.Tensor: """Return dqn loss.""" device = self.device # for shortening the following lines state = torch.FloatTensor(samples["obs"]).to(device) next_state = torch.FloatTensor(samples["next_obs"]).to(device) action = torch.LongTensor(samples["acts"].reshape(-1, 1)).to(device) reward = torch.FloatTensor(samples["rews"].reshape(-1, 1)).to(device) done = torch.FloatTensor(samples["done"].reshape(-1, 1)).to(device) # G_t = r + gamma * v(s_{t+1}) if state != Terminal # = r otherwise curr_q_value = self.dqn(state).gather(1, action) next_q_value = self.dqn_target(next_state).max( dim=1, keepdim=True )[0].detach() mask = 1 - done target = (reward + self.gamma * next_q_value * mask).to(self.device) # calculate dqn loss loss = F.smooth_l1_loss(curr_q_value, target) return loss def _target_hard_update(self): """Hard update: target <- local.""" self.dqn_target.load_state_dict(self.dqn.state_dict()) def _plot( self, frame_idx: int, scores: List[float], losses: List[float], epsilons: List[float], ): """Plot the training progresses.""" clear_output(True) plt.figure(figsize=(20, 5)) plt.subplot(131) plt.title('frame %s. score: %s' % (frame_idx, np.mean(scores[-10:]))) plt.plot(scores) plt.subplot(132) plt.title('loss') plt.plot(losses) plt.subplot(133) plt.title('epsilons') plt.plot(epsilons) plt.show() # ## Environment # # You can see the [code](https://github.com/openai/gym/blob/master/gym/envs/classic_control/cartpole.py) and [configurations](https://github.com/openai/gym/blob/master/gym/envs/__init__.py#L53) of CartPole-v0 from OpenAI's repository. 
# environment env_id = "CartPole-v0" env = gym.make(env_id) # ## Set random seed # + seed = 777 def seed_torch(seed): torch.manual_seed(seed) if torch.backends.cudnn.enabled: torch.backends.cudnn.benchmark = False torch.backends.cudnn.deterministic = True np.random.seed(seed) seed_torch(seed) env.seed(seed) # - # ## Initialize # + # parameters num_frames = 10000 memory_size = 1000 batch_size = 32 target_update = 200 epsilon_decay = 1 / 2000 # train agent = DQNAgent(env, memory_size, batch_size, target_update, epsilon_decay) # - # ## Train agent.train(num_frames) # ## Test # # Run the trained agent (1 episode). frames = agent.test() # ## Render # + # Imports specifically so we can render outputs in Colab. from matplotlib import animation from JSAnimation.IPython_display import display_animation from IPython.display import display def display_frames_as_gif(frames): """Displays a list of frames as a gif, with controls.""" patch = plt.imshow(frames[0]) plt.axis('off') def animate(i): patch.set_data(frames[i]) anim = animation.FuncAnimation( plt.gcf(), animate, frames = len(frames), interval=50 ) display(display_animation(anim, default_mode='loop')) # display display_frames_as_gif(frames)
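# The notes above mention that the authors pair the dueling architecture with Double-DQN and Prioritized Experience Replay, which this tutorial omits. As a rough sketch of the first of those extensions (an illustration, not part of the original notebook), the target used in `_compute_dqn_loss` could be changed so that the online network *selects* the next action while the target network *evaluates* it, which reduces overestimation bias. The tensors `reward`, `next_state`, and `done` below are assumed to be the ones built inside `_compute_dqn_loss`.

# +
def double_dqn_target(agent, reward, next_state, done):
    """Double-DQN target: action selection by the online net, evaluation by the target net."""
    next_action = agent.dqn(next_state).argmax(dim=1, keepdim=True)
    next_q_value = agent.dqn_target(next_state).gather(1, next_action).detach()
    mask = 1 - done
    return (reward + agent.gamma * next_q_value * mask).to(agent.device)
# -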
04.dueling.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# +
# Erasmus+ ICCT project (2018-1-SI01-KA203-047081)

# Toggle cell visibility

from IPython.display import HTML
tag = HTML('''<script>
code_show=true;
function code_toggle() {
    if (code_show){
        $('div.input').hide()
    } else {
        $('div.input').show()
    }
    code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
Toggle cell visibility <a href="javascript:code_toggle()">here</a>.''')
display(tag)

# Hide the code completely

# from IPython.display import HTML
# tag = HTML('''<style>
# div.input {
#     display:none;
# }
# </style>''')
# display(tag)
# -

# + [markdown]
# ## From differential equations to state-space form
#
# ### Modelling a mass-spring-damper system
#
# This example lets you experiment with the effects of parameter variations on the free and/or forced response of the mass-spring-damper system presented in the textbook.
#
# <img src="Images\mass-spring-damper.png" alt="drawing" width="300">
#
# The equation describing the system is:
#
# $$m\ddot{x}=-kx-c\dot{x}+F(t),$$
#
# where $x$ is the position of the mass along its single degree of freedom, $m$ is its mass, $k$ is the spring stiffness (so $kx$ is the force exerted by the spring on the mass), $c$ is the viscous friction coefficient (so $c\dot{x}$ is the force exerted by the damper on the mass), and $F(t)$ is the external force applied to the mass (it represents the actual input of the system).
#
# Defining the state vector $\textbf{x}=[x_1, x_2]^T$, where $x_1=x$ and $x_2=\dot{x}$, and the input as $u(t)=F(t)$, the behaviour of the system can be described by the two equations:
# \begin{cases}
# \dot{x_2}=-\frac{k}{m}x_1-\frac{c}{m}x_2+\frac{1}{m}u(t) \\
# \dot{x_1}=x_2
# \end{cases}
# (note that $\dot{x_2}=\ddot{x}$). In matrix form this is
# $$
# \begin{bmatrix}
# \dot{x_1} \\
# \dot{x_2}
# \end{bmatrix}=\underbrace{\begin{bmatrix}
# 0 & 1 \\
# -\frac{k}{m} & -\frac{c}{m}
# \end{bmatrix}}_{A}\begin{bmatrix}
# x_1 \\
# x_2
# \end{bmatrix}+\underbrace{\begin{bmatrix}
# 0 \\
# \frac{1}{m}
# \end{bmatrix}}_{B}u.
# $$
#
# ### How to use this notebook?
#
# - Explore the different responses of the system for different initial conditions and different inputs.
# - Notice how the value of the damping coefficient $c$ determines whether or not the position of the mass oscillates.
#
# Try to make the mass oscillate forever, or to move quickly from one point to another without oscillating.
#
# Try to find, if you can, a set of parameters ($m$, $k$ and $c$) and an input function that makes the amplitude of the oscillations grow towards infinity over time. Is it possible?
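# + [markdown]
# Before the interactive widgets, here is a minimal standalone sketch of the state-space model above (all names in this cell are new and independent of the code below): it simulates the step response for two damping values, showing that with $c = 0$ the mass oscillates indefinitely while a moderate $c$ damps the oscillation out.

# +
import numpy as np
import matplotlib.pyplot as plt
import control

m_, k_ = 5.0, 1.0
T_ = np.linspace(0, 100, 1000)

for c_ in (0.0, 0.5):
    A_ = [[0, 1], [-k_/m_, -c_/m_]]
    B_ = [[0], [1/m_]]
    C_ = [[1, 0]]   # output = position x1
    D_ = [[0]]
    sys_ = control.ss(A_, B_, C_, D_)
    t_out, y_out = control.step_response(sys_, T_)
    plt.plot(t_out, y_out, label=f"c = {c_} Ns/m")

plt.xlabel("t [s]")
plt.ylabel("position [m]")
plt.grid(True)
plt.legend()
plt.show()
# -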
# + #Preparatory Cell # %matplotlib notebook import control import numpy from IPython.display import display, Markdown import ipywidgets as widgets import matplotlib.pyplot as plt from matplotlib import animation # %matplotlib inline #print a matrix latex-like def bmatrix(a): """Returns a LaTeX bmatrix - by <NAME> (ICCT project) :a: numpy array :returns: LaTeX bmatrix as a string """ if len(a.shape) > 2: raise ValueError('bmatrix can at most display two dimensions') lines = str(a).replace('[', '').replace(']', '').splitlines() rv = [r'\begin{bmatrix}'] rv += [' ' + ' & '.join(l.split()) + r'\\' for l in lines] rv += [r'\end{bmatrix}'] return '\n'.join(rv) # Display formatted matrix: def vmatrix(a): if len(a.shape) > 2: raise ValueError('bmatrix can at most display two dimensions') lines = str(a).replace('[', '').replace(']', '').splitlines() rv = [r'\begin{vmatrix}'] rv += [' ' + ' & '.join(l.split()) + r'\\' for l in lines] rv += [r'\end{vmatrix}'] return '\n'.join(rv) #matrixWidget is a matrix looking widget built with a VBox of HBox(es) that returns a numPy array as value ! class matrixWidget(widgets.VBox): def updateM(self,change): for irow in range(0,self.n): for icol in range(0,self.m): self.M_[irow,icol] = self.children[irow].children[icol].value #print(self.M_[irow,icol]) self.value = self.M_ def dummychangecallback(self,change): pass def __init__(self,n,m): self.n = n self.m = m self.M_ = numpy.matrix(numpy.zeros((self.n,self.m))) self.value = self.M_ widgets.VBox.__init__(self, children = [ widgets.HBox(children = [widgets.FloatText(value=0.0, layout=widgets.Layout(width='90px')) for i in range(m)] ) for j in range(n) ]) #fill in widgets and tell interact to call updateM each time a children changes value for irow in range(0,self.n): for icol in range(0,self.m): self.children[irow].children[icol].value = self.M_[irow,icol] self.children[irow].children[icol].observe(self.updateM, names='value') #value = Unicode('<EMAIL>', help="The email value.").tag(sync=True) self.observe(self.updateM, names='value', type= 'All') def setM(self, newM): #disable callbacks, change values, and reenable self.unobserve(self.updateM, names='value', type= 'All') for irow in range(0,self.n): for icol in range(0,self.m): self.children[irow].children[icol].unobserve(self.updateM, names='value') self.M_ = newM self.value = self.M_ for irow in range(0,self.n): for icol in range(0,self.m): self.children[irow].children[icol].value = self.M_[irow,icol] for irow in range(0,self.n): for icol in range(0,self.m): self.children[irow].children[icol].observe(self.updateM, names='value') self.observe(self.updateM, names='value', type= 'All') #self.children[irow].children[icol].observe(self.updateM, names='value') #overlaod class for state space systems that DO NOT remove "useless" states (what "professor" of automatic control would do this?) 
class sss(control.StateSpace): def __init__(self,*args): #call base class init constructor control.StateSpace.__init__(self,*args) #disable function below in base class def _remove_useless_states(self): pass # + #define matrixes C = numpy.matrix([[1,0],[0,1]]) D = numpy.matrix([[0],[0]]) X0 = matrixWidget(2,1) m = widgets.FloatSlider( value=5, min=0.1, max=10.0, step=0.1, description='m [kg]:', disabled=False, continuous_update=False, orientation='horizontal', readout=True, readout_format='.1f', ) k = widgets.FloatSlider( value=1, min=0, max=10.0, step=0.1, description='k [N/m]:', disabled=False, continuous_update=False, orientation='horizontal', readout=True, readout_format='.1f', ) c = widgets.FloatSlider( value=0.5, min=0, max=10.0, step=0.1, description='c [Ns/m]:', disabled=False, continuous_update=False, orientation='horizontal', readout=True, readout_format='.1f', ) u = widgets.FloatSlider( value=1, min=0, max=10.0, step=0.1, description='', disabled=False, continuous_update=False, orientation='horizontal', readout=True, readout_format='.1f', ) omega = widgets.FloatSlider( value=5, min=0, max=10.0, step=0.1, description='', disabled=False, continuous_update=False, orientation='horizontal', readout=True, readout_format='.1f', ) # + def main_callback(X0, m, k, c, u, selu, omega):#m, k, c, u, selu, DW a = numpy.matrix([[0,1],[-k/m,-c/m]]) b = numpy.matrix([[0],[1/m]]) eig = numpy.linalg.eig(a) sys = sss(a,b,C,D) if min(numpy.real(abs(eig[0]))) != 0: T = numpy.linspace(0,100/min(numpy.real(abs(eig[0]))),1000) else: if max(numpy.real(abs(eig[0]))) != 0: T = numpy.linspace(0,100/max(numpy.real(abs(eig[0]))),1000) else: T = numpy.linspace(0,1000,1000) if selu == 'impulse': #selu U = [0 for t in range(0,len(T))] U[0] = u y = control.forced_response(sys,T,U,X0) if selu == 'step': U = [u for t in range(0,len(T))] y = control.forced_response(sys,T,U,X0) if selu == 'sinusoid': U = u*numpy.sin(omega*T) y = control.forced_response(sys,T,U,X0) fig=plt.figure(num=1,figsize=[15, 4]) fig.add_subplot(121) plt.plot(T,y[1][0]) plt.grid() plt.xlabel('t [s]') plt.ylabel('posizione [m]') fig.add_subplot(122) plt.plot(T,y[1][1]) plt.grid() plt.xlabel('t [s]') plt.ylabel('velocità [m]') #display(Markdown('The A matrix is: $%s$ and the eigenvalues are: $%s$' % (bmatrix(a),eig[0]))) #create dummy widget DW = widgets.FloatText(layout=widgets.Layout(width='0px', height='0px')) #create button widget START = widgets.Button( description='Test', disabled=False, button_style='', # 'success', 'info', 'warning', 'danger' or '' tooltip='Test', icon='check' ) def on_start_button_clicked(b): #This is a workaround to have intreactive_output call the callback: # force the value of the dummy widget to change if DW.value> 0 : DW.value = -1 else: DW.value = 1 pass START.on_click(on_start_button_clicked) #define type of ipout SELECT = widgets.Dropdown( options=[('impulso','impulse'), ('gradino','step'), ('sinusoide','sinusoid')], value='impulse', description='', disabled=False ) #create a graphic structure to hold all widgets alltogether = widgets.VBox([widgets.HBox([widgets.VBox([m, k, c]), widgets.HBox([widgets.VBox([widgets.Label('seleziona il tipo di input:',border=3), widgets.Label('u [N]:',border=3), widgets.Label('omega [rad/s]:',border=3)]), widgets.VBox([SELECT,u,omega])])]), widgets.HBox([widgets.Label('Stato iniziale X0:',border=3),X0])]) out = widgets.interactive_output(main_callback,{'X0':X0, 'm': m, 'k': k, 'c': c, 'u': u, 'selu': SELECT, 'omega':omega}) #out.layout.height = '300px' display(out,alltogether)
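# The questions above about oscillations can also be checked analytically (a
# short added sketch, not part of the original notebook): for
# $m\ddot{x}+c\dot{x}+kx=F(t)$ the natural frequency is $\omega_n=\sqrt{k/m}$
# and the damping ratio is $\zeta=c/(2\sqrt{km})$. The position oscillates when
# $0<\zeta<1$, and with $c=0$ a sinusoidal input at $\omega_n$ drives the
# amplitude up over time (resonance).

# +
import numpy as np


def second_order_parameters(m, k, c):
    """Return (natural frequency [rad/s], damping ratio) of m*x'' + c*x' + k*x = F(t)."""
    omega_n = np.sqrt(k / m)
    zeta = c / (2.0 * np.sqrt(k * m))
    return omega_n, zeta


def damping_regime(zeta):
    if zeta == 0:
        return "undamped"
    if zeta < 1:
        return "underdamped"
    if zeta == 1:
        return "critically damped"
    return "overdamped"


for m_, k_, c_ in [(5.0, 1.0, 0.5), (5.0, 1.0, 5.0), (5.0, 1.0, 0.0)]:
    omega_n, zeta = second_order_parameters(m_, k_, c_)
    print(f"m={m_}, k={k_}, c={c_} -> omega_n={omega_n:.3f} rad/s, "
          f"zeta={zeta:.3f} ({damping_regime(zeta)})")
# -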
ICCT_it/examples/04/SS-07-Dalle_equazioni_differenziali_allo_spazio_di_stato.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # A. Using the **same Amazon product dataset from HW1**, **process the reviews** however you deem appropriate using the tools we have learned today (tokenizing, stemming, lemmatization, removing stopwords), and **produce a correlation matrix** of the top 500 words by frequency. Then, **sample your dataframe for only the top 200 words, and identify the two reviews that are the most "similar" based upon cosine similarity**. (7 pts) # + files = ["good_amazon_toy_reviews.txt", "poor_amazon_toy_reviews.txt"] corpus = [] for file in files: corpus += open(file, "r").readlines() # - # replace backslashes and new line carriage symbols corpus = list(map(lambda review: review.replace('\n', '').replace('\\', ''), corpus))[:10000] # + from nltk.stem import WordNetLemmatizer from nltk import word_tokenize import string lemmatizer = WordNetLemmatizer() # make my own function that takes in a full sentence, tokenizes it, lemmatizes the words, then joins it back # on white space def lemmatize_sentence(sentence): words = word_tokenize(sentence) res_words = [] for word in words: res_words.append(lemmatizer.lemmatize(word).strip(string.punctuation)) return " ".join(res_words) # + from nltk.stem import WordNetLemmatizer import pandas as pd from sklearn.feature_extraction.text import CountVectorizer # iterate through the corpus, lemmatizing each sentence (this is a substitute for a for loop!) lemmatized_corpus = map(lemmatize_sentence, corpus) vectorizer = CountVectorizer(stop_words="english") # vectorize the corpus vector = vectorizer.fit_transform(lemmatized_corpus) # convert into Pandas dataframe count_df = pd.DataFrame(vector.toarray(), columns=vectorizer.get_feature_names()) # - # find the top 200 words by first summing along the columns, sorting the values descending order, picking # the top 200, and grabbing the indices (words) top_200_words = count_df.sum(axis=0).sort_values(ascending=False)[:200].index.values # # Co-Occurence Matrix top_200_count_df = count_df[top_200_words] top_200_count_df.corr() # + from itertools import product from scipy.spatial.distance import cosine # sample only 200 random reviews to save time # save the original review text in this df, so we can go back and inspect the most similar reviews top_200_count_df["text"] = corpus top_200_count_df = top_200_count_df.sample(200) # - review_lookup = pd.DataFrame(columns=["review_text"]) review_lookup["review_text"] = top_200_count_df["text"].values review_lookup.index = top_200_count_df["text"].index.values top_200_count_df.drop(columns=["text"], inplace=True) # # Computing Similarity # # ### Option 1: Using For Loops # + computed = set() # create a set to store computed values to minimize calculations results = [] # store the results here for idx, (reviewA, reviewB) in enumerate(list(product(top_200_count_df.index.values, repeat=2))): if idx % 1000 == 0: # print out progress print(f"Done with {idx}") if reviewA == reviewB: continue if (reviewA, reviewB) in computed or (reviewB, reviewA) in computed: # if these reviews are already computed continue reviewA_vector = top_200_count_df.loc[reviewA].values reviewB_vector = top_200_count_df.loc[reviewB].values similarity = 1 - cosine(reviewA_vector, reviewB_vector) computed.add((reviewA, reviewB)) results.append((reviewA, reviewB, similarity)) # - similarities = 
pd.DataFrame(results, columns=["review A", "review B", "similarity"])

top_50_similar = similarities.sort_values(by="similarity", ascending=False).head(50) # get top 50

# use pandas' iterrows() to quickly iterate through rows, and print the most similar reviews
for idx, row in top_50_similar.iterrows():
    a_index = row["review A"]
    b_index = row["review B"]
    a_text = review_lookup.loc[a_index]["review_text"]
    b_text = review_lookup.loc[b_index]["review_text"]
    print(f"({row['similarity']})\n{a_text}\n{b_text}\n\n")

# ## Option B (More Efficient): Using Sklearn's Cosine Similarity Functions (Student Answer)

from sklearn.metrics.pairwise import cosine_similarity

# iterate through the corpus, lemmatizing each sentence (this is a substitute for a for loop!)
lemmatized_corpus = map(lemmatize_sentence, corpus)

vectorizer = CountVectorizer(stop_words="english")

# vectorize the corpus
vector = vectorizer.fit_transform(lemmatized_corpus)

# compute all pairwise cosine similarities at once with the imported function
similarity_matrix = cosine_similarity(vector.toarray())
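# To actually identify the most similar pair of distinct reviews from
# `similarity_matrix` (a small follow-up sketch, not part of the original
# solution), mask the diagonal of self-similarities and take the argmax:

# +
import numpy as np

sim = similarity_matrix.copy()
np.fill_diagonal(sim, -1.0)  # ignore each review's similarity with itself

row, col = np.unravel_index(np.argmax(sim), sim.shape)
print(f"Most similar pair: reviews {row} and {col} "
      f"(cosine similarity = {sim[row, col]:.3f})")
print(corpus[row])
print(corpus[col])
# -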
solutions/HW2 (Instructor Solution).ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/cp-sadag/deehive/blob/master/PyTorch_cifar10_tutorial_ROB313_2020.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + [markdown] deletable=true editable=true id="4zsDWUk5cavn" # # Deep Learning for Image Classification # # Welcome to deep learning for image classification tutorial! # **In this notebook, you will**: # - Learn the basics of PyTorch, a powerful but easy to use package for scientific computing (and deep learning) # - Learn how to build and train a convolutional neural network for image classification. # # If you have never used jupyter notebooks, nor Colab notebooks, [here](https://colab.research.google.com/notebooks/welcome.ipynb) is a short intro. # # # ## I. PyTorch Tutorial # # We will briefly go through the basics of the PyTorch package, playing with toy examples. # # If you know already how to use PyTorch, then you can directly go to the second part of this tutorial # # ## II. Training a classifier # # In this part, we will train a Convolutional Neural Network to classify images of 10 different classes (dogs, cats, car, ...) and see how our model performs on the test set. # # # ## III. Exploring CNN Architectures # # This is the part where you get your hands dirty ;). Your mission is to experiment different CNN architectures and set hyperparameters in order to obtain the best accuracy on the test set! # # + [markdown] id="JgRltjas9PpN" # The following command sets the backend of matplotlib to the 'inline' backend so that the output of plotting commands is displayed inline within frontends like the Jupyter notebook, directly below the code cell that produced it: # + deletable=true editable=true id="GkjN23FKt2D-" # %matplotlib inline # + [markdown] id="YAz-fhRRdFaR" # ### Plotting functions and useful imports # # You can skip this part # + id="nnee2WPudA9K" # Python 2/3 compatibility from __future__ import print_function, division import itertools import time import numpy as np import matplotlib.pyplot as plt # Colors from Colorbrewer Paired_12 colors = [[31, 120, 180], [51, 160, 44]] colors = [(r / 255, g / 255, b / 255) for (r, g, b) in colors] # functions to show an image def imshow(img): """ :param img: (PyTorch Tensor) """ # unnormalize img = img / 2 + 0.5 # Convert tensor to numpy array npimg = img.numpy() # Color channel first -> color channel last plt.imshow(np.transpose(npimg, (1, 2, 0))) def plot_losses(train_history, val_history): x = np.arange(1, len(train_history) + 1) plt.figure(figsize=(8, 6)) plt.plot(x, train_history, color=colors[0], label="Training loss", linewidth=2) plt.plot(x, val_history, color=colors[1], label="Validation loss", linewidth=2) plt.xlabel('Epoch') plt.ylabel('Loss') plt.legend(loc='upper right') plt.title("Evolution of the training and validation loss") plt.show() def plot_confusion_matrix(cm, classes, normalize=False, title='Confusion matrix', cmap=plt.cm.Blues): """ This function prints and plots the confusion matrix. Normalization can be applied by setting `normalize=True`. 
from http://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html :param cm: (numpy matrix) confusion matrix :param classes: [str] :param normalize: (bool) :param title: (str) :param cmap: (matplotlib color map) """ if normalize: cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis] plt.figure(figsize=(8, 8)) plt.imshow(cm, interpolation='nearest', cmap=cmap) plt.title(title) plt.colorbar() tick_marks = np.arange(len(classes)) plt.xticks(tick_marks, classes, rotation=45) plt.yticks(tick_marks, classes) fmt = '.2f' if normalize else 'd' thresh = cm.max() / 2. for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])): plt.text(j, i, format(cm[i, j], fmt), horizontalalignment="center", color="white" if cm[i, j] > thresh else "black") plt.tight_layout() plt.ylabel('True label') plt.xlabel('Predicted label') # + [markdown] deletable=true editable=true id="aH_K9V7icav6" # # I. What is PyTorch ? # # It’s a Python based scientific computing package targeted at two sets of audiences: # # - A replacement for numpy to use the power of GPUs # - A deep learning research platform that provides maximum flexibility and speed # # # ## PyTorch Basics # # In the next steps, we will briefly see how to use PyTorch and exploit its power: # # 1. PyTorch Installation # 2. PyTorch Tensors # 3. Numpy Bridge # 4. Automatic differentiation # 5. PyTorch and GPU (CUDA support) # # # ### 1. Install PyTorch and Torchvision # # # + id="e0y5PLM6ciB5" colab={"base_uri": "https://localhost:8080/"} outputId="f1fe85e1-e02d-40a6-bfd7-114e435e7914" # !pip install torch #http://download.pytorch.org/whl/cu92/torch-0.4.1-cp36-cp36m-linux_x86_64.whl # !pip install torchvision # + deletable=true editable=true id="LcGVaagRcav8" import numpy as np # Import torch and create the alias "th" # instead of writing torch.name_of_a_method() , we only need to write th.name_of_a_method() # (similarly to numpy imported as np) import torch as th # + deletable=true editable=true id="g2-brDDHcawE" colab={"base_uri": "https://localhost:8080/"} outputId="a173ce78-7361-4f18-d92b-04c9099ff68b" # Create tensor of ones (FloatTensor by default) ones = th.ones(3, 2) print(ones) # + [markdown] deletable=true editable=true id="6RvPibnScawC" # ### 2. PyTorch Tensors # # A `torch.Tensor` is a multi-dimensional matrix containing elements of a single data type. # # Tensors are similar to numpy’s ndarrays, but they have a super-power: Tensors can also be used on a GPU to accelerate computing. # + [markdown] deletable=true editable=true id="QcJgJQERcawQ" # #### Tensor Shape # To know the shape of a given tensor, you can use the `.size()` method (the numpy equivalent is `.shape`) # + deletable=true editable=true id="SI96-W9acawS" colab={"base_uri": "https://localhost:8080/"} outputId="c88269cb-ca04-48c2-b304-5ff13f3926e7" # Display the shape of a tensor # it can be used as a tuple print("Tensor Shape: {}".format(ones.size())) # + [markdown] deletable=true editable=true id="pUPWrNarcawZ" # #### Reshape tensors # # To reshape tensors (e.g. flatten a 3D tensor to a 1D array), you can use the `.view()` method: # # - **x.view(new_shape)**: Returns a new tensor with the same data but different size. It is the equivalent of numpy function *reshape* (Gives a new shape to an array without changing its data.). You can read the full documentation [here.](http://pytorch.org/docs/master/tensors.html#torch.Tensor.view) # # [WARNING] when precising a new shape, you have to make sure that the number of elements is constant. 
# For example, a 2D matrix of size 3x3 can only be viewed as a 1D array of size $3 \cdot 3 = 9$ # + deletable=true editable=true id="vX-oxI6Vcawb" colab={"base_uri": "https://localhost:8080/"} outputId="244cbff1-8e19-4ff2-d4be-393e83b01c0b" # Create a 3D tensor of size 3x2x2 zeros_3d_tensor = th.zeros(3, 2, 2) print("Original size:", zeros_3d_tensor.size()) # Reshape it to a 1D array of size 3*2*2 = 12 zeros_1d_array = zeros_3d_tensor.view(3 * 2 * 2) print("Reshaped tensor:", zeros_1d_array.size()) # Let's view our original tensor as a 2D matrix # If you want PyTorch to guess one remaining dimension, # you specify '-1' instead of the actual size zeros_2d_matrix = zeros_3d_tensor.view(-1, 2 * 2) print("Matrix shape:", zeros_2d_matrix.size()) # + [markdown] deletable=true editable=true id="kTO_FFswcawj" # #### Basic Operations on tensors # # Tensor support all basic linear algebra operations. You can read the full documentation [here](http://pytorch.org/docs/master/tensors.html) # + deletable=true editable=true id="Ay7LvYeVcawl" colab={"base_uri": "https://localhost:8080/"} outputId="e1e5601b-6e46-49a6-e681-33fa1e20d9b7" 2 * ones + 1 # + [markdown] deletable=true editable=true id="OD7ZOT4jcaws" # PyTorch tensors also supports numpy indexing: # + deletable=true editable=true id="srzDzj_ocawu" colab={"base_uri": "https://localhost:8080/"} outputId="684c46de-18c9-4f14-935a-04c1c516cec8" print("\n Indexing Demo:") print(ones[:, 1]) # + [markdown] deletable=true editable=true id="xrjqKguqcaw0" # ### 3. Numpy Bridge # WARNING: PyTorch Tensors are different from numpy arrays # even if they have a lot in common # # Though, it is **easy with PyTorch to tranform Tensors to Numpy arrays and vice versa** # + [markdown] deletable=true editable=true id="gVAntrTVcaw3" # #### Numpy <-> PyTorch # # Creating PyTorch tensors from numpy array is done via the `torch.from_numpy()` function # (here `th.from_numpy()` because we renamed *torch* as *th*) # # To transform a PyTorch tensor to a numpy array, you can simply call `.numpy()` method. # + deletable=true editable=true id="t2ENcAKOcaw5" colab={"base_uri": "https://localhost:8080/"} outputId="36c74338-1b12-475f-b81c-c63ef767dcd7" # np.float32 -> th.FloatTensor ones_matrix = np.ones((2, 2), dtype=np.float32) # the matrix is passed by reference: # if we modify the original numpy array, the tensor is also edited ones_tensor = th.from_numpy(ones_matrix) # Convert back to a numpy matrix numpy_matrix = ones_tensor.numpy() print("PyTorch Tensor:") print(ones_tensor) print("Numpy Matrix:") print(numpy_matrix) # + [markdown] deletable=true editable=true id="Y0Itjyg-caxD" # ### 4. Automatic Differentiation # # Pytorch tensors allow to **automatically compute gradients**. That is particulary useful for backpropagation. # # Once you finish your computation you can call `.backward()` and have all the gradients computed automatically. # # You can access the gradient w.r.t. this variable using `.grad`. 
# # + deletable=true editable=true id="WrPNcIpYcaxK" colab={"base_uri": "https://localhost:8080/"} outputId="9112b95d-f99b-4d8d-9df2-a121cec32a95" # We need to specify that we want to compute the gradient # as it requires extra memory and computation ones_tensor = th.ones(2,2, requires_grad=True) print(ones_tensor) # + [markdown] deletable=true editable=true id="IEZDUibxcaxj" # To demonstrate the use of PyTorch Variable, # let's define a simple linear transformation of a variable $x$ : # # $$y = a \cdot x + b$$ # # PyTorch will allows us to automatically compute $$\frac{dy}{dx} $$ # + deletable=true editable=true id="A4j85JjZcaxl" colab={"base_uri": "https://localhost:8080/"} outputId="009fef9e-ecf9-4a35-f86a-8fdf71a04eee" # Create a tensor and tell PyTorch # that we want to compute the gradient x = th.ones(1, requires_grad=True) # Transformation constants a = 4 b = 5 # Define the tranformation and store the result # in a new variable y = a * x + b print(y) # + [markdown] deletable=true editable=true id="_mxnlvwxcaxq" # Let's backprop! # + deletable=true editable=true id="X1i-pN-Fcaxs" y.backward() # + [markdown] deletable=true editable=true id="skgIGZdmcaxw" # `x.grad` prints the gradient: # # $$\frac{dy}{dx} = a$$ # # because: # # $$y = a \cdot x + b$$ # + deletable=true editable=true id="_TYbuwsXcaxx" colab={"base_uri": "https://localhost:8080/"} outputId="825d34c7-4560-442a-d0fa-ddf76f37afab" x.grad # + [markdown] deletable=true editable=true id="ggu-PBGvcax3" # You can now change the values of $a$ and $b$ see their effects on the gradient # (HINT: `x.grad` only depends on the value of `a`) # + [markdown] deletable=true editable=true id="8iPn0C59cax5" # ### 5. PyTorch and GPU (CUDA support) # # Google colab provides a CUDA enabled GPU, so we are going to use its power. # You can move tensor to the GPU by simply using the `to()` method. # Otherwise, PyTorch will use the CPU. # # Here, we will demonstrate the usefulness of the GPU on a simple matrix multiplication: # + id="EwF6ePTpeefQ" colab={"base_uri": "https://localhost:8080/"} outputId="70967875-3c90-42a6-cf86-d85e07ce2a44" if th.cuda.is_available(): # Create tensors x = th.ones(1000, 1000) y = 2 * x + 3 # Do the calculation on cpu (default) start_time = time.time() # Matrix multiplication (for benchmark purpose) results = th.mm(x, y) time_cpu = time.time() - start_time # Do the same calculation but on the gpu # First move tensors to gpu x = x.to("cuda") y = y.to("cuda") start_time = time.time() # Matrix multiplication (for benchmark purpose) results = th.mm(x, y) time_gpu = time.time() - start_time print("Time on CPU: {:.5f}s \t Time on GPU: {:.5f}s".format(time_cpu, time_gpu)) print("Speed up: Computation was {:.0f}X faster on GPU!".format(time_cpu / time_gpu)) else: print("You need to enable GPU accelaration in colab (runtime->change runtime type)") # + [markdown] id="E-AOzDy9lFwi" # As expected, matrix multiplication is way faster on a GPU, so we'd better use it. # + [markdown] deletable=true editable=true id="0kqEBjG6t2Eh" # # # II. Training a classifier # # # For this tutorial, we will use the CIFAR10 dataset. # There are 10 classes: ‘airplane’, ‘automobile’, ‘bird’, ‘cat’, ‘deer’, # ‘dog’, ‘frog’, ‘horse’, ‘ship’, ‘truck’. The images in CIFAR-10 are of # size 3x32x32, i.e. 3-channel color images of 32x32 pixels in size. # # # ![CIFAR10](http://pytorch.org/tutorials/_images/cifar10.png) # # # Training an image classifier # ---------------------------- # # We will do the following steps in order: # # 1. 
Load and normalize the CIFAR10 training and test datasets using # ``torchvision`` # 2. Define a Convolution Neural Network # 3. Define a loss function # 4. Train the network on the training data # 5. Test the network on the test data # + [markdown] deletable=true editable=true id="UWTdj2uYcax7" # ### 1. Loading and normalizing CIFAR10 Dataset # # Using ``torchvision``, it’s extremely easy to load CIFAR10. # + deletable=true editable=true id="KRrvrIi0t2Em" import torch import torchvision import torchvision.transforms as transforms # + [markdown] deletable=true editable=true id="iX2ltR_zcayA" # Seed the random generator to have reproducible results: # + deletable=true editable=true id="335xvR6acayB" seed = 42 np.random.seed(seed) torch.manual_seed(seed) if th.cuda.is_available(): # Make CuDNN Determinist th.backends.cudnn.deterministic = True th.cuda.manual_seed(seed) # Define default device, we should use the GPU (cuda) if available device = th.device("cuda" if th.cuda.is_available() else "cpu") # + [markdown] deletable=true editable=true id="7EzIeyD4cayG" # ### Define subset of the dataset (so it is faster to train) # + deletable=true editable=true id="Nwu-wWh3cayI" from torch.utils.data.sampler import SubsetRandomSampler n_training_samples = 20000 # Max: 50 000 - n_val_samples n_val_samples = 5000 n_test_samples = 5000 train_sampler = SubsetRandomSampler(np.arange(n_training_samples, dtype=np.int64)) val_sampler = SubsetRandomSampler(np.arange(n_training_samples, n_training_samples + n_val_samples, dtype=np.int64)) test_sampler = SubsetRandomSampler(np.arange(n_test_samples, dtype=np.int64)) # (In the last case, indexes do not need to account for training ones because the train=False parameter in datasets.CIFAR will select from the test set) # + [markdown] deletable=true editable=true id="evFXNmbst2Ez" # The output of torchvision datasets are PILImage images of range [0, 1]. # We transform them to Tensors of normalized range [-1, 1] # # # + deletable=true editable=true id="ZJ-hYN00t2E2" colab={"base_uri": "https://localhost:8080/", "height": 99, "referenced_widgets": ["38fc72a174824e3c8ff218f48949da0d", "63b1fe49eecc402782646ffda444f970", "66d03931cca744a795c4ebe5c0c836d7", "a07651a5b3a44436adbef486ecf99f3d", "3fdb78add0394ba5a4de876fbb596b70", "3c768c72053c491094d8b0697007714e", "b8f03628ce2f49e48dac36dc8148bf12", "2b956fec680f4cb7b71d80fb49021af0"]} outputId="f64809a0-920b-4eac-f783-4a40ea39ff15" num_workers = 2 test_batch_size = 4 transform = transforms.Compose( [transforms.ToTensor(), transforms.Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5))]) train_set = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform) train_loader = torch.utils.data.DataLoader(train_set, batch_size=test_batch_size, sampler=train_sampler, num_workers=num_workers) test_set = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform) test_loader = torch.utils.data.DataLoader(test_set, batch_size=test_batch_size, sampler=test_sampler, num_workers=num_workers) classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck') # + [markdown] deletable=true editable=true id="cGWVnBOft2FI" # Let us show some of the training images, for fun. 
# # # + deletable=true editable=true id="68OfC35ut2FM" colab={"base_uri": "https://localhost:8080/", "height": 155} outputId="e4e7ad3d-0a95-4a9e-f85e-26e6b61b76d7" # get some random training images dataiter = iter(train_loader) images, labels = dataiter.next() # show images imshow(torchvision.utils.make_grid(images)) # print labels print(' '.join('{:>10}'.format(classes[labels[j]]) for j in range(test_batch_size))) # + [markdown] deletable=true editable=true id="8ULHEu5Zt2Fa" # ### 2. Define a Convolution Neural Network # # + deletable=true editable=true id="6k6rJyTTcayi" # Useful imports import torch.nn as nn import torch.nn.functional as F # + [markdown] deletable=true editable=true id="0JcmlEe8t2Fe" # #### Forward propagation # # In PyTorch, there are built-in functions that carry out the convolution steps for you. # # - **nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0):** Convolution layer. You can read the full documentation [here](http://pytorch.org/docs/master/nn.html#conv2d) # # - **nn.MaxPool2d(kernel_size, stride=None, padding=0):** Max pooling layer. You can read the full documentation [here](http://pytorch.org/docs/master/nn.html#maxpool2d) # # - **F.relu(Z1):** computes the elementwise ReLU of Z1 (which can be any shape). You can read the full documentation [here.](http://pytorch.org/docs/master/nn.html#torch.nn.ReLU) # # - **x.view(new_shape)**: Returns a new tensor with the same data but different size. It is the equivalent of numpy function *reshape* (Gives a new shape to an array without changing its data). You can read the full documentation [here.](http://pytorch.org/docs/master/tensors.html#torch.Tensor.view) # # - **nn.Linear(in_features, out_features):** Applies a linear transformation to the incoming data: $y = Ax + b$, it is also called a fully connected layer. 
You can read the full documentation [here.](http://pytorch.org/docs/master/nn.html#linear-layers) # + [markdown] deletable=true editable=true id="rbykSRDTcaym" # #### Simple Convolutional Neural Network # # ConvNet with one convolution layer followed by a max pooling operation, # one fully connected layer and an output layer # + deletable=true editable=true id="X4pljAWycayn" class SimpleConvolutionalNetwork(nn.Module): def __init__(self): super(SimpleConvolutionalNetwork, self).__init__() self.conv1 = nn.Conv2d(3, 18, kernel_size=3, stride=1, padding=1) self.pool = nn.MaxPool2d(kernel_size=2, stride=2, padding=0) # cf comments in forward() to have step by step comments # on the shape (how we pass from a 3x32x32 input image to a 18x16x16 volume) self.fc1 = nn.Linear(18 * 16 * 16, 64) self.fc2 = nn.Linear(64, 10) def forward(self, x): """ Forward pass, x shape is (batch_size, 3, 32, 32) (color channel first) in the comments, we omit the batch_size in the shape """ # shape : 3x32x32 -> 18x32x32 x = F.relu(self.conv1(x)) # 18x32x32 -> 18x16x16 x = self.pool(x) # 18x16x16 -> 4608 x = x.view(-1, 18 * 16 * 16) # 4608 -> 64 x = F.relu(self.fc1(x)) # 64 -> 10 # The softmax non-linearity is applied later (cf createLossAndOptimizer() fn) x = self.fc2(x) return x # + [markdown] deletable=true editable=true id="4m-VHCtRcayr" # #### Linear Classifier # + deletable=true editable=true id="Rj-togN6cays" class LinearClassifier(nn.Module): """ Linear Classifier """ def __init__(self): super(LinearClassifier, self).__init__() self.linear = nn.Linear(32 * 32 * 3, 10) def forward(self, x): # Flatten input 3x32x32 -> 3072 x = x.view(x.size(0), -1) return self.linear(x) # + [markdown] deletable=true editable=true id="2SQi9Xf-t2Fu" # ### 3. Define a loss function and optimizer # # Let's use a Classification Cross-Entropy loss and ADAM (optionally, SGD with momentum). You can read more about [optimization methods](https://pytorch.org/docs/stable/optim.html). # # # + deletable=true editable=true id="DOUiPtZQt2Fx" import torch.optim as optim def createLossAndOptimizer(net, learning_rate=0.001): # it combines softmax with negative log likelihood loss criterion = nn.CrossEntropyLoss() #optimizer = optim.SGD(net.parameters(), lr=learning_rate, momentum=0.9) optimizer = optim.Adam(net.parameters(), lr=learning_rate) return criterion, optimizer # + [markdown] deletable=true editable=true id="saJW5bKRt2F9" # ### 4. Train the network # # # This is when things start to get interesting. 
# We simply have to loop over our data iterator, feed the inputs to the network, and optimize # # # + [markdown] deletable=true editable=true id="mNf1e8QZcay1" # #### Data loader # + deletable=true editable=true id="EqDD8_z8cay2" def get_train_loader(batch_size): return torch.utils.data.DataLoader(train_set, batch_size=batch_size, sampler=train_sampler, num_workers=num_workers) # Use larger batch size for validation to speed up computation val_loader = torch.utils.data.DataLoader(train_set, batch_size=128, sampler=val_sampler, num_workers=num_workers) # + [markdown] deletable=true editable=true id="yTDHHbLpcay5" # #### Training loop # The training script: it takes ~10s per epoch with batch_size = 32 # + deletable=true editable=true id="dATbDR5pt2GE" def train(net, batch_size, n_epochs, learning_rate): """ Train a neural network and print statistics of the training :param net: (PyTorch Neural Network) :param batch_size: (int) :param n_epochs: (int) Number of iterations on the training set :param learning_rate: (float) learning rate used by the optimizer """ print("===== HYPERPARAMETERS =====") print("batch_size=", batch_size) print("n_epochs=", n_epochs) print("learning_rate=", learning_rate) print("=" * 30) train_loader = get_train_loader(batch_size) n_minibatches = len(train_loader) criterion, optimizer = createLossAndOptimizer(net, learning_rate) # Init variables used for plotting the loss train_history = [] val_history = [] training_start_time = time.time() best_error = np.inf best_model_path = "best_model.pth" # Move model to gpu if possible net = net.to(device) for epoch in range(n_epochs): # loop over the dataset multiple times running_loss = 0.0 print_every = n_minibatches // 10 start_time = time.time() total_train_loss = 0 for i, (inputs, labels) in enumerate(train_loader): # Move tensors to correct device inputs, labels = inputs.to(device), labels.to(device) # zero the parameter gradients optimizer.zero_grad() # forward + backward + optimize outputs = net(inputs) loss = criterion(outputs, labels) loss.backward() optimizer.step() # print statistics running_loss += loss.item() total_train_loss += loss.item() # print every 10th of epoch if (i + 1) % (print_every + 1) == 0: print("Epoch {}, {:d}% \t train_loss: {:.2f} took: {:.2f}s".format( epoch + 1, int(100 * (i + 1) / n_minibatches), running_loss / print_every, time.time() - start_time)) running_loss = 0.0 start_time = time.time() train_history.append(total_train_loss / len(train_loader)) total_val_loss = 0 # Do a pass on the validation set # We don't need to compute gradient, # we save memory and computation using th.no_grad() with th.no_grad(): for inputs, labels in val_loader: # Move tensors to correct device inputs, labels = inputs.to(device), labels.to(device) # Forward pass predictions = net(inputs) val_loss = criterion(predictions, labels) total_val_loss += val_loss.item() val_history.append(total_val_loss / len(val_loader)) # Save model that performs best on validation set if total_val_loss < best_error: best_error = total_val_loss th.save(net.state_dict(), best_model_path) print("Validation loss = {:.2f}".format(total_val_loss / len(val_loader))) print("Training Finished, took {:.2f}s".format(time.time() - training_start_time)) # Load best model net.load_state_dict(th.load(best_model_path)) return train_history, val_history # + deletable=true editable=true id="cJX2anB5cay_" colab={"base_uri": "https://localhost:8080/"} outputId="37170dbf-e26c-4352-ac61-de30499dd34a" net = SimpleConvolutionalNetwork() train_history, 
val_history = train(net, batch_size=32, n_epochs=10, learning_rate=0.001) # + [markdown] deletable=true editable=true id="UkVKNPtccazC" # Now, let's look at the evolution of the losses # + deletable=true editable=true id="4CUQt-HJcazF" colab={"base_uri": "https://localhost:8080/", "height": 404} outputId="6238707a-cb8b-4222-d509-7c84640c3f12" plot_losses(train_history, val_history) # + [markdown] deletable=true editable=true id="O90WcUTwt2GU" # ### 5. Test the network on the test data # # # We have trained the network for 2 passes over the training dataset. # But we need to check if the network has learnt anything at all. # # We will check this by predicting the class label that the neural network # outputs, and checking it against the ground-truth. If the prediction is # correct, we add the sample to the list of correct predictions. # # Okay, first step. Let us display an image from the test set to get familiar. # # # + deletable=true editable=true id="V4vljwBlt2GX" colab={"base_uri": "https://localhost:8080/", "height": 189} outputId="f1c808f4-27b9-4274-975c-1bb960c14fed" try: images, labels = next(iter(test_loader)) except EOFError: pass # print images imshow(torchvision.utils.make_grid(images)) print("Ground truth:\n") print(' '.join('{:>10}'.format(classes[labels[j]]) for j in range(test_batch_size))) # + [markdown] deletable=true editable=true id="KpmaQT4Zt2Gn" # Okay, now let us see what the neural network thinks these examples above are: # # # + deletable=true editable=true id="utIfocFrt2Gs" colab={"base_uri": "https://localhost:8080/"} outputId="11d37608-ef31-4dc4-fe78-ad2dd7580ad2" outputs = net(images.to(device)) print(outputs.size()) # + [markdown] deletable=true editable=true id="6mU42O0Gt2G2" # The outputs are energies for the 10 classes. # The higher the energy for a class, the more the network # thinks that the image is from that particular class. # So, let's get the index of the highest energy: # # # + deletable=true editable=true id="IWTWHHs9t2G5" colab={"base_uri": "https://localhost:8080/", "height": 189} outputId="ea5cffa8-417c-497a-e08d-7a068d26575b" _, predicted = torch.max(outputs, 1) print("Predicted:\n") imshow(torchvision.utils.make_grid(images)) print(' '.join('{:>10}'.format(classes[predicted[j]]) for j in range(test_batch_size))) # + [markdown] deletable=true editable=true id="AUpCEAOTt2HK" # The results seem pretty good. # # Let us look at how the network performs on the whole test set. 
# # # + deletable=true editable=true id="LI6JtYwTt2HM" colab={"base_uri": "https://localhost:8080/"} outputId="bef99e0a-c24b-4b0f-9004-43100e3dc9fe" def dataset_accuracy(net, data_loader, name=""): net = net.to(device) correct = 0 total = 0 for images, labels in data_loader: images, labels = images.to(device), labels.to(device) outputs = net(images) _, predicted = torch.max(outputs, 1) total += labels.size(0) correct += (predicted == labels).sum() accuracy = 100 * float(correct) / total print('Accuracy of the network on the {} {} images: {:.2f} %'.format(total, name, accuracy)) def train_set_accuracy(net): dataset_accuracy(net, train_loader, "train") def val_set_accuracy(net): dataset_accuracy(net, val_loader, "validation") def test_set_accuracy(net): dataset_accuracy(net, test_loader, "test") def compute_accuracy(net): train_set_accuracy(net) val_set_accuracy(net) test_set_accuracy(net) print("Computing accuracy...") compute_accuracy(net) # + [markdown] deletable=true editable=true id="iGGyra-4t2HW" # That initial 59.78 % on the test set of images looks waaay better than chance, which is 10% accuracy (randomly picking # a class out of 10 classes). # Seems like the network learnt something. # As a baseline, a linear model achieves around 30% accuracy. # # What are the classes that performed well, and the classes that did not perform well? # # # + deletable=true editable=true id="rkim9_INt2HY" colab={"base_uri": "https://localhost:8080/"} outputId="ff26987d-4f29-417e-8dd4-2884299ee99a" def accuracy_per_class(net): net = net.to(device) n_classes = 10 # (real, predicted) confusion_matrix = np.zeros((n_classes, n_classes), dtype=np.int64) for images, labels in test_loader: images, labels = images, labels = images.to(device), labels.to(device) outputs = net(images) _, predicted = torch.max(outputs.data, 1) for i in range(test_batch_size): confusion_matrix[labels[i], predicted[i]] += 1 label = labels[i] print("{:<10} {:^10}".format("Class", "Accuracy (%)")) for i in range(n_classes): class_total = confusion_matrix[i, :].sum() class_correct = confusion_matrix[i, i] percentage_correct = 100.0 * float(class_correct) / class_total print('{:<10} {:^10.2f}'.format(classes[i], percentage_correct)) return confusion_matrix confusion_matrix = accuracy_per_class(net) # + [markdown] deletable=true editable=true id="AZKLymOacazg" # ### Confusion Matrix # + [markdown] deletable=true editable=true id="ekJHz3vpcazg" # Let's look at what type of error our networks makes... # It seems that our network is pretty good at classifying ships, # but has some difficulties to differentiate cats and dogs. # Also, it classifies a lot of trucks as cars. # + deletable=true editable=true id="1aYMqD1Ocazi" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="69322c84-c555-4649-e34e-44ff918ebb9c" # Plot normalized confusion matrix plot_confusion_matrix(confusion_matrix, classes, normalize=True, title='Normalized confusion matrix') # Plot non-normalized confusion matrix plot_confusion_matrix(confusion_matrix, classes, title='Confusion matrix, without normalization') # + [markdown] deletable=true editable=true id="MVv-mV8Pt2Hs" # # III. Exploring CNN Architectures # # Now, it is your turn to build a Convolutional Neural Network. The goal of this section is to explore different CNN architectures and set hyperparameters in order to obtain the best accuracy on the **test** set! # # The network that you have to tweak is called **MyConvolutionalNetwork**. 
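# As you experiment with different architectures, a quick way to compare them
# is to count their trainable parameters (a small helper added here for
# convenience; it is not part of the original tutorial):

# +
def count_parameters(model):
    """Number of trainable parameters of a PyTorch model."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)


print("LinearClassifier parameters:          ", count_parameters(LinearClassifier()))
print("SimpleConvolutionalNetwork parameters:", count_parameters(SimpleConvolutionalNetwork()))
# count_parameters(MyConvolutionalNetwork())  # once it is defined below
# -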
# # You can start changing the batch_size, number of epochs and then try adding more convolutional layers. # + [markdown] deletable=true editable=true id="h1blK9eicazo" # ### PyTorch functions to build the network # - **nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0):** Convolution layer. You can read the full documentation [here](http://pytorch.org/docs/master/nn.html#conv2d) # # - **nn.MaxPool2d(kernel_size, stride=None, padding=0):** Max pooling layer. You can read the full documentation [here](http://pytorch.org/docs/master/nn.html#maxpool2d) # # - **F.relu(Z1):** computes the element-wise ReLU of Z1 (which can be of any shape). You can read the full documentation [here.](http://pytorch.org/docs/master/nn.html#torch.nn.ReLU) # # - **x.view(new_shape)**: Returns a new tensor with the same data but different size. It is the equivalent of numpy function *reshape* (Gives a new shape to an array without changing its data.). You can read the full documentation [here.](http://pytorch.org/docs/master/tensors.html#torch.Tensor.view) # # - **nn.Linear(in_features, out_features):** Applies a linear transformation to the incoming data: $y = Ax + b$, it is also called a fully connected (fc) layer. You can read the full documentation [here.](http://pytorch.org/docs/master/nn.html#linear-layers) # + [markdown] deletable=true editable=true id="a8-lKBaacazp" # **Convolution Formulas**: # # The formulas relating the output shape $(C_2, H_2, W_2)$ of the convolution to the input shape $(C_1, H_1, W_1)$ are: # # # $$ H_2 = \lfloor \frac{H_1 - kernel\_size + 2 \times padding}{stride} \rfloor +1 $$ # # $$ W_2 = \lfloor \frac{W_1 - kernel\_size + 2 \times padding}{stride} \rfloor +1 $$ # # $$ C_2 = \text{number of filters used in the convolution}$$ # # NOTE: $C_2 = C_1$ in the case of max pooling # # where: # - $H_2$: height of the output volume # - $W_2$: width of the output volume # - $C_1$: in_channels, number of channels in the input volume # - $C_2$: out_channels # + deletable=true editable=true id="acppf3nkcazr" def get_output_size(in_size, kernel_size, stride=1, padding=0): """ Get the output size given all the parameters of the convolution :param in_size: (int) input size :param kernel_size: (int) :param stride: (int) :param paddind: (int) :return: (int) """ return int((in_size - kernel_size + 2 * padding) / stride) + 1 # + [markdown] deletable=true editable=true id="SEsbZoTOcazu" # #### Example of use of helper method get_output_size() # # Let's assume you have an *input volume of size 3x32x32* (where 3 is the number of channels) # and you use a 2D convolution with the following parameters: # # ```python # conv1 = nn.Conv2d(3, 18, kernel_size=7, stride=2, padding=1) # ``` # then, the size of the output volume is 18x?x? (because we have 18 filters) where ? is given by the convolution formulas (see above). 
# # **get_output_size()** function allows to compute that size: # # ``` # out_size = get_output_size(in_size=32, kernel_size=7, stride=2, padding=1) # print(out_size) # prints 14 # ``` # # That is to say, *the output volume is 18x14x14* # + deletable=true editable=true id="2JFQ1wgKcazv" colab={"base_uri": "https://localhost:8080/"} outputId="7dcceb6a-ba7f-4e70-b0b7-3121f3222057" out_size = get_output_size(in_size=32, kernel_size=3, stride=1, padding=1) print(out_size) # + [markdown] deletable=true editable=true id="wviV5iQIcazz" # Below is the neural network you have to edit: # + deletable=true editable=true id="fnKUPUDTcaz1" class MyConvolutionalNetwork(nn.Module): def __init__(self): super(MyConvolutionalNetwork, self).__init__() self.conv1 = nn.Conv2d(3, 18, kernel_size=3, stride=1, padding=1) self.pool = nn.MaxPool2d(kernel_size=2, stride=2, padding=0) #### START CODE: ADD NEW LAYERS #### # (do not forget to update `flattened_size`: # the input size of the first fully connected layer self.fc1) # self.conv2 = ... # Size of the output of the last convolution: self.flattened_size = 18 * 16 * 16 ### END CODE ### self.fc1 = nn.Linear(self.flattened_size, 64) self.fc2 = nn.Linear(64, 10) def forward(self, x): """ Forward pass, x shape is (batch_size, 3, 32, 32) (color channel first) in the comments, we omit the batch_size in the shape """ # shape : 3x32x32 -> 18x32x32 x = F.relu(self.conv1(x)) # 18x32x32 -> 18x16x16 x = self.pool(x) #### START CODE: USE YOUR NEW LAYERS HERE #### # x = ... #### END CODE #### # Check the output size output_size = np.prod(x.size()[1:]) assert output_size == self.flattened_size,\ "self.flattened_size is invalid {} != {}".format(output_size, self.flattened_size) # 18x16x16 -> 4608 x = x.view(-1, self.flattened_size) # 4608 -> 64 x = F.relu(self.fc1(x)) # 64 -> 10 x = self.fc2(x) return x # + deletable=true editable=true id="ruLWyTSocaz5" colab={"base_uri": "https://localhost:8080/"} outputId="36855794-37c4-4d96-c90c-5f8e87060589" net = MyConvolutionalNetwork() train_history, val_history = train(net, batch_size=32, n_epochs=10, learning_rate=0.001) # + [markdown] deletable=true editable=true id="u7cgVbkDcaz9" # ### Losses Plot # + deletable=true editable=true id="XtXu67qbcaz-" colab={"base_uri": "https://localhost:8080/", "height": 404} outputId="e0a2c885-618a-4826-e5f7-fc98d2f55228" plot_losses(train_history, val_history) # + [markdown] deletable=true editable=true id="TuGKgAMWcaz_" # ### Accuracy of the trained model # + deletable=true editable=true id="TWowqQhYca0B" colab={"base_uri": "https://localhost:8080/"} outputId="eeb7ade8-f016-4a86-d27c-e4fca1ead388" compute_accuracy(net) # + [markdown] deletable=true editable=true id="st9f_4opca0F" # **Baseline: Simple Convolutional Neural Network (form part II)** # # <table> # <tr> # <td>Accuracy on the test set:</td> # <td>59.98 %</td> # </tr> # </table> # + deletable=true editable=true id="OKvmb4p-ca0I" colab={"base_uri": "https://localhost:8080/"} outputId="cc38f6b9-7572-42f7-a41f-c9f1e5ad2a87" confusion_matrix = accuracy_per_class(net) # + deletable=true editable=true id="ih5Pj0WBca0L" colab={"base_uri": "https://localhost:8080/", "height": 585} outputId="fb058fab-9e5d-4208-aea0-beaa7731419a" plot_confusion_matrix(confusion_matrix, classes, title='Confusion matrix, without normalization') # + [markdown] deletable=true editable=true id="7xzIkZqit2IA" # ### Going further # # - [Coursera Course on CNN](https://www.coursera.org/learn/convolutional-neural-networks) # - [Stanford 
Course](http://cs231n.stanford.edu/syllabus.html) # - [PyTorch Tutorial](http://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html) # - [How backpropagation works](http://michaelnielsen.org/blog/how-the-backpropagation-algorithm-works/) (<NAME>) # # If you feel like this was too easy peasy: # # -Investigate further [optimization methods](https://pytorch.org/docs/stable/optim.html) beyond SGD, and Adam and their parameters. # # -Look at ways to improve your network using regularization techniques # # -Look at ways to visualize network activations for model interpretability # # -Use transfer learning, in order to use torchvision with pretrained=True with some pretrained models # # # Acknowledgements: # This tutorial is based on the [original PyTorch tutorial](https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html) and was adapted by [<NAME>](http://araffin.github.io/) for the ROB313 course at ENSTA Paris. Thanks to <NAME> for feedback! # # + [markdown] id="XEFW6M6jZtWk" # ### More documentation/ questions to explore about Google Colab: # # -How to connect your Google Drive with Google Colab? # # -How to import a new notebook and save it to your GDrive? # # -How to use files which are contained in your GDrive? # # Some tips [here](https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d) # # # # # ## Extras to read later # ### Visualizing Convolution parameters: # [A guide to convolution arithmetic for deep learning](https://github.com/vdumoulin/conv_arithmetic/blob/master/README.md) # by <NAME>, <NAME> # # # ### Documentation of autograd and Function: # [Autograd](http://pytorch.org/docs/autograd) # # + id="FoJh0SAfb7H8"
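# The "Going further" list above suggests trying transfer learning with
# torchvision's pretrained models. A minimal sketch of that idea (not part of
# the original tutorial; it assumes a torchvision version that still accepts
# `pretrained=True` and that the input images are resized to 224x224, e.g. with
# `transforms.Resize(224)`):

# +
import torch.nn as nn
import torchvision.models as models

resnet = models.resnet18(pretrained=True)

# Freeze the pretrained feature extractor
for param in resnet.parameters():
    param.requires_grad = False

# Replace the final fully connected layer for the 10 CIFAR-10 classes
resnet.fc = nn.Linear(resnet.fc.in_features, 10)

# Only the new layer would then be optimized, e.g.:
# optimizer = optim.Adam(resnet.fc.parameters(), lr=0.001)
# -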
PyTorch_cifar10_tutorial_ROB313_2020.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # This notebook was prepared by [<NAME>](https://github.com/donnemartin). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). # # Challenge Notebook # ## Problem: Determine whether there is a path between two nodes in a graph. # # * [Constraints](#Constraints) # * [Test Cases](#Test-Cases) # * [Algorithm](#Algorithm) # * [Code](#Code) # * [Unit Test](#Unit-Test) # * [Solution Notebook](#Solution-Notebook) # ## Constraints # # * Is the graph directed? # * Yes # * Can we assume we already have Graph and Node classes? # * Yes # * Can we assume this is a connected graph? # * Yes # * Can we assume the inputs are valid? # * Yes # * Can we assume this fits memory? # * Yes # ## Test Cases # # Input: # * `add_edge(source, destination, weight)` # # ``` # graph.add_edge(0, 1, 5) # graph.add_edge(0, 4, 3) # graph.add_edge(0, 5, 2) # graph.add_edge(1, 3, 5) # graph.add_edge(1, 4, 4) # graph.add_edge(2, 1, 6) # graph.add_edge(3, 2, 7) # graph.add_edge(3, 4, 8) # ``` # # Result: # * search_path(start=0, end=2) -> True # * search_path(start=0, end=0) -> True # * search_path(start=4, end=5) -> False # ## Algorithm # # Refer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/graphs_trees/graph_path_exists/path_exists_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. # ## Code # %run ../graph/graph.py # %load ../graph/graph.py class GraphPathExists(Graph): def path_exists(self, start, end): # TODO: Implement me pass # ## Unit Test # **The following unit test is expected to fail until you solve the challenge.** # + # # %load test_path_exists.py import unittest class TestPathExists(unittest.TestCase): def test_path_exists(self): nodes = [] graph = GraphPathExists() for id in range(0, 6): nodes.append(graph.add_node(id)) graph.add_edge(0, 1, 5) graph.add_edge(0, 4, 3) graph.add_edge(0, 5, 2) graph.add_edge(1, 3, 5) graph.add_edge(1, 4, 4) graph.add_edge(2, 1, 6) graph.add_edge(3, 2, 7) graph.add_edge(3, 4, 8) self.assertEqual(graph.path_exists(nodes[0], nodes[2]), True) self.assertEqual(graph.path_exists(nodes[0], nodes[0]), True) self.assertEqual(graph.path_exists(nodes[4], nodes[5]), False) print('Success: test_path_exists') def main(): test = TestPathExists() test.test_path_exists() if __name__ == '__main__': main() # - # ## Solution Notebook # # Review the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/graphs_trees/graph_path_exists/path_exists_solution.ipynb) for a discussion on algorithms and code solutions.
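# For readers who just want to see the underlying idea before opening the
# solution notebook, here is a stand-alone breadth-first search sketch on a
# plain adjacency dict (an added example; it is not the official solution and
# does not use the challenge's Graph/Node classes):

# +
from collections import deque


def bfs_path_exists(adjacency, start, end):
    """Return True if `end` is reachable from `start` in a directed graph."""
    visited = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == end:
            return True
        for neighbor in adjacency.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
    return False


adjacency = {0: [1, 4, 5], 1: [3, 4], 2: [1], 3: [2, 4]}
print(bfs_path_exists(adjacency, 0, 2))  # True
print(bfs_path_exists(adjacency, 0, 0))  # True
print(bfs_path_exists(adjacency, 4, 5))  # False
# -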
graphs_trees/graph_path_exists/path_exists_challenge.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Preparing a state with antiferromagnetic order in the Ising model # # This notebook illustrates how to use Pulser to build a sequence for studying an antiferromagnetic state in an Ising-like model. It is based on [10.1103/PhysRevX.8.021070](https://journals.aps.org/prx/abstract/10.1103/PhysRevX.8.021070), where arrays of Rydberg atoms were programmed and whose correlations were studied. # # We begin by importing some basic modules: # + import numpy as np import matplotlib.pyplot as plt import qutip from pulser import Pulse, Sequence, Register from pulser.simulation import Simulation from pulser.waveforms import RampWaveform from pulser.devices import Chadoq2 # - # ## Waveforms # # We are realizing the following program # # <center> # <img src="attachment:AF_Ising_program.png" alt="AF Pulse Sequence" width="300"> # </center> # The pulse and the register are defined by the following parameters: # + # Parameters in rad/µs and ns Omega_max = 2.3 * 2*np.pi U = Omega_max / 2.3 delta_0 = -6 * U delta_f = 2 * U t_rise = 252 t_fall = 500 t_sweep = (delta_f - delta_0)/(2 * np.pi * 10) * 1000 R_interatomic = Chadoq2.rydberg_blockade_radius(U) N_side = 3 reg = Register.square(N_side, R_interatomic, prefix='q') print(f'Interatomic Radius is: {R_interatomic}µm.') reg.draw() # - # ## Creating my sequence # We compose our pulse with the following objects from Pulser: rise = Pulse.ConstantDetuning(RampWaveform(t_rise, 0., Omega_max), delta_0, 0.) sweep = Pulse.ConstantAmplitude(Omega_max, RampWaveform(t_sweep, delta_0, delta_f), 0.) fall = Pulse.ConstantDetuning(RampWaveform(t_fall, Omega_max, 0.), delta_f, 0.) # + seq = Sequence(reg, Chadoq2) seq.declare_channel('ising', 'rydberg_global') seq.add(rise, 'ising') seq.add(sweep, 'ising') seq.add(fall, 'ising') seq.draw() # - # ## Phase Diagram # The pulse sequence travels though the following path in the phase diagram of the system (the shaded area represents the antiferromagnetic phase): # + delta = [] omega = [] for x in seq._schedule['ising']: if isinstance(x.type,Pulse): omega += list(x.type.amplitude.samples / U) delta += list(x.type.detuning.samples / U) fig, ax = plt.subplots() ax.grid(True, which='both') ax.set_ylabel(r"$\hbar\delta(t)/U$", fontsize=16) ax.set_xlabel(r"$\hbar\Omega(t)/U$", fontsize=16) ax.set_xlim(0, 3) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') y = np.arange(0.0, 6, 0.01) x = 1.522 * (1 - 0.25 * (y - 2)**2) ax.fill_between(x, y, alpha=0.4) ax.plot(omega,delta, 'red', lw=2) plt.show() # - # ## Simulation: Spin-Spin Correlation Function # We shall now evaluate the quality of the obtained state by calculating the *spin-spin correlation function*, defined as: # # # $$g^c(k,l)= \frac{1}{N_{k,l}}\sum_{(i,j) = (kR,lR)} \left[ \langle n_i n_j \rangle - \langle n_i \rangle \langle n_j \rangle \right]$$ # # where the $c$ indicates that we are calculating the *connected* part, and where the sum is over all pairs $(i,j)$ whose distance is ${\bf r}_i - {\bf r}_j = (k R,l R)$ in the atomic array coordinate (both $k$ and $l$ are positive or negative integers within the size of the array). 
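# As a small concrete illustration of the normalisation $N_{k,l}$ in the
# definition above (an added sketch, not part of the original notebook), we can
# count, for the 3x3 register defined earlier, how many atom pairs $(i,j)$
# realize each displacement $(k,l)$ in units of the interatomic distance $R$:

# +
import numpy as np
from collections import Counter

coords = np.array(list(reg.qubits.values()))
pair_counts = Counter()
for ri in coords:
    for rj in coords:
        k, l = [int(v) for v in np.round((ri - rj) / R_interatomic)]
        pair_counts[(k, l)] += 1

print(sorted(pair_counts.items()))
# -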
# # We run a simulation of the sequence: simul = Simulation(seq, sampling_rate=0.02) results = simul.run(progress_bar=True) # Sample from final state using `sample_final_state()` method: # + count = results.sample_final_state() most_freq = {k:v for k,v in count.items() if v>10} plt.bar(list(most_freq.keys()), list(most_freq.values())) plt.xticks(rotation='vertical') plt.show() # - # The observable to measure will be the occupation operator $|r\rangle \langle r|_i$ on each site $i$ of the register, where the Rydberg state $|r\rangle$ represents the excited state. def occupation(j,N): up = qutip.basis(2,0) prod = [qutip.qeye(2) for _ in range(N)] prod[j] = up * up.dag() return qutip.tensor(prod) occup_list = [occupation(j, N_side*N_side) for j in range(N_side*N_side)] # We define a function that returns all couples $(i,j)$ for a given $(k,l)$: def get_corr_pairs(k, l, register, R_interatomic): corr_pairs = [] for i, qi in enumerate(register.qubits): for j, qj in enumerate(register.qubits): r_ij = register.qubits[qi]-register.qubits[qj] distance = np.linalg.norm(r_ij - R_interatomic*np.array([k, l])) if distance < 1: corr_pairs.append([i, j]) return corr_pairs # The correlation function is calculated with the following routines: # + def get_corr_function(k, l, reg, R_interatomic, state): N_qubits = len(reg.qubits) corr_pairs = get_corr_pairs(k, l, reg, R_interatomic) operators = [occupation(j, N_qubits) for j in range(N_qubits)] covariance = 0 for qi, qj in corr_pairs: covariance += qutip.expect(operators[qi]*operators[qj], state) covariance -= qutip.expect(operators[qi], state)*qutip.expect(operators[qj], state) return covariance/len(corr_pairs) def get_full_corr_function(reg, state): N_qubits = len(reg.qubits) correlation_function = {} N_side = int(np.sqrt(N_qubits)) for k in range(-N_side+1, N_side): for l in range(-N_side+1, N_side): correlation_function[(k, l)] = get_corr_function(k, l, reg, R_interatomic, state) return correlation_function # - # With these functions, we operate on the final state of evolution obtained by our simulation. 
final = results.states[-1] correlation_function = get_full_corr_function(reg, final) expected_corr_function = {} xi = 1 # Estimated Correlation Length for k in range(-N_side+1,N_side): for l in range(-N_side+1,N_side): kk = np.abs(k) ll = np.abs(l) expected_corr_function[(k, l)] = (-1)**(kk + ll) * np.exp(-np.sqrt(k**2 + l**2)/xi) # + A = 4*np.reshape(list(correlation_function.values()), (2*N_side-1, 2*N_side-1)) A = A/np.max(A) B = np.reshape(list(expected_corr_function.values()), (2*N_side-1, 2*N_side-1)) B = B*np.max(A) for i, M in enumerate([A.copy(),B.copy()]): M[N_side-1, N_side-1] = None plt.figure(figsize=(3.5,3.5)) plt.imshow(M, cmap='coolwarm', vmin=-.6, vmax=.6) plt.xticks(range(len(M)), [f'{x}' for x in range(-N_side + 1, N_side)]) plt.xlabel(r'$\mathscr{k}$', fontsize=22) plt.yticks(range(len(M)), [f'{-y}' for y in range(-N_side + 1, N_side)]) plt.ylabel(r'$\mathscr{l}$', rotation=0, fontsize=22, labelpad=10) plt.colorbar(fraction=0.047, pad=0.02) if i == 0 :plt.title(r'$4\times\.g^{(2)}(\mathscr{k},\mathscr{l})$ after simulation', fontsize=14) if i == 1 :plt.title(r'Exponential $g^{(2)}(\mathscr{k},\mathscr{l})$ expected', fontsize=14) plt.show() # - # Note that the correlation function would follow an exponential decay (modulo finite-size effects), which is best observed at larger system sizes (see for example https://arxiv.org/pdf/2012.12268.pdf) np.around(A, 4) np.around(B, 4) # ### Néel Structure Factor # One way to explore the $\Omega = 0$ line on the phase diagram is to calculate the *Néel Structure Factor*, $S_{\text{Néel}}=4 \times \sum_{(k,l) \neq (0,0)} (-1)^{|k|+|l|} g^c(k,l)$, which should be highest when the state is more antiferromagnetic. We will sweep over different values of $\delta_{\text{final}}$ to show that the region $0<\hbar \delta_{\text{final}}/U<4$ is indeed where the antiferromagnetic phase takes place. def get_neel_structure_factor(reg, R_interatomic, state): N_qubits = len(reg.qubits) N_side = int(np.sqrt(N_qubits)) st_fac = 0 for k in range(-N_side+1, N_side): for l in range(-N_side+1, N_side): kk = np.abs(k) ll = np.abs(l) if not (k == 0 and l == 0): st_fac += 4 * (-1)**(kk + ll) * get_corr_function(k, l, reg, R_interatomic, state) return st_fac def calculate_neel(det, N, Omega_max = 2.3 * 2 * np.pi): #Setup: U = Omega_max / 2.3 delta_0 = -6 * U delta_f = det * U t_rise = 252 t_fall = 500 t_sweep = int((delta_f - delta_0)/(2 * np.pi * 10) * 1000) t_sweep += 4 - t_sweep % 4 # To be a multiple of the clock period of Chadoq2 (4ns) R_interatomic = Chadoq2.rydberg_blockade_radius(U) reg = Register.rectangle(N, N, R_interatomic) #Pulse Sequence rise = Pulse.ConstantDetuning(RampWaveform(t_rise, 0., Omega_max), delta_0, 0.) sweep = Pulse.ConstantAmplitude(Omega_max, RampWaveform(t_sweep, delta_0, delta_f), 0.) fall = Pulse.ConstantDetuning(RampWaveform(t_fall, Omega_max, 0.), delta_f, 0.) 
seq = Sequence(reg, Chadoq2) seq.declare_channel('ising', 'rydberg_global') seq.add(rise, 'ising') seq.add(sweep, 'ising') seq.add(fall, 'ising') simul = Simulation(seq, sampling_rate=0.02) results = simul.run() final = results.states[-1] return get_neel_structure_factor(reg, R_interatomic, final) # + N_side = 3 occup_list = [occupation(j, N_side*N_side) for j in range(N_side*N_side)] detunings = np.linspace(-1, 5, 20) results=[] for det in detunings: print(f'Detuning = {np.round(det,3)} x 2π Mhz.') results.append(calculate_neel(det, N_side)) plt.xlabel(r'$\hbar\delta_{final}/U$') plt.ylabel(r'Néel Structure Factor $S_{Neel}$') plt.plot(detunings, results, 'o', ls='solid') plt.show() max_index = results.index(max(results)) print(f'Max S_Neel {np.round(max(results),2)} at detuning = {np.round(detunings[max_index],2)} x 2π Mhz.')
tutorials/applications/Preparing state with antiferromagnetic order in the Ising model.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Machine Learning: Intermediate report
#
# *Write your name and your student ID below.*
#
# + Your name (Your ID)

# Prepare an environment for running Python code in a Jupyter notebook. The easiest way is to use [Google Colaboratory](https://colab.research.google.com/).
#
# Write code for the following three problems, and submit the notebook file (`.ipynb`) on OCW. *We do not accept reports in other formats (e.g., Word, PDF, HTML)*. Write your code in the specified cell in the notebook. One can add more cells if necessary.
#
# These are the links to the sample code used in the lecture:
#
# + [Binary classification](https://github.com/chokkan/deeplearningclass/blob/master/mlp_binary.ipynb)
# + [MNIST](https://github.com/chokkan/deeplearningclass/blob/master/mnist.ipynb)
#
# *Please accept that your report may be shared among students who take this course.*

# ## 1. Multi-class classification on MNIST
#
# Train a model on the training set of MNIST, and report the performance of the model on the test set with the following evaluation measures:
#
# + Accuracy
# + Precision, recall, and F1 scores for each category (digit)
# + Macro-averaged precision, recall, and F1 scores (i.e., the averages of the above measures over all categories)
#
# One can use the same code shown in the lecture. Write your code here and show the output.

# ## 2. Confusion matrix
#
# Show a confusion matrix of the predictions of the model on the test set. This is an example of a confusion matrix.
#
# ![example](example-confusion-matrix.png)
#
# Write your code here and show the confusion matrix.

# ## 3. Top-3 easy and confusing examples
#
# Show the top three easy and the top three confusing images, i.e., images for which the model predicted a digit with strong confidence. More specifically, let $y_n$ and $\hat{y}_n$ be the true and predicted digits, respectively, of the image $x_n$. We want to find three images with high $P(\hat{y}_n | x_n)$ for which $y_n = \hat{y}_n$ (easy examples) and three for which $y_n \neq \hat{y}_n$ (confusing examples).
#
# Please show $y_n$, $P(y_n | x_n)$, $\hat{y}_n$, and $P(\hat{y}_n | x_n)$. This is an example of an output for an image (you need this kind of output for the top-three easy and top-three confusing images).
#
# ![example](example-confusing-sample.png)
#
# Write your code here and show the output.
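# *Added, non-official hint (a sketch, assuming you already have an array `y_test` of true digits and an array `y_pred` of predicted digits for the test set): scikit-learn can compute most of the measures requested in problems 1 and 2 directly.*

# +
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix

print(accuracy_score(y_test, y_pred))                   # accuracy
print(classification_report(y_test, y_pred, digits=3))  # per-class and macro-averaged P/R/F1
print(confusion_matrix(y_test, y_pred))                 # confusion matrix for problem 2
# -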
assignment/(YourID)_report2.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Tree LSTM modeling for semantic relatedness
#
# Just five years ago, many of the most successful models for doing supervised learning with text
# ignored word order altogether.
# Some of the most successful models represented documents or sentences
# with the order-invariant *bag-of-words* representation.
# Anyone thinking hard should probably have realized that these models couldn't dominate forever.
# That's because we all know that word order actually does matter.
# Bag-of-words models, which ignored word order, left some information on the table.
#
# The recurrent neural networks that
# [we introduced in chapter 5](../chapter05_recurrent-neural-networks/simple-rnn.ipynb)
# model word order by passing over the sequence of words in order,
# updating the model's representation of the sentence after each word.
# And, with LSTM recurrent cells and training on GPUs,
# even a straightforward LSTM far outpaces classical approaches
# on a number of tasks, including language modeling,
# named entity recognition and more.
#
# But while those models are impressive, they still may be leaving some knowledge on the table.
# To begin with, we know a priori that sentences have a grammatical structure.
# And we already have some tools that are very good at recovering parse trees that reflect the grammatical structure of sentences.
# While it may be possible for an LSTM to learn this information implicitly,
# it's often a good idea to build known information into the structure of a neural network.
# Take for example convolutional neural networks.
# They build in the prior knowledge that low-level features should be translation-invariant.
# It's possible to come up with a fully connected net that does the same thing,
# but it would require many more nodes and would be much more susceptible to overfitting.
# In this case, we would like to build the grammatical tree structure of the sentences
# into the architecture of an LSTM recurrent neural network.
# This tutorial walks through *tree LSTMs*,
# an approach that does precisely that.
# The models here are based on the [tree-structured LSTM](https://nlp.stanford.edu/pubs/tai-socher-manning-acl2015.pdf)
# by <NAME>, <NAME>, and <NAME>.
# Our implementation borrows from [this PyTorch example](https://github.com/dasguptar/treelstm.pytorch).
#
#
# ### Sentences involving Compositional Knowledge
# This tutorial walks through training a child-sum Tree LSTM model for analyzing semantic relatedness of sentence pairs given their dependency parse trees.
#
# ### Preliminaries
# Before getting going, you'll probably want to note a couple of preliminary details:
#
# * Use of GPUs is preferred if one wants to run the complete training to match the state-of-the-art results (a small context-selection sketch follows this list).
# * To show a progress meter, one should install the `tqdm` library ("progress" in Arabic) with `pip install tqdm`. One should also install the `requests` HTTP library with `pip install requests`.
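# A small convenience sketch for the first point (an assumption: your MXNet version provides `mx.context.num_gpus()`): pick a GPU context when one is available and fall back to the CPU otherwise.

# +
import mxnet as mx

ctx = mx.gpu(0) if mx.context.num_gpus() > 0 else mx.cpu()
print(ctx)
# -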
# # import mxnet as mx from mxnet.gluon import Block, nn from mxnet.gluon.parameter import Parameter class Tree(object): def __init__(self, idx): self.children = [] self.idx = idx def __repr__(self): if self.children: return '{0}: {1}'.format(self.idx, str(self.children)) else: return str(self.idx) tree = Tree(0) tree.children.append(Tree(1)) tree.children.append(Tree(2)) tree.children.append(Tree(3)) tree.children[1].children.append(Tree(4)) print(tree) # ### Model # The model is based on [child-sum tree LSTM](https://nlp.stanford.edu/pubs/tai-socher-manning-acl2015.pdf). For each sentence, the tree LSTM model extracts information following the dependency parse tree structure, and produces the sentence embedding at the root of each tree. This embedding can be used to predict semantic similarity. # # #### Child-sum Tree LSTM class ChildSumLSTMCell(Block): def __init__(self, hidden_size, i2h_weight_initializer=None, hs2h_weight_initializer=None, hc2h_weight_initializer=None, i2h_bias_initializer='zeros', hs2h_bias_initializer='zeros', hc2h_bias_initializer='zeros', input_size=0, prefix=None, params=None): super(ChildSumLSTMCell, self).__init__(prefix=prefix, params=params) with self.name_scope(): self._hidden_size = hidden_size self._input_size = input_size self.i2h_weight = self.params.get('i2h_weight', shape=(4*hidden_size, input_size), init=i2h_weight_initializer) self.hs2h_weight = self.params.get('hs2h_weight', shape=(3*hidden_size, hidden_size), init=hs2h_weight_initializer) self.hc2h_weight = self.params.get('hc2h_weight', shape=(hidden_size, hidden_size), init=hc2h_weight_initializer) self.i2h_bias = self.params.get('i2h_bias', shape=(4*hidden_size,), init=i2h_bias_initializer) self.hs2h_bias = self.params.get('hs2h_bias', shape=(3*hidden_size,), init=hs2h_bias_initializer) self.hc2h_bias = self.params.get('hc2h_bias', shape=(hidden_size,), init=hc2h_bias_initializer) def forward(self, F, inputs, tree): children_outputs = [self.forward(F, inputs, child) for child in tree.children] if children_outputs: _, children_states = zip(*children_outputs) # unzip else: children_states = None with inputs.context as ctx: return self.node_forward(F, F.expand_dims(inputs[tree.idx], axis=0), children_states, self.i2h_weight.data(ctx), self.hs2h_weight.data(ctx), self.hc2h_weight.data(ctx), self.i2h_bias.data(ctx), self.hs2h_bias.data(ctx), self.hc2h_bias.data(ctx)) def node_forward(self, F, inputs, children_states, i2h_weight, hs2h_weight, hc2h_weight, i2h_bias, hs2h_bias, hc2h_bias): # comment notation: # N for batch size # C for hidden state dimensions # K for number of children. # FC for i, f, u, o gates (N, 4*C), from input to hidden i2h = F.FullyConnected(data=inputs, weight=i2h_weight, bias=i2h_bias, num_hidden=self._hidden_size*4) i2h_slices = F.split(i2h, num_outputs=4) # (N, C)*4 i2h_iuo = F.concat(*[i2h_slices[i] for i in [0, 2, 3]], dim=1) # (N, C*3) if children_states: # sum of children states, (N, C) hs = F.add_n(*[state[0] for state in children_states]) # concatenation of children hidden states, (N, K, C) hc = F.concat(*[F.expand_dims(state[0], axis=1) for state in children_states], dim=1) # concatenation of children cell states, (N, K, C) cs = F.concat(*[F.expand_dims(state[1], axis=1) for state in children_states], dim=1) # calculate activation for forget gate. 
addition in f_act is done with broadcast i2h_f_slice = i2h_slices[1] f_act = i2h_f_slice + hc2h_bias + F.dot(hc, hc2h_weight) # (N, K, C) forget_gates = F.Activation(f_act, act_type='sigmoid') # (N, K, C) else: # for leaf nodes, summation of children hidden states are zeros. hs = F.zeros_like(i2h_slices[0]) # FC for i, u, o gates, from summation of children states to hidden state hs2h_iuo = F.FullyConnected(data=hs, weight=hs2h_weight, bias=hs2h_bias, num_hidden=self._hidden_size*3) i2h_iuo = i2h_iuo + hs2h_iuo iuo_act_slices = F.SliceChannel(i2h_iuo, num_outputs=3) # (N, C)*3 i_act, u_act, o_act = iuo_act_slices[0], iuo_act_slices[1], iuo_act_slices[2] # (N, C) each # calculate gate outputs in_gate = F.Activation(i_act, act_type='sigmoid') in_transform = F.Activation(u_act, act_type='tanh') out_gate = F.Activation(o_act, act_type='sigmoid') # calculate cell state and hidden state next_c = in_gate * in_transform if children_states: next_c = F.sum(forget_gates * cs, axis=1) + next_c next_h = out_gate * F.Activation(next_c, act_type='tanh') return next_h, [next_h, next_c] # #### Similarity regression module # module for distance-angle similarity class Similarity(nn.Block): def __init__(self, sim_hidden_size, rnn_hidden_size, num_classes): super(Similarity, self).__init__() with self.name_scope(): self.wh = nn.Dense(sim_hidden_size, in_units=2*rnn_hidden_size) self.wp = nn.Dense(num_classes, in_units=sim_hidden_size) def forward(self, F, lvec, rvec): # lvec and rvec will be tree_lstm cell states at roots mult_dist = F.broadcast_mul(lvec, rvec) abs_dist = F.abs(F.add(lvec,-rvec)) vec_dist = F.concat(*[mult_dist, abs_dist],dim=1) out = F.log_softmax(self.wp(F.sigmoid(self.wh(vec_dist)))) return out # #### Final model # putting the whole model together class SimilarityTreeLSTM(nn.Block): def __init__(self, sim_hidden_size, rnn_hidden_size, embed_in_size, embed_dim, num_classes): super(SimilarityTreeLSTM, self).__init__() with self.name_scope(): self.embed = nn.Embedding(embed_in_size, embed_dim) self.childsumtreelstm = ChildSumLSTMCell(rnn_hidden_size, input_size=embed_dim) self.similarity = Similarity(sim_hidden_size, rnn_hidden_size, num_classes) def forward(self, F, l_inputs, r_inputs, l_tree, r_tree): l_inputs = self.embed(l_inputs) r_inputs = self.embed(r_inputs) # get cell states at roots lstate = self.childsumtreelstm(F, l_inputs, l_tree)[1][1] rstate = self.childsumtreelstm(F, r_inputs, r_tree)[1][1] output = self.similarity(F, lstate, rstate) return output # ### Dataset classes # #### Vocab # + import os import logging logging.basicConfig(level=logging.INFO) import numpy as np import random from tqdm import tqdm import mxnet as mx # class for vocabulary and the word embeddings class Vocab(object): # constants for special tokens: padding, unknown, and beginning/end of sentence. 
PAD, UNK, BOS, EOS = 0, 1, 2, 3 PAD_WORD, UNK_WORD, BOS_WORD, EOS_WORD = '<blank>', '<unk>', '<s>', '</s>' def __init__(self, filepaths=[], embedpath=None, include_unseen=False, lower=False): self.idx2tok = [] self.tok2idx = {} self.lower = lower self.include_unseen = include_unseen self.add(Vocab.PAD_WORD) self.add(Vocab.UNK_WORD) self.add(Vocab.BOS_WORD) self.add(Vocab.EOS_WORD) self.embed = None for filename in filepaths: logging.info('loading %s'%filename) with open(filename, 'r') as f: self.load_file(f) if embedpath is not None: logging.info('loading %s'%embedpath) with open(embedpath, 'r') as f: self.load_embedding(f, reset=set([Vocab.PAD_WORD, Vocab.UNK_WORD, Vocab.BOS_WORD, Vocab.EOS_WORD])) @property def size(self): return len(self.idx2tok) def get_index(self, key): return self.tok2idx.get(key.lower() if self.lower else key, Vocab.UNK) def get_token(self, idx): if idx < self.size: return self.idx2tok[idx] else: return Vocab.UNK_WORD def add(self, token): token = token.lower() if self.lower else token if token in self.tok2idx: idx = self.tok2idx[token] else: idx = len(self.idx2tok) self.idx2tok.append(token) self.tok2idx[token] = idx return idx def to_indices(self, tokens, add_bos=False, add_eos=False): vec = [BOS] if add_bos else [] vec += [self.get_index(token) for token in tokens] if add_eos: vec.append(EOS) return vec def to_tokens(self, indices, stop): tokens = [] for i in indices: tokens += [self.get_token(i)] if i == stop: break return tokens def load_file(self, f): for line in f: tokens = line.rstrip('\n').split() for token in tokens: self.add(token) def load_embedding(self, f, reset=[]): vectors = {} for line in tqdm(f.readlines(), desc='Loading embeddings'): tokens = line.rstrip('\n').split(' ') word = tokens[0].lower() if self.lower else tokens[0] if self.include_unseen: self.add(word) if word in self.tok2idx: vectors[word] = [float(x) for x in tokens[1:]] dim = len(vectors.values()[0]) def to_vector(tok): if tok in vectors and tok not in reset: return vectors[tok] elif tok not in vectors: return np.random.normal(-0.05, 0.05, size=dim) else: return [0.0]*dim self.embed = mx.nd.array([vectors[tok] if tok in vectors and tok not in reset else [0.0]*dim for tok in self.idx2tok]) # - # #### Data iterator # Iterator class for SICK dataset class SICKDataIter(object): def __init__(self, path, vocab, num_classes, shuffle=True): super(SICKDataIter, self).__init__() self.vocab = vocab self.num_classes = num_classes self.l_sentences = [] self.r_sentences = [] self.l_trees = [] self.r_trees = [] self.labels = [] self.size = 0 self.shuffle = shuffle self.reset() def reset(self): if self.shuffle: mask = list(range(self.size)) random.shuffle(mask) self.l_sentences = [self.l_sentences[i] for i in mask] self.r_sentences = [self.r_sentences[i] for i in mask] self.l_trees = [self.l_trees[i] for i in mask] self.r_trees = [self.r_trees[i] for i in mask] self.labels = [self.labels[i] for i in mask] self.index = 0 def next(self): out = self[self.index] self.index += 1 return out def set_context(self, context): self.l_sentences = [a.as_in_context(context) for a in self.l_sentences] self.r_sentences = [a.as_in_context(context) for a in self.r_sentences] def __len__(self): return self.size def __getitem__(self, index): l_tree = self.l_trees[index] r_tree = self.r_trees[index] l_sent = self.l_sentences[index] r_sent = self.r_sentences[index] label = self.labels[index] return (l_tree, l_sent, r_tree, r_sent, label) # ### Training with autograd # + import argparse, pickle, math, os, random import 
logging logging.basicConfig(level=logging.INFO) import numpy as np import mxnet as mx from mxnet import gluon from mxnet.gluon import nn from mxnet import autograd as ag # training settings and hyper-parameters use_gpu = False optimizer = 'AdaGrad' seed = 123 batch_size = 25 training_batches_per_epoch = 10 learning_rate = 0.01 weight_decay = 0.0001 epochs = 1 rnn_hidden_size, sim_hidden_size, num_classes = 150, 50, 5 # initialization context = [mx.gpu(0) if use_gpu else mx.cpu()] # seeding mx.random.seed(seed) np.random.seed(seed) random.seed(seed) # read dataset def verified(file_path, sha1hash): import hashlib sha1 = hashlib.sha1() with open(file_path, 'rb') as f: while True: data = f.read(1048576) if not data: break sha1.update(data) matched = sha1.hexdigest() == sha1hash if not matched: logging.warn('Found hash mismatch in file {}, possibly due to incomplete download.' .format(file_path)) return matched data_file_name = 'tree_lstm_dataset-3d85a6c4.cPickle' data_file_hash = '3d85a6c44a335a33edc060028f91395ab0dcf601' if not os.path.exists(data_file_name) or not verified(data_file_name, data_file_hash): from mxnet.test_utils import download download('https://apache-mxnet.s3-accelerate.amazonaws.com/gluon/dataset/%s'%data_file_name, overwrite=True) with open('tree_lstm_dataset-3d85a6c4.cPickle', 'rb') as f: train_iter, dev_iter, test_iter, vocab = pickle.load(f) logging.info('==> SICK vocabulary size : %d ' % vocab.size) logging.info('==> Size of train data : %d ' % len(train_iter)) logging.info('==> Size of dev data : %d ' % len(dev_iter)) logging.info('==> Size of test data : %d ' % len(test_iter)) # get network net = SimilarityTreeLSTM(sim_hidden_size, rnn_hidden_size, vocab.size, vocab.embed.shape[1], num_classes) # use pearson correlation and mean-square error for evaluation metric = mx.metric.create(['pearsonr', 'mse']) # the prediction from the network is log-probability vector of each score class # so use the following function to convert scalar score to the vector # e.g 4.5 -> [0, 0, 0, 0.5, 0.5] def to_target(x): target = np.zeros((1, num_classes)) ceil = int(math.ceil(x)) floor = int(math.floor(x)) if ceil==floor: target[0][floor-1] = 1 else: target[0][floor-1] = ceil - x target[0][ceil-1] = x - floor return mx.nd.array(target) # and use the following to convert log-probability vector to score def to_score(x): levels = mx.nd.arange(1, 6, ctx=x.context) return [mx.nd.sum(levels*mx.nd.exp(x), axis=1).reshape((-1,1))] # when evaluating in validation mode, check and see if pearson-r is improved # if so, checkpoint and run evaluation on test dataset def test(ctx, data_iter, best, mode='validation', num_iter=-1): data_iter.reset() samples = len(data_iter) data_iter.set_context(ctx[0]) preds = [] labels = [mx.nd.array(data_iter.labels, ctx=ctx[0]).reshape((-1,1))] for _ in tqdm(range(samples), desc='Testing in {} mode'.format(mode)): l_tree, l_sent, r_tree, r_sent, label = data_iter.next() z = net(mx.nd, l_sent, r_sent, l_tree, r_tree) preds.append(z) preds = to_score(mx.nd.concat(*preds, dim=0)) metric.update(preds, labels) names, values = metric.get() metric.reset() for name, acc in zip(names, values): logging.info(mode+' acc: %s=%f'%(name, acc)) if name == 'pearsonr': test_r = acc if mode == 'validation' and num_iter >= 0: if test_r >= best: best = test_r logging.info('New optimum found: {}.'.format(best)) return best def train(epoch, ctx, train_data, dev_data): # initialization with context if isinstance(ctx, mx.Context): ctx = [ctx] 
net.collect_params().initialize(mx.init.Xavier(magnitude=2.24), ctx=ctx[0]) net.embed.weight.set_data(vocab.embed.as_in_context(ctx[0])) train_data.set_context(ctx[0]) dev_data.set_context(ctx[0]) # set up trainer for optimizing the network. trainer = gluon.Trainer(net.collect_params(), optimizer, {'learning_rate': learning_rate, 'wd': weight_decay}) best_r = -1 Loss = gluon.loss.KLDivLoss() for i in range(epoch): train_data.reset() num_samples = min(len(train_data), training_batches_per_epoch*batch_size) # collect predictions and labels for evaluation metrics preds = [] labels = [mx.nd.array(train_data.labels[:num_samples], ctx=ctx[0]).reshape((-1,1))] for j in tqdm(range(num_samples), desc='Training epoch {}'.format(i)): # get next batch l_tree, l_sent, r_tree, r_sent, label = train_data.next() # use autograd to record the forward calculation with ag.record(): # forward calculation. the output is log probability z = net(mx.nd, l_sent, r_sent, l_tree, r_tree) # calculate loss loss = Loss(z, to_target(label).as_in_context(ctx[0])) # backward calculation for gradients. loss.backward() preds.append(z) # update weight after every batch_size samples if (j+1) % batch_size == 0: trainer.step(batch_size) # translate log-probability to scores, and evaluate preds = to_score(mx.nd.concat(*preds, dim=0)) metric.update(preds, labels) names, values = metric.get() metric.reset() for name, acc in zip(names, values): logging.info('training acc at epoch %d: %s=%f'%(i, name, acc)) best_r = test(ctx, dev_data, best_r, num_iter=i) train(epochs, context, train_iter, dev_iter) # - # ### Conclusion # - Gluon offers great tools for modeling in an imperative way.
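# As an optional added sanity check (a sketch: it reuses the toy `tree` built at the top of this notebook, and the layer sizes below are arbitrary), the model definition can be exercised on a tiny, randomly initialized instance:

# +
toy_net = SimilarityTreeLSTM(sim_hidden_size=20, rnn_hidden_size=30,
                             embed_in_size=50, embed_dim=16, num_classes=5)
toy_net.collect_params().initialize(mx.init.Xavier())

sent = mx.nd.array([5, 1, 7, 3, 9])           # one (made-up) token index per tree node 0..4
out = toy_net(mx.nd, sent, sent, tree, tree)  # log-probabilities over the 5 similarity classes
print(out.shape)                              # expected: (1, 5)
# -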
chapter09_natural-language-processing/tree-lstm.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/mooglol/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling/blob/master/module3-make-explanatory-visualizations/LS_DS_123_Make_Explanatory_Visualizations_Assignment.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + [markdown] colab_type="text" id="NMEswXWh9mqw" # # ASSIGNMENT # # ### 1) Replicate the lesson code. I recommend that you [do not copy-paste](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit). # # Get caught up to where we got our example in class and then try and take things further. How close to "pixel perfect" can you make the lecture graph? # # Once you have something that you're proud of, share your graph in the cohort channel and move on to the second exercise. # # ### 2) Reproduce another example from [FiveThityEight's shared data repository](https://data.fivethirtyeight.com/). # # **WARNING**: There are a lot of very custom graphs and tables at the above link. I **highly** recommend not trying to reproduce any that look like a table of values or something really different from the graph types that we are already familiar with. Search through the posts until you find a graph type that you are more or less familiar with: histogram, bar chart, stacked bar chart, line chart, [seaborn relplot](https://seaborn.pydata.org/generated/seaborn.relplot.html), etc. Recreating some of the graphics that 538 uses would be a lot easier in Adobe photoshop/illustrator than with matplotlib. # # - If you put in some time to find a graph that looks "easy" to replicate you'll probably find that it's not as easy as you thought. # # - If you start with a graph that looks hard to replicate you'll probably run up against a brick wall and be disappointed with your afternoon. 
# # # # # # # # # # # # + id="7SY1ZHawyZvz" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 355} outputId="436772fa-d6a3-489c-9435-6b843841bd94" import numpy as np import matplotlib.pyplot as plt from matplotlib.ticker import AutoMinorLocator, MultipleLocator, FuncFormatter from IPython.display import display, Image url = 'https://fivethirtyeight.com/wp-content/uploads/2017/09/mehtahickey-inconvenient-0830-1.png' example = Image(url=url, width=400) display(example) # + id="OC9CsePzGRyn" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 271} outputId="968c49a5-0670-4cfc-e407-a4605616c376" # %matplotlib inline import matplotlib.pyplot as plt import numpy as np import pandas as pd fake = pd.Series([38, 3, 2, 1, 2, 4, 6, 5, 5, 33], index=range(1,11)) fake.plot.bar(color='C1', width=0.9); # + id="yyNWxQ35IL0T" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 271} outputId="235ed1d3-79fc-4d35-afc1-711b8319ebb4" fake2 = pd.Series( [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 5, 5, 5, 6, 6, 6, 6, 7, 7, 7, 7, 7, 8, 8, 8, 8, 9, 9, 9, 9, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10]) ax = fake2.value_counts().sort_index().plot.bar(color='C1', width=0.9); ax.grid(True) # + id="iQgCP_yuIqXI" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 334} outputId="bb0dd228-c53a-4fa7-f764-af4bfccb8960" # From lesson import matplotlib.pyplot as plt plt.style.use('fivethirtyeight') fig = plt.figure() fig.patch.set(facecolor='white') ax = fake.plot.bar(color="#ED713A", width=0.9) ax.set(facecolor='white') ax.text(x=-1.8, y=44, s="'An Inconvenient Sequel: Truth To Power' is divisive", fontweight='bold', fontsize=12); ax.text(x=-1.8, y=41.5, s="IMDb ratings for the film as of Aug. 29", fontsize=11) ax.set_ylabel("Percent of total votes", fontsize=9, fontweight='bold', labelpad=10) ax.set_xlabel("Rating", fontsize=9, fontweight='bold', labelpad=10) ax.set_xticklabels(range(1,11), rotation=0) ax.set_yticks(range(0,50,10)) ax.set_yticklabels(range(0, 50, 10)) plt.show() # + id="eOxCaiQEUZ4I" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 355} outputId="4e0288de-eeb5-4bdf-8e43-45af4af3808c" display(example) # + id="bnhOSTIkVh6k" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 321} outputId="3e7f74da-0a81-40f9-82cc-021d2e85c99c" import matplotlib.pyplot as plt plt.style.use('fivethirtyeight') fig = plt.figure(figsize=(5.47,3.75)) fig.patch.set(facecolor='white') plt.rcParams["font.family"] = "Atlas Grotesk" ax = fake.plot.bar(color="#ED713A", width=0.9) ax.set(facecolor='white') ax.set_ylim(bottom=-1.5) ax.set_ylim(top=40.5) ax.text(x=-1.8, y=46, s="'An Inconvenient Sequel: Truth To Power' is divisive", fontweight='bold', fontsize=12); ax.text(x=-1.8, y=43.5, s="IMDb ratings for the film as of Aug. 
29", fontsize=11) ax.set_ylabel("Percent of total votes", fontsize=9.7, fontweight='bold', labelpad=10, fontname="Atlas Grotesk") ax.set_xlabel("Rating", fontsize=9.7, fontweight='bold', labelpad=10) ax.set_xticklabels(range(1,11), rotation=0) yt1 = [0,10,20,30,40] yt2 = ['0 ', '10 ', '20 ', '30 ', '40%'] ax.set_yticks(yt1) ax.set_yticklabels(yt2) ax.spines['top'].set_visible(False) ax.spines['right'].set_visible(False) ax.spines['bottom'].set_visible(False) ax.spines['left'].set_visible(False) ax.tick_params(labelsize=9.42, grid_alpha=0.75, labelcolor = '#bfbfbf') plt.show() # + id="eqyRJO8mYziZ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 355} outputId="3f143b56-353c-4f7e-ce84-bf45ecb6d6f6" display(example) # + id="1bjV7sl1kMr2" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 295} outputId="83db8e96-923c-4fa8-c9e8-a40fbd666067" url2 = 'https://fivethirtyeight.com/wp-content/uploads/2015/04/barry-jester-datalab-boomersdruguse-actual.png?w=575' example2 = Image(url=url2, width=400) display(example2) # + id="v2zAqT8LmLyn" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 326} outputId="30f543d7-b2ef-474c-e3ae-0ca58b8ec9f4" df = pd.read_csv('https://raw.githubusercontent.com/fivethirtyeight/data/master/drug-use-by-age/drug-use-by-age.csv') df.round(2) df.tail(5) # + id="jOfsFrLooilY" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 151} outputId="a38c6479-3296-4369-9e41-cfe80c0f241a" boomer_drug = df.loc[[15]] boomer_drug # + id="LsL9uodSq-6B" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 151} outputId="053588a9-717e-4898-a805-bd19064e4a3c" boomer_drug.drop([col for col in df.columns if 'frequency' in col],axis=1,inplace=True) boomer_drug # + id="p3eruQ2GtA1I" colab_type="code" colab={} boomer_drug.drop('alcohol-use', axis=1, inplace=True) # + id="Gs_0HPONtGiP" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 131} outputId="e4f36463-9391-484b-b32e-fb624d5bbfe4" boomer_drug # + id="Sh-Hx3LguGca" colab_type="code" colab={} boomer_drug = boomer_drug.T # + id="g8ECqaW3xtCi" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 483} outputId="fe3f9f1a-e8ee-4e2c-c74e-078d99012631" boomer_drug # + id="6n11vGK0x5Bk" colab_type="code" colab={} boomer_drug = boomer_drug.drop(['age', 'n']) # + id="hXA55l5fyCKg" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 421} outputId="5831572b-ec5b-4348-aa17-ae0d753739cf" boomer_drug # + id="_Lxx5h-mympE" colab_type="code" colab={} boomer_drug.rename(index = {'marijuana-use': 'Marijuana', 'cocaine-use': 'Cocaine', 'crack-use': 'Crack', 'heroin-use': 'Heroin', 'hallucinogen-use': 'Hallucinogen', 'inhalant-use': 'Inhalant', 'pain-releiver-use': 'Pain reliever', 'oxycontin-use': 'OxyContin', 'tranquilizer-use': 'Tranquilizer', 'stimulant-use': 'Stimulant', 'meth-use': 'Meth', 'sedative-use': 'Sedative'}, inplace = True) # + id="cU-ua6OX0jzV" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 421} outputId="249fed0b-715a-4e29-a247-6df34acd6287" boomer_drug # + id="JZm0yvqY076j" colab_type="code" colab={} boomer_drug.columns = boomer_drug.columns.astype(str) # + id="f484RmMb1_Hn" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 421} outputId="55b51adc-78f1-461b-fffe-37f3db94b490" boomer_drug # + id="yqJbmXGK2QpJ" colab_type="code" colab={} boomer_drug = boomer_drug.sort_values(by=['15']) # + id="tyqUrKkH2WNt" 
colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 421} outputId="ccedaf34-5558-4042-93f9-bc4524de22bb" boomer_drug # + id="jYAwKlo6oztA" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 318} outputId="02cecdd3-699d-4cc6-cbdc-6a227237e59d" import matplotlib.pyplot as plt plt.style.use('fivethirtyeight') fig = plt.figure(figsize=(5.47,3.75)) fig.patch.set(facecolor='white') ax = boomer_drug.plot.barh(color='r', width=0.9) ax.set(facecolor='white') ax.text(x=-2, y=12.2, s="Percentage of Americans aged 50-64 who said in a 2012 survey \n that they had used the following drugs in the past year", fontsize=14); y = [0.05, 0.15, 0.15, 0.23 ,0.25, 0.28, 0.36, 0.36, 0.87, 1.43, 2.52, 7.29] for i, v in enumerate(y): ax.text(v + 0.15, i + -0.18, str(v), color='black', size='12') ax.grid(False) ax.axes.get_xaxis().set_visible(False) ax.get_legend().remove() plt.show(); # + id="MaVVHz9TW1F7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 295} outputId="415e2540-e717-4903-d173-8b38384671ed" display(example2) # + id="N9_r-0nXW7DU" colab_type="code" colab={} # Order of values are different because the data 538 used for their plot was with 2 decimal points, # but the data they uploaded on github only uses 1 decimal place. # I am fairly certain I put in the code for facecolor correctly, but the background color will not change from gray. # Unsure about how to add a % next to 7.29 without breaking the code, gives errors when I try to. # + [markdown] id="0wSrBzmJyWaV" colab_type="text" # # STRETCH OPTIONS # # ### 1) Reproduce one of the following using the matplotlib or seaborn libraries: # # - [thanksgiving-2015](https://fivethirtyeight.com/features/heres-what-your-part-of-america-eats-on-thanksgiving/) # - [candy-power-ranking](https://fivethirtyeight.com/features/the-ultimate-halloween-candy-power-ranking/) # - or another example of your choice! # # ### 2) Make more charts! # # Choose a chart you want to make, from [Visual Vocabulary - Vega Edition](http://ft.com/vocabulary). # # Find the chart in an example gallery of a Python data visualization library: # - [Seaborn](http://seaborn.pydata.org/examples/index.html) # - [Altair](https://altair-viz.github.io/gallery/index.html) # - [Matplotlib](https://matplotlib.org/gallery.html) # - [Pandas](https://pandas.pydata.org/pandas-docs/stable/visualization.html) # # Reproduce the chart. [Optionally, try the "<NAME>."](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit) If you want, experiment and make changes. # # Take notes. Consider sharing your work with your cohort! # + id="dRJkKftiy5BJ" colab_type="code" colab={} # More Work Here
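# Added sketch for the note above about appending a '%' to the largest value (7.29): reusing the same `y` list and `ax` as in the horizontal bar chart cell, the labelling loop in that cell could be written with a conditional format instead of a plain number, e.g.

# +
for i, v in enumerate(y):
    label = f'{v}%' if v == max(y) else str(v)
    ax.text(v + 0.15, i - 0.18, label, color='black', size=12)
# -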
module3-make-explanatory-visualizations/LS_DS_123_Make_Explanatory_Visualizations_Assignment.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] pycharm={"name": "#%% md\n"} # # Object Detection API TFRecord Generation # # This notebook generates TFRecords, that are needed to use custom datasets with the TensorFlow Object Detection API. # These TFRecords can then be used to configure the training and the validation of the gesture detection model. # The documentation can be found on the official [TensorFlow Object Detection API Respository](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/using_your_own_dataset.md#conversion-script-outline). # # The [TFRecord generation notebook by Dat Tran](https://github.com/datitran/raccoon_dataset/blob/master/generate_tfrecord.py) for his Raccoon Dataset proved to be a valuable resource. # It provides an implementation of the TFRecord format that fits the goal of this notebook very well. # # Some lines of code like paths have to be adjusted for your case. All needed adjustments are marked with "Todo". # + from __future__ import division from __future__ import print_function from __future__ import absolute_import import os import io import pandas as pd import tensorflow as tf from PIL import Image from object_detection.utils import dataset_util from collections import namedtuple # - # ## Helper methods def split(df, group): """ Groups same image names together. One image can contain multiple hands. With this method those hands are grouped together and attached to one image object. """ data = namedtuple('data', ['filename', 'object']) gb = df.groupby(group) return [data(filename, gb.get_group(x)) for filename, x in zip(gb.groups.keys(), gb.groups)] # ## Main method for TFRecord creation # + pycharm={"name": "#%%\n"} def create_tf_example(group, path): """ Create the TFRecord. This method creates the TFRecord according to the input data. It specifies how the data looks like. """ with tf.gfile.GFile(os.path.join(path, '{}'.format(group.filename)), 'rb') as fid: encoded_jpg = fid.read() encoded_jpg_io = io.BytesIO(encoded_jpg) image = Image.open(encoded_jpg_io) width, height = image.size filename = group.filename.encode('utf8') # Todo: Change file format if needed. Alternative: b'png' image_format = b'jpg' xmins = [] xmaxs = [] ymins = [] ymaxs = [] classes_text = [] classes = [] for index, row in group.object.iterrows(): xmins.append(row['xmin'] / width) xmaxs.append(row['xmax'] / width) ymins.append(row['ymin'] / height) ymaxs.append(row['ymax'] / height) # Todo: Map class to label # Should you need multiple labels a dedicated label mapping file can be used. 
# See https://github.com/tensorflow/models/blob/master/research/object_detection/object_detection_tutorial.ipynb classes_text.append('hand'.encode('utf8')) classes.append(1) tf_example = tf.train.Example(features=tf.train.Features(feature={ 'image/height': dataset_util.int64_feature(height), 'image/width': dataset_util.int64_feature(width), 'image/filename': dataset_util.bytes_feature(filename), 'image/source_id': dataset_util.bytes_feature(filename), 'image/encoded': dataset_util.bytes_feature(encoded_jpg), 'image/format': dataset_util.bytes_feature(image_format), 'image/object/bbox/xmin': dataset_util.float_list_feature(xmins), 'image/object/bbox/xmax': dataset_util.float_list_feature(xmaxs), 'image/object/bbox/ymin': dataset_util.float_list_feature(ymins), 'image/object/bbox/ymax': dataset_util.float_list_feature(ymaxs), 'image/object/class/text': dataset_util.bytes_list_feature(classes_text), 'image/object/class/label': dataset_util.int64_list_feature(classes), })) return tf_example # - # ## TFRecord writer methods # After executing the data preprocessing script all files can be found under your defined folder name. # A folder will be generated with the following structure: # # <pre> # ego_lared_tiny/ # ├── images/ # │ ├── aishwaryfist000000109.jpg # │ ├── aishwaryfist000000611.jpg # │ ├── aishwaryfist000000782.jpg # │ ├── ... # │ ├── aishwaryfist000001356.jpg # │ └── ... # ├── train/ # │ ├── aishwaryfist000000109.jpg # │ ├── aishwaryfist000000782.jpg # │ └── ... # ├── val/ # │ ├── aishwaryfist000000611.jpg # │ ├── aishwaryfist000001356.jpg # │ └── ... # ├── labels_all.csv # ├── labels_train.csv # └── labels_val.csv # </pre> # # Please have a look at the following cells to see how these folders have to placed inside your detection_training/ folder. # + pycharm={"name": "#%%\n"} # Write training TFRecord # Todo: Change file paths writer = tf.python_io.TFRecordWriter("/home/jetbot/Documents/detection_training/train.record") path = "/home/jetbot/Documents/detection_training/images_train/" # Path to training images examples = pd.read_csv("/home/jetbot/Documents/detection_training/labels_train.csv") # Path to training labels grouped = split(examples, 'frame') for group in grouped: tf_example = create_tf_example(group, path) writer.write(tf_example.SerializeToString()) writer.close() print("done") # + pycharm={"name": "#%%\n"} # Write validation TFRecord # Todo: Change file paths writer = tf.python_io.TFRecordWriter("/home/jetbot/Documents/detection_training/val.record") path = "/home/jetbot/Documents/detection_training/images_val/" # Path to validation images examples = pd.read_csv("/home/jetbot/Documents/detection_training/labels_val.csv") # Path to validation labels grouped = split(examples, 'frame') for group in grouped: tf_example = create_tf_example(group, path) writer.write(tf_example.SerializeToString()) writer.close() print("done") # - # ## Output TFRecords to console for verification # # Outputting the records to the console allows manual verification of their correctness. 
# + pycharm={"name": "#%%\n"} # Write training record to console # Todo: Change file path i = 1 for example in tf.python_io.tf_record_iterator("/home/jetbot/Documents/detection_training/train.record"): example = tf.train.Example.FromString(example) print(example) if i % 3 == 0: break i = i + 1 # + pycharm={"name": "#%%\n"} # Write validation record to console # Todo: Change file path i = 1 for example in tf.python_io.tf_record_iterator("/home/jetbot/Documents/detection_training/val.record"): example = tf.train.Example.FromString(example) print(example) if i % 3 == 0: break i = i + 1
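# ## Label map (added note)
#
# The records above encode a single class with id 1 and name 'hand'. To configure training with the Object Detection API you will also need a matching label map file; a minimal sketch is shown below (the output path is only an example following the folder layout used above — adjust it for your setup).

# +
# Write a minimal label map matching the class encoded in the TFRecords
label_map = """item {
  id: 1
  name: 'hand'
}
"""
with open("/home/jetbot/Documents/detection_training/label_map.pbtxt", "w") as f:
    f.write(label_map)
print("done")
# -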
howto/2_detection/generate_tfrecords.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # <a href="https://qworld.net" target="_blank" align="left"><img src="../qworld/images/header.jpg" align="left"></a> # $ \newcommand{\bra}[1]{\langle #1|} $ # $ \newcommand{\ket}[1]{|#1\rangle} $ # $ \newcommand{\braket}[2]{\langle #1|#2\rangle} $ # $ \newcommand{\dot}[2]{ #1 \cdot #2} $ # $ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $ # $ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $ # $ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $ # $ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $ # $ \newcommand{\mypar}[1]{\left( #1 \right)} $ # $ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $ # $ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $ # $ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $ # $ \newcommand{\onehalf}{\frac{1}{2}} $ # $ \newcommand{\donehalf}{\dfrac{1}{2}} $ # $ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $ # $ \newcommand{\vzero}{\myvector{1\\0}} $ # $ \newcommand{\vone}{\myvector{0\\1}} $ # $ \newcommand{\stateplus}{\myvector{ \sqrttwo \\ \sqrttwo } } $ # $ \newcommand{\stateminus}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $ # $ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $ # $ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $ # $ \newcommand{\I}{ \mymatrix{rr}{1 & 0 \\ 0 & 1} } $ # $ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $ # $ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $ # $ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $ # $ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $ # $ \newcommand{\pstate}[1]{ \lceil \mspace{-1mu} #1 \mspace{-1.5mu} \rfloor } $ # $ \newcommand{\greenbit}[1] {\mathbf{{\color{green}#1}}} $ # $ \newcommand{\bluebit}[1] {\mathbf{{\color{blue}#1}}} $ # $ \newcommand{\redbit}[1] {\mathbf{{\color{red}#1}}} $ # $ \newcommand{\brownbit}[1] {\mathbf{{\color{brown}#1}}} $ # $ \newcommand{\blackbit}[1] {\mathbf{{\color{black}#1}}} $ # <font style="font-size:28px;" align="left"><b>Vectors: One Dimensional Lists</b></font> # <br> # _prepared by <NAME>_ # <br><br> # A <b>vector</b> is a list of numbers. # # Vectors are very useful to describe the state of a system, as we will see in the main tutorial. # # A list is a single object in python. # # Similarly, a vector is a single mathematical object. # # The number of elements in a list is its size or length. # # Similarly, the number of entries in a vector is called as the <b>size</b> or <b>dimension</b> of the vector. # consider the following list with 4 elements L = [1,-2,0,5] print(L) # Vectors can be in horizontal or vertical shape. # # We show this list as a <i><u>four dimensional</u></i> <b>row vector</b> (horizontal) or a <b>column vector</b> (vertical): # # $$ # u = \mypar{1~~-2~~0~~-5} ~~~\mbox{ or }~~~ v =\mymatrix{r}{1 \\ -2 \\ 0 \\ 5}, ~~~\mbox{ respectively.} # $$ # # Remark that we do not need to use any comma in vector representation. # <h3> Multiplying a vector with a number</h3> # # A vector can be multiplied by a number. 
# # Multiplication of a vector with a number is also a vector: each entry is multiplied by this number. # # $$ # 3 \cdot v = 3 \cdot \mymatrix{r}{1 \\ -2 \\ 0 \\ 5} = \mymatrix{r}{3 \\ -6 \\ 0 \\ 15} # ~~~~~~\mbox{ or }~~~~~~ # (-0.6) \cdot v = (-0.6) \cdot \mymatrix{r}{1 \\ -2 \\ 0 \\ 5} = \mymatrix{r}{-0.6 \\ 1.2 \\ 0 \\ -3}. # $$ # # We may consider this as enlarging or making smaller the entries of a vector. # # We verify our calculations in python. # + # 3 * v v = [1,-2,0,5] print("v is",v) # we use the same list for the result for i in range(len(v)): v[i] = 3 * v[i] print("3v is",v) # -0.6 * u # reinitialize the list v v = [1,-2,0,5] for i in range(len(v)): v[i] = -0.6 * v[i] print("0.6v is",v) # - # <h3> Summation of vectors</h3> # # Two vectors (with same dimension) can be summed up. # # The summation of two vectors is a vector: the numbers on the same entries are added up. # # $$ # u = \myrvector{-3 \\ -2 \\ 0 \\ -1 \\ 4} \mbox{ and } v = \myrvector{-1\\ -1 \\2 \\ -3 \\ 5}. # ~~~~~~~ \mbox{Then, }~~ # u+v = \myrvector{-3 \\ -2 \\ 0 \\ -1 \\ 4} + \myrvector{-1\\ -1 \\2 \\ -3 \\ 5} = # \myrvector{-3+(-1)\\ -2+(-1) \\0+2 \\ -1+(-3) \\ 4+5} = \myrvector{-4\\ -3 \\2 \\ -4 \\ 9}. # $$ # # We do the same calculations in Python. # + u = [-3,-2,0,-1,4] v = [-1,-1,2,-3,5] result=[] for i in range(len(u)): result.append(u[i]+v[i]) print("u+v is",result) # print the result vector similarly to a column vector print() # print an empty line print("the elements of u+v are") for j in range(len(result)): print(result[j]) # - # <h3> Task 1 </h3> # # Create two 7-dimensional vectors $u$ and $ v $ as two different lists in Python having entries randomly picked between $-10$ and $10$. # # Print their entries. # + from random import randrange # # your solution is here # #r=randrange(-10,11) # randomly pick a number from the list {-10,-9,...,-1,0,1,...,9,10} # - # <a href="Math20_Vectors_Solutions.ipynb#task1">click for our solution</a> # <h3> Task 2 </h3> # # By using the same vectors, find the vector $ (3 u-2 v) $ and print its entries. Here $ 3u $ and $ 2v $ means $u$ and $v$ are multiplied by $3$ and $2$, respectively. # # your solution is here # # <a href="Math20_Vectors_Solutions.ipynb#task2">click for our solution</a> # <h3> Visualization of vectors </h3> # # We can visualize the vectors with dimension at most 3. # # For simplicity, we give examples of 2-dimensional vectors. # # Consider the vector $ v = \myvector{1 \\ 2} $. # # A 2-dimensional vector can be represented on the two-dimensional plane by an arrow starting from the origin $ (0,0) $ to the point $ (1,2) $. # %run math.py visualize_vectors("example1") # We represent the vectors $ 2v = \myvector{2 \\ 4} $ and $ -v = \myvector{-1 \\ -2} $ below. # %run math.py visualize_vectors("example2") # As we can observe, after multiplying by 2, the vector is enlarged, and, after multiplying by $(-1)$, the vector is the same but its direction is opposite. # <h3> The length of a vector </h3> # # The length of a vector is the (shortest) distance from the points represented by the entries of vector to the origin point $(0,0)$. # # The length of a vector can be calculated by using Pythagoras Theorem. # # We visualize a vector, its length, and the contributions of each entry to the length. # # Consider the vector $ u = \myrvector{-3 \\ 4} $. # %run math.py visualize_vectors("example3") # The length of $ u $ is denoted as $ \norm{u} $, and it is calculated as $ \norm{u} =\sqrt{(-3)^2+4^2} = 5 $. # # Here each entry contributes with its square value. 
All contributions are summed up. Then, we obtain the square of the length. # # This formula is generalized to any dimension. # # We find the length of the following vector by using Python: # # $$ # v = \myrvector{-1 \\ -3 \\ 5 \\ 3 \\ 1 \\ 2} # ~~~~~~~~~~ # \mbox{and} # ~~~~~~~~~~ # \norm{v} = \sqrt{(-1)^2+(-3)^2+5^2+3^2+1^2+2^2} . # $$ # <div style="font-style:italic;background-color:#fafafa;font-size:10pt;"> Remember: There is a short way of writing power operation in Python. # <ul> # <li> In its generic form: $ a^x $ can be denoted by $ a ** x $ in Python. </li> # <li> The square of a number $a$: $ a^2 $ can be denoted by $ a ** 2 $ in Python. </li> # <li> The square root of a number $ a $: $ \sqrt{a} = a^{\frac{1}{2}} = a^{0.5} $ can be denoted by $ a ** 0.5 $ in Python.</li> # </ul> # </div> # + v = [-1,-3,5,3,1,2] length_square=0 for i in range(len(v)): print(v[i],":square ->",v[i]**2) # print each entry and its square value length_square = length_square + v[i]**2 # sum up the square of each entry length = length_square ** 0.5 # take the square root of the summation of the squares of all entries print("the summation is",length_square) print("then the length is",length) # for square root, we can also use built-in function math.sqrt print() # print an empty line from math import sqrt print("the square root of",length_square,"is",sqrt(length_square)) # - # <h3> Task 3 </h3> # # Let $ u = \myrvector{1 \\ -2 \\ -4 \\ 2} $ be a four dimensional vector. # # Verify that $ \norm{4 u} = 4 \cdot \norm{u} $ in Python. # # Remark that $ 4u $ is another vector obtained from $ u $ by multiplying it with 4. # # your solution is here # # <a href="Math20_Vectors_Solutions.ipynb#task3">click for our solution</a> # <h3> Notes:</h3> # # When a vector is multiplied by a number, then its length is also multiplied with the same number. # # But, we should be careful with the sign. # # Consider the vector $ -3 v $. It has the same length of $ 3v $, but its direction is opposite. # # So, when calculating the length of $ -3 v $, we use absolute value of the number: # # $ \norm{-3 v} = |-3| \norm{v} = 3 \norm{v} $. # # Here $ |-3| $ is the absolute value of $ -3 $. # # The absolute value of a number is its distance to 0. So, $ |-3| = 3 $. # <h3> Task 4 </h3> # # Let $ u = \myrvector{1 \\ -2 \\ -4 \\ 2} $ be a four dimensional vector. # # Randomly pick a number $r$ from $ \left\{ \dfrac{1}{10}, \dfrac{2}{10}, \cdots, \dfrac{9}{10} \right\} $. # # Find the vector $(-r)\cdot u$ and then its length. # # your solution is here # # <a href="Math20_Vectors_Solutions.ipynb#task4">click for our solution</a>
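# <h3> Added sketch </h3>
#
# One possible (unofficial) sketch for Task 4, using only plain Python as in the cells above — the length of $ (-r)\cdot u $ should come out as $ r \cdot \norm{u} = 5r $:

# +
from random import randrange

u = [1, -2, -4, 2]
r = randrange(1, 10) / 10          # random element of {1/10, ..., 9/10}

v = [-r * x for x in u]            # the vector (-r)·u
length_v = sum(x**2 for x in v) ** 0.5

print("r =", r, " (-r)u =", v, " length =", length_v)  # length equals 5r, since the length of u is 5
# -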
math/Math20_Vectors.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# ## Lab 4
# ## A CNN for classifying the CIFAR-10 dataset
#
# This lab uses CIFAR-10, which contains 60,000 colour images of size 32x32 in 10 classes (6,000 images per class). The training set consists of 50,000 images and the test set of 10,000 images.
#
# The dataset is split into 6 batches: 5 batches for training and 1 for testing. The test batch consists of 1,000 randomly selected images from each class. The training batches contain the remaining images, shuffled in random order. Overall, the training set contains exactly 5,000 images of each class, but an individual batch is not balanced (it may contain more images of one class than of another).
#
#
# ## Step 0: Getting the data
#
# **Download the data from https://www.cs.toronto.edu/~kriz/cifar.html (CIFAR-10 python version). Read the description of the data.**

# Set the path to the files
CIFAR_DIR = 'cifar-10-batches-py/'

# The downloaded archive contains the files data_batch_1, data_batch_2, ..., data_batch_5 and test_batch, which are Python dictionaries serialized with the cPickle module.
#
# **Load all of the data. Use the function given on the website.**

# This function is from the download site, it is a custom function that goes hand-in-hand with the data
def unpickle(file):
    import pickle
    with open(file, 'rb') as fo:
        cifar_dict = pickle.load(fo, encoding='bytes')
    return cifar_dict

dirs = ['batches.meta','data_batch_1','data_batch_2',
        'data_batch_3','data_batch_4','data_batch_5','test_batch']

all_data = [0,1,2,3,4,5,6]

for i,direc in zip(all_data,dirs):
    all_data[i] = unpickle(CIFAR_DIR+direc)

batch_meta = all_data[0]
data_batch1 = all_data[1]
data_batch2 = all_data[2]
data_batch3 = all_data[3]
data_batch4 = all_data[4]
data_batch5 = all_data[5]
test_batch = all_data[6]

batch_meta

data_batch1.keys()

# ### Display one image using matplotlib.
#
# **Use plt.imshow(). You will need the reshape and transpose functions to obtain two-dimensional RGB images. Explain the solution given below.**
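# *Added note (one possible answer to the question above):* each CIFAR-10 row stores its 3072 values channel-first — the first 1024 entries are the red channel, the next 1024 the green channel and the last 1024 the blue channel, each in row-major pixel order. Reshaping to `(10000, 3, 32, 32)` therefore recovers `(image, channel, row, column)`, and `transpose(0,2,3,1)` moves the channel axis last, which is the `(image, height, width, channel)` layout that `plt.imshow` expects; `astype("uint8")` keeps the pixels as displayable 0-255 intensities.

# added sanity check: 3 channels * 32 * 32 pixels = 3072 values per image
assert data_batch1[b"data"].shape == (10000, 3072)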
# #X = X.reshape(10000, 3, 32, 32).transpose(0,2,3,1).astype("uint8")

import matplotlib.pyplot as plt
# %matplotlib inline
import numpy as np

X = data_batch1[b"data"]

# The input data is an array with 10000 rows for each image and 3072 columns for pixel info
X.shape

# Need to reshape the 2nd dimension into a higher order tensor in order to deal with an image for plotting
X = X.reshape(10000, 3, 32, 32).transpose(0,2,3,1).astype("uint8")

# Normalize the pixel data for imshow
X[0].max()

(X[0]/255).max()

plt.imshow(X[0])

plt.imshow(X[1])

plt.imshow(X[4])

# # Functions for working with the data
#
# **Explain how the code below works and what it is needed for.**

def one_hot_encode(vec, vals=10):
    '''
    10- possible labels
    '''
    n = len(vec)
    out = np.zeros((n, vals))
    out[range(n), vec] = 1
    return out

class CifarHelper():

    def __init__(self):
        self.i = 0

        # Grabs a list of all the data batches for training
        self.all_train_batches = [data_batch1,data_batch2,data_batch3,data_batch4,data_batch5]
        # Grabs a list of all the test batches
        self.test_batch = [test_batch]

        # Intialize some empty variables
        self.training_images = None
        self.training_labels = None

        self.test_images = None
        self.test_labels = None

    def set_up_images(self):

        print("Setting Up Training Images and Labels")

        # Vertically stacks the training images
        self.training_images = np.vstack([d[b"data"] for d in self.all_train_batches])
        train_len = len(self.training_images)

        # Reshapes and normalizes training images
        self.training_images = self.training_images.reshape(train_len,3,32,32).transpose(0,2,3,1)/255
        # One hot Encodes the training labels ([0,0,0,1,0,0,0,0,0,0])
        self.training_labels = one_hot_encode(np.hstack([d[b"labels"] for d in self.all_train_batches]), 10)

        print("Setting Up Test Images and Labels")

        # Vertically stacks the test images
        self.test_images = np.vstack([d[b"data"] for d in self.test_batch])
        test_len = len(self.test_images)

        # Reshapes and normalizes test images
        self.test_images = self.test_images.reshape(test_len,3,32,32).transpose(0,2,3,1)/255
        # One hot Encodes the test labels (e.g. [0,0,0,1,0,0,0,0,0,0])
        self.test_labels = one_hot_encode(np.hstack([d[b"labels"] for d in self.test_batch]), 10)

    def next_batch(self, batch_size):
        # the first dimension is the batch size of 100
        x = self.training_images[self.i:self.i+batch_size].reshape(100,32,32,3)
        y = self.training_labels[self.i:self.i+batch_size]
        self.i = (self.i + batch_size) % len(self.training_images)
        return x, y

# **Create an instance of the CifarHelper class and call its set_up_images() method. Explain why this needs to be done.**

# +
# Before Your tf.Session run these two lines
ch = CifarHelper()
ch.set_up_images()

# During your session to grab the next batch use this line
# (Just like we did for mnist.train.next_batch)
# batch = ch.next_batch(100)
# -

# ## Building the model
#

import tensorflow as tf

# **Create two placeholders, x and y_true, as well as a placeholder hold_prob that will hold the dropout keep-probability of the network.**

# Place holders for x and y_true to be filled in later
x = tf.placeholder(tf.float32,shape=[None,32,32,3])
y_true = tf.placeholder(tf.float32,shape=[None,10])

# Think of this as a regularization parameter to prevent too much work on a few neurons
hold_prob = tf.placeholder(tf.float32)

# ### Helper functions
#
# **Take the following functions from the example:**
#
# * init_weights
# * init_bias
# * conv2d
# * max_pool_2by2
# * convolutional_layer
# * full_layer
#
# What are they needed for?
# + # Initialize weights as variables def init_weights(shape): init_random_dist = tf.truncated_normal(shape, stddev=0.1) return tf.Variable(init_random_dist) # Initialize biases as variables def init_bias(shape): init_bias_vals = tf.constant(0.1, shape=shape) return tf.Variable(init_bias_vals) # Create a 2d convolution layer that only does one step in W,H dimensions # Remember strides [batches, H, W, channels] # batches and channels are virtually always 1, you don't want to skip any observations or channels # We can vary H,W if we like. 1,1 is relatively standard but 2,2 is used as well to streamline the feature gen. # padding = 'SAME' # https://stackoverflow.com/questions/37674306/what-is-the-difference-between-same-and-valid-padding-in-tf-nn-max-pool-of-t def conv2d(x, W): return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME') # def max_pool_2by2(x): # ksize = 1,2,2,1 reduces the size of H, W by 1/2 return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME') def convolutional_layer(input_x, shape): W = init_weights(shape) b = init_bias([shape[3]]) return tf.nn.relu(conv2d(input_x, W) + b) def normal_full_layer(input_layer, size): input_size = int(input_layer.get_shape()[1]) W = init_weights([input_size, size]) b = init_bias([size]) return tf.matmul(input_layer, W) + b # - # ### Создание слоев # # ** Создайте первый сверточный слой и следующий за ним слой подвыборки. Используйте размер ядра свертки, равный 4.** # # Will compute 32 features for each 4,4 patch # 4,4 is patch size, # next is input channels # last number is features to compute, number of output channels, convo_1 = convolutional_layer(x,shape=[4,4,3,32]) convo_1_pooling = max_pool_2by2(convo_1) # ** Создайте следующие сверточный слой и слой подвыборки. ** convo_2 = convolutional_layer(convo_1_pooling,shape=[4,4,32,64]) convo_2_pooling = max_pool_2by2(convo_2) # ** Создайте первый полносвязный слой, варьируйте количество нейронов на выходе.** 8*8*64 # Reshape it for the last DNN layer convo_2_flat = tf.reshape(convo_2_pooling,[-1,8*8*64]) # # Run a DNN activation on the last layer full_layer_one = tf.nn.relu(normal_full_layer(convo_2_flat,1024)) # **Создайте dropout слой. ** # To prevent overfitting, knock off some of the neurons in case it is doing everything. full_one_dropout = tf.nn.dropout(full_layer_one,keep_prob=hold_prob) # ** Создайте последний полносвязный слой.** # Define y_pred y_pred = normal_full_layer(full_one_dropout,10) # ### Loss Function # # ** Создайте функцию потерь cross_entropy ** # Put in a loss function cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_true,logits=y_pred)) # ### Optimizer # ** Создайте Adam Optimizer. 
** # Stochastic gradient descent with varying learning_rate optimizer = tf.train.AdamOptimizer(learning_rate=0.001) train = optimizer.minimize(cross_entropy) init = tf.global_variables_initializer() # ## Graph Session # # ** Выполните обучение и тестирование сети, во время обучения периодически выводите результат тестирования.** with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for i in range(2000): batch = ch.next_batch(100) sess.run(train, feed_dict={x: batch[0], y_true: batch[1], hold_prob: 0.5}) # PRINT OUT A MESSAGE EVERY 100 STEPS if i%100 == 0: print('Currently on step {}'.format(i)) print('Accuracy is:') # Test the Train Model matches = tf.equal(tf.argmax(y_pred,1),tf.argmax(y_true,1)) acc = tf.reduce_mean(tf.cast(matches,tf.float32)) print(sess.run(acc,feed_dict={x:ch.test_images,y_true:ch.test_labels,hold_prob:1.0})) print('\n')
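
# **Note (an addition, not part of the original lab):** the evaluation above recreates the `matches`/`acc` ops every 100 steps and feeds the whole 10000-image test set in a single `sess.run` call, which grows the graph and can be memory hungry. Below is a minimal sketch of a batched evaluation helper; it assumes the placeholders `x`, `y_true`, `hold_prob`, the model output `y_pred`, `np` and the `CifarHelper` instance `ch` from the cells above are still in scope.

# +
# Build the accuracy ops once, outside the training loop.
eval_matches = tf.equal(tf.argmax(y_pred, 1), tf.argmax(y_true, 1))
eval_acc = tf.reduce_mean(tf.cast(eval_matches, tf.float32))

def batched_test_accuracy(sess, chunk=1000):
    # Average accuracy over the test set, evaluated chunk by chunk
    # (all chunks have equal size here, so the plain mean is exact).
    accs = []
    for start in range(0, len(ch.test_images), chunk):
        accs.append(sess.run(eval_acc, feed_dict={
            x: ch.test_images[start:start + chunk],
            y_true: ch.test_labels[start:start + chunk],
            hold_prob: 1.0}))
    return float(np.mean(accs))
# -

# The helper could then be called inside the training session, e.g. `print(batched_test_accuracy(sess))`, in place of the single full-batch evaluation.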
Lab_3_CNN_CIFAR_10_ZIADE (2).ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## 第14讲 在数轴上标记不同的数 # + [markdown] heading_collapsed=true # ### Assignments 作业 # - # 1. **Mass** is the amount of "matter" in an object (though "matter" may be difficult to define), whereas **weight** is the force exerted on an object by gravity. In other words, an object with a mass of 1.0 kilogram weighs approximately 9.81 newtons on the surface of the Earth, which is its mass multiplied by the gravitational field strength. The object's weight is less on Mars, where gravity is weaker, and more on Saturn, and very small in space when far from any significant source of gravity, but it always has the same mass. # # **质量**是物体中“物质”的数量(尽管“物质”可能难以定义),而**重量**是地球吸引物体的力量,也就是重力作用在物体上的力,。 换句话说,一个质量为 1.0 千克的物体在地球表面上的重量约为 9.81 牛顿,即其质量乘以引力场强度。该物体的重量在火星上较小,那里的重力较弱,而在土星上则较大,并且在远离任何重要重力源的空间中非常小,但它的质量始终相同。 # # Answer the following question step by step: # 依次回答下面的问题: # 1. Measure what the mass yourself is on a scales at home with the help of your Mom or Dad, observe the reading of the scale and the unit. 在爸爸妈妈的帮助下用家里的秤称一称自己,看看上面的读数是多少,单位是什么。 # 2. Then write a program to calculate how many forces you are attracted by Earth. 编程计算你受到的球的吸引力有多少牛顿。 # # + def my_weight(): mass = int(input("your mass please. please input a integer")) weight = 9.81 * mass print("your mass is {} and your weight is {} newton".format(mass, weight)) print("the equation is: {}kg (your mass)x 9.81(number of newtons in a kg).".format(mass)) # - my_weight() # + [markdown] hidden=true # 2. Write a method to compare two numbers and tell which number is larger. Requirements: # 创建一个方法用来判断给定的两个数哪一个大。要求: # 1. the method name is `larger_num`, 方法名为`larger_num`; # 2. the method accepts two parameters, the names of which is up to you. 方法接受两个整数型参数,参数名可自由设定 # 3. within the method, compare the values of the two parameters, output the larger one with the format of "XX is larger.". If two numbers are euqal output: "Two numbers are equal." 在方法内部,比较这两个数,打印输出较大的那个数,如果两个数相等则打印输出“两个数相等”。 # # Run and test your methods 5 time with the following values of parameters. Observe if your method's output is the same as the output provided below. 创建好这个方法后,依次用下面的两组数来测试你创建的方法,观察输出的结果是否与预期的结果相同: # # | num1 | num2 | Output | # | ----------- |:------------:| ------------------:| # | 12 | 12 | Two numbers are equal. | # | 18 | 14 | 18 is larger. | # | 0 | 4 | 4 is larger. | # | 1091 | 0 | 1091 is larger. | # | 0 | 0 | Two numbers are equal. | # # - def larger_num(num1, num2): if num1 > num2: print("{} is larger.".format(num1)) elif num1 == num2: print("Two numbers are equal.") elif num1 < num2: print("{} is larger.".format(num2)) return larger_num(12, 12) larger_num(18, 14) larger_num(0, 4) larger_num(1091, 0) larger_num(0, 0) # + [markdown] hidden=true # 3. Write method to calculate and print out the sum of two integers that you input from the keyboard. Requirements: # 创建一个方法来计算两个整数的和并输出结果。要求: # 1. 方法名为`get_sum`,该方法不接受任何参数。 # 2. within the method, write codes to receive two number strings from keyboard 你的代码能够从键盘接受两个数字字符串 # 3. within the method, convert these two strings to two Integer numbers, assign them to two variables. The names of the two Integer variables can be decided as you like 将这两个字符串转化为两个整数型数字并将其赋值给两个变量,变量名可以根据你自己的喜好来设定 # 4. 
within the method, calculate the sum of the two Integer variables, assign the result to a new Variable named `result` 计算这两个整型变量的和,并将结果赋值给一个叫`result`的变量 # 5. within the method, print out the type of the `result` variable 打印输出变量`result`的数据类型 # 6. within the method, display the result on screen with a readable sentence. 打印输出一个通俗易懂的句子来显示你计算得到的结果 # # Run and test 5 times your method. Each time when inputs from keyboard are required, provide the following values for the two numbers. Observe whether the output of the method is the same as the sum for each test 运行5次你创建的方法,分别使用下表中的num1,num2的值作为键盘输入运行并测试你的代码共计5次,观察结果是否与对应的Sum一样 # # **Your should run the method on all test data; that being said, you may not change the codes between each run.** # **你应该用同样的代码来运行所有的测试数据,也就是说,你不应该在两次测试间隙修改你的代码** # # # | num1 | num2 | Output | # | ----------- |:------------:| ------------------:| # | 12 | 12 | The sum is 24 | # | 18 | 14 | The sum is 32 | # | 0 | 4 | The sum is 4 | # | 1091 | 0 | The sum is 1091 | # | 0 | 0 | The sum is 0 | # # - def get_sum(): num1 = int(input("num1 please")) num2 = int(input("num2 please")) result = num1 + num2 print(type(result)) print("The sum is {}".format(result)) # TODO: execute your methods here 5 times with values of parameters provided. get_sum() get_sum() get_sum() get_sum() get_sum() # 4. Write a method to calculate and print out the perimeter and area of a rectagle with the length and width provided by keyboard. 创建一个方法来计算一个长方形的周长和面积,确定这个长方形的长和宽从键盘输入得到。Requirements: 要求: # 1. 方法名为`perimeter_and_area`, 该方法接受两个整数型参数,试图代表矩形的两个边长。 # 2. within the method, verify whether two parameters provided can form a rectangle or not. If not, print out "Not a rectangle"; otherwise, continue the following steps # 3. within the method, calculate the perimiter and the area of this rectangle, assign the result to `perimeter` and `area` variable 计算这个长方形的周长和面积,并将结果分别赋值给名为`perimeter`和`area`的变量 # 4. within the method, print out the value of the `perimeter`和`area` variables 打印输出变量`perimeter`和`area`的结果 # # Run and test 5 times your method with the following inputs (length, width) for each test. Observe whether the results are equal to the Perimeter and Area for each test 分别使用下表中的length和width的值作为键盘输入运行并测试你的代码共计5次,观察结果是否与对应的Perimiter和Area值一样 # # **Your should run the method on all test data.** # **你应该用同样的代码来运行所有的测试数据。** # # # # | length | width | output | # | --------- |:----------:|:-------------------------------:| # | 12 | 12 | perimeter is: 48, area is: 144 | # | 18 | 14 | perimeter is: 48, area is: 144 | # | 0 | 4 | Not a rectangle | # | 1091 | 0 | Not a rectangle | # | 0 | 0 | Not a rectangle | def perimeter_and_area(lenght, wigth): side1 = int(lenght) side2 = int(wigth) if side1 == 0 or side2 == 0: print("Not a rectangle") else: perimeter = (side1 + side2) * 2 area = side1 * side2 print("perimeter is:{}, area is: {}".format(perimeter, area)) perimeter_and_area(12, 12) perimeter_and_area(18, 14) perimeter_and_area(0, 4) perimeter_and_area(1091, 0) perimeter_and_area(0, 0)
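
# **Note (an addition, not required by the assignment):** the check above only rejects sides equal to zero. Negative side lengths cannot form a rectangle either, so a slightly stricter variant could reject any value less than or equal to zero, as sketched below.

# +
def perimeter_and_area_strict(length, width):
    side1 = int(length)
    side2 = int(width)
    if side1 <= 0 or side2 <= 0:
        print("Not a rectangle")
    else:
        perimeter = (side1 + side2) * 2
        area = side1 * side2
        print("perimeter is:{}, area is: {}".format(perimeter, area))

perimeter_and_area_strict(-3, 4)   # Not a rectangle
perimeter_and_area_strict(12, 12)  # perimeter is:48, area is: 144
# -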
source/2021/500Answer/content/014_locate_points_answer.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## _*VQE; using its callback to monitor optimization progress*_ # # This notebook demonstrates using Qiskit Aqua's VQE algorithm to plot graphs of the convergence path to ground state energy with different optimizers. # # This notebook uses the callback capability of VQE to capture information at each objective functional evaluation where it is computing the energy using the parameterized variational form. While the params themselves are also part of the callback we are only interested in the energy value here to plot the convergence. # # Note: other variational algorithms such as QAOA and QSVM have similar callbacks. # + import numpy as np import pylab from qiskit import BasicAer from qiskit.aqua import Operator, QuantumInstance, aqua_globals from qiskit.aqua.algorithms.adaptive import VQE from qiskit.aqua.algorithms.classical import ExactEigensolver from qiskit.aqua.components.initial_states import Zero from qiskit.aqua.components.optimizers import COBYLA, L_BFGS_B, SLSQP from qiskit.aqua.components.variational_forms import RY # - # First we create a qubit operator for VQE. Here we have taken a set of paulis that were originally computed by qiskit-chemistry for an H2 molecule. # + pauli_dict = { 'paulis': [{"coeff": {"imag": 0.0, "real": -1.052373245772859}, "label": "II"}, {"coeff": {"imag": 0.0, "real": 0.39793742484318045}, "label": "ZI"}, {"coeff": {"imag": 0.0, "real": -0.39793742484318045}, "label": "IZ"}, {"coeff": {"imag": 0.0, "real": -0.01128010425623538}, "label": "ZZ"}, {"coeff": {"imag": 0.0, "real": 0.18093119978423156}, "label": "XX"} ] } qubit_op = Operator.load_from_dict(pauli_dict) # - # Now we loop over the set of optimizers. The defaults for maxiters/evals for the respective optimizers is more than sufficient to converge the above H2 problem so we do not need to add any logic to set accordingly. # + optimizers = [COBYLA, L_BFGS_B, SLSQP] converge_cnts = np.empty([len(optimizers)], dtype=object) converge_vals = np.empty([len(optimizers)], dtype=object) num_qubits = qubit_op.num_qubits for i in range(len(optimizers)): aqua_globals.random_seed = 250 optimizer = optimizers[i]() print('\rOptimizer: {} '.format(type(optimizer).__name__), end='') init_state = Zero(num_qubits) var_form = RY(num_qubits, initial_state=init_state) counts = [] values = [] def store_intermediate_result(eval_count, parameters, mean, std): counts.append(eval_count) values.append(mean) algo = VQE(qubit_op, var_form, optimizer, 'matrix', callback=store_intermediate_result) backend = BasicAer.get_backend('statevector_simulator') quantum_instance = QuantumInstance(backend=backend) algo_result = algo.run(quantum_instance) converge_cnts[i] = np.asarray(counts) converge_vals[i] = np.asarray(values) print('\rOptimization complete '); # - # Now from the callback data we stored we can plot the energy value at each objective function call each optimzer makes. An optimizer using a finite difference method for computing gradient has that characteristic step like plot where for a number of evaluations it is computing the value for close by points to establish a gradient (the close by points having very similiar values whose difference cannot be seen on the scale of the graph here). 
pylab.rcParams['figure.figsize'] = (12, 8)
for i in range(len(optimizers)):
    pylab.plot(converge_cnts[i], converge_vals[i], label=optimizers[i].__name__)
pylab.xlabel('Eval count')
pylab.ylabel('Energy')
pylab.title('Energy convergence for various optimizers')
pylab.legend(loc='upper right')

# Finally, since the above problem is still easily tractable classically, we can use ExactEigensolver to compute a reference value for the solution. We can then plot the difference from the exact solution as the VQE energy converges towards the minimum value, which should be that exact classical solution.

ee = ExactEigensolver(qubit_op)
result = ee.run()
ref = result['energy']
print('Reference value: {}'.format(ref))

pylab.rcParams['figure.figsize'] = (12, 8)
for i in range(len(optimizers)):
    pylab.plot(converge_cnts[i], abs(ref - converge_vals[i]), label=optimizers[i].__name__)
pylab.xlabel('Eval count')
pylab.ylabel('Energy difference from solution reference value')
pylab.title('Energy convergence for various optimizers')
pylab.yscale('log')
pylab.legend(loc='upper right')
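
# As a small follow-up (an addition, not part of the original notebook), the callback data stored above can also be used to print the final energy reached by each optimizer together with its distance from the reference value.

for i in range(len(optimizers)):
    final_energy = converge_vals[i][-1]
    print('{}: final energy = {:.6f}, |error| = {:.2e}'.format(
        optimizers[i].__name__, final_energy, abs(ref - final_energy)))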
aqua/vqe_convergence.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + # Importing Packages #Scikit-Learn from sklearn.linear_model import LinearRegression from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split from sklearn.model_selection import cross_val_score from sklearn.linear_model import Lasso from sklearn.linear_model import Ridge #Computational and Visualisation packages import numpy as np import pandas as pd import matplotlib.pyplot as plt # - #Custom plot display function def display_plot(cv_scores, cv_scores_std): fig = plt.figure() ax = fig.add_subplot(1,1,1) ax.plot(alpha_space, cv_scores) std_error = cv_scores_std / np.sqrt(10) ax.fill_between(alpha_space, cv_scores + std_error, cv_scores - std_error, alpha=0.2) ax.set_ylabel('CV Score +/- Std Error') ax.set_xlabel('Alpha') ax.axhline(np.max(cv_scores), linestyle='--', color='.5') ax.set_xlim([alpha_space[0], alpha_space[-1]]) ax.set_xscale('log') plt.show() #Loading the requisite dataset df = pd.read_csv('gapminder_dataset.csv') df_columns = df.columns # Array for feature and target variables X = df.fertility.values y = df.life.values # + # Dimensions before reshape print("Dimensions of y before reshaping: {}".format(y.shape)) print("Dimensions of X before reshaping: {}".format(X.shape)) # Reshaping X and y y = y.reshape(-1, 1) X = X.reshape(-1, 1) # Dimensions after reshape print("Dimensions of y after reshaping: {}".format(y.shape)) print("Dimensions of X after reshaping: {}".format(X.shape)) # - # Split of the main dataset into 70% training and 30% test X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state=42) # + # Linear Regression reg_all = LinearRegression() # Fitting the regressor on the training dataset reg_all.fit(X_train, y_train) y_pred = reg_all.predict(X_test) # Compute and print R^2 and RMSE print("R^2: {}".format(reg_all.score(X_test, y_test))) rmse = np.sqrt(mean_squared_error(y_test, y_pred)) print("Root Mean Squared Error: {}".format(rmse)) # - #5 Fold Cross Validation Evaluation reg = LinearRegression() cv_scores_5fold = cross_val_score(reg, X, y, cv=5 ) print(cv_scores_5fold) print("Average 5-Fold CV Score: {}".format(np.mean(cv_scores_5fold))) # + #Lasso Regularization X = df.drop('fertility',1) y = df.life df_columns = df.drop('fertility',1).columns # Instantiating a lasso regressor lasso = Lasso(alpha=0.4, normalize=True) # Fit the regressor to the data lasso = Lasso(alpha=0.4, normalize=True) lasso = lasso.fit(X, y) lasso lasso_coef = lasso.coef_ lasso_coef plt.plot(range(len(df_columns)), lasso_coef) plt.xticks(range(len(df_columns)), df_columns.values, rotation=60) plt.margins(0.02) plt.show() # + #Ridge Regularization # + # Setup the array of alphas and lists to store scores alpha_space = np.logspace(-4, 0, 50) ridge_scores = [] ridge_scores_std = [] # Initialize a ridge regressor ridge = Ridge(normalize=True) for alpha in alpha_space: ridge.alpha = alpha ridge_cv_scores = cross_val_score(ridge, X, y, cv=10) ridge_scores.append(np.mean(ridge_cv_scores)) ridge_scores_std.append(np.std(ridge_cv_scores)) display_plot(ridge_scores, ridge_scores_std)
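
# As a follow-up (an addition, not part of the original notebook), the cross-validation scores collected above can be used to report which alpha performed best for the ridge regressor.

best_idx = int(np.argmax(ridge_scores))
print("Best alpha: {:.5f} with mean 10-fold CV score {:.4f}".format(
    alpha_space[best_idx], ridge_scores[best_idx]))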
Regression/Linear-Regression-Lasso&Ridge.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Multiple Movements # # Let's see how our robot responds to moving multiple times without sensing! # # <img src='images/uncertain_motion.png' width=50% height=50% /> # # First let's include our usual resource imports and display function. # importing resources import matplotlib.pyplot as plt import numpy as np # A helper function for visualizing a distribution. def display_map(grid, bar_width=1): if(len(grid) > 0): x_labels = range(len(grid)) plt.bar(x_labels, height=grid, width=bar_width, color='b') plt.xlabel('Grid Cell') plt.ylabel('Probability') plt.ylim(0, 1) # range of 0-1 for probability values plt.title('Probability of the robot being at each cell in the grid') plt.xticks(np.arange(min(x_labels), max(x_labels)+1, 1)) plt.show() else: print('Grid is empty') # ### Write code that moves 1000 times and then prints the resulting probability distribution. # # You are given the initial variables and a complete `move` function (that incorporates uncertainty), below. # + # given initial variables p=[0, 1, 0, 0, 0] # the color of each grid cell in the 1D world world=['green', 'red', 'red', 'green', 'green'] # Z, the sensor reading ('red' or 'green') Z = 'red' pHit = 0.6 pMiss = 0.2 pExact = 0.8 pOvershoot = 0.1 pUndershoot = 0.1 # Complete the move function def move(p, U): q=[] # iterate through all values in p for i in range(len(p)): # use the modulo operator to find the new location for a p value # this finds an index that is shifted by the correct amount index = (i-U) % len(p) nextIndex = (index+1) % len(p) prevIndex = (index-1) % len(p) s = pExact * p[index] s = s + pOvershoot * p[nextIndex] s = s + pUndershoot * p[prevIndex] # append the correct, modified value of p to q q.append(s) return q # Here is code for moving twice # p = move(p, 1) # p = move(p, 1) # print(p) # display_map(p) # + ## Write code for moving 1000 times for k in range(1000): p = move(p,1) print(p) display_map(p) # -
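
# As a quick numerical check (an addition, not part of the original exercise): after many uncertain moves without sensing, the distribution should approach the uniform distribution, i.e. 1/5 = 0.2 for every cell, which is the state of maximum uncertainty.

# +
uniform = [1.0 / len(p)] * len(p)
max_deviation = max(abs(p_i - u_i) for p_i, u_i in zip(p, uniform))
print('Largest deviation from the uniform distribution: {:.6f}'.format(max_deviation))
# -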
07_Object_Tracking_Localisation/4_2_Robot_Localization/8_1. Multiple Movements.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # Importing the libraries import numpy as np import pandas as pd import matplotlib.pyplot as plt # Importing the datset dataset = pd.read_csv('creditcard.csv') x = dataset.iloc[:,1:30].values y = dataset.iloc[:,30].values # Splitting the dataset into the Training set and Test set from sklearn.model_selection import train_test_split x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.2, random_state = 32) # Feature Scaling from sklearn.preprocessing import StandardScaler sc_x = StandardScaler() x_train = sc_x.fit_transform(x_train) x_test = sc_x.transform(x_test) from sklearn.naive_bayes import GaussianNB classifier = GaussianNB() # + from sklearn.naive_bayes import GaussianNB classifier = GaussianNB() classifier.fit(x_train,y_train) y_pred = classifier.predict(x_test) # Results in form of Confusion Matrix from sklearn.metrics import confusion_matrix cm = confusion_matrix(y_test, y_pred) cm # - from sklearn.model_selection import cross_val_score accuracies = cross_val_score(estimator = classifier, X = x_train, y = y_train, cv = 10, n_jobs = -1) accuracies.mean() accuracies.std() from mlxtend.feature_selection import SequentialFeatureSelector as sfs # ### Selecting 5 features # Fitting Feature Selector sfs1 = sfs(classifier, k_features=5, verbose = 2) sfs1 = sfs1.fit(x_train, y_train) # Showing selected Features feat_cols1 = list(sfs1.k_feature_idx_) print(feat_cols1) #[10, 12, 14, 17, 25] #Fitting in the model classifier.fit(x_train[:,feat_cols1],y_train) y_pred = classifier.predict(x_test[:,feat_cols1]) # Comparing and evaluating results using Confusion Matrix from sklearn.metrics import confusion_matrix cm = confusion_matrix(y_test, y_pred) cm # ### Selecting 10 features # Fitting Feature Selector sfs1 = sfs(classifier, k_features=10, verbose = 2) sfs1 = sfs1.fit(x_train, y_train) # Showing selected Features feat_cols2 = list(sfs1.k_feature_idx_) print(feat_cols2) #[10, 11, 12, 14, 15, 17, 18, 23, 24, 25] #Fitting in the models classifier.fit(x_train[:,feat_cols2],y_train) y_pred = classifier.predict(x_test[:,feat_cols2]) # Comparing and evaluating results using Confusion Matrix from sklearn.metrics import confusion_matrix cm = confusion_matrix(y_test, y_pred) cm # ### Selecting 15 features # Fitting Feature Selector sfs1 = sfs(classifier, k_features=15, verbose = 2) sfs1 = sfs1.fit(x_train, y_train) # Showing Features feat_cols3 = list(sfs1.k_feature_idx_) print(feat_cols3) # [3, 5, 8, 10, 11, 12, 14, 15, 17, 18, 21, 23, 24, 25, 28] #Fitting in the models classifier.fit(x_train[:,feat_cols3],y_train) y_pred = classifier.predict(x_test[:,feat_cols3]) # Comparing and evaluating results using Confusion Matrix from sklearn.metrics import confusion_matrix cm = confusion_matrix(y_test, y_pred) cm # ### Selecting 20 features # Fitting Feature Selector sfs1 = sfs(classifier, k_features=20, verbose = 2) sfs1 = sfs1.fit(x_train, y_train) # Showing Features feat_cols4 = list(sfs1.k_feature_idx_) print(feat_cols4) # [2, 3, 4, 5, 6, 8, 10, 11, 12, 13, 14, 15, 16, 17, 18, 21, 23, 24, 25, 28] # Fitting in the models classifier.fit(x_train[:,feat_cols4],y_train) y_pred = classifier.predict(x_test[:,feat_cols4]) # Comparing and evaluating results using Confusion Matrix from sklearn.metrics import confusion_matrix cm = confusion_matrix(y_test, y_pred) cm # ### 
Selecting 21 features # Fitting Feature Selector sfs1 = sfs(classifier, k_features=21, verbose = 2) sfs1 = sfs1.fit(x_train, y_train) # Showing Features feat_cols5 = list(sfs1.k_feature_idx_) print(feat_cols5) # [2, 3, 4, 5, 6, 8, 10, 11, 12, 13, 14, 15, 16, 17, 18, 21, 23, 24, 25, 27, 28] # Fitting in the models classifier.fit(x_train[:,feat_cols5],y_train) y_pred = classifier.predict(x_test[:,feat_cols5]) # Comparing and evaluating results using Confusion Matrix from sklearn.metrics import confusion_matrix cm = confusion_matrix(y_test, y_pred) cm # ### Selecting 19 features # Fitting Feature Selector sfs1 = sfs(classifier, k_features=19, verbose = 2) sfs1 = sfs1.fit(x_train, y_train) # Showing Features feat_cols6 = list(sfs1.k_feature_idx_) print(feat_cols6) # [2, 3, 4, 5, 8, 10, 11, 12, 13, 14, 15, 16, 17, 18, 21, 23, 24, 25, 28] # Fitting in the models classifier.fit(x_train[:,feat_cols6],y_train) y_pred = classifier.predict(x_test[:,feat_cols6]) # Comparing and evaluating results using Confusion Matrix from sklearn.metrics import confusion_matrix cm = confusion_matrix(y_test, y_pred) cm # Fitting Feature Selector sfs1 = sfs(classifier, k_features=18, verbose = 2) sfs1 = sfs1.fit(x_train, y_train) # Showing Features feat_cols7 = list(sfs1.k_feature_idx_) print(feat_cols7) # [2, 3, 5, 8, 10, 11, 12, 13, 14, 15, 16, 17, 18, 21, 23, 24, 25, 28] # Fitting in the models classifier.fit(x_train[:,[2, 3, 5, 8, 10, 11, 12, 13, 14, 15, 16, 17, 18, 21, 23, 24, 25, 28]],y_train) y_pred = classifier.predict(x_test[:,[2, 3, 5, 8, 10, 11, 12, 13, 14, 15, 16, 17, 18, 21, 23, 24, 25, 28]]) # Comparing and evaluating results using Confusion Matrix from sklearn.metrics import confusion_matrix cm = confusion_matrix(y_test, y_pred) cm from sklearn.model_selection import cross_val_score accuracies = cross_val_score(estimator = classifier, X = x_train, y = y_train, cv = 10, n_jobs = -1) accuracies.mean() accuracies.std() # Fitting Feature Selector sfs1 = sfs(classifier, k_features=17, verbose = 2) sfs1 = sfs1.fit(x_train, y_train) # Showing Features feat_cols8 = list(sfs1.k_feature_idx_) print(feat_cols8) # [2, 3, 5, 8, 10, 11, 12, 13, 14, 15, 17, 18, 21, 23, 24, 25, 28] # Fitting in the models classifier.fit(x_train[:,feat_cols8],y_train) y_pred = classifier.predict(x_test[:,feat_cols8]) # Comparing and evaluating results using Confusion Matrix from sklearn.metrics import confusion_matrix cm = confusion_matrix(y_test, y_pred) cm
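
# As a follow-up (an addition, not part of the original notebook): for a highly imbalanced dataset such as credit-card fraud, a confusion matrix or overall accuracy alone can hide poor recall on the fraud class, so per-class precision and recall for the last fitted feature subset (17 features above) give a clearer picture.

# +
from sklearn.metrics import classification_report
print(classification_report(y_test, y_pred, digits=4))
# -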
src/2D) Using_ScoreComparison.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/DeepLearningInterpreter/occlusion_experiments/blob/master/colab_notebooks/Visualizing_Detections_With(out)_Occlusion.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + [markdown] id="Fyk7FY6S1zB0" colab_type="text" # ##Introduction # + [markdown] id="mMFo33mL17L7" colab_type="text" # The purpose of this notebook is to detect the pistol(s) on images and visualize these detections by drawing bounding boxes. Optionally, an occlusion box can be placed in the image to see how the models deal with the occlusion. The notebook is structured as follows. # # **First** you can choose whether you want to use SSD or Faster R-CNN. # # **The next few parts** are important but can be skipped or quickly glanced over. In these parts, the repository is cloned to the cloud server, the necessary imports are taken care of, the model is loaded into memory, and some helpful functions are defined. # # **The final part** of the notebook is the most interesting. Here, you can specify the images of interest and optionally draw a bounding box in the image. The detection visualizations will be printed. # + [markdown] id="gFefh6g2FAT0" colab_type="text" # ##Choose Your Model # Choose the meta architecture that you want to use. # # + id="eEXTOOuo3CpB" colab_type="code" colab={} #Set this variable equal to "SSD" or "FRCNN" (for Faster R-CNN) meta_architecture = "SSD" # + id="rMuRCv5AHJTJ" colab_type="code" colab={} if meta_architecture == "SSD": MODEL_NAME = "SSD_ext_lrCyc" MODEL_TYPE = "SSD" elif meta_architecture == "FRCNN": MODEL_NAME = "FRCNN_ext_lr3" MODEL_TYPE = "FRCNN" else: raise ValueError( 'The meta_architecture variable has to be set to either "SSD" or "FRCNN".' ) # + [markdown] id="JnaC-s_3jaG6" colab_type="text" # --------------------------------------------------------------------------- # ##Cloning the GitHub repository to the cloud server. # # # + id="1Gl-A13XLHZc" colab_type="code" outputId="d2307f2b-d9c2-441a-d98e-e221934ce87d" colab={"base_uri": "https://localhost:8080/", "height": 671} #Downloading and installing git lfs # !curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash # !sudo apt-get install git-lfs # !git lfs install # + id="Uh3_od2-zxdl" colab_type="code" outputId="81ea8a99-b1ff-4b32-9610-98b70d8e3322" colab={"base_uri": "https://localhost:8080/", "height": 240} #This takes a few minutes but less than five for sure! #Cloning repository. 
The exclude flag indicates that large files from the "training" and "evaluation_outcomes" #subdirectories should not be downloaded # !git lfs clone https://github.com/DeepLearningInterpreter/occlusion_experiments.git --exclude="occlusion_experiments/main_content/multitude_of_possible_detectors/training, occlusion_experiments/main_content/multitude_of_possible_detectors/evaluation_outcomes" # + id="wOvyEGjO0bOT" colab_type="code" outputId="bfab3e26-97a9-4e57-c856-50f6beadf6df" colab={"base_uri": "https://localhost:8080/", "height": 34} import os os.chdir("/content/occlusion_experiments/TF_object_detection_API_modified") os.chdir("object_detection") os.getcwd() # + [markdown] id="Pp2K7I5G1XOf" colab_type="text" # --------------------------------------------------------------------------------------------------------------------- # ##Imports and function definitions # + id="efkJtoawELEh" colab_type="code" colab={} # coding: utf-8 # # Imports import csv import numpy as np import os import six.moves.urllib as urllib import sys import tensorflow as tf import time from distutils.version import StrictVersion from collections import defaultdict from io import StringIO from matplotlib import pyplot as plt from PIL import Image import scipy.misc sys.path.append("..") from utils import ops as utils_ops if StrictVersion(tf.__version__) < StrictVersion('1.9.0'): raise ImportError('Please upgrade your TensorFlow installation to v1.9.* or later!') # + id="Srqxo2frWlcb" colab_type="code" colab={} #makes sure there is no printing output # %%capture #more imports from object_detection.utils import label_map_util from utils import visualization_utils as vis_util # + id="FuVNXz2b3q0T" colab_type="code" colab={} os.chdir('/content/occlusion_experiments/main_content') # + id="jw5TJAYTBfR4" colab_type="code" colab={} # # Model preparation path_to_model = 'multitude_of_possible_detectors/frozen_models_for_detection/' # Path to frozen detection graph. This is the actual model that is used for the object detection. PATH_TO_FROZEN_GRAPH = path_to_model + MODEL_NAME + '/frozen_inference_graph.pb' # List of the strings that is used to add correct label for each box. PATH_TO_LABELS = os.path.join('data/main_data', 'pistol_car_label_map.pbtxt') #Load a (frozen) Tensorflow model into memory. detection_graph = tf.Graph() with detection_graph.as_default(): od_graph_def = tf.GraphDef() with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as fid: serialized_graph = fid.read() od_graph_def.ParseFromString(serialized_graph) tf.import_graph_def(od_graph_def, name='') #Loading label map category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS, use_display_name=True) #Helper code def load_image_into_numpy_array(image): (im_width, im_height) = image.size return np.array(image.getdata()).reshape( (im_height, im_width, 3)).astype(np.uint8) # + [markdown] id="b7yBcMrNqSfn" colab_type="text" # end of imports and function definitions. # # ------------------------------------------------------------- # ##Begin occlusion and inference # # + [markdown] id="qseaxDCT542-" colab_type="text" # Specify the name of the image of interest in this cell. The image will be printed so you can decide where to draw the occlusion box. # + id="mOgKSaAJ5vy3" colab_type="code" outputId="a57e47e0-73d4-45b7-e007-27981e92f2fc" colab={"base_uri": "https://localhost:8080/", "height": 34} #Specify the index of the image of interest. To see which index corresponds #to which image, uncomment the second to next codeblock and run it. 
image_index = 7 #printing image with open('data/occlusion_images/nameAndBB.csv') as f: reader = csv.reader(f) image_info = list(reader) print(image_index) image_path = "data/occlusion_images/" + image_info[image_index][0] image = Image.open(image_path) image_np = load_image_into_numpy_array(image) plt.imshow(image_np) plt.show() # + [markdown] id="A792c02WBLgW" colab_type="text" # Decide the location of the occlusion box. If you do not want to occlude anything you can set the *size* variable to zero. # + id="41w0dsqYA3dF" colab_type="code" colab={} #The next two variables decide where the top left corner of the occlusion box #will be. Choosing x0 = 0 and y0 = 0 means that the top left corner of the #occlusion box will be in the top left corner of the image. x0 = 403 #along the horizontal axis y0 = 125 #along the vertical axis #This parameter decides the size of the box in terms of the number of pixels. #Set size = 0 for no occlusion size = 80 # + [markdown] id="stndAbVuH5cJ" colab_type="text" # With the following codeblock you can find out which image corresponds to what index. # Just uncomment and run the code below: # + id="QlDS7j7dGDt5" colab_type="code" colab={} # with open('data/occlusion_images/nameAndBB.csv') as f: # reader = csv.reader(f) # image_info = list(reader) # for i in range(1,30): # print(i) # image_path = "data/occlusion_images/" + image_info[i][0] # image = Image.open(image_path) # image_np = load_image_into_numpy_array(image) # plt.imshow(image_np) # plt.show() # + [markdown] id="AzcfLJxW6OtU" colab_type="text" # Run the cell below and the visualization of the detection will be printed. # + id="dyLOXCLq7Z3Q" colab_type="code" outputId="ca765548-df77-4cbd-83aa-c55df0b651ee" colab={"base_uri": "https://localhost:8080/", "height": 311} # %matplotlib inline from occlusion_help_funcs.help_funcs import compute_IoU import csv with open('data/occlusion_images/nameAndBB.csv') as f: reader = csv.reader(f) image_info = list(reader) x1 = x0 + size y1 = y0 + size #retrieve the ground truth bounding box gt_box = image_info[image_index] image_sel = gt_box[0] gt_box = gt_box[1:5] gt_box = [int(x) for x in gt_box] #construct image path PATH_TO_TEST_IMAGES_DIR = 'data/occlusion_images' image_path = os.path.join(PATH_TO_TEST_IMAGES_DIR, image_sel) # Size, in inches, of the output images. 
IMAGE_SIZE = (12, 8) #open image image = Image.open(image_path) image_np = load_image_into_numpy_array(image) #make area on image grey image_np[y0:y1, x0:x1, 0] = 163 image_np[y0:y1, x0:x1, 1] = 157 image_np[y0:y1, x0:x1, 2] = 152 #making plots nicer plt.rcParams["axes.grid"] = False #begin inference with detection_graph.as_default(): with tf.Session() as sess: #Necessary model preparation-------------- ops = tf.get_default_graph().get_operations() all_tensor_names = {output.name for op in ops for output in op.outputs} tensor_dict = {} for key in [ 'num_detections', 'detection_boxes', 'detection_scores', 'detection_classes' ]: tensor_name = key + ':0' if tensor_name in all_tensor_names: tensor_dict[key] = tf.get_default_graph().get_tensor_by_name( tensor_name) #end necessary model preparation------------ # Expand dimensions since the model expects images to have shape: [1, None, None, 3] image_np_expanded = np.expand_dims(image_np, axis=0) start = time.time() # Get handles to input and output tensors if 'detection_masks' in tensor_dict: # The following processing is only for single image detection_boxes = tf.squeeze(tensor_dict['detection_boxes'], [0]) detection_masks = tf.squeeze(tensor_dict['detection_masks'], [0]) # Reframe is required to translate mask from box coordinates to image coordinates and fit the image size. real_num_detection = tf.cast(tensor_dict['num_detections'][0], tf.int32) detection_boxes = tf.slice(detection_boxes, [0, 0], [real_num_detection, -1]) detection_masks = tf.slice(detection_masks, [0, 0, 0], [real_num_detection, -1, -1]) detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks( detection_masks, detection_boxes, image_np.shape[0], image_np.shape[1]) detection_masks_reframed = tf.cast( tf.greater(detection_masks_reframed, 0.5), tf.uint8) # Follow the convention by adding back the batch dimension tensor_dict['detection_masks'] = tf.expand_dims( detection_masks_reframed, 0) image_tensor = tf.get_default_graph().get_tensor_by_name('image_tensor:0') # Run inference output_dict = sess.run(tensor_dict, feed_dict={image_tensor: np.expand_dims(image_np, 0)}) # all outputs are float32 numpy arrays, so convert types as appropriate output_dict['num_detections'] = int(output_dict['num_detections'][0]) output_dict['detection_classes'] = output_dict[ 'detection_classes'][0].astype(np.uint8) output_dict['detection_boxes'] = output_dict['detection_boxes'][0] output_dict['detection_scores'] = output_dict['detection_scores'][0] if 'detection_masks' in output_dict: output_dict['detection_masks'] = output_dict['detection_masks'][0] end = time.time() #import pdb; pdb.set_trace() print(end-start) #Convert to relative ground truth box coordinates gt_box = [gt_box[0]/image_np.shape[0], gt_box[1]/image_np.shape[1], gt_box[2]/image_np.shape[0], gt_box[3]/image_np.shape[1]] #compute IoU highest = 0.2 detected_bool = False #checking the IoU for every detection for k in range(output_dict['num_detections']): pred_box = output_dict['detection_boxes'][k] IoU = compute_IoU(gt_box, pred_box) if IoU > highest: highest = IoU index = k detected_bool = True IoU = highest #Visualize the ground truth box. vis_util.draw_bounding_box_on_image_array( image_np, gt_box[0], gt_box[1], gt_box[2], gt_box[3], color='red', thickness=8 ) # Visualization of the results of a detection. 
vis_util.visualize_boxes_and_labels_on_image_array( image_np, output_dict['detection_boxes'], output_dict['detection_classes'], output_dict['detection_scores'], category_index, instance_masks=output_dict.get('detection_masks'), use_normalized_coordinates=True, min_score_thresh=.25, line_thickness=8) fig, ax = plt.subplots(ncols=1) im1 = ax.imshow(image_np) ax.set_ylabel('') if IoU > .2: ax.set_xlabel("IoU = {:.2f}".format(IoU)) # Turn off tick labels ax.set_yticklabels([]) ax.set_xticklabels([]) plt.show() if IoU > .2: print("The intersection over union is: ", IoU) print("Confidence is: ", output_dict['detection_scores'][index]) # + id="1Em3812E4rAQ" colab_type="code" colab={}
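
# As an illustration (an addition, and not the project's own `compute_IoU` implementation), the quantity used above to match a detection to the ground truth is the intersection over union of two boxes. A minimal sketch for boxes given as `[ymin, xmin, ymax, xmax]` in relative coordinates, the convention used by the TF Object Detection API:

# +
def iou_example(box_a, box_b):
    # Overlap rectangle (zero area if the boxes do not intersect).
    y0 = max(box_a[0], box_b[0])
    x0 = max(box_a[1], box_b[1])
    y1 = min(box_a[2], box_b[2])
    x1 = min(box_a[3], box_b[3])
    intersection = max(0.0, y1 - y0) * max(0.0, x1 - x0)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - intersection
    return intersection / union if union > 0 else 0.0

print(iou_example([0.1, 0.1, 0.5, 0.5], [0.3, 0.3, 0.7, 0.7]))  # ~0.143
# -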
colab_notebooks/Visualizing_Detections_With(out)_Occlusion.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Getting started with TensorFlow (Eager Mode)
#
# **Learning Objectives**
#   - Understand the difference between TensorFlow's two modes: Eager Execution and Graph Execution
#   - Practice defining and performing basic operations on constant Tensors
#   - Use TensorFlow's automatic differentiation capability
#
# ## Introduction
#
# **Eager Execution**
#
# Eager mode evaluates operations immediately and returns concrete values right away. To enable eager mode simply place `tf.enable_eager_execution()` at the top of your code. We recommend using eager execution when prototyping as it is intuitive, easier to debug, and requires less boilerplate code.
#
# **Graph Execution**
#
# Graph mode is TensorFlow's default execution mode (although it will change to eager with TF 2.0). In graph mode operations only produce a symbolic graph which doesn't get executed until run within the context of a tf.Session(). This style of coding is less intuitive and has more boilerplate; however, it can lead to performance optimizations and is particularly suited for distributing training across multiple devices. We recommend using graph (delayed) execution for performance-sensitive production code.

import tensorflow as tf
print(tf.__version__)

# ## Eager Execution

tf.enable_eager_execution()

# ### Adding Two Tensors
#
# The value of the tensor, as well as its shape and data type, are printed

a = tf.constant(value = [5, 3, 8], dtype = tf.int32)
b = tf.constant(value = [3, -1, 2], dtype = tf.int32)
c = tf.add(x = a, y = b)
print(c)

# #### Overloaded Operators
# We can also perform a `tf.add()` using the `+` operator. The `/,-,*` and `**` operators are similarly overloaded with the appropriate tensorflow operation.

c = a + b # this is equivalent to tf.add(a,b)
print(c)

# ### NumPy Interoperability
#
# In addition to native TF tensors, tensorflow operations can take native python types and NumPy arrays as operands.

# +
import numpy as np

a_py = [1,2] # native python list
b_py = [3,4] # native python list

a_np = np.array(object = [1,2]) # numpy array
b_np = np.array(object = [3,4]) # numpy array

a_tf = tf.constant(value = [1,2], dtype = tf.int32) # native TF tensor
b_tf = tf.constant(value = [3,4], dtype = tf.int32) # native TF tensor

for result in [tf.add(x = a_py, y = b_py), tf.add(x = a_np, y = b_np), tf.add(x = a_tf, y = b_tf)]:
    print("Type: {}, Value: {}".format(type(result), result))
# -

# You can convert a native TF tensor to a NumPy array using .numpy()

a_tf.numpy()

# ### Linear Regression
#
# Now let's use low-level TensorFlow operations to implement linear regression.
#
# Later in the course you'll see abstracted ways to do this using high-level TensorFlow.
# #### Toy Dataset # # We'll model the following function: # # \begin{equation} # y= 2x + 10 # \end{equation} X = tf.constant(value = [1,2,3,4,5,6,7,8,9,10], dtype = tf.float32) Y = 2 * X + 10 print("X:{}".format(X)) print("Y:{}".format(Y)) # #### Loss Function # # Using mean squared error, our loss function is: # \begin{equation} # MSE = \frac{1}{m}\sum_{i=1}^{m}(\hat{Y}_i-Y_i)^2 # \end{equation} # # $\hat{Y}$ represents the vector containing our model's predictions: # \begin{equation} # \hat{Y} = w_0X + w_1 # \end{equation} def loss_mse(X, Y, w0, w1): Y_hat = w0 * X + w1 return tf.reduce_mean(input_tensor = (Y_hat - Y)**2) # #### Gradient Function # # To use gradient descent we need to take the partial derivative of the loss function with respect to each of the weights. We could manually compute the derivatives, but with Tensorflow's automatic differentiation capabilities we don't have to! # # During gradient descent we think of the loss as a function of the parameters $w_0$ and $w_1$. Thus, we want to compute the partial derivative with respect to these variables. The `params=[2,3]` argument tells TensorFlow to only compute derivatives with respect to the 2nd and 3rd arguments to the loss function (counting from 0, so really the 3rd and 4th). # Counting from 0, the 2nd and 3rd parameter to the loss function are our weights grad_f = tf.contrib.eager.gradients_function(f = loss_mse, params=[2,3]) # #### Training Loop # # Here we have a very simple training loop that converges. Note we are ignoring best practices like batching, creating a separate test set, and random weight initialization for the sake of simplicity. # + STEPS = 1000 LEARNING_RATE = .02 # Initialize weights w0 = tf.constant(value = 0.0, dtype = tf.float32) w1 = tf.constant(value = 0.0, dtype = tf.float32) for step in range(STEPS): #1. Calculate gradients d_w0, d_w1 = grad_f(X, Y, w0, w1) #2. Update weights w0 = w0 - d_w0 * LEARNING_RATE w1 = w1 - d_w1 * LEARNING_RATE #3. Periodically print MSE if step % 100 == 0: print("STEP: {} MSE: {}".format(step, loss_mse(X, Y, w0, w1))) # Print final MSE and weights print("STEP: {} MSE: {}".format(STEPS,loss_mse(X, Y, w0, w1))) print("w0:{}".format(round(float(w0), 4))) print("w1:{}".format(round(float(w1), 4))) # - # ## Bonus # Try modelling a non-linear function such as: $y=xe^{-x^2}$ # + X = tf.constant(value = np.linspace(0,2,1000), dtype = tf.float32) Y = X*np.exp(-X**2) * X from matplotlib import pyplot as plt # %matplotlib inline plt.plot(X, Y) # + def make_features(X): features = [X] features.append(tf.ones_like(X)) # Bias. features.append(tf.square(X)) features.append(tf.sqrt(X)) features.append(tf.exp(X)) return tf.stack(features, axis=1) def make_weights(n_weights): W = [tf.constant(value = 0.0, dtype = tf.float32) for _ in range(n_weights)] return tf.expand_dims(tf.stack(W),-1) def predict(X, W): Y_hat = tf.matmul(X, W) return tf.squeeze(Y_hat, axis=-1) def loss_mse(X, Y, W): Y_hat = predict(X, W) return tf.reduce_mean(input_tensor = (Y_hat - Y)**2) X = tf.constant(value = np.linspace(0,2,1000), dtype = tf.float32) Y = np.exp(-X**2) * X grad_f = tf.contrib.eager.gradients_function(f = loss_mse, params=[2]) # + STEPS = 2000 LEARNING_RATE = .02 # Weights/features. Xf = make_features(X) # Xf = Xf[:,0:2] # Linear features only. W = make_weights(Xf.get_shape()[1].value) # For plotting steps = [] losses = [] plt.figure() for step in range(STEPS): #1. Calculate gradients dW = grad_f(Xf, Y, W)[0] #2. Update weights W -= dW * LEARNING_RATE #3. 
Periodically print MSE if step % 100 == 0: loss = loss_mse(Xf, Y, W) steps.append(step) losses.append(loss) plt.clf() plt.plot(steps, losses) # Print final MSE and weights print("STEP: {} MSE: {}".format(STEPS,loss_mse(Xf, Y, W))) # Plot results plt.figure() plt.plot(X, Y, label='actual') plt.plot(X, predict(Xf, W), label='predicted') plt.legend() # - # Copyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
courses/machine_learning/deepdive/02_tensorflow/a_tfstart_eager.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # <font color='darkblue'>What is a function?</font> # <b>A function is generally known as a mathematical object, although the concept is also ubiquitous in everyday life</b>. Unfortunately, in everyday life, we often confuse functions and effects. And what is even more unfortunate is that we also make this mistake when working with many programming languages. # # ## <font color='darkgreen'>Functions in the real world</font> # In the real world, a function is primarily a mathematic concept. It’s a relation between a source set, called the function domain, to a target set, called the function codomain. The domain and the codomain need not be distinct. A function can have the same set of integer numbers for its domain and its codomain, for example. # # <b><font size='3'>What makes a relation between two sets a function</font></b><br/> # To be a function, a relation must fulfill one condition: all elements of the domain must have one and only one corresponding element in the codomain, as shown in below figure: # <img src='https://2.bp.blogspot.com/-oSfVsuKVXcE/Wfm5yJnJhYI/AAAAAAAAXCA/Gq5Q_jjdK9wik1LJjYoy3f1ALYomLrcYgCLcBGAs/s1600/4070_f2-1.PNG'/><br/> # # This has some interesting implications: # * There cannot exist elements in the domain with no corresponding value in the codomain. # * There cannot exist two elements in the codomain corresponding to the same element of the domain. # * There may be elements in the codomain with no corresponding element in the source set. # * There may be elements in the codomain with more than one corresponding element in the source set. # * The set of elements of the codomain that have a corresponding element in the domain is called the image of the function. # # <b><font size='3'>Partial functions</font></b><br/> # A relation that isn’t defined for all elements of the domain but that fulfills the rest of the requirement (<font color='brown'>no element of the domain can have a relationship with more than one element of the codomain</font>) is often called a <b><font color='darkblue'>partial function</font></b>. The relation predecessor(x) is a partial function on <i>N</i> (<font color='brown'>the set of positive integers plus 0</font>), but it’s a total function on N*, which is the set of positive integers without 0, and its codomain is N. # # Partial functions are important in programming because many bugs are the result of using a partial function as if it were a total one. For example, the relation <font color='blue'>f(x) = 1/x</font> is a partial function from N to Q (<font color='brown'>the rational numbers</font>) because it isn’t defined for 0. It’s a total function from N* to Q, but it’s also a total function from N to (<font color='brown'>Q plus error</font>). <b>By adding an element to the codomain</b> (<font color='brown'>the error condition</font>)<b>, you can transform the partial function into a total one.</b> But to do this, the function needs a way to return an error. Can you see an analogy with computer programs? You’ll see that turning partial functions into total ones is an important part of functional programming. # # <b><font size='3'>Function composition </font></b><br/> # Functions are building blocks that can be composed to build other functions. 
The composition of functions <i>f</i> and <i>g</i> is noted as <b><font color='blue'>f ˚ g</font></b>, which reads as <b>f round g</b>. If f(x) = x + 2 and g(x) = x * 2, then: # ```python # f ˚ g (x) = f(g(x)) = f(x * 2) = (x * 2) + 2 # ``` # # Note that the two notations f ˚ g (x) and f(g(x)) are equivalent. But writing a composition as f(g(x)) implies using x as a placeholder for the argument. Using the f ˚ g notation, you can express a function composition without using this placeholder. # # If you apply this function to 5, you’ll get the following: # ```python # f ˚ g (5) = f(g(5)) = f(5 * 2) = 10 + 2 = 12 # ``` # # It’s interesting to note that f ˚ g is generally different from g ˚ f, although they may sometimes be equivalent. For example: # ```python # g ˚ f (5) = g(f(5)) = g(5 + 2) = 7 * 2 = 14 # ``` # # <b><font size='3'>Functions of several arguments</font></b><br/> # So far, we’ve talked only about functions of one argument. What about functions of several arguments? Simply said, there’s no such thing as a function of several arguments. Remember the definition? <b>A function is a relation between a source set and a target set. It isn’t a relation between two or more source sets and a target set. A function can’t have several arguments. But the product of two sets is itself a set, so a function from such a product of sets into a set may appear to be a function of several arguments</b>. Let’s consider the following function: # ```python # f(x, y) = x + y # ``` # This may be a relation between N x N and N, in which case, it’s a function. But it has only one argument, which is an element of N x N. N x N is the set of all possible pairs of integers. An element of this set is a pair of integers, and a pair is a special case of the more general tuple concept used to represent combinations of several elements. A pair is a tuple of two elements. # Tuples are noted between parentheses, so (3, 5) is a tuple and an element of N x N. The function <i>f</i> can be applied to this tuple: # ```python # f((3, 5)) = 3 + 5 = 8 # ``` # # In such a case, you may, by convention, simplify writing by removing one set of parentheses: # ```python # f(3, 5) = 3 + 5 = 8 # ``` # # Nevertheless, it’s still a function of one tuple, and not a function of two arguments. # # <b><font size='3'>Function currying</font></b><br/> # Functions of tuples can be thought of differently. The function <font color='blue'>f(3, 5)</font> might be considered as a function from N to a set of functions of N. So the previous example could be rewritten as:<br/> # ```python # f(x)(y) = g(y) # ``` # where # ```python # g(y) = x + y # ``` # In such a case, you can write # ```python # f(x) = g # ``` # which means that the result of applying the function <i>f</i> to the argument x is a new function <i>g</i>. Applying this <i>g</i> function to y gives the following: # ```python # g(y) = x + y # ``` # When applying <i>g</i>, x is no longer a variable. It doesn’t depend on the argument or on anything else. It’s a constant. If you apply this to (3, 5), you get the following: # ```python # f(3)(5) = g(5) = 3 + 5 = 8 # ``` # <b>The only new thing here is that the codomain of <i>f</i> is a set of functions instead of a set of numbers. The result of applying f to an integer is a function. The result of applying this function to an integer is an integer.</b> <font color='blue'>f(x)(y)</font> is the curried form of the function <font color='blue'>f(x, y)</font>. 
Applying this transformation to a function of a tuple (<font color='brown'>which you can call a function of several arguments if you prefer</font>) is called <font color='darkblue'><b>currying</b></font>, after the mathematician Haskell Curry (<font color='brown'>although he wasn’t the inventor of this transformation</font>). # # <b><font size='3'>Partially applied functions</font></b><br/> # The curried form of the addition function may not seem natural, and you might wonder if it corresponds to something in the real world. After all, <b>with the curried version, you’re considering both arguments separately. One of the arguments is considered first, and applying the function to it gives you a new function</b>. Is this new function useful by itself, or is it simply a step in the global calculation? # # In the case of an addition, it doesn’t seem useful. And by the way, you could start with either of the two arguments and it would make no difference. The intermediate function would be different, but not the end result. Now consider a new function of a pair of values: # ```python # f(rate, price) = price / 100 * (100 + rate) # ``` # # That function seems to be equivalent to this: # ```python # g(price, rate) = price / 100 * (100 + rate) # ``` # # Let’s now consider the curried versions of these two functions: # ```python # f(rate)(price) # g(price)(rate) # ``` # # You know that <i>f</i> and <i>g</i> are functions. But what are <font color='blue'>f(rate)</font> and <font color='blue'>g(price)</font>? Yes, for sure, they’re the results of applying <i>f</i> to rate and <i>g</i> to price. But what are the types of these results? <font color='blue'>f(rate)</font> is a function of a price to a price. If rate = 9, this function applies a tax of 9% to a price, giving a new price. You could call the resulting function <font color='blue'>apply9-percentTax(price)</font>, and it would probably be a useful tool because the tax rate doesn’t change often. # # On the other hand, <font color='blue'>g(price)</font> is a function of a rate to a price. If the price is $100, it gives a new function applying a price of $100 to a variable tax. What could you call this function? If you can’t think of a meaningful name, that usually means that it’s useless, though this depends on the problem you have to solve. Functions like <font color='blue'>f(rate)</font> and <font color='blue'>g(price)</font> are sometimes called <font color='darkblue'><b>partially applied functions</b></font>, in reference to the forms <font color='blue'>f(rate, price)</font> and <font color='blue'>g(price, rate)</font>. <b>Partially applying functions can have huge consequences regarding argument evaluation.</b> We’ll come back to this subject in a later section. # # <b><font size='3'>Functions have no effects</font></b><br/> # Remember that pure functions only return a value and do nothing else. They don’t mutate any element of the outside world (<font color='brown'>with outside being relative to the function itself</font>), they don’t mutate their arguments, and they don’t explode (<font color='brown'>or throw an exception, or anything else</font>) <b>if an error occurs. They can return an exception or anything else, such as an error message. 
But they must return it, not throw it, nor log it, nor print it.</b> # # # <font color='darkblue'>Functions in Python </font> # # ## <font color='darkgreen'>Functional methods </font> # A method can be functional if it respects the <b>requirements of a pure function:</b> # * It must not mutate anything outside the function. No internal mutation may be visible from the outside. # * It must not mutate its argument. # * It must not throw errors or exceptions. # * It must always return a value. # * When called with the same argument, it must always return the same result. # # ## <font color='darkgreen'>Composing functions</font> # If you think about functions as methods, composing them seems simple: # + def square(x): return x * x def triple(x): return x * 3 print("Composing example: square(triple(2)) = {}".format(square(triple(2)))) # - # But this isn’t function composition. In this example, you’re composing function applications. <b><a href='https://en.wikipedia.org/wiki/Function_composition'>Function composition</a> is a binary operation on functions, just as addition is a binary operation on numbers</b>. So you can compose functions programmatically, using a method: # + def compose_ex(f1, f2): def cmp(x): return f1(f2(x)) return cmp print("compose_ex(square, triple)(2)={}".format(compose_ex(square, triple)(2))) # - # Now you can start seeing how powerful this concept is! # # <b><font size='3'>Problem with function compositions</font></b><br/> # <b>In imperative programming, each function is evaluated before the result is passed as the input of the next function. But in functional programming, composing functions means building the resulting function without evaluating anything.</b> Composing functions is powerful because functions can be composed without being evaluated. But as a consequence, <font color='red'>applying the composed function results in numerous embedded method calls that will eventually overflow the stack</font>. This can be demonstrated with a simple example (<font color='brown'>using lambdas, which will be introduced in the next section</font>): # + def addOne(x): return x + 1 addN = compose_ex(addOne, addOne) fnum = 1000 for i in range(fnum): addN = compose_ex(addN, addOne) print('addN(1) = {}'.format(addN(1))) # - # This program will overflow the stack when fnum is around 1000. Hopefully you won’t usually compose several thousand functions, but you should be aware of this. # # # <font color='darkblue'>Advanced function features</font> # You’ve seen how to create apply and compose functions. You’ve also learned that functions can be represented by methods or by objects. But you haven’t answered a fundamental question: why do you need function objects? Couldn’t you simply use methods? Before answering this question, you have to consider the problem of the functional representation of multiargument methods. # # ## <font color='darkgreen'>Applying curried functions</font> # You’ve seen how to write curried function types and how to implement them. But how do you apply them? Well, just like any function. You apply the function to the first argument, and then apply the result to the next argument, and so on until the last one. For example, you can apply the add function to 3 and 5: # + import functools def addTwo(x, y): return x + y addTenWith = functools.partial(addTwo, x = 10) print("addTenWith(5) = {}".format(addTenWith(y=5))) # - # Here, we leverage package <b><a href='https://docs.python.org/2/library/functools.html'>functools</a></b> to carry out the curring operation. 
It would be great if you could apply a function just by writing its name followed by its argument. It would allow coding, as in Scala: # ```scala # addTwo(10)(5) # ``` # # ## <font color='darkgreen'>Higher-order functions</font> # We wrote a method to compose functions before. That method was a functional one, taking two functions as its arguments and returning a function. But instead of using a method, you could use a function! This special kind of function, <b>taking functions as its arguments and returning functions, is called a <font color='darkblue'>higher-order function</font></b> (<font color='brown'>HOF</font>). # # Below we introduce the higher-order function <font color='blue'>compose2</font>, which composes <font color='blue'>f(g(x))</font>, and <font color='blue'>andThen</font>, which composes in the opposite direction, <font color='blue'>g(f(x))</font> (<font color='brown'>here f=square; g=triple</font>):
# +
from fpu.fp import compose2, andThen

triple_and_square = compose2(square, triple)
square_andThen_triple = andThen(square, triple)

print("triple_and_square(2) = {}".format(triple_and_square(2)))          # (2 * 3)^2 = 36
print("square_andThen_triple(2) = {}".format(square_andThen_triple(2)))  # 2^2 * 3 = 12
# -
# ## <font color='darkgreen'>Closures</font> # You’ve seen that pure functions must not depend on anything other than their arguments to evaluate their return values. Yet in practice methods may access static members of other classes, and Python functions may read variables from enclosing scopes. I’ve said that functional methods are methods that respect referential transparency, which means they have no observable effects besides returning a value. The same is true for functions: <b>functions are pure if they don’t have observable side effects.</b> # # <b><a href='https://en.wikipedia.org/wiki/Closure_(computer_programming)'>Closures</a> are compatible with pure functions if you treat the captured variables as additional implicit arguments.</b> A simple example:
# +
def addN(n):
    def add(x):
        return x + n
    return add

add5 = addN(5)
print("add5(10) = {}".format(add5(10)))
# -
# The function <font color='blue'>addN</font> returns a closure that binds the free variable <i>n</i>. In the example above, <font color='blue'>add5</font> is bound to <font color='blue'>n=5</font>. For more about Python closures, see this post: <a href='http://www.codedata.com.tw/java/understanding-lambda-closure-3-python-support/'>Understanding Lambda/Closure (3): Python support for Lambda/Closure</a>.
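# Returning to the stack-overflow problem from the composition section above: if the functions are stored in a tuple and applied one at a time in a loop, the call depth stays constant no matter how many functions are composed. The helper below is a minimal sketch (it is not part of `fpu.fp`); note that `compose_all` itself returns a closure over the tuple of functions, which ties back to the closures just discussed.
# +
def compose_all(*functions):
    """Compose functions right to left without nesting the calls."""
    def composed(x):
        # Apply one function at a time, so the call depth stays constant
        # regardless of how many functions were composed.
        for f in reversed(functions):
            x = f(x)
        return x
    return composed

def add_one(x):
    return x + 1

add_1001 = compose_all(*([add_one] * 1001))
print("add_1001(1) = {}".format(add_1001(1)))   # 1002, with no RecursionError
# -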
Ch2_UsingFunction.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # MLG: Lab 5 (Part 2) # # ## Exercise 3: Self-organized representation of a collection of images # # ### Dataset information # # The Wang image database is a database of images grouped by class. In this dataset, we downloaded only the 1000 images. You can download them on this website: http://wang.ist.psu.edu/docs/home.shtml more precisely at this address: http://wang.ist.psu.edu/~jwang/test1.tar # # For each class we have 100 corresponding images: # - 0-100: Africans # - 100-200: Beaches # - 200-300: Monuments # - etc... # # All the images in this dataset are 250x166 pixels or 166x250 pixels. import numpy as np import matplotlib.pylab as pl import KohonenUtils import WangImageUtilities # %matplotlib inline extractor = WangImageUtilities.ImageFeatureExtractor('Wang_Data') # Here we load images from 300-399 and from 500-599 extractor.load_images(list_indices=list(np.arange(300, 400)) + list(np.arange(500, 600))) # ### Dataset Visualization pl.figure(figsize=(10, 20)) pl.subplot(121) pl.imshow(extractor.images[2]) pl.axis('off') pl.subplot(122) pl.imshow(extractor.images[101]) _ = pl.axis('off') # ### Clustering with SOM method = 1 if method == 1: histograms = extractor.extract_histogram() elif method == 2: histograms = extractor.extract_hue_histogram() elif method == 3: histograms = extractor.extract_color_histogram() else: print('Implement your own method for extracting features if you like!') # + kmap = KohonenUtils.KohonenMap(side_rows=8, side_cols=8, size_vector=histograms.shape[1]) n_iter = 5 learning_rate = KohonenUtils.ExponentialTimeseries(1, 0.05, n_iter*histograms.shape[0]) neighborhood_size = KohonenUtils.ExponentialTimeseries(2./3 * kmap._map.shape[1], 1, n_iter*histograms.shape[0]) names = [] for index in extractor.image_indices: names.append(str(index)) kmap.train(histograms, names, n_iter, learning_rate, neighborhood_size) # - kmap.plot_umatrix(plot_empty=True, plot_names=True) # To simplify analysis, we can write the images to HTML. This is easier to see and analyze. # Writes a 'som.html' file # You can visualize the results and click on a neuron to see other images assigned to this neuron extractor.to_html('som.html', kmap) # <h3>REPORT (date of submission: 27.5 before 23:55)</h3> # # 1. Explain the three different methods we provided for extracting features. What do you understand about them (input, output), how do they work ?</p> # # 2. Try the SOM with several (minimum 3) different sets of images (always 100 images per class and at least two classes). You can change the size of the Self-Organizing Map as well as its parameters. # <ul> # <li> Note that we provided three methods for extracting features: for at least one of the test you do try with all three methods and compare the results. # <li> Include for each experiment an U-Matrix (with images - print screen of html)) that you find interesting. Explain why you find it interesting (what are the input images, with which features you trained your Self-Organizing Map, with which parameters, and how it is reflected in the results)... # </ul> # </p>
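# For orientation, the kind of feature `extract_histogram` produces can be sketched with plain NumPy; the actual `WangImageUtilities` implementation may use a different bin count, colour handling or normalization, so treat the details below as assumptions rather than a description of the provided code.
# +
import numpy as np

def grey_histogram_feature(image, n_bins=64):
    """Normalized grey-level histogram of an image as a 1-D feature vector."""
    grey = image.mean(axis=2) if image.ndim == 3 else image   # naive RGB -> grey conversion
    hist, _ = np.histogram(grey, bins=n_bins, range=(0, 255))
    return hist / hist.sum()   # normalize so differently sized images are comparable

# Example on one of the Wang images loaded above:
# feature = grey_histogram_feature(extractor.images[2])
# -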
src/SOM_part2.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # GPU-accelerated interactive visualization of single cells with RAPIDS, Scanpy and Plotly Dash # Copyright (c) 2020, NVIDIA CORPORATION. # # Licensed under the Apache License, Version 2.0 (the "License") you may not use this file except in compliance with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. # In this notebook, we cluster cells based on a single-cell RNA-seq count matrix, and produce an interactive visualization of the clustered cells that allows for further analysis of the data in a browser window. # For demonstration purposes, we use a dataset of ~70,000 human lung cells from Travaglini et al. 2020 (https://www.biorxiv.org/content/10.1101/742320v2) and label cells using the ACE2, TMPRSS2, and EPCAM marker genes. See the README for instructions to download this dataset. # ## Import requirements # + import numpy as np import scanpy as sc import anndata import sys import time import cudf import cupy as cp from cuml.decomposition import PCA from cuml.manifold import TSNE from cuml.cluster import KMeans import rapids_scanpy_funcs import warnings warnings.filterwarnings('ignore', 'Expected ') # - # We use the RAPIDS memory manager on the GPU to control how memory is allocated. # + import rmm rmm.reinitialize( managed_memory=True, # Allows oversubscription pool_allocator=False, # default is False devices=0, # GPU device IDs to register. By default registers only GPU 0. ) cp.cuda.set_allocator(rmm.rmm_cupy_allocator) # - # ## Input data # In the cell below, we provide the path to the `.h5ad` file containing the count matrix to analyze. Please see the README for instructions on how to download the dataset we use here. # # We recommend saving count matrices in the sparse .h5ad format as it is much faster to load than a dense CSV file. To run this notebook using your own dataset, please see the README for instructions to convert your own count matrix into this format. Then, replace the path in the cell below with the path to your generated `.h5ad` file. input_file = "../data/krasnow_hlca_10x_UMIs.sparse.h5ad" # ## Set parameters # + # marker genes RIBO_GENE_PREFIX = "RPS" # Prefix for ribosomal genes to regress out markers = ["ACE2", "TMPRSS2", "EPCAM"] # Marker genes for visualization # filtering cells min_genes_per_cell = 200 # Filter out cells with fewer genes than this expressed max_genes_per_cell = 6000 # Filter out cells with more genes than this expressed # filtering genes n_top_genes = 5000 # Number of highly variable genes to retain # PCA n_components = 50 # Number of principal components to compute # KNN n_neighbors = 15 # Number of nearest neighbors for KNN graph knn_n_pcs = 50 # Number of principal components to use for finding nearest neighbors # UMAP umap_min_dist = 0.3 umap_spread = 1.0 # - # ## Load and Prepare Data # We load the sparse count matrix from an `h5ad` file using Scanpy. The sparse count matrix will then be placed on the GPU. 
# %%time adata = sc.read(input_file) adata = adata.T adata.shape # %%time genes = cudf.Series(adata.var_names) barcodes = cudf.Series(adata.obs_names) sparse_gpu_array = cp.sparse.csr_matrix(adata.X) # ## Preprocessing # ### Filter # We filter the count matrix to remove cells with an extreme number of genes expressed. # %%time sparse_gpu_array, barcodes = rapids_scanpy_funcs.filter_cells(sparse_gpu_array, min_genes=min_genes_per_cell, max_genes=max_genes_per_cell, barcodes=barcodes) # Some genes will now have zero expression in all cells. We filter out such genes. # %%time sparse_gpu_array, genes = rapids_scanpy_funcs.filter_genes(sparse_gpu_array, genes, min_cells=1) # The size of our count matrix is now reduced. sparse_gpu_array.shape # ### Normalize # We normalize the count matrix so that the total counts in each cell sum to 1e4. # %%time sparse_gpu_array = rapids_scanpy_funcs.normalize_total(sparse_gpu_array, target_sum=1e4) # Next, we log transform the count matrix. # %%time sparse_gpu_array = sparse_gpu_array.log1p() # ### Select Most Variable Genes # We convert the count matrix to an annData object. # %%time adata = anndata.AnnData(sparse_gpu_array.get()) adata.var_names = genes.to_pandas() # Using scanpy, we filter the count matrix to retain only the 5000 most variable genes. # %%time sc.pp.highly_variable_genes(adata, n_top_genes=n_top_genes, flavor="cell_ranger") adata = adata[:, adata.var.highly_variable] # ### Regress out confounding factors (number of counts, ribosomal gene expression) # We can now perform regression on the count matrix to correct for confounding factors - for example purposes, we use the number of counts and the expression of ribosomal genes. Many workflows use the expression of mitochondrial genes (named starting with `MT-`). # # Before regression, we save the 'raw' expression values of the ACE2 and TMPRSS2 genes to use for labeling cells afterward. We will also store the expression of an epithelial marker gene (EPCAM). # + # %%time tmp_norm = sparse_gpu_array.tocsc() raw_marker_expressions = {} for marker in markers: raw_marker_expressions[marker] = tmp_norm[:, genes[genes == marker].index[0]].todense().ravel() del tmp_norm # - # We now calculate the total counts and the percentage of ribosomal counts for each cell. # + # %%time genes = adata.var_names ribo_genes = adata.var_names.str.startswith(RIBO_GENE_PREFIX) n_counts = adata.X.sum(axis=1) percent_ribo = (adata.X[:,ribo_genes].sum(axis=1) / n_counts).ravel() n_counts = cp.array(n_counts).ravel() percent_ribo = cp.array(percent_ribo).ravel() # - # And perform regression: # %%time sparse_gpu_array = cp.sparse.csc_matrix(adata.X) sparse_gpu_array = rapids_scanpy_funcs.regress_out(sparse_gpu_array, n_counts, percent_ribo) # ### Scale # Finally, we scale the count matrix to obtain a z-score and apply a cutoff value of 10 standard deviations, obtaining the preprocessed count matrix. # %%time sparse_gpu_array = rapids_scanpy_funcs.scale(sparse_gpu_array, max_value=10) # ## Cluster & Visualize # We store the preprocessed count matrix as an AnnData object, which is currently in host memory. # We also add the barcodes of the filtered cells, and the expression levels of the marker genes, to the annData object. 
# + # %%time adata = anndata.AnnData(sparse_gpu_array.get()) adata.var_names = genes adata.obs_names = barcodes.to_pandas() for marker in markers: adata.obs[marker + "_raw"] = raw_marker_expressions[marker].get() # - # ### Reduce # We use PCA to reduce the dimensionality of the matrix to its top 50 principal components. # %%time adata.obsm["X_pca"] = PCA(n_components=n_components, output_type="numpy").fit_transform(adata.X) # ### UMAP + Louvain # We visualize the cells using the UMAP algorithm in Rapids. Before UMAP, we need to construct a k-nearest neighbors graph in which each cell is connected to its nearest neighbors. This can be done conveniently using rapids functionality already integrated into Scanpy. # # Note that Scanpy uses an approximation to the nearest neighbors on the CPU while the GPU version performs an exact search. While both methods are known to yield useful results, some differences in the resulting visualization and clusters can be observed. # %%time sc.pp.neighbors(adata, n_neighbors=n_neighbors, n_pcs=knn_n_pcs, method='rapids') # The UMAP function from Rapids is also integrated into Scanpy. # %%time sc.tl.umap(adata, min_dist=umap_min_dist, spread=umap_spread, method='rapids') # Finally, we use the Louvain algorithm for graph-based clustering, once again using the `rapids` option in Scanpy. # %%time sc.tl.louvain(adata, flavor='rapids') # We plot the cells using the UMAP visualization, using the Louvain clusters as labels. sc.pl.umap(adata, color=["louvain"]) # ## Defining re-clustering function for interactive visualization # As we have shown above, the speed of RAPIDS allows us to run steps like dimension reduction, clustering and visualization in seconds or even less. In the sections below, we create an interactive visualization that takes advantage of this speed by allowing users to cluster and analyze selected groups of cells at the click of a button. # First, we create a function named `re_cluster`. This function can be called on selected groups of cells. According to the function defined below, PCA, KNN, UMAP and Louvain clustering will be re-computed upon the selected cells. You can customize this function for your desired analysis. def re_cluster(adata): #### Function to repeat clustering and visualization on subsets of cells #### Runs PCA, KNN, UMAP and Louvain clustering on selected cells. adata.obsm["X_pca"] = PCA(n_components=n_components).fit_transform(adata.X).get() sc.pp.neighbors(adata, n_neighbors=n_neighbors, n_pcs=knn_n_pcs, method='rapids') sc.tl.umap(adata, min_dist=umap_min_dist, spread=umap_spread, method='rapids') sc.tl.louvain(adata, flavor='rapids') return adata # ## Creating an interactive visualization with Plotly Dash # <img src="https://github.com/avantikalal/rapids-single-cell-examples/blob/visualization/images/dashboard.png?raw=true" alt="Interactive Dashboard" width="400"/> # Below, we create the interactive visualization using the `adata` object and the re-clustering function defined above. To learn more about how this visualization is built, see `visualize.py`. # When you run the cell below, it returns a link. Click on this link to access the interactive visualization within your browser. # # Once opened, click the `Directions` button for instructions. # + import visualize import importlib importlib.reload(visualize) v = visualize.Visualization(adata, markers, re_cluster_callback=re_cluster) v.start('10.33.227.161') selected_cells = v.new_df # - # Within the dashboard, you can select cells using a variety of methods. 
You can then cluster, visualize and analyze the selected cells using the tools provided. Click on the `Directions` button for details. # # To export the selected cells and the results of your analysis back to the notebook, click the `Export to Dataframe` button. This sends the results of your analysis to this notebook and closes the interactive dashboard. # # See the next section for instructions on how to use the exported data. # ## Exporting a selection of cells from the dashboard # If you exported a selection of cells from the interactive visualization, your selection will be available here as a dataframe named `selected_cells`. The `labels` column of this dataframe contains the newly generated cluster labels assigned to these selected cells.

print(selected_cells.shape)

selected_cells.head()

# You can link the selected cells to the original `adata` object using the cell barcodes.

adata_selected_cells = adata[selected_cells.barcode.to_array(),:]

adata_selected_cells
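# As an alternative to indexing with the raw barcode array, the same subset can be taken with a boolean mask, which also makes it easy to attach the cluster labels computed inside the dashboard. This is a sketch that assumes `selected_cells` has the `barcode` and `labels` columns described above; the conversion to pandas is only needed if it is a cuDF dataframe.
# +
# Move the selection to pandas if it is a cuDF dataframe (the .to_array() call above suggests it is)
sel = selected_cells.to_pandas() if hasattr(selected_cells, "to_pandas") else selected_cells

# Boolean mask over all cells, True for the barcodes selected in the dashboard
mask = adata.obs_names.isin(sel["barcode"])
adata_selected_cells = adata[mask, :].copy()

# Carry over the cluster labels assigned inside the dashboard
labels = sel.set_index("barcode")["labels"]
adata_selected_cells.obs["dashboard_label"] = labels.reindex(adata_selected_cells.obs_names).values
# -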
notebooks/hlca_lung_gpu_analysis-visualization.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # <h1> # <center>Problema 4</center> # </h1> # # <center><a href="https://github.com/Luis2501"><NAME></a></center> # # <br> # # <center>Facultad de Ciencias Físico Matemáticas, Universidad Autonoma de Coahuila</center> # # <br> # # <center><b>E-mail:</b> <EMAIL></center> # __________________ # The total charge is obtained from the radial surface charge density, $\sigma$, over the cross-sectional area of the cylinder: # # $$q_{total}=\int_A \sigma(r)dA(r)$$ # # where $dA$ is the area element of the base of the cylinder: # # $$q_{total}=\int_A \sigma(r)(2\pi r dr)$$ # # We know that the density has the form # # $$\sigma(r)=\sigma_0 \left(1-\frac{r^2}{a^2} \right)$$ # # where # # $$\sigma_0 =1.3 \times 10^{-6} \; C/m^2 , \; \; a = 1 \times 10^{-3} \; m$$ # # and the electric field is # # $$\vec{E}=\frac{kq_{total}}{d^2}\hat r$$ # # with $d$ the distance from the surface to the point where the field is evaluated ($d\gt r$). # # a) Determine the charge distribution and the electric field (at distances $d \in \{ r+r/4,r+r/2\}$) as a function of the radius, using 3 integration methods. # # b) Obtain the analytic solution, compare the errors of the different methods and discuss the analysis. # ## Solution # ____________ # To begin, we import the required libraries.
# +
import plotly.graph_objects as go
import numpy as np
import sys

sys.path.append("../")

from PhysicsPy.Integration import *
# -
# We import the `Integration` module from `PhysicsPy`; the integration methods we will use are defined there. # # Now we create a `Carga` (charge) class that lets us build cylinders with different characteristics, so that we can compute their charge.
class Carga:

    def __init__(self, sigma_0, a):
        self.sigma_0, self.a = sigma_0, a

    def __call__(self, r):
        sigma_0, a = self.sigma_0, self.a
        return np.array(sigma_0*(1 - (r**2/a**2))*2*np.pi*r)
# We create the `Cilindro` (cylinder) with the conditions stated in the problem. # # Using the `Integration` module, we iterate over each method to obtain the different solutions.
# +
Cilindro = Carga(1.3e-6, 1e-3)

Methods = [Riemann, Trapeze, Midpoint, Simpson1_3, Simpson3_8]
Names = ["Riemman", "Trapeze", "Midpoint", "Simpson1_3", "Simpson3_8"]

Solutions = []

for class_name, name in zip(Methods, Names):

    Solucion = class_name(Cilindro)
    Solucion.Limits(0, Cilindro.a, 1e-7)
    Integral = Solucion.Solve()
    Solutions.append(Integral)

    print(name + " Integration: ", Integral)

    del Integral, Solucion

Solutions = np.array(Solutions)
# -
# We obtain the following results:
#
# ``` terminal
# Riemman Integration: 2.0420352044130184e-12
# Trapeze Integration: 2.0420352044130184e-12
# Midpoint Integration: 2.042035235043548e-12
# Simpson1_3 Integration: 2.042035224833363e-12
# Simpson3_8 Integration: 2.042035204411995e-12
# ```
#
# Something important to consider is how long each method takes: the Riemann integration becomes much slower as the step size $h$ is decreased.
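# That remark can be checked directly. The quick sketch below reuses the same `PhysicsPy.Integration` interface as the loop above (`Limits(lower, upper, step)` followed by `Solve()`) and simply wraps each solve in `time.perf_counter()`; the absolute timings will of course depend on the machine.
# +
import time

for class_name, name in zip(Methods, Names):
    solver = class_name(Cilindro)
    solver.Limits(0, Cilindro.a, 1e-7)

    start = time.perf_counter()
    solver.Solve()
    elapsed = time.perf_counter() - start

    print("{}: {:.3f} s".format(name, elapsed))
# -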
# From the given integral we have # # $$q_{total} = \int_{0} ^{a} \sigma_0 \left( 1 - \frac{r^2}{a^2} \right) 2 \pi r \; dr$$ # # so that # # $$q_{total} = \sigma_0 \int_{0} ^{a} \left( 2 \pi r - \frac{2 \pi r^3}{a^2} \right) \; dr $$ # # $$q_{total} = \sigma_0 \left[ \pi r^2 - \frac{\pi r^4}{2 a^2} \right]_{0} ^{a}$$ # # Therefore, the total charge is # # $$q_{total} = \sigma_{0} \left( \pi a^2 - \frac{\pi a^{2}}{2} \right) = \frac{\sigma_{0} \pi a^2}{2}$$ # # Substituting $a$ and $\sigma_0$, we obtain the total charge below.
# +
Q_total = (Cilindro.sigma_0*np.pi*(Cilindro.a**2))/2

print("Analytic solution: ", Q_total)
# -
# This lets us compare the errors.
# +
# Absolute and relative error
e_abs = Q_total - Solutions
e_r = (Q_total - Solutions)/Q_total

print("Method \t \t", "Absolute error \t\t", "Relative error \t\t", "Percentage error \n")

for i in range(len(Solutions)):
    print(Names[i], "\t", e_abs[i], "\t", e_r[i], "\t ", e_r[i]*100, "%")
# -
# <div style="text-align: justify"> # We can see that the error of every method is very small. However, considering the magnitude of the quantities involved, the Simpson $1/3$ method gave the best approximation here. # </div>
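# As an extra sanity check that does not depend on `PhysicsPy`, the same integral can be approximated with NumPy's trapezoidal rule; the result should agree with both the analytic value and the methods above. The number of sample points below is arbitrary.
# +
r = np.linspace(0, Cilindro.a, 100001)
q_numpy = np.trapz(Cilindro(r), r)   # Carga.__call__ already includes the 2*pi*r factor

print("np.trapz estimate:", q_numpy)
print("Analytic value:   ", Q_total)
print("Relative error:   ", abs(q_numpy - Q_total)/Q_total)
# -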
Problema 4/Problema 4.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Linear Regression with PyTorch # # #### Part 2 of "PyTorch: Zero to GANs" # # *This post is the second in a series of tutorials on building deep learning models with PyTorch, an open source neural networks library developed and maintained by Facebook. Check out the full series:* # # 1. [PyTorch Basics: Tensors & Gradients](https://jovian.ml/aakashns/01-pytorch-basics) # 2. [Linear Regression & Gradient Descent](https://jovian.ml/aakashns/02-linear-regression) # 3. [Image Classfication using Logistic Regression](https://jovian.ml/aakashns/03-logistic-regression) # 4. [Training Deep Neural Networks on a GPU](https://jovian.ml/aakashns/04-feedforward-nn) # 5. Convolutional Neural Networks, Regularization and ResNets (coming soon..) # 6. Generative Adverserial Networks (coming soon..) # # # <div height="315"> # <iframe width="560" height="315" src="https://www.youtube.com/embed/gERrXAk9h_A" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> # <div> # # Continuing where the [previous tutorial](https://jvn.io/aakashns/3143ceb92b4f4cbbb4f30e203580b77b) left off, we'll discuss one of the foundational algorithms of machine learning in this post: *Linear regression*. We'll create a model that predicts crop yields for apples and oranges (*target variables*) by looking at the average temperature, rainfall and humidity (*input variables or features*) in a region. Here's the training data: # # ![linear-regression-training-data](https://i.imgur.com/6Ujttb4.png) # # In a linear regression model, each target variable is estimated to be a weighted sum of the input variables, offset by some constant, known as a bias : # # ``` # yield_apple = w11 * temp + w12 * rainfall + w13 * humidity + b1 # yield_orange = w21 * temp + w22 * rainfall + w23 * humidity + b2 # ``` # # Visually, it means that the yield of apples is a linear or planar function of temperature, rainfall and humidity: # # ![linear-regression-graph](https://i.imgur.com/4DJ9f8X.png) # # The *learning* part of linear regression is to figure out a set of weights `w11, w12,... w23, b1 & b2` by looking at the training data, to make accurate predictions for new data (i.e. to predict the yields for apples and oranges in a new region using the average temperature, rainfall and humidity). This is done by adjusting the weights slightly many times to make better predictions, using an optimization technique called *gradient descent*. # ## System setup # # If you want to follow along and run the code as you read, you can run this notebook by clicking the 'Run' button at the top of this page. You can also clone this notebook hosted on [Jovian.ml](https://www.jovian.ml), install the required dependencies, and start Jupyter by running the following commands on the terminal: # # ```bash # pip install jovian --upgrade # Install the jovian library # jovian clone aakashns/02-linear-regression # Download notebook & dependencies # # cd 02-linear-regression # Enter the created directory # jovian install # Install the dependencies # conda activate 02-linear-regression # Activate virtual environment # jupyter notebook # Start Jupyter # ``` # # On older versions of conda, you might need to run `source activate 02-linear-regression` to activate the environment. 
For a more detailed explanation of the above steps, check out the *System setup* section in the [previous notebook](https://jovian.ml/aakashns/01-pytorch-basics). # We begin by importing Numpy and PyTorch: import numpy as np import torch # ## Training data # # The training data can be represented using 2 matrices: `inputs` and `targets`, each with one row per observation, and one column per variable. # Input (temp, rainfall, humidity) inputs = np.array([[73, 67, 43], [91, 88, 64], [87, 134, 58], [102, 43, 37], [69, 96, 70]], dtype='float32') # Targets (apples, oranges) targets = np.array([[56, 70], [81, 101], [119, 133], [22, 37], [103, 119]], dtype='float32') # We've separated the input and target variables, because we'll operate on them separately. Also, we've created numpy arrays, because this is typically how you would work with training data: read some CSV files as numpy arrays, do some processing, and then convert them to PyTorch tensors as follows: # Convert inputs and targets to tensors inputs = torch.from_numpy(inputs) targets = torch.from_numpy(targets) print(inputs) print(targets) # ## Linear regression model from scratch # # The weights and biases (`w11, w12,... w23, b1 & b2`) can also be represented as matrices, initialized as random values. The first row of `w` and the first element of `b` are used to predict the first target variable i.e. yield of apples, and similarly the second for oranges. # Weights and biases w = torch.randn(2, 3, requires_grad=True) b = torch.randn(2, requires_grad=True) print(w) print(b) # `torch.randn` creates a tensor with the given shape, with elements picked randomly from a [normal distribution](https://en.wikipedia.org/wiki/Normal_distribution) with mean 0 and standard deviation 1. # # Our *model* is simply a function that performs a matrix multiplication of the `inputs` and the weights `w` (transposed) and adds the bias `b` (replicated for each observation). # # ![matrix-mult](https://i.imgur.com/WGXLFvA.png) # # We can define the model as follows: def model(x): return x @ w.t() + b # `@` represents matrix multiplication in PyTorch, and the `.t` method returns the transpose of a tensor. # # The matrix obtained by passing the input data into the model is a set of predictions for the target variables. # Generate predictions preds = model(inputs) print(preds) # Let's compare the predictions of our model with the actual targets. # Compare with targets print(targets) # You can see that there's a huge difference between the predictions of our model, and the actual values of the target variables. Obviously, this is because we've initialized our model with random weights and biases, and we can't expect it to *just work*. # ## Loss function # # Before we improve our model, we need a way to evaluate how well our model is performing. We can compare the model's predictions with the actual targets, using the following method: # # * Calculate the difference between the two matrices (`preds` and `targets`). # * Square all elements of the difference matrix to remove negative values. # * Calculate the average of the elements in the resulting matrix. # # The result is a single number, known as the **mean squared error** (MSE). # MSE loss def mse(t1, t2): diff = t1 - t2 return torch.sum(diff * diff) / diff.numel() # `torch.sum` returns the sum of all the elements in a tensor, and the `.numel` method returns the number of elements in a tensor. Let's compute the mean squared error for the current predictions of our model. 
# Compute loss loss = mse(preds, targets) print(loss) # Here’s how we can interpret the result: *On average, each element in the prediction differs from the actual target by about 138 (square root of the loss 19044)*. And that’s pretty bad, considering the numbers we are trying to predict are themselves in the range 50–200. Also, the result is called the *loss*, because it indicates how bad the model is at predicting the target variables. Lower the loss, better the model. # ## Compute gradients # # With PyTorch, we can automatically compute the gradient or derivative of the loss w.r.t. to the weights and biases, because they have `requires_grad` set to `True`. # Compute gradients loss.backward() # The gradients are stored in the `.grad` property of the respective tensors. Note that the derivative of the loss w.r.t. the weights matrix is itself a matrix, with the same dimensions. # Gradients for weights print(w) print(w.grad) # The loss is a [quadratic function](https://en.wikipedia.org/wiki/Quadratic_function) of our weights and biases, and our objective is to find the set of weights where the loss is the lowest. If we plot a graph of the loss w.r.t any individual weight or bias element, it will look like the figure shown below. A key insight from calculus is that the gradient indicates the rate of change of the loss, or the [slope](https://en.wikipedia.org/wiki/Slope) of the loss function w.r.t. the weights and biases. # # If a gradient element is **positive**: # * **increasing** the element's value slightly will **increase** the loss. # * **decreasing** the element's value slightly will **decrease** the loss # # ![postive-gradient](https://i.imgur.com/hFYoVgU.png) # # If a gradient element is **negative**: # * **increasing** the element's value slightly will **decrease** the loss. # * **decreasing** the element's value slightly will **increase** the loss. # # ![negative=gradient](https://i.imgur.com/w3Wii7C.png) # # The increase or decrease in loss by changing a weight element is proportional to the value of the gradient of the loss w.r.t. that element. This forms the basis for the optimization algorithm that we'll use to improve our model. # Before we proceed, we reset the gradients to zero by calling `.zero_()` method. We need to do this, because PyTorch accumulates, gradients i.e. the next time we call `.backward` on the loss, the new gradient values will get added to the existing gradient values, which may lead to unexpected results. w.grad.zero_() b.grad.zero_() print(w.grad) print(b.grad) # ## Adjust weights and biases using gradient descent # # We'll reduce the loss and improve our model using the gradient descent optimization algorithm, which has the following steps: # # 1. Generate predictions # # 2. Calculate the loss # # 3. Compute gradients w.r.t the weights and biases # # 4. Adjust the weights by subtracting a small quantity proportional to the gradient # # 5. Reset the gradients to zero # # Let's implement the above step by step. # Generate predictions preds = model(inputs) print(preds) # Note that the predictions are same as before, since we haven't made any changes to our model. The same holds true for the loss and gradients. # Calculate the loss loss = mse(preds, targets) print(loss) # Compute gradients loss.backward() print(w.grad) print(b.grad) # Finally, we update the weights and biases using the gradients computed above. 
# Adjust weights & reset gradients with torch.no_grad(): w -= w.grad * 1e-5 b -= b.grad * 1e-5 w.grad.zero_() b.grad.zero_() # A few things to note above: # # * We use `torch.no_grad` to indicate to PyTorch that we shouldn't track, calculate or modify gradients while updating the weights and biases.  # # * We multiply the gradients with a really small number (`10^-5` in this case), to ensure that we don't modify the weights by a really large amount, since we only want to take a small step in the downhill direction of the gradient. This number is called the *learning rate* of the algorithm.  # # * After we have updated the weights, we reset the gradients back to zero, to avoid affecting any future computations. # Let's take a look at the new weights and biases. print(w) print(b) # With the new weights and biases, the model should have lower loss. # Calculate loss preds = model(inputs) loss = mse(preds, targets) print(loss) # We have already achieved a significant reduction in the loss, simply by adjusting the weights and biases slightly using gradient descent. # ## Train for multiple epochs # # To reduce the loss further, we can repeat the process of adjusting the weights and biases using the gradients multiple times. Each iteration is called an epoch. Let's train the model for 100 epochs. # Train for 100 epochs for i in range(100): preds = model(inputs) loss = mse(preds, targets) loss.backward() with torch.no_grad(): w -= w.grad * 1e-5 b -= b.grad * 1e-5 w.grad.zero_() b.grad.zero_() # Once again, let's verify that the loss is now lower: # Calculate loss preds = model(inputs) loss = mse(preds, targets) print(loss) # As you can see, the loss is now much lower than what we started out with. Let's look at the model's predictions and compare them with the targets. # Predictions preds # Targets targets # The prediction are now quite close to the target variables, and we can get even better results by training for a few more epochs. # # At this point, we can save our notebook and upload it to [Jovian.ml](https://www.jovian.ml) for future reference and sharing. # !pip install jovian --upgrade -q import jovian jovian.commit() # `jovian.commit` uploads the notebook to [Jovian.ml](https://www.jovian.ml), captures the Python environment and creates a sharable link for the notebook. You can use this link to share your work and let anyone reproduce it easily with the `jovian clone` command. Jovian also includes a powerful commenting interface, so you (and others) can discuss & comment on specific parts of your notebook: # # ![commenting on jovian](https://cdn-images-1.medium.com/max/1600/1*b4snnr_5Ve5Nyq60iDtuuw.png) # ## Linear regression using PyTorch built-ins # # The model and training process above were implemented using basic matrix operations. But since this such a common pattern , PyTorch has several built-in functions and classes to make it easy to create and train models. # # Let's begin by importing the `torch.nn` package from PyTorch, which contains utility classes for building neural networks. import torch.nn as nn # As before, we represent the inputs and targets and matrices. 
# + # Input (temp, rainfall, humidity) inputs = np.array([[73, 67, 43], [91, 88, 64], [87, 134, 58], [102, 43, 37], [69, 96, 70], [73, 67, 43], [91, 88, 64], [87, 134, 58], [102, 43, 37], [69, 96, 70], [73, 67, 43], [91, 88, 64], [87, 134, 58], [102, 43, 37], [69, 96, 70]], dtype='float32') # Targets (apples, oranges) targets = np.array([[56, 70], [81, 101], [119, 133], [22, 37], [103, 119], [56, 70], [81, 101], [119, 133], [22, 37], [103, 119], [56, 70], [81, 101], [119, 133], [22, 37], [103, 119]], dtype='float32') inputs = torch.from_numpy(inputs) targets = torch.from_numpy(targets) # - # We are using 15 training examples this time, to illustrate how to work with large datasets in small batches. # ## Dataset and DataLoader # # We'll create a `TensorDataset`, which allows access to rows from `inputs` and `targets` as tuples, and provides standard APIs for working with many different types of datasets in PyTorch. from torch.utils.data import TensorDataset # Define dataset train_ds = TensorDataset(inputs, targets) train_ds[0:3] # The `TensorDataset` allows us to access a small section of the training data using the array indexing notation (`[0:3]` in the above code). It returns a tuple (or pair), in which the first element contains the input variables for the selected rows, and the second contains the targets. # We'll also create a `DataLoader`, which can split the data into batches of a predefined size while training. It also provides other utilities like shuffling and random sampling of the data. from torch.utils.data import DataLoader # Define data loader batch_size = 5 train_dl = DataLoader(train_ds, batch_size, shuffle=True) # The data loader is typically used in a `for-in` loop. Let's look at an example. for xb, yb in train_dl: print(xb) print(yb) break # In each iteration, the data loader returns one batch of data, with the given batch size. If `shuffle` is set to `True`, it shuffles the training data before creating batches. Shuffling helps randomize the input to the optimization algorithm, which can lead to faster reduction in the loss. # ## nn.Linear # # Instead of initializing the weights & biases manually, we can define the model using the `nn.Linear` class from PyTorch, which does it automatically. # Define model model = nn.Linear(3, 2) print(model.weight) print(model.bias) # PyTorch models also have a helpful `.parameters` method, which returns a list containing all the weights and bias matrices present in the model. For our linear regression model, we have one weight matrix and one bias matrix. # Parameters list(model.parameters()) # We can use the model to generate predictions in the exact same way as before: # Generate predictions preds = model(inputs) preds # ## Loss Function # # Instead of defining a loss function manually, we can use the built-in loss function `mse_loss`. # Import nn.functional import torch.nn.functional as F # The `nn.functional` package contains many useful loss functions and several other utilities. # Define loss function loss_fn = F.mse_loss # Let's compute the loss for the current predictions of our model. loss = loss_fn(model(inputs), targets) print(loss) # ## Optimizer # # Instead of manually manipulating the model's weights & biases using gradients, we can use the optimizer `optim.SGD`. SGD stands for `stochastic gradient descent`. It is called `stochastic` because samples are selected in batches (often with random shuffling) instead of as a single group. 
# Define optimizer opt = torch.optim.SGD(model.parameters(), lr=1e-5) # Note that `model.parameters()` is passed as an argument to `optim.SGD`, so that the optimizer knows which matrices should be modified during the update step. Also, we can specify a learning rate which controls the amount by which the parameters are modified. # ## Train the model # # We are now ready to train the model. We'll follow the exact same process to implement gradient descent: # # 1. Generate predictions # # 2. Calculate the loss # # 3. Compute gradients w.r.t the weights and biases # # 4. Adjust the weights by subtracting a small quantity proportional to the gradient # # 5. Reset the gradients to zero # # The only change is that we'll work batches of data, instead of processing the entire training data in every iteration. Let's define a utility function `fit` which trains the model for a given number of epochs. # Utility function to train the model def fit(num_epochs, model, loss_fn, opt): # Repeat for given number of epochs for epoch in range(num_epochs): # Train with batches of data for xb,yb in train_dl: # 1. Generate predictions pred = model(xb) # 2. Calculate loss loss = loss_fn(pred, yb) # 3. Compute gradients loss.backward() # 4. Update parameters using gradients opt.step() # 5. Reset the gradients to zero opt.zero_grad() # Print the progress if (epoch+1) % 10 == 0: print('Epoch [{}/{}], Loss: {:.4f}'.format(epoch+1, num_epochs, loss.item())) # Some things to note above: # # * We use the data loader defined earlier to get batches of data for every iteration. # # * Instead of updating parameters (weights and biases) manually, we use `opt.step` to perform the update, and `opt.zero_grad` to reset the gradients to zero. # # * We've also added a log statement which prints the loss from the last batch of data for every 10th epoch, to track the progress of training. `loss.item` returns the actual value stored in the loss tensor. # # Let's train the model for 100 epochs. fit(100, model, loss_fn, opt) # Let's generate predictions using our model and verify that they're close to our targets. # Generate predictions preds = model(inputs) preds # Compare with targets targets # Indeed, the predictions are quite close to our targets, and now we have a fairly good model to predict crop yields for apples and oranges by looking at the average temperature, rainfall and humidity in a region. # ## Commit and update the notebook # # As a final step, we can record a new version of the notebook using the `jovian` library. import jovian jovian.commit() # Note that running `jovian.commit` a second time records a new version of your existing notebook. With Jovian.ml, you can avoid creating copies of your Jupyter notebooks and keep versions organized. Jovian also provides a visual diff ([example](https://jovian.ml/aakashns/keras-mnist-jovian/diff?base=8&remote=2)) so you can inspect what has changed between different versions: # # ![jovian-commenting](https://i.imgur.com/HF1cOVt.png) # ## Further Reading # # We've covered a lot of ground this this tutorial, including *linear regression* and the *gradient descent* optimization algorithm. Here are a few resources if you'd like to dig deeper into these topics: # # * For a more detailed explanation of derivates and gradient descent, see [these notes from a Udacity course](https://storage.googleapis.com/supplemental_media/udacityu/315142919/Gradient%20Descent.pdf). 
# # * For an animated visualization of how linear regression works, [see this post](https://hackernoon.com/visualizing-linear-regression-with-pytorch-9261f49edb09). # # * For a more mathematical treatment of matrix calculus, linear regression and gradient descent, you should check out [<NAME>'s excellent course notes](https://github.com/Cleo-Stanford-CS/CS229_Notes/blob/master/lectures/cs229-notes1.pdf) from CS229 at Stanford University. # # * To practice and test your skills, you can participate in the [Boston Housing Price Prediction](https://www.kaggle.com/c/boston-housing) competition on Kaggle, a website that hosts data science competitions. # With this, we complete our discussion of linear regression in PyTorch, and we’re ready to move on to the next topic: *Logistic regression*.
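# As a final quick illustration of using the trained model, we can ask it for the predicted apple and orange yields of an unseen region; the temperature, rainfall and humidity values below are made up for the example.

# Predict yields for a new region (temp=75, rainfall=63, humidity=44 -- illustrative values)
new_region = torch.tensor([[75., 63., 44.]])
predicted_yields = model(new_region)
print(predicted_yields)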
linear-regression.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Gibbs sampling in 2D # # This is BONUS content related to Day 22, where we introduce Gibbs sampling # # ## Random variables # # (We'll use 0-indexing so we have close alignment between math and python code) # # * 2D random variable $z = [z_0, z_1]$ # * each entry $z_d$ is a real scalar: $z_d \in \mathbb{R}$ # # ## Target distribution # # \begin{align} # p^*(z_0, z_1) = \mathcal{N}\left( # \left[ \begin{array}{c} # 0 \\ 0 # \end{array} \right], # \left[ # \begin{array}{c c} # 1 & 0.8 \\ # 0.8 & 2 # \end{array} \right] \right) # \end{align} # # ## Key takeaways # # * New concept: 'Gibbs sampling', which just iterates between two conditional sampling distributions: # # \begin{align} # z^{t+1}_0 &\sim p^* (z_0 | z_1 = z^t_1) \\ # z^{t+1}_1 &\sim p^* (z_1 | z_0 = z^{t+1}_0) # \end{align} # # ## Things to remember # # This is a simple example to illustrate the idea of how Gibbs sampling works. # # There are other "better" ways of sampling from a 2d normal. # # # Setup import numpy as np import matplotlib.pyplot as plt import seaborn as sns sns.set_style("whitegrid") sns.set_context("notebook", font_scale=2.0) # # Step 1: Prepare for Gibbs sampling # # ## Define functions to sample from target's conditionals # def draw_z0_given_z1(z1, random_state): ## First, use Bishop textbook formulas to compute the conditional mean/var mean_01 = 0.4 * z1 var_01 = 0.68 ## Then, use simple transform to obtain a sample from this conditional ## Remember, if u ~ Normal(0, 1), a "standard" normal with mean 0 variance 1, ## then using transform: x <- T(u), with T(u) = \mu + \sigma * u ## we can say x ~ Normal(\mu, \sigma^2) u_samp = random_state.randn() z0_samp = mean_01 + np.sqrt(var_01) * u_samp return z0_samp def draw_z1_given_z0(z0, random_state): ## First, use Bishop textbook formulas to compute conditional mean/var mean_10 = 0.8 * z0 var_10 = 1.36 ## Then, use simple transform to obtain a sample from this conditional ## Remember, if u ~ Normal(0, 1), a "standard" normal with mean 0 variance 1, ## then using transform: x <- T(u), with T(u) = \mu + \sigma * u ## we can say x ~ Normal(\mu, \sigma^2) u_samp = random_state.randn() z1_samp = mean_10 + np.sqrt(var_10) * u_samp return z1_samp # # Step 2: Execute the Gibbs sampling algorithm # # Perform 6000 iterations. # # Discard the first 1000 as "not yet burned in". 
# + S = 6000 sample_list = list() z_D = np.zeros(2) random_state = np.random.RandomState(0) # reproducible random seeds for t in range(S): z_D[0] = draw_z0_given_z1(z_D[1], random_state) z_D[1] = draw_z1_given_z0(z_D[0], random_state) if t > 1000: sample_list.append(z_D.copy()) # save copies so we get different vectors # - z_samples_SD = np.vstack(sample_list) # ## Step 3: Compare to samples from built-in routines for 2D MVNormal sampling Cov_22 = np.asarray([[1.0, 0.8], [0.8, 2.0]]) true_samples_SD = random_state.multivariate_normal(np.zeros(2), Cov_22, size=S-1000) # + fig, ax_grid = plt.subplots(nrows=1, ncols=2, sharex=True, sharey=True, figsize=(10,4)) ax_grid[0].plot(z_samples_SD[:,0], z_samples_SD[:,1], 'k.') ax_grid[0].set_title('Gibbs sampler') ax_grid[0].set_aspect('equal', 'box'); ax_grid[1].plot(true_samples_SD[:,0], true_samples_SD[:,1], 'k.') ax_grid[1].set_title('np.random.multivariate_normal') ax_grid[1].set_aspect('equal', 'box'); ax_grid[1].set_xlim([-6, 6]); ax_grid[1].set_ylim([-6, 6]); # -
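# A cheap numerical check that the Gibbs chain is targeting the right distribution (beyond the eyeball comparison above) is to compare the empirical mean and covariance of the retained samples with the target values; both should be close to the zero mean vector and the covariance matrix defined above.

print("empirical mean      :", z_samples_SD.mean(axis=0))             # ~ [0, 0]
print("empirical covariance:\n", np.cov(z_samples_SD, rowvar=False))  # ~ [[1, 0.8], [0.8, 2]]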
notebooks/GibbsSampling.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import tensorflow as tf tf.enable_eager_execution() # + x = tf.ones((2, 2)) with tf.GradientTape() as t: t.watch(x) y = tf.reduce_sum(x) z = tf.multiply(y, y) # Derivative of z with respect to the original input tensor x dz_dx = t.gradient(z, x) for i in [0, 1]: for j in [0, 1]: assert dz_dx[i][j].numpy() == 8.0 # + x = tf.ones((2, 2)) with tf.GradientTape() as t: t.watch(x) y = tf.reduce_sum(x) z = tf.multiply(y, y) # Use the tape to compute the derivative of z with respect to the # intermediate value y. dz_dy = t.gradient(z, y) assert dz_dy.numpy() == 8.0 # - x = tf.constant(3.0) with tf.GradientTape(persistent=True) as t: t.watch(x) y = x * x z = y * y dz_dx = t.gradient(z, x) # 108.0 (4*x^3 at x = 3) dy_dx = t.gradient(y, x) # 6.0 del t # Drop the reference to the tape # + def f(x, y): output = 1.0 for i in range(y): if i > 1 and i < 5: output = tf.multiply(output, x) return output def grad(x, y): with tf.GradientTape() as t: t.watch(x) out = f(x, y) return t.gradient(out, x) x = tf.convert_to_tensor(2.0) assert grad(x, 6).numpy() == 12.0 assert grad(x, 5).numpy() == 12.0 assert grad(x, 4).numpy() == 4.0 # + x = tf.Variable(1.0) # Create a Tensorflow variable initialized to 1.0 with tf.GradientTape() as t: with tf.GradientTape() as t2: y = x * x * x # Compute the gradient inside the 't' context manager # which means the gradient computation is differentiable as well. dy_dx = t2.gradient(y, x) d2y_dx2 = t.gradient(dy_dx, x) assert dy_dx.numpy() == 3.0 assert d2y_dx2.numpy() == 6.0
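# As one more small example in the same eager-execution setting (not part of the original
# notebook): a tape can drive a single hand-written gradient-descent step on a scalar
# parameter. The starting value and learning rate below are arbitrary.
w = tf.Variable(1.0)

with tf.GradientTape() as tape:
    loss = (3.0 * w - 6.0) ** 2        # minimized at w = 2

grad = tape.gradient(loss, w)          # d/dw (3w - 6)^2 = 6 * (3w - 6) = -18 at w = 1
w.assign_sub(0.05 * grad)              # one gradient-descent step: w <- w - lr * grad

assert abs(w.numpy() - 1.9) < 1e-6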
Automatic differentiation and gradient tape.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Segmented deformable mirrors # # We will use segmented deformable mirrors and simulate the PSFs that result from segment pistons and tilts. We will compare this functionality against Poppy, another optical propagation package. # # First we'll import all packages. import os import numpy as np import matplotlib.pyplot as plt import astropy.units as u import hcipy import poppy # + # Parameters for the pupil function pupil_diameter = 0.019725 # m gap_size = 90e-6 # m num_rings = 3 segment_flat_to_flat = (pupil_diameter - (2 * num_rings + 1) * gap_size) / (2 * num_rings + 1) focal_length = 1 # m # Parameters for the simulation num_pix = 1024 wavelength = 638e-9 num_airy = 20 sampling = 4 norm = False # - # ## Instantiate the segmented mirrors # # ### HCIPy SM: `hsm` # # We need to generate a pupil grid for the aperture, and a focal grid and propagator for the focal plane images after the DM. # + # HCIPy grids and propagator pupil_grid = hcipy.make_pupil_grid(dims=num_pix, diameter=pupil_diameter) focal_grid = hcipy.make_focal_grid(sampling, num_airy, pupil_diameter=pupil_diameter, reference_wavelength=wavelength, focal_length=focal_length) focal_grid = focal_grid.shifted(focal_grid.delta / 2) prop = hcipy.FraunhoferPropagator(pupil_grid, focal_grid, focal_length) # - # We generate a segmented aperture for the segmented mirror. For convenience, we'll use the HiCAT pupil without spiders. We'll use supersampling to better resolve the segment gaps. # + aper, segments = hcipy.make_hexagonal_segmented_aperture(num_rings, segment_flat_to_flat, gap_size, starting_ring=1, return_segments=True) aper = hcipy.evaluate_supersampled(aper, pupil_grid, 1) segments = hcipy.evaluate_supersampled(segments, pupil_grid, 1) plt.title('HCIPy aperture') hcipy.imshow_field(aper, cmap='gray') # - # Now we make the segmented mirror. In order to be able to apply the SM to a plane, that plane needs to be a `Wavefront`, which combines a `Field` - here the aperture - with a wavelength, here `wavelength`. # # In this example here, since the SM doesn't have any extra effects on the pupil since it's still completely flat, we don't actually have to apply the SM, although of course we could. # + # Instantiate the segmented mirror hsm = hcipy.SegmentedDeformableMirror(segments) # Make a pupil plane wavefront from aperture wf = hcipy.Wavefront(aper, wavelength) # Apply SM if you want to wf = hsm(wf) plt.figure(figsize=(8, 8)) plt.title('Wavefront intensity at HCIPy SM') hcipy.imshow_field(wf.intensity, cmap='gray') plt.colorbar() plt.show() # - # ### Poppy SM: `psm` # # We'll do the same for Poppy. psm = poppy.dms.HexSegmentedDeformableMirror(name='Poppy SM', rings=3, flattoflat=segment_flat_to_flat*u.m, gap=gap_size*u.m, center=False) # Display the transmission and phase of the poppy sm plt.figure(figsize=(8, 8)) psm.display(what='amplitude') # ## Create reference images # # ### HCIPy reference image # # We need to apply the SM to the wavefront in the pupil plane and then propagate it to the image plane. 
# + # Apply SM to pupil plane wf wf_sm = hsm(wf) # Propagate from SM to image plane im_ref_hc = prop(wf_sm) # + # Display intensity and phase in image plane plt.figure(figsize=(8, 8)) plt.suptitle('Image plane after HCIPy SM') # Get normalization factor for HCIPy reference image norm_hc = np.max(im_ref_hc.intensity) hcipy.imshow_psf(im_ref_hc, normalization='peak') # - # ### Poppy reference image # # For the Poppy propagation, we need to make an optical system of which we then calculate the PSF. We match HCIPy's image scale with Poppy. # + # Make an optical system with the Poppy SM and a detector psm.flatten() pxscle = np.degrees(wavelength / pupil_diameter) * 3600 / sampling fovarc = pxscle * 160 osys = poppy.OpticalSystem() osys.add_pupil(psm) osys.add_detector(pixelscale=pxscle, fov_arcsec=fovarc, oversample=1) # + # Calculate the PSF psf = osys.calc_psf(wavelength) plt.figure(figsize=(8, 8)) poppy.display_psf(psf, vmin=1e-9, vmax=0.1) # Get the PSF as an array im_ref_pop = psf[0].data print('Poppy PSF shape: {}'.format(im_ref_pop.shape)) # Get normalization from Poppy reference image norm_pop = np.max(im_ref_pop) # - # ### Both reference images side-by-side # + plt.figure(figsize=(15,6)) plt.subplot(1, 2, 1) hcipy.imshow_field(np.log10(im_ref_hc.intensity / norm_hc), vmin=-10, cmap='inferno') plt.title('HCIPy reference PSF') plt.colorbar() plt.subplot(1, 2, 2) plt.imshow(np.log10(im_ref_pop / norm_pop), origin='lower', vmin=-10, cmap='inferno') plt.title('Poppy reference PSF') plt.colorbar() # + ref_dif = im_ref_pop / norm_pop - im_ref_hc.intensity.shaped / norm_hc lims = np.max(np.abs(ref_dif)) plt.figure(figsize=(15, 6)) plt.suptitle(f'Maximum relative error: {lims:0.2g} relative to the peak intensity') plt.subplot(1, 2, 1) plt.imshow(ref_dif, origin='lower', vmin=-lims, vmax=lims, cmap='RdBu') plt.title('Full image') plt.colorbar() plt.subplot(1, 2, 2) plt.imshow(ref_dif[60:100,60:100], origin='lower', vmin=-lims, vmax=lims, cmap='RdBu') plt.title('Zoomed in') plt.colorbar() # - # ## Applying aberrations # + # Define function from rad of phase to m OPD def aber_to_opd(aber_rad, wavelength): aber_m = aber_rad * wavelength / (2 * np.pi) return aber_m aber_rad = 4.0 print('Aberration: {} rad'.format(aber_rad)) print('Aberration: {} m'.format(aber_to_opd(aber_rad, wavelength))) # Poppy and HCIPy have a different way of indexing segments # Figure out which index to poke on which mirror poppy_index_to_hcipy_index = [] for n in range(1, num_rings + 1): base = list(range(3 * (n - 1) * n + 1, 3 * n * (n + 1) + 1)) poppy_index_to_hcipy_index.extend(base[2 * n::-1]) poppy_index_to_hcipy_index.extend(base[:2 * n:-1]) poppy_index_to_hcipy_index = {j: i for i, j in enumerate(poppy_index_to_hcipy_index) if j is not None} hcipy_index_to_poppy_index = {j: i for i, j in poppy_index_to_hcipy_index.items()} # + # Flatten both SMs just to be sure hsm.flatten() psm.flatten() # Poking segment 35 and 25 for i in [35, 25]: hsm.set_segment_actuators(i, aber_to_opd(aber_rad, wavelength) / 2, 0, 0) psm.set_actuator(hcipy_index_to_poppy_index[i], aber_to_opd(aber_rad, wavelength) * u.m, 0, 0) # Display both segmented mirrors in OPD # HCIPy plt.figure(figsize=(8,8)) plt.title('OPD for HCIPy SM') hcipy.imshow_field(hsm.surface * 2, mask=aper, cmap='RdBu_r', vmin=-5e-7, vmax=5e-7) plt.colorbar() plt.show() # Poppy plt.figure(figsize=(8,8)) psm.display(what='opd') plt.show() # - # ### Show focal plane images # + ### HCIPy # Apply SM to pupil plane wf wf_fp_pistoned = hsm(wf) # Propagate from SM to image plane 
im_pistoned_hc = prop(wf_fp_pistoned) ### Poppy # Calculate the PSF psf = osys.calc_psf(wavelength) # Get the PSF as an array im_pistoned_pop = psf[0].data ### Display intensity of both cases image plane plt.figure(figsize=(15, 6)) plt.suptitle('Image plane after SM for $\phi$ = ' + str(aber_rad) + ' rad') plt.subplot(1, 2, 1) hcipy.imshow_field(np.log10(im_pistoned_hc.intensity / norm_hc), cmap='inferno', vmin=-9) plt.title('HCIPy pistoned pair') plt.colorbar() plt.subplot(1, 2, 2) plt.imshow(np.log10(im_pistoned_pop / norm_pop), origin='lower', cmap='inferno', vmin=-9) plt.title('Poppy pistoned pair') plt.colorbar() # - # ## A mix of piston, tip and tilt (PTT) # + aber_rad_tt = 200e-6 aber_rad_p = 1.8 opd_piston = aber_to_opd(aber_rad_p, wavelength) ### Put aberrations on both SMs # Flatten both SMs hsm.flatten() psm.flatten() ## PISTON for i in [19, 28, 23, 16]: hsm.set_segment_actuators(i, opd_piston / 2, 0, 0) psm.set_actuator(hcipy_index_to_poppy_index[i], opd_piston * u.m, 0, 0) for i in [3, 35, 30, 8]: hsm.set_segment_actuators(i, -0.5 * opd_piston / 2, 0, 0) psm.set_actuator(hcipy_index_to_poppy_index[i], -0.5 * opd_piston * u.m, 0, 0) for i in [14, 18, 1, 32, 12]: hsm.set_segment_actuators(i, 0.3 * opd_piston / 2, 0, 0) psm.set_actuator(hcipy_index_to_poppy_index[i], 0.3 * opd_piston * u.m, 0, 0) ## TIP and TILT for i in [2, 5, 11, 15, 22]: hsm.set_segment_actuators(i, 0, aber_rad_tt / 2, 0.3 * aber_rad_tt / 2) psm.set_actuator(hcipy_index_to_poppy_index[i], 0, aber_rad_tt, 0.3 * aber_rad_tt) for i in [4, 6, 26]: hsm.set_segment_actuators(i, 0, -aber_rad_tt / 2, 0) psm.set_actuator(hcipy_index_to_poppy_index[i], 0, -aber_rad_tt, 0) for i in [34, 31, 7]: hsm.set_segment_actuators(i, 0, 0, 1.3 * aber_rad_tt / 2) psm.set_actuator(hcipy_index_to_poppy_index[i], 0, 0, 1.3 * aber_rad_tt) # + # Display both segmented mirrors in OPD # HCIPy plt.figure(figsize=(8,8)) plt.title('OPD for HCIPy SM') hcipy.imshow_field(hsm.surface * 2, mask=aper, cmap='RdBu_r', vmin=-5e-7, vmax=5e-7) plt.colorbar() plt.show() # Poppy plt.figure(figsize=(8,8)) psm.display(what='opd') plt.show() # + ### Propagate to image plane ## HCIPy # Propagate from pupil plane through SM to image plane im_pistoned_hc = prop(hsm(wf)).intensity ## Poppy # Calculate the PSF psf = osys.calc_psf(wavelength) # Get the PSF as an array im_pistoned_pop = psf[0].data # + ### Display intensity of both cases image plane plt.figure(figsize=(18, 9)) plt.suptitle('Image plane after SM forrandom arangement') plt.subplot(1, 2, 1) hcipy.imshow_field(np.log10(im_pistoned_hc / norm_hc), cmap='inferno', vmin=-9) plt.title('HCIPy random arangement') plt.colorbar() plt.subplot(1, 2, 2) plt.imshow(np.log10(im_pistoned_pop / norm_pop), origin='lower', cmap='inferno', vmin=-9) plt.title('Poppy tipped arangement') plt.colorbar() plt.show()
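# To put a number on the HCIPy/Poppy agreement for this aberrated case as well (not part of
# the original notebook), we can repeat the difference image made earlier for the reference
# PSFs, reusing the peak normalizations computed there; im_pistoned_hc is already an
# intensity field, so only a reshape to 2D is needed.
aber_dif = im_pistoned_pop / norm_pop - im_pistoned_hc.shaped / norm_hc
lims = np.max(np.abs(aber_dif))

plt.figure(figsize=(8, 6))
plt.title('Maximum relative difference: {:0.2g}'.format(lims))
plt.imshow(aber_dif, origin='lower', vmin=-lims, vmax=lims, cmap='RdBu')
plt.colorbar()
plt.show()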
doc/tutorial_notebooks/SegmentedDMs/SegmentedDMs.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Numeric data or ... ? # # In this exercise, and throughout this chapter, you'll be working with bicycle ride sharing data in San Francisco called `ride_sharing`. It contains information on the start and end stations, the trip duration, and some user information for a bike sharing service. # # The `user_type` column contains information on whether a user is taking a free ride and takes on the following values: # # - `1` for free riders. # - `2` for pay per ride. # - `3` for monthly subscribers. # # In this instance, you will print the information of `ride_sharing` using `.info()` and see a firsthand example of how an incorrect data type can flaw your analysis of the dataset. # # Instructions # # - Print the information of `ride_sharing`. # - Use `.describe()` to print the summary statistics of the `user_type` column from `ride_sharing`. # - By looking at the summary statistics - they don't really seem to offer much description on how users are distributed along their purchase type, why do you think that is? # - Convert `user_type` into categorical by assigning it the `'category'` data type and store it in the `user_type_cat` column. # - Make sure you converted `user_type_cat` correctly by using an `assert` statement. # + # Import packages import pandas as pd # Import dataframe ride_sharing = pd.read_csv('ride_sharing.csv', index_col=0) # Print the information of ride_sharing print(ride_sharing.info(), '\n') # Print summary statistics of user_type column print(ride_sharing['user_type'].describe()) # + # Convert user_type from integer to category ride_sharing['user_type_cat'] = ride_sharing['user_type'].astype('category') # Write an assert statement confirming the change assert ride_sharing['user_type_cat'].dtype == 'category' # Print new summary statistics print(ride_sharing.info()) # - # ## Summing strings and concatenating numbers # # In the previous exercise, you were able to identify that `category` is the correct data type for `user_type` and convert it in order to extract relevant statistical summaries that shed light on the distribution of `user_type`. # # Another common data type problem is importing what should be numerical values as strings, as mathematical operations such as summing and multiplication lead to string concatenation, not numerical outputs. # # In this exercise, you'll be converting the string column `duration` to the type `int`. Before that however, you will need to make sure to strip `"minutes"` from the column in order to make sure pandas reads it as numerical. The pandas package has been imported as pd. # # Instructions # # - Use the `.strip()` method to strip `duration` of `"minutes"` and store it in the `duration_trim` column. # - Convert `duration_trim` to `int` and store it in the `duration_time` column. # - Write an `assert` statement that checks if `duration_time`'s data type is now an `int`. # - Print the average ride duration. 
# +
# Strip duration of minutes
ride_sharing['duration_trim'] = ride_sharing['duration'].str.strip('minutes')

# Convert duration to integer
ride_sharing['duration_time'] = ride_sharing['duration_trim'].astype('int')

# Write an assert statement making sure of conversion
assert ride_sharing['duration_time'].dtype == 'int'

# Print formed columns and calculate average ride duration
print(ride_sharing[['duration', 'duration_trim', 'duration_time']])
print(ride_sharing['duration_time'].mean())
# -

# ## Tire size constraints
#
# In this lesson, you're going to build on top of the work you've been doing with the `ride_sharing` DataFrame. You'll be working with the `tire_sizes` column which contains data on each bike's tire size.
#
# Bicycle tire sizes could be either 26″, 27″ or 29″ and are here correctly stored as a categorical value. In an effort to cut maintenance costs, the ride sharing provider decided to set the maximum tire size to be 27″.
#
# In this exercise, you will make sure the `tire_sizes` column has the correct range by first converting it to an integer, then setting and testing the new upper limit of 27″ for tire sizes.
#
# Instructions
#
# - Convert the `tire_sizes` column from `category` to `'int'`.
# - Use `.loc[]` to set all values of `tire_sizes` above 27 to 27.
# - Convert `tire_sizes` back to `'category'` from int.
# - Print the description of `tire_sizes`.

# +
# Convert tire_sizes to integer
ride_sharing['tire_sizes'] = ride_sharing['tire_sizes'].astype('int')

# Set all values above 27 to 27
ride_sharing.loc[ride_sharing['tire_sizes'] > 27, 'tire_sizes'] = 27

# Reconvert tire_sizes back to categorical
ride_sharing['tire_sizes'] = ride_sharing['tire_sizes'].astype('category')

# Print tire size description
print(ride_sharing['tire_sizes'].describe())
# -

# ## Back to the future
#
# The data pipeline feeding into the `ride_sharing` DataFrame has been updated to register each ride's date. This information is stored in the `ride_date` column of the type `object`, which represents strings in `pandas`.
#
# A bug was discovered which was recording rides taken today as taken next year. To fix this, you will find all instances of the `ride_date` column that occur anytime in the future, and set the maximum possible value of this column to today's date. Before doing so, you would need to convert `ride_date` to a `datetime` object.
#
# The `datetime` package has been imported as `dt`, alongside all the packages you've been using till now.
#
# Instructions
#
# - Convert `ride_date` to a `datetime` object and store it in the `ride_dt` column using `to_datetime()`.
# - Create the variable `today`, which stores today's date, by using the `dt.date.today()` function.
# - For all instances of `ride_dt` in the future, set them to today's date.
# - Print the maximum date in the `ride_dt` column.

# +
# Import datetime
import datetime as dt

# Convert ride_date to datetime
ride_sharing['ride_dt'] = pd.to_datetime(ride_sharing['ride_date'])

# Save today's date
today = dt.date.today()

# Set all rides in the future to today's date
# (wrap `today` in pd.Timestamp so it can be compared with the datetime64 column)
ride_sharing.loc[ride_sharing['ride_dt'] > pd.Timestamp(today), 'ride_dt'] = pd.Timestamp(today)

# Print maximum of ride_dt column
print(ride_sharing['ride_dt'].max())
# -

# ## How big is your subset?
#
# You have the following `loans` DataFrame which contains loan and credit score data for consumers, and some metadata such as their first and last names. You want to find both complete and incomplete duplicates using `.duplicated()`.
# # ``` # first_name last_name credit_score has_loan # --------------------------------------------------------------- # Justin Saddlemeyer 600 1 # Hadrien Lacroix 450 0 # ``` # # Choose the **correct** usage of `.duplicated()` below: # `loans.duplicated(subset = ['first_name', 'last_name'], keep = False)` because subsetting on consumer metadata and not discarding any duplicate returns all duplicated rows. # ## Finding duplicates # # A new update to the data pipeline feeding into `ride_sharing` has added the `ride_id` column, which represents a unique identifier for each ride. # # The update however coincided with radically shorter average ride duration times and irregular user birth dates set in the future. Most importantly, the number of rides taken has increased by 20% overnight, leading you to think there might be both complete and incomplete duplicates in the `ride_sharing` DataFrame. # # In this exercise, you will confirm this suspicion by finding those duplicates. A sample of `ride_sharing` is in your environment, as well as all the packages you've been working with thus far. # # Instructions # # - Find duplicated rows of `ride_id` in the `ride_sharing` DataFrame while setting `keep` to `False`. # - Subset `ride_sharing` on `duplicates` and sort by `ride_id` and assign the results to `duplicated_rides`. # - Print the `ride_id`, `duration` and `user_birth_year` columns of `duplicated_rides` in that order. # + # Find duplicates duplicates = ride_sharing.duplicated(subset='ride_id', keep=False) # Sort your duplicated rides duplicated_rides = ride_sharing[duplicates].sort_values('ride_id') # Print relevant columns print(duplicated_rides[['ride_id','duration','user_birth_year']]) # - # ## Treating duplicates # # In the last exercise, you were able to verify that the new update feeding into `ride_sharing contains` a bug generating both complete and incomplete duplicated rows for some values of the `ride_id` column, with occasional discrepant values for the `user_birth_year` and `duration` columns. # # In this exercise, you will be treating those duplicated rows by first dropping complete duplicates, and then merging the incomplete duplicate rows into one while keeping the average `duration`, and the minimum `user_birth_year` for each set of incomplete duplicate rows. # # Instructions # # - Drop complete duplicates in `ride_sharing` and store the results in `ride_dup`. # - Create the statistics dictionary which holds **minimum** aggregation for `user_birth_year` and **mean** aggregation for `duration`. # - Drop incomplete duplicates by grouping by `ride_id` and applying the aggregation in `statistics`. # - Find duplicates again and run the `assert` statement to verify de-duplication. # + # Drop complete duplicates from ride_sharing ride_dup = ride_sharing.drop_duplicates() # Create statistics dictionary for aggregation function statistics = {'user_birth_year': 'min', 'duration': 'mean'} # Group by ride_id and compute new statistics ride_unique = ride_dup.groupby('ride_id').agg(statistics).reset_index() # Find duplicated values again duplicates = ride_unique.duplicated(subset='ride_id', keep=False) duplicated_rides = ride_unique[duplicates == True] # Assert duplicates are processed assert duplicated_rides.shape[0] == 0
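# -

# As a small illustration of the `keep` argument discussed above, here is a minimal toy example (the `loans` data itself is not available in this notebook, so a tiny stand-in DataFrame is used):

# +
# Toy DataFrame with one consumer appearing twice
toy = pd.DataFrame({
    'first_name': ['Justin', 'Justin', 'Hadrien'],
    'last_name': ['Saddlemeyer', 'Saddlemeyer', 'Lacroix'],
    'credit_score': [600, 620, 450],
    'has_loan': [1, 1, 0]
})

# keep=False flags every member of a duplicate group, not just the later occurrences
print(toy.duplicated(subset=['first_name', 'last_name'], keep=False))
# -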
cleaning_data_in_python/1_common_data_problems.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Building two simple polynomial regression model # We generate polynomial regressions for two data sets and compare the R2 scores to linear regressions. # ## Case 1: Profit prediction for an agricultural problem # # In the following we would like to predict profits on harvest for certain field sizes. # + #importing the necessary packages import pandas as pd import numpy as np from sklearn.model_selection import train_test_split from sklearn.linear_model import LinearRegression from sklearn.preprocessing import PolynomialFeatures # - #define data frame df = pd.read_csv("fields.csv") df.head() # We start off with a simple linear regression: # + #define the variables X = df[["width", "length"]].values Y = df[["profit"]].values #split the data set into training and test set X_train, X_test, Y_train, Y_test = train_test_split(X, Y, random_state = 0, test_size = 0.25) #train the model model = LinearRegression() model.fit(X_train, Y_train) #report the R2 score print(model.score(X_test, Y_test)) # - # In a next step we proceed with an attempt at polynomial fitting for the data: # + # #PolynomialFeatures? pf = PolynomialFeatures(degree = 2, include_bias = False) #bias term not needed here #need to fit the training data accordingly (demanded by sklearn) to adapt to polynomial fitting pf.fit(X_train) #generate new columns X_train_transformed = pf.transform(X_train) X_test_transformed = pf.transform(X_test) #print all possible arrangements to get to a polynomial of degree 2 (as done by the transform method) #print(pf.powers_) model = LinearRegression() model.fit(X_train_transformed, Y_train) print(model.score(X_test_transformed, Y_test)) # - # We may redo the analysis without the random_state option in the train test function: # + scores = [] intercepts = [] coefs = [] for i in range(0,1000): X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.25) X_train_transformed = pf.transform(X_train) X_test_transformed = pf.transform(X_test) model = LinearRegression() model.fit(X_train_transformed, Y_train) intercepts.append(model.intercept_) coefs.append(model.coef_) scores.append(model.score(X_test_transformed, Y_test)) print("Average score: " + str(sum(scores)/ len(scores))) # + #np.array(coefs).shape # - # Now we would like to filter out columns from the fitting procedure: # + scores = [] intercepts = [] coefs = [] for i in range(0,1000): X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.25) #we may exclude certain columns from the fitting and check if the score improves X_train_transformed = pf.transform(X_train)[:, [0, 1, 2, 3, 4]] X_test_transformed = pf.transform(X_test)[:, [0, 1, 2, 3, 4]] model = LinearRegression() model.fit(X_train_transformed, Y_train) intercepts.append(model.intercept_) coefs.append(model.coef_) scores.append(model.score(X_test_transformed, Y_test)) print("Average score: " + str(sum(scores)/ len(scores))) # - # ## Case 2: Diamond price prediction # # In the following we would like to model the prices of diamonds via linear and polynomial regressions and compare the quality of the results via the R2 score. 
#define data frame df = pd.read_csv("diamonds.csv") df.head() # We start off with two simple linear regressions to get a feeling for the system: # + # price over carat #define the variables X = df[["carat"]].values Y = df[["price"]].values #split the data set into training and test set X_train, X_test, Y_train, Y_test = train_test_split(X, Y, random_state = 0, test_size = 0.25) #train the model model = LinearRegression() model.fit(X_train, Y_train) #report the R2 score print(model.score(X_test, Y_test)) # + # price over dimensions x,y,z #define the variables X = df[["x","y","z"]].values Y = df[["price"]].values #split the data set into training and test set X_train, X_test, Y_train, Y_test = train_test_split(X, Y, random_state = 0, test_size = 0.25) #train the model model = LinearRegression() model.fit(X_train, Y_train) #report the R2 score print(model.score(X_test, Y_test)) # - # Comment: Given this carat seems to be a good indicator for gauging the price of a diamond. # Now we try a polynomial regression: # + # price over dimensions x,y,z #define the variables X = df[["x", "y", "z"]].values Y = df[["price"]].values pf = PolynomialFeatures(degree = 2, include_bias = False) #bias term not needed here #need to fit the training data accordingly (demanded by sklearn) to adapt to polynomial fitting pf.fit(X_train) scores = [] intercepts = [] coefs = [] for i in range(0,200): X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.25) #we may exclude certain columns from the fitting and check if the score improves X_train_transformed = pf.transform(X_train) X_test_transformed = pf.transform(X_test) model = LinearRegression() model.fit(X_train_transformed, Y_train) intercepts.append(model.intercept_) coefs.append(model.coef_) scores.append(model.score(X_test_transformed, Y_test)) print("Average score: " + str(sum(scores)/ len(scores))) # - # This result indicates that in the current setting the linear regression via dimensions outperforms the polynomial one while the linear and polynomial regressions via carats outperform those with respect to dimensions.
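# The repeated train/test splits above can also be expressed more compactly with scikit-learn's `Pipeline` and `cross_val_score`. This is a minimal sketch, reusing the diamond features `X` and target `Y` defined above:

# +
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

poly_model = make_pipeline(
    PolynomialFeatures(degree = 2, include_bias = False),
    LinearRegression()
)

#5-fold cross-validation returns one R2 score per fold
cv_scores = cross_val_score(poly_model, X, Y.ravel(), cv = 5)
print("Average score: " + str(cv_scores.mean()))
# -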
simplepolynomialregressions.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [default] # language: python # name: python2 # --- # # MPG Cars # ### Introduction: # # The following exercise utilizes data from [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Auto+MPG) # # ### Step 1. Import the necessary libraries # ### Step 2. Import the first dataset [cars1](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv) and [cars2](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv). # ### Step 3. Assign each to a variable called cars1 and cars2 # ### Step 4. Oops, it seems our first dataset has some unnamed blank columns, fix cars1 # ### Step 5. What is the number of observations in each dataset? # ### Step 6. Join cars1 and cars2 into a single DataFrame called cars # ### Step 7. Oops, there is a column missing, called owners. Create a random number Series from 15,000 to 73,000. # ### Step 8. Add the column owners to cars
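# ### A possible solution sketch
#
# The exercise itself leaves the code cells empty; the following is one minimal sketch of the steps above. The URLs come from Step 2, while the exact names of the blank filler columns in `cars1` (typically `Unnamed: 9` onwards) are an assumption about this dataset.

# +
import pandas as pd
import numpy as np

# Steps 2-3: import the two datasets
cars1 = pd.read_csv("https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars1.csv")
cars2 = pd.read_csv("https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/05_Merge/Auto_MPG/cars2.csv")

# Step 4: drop the unnamed blank columns from cars1
cars1 = cars1.loc[:, ~cars1.columns.str.contains('^Unnamed')]

# Step 5: number of observations in each dataset
print(cars1.shape[0])
print(cars2.shape[0])

# Step 6: stack the two datasets into a single DataFrame
cars = pd.concat([cars1, cars2], ignore_index=True)

# Step 7: random owner counts between 15,000 and 73,000
owners = pd.Series(np.random.randint(15000, 73001, size=len(cars)))

# Step 8: add the column to cars
cars['owners'] = owners.values
print(cars.head())
# -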
05_Merge/Auto_MPG/Exercises.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import torch import torchvision import torchvision.transforms as transforms transform = transforms.Compose([ transforms.ToTensor()]) train = torchvision.datasets.MNIST( root="data/train", train=True, transform=transform, target_transform=None, download=True) test = torchvision.datasets.MNIST( root="data/test", train=False, transform=transform, target_transform=None, download=True) # + from matplotlib import pyplot as plt import numpy as np print(train.data.size()) print(test.data.size()) img = train.data[0].numpy() plt.imshow(img, cmap='gray') print('Label:', train.targets[0]) # + train_data_resized = train.data.numpy() #torchテンソルからnumpyに test_data_resized = test.data.numpy() train_data_resized = torch.FloatTensor(np.stack((train_data_resized,)*3, axis=1)) #RGBに変換 test_data_resized = torch.FloatTensor(np.stack((test_data_resized,)*3, axis=1)) print(train_data_resized.size()) # + import torch.utils.data as data mean = (0.485, 0.456, 0.406) std = (0.229, 0.224, 0.225) #画像の前処理 class ImgTransform(): def __init__(self): self.transform = transforms.Compose([ transforms.ToTensor(), # テンソル変換 transforms.Normalize(mean, std) # 標準化 ]) def __call__(self, img): return self.transform(img) #Datasetクラスを継承 class _3ChannelMnistDataset(data.Dataset): def __init__(self, img_data, target, transform): #[データ数,高さ,横,チャネル数]に self.data = img_data.numpy().transpose((0, 2, 3, 1)) /255 self.target = target self.img_transform = transform #画像前処理クラスのインスタンス def __len__(self): #画像の枚数を返す return len(self.data) def __getitem__(self, index): #画像の前処理(標準化)したデータを返す img_transformed = self.img_transform(self.data[index]) return img_transformed, self.target[index] # + train_dataset = _3ChannelMnistDataset(train_data_resized, train.targets, transform=ImgTransform()) test_dataset = _3ChannelMnistDataset(test_data_resized, test.targets, transform=ImgTransform()) # データセットをテストしてみる index = 0 print(train_dataset.__getitem__(index)[0].size()) print(train_dataset.__getitem__(index)[1]) print(train_dataset.__getitem__(index)[0][1]) #ちゃんと標準化されていることがわかる # - train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=100, shuffle=True) test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=100, shuffle=False) # + from torch import nn class Model(nn.Module): def __init__(self): super(Model, self).__init__() self.relu = nn.ReLU() self.pool = nn.MaxPool2d(3) self.conv = nn.Conv2d(3, 10, kernel_size=4) self.fc1 = nn.Linear(640, 300) self.fc2 = nn.Linear(300, 100) self.fc3 = nn.Linear(100, 10) def forward(self, x): x = self.conv(x) x = self.relu(x) x = self.pool(x) x = x.view(x.size()[0], -1) #行列を線形処理できるようにベクトルに(view(高さ、横)) x = self.fc1(x) x = self.relu(x) x = self.fc2(x) x = self.relu(x) x = self.fc3(x) return x model = Model() print(model) # + import tqdm from torch import optim # 推論モード def eval_net(net, data_loader, device="cpu"): #GPUある人はgpuに #推論モードに net.eval() ypreds = [] #予測したラベル格納変数 for x, y in (data_loader): # toメソッドでデバイスに転送 x = x.to(device) y = [y.to(device)] # 確率が最大のクラスを予測 # forwardプロパゲーション with torch.no_grad(): _, y_pred = net(x).max(1) ypreds.append(y_pred) # ミニバッチごとの予測を一つのテンソルに y = torch.cat(y) ypreds = torch.cat(ypreds) # 予測値を計算(正解=予測の要素の和) acc = (y == ypreds).float().sum()/len(y) return acc.item() # 訓練モード def train_net(net, train_loader, test_loader,optimizer_cls=optim.Adam, 
              loss_fn=nn.CrossEntropyLoss(), n_iter=4, device="cpu"):
    train_losses = []
    train_acc = []
    eval_acc = []
    optimizer = optimizer_cls(net.parameters())
    for epoch in range(n_iter):  # loop over the epochs (4 by default)
        running_loss = 0.0
        # switch to training mode
        net.train()
        n = 0
        n_acc = 0

        for i, (xx, yy) in tqdm.tqdm(enumerate(train_loader), total=len(train_loader)):
            xx = xx.to(device)
            yy = yy.to(device)
            output = net(xx)
            loss = loss_fn(output, yy)
            optimizer.zero_grad()  # reset the accumulated gradients
            loss.backward()  # backpropagate from the (cross-entropy) loss
            optimizer.step()
            running_loss += loss.item()
            n += len(xx)
            _, y_pred = output.max(1)
            n_acc += (yy == y_pred).float().sum().item()

        train_losses.append(running_loss / len(train_loader))  # average loss per mini-batch
        # prediction accuracy on the training data
        train_acc.append(n_acc / n)
        # prediction accuracy on the validation data
        eval_acc.append(eval_net(net, test_loader, device))

        # report the results for this epoch
        print("epoch:", epoch + 1, "train_loss:", train_losses[-1], "train_acc:", train_acc[-1],
              "eval_acc:", eval_acc[-1], flush=True)
# -

eval_net(model, test_loader)

train_net(model, train_loader, test_loader)

data = train_dataset.__getitem__(0)[0].reshape(1, 3, 28, 28)  # reshape to the batch shape the data loader would produce
print("Label", train_dataset.__getitem__(0)[1].data)
model.eval()
output = model(data)
print(output.size())
output

# Save the model
model.eval()
# sample input size for tracing
example = torch.rand(1, 3, 28, 28)
traced_script_module = torch.jit.trace(model, example)
traced_script_module.save("./CNNModel.pt")
print(model)
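# As a quick check that the traced model round-trips, it can be loaded back with `torch.jit.load` and applied to the same sample used above. A minimal sketch:

# +
# Load the TorchScript module saved above and run a forward pass with it
loaded_module = torch.jit.load("./CNNModel.pt")
loaded_module.eval()

with torch.no_grad():
    sample, label = train_dataset.__getitem__(0)
    logits = loaded_module(sample.reshape(1, 3, 28, 28).float())
    print("Predicted class:", logits.argmax(1).item(), "- true label:", int(label))
# -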
Pytorch_CNN.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # Objective: Learn to do clustering and noise reduction in data using PCA # + import matplotlib.pyplot as plt import numpy as np from numpy.linalg import svd from sklearn.datasets import load_digits digits = load_digits() # - # ## PCA using SVD def PCA(X, do_mean_centering=False): if do_mean_centering: X_mean_centered = np.zeros_like(X) for col in range(X_mean_centered.shape[1]): X_col = X[:, col] X_mean_centered[:, col] = X_col - np.mean(X_col) X = X_mean_centered U, S, PT = svd(X, full_matrices=False) Sigma = np.diag(S) T = np.dot(U, Sigma) return T, PT.T, Sigma # Score, Loadings, Variance def plot_digits(data): fig, axes = plt.subplots( 4, 10, figsize=(10, 4), subplot_kw={'xticks':[], 'yticks':[]}, gridspec_kw=dict(hspace=0.1, wspace=0.1) ) for i, ax in enumerate(axes.flat): ax.imshow( data[i].reshape(8, 8), cmap='binary', interpolation='nearest', clim=(0, 16) ) # + # Find out the original dimension of the data X = digits.data y = digits.target print("Shape of X", X.shape) print("Shape of y", y.shape) # - # Visualize the original data plot_digits(X) # ### Task 1: Dimensionality reduction: Conduct PCA on the the matrix $X$ to find out the dimension required to capture 80% of the variance # + # TODO: Make plots comparing normalization to not # Get variance explained by singular values def conduct_PCA(X): n_samples = X.shape[0] T, P, Sigma = PCA(X) # Compute sample variance explained_variance = (Sigma ** 2) / (n_samples - 1) total_variance = explained_variance.sum() explained_variance_ratio = explained_variance / total_variance #cumulative_explained_variance = np.cumsum(explained_variance_ratio) #explained_variance_threshold = 0.8 #required_dimensions = np.argmax(cumulative_explained_variance > explained_variance_threshold) component_wise_sum = explained_variance_ratio.sum(axis=0) cumulative_explained_variance = np.cumsum(component_wise_sum) explained_variance_threshold = 0.8 required_dimensions = np.argmax(cumulative_explained_variance > explained_variance_threshold) plt.figure() plt.plot(cumulative_explained_variance) plt.grid() plt.xlabel('Number of components') plt.ylabel('Cumulative explained variance'); print(f'Required components/dimensions to explain {explained_variance_threshold*100}% variance: {required_dimensions}') conduct_PCA(X) conduct_PCA(X / np.linalg.norm(X)) # - # Using the "eyeballing" method to pinpoint the necessary dimension to capture a given amount of variance is not very precise when the dimensionality of the underlying data is *larger than it is here*. For data sets with eg. 100+ dimensions we could solve for the exact dimension that exceeds the set explained variance, say 80%, as # # ``` # cumulative_explained = np.cumsum(explained_variance_ratio) # n_required_dims = np.argmax(cumulative_explained > 0.8) # ``` # We can also specify the required amount of explained variance if we use e.g. sklearn's PCA implementation. # ### Task 2: Clustering: Project the original data matrix X on the first two PCs and draw the scalar plot # + # Need mean centered data here to make more sense out of the plots # Clustering without mean centering yields more overlap. 
# Columns of T = U@S are the principal components
T, _, _ = PCA(X, do_mean_centering=True)
t1 = T[:, 0]
t2 = T[:, 1]

plt.figure(figsize=(15, 10))
plt.scatter(
    t1,
    t2,
    c=digits.target,
    edgecolor='none',
    alpha=0.5,
    cmap=plt.cm.get_cmap('Spectral', 10)
)
plt.xlabel('component 1')
plt.ylabel('component 2')
plt.colorbar();
# -

# ### Task 3: Denoising: Remove noise from the noisy data

# +
# Adding noise to the original data
X = digits.data
y = digits.target

np.random.seed(42)
noisy = np.random.normal(X, 4)
plot_digits(noisy)
# -

# Tips:
#
# * Decompose the noisy data using PCA
# * Reconstruct the data using just a few dominant components. For example, check the variance plot
#
# Since the nature of the noise is more or less similar across all the digits, the noise does not contribute features with enough variance to discriminate between the digits.

# +
def denoise_signal(signal, do_mean_centering):
    T, P, Sigma = PCA(signal, do_mean_centering)

    # Select how many components to use
    explained_variance = (Sigma ** 2) / (signal.shape[0] - 1)
    total_variance = explained_variance.sum()
    explained_variance_ratio = explained_variance / total_variance
    component_wise_sum = explained_variance_ratio.sum(axis=0)
    cumulative_explained_variance = np.cumsum(component_wise_sum)
    explained_variance_threshold = 0.8
    required_dimensions = np.argmax(cumulative_explained_variance > explained_variance_threshold)
    print(required_dimensions)
    n_components = required_dimensions

    # Project data down to n principal components, then reconstruct the data based on this.
    noise_reduced = T[:, :n_components]
    noise_reconstructed = noise_reduced @ P[:, :n_components].T
    return noise_reconstructed

plot_digits(denoise_signal(noisy, do_mean_centering=False))
# -

# ### Task 4: Study the impact of normalization of the dataset before conducting PCA. Discuss if it is critical to normalize this particular data compared to the dataset in other notebooks

# +
normalized_signal = noisy / np.linalg.norm(noisy)
plot_digits(denoise_signal(normalized_signal, do_mean_centering=False))

# Normalizing here does not yield any noticeable improvement, if any.
# This is likely because each column in the dataset represents some
# part of the same source (image of the digits), and thus we should expect
# similar properties between each column. I.e. we are not necessarily comparing apples and oranges :)
# See also Task 1 - normalization does not change anything.
# -

# ## All the above exercises can be done using the sklearn library as follows

# +
from sklearn.decomposition import PCA

X = digits.data
y = digits.target

# +
pca = PCA(2)  # project from 64 to 2 dimensions
projected = pca.fit_transform(digits.data)
print(digits.data.shape)
print(projected.shape)

plot_digits(digits.data)
# -

plt.figure(figsize=(15,10))
plt.scatter(
    projected[:, 0],
    projected[:, 1],
    c=digits.target,
    edgecolor='none',
    alpha=0.5,
    cmap=plt.cm.get_cmap('Spectral', 10)
)
plt.xlabel('Component 1')
plt.ylabel('Component 2')
plt.colorbar();

# +
pca = PCA().fit(X)
plt.plot(np.cumsum(pca.explained_variance_ratio_))
plt.xlabel('number of components')
plt.ylabel('cumulative explained variance');

# +
np.random.seed(42)
noisy = np.random.normal(digits.data, 4)
plot_digits(noisy)

# +
pca = PCA(0.50).fit(noisy)  # 50% of the variance amounts to 12 principal components.
pca.n_components_

plt.plot(np.cumsum(pca.explained_variance_ratio_))
plt.xlabel('number of components')
plt.ylabel('cumulative explained variance');

# +
components = pca.transform(noisy)
filtered = pca.inverse_transform(components)
plot_digits(filtered)
# -
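# To put a number on how much of the noise the PCA reconstruction removes, we can compare mean squared errors against the clean digits. A minimal sketch using the arrays defined above:

# +
mse_noisy = np.mean((noisy - digits.data) ** 2)
mse_filtered = np.mean((filtered - digits.data) ** 2)
print(f"MSE of noisy images vs. originals:    {mse_noisy:.2f}")
print(f"MSE of filtered images vs. originals: {mse_filtered:.2f}")
# -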
assignment_04_PCA_clustering.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.9 (tensorflow) # language: python # name: tensorflow # --- # + [markdown] id="0joqdbKedFtm" # <a href="https://colab.research.google.com/github/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_07_5_tabular_synthetic.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + [markdown] id="Or3fhGk9dFtn" # # T81-558: Applications of Deep Neural Networks # **Module 7: Generative Adversarial Networks** # * Instructor: [<NAME>](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx) # * For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). # + [markdown] id="W-QArqZxdFto" # # Module 7 Material # # * Part 7.1: Introduction to GANs for Image and Data Generation [[Video]](https://www.youtube.com/watch?v=hZw-AjbdN5k&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_07_1_gan_intro.ipynb) # * Part 7.2: Train StyleGAN3 with your Own Images [[Video]](https://www.youtube.com/watch?v=R546LYsQk5M&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_07_2_train_gan.ipynb) # * Part 7.3: Exploring the StyleGAN Latent Vector [[Video]](https://www.youtube.com/watch?v=goQzp8QSb2s&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_07_3_latent_vector.ipynb) # * Part 7.4: GANs to Enhance Old Photographs Deoldify [[Video]](https://www.youtube.com/watch?v=0OTd5GlHRx4&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_07_4_deoldify.ipynb) # * **Part 7.5: GANs for Tabular Synthetic Data Generation** [[Video]](https://www.youtube.com/watch?v=yujdA46HKwA&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_07_5_tabular_synthetic.ipynb) # # + [markdown] id="Zn-FViihdN1M" # # Google CoLab Instructions # # The following code ensures that Google CoLab is running the correct version of TensorFlow. # Running the following code will map your GDrive to ```/content/drive```. # + colab={"base_uri": "https://localhost:8080/"} id="f7G_GEwHdOrE" outputId="020e24de-efe3-4b95-88aa-03430c473cfa" try: from google.colab import drive COLAB = True print("Note: using Google CoLab") # %tensorflow_version 2.x except: print("Note: not using Google CoLab") COLAB = False # + [markdown] id="aCb4iUtAdFto" # # Part 7.5: GANs for Tabular Synthetic Data Generation # # Typically GANs are used to generate images. However, we can also generate tabular data from a GAN. In this part, we will use the Python tabgan utility to create fake data from tabular data. Specifically, we will use the Auto MPG dataset to train a GAN to generate fake cars. [Cite:ashrapov2020tabular](https://arxiv.org/pdf/2010.00638.pdf) # # ## Installing Tabgan # # Pytorch is the foundation of the tabgan neural network utility. The following code installs the needed software to run tabgan in Google Colab. 
# + colab={"base_uri": "https://localhost:8080/"} id="5-iTPkSWdsGa" outputId="bfd5ee3e-feb9-4a40-c5ad-3540ae4f8350" # HIDE OUTPUT CMD = "wget https://raw.githubusercontent.com/Diyago/"\ "GAN-for-tabular-data/master/requirements.txt" # !{CMD} # !pip install -r requirements.txt # !pip install tabgan # + [markdown] id="HlETatByeGqz" # Note, after installing; you may see this message: # # * You must restart the runtime in order to use newly installed versions. # # If so, click the "restart runtime" button just under the message. Then rerun this notebook, and you should not receive further issues. # # ## Loading the Auto MPG Data and Training a Neural Network # # We will begin by generating fake data for the Auto MPG dataset we have previously seen. The tabgan library can generate categorical (textual) and continuous (numeric) data. However, it cannot generate unstructured data, such as the name of the automobile. Car names, such as "AMC Rebel SST" cannot be replicated by the GAN, because every row has a different car name; it is a textual but non-categorical value. # # The following code is similar to what we have seen before. We load the AutoMPG dataset. The tabgan library requires Pandas dataframe to train. Because of this, we keep both the Pandas and Numpy values. # + colab={"base_uri": "https://localhost:8080/"} id="-YRAjvvMeWuz" outputId="d819599f-8023-434c-fa9a-fd8df6935132" # HIDE OUTPUT from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Activation from tensorflow.keras.callbacks import EarlyStopping from sklearn.model_selection import train_test_split import pandas as pd import io import os import requests import numpy as np from sklearn import metrics df = pd.read_csv( "https://data.heatonresearch.com/data/t81-558/auto-mpg.csv", na_values=['NA', '?']) COLS_USED = ['cylinders', 'displacement', 'horsepower', 'weight', 'acceleration', 'year', 'origin','mpg'] COLS_TRAIN = ['cylinders', 'displacement', 'horsepower', 'weight', 'acceleration', 'year', 'origin'] df = df[COLS_USED] # Handle missing value df['horsepower'] = df['horsepower'].fillna(df['horsepower'].median()) # Split into training and test sets df_x_train, df_x_test, df_y_train, df_y_test = train_test_split( df.drop("mpg", axis=1), df["mpg"], test_size=0.20, #shuffle=False, random_state=42, ) # Create dataframe versions for tabular GAN df_x_test, df_y_test = df_x_test.reset_index(drop=True), \ df_y_test.reset_index(drop=True) df_y_train = pd.DataFrame(df_y_train) df_y_test = pd.DataFrame(df_y_test) # Pandas to Numpy x_train = df_x_train.values x_test = df_x_test.values y_train = df_y_train.values y_test = df_y_test.values # Build the neural network model = Sequential() # Hidden 1 model.add(Dense(50, input_dim=x_train.shape[1], activation='relu')) model.add(Dense(25, activation='relu')) # Hidden 2 model.add(Dense(12, activation='relu')) # Hidden 2 model.add(Dense(1)) # Output model.compile(loss='mean_squared_error', optimizer='adam') monitor = EarlyStopping(monitor='val_loss', min_delta=1e-3, patience=5, verbose=1, mode='auto', restore_best_weights=True) model.fit(x_train,y_train,validation_data=(x_test,y_test), callbacks=[monitor], verbose=2,epochs=1000) # + [markdown] id="YeR9CQ5weQDB" # We now evaluate the trained neural network to see the RMSE. We will use this trained neural network to compare the accuracy between the original data and the GAN-generated data. We will later see that you can use such comparisons for anomaly detection. 
We can use this technique can be used for security systems. If a neural network trained on original data does not perform well on new data, then the new data may be suspect or fake. # + colab={"base_uri": "https://localhost:8080/"} id="WFijxBaufVzr" outputId="1a980286-e40b-4800-becd-cdba31979c8a" pred = model.predict(x_test) score = np.sqrt(metrics.mean_squared_error(pred,y_test)) print("Final score (RMSE): {}".format(score)) # + [markdown] id="0k33foL3eTDN" # ## Training a GAN for Auto MPG # # Next, we will train the GAN to generate fake data from the original MPG data. There are quite a few options that you can fine-tune for the GAN. The example presented here uses most of the default values. These are the usual hyperparameters that must be tuned for any model and require some experimentation for optimal results. To learn more about tabgab refer to its paper or this [Medium article](https://towardsdatascience.com/review-of-gans-for-tabular-data-a30a2199342), written by the creator of tabgan. # + colab={"base_uri": "https://localhost:8080/", "height": 81, "referenced_widgets": ["4868c1e7b0c943b594bc1ecad46db436", "6ead85f553054e4aa116920a40e49b04", "a9f4fb7eacb94aafbf64a98b5fc0fc37", "3c26c587accb4c26b0b98221b547356f", "39885cd66caa4fe79fc53f2368d7a5c0", "5ecaf538dd5744198cedd271a43a6d0f", "9030dbab18ec43f481bfc088de9447ec", "5dedd3556fd54bf58eef12635398021b", "c993e9cdf47c4c6799405a6d628128b4", "<KEY>", "<KEY>", "d778ac7cdd1e4d18b31a2a85d296a1c6", "3bb0052560414e108e0e966b36739768", "a9ef13d5399a4eb2afee41204ed24c54", "7202f83df3894af7add22a1a074617ed", "<KEY>", "e7a881fa8d964ff2ad0a2137cabce76d", "<KEY>", "<KEY>", "783fede137ea4452a39df668eb12a411", "<KEY>", "a054e62a36cc484e9b1554ed37194876"]} id="L-i4CdwYkgLU" outputId="599c8605-9570-4436-c3d8-393d88aa2f9f" from tabgan.sampler import GANGenerator import pandas as pd import numpy as np from sklearn.model_selection import train_test_split gen_x, gen_y = GANGenerator(gen_x_times=1.1, cat_cols=None, bot_filter_quantile=0.001, top_filter_quantile=0.999, \ is_post_process=True, adversarial_model_params={ "metrics": "rmse", "max_depth": 2, "max_bin": 100, "learning_rate": 0.02, "random_state": \ 42, "n_estimators": 500, }, pregeneration_frac=2, only_generated_data=False,\ gan_params = {"batch_size": 500, "patience": 25, \ "epochs" : 500,}).generate_data_pipe(df_x_train, df_y_train,\ df_x_test, deep_copy=True, only_adversarial=False, \ use_adversarial=True) # + [markdown] id="qBxYegwNdXdz" # Note: if you receive an error running the above code, you likely need to restart the runtime. You should have a "restart runtime" button in the output from the second cell. Once you restart the runtime, rerun all of the cells. This step is necessary as tabgan requires specific versions of some packages. # # ## Evaluating the GAN Results # # If we display the results, we can see that the GAN-generated data looks similar to the original. Some values, typically whole numbers in the original data, have fractional values in the synthetic data. # + colab={"base_uri": "https://localhost:8080/", "height": 423} id="CzKROV-Pm1SE" outputId="2ddf9726-6074-41e6-82bd-a8512f493c5e" gen_x # + [markdown] id="RQ6lc2EHn8i5" # Finally, we present the synthetic data to the previously trained neural network to see how accurately we can predict the synthetic targets. As we can see, you lose some RMSE accuracy by going to synthetic data. 
# + colab={"base_uri": "https://localhost:8080/"} id="BXoMORyHCU0o" outputId="21196542-b7e4-4c72-cd47-5f10ec96533b" # Predict pred = model.predict(gen_x.values) score = np.sqrt(metrics.mean_squared_error(pred,gen_y.values)) print("Final score (RMSE): {}".format(score))
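# -

# A simple way to sanity-check the synthetic rows is to compare summary statistics of the generated features against the original training features; large shifts in mean or range would point to a poorly fitted generator. A minimal sketch using the frames defined above:

# +
# Side-by-side summary statistics of original vs. generated features
comparison = pd.concat(
    [df_x_train.describe().T.add_suffix(' (orig)'),
     gen_x.describe().T.add_suffix(' (gen)')],
    axis=1)
print(comparison)
# -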
t81_558_class_07_5_tabular_synthetic.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

import numpy as np
import math, os, sys
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestRegressor

dataset = np.load('dataset/all_data-preprocessed.npz')
features, labels = dataset['features'].astype('float32'), dataset['labels'].astype('float32')

train_test_split_factor = .8
validation_split_factor = .2

train_x, train_y, test_x, test_y = features[:math.floor(len(features)*train_test_split_factor)], labels[:math.floor(len(labels)*train_test_split_factor)], features[math.floor(len(features)*train_test_split_factor):], labels[math.floor(len(labels)*train_test_split_factor):]
train_x, test_x = np.expand_dims(train_x, axis=-1), np.expand_dims(test_x, axis=-1) # for use with TimeDistributed
input_shape = train_x.shape
print(train_x.shape, train_y.shape, test_x.shape, test_y.shape)

train_x = train_x.reshape(train_x.shape[0], 7).astype('float32')
test_x = test_x.reshape(test_x.shape[0], 7).astype('float32')
print(train_x.shape, test_x.shape)

model = RandomForestRegressor(n_estimators=200, max_depth=10, random_state=0)
model.fit(train_x, train_y)

pred = model.predict(test_x[:64])

close_pred = np.reshape(pred, (-1, 1))
test_y_reshape = np.reshape(test_y[:64], (-1, 1))

days = np.arange(1, len(test_y_reshape)+1)

plt.plot(days, test_y_reshape, 'b', label='Actual line')
plt.plot(days, close_pred, 'r', label='Predicted line')
plt.title('RFRegressor')
plt.xlabel('Days')
plt.ylabel('Close Prices')
plt.legend()
plt.show()

from sklearn.metrics import mean_squared_error as MSE

def evaluate(model, test_features, test_labels):
    # Evaluate on the data that was actually passed in, instead of relying on globals
    predictions = model.predict(test_features)
    errors = abs(predictions - test_labels)
    mape = 100 * np.mean(errors / test_labels)
    accuracy = 100 - mape
    print('Model Performance')
    print('Average error: {:0.4f}.'.format(np.mean(errors)))
    print('RMSE: {:0.4f}'.format(math.sqrt(MSE(test_labels, predictions))))
    print('Accuracy = {:0.2f}%.'.format(accuracy))

    return accuracy

accuracy = evaluate(model, test_x, test_y)

# save model
import joblib
joblib.dump(model, 'weights/rf.sav')
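# reload the saved model and check that it reproduces the same predictions
# (a minimal sketch; 'weights/rf.sav' is the path used above)
loaded_model = joblib.load('weights/rf.sav')
assert np.allclose(loaded_model.predict(test_x[:64]), pred)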
rf_model.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Validation Playground # # **Watch** a [short tutorial video](https://greatexpectations.io/videos/getting_started/integrate_expectations) or **read** [the written tutorial](https://docs.greatexpectations.io/en/latest/tutorials/validate_data.html?utm_source=notebook&utm_medium=validate_data) # # #### This notebook assumes that you created at least one expectation suite in your project. # #### Here you will learn how to validate data loaded into a Pandas DataFrame against an expectation suite. # # # We'd love it if you **reach out for help on** the [**Great Expectations Slack Channel**](https://greatexpectations.io/slack) import json import great_expectations as ge import great_expectations.jupyter_ux from great_expectations.datasource.types import BatchKwargs from datetime import datetime # ## 1. Get a DataContext # This represents your **project** that you just created using `great_expectations init`. context = ge.data_context.DataContext() # ## 2. Choose an Expectation Suite # # List expectation suites that you created in your project context.list_expectation_suite_names() expectation_suite_name = # TODO: set to a name from the list above # ## 3. Load a batch of data you want to validate # # To learn more about `get_batch`, see [this tutorial](https://docs.greatexpectations.io/en/latest/tutorials/validate_data.html?utm_source=notebook&utm_medium=validate_data#load-a-batch-of-data-to-validate) # # list datasources of the type PandasDatasource in your project [datasource['name'] for datasource in context.list_datasources() if datasource['class_name'] == 'PandasDatasource'] datasource_name = # TODO: set to a datasource name from above # + # If you would like to validate a file on a filesystem: batch_kwargs = {'path': "YOUR_FILE_PATH", 'datasource': datasource_name} # If you already loaded the data into a Pandas Data Frame: batch_kwargs = {'dataset': "YOUR_DATAFRAME", 'datasource': datasource_name} batch = context.get_batch(batch_kwargs, expectation_suite_name) batch.head() # - # ## 4. Validate the batch with Validation Operators # # `Validation Operators` provide a convenient way to bundle the validation of # multiple expectation suites and the actions that should be taken after validation. # # When deploying Great Expectations in a **real data pipeline, you will typically discover these needs**: # # * validating a group of batches that are logically related # * validating a batch against several expectation suites such as using a tiered pattern like `warning` and `failure` # * doing something with the validation results (e.g., saving them for a later review, sending notifications in case of failures, etc.). # # [Read more about Validation Operators in the tutorial](https://docs.greatexpectations.io/en/latest/tutorials/validate_data.html?utm_source=notebook&utm_medium=validate_data#save-validation-results) # + # This is an example of invoking a validation operator that is configured by default in the great_expectations.yml file #Generate a run id, a timestamp, or a meaningful string that will help you refer to validation results. We recommend they be chronologically sortable. # Let's make a simple sortable timestamp. Note this could come from your pipeline runner (e.g., Airflow run id). 
run_id = datetime.utcnow().isoformat().replace(":", "") + "Z" results = context.run_validation_operator( "action_list_operator", assets_to_validate=[batch], run_id=run_id) # - # ## 5. View the Validation Results in Data Docs # # Let's now build and look at your Data Docs. These will now include an **data quality report** built from the `ValidationResults` you just created that helps you communicate about your data with both machines and humans. # # [Read more about Data Docs in the tutorial](https://docs.greatexpectations.io/en/latest/tutorials/validate_data.html?utm_source=notebook&utm_medium=validate_data#view-the-validation-results-in-data-docs) context.open_data_docs() # ## Congratulations! You ran Validations! # # ## Next steps: # # ### 1. Read about the typical workflow with Great Expectations: # # [typical workflow](https://docs.greatexpectations.io/en/latest/getting_started/typical_workflow.html?utm_source=notebook&utm_medium=validate_data#view-the-validation-results-in-data-docs) # # ### 2. Explore the documentation & community # # You are now among the elite data professionals who know how to build robust descriptions of your data and protections for pipelines and machine learning models. Join the [**Great Expectations Slack Channel**](https://greatexpectations.io/slack) to see how others are wielding these superpowers.
great_expectations/init_notebooks/pandas/validation_playground.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/debanga/DeepLearningNotebooks/blob/master/Recurrent_Neural_Networks_Introduction_to_LSTM.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + id="VGm0xBsNQBaP" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="065615ec-2e71-40e6-db38-20c249c6ceaf" # Recurrent Neural Networks - LSTM Introduction # Based on "https://www.kaggle.com/thebrownviking20/intro-to-recurrent-neural-networks-lstm-gru" import numpy as np import matplotlib.pyplot as plt import pandas as pd import math from sklearn.preprocessing import MinMaxScaler from keras.models import Sequential from keras.layers import Dense, LSTM, Dropout, GRU, Bidirectional from keras.optimizers import SGD from sklearn.metrics import mean_squared_error # + id="DPr_tfbUSblo" colab_type="code" colab={} # Some functions to help out with def plot_predictions(test,predicted): plt.plot(test, color='red',label='Real IBM Stock Price') plt.plot(predicted, color='blue',label='Predicted IBM Stock Price') plt.title('IBM Stock Price Prediction') plt.xlabel('Time') plt.ylabel('IBM Stock Price') plt.legend() plt.show() def return_rmse(test,predicted): rmse = math.sqrt(mean_squared_error(test, predicted)) print("The root mean squared error is {}.".format(rmse)) # + id="JLEZvhRNShWG" colab_type="code" outputId="68bfc371-b836-4ce2-acd6-412438297323" colab={"base_uri": "https://localhost:8080/", "height": 225} # Get data dataset = pd.read_csv('https://raw.githubusercontent.com/debanga/DeepLearningNotebooks/master/data/IBM_2006-01-01_to_2018-01-01.csv', index_col='Date', parse_dates=['Date']) dataset.head() # + id="1yRqxQu4V0fE" colab_type="code" colab={} # Checking for missing values training_set = dataset[:'2016'].iloc[:,1:2].values test_set = dataset['2017':].iloc[:,1:2].values # + id="0_7AdHKCXBiX" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 285} outputId="2440e278-2887-4e82-c87c-6b704cb7a40c" # We have chosen 'High' attribute for prices. 
Let's see what it looks like dataset["High"][:'2016'].plot(figsize=(16,4),legend=True) dataset["High"]['2017':].plot(figsize=(16,4),legend=True) plt.legend(['Training set (Before 2017)','Test set (2017 and beyond)']) plt.title('IBM stock price') plt.show() # + id="hADuoy39Y1yj" colab_type="code" colab={} # Scaling the training set sc = MinMaxScaler(feature_range=(0,1)) training_set_scaled = sc.fit_transform(training_set) # + id="TYT3i0mcY-yl" colab_type="code" colab={} # Since LSTMs store long term memory state, we create a data structure with 60 timesteps and 1 output # So for each element of training set, we have 60 previous training set elements X_train = [] y_train = [] for i in range(60,2769): X_train.append(training_set_scaled[i-60:i,0]) y_train.append(training_set_scaled[i,0]) X_train, y_train = np.array(X_train), np.array(y_train) # + id="Qav3dFvgZgs4" colab_type="code" colab={} # Reshaping X_train for efficient modelling X_train = np.reshape(X_train, (X_train.shape[0],X_train.shape[1],1)) # + id="ka2-BKRxayTM" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 230} outputId="2e13c95c-b5b1-498f-ccc6-656729f39003" '''The LSTM architecture ''' regressor = Sequential() # First LSTM layer with Dropout regularisation regressor.add(LSTM(units=50, return_sequences=True, input_shape=(X_train.shape[1],1))) regressor.add(Dropout(0.2)) # Second LSTM layer regressor.add(LSTM(units=50, return_sequences=True)) regressor.add(Dropout(0.2)) # Third LSTM layer regressor.add(LSTM(units=50, return_sequences=True)) regressor.add(Dropout(0.2)) # Fourth LSTM layer regressor.add(LSTM(units=50)) regressor.add(Dropout(0.2)) # The output layer regressor.add(Dense(units=1)) # + id="SIN2rRjYbMMS" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 474} outputId="55a8e395-0fec-4301-b015-d0f039e214ba" regressor.summary() # + id="o0iHZ841b_Dm" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="dfe31ea2-dab6-4aef-e753-8d56edbb8d89" # Compiling the RNN regressor.compile(optimizer='rmsprop',loss='mean_squared_error') # Fitting to the training set regressor.fit(X_train,y_train,epochs=50,batch_size=32) # + id="nKk53GCDfQqi" colab_type="code" colab={} # Now to get the test set ready in a similar way as the training set. 
# The following has been done so that the first 60 entries of the test set have 60 previous values, which is impossible to get unless we take the whole
# 'High' attribute data for processing
dataset_total = pd.concat((dataset["High"][:'2016'],dataset["High"]['2017':]),axis=0)
inputs = dataset_total[len(dataset_total)-len(test_set) - 60:].values
inputs = inputs.reshape(-1,1)
inputs = sc.transform(inputs)

# + id="4bi8qXBXfahD" colab_type="code" colab={}
# Preparing X_test and predicting the prices
X_test = []
for i in range(60,311):
    X_test.append(inputs[i-60:i,0])
X_test = np.array(X_test)
X_test = np.reshape(X_test, (X_test.shape[0],X_test.shape[1],1))
predicted_stock_price = regressor.predict(X_test)
predicted_stock_price = sc.inverse_transform(predicted_stock_price)

# + id="wLY_6kAQhDsR" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 295} outputId="21e67895-f7fa-47c7-b49a-784d91c999b2"
# Visualizing the results for LSTM
plot_predictions(test_set,predicted_stock_price)

# + id="9Hlo0fmahWQv" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="238f49b1-3ec8-4838-cb8a-af701239b36b"
# Evaluating our model
return_rmse(test_set,predicted_stock_price)

# + [markdown] id="AG2djbLyiIsu" colab_type="text"
# LSTM is not the only kind of unit that has taken the world of Deep Learning by a storm. We also have Gated Recurrent Units (GRUs). It is not known which is better, GRU or LSTM, because they have comparable performance. GRUs are easier to train than LSTMs.
#
# # Gated Recurrent Units
# In simple words, the GRU unit does not have to use a memory unit to control the flow of information like the LSTM unit. It can directly make use of all the hidden states without any control. GRUs have fewer parameters and thus may train a bit faster or need less data to generalize. But, with large data, the LSTMs with higher expressiveness may lead to better results.
#
# They are almost similar to LSTMs except that they have two gates: a reset gate and an update gate. The reset gate determines how to combine the new input with the previous memory, and the update gate determines how much of the previous state to keep. The update gate in a GRU does the job of the input and forget gates in an LSTM. A GRU has no second non-linearity before computing the output, and it has no output gate.
# # Source: Quora # + id="zOMYaZtiiLml" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="7f13d6aa-5f57-4727-ce2f-ec21ca78dae9" # The GRU architecture regressorGRU = Sequential() # First GRU layer with Dropout regularisation regressorGRU.add(GRU(units=50, return_sequences=True, input_shape=(X_train.shape[1],1), activation='tanh')) regressorGRU.add(Dropout(0.2)) # Second GRU layer regressorGRU.add(GRU(units=50, return_sequences=True, input_shape=(X_train.shape[1],1), activation='tanh')) regressorGRU.add(Dropout(0.2)) # Third GRU layer regressorGRU.add(GRU(units=50, return_sequences=True, input_shape=(X_train.shape[1],1), activation='tanh')) regressorGRU.add(Dropout(0.2)) # Fourth GRU layer regressorGRU.add(GRU(units=50, activation='tanh')) regressorGRU.add(Dropout(0.2)) # The output layer regressorGRU.add(Dense(units=1)) # Compiling the RNN regressorGRU.compile(optimizer=SGD(lr=0.01, decay=1e-7, momentum=0.9, nesterov=False),loss='mean_squared_error') # Fitting to the training set regressorGRU.fit(X_train,y_train,epochs=50,batch_size=150) # + id="sl1AOkpSoup3" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 474} outputId="7e8c1aa4-dbe7-4592-a274-7f1641a07554" regressorGRU.summary() # + id="MGTMhGElkQHB" colab_type="code" colab={} # Preparing X_test and predicting the prices X_test = [] for i in range(60,311): X_test.append(inputs[i-60:i,0]) X_test = np.array(X_test) X_test = np.reshape(X_test, (X_test.shape[0],X_test.shape[1],1)) GRU_predicted_stock_price = regressorGRU.predict(X_test) GRU_predicted_stock_price = sc.inverse_transform(GRU_predicted_stock_price) # + id="EiMxNnTpkVrg" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 295} outputId="bffa67d5-2b9a-4e51-fd2c-6dd8fdae3ca5" # Visualizing the results for GRU plot_predictions(test_set,GRU_predicted_stock_price) # + id="cuYnDrpMlfIb" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="356e5fa6-de12-4b2e-d15e-b4254a4bfc0b" # Evaluating GRU return_rmse(test_set,GRU_predicted_stock_price) # + [markdown] id="7O0Er1qCmOIV" colab_type="text" # The above models make use of test set so it is using last 60 true values for predicting the new value(I will call it a benchmark). This is why the error is so low. Strong models can bring similar results like above models for sequences too but they require more than just data which has previous values. In case of stocks, we need to know the sentiments of the market, the movement of other stocks and a lot more. So, don't expect a remotely accurate plot. # # We will generate a sequence using just initial 60 values instead of using last 60 values for every new prediction # + id="eARXqna8mOvy" colab_type="code" colab={} # Preparing sequence data initial_sequence = X_train[2708,:] sequence = [] for i in range(251): new_prediction = regressorGRU.predict(initial_sequence.reshape(initial_sequence.shape[1],initial_sequence.shape[0],1)) initial_sequence = initial_sequence[1:] initial_sequence = np.append(initial_sequence,new_prediction,axis=0) sequence.append(new_prediction) sequence = sc.inverse_transform(np.array(sequence).reshape(251,1)) # + id="t2Wd_Z_OnEWU" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 313} outputId="db7cf34b-b5b9-4d50-9626-c04b08e168b6" # Visualizing the sequence plot_predictions(test_set,sequence) # Evaluating the sequence return_rmse(test_set,sequence)
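# -

# + [markdown]
# The `Bidirectional` wrapper imported at the top of the notebook is never used. As a small illustration (not part of the original comparison), this is how the recurrent layers could be made bidirectional; training and evaluation would then proceed exactly as for the other regressors above.

# +
# A bidirectional LSTM variant (illustrative sketch)
regressorBiLSTM = Sequential()
regressorBiLSTM.add(Bidirectional(LSTM(units=50, return_sequences=True), input_shape=(X_train.shape[1],1)))
regressorBiLSTM.add(Dropout(0.2))
regressorBiLSTM.add(Bidirectional(LSTM(units=50)))
regressorBiLSTM.add(Dropout(0.2))
regressorBiLSTM.add(Dense(units=1))
regressorBiLSTM.compile(optimizer='rmsprop', loss='mean_squared_error')
regressorBiLSTM.summary()
# -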
Recurrent_Neural_Networks_Introduction_to_LSTM.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # --- # #### **Title**: Segments Element # # **Dependencies** Bokeh # # **Backends** [Bokeh](./Segments.ipynb), [Matplotlib](../matplotlib/Segments.ipynb) import numpy as np import holoviews as hv from holoviews import dim hv.extension('bokeh') # `Segments` visualizes a collection of line segments, each starting at a position (`x0`, `y0`) and ending at a position (`x1`, `y1`). To specify it, we hence need four key dimensions, listed in the order (`x0`, `y0`, `x1`, `y1`), and an arbitrary number of value dimensions as attributes to each line segment. # ##### Basic usage # Declare mock data: event = ['A', 'B'] data = dict( start=[np.datetime64('1999'), np.datetime64('2001')], end=[np.datetime64('2010'), np.datetime64('2020')], start_event = event, end_event = event ) # Define the `Segments`: seg = hv.Segments(data, [hv.Dimension('start', label='Year'), hv.Dimension('start_event', label='Event'), 'end', 'end_event']) # Display and style the Element: seg.opts(color='k', line_width=10) # ##### A fractal tree # + from functools import reduce def tree(N): """ Generates fractal tree up to branch N. """ # x0, y0, x1, y1, level branches = [(0, 0, 0, 1)] theta = np.pi/5 # branching angle r = 0.5 # length ratio between successive branches # Define function to grow successive branches given previous branch and branching angle angle = lambda b: np.arctan2(b[3]-b[1], b[2]-b[0]) length = lambda b: np.sqrt((b[3]-b[1])**2 + (b[2]-b[0])**2) grow = lambda b, ang: (b[2], b[3], b[2] + r*length(b)*np.cos(angle(b)+ang), b[3] + r*length(b)*np.sin(angle(b)+ang)) ctr = 1 while ctr<=N: yield branches ctr += 1 branches = [[grow(b, theta), grow(b, -theta)] for b in branches] branches = reduce(lambda i, j: i+j, branches) t = reduce(lambda i, j: i+j, tree(14)) data = np.array(t[1:]) # - # Declare a `Segments` Element and add an additional value dimension `c` that we can use for styling: s = hv.Segments(np.c_[data, np.arange(len(data))], ['x', 'y', 'x1', 'y1'], 'c') # Now, let's style the Element into a digital broccoli painting: s.opts(xaxis=None, yaxis=None, height=400, width=400,toolbar='above', color=np.log10(1+dim('c')), cmap='Greens', line_width=15) # ##### Cantor set # + def cantor(N): """ Generates a Cantor set up to iteration N, cutting out the middle 9th of each interval at each step. """ y = 0 intervals = [(0, 1, y)] while y<=N: yield intervals dx = (intervals[0][1]-intervals[0][0])/9*4 y += 1 intervals = [[(i[0], i[0]+dx, y), (i[1]-dx, i[1], y)] for i in intervals] intervals = reduce(lambda i, j: i+j, intervals) cl = reduce(lambda i, j: i+j, cantor(12)) x0, x1, y = zip(*cl) data = np.array(cl) # - s = hv.Segments((x0, y, x1, y, y), vdims=['c']) s.opts(xaxis=None, yaxis=None, height=160, width=500, toolbar='above', line_width=8, color=dim('c'), cmap='fire_r')
examples/reference/elements/bokeh/Segments.ipynb
# ##### Copyright 2021 Google LLC. # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # # place_number_puzzle # <table align="left"> # <td> # <a href="https://colab.research.google.com/github/google/or-tools/blob/master/examples/notebook/contrib/place_number_puzzle.ipynb"><img src="https://raw.githubusercontent.com/google/or-tools/master/tools/colab_32px.png"/>Run in Google Colab</a> # </td> # <td> # <a href="https://github.com/google/or-tools/blob/master/examples/contrib/place_number_puzzle.py"><img src="https://raw.githubusercontent.com/google/or-tools/master/tools/github_32px.png"/>View source on GitHub</a> # </td> # </table> # First, you must install [ortools](https://pypi.org/project/ortools/) package in this colab. # !pip install ortools # + # Copyright 2010 <NAME> <EMAIL> # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Place number puzzle Google CP Solver. http://ai.uwaterloo.ca/~vanbeek/Courses/Slides/introduction.pdf ''' Place numbers 1 through 8 on nodes - each number appears exactly once - no connected nodes have consecutive numbers 2 - 5 / | X | \ 1 - 3 - 6 - 8 \ | X | / 4 - 7 "" Compare with the following models: * MiniZinc: http://www.hakank.org/minizinc/place_number.mzn * Comet: http://www.hakank.org/comet/place_number_puzzle.co * ECLiPSe: http://www.hakank.org/eclipse/place_number_puzzle.ecl * SICStus Prolog: http://www.hakank.org/sicstus/place_number_puzzle.pl * Gecode: http://www.hakank.org/gecode/place_number_puzzle.cpp This model was created by <NAME> (<EMAIL>) Also see my other Google CP Solver models: http://www.hakank.org/google_or_tools/ """ import sys from ortools.constraint_solver import pywrapcp # Create the solver. 
solver = pywrapcp.Solver("Place number") # data m = 32 n = 8 # Note: this is 1-based for compatibility (and lazyness) graph = [[1, 2], [1, 3], [1, 4], [2, 1], [2, 3], [2, 5], [2, 6], [3, 2], [3, 4], [3, 6], [3, 7], [4, 1], [4, 3], [4, 6], [4, 7], [5, 2], [5, 3], [5, 6], [5, 8], [6, 2], [6, 3], [6, 4], [6, 5], [6, 7], [6, 8], [7, 3], [7, 4], [7, 6], [7, 8], [8, 5], [8, 6], [8, 7]] # declare variables x = [solver.IntVar(1, n, "x%i" % i) for i in range(n)] # # constraints # solver.Add(solver.AllDifferent(x)) for i in range(m): # Note: make 0-based solver.Add(abs(x[graph[i][0] - 1] - x[graph[i][1] - 1]) > 1) # symmetry breaking solver.Add(x[0] < x[n - 1]) # # solution and search # solution = solver.Assignment() solution.Add(x) collector = solver.AllSolutionCollector(solution) solver.Solve( solver.Phase(x, solver.CHOOSE_FIRST_UNBOUND, solver.ASSIGN_MIN_VALUE), [collector]) num_solutions = collector.SolutionCount() for s in range(num_solutions): print("x:", [collector.Value(s, x[i]) for i in range(len(x))]) print() print("num_solutions:", num_solutions) print("failures:", solver.Failures()) print("branches:", solver.Branches()) print("WallTime:", solver.WallTime()) print()
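# The same puzzle can also be stated with the newer CP-SAT solver that ships in the same
# ortools package. The sketch below is illustrative only and is not part of the original
# model; it reuses n, m and graph from above. Because AllDifferent already forbids equal
# values, excluding a difference of +1 or -1 between connected nodes is equivalent to
# requiring |difference| > 1.

from ortools.sat.python import cp_model

cp_sat_model = cp_model.CpModel()
y = [cp_sat_model.NewIntVar(1, n, "y%i" % i) for i in range(n)]
cp_sat_model.AddAllDifferent(y)
for i in range(m):
  a, b = graph[i][0] - 1, graph[i][1] - 1  # make 0-based, as above
  cp_sat_model.Add(y[a] - y[b] != 1)
  cp_sat_model.Add(y[a] - y[b] != -1)
cp_sat_model.Add(y[0] < y[n - 1])  # same symmetry breaking as above

cp_sat_solver = cp_model.CpSolver()
status = cp_sat_solver.Solve(cp_sat_model)
if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
  print("CP-SAT solution:", [cp_sat_solver.Value(v) for v in y])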
examples/notebook/contrib/place_number_puzzle.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: conda env tflearn # language: python # name: tflearn # --- # # <NAME> # # In this notebook, I'll build a character-wise RNN trained on <NAME>, one of my all-time favorite books. It'll be able to generate new text based on the text from the book. # # This network is based off of <NAME>'s [post on RNNs](http://karpathy.github.io/2015/05/21/rnn-effectiveness/) and [implementation in Torch](https://github.com/karpathy/char-rnn). Also, some information [here at r2rt](http://r2rt.com/recurrent-neural-networks-in-tensorflow-ii.html) and from [<NAME>](https://github.com/sherjilozair/char-rnn-tensorflow) on GitHub. Below is the general architecture of the character-wise RNN. # # <img src="assets/charseq.jpeg" width="500"> # + deletable=true editable=true import time from collections import namedtuple import numpy as np import tensorflow as tf # - # First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network. with open('anna.txt', 'r') as f: text=f.read() vocab = set(text) vocab_to_int = {c: i for i, c in enumerate(vocab)} int_to_vocab = dict(enumerate(vocab)) chars = np.array([vocab_to_int[c] for c in text], dtype=np.int32) # Let's check out the first 100 characters, make sure everything is peachy. According to the [American Book Review](http://americanbookreview.org/100bestlines.asp), this is the 6th best first line of a book ever. text[:100] # And we can see the characters encoded as integers. chars[:100] # ## Making training and validation batches # # Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text. # # Here I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches. # # The idea here is to make a 2D matrix where the number of rows is equal to the batch size. Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the `split_frac` keyword. This will keep 90% of the batches in the training set, the other 10% in the validation set. def split_data(chars, batch_size, num_steps, split_frac=0.9): """ Split character data into training and validation sets, inputs and targets for each set. 
Arguments --------- chars: character array batch_size: Size of examples in each of batch num_steps: Number of sequence steps to keep in the input and pass to the network split_frac: Fraction of batches to keep in the training set Returns train_x, train_y, val_x, val_y """ slice_size = batch_size * num_steps n_batches = int(len(chars) / slice_size) # Drop the last few characters to make only full batches x = chars[: n_batches*slice_size] y = chars[1: n_batches*slice_size + 1] # Split the data into batch_size slices, then stack them into a 2D matrix x = np.stack(np.split(x, batch_size)) y = np.stack(np.split(y, batch_size)) # Now x and y are arrays with dimensions batch_size x n_batches*num_steps # Split into training and validation sets, keep the virst split_frac batches for training split_idx = int(n_batches*split_frac) train_x, train_y= x[:, :split_idx*num_steps], y[:, :split_idx*num_steps] val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:] return train_x, train_y, val_x, val_y # Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps. train_x, train_y, val_x, val_y = split_data(chars, 10, 50) train_x.shape # Looking at the size of this array, we see that we have rows equal to the batch size. When we want to get a batch out of here, we can grab a subset of this array that contains all the rows but has a width equal to the number of steps in the sequence. The first batch looks like this: train_x[:,:50] # I'll write another function to grab batches out of the arrays made by `split_data`. Here each batch will be a sliding window on these arrays with size `batch_size X num_steps`. For example, if we want our network to train on a sequence of 100 characters, `num_steps = 100`. For the next batch, we'll shift this window the next sequence of `num_steps` characters. In this way we can feed batches to the network and the cell states will continue through on each batch. def get_batch(arrs, num_steps): batch_size, slice_size = arrs[0].shape n_batches = int(slice_size/num_steps) for b in range(n_batches): yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs] # ## Building the model # # Below is a function where I build the graph for the network. 
# + deletable=true editable=true def build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2, learning_rate=0.001, grad_clip=5, sampling=False): # When we're using this network for sampling later, we'll be passing in # one character at a time, so providing an option for that if sampling == True: batch_size, num_steps = 1, 1 tf.reset_default_graph() # Declare placeholders we'll feed into the graph inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs') targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets') # Keep probability placeholder for drop out layers keep_prob = tf.placeholder(tf.float32, name='keep_prob') # One-hot encoding the input and target characters x_one_hot = tf.one_hot(inputs, num_classes) y_one_hot = tf.one_hot(targets, num_classes) ### Build the RNN layers # Use a basic LSTM cell #lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size) lstm = tf.nn.rnn_cell.BasicLSTMCell(lstm_size) # Add dropout to the cell #drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob) drop = tf.nn.rnn_cell.DropoutWrapper(lstm, output_keep_prob=keep_prob) # Stack up multiple LSTM layers, for deep learning #cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers) cell = tf.nn.rnn_cell.MultiRNNCell([drop] * num_layers) initial_state = cell.zero_state(batch_size, tf.float32) ### Run the data through the RNN layers # This makes a list where each element is on step in the sequence #rnn_inputs = [tf.squeeze(i, squeeze_dims=[1]) for i in tf.split(x_one_hot, num_steps, 1)] rnn_inputs = [tf.squeeze(i, squeeze_dims=[1]) for i in tf.split(1, num_steps, x_one_hot)] # Run each sequence step through the RNN and collect the outputs #outputs, state = tf.contrib.rnn.static_rnn(cell, rnn_inputs, initial_state=initial_state) outputs, state = tf.nn.rnn(cell, rnn_inputs, initial_state=initial_state) final_state = state # Reshape output so it's a bunch of rows, one output row for each step for each batch #seq_output = tf.concat(outputs, axis=1) seq_output = tf.concat(1, outputs) output = tf.reshape(seq_output, [-1, lstm_size]) # Now connect the RNN putputs to a softmax layer with tf.variable_scope('softmax'): softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1)) softmax_b = tf.Variable(tf.zeros(num_classes)) # Since output is a bunch of rows of RNN cell outputs, logits will be a bunch # of rows of logit outputs, one for each step and batch logits = tf.matmul(output, softmax_w) + softmax_b # Use softmax to get the probabilities for predicted characters preds = tf.nn.softmax(logits, name='predictions') # Reshape the targets to match the logits y_reshaped = tf.reshape(y_one_hot, [-1, num_classes]) loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped) cost = tf.reduce_mean(loss) # Optimizer for training, using gradient clipping to control exploding gradients tvars = tf.trainable_variables() grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip) train_op = tf.train.AdamOptimizer(learning_rate) optimizer = train_op.apply_gradients(zip(grads, tvars)) # Export the nodes # NOTE: I'm using a namedtuple here because I think they are cool export_nodes = ['inputs', 'targets', 'initial_state', 'final_state', 'keep_prob', 'cost', 'preds', 'optimizer'] Graph = namedtuple('Graph', export_nodes) local_dict = locals() graph = Graph(*[local_dict[each] for each in export_nodes]) return graph # - # ## Hyperparameters # # Here I'm defining the hyperparameters for the network. 
# 
# * `batch_size` - Number of sequences running through the network in one pass.
# * `num_steps` - Number of characters in the sequence the network is trained on. Larger is typically better; the network will learn more long-range dependencies, but it takes longer to train. 100 is usually a good number here.
# * `lstm_size` - The number of units in the hidden layers.
# * `num_layers` - Number of hidden LSTM layers to use.
# * `learning_rate` - Learning rate for training.
# * `keep_prob` - The dropout keep probability when training. If your network is overfitting, try decreasing this.
# 
# Here's some good advice from <NAME> on training the network. I'm going to write it in here for your benefit, but also link to [where it originally came from](https://github.com/karpathy/char-rnn#tips-and-tricks).
# 
# > ## Tips and Tricks
# 
# >### Monitoring Validation Loss vs. Training Loss
# >If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:
# 
# > - If your training loss is much lower than validation loss then this means the network might be **overfitting**. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.
# > - If your training/validation loss are about equal then your model is **underfitting**. Increase the size of your model (either the number of layers or the raw number of neurons per layer).
# 
# > ### Approximate number of parameters
# 
# > The two most important parameters that control the model are `lstm_size` and `num_layers`. I would advise that you always use a `num_layers` of either 2 or 3. The `lstm_size` can be adjusted based on how much data you have. The two important quantities to keep track of here are:
# 
# > - The number of parameters in your model. This is printed when you start training.
# > - The size of your dataset. A 1MB file is approximately 1 million characters.
# 
# >These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:
# 
# > - I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make `lstm_size` larger.
# > - I have a 10MB dataset and I'm running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.
# 
# > ### Best models strategy
# 
# >The winning strategy to obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0 and 1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.
# 
# >It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.
# 
# >By the way, the size of your training and validation splits are also parameters.
Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative. # # + deletable=true editable=true batch_size = 100 num_steps = 100 lstm_size = 512 num_layers = 2 learning_rate = 0.001 keep_prob = 0.5 # - # ## Training # # Time for training which is pretty straightforward. Here I pass in some data, and get an LSTM state back. Then I pass that state back in to the network so the next batch can continue the state from the previous batch. And every so often (set by `save_every_n`) I calculate the validation loss and save a checkpoint. # # Here I'm saving checkpoints with the format # # `i{iteration number}_l{# hidden layer units}_v{validation loss}.ckpt` # + deletable=true editable=true epochs = 20 # Save every N iterations save_every_n = 200 train_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps) model = build_rnn(len(vocab), batch_size=batch_size, num_steps=num_steps, learning_rate=learning_rate, lstm_size=lstm_size, num_layers=num_layers) saver = tf.train.Saver(max_to_keep=100) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) # Use the line below to load a checkpoint and resume training #saver.restore(sess, 'checkpoints/______.ckpt') n_batches = int(train_x.shape[1]/num_steps) iterations = n_batches * epochs for e in range(epochs): # Train network new_state = sess.run(model.initial_state) loss = 0 for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1): iteration = e*n_batches + b start = time.time() feed = {model.inputs: x, model.targets: y, model.keep_prob: keep_prob, model.initial_state: new_state} batch_loss, new_state, _ = sess.run([model.cost, model.final_state, model.optimizer], feed_dict=feed) loss += batch_loss end = time.time() print('Epoch {}/{} '.format(e+1, epochs), 'Iteration {}/{}'.format(iteration, iterations), 'Training loss: {:.4f}'.format(loss/b), '{:.4f} sec/batch'.format((end-start))) if (iteration%save_every_n == 0) or (iteration == iterations): # Check performance, notice dropout has been set to 1 val_loss = [] new_state = sess.run(model.initial_state) for x, y in get_batch([val_x, val_y], num_steps): feed = {model.inputs: x, model.targets: y, model.keep_prob: 1., model.initial_state: new_state} batch_loss, new_state = sess.run([model.cost, model.final_state], feed_dict=feed) val_loss.append(batch_loss) print('Validation loss:', np.mean(val_loss), 'Saving checkpoint!') saver.save(sess, "checkpoints/i{}_l{}_v{:.3f}.ckpt".format(iteration, lstm_size, np.mean(val_loss))) # - # #### Saved checkpoints # # Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables # + deletable=true editable=true tf.train.get_checkpoint_state('checkpoints') # - # ## Sampling # # Now that the network is trained, we'll can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one, to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that. # # The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters. 
# # def pick_top_n(preds, vocab_size, top_n=5): p = np.squeeze(preds) p[np.argsort(p)[:-top_n]] = 0 p = p / np.sum(p) c = np.random.choice(vocab_size, 1, p=p)[0] return c def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "): samples = [c for c in prime] model = build_rnn(vocab_size, lstm_size=lstm_size, sampling=True) saver = tf.train.Saver() with tf.Session() as sess: saver.restore(sess, checkpoint) new_state = sess.run(model.initial_state) for c in prime: x = np.zeros((1, 1)) x[0,0] = vocab_to_int[c] feed = {model.inputs: x, model.keep_prob: 1., model.initial_state: new_state} preds, new_state = sess.run([model.preds, model.final_state], feed_dict=feed) c = pick_top_n(preds, len(vocab)) samples.append(int_to_vocab[c]) for i in range(n_samples): x[0,0] = c feed = {model.inputs: x, model.keep_prob: 1., model.initial_state: new_state} preds, new_state = sess.run([model.preds, model.final_state], feed_dict=feed) c = pick_top_n(preds, len(vocab)) samples.append(int_to_vocab[c]) return ''.join(samples) # Here, pass in the path to a checkpoint and sample from the network. checkpoint = "checkpoints/____.ckpt" samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far") print(samp)
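# As a variation on the top-N trick above, a sampling "temperature" can be applied to the predicted distribution before drawing a character: temperatures below 1 make the generated text more conservative, values above 1 make it more surprising. This is an illustrative sketch, not part of the original notebook; it assumes the same probability arrays (`preds`) produced by the network in `sample` above.

def pick_with_temperature(preds, vocab_size, temperature=0.5):
    p = np.squeeze(preds).astype(np.float64)
    p = p**(1.0/temperature)   # sharpen (<1) or flatten (>1) the distribution
    p = p / np.sum(p)          # renormalize to a valid probability distribution
    return np.random.choice(vocab_size, 1, p=p)[0]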
intro-to-rnns/.ipynb_checkpoints/Anna KaRNNa-checkpoint.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Part 5 - Welcome to the Sandbox
# 
# In the last tutorials, we've been initializing our hook and all of our workers by hand every time. This can be a bit annoying when you're just playing around / learning about the interfaces. So, from here on out we'll be creating all these same variables using a special convenience function.

import torch
import syft as sy
sy.create_sandbox(globals())

# ### What does the sandbox give us?
# 
# As you can see above, we created several virtual workers and loaded lots of test datasets, distributing them across the workers so that we can practice privacy-preserving techniques such as Federated Learning.
# 
# We created six workers...

workers

# We also populated a bunch of global variables that we can use right away!

hook

bob

# ## Part 1: Worker Search Functionality
# 
# One important aspect of remote data science is that we want the ability to search for datasets on a remote machine. Think of a research lab wanting to query a hospital for, say, a "radio" dataset.

torch.Tensor([1,2,3,4,5])

x = torch.tensor([1,2,3,4,5]).tag("#fun", "#boston", "#housing").describe("The input datapoints to the boston housing dataset.")
y = torch.tensor([1,2,3,4,5]).tag("#fun", "#boston", "#housing").describe("The input datapoints to the boston housing dataset.")
z = torch.tensor([1,2,3,4,5]).tag("#fun", "#mnist",).describe("The images in the MNIST training dataset.")

x

# +
x = x.send(bob)
y = y.send(bob)
z = z.send(bob)

# this searches for an exact match within a tag or description
results = bob.search(["#boston", "#housing"])
# -

results

print(results[0].description)

# ## Part 2: Virtual Grid
# 
# A Grid is simply a collection of workers which gives you some convenience functions for when you want to put together a dataset.

grid = sy.VirtualGrid(*workers)

results, tag_ctr = grid.search("#boston")

boston_data, _ = grid.search("#boston","#data")

boston_target, _ = grid.search("#boston","#target")
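# Since the pointers returned by a worker search behave like ordinary PySyft pointers, they can be used for remote computation directly. The lines below are an illustrative sketch rather than part of the original tutorial; they assume the two tensors tagged `#boston`/`#housing` that we sent to Bob earlier are still on his worker and that `bob.search` returns pointers to them.

ptrs = bob.search(["#boston", "#housing"])
remote_sum = ptrs[0] + ptrs[1]  # the addition is executed on Bob's machine
remote_sum.get()                # bring the result back to the local worker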
examples/tutorials/translations/chinese/Part 05 - Welcome to the Sandbox.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + [markdown] colab_type="text" id="5hIbr52I7Z7U" # Deep Learning # ============= # # Assignment 1 # ------------ # # The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later. # # This notebook uses the [notMNIST](http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html) dataset to be used with python experiments. This dataset is designed to look like the classic [MNIST](http://yann.lecun.com/exdb/mnist/) dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST. # + cellView="both" colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="apJbCsBHl-2A" # These are all the modules we'll be using later. Make sure you can import them # before proceeding further. from __future__ import print_function import matplotlib.pyplot as plt import numpy as np import os import sys import tarfile from IPython.display import display, Image from scipy import ndimage from sklearn.linear_model import LogisticRegression from six.moves.urllib.request import urlretrieve from six.moves import cPickle as pickle # Config the matlotlib backend as plotting inline in IPython # %matplotlib inline # + [markdown] colab_type="text" id="jNWGtZaXn-5j" # First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the testset 19000 labelled examples. Given these sizes, it should be possible to train models quickly on any machine. # + cellView="both" colab={"autoexec": {"startup": false, "wait_interval": 0}, "output_extras": [{"item_id": 1}]} colab_type="code" executionInfo={"elapsed": 186058, "status": "ok", "timestamp": 1444485672507, "user": {"color": "#1FA15D", "displayName": "<NAME>", "isAnonymous": false, "isMe": true, "permissionId": "05076109866853157986", "photoUrl": "//lh6.googleusercontent.com/-cCJa7dTDcgQ/AAAAAAAAAAI/AAAAAAAACgw/r2EZ_8oYer4/s50-c-k-no/photo.jpg", "sessionId": "2a0a5e044bb03b66", "userId": "102167687554210253930"}, "user_tz": 420} id="EYRJ4ICW6-da" outputId="0d0f85df-155f-4a89-8e7e-ee32df36ec8d" url = 'http://commondatastorage.googleapis.com/books1000/' last_percent_reported = None def download_progress_hook(count, blockSize, totalSize): """A hook to report the progress of a download. This is mostly intended for users with slow internet connections. Reports every 1% change in download progress. 
""" global last_percent_reported percent = int(count * blockSize * 100 / totalSize) if last_percent_reported != percent: if percent % 5 == 0: sys.stdout.write("%s%%" % percent) sys.stdout.flush() else: sys.stdout.write(".") sys.stdout.flush() last_percent_reported = percent def maybe_download(filename, expected_bytes, force=False): """Download a file if not present, and make sure it's the right size.""" if force or not os.path.exists(filename): print('Attempting to download:', filename) filename, _ = urlretrieve(url + filename, filename, reporthook=download_progress_hook) print('\nDownload Complete!') statinfo = os.stat(filename) if statinfo.st_size == expected_bytes: print('Found and verified', filename) else: raise Exception( 'Failed to verify ' + filename + '. Can you get to it with a browser?') return filename train_filename = maybe_download('notMNIST_large.tar.gz', 247336696) test_filename = maybe_download('notMNIST_small.tar.gz', 8458043) # + [markdown] colab_type="text" id="cC3p0oEyF8QT" # Extract the dataset from the compressed .tar.gz file. # This should give you a set of directories, labelled A through J. # + cellView="both" colab={"autoexec": {"startup": false, "wait_interval": 0}, "output_extras": [{"item_id": 1}]} colab_type="code" executionInfo={"elapsed": 186055, "status": "ok", "timestamp": 1444485672525, "user": {"color": "#1FA15D", "displayName": "<NAME>", "isAnonymous": false, "isMe": true, "permissionId": "05076109866853157986", "photoUrl": "//lh6.googleusercontent.com/-cCJa7dTDcgQ/AAAAAAAAAAI/AAAAAAAACgw/r2EZ_8oYer4/s50-c-k-no/photo.jpg", "sessionId": "2a0a5e044bb03b66", "userId": "102167687554210253930"}, "user_tz": 420} id="H8CBE-WZ8nmj" outputId="ef6c790c-2513-4b09-962e-27c79390c762" num_classes = 10 np.random.seed(133) def maybe_extract(filename, force=False): root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz if os.path.isdir(root) and not force: # You may override by setting force=True. print('%s already present - Skipping extraction of %s.' % (root, filename)) else: print('Extracting data for %s. This may take a while. Please wait.' % root) tar = tarfile.open(filename) sys.stdout.flush() tar.extractall() tar.close() data_folders = [ os.path.join(root, d) for d in sorted(os.listdir(root)) if os.path.isdir(os.path.join(root, d))] if len(data_folders) != num_classes: raise Exception( 'Expected %d folders, one per class. Found %d instead.' % ( num_classes, len(data_folders))) print(data_folders) return data_folders train_folders = maybe_extract(train_filename) test_folders = maybe_extract(test_filename) # + [markdown] colab_type="text" id="4riXK3IoHgx6" # --- # Problem 1 # --------- # # Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint: you can use the package IPython.display. # # --- # + image_num = 133 # Select a number globally def display_sample_images(folder, image_num): """Display a sample image from each character A through J""" image_files = os.listdir(folder) image = image_files[image_num] image_file = os.path.join(folder, image) print('A sample of image from',folder,'\r') display(Image(image_file)) for folder in train_folders: display_sample_images(folder, image_num) # + [markdown] colab_type="text" id="PBdkjESPK8tw" # Now let's load the data in a more manageable format. 
Since, depending on your computer setup you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size. # # We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road. # # A few images might not be readable, we'll just skip them. # + cellView="both" colab={"autoexec": {"startup": false, "wait_interval": 0}, "output_extras": [{"item_id": 30}]} colab_type="code" executionInfo={"elapsed": 399874, "status": "ok", "timestamp": 1444485886378, "user": {"color": "#1FA15D", "displayName": "<NAME>", "isAnonymous": false, "isMe": true, "permissionId": "05076109866853157986", "photoUrl": "//lh6.googleusercontent.com/-cCJa7dTDcgQ/AAAAAAAAAAI/AAAAAAAACgw/r2EZ_8oYer4/s50-c-k-no/photo.jpg", "sessionId": "2a0a5e044bb03b66", "userId": "102167687554210253930"}, "user_tz": 420} id="h7q0XhG3MJdf" outputId="92c391bb-86ff-431d-9ada-315568a19e59" image_size = 28 # Pixel width and height. pixel_depth = 255.0 # Number of levels per pixel. def load_letter(folder, min_num_images): """Load the data for a single letter label.""" image_files = os.listdir(folder) dataset = np.ndarray(shape=(len(image_files), image_size, image_size), dtype=np.float32) print(folder) num_images = 0 for image in image_files: image_file = os.path.join(folder, image) try: image_data = (ndimage.imread(image_file).astype(float) - pixel_depth / 2) / pixel_depth if image_data.shape != (image_size, image_size): raise Exception('Unexpected image shape: %s' % str(image_data.shape)) dataset[num_images, :, :] = image_data num_images = num_images + 1 except IOError as e: print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.') dataset = dataset[0:num_images, :, :] if num_images < min_num_images: raise Exception('Many fewer images than expected: %d < %d' % (num_images, min_num_images)) print('Full dataset tensor:', dataset.shape) print('Mean:', np.mean(dataset)) print('Standard deviation:', np.std(dataset)) return dataset def maybe_pickle(data_folders, min_num_images_per_class, force=False): dataset_names = [] for folder in data_folders: set_filename = folder + '.pickle' dataset_names.append(set_filename) if os.path.exists(set_filename) and not force: # You may override by setting force=True. print('%s already present - Skipping pickling.' % set_filename) else: print('Pickling %s.' % set_filename) dataset = load_letter(folder, min_num_images_per_class) try: with open(set_filename, 'wb') as f: pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL) except Exception as e: print('Unable to save data to', set_filename, ':', e) return dataset_names train_datasets = maybe_pickle(train_folders, 45000) test_datasets = maybe_pickle(test_folders, 1800) # + [markdown] colab_type="text" id="vUdbskYE2d87" # --- # Problem 2 # --------- # # Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint: you can use matplotlib.pyplot. 
# # --- # + def read_from_pickle(filename): with open(filename, 'rb') as f: dataset = pickle.load(f) return dataset plt.rcParams['figure.figsize'] = (15.0, 15.0) f, ax = plt.subplots(nrows=1, ncols=10) for i, filename in enumerate(train_datasets): image_slice = read_from_pickle(filename)[image_num, :, :] ax[i].axis('off') ax[i].set_title(filename[15], loc='center') ax[i].imshow(image_slice) del image_slice # + [markdown] colab_type="text" id="cYznx5jUwzoO" # --- # Problem 3 # --------- # Another check: we expect the data to be balanced across classes. Verify that. # # --- # + for i, filename in enumerate(train_datasets): image_dataset = read_from_pickle(filename) image_shape = image_dataset.shape[0] print('Dataset of', filename[15], 'contains', image_shape, 'images.') del image_dataset, image_shape # + [markdown] colab_type="text" id="LA7M7K22ynCt" # Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune `train_size` as needed. The labels will be stored into a separate array of integers 0 through 9. # # Also create a validation dataset for hyperparameter tuning. # + cellView="both" colab={"autoexec": {"startup": false, "wait_interval": 0}, "output_extras": [{"item_id": 1}]} colab_type="code" executionInfo={"elapsed": 411281, "status": "ok", "timestamp": 1444485897869, "user": {"color": "#1FA15D", "displayName": "<NAME>", "isAnonymous": false, "isMe": true, "permissionId": "05076109866853157986", "photoUrl": "//lh6.googleusercontent.com/-cCJa7dTDcgQ/AAAAAAAAAAI/AAAAAAAACgw/r2EZ_8oYer4/s50-c-k-no/photo.jpg", "sessionId": "2a0a5e044bb03b66", "userId": "102167687554210253930"}, "user_tz": 420} id="s3mWgZLpyuzq" outputId="8af66da6-902d-4719-bedc-7c9fb7ae7948" def make_arrays(nb_rows, img_size): if nb_rows: dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32) labels = np.ndarray(nb_rows, dtype=np.int32) else: dataset, labels = None, None return dataset, labels def merge_datasets(pickle_files, train_size, valid_size=0): num_classes = len(pickle_files) valid_dataset, valid_labels = make_arrays(valid_size, image_size) train_dataset, train_labels = make_arrays(train_size, image_size) vsize_per_class = valid_size // num_classes tsize_per_class = train_size // num_classes start_v, start_t = 0, 0 end_v, end_t = vsize_per_class, tsize_per_class end_l = vsize_per_class+tsize_per_class for label, pickle_file in enumerate(pickle_files): try: with open(pickle_file, 'rb') as f: letter_set = pickle.load(f) # let's shuffle the letters to have random validation and training set np.random.shuffle(letter_set) if valid_dataset is not None: valid_letter = letter_set[:vsize_per_class, :, :] valid_dataset[start_v:end_v, :, :] = valid_letter valid_labels[start_v:end_v] = label start_v += vsize_per_class end_v += vsize_per_class train_letter = letter_set[vsize_per_class:end_l, :, :] train_dataset[start_t:end_t, :, :] = train_letter train_labels[start_t:end_t] = label start_t += tsize_per_class end_t += tsize_per_class except Exception as e: print('Unable to process data from', pickle_file, ':', e) raise return valid_dataset, valid_labels, train_dataset, train_labels train_size = 20000 valid_size = 10000 test_size = 10000 valid_dataset, valid_labels, train_dataset, train_labels = merge_datasets( train_datasets, train_size, valid_size) _, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size) print('Training:', train_dataset.shape, train_labels.shape) print('Validation:', valid_dataset.shape, 
valid_labels.shape) print('Testing:', test_dataset.shape, test_labels.shape) # + [markdown] colab_type="text" id="GPTCnjIcyuKN" # Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match. # + cellView="both" colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="6WZ2l2tN2zOL" def randomize(dataset, labels): permutation = np.random.permutation(labels.shape[0]) shuffled_dataset = dataset[permutation,:,:] shuffled_labels = labels[permutation] return shuffled_dataset, shuffled_labels train_dataset, train_labels = randomize(train_dataset, train_labels) test_dataset, test_labels = randomize(test_dataset, test_labels) valid_dataset, valid_labels = randomize(valid_dataset, valid_labels) # + [markdown] colab_type="text" id="puDUTe6t6USl" # --- # Problem 4 # --------- # Convince yourself that the data is still good after shuffling! # # --- # + f, ax = plt.subplots(nrows=1, ncols=10) for i, j in enumerate(np.random.randint(0, train_size, 10)): image_slice = train_dataset[j, :, :] image_label = train_labels[j] ax[i].axis('off') title = '#' + str(j) + ': ' + chr(image_label+65) ax[i].set_title(title, loc='center') ax[i].imshow(image_slice) del image_slice # + [markdown] colab_type="text" id="tIQJaJuwg5Hw" # Finally, let's save the data for later reuse: # + cellView="both" colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="QiR_rETzem6C" pickle_file = 'notMNIST.pickle' try: f = open(pickle_file, 'wb') save = { 'train_dataset': train_dataset, 'train_labels': train_labels, 'valid_dataset': valid_dataset, 'valid_labels': valid_labels, 'test_dataset': test_dataset, 'test_labels': test_labels, } pickle.dump(save, f, pickle.HIGHEST_PROTOCOL) f.close() except Exception as e: print('Unable to save data to', pickle_file, ':', e) raise # + cellView="both" colab={"autoexec": {"startup": false, "wait_interval": 0}, "output_extras": [{"item_id": 1}]} colab_type="code" executionInfo={"elapsed": 413065, "status": "ok", "timestamp": 1444485899688, "user": {"color": "#1FA15D", "displayName": "<NAME>", "isAnonymous": false, "isMe": true, "permissionId": "05076109866853157986", "photoUrl": "//lh6.googleusercontent.com/-cCJa7dTDcgQ/AAAAAAAAAAI/AAAAAAAACgw/r2EZ_8oYer4/s50-c-k-no/photo.jpg", "sessionId": "2a0a5e044bb03b66", "userId": "102167687554210253930"}, "user_tz": 420} id="hQbLjrW_iT39" outputId="b440efc6-5ee1-4cbc-d02d-93db44ebd956" statinfo = os.stat(pickle_file) print('Compressed pickle size:', statinfo.st_size) # + [markdown] colab_type="text" id="gE_cRAQB33lk" # --- # Problem 5 # --------- # # By construction, this dataset might contain a lot of overlapping samples, including training data that's also contained in the validation and test set! Overlap between training and test can skew the results if you expect to use your model in an environment where there is never an overlap, but are actually ok if you expect to see training samples recur when you use it. # Measure how much overlap there is between training, validation and test samples. # # Optional questions: # - What about near duplicates between datasets? (images that are almost identical) # - Create a sanitized validation and test set, and compare your accuracy on those in subsequent assignments. 
# --- # + train_r = train_dataset.reshape(train_dataset.shape[0],-1) train_idx = np.lexsort(train_r.T) train_dataset_sanitized = train_dataset[train_idx][np.append(True,(np.diff(train_r[train_idx],axis=0)!=0).any(1))] train_labels_sanitized = train_labels[train_idx][np.append(True,(np.diff(train_r[train_idx],axis=0)!=0).any(1))] valid_r = valid_dataset.reshape(valid_dataset.shape[0],-1) valid_idx = np.lexsort(valid_r.T) valid_dataset_sanitized = valid_dataset[valid_idx][np.append(True,(np.diff(valid_r[valid_idx],axis=0)!=0).any(1))] valid_labels_sanitized = valid_labels[valid_idx][np.append(True,(np.diff(valid_r[valid_idx],axis=0)!=0).any(1))] test_r = test_dataset.reshape(test_dataset.shape[0],-1) test_idx = np.lexsort(test_r.T) test_dataset_sanitized = test_dataset[test_idx][np.append(True,(np.diff(test_r[test_idx],axis=0)!=0).any(1))] test_labels_sanitized = test_labels[test_idx][np.append(True,(np.diff(test_r[test_idx],axis=0)!=0).any(1))] del train_r, valid_r, test_r print('Training dataset has', train_dataset_sanitized.shape[0],'unique images.') print('Sanitized training dataset has', train_dataset_sanitized.shape[0],'images.\n') print('Validation dataset has', valid_dataset_sanitized.shape[0],'unique images.') print('Test dataset has', test_dataset_sanitized.shape[0],'unique images.\n') train_r = train_dataset_sanitized.reshape(train_dataset_sanitized.shape[0],-1) valid_r = valid_dataset_sanitized.reshape(valid_dataset_sanitized.shape[0],-1) test_r = test_dataset_sanitized.reshape(test_dataset_sanitized.shape[0],-1) valid_dup = [] test_dup = [] train_r = {tuple(row):i for i,row in enumerate(train_r)} for i,row in enumerate(valid_r): if tuple(row) in train_r: valid_dup.append(i) for i,row in enumerate(test_r): if tuple(row) in train_r: test_dup.append(i) print('Validation dataset has', len(valid_dup), 'duplicate images to training dataset.') print('Test dataset has', len(test_dup), 'duplicate images to training dataset.\n') valid_dataset_sanitized = np.delete(valid_dataset_sanitized, np.asarray(valid_dup), 0) valid_labels_sanitized = np.delete(valid_labels_sanitized, np.asarray(valid_dup), 0) test_dataset_sanitized = np.delete(test_dataset_sanitized, np.asarray(test_dup), 0) test_labels_sanitized = np.delete(test_labels_sanitized, np.asarray(test_dup), 0) print('Sanitized validation dataset has', valid_dataset_sanitized.shape[0],'images.') print('Sanitized test dataset has', test_dataset_sanitized.shape[0],'images.') # + pickle_file = 'notMNIST_sanitized.pickle' try: f = open(pickle_file, 'wb') save = { 'train_dataset': train_dataset_sanitized, 'train_labels': train_labels_sanitized, 'valid_dataset': valid_dataset_sanitized, 'valid_labels': valid_labels_sanitized, 'test_dataset': test_dataset_sanitized, 'test_labels': test_labels_sanitized, } pickle.dump(save, f, pickle.HIGHEST_PROTOCOL) f.close() except Exception as e: print('Unable to save data to', pickle_file, ':', e) raise # + [markdown] colab_type="text" id="L8oww1s4JMQx" # --- # Problem 6 # --------- # # Let's get an idea of what an off-the-shelf classifier can give you on this data. It's always good to check that there is something to learn, and that it's a problem that is not so trivial that a canned solution solves it. # # Train a simple model on this data using 50, 100, 1000 and 5000 training samples. Hint: you can use the LogisticRegression model from sklearn.linear_model. # # Optional question: train an off-the-shelf model on all the data! 
# 
# 
# ---

# +
from sklearn.metrics import classification_report, confusion_matrix

def train_predict(clf, n_data, train_data, train_label, test_data, test_label):
    # Fit on the first n_data samples of the dataset that was actually passed in
    clf.fit(train_data[:n_data,:,:].reshape(n_data,-1), train_label[:n_data])
    # Predict
    expected = test_label
    predicted = clf.predict(test_data.reshape(test_data.shape[0],-1))
    # Print Results
    print('Classification Report of',n_data,'training samples:\n', classification_report(expected, predicted))
    print('Confusion Matrix of',n_data,'training samples:\n', confusion_matrix(expected, predicted))

# Create a Logistic Regression Classifier
clf = LogisticRegression(penalty='l2', tol=0.0001, C=1.0, random_state=133,
                         solver='sag', max_iter=100, multi_class='ovr', verbose=0, n_jobs=4)

train_predict(clf, 50, train_dataset, train_labels, test_dataset, test_labels)
train_predict(clf, 100, train_dataset, train_labels, test_dataset, test_labels)
train_predict(clf, 1000, train_dataset, train_labels, test_dataset, test_labels)
train_predict(clf, 5000, train_dataset, train_labels, test_dataset, test_labels)
train_predict(clf, 20000, train_dataset, train_labels, test_dataset, test_labels)

# +
# Train and predict sanitized datasets
train_predict(clf, train_dataset_sanitized.shape[0], train_dataset_sanitized, train_labels_sanitized,
              test_dataset_sanitized, test_labels_sanitized)
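# For a single-number summary to complement the per-class reports above, plain accuracy on the sanitized split can be computed with `clf.score`. A short sketch reusing the classifier and the sanitized arrays defined earlier:

# +
n_sanitized = train_dataset_sanitized.shape[0]
clf.fit(train_dataset_sanitized.reshape(n_sanitized, -1), train_labels_sanitized)
accuracy = clf.score(test_dataset_sanitized.reshape(test_dataset_sanitized.shape[0], -1),
                     test_labels_sanitized)
print('Sanitized test accuracy: %.3f' % accuracy)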
1_notmnist.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Task 1 # # Test the hypothesis that the delay is from Normal distribution. and that **mean** of the delay is 0. Be careful about the outliers. # ***Strategy*** # - Look into the arr_delay and determine the distribution # - Try using descriptive statistics and LOF as a means to treat the outliers # --- # **INPUT**: data with nulls removed # --- #import packages import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns import warnings warnings.filterwarnings('ignore') df = pd.read_csv("flight_sample_small.csv") df = pd.read_csv("flight_sample_large.csv") # Looking at the arrival delay descriptive statistics df.arr_delay.describe() # Let's look at a graph sns.set(rc={'figure.figsize':(11.7,8.27)}) sns.distplot(df.arr_delay).set_title("Delays (no outlier treatment)") # From the graph we can see there is a large pull of outliers we will consider 2 options for outlier treatment # 1. Descriptive Statistics # 2. LOF # *Begining with Descriptive Statistics:* # - these state anything outside of 1.5XIQR is considered an outlier # - the issue lies in that the outliers are influincing the descriptive statistics #find the IQR of our data IQR = df.arr_delay.describe()[6] - df.arr_delay.describe()[4] min_delay = df.arr_delay.describe()[1]-1.5*IQR max_delay = df.arr_delay.describe()[1]+1.5*IQR print(min_delay, max_delay) # we will filter our data to only include the delays within this range df_iqr = df[df.arr_delay <= max_delay] df_iqr = df_iqr[df_iqr.arr_delay >= min_delay] df_iqr.arr_delay.describe() #determine how much of the data was treated as outliers print("The percentage of observations treated as outliers is: ",(df.shape[0]-df_iqr.shape[0])/df.shape[0]*100) #view as a plot sns.distplot(df_iqr.arr_delay, bins=60).set_title("Delays (IQR Treatment)") df_iqr.to_csv("iqr.csv", index=False) df_iqr.arr_delay.to_csv("iqr_taskone.csv", index=False) # *Local Outlier Factor* # - use unsupervised learning to determine the outliers from sklearn.neighbors import LocalOutlierFactor X = df.arr_delay.values.reshape(-1,1) clf = LocalOutlierFactor(n_neighbors=35) clf.fit(X) X_scores = clf.negative_outlier_factor_ def check_lof_drop (col1, col2): """Used to determine if the negative outlier factor is within the threshold""" threshold = -1.25 if col2 < threshold: return 1 return 0 # + #determine the boundaries on either side test = pd.DataFrame(X, X_scores) test.reset_index(level=0, inplace=True) test = test.rename(columns = {'index' : 'negative_outlier_factor', 0 : 'arr_delay'}) test["drop_lof"] = test.apply(lambda x: check_lof_drop(x.arr_delay, x.negative_outlier_factor), axis=1) # - df_LOF = df[df.arr_delay <= 266] df_LOF = df_LOF[df_LOF.arr_delay >= -58] df_LOF.arr_delay.describe() #determine how much of the data was treated as outliers print("The percentage of observations treated as outliers is: ",(df.shape[0]-df_LOF.shape[0])/df.shape[0]*100) sns.distplot(df_LOF.arr_delay).set_title("Delays (LOF)") df_LOF.to_csv("lof.csv", index=False) df_LOF.arr_delay.to_csv("lof_taskone.csv", index=False)
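# With the outliers handled, the hypothesis tests stated at the top of this task can be run on the treated data. The cell below is one reasonable sketch (it assumes the `df_iqr` frame built above): D'Agostino's normality test for the shape of the distribution, and a one-sample t-test for a mean of 0.

from scipy import stats

norm_stat, norm_p = stats.normaltest(df_iqr.arr_delay)
t_stat, t_p = stats.ttest_1samp(df_iqr.arr_delay, 0)
print("Normality test p-value:", norm_p)
print("One-sample t-test (mean = 0) p-value:", t_p)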
Notebooks/2_Task_1.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Qiskit Runtime # Qiskit Runtime is a new architecture offered by IBM Quantum that streamlines computations requiring many iterations. These experiments will execute significantly faster within this improved hybrid quantum/classical process. # # Using Qiskit Runtime, for example, a research team at IBM Quantum was able to achieve 120x speed # up in their lithium hydride simulation (link to come). # # Qiskit Runtime allows authorized users to upload their Qiskit quantum programs for themselves or # others to use. A Qiskit quantum program, also called a Qiskit runtime program, is a piece of Python code that takes certain inputs, performs # quantum and maybe classical computation, and returns the processing results. The same or other # authorized users can then invoke these quantum programs by simply passing in the required input parameters. # <div class="alert alert-block alert-info"> # <b>Note:</b> Qiskit Runtime is only available to select IBM Quantum providers. You can use the `has_service()` method to check if a provider has access: # </div> # + from qiskit import IBMQ IBMQ.load_account() provider = IBMQ.get_provider(project='qiskit-runtime') # Change this to your provider. can_use_runtime = provider.has_service('runtime') # - # # If you don't have an IBM Quantum account, you can sign up for one on the [IBM Quantum](https://quantum-computing.ibm.com/) page. # ## Listing programs <a name='listing_program'> # The `provider.runtime` object is an instance of the [`IBMRuntimeService`](https://qiskit.org/documentation/stubs/qiskit.providers.ibmq.runtime.IBMRuntimeService.html#qiskit.providers.ibmq.runtime.IBMRuntimeService) class and serves as the main entry point to using the runtime service. It has three methods that can be used to find metadata of available programs: # - `pprint_programs()`: pretty prints metadata of all available programs # - `programs()`: returns a list of `RuntimeProgram` instances # - `program()`: returns a single `RuntimeProgram` instance # # The metadata of a runtime program includes its ID, name, description, version, input parameters, return values, interim results, maximum execution time, and backend requirements. Maximum execution time is the maximum amount of time, in seconds, a program can run before being forcibly terminated. # To print the metadata of all available programs: provider.runtime.pprint_programs() # To print the metadata of the program `sample-program`: program = provider.runtime.program('sample-program') print(program) # As you can see from above, the program `sample-program` is a simple program that has only 1 input parameter `iterations`, which indicates how many iterations to run. For each iteration it generates and runs a random 5-qubit circuit and returns the counts as well as the iteration number as the interim results. When the program finishes, it returns the sentence `All done!`. This program can only run for 300 seconds (5 minutes), and requires a backend that has at least 5 qubits. # ## Invoking a runtime program <a name='invoking_program'> # You can use the [`IBMRuntimeService.run()`](https://qiskit.org/documentation/stubs/qiskit.providers.ibmq.runtime.IBMRuntimeService.html#qiskit.providers.ibmq.runtime.IBMRuntimeService.run) method to invoke a runtime program. 
This method takes the following parameters:
# 
# - `program_id`: ID of the program to run.
# - `inputs`: Program input parameters. These input values are passed to the runtime program.
# - `options`: Runtime options. These options control the execution environment. Currently the only available option is `backend_name`, which is required.
# - `callback`: Callback function to be invoked for any interim results. The callback function will receive 2 positional parameters: job ID and interim result.
# - `result_decoder`: Optional class used to decode the job result.

# Before we run a quantum program, we may want to define a callback function that would process interim results, which are intermediate data provided by a program while it's still running.
# 
# As we saw earlier, the metadata of `sample-program` says that its interim results are the iteration number and the counts of the randomly generated circuit. Here we define a simple callback function that just prints these interim results:

def interim_result_callback(job_id, interim_result):
    print(f"interim result: {interim_result}")

# The following example runs the `sample-program` program with 3 iterations on `ibmq_montreal` and waits for its result. You can also use a different backend that supports Qiskit Runtime:

backend = provider.get_backend('ibmq_montreal')
program_inputs = {
    'iterations': 3
}
options = {'backend_name': backend.name()}
job = provider.runtime.run(program_id="sample-program",
                           options=options,
                           inputs=program_inputs,
                           callback=interim_result_callback
                          )
print(f"job id: {job.job_id()}")
result = job.result()
print(result)

# The `run()` method returns a [`RuntimeJob`](https://qiskit.org/documentation/stubs/qiskit.providers.ibmq.runtime.RuntimeJob.html#qiskit.providers.ibmq.runtime.RuntimeJob) instance, which is similar to the `Job` instance returned by regular `backend.run()`. `RuntimeJob` supports the following methods:
# 
# - `status()`: Return the job status.
# - `result()`: Wait for the job to finish and return the final result.
# - `cancel()`: Cancel the job.
# - `wait_for_final_state()`: Wait for the job to finish.
# - `stream_results()`: Stream interim results. This can be used to start streaming the interim results if a `callback` function was not passed to the `run()` method. This method can also be used to reconnect a lost websocket connection.
# - `job_id()`: Return the job ID.
# - `backend()`: Return the backend where the job is run.
# - `logs()`: Return the job logs.
# - `error_message()`: Return the reason if the job failed, and `None` otherwise.

# ## Retrieving old jobs
# You can use the [`IBMRuntimeService.job()`](https://qiskit.org/documentation/stubs/qiskit.providers.ibmq.runtime.IBMRuntimeService.html#qiskit.providers.ibmq.runtime.IBMRuntimeService.job) method to retrieve a previously executed runtime job. Attributes of this [`RuntimeJob`](https://qiskit.org/documentation/stubs/qiskit.providers.ibmq.runtime.RuntimeJob.html#qiskit.providers.ibmq.runtime.RuntimeJob) instance can tell you about the execution:

retrieved_job = provider.runtime.job(job.job_id())
print(f"Job {retrieved_job.job_id()} is an execution instance of runtime program {retrieved_job.program_id}.")
print(f"This job ran on backend {retrieved_job.backend()} and had input parameters {retrieved_job.inputs}")

# Similarly, you can use [`IBMRuntimeService.jobs()`](https://qiskit.org/documentation/stubs/qiskit.providers.ibmq.runtime.IBMRuntimeService.html#qiskit.providers.ibmq.runtime.IBMRuntimeService.jobs) to get a list of jobs.
You can specify a limit on how many jobs to return. The default limit is 10: retrieved_jobs = provider.runtime.jobs(limit=1) for rjob in retrieved_jobs: print(rjob.job_id()) # ## Deleting a job # You can use the [`IBMRuntimeService.delete_job()`](https://qiskit.org/documentation/stubs/qiskit.providers.ibmq.runtime.IBMRuntimeService.html#qiskit.providers.ibmq.runtime.IBMRuntimeService.delete_job) method to delete a job. You can only delete your own jobs, and this action cannot be reversed. provider.runtime.delete_job(job.job_id()) import qiskit.tools.jupyter # %qiskit_version_table
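# As a final note, the job methods listed earlier can also be used without passing a callback at submission time. The sketch below (reusing `provider`, `backend`, and `program_inputs` from above, and assuming `sample-program` is still available to your provider) submits a job, attaches the interim-result callback afterwards with `stream_results()`, and blocks until the job reaches a final state:

job2 = provider.runtime.run(program_id="sample-program",
                            options={'backend_name': backend.name()},
                            inputs=program_inputs)
job2.stream_results(interim_result_callback)  # start streaming interim results
job2.wait_for_final_state()                   # block until the job finishes
print(job2.status())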
tutorials/00_introduction.ipynb