Dataset columns: markdown · code · output · license · path · repo_name
2 - Load the Dataset Now, load the dataset you'll be working on. The following code will load a "flower" 2-class dataset into variables X and Y.
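`load_planar_dataset()` comes from the course's `planar_utils` helper, which is not included in this excerpt. As a rough sketch only (not necessarily the exact course code), a flower-shaped two-class dataset with X of shape (2, m) and Y of shape (1, m) can be generated like this:

```python
import numpy as np

def make_flower_dataset(m=400, petals=4, seed=1):
    """Sketch of a flower-shaped 2-class dataset: returns X of shape (2, m), Y of shape (1, m)."""
    rng = np.random.RandomState(seed)
    N = m // 2                                   # points per class
    X = np.zeros((m, 2))
    Y = np.zeros((m, 1), dtype="uint8")
    for j in range(2):                           # two classes, interleaved petals
        ix = range(N * j, N * (j + 1))
        t = np.linspace(j * 3.12, (j + 1) * 3.12, N) + rng.randn(N) * 0.2   # angle + jitter
        r = 4 * np.sin(petals * t) + rng.randn(N) * 0.2                     # radius + jitter
        X[ix] = np.c_[r * np.sin(t), r * np.cos(t)]
        Y[ix] = j
    return X.T, Y.T
```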
X, Y = load_planar_dataset()
_____no_output_____
MIT
Neural Networks and Deep Learning/Week3/Planar_data_classification_with_one_hidden_layer.ipynb
sounok1234/Deeplearning_Projects
Visualize the dataset using matplotlib. The data looks like a "flower" with some red (label y=0) and some blue (y=1) points. Your goal is to build a model to fit this data. In other words, we want the classifier to define regions as either red or blue.
# Visualize the data: plt.scatter(X[0, :], X[1, :], c=Y, s=40, cmap=plt.cm.Spectral);
_____no_output_____
MIT
Neural Networks and Deep Learning/Week3/Planar_data_classification_with_one_hidden_layer.ipynb
sounok1234/Deeplearning_Projects
You have: - a numpy-array (matrix) X that contains your features (x1, x2) - a numpy-array (vector) Y that contains your labels (red: 0, blue: 1). First, get a better sense of what your data is like. Exercise 1 How many training examples do you have? In addition, what is the `shape` of the variables `X` and `Y`? **Hint**: How do you get the shape of a numpy array? [(help)](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.shape.html)
# (≈ 3 lines of code) # shape_X = ... # shape_Y = ... # training set size # m = ... # YOUR CODE STARTS HERE shape_X = X.shape shape_Y = Y.shape m = Y.shape[1] # YOUR CODE ENDS HERE print ('The shape of X is: ' + str(shape_X)) print ('The shape of Y is: ' + str(shape_Y)) print ('I have m = %d training examples!' % (m)) print(X.shape[0])
The shape of X is: (2, 400) The shape of Y is: (1, 400) I have m = 400 training examples! 2
MIT
Neural Networks and Deep Learning/Week3/Planar_data_classification_with_one_hidden_layer.ipynb
sounok1234/Deeplearning_Projects
**Expected Output**: shape of X: (2, 400), shape of Y: (1, 400), m: 400. 3 - Simple Logistic Regression. Before building a full neural network, let's check how logistic regression performs on this problem. You can use sklearn's built-in functions for this. Run the code below to train a logistic regression classifier on the dataset.
# Train the logistic regression classifier clf = sklearn.linear_model.LogisticRegressionCV(); clf.fit(X.T, Y.T);
_____no_output_____
MIT
Neural Networks and Deep Learning/Week3/Planar_data_classification_with_one_hidden_layer.ipynb
sounok1234/Deeplearning_Projects
You can now plot the decision boundary of these models! Run the code below.
# Plot the decision boundary for logistic regression plot_decision_boundary(lambda x: clf.predict(x), X, Y) plt.title("Logistic Regression") print(X.shape) # Print accuracy LR_predictions = clf.predict(X.T) print ('Accuracy of logistic regression: %d ' % float((np.dot(Y,LR_predictions) + np.dot(1-Y,1-LR_predictions))/float(Y.size)*100) + '% ' + "(percentage of correctly labelled datapoints)")
(2, 400) Accuracy of logistic regression: 47 % (percentage of correctly labelled datapoints)
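`plot_decision_boundary` is another course helper not shown in this excerpt. A minimal sketch of the usual approach (evaluate the classifier on a grid and draw filled contours; the course version may differ in details):

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_decision_boundary_sketch(model, X, Y):
    """model: callable mapping an (n_points, 2) array to predicted labels.
    X: data of shape (2, m), Y: labels of shape (1, m)."""
    # Build a grid spanning the data, with a small margin around it
    x_min, x_max = X[0, :].min() - 1, X[0, :].max() + 1
    y_min, y_max = X[1, :].min() - 1, X[1, :].max() + 1
    xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.01),
                         np.arange(y_min, y_max, 0.01))
    # Predict a label for every grid point and colour the two regions
    Z = model(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
    plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral)
    plt.scatter(X[0, :], X[1, :], c=Y.ravel(), s=40, cmap=plt.cm.Spectral)
```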
MIT
Neural Networks and Deep Learning/Week3/Planar_data_classification_with_one_hidden_layer.ipynb
sounok1234/Deeplearning_Projects
**Expected Output**: Accuracy 47% **Interpretation**: The dataset is not linearly separable, so logistic regression doesn't perform well. Hopefully a neural network will do better. Let's try this now! 4 - Neural Network model. Logistic regression didn't work well on the flower dataset. Next, you're going to train a Neural Network with a single hidden layer and see how that handles the same problem. **The model**: **Mathematically**: For one example $x^{(i)}$: $$z^{[1] (i)} = W^{[1]} x^{(i)} + b^{[1]}\tag{1}$$ $$a^{[1] (i)} = \tanh(z^{[1] (i)})\tag{2}$$ $$z^{[2] (i)} = W^{[2]} a^{[1] (i)} + b^{[2]}\tag{3}$$ $$\hat{y}^{(i)} = a^{[2] (i)} = \sigma(z^{[2] (i)})\tag{4}$$ $$y^{(i)}_{prediction} = \begin{cases} 1 & \mbox{if } a^{[2](i)} > 0.5 \\ 0 & \mbox{otherwise } \end{cases}\tag{5}$$ Given the predictions on all the examples, you can also compute the cost $J$ as follows: $$J = - \frac{1}{m} \sum\limits_{i = 1}^{m} \large\left(\small y^{(i)}\log\left(a^{[2] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[2] (i)}\right) \large \right) \small \tag{6}$$ **Reminder**: The general methodology to build a Neural Network is to: 1. Define the neural network structure (# of input units, # of hidden units, etc.) 2. Initialize the model's parameters 3. Loop: - Implement forward propagation - Compute loss - Implement backward propagation to get the gradients - Update parameters (gradient descent). In practice, you'll often build helper functions to compute steps 1-3, then merge them into one function called `nn_model()`. Once you've built `nn_model()` and learned the right parameters, you can make predictions on new data. 4.1 - Defining the neural network structure Exercise 2 - layer_sizes Define three variables: - n_x: the size of the input layer - n_h: the size of the hidden layer (set this to 4) - n_y: the size of the output layer. **Hint**: Use shapes of X and Y to find n_x and n_y. Also, hard code the hidden layer size to be 4.
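To make equations (1)-(5) concrete before implementing the graded functions, here is a tiny illustrative numpy walk-through for one example with 2 inputs, 4 hidden units and 1 output (purely a sketch, not part of the assignment code):

```python
import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))   # logistic function

x = np.array([[1.5], [-0.3]])                  # one example, shape (2, 1)
W1, b1 = np.random.randn(4, 2) * 0.01, np.zeros((4, 1))
W2, b2 = np.random.randn(1, 4) * 0.01, np.zeros((1, 1))

z1 = W1 @ x + b1                               # eq. (1), shape (4, 1)
a1 = np.tanh(z1)                               # eq. (2)
z2 = W2 @ a1 + b2                              # eq. (3), shape (1, 1)
a2 = sigmoid(z2)                               # eq. (4), a probability in (0, 1)
y_hat = (a2 > 0.5).astype(int)                 # eq. (5), thresholded prediction
print(a2, y_hat)
```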
# GRADED FUNCTION: layer_sizes def layer_sizes(X, Y): """ Arguments: X -- input dataset of shape (input size, number of examples) Y -- labels of shape (output size, number of examples) Returns: n_x -- the size of the input layer n_h -- the size of the hidden layer n_y -- the size of the output layer """ #(≈ 3 lines of code) # n_x = ... # n_h = ... # n_y = ... # YOUR CODE STARTS HERE n_x = X.shape[0] n_h = 4 n_y = Y.shape[0] print(Y.shape) # YOUR CODE ENDS HERE return (n_x, n_h, n_y) t_X, t_Y = layer_sizes_test_case() (n_x, n_h, n_y) = layer_sizes(t_X, t_Y) print("The size of the input layer is: n_x = " + str(n_x)) print("The size of the hidden layer is: n_h = " + str(n_h)) print("The size of the output layer is: n_y = " + str(n_y)) layer_sizes_test(layer_sizes)
(2, 3) The size of the input layer is: n_x = 5 The size of the hidden layer is: n_h = 4 The size of the output layer is: n_y = 2 (2, 3) (2, 3)  All tests passed.
MIT
Neural Networks and Deep Learning/Week3/Planar_data_classification_with_one_hidden_layer.ipynb
sounok1234/Deeplearning_Projects
***Expected output***: ```The size of the input layer is: n_x = 5; The size of the hidden layer is: n_h = 4; The size of the output layer is: n_y = 2``` 4.2 - Initialize the model's parameters. Exercise 3 - initialize_parameters. Implement the function `initialize_parameters()`. **Instructions**: - Make sure your parameters' sizes are right. Refer to the neural network figure above if needed. - You will initialize the weights matrices with random values. - Use: `np.random.randn(a,b) * 0.01` to randomly initialize a matrix of shape (a,b). - You will initialize the bias vectors as zeros. - Use: `np.zeros((a,b))` to initialize a matrix of shape (a,b) with zeros.
# GRADED FUNCTION: initialize_parameters def initialize_parameters(n_x, n_h, n_y): """ Argument: n_x -- size of the input layer n_h -- size of the hidden layer n_y -- size of the output layer Returns: params -- python dictionary containing your parameters: W1 -- weight matrix of shape (n_h, n_x) b1 -- bias vector of shape (n_h, 1) W2 -- weight matrix of shape (n_y, n_h) b2 -- bias vector of shape (n_y, 1) """ np.random.seed(2) # we set up a seed so that your output matches ours although the initialization is random. #(≈ 4 lines of code) # W1 = ... # b1 = ... # W2 = ... # b2 = ... # YOUR CODE STARTS HERE W1 = np.random.randn(n_h, n_x) * 0.01 b1 = np.zeros((n_h, 1)) W2 = np.random.randn(n_y, n_h) * 0.01 b2 = np.zeros((n_y, 1)) # YOUR CODE ENDS HERE parameters = {"W1": W1, "b1": b1, "W2": W2, "b2": b2} return parameters n_x, n_h, n_y = initialize_parameters_test_case() parameters = initialize_parameters(n_x, n_h, n_y) print("W1 = " + str(parameters["W1"])) print("b1 = " + str(parameters["b1"])) print("W2 = " + str(parameters["W2"])) print("b2 = " + str(parameters["b2"])) initialize_parameters_test(initialize_parameters)
W1 = [[-0.00416758 -0.00056267] [-0.02136196 0.01640271] [-0.01793436 -0.00841747] [ 0.00502881 -0.01245288]] b1 = [[0.] [0.] [0.] [0.]] W2 = [[-0.01057952 -0.00909008 0.00551454 0.02292208]] b2 = [[0.]]  All tests passed.
MIT
Neural Networks and Deep Learning/Week3/Planar_data_classification_with_one_hidden_layer.ipynb
sounok1234/Deeplearning_Projects
**Expected output**: ```W1 = [[-0.00416758 -0.00056267] [-0.02136196 0.01640271] [-0.01793436 -0.00841747] [ 0.00502881 -0.01245288]] b1 = [[0.] [0.] [0.] [0.]] W2 = [[-0.01057952 -0.00909008 0.00551454 0.02292208]] b2 = [[0.]]``` 4.3 - The Loop Exercise 4 - forward_propagation. Implement `forward_propagation()` using the following equations: $$Z^{[1]} = W^{[1]} X + b^{[1]}\tag{1}$$ $$A^{[1]} = \tanh(Z^{[1]})\tag{2}$$ $$Z^{[2]} = W^{[2]} A^{[1]} + b^{[2]}\tag{3}$$ $$\hat{Y} = A^{[2]} = \sigma(Z^{[2]})\tag{4}$$ **Instructions**: - Check the mathematical representation of your classifier in the figure above. - Use the function `sigmoid()`. It's built into (imported) this notebook. - Use the function `np.tanh()`. It's part of the numpy library. - Implement using these steps: 1. Retrieve each parameter from the dictionary "parameters" (which is the output of `initialize_parameters()`) by using `parameters[".."]`. 2. Implement Forward Propagation. Compute $Z^{[1]}, A^{[1]}, Z^{[2]}$ and $A^{[2]}$ (the vector of all your predictions on all the examples in the training set). - Values needed in the backpropagation are stored in "cache". The cache will be given as an input to the backpropagation function.
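The `sigmoid()` helper referred to above is imported from the course utilities; it is just the logistic function $\sigma(z) = 1/(1+e^{-z})$. A minimal version, in case you want to run the next cell standalone:

```python
import numpy as np

def sigmoid(z):
    """Element-wise logistic function; works on scalars and numpy arrays."""
    return 1.0 / (1.0 + np.exp(-z))
```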
# GRADED FUNCTION:forward_propagation def forward_propagation(X, parameters): """ Argument: X -- input data of size (n_x, m) parameters -- python dictionary containing your parameters (output of initialization function) Returns: A2 -- The sigmoid output of the second activation cache -- a dictionary containing "Z1", "A1", "Z2" and "A2" """ # Retrieve each parameter from the dictionary "parameters" #(≈ 4 lines of code) # W1 = ... # b1 = ... # W2 = ... # b2 = ... # YOUR CODE STARTS HERE W1 = parameters['W1'] b1 = parameters['b1'] W2 = parameters['W2'] b2 = parameters['b2'] # YOUR CODE ENDS HERE # Implement Forward Propagation to calculate A2 (probabilities) # (≈ 4 lines of code) # Z1 = ... # A1 = ... # Z2 = ... # A2 = ... # YOUR CODE STARTS HERE Z1 = np.dot(W1,X) + b1 A1 = np.tanh(Z1) Z2 = np.dot(W2,A1) + b2 A2 = sigmoid(Z2) # YOUR CODE ENDS HERE assert(A2.shape == (1, X.shape[1])) cache = {"Z1": Z1, "A1": A1, "Z2": Z2, "A2": A2} return A2, cache t_X, parameters = forward_propagation_test_case() A2, cache = forward_propagation(t_X, parameters) print("A2 = " + str(A2)) forward_propagation_test(forward_propagation)
A2 = [[0.21292656 0.21274673 0.21295976]]  All tests passed.
MIT
Neural Networks and Deep Learning/Week3/Planar_data_classification_with_one_hidden_layer.ipynb
sounok1234/Deeplearning_Projects
***Expected output***: ```A2 = [[0.21292656 0.21274673 0.21295976]]``` 4.4 - Compute the Cost. Now that you've computed $A^{[2]}$ (in the Python variable "`A2`"), which contains $a^{[2](i)}$ for all examples, you can compute the cost function as follows: $$J = - \frac{1}{m} \sum\limits_{i = 1}^{m} \large{(} \small y^{(i)}\log\left(a^{[2] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[2] (i)}\right) \large{)} \small\tag{13}$$ Exercise 5 - compute_cost Implement `compute_cost()` to compute the value of the cost $J$. **Instructions**: - There are many ways to implement the cross-entropy loss. This is one way to implement one part of the equation without for loops, $- \sum\limits_{i=1}^{m} y^{(i)}\log(a^{[2](i)})$: ```python logprobs = np.multiply(np.log(A2), Y); cost = - np.sum(logprobs) ``` - Use that to build the whole expression of the cost function. **Notes**: - You can use either `np.multiply()` and then `np.sum()` or directly `np.dot()`. - If you use `np.multiply` followed by `np.sum` the end result will be a type `float`, whereas if you use `np.dot`, the result will be a 2D numpy array. - You can use `np.squeeze()` to remove redundant dimensions (in the case of a single float, this will be reduced to a zero-dimension array). - You can also cast the array as a type `float` using `float()`.
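A tiny self-contained check of the two implementation routes mentioned in the notes (`np.multiply` + `np.sum` vs. `np.dot`), using made-up toy values for A2 and Y; both give the same scalar once squeezed:

```python
import numpy as np

A2 = np.array([[0.8, 0.1, 0.6]])          # toy predictions, shape (1, m)
Y  = np.array([[1,   0,   1  ]])          # toy labels
m = Y.shape[1]

# Route 1: element-wise multiply, then sum -> a plain float
logprobs = np.multiply(np.log(A2), Y) + np.multiply(np.log(1 - A2), 1 - Y)
cost1 = -np.sum(logprobs) / m

# Route 2: dot products -> a (1, 1) array, squeezed down to a float
cost2 = float(np.squeeze(-(np.dot(Y, np.log(A2).T) + np.dot(1 - Y, np.log(1 - A2).T)) / m))

print(cost1, cost2)   # identical up to floating-point error
```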
# GRADED FUNCTION: compute_cost def compute_cost(A2, Y): """ Computes the cross-entropy cost given in equation (13) Arguments: A2 -- The sigmoid output of the second activation, of shape (1, number of examples) Y -- "true" labels vector of shape (1, number of examples) Returns: cost -- cross-entropy cost given equation (13) """ m = Y.shape[1] # number of examples # Compute the cross-entropy cost # (≈ 2 lines of code) # logprobs = ... # cost = ... # YOUR CODE STARTS HERE cost = (-1/m)*(np.dot(Y, np.log(A2).T) + np.dot(1-Y, np.log(1-A2).T)) # YOUR CODE ENDS HERE cost = float(np.squeeze(cost)) # makes sure cost is the dimension we expect. # E.g., turns [[17]] into 17 return cost A2, t_Y = compute_cost_test_case() cost = compute_cost(A2, t_Y) print("cost = " + str(compute_cost(A2, t_Y))) compute_cost_test(compute_cost)
cost = 0.6930587610394646  All tests passed.
MIT
Neural Networks and Deep Learning/Week3/Planar_data_classification_with_one_hidden_layer.ipynb
sounok1234/Deeplearning_Projects
***Expected output***: `cost = 0.6930587610394646` 4.5 - Implement Backpropagation. Using the cache computed during forward propagation, you can now implement backward propagation. Exercise 6 - backward_propagation. Implement the function `backward_propagation()`. **Instructions**: Backpropagation is usually the hardest (most mathematical) part in deep learning. To help you, here again are the six equations from the lecture slide on backpropagation; use them, since you are building a vectorized implementation. Figure 1: Backpropagation (slide not included here). The six equations are: $\frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } = \frac{1}{m} (a^{[2](i)} - y^{(i)})$, $\frac{\partial \mathcal{J} }{ \partial W_2 } = \frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } a^{[1] (i) T}$, $\frac{\partial \mathcal{J} }{ \partial b_2 } = \sum_i{\frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)}}}$, $\frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)} } = W_2^T \frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } * ( 1 - a^{[1] (i) 2})$, $\frac{\partial \mathcal{J} }{ \partial W_1 } = \frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)} } X^T$, $\frac{\partial \mathcal{J} }{ \partial b_1 } = \sum_i{\frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)}}}$. - Note that $*$ denotes elementwise multiplication. - The notation you will use is common in deep learning coding: - dW1 = $\frac{\partial \mathcal{J} }{ \partial W_1 }$ - db1 = $\frac{\partial \mathcal{J} }{ \partial b_1 }$ - dW2 = $\frac{\partial \mathcal{J} }{ \partial W_2 }$ - db2 = $\frac{\partial \mathcal{J} }{ \partial b_2 }$ - Tips: - To compute dZ1 you'll need to compute $g^{[1]'}(Z^{[1]})$. Since $g^{[1]}(.)$ is the tanh activation function, if $a = g^{[1]}(z)$ then $g^{[1]'}(z) = 1-a^2$. So you can compute $g^{[1]'}(Z^{[1]})$ using `(1 - np.power(A1, 2))`.
# GRADED FUNCTION: backward_propagation def backward_propagation(parameters, cache, X, Y): """ Implement the backward propagation using the instructions above. Arguments: parameters -- python dictionary containing our parameters cache -- a dictionary containing "Z1", "A1", "Z2" and "A2". X -- input data of shape (2, number of examples) Y -- "true" labels vector of shape (1, number of examples) Returns: grads -- python dictionary containing your gradients with respect to different parameters """ m = X.shape[1] # First, retrieve W1 and W2 from the dictionary "parameters". #(≈ 2 lines of code) # W1 = ... # W2 = ... # YOUR CODE STARTS HERE W1 = parameters['W1'] W2 = parameters['W2'] # YOUR CODE ENDS HERE # Retrieve also A1 and A2 from dictionary "cache". #(≈ 2 lines of code) # A1 = ... # A2 = ... # YOUR CODE STARTS HERE A1 = cache['A1'] A2 = cache['A2'] # YOUR CODE ENDS HERE # Backward propagation: calculate dW1, db1, dW2, db2. #(≈ 6 lines of code, corresponding to 6 equations on slide above) # dZ2 = ... # dW2 = ... # db2 = ... # dZ1 = ... # dW1 = ... # db1 = ... # YOUR CODE STARTS HERE dZ2 = A2 - Y dW2 = (1/m)*(np.dot(dZ2, A1.T)) db2 = (1/m)*(np.sum(dZ2, axis = 1, keepdims=True)) dZ1 = np.dot(W2.T, dZ2)*(1 - np.power(A1, 2)) dW1 = (1/m)*(np.dot(dZ1, X.T)) db1 = (1/m)*(np.sum(dZ1, axis = 1, keepdims=True)) # YOUR CODE ENDS HERE grads = {"dW1": dW1, "db1": db1, "dW2": dW2, "db2": db2} return grads parameters, cache, t_X, t_Y = backward_propagation_test_case() grads = backward_propagation(parameters, cache, t_X, t_Y) print ("dW1 = "+ str(grads["dW1"])) print ("db1 = "+ str(grads["db1"])) print ("dW2 = "+ str(grads["dW2"])) print ("db2 = "+ str(grads["db2"])) backward_propagation_test(backward_propagation)
dW1 = [[ 0.00301023 -0.00747267] [ 0.00257968 -0.00641288] [-0.00156892 0.003893 ] [-0.00652037 0.01618243]] db1 = [[ 0.00176201] [ 0.00150995] [-0.00091736] [-0.00381422]] dW2 = [[ 0.00078841 0.01765429 -0.00084166 -0.01022527]] db2 = [[-0.16655712]]  All tests passed.
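Not required by the assignment, but a useful sanity check: compare one analytic gradient entry from `backward_propagation` against a centred finite difference of `compute_cost`. This sketch assumes the functions defined in this notebook (`forward_propagation`, `compute_cost`, `backward_propagation`):

```python
import numpy as np

def grad_check_one_entry(X, Y, parameters, name="W1", idx=(0, 0), eps=1e-6):
    """Finite-difference estimate of dJ/d(parameters[name][idx]) vs. the analytic value."""
    # Perturb a single parameter entry up and down
    p_plus = {k: v.copy() for k, v in parameters.items()}
    p_minus = {k: v.copy() for k, v in parameters.items()}
    p_plus[name][idx] += eps
    p_minus[name][idx] -= eps
    cost_plus = compute_cost(forward_propagation(X, p_plus)[0], Y)
    cost_minus = compute_cost(forward_propagation(X, p_minus)[0], Y)
    numeric = (cost_plus - cost_minus) / (2 * eps)
    # Analytic gradient for the same entry
    A2, cache = forward_propagation(X, parameters)
    analytic = backward_propagation(parameters, cache, X, Y)["d" + name][idx]
    return numeric, analytic   # the two numbers should agree to several decimals
```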
MIT
Neural Networks and Deep Learning/Week3/Planar_data_classification_with_one_hidden_layer.ipynb
sounok1234/Deeplearning_Projects
***Expected output***: ```dW1 = [[ 0.00301023 -0.00747267] [ 0.00257968 -0.00641288] [-0.00156892 0.003893 ] [-0.00652037 0.01618243]] db1 = [[ 0.00176201] [ 0.00150995] [-0.00091736] [-0.00381422]] dW2 = [[ 0.00078841 0.01765429 -0.00084166 -0.01022527]] db2 = [[-0.16655712]]``` 4.6 - Update Parameters Exercise 7 - update_parameters. Implement the update rule. Use gradient descent. You have to use (dW1, db1, dW2, db2) in order to update (W1, b1, W2, b2). **General gradient descent rule**: $\theta = \theta - \alpha \frac{\partial J }{ \partial \theta }$ where $\alpha$ is the learning rate and $\theta$ represents a parameter. Figure 2: The gradient descent algorithm with a good learning rate (converging) and a bad learning rate (diverging). Images courtesy of Adam Harley. **Hint**: - Use `copy.deepcopy(...)` when copying lists or dictionaries that are passed as parameters to functions. It avoids input parameters being modified within the function. In some scenarios, this could be inefficient, but it is required for grading purposes.
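Why the deepcopy hint matters: a retrieved array is a reference to the caller's data, so an in-place update (e.g. `-=`) silently modifies the input dictionary. A small illustration using only numpy and the standard `copy` module:

```python
import copy
import numpy as np

params = {"W1": np.ones((2, 2))}

W1_ref = params["W1"]                   # same underlying array
W1_ref -= 0.5                           # in-place update leaks back into params
print(params["W1"][0, 0])               # 0.5 -- the caller's dictionary changed

params = {"W1": np.ones((2, 2))}
W1_copy = copy.deepcopy(params["W1"])   # independent copy
W1_copy -= 0.5
print(params["W1"][0, 0])               # 1.0 -- the original is untouched
```

Note that the graded cell below sidesteps the issue anyway, because `W1 = W1 - learning_rate*dW1` builds a new array instead of updating in place.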
# GRADED FUNCTION: update_parameters def update_parameters(parameters, grads, learning_rate = 1.2): """ Updates parameters using the gradient descent update rule given above Arguments: parameters -- python dictionary containing your parameters grads -- python dictionary containing your gradients Returns: parameters -- python dictionary containing your updated parameters """ # Retrieve a copy of each parameter from the dictionary "parameters". Use copy.deepcopy(...) for W1 and W2 #(≈ 4 lines of code) # W1 = ... # b1 = ... # W2 = ... # b2 = ... # YOUR CODE STARTS HERE W1 = parameters["W1"] b1 = parameters["b1"] W2 = parameters["W2"] b2 = parameters["b2"] # YOUR CODE ENDS HERE # Retrieve each gradient from the dictionary "grads" #(≈ 4 lines of code) # dW1 = ... # db1 = ... # dW2 = ... # db2 = ... # YOUR CODE STARTS HERE dW1 = grads["dW1"] db1 = grads["db1"] dW2 = grads["dW2"] db2 = grads["db2"] # YOUR CODE ENDS HERE # Update rule for each parameter #(≈ 4 lines of code) # W1 = ... # b1 = ... # W2 = ... # b2 = ... # YOUR CODE STARTS HERE W1 = W1 - learning_rate*dW1 b1 = b1 - learning_rate*db1 W2 = W2 - learning_rate*dW2 b2 = b2 - learning_rate*db2 # YOUR CODE ENDS HERE parameters = {"W1": W1, "b1": b1, "W2": W2, "b2": b2} return parameters parameters, grads = update_parameters_test_case() parameters = update_parameters(parameters, grads) print("W1 = " + str(parameters["W1"])) print("b1 = " + str(parameters["b1"])) print("W2 = " + str(parameters["W2"])) print("b2 = " + str(parameters["b2"])) update_parameters_test(update_parameters)
W1 = [[-0.00643025 0.01936718] [-0.02410458 0.03978052] [-0.01653973 -0.02096177] [ 0.01046864 -0.05990141]] b1 = [[-1.02420756e-06] [ 1.27373948e-05] [ 8.32996807e-07] [-3.20136836e-06]] W2 = [[-0.01041081 -0.04463285 0.01758031 0.04747113]] b2 = [[0.00010457]]  All tests passed.
MIT
Neural Networks and Deep Learning/Week3/Planar_data_classification_with_one_hidden_layer.ipynb
sounok1234/Deeplearning_Projects
***Expected output***: ```W1 = [[-0.00643025 0.01936718] [-0.02410458 0.03978052] [-0.01653973 -0.02096177] [ 0.01046864 -0.05990141]] b1 = [[-1.02420756e-06] [ 1.27373948e-05] [ 8.32996807e-07] [-3.20136836e-06]] W2 = [[-0.01041081 -0.04463285 0.01758031 0.04747113]] b2 = [[0.00010457]]``` 4.7 - Integration. Integrate your functions in `nn_model()` Exercise 8 - nn_model. Build your neural network model in `nn_model()`. **Instructions**: The neural network model has to use the previous functions in the right order.
# GRADED FUNCTION: nn_model def nn_model(X, Y, n_h, num_iterations = 10000, print_cost=False): """ Arguments: X -- dataset of shape (2, number of examples) Y -- labels of shape (1, number of examples) n_h -- size of the hidden layer num_iterations -- Number of iterations in gradient descent loop print_cost -- if True, print the cost every 1000 iterations Returns: parameters -- parameters learnt by the model. They can then be used to predict. """ np.random.seed(3) n_x = layer_sizes(X, Y)[0] n_y = layer_sizes(X, Y)[2] # Initialize parameters #(≈ 1 line of code) # parameters = ... # YOUR CODE STARTS HERE parameters = initialize_parameters(n_x, n_h, n_y) # YOUR CODE ENDS HERE # Loop (gradient descent) for i in range(0, num_iterations): #(≈ 4 lines of code) # Forward propagation. Inputs: "X, parameters". Outputs: "A2, cache". # A2, cache = ... # Cost function. Inputs: "A2, Y". Outputs: "cost". # cost = ... # Backpropagation. Inputs: "parameters, cache, X, Y". Outputs: "grads". # grads = ... # Gradient descent parameter update. Inputs: "parameters, grads". Outputs: "parameters". # parameters = ... # YOUR CODE STARTS HERE A2, cache = forward_propagation(X, parameters) cost = compute_cost(A2, Y) grads = backward_propagation(parameters, cache, X, Y) parameters = update_parameters(parameters, grads) # YOUR CODE ENDS HERE # Print the cost every 1000 iterations if print_cost and i % 1000 == 0: print ("Cost after iteration %i: %f" %(i, cost)) return parameters t_X, t_Y = nn_model_test_case() parameters = nn_model(t_X, t_Y, 4, num_iterations=10000, print_cost=True) print("W1 = " + str(parameters["W1"])) print("b1 = " + str(parameters["b1"])) print("W2 = " + str(parameters["W2"])) print("b2 = " + str(parameters["b2"])) nn_model_test(nn_model)
(1, 3) (1, 3) Cost after iteration 0: 0.692739 Cost after iteration 1000: 0.000218 Cost after iteration 2000: 0.000107 Cost after iteration 3000: 0.000071 Cost after iteration 4000: 0.000053 Cost after iteration 5000: 0.000042 Cost after iteration 6000: 0.000035 Cost after iteration 7000: 0.000030 Cost after iteration 8000: 0.000026 Cost after iteration 9000: 0.000023 W1 = [[-0.65848169 1.21866811] [-0.76204273 1.39377573] [ 0.5792005 -1.10397703] [ 0.76773391 -1.41477129]] b1 = [[ 0.287592 ] [ 0.3511264 ] [-0.2431246 ] [-0.35772805]] W2 = [[-2.45566237 -3.27042274 2.00784958 3.36773273]] b2 = [[0.20459656]] (1, 3) (1, 3) (1, 3) (1, 3) (1, 3) (1, 3)  All tests passed.
MIT
Neural Networks and Deep Learning/Week3/Planar_data_classification_with_one_hidden_layer.ipynb
sounok1234/Deeplearning_Projects
***Expected output***: ```Cost after iteration 0: 0.692739; Cost after iteration 1000: 0.000218; Cost after iteration 2000: 0.000107; ...; Cost after iteration 8000: 0.000026; Cost after iteration 9000: 0.000023; W1 = [[-0.65848169 1.21866811] [-0.76204273 1.39377573] [ 0.5792005 -1.10397703] [ 0.76773391 -1.41477129]] b1 = [[ 0.287592 ] [ 0.3511264 ] [-0.2431246 ] [-0.35772805]] W2 = [[-2.45566237 -3.27042274 2.00784958 3.36773273]] b2 = [[0.20459656]]``` 5 - Test the Model 5.1 - Predict Exercise 9 - predict. Predict with your model by building `predict()`. Use forward propagation to predict results. **Reminder**: predictions = $y_{prediction} = \mathbb 1 \text{{activation > 0.5}} = \begin{cases} 1 & \text{if}\ activation > 0.5 \\ 0 & \text{otherwise} \end{cases}$ As an example, if you would like to set the entries of a matrix X to 0 and 1 based on a threshold you would do: ```X_new = (X > threshold)```
# GRADED FUNCTION: predict def predict(parameters, X): """ Using the learned parameters, predicts a class for each example in X Arguments: parameters -- python dictionary containing your parameters X -- input data of size (n_x, m) Returns predictions -- vector of predictions of our model (red: 0 / blue: 1) """ # Computes probabilities using forward propagation, and classifies to 0/1 using 0.5 as the threshold. #(≈ 2 lines of code) # A2, cache = ... # predictions = ... # YOUR CODE STARTS HERE A2, cache = forward_propagation(X, parameters) predictions = (A2 > 0.5) # YOUR CODE ENDS HERE return predictions parameters, t_X = predict_test_case() predictions = predict(parameters, t_X) print("Predictions: " + str(predictions)) predict_test(predict)
Predictions: [[ True False True]]  All tests passed.
MIT
Neural Networks and Deep Learning/Week3/Planar_data_classification_with_one_hidden_layer.ipynb
sounok1234/Deeplearning_Projects
***Expected output***: ```Predictions: [[ True False True]]``` 5.2 - Test the Model on the Planar Dataset. It's time to run the model and see how it performs on a planar dataset. Run the following code to test your model with a single hidden layer of $n_h$ hidden units!
# Build a model with a n_h-dimensional hidden layer parameters = nn_model(X, Y, n_h = 4, num_iterations = 10000, print_cost=True) # Plot the decision boundary plot_decision_boundary(lambda x: predict(parameters, x.T), X, Y) plt.title("Decision Boundary for hidden layer size " + str(4)) # Print accuracy predictions = predict(parameters, X) print ('Accuracy: %d' % float((np.dot(Y, predictions.T) + np.dot(1 - Y, 1 - predictions.T)) / float(Y.size) * 100) + '%')
Accuracy: 90%
MIT
Neural Networks and Deep Learning/Week3/Planar_data_classification_with_one_hidden_layer.ipynb
sounok1234/Deeplearning_Projects
**Expected Output**: Accuracy 90% Accuracy is much higher than with Logistic Regression: the model has learned the patterns of the flower's petals! Unlike logistic regression, neural networks are able to learn even highly non-linear decision boundaries. Congrats on finishing this Programming Assignment! Here's a quick recap of all you just accomplished: - Built a complete 2-class classification neural network with a hidden layer - Made good use of a non-linear unit - Computed the cross-entropy loss - Implemented forward and backward propagation - Seen the impact of varying the hidden layer size, including overfitting. You've created a neural network that can learn patterns! Excellent work. Below, there are some optional exercises to try out some other hidden layer sizes, and other datasets. 6 - Tuning hidden layer size (optional/ungraded exercise). Run the following code (it may take 1-2 minutes). Then, observe the different behaviors of the model for various hidden layer sizes.
# This may take about 2 minutes to run plt.figure(figsize=(16, 32)) hidden_layer_sizes = [1, 2, 3, 4, 5, 20, 50] for i, n_h in enumerate(hidden_layer_sizes): plt.subplot(5, 2, i+1) plt.title('Hidden Layer of size %d' % n_h) parameters = nn_model(X, Y, n_h, num_iterations = 5000) plot_decision_boundary(lambda x: predict(parameters, x.T), X, Y) predictions = predict(parameters, X) accuracy = float((np.dot(Y,predictions.T) + np.dot(1 - Y, 1 - predictions.T)) / float(Y.size)*100) print ("Accuracy for {} hidden units: {} %".format(n_h, accuracy))
(1, 400) (1, 400) Accuracy for 1 hidden units: 67.5 % (1, 400) (1, 400) Accuracy for 2 hidden units: 67.25 % (1, 400) (1, 400) Accuracy for 3 hidden units: 90.75 % (1, 400) (1, 400) Accuracy for 4 hidden units: 90.5 % (1, 400) (1, 400) Accuracy for 5 hidden units: 91.25 % (1, 400) (1, 400) Accuracy for 20 hidden units: 90.0 % (1, 400) (1, 400) Accuracy for 50 hidden units: 90.25 %
MIT
Neural Networks and Deep Learning/Week3/Planar_data_classification_with_one_hidden_layer.ipynb
sounok1234/Deeplearning_Projects
**Interpretation**: - The larger models (with more hidden units) are able to fit the training set better, until eventually the largest models overfit the data. - The best hidden layer size seems to be around n_h = 5. Indeed, a value around here seems to fit the data well without incurring noticeable overfitting. - Later, you'll become familiar with regularization, which lets you use very large models (such as n_h = 50) without much overfitting. **Note**: Remember to submit the assignment by clicking the blue "Submit Assignment" button at the upper-right. **Some optional/ungraded questions that you can explore if you wish**: - What happens when you change the tanh activation for a sigmoid activation or a ReLU activation? (A sketch of the ReLU variant is given below.) - Play with the learning_rate. What happens? - What if we change the dataset? (See part 7 below!) 7 - Performance on other datasets If you want, you can rerun the whole notebook (minus the dataset part) for each of the following datasets.
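Regarding the first optional question in the list above (swapping tanh for ReLU in the hidden layer), only the activation and its derivative change; a sketch of the two affected pieces, assuming the rest of the notebook's code is kept as-is (the dataset cell for section 7 follows right after this sketch):

```python
import numpy as np

def relu(z):
    """Rectified linear unit, element-wise."""
    return np.maximum(0, z)

def relu_derivative(z):
    """Derivative of ReLU: 1 where z > 0, else 0."""
    return (z > 0).astype(float)

# Forward pass, hidden layer:  A1 = relu(Z1)                       instead of np.tanh(Z1)
# Backward pass, dZ1 becomes:  dZ1 = np.dot(W2.T, dZ2) * relu_derivative(Z1)
#                              (instead of * (1 - np.power(A1, 2)))
```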
# Datasets noisy_circles, noisy_moons, blobs, gaussian_quantiles, no_structure = load_extra_datasets() datasets = {"noisy_circles": noisy_circles, "noisy_moons": noisy_moons, "blobs": blobs, "gaussian_quantiles": gaussian_quantiles} ### START CODE HERE ### (choose your dataset) dataset = "noisy_moons" ### END CODE HERE ### X, Y = datasets[dataset] X, Y = X.T, Y.reshape(1, Y.shape[0]) # make blobs binary if dataset == "blobs": Y = Y%2 # Visualize the data plt.scatter(X[0, :], X[1, :], c=Y, s=40, cmap=plt.cm.Spectral);
_____no_output_____
MIT
Neural Networks and Deep Learning/Week3/Planar_data_classification_with_one_hidden_layer.ipynb
sounok1234/Deeplearning_Projects
Preprocess the image
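The cell below assumes an image was already opened with PIL into `img` in an earlier cell that is not part of this excerpt. A minimal, hedged sketch of that missing step (the file name here is hypothetical):

```python
from PIL import Image
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical file name -- substitute the stripe image actually used by the notebook
img = Image.open("stripe_image.png")
```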
# resize the image basewidth = 100 wpercent = (basewidth / float(img.size[0])) hsize = int((float(img.size[1]) * float(wpercent))) img = img.resize((basewidth, hsize), Image.ANTIALIAS) img = np.array(img)[:,:,:3] print(img.shape) plt.imshow(img) img = img.reshape(-1,3) img.shape
_____no_output_____
MIT
loc_clust_stripe_segmentation.ipynb
YuTian8328/flow-based-clustering
Perform the segmentation task via Kmeans
# KMeans is not imported elsewhere in this excerpt, so import it here from sklearn.cluster import KMeans kmeans = KMeans(n_clusters=2).fit(img) plt.imshow(kmeans.labels_.reshape(29,100))
_____no_output_____
MIT
loc_clust_stripe_segmentation.ipynb
YuTian8328/flow-based-clustering
Perform the task via our algorithm
# generate graph from image img = img.reshape(-1,3)/255 n_nodes=img.shape[0] print("number of nodes:",n_nodes ) B,weight=get_B_and_weight_vec(n_nodes,0.2,1) # plt.hist(weight,bins=30) #distribution of similarity measure def run_seg(n_nodes,seeds,threshold, K=30, alpha=0.1, lambda_nLasso=0.1): B, weight_vec = get_B_and_weight_vec(n_nodes,threshold) start = datetime.datetime.now() history = algorithm(B, weight_vec, seeds=seeds, K=K, alpha=alpha, lambda_nLasso=lambda_nLasso) print('our method time: ', datetime.datetime.now() - start) return history # generate seeds according to the labels assigned by kmeans seeds = np.random.choice(np.where(kmeans.labels_==0)[0],20) # run our algorithm and visualize the result before feeding it to kmeans history = run_seg(n_nodes=n_nodes,seeds=seeds,threshold = 0.95, K=1000,alpha=0.01, lambda_nLasso=1) plt.imshow(history[-1].reshape(29,100)) # Feed the node signal from our algorithm to kmeans to complete clustering (2 clusters) history=np.nan_to_num(history) kmeans = KMeans(n_clusters=2).fit(history[-1].reshape(len(history[-1]), 1)) # visualize the segmentation result segmented = kmeans.labels_ plt.imshow(segmented.reshape((29,100)))
_____no_output_____
MIT
loc_clust_stripe_segmentation.ipynb
YuTian8328/flow-based-clustering
Perform the segmentation task via spectral clustering
from sklearn.cluster import SpectralClustering s=SpectralClustering(2).fit(img) plt.imshow(s.labels_.reshape(29,100)) # Python3 Program to print BFS traversal # from a given source vertex. BFS(int s) # traverses vertices reachable from s. from collections import defaultdict # This class represents a directed graph # using adjacency list representation class Graph: # Constructor def __init__(self): # default dictionary to store graph self.graph = defaultdict(list) # function to add an edge to graph def addEdge(self,u,v): self.graph[u].append(v) # Function to print a BFS of graph def BFS(self, s): # Mark all the vertices as not visited visited = [False] * (max(self.graph) + 1) # Create a queue for BFS queue = [] # Mark the source node as # visited and enqueue it queue.append(s) visited[s] = True while queue: # Dequeue a vertex from # queue and print it s = queue.pop(0) print (s, end = " ") # Get all adjacent vertices of the # dequeued vertex s. If a adjacent # has not been visited, then mark it # visited and enqueue it for i in self.graph[s]: if visited[i] == False: queue.append(i) visited[i] = True # Driver code # Create a graph given in # the above diagram g = Graph() g.addEdge(0, 1) g.addEdge(0, 2) g.addEdge(1, 2) g.addEdge(2, 0) g.addEdge(2, 3) g.addEdge(3, 3) print ("Following is Breadth First Traversal" " (starting from vertex 2)") g.BFS(2) # This code is contributed by Neelam Yadav
Following is Breadth First Traversal (starting from vertex 2) 2 0 3 1
MIT
loc_clust_stripe_segmentation.ipynb
YuTian8328/flow-based-clustering
Solution Graded Exercise 1: Leaky-integrate-and-fire model. First name: Eve, last name: Rahbe, sciper: 235549, date: 21.03.2018. *Your teammate*: first name of your teammate: Antoine, last name of your teammate: Alleon, sciper of your teammate: 223333. Note: You are allowed to discuss the concepts with your classmates. You are not allowed to share code. You have to understand every line of code you write in this notebook. We will ask you questions about your submission during a fraud detection session during the last week of the semester. If you are asked for plots: The appearance of the plots (labelled axes, useful scaling etc.) is important! If you are asked for discussions: Answer in a precise way and try to be concise. ** Submission ** Rename this notebook to Ex2_FirstName_LastName_Sciper.ipynb and upload that single file on moodle before the deadline. ** Link to the exercise ** http://neuronaldynamics-exercises.readthedocs.io/en/stable/exercises/leaky-integrate-and-fire.html Exercise 2, getting started
%matplotlib inline import brian2 as b2 import matplotlib.pyplot as plt import numpy as np from neurodynex.leaky_integrate_and_fire import LIF from neurodynex.tools import input_factory, plot_tools LIF.getting_started() LIF.print_default_parameters()
nr of spikes: 0
MIT
project1/.ipynb_checkpoints/Ex2_Eve_Rahbe_235549-checkpoint.ipynb
antoine-alleon/Biological_Modelling_Neural_Network_python_exercises
2.1 Exercise: minimal current 2.1.1. Question: minimal current (calculation) [2 points]
from neurodynex.leaky_integrate_and_fire import LIF print("resting potential: {}".format(LIF.V_REST)) i_min = (LIF.FIRING_THRESHOLD-LIF.V_REST)/LIF.MEMBRANE_RESISTANCE print("minimal current i_min: {}".format(i_min))
resting potential: -0.07 minimal current i_min: 2e-09
MIT
project1/.ipynb_checkpoints/Ex2_Eve_Rahbe_235549-checkpoint.ipynb
antoine-alleon/Biological_Modelling_Neural_Network_python_exercises
The minimal current is: $i_{min} = \frac{\theta-u_{rest}}{R} = \frac{-50-(-70) [mV]}{10 [Mohm]} = 2 [nA]$, where $\theta$ is the firing threshold, $u_{rest}$ is the resting potential and $R$ is the membrane resistance. 2.1.2. Question: minimal current (simulation) [2 points]
# create a step current with amplitude= i_min step_current = input_factory.get_step_current( t_start=5, t_end=100, unit_time=b2.ms, amplitude= i_min) # set i_min to your value # run the LIF model. # Note: As we do not specify any model parameters, the simulation runs with the default values (state_monitor,spike_monitor) = LIF.simulate_LIF_neuron(input_current=step_current, simulation_time = 100 * b2.ms) # plot I and vm plot_tools.plot_voltage_and_current_traces( state_monitor, step_current, title="min input", firing_threshold=LIF.FIRING_THRESHOLD) print("nr of spikes: {}".format(spike_monitor.count[0])) # should be 0
nr of spikes: 0
MIT
project1/.ipynb_checkpoints/Ex2_Eve_Rahbe_235549-checkpoint.ipynb
antoine-alleon/Biological_Modelling_Neural_Network_python_exercises
2.2. Exercise: f-I Curve 2.2.1. Question: f-I Curve and refractoryness 1 - Sketch or plot the curve with some program. You don't have to include it here, it is just for your understanding and will not be graded. 2 - What is the maximum rate at which this neuron can fire? [3 points]
# create a step current with amplitude i_max i_max = 125 * b2.namp step_current = input_factory.get_step_current( t_start=5, t_end=100, unit_time=b2.ms, amplitude=i_max) # run the LIF model and set the absolute refractory period to 3ms (state_monitor,spike_monitor) = LIF.simulate_LIF_neuron(input_current=step_current, simulation_time = 500 * b2.ms, abs_refractory_period=3 * b2.ms) # number of spikes print("nr of spikes: {}".format(spike_monitor.count[0])) # firing frequency T = 95e-03/spike_monitor.count[0] print("T : {}".format(T)) print("firing frequency : {}".format(1/T))
nr of spikes: 32 T : 0.00296875 firing frequency : 336.842105263
MIT
project1/.ipynb_checkpoints/Ex2_Eve_Rahbe_235549-checkpoint.ipynb
antoine-alleon/Biological_Modelling_Neural_Network_python_exercises
The maximum rate at which this neuron can fire is $f = 336.84 [Hz]$. 3 - Inject currents of different amplitudes (from 0nA to 100nA) into a LIF neuron. For each current, run the simulation for 500ms and determine the firing frequency in Hz. Then plot the f-I curve. [4 points]
import numpy as np import matplotlib.pyplot as plt firing_frequency = [] # create a step current with amplitude i from 0 to 100 nA for i in range(0,100,1) : step_current = input_factory.get_step_current( t_start=5, t_end=100, unit_time=b2.ms, amplitude=i * b2.namp) # stock amplitude i from 0 to 100 nA # run the LIF model (state_monitor,spike_monitor) = LIF.simulate_LIF_neuron(input_current=step_current, simulation_time = 500 * b2.ms, abs_refractory_period= 3 * b2.ms) if (spike_monitor.count[0] == 0) : firing_frequency.append(0) else : # firing frequency T = 95e-03/spike_monitor.count[0] firing_frequency.append(1/T) plt.xlabel("step current amplitude [nA]") plt.ylabel("firing frequency [Hz]") plt.title("f-I curve for step current") plt.plot(range(0,100,1), firing_frequency)
_____no_output_____
MIT
project1/.ipynb_checkpoints/Ex2_Eve_Rahbe_235549-checkpoint.ipynb
antoine-alleon/Biological_Modelling_Neural_Network_python_exercises
2.3. Exercise: “Experimentally” estimate the parameters of a LIF neuron 2.3.1. Question: “Read” the LIF parameters out of the vm plot [6 points] My estimates for the parameters :\begin{itemize}\item Resting potential : $u_{rest} = -66 [mV]$.\item Reset potential : $u_{reset} = -63 [mV]$ is the membrane potential between spikes.\item Firing threshold : $\theta = -38 [mV]$ using $\theta = u(t_{inf})$ with step current amplitude $i_{min}$.\item Membrane resitance : $R = \frac{\theta-u_{rest}}{i_{min}} = 12.7 [Mohm]$ with $i_{min}=2.2[nA]$ at the firing threshold.\item Membrane time scale : $\tau = 11.5 [ms]$ time to reach $63\%$ of $\theta$ with step curent amplitude $i_{min}$.\item Absolute refractory period : $ t = 5 [ms]$ time before new spike after reset of the potential.\end{itemize}
# get a random parameter. provide a random seed to have a reproducible experiment random_parameters = LIF.get_random_param_set(random_seed=432) # define your test current test_current = input_factory.get_step_current( t_start=5, t_end=100, unit_time=b2.ms, amplitude= 10 * b2.namp) # probe the neuron. pass the test current AND the random params to the function state_monitor, spike_monitor = LIF.simulate_random_neuron(test_current, random_parameters) # plot plot_tools.plot_voltage_and_current_traces(state_monitor, test_current, title="experiment") # print the parameters to the console and compare with your estimates LIF.print_obfuscated_parameters(random_parameters)
Resting potential: -0.066 Reset voltage: -0.063 Firing threshold: -0.038 Membrane resistance: 13000000.0 Membrane time-scale: 0.013 Absolute refractory period: 0.005
MIT
project1/.ipynb_checkpoints/Ex2_Eve_Rahbe_235549-checkpoint.ipynb
antoine-alleon/Biological_Modelling_Neural_Network_python_exercises
2.4. Exercise: Sinusoidal input current and subthreshold response 2.4.1. Question [5 points]
# note the higher resolution when discretizing the sine wave: we specify unit_time=0.1 * b2.ms sinusoidal_current = input_factory.get_sinusoidal_current(200, 1000, unit_time=0.1 * b2.ms, amplitude= 2.5 * b2.namp, frequency=250*b2.Hz, direct_current=0. * b2.namp) # run the LIF model. By setting the firing threshold to to a high value, we make sure to stay in the linear (non spiking) regime. (state_monitor, spike_monitor) = LIF.simulate_LIF_neuron(input_current=sinusoidal_current, simulation_time = 120 * b2.ms, firing_threshold=0*b2.mV) # plot the membrane voltage plot_tools.plot_voltage_and_current_traces(state_monitor, sinusoidal_current, title = "Sinusoidal input current") print("nr of spikes: {}".format(spike_monitor.count[0])) # Calculate the amplitude of the membrane voltage # get the difference betwwen min value of the voltage and the resting potential print("Amplitude of the membrane voltage : {} V" .format(abs(np.min(np.asarray(state_monitor.v))-(-0.07)))) import scipy.signal # Calculate the phase of the membrane voltage # interpolation of the signals xx = np.interp(np.linspace(1,1002,1002),np.linspace(1,1200,1200),np.transpose(np.asarray(state_monitor.v))[:,0]) # correlation corr = scipy.signal.correlate(xx,sinusoidal_current.values[:,0]) dt = np.arange(-1001,1002) # find the max correlation recovered_time_shift = dt[corr.argmax()] # convert timeshift in phase between 0 and 2pi period = 1/0.250 recovered_phase_shift = np.pi*(((0.5 + recovered_time_shift/period) %1.0)-0.5) print("Phase shift : {}" .format(recovered_phase_shift))
nr of spikes: 0 Amplitude of the membrane voltage : 0.001985117111761442 V Phase shift : -1.5707963267948966
MIT
project1/.ipynb_checkpoints/Ex2_Eve_Rahbe_235549-checkpoint.ipynb
antoine-alleon/Biological_Modelling_Neural_Network_python_exercises
The results are: $ A = 2 [mV]$ (computationally and visually) and $phase = -\pi/2$ (computationally and visually). 2.4.2. Question [5 points]
# For input frequencies between 10Hz and 1kHz plot the resulting amplitude of subthreshold oscillations of the # membrane potential vs. input frequency. amplitude = [] for i in range(15) : sinusoidal_current = input_factory.get_sinusoidal_current(200, 1000, unit_time=0.1 * b2.ms, amplitude= 2.5 * b2.namp, frequency=10.**(1.+i/7.)*b2.Hz, direct_current=0. * b2.namp) # run the LIF model. (state_monitor, spike_monitor) = LIF.simulate_LIF_neuron(input_current=sinusoidal_current, simulation_time = 120 * b2.ms, firing_threshold=0*b2.mV) amplitude.append(abs(np.min(np.asarray(state_monitor.v))-(-0.07))) plt.xlabel("sinusoidal current frequency [Hz]") plt.ylabel("amplitude of the membrane potential [V]") # the stored values are in volts plt.title("Amplitude vs Input frequency") plt.plot([10.**(1.+i/7.) for i in range(15)], amplitude)
_____no_output_____
MIT
project1/.ipynb_checkpoints/Ex2_Eve_Rahbe_235549-checkpoint.ipynb
antoine-alleon/Biological_Modelling_Neural_Network_python_exercises
2.4.3. Question [5 points]
# For input frequencies between 10Hz and 1kHz # plot the resulting phase shift of subthreshold oscillations of the membrane potential vs. input frequency. phase = [] for f in [10,50,100,250,500,750,1000] : sinusoidal_current = input_factory.get_sinusoidal_current(200, 1000, unit_time=0.1 * b2.ms, amplitude= 2.5 * b2.namp, frequency = f*b2.Hz, direct_current=0. * b2.namp) # run the LIF model. (state_monitor, spike_monitor) = LIF.simulate_LIF_neuron(input_current=sinusoidal_current, simulation_time = 120 * b2.ms, firing_threshold=0*b2.mV) xx = np.interp(np.linspace(1,1002,1002),np.linspace(1,1200,1200),np.transpose(np.asarray(state_monitor.v))[:,0]) corr = scipy.signal.correlate(xx,sinusoidal_current.values[:,0]) dt = np.arange(-1001,1002) recovered_time_shift = dt[corr.argmax()] period = 1000/f recovered_phase_shift = np.pi*(((0.5 + recovered_time_shift/period) %1.0)-0.5) phase.append(recovered_phase_shift) plt.xlabel("sinusoidal current frequency [Hz]") plt.ylabel("phase shift between membrane potential and input current") plt.title("Phase shift of membrane potential vs Input frequency") plt.plot([10,50,100,250,500,750,1000], phase)
_____no_output_____
MIT
project1/.ipynb_checkpoints/Ex2_Eve_Rahbe_235549-checkpoint.ipynb
antoine-alleon/Biological_Modelling_Neural_Network_python_exercises
2.4.4. Question [3 points] It is a \textbf{Low-Pass} filter because it passes low frequencies and attenuates high frequencies. 2.5 Leaky integrate-and-fire neuron with noisy input. This exercise is not available online. All information is given here. So far you have explored the leaky integrate-and-fire model with step and sinusoidal input currents. We will now investigate the same neuron model with noisy input. The voltage equation now is: \begin{eqnarray}\tau \frac{du}{dt} = -u(t) + u_{rest} + RI(t) + RI_{noise}(t)\end{eqnarray} where the noise is simply an additional term. To implement the noise term in the above equation we will consider it as 'white noise', $I_{noise}(t) = \sigma \xi(t)$. White noise $\xi$ is a stochastic process with expectation value $=0$ and autocorrelation $=\delta(\Delta)$. Note that, as we saw in the Exercise set of Week 1, the $\delta$-function has units of $1/time$, so $\xi$ has units of $1/\sqrt{time}$. It can be shown that the discrete time implementation of a noisy voltage trajectory is: \begin{eqnarray}du = (-u(t) + u_{rest} + RI(t))\frac{dt}{\tau} + \frac{R}{\tau}\sigma \sqrt{dt}\ y,\end{eqnarray} where $y \sim \mathcal{N}(0, 1)$. We can then write, again for implementational purposes: \begin{eqnarray}du = \big[-u(t) + u_{rest} + R(I(t) + \sigma \frac{1}{\sqrt{dt}} y) \big]\frac{dt}{\tau}\end{eqnarray} Note that for the physical units to be consistent $\sigma$ in our formulation has units of $current * \sqrt{time}$. Details of the above are beyond the scope of this exercise. If you would like to get more insights we refer to paragraph 8.1 of the book (http://neuronaldynamics.epfl.ch/online/Ch8.S1.html), to http://www.scholarpedia.org/article/Stochastic_dynamical_systems#Ornstein-Uhlenbeck_process and, regarding the implementational scaling of the noise, to http://brian2.readthedocs.io/en/stable/user/models.html#time-scaling-of-noise. 2.5.1 Noisy step input current [7 points] 1 - Implement the noisy current $I_0 + I_{noise}$ as described above. In order to do this edit the function get_noisy_step_current provided below. This is simply a copy of the code of the function get_step_current that you used earlier, and you just need to add the noisy part of the current at the indicated line (indicated by "???"). Then create a noisy step current with amplitude $I_0 = 1.5nA$ and $\sigma = 1 nA* \sqrt{\text{your time unit}}$ (e.g.: time_unit = 1 ms), run the LIF model and plot the input current and the membrane potential, as you did in the previous exercises.
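The discrete-time update above is an Euler-Maruyama step. As a stand-alone illustration in plain numpy (toy parameter values, independent of the Brian2 implementation asked for below):

```python
import numpy as np

# Toy LIF parameters (illustrative values only)
u_rest, R, tau = -70e-3, 10e6, 10e-3        # V, Ohm, s
dt, T = 1e-4, 0.1                            # time step and duration, s
sigma = 1e-9 * np.sqrt(1e-3)                 # noise amplitude, A * sqrt(s)
I0 = 1.5e-9                                  # constant input current, A

u = np.full(int(T / dt), u_rest)
for k in range(1, u.size):
    y = np.random.randn()                    # y ~ N(0, 1)
    noisy_I = I0 + sigma * y / np.sqrt(dt)   # I(t) + sigma * xi(t), discretised
    u[k] = u[k - 1] + (-(u[k - 1] - u_rest) + R * noisy_I) * dt / tau
# (spiking/reset is omitted; this only illustrates the subthreshold noisy dynamics)
```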
def get_noisy_step_current(t_start, t_end, unit_time, amplitude, sigma, append_zero=True): """Creates a step current with added noise. If t_start == t_end, then a single entry in the values array is set to amplitude. Args: t_start (int): start of the step t_end (int): end of the step unit_time (Quantity, Time): unit of t_start and t_end. e.g. 0.1*brian2.ms amplitude (Quantity): amplitude of the step. e.g. 3.5*brian2.uamp sigma (float): amplitude (std) of the noise. e.g. 0.1*b2.uamp append_zero (bool, optional): if true, 0Amp is appended at t_end+1. Without that trailing 0, Brian reads out the last value in the array (=amplitude) for all indices > t_end. Returns: TimedArray: Brian2.TimedArray """ assert isinstance(t_start, int), "t_start_ms must be of type int" assert isinstance(t_end, int), "t_end must be of type int" assert b2.units.fundamentalunits.have_same_dimensions(amplitude, b2.amp), \ "amplitude must have the dimension of current e.g. brian2.uamp" tmp_size = 1 + t_end # +1 for t=0 if append_zero: tmp_size += 1 tmp = np.zeros((tmp_size, 1)) * b2.amp tmp[t_start] = amplitude for i in range(t_start+1, t_end) : tmp[i] = amplitude + sigma*(time_step**(-0.5))*np.random.randn() curr = b2.TimedArray(tmp, dt= unit_time) return curr # ------------------- amplitude = 1.5*b2.nA time_unit = 1.*b2.ms time_step = 1.*b2.ms sigma = 1*b2.nA*time_unit**(0.5) # Create a noisy step current noisy_step_current = get_noisy_step_current(t_start=50, t_end=500, unit_time = time_step, amplitude= amplitude, sigma = sigma) # Run the LIF model (state_monitor,spike_monitor) = LIF.simulate_LIF_neuron(input_current=noisy_step_current, \ simulation_time = 500*b2.ms) # plot I and vm plot_tools.plot_voltage_and_current_traces(state_monitor, noisy_step_current, title="min input", \ firing_threshold=LIF.FIRING_THRESHOLD) print("nr of spikes: {}".format(spike_monitor.count[0]))
nr of spikes: 3
MIT
project1/.ipynb_checkpoints/Ex2_Eve_Rahbe_235549-checkpoint.ipynb
antoine-alleon/Biological_Modelling_Neural_Network_python_exercises
2 - How does the neuron behave? Discuss your result. Your answer should be max 3 lines long. The current fluctuates randomly, increasing or decreasing at each time step (like in a Markov process). The membrane voltage reacts to this input current by following the increasing or decreasing pattern. When the current is large enough, the membrane potential reaches the firing threshold and spikes are generated. 2.5.2 Subthreshold vs. superthreshold regime [7 + 5 = 12 points] 1 - A time-dependent input current $I(t)$ is called subthreshold if it does not lead to spiking, i.e. if it leads to a membrane potential that stays - in the absence of noise - below the firing threshold. When noise is added, however, even subthreshold stimuli can induce spikes. Input stimuli that lead to spiking even in a noise-free neuron are called superthreshold. Sub- and superthreshold inputs, in the presence and absence of noise, give rise to different spiking behaviour. These 4 different regimes (sub, super, noiseless, noisy) are what we will explore in this exercise. Create a function that takes the amplitudes of a step current and the noise as arguments. It should simulate the LIF-model with this input, calculate the interspike intervals (ISI) and plot a histogram of the ISI (the interspike interval is the time interval between two consecutive spikes). In order to do so edit the function test_effect_of_noise provided below. A few more details: * Use the function spike_tools.get_spike_train_stats (http://neuronaldynamics-exercises.readthedocs.io/en/latest/_modules/neurodynex/tools/spike_tools.html#get_spike_train_stats) to get the ISI. Have a look at its source code to understand how to use it and what it returns. You may need to use other parts of the documentation as well. * You will need to simulate the neuron model for long enough to get some statistics. * Optional and recommended: What would you expect the resulting histograms to look like? 2 - Run your function and create the ISI histograms for the following four regimes: * No noise, subthreshold: $I_0 = 1.9nA$, $\sigma = 0 nA* \sqrt{\text{your time unit}}$ * Noise, subthreshold regime: $I_0 = 1.9nA$, $\sigma = 1 nA* \sqrt{\text{your time unit}}$ * No noise, superthreshold regime: $I_0 = 2.5nA$, $\sigma = 0 nA* \sqrt{\text{your time unit}}$ * Noise, superthreshold regime: $I_0 = 2.5nA$, $\sigma = 1 nA* \sqrt{\text{your time unit}}$
from neurodynex.tools import spike_tools, plot_tools # time unit. e.g. time_unit = 1.*b2.ms time_step = time_unit def test_effect_of_noise(amplitude, sigma, bins = np.linspace(0,1,50)): # Create a noisy step current noisy_step_current = get_noisy_step_current(t_start=50, t_end=5000, unit_time = time_step, amplitude= amplitude, sigma = sigma) # Run the LIF model (state_monitor,spike_monitor) = LIF.simulate_LIF_neuron(input_current=noisy_step_current, \ simulation_time = 5000 * b2.ms) plt.figure() plot_tools.plot_voltage_and_current_traces(state_monitor, noisy_step_current, title="", \ firing_threshold=LIF.FIRING_THRESHOLD) plt.show() print("nr of spikes: {}".format(spike_monitor.count[0])) # Use the function spike_tools.get_spike_train_stats spike_stats = spike_tools.get_spike_train_stats(spike_monitor) # Make the ISI histogram if len(spike_stats._all_ISI) != 0: plt.hist(np.asarray(spike_stats._all_ISI), bins) # choose an appropriate window size for the x-axis (ISI-axis)! plt.xlabel("ISI [s]") plt.ylabel("Number of spikes") plt.show() return spike_stats # 1. No noise, subthreshold stats1 = test_effect_of_noise(amplitude = 1.9 *b2.nA, sigma = 0*b2.nA*time_unit**(0.5)) # 2. Noise, subthreshold regime stats2 = test_effect_of_noise(amplitude = 1.9*b2.nA, sigma = 1*b2.nA*time_unit**(0.5), bins = np.linspace(0, 0.1, 100)) # 3. No noise, superthreshold regime stats3 = test_effect_of_noise(amplitude = 2.5*b2.nA, sigma = 0*b2.nA*time_unit**(0.5),bins = np.linspace(0, 0.1, 100)) # 4. Noise, superthreshold regime stats4 = test_effect_of_noise(amplitude = 2.5*b2.nA, sigma = 1*b2.nA*time_unit**(0.5), bins = np.linspace(0, 0.1, 100))
_____no_output_____
MIT
project1/.ipynb_checkpoints/Ex2_Eve_Rahbe_235549-checkpoint.ipynb
antoine-alleon/Biological_Modelling_Neural_Network_python_exercises
2 - Discuss your results (ISI histograms) for the four regimes. For help and inspiration, as well as for verification of your results, have a look at the book chapter 8.3 (http://neuronaldynamics.epfl.ch/online/Ch8.S3.html). Your answer should be max 5 lines long. [5 points] \begin{itemize}\item The first regime (no noise, subthreshold) does not generate any spikes, so there is no histogram.\item The second regime (noise, subthreshold) generates spikes, but fewer than the superthreshold regimes, as the voltage is generally under the firing threshold. The interspike interval is concentrated around 0.02 [s].\item The third regime (no noise, superthreshold) generates regularly separated spikes. The histogram only has one column as the time between each spike is always the same (0.012 [s]).\item The fourth regime (noise, superthreshold) generates as many spikes as the noiseless superthreshold regime. The interspike interval is concentrated around 0.012 [s], which is faster than in the noisy subthreshold regime as the current is generally higher.\end{itemize} 3 - For the ISI histograms you needed to simulate the neuron for a long time to gather enough statistics for the ISI. If you wanted to parallelize this procedure in order to reduce the computation time (e.g. you have multiple CPU cores on your machine), what would be a simple method to do that? Your answer should be max 3 lines long. Hint: Temporal vs. ensemble average... [2 points] You can simulate the neuron separately on 4 cores to obtain 4 times more ISI than if running on only 1 core. You can then aggregate the data of the 4 cores to compute the ISI histograms. 2.5.3 Noisy sinusoidal input current. Implement the noisy sinusoidal input current $I(t) + I_{noise}$. As before, edit the function provided below; you only have to add the noisy part of the current. Then create a noisy sinusoidal current with amplitude = $2.5nA$, frequency = $100Hz$, $\sigma = 1 nA* \sqrt{\text{your time unit}}$ and direct_current = $1.5nA$, run the LIF model and plot the input current and the membrane potential, as you did in the previous exercises. What do you observe when compared to the noiseless case ($\sigma = 0 nA*\sqrt{\text{your time unit}}$)? [5 points]
import math def get_noisy_sinusoidal_current(t_start, t_end, unit_time, amplitude, frequency, direct_current, sigma, phase_offset=0., append_zero=True): """Creates a noisy sinusoidal current. If t_start == t_end, then ALL entries are 0. Args: t_start (int): start of the sine wave t_end (int): end of the sine wave unit_time (Quantity, Time): unit of t_start and t_end. e.g. 0.1*brian2.ms amplitude (Quantity, Current): maximum amplitude of the sinus e.g. 3.5*brian2.uamp frequency (Quantity, Hz): Frequency of the sine. e.g. 0.5*brian2.kHz direct_current(Quantity, Current): DC-component (=offset) of the current sigma (float): amplitude (std) of the noise. e.g. 0.1*b2.uamp phase_offset (float, Optional): phase at t_start. Default = 0. append_zero (bool, optional): if true, 0Amp is appended at t_end+1. Without that trailing 0, Brian reads out the last value in the array for all indices > t_end. Returns: TimedArray: Brian2.TimedArray """ assert isinstance(t_start, int), "t_start_ms must be of type int" assert isinstance(t_end, int), "t_end must be of type int" assert b2.units.fundamentalunits.have_same_dimensions(amplitude, b2.amp), \ "amplitude must have the dimension of current. e.g. brian2.uamp" assert b2.units.fundamentalunits.have_same_dimensions(direct_current, b2.amp), \ "direct_current must have the dimension of current. e.g. brian2.uamp" assert b2.units.fundamentalunits.have_same_dimensions(frequency, b2.Hz), \ "frequency must have the dimension of 1/Time. e.g. brian2.Hz" tmp_size = 1 + t_end # +1 for t=0 if append_zero: tmp_size += 1 tmp = np.zeros((tmp_size, 1)) * b2.amp if t_end > t_start: # if deltaT is zero, we return a zero current phi = range(0, (t_end - t_start) + 1) phi = phi * unit_time * frequency phi = phi * 2. * math.pi + phase_offset c = np.sin(phi) c = (direct_current + c * amplitude) # add direct current and scale by amplitude tmp[t_start: t_end + 1, 0] = c # add sinusoidal part of current for i in range(t_start, t_end) : # Add noisy part of current here # Pay attention to correct scaling with respect to the unit_time (time_step) tmp[i] += sigma*(time_step**(-0.5))*np.random.randn() curr = b2.TimedArray(tmp, dt= unit_time) return curr # ------------------ amplitude = 2.5 *b2.nA frequency = 100 *b2.Hz time_unit = 1.*b2.ms time_step = 0.1*b2.ms # This is needed for higher temporal resolution sigma = 1*b2.nA*time_unit**(0.5) direct_current = 1.5 *b2.nA # Create a noiseless sinusoidal current noisy_sinusoidal_current = get_noisy_sinusoidal_current(200, 800, unit_time = time_step, amplitude= amplitude, frequency=frequency, direct_current=direct_current, sigma = 0 *b2.nA*time_unit**(0.5)) # Run the LIF model (state_monitor,spike_monitor) = LIF.simulate_LIF_neuron(input_current=noisy_sinusoidal_current, \ simulation_time = 100 * b2.ms) # plot I and vm plot_tools.plot_voltage_and_current_traces(state_monitor, noisy_sinusoidal_current, title="", \ firing_threshold=LIF.FIRING_THRESHOLD) print("nr of spikes: {}".format(spike_monitor.count[0])) # Create a noisy sinusoidal current noisy_sinusoidal_current = get_noisy_sinusoidal_current(200, 800, unit_time = time_step, amplitude= amplitude, frequency=frequency, direct_current=direct_current, sigma = sigma) # Run the LIF model (state_monitor,spike_monitor) = LIF.simulate_LIF_neuron(input_current=noisy_sinusoidal_current, \ simulation_time = 100 * b2.ms) # plot I and vm plot_tools.plot_voltage_and_current_traces(state_monitor, noisy_sinusoidal_current, title="", \ firing_threshold=LIF.FIRING_THRESHOLD) print("nr of spikes: 
{}".format(spike_monitor.count[0]))
nr of spikes: 2
MIT
project1/.ipynb_checkpoints/Ex2_Eve_Rahbe_235549-checkpoint.ipynb
antoine-alleon/Biological_Modelling_Neural_Network_python_exercises
In the noiseless case, the voltage approaches the firing threshold but does not generate any spikes. In the noisy case we observe some spikes when the firing threshold is exceeded around the maxima of the sinusoid.

2.5.4 Stochastic resonance (Bonus, not graded)

Contrary to what one may expect, some amount of noise under certain circumstances can improve the signal transmission properties of neurons. In the subthreshold regime, a neuron cannot transmit any information about the temporal structure of its input since it does not spike. With some noise, there is some probability to spike, with the probability depending on the time-dependent input (an inhomogeneous Poisson process). However, too much noise covers the signal completely, and thus there is usually an optimal value for the amplitude of the noise. This phenomenon is called "stochastic resonance" and we will briefly touch upon it in this exercise. To get an idea of the effect we suggest reading section 9.4.2 in the book: http://neuronaldynamics.epfl.ch/online/Ch9.S4.html.

1 - Simulate several (e.g. n_inits = 5) trials of a LIF neuron with a noisy sinusoidal current. For each trial calculate the power spectrum of the resulting spike train (using the function spike_tools.get_averaged_single_neuron_power_spectrum). Finally calculate the average power spectrum and plot it. With appropriate noise amplitudes, you should see a pronounced peak at the driving frequency, while without noise we don't see anything in the power spectrum, since no spike is elicited in the subthreshold regime we are in. In order to do that, use the provided parameters and edit the code provided below. Complete the function _run_sim(), which creates the input current, runs a simulation and computes the power spectrum. Call it in a loop to execute several trials. Then average over the spectra to obtain a smooth spectrum to plot.
amplitude = 1.*b2.nA
frequency = 20*b2.Hz
time_unit = 1.*b2.ms
time_step = .1*b2.ms
direct_current = 1. * b2.nA
sampling_frequency = .01/time_step
noise_amplitude = 2.
n_inits = 5

# run simulation and calculate power spectrum
def _run_sim(amplitude, noise_amplitude):

    # suggested completion of the exercise placeholders: pass the sinusoidal parameters defined above
    noisy_sinusoidal_current = get_noisy_sinusoidal_current(50, 100000, unit_time = time_step,
                                                            amplitude = amplitude,
                                                            frequency = frequency,
                                                            direct_current = direct_current,
                                                            sigma = noise_amplitude*b2.nA*np.sqrt(time_unit))
    # run the LIF model
    (state_monitor, spike_monitor) = LIF.simulate_LIF_neuron(input_current=noisy_sinusoidal_current, \
                                                             simulation_time = 10000 * b2.ms)
    # get power spectrum
    freq, mean_ps, all_ps_dict, mean_firing_rate, mean_firing_freqs_per_neuron = \
        spike_tools.get_averaged_single_neuron_power_spectrum(spike_monitor, sampling_frequency,
                                                              window_t_min = 1000*b2.ms, window_t_max = 9000*b2.ms,
                                                              nr_neurons_average=1, subtract_mean=True)
    return freq, all_ps_dict, mean_firing_rate

# initialize array
spectra = []

# run a few simulations, calculate the power spectrum and append it to the spectra array
for i in range(n_inits):
    freq, spectrum, mfr = _run_sim(amplitude, noise_amplitude)
    spectra.append(spectrum[0])

# average spectra over trials (hint: use np.mean with axis=0)
spectrum = np.mean(spectra, axis=0)

# plotting, frequencies vs the obtained spectrum:
plt.figure()
plt.plot(freq, spectrum)
plt.xlabel("frequency [Hz]")
plt.ylabel("power")
_____no_output_____
MIT
project1/.ipynb_checkpoints/Ex2_Eve_Rahbe_235549-checkpoint.ipynb
antoine-alleon/Biological_Modelling_Neural_Network_python_exercises
2 - We now apply different noise levels to investigate the optimal noise level for stochastic resonance. The quantity to optimize is the signal-to-noise ratio (SNR). Here, the SNR is defined as the intensity of the power spectrum at the driving frequency (the peak from above), divided by the value of the background noise (the power spectrum averaged around the peak).

In order to do that, edit the code provided below. You can re-use the function _run_sim() to obtain the power spectrum of one trial. The calculation of the SNR is already implemented and doesn't need to be changed.

When you have completed the code, run the simulation with the proposed parameters (this could take several minutes...). The result should be a plot showing an optimal noise amplitude, i.e. a $\sigma$ where the SNR is maximal.
def get_snr(amplitude, noise_amplitude, n_inits):
    spectra = []
    snr = 0.
    for i in range(0,n_inits):
        # run model with noisy sinusoidal current (suggested completion: reuse _run_sim from above)
        freq_signal, spectrum, mfr = _run_sim(amplitude, noise_amplitude)
        spectra.append(spectrum[0])
    # Average over trials to get power spectrum
    spectrum = np.mean(spectra, axis=0)
    if mfr != 0.*b2.Hz:
        peak = np.amax(spectrum)
        index_of_peak = np.argmax(spectrum)
        # snr: divide peak value by average of surrounding values
        snr = peak/np.mean(np.concatenate((spectrum[index_of_peak-100:index_of_peak-1],\
                                           spectrum[index_of_peak+1:index_of_peak+100])))
    else:
        snr = 0.
    return snr

noise_amplitudes = np.arange(0.,5.,.5)
snr = np.zeros(len(noise_amplitudes))
for j in np.arange(0,len(noise_amplitudes)):
    snr[j] = get_snr(amplitude, noise_amplitudes[j], n_inits = 8)

plt.figure()
plt.plot(noise_amplitudes,snr)
plt.xlabel("noise amplitude sigma [nA * sqrt(ms)]")
plt.ylabel("signal-to-noise ratio")
plt.show()
_____no_output_____
MIT
project1/.ipynb_checkpoints/Ex2_Eve_Rahbe_235549-checkpoint.ipynb
antoine-alleon/Biological_Modelling_Neural_Network_python_exercises
Azure ML Training Pipeline for COVID-CXRThis notebook defines an Azure machine learning pipeline for a single training run and submits the pipeline as an experiment to be run on an Azure virtual machine.
# Import statements import azureml.core from azureml.core import Experiment from azureml.core import Workspace, Datastore from azureml.data.data_reference import DataReference from azureml.pipeline.core import PipelineData from azureml.pipeline.core import Pipeline from azureml.pipeline.steps import PythonScriptStep, EstimatorStep from azureml.train.dnn import TensorFlow from azureml.train.estimator import Estimator from azureml.core.compute import ComputeTarget, AmlCompute from azureml.core.compute_target import ComputeTargetException from azureml.core.environment import Environment from azureml.core.runconfig import RunConfiguration import shutil
_____no_output_____
MIT
azure/train_pipeline.ipynb
fashourr/covid-cxr
Register the workspace and configure its Python environment.
# Get reference to the workspace ws = Workspace.from_config("./ws_config.json") # Set workspace's environment env = Environment.from_pip_requirements(name = "covid-cxr_env", file_path = "./../requirements.txt") env.register(workspace=ws) runconfig = RunConfiguration(conda_dependencies=env.python.conda_dependencies) print(env.python.conda_dependencies.serialize_to_string()) # Move AML ignore file to root folder aml_ignore_path = shutil.copy('./.amlignore', './../.amlignore')
_____no_output_____
MIT
azure/train_pipeline.ipynb
fashourr/covid-cxr
Create references to persistent and intermediate dataCreate DataReference objects that point to our raw data on the blob. Configure a PipelineData object to point to preprocessed images stored on the blob.
# Get the blob datastore associated with this workspace blob_store = Datastore(ws, name='covid_cxr_ds') # Create data references to folders on the blob raw_data_dr = DataReference( datastore=blob_store, data_reference_name="raw_data", path_on_datastore="data/") mila_data_dr = DataReference( datastore=blob_store, data_reference_name="mila_data", path_on_datastore="data/covid-chestxray-dataset/") fig1_data_dr = DataReference( datastore=blob_store, data_reference_name="fig1_data", path_on_datastore="data/Figure1-COVID-chestxray-dataset/") rsna_data_dr = DataReference( datastore=blob_store, data_reference_name="rsna_data", path_on_datastore="data/rsna/") training_logs_dr = DataReference( datastore=blob_store, data_reference_name="training_logs_data", path_on_datastore="logs/training/") models_dr = DataReference( datastore=blob_store, data_reference_name="models_data", path_on_datastore="models/") # Set up references to pipeline data (intermediate pipeline storage). processed_pd = PipelineData( "processed_data", datastore=blob_store, output_name="processed_data", output_mode="mount")
_____no_output_____
MIT
azure/train_pipeline.ipynb
fashourr/covid-cxr
Compute TargetSpecify and configure the compute target for this workspace. If a compute cluster by the name we specified does not exist, create a new compute cluster.
CT_NAME = "nd12s-clust-hp" # Name of our compute cluster VM_SIZE = "STANDARD_ND12S" # Specify the Azure VM for execution of our pipeline #CT_NAME = "d2-cluster" # Name of our compute cluster #VM_SIZE = "STANDARD_D2" # Specify the Azure VM for execution of our pipeline # Set up the compute target for this experiment try: compute_target = AmlCompute(ws, CT_NAME) print("Found existing compute target.") except ComputeTargetException: print("Creating new compute target") provisioning_config = AmlCompute.provisioning_configuration(vm_size=VM_SIZE, min_nodes=1, max_nodes=4) compute_target = ComputeTarget.create(ws, CT_NAME, provisioning_config) # Create the compute cluster # Wait for cluster to be provisioned compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20) print("Azure Machine Learning Compute attached") print("Compute targets: ", ws.compute_targets) compute_target = ws.compute_targets[CT_NAME]
_____no_output_____
MIT
azure/train_pipeline.ipynb
fashourr/covid-cxr
Define pipeline and submit experiment.Define the steps of an Azure machine learning pipeline. Create an Azure Experiment that will run our pipeline. Submit the experiment to the execution environment.
# Define preprocessing step the ML pipeline step1 = PythonScriptStep(name="preprocess_step", script_name="azure/preprocess_step/preprocess_step.py", arguments=["--miladatadir", mila_data_dr, "--fig1datadir", fig1_data_dr, "--rsnadatadir", rsna_data_dr, "--preprocesseddir", processed_pd], inputs=[mila_data_dr, fig1_data_dr, rsna_data_dr], outputs=[processed_pd], compute_target=compute_target, source_directory="./../", runconfig=runconfig, allow_reuse=True) # Define training step in the ML pipeline est = TensorFlow(source_directory='./../', script_params=None, compute_target=compute_target, entry_script='azure/train_step/train_step.py', pip_packages=['tensorboard', 'pandas', 'dill', 'numpy', 'imblearn', 'matplotlib', 'scikit-image', 'matplotlib', 'pydicom', 'opencv-python', 'tqdm', 'scikit-learn'], use_gpu=True, framework_version='2.0') step2 = EstimatorStep(name="estimator_train_step", estimator=est, estimator_entry_script_arguments=["--rawdatadir", raw_data_dr, "--preprocesseddir", processed_pd, "--traininglogsdir", training_logs_dr, "--modelsdir", models_dr], runconfig_pipeline_params=None, inputs=[raw_data_dr, processed_pd, training_logs_dr, models_dr], outputs=[], compute_target=compute_target) # Construct the ML pipeline from the steps steps = [step1, step2] single_train_pipeline = Pipeline(workspace=ws, steps=steps) single_train_pipeline.validate() # Define a new experiment and submit a new pipeline run to the compute target. experiment = Experiment(workspace=ws, name='SingleTrainExperiment_v3') experiment.submit(single_train_pipeline, regenerate_outputs=False) print("Pipeline is submitted for execution") # Move AML ignore file back to original folder aml_ignore_path = shutil.move(aml_ignore_path, './.amlignore')
_____no_output_____
MIT
azure/train_pipeline.ipynb
fashourr/covid-cxr
SIT742: Modern Data Science **(Week 01: Programming Python)**
---
- Materials in this module include resources collected from various open-source online repositories.
- You are free to use, change and distribute this package.
- If you found any issue/bug for this document, please submit an issue at [tulip-lab/sit742](https://github.com/tulip-lab/sit742/issues)

Prepared by **SIT742 Teaching Team**

---

Session 1A - IPython notebook and basic data types

In this session, you will learn how to run *Python* code under **IPython notebook**. You have two options for the environment:
1. Install the [Anaconda](https://www.anaconda.com/distribution/), and run it locally; **OR**
1. Use a cloud data science platform such as:
 - [Google Colab](https://colab.research.google.com): SIT742 lab session will use Google Colab.
 - [IBM Cloud](https://www.ibm.com/cloud)
 - [DataBricks](https://community.cloud.databricks.com)

In IPython notebook, you will be able to execute and modify your *Python* code more efficiently.

- **If you are using Google Colab for SIT742 lab session practicals, you can ignore this Part 1 of this Session 1A, and start with Part 2.**

In addition, you will be given an introduction to *Python*'s basic data types, getting familiar with **string**, **number**, data conversion, data comparison and data input/output. Hopefully, by using **Python** and the powerful **IPython Notebook** environment, you will find writing programs both fun and easy.

Content

Part 1 Create your own IPython notebook
1.1 [Start a notebook server](cell_start)
1.2 [A tour of IPython notebook](cell_tour)
1.3 [IPython notebook interface](cell_interface)
1.4 [Open and close notebooks](cell_close)

Part 2 Basic data types
2.1 [String](cell_string)
2.2 [Number](cell_number)
2.3 [Data conversion and comparison](cell_conversion)
2.4 [Input and output](cell_input)

Part 1. Create your own IPython notebook

- **If you are using Google Colab for SIT742 lab session practicals, you can ignore this Part 1, and start with Part 2.**

This notebook shows you how to start an IPython notebook session. It guides you through the process of creating your own notebook, provides details on the notebook interface, and shows you how to navigate within a notebook and manipulate its components.

1.1 Start a notebook server

As described in Part 1, you start the IPython notebook server by keying in the command in a terminal window/command line window. However, before you do this, make sure you have created a folder **p01** under **H:/sit742**, downloaded the **SIT742P01A-Python.ipynb** notebook, and saved it under **H:/sit742/p01**. If you are using [Google Colab](https://colab.research.google.com), you can upload this notebook to Google Colab and run it from there. If you have any difficulty, please ask your tutor, or check the CloudDeakin discussions. After you complete this, you can switch your working directory to **H:/sit742**, and start the IPython notebook server with the following command:
ipython notebook  # don't run this in the notebook; run it on the command line to start the server
_____no_output_____
MIT
Jupyter/SIT742P01A-Python.ipynb
jilliant/sit742
You can see the message in the terminal window as follows: This will open a new browser window (or a new tab in your browser window). In the browser, there is a **dashboard** page which shows you all the folders and files under the **sit742** folder.

1.2 A tour of IPython notebook

Create a new IPython notebook

To create a new notebook, go to the menu bar and select **File -> New Notebook -> Python 3**. By default, the new notebook is named **Untitled**. To give your notebook a meaningful name, click on the notebook name and rename it. We would like to call our new notebook **hello.ipynb**. Therefore, key in the name **hello**.

Run scripts in code cells

After a new notebook is created, there is an empty box in the notebook, called a **cell**. If you double click on the cell, you enter the **edit** mode of the notebook. Now we can enter the following code in the cell
text = "Hello World" print(text)
_____no_output_____
MIT
Jupyter/SIT742P01A-Python.ipynb
jilliant/sit742
After this, press **CTRL + ENTER** to execute the cell. The result will be shown after the cell. After a cell is executed, the notebook is switched to the **Command** mode. In this mode, you can manipulate the notebook and its components. Alternatively, you can use the **ESC** key to switch from **Edit** mode to **Command** mode without executing code. To modify the code you entered in the cell, **double click** the cell again and modify its content. For example, try to change the first line of the previous cell into the following code:
text = "Good morning World!"
_____no_output_____
MIT
Jupyter/SIT742P01A-Python.ipynb
jilliant/sit742
Afterwards, press **CTRL + ENTER**, and the new output is displayed. As you can see, you are switching between two modes, **Command** and **Edit**, when editing a notebook. We will look into these two operation modes more closely in a later section. Now practise switching between the two modes until you are comfortable with them.

Add new cells

To add a new cell to a notebook, you have to ensure the notebook is in **Command** mode. If not, refer to the previous section to switch to **Command** mode. To add a cell below the current cell, go to the menu bar and click **Insert -> Insert Cell Below**. Alternatively, you can use the shortcut, i.e. pressing **b** (or **a** to create a cell above).

Add markdown cells

By default, a code cell is created when adding a new cell. However, IPython notebook also uses a **Markdown** cell for entering normal text. We use markdown cells to display text in a specific format and to provide structure for a notebook. Try to copy the text in the cell below and paste it into your new notebook. Then, from the toolbar (**Cell -> Cell Type**), change the cell type from **Code** to **Markdown**. Please note that in the following cell, there is a space between the leading **-, #, 0.** and the text that follows.
## Heading 2 Normal text here! ### Heading 3 ordered list here 0. Fruits 0. Banana 0. Grapes 0. Veggies 0. Tomato 0. Broccoli Unordered list here - Fruits - Banana - Grapes - Veggies - Tomato - Broccoli
_____no_output_____
MIT
Jupyter/SIT742P01A-Python.ipynb
jilliant/sit742
Now execute the cell by pressing **CTRL + ENTER**. Your notebook should look like this: Here is what the formatted Markdown cell looks like:

Exercise: Click this cell, and practise writing markdown language here....

1.3 IPython notebook interface

Now that you have created your first notebook, let us have a closer look at the user interface of IPython notebook.

Notebook components

When you create a new notebook document, you will be presented with the notebook name, a menu bar, a toolbar and an empty code cell. We can see the following components in a notebook:
- **Title bar** is at the top of the page and contains the name of the notebook. Clicking on the notebook name brings up a dialog which allows you to rename it. Please rename your notebook from “Untitled0” to “hello”. This changes the file name from **Untitled0.ipynb** to **hello.ipynb**.
- **Menu bar** presents different options that can be used to manipulate the way the notebook functions.
- **Toolbar** gives a quick way of performing the most-used operations within the notebook.
- An empty computational cell is shown in a new notebook where you can key in your code.

The notebook has two modes of operation:
- **Edit**: In this mode, a single cell comes into focus and you can enter text or execute code. You activate the **Edit mode** by **clicking on a cell** or **selecting a cell and then pressing the Enter key**.
- **Command**: In this mode, you can perform tasks that are related to the whole notebook structure. For example, you can move, copy, cut and paste cells. A series of keyboard shortcuts are also available to enable you to perform these tasks more efficiently. The easiest way of activating the Command mode is by pressing the **Esc** key to exit editing mode.

Get help and interrupting

To get help on the use of different commands and shortcuts, you can go to the **Help** menu, which provides links to relevant documentation. It is also easy to get help on any objects (including functions and methods). For example, to access help on the sum() function, enter the following line in a cell:
sum?
_____no_output_____
MIT
Jupyter/SIT742P01A-Python.ipynb
jilliant/sit742
The other important thing to know is how to interrupt a computation. This can be done through the menu **Kernel -> Interrupt** or **Kernel -> Restart**, depending on what works in the situation. We will have a chance to try this in a later session.

Notebook cell types

There are basically three types of cells in an IPython notebook: Code Cells, Markdown Cells, Raw Cells.

**Code cells**: Code cells can be used to enter code and will be executed by the Python interpreter. Although we will not use other languages in this unit, it is good to know that Jupyter Notebooks also support JavaScript, HTML, and Bash commands.

**Markdown cells**: You have created a markdown cell in the previous section. Markdown cells are the easiest way to write and format text. They also give structure to the notebook. Markdown language is used in this type of cell. Follow this link https://daringfireball.net/projects/markdown/basics for the basics of the syntax. This is a Markdown Cells example notebook sourced from: https://ipython.org/ipython-doc/3/notebook/notebook.html This markdown cheat sheet can also be a good reference to the main markdown syntax you might need to use in our pracs: http://nestacms.com/docs/creating-content/markdown-cheat-sheet

**Raw cells**: Raw cells, unlike all other Jupyter Notebook cells, have no input-output distinction. This means that Raw cells cannot be rendered into anything other than what they already are. They are mainly used to create examples.

As you have seen, you can use the toolbar to choose between different cell types. In addition, the shortcuts **M** and **Y** can be used to quickly change a cell to a Code cell or a Markdown cell under Command mode.

Operation modes of IPython notebook

**Edit mode**

The Edit mode is used to enter text in cells and to execute code. As you have seen, after typing some code in the notebook and pressing **CTRL+Enter**, the notebook executes the cell and displays the output. The other two shortcuts used to run code in a cell are **Shift + Enter** and **Alt + Enter**. These three ways to run the code in a cell are summarized as follows:
- Pressing Shift + Enter: This runs the cell and selects the next cell (a new cell is created if at the end of the notebook). This is the most usual way to execute a cell.
- Pressing Ctrl + Enter: This runs the cell and keeps the same cell selected.
- Pressing Alt + Enter: This runs the cell and inserts a new cell below it.

**Command mode**

In Command mode, you can edit the notebook as a whole, but not type into individual cells. You can use keyboard shortcuts in this mode to perform the notebook and cell actions more efficiently. For example, if you are in command mode and press **c**, you will copy the current cell. There are a large number of shortcuts available in the command mode. However, you do not have to remember all of them, since most actions in the command mode are available in the menu. Here is a list of the most useful shortcuts. They are arranged in the order we recommend you learn them so that you can edit cells efficiently.
1. Basic navigation:
 - Enter: switch to Edit mode
 - Esc: switch to Command mode
 - Shift+Enter: execute a cell
 - Up, Down: move to the cell above or below
2. Cell types:
 - y: switch to code cell
 - m: switch to markdown cell
3. Cell creation:
 - a: insert new cell above
 - b: insert new cell below
4. Cell deleting:
 - press D twice.

Note that one of the most common (and frustrating) mistakes when using the notebook is to type something in the wrong mode.
Remember to use **Esc** to switch to the Command mode and **Enter** to switch to the Edit mode. Also, remember that **clicking** on a cell automatically places it in the Edit mode, so it will be necessary to press **Esc** to go to the Command mode.

Exercise

Please go ahead and try these shortcuts. For example, try to insert a new cell, and modify and delete an existing cell. You can also switch cells between code type and markdown type, and practise different kinds of formatting in a markdown cell. For a complete list of shortcuts in **Command** mode, go to the menu bar **Help -> Keyboard Shortcuts**. Feel free to explore the other shortcuts.

1.4 Open and close notebooks

You can open multiple notebooks in a browser window. Simply go to the menu bar and choose **File -> Open...**, and select one **.ipynb** file. The second notebook will be opened in a separate tab. Now make sure you still have your **hello.ipynb** open. Also please download **ControlAdvData.ipynb** from CloudDeakin, and save it under **H:/sit742/prac01**. Now go to the menu bar, click on **File -> Open ...**, locate the file **ControlAdvData.ipynb**, and open this file.

When you finish your work, you will need to close your notebooks and shut down the IPython notebook server. Instead of simply closing all the tabs in the browser, you need to shut down each notebook first. To do this, switch to the **Home** tab (**Dashboard** page) and the **Running** section (see below). Click on the **Shutdown** button to close each notebook. In case the **Dashboard** page is not open, click on the **Jupyter** icon to reopen it. After each notebook is shut down, it is time to shut down the IPython notebook server. To do this, go to the terminal window and press **CTRL + C**, and then enter **Y**. After the notebook server is shut down, the terminal window is ready for you to enter any new command.

Part 2 Basic Data Types

In this part, you will get a better understanding of Python's basic data types. We will look at the **string** and **number** data types in this section. Also covered are:
- Data conversion
- Data comparison
- Receiving input from users and displaying results effectively

You will be guided through completing a simple program which receives input from a user, processes the information, and displays results in a specific format.

2.1 String

A string is a *sequence of characters*. We use strings in almost every Python program. As we have seen in the **"Hello, World!"** example, strings can be specified using single quotes **'**. The **print()** function can be used to display a string.
print('Hello, World!')
_____no_output_____
MIT
Jupyter/SIT742P01A-Python.ipynb
jilliant/sit742
We can also use a variable to store the string value, and use the variable in the **print()** function.
# Assign a string to a variable text = 'Hello, World!' print(text)
_____no_output_____
MIT
Jupyter/SIT742P01A-Python.ipynb
jilliant/sit742
A *variable* is basically a name that represents (or refers to) some value. We use **=** to assign a value to a variable before we use it. Variable names are given by a programmer in a way that makes the program easy to understand. Variable names are *case sensitive*. They can consist of letters, digits and underscores; however, they cannot begin with a digit. For example, **plan9** and **plan_9** are valid names, whereas **9plan** is not (see the small illustration below).
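A quick illustrative cell (not part of the original notebook) confirming these naming rules:

plan9 = 90     # a valid variable name
plan_9 = 9     # also valid: underscores are allowed
# 9plan = 19   # invalid: a name cannot begin with a digit (uncommenting raises a SyntaxError)
print(plan9, plan_9)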
text = 'Hello, World!' # with print() function, content is displayed without quotation mark print(text)
_____no_output_____
MIT
Jupyter/SIT742P01A-Python.ipynb
jilliant/sit742
With variables, we can also display their values without the **print()** function. Note that you cannot display a variable without the **print()** function in a Python script (i.e. in a **.py** file). This method only works under interactive mode (i.e. in the notebook).
# without print() function, quotation mark is displayed together with content text
_____no_output_____
MIT
Jupyter/SIT742P01A-Python.ipynb
jilliant/sit742
Back to the representation of strings: there will be issues if you need to include a quotation mark in the text.
text = ’What’ s your name ’
_____no_output_____
MIT
Jupyter/SIT742P01A-Python.ipynb
jilliant/sit742
Strings in double quotes **"** work exactly the same way as strings in single quotes. By mixing the two types, it is easy to include a quotation mark itself in the text.
text = "What' s your name?" print(text)
_____no_output_____
MIT
Jupyter/SIT742P01A-Python.ipynb
jilliant/sit742
Alternatively, you can use:
text = '"What is the problem?", he asked.' print(text)
_____no_output_____
MIT
Jupyter/SIT742P01A-Python.ipynb
jilliant/sit742
You can specify multi-line strings using triple quotes (**"""** or **'''**). In this way, single quotes and double quotes can be used freely in the text. Here is one example:
multiline = '''This is a test for multiline. This is the first line. This is the second line. I asked, "What's your name?"''' print(multiline)
_____no_output_____
MIT
Jupyter/SIT742P01A-Python.ipynb
jilliant/sit742
Notice the difference when the variable is displayed without the **print()** function in this case.
multiline = '''This is a test for multiline. This is the first line. This is the second line. I asked, "What's your name?"''' multiline
_____no_output_____
MIT
Jupyter/SIT742P01A-Python.ipynb
jilliant/sit742
Another way of including special characters, such as single quotes, is with the help of escape sequences **\\**. For example, you can specify the single quote using **\\'** as follows.
string = 'What\'s your name?' print(string)
_____no_output_____
MIT
Jupyter/SIT742P01A-Python.ipynb
jilliant/sit742
There are many other escape sequences (see Section 2.4.1 in the [Python 3.0 official documentation](https://docs.python.org/3.1/reference/lexical_analysis.html)), but I am going to mention the two most useful examples here. First, use escape sequences to indicate the backslash itself, e.g. **\\\\**
path = 'c:\\windows\\temp' print(path)
_____no_output_____
MIT
Jupyter/SIT742P01A-Python.ipynb
jilliant/sit742
Second, use escape sequences to specify a two-line string. Apart from using a triple-quoted string as shown previously, you can use **\n** to indicate the start of a new line.
multiline = 'This is a test for multiline. This is the first line.\nThis is the second line.' print(multiline)
_____no_output_____
MIT
Jupyter/SIT742P01A-Python.ipynb
jilliant/sit742
To manipulate strings, the following two operators are most useful: * **+** is used to concatenate two strings or string variables; * ***** is used for concatenating several copies of the same string.
print('Hello, ' + 'World' * 3)
_____no_output_____
MIT
Jupyter/SIT742P01A-Python.ipynb
jilliant/sit742
Below is another example of string concatenation based on variables that store strings.
name = 'World' greeting = 'Hello' print(greeting + ', ' + name + '!')
_____no_output_____
MIT
Jupyter/SIT742P01A-Python.ipynb
jilliant/sit742
Using variables, changing part of the string text is very easy.
name greeting # Change part of the text is easy greeting = 'Good morning' print(greeting + ', ' + name + '!')
_____no_output_____
MIT
Jupyter/SIT742P01A-Python.ipynb
jilliant/sit742
2.2 Number

There are two types of numbers that are used most frequently: integers and floats. As we expect, the standard mathematical operations can be applied to these two types. Please try the following expressions. Note that **\*\*** is the exponent operator, which indicates an exponentiation (power) calculation.
2 + 3 3 * 5 #3 to the power of 4 3 ** 4
_____no_output_____
MIT
Jupyter/SIT742P01A-Python.ipynb
jilliant/sit742
Among the number operations, we need to look at division closely. In Python 3.0, classic division is performed using **/**.
15 / 5 14 / 5
_____no_output_____
MIT
Jupyter/SIT742P01A-Python.ipynb
jilliant/sit742
*//* is used to perform floor division. It truncates the fraction and rounds it to the next smallest whole number toward the left on the number line.
14 // 5 # Negatives move left on number line. The result is -3 instead of -2 -14 // 5
_____no_output_____
MIT
Jupyter/SIT742P01A-Python.ipynb
jilliant/sit742
The modulus operator **%** can be used to obtain the remainder. Pay attention when a negative number is involved.
14 % 5 # Hint: −14 // 5 equal to −3 # (-3) * 5 + ? = -14 -14 % 5
_____no_output_____
MIT
Jupyter/SIT742P01A-Python.ipynb
jilliant/sit742
*Operator precedence* is a rule that affects how an expression is evaluated. As we learned in high school, multiplication is done before addition, e.g. in **2 + 3 * 4**. This means the multiplication operator has higher precedence than the addition operator. For your reference, a precedence table from the Python reference manual is used to indicate the evaluation order in Python. For a complete precedence table, check the heading "Python Operators Precedence" in this [Python tutorial](http://www.tutorialspoint.com/python/python_basic_operators.htm). However, when things get confusing, it is far better to use parentheses **()** to explicitly specify the precedence. This makes the program more readable. Here are some examples of operator precedence:
2 + 3 * 4 (2 + 3) * 4 2 + 3 ** 2 (2 + 3) ** 2 -(4+3)+2
_____no_output_____
MIT
Jupyter/SIT742P01A-Python.ipynb
jilliant/sit742
Similarly to strings, variables can be used to store numbers so that it is easy to manipulate them.
x = 3 y = 2 x + 2 sum = x + y sum x * y
_____no_output_____
MIT
Jupyter/SIT742P01A-Python.ipynb
jilliant/sit742
One common expression is to run a math operation on a variable and then assign the result of the operation back to the variable. Therefore, there is a shortcut for such an expression.
x = 2 x = x * 3 x
_____no_output_____
MIT
Jupyter/SIT742P01A-Python.ipynb
jilliant/sit742
This is equivalent to:
x = 2
# Note there is no space between '*' and '='
x *= 3
x
_____no_output_____
MIT
Jupyter/SIT742P01A-Python.ipynb
jilliant/sit742
2.3 Data conversion and comparison

So far, we have seen three types of data: integer, float, and string. With various data types, Python can define the operations possible on them and the storage method for each of them. In later pracs, we will introduce more data types, such as tuple, list and dictionary.

To obtain the data type of a variable or a value, we can use the built-in function **type()**, whereas functions such as **str()**, **int()** and **float()** are used to convert data from one type to another. Check the following examples on the usage of these functions (a short extra illustration of **str()** and **int()** follows them):
type('Hello, world!')

input_Value = '45.6'
type(input_Value)

weight = float(input_Value)
weight

type(weight)
_____no_output_____
MIT
Jupyter/SIT742P01A-Python.ipynb
jilliant/sit742
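For completeness, the **str()** and **int()** conversion functions mentioned above work in the same way; here is a small illustrative cell (not part of the original examples):

n = int('45')    # string -> integer
s = str(45.6)    # float -> string
print(n, type(n))
print(s, type(s))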
Note that the system will report an error message when the conversion function is not compatible with the data.
input_Value = 'David' weight = float(input_Value)
_____no_output_____
MIT
Jupyter/SIT742P01A-Python.ipynb
jilliant/sit742
Comparison between two values can help make decisions in a program. The result of a comparison is either **True** or **False**; they are the two values of the *Boolean* type.
5 > 10 type(5 > 10) # Double equal sign is also used for comparison 10.0 == 10
_____no_output_____
MIT
Jupyter/SIT742P01A-Python.ipynb
jilliant/sit742
Check the following examples on comparison of two strings.
'cat' < 'dog'

# All uppercase letters come before lowercase letters.
'cat' < 'Dog'

'apple' < 'apricot'
_____no_output_____
MIT
Jupyter/SIT742P01A-Python.ipynb
jilliant/sit742
There are three logical operators, *not*, *and* and *or*, which can be applied to Boolean values.
# Both condition #1 and condition #2 are True? 3 < 4 and 7 < 8 # Either condition 1 or condition 2 are True? 3 < 4 or 7 > 8 # Both conditional #1 and conditional #2 are False? not ((3 > 4) or (7 > 8))
_____no_output_____
MIT
Jupyter/SIT742P01A-Python.ipynb
jilliant/sit742
2.4 Input and output

All programming languages provide features to interact with the user. Python provides the *input()* function to get input; it waits for the user to type some input and press return. We can add some information for the user by putting a message inside the function's brackets, which must be a string or a string variable. The text that was typed can be saved in a variable. Here is one example:
nInput = input('Enter your number here:\n')
_____no_output_____
MIT
Jupyter/SIT742P01A-Python.ipynb
jilliant/sit742
However, be aware that the input received from the user is treated as a string, even though the user entered a number. The following **print()** function invokes an error message.
print(nInput + 3)
_____no_output_____
MIT
Jupyter/SIT742P01A-Python.ipynb
jilliant/sit742
The input needs to be converted to an integer before the math operation can be performed, as follows:
print(int(nInput) + 3)
_____no_output_____
MIT
Jupyter/SIT742P01A-Python.ipynb
jilliant/sit742
After the user's input is accepted, messages need to be displayed to the user accordingly. String concatenation is one way to display messages which incorporate variable values.
name = 'David' print('Hello, ' + name)
_____no_output_____
MIT
Jupyter/SIT742P01A-Python.ipynb
jilliant/sit742
Another way of achieving this is using the **print()** function with *string formatting*. We need to use the *string formatting operator*, the percent (**%**) sign.
name = 'David' print('Hello, %s' % name)
_____no_output_____
MIT
Jupyter/SIT742P01A-Python.ipynb
jilliant/sit742
Here is another example with two variables:
name = 'David' age = 23 print('%s is %d years old.' % (name, age))
_____no_output_____
MIT
Jupyter/SIT742P01A-Python.ipynb
jilliant/sit742
Notice that the two variables, **name** and **age**, that specify the values are included at the end of the statement, enclosed in a bracket. Within the quotation marks, **%s** and **%d** are used to specify the formatting for a string and an integer respectively.

The following table shows a selected set of symbols which can be used along with %.

| Format symbol | Conversion |
| --- | --- |
| %s | String |
| %d | Signed decimal integer |
| %f | Floating point real number |

There are extra characters that are used together with the above symbols:

| Symbol | Functionality |
| --- | --- |
| - | Left justification |
| + | Display the sign |
| m.n | m is the minimum total width; n is the number of digits to display after the decimal point |

Here are more examples that use the above specifiers (an extra illustration of the left-justification flag is given after them):
# With %f, the format is right justification by default.
# As a result, white spaces are added to the left of the number
# 10.4 means minimal width 10 with 4 decimal points
print('Output a float number: %10.4f' % (3.5))

# plus sign after % means to show positive sign
# Zero after plus sign means using leading zero to fill width of 5
print('Output an integer: %+05d' % (23))
_____no_output_____
MIT
Jupyter/SIT742P01A-Python.ipynb
jilliant/sit742
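As a further illustration of the table above, the **-** flag left-justifies a value within its minimum width; this extra cell is not part of the original notebook:

# '-' left-justifies within the minimum width of 10; compare with the default right justification
print('[%-10.2f]' % (3.5))
print('[%10.2f]' % (3.5))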
General Equilibrium

This notebook illustrates **how to solve general equilibrium (GE) models**. The example is a simple one-asset model without nominal rigidities.

The notebook shows how to:

1. Solve for the **stationary equilibrium**.
2. Solve for (non-linear) **transition paths** using a relaxation algorithm.
3. Solve for **transition paths** (linear vs. non-linear) and **impulse-responses** using the **sequence-space method** of **Auclert et al. (2020)**.
LOAD = False            # load stationary equilibrium
DO_VARY_SIGMA_E = True  # effect of uncertainty on stationary equilibrium
DO_TP_RELAX = True      # do transition path with relaxation
_____no_output_____
MIT
00. DynamicProgramming/05. General Equilibrium.ipynb
JMSundram/ConsumptionSavingNotebooks
Setup
%load_ext autoreload %autoreload 2 import time import numpy as np import numba as nb from scipy import optimize import matplotlib.pyplot as plt plt.style.use('seaborn-whitegrid') prop_cycle = plt.rcParams['axes.prop_cycle'] colors = prop_cycle.by_key()['color'] from consav.misc import elapsed from GEModel import GEModelClass from GEModel import solve_backwards, simulate_forwards, simulate_forwards_transpose
_____no_output_____
MIT
00. DynamicProgramming/05. General Equilibrium.ipynb
JMSundram/ConsumptionSavingNotebooks
Choose number of threads in numba
import numba as nb nb.set_num_threads(8)
_____no_output_____
MIT
00. DynamicProgramming/05. General Equilibrium.ipynb
JMSundram/ConsumptionSavingNotebooks
Model
model = GEModelClass('baseline',load=LOAD) print(model)
Modelclass: GEModelClass Name: baseline namespaces: ['sol', 'sim', 'par'] other_attrs: [] savefolder: saved not_floats: ['Ne', 'Na', 'max_iter_solve', 'max_iter_simulate', 'path_T'] sol: a = ndarray with shape = (7, 500) [dtype: float64] m = ndarray with shape = (7, 500) [dtype: float64] c = ndarray with shape = (7, 500) [dtype: float64] Va = ndarray with shape = (7, 500) [dtype: float64] i = ndarray with shape = (7, 500) [dtype: int32] w = ndarray with shape = (7, 500) [dtype: float64] path_a = ndarray with shape = (500, 7, 500) [dtype: float64] path_m = ndarray with shape = (500, 7, 500) [dtype: float64] path_c = ndarray with shape = (500, 7, 500) [dtype: float64] path_Va = ndarray with shape = (500, 7, 500) [dtype: float64] path_i = ndarray with shape = (500, 7, 500) [dtype: int32] path_w = ndarray with shape = (500, 7, 500) [dtype: float64] jac_K = ndarray with shape = (500, 500) [dtype: float64] jac_C = ndarray with shape = (500, 500) [dtype: float64] jac_curlyK_r = ndarray with shape = (500, 500) [dtype: float64] jac_curlyK_w = ndarray with shape = (500, 500) [dtype: float64] jac_C_r = ndarray with shape = (500, 500) [dtype: float64] jac_C_w = ndarray with shape = (500, 500) [dtype: float64] jac_r_K = ndarray with shape = (500, 500) [dtype: float64] jac_w_K = ndarray with shape = (500, 500) [dtype: float64] jac_r_Z = ndarray with shape = (500, 500) [dtype: float64] jac_w_Z = ndarray with shape = (500, 500) [dtype: float64] H_K = ndarray with shape = (500, 500) [dtype: float64] H_Z = ndarray with shape = (500, 500) [dtype: float64] G = ndarray with shape = (500, 500) [dtype: float64] memory, gb: 0.1 sim: D = ndarray with shape = (7, 500) [dtype: float64] path_D = ndarray with shape = (500, 7, 500) [dtype: float64] path_K = ndarray with shape = (500,) [dtype: float64] path_C = ndarray with shape = (500,) [dtype: float64] path_Klag = ndarray with shape = (500,) [dtype: float64] memory, gb: 0.0 par: r_ss = nan [float] w_ss = nan [float] K_ss = nan [float] Y_ss = nan [float] C_ss = nan [float] kd_ss = nan [float] ks_ss = nan [float] sigma = 1.0 [float] beta = 0.982 [float] Z = 1.0 [float] Z_sigma = 0.01 [float] Z_rho = 0.9 [float] alpha = 0.11 [float] delta = 0.025 [float] rho = 0.966 [float] sigma_e = 0.1 [float] Ne = 7 [int] a_max = 200.0 [float] Na = 500 [int] path_T = 500 [int] max_iter_solve = 5000 [int] max_iter_simulate = 5000 [int] solve_tol = 1e-10 [float] simulate_tol = 1e-10 [float] a_grid = ndarray with shape = (500,) [dtype: float64] e_grid = ndarray with shape = (7,) [dtype: float64] e_trans = ndarray with shape = (7, 7) [dtype: float64] e_ergodic = ndarray with shape = (7,) [dtype: float64] e_trans_cumsum = ndarray with shape = (7, 7) [dtype: float64] e_ergodic_cumsum = ndarray with shape = (7,) [dtype: float64] memory, gb: 0.0
MIT
00. DynamicProgramming/05. General Equilibrium.ipynb
JMSundram/ConsumptionSavingNotebooks
For easy access
par = model.par sim = model.sim sol = model.sol
_____no_output_____
MIT
00. DynamicProgramming/05. General Equilibrium.ipynb
JMSundram/ConsumptionSavingNotebooks
**Productivity states:**
for e,pr_e in zip(par.e_grid,par.e_ergodic): print(f'Pr[e = {e:7.4f}] = {pr_e:.4f}') assert np.isclose(np.sum(par.e_grid*par.e_ergodic),1.0)
Pr[e = 0.3599] = 0.0156 Pr[e = 0.4936] = 0.0938 Pr[e = 0.6769] = 0.2344 Pr[e = 0.9282] = 0.3125 Pr[e = 1.2729] = 0.2344 Pr[e = 1.7456] = 0.0937 Pr[e = 2.3939] = 0.0156
MIT
00. DynamicProgramming/05. General Equilibrium.ipynb
JMSundram/ConsumptionSavingNotebooks
Find Stationary Equilibrium **Step 1:** Find demand and supply of capital for a grid of interest rates.
if not LOAD: t0 = time.time() par = model.par # a. interest rate trial values Nr = 20 r_vec = np.linspace(0.005,1.0/par.beta-1-0.002,Nr) # 1+r > beta not possible # b. allocate Ks = np.zeros(Nr) Kd = np.zeros(Nr) # c. loop r_min = r_vec[0] r_max = r_vec[Nr-1] for i_r in range(Nr): # i. firm side k = model.firm_demand(r_vec[i_r],par.Z) Kd[i_r] = k*1 # aggregate labor = 1.0 # ii. household side success = model.solve_household_ss(r=r_vec[i_r]) if success: success = model.simulate_household_ss() if success: # total demand Ks[i_r] = np.sum(model.sim.D*model.sol.a) # bounds on r diff = Ks[i_r]-Kd[i_r] if diff < 0: r_min = np.fmax(r_min,r_vec[i_r]) if diff > 0: r_max = np.fmin(r_max,r_vec[i_r]) else: Ks[i_r] = np.nan # d. save model.save() print(f'grid search done in {elapsed(t0)}')
grid search done in 10.8 secs
MIT
00. DynamicProgramming/05. General Equilibrium.ipynb
JMSundram/ConsumptionSavingNotebooks
**Step 2:** Plot supply and demand.
if not LOAD: par = model.par fig = plt.figure(figsize=(6,4)) ax = fig.add_subplot(1,1,1) ax.plot(r_vec,Ks,label='supply of capital') ax.plot(r_vec,Kd,label='demand for capital') ax.axvline(r_min,lw=0.5,ls='--',color='black') ax.axvline(r_max,lw=0.5,ls='--',color='black') ax.legend(frameon=True) ax.set_xlabel('interest rate, $r$') ax.set_ylabel('capital, $K_t$') fig.tight_layout() fig.savefig('figs/stationary_equilibrium.pdf')
_____no_output_____
MIT
00. DynamicProgramming/05. General Equilibrium.ipynb
JMSundram/ConsumptionSavingNotebooks
**Step 3:** Solve root-finding problem.
def obj(r,model): model.solve_household_ss(r=r) model.simulate_household_ss() return np.sum(model.sim.D*model.sol.a)-model.firm_demand(r,model.par.Z) if not LOAD: t0 = time.time() opt = optimize.root_scalar(obj,bracket=[r_min,r_max],method='bisect',args=(model,)) model.par.r_ss = opt.root assert opt.converged print(f'search done in {elapsed(t0)}')
search done in 7.1 secs
MIT
00. DynamicProgramming/05. General Equilibrium.ipynb
JMSundram/ConsumptionSavingNotebooks
**Step 4:** Check market clearing conditions.
model.steady_state()
household problem solved in 0.1 secs [652 iterations] household problem simulated in 0.1 secs [747 iterations] r: 0.0127 w: 1.0160 Y: 1.1415 K/Y: 2.9186 capital market clearing: -0.00000000 goods market clearing: -0.00000755
MIT
00. DynamicProgramming/05. General Equilibrium.ipynb
JMSundram/ConsumptionSavingNotebooks
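The clearing residuals printed above presumably correspond to checks of the kind sketched below (illustrative only: it assumes steady-state investment equals delta*K and that par, sol and sim refer to the objects defined earlier; the actual checks live inside GEModelClass.steady_state()):

# capital market: household supply of assets minus the firm's demand for capital
clearing_K = np.sum(sim.D*sol.a) - par.kd_ss
# goods market: output minus consumption minus steady-state investment (assumed to be delta*K)
clearing_Y = par.Y_ss - par.C_ss - par.delta*par.K_ss
print(f'capital market residual: {clearing_K:.8f}')
print(f'goods market residual:   {clearing_Y:.8f}')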
Timings
%timeit model.solve_household_ss(r=par.r_ss) %timeit model.simulate_household_ss()
66.2 ms ± 1.42 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
MIT
00. DynamicProgramming/05. General Equilibrium.ipynb
JMSundram/ConsumptionSavingNotebooks
Income uncertainty and the equilibrium interest rate

The equilibrium interest rate decreases when income uncertainty is increased.
if DO_VARY_SIGMA_E: par = model.par # a. seetings sigma_e_vec = [0.20] # b. find equilibrium rates model_ = model.copy() for sigma_e in sigma_e_vec: # i. set new parameter model_.par.sigma_e = sigma_e model_.create_grids() # ii. solve print(f'sigma_e = {sigma_e:.4f}',end='') opt = optimize.root_scalar( obj, bracket=[0.00,model.par.r_ss], method='bisect', args=(model_,) ) print(f' -> r_ss = {opt.root:.4f}') model_.par.r_ss = opt.root model_.steady_state() print('\n')
sigma_e = 0.2000 -> r_ss = 0.0029 household problem solved in 0.1 secs [430 iterations] household problem simulated in 0.0 secs [427 iterations] r: 0.0029 w: 1.0546 Y: 1.1849 K/Y: 3.9462 capital market clearing: -0.00000000 goods market clearing: -0.00000587
MIT
00. DynamicProgramming/05. General Equilibrium.ipynb
JMSundram/ConsumptionSavingNotebooks
Test matrix formulation **Step 1:** Construct $\boldsymbol{Q}_{ss}$
# a. allocate Q Q = np.zeros((par.Ne*par.Na,par.Ne*par.Na)) # b. fill for i_e in range(par.Ne): # get view of current block q = Q[i_e*par.Na:(i_e+1)*par.Na,i_e*par.Na:(i_e+1)*par.Na] for i_a in range(par.Na): # i. optimal choice a_opt = sol.a[i_e,i_a] # ii. above -> all weight on last node if a_opt >= par.a_grid[-1]: q[i_a,-1] = 1.0 # iii. below -> all weight on first node elif a_opt <= par.a_grid[0]: q[i_a,0] = 1.0 # iv. standard -> distribute weights on neighboring nodes else: i_a_low = np.searchsorted(par.a_grid,a_opt,side='right')-1 assert a_opt >= par.a_grid[i_a_low], f'{a_opt} < {par.a_grid[i_a_low]}' assert a_opt < par.a_grid[i_a_low+1], f'{a_opt} < {par.a_grid[i_a_low]}' q[i_a,i_a_low] = (par.a_grid[i_a_low+1]-a_opt)/(par.a_grid[i_a_low+1]-par.a_grid[i_a_low]) q[i_a,i_a_low+1] = 1-q[i_a,i_a_low]
_____no_output_____
MIT
00. DynamicProgramming/05. General Equilibrium.ipynb
JMSundram/ConsumptionSavingNotebooks
**Step 2:** Construct $\tilde{\Pi}^e=\Pi^e \otimes \boldsymbol{I}_{N_{a}\times N_{a}}$
Pit = np.kron(par.e_trans,np.identity(par.Na))
_____no_output_____
MIT
00. DynamicProgramming/05. General Equilibrium.ipynb
JMSundram/ConsumptionSavingNotebooks
**Step 3:** Test $\overrightarrow{D}_{t+1}=\tilde{\Pi}^{e\prime}\boldsymbol{Q}_{ss}^{\prime}\overrightarrow{D}_{t}$
D = np.zeros(sim.D.shape) D[:,0] = par.e_ergodic # a. standard D_plus = np.zeros(D.shape) simulate_forwards(D,sol.i,sol.w,par.e_trans.T.copy(),D_plus) # b. matrix product D_plus_alt = ((Pit.T@Q.T)@D.ravel()).reshape((par.Ne,par.Na)) # c. test equality assert np.allclose(D_plus,D_plus_alt)
_____no_output_____
MIT
00. DynamicProgramming/05. General Equilibrium.ipynb
JMSundram/ConsumptionSavingNotebooks
Find transition path

**MIT-shock:** Transition path for an arbitrary exogenous path of $Z_t$ starting from the stationary equilibrium, i.e. $D_{-1} = D_{ss}$ and in particular $K_{-1} = K_{ss}$.

**Step 1:** Construct $\{Z_t\}_{t=0}^{T-1}$ where $Z_t = (1-\rho_Z)Z_{ss} + \rho_Z Z_{t-1}$ and $Z_0 = (1+\sigma_Z) Z_{ss}$ (a sketch of this construction is given below, before the call to the model's own method).
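The path itself is produced by the model class in the next cell; as a minimal sketch (assuming the AR(1) form stated above, not the class's actual implementation), such a path could be built like this:

import numpy as np

def ar1_Z_path(Z_ss, sigma_Z, rho_Z, T):
    """Illustrative construction of the productivity path described above."""
    path_Z = np.empty(T)
    path_Z[0] = (1 + sigma_Z)*Z_ss  # initial shock of relative size sigma_Z
    for t in range(1, T):
        path_Z[t] = (1 - rho_Z)*Z_ss + rho_Z*path_Z[t-1]  # AR(1) reversion towards Z_ss
    return path_Z

# example with the baseline parameters shown earlier (Z=1.0, Z_sigma=0.01, Z_rho=0.9, path_T=500):
# path_Z_sketch = ar1_Z_path(1.0, 0.01, 0.9, 500)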
path_Z = model.get_path_Z()
_____no_output_____
MIT
00. DynamicProgramming/05. General Equilibrium.ipynb
JMSundram/ConsumptionSavingNotebooks
**Step 2:** Apply relaxation algorithm.
if DO_TP_RELAX: t0 = time.time() # a. allocate path_r = np.repeat(model.par.r_ss,par.path_T) # use steady state as initial guess path_r_ = np.zeros(par.path_T) path_w = np.zeros(par.path_T) # b. setting nu = 0.90 # relaxation parameter max_iter = 5000 # maximum number of iterations # c. iterate it = 0 while True: # i. find wage for t in range(par.path_T): path_w[t] = model.implied_w(path_r[t],path_Z[t]) # ii. solve and simulate model.solve_household_path(path_r,path_w) model.simulate_household_path(model.sim.D) # iii. implied prices for t in range(par.path_T): path_r_[t] = model.implied_r(sim.path_Klag[t],path_Z[t]) # iv. difference max_abs_diff = np.max(np.abs(path_r-path_r_)) if it%10 == 0: print(f'{it:4d}: {max_abs_diff:.8f}') if max_abs_diff < 1e-8: break # v. update path_r = nu*path_r + (1-nu)*path_r_ # vi. increment it += 1 if it > max_iter: raise Exception('too many iterations') print(f'\n transtion path found in {elapsed(t0)}')
0: 0.00038764 10: 0.00013139 20: 0.00004581 30: 0.00001597 40: 0.00000557 50: 0.00000194 60: 0.00000068 70: 0.00000024 80: 0.00000008 90: 0.00000003 100: 0.00000001 transtion path found in 23.7 secs
MIT
00. DynamicProgramming/05. General Equilibrium.ipynb
JMSundram/ConsumptionSavingNotebooks
**Plot transition-paths:**
if DO_TP_RELAX: fig = plt.figure(figsize=(10,6)) ax = fig.add_subplot(2,2,1) ax.plot(np.arange(par.path_T),path_Z,'-o',ms=2) ax.set_title('technology, $Z_t$'); ax = fig.add_subplot(2,2,2) ax.plot(np.arange(par.path_T),sim.path_K,'-o',ms=2) ax.set_title('capital, $k_t$'); ax = fig.add_subplot(2,2,3) ax.plot(np.arange(par.path_T),path_r,'-o',ms=2) ax.set_title('interest rate, $r_t$'); ax = fig.add_subplot(2,2,4) ax.plot(np.arange(par.path_T),path_w,'-o',ms=2) ax.set_title('wage, $w_t$') fig.tight_layout() fig.savefig('figs/transition_path.pdf')
_____no_output_____
MIT
00. DynamicProgramming/05. General Equilibrium.ipynb
JMSundram/ConsumptionSavingNotebooks
**Remember:**
if DO_TP_RELAX: path_Z_relax = path_Z path_K_relax = sim.path_K path_r_relax = path_r path_w_relax = path_w
_____no_output_____
MIT
00. DynamicProgramming/05. General Equilibrium.ipynb
JMSundram/ConsumptionSavingNotebooks
Find impulse-responses using sequence-space method

**Paper:** Auclert, A., Bardóczy, B., Rognlie, M., and Straub, L. (2020). *Using the Sequence-Space Jacobian to Solve and Estimate Heterogeneous-Agent Models*.

**Original code:** [shade-econ](https://github.com/shade-econ/sequence-jacobian/sequence-space-jacobian)

**This code:** Illustrates the sequence-space method. The original paper shows how to do it computationally efficiently and for a general class of models.

**Step 1:** Compute the Jacobian for the household block around the stationary equilibrium
def jac(model,price,dprice=1e-4,do_print=True): t0_all = time.time() if do_print: print(f'price is {price}') par = model.par sol = model.sol sim = model.sim # a. step 1: solve backwards t0 = time.time() path_r = np.repeat(par.r_ss,par.path_T) path_w = np.repeat(par.w_ss,par.path_T) if price == 'r': path_r[-1] += dprice elif price == 'w': path_w[-1] += dprice model.solve_household_path(path_r,path_w,do_print=False) if do_print: print(f'solved backwards in {elapsed(t0)}') # b. step 2: derivatives t0 = time.time() diff_Ds = np.zeros((par.path_T,*sim.D.shape)) diff_as = np.zeros(par.path_T) diff_cs = np.zeros(par.path_T) for s in range(par.path_T): t_ =(par.path_T-1)-s simulate_forwards(sim.D,sol.path_i[t_],sol.path_w[t_],par.e_trans.T,diff_Ds[s]) diff_Ds[s] = (diff_Ds[s]-sim.D)/dprice diff_as[s] = (np.sum(sol.path_a[t_]*sim.D)-np.sum(sol.a*sim.D))/dprice diff_cs[s] = (np.sum(sol.path_c[t_]*sim.D)-np.sum(sol.c*sim.D))/dprice if do_print: print(f'derivatives calculated in {elapsed(t0)}') # c. step 3: expectation factors t0 = time.time() # demeaning improves numerical stability def demean(x): return x - x.sum()/x.size exp_as = np.zeros((par.path_T-1,*sol.a.shape)) exp_as[0] = demean(sol.a) exp_cs = np.zeros((par.path_T-1,*sol.c.shape)) exp_cs[0] = demean(sol.c) for t in range(1,par.path_T-1): simulate_forwards_transpose(exp_as[t-1],sol.i,sol.w,par.e_trans,exp_as[t]) exp_as[t] = demean(exp_as[t]) simulate_forwards_transpose(exp_cs[t-1],sol.i,sol.w,par.e_trans,exp_cs[t]) exp_cs[t] = demean(exp_cs[t]) if do_print: print(f'expecation factors calculated in {elapsed(t0)}') # d. step 4: F t0 = time.time() Fa = np.zeros((par.path_T,par.path_T)) Fa[0,:] = diff_as Fc = np.zeros((par.path_T,par.path_T)) Fc[0,:] = diff_cs Fa[1:, :] = exp_as.reshape((par.path_T-1, -1)) @ diff_Ds.reshape((par.path_T, -1)).T Fc[1:, :] = exp_cs.reshape((par.path_T-1, -1)) @ diff_Ds.reshape((par.path_T, -1)).T if do_print: print(f'f calculated in {elapsed(t0)}') t0 = time.time() # e. step 5: J Ja = Fa.copy() for t in range(1, Ja.shape[1]): Ja[1:, t] += Ja[:-1, t - 1] Jc = Fc.copy() for t in range(1, Jc.shape[1]): Jc[1:, t] += Jc[:-1, t - 1] if do_print: print(f'J calculated in {elapsed(t0)}') # f. save setattr(model.sol,f'jac_curlyK_{price}',Ja) setattr(model.sol,f'jac_C_{price}',Jc) if do_print: print(f'full Jacobian calculated in {elapsed(t0_all)}\n') jac(model,'r') jac(model,'w')
price is r solved backwards in 0.2 secs derivatives calculated in 2.0 secs expecation factors calculated in 0.8 secs f calculated in 0.1 secs J calculated in 0.0 secs full Jacobian calculated in 3.1 secs price is w solved backwards in 0.1 secs derivatives calculated in 0.2 secs expecation factors calculated in 0.1 secs f calculated in 0.1 secs J calculated in 0.0 secs full Jacobian calculated in 0.5 secs
MIT
00. DynamicProgramming/05. General Equilibrium.ipynb
JMSundram/ConsumptionSavingNotebooks
**Inspect Jacobians:**
fig = plt.figure(figsize=(12,8)) T_fig = 200 # curlyK_r ax = fig.add_subplot(2,2,1) for s in [0,25,50,75,100]: ax.plot(np.arange(T_fig),sol.jac_curlyK_r[s,:T_fig],'-o',ms=2,label=f'$s={s}$') ax.legend(frameon=True) ax.set_title(r'$\mathcal{J}^{\mathcal{K},r}$') ax.set_xlim([0,T_fig]) # curlyK_w ax = fig.add_subplot(2,2,2) for s in [0,25,50,75,100]: ax.plot(np.arange(T_fig),sol.jac_curlyK_w[s,:T_fig],'-o',ms=2) ax.set_title(r'$\mathcal{J}^{\mathcal{K},w}$') ax.set_xlim([0,T_fig]) # C_r ax = fig.add_subplot(2,2,3) for s in [0,25,50,75,100]: ax.plot(np.arange(T_fig),sol.jac_C_r[s,:T_fig],'-o',ms=2,label=f'$s={s}$') ax.legend(frameon=True) ax.set_title(r'$\mathcal{J}^{C,r}$') ax.set_xlim([0,T_fig]) # curlyK_w ax = fig.add_subplot(2,2,4) for s in [0,25,50,75,100]: ax.plot(np.arange(T_fig),sol.jac_C_w[s,:T_fig],'-o',ms=2) ax.set_title(r'$\mathcal{J}^{C,w}$') ax.set_xlim([0,T_fig]) fig.tight_layout() fig.savefig('figs/jacobians.pdf')
_____no_output_____
MIT
00. DynamicProgramming/05. General Equilibrium.ipynb
JMSundram/ConsumptionSavingNotebooks