Terminology (from [this post](https://towardsdatascience.com/scale-standardize-or-normalize-with-scikit-learn-6ccc7d176a02)):

* **Scale** generally means to change the range of the values. The shape of the distribution doesn't change. Think about how a scale model of a building has the same proportions as the original, just smaller. That's why we say it is drawn to scale. The range is often set at 0 to 1.
* **Standardize** generally means changing the values so that the distribution's standard deviation from the mean equals one. It outputs something very close to a normal distribution. Scaling is often implied.
* **Normalize** can be used to mean either of the above things (and more!). I suggest you avoid the term normalize, because it has many definitions and is prone to creating confusion.

Via [Machine Learning Mastery](https://machinelearningmastery.com/standardscaler-and-minmaxscaler-transforms-in-python/):

* If the distribution of the quantity is normal, then it should be standardized; otherwise, the data should be normalized.
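To make the distinction concrete, here is a minimal sketch (assuming scikit-learn is installed) contrasting the two: scaling squeezes the same shape into a new range, while standardizing re-centers and re-spreads it.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

values = np.array([[1.0], [2.0], [3.0], [10.0]])  # column vector, as sklearn expects

scaled = MinMaxScaler().fit_transform(values)          # range becomes [0, 1]
standardized = StandardScaler().fit_transform(values)  # mean 0, std 1

print("Scaled:\n", scaled.ravel())
print("Standardized:\n", standardized.ravel())
print("Standardized mean/std:", standardized.mean().round(3), standardized.std().round(3))
```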
# (Assumes an earlier cell imported pandas as pd, seaborn as sns,
# matplotlib.pyplot as plt, and the scalers from sklearn.preprocessing.)
house_prices = pd.read_csv("data/house-prices.csv")
house_prices["AgeWhenSold"] = house_prices["YrSold"] - house_prices["YearBuilt"]
house_prices.head()
_____no_output_____
MIT
Scaling and Normalization.ipynb
dbl007/python-cheat-sheet
Unscaled Housing Prices: Age When Sold
sns.displot(house_prices["AgeWhenSold"])
plt.xticks(rotation=90)
plt.show()
_____no_output_____
MIT
Scaling and Normalization.ipynb
dbl007/python-cheat-sheet
StandardScaler

Note that `DataFrame.var` and `DataFrame.std` default to using 1 degree of freedom (`ddof=1`), but `StandardScaler` uses numpy's versions, which default to `ddof=0`. That's why, when printing the variance and standard deviation of the original data frame, we specify `ddof=0`. `ddof=1` is known as Bessel's correction. (A quick numeric check of the difference follows this example.)
from sklearn.preprocessing import StandardScaler  # import shown here for completeness

df = pd.DataFrame({
    'col1': [1, 2, 3],
    'col2': [10, 20, 30],
    'col3': [0, 20, 22]
})

print("Original:\n")
print(df)
print("\nColumn means:\n")
print(df.mean())
print("\nOriginal variance:\n")
print(df.var(ddof=0))
print("\nOriginal standard deviations:\n")
print(df.std(ddof=0))

scaler = StandardScaler()
df1 = pd.DataFrame(scaler.fit_transform(df), columns=df.columns)

print("\nAfter scaling:\n")
print(df1)
print("\nColumn means:\n")
print(round(df1.mean(), 3))
print("\nVariance:\n")
print(df1.var(ddof=0))
print("\nStandard deviations:\n")
print(df1.std(ddof=0))

print("\nExample calculation for col2:")
print("z = (x - mean) / std")
print("z = (10 - 20) / 8.164966 = -1.224745")
Original:

   col1  col2  col3
0     1    10     0
1     2    20    20
2     3    30    22

Column means:

col1     2.0
col2    20.0
col3    14.0
dtype: float64

Original variance:

col1     0.666667
col2    66.666667
col3    98.666667
dtype: float64

Original standard deviations:

col1    0.816497
col2    8.164966
col3    9.933110
dtype: float64

After scaling:

       col1      col2      col3
0 -1.224745 -1.224745 -1.409428
1  0.000000  0.000000  0.604040
2  1.224745  1.224745  0.805387

Column means:

col1    0.0
col2    0.0
col3    0.0
dtype: float64

Variance:

col1    1.0
col2    1.0
col3    1.0
dtype: float64

Standard deviations:

col1    1.0
col2    1.0
col3    1.0
dtype: float64

Example calculation for col2:
z = (x - mean) / std
z = (10 - 20) / 8.164966 = -1.224745
MIT
Scaling and Normalization.ipynb
dbl007/python-cheat-sheet
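As promised, a quick numeric check of the `ddof` difference (a sketch; the values follow directly from the definitions): with n samples, `ddof=0` divides the summed squared deviations by n, while `ddof=1` (Bessel's correction) divides by n - 1.

```python
import numpy as np
import pandas as pd

vals = pd.Series([10, 20, 30])  # deviations from the mean 20: squared sum = 200

print(vals.var())               # pandas default ddof=1: 200 / (3 - 1) = 100.0
print(vals.var(ddof=0))         # ddof=0: 200 / 3 ~= 66.67 (what StandardScaler uses)
print(np.var(vals.to_numpy()))  # numpy default ddof=0: ~= 66.67
```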
Standard Scaler with Age When Sold
scaler = StandardScaler()
age_when_sold_scaled = scaler.fit_transform(house_prices["AgeWhenSold"].values.reshape(-1, 1))
sns.displot(age_when_sold_scaled)
plt.xticks(rotation=90)
plt.show()
_____no_output_____
MIT
Scaling and Normalization.ipynb
dbl007/python-cheat-sheet
Whiten

```
x_new = x / std(x)
```
from scipy.cluster.vq import whiten  # whiten comes from scipy

data = [5, 1, 3, 3, 2, 3, 8, 1, 2, 2, 3, 5]
print("Original:", data)
print("\nStd Dev:", np.std(data))

scaled = whiten(data)
print("\nScaled with Whiten:", scaled)

scaled_manual = data / np.std(data)
print("\nScaled Manually:", scaled_manual)
Original: [5, 1, 3, 3, 2, 3, 8, 1, 2, 2, 3, 5]

Std Dev: 1.9075871903765997

Scaled with Whiten: [2.62111217 0.52422243 1.5726673  1.5726673  1.04844487 1.5726673
 4.19377947 0.52422243 1.04844487 1.04844487 1.5726673  2.62111217]

Scaled Manually: [2.62111217 0.52422243 1.5726673  1.5726673  1.04844487 1.5726673
 4.19377947 0.52422243 1.04844487 1.04844487 1.5726673  2.62111217]
MIT
Scaling and Normalization.ipynb
dbl007/python-cheat-sheet
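One thing worth noticing about `whiten`, easy to verify with the same data: it rescales to unit standard deviation but does not center the data, so the mean is not zero, unlike `StandardScaler`.

```python
import numpy as np
from scipy.cluster.vq import whiten

data = [5, 1, 3, 3, 2, 3, 8, 1, 2, 2, 3, 5]
scaled = whiten(data)

print(np.std(scaled))   # 1.0 -- unit standard deviation
print(np.mean(scaled))  # not 0 -- whiten does not subtract the mean
```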
MinMaxScaler

Scales to a value between 0 and 1. More susceptible to influence by outliers.

Housing Prices Age When Sold
scaler = MinMaxScaler()
age_when_sold_scaled = scaler.fit_transform(house_prices["AgeWhenSold"].values.reshape(-1, 1))
sns.displot(age_when_sold_scaled)
plt.xticks(rotation=90)
plt.show()
_____no_output_____
MIT
Scaling and Normalization.ipynb
dbl007/python-cheat-sheet
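Under the hood the min-max transform is simple; a minimal manual version (a sketch) matches `MinMaxScaler` exactly:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

x = np.array([2.0, 5.0, 11.0]).reshape(-1, 1)

manual = (x - x.min()) / (x.max() - x.min())  # x_new = (x - min) / (max - min)
scaled = MinMaxScaler().fit_transform(x)

print(np.allclose(manual, scaled))  # True
```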
Robust Scaler
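`RobustScaler` isn't described above, so briefly: it centers on the median and scales by the interquartile range (IQR), which makes it far less sensitive to outliers than `MinMaxScaler`. A minimal manual version (a sketch, using sklearn's default 25/75 quantile range):

```python
import numpy as np
from sklearn.preprocessing import RobustScaler

x = np.array([1.0, 2.0, 3.0, 4.0, 100.0]).reshape(-1, 1)  # 100 is an outlier

q1, median, q3 = np.percentile(x, [25, 50, 75])
manual = (x - median) / (q3 - q1)            # x_new = (x - median) / IQR
scaled = RobustScaler().fit_transform(x)

print(np.allclose(manual, scaled))  # True
```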
scaler = RobustScaler()
age_when_sold_scaled = scaler.fit_transform(house_prices["AgeWhenSold"].values.reshape(-1, 1))
sns.displot(age_when_sold_scaled)
plt.xticks(rotation=90)
plt.show()
_____no_output_____
MIT
Scaling and Normalization.ipynb
dbl007/python-cheat-sheet
Parsing out Cosmos Data JSON
import pandas as pd
import numpy as np
import yaml
import os

os.listdir('../data')
_____no_output_____
MIT
notebooks/Exploring Json.ipynb
BillmanH/exoplanets
Loading local data

I ran a query in the Cosmos DB Explorer and then loaded the result into a JSON file.
# JSON is (almost) a subset of YAML, so yaml.safe_load can parse the exported file
res = yaml.safe_load(open('../data/example nodes.json'))
res[1]
_____no_output_____
MIT
notebooks/Exploring Json.ipynb
BillmanH/exoplanets
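Since the export is plain JSON, the standard library works just as well (a sketch, assuming the same file path):

```python
import json

with open('../data/example nodes.json') as f:
    res = json.load(f)

res[1]
```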
Now I just need to parse out the orbiting edges.
[{"source":i['objid'][0],"target":i['orbitsId'][0],"label":"orbits"} for i in res]
_____no_output_____
MIT
notebooks/Exploring Json.ipynb
BillmanH/exoplanets
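From here it is convenient to drop the edges into a DataFrame for inspection (a sketch; `res` is the node list loaded above and `pd` was imported earlier):

```python
edges = [{"source": i['objid'][0], "target": i['orbitsId'][0], "label": "orbits"}
         for i in res]
edges_df = pd.DataFrame(edges)  # one row per "orbits" edge
edges_df.head()
```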
Tutorial 6.3. Advanced Topics on Extreme Value Analysis

Description: Some advanced topics on Extreme Value Analysis are presented. Students are advised to complete the exercises.

Project: Structural Wind Engineering WS19-20, Chair of Structural Analysis @ TUM - R. Wüchner, M. Péntek

Author: anoop.kodakkal@tum.de, mate.pentek@tum.de

Created on: 24.12.2019

Last update: 08.01.2020

Contents:
1. Prediction of the extreme value of a time series - MaxMin Estimation
2. Lieblein's BLUE method

The worksheet is based on the knowledge base and scripts provided by [NIST](https://www.itl.nist.gov/div898/winds/overview.htm) as well as work available from [Christopher Howlett](https://github.com/chowlet5) from UWO.
# imports
import matplotlib.pyplot as plt
import numpy as np
from scipy.stats import gumbel_r as gumbel
from ipywidgets import interactive

# external files
from peakpressure import maxminest
from blue4pressure import *
import custom_utilities as c_utils
_____no_output_____
BSD-3-Clause
Ex06Advanced/3_ExtremeValueAnalysis/.ipynb_checkpoints/swe_ws1920_6_3_advanced_topics_on_extreme_value_analysis-checkpoint.ipynb
mpentek/StructuralWindEngineering
1. Prediction of the extreme value of a time series - MaxMin Estimation

This method is based on [the procedure (and sample Matlab file)](https://www.itl.nist.gov/div898/winds/peakest_files/peakest.htm) by Sadek, F. and Simiu, E. (2002), "Peak non-Gaussian wind effects for database-assisted low-rise building design," Journal of Engineering Mechanics, 128(5), 530-539. Please find it [here](https://www.itl.nist.gov/div898/winds/pdf_files/b02030.pdf).

The method uses
* a gamma distribution for estimating the peaks corresponding to the longer tail of the time series,
* a normal distribution for estimating the peaks corresponding to the shorter tail of the time series.

The distribution of the peaks is then estimated by using the standard translation processes approach.

Implementation details:

INPUT ARGUMENTS:
* Each row of *record* is a time series.
* The optional input argument *dur_ratio* allows peaks to be estimated for a duration that differs from the duration of the record itself: *dur_ratio* = [duration for peak estimation]/[duration of record]. (If unspecified, a value of 1 is used.)

OUTPUT ARGUMENTS:
* *max_est* gives the expected maximum values of each row of *record*
* *min_est* gives the expected minimum values of each row of *record*
* *max_std* gives the standard deviations of the maximum value for each row of *record*
* *min_std* gives the standard deviations of the minimum value for each row of *record*

Let us test the method for a given time series.
# using as sample input some pre-generated generalized extreme value random series
given_series = np.loadtxt('test_data_gevrnd.dat', skiprows=0, usecols=(0,))

# print results
dur_ratio = 1
result = maxminest(given_series, dur_ratio)
maxv = result[0][0][0]
minv = result[1][0][0]
print('estimation of maximum value ', np.around(maxv, 3))
print('estimation of minimum value ', np.around(minv, 3))

plt.figure(num=1, figsize=(8, 6))
x_series = np.arange(0.0, len(given_series), 1.0)
plt.plot(x_series, given_series)
plt.ylabel('Amplitude')
plt.xlabel('Time [s]')
plt.hlines([maxv, minv], x_series[0], x_series[-1])
plt.title('Predicted extrema')
plt.grid(True)
plt.show()
_____no_output_____
BSD-3-Clause
Ex06Advanced/3_ExtremeValueAnalysis/.ipynb_checkpoints/swe_ws1920_6_3_advanced_topics_on_extreme_value_analysis-checkpoint.ipynb
mpentek/StructuralWindEngineering
Let us plot the PDF and CDF.
[pdf_x, pdf_y] = c_utils.get_pdf(given_series)
ecdf_y = c_utils.get_ecdf(pdf_x, pdf_y)

plt.figure(num=2, figsize=(16, 6))
plt.subplot(1, 2, 1)
plt.plot(pdf_x, pdf_y)
plt.ylabel('PDF(Amplitude)')
plt.grid(True)
plt.subplot(1, 2, 2)
plt.plot(pdf_x, ecdf_y)
plt.vlines([maxv, minv], 0, 1)
plt.ylabel('CDF(Amplitude)')
plt.grid(True)
plt.show()
_____no_output_____
BSD-3-Clause
Ex06Advanced/3_ExtremeValueAnalysis/.ipynb_checkpoints/swe_ws1920_6_3_advanced_topics_on_extreme_value_analysis-checkpoint.ipynb
mpentek/StructuralWindEngineering
2. Lieblein's BLUE method

From a time series of pressure coefficients, *blue4pressure.py* estimates extremes of positive and negative pressures based on Lieblein's BLUE (Best Linear Unbiased Estimate) method applied to n epochs. Extremes are estimated for 1 and dur epochs for probabilities of non-exceedance P1 and P2 of the Gumbel distribution fitted to the epochal peaks. *n* must be an integer; *dur* need not be.

Written by Dat Duthinh 8_25_2015, 2_2_2016, 2_6_2017. For further reference check out the material provided by [NIST](https://www.itl.nist.gov/div898/winds/gumbel_blue/gumbblue.htm).

References:
1) Julius Lieblein, "Efficient Methods of Extreme-Value Methodology," NBSIR 74-602, Oct 1974, for n = 4:16.
2) Nicholas John Cook, "The designer's guide to wind loading of building structures," part 1, British Research Establishment, 1985, Table C3 pp. 321-323, for n = 17:24. Extension to n = 100 by Adam Pintar, Feb 12 2016.
3) INTERNATIONAL STANDARD, ISO 4354 (2009-06-01), 2nd edition, "Wind actions on structures," Annex D (informative) "Aerodynamic pressure and force coefficients," Geneva, Switzerland, p. 22.

Implementation details:

INPUT ARGUMENTS:
* *cp* = vector of time history of pressure coefficients
* *n* = number of epochs (integer) of cp data, 4 <= n <= 100
* *dur* = number of epochs for estimation of extremes; default dur = n; dur need not be an integer
* *P1, P2* = probabilities of non-exceedance of extremes in EV1 (Gumbel); P1 defaults to 0.80 (ISO) and P2 to 0.5704 (the mean) for the Gumbel distribution

OUTPUT ARGUMENTS (suffix *max* for positive peaks, *min* for negative peaks of the pressure coefficient):
* *p1_max* (*p1_min*) = extreme value of positive (negative) peaks with probability of non-exceedance P1 for 1 epoch
* *p2_max* (*p2_min*) = extreme value of positive (negative) peaks with probability of non-exceedance P2 for 1 epoch
* *p1_rmax* (*p1_rmin*) = extreme value of positive (negative) peaks with probability of non-exceedance P1 for dur epochs
* *p2_rmax* (*p2_rmin*) = extreme value of positive (negative) peaks with probability of non-exceedance P2 for dur epochs
* *cp_max* (*cp_min*) = vector of n positive (negative) epochal peaks
* *u_max, b_max* (*u_min, b_min*) = location and scale parameters of EV1 (Gumbel) for positive (negative) peaks
# n = number of epochs (integer) of cp data, 4 <= n <= 100
n = 4
# P1, P2 = probabilities of non-exceedance of extremes in EV1 (Gumbel)
P1 = 0.80
P2 = 0.5704  # this corresponds to the mean of the Gumbel distribution
# dur = number of epochs for estimation of extremes. Default dur = n
# dur need not be an integer
dur = 1

# Call function
result = blue4pressure(given_series, n, P1, P2, dur)
p1_max = result[0][0]
p2_max = result[1][0]
umax = result[4][0]    # location parameter
b_max = result[5][0]   # scale parameter
p1_min = result[7][0]
p2_min = result[8][0]
umin = result[11][0]   # location parameter
b_min = result[12][0]  # scale parameter

# print results
## maximum
print('estimation of maximum value with probability of non-exceedance of p1', np.around(p1_max, 3))
print('estimation of maximum value with probability of non-exceedance of p2', np.around(p2_max, 3))
## minimum
print('estimation of minimum value with probability of non-exceedance of p1', np.around(p1_min, 3))
print('estimation of minimum value with probability of non-exceedance of p2', np.around(p2_min, 3))
estimation of maximum value with probability of non-exceedance of p1 2.055
estimation of maximum value with probability of non-exceedance of p2 1.908
estimation of minimum value with probability of non-exceedance of p1 -0.582
estimation of minimum value with probability of non-exceedance of p2 -0.547
BSD-3-Clause
Ex06Advanced/3_ExtremeValueAnalysis/.ipynb_checkpoints/swe_ws1920_6_3_advanced_topics_on_extreme_value_analysis-checkpoint.ipynb
mpentek/StructuralWindEngineering
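To build intuition for what BLUE operates on — per-epoch peaks and a Gumbel distribution fitted to them — here is a minimal sketch. It is not the BLUE estimator itself (BLUE uses order-statistic weights); this just splits the record into epochs and uses `scipy`'s maximum-likelihood Gumbel fit.

```python
import numpy as np
from scipy.stats import gumbel_r

n = 4  # number of epochs, as above
epochs = np.array_split(given_series, n)   # split the record into n epochs
epoch_peaks = [np.max(e) for e in epochs]  # one peak per epoch
print("epochal peaks:", np.around(epoch_peaks, 3))

loc, scale = gumbel_r.fit(epoch_peaks)     # Gumbel location/scale from the peaks
print("Gumbel location u =", round(loc, 3), ", scale b =", round(scale, 3))

P1 = 0.80
print("value with non-exceedance probability P1:",
      round(gumbel_r.ppf(P1, loc, scale), 3))
```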
Let us plot the PDF and CDF for the maximum values.
max_pdf_x = np.linspace(1, 3, 100)
max_pdf_y = gumbel.pdf(max_pdf_x, umax, b_max)
max_ecdf_y = c_utils.get_ecdf(max_pdf_x, max_pdf_y)

plt.figure(num=3, figsize=(16, 6))
plt.subplot(1, 2, 1)
# PDF generated as a fitted curve using the Gumbel distribution
plt.plot(max_pdf_x, max_pdf_y, label='PDF from the fitted Gumbel')
plt.xlabel('Max values')
plt.ylabel('PDF(Amplitude)')
plt.title('PDF of Maxima')
plt.grid(True)
plt.legend()
plt.subplot(1, 2, 2)
plt.plot(max_pdf_x, max_ecdf_y)
plt.vlines([p1_max, p2_max], 0, 1)
plt.ylabel('CDF(Amplitude)')
plt.grid(True)
plt.show()
_____no_output_____
BSD-3-Clause
Ex06Advanced/3_ExtremeValueAnalysis/.ipynb_checkpoints/swe_ws1920_6_3_advanced_topics_on_extreme_value_analysis-checkpoint.ipynb
mpentek/StructuralWindEngineering
What's this TensorFlow business?

You've written a lot of code in this assignment to provide a whole host of neural network functionality. Dropout, Batch Norm, and 2D convolutions are some of the workhorses of deep learning in computer vision. You've also worked hard to make your code efficient and vectorized.

For the last part of this assignment, though, we're going to leave behind your beautiful codebase and instead migrate to one of two popular deep learning frameworks: in this instance, TensorFlow (or PyTorch, if you choose to work with that notebook).

What is it?

TensorFlow is a system for executing computational graphs over Tensor objects, with native support for performing backpropagation for its Variables. In it, we work with Tensors, which are n-dimensional arrays analogous to the numpy ndarray.

Why?

* Our code will now run on GPUs! Much faster training. Writing your own modules to run on GPUs is beyond the scope of this class, unfortunately.
* We want you to be ready to use one of these frameworks for your project so you can experiment more efficiently than if you were writing every feature you want to use by hand.
* We want you to stand on the shoulders of giants! TensorFlow and PyTorch are both excellent frameworks that will make your lives a lot easier, and now that you understand their guts, you are free to use them :)
* We want you to be exposed to the sort of deep learning code you might run into in academia or industry.

How will I learn TensorFlow?

TensorFlow has many excellent tutorials available, including those from [Google themselves](https://www.tensorflow.org/get_started/get_started). Otherwise, this notebook will walk you through much of what you need to do to train models in TensorFlow. See the end of the notebook for some links to helpful tutorials if you want to learn more or need further clarification on topics that aren't fully explained here.

**NOTE: This notebook is meant to teach you the latest version of TensorFlow, which as of this homework version is `2.2.0-rc3`. Most examples on the web today are still in 1.x, so be careful not to confuse the two when looking up documentation.**

Install TensorFlow 2.0 (ONLY IF YOU ARE WORKING LOCALLY)

1. Have the latest version of Anaconda installed on your machine.
2. Create a new conda environment starting from Python 3.7. In this setup example, we'll call it `tf_20_env`.
3. Run the command: `source activate tf_20_env`
4. Then pip install TF 2.0 as described here: https://www.tensorflow.org/install

Table of Contents

This notebook has 5 parts. We will walk through TensorFlow at **three different levels of abstraction**, which should help you better understand it and prepare you for working on your project.

1. Part I, Preparation: load the CIFAR-10 dataset.
2. Part II, Barebone TensorFlow: **Abstraction Level 1**, we will work directly with low-level TensorFlow graphs.
3. Part III, Keras Model API: **Abstraction Level 2**, we will use `tf.keras.Model` to define arbitrary neural network architectures.
4. Part IV, Keras Sequential + Functional API: **Abstraction Level 3**, we will use `tf.keras.Sequential` to define a linear feed-forward network very conveniently, and then explore the functional libraries for building unique and uncommon models that require more flexibility.
5. Part V, CIFAR-10 open-ended challenge: please implement your own network to get as high accuracy as possible on CIFAR-10. You can experiment with any layer, optimizer, hyperparameters or other advanced features.
We will discuss Keras in more detail later in the notebook. Here is a table of comparison:

| API | Flexibility | Convenience |
|---------------|-------------|-------------|
| Barebone | High | Low |
| `tf.keras.Model` | High | Medium |
| `tf.keras.Sequential` | Low | High |

Part I: Preparation

First, we load the CIFAR-10 dataset. This might take a few minutes to download the first time you run it, but after that the files should be cached on disk and loading should be faster.

In previous parts of the assignment we used CS231N-specific code to download and read the CIFAR-10 dataset; however, the `tf.keras.datasets` package in TensorFlow provides prebuilt utility functions for loading many common datasets.

For the purposes of this assignment we will still write our own code to preprocess the data and iterate through it in minibatches. The `tf.data` package in TensorFlow provides tools for automating this process, but working with this package adds extra complication and is beyond the scope of this notebook. However, using `tf.data` can be much more efficient than the simple approach used in this notebook, so you should consider using it for your project (see the sketch below).
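As a taste of what that would look like — a minimal sketch, not used in the rest of the notebook, assuming the `X_train`, `y_train` arrays produced by the next cell — the `Dataset` class below could be replaced with something like:

```python
import tensorflow as tf

# X_train, y_train come from the load_cifar10() call in the next cell.
train_tfdata = (tf.data.Dataset
                .from_tensor_slices((X_train, y_train))
                .shuffle(buffer_size=10000)  # reshuffles each epoch by default
                .batch(64)
                .prefetch(tf.data.experimental.AUTOTUNE))  # overlap prep and training

for x_batch, y_batch in train_tfdata.take(2):
    print(x_batch.shape, y_batch.shape)
```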
import os
import tensorflow as tf
import numpy as np
import math
import timeit
import matplotlib.pyplot as plt
%matplotlib inline

def load_cifar10(num_training=49000, num_validation=1000, num_test=10000):
    """
    Fetch the CIFAR-10 dataset from the web and perform preprocessing to prepare
    it for the two-layer neural net classifier. These are the same steps as
    we used for the SVM, but condensed to a single function.
    """
    # Load the raw CIFAR-10 dataset and use appropriate data types and shapes
    cifar10 = tf.keras.datasets.cifar10.load_data()
    (X_train, y_train), (X_test, y_test) = cifar10
    X_train = np.asarray(X_train, dtype=np.float32)
    y_train = np.asarray(y_train, dtype=np.int32).flatten()
    X_test = np.asarray(X_test, dtype=np.float32)
    y_test = np.asarray(y_test, dtype=np.int32).flatten()

    # Subsample the data
    mask = range(num_training, num_training + num_validation)
    X_val = X_train[mask]
    y_val = y_train[mask]
    mask = range(num_training)
    X_train = X_train[mask]
    y_train = y_train[mask]
    mask = range(num_test)
    X_test = X_test[mask]
    y_test = y_test[mask]

    # Normalize the data: subtract the mean pixel and divide by std
    mean_pixel = X_train.mean(axis=(0, 1, 2), keepdims=True)
    std_pixel = X_train.std(axis=(0, 1, 2), keepdims=True)
    X_train = (X_train - mean_pixel) / std_pixel
    X_val = (X_val - mean_pixel) / std_pixel
    X_test = (X_test - mean_pixel) / std_pixel

    return X_train, y_train, X_val, y_val, X_test, y_test

# If there are errors with SSL downloading involving self-signed certificates,
# it may be that your Python version was recently installed on the current machine.
# See: https://github.com/tensorflow/tensorflow/issues/10779
# To fix, run the command: /Applications/Python\ 3.7/Install\ Certificates.command
#   ...replacing paths as necessary.

# Invoke the above function to get our data.
NHW = (0, 1, 2)
X_train, y_train, X_val, y_val, X_test, y_test = load_cifar10()
print('Train data shape: ', X_train.shape)
print('Train labels shape: ', y_train.shape, y_train.dtype)
print('Validation data shape: ', X_val.shape)
print('Validation labels shape: ', y_val.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)

class Dataset(object):
    def __init__(self, X, y, batch_size, shuffle=False):
        """
        Construct a Dataset object to iterate over data X and labels y

        Inputs:
        - X: Numpy array of data, of any shape
        - y: Numpy array of labels, of any shape but with y.shape[0] == X.shape[0]
        - batch_size: Integer giving number of elements per minibatch
        - shuffle: (optional) Boolean, whether to shuffle the data on each epoch
        """
        assert X.shape[0] == y.shape[0], 'Got different numbers of data and labels'
        self.X, self.y = X, y
        self.batch_size, self.shuffle = batch_size, shuffle

    def __iter__(self):
        N, B = self.X.shape[0], self.batch_size
        idxs = np.arange(N)
        if self.shuffle:
            np.random.shuffle(idxs)
        return iter((self.X[i:i+B], self.y[i:i+B]) for i in range(0, N, B))

train_dset = Dataset(X_train, y_train, batch_size=64, shuffle=True)
val_dset = Dataset(X_val, y_val, batch_size=64, shuffle=False)
test_dset = Dataset(X_test, y_test, batch_size=64)

# We can iterate through a dataset like this:
for t, (x, y) in enumerate(train_dset):
    print(t, x.shape, y.shape)
    if t > 5:
        break
0 (64, 32, 32, 3) (64,)
1 (64, 32, 32, 3) (64,)
2 (64, 32, 32, 3) (64,)
3 (64, 32, 32, 3) (64,)
4 (64, 32, 32, 3) (64,)
5 (64, 32, 32, 3) (64,)
6 (64, 32, 32, 3) (64,)
MIT
assignment2/TensorFlow.ipynb
LOTEAT/CS231n
You can optionally **use GPU by setting the flag to True below**.

Colab Users

If you are using Colab, you need to manually switch to a GPU device. You can do this by clicking `Runtime -> Change runtime type` and selecting `GPU` under `Hardware Accelerator`. Note that you have to rerun the cells from the top since the kernel gets restarted upon switching runtimes.
# Set up some global variables
USE_GPU = True

if USE_GPU:
    device = '/device:GPU:0'
else:
    device = '/cpu:0'

# Constant to control how often we print when training models
print_every = 100

print('Using device: ', device)
Using device: /device:GPU:0
MIT
assignment2/TensorFlow.ipynb
LOTEAT/CS231n
Part II: Barebones TensorFlow

TensorFlow ships with various high-level APIs which make it very convenient to define and train neural networks; we will cover some of these constructs in Part III and Part IV of this notebook. In this section we will start by building a model with basic TensorFlow constructs to help you better understand what's going on under the hood of the higher-level APIs.

**"Barebones TensorFlow" is important to understanding the building blocks of TensorFlow, but much of it involves concepts from TensorFlow 1.x.** We will be working with legacy modules such as `tf.Variable`. Therefore, please read and understand the differences between legacy (1.x) TF and the new (2.0) TF.

Historical background on TensorFlow 1.x

TensorFlow 1.x is primarily a framework for working with **static computational graphs**. Nodes in the computational graph are Tensors which will hold n-dimensional arrays when the graph is run; edges in the graph represent functions that will operate on Tensors when the graph is run to actually perform useful computation.

Before TensorFlow 2.0, we had to configure the graph in two phases. There are plenty of tutorials online that explain this two-step process. The process generally looks like the following for TF 1.x:

1. **Build a computational graph that describes the computation that you want to perform**. This stage doesn't actually perform any computation; it just builds up a symbolic representation of your computation. This stage will typically define one or more `placeholder` objects that represent inputs to the computational graph.
2. **Run the computational graph many times.** Each time the graph is run (e.g. for one gradient descent step) you will specify which parts of the graph you want to compute, and pass a `feed_dict` dictionary that will give concrete values to any `placeholder`s in the graph.

The new paradigm in TensorFlow 2.0

Now, with TensorFlow 2.0, we can simply adopt a functional form that is more Pythonic and similar in spirit to PyTorch and direct Numpy operation, instead of the 2-step paradigm with computation graphs, making it (among other things) easier to debug TF code. You can read more details at https://www.tensorflow.org/guide/eager.

The main difference between the TF 1.x and 2.0 approach is that the 2.0 approach doesn't make use of `tf.Session`, `tf.run`, `placeholder`, or `feed_dict`. To get more details of what's different between the two versions and how to convert between the two, check out the official migration guide: https://www.tensorflow.org/alpha/guide/migration_guide

In the rest of this notebook we'll focus on this new, simpler approach.
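To make the contrast concrete, here is a minimal sketch of eager execution: operations run immediately and return concrete values, with no session or feed_dict involved.

```python
import tensorflow as tf

# TF 2.0 eager execution: this multiplies immediately and returns a concrete Tensor.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[1.0], [1.0]])
c = tf.matmul(a, b)

print(c)          # tf.Tensor([[3.] [7.]], shape=(2, 1), dtype=float32)
print(c.numpy())  # convert to a numpy array directly

# In TF 1.x, the same computation would have required building a graph with
# placeholders and then evaluating it inside a tf.Session with a feed_dict.
```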
So, we use a "flatten" operation to collapse the `H x W x C` values per representation into a single long vector. Notice the `tf.reshape` call has the target shape as `(N, -1)`, meaning it will reshape/keep the first dimension to be N, and then infer as necessary what the second dimension is in the output, so we can collapse the remaining dimensions from the input properly.**NOTE**: TensorFlow and PyTorch differ on the default Tensor layout; TensorFlow uses N x H x W x C but PyTorch uses N x C x H x W.
def flatten(x):
    """
    Input:
    - TensorFlow Tensor of shape (N, D1, ..., DM)

    Output:
    - TensorFlow Tensor of shape (N, D1 * ... * DM)
    """
    N = tf.shape(x)[0]
    return tf.reshape(x, (N, -1))

def test_flatten():
    # Construct concrete values of the input data x using numpy
    x_np = np.arange(24).reshape((2, 3, 4))
    print('x_np:\n', x_np, '\n')
    # Compute a concrete output value.
    x_flat_np = flatten(x_np)
    print('x_flat_np:\n', x_flat_np, '\n')

test_flatten()
x_np:
 [[[ 0  1  2  3]
  [ 4  5  6  7]
  [ 8  9 10 11]]

 [[12 13 14 15]
  [16 17 18 19]
  [20 21 22 23]]]

x_flat_np:
 tf.Tensor(
[[ 0  1  2  3  4  5  6  7  8  9 10 11]
 [12 13 14 15 16 17 18 19 20 21 22 23]], shape=(2, 12), dtype=int32)
MIT
assignment2/TensorFlow.ipynb
LOTEAT/CS231n
Barebones TensorFlow: Define a Two-Layer Network

We will now implement our first neural network with TensorFlow: a fully-connected ReLU network with two hidden layers and no biases on the CIFAR10 dataset. For now we will use only low-level TensorFlow operators to define the network; later we will see how to use the higher-level abstractions provided by `tf.keras` to simplify the process.

We will define the forward pass of the network in the function `two_layer_fc`; this will accept TensorFlow Tensors for the inputs and weights of the network, and return a TensorFlow Tensor for the scores. After defining the network architecture in the `two_layer_fc` function, we will test the implementation by checking the shape of the output.

**It's important that you read and understand this implementation.**
def two_layer_fc(x, params):
    """
    A fully-connected neural network; the architecture is:
    fully-connected layer -> ReLU -> fully connected layer.
    Note that we only need to define the forward pass here; TensorFlow will take
    care of computing the gradients for us.

    The input to the network will be a minibatch of data, of shape
    (N, d1, ..., dM) where d1 * ... * dM = D. The hidden layer will have H units,
    and the output layer will produce scores for C classes.

    Inputs:
    - x: A TensorFlow Tensor of shape (N, d1, ..., dM) giving a minibatch of
      input data.
    - params: A list [w1, w2] of TensorFlow Tensors giving weights for the
      network, where w1 has shape (D, H) and w2 has shape (H, C).

    Returns:
    - scores: A TensorFlow Tensor of shape (N, C) giving classification scores
      for the input data x.
    """
    w1, w2 = params                   # Unpack the parameters
    x = flatten(x)                    # Flatten the input; now x has shape (N, D)
    h = tf.nn.relu(tf.matmul(x, w1))  # Hidden layer: h has shape (N, H)
    scores = tf.matmul(h, w2)         # Compute scores of shape (N, C)
    return scores

def two_layer_fc_test():
    hidden_layer_size = 42

    # Scoping our TF operations under a tf.device context manager
    # lets us tell TensorFlow where we want these Tensors to be
    # multiplied and/or operated on, e.g. on a CPU or a GPU.
    with tf.device(device):
        x = tf.zeros((64, 32, 32, 3))
        w1 = tf.zeros((32 * 32 * 3, hidden_layer_size))
        w2 = tf.zeros((hidden_layer_size, 10))

        # Call our two_layer_fc function for the forward pass of the network.
        scores = two_layer_fc(x, [w1, w2])

    print(scores.shape)

two_layer_fc_test()
(64, 10)
MIT
assignment2/TensorFlow.ipynb
LOTEAT/CS231n
Barebones TensorFlow: Three-Layer ConvNet

Here you will complete the implementation of the function `three_layer_convnet`, which will perform the forward pass of a three-layer convolutional network. The network should have the following architecture:

1. A convolutional layer (with bias) with `channel_1` filters, each with shape `KW1 x KH1`, and zero-padding of two
2. ReLU nonlinearity
3. A convolutional layer (with bias) with `channel_2` filters, each with shape `KW2 x KH2`, and zero-padding of one
4. ReLU nonlinearity
5. Fully-connected layer with bias, producing scores for `C` classes.

**HINT**: For convolutions: https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/nn/conv2d; be careful with padding!

**HINT**: For biases: https://www.tensorflow.org/performance/xla/broadcasting
def three_layer_convnet(x, params):
    """
    A three-layer convolutional network with the architecture described above.

    Inputs:
    - x: A TensorFlow Tensor of shape (N, H, W, 3) giving a minibatch of images
    - params: A list of TensorFlow Tensors giving the weights and biases for the
      network; should contain the following:
      - conv_w1: TensorFlow Tensor of shape (KH1, KW1, 3, channel_1) giving
        weights for the first convolutional layer.
      - conv_b1: TensorFlow Tensor of shape (channel_1,) giving biases for the
        first convolutional layer.
      - conv_w2: TensorFlow Tensor of shape (KH2, KW2, channel_1, channel_2)
        giving weights for the second convolutional layer
      - conv_b2: TensorFlow Tensor of shape (channel_2,) giving biases for the
        second convolutional layer.
      - fc_w: TensorFlow Tensor giving weights for the fully-connected layer.
        Can you figure out what the shape should be?
      - fc_b: TensorFlow Tensor giving biases for the fully-connected layer.
        Can you figure out what the shape should be?
    """
    conv_w1, conv_b1, conv_w2, conv_b2, fc_w, fc_b = params
    scores = None
    ############################################################################
    # TODO: Implement the forward pass for the three-layer ConvNet.            #
    ############################################################################
    # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****

    # Zero-pad by 2, then 5x5 conv with 'VALID' padding
    paddings = tf.constant([[0, 0], [2, 2], [2, 2], [0, 0]])
    x = tf.pad(x, paddings, 'CONSTANT')
    conv1 = tf.nn.conv2d(x, conv_w1, strides=[1, 1, 1, 1], padding='VALID') + conv_b1
    relu1 = tf.nn.relu(conv1)

    # Zero-pad by 1, then 3x3 conv; note the ReLU output is what gets fed forward
    # (the original draft padded conv1 here, accidentally skipping the first ReLU)
    paddings = tf.constant([[0, 0], [1, 1], [1, 1], [0, 0]])
    relu1 = tf.pad(relu1, paddings, 'CONSTANT')
    conv2 = tf.nn.conv2d(relu1, conv_w2, strides=[1, 1, 1, 1], padding='VALID') + conv_b2
    relu2 = tf.nn.relu(conv2)

    # Flatten and apply the fully-connected layer
    relu2 = flatten(relu2)
    scores = tf.matmul(relu2, fc_w) + fc_b

    # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
    ############################################################################
    #                              END OF YOUR CODE                            #
    ############################################################################
    return scores
_____no_output_____
MIT
assignment2/TensorFlow.ipynb
LOTEAT/CS231n
After defining the forward pass of the three-layer ConvNet above, run the following cell to test your implementation. Like the two-layer network, we run the graph on a batch of zeros just to make sure the function doesn't crash and produces outputs of the correct shape. When you run this function, `scores_np` should have shape `(64, 10)`.
def three_layer_convnet_test():
    with tf.device(device):
        x = tf.zeros((64, 32, 32, 3))
        conv_w1 = tf.zeros((5, 5, 3, 6))
        conv_b1 = tf.zeros((6,))
        conv_w2 = tf.zeros((3, 3, 6, 9))
        conv_b2 = tf.zeros((9,))
        fc_w = tf.zeros((32 * 32 * 9, 10))
        fc_b = tf.zeros((10,))
        params = [conv_w1, conv_b1, conv_w2, conv_b2, fc_w, fc_b]
        scores = three_layer_convnet(x, params)

    # Inputs to convolutional layers are 4-dimensional arrays with shape
    # [batch_size, height, width, channels]
    print('scores_np has shape: ', scores.shape)

three_layer_convnet_test()
scores_np has shape: (64, 10)
MIT
assignment2/TensorFlow.ipynb
LOTEAT/CS231n
Barebones TensorFlow: Training Step

We now define the `training_step` function, which performs a single training step. This will take three basic steps:

1. Compute the loss
2. Compute the gradient of the loss with respect to all network weights
3. Make a weight update step using (stochastic) gradient descent.

We need to use a few new TensorFlow functions to do all of this:

- For computing the cross-entropy loss we'll use `tf.nn.sparse_softmax_cross_entropy_with_logits`: https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/nn/sparse_softmax_cross_entropy_with_logits
- For averaging the loss across a minibatch of data we'll use `tf.reduce_mean`: https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/reduce_mean
- For computing gradients of the loss with respect to the weights we'll use `tf.GradientTape` (useful for Eager execution): https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/GradientTape
- We'll mutate the weight values stored in a TensorFlow Tensor using `tf.assign_sub` ("sub" is for subtraction): https://www.tensorflow.org/api_docs/python/tf/assign_sub
def training_step(model_fn, x, y, params, learning_rate):
    with tf.GradientTape() as tape:
        scores = model_fn(x, params)  # Forward pass of the model
        loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=scores)
        total_loss = tf.reduce_mean(loss)
        grad_params = tape.gradient(total_loss, params)

        # Make a vanilla gradient descent step on all of the model parameters
        # Manually update the weights using assign_sub()
        for w, grad_w in zip(params, grad_params):
            w.assign_sub(learning_rate * grad_w)

        return total_loss

def train_part2(model_fn, init_fn, learning_rate):
    """
    Train a model on CIFAR-10.

    Inputs:
    - model_fn: A Python function that performs the forward pass of the model
      using TensorFlow; it should have the following signature:
      scores = model_fn(x, params) where x is a TensorFlow Tensor giving a
      minibatch of image data, params is a list of TensorFlow Tensors holding
      the model weights, and scores is a TensorFlow Tensor of shape (N, C)
      giving scores for all elements of x.
    - init_fn: A Python function that initializes the parameters of the model.
      It should have the signature params = init_fn() where params is a list
      of TensorFlow Tensors holding the (randomly initialized) weights of the
      model.
    - learning_rate: Python float giving the learning rate to use for SGD.
    """
    params = init_fn()  # Initialize the model parameters

    for t, (x_np, y_np) in enumerate(train_dset):
        # Run the graph on a batch of training data.
        loss = training_step(model_fn, x_np, y_np, params, learning_rate)

        # Periodically print the loss and check accuracy on the val set.
        if t % print_every == 0:
            print('Iteration %d, loss = %.4f' % (t, loss))
            check_accuracy(val_dset, x_np, model_fn, params)

def check_accuracy(dset, x, model_fn, params):
    """
    Check accuracy on a classification model, e.g. for validation.

    Inputs:
    - dset: A Dataset object against which to check accuracy
    - x: A TensorFlow placeholder Tensor where input images should be fed
    - model_fn: the Model we will be calling to make predictions on x
    - params: parameters for the model_fn to work with

    Returns: Nothing, but prints the accuracy of the model
    """
    num_correct, num_samples = 0, 0
    for x_batch, y_batch in dset:
        scores_np = model_fn(x_batch, params).numpy()
        y_pred = scores_np.argmax(axis=1)
        num_samples += x_batch.shape[0]
        num_correct += (y_pred == y_batch).sum()
    acc = float(num_correct) / num_samples
    print('Got %d / %d correct (%.2f%%)' % (num_correct, num_samples, 100 * acc))
_____no_output_____
MIT
assignment2/TensorFlow.ipynb
LOTEAT/CS231n
Barebones TensorFlow: Initialization

We'll use the following utility method to initialize the weight matrices for our models using Kaiming's normalization method.

[1] He et al., *Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification*, ICCV 2015, https://arxiv.org/abs/1502.01852
def create_matrix_with_kaiming_normal(shape):
    # fan_in is the number of inputs feeding each output unit:
    # for a (D, H) matrix it is D; for a conv kernel (KH, KW, C_in, C_out)
    # it is KH * KW * C_in. (Shapes other than 2-D or 4-D are not handled.)
    if len(shape) == 2:
        fan_in, fan_out = shape[0], shape[1]
    elif len(shape) == 4:
        fan_in, fan_out = np.prod(shape[:3]), shape[3]
    return tf.keras.backend.random_normal(shape) * np.sqrt(2.0 / fan_in)
_____no_output_____
MIT
assignment2/TensorFlow.ipynb
LOTEAT/CS231n
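As a quick sanity check (a sketch), the sample standard deviation of a large initialized matrix should land near `sqrt(2 / fan_in)`:

```python
w = create_matrix_with_kaiming_normal((3 * 32 * 32, 4000))
print(float(tf.math.reduce_std(w)))  # empirical std of the entries
print(np.sqrt(2.0 / (3 * 32 * 32)))  # target: sqrt(2 / fan_in) ~= 0.0255
```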
Barebones TensorFlow: Train a Two-Layer Network

We are finally ready to use all of the pieces defined above to train a two-layer fully-connected network on CIFAR-10. We just need to define a function to initialize the weights of the model, and call `train_part2`.

Defining the weights of the network introduces another important piece of TensorFlow API: `tf.Variable`. A TensorFlow Variable is a Tensor whose value is stored in the graph and persists across runs of the computational graph; however, unlike constants defined with `tf.zeros` or `tf.random_normal`, the values of a Variable can be mutated as the graph runs; these mutations will persist across graph runs. Learnable parameters of the network are usually stored in Variables.

You don't need to tune any hyperparameters, but you should achieve validation accuracies above 40% after one epoch of training.
def two_layer_fc_init():
    """
    Initialize the weights of a two-layer network, for use with the
    two_layer_network function defined above.
    You can use the `create_matrix_with_kaiming_normal` helper!

    Inputs: None

    Returns: A list of:
    - w1: TensorFlow tf.Variable giving the weights for the first layer
    - w2: TensorFlow tf.Variable giving the weights for the second layer
    """
    hidden_layer_size = 4000
    w1 = tf.Variable(create_matrix_with_kaiming_normal((3 * 32 * 32, hidden_layer_size)))
    w2 = tf.Variable(create_matrix_with_kaiming_normal((hidden_layer_size, 10)))
    return [w1, w2]

learning_rate = 1e-2
train_part2(two_layer_fc, two_layer_fc_init, learning_rate)
Iteration 0, loss = 3.0406
Got 146 / 1000 correct (14.60%)
Iteration 100, loss = 2.0258
Got 377 / 1000 correct (37.70%)
Iteration 200, loss = 1.4799
Got 399 / 1000 correct (39.90%)
Iteration 300, loss = 1.8364
Got 361 / 1000 correct (36.10%)
Iteration 400, loss = 1.8527
Got 415 / 1000 correct (41.50%)
Iteration 500, loss = 1.8608
Got 434 / 1000 correct (43.40%)
Iteration 600, loss = 1.8009
Got 432 / 1000 correct (43.20%)
Iteration 700, loss = 1.9825
Got 445 / 1000 correct (44.50%)
MIT
assignment2/TensorFlow.ipynb
LOTEAT/CS231n
Barebones TensorFlow: Train a Three-Layer ConvNet

We will now use TensorFlow to train a three-layer ConvNet on CIFAR-10. You need to implement the `three_layer_convnet_init` function. Recall that the architecture of the network is:

1. Convolutional layer (with bias) with 32 5x5 filters, with zero-padding 2
2. ReLU
3. Convolutional layer (with bias) with 16 3x3 filters, with zero-padding 1
4. ReLU
5. Fully-connected layer (with bias) to compute scores for 10 classes

You don't need to do any hyperparameter tuning, but you should see validation accuracies above 43% after one epoch of training.
def three_layer_convnet_init():
    """
    Initialize the weights of a Three-Layer ConvNet, for use with the
    three_layer_convnet function defined above.
    You can use the `create_matrix_with_kaiming_normal` helper!

    Inputs: None

    Returns a list containing:
    - conv_w1: TensorFlow tf.Variable giving weights for the first conv layer
    - conv_b1: TensorFlow tf.Variable giving biases for the first conv layer
    - conv_w2: TensorFlow tf.Variable giving weights for the second conv layer
    - conv_b2: TensorFlow tf.Variable giving biases for the second conv layer
    - fc_w: TensorFlow tf.Variable giving weights for the fully-connected layer
    - fc_b: TensorFlow tf.Variable giving biases for the fully-connected layer
    """
    params = None
    ############################################################################
    # TODO: Initialize the parameters of the three-layer network.              #
    ############################################################################
    # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****

    # (The helper defined above is create_matrix_with_kaiming_normal;
    # the draft referenced an undefined kaiming_normal.)
    conv_w1 = tf.Variable(create_matrix_with_kaiming_normal((5, 5, 3, 32)))
    conv_b1 = tf.Variable(np.zeros([32]), dtype=tf.float32)
    conv_w2 = tf.Variable(create_matrix_with_kaiming_normal((3, 3, 32, 16)))
    conv_b2 = tf.Variable(np.zeros([16]), dtype=tf.float32)
    fc_w = tf.Variable(create_matrix_with_kaiming_normal((32 * 32 * 16, 10)))
    fc_b = tf.Variable(np.zeros([10]), dtype=tf.float32)
    params = (conv_w1, conv_b1, conv_w2, conv_b2, fc_w, fc_b)

    # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
    ############################################################################
    #                             END OF YOUR CODE                             #
    ############################################################################
    return params

learning_rate = 3e-3
train_part2(three_layer_convnet, three_layer_convnet_init, learning_rate)
_____no_output_____
MIT
assignment2/TensorFlow.ipynb
LOTEAT/CS231n
Part III: Keras Model Subclassing API

Implementing a neural network using the low-level TensorFlow API is a good way to understand how TensorFlow works, but it's a little inconvenient - we had to manually keep track of all Tensors holding learnable parameters. This was fine for a small network, but could quickly become unwieldy for a large complex model.

Fortunately TensorFlow 2.0 provides higher-level APIs such as `tf.keras` which make it easy to build models out of modular, object-oriented layers. Further, TensorFlow 2.0 uses eager execution that evaluates operations immediately, without explicitly constructing any computational graphs. This makes it easy to write and debug models, and reduces the boilerplate code.

In this part of the notebook we will define neural network models using the `tf.keras.Model` API. To implement your own model, you need to do the following:

1. Define a new class which subclasses `tf.keras.Model`. Give your class an intuitive name that describes it, like `TwoLayerFC` or `ThreeLayerConvNet`.
2. In the initializer `__init__()` for your new class, define all the layers you need as class attributes. The `tf.keras.layers` package provides many common neural-network layers, like `tf.keras.layers.Dense` for fully-connected layers and `tf.keras.layers.Conv2D` for convolutional layers. Under the hood, these layers will construct `Variable` Tensors for any learnable parameters. **Warning**: Don't forget to call `super(YourModelName, self).__init__()` as the first line in your initializer!
3. Implement the `call()` method for your class; this implements the forward pass of your model, and defines the *connectivity* of your network. Layers defined in `__init__()` implement `__call__()` so they can be used as function objects that transform input Tensors into output Tensors. Don't define any new layers in `call()`; any layers you want to use in the forward pass should be defined in `__init__()`.

After you define your `tf.keras.Model` subclass, you can instantiate it and use it like the model functions from Part II.

Keras Model Subclassing API: Two-Layer Network

Here is a concrete example of using the `tf.keras.Model` API to define a two-layer network. There are a few new bits of API to be aware of here:

We use an `Initializer` object to set up the initial values of the learnable parameters of the layers; in particular `tf.initializers.VarianceScaling` gives behavior similar to the Kaiming initialization method we used in Part II. You can read more about it here: https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/initializers/VarianceScaling

We construct `tf.keras.layers.Dense` objects to represent the two fully-connected layers of the model. In addition to multiplying their input by a weight matrix and adding a bias vector, these layers can also apply a nonlinearity for you. For the first layer we specify a ReLU activation function by passing `activation='relu'` to the constructor; the second layer uses a softmax activation function. Finally, we use `tf.keras.layers.Flatten` to flatten the output from the previous fully-connected layer.
class TwoLayerFC(tf.keras.Model):
    def __init__(self, hidden_size, num_classes):
        super(TwoLayerFC, self).__init__()
        initializer = tf.initializers.VarianceScaling(scale=2.0)
        self.fc1 = tf.keras.layers.Dense(hidden_size, activation='relu',
                                         kernel_initializer=initializer)
        self.fc2 = tf.keras.layers.Dense(num_classes, activation='softmax',
                                         kernel_initializer=initializer)
        self.flatten = tf.keras.layers.Flatten()

    def call(self, x, training=False):
        x = self.flatten(x)
        x = self.fc1(x)
        x = self.fc2(x)
        return x

def test_TwoLayerFC():
    """A small unit test to exercise the TwoLayerFC model above."""
    input_size, hidden_size, num_classes = 50, 42, 10
    x = tf.zeros((64, input_size))
    model = TwoLayerFC(hidden_size, num_classes)
    with tf.device(device):
        scores = model(x)
        print(scores.shape)

test_TwoLayerFC()
_____no_output_____
MIT
assignment2/TensorFlow.ipynb
LOTEAT/CS231n
Keras Model Subclassing API: Three-Layer ConvNet

Now it's your turn to implement a three-layer ConvNet using the `tf.keras.Model` API. Your model should have the same architecture used in Part II:

1. Convolutional layer with 5 x 5 kernels, with zero-padding of 2
2. ReLU nonlinearity
3. Convolutional layer with 3 x 3 kernels, with zero-padding of 1
4. ReLU nonlinearity
5. Fully-connected layer to give class scores
6. Softmax nonlinearity

You should initialize the weights of your network using the same initialization method as was used in the two-layer network above.

**Hint**: Refer to the documentation for `tf.keras.layers.Conv2D` and `tf.keras.layers.Dense`:

https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers/Conv2D

https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers/Dense
class ThreeLayerConvNet(tf.keras.Model):
    def __init__(self, channel_1, channel_2, num_classes):
        super(ThreeLayerConvNet, self).__init__()
        ########################################################################
        # TODO: Implement the __init__ method for a three-layer ConvNet. You   #
        # should instantiate layer objects to be used in the forward pass.     #
        ########################################################################
        # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****

        # (Uses the TF 2.0 names; tf.variance_scaling_initializer and
        # tf.layers.* from the draft are TF 1.x APIs.)
        initializer = tf.initializers.VarianceScaling(scale=2.0)
        self.conv1 = tf.keras.layers.Conv2D(channel_1, [5, 5], [1, 1], padding='valid',
                                            kernel_initializer=initializer,
                                            activation=tf.nn.relu)
        self.conv2 = tf.keras.layers.Conv2D(channel_2, [3, 3], [1, 1], padding='valid',
                                            kernel_initializer=initializer,
                                            activation=tf.nn.relu)
        self.flatten = tf.keras.layers.Flatten()
        self.fc = tf.keras.layers.Dense(num_classes, kernel_initializer=initializer)

        # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
        ########################################################################
        #                           END OF YOUR CODE                           #
        ########################################################################

    def call(self, x, training=False):
        scores = None
        ########################################################################
        # TODO: Implement the forward pass for a three-layer ConvNet. You      #
        # should use the layer objects defined in the __init__ method.         #
        ########################################################################
        # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****

        padding = tf.constant([[0, 0], [2, 2], [2, 2], [0, 0]])
        x = tf.pad(x, padding, 'CONSTANT')
        x = self.conv1(x)
        padding = tf.constant([[0, 0], [1, 1], [1, 1], [0, 0]])
        x = tf.pad(x, padding, 'CONSTANT')
        x = self.conv2(x)
        x = self.flatten(x)
        scores = self.fc(x)

        # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
        ########################################################################
        #                           END OF YOUR CODE                           #
        ########################################################################
        return scores
_____no_output_____
MIT
assignment2/TensorFlow.ipynb
LOTEAT/CS231n
Once you complete the implementation of the `ThreeLayerConvNet` above you can run the following to ensure that your implementation does not crash and produces outputs of the expected shape.
def test_ThreeLayerConvNet():
    channel_1, channel_2, num_classes = 12, 8, 10
    model = ThreeLayerConvNet(channel_1, channel_2, num_classes)
    with tf.device(device):
        # TensorFlow uses the N x H x W x C layout, so the channel
        # dimension comes last.
        x = tf.zeros((64, 32, 32, 3))
        scores = model(x)
        print(scores.shape)

test_ThreeLayerConvNet()
_____no_output_____
MIT
assignment2/TensorFlow.ipynb
LOTEAT/CS231n
Keras Model Subclassing API: Eager Training

While Keras models have a built-in training loop (using `model.fit`), sometimes you need more customization. Here's an example of a training loop implemented with eager execution.

In particular, notice `tf.GradientTape`. Automatic differentiation is used in the backend for implementing backpropagation in frameworks like TensorFlow. During eager execution, `tf.GradientTape` is used to trace operations for computing gradients later. A particular `tf.GradientTape` can only compute one gradient; subsequent calls to the tape will throw a runtime error.

TensorFlow 2.0 ships with easy-to-use built-in metrics under the `tf.keras.metrics` module. Each metric is an object, and we can use `update_state()` to add observations and `reset_states()` to clear all observations. We can get the current result of a metric by calling `result()` on the metric object.
def train_part34(model_init_fn, optimizer_init_fn, num_epochs=1, is_training=False):
    """
    Simple training loop for use with models defined using tf.keras. It trains
    a model for one epoch on the CIFAR-10 training set and periodically checks
    accuracy on the CIFAR-10 validation set.

    Inputs:
    - model_init_fn: A function that takes no parameters; when called it
      constructs the model we want to train: model = model_init_fn()
    - optimizer_init_fn: A function which takes no parameters; when called it
      constructs the Optimizer object we will use to optimize the model:
      optimizer = optimizer_init_fn()
    - num_epochs: The number of epochs to train for

    Returns: Nothing, but prints progress during training
    """
    with tf.device(device):
        # Compute the loss like we did in Part II
        loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

        model = model_init_fn()
        optimizer = optimizer_init_fn()

        train_loss = tf.keras.metrics.Mean(name='train_loss')
        train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy')

        val_loss = tf.keras.metrics.Mean(name='val_loss')
        val_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='val_accuracy')

        t = 0
        for epoch in range(num_epochs):
            # Reset the metrics - https://www.tensorflow.org/alpha/guide/migration_guide#new-style_metrics
            train_loss.reset_states()
            train_accuracy.reset_states()

            for x_np, y_np in train_dset:
                with tf.GradientTape() as tape:
                    # Use the model function to build the forward pass.
                    scores = model(x_np, training=is_training)
                    loss = loss_fn(y_np, scores)

                    gradients = tape.gradient(loss, model.trainable_variables)
                    optimizer.apply_gradients(zip(gradients, model.trainable_variables))

                    # Update the metrics
                    train_loss.update_state(loss)
                    train_accuracy.update_state(y_np, scores)

                    if t % print_every == 0:
                        val_loss.reset_states()
                        val_accuracy.reset_states()
                        for test_x, test_y in val_dset:
                            # During validation at end of epoch, training set to False
                            prediction = model(test_x, training=False)
                            t_loss = loss_fn(test_y, prediction)

                            val_loss.update_state(t_loss)
                            val_accuracy.update_state(test_y, prediction)

                        template = 'Iteration {}, Epoch {}, Loss: {}, Accuracy: {}, Val Loss: {}, Val Accuracy: {}'
                        print(template.format(t, epoch + 1,
                                              train_loss.result(),
                                              train_accuracy.result() * 100,
                                              val_loss.result(),
                                              val_accuracy.result() * 100))
                    t += 1
_____no_output_____
MIT
assignment2/TensorFlow.ipynb
LOTEAT/CS231n
Keras Model Subclassing API: Train a Two-Layer Network

We can now use the tools defined above to train a two-layer network on CIFAR-10. We define the `model_init_fn` and `optimizer_init_fn` that construct the model and optimizer respectively when called. Here we want to train the model using stochastic gradient descent with no momentum, so we construct a `tf.keras.optimizers.SGD` object; you can [read about it here](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/optimizers/SGD).

You don't need to tune any hyperparameters here, but you should achieve validation accuracies above 40% after one epoch of training.
hidden_size, num_classes = 4000, 10
learning_rate = 1e-2

def model_init_fn():
    return TwoLayerFC(hidden_size, num_classes)

def optimizer_init_fn():
    return tf.keras.optimizers.SGD(learning_rate=learning_rate)

train_part34(model_init_fn, optimizer_init_fn)
_____no_output_____
MIT
assignment2/TensorFlow.ipynb
LOTEAT/CS231n
Keras Model Subclassing API: Train a Three-Layer ConvNet

Here you should use the tools we've defined above to train a three-layer ConvNet on CIFAR-10. Your ConvNet should use 32 filters in the first convolutional layer and 16 filters in the second layer.

To train the model you should use gradient descent with Nesterov momentum 0.9.

**HINT**: https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/optimizers/SGD

You don't need to perform any hyperparameter tuning, but you should achieve validation accuracies above 50% after training for one epoch.
learning_rate = 3e-3
channel_1, channel_2, num_classes = 32, 16, 10

def model_init_fn():
    model = None
    ############################################################################
    # TODO: Complete the implementation of model_fn.                           #
    ############################################################################
    # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****

    # One possible solution: reuse the ThreeLayerConvNet defined above.
    model = ThreeLayerConvNet(channel_1, channel_2, num_classes)

    # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
    ############################################################################
    #                             END OF YOUR CODE                             #
    ############################################################################
    return model

def optimizer_init_fn():
    optimizer = None
    ############################################################################
    # TODO: Complete the implementation of model_fn.                           #
    ############################################################################
    # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****

    # One possible solution: SGD with Nesterov momentum 0.9, as specified above.
    optimizer = tf.keras.optimizers.SGD(learning_rate=learning_rate,
                                        momentum=0.9, nesterov=True)

    # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
    ############################################################################
    #                             END OF YOUR CODE                             #
    ############################################################################
    return optimizer

train_part34(model_init_fn, optimizer_init_fn)
_____no_output_____
MIT
assignment2/TensorFlow.ipynb
LOTEAT/CS231n
Part IV: Keras Sequential API

In Part III we introduced the `tf.keras.Model` API, which allows you to define models with any number of learnable layers and with arbitrary connectivity between layers.

However, for many models you don't need such flexibility - a lot of models can be expressed as a sequential stack of layers, with the output of each layer fed to the next layer as input. If your model fits this pattern, then there is an even easier way to define your model: using `tf.keras.Sequential`. You don't need to write any custom classes; you simply call the `tf.keras.Sequential` constructor with a list containing a sequence of layer objects.

One complication with `tf.keras.Sequential` is that you must define the shape of the input to the model by passing a value to the `input_shape` of the first layer in your model.

Keras Sequential API: Two-Layer Network

In this subsection, we will rewrite the two-layer fully-connected network using `tf.keras.Sequential`, and train it using the training loop defined above.

You don't need to perform any hyperparameter tuning here, but you should see validation accuracies above 40% after training for one epoch.
learning_rate = 1e-2 def model_init_fn(): input_shape = (32, 32, 3) hidden_layer_size, num_classes = 4000, 10 initializer = tf.initializers.VarianceScaling(scale=2.0) layers = [ tf.keras.layers.Flatten(input_shape=input_shape), tf.keras.layers.Dense(hidden_layer_size, activation='relu', kernel_initializer=initializer), tf.keras.layers.Dense(num_classes, activation='softmax', kernel_initializer=initializer), ] model = tf.keras.Sequential(layers) return model def optimizer_init_fn(): return tf.keras.optimizers.SGD(learning_rate=learning_rate) train_part34(model_init_fn, optimizer_init_fn)
_____no_output_____
MIT
assignment2/TensorFlow.ipynb
LOTEAT/CS231n
Abstracting Away the Training LoopIn the previous examples, we used a customised training loop to train models (e.g. `train_part34`). Writing your own training loop is only required if you need more flexibility and control while training your model. Alternatively, you can use built-in APIs like `tf.keras.Model.fit()` and `tf.keras.Model.evaluate()` to train and evaluate a model. Also remember to configure your model for training by calling `tf.keras.Model.compile()`. You don't need to perform any hyperparameter tuning here, but you should see validation and test accuracies above 42% after training for one epoch.
model = model_init_fn() model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=learning_rate), loss='sparse_categorical_crossentropy', metrics=[tf.keras.metrics.sparse_categorical_accuracy]) model.fit(X_train, y_train, batch_size=64, epochs=1, validation_data=(X_val, y_val)) model.evaluate(X_test, y_test)
_____no_output_____
MIT
assignment2/TensorFlow.ipynb
LOTEAT/CS231n
Keras Sequential API: Three-Layer ConvNetHere you should use `tf.keras.Sequential` to reimplement the same three-layer ConvNet architecture used in Part II and Part III. As a reminder, your model should have the following architecture:1. Convolutional layer with 32 5x5 kernels, using zero padding of 22. ReLU nonlinearity3. Convolutional layer with 16 3x3 kernels, using zero padding of 14. ReLU nonlinearity5. Fully-connected layer giving class scores6. Softmax nonlinearityYou should initialize the weights of the model using a `tf.initializers.VarianceScaling` as above.You should train the model using Nesterov momentum 0.9.You don't need to perform any hyperparameter search, but you should achieve accuracy above 45% after training for one epoch.
def model_init_fn(): model = None ############################################################################ # TODO: Construct a three-layer ConvNet using tf.keras.Sequential. # ############################################################################ # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)***** pass # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)***** ############################################################################ # END OF YOUR CODE # ############################################################################ return model learning_rate = 5e-4 def optimizer_init_fn(): optimizer = None ############################################################################ # TODO: Complete the implementation of model_fn. # ############################################################################ # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)***** pass # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)***** ############################################################################ # END OF YOUR CODE # ############################################################################ return optimizer train_part34(model_init_fn, optimizer_init_fn)
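A possible completion of the TODOs above is sketched below. With stride 1, `padding='same'` reproduces the required zero padding of 2 for the 5x5 kernels and 1 for the 3x3 kernels; the layer sizes follow the stated architecture, everything else is illustrative.

# Sketch of one possible completion (not the official solution)
def model_init_fn():
    input_shape = (32, 32, 3)
    channel_1, channel_2, num_classes = 32, 16, 10
    initializer = tf.initializers.VarianceScaling(scale=2.0)
    layers = [
        # 5x5 conv, zero padding 2 at stride 1 == 'same' padding
        tf.keras.layers.Conv2D(channel_1, 5, padding='same', activation='relu',
                               kernel_initializer=initializer,
                               input_shape=input_shape),
        # 3x3 conv, zero padding 1 at stride 1 == 'same' padding
        tf.keras.layers.Conv2D(channel_2, 3, padding='same', activation='relu',
                               kernel_initializer=initializer),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(num_classes, activation='softmax',
                              kernel_initializer=initializer),
    ]
    return tf.keras.Sequential(layers)

def optimizer_init_fn():
    return tf.keras.optimizers.SGD(learning_rate=learning_rate,
                                   momentum=0.9, nesterov=True)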
_____no_output_____
MIT
assignment2/TensorFlow.ipynb
LOTEAT/CS231n
We will also train this model with the built-in training loop APIs provided by TensorFlow.
model = model_init_fn() model.compile(optimizer='sgd', loss='sparse_categorical_crossentropy', metrics=[tf.keras.metrics.sparse_categorical_accuracy]) model.fit(X_train, y_train, batch_size=64, epochs=1, validation_data=(X_val, y_val)) model.evaluate(X_test, y_test)
_____no_output_____
MIT
assignment2/TensorFlow.ipynb
LOTEAT/CS231n
Part IV: Functional API Demonstration with a Two-Layer Network In the previous section, we saw how we can use `tf.keras.Sequential` to stack layers to quickly build simple models. But this comes at the cost of losing flexibility.Often we will have to write complex models that have non-sequential data flows: a layer can have **multiple inputs and/or outputs**, such as stacking the output of 2 previous layers together to feed as input to a third! (Some examples are residual connections and dense blocks.)In such cases, we can use Keras functional API to write models with complex topologies such as: 1. Multi-input models 2. Multi-output models 3. Models with shared layers (the same layer called several times) 4. Models with non-sequential data flows (e.g. residual connections)Writing a model with Functional API requires us to create a `tf.keras.Model` instance and explicitly write input tensors and output tensors for this model.
def two_layer_fc_functional(input_shape, hidden_size, num_classes): initializer = tf.initializers.VarianceScaling(scale=2.0) inputs = tf.keras.Input(shape=input_shape) flattened_inputs = tf.keras.layers.Flatten()(inputs) fc1_output = tf.keras.layers.Dense(hidden_size, activation='relu', kernel_initializer=initializer)(flattened_inputs) scores = tf.keras.layers.Dense(num_classes, activation='softmax', kernel_initializer=initializer)(fc1_output) # Instantiate the model given inputs and outputs. model = tf.keras.Model(inputs=inputs, outputs=scores) return model def test_two_layer_fc_functional(): """ A small unit test to exercise the TwoLayerFC model above. """ input_size, hidden_size, num_classes = 50, 42, 10 input_shape = (50,) x = tf.zeros((64, input_size)) model = two_layer_fc_functional(input_shape, hidden_size, num_classes) with tf.device(device): scores = model(x) print(scores.shape) test_two_layer_fc_functional()
_____no_output_____
MIT
assignment2/TensorFlow.ipynb
LOTEAT/CS231n
Keras Functional API: Train a Two-Layer NetworkYou can now train this two-layer network constructed using the functional API.You don't need to perform any hyperparameter tuning here, but you should see validation accuracies above 40% after training for one epoch.
input_shape = (32, 32, 3) hidden_size, num_classes = 4000, 10 learning_rate = 1e-2 def model_init_fn(): return two_layer_fc_functional(input_shape, hidden_size, num_classes) def optimizer_init_fn(): return tf.keras.optimizers.SGD(learning_rate=learning_rate) train_part34(model_init_fn, optimizer_init_fn)
_____no_output_____
MIT
assignment2/TensorFlow.ipynb
LOTEAT/CS231n
Part V: CIFAR-10 open-ended challengeIn this section you can experiment with whatever ConvNet architecture you'd like on CIFAR-10.You should experiment with architectures, hyperparameters, loss functions, regularization, or anything else you can think of to train a model that achieves **at least 70%** accuracy on the **validation** set within 10 epochs. You can use the built-in train function, the `train_part34` function from above, or implement your own training loop.Describe what you did at the end of the notebook. Some things you can try:- **Filter size**: Above we used 5x5 and 3x3; is this optimal?- **Number of filters**: Above we used 16 and 32 filters. Would more or fewer do better?- **Pooling**: We didn't use any pooling above. Would this improve the model?- **Normalization**: Would your model be improved with batch normalization, layer normalization, group normalization, or some other normalization strategy?- **Network architecture**: The ConvNet above has only three layers of trainable parameters. Would a deeper model do better?- **Global average pooling**: Instead of flattening after the final convolutional layer, would global average pooling do better? This strategy is used for example in Google's Inception network and in Residual Networks.- **Regularization**: Would some kind of regularization improve performance? Maybe weight decay or dropout? NOTE: Batch Normalization / DropoutIf you are using Batch Normalization and Dropout, remember to pass `is_training=True` if you use the `train_part34()` function. BatchNorm and Dropout layers have different behaviors at training and inference time. `training` is a specific keyword argument reserved for this purpose in any `tf.keras.Model`'s `call()` function. Read more about this here: https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers/BatchNormalization#methods and https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers/Dropout#methods Tips for trainingFor each network architecture that you try, you should tune the learning rate and other hyperparameters. When doing this there are a couple of important things to keep in mind: - If the parameters are working well, you should see improvement within a few hundred iterations- Remember the coarse-to-fine approach for hyperparameter tuning: start by testing a large range of hyperparameters for just a few training iterations to find the combinations of parameters that are working at all.- Once you have found some sets of parameters that seem to work, search more finely around these parameters. You may need to train for more epochs.- You should use the validation set for hyperparameter search, and save your test set for evaluating your architecture on the best parameters as selected by the validation set. Going above and beyondIf you are feeling adventurous there are many other features you can implement to try and improve your performance. You are **not required** to implement any of these, but don't miss the fun if you have time!- Alternative optimizers: you can try Adam, Adagrad, RMSprop, etc.- Alternative activation functions such as leaky ReLU, parametric ReLU, ELU, or MaxOut.- Model ensembles- Data augmentation- New Architectures - [ResNets](https://arxiv.org/abs/1512.03385) where the input from the previous layer is added to the output. - [DenseNets](https://arxiv.org/abs/1608.06993) where inputs into previous layers are concatenated together. 
- [This blog has an in-depth overview](https://chatbotslife.com/resnets-highwaynets-and-densenets-oh-my-9bb15918ee32) Have fun and happy training!
class CustomConvNet(tf.keras.Model): def __init__(self): super(CustomConvNet, self).__init__() ############################################################################ # TODO: Construct a model that performs well on CIFAR-10 # ############################################################################ # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)***** pass # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)***** ############################################################################ # END OF YOUR CODE # ############################################################################ def call(self, input_tensor, training=False): ############################################################################ # TODO: Construct a model that performs well on CIFAR-10 # ############################################################################ # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)***** pass # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)***** ############################################################################ # END OF YOUR CODE # ############################################################################ return x print_every = 700 num_epochs = 10 model = CustomConvNet() def model_init_fn(): return CustomConvNet() def optimizer_init_fn(): learning_rate = 1e-3 return tf.keras.optimizers.Adam(learning_rate) train_part34(model_init_fn, optimizer_init_fn, num_epochs=num_epochs, is_training=True)
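As a starting point, here is a sketch of one architecture in the spirit of the tips above (conv-batchnorm-ReLU blocks, max pooling, global average pooling). The channel counts are illustrative, not tuned hyperparameters; whether it reaches the 70% target depends on training settings.

# Sketch of a possible CustomConvNet (channel counts are illustrative)
class CustomConvNet(tf.keras.Model):
    def __init__(self):
        super(CustomConvNet, self).__init__()
        initializer = tf.initializers.VarianceScaling(scale=2.0)
        self.conv1 = tf.keras.layers.Conv2D(32, 3, padding='same',
                                            kernel_initializer=initializer)
        self.bn1 = tf.keras.layers.BatchNormalization()
        self.conv2 = tf.keras.layers.Conv2D(64, 3, padding='same',
                                            kernel_initializer=initializer)
        self.bn2 = tf.keras.layers.BatchNormalization()
        self.pool = tf.keras.layers.MaxPool2D(2)
        self.conv3 = tf.keras.layers.Conv2D(128, 3, padding='same',
                                            kernel_initializer=initializer)
        self.bn3 = tf.keras.layers.BatchNormalization()
        self.gap = tf.keras.layers.GlobalAveragePooling2D()
        self.fc = tf.keras.layers.Dense(10, activation='softmax',
                                        kernel_initializer=initializer)

    def call(self, x, training=False):
        # `training` toggles BatchNorm between batch statistics and
        # moving averages, as the note above explains
        x = tf.nn.relu(self.bn1(self.conv1(x), training=training))
        x = self.pool(tf.nn.relu(self.bn2(self.conv2(x), training=training)))
        x = tf.nn.relu(self.bn3(self.conv3(x), training=training))
        return self.fc(self.gap(x))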
_____no_output_____
MIT
assignment2/TensorFlow.ipynb
LOTEAT/CS231n
paperspace tmux - multiple screens tensor = array
## nomenclature # error/loss = target - calculated # non linear - activation functions
_____no_output_____
Apache-2.0
series2/week1/week1_class2.ipynb
s-ahuja/AI-Saturday
Slide 24, Slide 25 of neuralNetwork.pptx
import numpy as np

#Slide 25
weights_1 = np.array([[0.71,0.112],[0.355,0.856],[0.268,0.468]])
x = np.array([1,1])
print(weights_1)
print(x)

y_linear_1 = np.dot(weights_1,x)
print(y_linear_1)

import math
def sigmoid(x):
    return 1 / (1 + math.exp(-x))

fn_sigmoid = np.vectorize(sigmoid)
y_nonlinear_1 = fn_sigmoid(y_linear_1)
print(y_nonlinear_1)

weights2 = np.array([0.116,0.329,0.708]) # 1x3
y_linear_2 = np.dot(weights2,y_nonlinear_1)
print(y_linear_2)
y_nonlinear_2 = fn_sigmoid(y_linear_2)
print(y_nonlinear_2)
0.6926974470214398
Apache-2.0
series2/week1/week1_class2.ipynb
s-ahuja/AI-Saturday
Slide 26
import math weights_list = [] weights_list.append(np.array([[0.71,0.112],[0.355,0.856],[0.268,0.468]])) weights_list.append(np.array([0.116,0.329,0.708])) input_x = np.array([1,1]) def sigmoid(x): return 1 / (1 + math.exp(-x)) fn_sigmoid = np.vectorize(sigmoid) def calc(weight,x): y_linear = np.dot(weight,x) y_nonlinear = fn_sigmoid(y_linear) return y_nonlinear for weight in weights_list: y = calc(weight,input_x) input_x = y print (y)
0.6926974470214398
Apache-2.0
series2/week1/week1_class2.ipynb
s-ahuja/AI-Saturday
ListsSequential, Ordered Collection Creating lists
x = [4,2,6,3] #Create a list with values y = list() # Create an empty list y = [] #Create an empty list print(x) print(y)
[4, 2, 6, 3] []
MIT
Week 2 - A Crash Course In Python Part 2/Collections.ipynb
2series/Analytics-And-Python
Adding items to a list
x=list()
print(x)
x.append('One') #Adds 'One' to the back of the empty list
print(x)
x.append('Two') #Adds 'Two' to the back of the list ['One']
print(x)
x.insert(0,'Half') #Inserts 'Half' at location 0. Items will shift to make room
print(x)
x=list()
x.extend([1,2,3]) #Unpacks the list and adds each item to the back of the list
print(x)
[1, 2, 3]
MIT
Week 2 - A Crash Course In Python Part 2/Collections.ipynb
2series/Analytics-And-Python
Indexing and slicing
x=[1,7,2,5,3,5,67,32] print(len(x)) print(x[3]) print(x[2:5]) print(x[-1]) print(x[::-1])
8 5 [2, 5, 3] 32 [32, 67, 5, 3, 5, 2, 7, 1]
MIT
Week 2 - A Crash Course In Python Part 2/Collections.ipynb
2series/Analytics-And-Python
Removing items from a list
x=[1,7,2,5,3,5,67,32] x.pop() #Removes the last element from a list print(x) x.pop(3) #Removes element at item 3 from a list print(x) x.remove(7) #Removes the first 7 from the list print(x)
[1, 7, 2, 5, 3, 5, 67] [1, 7, 2, 3, 5, 67] [1, 2, 3, 5, 67]
MIT
Week 2 - A Crash Course In Python Part 2/Collections.ipynb
2series/Analytics-And-Python
Anything you want to remove must actually be in the list, and any position you pop must be a valid index; otherwise Python raises an exception:
x.remove(20)
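To handle a value that may not be present, check membership first or catch the exception; a small illustrative sketch:

x = [1, 2, 3, 5, 67]
# Option 1: check membership before removing
if 20 in x:
    x.remove(20)
# Option 2: catch the ValueError that remove raises
try:
    x.remove(20)
except ValueError:
    print("20 is not in the list")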
_____no_output_____
MIT
Week 2 - A Crash Course In Python Part 2/Collections.ipynb
2series/Analytics-And-Python
Mutability of lists
y=['a','b']
x = [1,y,3]
print(x)
print(y)
y[1] = 4
print(y)
print(x)

x="Hello"
print(x,id(x))
x+=" You!"
print(x,id(x)) #x is not the same object it was

y=["Hello"]
print(y,id(y))
y+=["You!"]
print(y,id(y)) #y is still the same object. Lists are mutable. Strings are immutable

#Default argument values are evaluated once, when the function is defined.
#The immutable default (0) is safe; the mutable default list below is shared across calls!
def eggs(item,total=0):
    total+=item
    return total

def spam(elem,some_list=[]):
    some_list.append(elem)
    return some_list

print(eggs(1)) #1
print(eggs(2)) #2
print(spam(1)) #[1]
print(spam(2)) #[1, 2] -- the same default list accumulated both calls
1 2 [1] [1, 2]
MIT
Week 2 - A Crash Course In Python Part 2/Collections.ipynb
2series/Analytics-And-Python
Iteration Range iteration
#The for loop creates a new variable (e.g., index below)
#range(len(x)) generates values from 0 to len(x)-1
x=[1,7,2,5,3,5,67,32]
for index in range(len(x)):
    print(x[index])
list(range(len(x)))
_____no_output_____
MIT
Week 2 - A Crash Course In Python Part 2/Collections.ipynb
2series/Analytics-And-Python
List element iteration
x=[1,7,2,5,3,5,67,32] #The for draws elements - sequentially - from the list x and uses the variable "element" to store values for element in x: print(element)
1 7 2 5 3 5 67 32
MIT
Week 2 - A Crash Course In Python Part 2/Collections.ipynb
2series/Analytics-And-Python
Practice problem Write a function search_list that searches a list of tuple pairs and returns the value associated with the first element of the pair
def search_list(list_of_tuples,value):
    #Loop over the parameter, not the global `prices` list
    for t in list_of_tuples:
        if t[0] == value:
            return t[1]

prices = [('AAPL',96.43),('IONS',39.28),('GS',159.53)]
ticker = 'IONS'
print(search_list(prices,ticker))
39.28
MIT
Week 2 - A Crash Course In Python Part 2/Collections.ipynb
2series/Analytics-And-Python
Dictionaries
mktcaps = {'AAPL':538.7,'GOOG':68.7,'IONS':4.6}
mktcaps['AAPL'] #Returns the value associated with the key "AAPL"
#mktcaps['GS'] #Would raise a KeyError because GS is not in mktcaps
mktcaps.get('GS') #Returns None because GS is not in mktcaps
mktcaps['GS'] = 88.65 #Adds GS to the dictionary
print(mktcaps)
del(mktcaps['GOOG']) #Removes GOOG from mktcaps
print(mktcaps)
mktcaps.keys() #Returns all the keys
mktcaps.values() #Returns all the values

list1 = [1, 2, 3, 4, 5, 6, 7]
list1[0]
list1[:2]
list1[:-2]
list1[3:5]

data = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
print(data[1][0][0])

numbers = [1, 2, 3, 4]
numbers.append([5, 6, 7, 8])
print(len(numbers))

list1 = [1, 2, 3, 4, 5, 6, 7]
print(list1[0])
print(list1[:2])
print(list1[:-2])
print(list1[3:5])

dict1 = {"john":40, "peter":45}
dict2 = {"john":466, "peter":45}
#dict1 > dict2 #Raises a TypeError in Python 3: dictionaries are not orderable

dict1 = {"a":1, "b":2}# to delete the entry for "a":1, use ________.
#d.delete("a":1)
#dict1.delete("a")
#del dict1("a":1)
del dict1["a"]
dict1

s = {1, 2, 4, 3}# which of the following will result in an exception (error)? Multiple options may be correct.
#print(s[3])
print(max(s))
print(len(s))
#s[3] = 45
4 4
MIT
Week 2 - A Crash Course In Python Part 2/Collections.ipynb
2series/Analytics-And-Python
Julian's Code
start = time.time()
# 5 m lakes (DEM downsampled by a factor of 10)
DEM40 = skimage.transform.resize(DEMflat, (int(DEMflat.shape[0] / 10), int(DEMflat.shape[1] / 10)), anti_aliasing=True)
DEM_inv = skimage.util.invert(DEM40)
DEMinv_gaus = gaussian_filter(DEM_inv,1)
marker = np.copy(DEMinv_gaus)
marker[1:-1, 1:-1] = -9999
Inan = np.argwhere(np.isnan(DEMinv_gaus))
Inan_mask = np.isnan(DEMinv_gaus)
Inan_mask_not = np.logical_not(Inan_mask)
if((np.array(Inan)).size>0):
    # 2D structuring element: MATLAB's ones(3) corresponds to np.ones((3,3))
    I = skimage.morphology.binary_dilation(Inan_mask,np.ones((3,3))) & Inan_mask_not
    marker[I] = DEMinv_gaus[I]
mask = DEMinv_gaus
demfs = reconstruction(marker, mask, method='dilation')
D = DEMinv_gaus-demfs
index = Inan_mask_not # boolean mask of valid (non-NaN) pixels
maxdepth = 40
while np.any(D[index]>maxdepth):
    lakemask = D>0
    label_lakemask = measure.label(lakemask)
    STATS = measure.regionprops(label_lakemask,D)
    for r in np.arange(0,len(STATS)):
        if(STATS[r].max_intensity < maxdepth):
            pass
        else:
            poly_x = STATS[r].coords[:,0]
            poly_y = STATS[r].coords[:,1]
            poly = D[poly_x, poly_y]
            ix = poly.argmax()
            marker[STATS[r].coords[ix][0],STATS[r].coords[ix][1]] = DEMinv_gaus[STATS[r].coords[ix][0],STATS[r].coords[ix][1]]
    demfs = reconstruction(marker,DEMinv_gaus, method='dilation')
    D = DEMinv_gaus-demfs
demfs = skimage.util.invert(demfs)
demfs[Inan_mask] = np.nan
end = time.time()
print(end-start)

fig, ax = plt.subplots(ncols = 2, figsize=(10,5))
lakes2 = demfs-DEM40
jjscodefig = ax[0].imshow(lakes2, cmap='jet')
jjscodefig.set_clim(0,8)
plt.colorbar(jjscodefig)
basecodefig = ax[1].imshow(lakes, cmap='jet')
basecodefig.set_clim(0,8)

#difference between methods
lakediff = lakes-lakes2
fig, ax = plt.subplots( figsize=(10,10))
plt.imshow(lakediff)
plt.colorbar()
plt.clim(-5,5)
_____no_output_____
Apache-2.0
topocode2.ipynb
pangeo-data/pangeo-rema
Lake Properties
lakemask = lakes>0 label_lakes = measure.label(lakemask) LakeProps = measure.regionprops(label_lakes,lakes) numLakes = len(LakeProps) Area = np.zeros((numLakes,1)) Orientation= np.zeros((numLakes,1)) Volume = np.zeros((numLakes,1)) Max_Depth = np.zeros((numLakes,1)) Mean_Depth = np.zeros((numLakes,1)) Min_Depth = np.zeros((numLakes,1)) Perimeter = np.zeros((numLakes,1)) PPscore = np.zeros((numLakes,1)) DVscore = np.zeros((numLakes,1)) Centroid = np.zeros((numLakes,2)) for lake in np.arange(0,numLakes): Area[lake] = LakeProps[lake].area*8**2 Orientation[lake] = LakeProps[lake].orientation Volume[lake] = LakeProps[lake].intensity_image.sum()*8**2 Max_Depth[lake] = LakeProps[lake].max_intensity Mean_Depth[lake] = LakeProps[lake].mean_intensity Min_Depth[lake] = LakeProps[lake].min_intensity Perimeter[lake] = LakeProps[lake].perimeter*8 PPscore[lake] = (4*3.14*Area[lake])/(Perimeter[lake]**2) DVscore[lake] = 3*Mean_Depth[lake]/Max_Depth[lake] Centroid[lake] = LakeProps[lake].centroid plt.scatter(Area, Max_Depth) plt.xlim(0,1e5) plt.ylim(0,5)
_____no_output_____
Apache-2.0
topocode2.ipynb
pangeo-data/pangeo-rema
Elevation Data
ElevationProps = measure.regionprops(label_lakes,DEM40) numLakes = len(LakeProps) Max_Elev = np.zeros((numLakes,1)) Mean_Elev = np.zeros((numLakes,1)) Min_Elev = np.zeros((numLakes,1)) for lake in np.arange(0,numLakes): Max_Elev[lake] =ElevationProps[lake].max_intensity Mean_Elev[lake] = ElevationProps[lake].mean_intensity Min_Elev[lake] = ElevationProps[lake].min_intensity
_____no_output_____
Apache-2.0
topocode2.ipynb
pangeo-data/pangeo-rema
Full Tiles
xlength = DEMflat.shape[0]
ylength = DEMflat.shape[1]
quarter_tile = np.empty([int(xlength/2),int(ylength/2),4])
inProj = Proj(init= geoDEM.crs)
outProj = Proj(init='epsg:4326') #Plate Carree
lon,lat = transform(inProj,outProj,list(geoDEM.x),list(geoDEM.y))

quarter_tile[:,:,0] = DEMflat[0:int(xlength/2), 0:int(ylength/2)]
quarter_tile[:,:,1] = DEMflat[int(xlength/2):xlength,0:int(ylength/2)]
quarter_tile[:,:,2] = DEMflat[0:int(xlength/2),int(ylength/2):ylength]
quarter_tile[:,:,3] = DEMflat[int(xlength/2):xlength,int(ylength/2):ylength]

#Coordinate lookup tables for each quarter tile
#(referenced below when converting lake centroids to lat/lon)
coordinates_lat = np.empty([int(xlength/2),4])
coordinates_lon = np.empty([int(ylength/2),4])
coordinates_lon[:,0] = lon[0:int(xlength/2)]
coordinates_lon[:,1] = lon[int(xlength/2):xlength]
coordinates_lon[:,2] = lon[0:int(xlength/2)]
coordinates_lon[:,3] = lon[int(xlength/2):xlength]
coordinates_lat[:,0] = lat[0:int(ylength/2)]
coordinates_lat[:,1] = lat[0:int(ylength/2)]
coordinates_lat[:,2] = lat[int(ylength/2):ylength]
coordinates_lat[:,3] = lat[int(ylength/2):ylength]

numLakes_total = 0
Area_total = []
Orientation_total= []
Volume_total = []
Max_Depth_total = []
Mean_Depth_total = []
Min_Depth_total = []
Perimeter_total = []
PPscore_total = []
DVscore_total = []
Max_Elev_total = []
Mean_Elev_total = []
Min_Elev_total = []
Centroidlat_total = []
Centroidlon_total = []

for tile in np.arange(0,4): #all four quarter tiles
    DEM40 = skimage.transform.resize(quarter_tile[:,:,tile], (int(quarter_tile[:,:,tile].shape[0] / 5), int(quarter_tile[:,:,tile].shape[1] / 5)), anti_aliasing=True)
    DEM40 = rd.rdarray(DEM40,no_data = -9999 )
    DEMf = rd.FillDepressions(DEM40)
    lakes = DEMf-DEM40
    lakemask = lakes>0
    label_lakes = measure.label(lakemask)
    LakeProps = measure.regionprops(label_lakes,lakes)
    ElevationProps = measure.regionprops(label_lakes,DEM40)
    numLakes = len(LakeProps)
    #Per-tile property arrays, sized for this tile's lake count
    Area = np.zeros((numLakes,1))
    Orientation= np.zeros((numLakes,1))
    Volume = np.zeros((numLakes,1))
    Max_Depth = np.zeros((numLakes,1))
    Mean_Depth = np.zeros((numLakes,1))
    Min_Depth = np.zeros((numLakes,1))
    Perimeter = np.zeros((numLakes,1))
    PPscore = np.zeros((numLakes,1))
    DVscore = np.zeros((numLakes,1))
    Max_Elev = np.zeros((numLakes,1))
    Mean_Elev = np.zeros((numLakes,1))
    Min_Elev = np.zeros((numLakes,1))
    Centroidlon = np.zeros((numLakes,1))
    Centroidlat = np.zeros((numLakes,1))
    for lake in np.arange(0,numLakes):
        Area[lake] = LakeProps[lake].area*8**2
        Orientation[lake] = LakeProps[lake].orientation
        Volume[lake] = LakeProps[lake].intensity_image.sum()*8**2
        Max_Depth[lake] = LakeProps[lake].max_intensity
        Mean_Depth[lake] = LakeProps[lake].mean_intensity
        Min_Depth[lake] = LakeProps[lake].min_intensity
        Perimeter[lake] = LakeProps[lake].perimeter*8
        PPscore[lake] = (4*3.14*Area[lake])/(Perimeter[lake]**2)
        DVscore[lake] = 3*Mean_Depth[lake]/Max_Depth[lake]
        Max_Elev[lake] = ElevationProps[lake].max_intensity
        Mean_Elev[lake] = ElevationProps[lake].mean_intensity
        Min_Elev[lake] = ElevationProps[lake].min_intensity
        Centroidlat[lake] = coordinates_lat[int(round(LakeProps[lake].centroid[0])),tile]
        Centroidlon[lake] = coordinates_lon[int(round(LakeProps[lake].centroid[1])),tile]
    numLakes_total = numLakes_total+numLakes
    Area_total = np.append(Area_total,Area)
    Orientation_total= np.append(Orientation_total,Orientation)
    Volume_total = np.append(Volume_total,Volume)
    Max_Depth_total = np.append(Max_Depth_total,Max_Depth)
    Mean_Depth_total = np.append(Mean_Depth_total,Mean_Depth)
    Min_Depth_total = np.append(Min_Depth_total,Min_Depth)
    Perimeter_total = np.append(Perimeter_total,Perimeter)
    PPscore_total = np.append(PPscore_total,PPscore)
    DVscore_total = np.append(DVscore_total,DVscore)
    Max_Elev_total = np.append(Max_Elev_total,Max_Elev)
    Mean_Elev_total = np.append(Mean_Elev_total,Mean_Elev)
    Min_Elev_total = np.append(Min_Elev_total,Min_Elev)
    Centroidlat_total = np.append(Centroidlat_total,Centroidlat)
    Centroidlon_total = np.append(Centroidlon_total, Centroidlon)

coordinates_lat[int(round(LakeProps[1].centroid[0])),1]

plt.scatter(Centroidlat_total,Centroidlon_total)

len(lon)
_____no_output_____
Apache-2.0
topocode2.ipynb
pangeo-data/pangeo-rema
Authoring repeatable processes aka AzureML pipelines
from azureml.core import Workspace ws = Workspace.from_config() dataset = ws.datasets["diabetes-tabular"] compute_target = ws.compute_targets["cpu-cluster"] from azureml.core import RunConfiguration # To simplify we are going to use a big demo environment instead # of creating our own specialized environment. We will also use # the same environment for all steps, but this is not needed. runconfig = RunConfiguration() runconfig.environment = ws.environments["AzureML-lightgbm-3.2-ubuntu18.04-py37-cpu"]
_____no_output_____
MIT
fundamentals/src/notebooks/040_pipelines.ipynb
konabuta/fta-azure-machine-learning
Step 1 - Convert data into LightGBM dataset
from azureml.pipeline.core import PipelineData step01_output = PipelineData( "training_data", datastore=ws.get_default_datastore(), is_directory=True ) from azureml.pipeline.core import PipelineParameter from azureml.data.dataset_consumption_config import DatasetConsumptionConfig ds_pipeline_param = PipelineParameter(name="dataset", default_value=dataset) step01_input_dataset = DatasetConsumptionConfig("input_dataset", ds_pipeline_param) from azureml.pipeline.steps import PythonScriptStep step_01 = PythonScriptStep( "step01_data_prep.py", source_directory="040_scripts", arguments=["--dataset-id", step01_input_dataset, "--output-path", step01_output], name="Prepare data", runconfig=runconfig, compute_target=compute_target, inputs=[step01_input_dataset], outputs=[step01_output], allow_reuse=True, )
_____no_output_____
MIT
fundamentals/src/notebooks/040_pipelines.ipynb
konabuta/fta-azure-machine-learning
Step 2 - Train the LightGBM model
from azureml.pipeline.core import PipelineParameter learning_rate_param = PipelineParameter(name="learning_rate", default_value=0.05) step02_output = PipelineData( "model_output", datastore=ws.get_default_datastore(), is_directory=True ) step_02 = PythonScriptStep( "step02_train.py", source_directory="040_scripts", arguments=[ "--learning-rate", learning_rate_param, "--input-path", step01_output, "--output-path", step02_output, ], name="Train model", runconfig=runconfig, compute_target=compute_target, inputs=[step01_output], outputs=[step02_output], )
_____no_output_____
MIT
fundamentals/src/notebooks/040_pipelines.ipynb
konabuta/fta-azure-machine-learning
Step 3 - Register model
step_03 = PythonScriptStep( "step03_register.py", source_directory="040_scripts", arguments=[ "--input-path", step02_output, "--dataset-id", step01_input_dataset, ], name="Register model", runconfig=runconfig, compute_target=compute_target, inputs=[step01_input_dataset, step02_output], )
_____no_output_____
MIT
fundamentals/src/notebooks/040_pipelines.ipynb
konabuta/fta-azure-machine-learning
Create pipeline
from azureml.pipeline.core import Pipeline pipeline = Pipeline(workspace=ws, steps=[step_01, step_02, step_03])
_____no_output_____
MIT
fundamentals/src/notebooks/040_pipelines.ipynb
konabuta/fta-azure-machine-learning
Trigger pipeline through SDK
from azureml.core import Experiment # Using the SDK experiment = Experiment(ws, "pipeline-run") pipeline_run = experiment.submit(pipeline, pipeline_parameters={"learning_rate": 0.5}) pipeline_run.wait_for_completion()
_____no_output_____
MIT
fundamentals/src/notebooks/040_pipelines.ipynb
konabuta/fta-azure-machine-learning
Register pipeline to reuse
published_pipeline = pipeline.publish( "Training pipeline", description="A pipeline to train a LightGBM model" )
_____no_output_____
MIT
fundamentals/src/notebooks/040_pipelines.ipynb
konabuta/fta-azure-machine-learning
Trigger published pipeline through REST
from azureml.core.authentication import InteractiveLoginAuthentication auth = InteractiveLoginAuthentication() aad_token = auth.get_authentication_header() import requests response = requests.post( published_pipeline.endpoint, headers=aad_token, json={ "ExperimentName": "pipeline-run", "ParameterAssignments": {"learning_rate": 0.02}, }, ) print( f"Made a POST request to {published_pipeline.endpoint} and got {response.status_code}." ) print(f"The portal url for the run is {response.json()['RunUrl']}")
_____no_output_____
MIT
fundamentals/src/notebooks/040_pipelines.ipynb
konabuta/fta-azure-machine-learning
Scheduling a pipeline
from azureml.pipeline.core.schedule import ScheduleRecurrence, Schedule
from datetime import datetime

recurrence = ScheduleRecurrence(
    frequency="Month", interval=1, start_time=datetime.now()
)
schedule = Schedule.create(
    workspace=ws,
    name="pipeline-schedule",
    pipeline_id=published_pipeline.id,
    experiment_name="pipeline-schedule-run",
    recurrence=recurrence,
    wait_for_provisioning=True,
    description="Schedule to retrain model",
)
print("Created schedule with id: {}".format(schedule.id))

from azureml.pipeline.core.schedule import Schedule

# Disable schedule
schedules = Schedule.list(ws, active_only=True)
print("Your workspace has the following schedules set up:")
for schedule in schedules:
    print(f"Disabling {schedule.id} (Published pipeline: {schedule.pipeline_id})")
    schedule.disable(wait_for_provisioning=True)
_____no_output_____
MIT
fundamentals/src/notebooks/040_pipelines.ipynb
konabuta/fta-azure-machine-learning
Local Feature MatchingBy the end of this exercise, you will be able to transform images of a flat (planar) object, or images taken from the same point, into a common reference frame. This is at the core of applications such as panorama stitching.A quick overview:1. We will start with histogram representations for images (or image regions).2. Then we will detect robust keypoints in images and use simple histogram descriptors to describe the neighborhood of each keypoint.3. After this we will compare descriptors from different images using a distance function and establish matching points.4. Using these matching points we will estimate the homography transformation between two images of a planar object (a wall with graffiti) and use this to warp one image to look like the other.
%matplotlib notebook import numpy as np import matplotlib.pyplot as plt import imageio import cv2 import math from scipy import ndimage from attrdict import AttrDict from mpl_toolkits.mplot3d import Axes3D # Many useful functions def plot_multiple(images, titles=None, colormap='gray', max_columns=np.inf, imwidth=4, imheight=4, share_axes=False): """Plot multiple images as subplots on a grid.""" if titles is None: titles = [''] *len(images) assert len(images) == len(titles) n_images = len(images) n_cols = min(max_columns, n_images) n_rows = int(np.ceil(n_images / n_cols)) fig, axes = plt.subplots( n_rows, n_cols, figsize=(n_cols * imwidth, n_rows * imheight), squeeze=False, sharex=share_axes, sharey=share_axes) axes = axes.flat # Hide subplots without content for ax in axes[n_images:]: ax.axis('off') if not isinstance(colormap, (list,tuple)): colormaps = [colormap]*n_images else: colormaps = colormap for ax, image, title, cmap in zip(axes, images, titles, colormaps): ax.imshow(image, cmap=cmap) ax.set_title(title) ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) fig.tight_layout() def load_image(f_name): return imageio.imread(f_name, as_gray=True).astype(np.float32)/255 def convolve_with_two(image, kernel1, kernel2): """Apply two filters, one after the other.""" image = ndimage.convolve(image, kernel1) image = ndimage.convolve(image, kernel2) return image def gauss(x, sigma): return 1 / np.sqrt(2 * np.pi) / sigma * np.exp(- x**2 / 2 / sigma**2) def gaussdx(x, sigma): return (-1 / np.sqrt(2 * np.pi) / sigma**3 * x * np.exp(- x**2 / 2 / sigma**2)) def gauss_derivs(image, sigma): kernel_radius = np.ceil(3.0 * sigma) x = np.arange(-kernel_radius, kernel_radius + 1)[np.newaxis] G = gauss(x, sigma) D = gaussdx(x, sigma) image_dx = convolve_with_two(image, D, G.T) image_dy = convolve_with_two(image, G, D.T) return image_dx, image_dy def gauss_filter(image, sigma): kernel_radius = np.ceil(3.0 * sigma) x = np.arange(-kernel_radius, kernel_radius + 1)[np.newaxis] G = gauss(x, sigma) return convolve_with_two(image, G, G.T) def gauss_second_derivs(image, sigma): kernel_radius = np.ceil(3.0 * sigma) x = np.arange(-kernel_radius, kernel_radius + 1)[np.newaxis] G = gauss(x, sigma) D = gaussdx(x, sigma) image_dx, image_dy = gauss_derivs(image, sigma) image_dxx = convolve_with_two(image_dx, D, G.T) image_dyy = convolve_with_two(image_dy, G, D.T) image_dxy = convolve_with_two(image_dx, G, D.T) return image_dxx, image_dxy, image_dyy def map_range(x, start, end): """Maps values `x` that are within the range [start, end) to the range [0, 1) Values smaller than `start` become 0, values larger than `end` become slightly smaller than 1.""" return np.clip((x-start)/(end-start), 0, 1-1e-10) def draw_keypoints(image, points): image = cv2.cvtColor(image, cv2.COLOR_GRAY2RGB) radius = image.shape[1]//100+1 for x, y in points: cv2.circle(image, (int(x), int(y)), radius, (1, 0, 0), thickness=2) return image def draw_point_matches(im1, im2, point_matches): result = np.concatenate([im1, im2], axis=1) result = (result.astype(float)*0.6).astype(np.uint8) im1_width = im1.shape[1] for x1, y1, x2, y2 in point_matches: cv2.line(result, (x1, y1), (im1_width+x2, y2), color=(0,255,255), thickness=2, lineType=cv2.LINE_AA) return result %%html <!-- This adds heading numbers to each section header --> <style> body {counter-reset: section;} h2:before {counter-increment: section; content: counter(section) " ";} </style>
_____no_output_____
MIT
Exercise3/Exercise3/local_feature_matching.ipynb
danikhani/CV1-2020
Histograms in 1DIf we have a grayscale image, creating a histogram of the gray values tells us how frequently each gray value appears in the image, at a certain discretization level, which is controlled by the number of bins.Implement `compute_1d_histogram(im, n_bins)`. Given a grayscale image `im` with shape `[height, width]` and the number of bins `n_bins`, return a `histogram` array that contains the number of values falling into each bin. Assume that the values (of the image) are in the range \[0,1), so the specified number of bins should cover the range from 0 to 1. Normalize the resulting histogram to sum to 1.
def compute_1d_histogram(im, n_bins): histogram = np.zeros(n_bins) # YOUR CODE HERE raise NotImplementedError() return histogram fig, axes = plt.subplots(1,4, figsize=(10,2), constrained_layout=True) bin_counts = [2, 25, 256] gray_img = imageio.imread('terrain.png', as_gray=True ).astype(np.float32)/256 axes[0].set_title('Image') axes[0].imshow(gray_img, cmap='gray') for ax, n_bins in zip(axes[1:], bin_counts): ax.set_title(f'1D histogram with {n_bins} bins') bin_size = 1/n_bins x_axis = np.linspace(0, 1, n_bins, endpoint=False)+bin_size/2 hist = compute_1d_histogram(gray_img, n_bins) ax.bar(x_axis, hist, bin_size)
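One possible implementation of the exercise (a sketch, not the reference solution): scale each value in [0,1) to an integer bin index, count occurrences, and normalize.

# Possible solution sketch: values in [0,1) map to bin indices 0..n_bins-1
def compute_1d_histogram(im, n_bins):
    bin_ids = (im.reshape(-1) * n_bins).astype(int)
    histogram = np.bincount(bin_ids, minlength=n_bins).astype(float)
    return histogram / histogram.sum()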
_____no_output_____
MIT
Exercise3/Exercise3/local_feature_matching.ipynb
danikhani/CV1-2020
What is the effect of the different bin counts? YOUR ANSWER HERE Histograms in 3DIf the pixel values are more than one-dimensional (e.g. three-dimensional RGB, for red, green and blue color channels), we can build a multi-dimensional histogram. In the R, G, B example this will tell us how frequently each *combination* of R, G, B values occurs. (Note that this contains more information than simply building 3 one-dimensional histograms, each for R, G and B, separately. Why?)Implement a new function `compute_3d_histogram(im, n_bins)`, which takes as input an array of shape `[height, width, 3]` and returns a histogram of shape `[n_bins, n_bins, n_bins]`. Again, assume that the range of values is \[0,1) and normalize the histogram at the end.Visualize the RGB histograms of the images `sunset.png` and `terrain.png` using the provided code and describe what you see. We cannot use a bar chart in 3D. Instead, in the position of each 3D bin ("voxel"), we have a sphere, whose volume is proportional to the histogram's value in that bin. The color of the sphere is simply the RGB color that the bin represents. Which number of bins gives the best impression of the color distribution?
def compute_3d_histogram(im, n_bins): histogram = np.zeros([n_bins, n_bins, n_bins], dtype=np.float32) # YOUR CODE HERE raise NotImplementedError() return histogram def plot_3d_histogram(ax, data, axis_names='xyz'): """Plot a 3D histogram. We plot a sphere for each bin, with volume proportional to the bin content.""" r,g,b = np.meshgrid(*[np.linspace(0,1, dim) for dim in data.shape], indexing='ij') colors = np.stack([r,g,b], axis=-1).reshape(-1, 3) marker_sizes = 300 * data**(1/3) ax.scatter(r.flat, g.flat, b.flat, s=marker_sizes.flat, c=colors, alpha=0.5) ax.set_xlabel(axis_names[0]) ax.set_ylabel(axis_names[1]) ax.set_zlabel(axis_names[2]) paths = ['sunset.png', 'terrain.png'] images = [imageio.imread(p) for p in paths] plot_multiple(images, paths) fig, axes = plt.subplots(1, 2, figsize=(8, 4), subplot_kw={'projection': '3d'}) for path, ax in zip(paths, axes): im = imageio.imread(path).astype(np.float32)/256 hist = compute_3d_histogram(im, n_bins=16) # <--- FIDDLE WITH N_BINS HERE plot_3d_histogram(ax, hist, 'RGB') fig.tight_layout()
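A possible implementation (sketch): compute one bin index per color channel and count each pixel into the corresponding 3D bin.

# Possible solution sketch: one bin index per channel, counted per pixel
def compute_3d_histogram(im, n_bins):
    histogram = np.zeros([n_bins, n_bins, n_bins], dtype=np.float32)
    bin_ids = (im.reshape(-1, 3) * n_bins).astype(int)  # [n_pixels, 3]
    for r, g, b in bin_ids:
        histogram[r, g, b] += 1
    # (np.add.at(histogram, tuple(bin_ids.T), 1) would avoid the Python loop)
    return histogram / histogram.sum()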
_____no_output_____
MIT
Exercise3/Exercise3/local_feature_matching.ipynb
danikhani/CV1-2020
Histograms in 2DNow modify your code to work in 2D. This can be useful, for example, for a gradient image that stores two values for each pixel: the vertical and horizontal derivative. Again, assume the values are in the range \[0,1).Since gradients can be negative, we need to pick a relevant range of values an map them linearly to the range of \[0,1) before applying `compute_2d_histogram`. This is implemented by the function `map_range` provided at the beginning of the notebook.In 2D we can plot the histogram as an image. For better visibility of small values, we plot the logarithm of each bin value. Yellowish colors mean high values. The center is (0,0). Can you explain why each histogram looks the way it does for the test images?
def compute_2d_histogram(im, n_bins): histogram = np.zeros([n_bins, n_bins], dtype=np.float32) # YOUR CODE HERE raise NotImplementedError() return histogram def compute_gradient_histogram(rgb_im, n_bins): # Convert to grayscale gray_im = cv2.cvtColor(im, cv2.COLOR_RGB2GRAY).astype(float) # Compute Gaussian derivatives dx, dy = gauss_derivs(gray_im, sigma=2.0) # Map the derivatives between -10 and 10 to be between 0 and 1 dx = map_range(dx, start=-10, end=10) dy = map_range(dy, start=-10, end=10) # Stack the two derivative images along a new # axis at the end (-1 means "last") gradients = np.stack([dy, dx], axis=-1) return dx, dy, compute_2d_histogram(gradients, n_bins=16) paths = ['model/obj4__0.png', 'model/obj42__0.png'] images, titles = [], [] for path in paths: im = imageio.imread(path) dx, dy, hist = compute_gradient_histogram(im, n_bins=16) images += [im, dx, dy, np.log(hist+1e-3)] titles += [path, 'dx', 'dy', 'Histogram (log)'] plot_multiple(images, titles, max_columns=4, imwidth=2, imheight=2, colormap='viridis')
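A possible implementation (sketch), directly analogous to the 3D case but with two value channels per pixel:

# Possible solution sketch for the 2D histogram
def compute_2d_histogram(im, n_bins):
    histogram = np.zeros([n_bins, n_bins], dtype=np.float32)
    bin_ids = (im.reshape(-1, 2) * n_bins).astype(int)  # [n_pixels, 2]
    for i, j in bin_ids:
        histogram[i, j] += 1
    return histogram / histogram.sum()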
_____no_output_____
MIT
Exercise3/Exercise3/local_feature_matching.ipynb
danikhani/CV1-2020
Similar to the function `compute_gradient_histogram` above, we can build a "Mag/Lap" histogram from the gradient magnitudes and the Laplacians at each pixel. Refer back to the first exercise to refresh your knowledge of the Laplacian. Implement this in `compute_maglap_histogram`!Make sure to map the relevant range of the gradient magnitude and Laplacian values to \[0,1) using `map_range()`. For the magnitude you can assume that the values will mostly lie in the range \[0, 15) and the Laplacian in the range \[-5, 5).
def compute_maglap_histogram(rgb_im, n_bins): # Convert to grayscale gray_im = cv2.cvtColor(rgb_im, cv2.COLOR_RGB2GRAY).astype(float) # Compute Gaussian derivatives sigma = 2 kernel_radius = np.ceil(3.0 * sigma) x = np.arange(-kernel_radius, kernel_radius + 1)[np.newaxis] G = gauss(x, sigma) D = gaussdx(x, sigma) dx = convolve_with_two(gray_im, D, G.T) dy = convolve_with_two(gray_im, G, D.T) # Compute second derivatives dxx = convolve_with_two(dx, D, G.T) dyy = convolve_with_two(dy, G, D.T) # Compute gradient magnitude and Laplacian # YOUR CODE HERE raise NotImplementedError() mag_lap = np.stack([mag, lap], axis=-1) return mag, lap, compute_2d_histogram(mag_lap, n_bins=16) paths = [f'model/obj{i}__0.png' for i in [20, 37, 36, 55]] images, titles = [], [] for path in paths: im = imageio.imread(path) mag, lap, hist = compute_maglap_histogram(im, n_bins=16) images += [im, mag, lap, np.log(hist+1e-3)] titles += [path, 'Gradient magn.', 'Laplacian', 'Histogram (log)'] plot_multiple(images, titles, imwidth=2, imheight=2, max_columns=4, colormap='viridis')
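A possible completion of the TODO above (sketch): the gradient magnitude is the Euclidean norm of the gradient, the Laplacian is the sum of the second derivatives, and both are mapped to [0,1) with the ranges suggested in the text before building the 2D histogram. The derivative computations repeat the given stub so the function is self-contained.

# Possible solution sketch (derivative setup copied from the stub above)
def compute_maglap_histogram(rgb_im, n_bins):
    gray_im = cv2.cvtColor(rgb_im, cv2.COLOR_RGB2GRAY).astype(float)
    sigma = 2
    kernel_radius = np.ceil(3.0 * sigma)
    x = np.arange(-kernel_radius, kernel_radius + 1)[np.newaxis]
    G = gauss(x, sigma)
    D = gaussdx(x, sigma)
    dx = convolve_with_two(gray_im, D, G.T)
    dy = convolve_with_two(gray_im, G, D.T)
    dxx = convolve_with_two(dx, D, G.T)
    dyy = convolve_with_two(dy, G, D.T)

    mag = np.sqrt(dx**2 + dy**2)  # gradient magnitude
    lap = dxx + dyy               # Laplacian (trace of the Hessian)
    # Map the suggested value ranges to [0,1): magnitude [0,15), Laplacian [-5,5)
    mag_lap = np.stack([map_range(mag, start=0, end=15),
                        map_range(lap, start=-5, end=5)], axis=-1)
    return mag, lap, compute_2d_histogram(mag_lap, n_bins=n_bins)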
_____no_output_____
MIT
Exercise3/Exercise3/local_feature_matching.ipynb
danikhani/CV1-2020
Comparing HistogramsThe above histograms looked different, but to quantify this objectively, we need a **distance measure**. The Euclidean distance is a common one.Implement the function `euclidean_distance`, which takes two histograms $P$ and $Q$ as input and returns their Euclidean distance:$$\textit{dist}_{\textit{Euclidean}}(P, Q) = \sqrt{\sum_{i=1}^{D}{(P_i - Q_i)^2}}$$Another commonly used distance for histograms is the so-called chi-squared ($\chi^2$) distance, often defined as:$$\chi^2(P, Q) = \frac{1}{2} \sum_{i=1}^{D}\frac{(P_i - Q_i)^2}{P_i + Q_i + \epsilon}$$where a small value $\epsilon$ is added to the denominator to avoid division by zero.Implement it as `chi_square_distance`. The inputs `hist1` and `hist2` are histogram vectors containing the bin values. Remember to use numpy array functions (such as `np.sum()`) instead of looping over each element in Python (looping is slow).
def euclidean_distance(hist1, hist2): # YOUR CODE HERE raise NotImplementedError() def chi_square_distance(hist1, hist2, eps=1e-3): # YOUR CODE HERE raise NotImplementedError()
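Possible implementations of the two distances (a sketch), following the formulas above directly:

# Possible solution sketch for both histogram distances
def euclidean_distance(hist1, hist2):
    return np.sqrt(np.sum((hist1 - hist2)**2))

def chi_square_distance(hist1, hist2, eps=1e-3):
    return 0.5 * np.sum((hist1 - hist2)**2 / (hist1 + hist2 + eps))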
_____no_output_____
MIT
Exercise3/Exercise3/local_feature_matching.ipynb
danikhani/CV1-2020
Now let's take the image `obj1__0.png` as reference and compare it to `obj91__0.png` and `obj94__0.png` using RGB histograms, with both the Euclidean and the chi-square distance. Can you interpret the results?You can also try other images from the "model" folder.
im1 = imageio.imread('model/obj1__0.png') im2 = imageio.imread('model/obj91__0.png') im3 = imageio.imread('model/obj94__0.png') n_bins = 8 h1 = compute_3d_histogram(im1/256, n_bins) h2 = compute_3d_histogram(im2/256, n_bins) h3 = compute_3d_histogram(im3/256, n_bins) eucl_dist1 = euclidean_distance(h1, h2) chisq_dist1 = chi_square_distance(h1, h2) eucl_dist2 = euclidean_distance(h1, h3) chisq_dist2 = chi_square_distance(h1, h3) titles = ['Reference image', f'Eucl: {eucl_dist1:.3f}, ChiSq: {chisq_dist1:.3f}', f'Eucl: {eucl_dist2:.3f}, ChiSq: {chisq_dist2:.3f}'] plot_multiple([im1, im2, im3], titles, imheight=3)
_____no_output_____
MIT
Exercise3/Exercise3/local_feature_matching.ipynb
danikhani/CV1-2020
Keypoint DetectionNow we turn to finding keypoints in images. Harris DetectorThe Harris detector searches for points around which the second-moment matrix $M$ of the gradient vector has two large eigenvalues (this $M$ is denoted by $C$ in the Grauman & Leibe script). This matrix $M$ can be written as:$$M(\sigma, \tilde{\sigma}) = G(\tilde{\sigma}) \star \left[\begin{matrix} I_x^2(\sigma) & I_x(\sigma) \cdot I_y(\sigma) \cr I_x(\sigma)\cdot I_y(\sigma) & I_y^2(\sigma) \end{matrix}\right]$$Note that the matrix $M$ is computed for each pixel (we omitted the $x, y$ dependency in this formula for clarity). In the above notation the 4 elements of the second-moment matrix are considered as full 2D "images" (signals) and each of these 4 "images" is convolved with the Gaussian $G(\tilde{\sigma})$ independently. We have two sigmas $\sigma$ and $\tilde{\sigma}$ here for two different uses of Gaussian blurring: * first for computing the derivatives themselves (as derivatives-of-Gaussian) with $\sigma$, and * then another Gaussian with $\tilde{\sigma}$ that operates on "images" containing the *products* of the derivatives (such as $I_x^2(\sigma)$) in order to collect summary statistics from a window around each point.Instead of explicitly computing the eigenvalues $\lambda_1$ and $\lambda_2$ of $M$, the following equivalences are used:$$\det(M) = \lambda_1 \lambda_2 = (G(\tilde{\sigma}) \star I_x^2)\cdot (G(\tilde{\sigma}) \star I_y^2) - (G(\tilde{\sigma}) \star (I_x\cdot I_y))^2$$$$\mathrm{trace}(M) = \lambda_1 + \lambda_2 = G(\tilde{\sigma}) \star I_x^2 + G(\tilde{\sigma}) \star I_y^2$$The Harris criterion is then:$$\det(M) - \alpha \cdot \mathrm{trace}^2(M) > t$$In practice, the parameters are usually set as $\tilde{\sigma} = 2 \sigma, \alpha=0.06$.Read more in Section 3.2.1.2 of the Grauman & Leibe script (grauman-leibe-ch3-local-features.pdf in the Moodle).----Write a function `harris_scores(im, opts)` which: - computes the values of $M$ **for each pixel** of the grayscale image `im` - calculates the trace and the determinant at each pixel - combines them into the Harris response and returns the resulting imageTo handle the large number of configurable parameters in this exercise, we will store them in an `opts` object. Use `opts.sigma1` for $\sigma$, `opts.sigma2` for $\tilde{\sigma}$ and `opts.alpha` for $\alpha$.Furthermore, implement `nms(scores)` to perform non-maximum suppression of the response image.Then look at `score_map_to_keypoints(scores, opts)`. It takes a score map and returns an array of shape `[number_of_corners, 2]`, with each row being the $(x,y)$ coordinates of a found keypoint. We use `opts.score_threshold` as the threshold for considering a point to be a keypoint. (This is quite similar to how we found detections from score maps in the sliding-window detection exercise.)
def harris_scores(im, opts): dx, dy = gauss_derivs(im, opts.sigma1) # YOUR CODE HERE raise NotImplementedError() return scores def nms(scores): """Non-maximum suppression""" # YOUR CODE HERE raise NotImplementedError() return scores_out def score_map_to_keypoints(scores, opts): corner_ys, corner_xs = (scores > opts.score_threshold).nonzero() return np.stack([corner_xs, corner_ys], axis=1)
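A possible implementation (sketch): the products of the first derivatives are each smoothed with a Gaussian of width `opts.sigma2` (using the `gauss_filter` helper from the beginning of the notebook), combined into the Harris response, and non-maximum suppression keeps only pixels that are the maximum of their 3x3 neighborhood.

# Possible solution sketch for the Harris response and NMS
def harris_scores(im, opts):
    dx, dy = gauss_derivs(im, opts.sigma1)
    # Entries of the second-moment matrix M, each smoothed with sigma2
    sxx = gauss_filter(dx * dx, opts.sigma2)
    syy = gauss_filter(dy * dy, opts.sigma2)
    sxy = gauss_filter(dx * dy, opts.sigma2)
    det = sxx * syy - sxy**2
    trace = sxx + syy
    return det - opts.alpha * trace**2

def nms(scores):
    """Non-maximum suppression: zero out pixels that are not the 3x3 local maximum."""
    local_max = ndimage.maximum_filter(scores, size=3)
    return np.where(scores == local_max, scores, 0)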
_____no_output_____
MIT
Exercise3/Exercise3/local_feature_matching.ipynb
danikhani/CV1-2020
Now check the score maps and keypoints:
opts = AttrDict() opts.sigma1=2 opts.sigma2=opts.sigma1*2 opts.alpha=0.06 opts.score_threshold=1e-8 paths = ['checkboard.jpg', 'graf.png', 'gantrycrane.png'] images = [] titles = [] for path in paths: image = load_image(path) score_map = harris_scores(image, opts) score_map_nms = nms(score_map) keypoints = score_map_to_keypoints(score_map_nms, opts) keypoint_image = draw_keypoints(image, keypoints) images += [score_map, keypoint_image] titles += ['Harris response scores', 'Harris keypoints'] plot_multiple(images, titles, max_columns=2, colormap='viridis')
_____no_output_____
MIT
Exercise3/Exercise3/local_feature_matching.ipynb
danikhani/CV1-2020
Hessian DetectorThe Hessian detector operates on the second-derivative matrix $H$ (called the “Hessian” matrix)$$H = \left[\begin{matrix}I_{xx}(\sigma) & I_{xy}(\sigma) \cr I_{xy}(\sigma) & I_{yy}(\sigma)\end{matrix}\right] \tag{6}$$Note that these are *second* derivatives, while the Harris detector computes *products* of *first* derivatives! The score is computed as follows:$$\sigma^4 \det(H) = \sigma^4 (I_{xx}I_{yy} - I^2_{xy}) > t \tag{7}$$You can read more in Section 3.2.1.1 of the Grauman & Leibe script (grauman-leibe-ch3-local-features.pdf in the Moodle).-----Write a function `hessian_scores(im, opts)`, which: - computes the four entries of the $H$ matrix for each pixel of a given image, - calculates the determinant of $H$ to get the response imageUse `opts.sigma1` for computing the Gaussian second derivatives.
def hessian_scores(im, opts): height, width = im.shape # YOUR CODE HERE raise NotImplementedError() return scores opts = AttrDict() opts.sigma1=3 opts.score_threshold=5e-4 paths = ['checkboard.jpg', 'graf.png', 'gantrycrane.png'] images = [] titles = [] for path in paths: image = load_image(path) score_map = hessian_scores(image, opts) score_map_nms = nms(score_map) keypoints = score_map_to_keypoints(score_map_nms, opts) keypoint_image = draw_keypoints(image, keypoints) images += [score_map, keypoint_image] titles += ['Hessian scores', 'Hessian keypoints'] plot_multiple(images, titles, max_columns=2, colormap='viridis')
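A possible implementation (sketch), using the `gauss_second_derivs` helper from the beginning of the notebook:

# Possible solution sketch: sigma^4 * det(H), as in Eq. (7)
def hessian_scores(im, opts):
    dxx, dxy, dyy = gauss_second_derivs(im, opts.sigma1)
    return opts.sigma1**4 * (dxx * dyy - dxy**2)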
_____no_output_____
MIT
Exercise3/Exercise3/local_feature_matching.ipynb
danikhani/CV1-2020
Region Descriptor MatchingNow that we can detect robust keypoints, we can try to match them across different images of the same object. For this we need a way to compare the neighborhood of a keypoint found in one image with the neighborhood of a keypoint found in another. If the neighborhoods are similar, then the keypoints may represent the same physical point on the object.To compare two neighborhoods, we compute a **descriptor** vector for the image window around each keypoint and then compare these descriptors using a **distance function**.Inspect the following `compute_rgb_descriptors` function, which takes a window around each point in `points`, computes a 3D RGB histogram for each window, and returns these as row vectors in a `descriptors` array.Now write the function `compute_maglap_descriptors`, which works very similarly to `compute_rgb_descriptors`, but computes two-dimensional gradient-magnitude/Laplacian histograms. (Compute the gradient magnitude and the Laplacian for the full image first. See also the beginning of this exercise.) Pay attention to the scale of the gradient-magnitude values.
def compute_rgb_descriptors(rgb_im, points, opts): """For each (x,y) point in `points` calculate the 3D RGB histogram descriptor and stack these into a matrix of shape [num_points, descriptor_length] """ win_half = opts.descriptor_window_halfsize descriptors = [] rgb_im_01 = rgb_im.astype(np.float32)/256 for (x, y) in points: y_start = max(0, y-win_half) y_end = y+win_half+1 x_start = max(0, x-win_half) x_end = x+win_half+1 window = rgb_im_01[y_start:y_end, x_start:x_end] histogram = compute_3d_histogram(window, opts.n_histogram_bins) descriptors.append(histogram.reshape(-1)) return np.array(descriptors) def compute_maglap_descriptors(rgb_im, points, opts): """For each (x,y) point in `points` calculate the magnitude-Laplacian 2D histogram descriptor and stack these into a matrix of shape [num_points, descriptor_length] """ # Compute the gradient magnitude and Laplacian for each pixel first gray_im = cv2.cvtColor(rgb_im, cv2.COLOR_RGB2GRAY).astype(float) kernel_radius = np.ceil(3.0 * opts.sigma1) x = np.arange(-kernel_radius, kernel_radius + 1)[np.newaxis] G = gauss(x, opts.sigma1) D = gaussdx(x, opts.sigma1) dx = convolve_with_two(gray_im, D, G.T) dy = convolve_with_two(gray_im, G, D.T) dxx = convolve_with_two(dx, D, G.T) dyy = convolve_with_two(dy, G, D.T) # YOUR CODE HERE raise NotImplementedError() return np.array(descriptors)
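A possible completion (sketch): compute the mapped magnitude/Laplacian images once for the whole image, then histogram a window around each keypoint, mirroring `compute_rgb_descriptors`. The [0,15) and [-5,5) ranges are carried over from earlier; depending on the image intensity scale, a wider magnitude range may work better.

# Possible solution sketch for the magnitude/Laplacian descriptors
def compute_maglap_descriptors(rgb_im, points, opts):
    gray_im = cv2.cvtColor(rgb_im, cv2.COLOR_RGB2GRAY).astype(float)
    kernel_radius = np.ceil(3.0 * opts.sigma1)
    ax = np.arange(-kernel_radius, kernel_radius + 1)[np.newaxis]
    G = gauss(ax, opts.sigma1)
    D = gaussdx(ax, opts.sigma1)
    dx = convolve_with_two(gray_im, D, G.T)
    dy = convolve_with_two(gray_im, G, D.T)
    dxx = convolve_with_two(dx, D, G.T)
    dyy = convolve_with_two(dy, G, D.T)

    # Map to [0,1): magnitude assumed in [0,15), Laplacian in [-5,5)
    mag = map_range(np.sqrt(dx**2 + dy**2), start=0, end=15)
    lap = map_range(dxx + dyy, start=-5, end=5)
    mag_lap = np.stack([mag, lap], axis=-1)

    win_half = opts.descriptor_window_halfsize
    descriptors = []
    for (x, y) in points:
        y_start = max(0, y - win_half)
        x_start = max(0, x - win_half)
        window = mag_lap[y_start:y + win_half + 1, x_start:x + win_half + 1]
        histogram = compute_2d_histogram(window, opts.n_histogram_bins)
        descriptors.append(histogram.reshape(-1))
    return np.array(descriptors)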
_____no_output_____
MIT
Exercise3/Exercise3/local_feature_matching.ipynb
danikhani/CV1-2020
Now let's implement the distance computation between descriptors. Look at `compute_euclidean_distances`. It takes descriptors that were computed for keypoints found in two different images and returns the pairwise distances between all point pairs.Implement `compute_chi_square_distances` in a similar manner.
def compute_euclidean_distances(descriptors1, descriptors2): distances = np.empty((len(descriptors1), len(descriptors2))) for i, desc1 in enumerate(descriptors1): distances[i] = np.linalg.norm(descriptors2-desc1, axis=-1) return distances def compute_chi_square_distances(descriptors1, descriptors2): distances = np.empty((len(descriptors1), len(descriptors2))) # YOUR CODE HERE raise NotImplementedError() return distances
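A possible implementation (sketch), broadcasting the chi-square formula over all descriptors of the second image, one row at a time:

# Possible solution sketch for the pairwise chi-square distances
def compute_chi_square_distances(descriptors1, descriptors2, eps=1e-3):
    distances = np.empty((len(descriptors1), len(descriptors2)))
    for i, desc1 in enumerate(descriptors1):
        distances[i] = 0.5 * np.sum(
            (descriptors2 - desc1)**2 / (descriptors2 + desc1 + eps), axis=-1)
    return distances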
_____no_output_____
MIT
Exercise3/Exercise3/local_feature_matching.ipynb
danikhani/CV1-2020
Given the distances, a simple way to produce point matches is to take each descriptor extracted from a keypoint of the first image, and find the keypoint in the second image with the nearest descriptor. The full pipeline from images to point matches is implemented below in the function `find_point_matches(im1, im2, opts)`.Experiment with different parameter settings. Which keypoint detector, region descriptor and distance function works best?
def find_point_matches(im1, im2, opts): # Process first image im1_gray = cv2.cvtColor(im1, cv2.COLOR_RGB2GRAY).astype(float)/255 score_map1 = nms(opts.score_func(im1_gray, opts)) points1 = score_map_to_keypoints(score_map1, opts) descriptors1 = opts.descriptor_func(im1, points1, opts) # Process second image independently of first im2_gray = cv2.cvtColor(im2, cv2.COLOR_RGB2GRAY).astype(float)/255 score_map2 = nms(opts.score_func(im2_gray, opts)) points2 = score_map_to_keypoints(score_map2, opts) descriptors2 = opts.descriptor_func(im2, points2, opts) # Compute descriptor distances distances = opts.distance_func(descriptors1, descriptors2) # Find the nearest neighbor of each descriptor from the first image # among descriptors of the second image closest_ids = np.argmin(distances, axis=1) closest_dists = np.min(distances, axis=1) # Sort the point pairs in increasing order of distance # (most similar ones first) ids1 = np.argsort(closest_dists) ids2 = closest_ids[ids1] points1 = points1[ids1] points2 = points2[ids2] # Stack the point matches into rows of (x1, y1, x2, y2) values point_matches = np.concatenate([points1, points2], axis=1) return point_matches # Try changing these values in different ways and see if you can explain # why the result changes the way it does. opts = AttrDict() opts.sigma1=2 opts.sigma2=opts.sigma1*2 opts.alpha=0.06 opts.score_threshold=1e-8 opts.descriptor_window_halfsize = 20 opts.n_histogram_bins = 16 opts.score_func = harris_scores opts.descriptor_func = compute_maglap_descriptors opts.distance_func = compute_chi_square_distances # Or try these: #opts.sigma1=3 #opts.n_histogram_bins = 8 #opts.score_threshold=5e-4 #opts.score_func = hessian_scores #opts.descriptor_func = compute_rgb_descriptors #opts.distance_func = compute_euclidean_distances im1 = imageio.imread('graff5/img1.jpg') im2 = imageio.imread('graff5/img2.jpg') point_matches = find_point_matches(im1, im2, opts) match_image = draw_point_matches(im1, im2, point_matches[:50]) plot_multiple([match_image], imwidth=16, imheight=8)
_____no_output_____
MIT
Exercise3/Exercise3/local_feature_matching.ipynb
danikhani/CV1-2020
Homography EstimationNow that we have these pairs of matching points (also called point correspondences), what can we do with them? In the above case, the wall is planar (flat) and the camera was moved towards the left to take the second image compared to the first image. Therefore, the way that points on the wall are transformed across these two images can be modeled as a **homography**. Homographies can model two distinct effects: * transformation across images of **any scene** taken from the **exact same camera position** (center of projection) * transformation across images of a **planar object** taken from **any camera position**. We are dealing with the second case in these graffiti images. Therefore if our point matches are correct, there should be a homography that transforms image points in the first image to the corresponding points in the second image. Recap the algorithm from the lecture for finding this homography (it's called the **Direct Linear Transformation**, DLT). There is a 2 page description of it in the Grauman & Leibe script (grauman-leibe-ch5-geometric-verification.pdf in the Moodle) in Section 5.1.3.----Now let's actually put this into practice. Implement `estimate_homography(point_matches)`, which returns a 3x3 homography matrix that transforms points of the first image to points of the second image.The steps are: 1. Build the matrix $A$ from the point matches according to Eq. 5.7 from the script. 2. Apply SVD using `np.linalg.svd(A)`. It returns $U,d,V^T$. Note that the last return value is not $V$ but $V^T$. 3. Compute $\mathbf{h}$ from $V$ according to Eq. 5.9 or 5.10 4. Reshape $\mathbf{h}$ to the 3x3 matrix $H$ and return it. The input `point_matches` contains as many rows as there are point matches (correspondences) and each row has 4 elements: $x, y, x', y'$.
def estimate_homography(point_matches): n_matches = len(point_matches) A = np.empty((n_matches*2, 9)) for i, (x1, y1, x2, y2) in enumerate(point_matches): # YOUR CODE HERE raise NotImplementedError() return H
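For reference, here is one possible completion of the stub above. It is a sketch, not the official exercise solution: it assumes the usual two-rows-per-correspondence DLT layout, which matches Eq. 5.7 up to sign conventions (any such convention yields the same $H$ up to scale).

def estimate_homography(point_matches):
    n_matches = len(point_matches)
    A = np.empty((n_matches * 2, 9))
    for i, (x1, y1, x2, y2) in enumerate(point_matches):
        # Each correspondence (x1, y1) -> (x2, y2) contributes two rows to A.
        A[2 * i]     = [-x1, -y1, -1,   0,   0,  0, x2 * x1, x2 * y1, x2]
        A[2 * i + 1] = [  0,   0,  0, -x1, -y1, -1, y2 * x1, y2 * y1, y2]
    # h is the right singular vector of A with the smallest singular value,
    # i.e. the last row of Vt returned by np.linalg.svd.
    U, d, Vt = np.linalg.svd(A)
    h = Vt[-1]
    return h.reshape(3, 3)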
_____no_output_____
MIT
Exercise3/Exercise3/local_feature_matching.ipynb
danikhani/CV1-2020
The `point_matches` have already been sorted in the `find_point_matches` function according to the descriptor distances, so the more accurate pairs will be near the beginning. We can use the top $k$, e.g. $k=10$, pairs in the homography estimation and get a reasonably accurate estimate. Which $k$ gives the best result? What happens if you use too many? Why?

We can use `cv2.warpPerspective` to warp the first image to the reference frame of the second. Does the result look good?

Can you interpret the entries of the resulting $H$ matrix, and are the numbers as you would expect them to be for these images?

You can also try other images from the `graff5` folder or the `NewYork` folder.
# See what happens if you change top_k below top_k = 10 H = estimate_homography(point_matches[:top_k]) H_string = np.array_str(H, precision=5, suppress_small=True) print('The estimated homography matrix H is\n', H_string) im1_warped = cv2.warpPerspective(im1, H, (im2.shape[1], im2.shape[0])) absdiff = np.abs(im2.astype(np.float32)-im1_warped.astype(np.float32))/255 plot_multiple([im1, im2, im1_warped, absdiff], ['First image', 'Second image', 'Warped first image', 'Absolute difference'], max_columns=2, colormap='viridis')
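To help interpret the entries of $H$, a small optional sketch (not part of the exercise) that maps one pixel of the first image into the second image's frame; it assumes `H` from the cell above, and the point (100, 50) is an arbitrary example.

p1 = np.array([100.0, 50.0, 1.0])  # example point (x, y) in homogeneous coordinates
p2 = H @ p1
p2 = p2 / p2[2]                    # de-homogenize: divide by the third component
print('Point (100, 50) in image 1 maps to', p2[:2], 'in image 2')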
_____no_output_____
MIT
Exercise3/Exercise3/local_feature_matching.ipynb
danikhani/CV1-2020
SiteAlign features

We read the SiteAlign features from the respective [paper](https://onlinelibrary.wiley.com/doi/full/10.1002/prot.21858) and [SI table](https://onlinelibrary.wiley.com/action/downloadSupplement?doi=10.1002%2Fprot.21858&file=prot21858-SupplementaryTable.pdf) to verify `kissim`'s implementation of the SiteAlign definitions:
from kissim.definitions import SITEALIGN_FEATURES SITEALIGN_FEATURES
_____no_output_____
MIT
notebooks/004_fingerprints/999_fetch_sitealign_features.ipynb
volkamerlab/kissim_app
Size

SiteAlign's size definitions:

> Natural amino acids have been classified into three groups according to the number of heavy atoms (<4 heavy atoms: Ala, Cys, Gly, Pro, Ser, Thr, Val; 4-6 heavy atoms: Asn, Asp, Gln, Glu, His, Ile, Leu, Lys, Met; >6 heavy atoms: Arg, Phe, Trp, Tyr) and three values ("1," "2," "3") are outputted according to the group to which the current residues belong to (Table I)

https://onlinelibrary.wiley.com/doi/full/10.1002/prot.21858

Parse text from SiteAlign paper
size = { 1.0: "Ala, Cys, Gly, Pro, Ser, Thr, Val".split(", "), 2.0: "Asn, Asp, Gln, Glu, His, Ile, Leu, Lys, Met".split(", "), 3.0: "Arg, Phe, Trp, Tyr".split(", "), }
_____no_output_____
MIT
notebooks/004_fingerprints/999_fetch_sitealign_features.ipynb
volkamerlab/kissim_app
`kissim` definitions correct?
import pandas as pd from IPython.display import display, HTML # Format SiteAlign data size_list = [] for value, amino_acids in size.items(): values = [(amino_acid.upper(), value) for amino_acid in amino_acids] size_list = size_list + values size_series = ( pd.DataFrame(size_list, columns=["amino_acid", "size"]) .sort_values("amino_acid") .set_index("amino_acid") .squeeze() ) # KiSSim implementation of SiteAlign features correct? diff = size_series == SITEALIGN_FEATURES["size"] if not diff.all(): raise ValueError( f"KiSSim implementation of SiteAlign features is incorrect!!!\n" f"{display(HTML(diff.to_html()))}" ) else: print("KiSSim implementation of SiteAlign features is correct :)")
KiSSim implementation of SiteAlign features is correct :)
MIT
notebooks/004_fingerprints/999_fetch_sitealign_features.ipynb
volkamerlab/kissim_app
HBA, HBD, charge, aromatic, aliphatic

Parse table from SiteAlign SI
sitealign_table = """ Ala 0 0 0 1 0 Arg 3 0 +1 0 0 Asn 1 1 0 0 0 Asp 0 2 -1 0 0 Cys 1 0 0 1 0 Gly 0 0 0 0 0 Gln 1 1 0 0 0 Glu 0 2 -1 0 0 His/Hid/Hie 1 1 0 0 1 Hip 2 0 1 0 0 Ile 0 0 0 1 0 Leu 0 0 0 1 0 Lys 1 0 +1 0 0 Met 0 0 0 1 0 Phe 0 0 0 0 1 Pro 0 0 0 1 0 Ser 1 1 0 0 0 Thr 1 1 0 1 0 Trp 1 0 0 0 1 Tyr 1 1 0 0 1 Val 0 0 0 1 0 """ sitealign_table = [i.split() for i in sitealign_table.split("\n")[1:-1]] sitealign_dict = {i[0]: i[1:] for i in sitealign_table} sitealign_df = pd.DataFrame.from_dict(sitealign_dict).transpose() sitealign_df.columns = ["hbd", "hba", "charge", "aliphatic", "aromatic"] sitealign_df = sitealign_df[["hbd", "hba", "charge", "aromatic", "aliphatic"]] sitealign_df = sitealign_df.rename(index={"His/Hid/Hie": "His"}) sitealign_df = sitealign_df.drop("Hip", axis=0) sitealign_df = sitealign_df.astype("float") sitealign_df.index = [i.upper() for i in sitealign_df.index] sitealign_df = sitealign_df.sort_index() sitealign_df
_____no_output_____
MIT
notebooks/004_fingerprints/999_fetch_sitealign_features.ipynb
volkamerlab/kissim_app
`kissim` definitions correct?
from IPython.display import display, HTML diff = sitealign_df == SITEALIGN_FEATURES.drop("size", axis=1).sort_index() if not diff.all().all(): raise ValueError( f"KiSSim implementation of SiteAlign features is incorrect!!!\n" f"{display(HTML(diff.to_html()))}" ) else: print("KiSSim implementation of SiteAlign features is correct :)")
KiSSim implementation of SiteAlign features is correct :)
MIT
notebooks/004_fingerprints/999_fetch_sitealign_features.ipynb
volkamerlab/kissim_app
Table style
from Bio.Data.IUPACData import protein_letters_3to1 for feature_name in SITEALIGN_FEATURES.columns: print(feature_name) for name, group in SITEALIGN_FEATURES.groupby(feature_name): amino_acids = {protein_letters_3to1[i.capitalize()] for i in group.index} amino_acids = sorted(amino_acids) print(f"{name:<7}{' '.join(amino_acids)}") print()
size 1.0 A C G P S T V 2.0 D E H I K L M N Q 3.0 F R W Y hbd 0.0 A D E F G I L M P V 1.0 C H K N Q S T W Y 3.0 R hba 0.0 A C F G I K L M P R V W 1.0 H N Q S T Y 2.0 D E charge -1.0 D E 0.0 A C F G H I L M N P Q S T V W Y 1.0 K R aromatic 0.0 A C D E G I K L M N P Q R S T V 1.0 F H W Y aliphatic 0.0 D E F G H K N Q R S W Y 1.0 A C I L M P T V
MIT
notebooks/004_fingerprints/999_fetch_sitealign_features.ipynb
volkamerlab/kissim_app
Fashion MNIST Generative Adversarial Network (GAN)

[My blog](https://tiendil.org)

[Post about this notebook](https://tiendil.org/generative-adversarial-network-implementation)

[All public notebooks](https://github.com/Tiendil/public-jupyter-notebooks)

An educational implementation of a [GAN](https://en.wikipedia.org/wiki/Generative_adversarial_network) on the [Fashion MNIST](https://github.com/zalandoresearch/fashion-mnist) data, based on the following materials:

- https://machinelearningmastery.com/practical-guide-to-gan-failure-modes/
- https://www.tensorflow.org/tutorials/generative/dcgan
- https://keras.io/examples/generative/dcgan_overriding_train_step/

I could not google a single "clean" solution right away, so this notebook is a composition of several tutorials; in my opinion, the result is more idiomatic. GANs themselves are better explained at the link above. The short version:

- Two networks are trained: a generator and a discriminator.
- The generator learns to create images from noise.
- The discriminator learns to tell fake images from real ones.
- The discriminator's loss is defined by how well it predicts that an image is fake.
- The generator's loss is defined by how well it fools the discriminator. More about the losses later.

If the network topologies and training parameters are chosen well, the generator eventually learns to create images indistinguishable from the originals. ??????. Profit.

Preparation

The notebook was run in a customized docker container. More about my misadventures with setting up tensorflow + CUDA can be found in the blog post [you can't just take and run DL](https://tiendil.org/you-cant-just-take-and-run-dl). Official documentation on [running tensorflow via docker](https://www.tensorflow.org/install/docker).

Dockerfile:

```
FROM tensorflow/tensorflow:2.5.0-gpu-jupyter
RUN apt-get update && apt-get install -y graphviz
RUN pip install --upgrade pip
COPY requirements.txt ./
RUN pip install -r ./requirements.txt
```

requirements.txt:

```
pandas==1.1.5
kaggle==1.5.12
pydot==1.4.2          # required by tensorflow to visualize models
livelossplot==0.5.4   # required to plot loss while training
albumentations==1.0.3 # augment image data
jupyter-beeper==1.0.3
```

Initialization

No comments here; the details are explained in the [previous notebooks](https://github.com/Tiendil/public-jupyter-notebooks).
import os import random import logging import datetime import PIL import PIL.Image import jupyter_beeper from IPython.display import display, Markdown, Image import ipywidgets as ipw os.environ['TF_CPP_MIN_LOG_LEVEL'] = '1' import numpy as np import pandas as pd import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers import matplotlib.pyplot as plt from livelossplot import PlotLossesKerasTF import cv2 logging.getLogger().setLevel(logging.WARNING) tf.get_logger().setLevel(logging.WARNING) tf.autograph.set_verbosity(1) old_settings = np.seterr('raise') gpus = tf.config.list_physical_devices("GPU") display(Markdown(f'Num GPUs Available: {len(gpus)}')) if not gpus: raise RuntimeError('No GPUs found, learning process will be too slow. In Google Colab set runtime type — GPU.') display(Markdown(f'Eager mode: {tf.executing_eagerly()}')) tf.config.experimental.set_memory_growth(gpus[0], True) SEED = 1 random.seed(SEED) np.random.seed(SEED) tf.random.set_seed(SEED) tf.keras.backend.clear_session() RNG = np.random.default_rng()
_____no_output_____
BSD-3-Clause
gan-fashion-mnist/notebook.ipynb
Tiendil/public-jupyter-notebooks
Helper functions

You can skim this section. Refer to it when needed.
def split_dataset(data, *parts, cache): data_size = data.cardinality() assert data_size == sum(parts), \ f"dataset size must be equal to sum of parts: {data_size} != sum{parts}" result = [] for part in parts: data_part = data.take(part) if cache: data_part = data_part.cache() result.append(data_part) data = data.skip(part) return result def normalizer(minimum, maximum): def normalize_dataset(x): return (x - minimum) / (maximum - minimum) return normalize_dataset def display_model(model, name): filename = f'/tmp/tmp_model_schema_{name}.png' keras.utils.plot_model(model, show_shapes=True, show_layer_names=True, show_dtype=True, expand_nested=True, to_file=filename) display(Image(filename)) class LayersNameGenerator: __slots__ = ('prefix', 'number') _version = 0 def __init__(self, prefix): self.prefix = prefix self.number = 0 self.__class__._version += 1 def __call__(self, type_name, name=None): self.number += 1 if name is None: name = str(self.number) return f'{self.prefix}.{self._version}-{type_name}.{name}' def display_examples(examples_number=1, data_number=1, image_getter=None, label_getter='', figsize=(16, 16), subplot=None, cmap=plt.get_cmap('gray')): if image_getter is None: raise ValueError('image_getter must be an image or a collable') if not callable(image_getter): image_value = image_getter image_getter = lambda j: image_value if not callable(label_getter): label_value = label_getter label_getter = lambda j: label_value examples_number = min(examples_number, data_number) if subplot is None: subplot = (1, examples_number) plt.figure(figsize=figsize) if examples_number < data_number: choices = RNG.choice(data_number, examples_number, replace=False) else: choices = list(range(data_number)) for i, j in enumerate(choices): plt.subplot(*subplot, i+1) plt.imshow(image_getter(j), cmap=cmap) plt.title(label_getter(j)) plt.show() def display_memory_stats(): stats = tf.config.experimental.get_memory_info('GPU:0') message = f''' current: {stats["current"]/1024/1024}Mb peak: {stats["peak"]/1024/1024}Mb ''' display(Markdown(message)) def make_report(history, main, metrics): groups = {'main': {}} for key in history.history.keys(): if key in ('loss', 'val_loss', 'accuracy', 'val_accuracy'): if key.startswith('val_'): metric = key else: metric = f'train_{key}' groups['main'][metric] = history.history[key][-1] continue if not any(key.endswith(f'_{metric}') for metric in metrics): continue group, metric = key.rsplit('_', 1) validation = False if group.startswith('val_'): group = group[4:] validation = True if group not in groups: groups[group] = {} if validation: metric = f'val_{metric}' else: metric = f'train_{metric}' groups[group][metric] = history.history[key][-1] lines = [] for group, group_metrics in groups.items(): lines.append(f'**{group}:**') lines.append(f'```') for name, value in sorted(group_metrics.items()): if name in ('accuracy', 'val_accuracy', 'train_accuracy'): lines.append(f' {name}: {value:.4%} ({value})') else: lines.append(f' {name}: {value}') lines.append(f'```') train_loss = groups[main]['train_loss'] val_loss = groups[main].get('val_loss') val_accuracy = groups[main].get('val_accuracy') history.history[key][-1] if val_loss is None: description = f'train_loss: {train_loss:.4};' else: description = f'train_loss: {train_loss:.4}; val_loss: {val_loss:.4}; val_acc: {val_accuracy:.4%}' lines.append(f'**description:** {description}') return '\n\n'.join(lines), description def crope_layer(input, expected_shape, names): raw_shape = input.get_shape() if raw_shape == (None, 
*expected_shape): outputs = input else: dy = raw_shape[1] - expected_shape[0] dx = raw_shape[2] - expected_shape[1] x1 = dx // 2 x2 = dx - x1 y1 = dy // 2 y2 = dy - y1 outputs = layers.Cropping2D(cropping=((y1, y2), (x1, x2)), name=names('Cropping2D'))(input) return outputs def neurons_in_shape(shape): input_n = 1 for n in shape: if n is not None: input_n *= n return input_n def form_images_map(h, w, images, channels, scale=1): map_image = np.empty((SPRITE_SIZE*h, SPRITE_SIZE*w, channels), dtype=np.float32) for i in range(h): y_1 = i * SPRITE_SIZE for j in range(w): sprite = images[i*w+j] x_1 = j * SPRITE_SIZE map_image[y_1:y_1+SPRITE_SIZE, x_1:x_1+SPRITE_SIZE, :] = sprite if channels == 1: mode = 'L' map_image = np.squeeze(map_image) elif channels == 3: mode = 'RGB' else: raise ValueError(f'Unexpected channels value {channels}') if scale != 1: width, height = w * SPRITE_SIZE, h * SPRITE_SIZE map_image = cv2.resize(map_image, dsize=(width * scale, height * scale), interpolation=cv2.INTER_NEAREST) image = PIL.Image.fromarray((map_image * 255).astype(np.int8), mode) return image
_____no_output_____
BSD-3-Clause
gan-fashion-mnist/notebook.ipynb
Tiendil/public-jupyter-notebooks
Getting the data
# fetch the clothing images via TensorFlow
(TRAIN_IMAGES, TRAIN_LABELS), (TEST_IMAGES, TEST_LABELS) = tf.keras.datasets.fashion_mnist.load_data()

# constants describing the data
CHANNELS = 1
SPRITE_SIZE = 28
SPRITE_SHAPE = (SPRITE_SIZE, SPRITE_SIZE, CHANNELS)

# Prepare the data. For a GAN we only need the images.
def transform(images):
    images = (images / 255.0).astype(np.float32)
    images = np.expand_dims(images, axis=-1)
    return images

def filter_by_class(images, labels, classes):
    _images = tf.data.Dataset.from_tensor_slices(transform(images))
    _labels = tf.data.Dataset.from_tensor_slices(labels)

    d = tf.data.Dataset.zip((_images, _labels))
    d = d.filter(lambda i, l: tf.reduce_any(tf.equal(classes, l)))
    d = d.map(lambda i, l: i)

    return d

# We will train only on footwear images:
#
# - the network trains faster;
# - the result is better;
# - it is simpler and more fun to play with the trained network.
#
# That said, this implementation trains fine on all the images too.
_classes = tf.constant((5, 7, 9), tf.uint8)

_train = filter_by_class(TRAIN_IMAGES, TRAIN_LABELS, _classes)
_test = filter_by_class(TEST_IMAGES, TEST_LABELS, _classes)

DATA = _train.concatenate(_test).cache()

# In a few places we will need to know the size of the training set.
# Computing it this way is a poor solution, but at these data volumes it works.
DATA_NUMBER = len(list(DATA))

display(Markdown(f'full data shape: {DATA}'))

# Visually check that we selected the right classes
data = [image for image in DATA.take(100).as_numpy_iterator()]
form_images_map(5, 20, data, scale=1, channels=CHANNELS)
_____no_output_____
BSD-3-Clause
gan-fashion-mnist/notebook.ipynb
Tiendil/public-jupyter-notebooks
Constructing the model

In essence, a GAN is three networks:

- Generator.
- Discriminator.
- GAN: the combination of the two.

The GAN itself does not have to be a separate network; it is enough to correctly describe how the generator and the discriminator interact during training. But since they learn together, as a single whole, it seems logical to me to work with them as a single network. So we will create the generator and the discriminator separately, then define a network class that combines them into one.

Training the GAN

The generator and the discriminator are, of course, trained via loss functions. Each network's loss evaluates the quality of a binary classification of the inputs into fake and real. [Binary Crossentropy](https://www.tensorflow.org/api_docs/python/tf/keras/losses/BinaryCrossentropy) is commonly used for this. The discriminator receives a batch of real images and a batch of images created by the generator. Since we know the class of every image, we can easily compute the discriminator's loss. The generator's loss is slightly trickier to compute: the quality of its work is judged by the discriminator. The worse the discriminator performs on the generator's images, the better the generator is doing. So we feed the generated images to the discriminator labeled as real (belonging to the real class); the discriminator's loss on that data becomes the generator's loss.

Synchronizing the networks

If the generator and the discriminator learn at different speeds, or have different learning capacity, they will not be able to train in sync. Either the generator outpaces the discriminator and smothers it with trivial fakes, or the discriminator finds a trivial way to spot the fakes that the generator cannot get around. So when experimenting with GANs I strongly recommend first running something very simple but known to work, and only then complicating things and experimenting. Don't be like me :-) The same reasoning calls for visualizing the training results. **Make sure the visualization works correctly before experimenting.** Otherwise you may, like me, spend a day debugging a working network with a broken visualization.

Metrics

We have two competing networks that learn from each other's output. Such training could, in principle, go on forever, so it is not immediately clear which stopping criterion to use and which metrics to watch while analyzing training. As far as I understand, at least in simple cases the quality of GAN training is judged visually: either a person sees flaws in the generator's output or they don't. Alternatives could be either using another, pretrained network, or a meta-analysis of the metrics. I have not looked in either direction. As for analyzing the metrics themselves, there is one heuristic that can be applied in two places at once. Since the networks compete, train together, and train on the same data, we can expect their losses to be stable. Exaggerating a bit: if the generator and the discriminator learn at the same speed from the same state, their losses should not change at all, because any improvement of the generator is matched by a corresponding improvement of the discriminator, and vice versa. From this we can derive meta-metrics that estimate the stability of GAN training:

- The ratio of the generator's loss to the discriminator's loss should oscillate around one (assuming, of course, that their loss functions match).
- The ratio of the discriminator's loss on real data to its loss on fake data should also oscillate around one.

If either of these ratios deviates strongly from one, the GAN is training unevenly and problems may arise. At the same time, remember that neural networks are tricky, and deviations do happen, sometimes large ones. The important thing is that the GAN recovers from them.
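Before the full implementation below, a minimal illustrative sketch of the loss bookkeeping described above. The prediction tensors here are made-up numbers; in the real training step they come from the discriminator:

# A minimal sketch of the loss trick described above (illustrative values only).
bce = keras.losses.BinaryCrossentropy()
fake_predictions = tf.constant([[0.3], [0.7]])   # discriminator outputs on fakes
real_predictions = tf.constant([[0.8], [0.9]])   # discriminator outputs on reals
ones, zeros = tf.ones((2, 1)), tf.zeros((2, 1))

g_loss   = bce(ones,  fake_predictions)  # generator: pretend the fakes are real
d_f_loss = bce(zeros, fake_predictions)  # discriminator: fakes labeled as fake
d_r_loss = bce(ones,  real_predictions)  # discriminator: reals labeled as real

# The stability heuristics from above: both ratios should hover around 1.
print(float(d_r_loss / d_f_loss), float(d_f_loss / g_loss))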
def construct_discriminator():
    names = LayersNameGenerator('discriminator')

    inputs = keras.Input(shape=SPRITE_SHAPE, name=names('Input'))

    branch = inputs

    n = 64

    branch = layers.Conv2D(n, 4, 2, padding='same', name=names('Conv2D'))(branch)
    branch = layers.LeakyReLU(alpha=0.2, name=names('LeakyReLU'))(branch)

    branch = layers.Conv2D(n, 4, 2, padding='same', name=names('Conv2D'))(branch)
    branch = layers.LeakyReLU(alpha=0.2, name=names('LeakyReLU'))(branch)

    branch = layers.Flatten(name=names('Flatten'))(branch)
    branch = layers.Dense(1, activation="sigmoid", name=names('Dense'))(branch)

    outputs = branch

    return keras.Model(inputs=inputs, outputs=outputs, name='Discriminator')


def construct_generator(code_n):
    names = LayersNameGenerator('generator')

    inputs = keras.Input(shape=(code_n,), name=names('Input'))

    branch = inputs

    n = 128

    branch = layers.Dense(7 * 7 * n, activation='elu', name=names('Dense'))(branch)
    branch = layers.Reshape((7, 7, n), name=names('Reshape'))(branch)

    branch = layers.Conv2DTranspose(n, 4, 2, activation='relu', padding='same', name=names('Conv2DTranspose'))(branch)
    branch = layers.Conv2DTranspose(n, 4, 2, activation='relu', padding='same', name=names('Conv2DTranspose'))(branch)

    branch = layers.Conv2D(CHANNELS, 7, activation="sigmoid", padding='same', name=names('Conv2D'))(branch)

    outputs = branch

    return keras.Model(inputs=inputs, outputs=outputs, name='Generator')


# Helper class for collecting GAN metrics.
# Besides the three basic metrics:
# - discriminator loss on real data;
# - discriminator loss on fake data;
# - generator loss;
# it supports two derived metrics:
# - the ratio of the discriminator's losses on real and fake data;
# - the ratio of the discriminator's loss on fake data to the generator's loss.
class GANMetrics:

    def __init__(self):
        self._define('discriminator_real_loss')
        self._define('discriminator_fake_loss')
        self._define('generator_loss')
        self._define('discriminator_real_vs_fake_loss')
        self._define('discriminator_vs_generator_loss')

    def _define(self, name):
        setattr(self, name, keras.metrics.Mean(name=name))

    def update_state(self, d_real_loss, d_fake_loss, g_loss):
        self.discriminator_real_loss.update_state(d_real_loss)
        self.discriminator_fake_loss.update_state(d_fake_loss)
        self.generator_loss.update_state(g_loss)

        self.discriminator_real_vs_fake_loss.update_state(tf.math.divide_no_nan(d_real_loss, d_fake_loss))
        self.discriminator_vs_generator_loss.update_state(tf.math.divide_no_nan(d_fake_loss, g_loss))

    def result(self):
        return {"discriminator_real_loss": self.discriminator_real_loss.result(),
                "discriminator_fake_loss": self.discriminator_fake_loss.result(),
                "generator_loss": self.generator_loss.result(),
                "discriminator_real_vs_fake_loss": self.discriminator_real_vs_fake_loss.result(),
                "discriminator_vs_generator_loss": self.discriminator_vs_generator_loss.result()}

    def list(self):
        return [self.discriminator_real_loss,
                self.discriminator_fake_loss,
                self.generator_loss,
                self.discriminator_real_vs_fake_loss,
                self.discriminator_vs_generator_loss]

    # Plot groups for livelossplot
    def plotlosses_groups(self):
        return {'discriminator loss': ['discriminator_real_loss', 'discriminator_fake_loss'],
                'generator loss': ['generator_loss'],
                'relations': ['discriminator_real_vs_fake_loss', 'discriminator_vs_generator_loss']}

    # Short plot names for livelossplot
    def plotlosses_group_patterns(self):
        return ((r'^(discriminator_real_loss)(.*)', 'real'),
                (r'^(discriminator_fake_loss)(.*)', 'fake'),
                (r'^(generator_loss)(.*)', 'loss'),
                (r'^(discriminator_real_vs_fake_loss)(.*)', 'real / fake'),
                (r'^(discriminator_vs_generator_loss)(.*)', 'discriminator / generator'),)


# Network class that combines the generator and the discriminator into a GAN.
# We make it a separate class because we need to override the training step.
# Also, wrapping it in a class makes the network easier to visualize.
class GAN(keras.Model):

    def __init__(self, discriminator, generator, latent_dim, **kwargs):
        inputs = layers.Input(shape=latent_dim)
        super().__init__(inputs=inputs,
                         outputs=discriminator(generator(inputs)),
                         **kwargs)
        self.discriminator = discriminator
        self.generator = generator
        self.latent_dim = latent_dim
        self.batch_size = None
        self.real_labels = None
        self.fake_labels = None

    def compile(self, batch_size):
        super().compile()
        self.custom_metrics = GANMetrics()
        self.batch_size = batch_size
        self.real_labels = tf.ones((self.batch_size, 1))
        self.fake_labels = tf.zeros((self.batch_size, 1))

    @property
    def metrics(self):
        return self.custom_metrics.list()

    def latent_vector(self, n):
        return tf.random.normal(shape=(n, self.latent_dim))

    # The most interesting method: the GAN training step.
    # In many tutorials the generator and the discriminator are trained separately and even on different data.
    # That head-on approach has a right to exist, but it is certainly not optimal:
    # it generates a lot of redundant data and performs unnecessary memory operations.
    # So we will train both networks in a single pass.
    @tf.function
    def train_step(self, real_images):
        # Generate noise for the generator; the number of samples equals the number of inputs.
        random_latent_vectors = self.latent_vector(self.batch_size)

        # The generator and the discriminator must learn from different operations,
        # so we record the operations for the gradient computation ourselves.
        # We pass persistent=True: by default TensorFlow frees the GradientTape
        # after the first gradient is computed, but we need two, one per network.
        try:
            with tf.GradientTape(persistent=True) as tape:
                # generate fake images
                fake_images = self.generator(random_latent_vectors)

                # score them with the discriminator
                fake_predictions = self.discriminator(fake_images)

                # compute the generator's loss, pretending the generated images are real
                g_loss = self.discriminator.compiled_loss(self.real_labels, fake_predictions)

                # compute the discriminator's loss on the fake images, knowing they are fake
                d_f_loss = self.discriminator.compiled_loss(self.fake_labels, fake_predictions)

                # get the discriminator's predictions for the real images
                real_predictions = self.discriminator(real_images)

                # compute the discriminator's loss on the real images
                d_r_loss = self.discriminator.compiled_loss(self.real_labels, real_predictions)

            # compute the generator's gradient and make an optimization step
            grads = tape.gradient(g_loss, self.generator.trainable_weights)
            self.generator.optimizer.apply_gradients(zip(grads, self.generator.trainable_weights))

            # compute the discriminator's gradient and make an optimization step
            grads = tape.gradient((d_r_loss, d_f_loss), self.discriminator.trainable_weights)
            self.discriminator.optimizer.apply_gradients(zip(grads, self.discriminator.trainable_weights))

            # update the metrics
            self.custom_metrics.update_state(d_r_loss, d_f_loss, g_loss)
        finally:
            # drop the gradient tape
            del tape

        return self.custom_metrics.result()

# The number of noise inputs for the generator.
# 10 is a very small value! I picked it so that it is easier to experiment with the trained network.
# Properly, this value should be set to 100 or more.
# Of course, with a large amount of noise inputs it becomes hard to manipulate the network deliberately.
# That problem can be worked around with an additional autoencoder network,
# which learns to "compress" the data into a set of features.
# The autoencoder approach also seems logical to me because a GAN uses its inputs as noise
# rather than as features, while an autoencoder is oriented toward feature extraction.
CODE_N = 10

# Create the generator and the discriminator and combine them into a GAN.
# Note the custom optimizer parameters:
# TensorFlow's defaults are a poor fit for GAN training.

discriminator = construct_discriminator()
discriminator.compile(optimizer=keras.optimizers.Adam(learning_rate=0.0002, beta_1=0.5),
                      loss=keras.losses.BinaryCrossentropy())

generator = construct_generator(CODE_N)
generator.compile(optimizer=keras.optimizers.Adam(learning_rate=0.0002, beta_1=0.5),
                  loss=keras.losses.BinaryCrossentropy())

gan = GAN(discriminator=discriminator, generator=generator, latent_dim=CODE_N, name='GAN')

display_model(gan, 'GAN')

# Check that the model computes anything at all

check_input = tf.constant(RNG.random((1, CODE_N)), shape=(1, CODE_N))

generator_output = gan.generator(check_input)

display(Markdown('Generator output'))
display_examples(image_getter=generator_output[0], figsize=(3, 3))

discriminator_output = gan.discriminator(generator_output)

display(Markdown(f'Discriminator output: {discriminator_output}'))

# Check that the visualizer works on real data
data = [image for image in DATA.take(9).as_numpy_iterator()]
form_images_map(3, 3, data, scale=1, channels=CHANNELS)

# Define a custom callback for model.fit that will:
# - display the generator's output every epoch;
# - save the images to the file system.
class GANMonitor(keras.callbacks.Callback):

    def __init__(self, w, h, images_directory, scale):
        self.w = w
        self.h = h
        self.images_directory = images_directory
        self.scale = scale

    def on_epoch_end(self, epoch, logs=None):
        n = self.w * self.h

        random_latent_vectors = self.model.latent_vector(n)

        generated_images = self.model.generator(random_latent_vectors).numpy()

        pil_world = form_images_map(self.h, self.w, generated_images, channels=CHANNELS, scale=self.scale)

        # save into the directory passed to the callback
        pil_world.save(f"{self.images_directory}/generated_img_%04d.png" % (epoch,))

        display(pil_world)

# Set the training parameters

# How many times the training loop will pass over all the training data.
# Set it to taste; 100 should be enough to see the result
EPOCHS = 100

BATCH_SIZE = 128

display(Markdown(f'batch size: {BATCH_SIZE}'))
display(Markdown(f'epochs: {EPOCHS}'))

%%time

# directory for the generator's output
IMAGES_DIRECTORY = 'generated-images'

# create the image directory and clean it if it is not empty
!mkdir -p $IMAGES_DIRECTORY
!rm $IMAGES_DIRECTORY/*

# Explicitly build the dataset that will be fed to the network during training.
# Split it into batches and prefetch them ahead of time.
data_for_train = DATA.shuffle(DATA_NUMBER).batch(BATCH_SIZE, drop_remainder=True).prefetch(buffer_size=10)

# Prepare the model.
gan.compile(batch_size=BATCH_SIZE)

# Start training.
# For PlotLossesKerasTF we pass extra plot configuration.
# For GANMonitor we pass the visualization parameters.
history = gan.fit(data_for_train,
                  epochs=EPOCHS,
                  callbacks=[PlotLossesKerasTF(from_step=-50,
                                               groups=gan.custom_metrics.plotlosses_groups(),
                                               group_patterns=gan.custom_metrics.plotlosses_group_patterns(),
                                               outputs=['MatplotlibPlot']),
                             GANMonitor(h=3, w=10, images_directory=IMAGES_DIRECTORY, scale=1)])

# Beep annoyingly to signal that training has finished
jupyter_beeper.Beeper().beep(frequency=330, secs=3, blocking=True)

# Play with the result

start_index = random.randint(0, DATA_NUMBER-1)

def zero_input():
    return tf.zeros((CODE_N,))

start_vector = gan.latent_vector(1)[0]

interact_args = {f'v_{i}': ipw.FloatSlider(min=-3.0, max=3.0, step=0.01, value=start_vector[i])
                 for i in range(CODE_N)}

@ipw.interact(**interact_args)
def generate_sprite(**kwargs):
    vector = zero_input().numpy()

    for i in range(CODE_N):
        vector[i] = kwargs[f'v_{i}']

    vector = vector.reshape((1, CODE_N))

    sprite = gan.generator(vector)[0].numpy()

    scale = 1

    sprite = cv2.resize(sprite,
                        dsize=(SPRITE_SIZE*scale, SPRITE_SIZE*scale),
                        interpolation=cv2.INTER_NEAREST)

    return PIL.Image.fromarray((sprite * 255).astype(np.uint8))
_____no_output_____
BSD-3-Clause
gan-fashion-mnist/notebook.ipynb
Tiendil/public-jupyter-notebooks
**Important note:** You should always work on a duplicate of the course notebook. On the page you used to open this, tick the box next to the name of the notebook and click duplicate to easily create a new version of this notebook. You will get errors each time you try to update your course repository if you don't do this, and your changes will end up being erased by the original course version.

Welcome to Jupyter Notebooks!

If you want to learn how to use this tool you've come to the right place. This article will teach you all you need to know to use Jupyter Notebooks effectively. You only need to go through Section 1 to learn the basics, and you can go into Section 2 if you want to further increase your productivity. You might be reading this tutorial in a web page (maybe Github or the course's webpage). We strongly suggest reading this tutorial in a (yes, you guessed it) Jupyter Notebook. This way you will be able to actually *try* the different commands we will introduce here.

Section 1: Need to Know

Introduction

Let's build up from the basics: what is a Jupyter Notebook? Well, you are reading one. It is a document made of cells. You can write like I am writing now (markdown cells) or you can perform calculations in Python (code cells) and run them like this:
1+1
_____no_output_____
Apache-2.0
nbs/dl1/00_notebook_tutorial.ipynb
jwdinius/course-v3
Cool huh? This combination of prose and code makes Jupyter Notebook ideal for experimentation: we can see the rationale for each experiment, the code and the results in one comprehensive document. In fast.ai, each lesson is documented in a notebook and you can later use that notebook to experiment yourself. Other renowned institutions in academia and industry use Jupyter Notebook: Google, Microsoft, IBM, Bloomberg, Berkeley and NASA among others. Even Nobel-winning economists [use Jupyter Notebooks](https://paulromer.net/jupyter-mathematica-and-the-future-of-the-research-paper/) for their experiments and some suggest that Jupyter Notebooks will be the [new format for research papers](https://www.theatlantic.com/science/archive/2018/04/the-scientific-paper-is-obsolete/556676/).

Writing

A type of cell in which you can write like this is called _Markdown_. [_Markdown_](https://en.wikipedia.org/wiki/Markdown) is a very popular markup language. To specify that a cell is _Markdown_ you need to click in the drop-down menu in the toolbar and select _Markdown_. Click on the '+' button on the left and select _Markdown_ from the toolbar. Now you can type your first _Markdown_ cell. Write 'My first markdown cell' and press run. ![add](images/notebook_tutorial/add.png)

You should see something like this:

My first markdown cell

Now try making your first _Code_ cell: follow the same steps as before but don't change the cell type (when you add a cell its default type is _Code_). Type something like 3/2. You should see '1.5' as output.
3/2
_____no_output_____
Apache-2.0
nbs/dl1/00_notebook_tutorial.ipynb
jwdinius/course-v3
Modes

If you made a mistake in your *Markdown* cell and you have already run it, you will notice that you cannot edit it just by clicking on it. This is because you are in **Command Mode**. Jupyter Notebooks have two distinct modes:

1. **Edit Mode**: Allows you to edit a cell's content.
2. **Command Mode**: Allows you to edit the notebook as a whole and use keyboard shortcuts, but not edit a cell's content.

You can toggle between these two by either pressing ESC and Enter or clicking outside a cell or inside it (you need to double click if it's a Markdown cell). You can always know which mode you're in, since the current cell has a green border in **Edit Mode** and a blue border in **Command Mode**. Try it!

Other Important Considerations

1. Your notebook is autosaved every 120 seconds. If you want to manually save it you can just press the save button on the upper left corner or press s in **Command Mode**. ![Save](images/notebook_tutorial/save.png)

2. To know if your kernel is computing or not, you can check the dot in your upper right corner. If the dot is full, it means that the kernel is working. If not, it is idle. You can place the mouse on it and see the state of the kernel displayed. ![Busy](images/notebook_tutorial/busy.png)

3. There are a couple of shortcuts you must know about which we use **all** the time (always in **Command Mode**). These are:

Shift+Enter: Runs the code or markdown in a cell

Up Arrow / Down Arrow: Toggle across cells

b: Create new cell

0+0: Reset Kernel

You can find more shortcuts in the Shortcuts section below.

4. You may need to use a terminal in a Jupyter Notebook environment (for example to git pull on a repository). That is very easy to do, just press 'New' in your Home directory and 'Terminal'. Don't know how to use the Terminal? We made a tutorial for that as well. You can find it [here](https://course.fast.ai/terminal_tutorial.html). ![Terminal](images/notebook_tutorial/terminal.png)

That's it. This is all you need to know to use Jupyter Notebooks. That said, we have more tips and tricks below ↓↓↓

Section 2: Going deeper

Markdown formatting

Italics, Bold, Strikethrough, Inline, Blockquotes and Links

The five most important concepts to format your code appropriately when using markdown are:

1. *Italics*: Surround your text with '\_' or '\*'
2. **Bold**: Surround your text with '\__' or '\**'
3. `inline`: Surround your text with '\`'
4. > blockquote: Place '\>' before your text.
5. [Links](https://course.fast.ai/): Surround the text you want to link with '\[\]' and place the link adjacent to the text, surrounded with '()'

Headings

Notice that including a hashtag before the text in a markdown cell makes the text a heading. The number of hashtags you include will determine the priority of the header ('#' is level one, '##' is level two, '###' is level three and '####' is level four). We will add three new cells with the '+' button on the left to see how every level of heading looks. Double click on some headings and find out what level they are!

Lists

There are three types of lists in markdown.

Ordered list:

1. Step 1
2. Step 1B
3. Step 3

Unordered list

* learning rate
* cycle length
* weight decay

Task list

- [x] Learn Jupyter Notebooks
    - [x] Writing
    - [x] Modes
    - [x] Other Considerations
- [ ] Change the world

Double click on each to see how they are built!

Code Capabilities

**Code** cells are different from **Markdown** cells in that they have an output cell. This means that we can _keep_ the results of our code within the notebook and share them.
Let's say we want to show a graph that explains the result of an experiment. We can just run the necessary cells and save the notebook. The output will be there when we open it again! Try it out by running the next four cells.
# Import necessary libraries
from fastai.vision import *
import matplotlib.pyplot as plt
from PIL import Image

a = 1
b = a + 1
c = b + a + 1
d = c + b + a + 1
a, b, c, d

plt.plot([a, b, c, d])
plt.show()
_____no_output_____
Apache-2.0
nbs/dl1/00_notebook_tutorial.ipynb
jwdinius/course-v3
We can also print images while experimenting. I am watching you.
Image.open('images/notebook_tutorial/cat_example.jpg')
_____no_output_____
Apache-2.0
nbs/dl1/00_notebook_tutorial.ipynb
jwdinius/course-v3
Running the app locally

You may be running Jupyter Notebook from an interactive coding environment like Gradient, Sagemaker or Salamander. You can also run a Jupyter Notebook server from your local computer. What's more, if you have installed Anaconda you don't even need to install Jupyter (if not, just `pip install jupyter`). You just need to run `jupyter notebook` in your terminal. Remember to run it from a folder that contains all the folders/files you will want to access. You will be able to open, view and edit files located within the directory in which you run this command, but not files in parent directories. If a browser tab does not open automatically once you run the command, you should CTRL+CLICK the link starting with 'http://localhost:' and this will open a new tab in your default browser.

Creating a notebook

Click on 'New' in the upper left corner and 'Python 3' in the drop-down list (we are going to use a [Python kernel](https://github.com/ipython/ipython) for all our experiments). ![new_notebook](images/notebook_tutorial/new_notebook.png)

Note: You will sometimes hear people talking about the Notebook 'kernel'. The 'kernel' is just the Python engine that performs the computations for you.

Shortcuts and tricks

Command Mode Shortcuts

There are a couple of useful keyboard shortcuts in `Command Mode` that you can leverage to make Jupyter Notebook faster to use. Remember that you can switch back and forth between `Command Mode` and `Edit Mode` with Esc and Enter.

m: Convert cell to Markdown

y: Convert cell to Code

D+D: Delete cell

o: Toggle between hiding and showing output

Shift+Arrow up/Arrow down: Select multiple cells. Once you have selected them you can operate on them as a batch (run, copy, paste, etc.).

Shift+M: Merge selected cells.

Shift+Tab: [press once] Tells you which parameters to pass to a function

Shift+Tab: [press three times] Gives additional information on the method

Cell Tricks
from fastai import *
from fastai.vision import *
_____no_output_____
Apache-2.0
nbs/dl1/00_notebook_tutorial.ipynb
jwdinius/course-v3
There are also some tricks that you can code into a cell. `?function-name`: Shows the definition and docstring for that function
?ImageDataBunch
_____no_output_____
Apache-2.0
nbs/dl1/00_notebook_tutorial.ipynb
jwdinius/course-v3
`??function-name`: Shows the source code for that function
??ImageDataBunch
_____no_output_____
Apache-2.0
nbs/dl1/00_notebook_tutorial.ipynb
jwdinius/course-v3
`doc(function-name)`: Shows the definition, docstring **and links to the documentation** of the function (only works with the fastai library imported)
doc(ImageDataBunch)
_____no_output_____
Apache-2.0
nbs/dl1/00_notebook_tutorial.ipynb
jwdinius/course-v3
Line Magics

Line magics are functions that you can run on cells. They take as an argument the rest of the line from where they are called. You call them by placing a '%' sign before the command. The most useful ones are:

`%matplotlib inline`: This command ensures that all matplotlib plots will be plotted in the output cell within the notebook and will be kept in the notebook when saved.

`%reload_ext autoreload`, `%autoreload 2`: Reload all modules before executing a new line. If a module is edited, it is not necessary to rerun the import commands; the modules will be reloaded automatically.

These three commands are always called together at the beginning of every notebook.
%matplotlib inline %reload_ext autoreload %autoreload 2
_____no_output_____
Apache-2.0
nbs/dl1/00_notebook_tutorial.ipynb
jwdinius/course-v3
`%timeit`: Runs a line ten thousand times and displays the average time it took to run it.
%timeit [i+1 for i in range(1000)]
39.6 µs ± 543 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
Apache-2.0
nbs/dl1/00_notebook_tutorial.ipynb
jwdinius/course-v3
`%debug`: Allows you to inspect a function that is raising an error using the [Python debugger](https://docs.python.org/3/library/pdb.html).
for i in range(1000): a = i+1 b = 'string' c = b+1 %debug
> <ipython-input-15-8d78ff778454>(4)<module>()  1 for i in range(1000):  2  a = i+1  3  b = 'string' ----> 4  c = b+1  ipdb> print(a) 1 ipdb> print(b) string
Apache-2.0
nbs/dl1/00_notebook_tutorial.ipynb
jwdinius/course-v3