<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/Gena/map_center_object.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/Gena/map_center_object.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://mybinder.org/v2/gh/giswqs/earthengine-py-notebooks/master?filepath=Gena/map_center_object.ipynb"><img width=58px src="https://mybinder.org/static/images/logo_social.png" />Run in binder</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/Gena/map_center_object.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.
The magic command `%%capture` can be used to hide output from a specific cell.
```
# %%capture
# !pip install earthengine-api
# !pip install geehydro
```
Import libraries
```
import ee
import folium
import geehydro
```
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once. Uncomment the line `ee.Authenticate()`
if you are running this notebook for the first time or if you are getting an authentication error.
```
# ee.Authenticate()
ee.Initialize()
```
## Create an interactive map
This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function.
The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
```
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
```
## Add Earth Engine Python script
```
# get a single feature
countries = ee.FeatureCollection("USDOS/LSIB_SIMPLE/2017")
country = countries.filter(ee.Filter.eq('country_na', 'Ukraine'))
Map.addLayer(country, { 'color': 'orange' }, 'feature collection layer')
# TEST: center feature on a map
Map.centerObject(country, 6)
```
## Display Earth Engine data layers
```
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
```
| github_jupyter |
```
import pandas as pd
from textblob import Word
headers = pd.read_csv("header.csv")
headers['Header']
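# Each target group below maps a section label (citation, run, install, ...) to a
# hand-picked list of WordNet senses (synsets) obtained through TextBlob's Word;
# headers are later assigned to the group with the most similar sense.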
citation = [Word("citation").synsets[2], Word("reference").synsets[1], Word("cite").synsets[3]]
run = [Word("run").synsets[9],Word("run").synsets[34],Word("execute").synsets[4]]
install = [Word("installation").synsets[0],Word("install").synsets[0],Word("setup").synsets[1],Word("prepare").synsets[0],Word("preparation").synsets[0],Word("manual").synsets[0],Word("guide").synsets[2],Word("guide").synsets[9]]
download = [Word("download").synsets[0]]
requirement = [Word("requirement").synsets[2],Word("prerequisite").synsets[0],Word("prerequisite").synsets[1],Word("dependency").synsets[0],Word("dependent").synsets[0]]
contact = [Word("contact").synsets[9]]
description = [Word("description").synsets[0],Word("description").synsets[1],Word("introduction").synsets[3],Word("introduction").synsets[6],Word("basics").synsets[0],Word("initiation").synsets[1],Word("start").synsets[0],Word("start").synsets[4],Word("started").synsets[0],Word("started").synsets[1],Word("started").synsets[7],Word("started").synsets[8],Word("overview").synsets[0],Word("summary").synsets[0],Word("summary").synsets[2]]
contributor = [Word("contributor").synsets[0]]
documentation = [Word("documentation").synsets[1]]
license = [Word("license").synsets[3],Word("license").synsets[0]]
usage = [Word("usage").synsets[0],Word("example").synsets[0],Word("example").synsets[5],Word("implement").synsets[1],Word("implementation").synsets[1],Word("demo").synsets[1],Word("tutorial").synsets[0],Word("tutorial").synsets[1]]
update = [Word("updating").synsets[0],Word("updating").synsets[3]]
issues = [Word("issues").synsets[0],Word("errors").synsets[5],Word("problems").synsets[0],Word("problems").synsets[2]]
support = [Word("support").synsets[7],Word("help").synsets[0],Word("help").synsets[9],Word("report").synsets[0],Word("report").synsets[6]]
group = dict()
group.update({"citation":citation})
group.update({"download":download})
group.update({"run":run})
group.update({"installation":install})
group.update({"requirement":requirement})
group.update({"contact":contact})
group.update({"description":description})
group.update({"contributor":contributor})
group.update({"documentation":documentation})
group.update({"license":license})
group.update({"usage":usage})
group.update({"update":update})
group.update({"issues":issues})
group.update({"support":support})
def find_sim(wordlist, wd):  # returns the maximum path similarity between a word and a group's senses
    simvalue = []
    for sense in wordlist:
        sim = wd.path_similarity(sense)
        if sim is not None:
            simvalue.append(sim)
    if len(simvalue) != 0:
        return max(simvalue)
    else:
        return 0
def match_group(word_syn, group, threshold):
    """Return the group whose reference senses are most similar to any sense of the word,
    provided the best path similarity exceeds the threshold."""
    currmax = 0
    maxgroup = ""
    for sense in word_syn:  # for each sense of the word
        for key, value in group.items():  # value holds the group's reference senses
            path_sim = find_sim(value, sense)
            if path_sim > threshold and path_sim > currmax:
                maxgroup = key
                currmax = path_sim
    return maxgroup
datadf = pd.DataFrame({'Header': [], 'Group': []})
matchedgroups = []
for h in headers["Header"]:
sentence = h.split(" ")[1:]
for s in sentence:
synn = Word(s).synsets
if(len(synn)>0):
bestgroup = match_group(synn,group,0.6)
if(bestgroup!=""):
datadf = datadf.append({'Header' : h, 'Group' : bestgroup}, ignore_index=True)
print(datadf)
datadf.to_csv('header_groups.csv', index=False)
```
| github_jupyter |
# Tuning an estimator
[José C. García Alanis (he/him)](https://github.com/JoseAlanis)
Research Fellow - Child and Adolescent Psychology at [Uni Marburg](https://www.uni-marburg.de/de)
Member - [RTG 2271 | Breaking Expectations](https://www.uni-marburg.de/en/fb04/rtg-2271), [Brainhack](https://brainhack.org/)
<img align="left" src="https://raw.githubusercontent.com/G0RELLA/gorella_mwn/master/lecture/static/Twitter%20social%20icons%20-%20circle%20-%20blue.png" alt="logo" title="Twitter" width="30" height="30" /> <img align="left" src="https://raw.githubusercontent.com/G0RELLA/gorella_mwn/master/lecture/static/GitHub-Mark-120px-plus.png" alt="logo" title="Github" width="30" height="30" /> @JoiAlhaniz
<img align="right" src="https://raw.githubusercontent.com/PeerHerholz/ML-DL_workshop_SynAGE/master/lecture/static/ml-dl_workshop.png" alt="logo" title="Github" width="400" height="280" />
### Aim(s) of this section
It's very important to learn when and where it's appropriate to "tweak" your model.
Since we have done all of the previous analysis on our training data, it's fine to try out different models.
But we absolutely cannot "test" it on our *left-out data*. If we do, we are in great danger of overfitting.
It is not uncommon to try other models, or tweak hyperparameters. In this case, due to our relatively small sample size, we are probably not powered sufficiently to do so, and we would once again risk overfitting. However, for the sake of demonstration, we will do some tweaking.
We will try a few different examples:
- normalizing our target data
- tweaking our hyperparameters
- trying a more complicated model
- feature selection
### Prepare data for model
Lets bring back our example data set
```
import numpy as np
import pandas as pd
# get the data set
data = np.load('MAIN2019_BASC064_subsamp_features.npz')['a']
# get the labels
info = pd.read_csv('participants.csv')
print('There are %s samples and %s features' % (data.shape[0], data.shape[1]))
```
We'll set `Age` as target
- i.e., we'll look at these from the `regression` perspective
```
# set age as target
Y_con = info['Age']
Y_con.describe()
```
### Model specification
Now let's bring back the model specifications we used last time
```
from sklearn.model_selection import train_test_split
# split the data
X_train, X_test, y_train, y_test = train_test_split(data, Y_con, random_state=0)
# use `AgeGroup` for stratification
age_class2 = info.loc[y_train.index,'AgeGroup']
```
### Normalize the target data
```
# plot the data (seaborn/matplotlib are imported here so this cell runs on its own)
import matplotlib.pyplot as plt
import seaborn as sns
sns.displot(y_train, label='train')
plt.legend()
# create a log transformer function and log transform Y (age)
from sklearn.preprocessing import FunctionTransformer
log_transformer = FunctionTransformer(func = np.log, validate=True)
log_transformer.fit(y_train.values.reshape(-1,1))
y_train_log = log_transformer.transform(y_train.values.reshape(-1,1))[:,0]
```
Now let's plot the transformed data
```
import matplotlib.pyplot as plt
import seaborn as sns
sns.displot(y_train_log,label='test log')
plt.legend()
```
and go on with fitting the model to the log-transformed data
```
# split the data
X_train2, X_test, y_train2, y_test = train_test_split(
X_train, # x
y_train, # y
test_size = 0.25, # 75%/25% split
shuffle = True, # shuffle dataset before splitting
stratify = age_class2, # keep distribution of age class consistent
# betw. train & test sets.
random_state = 0 # same shuffle each time
)
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_predict, cross_val_score
from sklearn.metrics import r2_score, mean_absolute_error
# re-initialize the model
lin_svr = SVR(kernel='linear')
# predict
y_pred = cross_val_predict(lin_svr, X_train, y_train_log, cv=10)
# scores
acc = r2_score(y_train_log, y_pred)
mae = mean_absolute_error(y_train_log,y_pred)
# check the accuracy
print('R2:', acc)
print('MAE:', mae)
# plot the relationship
sns.regplot(x=y_pred, y=y_train_log, scatter_kws=dict(color='k'))
plt.xlabel('Predicted Log Age')
plt.ylabel('Log Age')
```
Alright, seems like a definite improvement, right? We might agree on that.
But we can't forget about interpretability: the MAE is much less interpretable now.
- do you know why?
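One reason: after the log transform, the MAE is expressed in log(age) units rather than years. As a minimal, hedged sketch (assuming the `y_train`, `y_pred`, and imports from the cells above are still in scope), the predictions can be mapped back to the original scale before computing the error:
```
# The MAE above is in log(age) units, not years.
# Back-transform the predictions to the original scale first:
y_pred_years = np.exp(y_pred)
mae_years = mean_absolute_error(y_train, y_pred_years)
print('MAE in years (after back-transform):', mae_years)
```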
### Tweak the hyperparameters
Many machine learning algorithms have hyperparameters that can be "tuned" to optimize model fitting.
Careful parameter tuning can really improve a model, but haphazard tuning will often lead to overfitting.
Our SVR model has multiple hyperparameters. Let's explore some approaches for tuning them.
For 1000 points: what is a parameter?
```
SVR?
```
Now, how do we know what parameter tuning does?
- One way is to plot a **Validation Curve**, which lets us view changes in training and validation accuracy of a model as we shift its hyperparameters. We can do this easily with sklearn.
We'll fit the same model, but with a range of different values for `C`
- The C parameter tells the SVM optimization how much you want to avoid misclassifying each training example. For large values of C, the optimization will choose a smaller-margin hyperplane if that hyperplane does a better job of getting all the training points classified correctly. Conversely, a very small value of C will cause the optimizer to look for a larger-margin separating hyperplane, even if that hyperplane misclassifies more points. For very tiny values of C, you should get misclassified examples, often even if your training data is linearly separable.
```
from sklearn.model_selection import validation_curve
C_range = 10. ** np.arange(-3, 7)
train_scores, valid_scores = validation_curve(lin_svr, X_train, y_train_log,
param_name= "C",
param_range = C_range,
cv=10,
scoring='neg_mean_squared_error')
# A bit of pandas magic to prepare the data for a seaborn plot
tScores = pd.DataFrame(train_scores).stack().reset_index()
tScores.columns = ['C','Fold','Score']
tScores.loc[:,'Type'] = ['Train' for x in range(len(tScores))]
vScores = pd.DataFrame(valid_scores).stack().reset_index()
vScores.columns = ['C','Fold','Score']
vScores.loc[:,'Type'] = ['Validate' for x in range(len(vScores))]
ValCurves = pd.concat([tScores,vScores]).reset_index(drop=True)
ValCurves.head()
# and plot the results
g = sns.catplot(x='C',y='Score',hue='Type',data=ValCurves,kind='point')
plt.xticks(range(10))
g.set_xticklabels(C_range, rotation=90)
```
It looks like accuracy is better for higher values of `C`, and plateaus somewhere between 0.1 and 1.
The default setting is `C=1`, so it looks like we can't really improve much by changing `C`.
But our SVR model actually has two hyperparameters, `C` and `epsilon`. Perhaps there is an optimal combination of settings for these two parameters.
We can explore that somewhat quickly with a `grid search`, which is once again easily achieved with `sklearn`.
Because we are fitting the model multiple times with cross-validation, this will take some time ...
### Let's tune some hyperparameters
```
from sklearn.model_selection import GridSearchCV
C_range = 10. ** np.arange(-3, 8)
epsilon_range = 10. ** np.arange(-3, 8)
param_grid = dict(epsilon=epsilon_range, C=C_range)
grid = GridSearchCV(lin_svr, param_grid=param_grid, cv=10)
grid.fit(X_train, y_train_log)
```
Now that the grid search has completed, let's find out what the "best" parameter combination was
```
print(grid.best_params_)
```
And what if we redo our cross-validation with this parameter set?
```
y_pred = cross_val_predict(SVR(kernel='linear',
C=grid.best_params_['C'],
epsilon=grid.best_params_['epsilon'],
gamma='auto'),
X_train, y_train_log, cv=10)
# scores
acc = r2_score(y_train_log, y_pred)
mae = mean_absolute_error(y_train_log,y_pred)
# print model performance
print('R2:', acc)
print('MAE:', mae)
# and plot the results
sns.regplot(x=y_pred, y=y_train_log, scatter_kws=dict(color='k'))
plt.xlabel('Predicted Log Age')
plt.ylabel('Log Age')
```
Perhaps unsurprisingly, the model fit is only very slightly improved from what we had with our defaults. **There's a reason they are defaults, you silly**
Grid search can be a powerful and useful tool. But can you think of a way that, if not properly utilized, it could lead to overfitting? Could it be happening here?
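One hedged way to guard against that (not part of the original notebook, and assuming the `param_grid`, `X_train`, and `y_train_log` objects defined above) is nested cross-validation: the grid search runs inside each training fold, so the outer score is never used to pick hyperparameters.
```
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVR

# Inner loop: hyperparameter search restricted to each training fold
inner_grid = GridSearchCV(SVR(kernel='linear'), param_grid=param_grid, cv=5)

# Outer loop: an estimate of how the whole tuning procedure generalizes
# (slow: the full grid is refit inside every outer fold)
outer_scores = cross_val_score(inner_grid, X_train, y_train_log, cv=5, scoring='r2')
print('Nested CV R2: %.3f +/- %.3f' % (outer_scores.mean(), outer_scores.std()))
```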
You can find a nice set of tutorials with links to very helpful content regarding how to tune hyperparameters while being aware of over- and under-fitting here:
https://scikit-learn.org/stable/modules/learning_curve.html
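For reference, here is a minimal sketch of the learning-curve idea from that page, again assuming the `lin_svr`, `X_train`, and `y_train_log` objects from above:
```
from sklearn.model_selection import learning_curve

# Score the model on increasingly large subsets of the training data
train_sizes, train_scores, valid_scores = learning_curve(
    lin_svr, X_train, y_train_log,
    train_sizes=np.linspace(0.1, 1.0, 5),
    cv=10, scoring='neg_mean_squared_error')

# A persistent gap between the two curves suggests overfitting;
# two low, converging curves suggest underfitting.
print('Mean train score per size:', train_scores.mean(axis=1))
print('Mean validation score per size:', valid_scores.mean(axis=1))
```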
| github_jupyter |
# Trial 2: classification with learned graph filters
We want to classify data by first extracting meaningful features from learned filters.
```
import time
import numpy as np
import scipy.sparse, scipy.sparse.linalg, scipy.spatial.distance
from sklearn import datasets, linear_model
import matplotlib.pyplot as plt
%matplotlib inline
import os
import sys
sys.path.append('..')
from lib import graph
```
# Parameters
# Dataset
* Two digits version of MNIST with N samples of each class.
* Distinguishing 4 from 9 is the hardest.
```
def mnist(a, b, N):
"""Prepare data for binary classification of MNIST."""
folder = os.path.join('..', 'data')
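    # Note: datasets.fetch_mldata was removed from recent scikit-learn releases;
    # datasets.fetch_openml('mnist_784') is the modern equivalent if this call fails.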
mnist = datasets.fetch_mldata('MNIST original', data_home=folder)
assert N < min(sum(mnist.target==a), sum(mnist.target==b))
M = mnist.data.shape[1]
X = np.empty((M, 2, N))
X[:,0,:] = mnist.data[mnist.target==a,:][:N,:].T
X[:,1,:] = mnist.data[mnist.target==b,:][:N,:].T
y = np.empty((2, N))
y[0,:] = -1
y[1,:] = +1
X.shape = M, 2*N
y.shape = 2*N, 1
return X, y
X, y = mnist(4, 9, 1000)
print('Dimensionality: N={} samples, M={} features'.format(X.shape[1], X.shape[0]))
X -= 127.5
print('X in [{}, {}]'.format(np.min(X), np.max(X)))
def plot_digit(nn):
M, N = X.shape
m = int(np.sqrt(M))
fig, axes = plt.subplots(1,len(nn), figsize=(15,5))
for i, n in enumerate(nn):
n = int(n)
img = X[:,n]
axes[i].imshow(img.reshape((m,m)))
axes[i].set_title('Label: y = {:.0f}'.format(y[n,0]))
plot_digit([0, 1, 1e2, 1e2+1, 1e3, 1e3+1])
```
# Regularized least-square
## Reference: sklearn ridge regression
* With regularized data, the objective is the same with or without bias.
```
def test_sklearn(tauR):
def L(w, b=0):
return np.linalg.norm(X.T @ w + b - y)**2 + tauR * np.linalg.norm(w)**2
def dL(w):
return 2 * X @ (X.T @ w - y) + 2 * tauR * w
clf = linear_model.Ridge(alpha=tauR, fit_intercept=False)
clf.fit(X.T, y)
w = clf.coef_.T
print('L = {}'.format(L(w, clf.intercept_)))
print('|dLw| = {}'.format(np.linalg.norm(dL(w))))
# Normalized data: intercept should be small.
print('bias: {}'.format(abs(np.mean(y - X.T @ w))))
test_sklearn(1e-3)
```
## Linear classifier
```
def test_optim(clf, X, y, ax=None):
"""Test optimization on full dataset."""
tstart = time.process_time()
ret = clf.fit(X, y)
print('Processing time: {}'.format(time.process_time()-tstart))
print('L = {}'.format(clf.L(*ret, y)))
if hasattr(clf, 'dLc'):
print('|dLc| = {}'.format(np.linalg.norm(clf.dLc(*ret, y))))
if hasattr(clf, 'dLw'):
print('|dLw| = {}'.format(np.linalg.norm(clf.dLw(*ret, y))))
if hasattr(clf, 'loss'):
if not ax:
fig = plt.figure()
ax = fig.add_subplot(111)
ax.semilogy(clf.loss)
ax.set_title('Convergence')
ax.set_xlabel('Iteration number')
ax.set_ylabel('Loss')
if hasattr(clf, 'Lsplit'):
print('Lsplit = {}'.format(clf.Lsplit(*ret, y)))
print('|dLz| = {}'.format(np.linalg.norm(clf.dLz(*ret, y))))
ax.semilogy(clf.loss_split)
class rls:
def __init__(s, tauR, algo='solve'):
s.tauR = tauR
        if algo == 'solve':
            s.fit = s.solve
        elif algo == 'inv':
            s.fit = s.inv
def L(s, X, y):
return np.linalg.norm(X.T @ s.w - y)**2 + s.tauR * np.linalg.norm(s.w)**2
def dLw(s, X, y):
return 2 * X @ (X.T @ s.w - y) + 2 * s.tauR * s.w
def inv(s, X, y):
s.w = np.linalg.inv(X @ X.T + s.tauR * np.identity(X.shape[0])) @ X @ y
return (X,)
def solve(s, X, y):
s.w = np.linalg.solve(X @ X.T + s.tauR * np.identity(X.shape[0]), X @ y)
return (X,)
def predict(s, X):
return X.T @ s.w
test_optim(rls(1e-3, 'solve'), X, y)
test_optim(rls(1e-3, 'inv'), X, y)
```
# Feature graph
```
t_start = time.process_time()
z = graph.grid(int(np.sqrt(X.shape[0])))
dist, idx = graph.distance_sklearn_metrics(z, k=4)
A = graph.adjacency(dist, idx)
L = graph.laplacian(A, True)
lmax = graph.lmax(L)
print('Execution time: {:.2f}s'.format(time.process_time() - t_start))
```
# Lanczos basis
```
def lanczos(L, X, K):
M, N = X.shape
a = np.empty((K, N))
b = np.zeros((K, N))
V = np.empty((K, M, N))
V[0,...] = X / np.linalg.norm(X, axis=0)
for k in range(K-1):
W = L.dot(V[k,...])
a[k,:] = np.sum(W * V[k,...], axis=0)
W = W - a[k,:] * V[k,...] - (b[k,:] * V[k-1,...] if k>0 else 0)
b[k+1,:] = np.linalg.norm(W, axis=0)
V[k+1,...] = W / b[k+1,:]
a[K-1,:] = np.sum(L.dot(V[K-1,...]) * V[K-1,...], axis=0)
return V, a, b
def lanczos_H_diag(a, b):
K, N = a.shape
H = np.zeros((K*K, N))
H[:K**2:K+1, :] = a
H[1:(K-1)*K:K+1, :] = b[1:,:]
H.shape = (K, K, N)
Q = np.linalg.eigh(H.T, UPLO='L')[1]
Q = np.swapaxes(Q,1,2).T
return Q
def lanczos_basis_eval(L, X, K):
V, a, b = lanczos(L, X, K)
Q = lanczos_H_diag(a, b)
M, N = X.shape
Xt = np.empty((K, M, N))
for n in range(N):
Xt[...,n] = Q[...,n].T @ V[...,n]
Xt *= Q[0,:,np.newaxis,:]
Xt *= np.linalg.norm(X, axis=0)
return Xt, Q[0,...]
```
# Tests
* Memory arrangement for fastest computations: largest dimensions on the outside, i.e. fastest varying indices.
* The einsum seems to be efficient for three operands.
```
def test():
"""Test the speed of filtering and weighting."""
def mult(impl=3):
        if impl == 0:
Xb = Xt.view()
Xb.shape = (K, M*N)
XCb = Xb.T @ C # in MN x F
XCb = XCb.T.reshape((F*M, N))
return (XCb.T @ w).squeeze()
        elif impl == 1:
tmp = np.tensordot(Xt, C, (0,0))
return np.tensordot(tmp, W, ((0,2),(1,0)))
        elif impl == 2:
tmp = np.tensordot(Xt, C, (0,0))
return np.einsum('ijk,ki->j', tmp, W)
        elif impl == 3:
return np.einsum('kmn,fm,kf->n', Xt, W, C)
C = np.random.normal(0,1,(K,F))
W = np.random.normal(0,1,(F,M))
w = W.reshape((F*M, 1))
a = mult(impl=0)
for impl in range(4):
tstart = time.process_time()
for k in range(1000):
b = mult(impl)
print('Execution time (impl={}): {}'.format(impl, time.process_time() - tstart))
np.testing.assert_allclose(a, b)
#test()
```
# GFL classification without weights
* The matrix is singular and thus not invertible; a least-squares workaround is sketched below.
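A minimal sketch of that workaround, not from the original notebook: it assumes the `Xw` matrix of shape `(K*F, N)` and targets `y` of shape `(N, 1)` as built inside the `direct` solver below, and solves the least-squares problem directly instead of forming the (possibly singular) normal equations.
```
import numpy as np

# Least-squares solve of Xw.T @ c = y; well-defined even when Xw @ Xw.T is singular.
c_ls, residuals, rank, sv = np.linalg.lstsq(Xw.T, y, rcond=None)
print('rank of Xw.T: {} out of {}'.format(rank, Xw.T.shape[1]))
```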
```
class gflc_noweights:
def __init__(s, F, K, niter, algo='direct'):
"""Model hyper-parameters"""
s.F = F
s.K = K
s.niter = niter
        if algo == 'direct':
            s.fit = s.direct
        elif algo == 'sgd':
            s.fit = s.sgd
def L(s, Xt, y):
#tmp = np.einsum('kmn,kf,fm->n', Xt, s.C, np.ones((s.F,M))) - y.squeeze()
#tmp = np.einsum('kmn,kf->mnf', Xt, s.C).sum((0,2)) - y.squeeze()
#tmp = (C.T @ Xt.reshape((K,M*N))).reshape((F,M,N)).sum((0,2)) - y.squeeze()
tmp = np.tensordot(s.C, Xt, (0,0)).sum((0,1)) - y.squeeze()
return np.linalg.norm(tmp)**2
def dLc(s, Xt, y):
tmp = np.tensordot(s.C, Xt, (0,0)).sum(axis=(0,1)) - y.squeeze()
return np.dot(Xt, tmp).sum(1)[:,np.newaxis].repeat(s.F,1)
#return np.einsum('kmn,n->km', Xt, tmp).sum(1)[:,np.newaxis].repeat(s.F,1)
def sgd(s, X, y):
Xt, q = lanczos_basis_eval(L, X, s.K)
s.C = np.random.normal(0, 1, (s.K, s.F))
s.loss = [s.L(Xt, y)]
for t in range(s.niter):
s.C -= 1e-13 * s.dLc(Xt, y)
s.loss.append(s.L(Xt, y))
return (Xt,)
def direct(s, X, y):
M, N = X.shape
Xt, q = lanczos_basis_eval(L, X, s.K)
s.C = np.random.normal(0, 1, (s.K, s.F))
W = np.ones((s.F, M))
c = s.C.reshape((s.K*s.F, 1))
s.loss = [s.L(Xt, y)]
Xw = np.einsum('kmn,fm->kfn', Xt, W)
#Xw = np.tensordot(Xt, W, (1,1))
Xw.shape = (s.K*s.F, N)
#np.linalg.inv(Xw @ Xw.T)
c[:] = np.linalg.solve(Xw @ Xw.T, Xw @ y)
s.loss.append(s.L(Xt, y))
return (Xt,)
#test_optim(gflc_noweights(1, 4, 100, 'sgd'), X, y)
#test_optim(gflc_noweights(1, 4, 0, 'direct'), X, y)
```
# GFL classification with weights
```
class gflc_weights():
def __init__(s, F, K, tauR, niter, algo='direct'):
"""Model hyper-parameters"""
s.F = F
s.K = K
s.tauR = tauR
s.niter = niter
        if algo == 'direct':
            s.fit = s.direct
        elif algo == 'sgd':
            s.fit = s.sgd
def L(s, Xt, y):
tmp = np.einsum('kmn,kf,fm->n', Xt, s.C, s.W) - y.squeeze()
return np.linalg.norm(tmp)**2 + s.tauR * np.linalg.norm(s.W)**2
def dLw(s, Xt, y):
tmp = np.einsum('kmn,kf,fm->n', Xt, s.C, s.W) - y.squeeze()
return 2 * np.einsum('kmn,kf,n->fm', Xt, s.C, tmp) + 2 * s.tauR * s.W
def dLc(s, Xt, y):
tmp = np.einsum('kmn,kf,fm->n', Xt, s.C, s.W) - y.squeeze()
return 2 * np.einsum('kmn,n,fm->kf', Xt, tmp, s.W)
def sgd(s, X, y):
M, N = X.shape
Xt, q = lanczos_basis_eval(L, X, s.K)
s.C = np.random.normal(0, 1, (s.K, s.F))
s.W = np.random.normal(0, 1, (s.F, M))
s.loss = [s.L(Xt, y)]
for t in range(s.niter):
s.C -= 1e-12 * s.dLc(Xt, y)
s.W -= 1e-12 * s.dLw(Xt, y)
s.loss.append(s.L(Xt, y))
return (Xt,)
def direct(s, X, y):
M, N = X.shape
Xt, q = lanczos_basis_eval(L, X, s.K)
s.C = np.random.normal(0, 1, (s.K, s.F))
s.W = np.random.normal(0, 1, (s.F, M))
#c = s.C.reshape((s.K*s.F, 1))
#w = s.W.reshape((s.F*M, 1))
c = s.C.view()
c.shape = (s.K*s.F, 1)
w = s.W.view()
w.shape = (s.F*M, 1)
s.loss = [s.L(Xt, y)]
for t in range(s.niter):
Xw = np.einsum('kmn,fm->kfn', Xt, s.W)
#Xw = np.tensordot(Xt, s.W, (1,1))
Xw.shape = (s.K*s.F, N)
c[:] = np.linalg.solve(Xw @ Xw.T, Xw @ y)
Z = np.einsum('kmn,kf->fmn', Xt, s.C)
#Z = np.tensordot(Xt, s.C, (0,0))
#Z = s.C.T @ Xt.reshape((K,M*N))
Z.shape = (s.F*M, N)
w[:] = np.linalg.solve(Z @ Z.T + s.tauR * np.identity(s.F*M), Z @ y)
s.loss.append(s.L(Xt, y))
return (Xt,)
def predict(s, X):
Xt, q = lanczos_basis_eval(L, X, s.K)
return np.einsum('kmn,kf,fm->n', Xt, s.C, s.W)
#test_optim(gflc_weights(3, 4, 1e-3, 50, 'sgd'), X, y)
clf_weights = gflc_weights(F=3, K=50, tauR=1e4, niter=5, algo='direct')
test_optim(clf_weights, X, y)
```
# GFL classification with splitting
Solvers
* Closed-form solution.
* Stochastic gradient descent.
```
class gflc_split():
def __init__(s, F, K, tauR, tauF, niter, algo='direct'):
"""Model hyper-parameters"""
s.F = F
s.K = K
s.tauR = tauR
s.tauF = tauF
s.niter = niter
        if algo == 'direct':
            s.fit = s.direct
        elif algo == 'sgd':
            s.fit = s.sgd
def L(s, Xt, XCb, Z, y):
return np.linalg.norm(XCb.T @ s.w - y)**2 + s.tauR * np.linalg.norm(s.w)**2
def Lsplit(s, Xt, XCb, Z, y):
return np.linalg.norm(Z.T @ s.w - y)**2 + s.tauF * np.linalg.norm(XCb - Z)**2 + s.tauR * np.linalg.norm(s.w)**2
def dLw(s, Xt, XCb, Z, y):
return 2 * Z @ (Z.T @ s.w - y) + 2 * s.tauR * s.w
def dLc(s, Xt, XCb, Z, y):
Xb = Xt.reshape((s.K, -1)).T
Zb = Z.reshape((s.F, -1)).T
return 2 * s.tauF * Xb.T @ (Xb @ s.C - Zb)
def dLz(s, Xt, XCb, Z, y):
return 2 * s.w @ (s.w.T @ Z - y.T) + 2 * s.tauF * (Z - XCb)
def lanczos_filter(s, Xt):
M, N = Xt.shape[1:]
Xb = Xt.reshape((s.K, M*N)).T
#XCb = np.tensordot(Xb, C, (2,1))
XCb = Xb @ s.C # in MN x F
XCb = XCb.T.reshape((s.F*M, N)) # Needs to copy data.
return XCb
def sgd(s, X, y):
M, N = X.shape
Xt, q = lanczos_basis_eval(L, X, s.K)
s.C = np.zeros((s.K, s.F))
s.w = np.zeros((s.F*M, 1))
Z = np.random.normal(0, 1, (s.F*M, N))
XCb = np.empty((s.F*M, N))
s.loss = [s.L(Xt, XCb, Z, y)]
s.loss_split = [s.Lsplit(Xt, XCb, Z, y)]
for t in range(s.niter):
s.C -= 1e-7 * s.dLc(Xt, XCb, Z, y)
XCb[:] = s.lanczos_filter(Xt)
Z -= 1e-4 * s.dLz(Xt, XCb, Z, y)
s.w -= 1e-4 * s.dLw(Xt, XCb, Z, y)
s.loss.append(s.L(Xt, XCb, Z, y))
s.loss_split.append(s.Lsplit(Xt, XCb, Z, y))
return Xt, XCb, Z
def direct(s, X, y):
M, N = X.shape
Xt, q = lanczos_basis_eval(L, X, s.K)
s.C = np.zeros((s.K, s.F))
s.w = np.zeros((s.F*M, 1))
Z = np.random.normal(0, 1, (s.F*M, N))
XCb = np.empty((s.F*M, N))
Xb = Xt.reshape((s.K, M*N)).T
Zb = Z.reshape((s.F, M*N)).T
s.loss = [s.L(Xt, XCb, Z, y)]
s.loss_split = [s.Lsplit(Xt, XCb, Z, y)]
for t in range(s.niter):
s.C[:] = Xb.T @ Zb / np.sum((np.linalg.norm(X, axis=0) * q)**2, axis=1)[:,np.newaxis]
XCb[:] = s.lanczos_filter(Xt)
#Z[:] = np.linalg.inv(s.tauF * np.identity(s.F*M) + s.w @ s.w.T) @ (s.tauF * XCb + s.w @ y.T)
Z[:] = np.linalg.solve(s.tauF * np.identity(s.F*M) + s.w @ s.w.T, s.tauF * XCb + s.w @ y.T)
#s.w[:] = np.linalg.inv(Z @ Z.T + s.tauR * np.identity(s.F*M)) @ Z @ y
s.w[:] = np.linalg.solve(Z @ Z.T + s.tauR * np.identity(s.F*M), Z @ y)
s.loss.append(s.L(Xt, XCb, Z, y))
s.loss_split.append(s.Lsplit(Xt, XCb, Z, y))
return Xt, XCb, Z
def predict(s, X):
Xt, q = lanczos_basis_eval(L, X, s.K)
XCb = s.lanczos_filter(Xt)
return XCb.T @ s.w
#test_optim(gflc_split(3, 4, 1e-3, 1e-3, 50, 'sgd'), X, y)
clf_split = gflc_split(3, 4, 1e4, 1e-3, 8, 'direct')
test_optim(clf_split, X, y)
```
# Filters visualization
Observations:
* Filters learned with the splitting scheme have much smaller amplitudes.
* Maybe the energy sometimes goes into W?
* Why are the filters so different?
```
lamb, U = graph.fourier(L)
print('Spectrum in [{:1.2e}, {:1.2e}]'.format(lamb[0], lamb[-1]))
def plot_filters(C, spectrum=False):
K, F = C.shape
M, M = L.shape
m = int(np.sqrt(M))
X = np.zeros((M,1))
X[int(m/2*(m+1))] = 1 # Kronecker
Xt, q = lanczos_basis_eval(L, X, K)
Z = np.einsum('kmn,kf->mnf', Xt, C)
Xh = U.T @ X
Zh = np.tensordot(U.T, Z, (1,0))
pmin = int(m/2) - K
pmax = int(m/2) + K + 1
fig, axes = plt.subplots(2,int(np.ceil(F/2)), figsize=(15,5))
for f in range(F):
img = Z[:,0,f].reshape((m,m))[pmin:pmax,pmin:pmax]
im = axes.flat[f].imshow(img, vmin=Z.min(), vmax=Z.max(), interpolation='none')
axes.flat[f].set_title('Filter {}'.format(f))
fig.subplots_adjust(right=0.8)
cax = fig.add_axes([0.82, 0.16, 0.02, 0.7])
fig.colorbar(im, cax=cax)
if spectrum:
ax = plt.figure(figsize=(15,5)).add_subplot(111)
for f in range(F):
ax.plot(lamb, Zh[...,f] / Xh, '.-', label='Filter {}'.format(f))
ax.legend(loc='best')
ax.set_title('Spectrum of learned filters')
ax.set_xlabel('Frequency')
ax.set_ylabel('Amplitude')
ax.set_xlim(0, lmax)
plot_filters(clf_weights.C, True)
plot_filters(clf_split.C, True)
```
# Extracted features
```
def plot_features(C, x):
K, F = C.shape
m = int(np.sqrt(x.shape[0]))
xt, q = lanczos_basis_eval(L, x, K)
Z = np.einsum('kmn,kf->mnf', xt, C)
fig, axes = plt.subplots(2,int(np.ceil(F/2)), figsize=(15,5))
for f in range(F):
img = Z[:,0,f].reshape((m,m))
#im = axes.flat[f].imshow(img, vmin=Z.min(), vmax=Z.max(), interpolation='none')
im = axes.flat[f].imshow(img, interpolation='none')
axes.flat[f].set_title('Filter {}'.format(f))
fig.subplots_adjust(right=0.8)
cax = fig.add_axes([0.82, 0.16, 0.02, 0.7])
fig.colorbar(im, cax=cax)
plot_features(clf_weights.C, X[:,[0]])
plot_features(clf_weights.C, X[:,[1000]])
```
# Performance w.r.t. hyper-parameters
* F plays a big role.
* Both for performance and training time.
* Larger values lead to over-fitting!
* Order $K \in [3,5]$ seems sufficient.
* $\tau_R$ does not have much influence.
```
def scorer(clf, X, y):
yest = clf.predict(X).round().squeeze()
y = y.squeeze()
yy = np.ones(len(y))
yy[yest < 0] = -1
nerrs = np.count_nonzero(y - yy)
return 1 - nerrs / len(y)
def perf(clf, nfolds=3):
"""Test training accuracy."""
N = X.shape[1]
inds = np.arange(N)
np.random.shuffle(inds)
inds.resize((nfolds, int(N/nfolds)))
folds = np.arange(nfolds)
test = inds[0,:]
train = inds[folds != 0, :].reshape(-1)
fig, axes = plt.subplots(1,3, figsize=(15,5))
test_optim(clf, X[:,train], y[train], axes[2])
axes[0].plot(train, clf.predict(X[:,train]), '.')
axes[0].plot(train, y[train].squeeze(), '.')
axes[0].set_ylim([-3,3])
axes[0].set_title('Training set accuracy: {:.2f}'.format(scorer(clf, X[:,train], y[train])))
axes[1].plot(test, clf.predict(X[:,test]), '.')
axes[1].plot(test, y[test].squeeze(), '.')
axes[1].set_ylim([-3,3])
axes[1].set_title('Testing set accuracy: {:.2f}'.format(scorer(clf, X[:,test], y[test])))
if hasattr(clf, 'C'):
plot_filters(clf.C)
perf(rls(tauR=1e6))
for F in [1,3,5]:
perf(gflc_weights(F=F, K=50, tauR=1e4, niter=5, algo='direct'))
#perf(rls(tauR=1e-3))
#for K in [2,3,5,7]:
# perf(gflc_weights(F=3, K=K, tauR=1e-3, niter=5, algo='direct'))
#for tauR in [1e-3, 1e-1, 1e1]:
# perf(rls(tauR=tauR))
# perf(gflc_weights(F=3, K=3, tauR=tauR, niter=5, algo='direct'))
```
# Classification
* The greater $F$ is, the greater $K$ should be.
```
def cross_validation(clf, nfolds, nvalidations):
M, N = X.shape
scores = np.empty((nvalidations, nfolds))
for nval in range(nvalidations):
inds = np.arange(N)
np.random.shuffle(inds)
inds.resize((nfolds, int(N/nfolds)))
folds = np.arange(nfolds)
for n in folds:
test = inds[n,:]
train = inds[folds != n, :].reshape(-1)
clf.fit(X[:,train], y[train])
scores[nval, n] = scorer(clf, X[:,test], y[test])
return scores.mean()*100, scores.std()*100
#print('Accuracy: {:.2f} +- {:.2f}'.format(scores.mean()*100, scores.std()*100))
#print(scores)
def test_classification(clf, params, param, values, nfolds=10, nvalidations=1):
means = []
stds = []
fig, ax = plt.subplots(1,1, figsize=(15,5))
for i,val in enumerate(values):
params[param] = val
mean, std = cross_validation(clf(**params), nfolds, nvalidations)
means.append(mean)
stds.append(std)
ax.annotate('{:.2f} +- {:.2f}'.format(mean,std), xy=(i,mean), xytext=(10,10), textcoords='offset points')
ax.errorbar(np.arange(len(values)), means, stds, fmt='.', markersize=10)
ax.set_xlim(-.8, len(values)-.2)
ax.set_xticks(np.arange(len(values)))
ax.set_xticklabels(values)
ax.set_xlabel(param)
ax.set_ylim(50, 100)
ax.set_ylabel('Accuracy')
ax.set_title('Parameters: {}'.format(params))
test_classification(rls, {}, 'tauR', [1e8,1e7,1e6,1e5,1e4,1e3,1e-5,1e-8], 10, 10)
params = {'F':1, 'K':2, 'tauR':1e3, 'niter':5, 'algo':'direct'}
test_classification(gflc_weights, params, 'tauR', [1e8,1e6,1e5,1e4,1e3,1e2,1e-3,1e-8], 10, 10)
params = {'F':2, 'K':10, 'tauR':1e4, 'niter':5, 'algo':'direct'}
test_classification(gflc_weights, params, 'F', [1,2,3,5])
params = {'F':2, 'K':4, 'tauR':1e4, 'niter':5, 'algo':'direct'}
test_classification(gflc_weights, params, 'K', [2,3,4,5,8,10,20,30,50,70])
```
# Sampled MNIST
```
Xfull = X
def sample(X, p, seed=None):
M, N = X.shape
z = graph.grid(int(np.sqrt(M)))
# Select random pixels.
np.random.seed(seed)
mask = np.arange(M)
np.random.shuffle(mask)
mask = mask[:int(p*M)]
return z[mask,:], X[mask,:]
X = Xfull
z, X = sample(X, .5)
dist, idx = graph.distance_sklearn_metrics(z, k=4)
A = graph.adjacency(dist, idx)
L = graph.laplacian(A)
lmax = graph.lmax(L)
lamb, U = graph.fourier(L)
print('Spectrum in [{:1.2e}, {:1.2e}]'.format(lamb[0], lamb[-1]))
print(L.shape)
def plot(n):
M, N = X.shape
m = int(np.sqrt(M))
x = X[:,n]
#print(x+127.5)
plt.scatter(z[:,0], -z[:,1], s=20, c=x+127.5)
plot(10)
def plot_digit(nn):
M, N = X.shape
m = int(np.sqrt(M))
fig, axes = plt.subplots(1,len(nn), figsize=(15,5))
for i, n in enumerate(nn):
n = int(n)
img = X[:,n]
axes[i].imshow(img.reshape((m,m)))
axes[i].set_title('Label: y = {:.0f}'.format(y[n,0]))
#plot_digit([0, 1, 1e2, 1e2+1, 1e3, 1e3+1])
#clf_weights = gflc_weights(F=3, K=4, tauR=1e-3, niter=5, algo='direct')
#test_optim(clf_weights, X, y)
#plot_filters(clf_weights.C, True)
#test_classification(rls, {}, 'tauR', [1e1,1e0])
#params = {'F':2, 'K':5, 'tauR':1e-3, 'niter':5, 'algo':'direct'}
#test_classification(gflc_weights, params, 'F', [1,2,3])
test_classification(rls, {}, 'tauR', [1e8,1e7,1e6,1e5,1e4,1e3,1e-5,1e-8], 10, 10)
params = {'F':2, 'K':2, 'tauR':1e3, 'niter':5, 'algo':'direct'}
test_classification(gflc_weights, params, 'tauR', [1e8,1e5,1e4,1e3,1e2,1e1,1e-3,1e-8], 10, 1)
params = {'F':2, 'K':10, 'tauR':1e5, 'niter':5, 'algo':'direct'}
test_classification(gflc_weights, params, 'F', [1,2,3,4,5,10])
params = {'F':2, 'K':4, 'tauR':1e5, 'niter':5, 'algo':'direct'}
test_classification(gflc_weights, params, 'K', [2,3,4,5,6,7,8,10,20,30])
```
| github_jupyter |
## Dependencies
```
import os
import sys
import cv2
import shutil
import random
import warnings
import numpy as np
import pandas as pd
import seaborn as sns
import multiprocessing as mp
import matplotlib.pyplot as plt
from tensorflow import set_random_seed
from sklearn.utils import class_weight
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, cohen_kappa_score
from keras import backend as K
from keras.models import Model
from keras.utils import to_categorical
from keras import optimizers, applications
from keras.preprocessing.image import ImageDataGenerator
from keras.layers import Dense, Dropout, GlobalAveragePooling2D, Input
from keras.callbacks import EarlyStopping, ReduceLROnPlateau, Callback, LearningRateScheduler, ModelCheckpoint
def seed_everything(seed=0):
random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
    set_random_seed(seed)
seed = 0
seed_everything(seed)
%matplotlib inline
sns.set(style="whitegrid")
warnings.filterwarnings("ignore")
sys.path.append(os.path.abspath('../input/efficientnet/efficientnet-master/efficientnet-master/'))
from efficientnet import *
```
## Load data
```
fold_set = pd.read_csv('../input/aptos-split-oldnew/5-fold.csv')
X_train = fold_set[fold_set['fold_2'] == 'train']
X_val = fold_set[fold_set['fold_2'] == 'validation']
test = pd.read_csv('../input/aptos2019-blindness-detection/test.csv')
# Preprocess data
test["id_code"] = test["id_code"].apply(lambda x: x + ".png")
print('Number of train samples: ', X_train.shape[0])
print('Number of validation samples: ', X_val.shape[0])
print('Number of test samples: ', test.shape[0])
display(X_train.head())
```
# Model parameters
```
# Model parameters
model_path = '../working/effNetB4_img256_noBen_fold3.h5'
FACTOR = 4
BATCH_SIZE = 8 * FACTOR
EPOCHS = 20
WARMUP_EPOCHS = 5
LEARNING_RATE = 1e-3/2 * FACTOR
WARMUP_LEARNING_RATE = 1e-3/2 * FACTOR
HEIGHT = 256
WIDTH = 256
CHANNELS = 3
TTA_STEPS = 5
ES_PATIENCE = 5
LR_WARMUP_EPOCHS = 5
STEP_SIZE = len(X_train) // BATCH_SIZE
TOTAL_STEPS = EPOCHS * STEP_SIZE
WARMUP_STEPS = LR_WARMUP_EPOCHS * STEP_SIZE
```
# Pre-procecess images
```
old_data_base_path = '../input/diabetic-retinopathy-resized/resized_train/resized_train/'
new_data_base_path = '../input/aptos2019-blindness-detection/train_images/'
test_base_path = '../input/aptos2019-blindness-detection/test_images/'
train_dest_path = 'base_dir/train_images/'
validation_dest_path = 'base_dir/validation_images/'
test_dest_path = 'base_dir/test_images/'
# Making sure directories don't exist
if os.path.exists(train_dest_path):
shutil.rmtree(train_dest_path)
if os.path.exists(validation_dest_path):
shutil.rmtree(validation_dest_path)
if os.path.exists(test_dest_path):
shutil.rmtree(test_dest_path)
# Creating train, validation and test directories
os.makedirs(train_dest_path)
os.makedirs(validation_dest_path)
os.makedirs(test_dest_path)
def crop_image(img, tol=7):
if img.ndim ==2:
mask = img>tol
return img[np.ix_(mask.any(1),mask.any(0))]
elif img.ndim==3:
gray_img = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
mask = gray_img>tol
check_shape = img[:,:,0][np.ix_(mask.any(1),mask.any(0))].shape[0]
if (check_shape == 0): # image is too dark so that we crop out everything,
return img # return original image
else:
img1=img[:,:,0][np.ix_(mask.any(1),mask.any(0))]
img2=img[:,:,1][np.ix_(mask.any(1),mask.any(0))]
img3=img[:,:,2][np.ix_(mask.any(1),mask.any(0))]
img = np.stack([img1,img2,img3],axis=-1)
return img
def circle_crop(img):
img = crop_image(img)
height, width, depth = img.shape
largest_side = np.max((height, width))
img = cv2.resize(img, (largest_side, largest_side))
height, width, depth = img.shape
x = width//2
y = height//2
r = np.amin((x, y))
circle_img = np.zeros((height, width), np.uint8)
cv2.circle(circle_img, (x, y), int(r), 1, thickness=-1)
img = cv2.bitwise_and(img, img, mask=circle_img)
img = crop_image(img)
return img
def preprocess_image(image_id, base_path, save_path, HEIGHT=HEIGHT, WIDTH=WIDTH, sigmaX=10):
image = cv2.imread(base_path + image_id)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
image = circle_crop(image)
image = cv2.resize(image, (HEIGHT, WIDTH))
# image = cv2.addWeighted(image, 4, cv2.GaussianBlur(image, (0,0), sigmaX), -4 , 128)
cv2.imwrite(save_path + image_id, image)
def preprocess_data(df, HEIGHT=HEIGHT, WIDTH=WIDTH, sigmaX=10):
df = df.reset_index()
for i in range(df.shape[0]):
item = df.iloc[i]
image_id = item['id_code']
item_set = item['fold_2']
item_data = item['data']
if item_set == 'train':
if item_data == 'new':
preprocess_image(image_id, new_data_base_path, train_dest_path)
if item_data == 'old':
preprocess_image(image_id, old_data_base_path, train_dest_path)
if item_set == 'validation':
if item_data == 'new':
preprocess_image(image_id, new_data_base_path, validation_dest_path)
if item_data == 'old':
preprocess_image(image_id, old_data_base_path, validation_dest_path)
def preprocess_test(df, base_path=test_base_path, save_path=test_dest_path, HEIGHT=HEIGHT, WIDTH=WIDTH, sigmaX=10):
df = df.reset_index()
for i in range(df.shape[0]):
image_id = df.iloc[i]['id_code']
preprocess_image(image_id, base_path, save_path)
n_cpu = mp.cpu_count()
train_n_cnt = X_train.shape[0] // n_cpu
val_n_cnt = X_val.shape[0] // n_cpu
test_n_cnt = test.shape[0] // n_cpu
# Pre-process old data train set
pool = mp.Pool(n_cpu)
dfs = [X_train.iloc[train_n_cnt*i:train_n_cnt*(i+1)] for i in range(n_cpu)]
dfs[-1] = X_train.iloc[train_n_cnt*(n_cpu-1):]
res = pool.map(preprocess_data, [x_df for x_df in dfs])
pool.close()
# Pre-process validation set
pool = mp.Pool(n_cpu)
dfs = [X_val.iloc[val_n_cnt*i:val_n_cnt*(i+1)] for i in range(n_cpu)]
dfs[-1] = X_val.iloc[val_n_cnt*(n_cpu-1):]
res = pool.map(preprocess_data, [x_df for x_df in dfs])
pool.close()
# Pre-process test set
pool = mp.Pool(n_cpu)
dfs = [test.iloc[test_n_cnt*i:test_n_cnt*(i+1)] for i in range(n_cpu)]
dfs[-1] = test.iloc[test_n_cnt*(n_cpu-1):]
res = pool.map(preprocess_test, [x_df for x_df in dfs])
pool.close()
```
# Data generator
```
datagen=ImageDataGenerator(rescale=1./255,
rotation_range=360,
horizontal_flip=True,
vertical_flip=True)
train_generator=datagen.flow_from_dataframe(
dataframe=X_train,
directory=train_dest_path,
x_col="id_code",
y_col="diagnosis",
class_mode="raw",
batch_size=BATCH_SIZE,
target_size=(HEIGHT, WIDTH),
seed=seed)
valid_generator=datagen.flow_from_dataframe(
dataframe=X_val,
directory=validation_dest_path,
x_col="id_code",
y_col="diagnosis",
class_mode="raw",
batch_size=BATCH_SIZE,
target_size=(HEIGHT, WIDTH),
seed=seed)
test_generator=datagen.flow_from_dataframe(
dataframe=test,
directory=test_dest_path,
x_col="id_code",
batch_size=1,
class_mode=None,
shuffle=False,
target_size=(HEIGHT, WIDTH),
seed=seed)
def classify(x):
if x < 0.5:
return 0
elif x < 1.5:
return 1
elif x < 2.5:
return 2
elif x < 3.5:
return 3
return 4
labels = ['0 - No DR', '1 - Mild', '2 - Moderate', '3 - Severe', '4 - Proliferative DR']
def plot_confusion_matrix(train, validation, labels=labels):
train_labels, train_preds = train
validation_labels, validation_preds = validation
fig, (ax1, ax2) = plt.subplots(1, 2, sharex='col', figsize=(24, 7))
train_cnf_matrix = confusion_matrix(train_labels, train_preds)
validation_cnf_matrix = confusion_matrix(validation_labels, validation_preds)
train_cnf_matrix_norm = train_cnf_matrix.astype('float') / train_cnf_matrix.sum(axis=1)[:, np.newaxis]
validation_cnf_matrix_norm = validation_cnf_matrix.astype('float') / validation_cnf_matrix.sum(axis=1)[:, np.newaxis]
train_df_cm = pd.DataFrame(train_cnf_matrix_norm, index=labels, columns=labels)
validation_df_cm = pd.DataFrame(validation_cnf_matrix_norm, index=labels, columns=labels)
sns.heatmap(train_df_cm, annot=True, fmt='.2f', cmap="Blues",ax=ax1).set_title('Train')
sns.heatmap(validation_df_cm, annot=True, fmt='.2f', cmap=sns.cubehelix_palette(8),ax=ax2).set_title('Validation')
plt.show()
def plot_metrics(history, figsize=(20, 14)):
fig, (ax1, ax2) = plt.subplots(2, 1, sharex='col', figsize=figsize)
ax1.plot(history['loss'], label='Train loss')
ax1.plot(history['val_loss'], label='Validation loss')
ax1.legend(loc='best')
ax1.set_title('Loss')
ax2.plot(history['acc'], label='Train accuracy')
ax2.plot(history['val_acc'], label='Validation accuracy')
ax2.legend(loc='best')
ax2.set_title('Accuracy')
plt.xlabel('Epochs')
sns.despine()
plt.show()
def apply_tta(model, generator, steps=10):
step_size = generator.n//generator.batch_size
preds_tta = []
for i in range(steps):
generator.reset()
preds = model.predict_generator(generator, steps=step_size)
preds_tta.append(preds)
return np.mean(preds_tta, axis=0)
def evaluate_model(train, validation):
train_labels, train_preds = train
validation_labels, validation_preds = validation
print("Train Cohen Kappa score: %.3f" % cohen_kappa_score(train_preds, train_labels, weights='quadratic'))
print("Validation Cohen Kappa score: %.3f" % cohen_kappa_score(validation_preds, validation_labels, weights='quadratic'))
print("Complete set Cohen Kappa score: %.3f" % cohen_kappa_score(np.append(train_preds, validation_preds), np.append(train_labels, validation_labels), weights='quadratic'))
def cosine_decay_with_warmup(global_step,
learning_rate_base,
total_steps,
warmup_learning_rate=0.0,
warmup_steps=0,
hold_base_rate_steps=0):
"""
Cosine decay schedule with warm up period.
In this schedule, the learning rate grows linearly from warmup_learning_rate
to learning_rate_base for warmup_steps, then transitions to a cosine decay
schedule.
:param global_step {int}: global step.
:param learning_rate_base {float}: base learning rate.
:param total_steps {int}: total number of training steps.
:param warmup_learning_rate {float}: initial learning rate for warm up. (default: {0.0}).
:param warmup_steps {int}: number of warmup steps. (default: {0}).
:param hold_base_rate_steps {int}: Optional number of steps to hold base learning rate before decaying. (default: {0}).
    :Returns: a float representing the learning rate.
:Raises ValueError: if warmup_learning_rate is larger than learning_rate_base, or if warmup_steps is larger than total_steps.
"""
if total_steps < warmup_steps:
raise ValueError('total_steps must be larger or equal to warmup_steps.')
learning_rate = 0.5 * learning_rate_base * (1 + np.cos(
np.pi *
(global_step - warmup_steps - hold_base_rate_steps
) / float(total_steps - warmup_steps - hold_base_rate_steps)))
if hold_base_rate_steps > 0:
learning_rate = np.where(global_step > warmup_steps + hold_base_rate_steps,
learning_rate, learning_rate_base)
if warmup_steps > 0:
if learning_rate_base < warmup_learning_rate:
raise ValueError('learning_rate_base must be larger or equal to warmup_learning_rate.')
slope = (learning_rate_base - warmup_learning_rate) / warmup_steps
warmup_rate = slope * global_step + warmup_learning_rate
learning_rate = np.where(global_step < warmup_steps, warmup_rate,
learning_rate)
return np.where(global_step > total_steps, 0.0, learning_rate)
class WarmUpCosineDecayScheduler(Callback):
"""Cosine decay with warmup learning rate scheduler"""
def __init__(self,
learning_rate_base,
total_steps,
global_step_init=0,
warmup_learning_rate=0.0,
warmup_steps=0,
hold_base_rate_steps=0,
verbose=0):
"""
Constructor for cosine decay with warmup learning rate scheduler.
:param learning_rate_base {float}: base learning rate.
:param total_steps {int}: total number of training steps.
:param global_step_init {int}: initial global step, e.g. from previous checkpoint.
:param warmup_learning_rate {float}: initial learning rate for warm up. (default: {0.0}).
:param warmup_steps {int}: number of warmup steps. (default: {0}).
:param hold_base_rate_steps {int}: Optional number of steps to hold base learning rate before decaying. (default: {0}).
        :param verbose {int}: 0: quiet, 1: update messages. (default: {0}).
"""
super(WarmUpCosineDecayScheduler, self).__init__()
self.learning_rate_base = learning_rate_base
self.total_steps = total_steps
self.global_step = global_step_init
self.warmup_learning_rate = warmup_learning_rate
self.warmup_steps = warmup_steps
self.hold_base_rate_steps = hold_base_rate_steps
self.verbose = verbose
self.learning_rates = []
def on_batch_end(self, batch, logs=None):
self.global_step = self.global_step + 1
lr = K.get_value(self.model.optimizer.lr)
self.learning_rates.append(lr)
def on_batch_begin(self, batch, logs=None):
lr = cosine_decay_with_warmup(global_step=self.global_step,
learning_rate_base=self.learning_rate_base,
total_steps=self.total_steps,
warmup_learning_rate=self.warmup_learning_rate,
warmup_steps=self.warmup_steps,
hold_base_rate_steps=self.hold_base_rate_steps)
K.set_value(self.model.optimizer.lr, lr)
if self.verbose > 0:
print('\nBatch %02d: setting learning rate to %s.' % (self.global_step + 1, lr))
```
# Model
```
def create_model(input_shape):
input_tensor = Input(shape=input_shape)
base_model = EfficientNetB4(weights=None,
include_top=False,
input_tensor=input_tensor)
base_model.load_weights('../input/efficientnet-keras-weights-b0b5/efficientnet-b4_imagenet_1000_notop.h5')
x = GlobalAveragePooling2D()(base_model.output)
final_output = Dense(1, activation='linear', name='final_output')(x)
model = Model(input_tensor, final_output)
return model
```
# Train top layers
```
model = create_model(input_shape=(HEIGHT, WIDTH, CHANNELS))
for layer in model.layers:
layer.trainable = False
for i in range(-2, 0):
model.layers[i].trainable = True
metric_list = ["accuracy"]
optimizer = optimizers.Adam(lr=WARMUP_LEARNING_RATE)
model.compile(optimizer=optimizer, loss='mean_squared_error', metrics=metric_list)
model.summary()
STEP_SIZE_TRAIN = train_generator.n//train_generator.batch_size
STEP_SIZE_VALID = valid_generator.n//valid_generator.batch_size
history_warmup = model.fit_generator(generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=valid_generator,
validation_steps=STEP_SIZE_VALID,
epochs=WARMUP_EPOCHS,
verbose=2).history
```
# Fine-tune the model
```
for layer in model.layers:
layer.trainable = True
checkpoint = ModelCheckpoint(model_path, monitor='val_loss', mode='min', save_best_only=True, save_weights_only=True)
es = EarlyStopping(monitor='val_loss', mode='min', patience=ES_PATIENCE, restore_best_weights=True, verbose=1)
cosine_lr = WarmUpCosineDecayScheduler(learning_rate_base=LEARNING_RATE,
total_steps=TOTAL_STEPS,
warmup_learning_rate=0.0,
warmup_steps=WARMUP_STEPS,
hold_base_rate_steps=(3 * STEP_SIZE))
callback_list = [checkpoint, es, cosine_lr]
optimizer = optimizers.Adam(lr=LEARNING_RATE)
model.compile(optimizer=optimizer, loss='mean_squared_error', metrics=metric_list)
model.summary()
history = model.fit_generator(generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=valid_generator,
validation_steps=STEP_SIZE_VALID,
epochs=EPOCHS,
callbacks=callback_list,
verbose=2).history
fig, ax = plt.subplots(1, 1, sharex='col', figsize=(20, 4))
ax.plot(cosine_lr.learning_rates)
ax.set_title('Fine-tune learning rates')
plt.xlabel('Steps')
plt.ylabel('Learning rate')
sns.despine()
plt.show()
```
# Model loss graph
```
plot_metrics(history)
# Create an empty dataframe to keep the predictions and labels
df_preds = pd.DataFrame(columns=['label', 'pred', 'set'])
train_generator.reset()
valid_generator.reset()
# Add train predictions and labels
for i in range(STEP_SIZE_TRAIN + 1):
im, lbl = next(train_generator)
preds = model.predict(im, batch_size=train_generator.batch_size)
for index in range(len(preds)):
df_preds.loc[len(df_preds)] = [lbl[index], preds[index][0], 'train']
# Add validation predictions and labels
for i in range(STEP_SIZE_VALID + 1):
im, lbl = next(valid_generator)
preds = model.predict(im, batch_size=valid_generator.batch_size)
for index in range(len(preds)):
df_preds.loc[len(df_preds)] = [lbl[index], preds[index][0], 'validation']
df_preds['label'] = df_preds['label'].astype('int')
# Classify predictions
df_preds['predictions'] = df_preds['pred'].apply(lambda x: classify(x))
train_preds = df_preds[df_preds['set'] == 'train']
validation_preds = df_preds[df_preds['set'] == 'validation']
```
# Model Evaluation
## Confusion Matrix
### Original thresholds
```
plot_confusion_matrix((train_preds['label'], train_preds['predictions']), (validation_preds['label'], validation_preds['predictions']))
```
## Quadratic Weighted Kappa
```
evaluate_model((train_preds['label'], train_preds['predictions']), (validation_preds['label'], validation_preds['predictions']))
```
## Apply model to test set and output predictions
```
preds = apply_tta(model, test_generator, TTA_STEPS)
predictions = [classify(x) for x in preds]
results = pd.DataFrame({'id_code':test['id_code'], 'diagnosis':predictions})
results['id_code'] = results['id_code'].map(lambda x: str(x)[:-4])
# Cleaning created directories
if os.path.exists(train_dest_path):
shutil.rmtree(train_dest_path)
if os.path.exists(validation_dest_path):
shutil.rmtree(validation_dest_path)
if os.path.exists(test_dest_path):
shutil.rmtree(test_dest_path)
```
# Predictions class distribution
```
fig = plt.subplots(sharex='col', figsize=(24, 8.7))
sns.countplot(x="diagnosis", data=results, palette="GnBu_d").set_title('Test')
sns.despine()
plt.show()
results.to_csv('submission.csv', index=False)
display(results.head())
```
| github_jupyter |
# Translate `dzn` to `smt2` for z3
### Check Versions of Tools
```
import os
import subprocess
my_env = os.environ.copy()
output = subprocess.check_output(f'''/home/{my_env['USER']}/optimathsat/bin/optimathsat -version''', shell=True, universal_newlines=True)
output
output = subprocess.check_output(f'''/home/{my_env['USER']}/minizinc/build/minizinc --version''', shell=True, universal_newlines=True)
output
output = subprocess.check_output(f'''/home/{my_env['USER']}/z3/build/z3 --version''', shell=True, universal_newlines=True)
output
```
First generate the FlatZinc files using the MiniZinc tool. Make sure that an `smt2` folder is located inside `./minizinc/share/minizinc/`. Otherwise, to enable OptiMathSAT's support for global constraints, download the [smt2.tar.gz](http://optimathsat.disi.unitn.it/data/smt2.tar.gz) package and unpack it there using
```zsh
tar xf smt2.tar.gz -C $MINIZINC_PATH/share/minizinc/
```
If the next output shows a list of `.mzn` files, then this dependency is satisfied.
```
output = subprocess.check_output(f'''ls -la /home/{my_env['USER']}/minizinc/share/minizinc/smt2/''', shell=True, universal_newlines=True)
print(output)
```
## Transform `dzn` to `fzn` Using a `mzn` Model
Then transform the desired `.dzn` file to `.fzn` using a `Mz.mzn` MiniZinc model.
First list all `dzn` files contained in the `dzn_path` that should get processed.
```
import os
dzn_files = []
dzn_path = f'''/home/{my_env['USER']}/data/dzn/'''
for filename in os.listdir(dzn_path):
if filename.endswith(".dzn"):
dzn_files.append(filename)
len(dzn_files)
```
#### Model $Mz_1$
```
import sys
fzn_path = f'''/home/{my_env['USER']}/data/fzn/smt2/Mz1-noAbs/'''
minizinc_base_cmd = f'''/home/{my_env['USER']}/minizinc/build/minizinc \
-Werror \
--compile --solver org.minizinc.mzn-fzn \
--search-dir /home/{my_env['USER']}/minizinc/share/minizinc/smt2/ \
/home/{my_env['USER']}/models/mzn/Mz1-noAbs.mzn '''
translate_count = 0
for dzn in dzn_files:
translate_count += 1
minizinc_transform_cmd = minizinc_base_cmd + dzn_path + dzn \
+ ' --output-to-file ' + fzn_path + dzn.replace('.', '-') + '.fzn'
print(f'''\r({translate_count}/{len(dzn_files)}) Translating {dzn_path + dzn} to {fzn_path + dzn.replace('.', '-')}.fzn''', end='')
sys.stdout.flush()
subprocess.check_output(minizinc_transform_cmd, shell=True,
universal_newlines=True)
```
#### Model $Mz_2$
```
import sys
fzn_path = f'''/home/{my_env['USER']}/data/fzn/smt2/Mz2-noAbs/'''
minizinc_base_cmd = f'''/home/{my_env['USER']}/minizinc/build/minizinc \
-Werror \
--compile --solver org.minizinc.mzn-fzn \
--search-dir /home/{my_env['USER']}/minizinc/share/minizinc/smt2/ \
/home/{my_env['USER']}/models/mzn/Mz2-noAbs.mzn '''
translate_count = 0
for dzn in dzn_files:
translate_count += 1
minizinc_transform_cmd = minizinc_base_cmd + dzn_path + dzn \
+ ' --output-to-file ' + fzn_path + dzn.replace('.', '-') + '.fzn'
print(f'''\r({translate_count}/{len(dzn_files)}) Translating {dzn_path + dzn} to {fzn_path + dzn.replace('.', '-')}.fzn''', end='')
sys.stdout.flush()
subprocess.check_output(minizinc_transform_cmd, shell=True,
universal_newlines=True)
```
## Translate `fzn` to `smt2`
The generated `.fzn` files can be used to generate a `.smt2` files using the `fzn2smt2.py` script from this [project](https://github.com/PatrickTrentin88/fzn2omt).
**NOTE**: Files `R001` (no cables) and `R002` (one one-sided cable) throw an error while translating.
#### $Mz_1$
```
import os
fzn_files = []
fzn_path = f'''/home/{my_env['USER']}/data/fzn/smt2/Mz1-noAbs/'''
for filename in os.listdir(fzn_path):
if filename.endswith(".fzn"):
fzn_files.append(filename)
len(fzn_files)
smt2_path = f'''/home/{my_env['USER']}/data/smt2/z3/Mz1-noAbs/'''
fzn2smt2_base_cmd = f'''/home/{my_env['USER']}/fzn2omt/bin/fzn2z3.py'''
translate_count = 0
my_env = os.environ.copy()
my_env['PATH'] = f'''/home/{my_env['USER']}/optimathsat/bin/:{my_env['PATH']}'''
my_env['PATH'] = f'''/home/{my_env['USER']}/z3/build/:{my_env['PATH']}'''
for fzn in fzn_files:
translate_count += 1
fzn2smt2_transform_cmd = f'''{fzn2smt2_base_cmd} {fzn_path}{fzn} --smt2 {smt2_path}{fzn.replace('.', '-')}.smt2'''
print(f'''\r({translate_count}/{len(fzn_files)}) Translating {fzn_path + fzn} to {smt2_path + fzn.replace('.', '-')}.smt2''', end='')
try:
output = subprocess.check_output(fzn2smt2_transform_cmd,
shell=True,env=my_env,
universal_newlines=True)
except Exception as e:
output = str(e.output)
print(f'''\r{output}''', end='')
sys.stdout.flush()
```
#### $Mz_2$
```
import os
fzn_files = []
fzn_path = f'''/home/{my_env['USER']}/data/fzn/smt2/Mz2-noAbs/'''
for filename in os.listdir(fzn_path):
if filename.endswith(".fzn"):
fzn_files.append(filename)
len(fzn_files)
smt2_path = f'''/home/{my_env['USER']}/data/smt2/z3/Mz2-noAbs/'''
fzn2smt2_base_cmd = f'''/home/{my_env['USER']}/fzn2omt/bin/fzn2z3.py'''
translate_count = 0
my_env = os.environ.copy()
my_env['PATH'] = f'''/home/{my_env['USER']}/optimathsat/bin/:{my_env['PATH']}'''
my_env['PATH'] = f'''/home/{my_env['USER']}/z3/build/:{my_env['PATH']}'''
for fzn in fzn_files:
translate_count += 1
fzn2smt2_transform_cmd = f'''{fzn2smt2_base_cmd} {fzn_path}{fzn} --smt2 {smt2_path}{fzn.replace('.', '-')}.smt2'''
print(f'''\r({translate_count}/{len(fzn_files)}) Translating {fzn_path + fzn} to {smt2_path + fzn.replace('.', '-')}.smt2''', end='')
try:
output = subprocess.check_output(fzn2smt2_transform_cmd,
shell=True,env=my_env,
universal_newlines=True)
except Exception as e:
output = str(e.output)
print(f'''\r{output}''', end='')
sys.stdout.flush()
```
### Adjust `smt2` Files According to Chapter 5.2
- Add lower and upper bounds for the decision variable `pfc`
- Add number of cavities as comments for later solution extraction (workaround)
```
import os
import re
def adjust_smt2_file(smt2_path: str, file: str, write_path: str):
with open(smt2_path+'/'+file, 'r+') as myfile:
data = "".join(line for line in myfile)
filename = os.path.splitext(file)[0]
newFile = open(os.path.join(write_path, filename +'.smt2'),"w+")
newFile.write(data)
newFile.close()
openFile = open(os.path.join(write_path, filename +'.smt2'))
data = openFile.readlines()
additionalLines = data[-5:]
data = data[:-5]
openFile.close()
newFile = open(os.path.join(write_path, filename +'.smt2'),"w+")
newFile.writelines([item for item in data])
newFile.close()
with open(os.path.join(write_path, filename +'.smt2'),"r") as myfile:
data = "".join(line for line in myfile)
newFile = open(os.path.join(write_path, filename +'.smt2'),"w+")
matches = re.findall(r'\(define-fun .\d\d \(\) Int (\d+)\)', data)
try:
cavity_count = int(matches[0])
newFile.write(f''';; k={cavity_count}\n''')
newFile.write(f''';; Extract pfc from\n''')
for i in range(0,cavity_count):
newFile.write(f''';; X_INTRODUCED_{str(i)}_\n''')
newFile.write(data)
for i in range(1,cavity_count+1):
lb = f'''(define-fun lbound{str(i)} () Bool (> X_INTRODUCED_{str(i-1)}_ 0))\n'''
ub = f'''(define-fun ubound{str(i)} () Bool (<= X_INTRODUCED_{str(i-1)}_ {str(cavity_count)}))\n'''
assertLb = f'''(assert lbound{str(i)})\n'''
assertUb = f'''(assert ubound{str(i)})\n'''
newFile.write(lb)
newFile.write(ub)
newFile.write(assertLb)
newFile.write(assertUb)
except:
print(f'''\nCheck {filename} for completeness - data missing?''')
newFile.writelines([item for item in additionalLines])
newFile.close()
```
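For illustration only (a sketch pieced together from the string templates in `adjust_smt2_file` above, not actual tool output): for a file with `k=3` cavities, the adjusted `.smt2` file would gain comment lines naming the `pfc` variables at the top and bound definitions plus assertions near the end, with the original content in between:
```
;; k=3
;; Extract pfc from
;; X_INTRODUCED_0_
;; X_INTRODUCED_1_
;; X_INTRODUCED_2_
... original smt2 content ...
(define-fun lbound1 () Bool (> X_INTRODUCED_0_ 0))
(define-fun ubound1 () Bool (<= X_INTRODUCED_0_ 3))
(assert lbound1)
(assert ubound1)
... analogous bounds for cavities 2 and 3, followed by the original final lines ...
```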
#### $Mz_1$
```
import os
smt2_files = []
smt2_path = f'''/home/{my_env['USER']}/data/smt2/z3/Mz1-noAbs'''
for filename in os.listdir(smt2_path):
if filename.endswith(".smt2"):
smt2_files.append(filename)
len(smt2_files)
fix_count = 0
for smt2 in smt2_files:
fix_count += 1
print(f'''\r{fix_count}/{len(smt2_files)} Fixing file {smt2}''', end='')
adjust_smt2_file(smt2_path=smt2_path, file=smt2, write_path=f'''{smt2_path}''')
sys.stdout.flush()
```
#### $Mz_2$
```
import os
smt2_files = []
smt2_path = f'''/home/{my_env['USER']}/data/smt2/z3/Mz2-noAbs'''
for filename in os.listdir(smt2_path):
if filename.endswith(".smt2"):
smt2_files.append(filename)
len(smt2_files)
fix_count = 0
for smt2 in smt2_files:
fix_count += 1
print(f'''\r{fix_count}/{len(smt2_files)} Fixing file {smt2}''', end='')
adjust_smt2_file(smt2_path=smt2_path, file=smt2, write_path=f'''{smt2_path}''')
sys.stdout.flush()
```
## Test Generated `smt2` Files Using `z3`
This should generate the `smt2` files without any errors. If that was the case, the `z3` prover can be called on a file by running
```zsh
z3 output/A001-dzn-smt2-fzn.smt2
```
yielding something similar to
```zsh
z3 output/A001-dzn-smt2-fzn.smt2
sat
(objectives
(obj 41881)
)
(model
(define-fun X_INTRODUCED_981_ () Bool
false)
(define-fun X_INTRODUCED_348_ () Bool
false)
.....
```
#### Test with `smt2` from $Mz_1$
```
command = f'''/home/{my_env['USER']}/z3/build/z3 /home/{my_env['USER']}/data/smt2/z3/Mz1-noAbs/A001-dzn-fzn.smt2'''
print(command)
try:
result = subprocess.check_output(command, shell=True, universal_newlines=True)
except Exception as e:
print(e.output)
print(result)
```
#### Test with `smt2` from $Mz_2$
```
result = subprocess.check_output(
f'''/home/{my_env['USER']}/z3/build/z3 \
/home/{my_env['USER']}/data/smt2/z3/Mz2-noAbs/v3/A004-dzn-fzn_v3.smt2''',
shell=True, universal_newlines=True)
print(result)
```
| github_jupyter |
<p><img alt="Colaboratory logo" height="45px" src="/img/colab_favicon.ico" align="left" hspace="10px" vspace="0px"></p>
<h1>Welcome to Colaboratory!</h1>
Colaboratory is a free Jupyter notebook environment that requires no setup and runs entirely in the cloud.
With Colaboratory you can write and execute code, save and share your analyses, and access powerful computing resources, all for free from your browser.
```
#@title Introducing Colaboratory { display-mode: "form" }
#@markdown This 3-minute video gives an overview of the key features of Colaboratory:
from IPython.display import YouTubeVideo
YouTubeVideo('inN8seMm7UI', width=600, height=400)
```
## Getting Started
The document you are reading is a [Jupyter notebook](https://jupyter.org/), hosted in Colaboratory. It is not a static page, but an interactive environment that lets you write and execute code in Python and other languages.
For example, here is a **code cell** with a short Python script that computes a value, stores it in a variable, and prints the result:
```
seconds_in_a_day = 24 * 60 * 60
seconds_in_a_day
```
To execute the code in the above cell, select it with a click and then either press the play button to the left of the code, or use the keyboard shortcut "Command/Ctrl+Enter".
All cells modify the same global state, so variables that you define by executing a cell can be used in other cells:
```
seconds_in_a_week = 7 * seconds_in_a_day
seconds_in_a_week
```
For more information about working with Colaboratory notebooks, see [Overview of Colaboratory](/notebooks/basic_features_overview.ipynb).
---
# Cells
A notebook is a list of cells. Cells contain either explanatory text or executable code and its output. Click a cell to select it.
## Code cells
Below is a **code cell**. Once the toolbar button indicates CONNECTED, click in the cell to select it and execute the contents in the following ways:
* Click the **Play icon** in the left gutter of the cell;
* Type **Cmd/Ctrl+Enter** to run the cell in place;
* Type **Shift+Enter** to run the cell and move focus to the next cell (adding one if none exists); or
* Type **Alt+Enter** to run the cell and insert a new code cell immediately below it.
There are additional options for running some or all cells in the **Runtime** menu.
```
a = 13
a
```
## Text cells
This is a **text cell**. You can **double-click** to edit this cell. Text cells
use markdown syntax. To learn more, see our [markdown
guide](/notebooks/markdown_guide.ipynb).
You can also add math to text cells using [LaTeX](http://www.latex-project.org/)
to be rendered by [MathJax](https://www.mathjax.org). Just place the statement
within a pair of **\$** signs. For example `$\sqrt{3x-1}+(1+x)^2$` becomes
$\sqrt{3x-1}+(1+x)^2.$
## Adding and moving cells
You can add new cells by using the **+ CODE** and **+ TEXT** buttons that show when you hover between cells. These buttons are also in the toolbar above the notebook where they can be used to add a cell below the currently selected cell.
You can move a cell by selecting it and clicking **Cell Up** or **Cell Down** in the top toolbar.
Consecutive cells can be selected by "lasso selection" by dragging from outside one cell and through the group. Non-adjacent cells can be selected concurrently by clicking one and then holding down Ctrl while clicking another. Similarly, using Shift instead of Ctrl will select all intermediate cells.
# Integration with Drive
Colaboratory is integrated with Google Drive. It allows you to share, comment, and collaborate on the same document with multiple people:
* The **SHARE** button (top-right of the toolbar) allows you to share the notebook and control permissions set on it.
* **File->Make a Copy** creates a copy of the notebook in Drive.
* **File->Save** saves the File to Drive. **File->Save and checkpoint** pins the version so it doesn't get deleted from the revision history.
* **File->Revision history** shows the notebook's revision history.
## Commenting on a cell
You can comment on a Colaboratory notebook like you would on a Google Document. Comments are attached to cells, and are displayed next to the cell they refer to. If you have **comment-only** permissions, you will see a comment button on the top right of the cell when you hover over it.
If you have edit or comment permissions you can comment on a cell in one of three ways:
1. Select a cell and click the comment button in the toolbar above the top-right corner of the cell.
2. Right click a text cell and select **Add a comment** from the context menu.
3. Use the shortcut **Ctrl+Shift+M** to add a comment to the currently selected cell.
You can resolve and reply to comments, and you can target comments to specific collaborators by typing *+[email address]* (e.g., `+user@domain.com`). Addressed collaborators will be emailed.
The Comment button in the top-right corner of the page shows all comments attached to the notebook.
## More Resources
- [Guide to Markdown](/notebooks/markdown_guide.ipynb)
- Colaboratory is built on top of [Jupyter Notebook](https://jupyter.org/).
---
**Original Sources:**
1. https://colab.research.google.com/notebooks/welcome.ipynb
2. https://colab.research.google.com/notebooks/basic_features_overview.ipynb
| github_jupyter |
<p><font size="6"><b>04 - Pandas: Working with time series data</b></font></p>
> *© 2021, Joris Van den Bossche and Stijn Van Hoey (<mailto:jorisvandenbossche@gmail.com>, <mailto:stijnvanhoey@gmail.com>). Licensed under [CC BY 4.0 Creative Commons](http://creativecommons.org/licenses/by/4.0/)*
---
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('ggplot')
```
# Introduction: `datetime` module
Standard Python contains the `datetime` module to handle date and time data:
```
import datetime
dt = datetime.datetime(year=2016, month=12, day=19, hour=13, minute=30)
dt
print(dt) # .day,...
print(dt.strftime("%d %B %Y"))
```
# Dates and times in pandas
## The ``Timestamp`` object
Pandas has its own date and time objects, which are compatible with the standard `datetime` objects, but provide some more functionality to work with.
The `Timestamp` object can also be constructed from a string:
```
ts = pd.Timestamp('2016-12-19')
ts
```
Like with `datetime.datetime` objects, there are several useful attributes available on the `Timestamp`. For example, we can get the month (experiment with tab completion!):
```
ts.month
```
There is also a `Timedelta` type, which can e.g. be used to add intervals of time:
```
ts + pd.Timedelta('5 days')
```
## Parsing datetime strings

Unfortunately, when working with real world data, you encounter many different `datetime` formats. Most of the time when you have to deal with them, they come in text format, e.g. from a `CSV` file. To work with those data in Pandas, we first have to *parse* the strings to actual `Timestamp` objects.
<div class="alert alert-info">
<b>REMEMBER</b>: <br><br>
To convert string formatted dates to Timestamp objects: use the `pandas.to_datetime` function
</div>
```
pd.to_datetime("2016-12-09")
pd.to_datetime("09/12/2016")
pd.to_datetime("09/12/2016", dayfirst=True)
pd.to_datetime("09/12/2016", format="%d/%m/%Y")
```
A detailed overview of how to specify the `format` string, see the table in the python documentation: https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior
## `Timestamp` data in a Series or DataFrame column
```
s = pd.Series(['2016-12-09 10:00:00', '2016-12-09 11:00:00', '2016-12-09 12:00:00'])
s
```
The `to_datetime` function can also be used to convert a full series of strings:
```
ts = pd.to_datetime(s)
ts
```
Notice the data type of this series has changed: the `datetime64[ns]` dtype. This indicates that we have a series of actual datetime values.
The same attributes as on single `Timestamp`s are also available on a Series with datetime data, using the **`.dt`** accessor:
```
ts.dt.hour
ts.dt.dayofweek
```
To quickly construct some regular time series data, the [``pd.date_range``](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.date_range.html) function comes in handy:
```
pd.Series(pd.date_range(start="2016-01-01", periods=10, freq='3H'))
```
# Time series data: `Timestamp` in the index
## River discharge example data
For the following demonstration of the time series functionality, we use a sample of discharge data of the Maarkebeek (Flanders) with 3 hour averaged values, derived from the [Waterinfo website](https://www.waterinfo.be/).
```
data = pd.read_csv("data/vmm_flowdata.csv")
data.head()
```
We already know how to parse a date column with Pandas:
```
data['Time'] = pd.to_datetime(data['Time'])
```
With `set_index('datetime')`, we set the column with datetime values as the index, which can be done by both `Series` and `DataFrame`.
```
data = data.set_index("Time")
data
```
The steps above are provided as built-in functionality of `read_csv`:
```
data = pd.read_csv("data/vmm_flowdata.csv", index_col=0, parse_dates=True)
```
<div class="alert alert-info">
<b>REMEMBER</b>: <br><br>
`pd.read_csv` provides a lot of built-in functionality to support this kind of transformation when reading in a file! Check the help of the read_csv function...
</div>
## The DatetimeIndex
When we ensure the DataFrame has a `DatetimeIndex`, time-series related functionality becomes available:
```
data.index
```
Similar to a Series with datetime data, there are some attributes of the timestamp values available:
```
data.index.day
data.index.dayofyear
data.index.year
```
The `plot` method will also adapt its labels (when you zoom in, you can see the different levels of detail of the datetime labels):
```
%matplotlib widget
data.plot()
# switching back to static inline plots (the default)
%matplotlib inline
```
We have too much data to sensibly plot on one figure. Let's see how we can easily select part of the data or aggregate the data to other time resolutions in the next sections.
## Selecting data from a time series
We can use label based indexing on a timeseries as expected:
```
data[pd.Timestamp("2012-01-01 09:00"):pd.Timestamp("2012-01-01 19:00")]
```
But, for convenience, indexing a time series also works with strings:
```
data["2012-01-01 09:00":"2012-01-01 19:00"]
```
A nice feature is **"partial string" indexing**, where we can do implicit slicing by providing a partial datetime string.
E.g. all data of 2013:
```
data['2013':]
```
Or all data of January up to March 2012:
```
data['2012-01':'2012-03']
```
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>select all data starting from 2012</li>
</ul>
</div>
```
data['2012':]
```
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>select all data in January for all different years</li>
</ul>
</div>
```
data[data.index.month == 1]
```
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>select all data in April, May and June for all different years</li>
</ul>
</div>
```
data[data.index.month.isin([4, 5, 6])]
```
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>select all 'daytime' data (between 8h and 20h) for all days</li>
</ul>
</div>
```
data[(data.index.hour > 8) & (data.index.hour < 20)]
```
## The power of pandas: `resample`
A very powerful method is **`resample`: converting the frequency of the time series** (e.g. from hourly to daily data).
The time series has a frequency of 3 hours. I want to change this to daily:
```
data.resample('D').mean().head()
```
Other mathematical methods can also be specified:
```
data.resample('D').max().head()
```
<div class="alert alert-info">
<b>REMEMBER</b>: <br><br>
The string to specify the new time frequency: http://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases <br>
These strings can also be combined with numbers, eg `'10D'`...
</div>
```
data.resample('M').mean().plot() # 10D
```
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Plot the monthly standard deviation of the columns</li>
</ul>
</div>
```
data.resample('M').std().plot() # 'A'
```
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Plot the monthly mean and median values for the years 2011-2012 for 'L06_347'<br><br></li>
</ul>
__Note__: Did you know you can use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.agg.html"><code>agg</code></a> to derive multiple statistics at the same time?
</div>
```
subset = data['2011':'2012']['L06_347']
subset.resample('M').agg(['mean', 'median']).plot()
```
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>plot the monthly minimum and maximum daily average value of the 'LS06_348' column</li>
</ul>
</div>
```
daily = data['LS06_348'].resample('D').mean() # daily averages calculated
daily.resample('M').agg(['min', 'max']).plot() # monthly minimum and maximum values of these daily averages
```
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Make a bar plot of the mean of the stations in the year 2013</li>
</ul>
</div>
```
data['2013':'2013'].mean().plot(kind='barh')
```
| github_jupyter |
Demonstrating how to get DonkeyCar Tub files into a PyTorch/fastai DataBlock
```
from fastai.data.all import *
from fastai.vision.all import *
from fastai.data.transforms import ColReader, Normalize, RandomSplitter
import torch
from torch import nn
from torch.nn import functional as F
from donkeycar.parts.tub_v2 import Tub
import pandas as pd
from pathlib import Path
from malpi.dk.train import preprocessFileList, get_data, get_learner, get_autoencoder, train_autoencoder
def learn_resnet():
learn2 = cnn_learner(dls, resnet18, loss_func=MSELossFlat(), metrics=[rmse], cbs=ActivationStats(with_hist=True))
learn2.fine_tune(5)
learn2.recorder.plot_loss()
learn2.show_results(figsize=(20,10))
```
The code below is modified from https://github.com/cmasenas/fastai_navigation_training/blob/master/fastai_train.ipynb.
TODO: Figure out how to have multiple output heads
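One possible direction for that TODO (a hedged sketch, not this notebook's implementation): a shared convolutional trunk with two separate heads, one for steering (roughly in [-1, 1]) and one for throttle (roughly in [0, 1]), matching the ranges plotted in `visualize_learner` below. All layer sizes here are illustrative assumptions.
```
# A hedged sketch of a two-head model; layer sizes are illustrative assumptions.
import torch
from torch import nn

class TwoHeadDriver(nn.Module):
    def __init__(self):
        super().__init__()
        # shared feature extractor
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 64), nn.ReLU())
        self.steering_head = nn.Linear(64, 1)   # squashed to [-1, 1] below
        self.throttle_head = nn.Linear(64, 1)   # squashed to [0, 1] below

    def forward(self, x):
        h = self.trunk(x)
        steering = torch.tanh(self.steering_head(h))
        throttle = torch.sigmoid(self.throttle_head(h))
        # concatenate so the output stays a 2-column tensor, compatible with
        # an MSE-style loss over (steering, throttle) targets
        return torch.cat([steering, throttle], dim=1)
```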
```
def test_one_transform(name, inputs, df_all, batch_tfms, item_tfms, epochs, lr):
dls = get_data(inputs, df_all=df_all, batch_tfms=batch_tfms, item_tfms=item_tfms)
callbacks = [CSVLogger(f"Transform_{name}.csv", append=True)]
learn = get_learner(dls)
#learn.no_logging() #Try this to block logging when doing many training test runs
learn.fit_one_cycle(epochs, lr, cbs=callbacks)
#learn.recorder.plot_loss()
#learn.show_results(figsize=(20,10))
# Train multiple times using a list of Transforms, one at a time.
# Compare mean/stdev of best validation loss (or rmse?) for each Transform
df_all = get_dataframe("track1_warehouse.txt")
transforms = [None]
transforms.extend( [*aug_transforms(do_flip=False, size=128)] )
for tfm in transforms:
name = "None" if tfm is None else str(tfm.__class__.__name__)
print( f"Transform: {name}" )
for i in range(5):
print( f" Run {i+1}" )
test_one_transform(name, "track1_warehouse.txt", df_all, tfm, None, 5, 3e-3)  # pass the current transform as batch_tfms
def visualize_learner( learn ):
#dls=nav.dataloaders(df, bs=512)
preds, tgt = learn.get_preds(dl=[dls.one_batch()])
plt.title("Target vs Predicted Steering", fontsize=18, y=1.0)
plt.xlabel("Target", fontsize=14, labelpad=15)
plt.ylabel("Predicted", fontsize=14, labelpad=15)
plt.plot(tgt.T[0], preds.T[0],'bo')
plt.plot([-1,1],[-1,1],'r', linewidth = 4)
plt.show()
plt.title("Target vs Predicted Throttle", fontsize=18, y=1.02)
plt.xlabel("Target", fontsize=14, labelpad=15)
plt.ylabel("Predicted", fontsize=14, labelpad=15)
plt.plot(tgt.T[1], preds.T[1],'bo')
plt.plot([0,1],[0,1],'r', linewidth = 4)
plt.show()
learn.export()
df_all = get_dataframe("track1_warehouse.txt")
dls = get_data("track1_warehouse.txt", df_all=df_all, batch_tfms=None)
learn = get_learner(dls)
learn.fit_one_cycle(15, 3e-3)
visualize_learner(learn)
learn.export('models/track1_v2.pkl')
def clear_pyplot_memory():
plt.clf()
plt.cla()
plt.close()
df_all = get_dataframe("track1_warehouse.txt")
transforms=[None,
RandomResizedCrop(128,p=1.0,min_scale=0.5,ratio=(0.9,1.1)),
RandomErasing(sh=0.2, max_count=6,p=1.0),
Brightness(max_lighting=0.4, p=1.0),
Contrast(max_lighting=0.4, p=1.0),
Saturation(max_lighting=0.4, p=1.0)]
#dls = get_data(None, df_all, item_tfms=item_tfms, batch_tfms=batch_tfms)
for tfm in transforms:
name = "None" if tfm is None else str(tfm.__class__.__name__)
if name == "RandomResizedCrop":
item_tfms = tfm
batch_tfms = None
else:
item_tfms = None
batch_tfms = tfm
dls = get_data("track1_warehouse.txt",
df_all=df_all,
item_tfms=item_tfms, batch_tfms=batch_tfms)
dls.show_batch(unique=True, show=True)
plt.savefig( f'Transform_{name}.png' )
#clear_pyplot_memory()
learn, dls = train_autoencoder( "tracks_all.txt", 5, 3e-3, name="ae_test1", verbose=False )
learn.recorder.plot_loss()
learn.show_results(figsize=(20,10))
#plt.savefig(name + '.png')
idx = 0
idx += 1
im1 = dls.one_batch()[0]
im1_out = learn.model.forward(im1)
show_image(im1[idx])
show_image(im1_out[idx])
from fastai.metrics import rmse
from typing import List, Callable, Union, Any, TypeVar, Tuple
Tensor = TypeVar('torch.tensor')
from abc import abstractmethod
class BaseVAE(nn.Module):
def __init__(self) -> None:
super(BaseVAE, self).__init__()
def encode(self, input: Tensor) -> List[Tensor]:
raise NotImplementedError
def decode(self, input: Tensor) -> Any:
raise NotImplementedError
def sample(self, batch_size:int, current_device: int, **kwargs) -> Tensor:
raise NotImplementedError
def generate(self, x: Tensor, **kwargs) -> Tensor:
raise NotImplementedError
@abstractmethod
def forward(self, *inputs: Tensor) -> Tensor:
pass
@abstractmethod
def loss_function(self, *inputs: Any, **kwargs) -> Tensor:
pass
class VanillaVAE(BaseVAE):
def __init__(self,
in_channels: int,
latent_dim: int,
hidden_dims: List = None,
**kwargs) -> None:
super(VanillaVAE, self).__init__()
self.latent_dim = latent_dim
self.kld_weight = 0.00025 # TODO calculate based on: #al_img.shape[0]/ self.num_train_imgs
modules = []
if hidden_dims is None:
hidden_dims = [32, 64, 128, 256, 512]
# Build Encoder
for h_dim in hidden_dims:
modules.append(
nn.Sequential(
nn.Conv2d(in_channels, out_channels=h_dim,
kernel_size= 3, stride= 2, padding = 1),
nn.BatchNorm2d(h_dim),
nn.LeakyReLU())
)
in_channels = h_dim
self.encoder = nn.Sequential(*modules)
self.fc_mu = nn.Linear(hidden_dims[-1]*4, latent_dim)
self.fc_var = nn.Linear(hidden_dims[-1]*4, latent_dim)
# Build Decoder
modules = []
self.decoder_input = nn.Linear(latent_dim, hidden_dims[-1] * 4)
hidden_dims.reverse()
for i in range(len(hidden_dims) - 1):
modules.append(
nn.Sequential(
nn.ConvTranspose2d(hidden_dims[i],
hidden_dims[i + 1],
kernel_size=3,
stride = 2,
padding=1,
output_padding=1),
nn.BatchNorm2d(hidden_dims[i + 1]),
nn.LeakyReLU())
)
self.decoder = nn.Sequential(*modules)
self.final_layer = nn.Sequential(
nn.ConvTranspose2d(hidden_dims[-1],
hidden_dims[-1],
kernel_size=3,
stride=2,
padding=1,
output_padding=1),
nn.BatchNorm2d(hidden_dims[-1]),
nn.LeakyReLU(),
nn.Conv2d(hidden_dims[-1], out_channels= 3,
kernel_size= 3, padding= 1),
nn.Tanh())
def encode(self, input: Tensor) -> List[Tensor]:
"""
Encodes the input by passing through the encoder network
and returns the latent codes.
:param input: (Tensor) Input tensor to encoder [N x C x H x W]
:return: (Tensor) List of latent codes
"""
result = self.encoder(input)
result = torch.flatten(result, start_dim=1)
# Split the result into mu and var components
# of the latent Gaussian distribution
mu = self.fc_mu(result)
log_var = self.fc_var(result)
return [mu, log_var]
def decode(self, z: Tensor) -> Tensor:
"""
Maps the given latent codes
onto the image space.
:param z: (Tensor) [B x D]
:return: (Tensor) [B x C x H x W]
"""
result = self.decoder_input(z)
result = result.view(-1, 512, 2, 2)
result = self.decoder(result)
result = self.final_layer(result)
return result
def reparameterize(self, mu: Tensor, logvar: Tensor) -> Tensor:
"""
Reparameterization trick to sample from N(mu, var) from
N(0,1).
:param mu: (Tensor) Mean of the latent Gaussian [B x D]
:param logvar: (Tensor) Standard deviation of the latent Gaussian [B x D]
:return: (Tensor) [B x D]
"""
std = torch.exp(0.5 * logvar)
eps = torch.randn_like(std)
return eps * std + mu
def forward(self, input: Tensor, **kwargs) -> List[Tensor]:
mu, log_var = self.encode(input)
z = self.reparameterize(mu, log_var)
return [self.decode(z), input, mu, log_var]
def loss_function(self,
*args,
**kwargs) -> dict:
"""
Computes the VAE loss function.
KL(N(\mu, \sigma), N(0, 1)) = \log \frac{1}{\sigma} + \frac{\sigma^2 + \mu^2}{2} - \frac{1}{2}
:param args:
:param kwargs:
:return:
"""
#print( f"loss_function: {len(args[0])} {type(args[0][0])} {args[1].shape}" )
recons = args[0][0]
input = args[1]
mu = args[0][2]
log_var = args[0][3]
kld_weight = self.kld_weight # kwargs['M_N'] # Account for the minibatch samples from the dataset
recons_loss =F.mse_loss(recons, input)
kld_loss = torch.mean(-0.5 * torch.sum(1 + log_var - mu ** 2 - log_var.exp(), dim = 1), dim = 0)
loss = recons_loss + kld_weight * kld_loss
return loss
#return {'loss': loss, 'Reconstruction_Loss':recons_loss.detach(), 'KLD':-kld_loss.detach()}
def sample(self,
num_samples:int,
current_device: int, **kwargs) -> Tensor:
"""
Samples from the latent space and return the corresponding
image space map.
:param num_samples: (Int) Number of samples
:param current_device: (Int) Device to run the model
:return: (Tensor)
"""
z = torch.randn(num_samples,
self.latent_dim)
z = z.to(current_device)
samples = self.decode(z)
return samples
def generate(self, x: Tensor, **kwargs) -> Tensor:
"""
Given an input image x, returns the reconstructed image
:param x: (Tensor) [B x C x H x W]
:return: (Tensor) [B x C x H x W]
"""
return self.forward(x)[0]
input_file="track1_warehouse.txt"
item_tfms = [Resize(64,method="squish")]
dls = get_data(input_file, item_tfms=item_tfms, verbose=False, autoencoder=True)
vae = VanillaVAE(3, 64)
learn = Learner(dls, vae, loss_func=vae.loss_function)
learn.fit_one_cycle(5, 3e-3)
vae
```
| github_jupyter |
# Popping multiple items from a Redis list in a single operation

```
# Connect to Redis
import redis
client = redis.Redis(host='122.51.39.219', port=6379, password='leftright123')
# Note:
# This Redis instance is for practice only; it is wiped every hour, so do not store important data in it.
# Prepare the data
client.lpush('test_batch_pop', *list(range(10000)))
# Reading the items one at a time is very slow
import time
start = time.time()
while True:
data = client.lpop('test_batch_pop')
if not data:
break
end = time.time()
delta = end - start
print(f'Reading 10000 items one by one with lpop took: {delta}')
```
## Why is reading 10,000 items with `lpop` so slow?
Because `lpop` only pops one item per call, and every pop is a separate round trip to Redis. A huge amount of time is wasted on network transfer.
## How can we pop multiple items in one batch and return them in a single network request?
First use `lrange` to read the items, then use `ltrim` to delete the items that were read.
```
# A quick review of how lrange works
datas = client.lrange('test_batch_pop', 0, 9) # read the first 10 items
datas
# Now let's look at ltrim
client.ltrim('test_batch_pop', 10, -1) # delete the first 10 items
# Verify that the items were actually deleted
length = client.llen('test_batch_pop')
print(f'The list now has {length} items left')
datas = client.lrange('test_batch_pop', 0, 9) # read the first 10 items
datas
# An approach that looks correct
def batch_pop_fake(key, n):
datas = client.lrange(key, 0, n - 1)
client.ltrim(key, n, -1)
return datas
batch_pop_fake('test_batch_pop', 10)
client.lrange('test_batch_pop', 0, 9)
```
## What is wrong with this approach?
When several processes use batch_pop_fake at the same time, `lrange` and `ltrim` are executed as two separate statements, so they actually become two separate network requests. If process A has just finished its `lrange` but has not yet executed `ltrim` when process B happens to run its `lrange`, then A and B will receive the same data.
Once B has fetched the data, A's `ltrim` arrives and Redis deletes the first n items; then B's `ltrim` arrives and deletes the next n items. The end result is that A and B both got the same first n items, yet 2n items were deleted.
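Before looking at the fix, here is a minimal sketch of that race (it assumes a local test Redis at localhost:6379, not the practice server used above). Two "processes" A and B interleave their commands: both read the same items, while twice as many items get deleted.
```
# Minimal sketch of the lrange/ltrim race, assuming a local test Redis instance.
import redis

r = redis.Redis()
r.delete('race_demo')
r.lpush('race_demo', *range(10))

n = 3
a_items = r.lrange('race_demo', 0, n - 1)   # A reads the first 3 items
b_items = r.lrange('race_demo', 0, n - 1)   # B reads the SAME 3 items
r.ltrim('race_demo', n, -1)                 # A's ltrim deletes 3 items
r.ltrim('race_demo', n, -1)                 # B's ltrim deletes 3 more
print(a_items == b_items)                   # True: duplicate reads
print(r.llen('race_demo'))                  # 4 left: six deleted, three never read
```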
## Use a pipeline to pack multiple commands into a single request
A pipeline is used as follows:
```python
import redis
client = redis.Redis()
pipe = client.pipeline()
pipe.lrange('key', 0, n - 1)
pipe.ltrim('key', n, -1)
result = pipe.execute()
```
pipe.execute() returns a list in which each element corresponds, in order, to the result of one command. In the example above, result is a list with two elements: the first is the return value of lrange, and the second is True, indicating that ltrim executed successfully.
```
# A batch-pop function that really works
def batch_pop_real(key, n):
pipe = client.pipeline()
pipe.lrange(key, 0, n - 1)
pipe.ltrim(key, n, -1)
result = pipe.execute()
return result[0]
# Clear the list and add 10000 items again
client.delete('test_batch_pop')
client.lpush('test_batch_pop', *list(range(10000)))
start = time.time()
while True:
datas = batch_pop_real('test_batch_pop', 1000)
if not datas:
break
for data in datas:
pass
end = time.time()
print(f'Batch-popping 10000 items took: {end - start}')
client.llen('test_batch_pop')
```



| github_jupyter |
```
# import the required libraries
%matplotlib inline
import random
import tsfresh
import os
import math
from scipy import stats
from scipy.spatial.distance import pdist
from math import sqrt, log, floor
from fastdtw import fastdtw
import ipywidgets as widgets
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import numpy as np
import pandas as pd
import seaborn as sns
from statistics import mean
from scipy.spatial.distance import euclidean
import scipy.cluster.hierarchy as hac
from scipy.cluster.hierarchy import fcluster
from sklearn.metrics.pairwise import pairwise_distances
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans, AgglomerativeClustering, DBSCAN
from sklearn.manifold import TSNE
from sklearn.metrics import normalized_mutual_info_score, adjusted_rand_score, silhouette_score, silhouette_samples
from sklearn.metrics import mean_squared_error
from scipy.spatial import distance
sns.set(style='white')
# "fix" the randomness for reproducibility
random.seed(42)
!pip install tsfresh
```
### Dataset
The data are time series (weekly Dengue case counts) from different districts of Paraguay.
```
path = "./data/Notificaciones/"
filename_read = os.path.join(path,"normalizado.csv")
notificaciones = pd.read_csv(filename_read,delimiter=",",engine='python')
notificaciones.shape
listaMunicp = notificaciones['distrito_nombre'].tolist()
listaMunicp = list(dict.fromkeys(listaMunicp))
print('There are ', len(listaMunicp), ' districts')
listaMunicp.sort()
print(listaMunicp)
```
Next we take the time series we just read and see what they look like.
```
timeSeries = pd.DataFrame()
for muni in listaMunicp:
municipio=notificaciones['distrito_nombre']==muni
notif_x_municp=notificaciones[municipio]
notif_x_municp = notif_x_municp.reset_index(drop=True)
notif_x_municp = notif_x_municp['incidencia']
notif_x_municp = notif_x_municp.replace('nan', np.nan).fillna(0.000001)
notif_x_municp = notif_x_municp.replace([np.inf, -np.inf], np.nan).fillna(0.000001)
timeSeries = timeSeries.append(notif_x_municp)
ax = sns.tsplot(ax=None, data=notif_x_municp.values, err_style="unit_traces")
plt.show()
#timeseries shape
n=217
timeSeries.shape
timeSeries.describe()
```
### Cluster analysis (clustering)
Clustering is an important process within machine learning. It plays a fundamental role in letting machine-learning algorithms train on and properly get to know the data they work with. Its main goal is to group sets of unlabeled objects in order to build subsets of the data known as clusters. Each cluster is formed by a collection of objects or data points that, for the purposes of the analysis, are similar to each other but differ in some way from objects belonging to the rest of the dataset, which may form independent clusters of their own.
In practice, though, the data are not always that easy to group.
### Similarity metrics
To measure how similar (or dissimilar) individuals are, there is an enormous number of similarity and dissimilarity (divergence) indices. They all have different properties and uses, and we have to be aware of them to apply them correctly to the case at hand.
Most of these indices are either distance-based indicators (treating the individuals as vectors in the space of the variables, where a large distance between two individuals indicates a high degree of dissimilarity), indicators based on correlation coefficients, or indicators based on tables recording whether or not a series of attributes is present.
Below we show the functions for:
* Euclidean distance
* Root mean squared error
* Fast Dynamic Time Warping
* Pearson correlation and
* Spearman correlation.
Many other metrics exist, and which one to use depends on the nature of each problem. For example, *Fast Dynamic Time Warping* is a similarity measure designed specifically for time series.
```
#Euclidean
def euclidean(x, y):
r=np.linalg.norm(x-y)
if math.isnan(r):
r=1
#print(r)
return r
#RMSE
def rmse(x, y):
r=sqrt(mean_squared_error(x,y))
if math.isnan(r):
r=1
#print(r)
return r
#Fast Dynamic time warping
def fast_DTW(x, y):
r, _ = fastdtw(x, y, dist=euclidean)
if math.isnan(r):
r=1
#print(r)
return r
#Correlation
def corr(x, y):
r=np.dot(x-mean(x),y-mean(y))/((np.linalg.norm(x-mean(x)))*(np.linalg.norm(y-mean(y))))
if math.isnan(r):
r=0
#print(r)
return 1 - r
#Spearman
def scorr(x, y):
r = stats.spearmanr(x, y)[0]
if math.isnan(r):
r=0
#print(r)
return 1 - r
# compute distances using LCSS
# function for LCSS computation
# based on implementation from
# https://rosettacode.org/wiki/Longest_common_subsequence
def lcs(a, b):
lengths = [[0 for j in range(len(b)+1)] for i in range(len(a)+1)]
# row 0 and column 0 are initialized to 0 already
for i, x in enumerate(a):
for j, y in enumerate(b):
if x == y:
lengths[i+1][j+1] = lengths[i][j] + 1
else:
lengths[i+1][j+1] = max(lengths[i+1][j], lengths[i][j+1])
x, y = len(a), len(b)
result = lengths[x][y]
return result
def discretise(x):
return int(x * 10)
def multidim_lcs(a, b):
a = a.applymap(discretise)
b = b.applymap(discretise)
rows, dims = a.shape
lcss = [lcs(a[i+2], b[i+2]) for i in range(dims)]
return 1 - sum(lcss) / (rows * dims)
# Distance matrices for k-means
#Euclidean
euclidean_dist = np.zeros((n,n))
for i in range(0,n):
#print("i",i)
for j in range(0,n):
# print("j",j)
euclidean_dist[i,j] = euclidean(timeSeries.iloc[i].values.flatten(), timeSeries.iloc[j].values.flatten())
#RMSE
rmse_dist = np.zeros((n,n))
for i in range(0,n):
#print("i",i)
for j in range(0,n):
# print("j",j)
rmse_dist[i,j] = rmse(timeSeries.iloc[i].values.flatten(), timeSeries.iloc[j].values.flatten())
#Corr
corr_dist = np.zeros((n,n))
for i in range(0,n):
#print("i",i)
for j in range(0,n):
# print("j",j)
corr_dist[i,j] = corr(timeSeries.iloc[i].values.flatten(), timeSeries.iloc[j].values.flatten())
#scorr
scorr_dist = np.zeros((n,n))
for i in range(0,n):
#print("i",i)
for j in range(0,n):
# print("j",j)
scorr_dist[i,j] = scorr(timeSeries.iloc[i].values.flatten(), timeSeries.iloc[j].values.flatten())
#DTW
dtw_dist = np.zeros((n,n))
for i in range(0,n):
#print("i",i)
for j in range(0,n):
# print("j",j)
dtw_dist[i,j] = fast_DTW(timeSeries.iloc[i].values.flatten(), timeSeries.iloc[j].values.flatten())
```
### Determining the number of clusters to form
Most clustering techniques require the number of clusters to form as an *input*, so what we do is run the clustering with different numbers of clusters and keep the one that gave the lowest overall error. To measure that error we use the **silhouette score**.
The **silhouette score** can be used to study the separation between the resulting clusters, especially when there is no prior knowledge of the true group of each object, which is the most common situation in real applications.
The silhouette score $s(i)$ is computed as:
\begin{equation}
s(i)=\dfrac{b(i)-a(i)}{\max(b(i),a(i))}
\end{equation}
Let us define $a(i)$ as the mean distance from point $(i)$ to all the other points of the cluster it was assigned to ($A$). We can interpret $a(i)$ as how well the point fits its cluster: the smaller the value, the better the assignment.
Similarly, let us define $b(i)$ as the mean distance from point $(i)$ to the points of its nearest neighbouring cluster ($B$). Cluster ($B$) is the cluster that point $(i)$ is not assigned to but whose distance to it is the smallest among all the other clusters. $s(i)$ lies in the range $[-1,1]$.
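As a quick illustration (a sketch on toy data, not the Dengue series), the silhouette score for two well-separated, compact groups is close to 1, using the same `silhouette_score` function imported above:
```
# Toy-data sketch: silhouette score of two well-separated clusters is near 1.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],    # compact group 1
              [5.0, 5.0], [5.1, 5.2], [5.2, 4.9]])   # compact group 2
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(silhouette_score(X, labels))   # close to 1.0
```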
```
from yellowbrick.cluster import KElbowVisualizer
model = AgglomerativeClustering()
visualizer = KElbowVisualizer(model, k=(3,20),metric='distortion', timings=False)
visualizer.fit(rmse_dist) # Fit the data to the visualizer
visualizer.show() # Finalize and render the figure
```
So the number of groups we will form is 9.
```
k=9
```
## Clustering techniques
### K-means
The goal of this algorithm is to find "K" groups (clusters) in the raw data. The algorithm works iteratively, assigning each "point" (each row of our input set forms a coordinate) to one of the "K" groups based on its characteristics; points are grouped according to the similarity of their features (the columns). As a result of running the algorithm we obtain:
* The "centroids" of each group, which are "coordinates" for each of the K sets and can be used to label new samples.
* Labels for the training dataset, each label belonging to one of the K groups formed.
The groups are defined "organically": their position is adjusted in every iteration of the process until the algorithm converges. Once the centroids have been found, we should analyse them to see which characteristics make each one unique compared with the other groups.
The data points end up grouped around their nearest centroid (usually drawn as a star in illustrations): the algorithm initializes the centroids randomly and adjusts them on every iteration, and the points closest to a centroid are the ones belonging to that group.
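A minimal sketch (toy 2-D points, not the Dengue series) showing the two outputs described above, the learned centroids and the label assigned to each point:
```
# Toy-data sketch of K-means: inspect the centroids and the point assignments.
import numpy as np
from sklearn.cluster import KMeans

points = np.array([[1.0, 1.0], [1.2, 0.8], [0.9, 1.1],
                   [8.0, 8.0], [8.2, 7.9], [7.8, 8.1]])
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(km.cluster_centers_)   # one centroid near (1, 1), another near (8, 8)
print(km.labels_)            # the group each point was assigned to
```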
### Hierarchical clustering
The hierarchical clustering algorithm groups the data based on the distance between each pair of points, aiming for the points inside a cluster to be as similar as possible to each other.
In a graphical representation, the elements end up nested in tree-shaped hierarchies (a dendrogram).
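A minimal sketch (toy data) of the same SciPy routines used in the experiments below, `hac.linkage` and `fcluster`, cutting the hierarchy into two flat clusters:
```
# Toy-data sketch: build the linkage tree and cut it into 2 flat clusters.
import numpy as np
import scipy.cluster.hierarchy as hac
from scipy.cluster.hierarchy import fcluster

points = np.array([[0.0, 0.0], [0.1, 0.1], [0.2, 0.0],
                   [5.0, 5.0], [5.1, 4.9], [4.9, 5.1]])
Z = hac.linkage(points, method='complete', metric='euclidean')
labels = fcluster(Z, 2, criterion='maxclust')
print(labels)   # e.g. [1 1 1 2 2 2]
```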
### DBSCAN
Density-Based Spatial Clustering of Applications with Noise (DBSCAN) is a data-clustering algorithm. It is a density-based clustering algorithm because it finds a number of clusters starting from an estimate of the density distribution of the corresponding nodes. DBSCAN is one of the most widely used and cited clustering algorithms in the scientific literature.
In the usual illustration of DBSCAN, the points marked in red are core points; the yellow points are density-reachable from a red point and density-connected to it, so they belong to the same cluster; the blue point is a noise point that is neither a core point nor density-reachable.
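A minimal sketch (toy data; the `eps` and `min_samples` values are assumptions chosen for this example) showing how DBSCAN labels an isolated point as noise with -1:
```
# Toy-data sketch: DBSCAN assigns the isolated point the noise label -1.
import numpy as np
from sklearn.cluster import DBSCAN

points = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
                   [5.0, 5.0], [5.1, 5.0], [5.0, 5.1],
                   [20.0, 20.0]])    # an isolated point
labels = DBSCAN(eps=0.5, min_samples=2).fit_predict(points)
print(labels)   # e.g. [0 0 0 1 1 1 -1]
```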
```
# Experiments
print('Silhouette coefficient')
#HAC + euclidean
Z = hac.linkage(timeSeries, method='complete', metric=euclidean)
clusters = fcluster(Z, k, criterion='maxclust')
print("HAC + euclidean distance: ",silhouette_score(euclidean_dist, clusters))
#HAC + rmse
Z = hac.linkage(timeSeries, method='complete', metric=rmse)
clusters = fcluster(Z, k, criterion='maxclust')
print("HAC + rmse distance: ",silhouette_score( rmse_dist, clusters))
#HAC + corr
Z = hac.linkage(timeSeries, method='complete', metric=corr)
clusters = fcluster(Z, k, criterion='maxclust')
print("HAC + corr distance: ",silhouette_score( corr_dist, clusters))
#HAC + scorr
Z = hac.linkage(timeSeries, method='complete', metric=scorr)
clusters = fcluster(Z, k, criterion='maxclust')
print("HAC + scorr distance: ",silhouette_score( scorr_dist, clusters))
#HAC + LCSS
#Z = hac.linkage(timeSeries, method='complete', metric=multidim_lcs)
#clusters = fcluster(Z, k, criterion='maxclust')
#print("HAC + LCSS distance: ",silhouette_score( timeSeries, clusters, metric=multidim_lcs))
#HAC + DTW
Z = hac.linkage(timeSeries, method='complete', metric=fast_DTW)
clusters = fcluster(Z, k, criterion='maxclust')
print("HAC + DTW distance: ",silhouette_score( dtw_dist, clusters))
km_euc = KMeans(n_clusters=k).fit_predict(euclidean_dist)
silhouette_avg=silhouette_score( euclidean_dist, km_euc)
print("KM + euclidian distance: ",silhouette_score( euclidean_dist, km_euc))
km_rmse = KMeans(n_clusters=k).fit_predict(rmse_dist)
print("KM + rmse distance: ",silhouette_score( rmse_dist, km_rmse))
km_corr = KMeans(n_clusters=k).fit_predict(corr_dist)
print("KM + corr distance: ",silhouette_score( corr_dist, km_corr))
km_scorr = KMeans(n_clusters=k).fit_predict(scorr_dist)
print("KM + scorr distance: ",silhouette_score( scorr_dist, km_scorr))
km_dtw = KMeans(n_clusters=k).fit_predict(dtw_dist)
print("KM + dtw distance: ",silhouette_score( dtw_dist, clusters))
# DBSCAN experiments
DB_euc = DBSCAN(eps=3, min_samples=2).fit_predict(euclidean_dist)
silhouette_avg=silhouette_score( euclidean_dist, DB_euc)
print("DBSCAN + euclidian distance: ",silhouette_score( euclidean_dist, DB_euc))
DB_rmse = DBSCAN(eps=12, min_samples=10).fit_predict(rmse_dist)
#print("DBSCAN + rmse distance: ",silhouette_score( rmse_dist, DB_rmse))
print("DBSCAN + rmse distance: ",0.00000000)
DB_corr = DBSCAN(eps=3, min_samples=2).fit_predict(corr_dist)
print("DBSCAN + corr distance: ",silhouette_score( corr_dist, DB_corr))
DB_scorr = DBSCAN(eps=3, min_samples=2).fit_predict(scorr_dist)
print("DBSCAN + scorr distance: ",silhouette_score( scorr_dist, DB_scorr))
DB_dtw = DBSCAN(eps=3, min_samples=2).fit_predict(dtw_dist)
print("KM + dtw distance: ",silhouette_score( dtw_dist, DB_dtw))
```
## Feature-based clustering
Another approach to clustering is to extract certain properties (features) from our data and do the grouping based on those; the procedure is then the same as if we were working with our actual data.
```
from tsfresh import extract_features
#features extraction
extracted_features = extract_features(timeSeries, column_id="indice")
extracted_features.shape
list(extracted_features.columns.values)
n=217
features = pd.DataFrame()
Mean=[]
Var=[]
aCF1=[]
Peak=[]
Entropy=[]
Cpoints=[]
for muni in listaMunicp:
municipio=notificaciones['distrito_nombre']==muni
notif_x_municp=notificaciones[municipio]
notif_x_municp = notif_x_municp.reset_index(drop=True)
notif_x_municp = notif_x_municp['incidencia']
notif_x_municp = notif_x_municp.replace('nan', np.nan).fillna(0.000001)
notif_x_municp = notif_x_municp.replace([np.inf, -np.inf], np.nan).fillna(0.000001)
#Features
mean=tsfresh.feature_extraction.feature_calculators.mean(notif_x_municp)
var=tsfresh.feature_extraction.feature_calculators.variance(notif_x_municp)
ACF1=tsfresh.feature_extraction.feature_calculators.autocorrelation(notif_x_municp,1)
peak=tsfresh.feature_extraction.feature_calculators.number_peaks(notif_x_municp,20)
entropy=tsfresh.feature_extraction.feature_calculators.sample_entropy(notif_x_municp)
cpoints=tsfresh.feature_extraction.feature_calculators.number_crossing_m(notif_x_municp,5)
Mean.append(mean)
Var.append(var)
aCF1.append(ACF1)
Peak.append(peak)
Entropy.append(entropy)
Cpoints.append(cpoints)
data_tuples = list(zip(Mean,Var,aCF1,Peak,Entropy,Cpoints))
features = pd.DataFrame(data_tuples, columns =['Mean', 'Var', 'ACF1', 'Peak','Entropy','Cpoints'])
# print the data
features
features.iloc[1]
# Distance matrices for k-means
#Euclidean
f_euclidean_dist = np.zeros((n,n))
for i in range(0,n):
#print("i",i)
for j in range(1,n):
#print("j",j)
f_euclidean_dist[i,j] = euclidean(features.iloc[i].values.flatten(), features.iloc[j].values.flatten())
#RMSE
f_rmse_dist = np.zeros((n,n))
for i in range(0,n):
#print("i",i)
for j in range(0,n):
# print("j",j)
f_rmse_dist[i,j] = rmse(features.iloc[i].values.flatten(), features.iloc[j].values.flatten())
#Corr
#print(features.iloc[i].values.flatten())
#print(features.iloc[j].values.flatten())
print('-------------------------------')
f_corr_dist = np.zeros((n,n))
#for i in range(0,n):
# print("i",i)
# for j in range(0,n):
# print("j",j)
# print(features.iloc[i].values.flatten())
# print(features.iloc[j].values.flatten())
# f_corr_dist[i,j] = corr(features.iloc[i].values.flatten(), features.iloc[j].values.flatten())
#scorr
f_scorr_dist = np.zeros((n,n))
for i in range(0,n):
#print("i",i)
for j in range(0,n):
# print("j",j)
f_scorr_dist[i,j] = scorr(features.iloc[i].values.flatten(), features.iloc[j].values.flatten())
#DTW
f_dtw_dist = np.zeros((n,n))
for i in range(0,n):
#print("i",i)
for j in range(0,n):
# print("j",j)
f_dtw_dist[i,j] = fast_DTW(features.iloc[i].values.flatten(), features.iloc[j].values.flatten())
from yellowbrick.cluster import KElbowVisualizer
model = AgglomerativeClustering()
visualizer = KElbowVisualizer(model, k=(3,50),metric='distortion', timings=False)
visualizer.fit(f_scorr_dist) # Fit the data to the visualizer
visualizer.show() # Finalize and render the figure
k=9
km_euc = KMeans(n_clusters=k).fit_predict(f_euclidean_dist)
silhouette_avg=silhouette_score( f_euclidean_dist, km_euc)
print("KM + euclidian distance: ",silhouette_score( f_euclidean_dist, km_euc))
km_rmse = KMeans(n_clusters=k).fit_predict(f_rmse_dist)
print("KM + rmse distance: ",silhouette_score( f_rmse_dist, km_rmse))
#km_corr = KMeans(n_clusters=k).fit_predict(f_corr_dist)
#print("KM + corr distance: ",silhouette_score( f_corr_dist, km_corr))
#print("KM + corr distance: ",silhouette_score( f_corr_dist, 0.0))
km_scorr = KMeans(n_clusters=k).fit_predict(f_scorr_dist)
print("KM + scorr distance: ",silhouette_score( f_scorr_dist, km_scorr))
km_dtw = KMeans(n_clusters=k).fit_predict(f_dtw_dist)
print("KM + dtw distance: ",silhouette_score( f_dtw_dist, clusters))
# HAC experiments
HAC_euc = AgglomerativeClustering(n_clusters=k).fit_predict(f_euclidean_dist)
silhouette_avg=silhouette_score( f_euclidean_dist, HAC_euc)
print("HAC + euclidian distance: ",silhouette_score( f_euclidean_dist, HAC_euc))
HAC_rmse = AgglomerativeClustering(n_clusters=k).fit_predict(f_rmse_dist)
print("HAC + rmse distance: ",silhouette_score( f_rmse_dist, HAC_rmse))
#HAC_corr = AgglomerativeClustering(n_clusters=k).fit_predict(f_corr_dist)
#print("HAC + corr distance: ",silhouette_score( f_corr_dist,HAC_corr))
print("HAC + corr distance: ",0.0)
HAC_scorr = AgglomerativeClustering(n_clusters=k).fit_predict(f_scorr_dist)
print("HAC + scorr distance: ",silhouette_score( f_scorr_dist, HAC_scorr))
HAC_dtw = AgglomerativeClustering(n_clusters=k).fit_predict(f_dtw_dist)
print("HAC + dtw distance: ",silhouette_score( f_dtw_dist, HAC_dtw))
# DBSCAN experiments
DB_euc = DBSCAN(eps=3, min_samples=2).fit_predict(f_euclidean_dist)
silhouette_avg=silhouette_score( f_euclidean_dist, DB_euc)
print("DBSCAN + euclidian distance: ",silhouette_score( f_euclidean_dist, DB_euc))
DB_rmse = DBSCAN(eps=12, min_samples=10).fit_predict(f_rmse_dist)
#print("DBSCAN + rmse distance: ",silhouette_score( f_rmse_dist, DB_rmse))
#print("DBSCAN + rmse distance: ",0.00000000)
#DB_corr = DBSCAN(eps=3, min_samples=2).fit_predict(f_corr_dist)
#print("DBSCAN + corr distance: ",silhouette_score( f_corr_dist, DB_corr))
print("DBSCAN + corr distance: ",0.0)
DB_scorr = DBSCAN(eps=3, min_samples=2).fit_predict(f_scorr_dist)
print("DBSCAN + scorr distance: ",silhouette_score( f_scorr_dist, DB_scorr))
DB_dtw = DBSCAN(eps=3, min_samples=2).fit_predict(f_dtw_dist)
print("KM + dtw distance: ",silhouette_score( f_dtw_dist, DB_dtw))
```
| github_jupyter |
Text classification with attention and synthetic gradients.
Imports and set-up:
```
%tensorflow_version 2.x
import numpy as np
import tensorflow as tf
import pandas as pd
import subprocess
from sklearn.model_selection import train_test_split
import gensim
import re
import sys
import time
# TODO: actually implement distribution in the training loop
strategy = tf.distribute.get_strategy()
use_mixed_precision = False
tf.config.run_functions_eagerly(False)
tf.get_logger().setLevel('ERROR')
is_tpu = None
try:
tpu = tf.distribute.cluster_resolver.TPUClusterResolver()
is_tpu = True
except ValueError:
is_tpu = False
if is_tpu:
print('TPU available.')
tf.config.experimental_connect_to_cluster(tpu)
tf.tpu.experimental.initialize_tpu_system(tpu)
strategy = tf.distribute.TPUStrategy(tpu)
if use_mixed_precision:
policy = tf.keras.mixed_precision.experimental.Policy('mixed_bfloat16')
tf.keras.mixed_precision.experimental.set_policy(policy)
else:
print('No TPU available.')
result = subprocess.run(
['nvidia-smi', '-L'],
stdout=subprocess.PIPE).stdout.decode("utf-8").strip()
if "has failed" in result:
print("No GPU available.")
else:
print(result)
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy(
tf.distribute.experimental.CollectiveCommunication.NCCL)
if use_mixed_precision:
policy = tf.keras.mixed_precision.experimental.Policy('mixed_float16')
tf.keras.mixed_precision.experimental.set_policy(policy)
```
Downloading the data
```
# Download the Sentiment140 dataset
!mkdir -p data
!wget -nc https://nyc3.digitaloceanspaces.com/ml-files-distro/v1/sentiment-analysis-is-bad/data/training.1600000.processed.noemoticon.csv.zip -P data
!unzip -n -d data data/training.1600000.processed.noemoticon.csv.zip
```
Loading and splitting the data
```
sen140 = pd.read_csv(
"data/training.1600000.processed.noemoticon.csv", encoding='latin-1',
names=["target", "ids", "date", "flag", "user", "text"])
sen140.head()
sen140 = sen140.sample(frac=1).reset_index(drop=True)
sen140 = sen140[['text', 'target']]
features, targets = sen140.iloc[:, 0].values, sen140.iloc[:, 1].values
print("A random tweet\t:", features[0])
# split between train and test sets
x_train, x_test, y_train, y_test = train_test_split(features,
targets,
test_size=0.33)
y_train = y_train.astype("float32") / 4.0
y_test = y_test.astype("float32") / 4.0
x_train = np.expand_dims(x_train, -1)
x_test = np.expand_dims(x_test, -1)
```
Preprocessing data
```
def process_tweet(x):
x = x.strip()
x = x.lower()
x = re.sub(r"[^a-zA-Z0-9üöäÜÖÄß\.,!\?\-%\$€\/ ]+'", ' ', x)
x = re.sub('([\.,!\?\-%\$€\/])', r' \1 ', x)
x = re.sub('\s{2,}', ' ', x)
x = x.split()
x.append("[&END&]")
length = len(x)
return x
tweets_train = []
tweets_test = []
for tweet in x_train:
tweets_train.append(process_tweet(tweet[0]))
for tweet in x_test:
tweets_test.append(process_tweet(tweet[0]))
# Building the initial vocab with all words from the training set
def add_or_update_word(_vocab, word):
entry = None
if word in _vocab:
entry = _vocab[word]
entry = (entry[0], entry[1] + 1)
else:
entry = (len(_vocab), 1)
_vocab[word] = entry
vocab_pre = {}
# "[&END&]" is for padding, "[&UNK&]" for unknown words
add_or_update_word(vocab_pre, "[&END&]")
add_or_update_word(vocab_pre, "[&UNK&]")
for tweet in tweets_train:
for word in tweet:
add_or_update_word(vocab_pre, word)
# limiting the vocabulary to only include words that appear at least 3 times
# in the training data set. Reduces vocab size to about 1/6th.
# This is to make it harder for the model to overfit by focusing on words that
# may only appear in the training data, and also to generally make it learn to
# handle unknown words (more robust)
keys = vocab_pre.keys()
vocab = {}
vocab["[&END&]"] = 0
vocab["[&UNK&]"] = 1
for key in keys:
freq = vocab_pre[key][1]
index = vocab_pre[key][0]
if freq >= 3 and index > 1:
vocab[key] = len(vocab)
# Replace words that have been removed from the vocabulary with "[&UNK&]" in
# both the training and testing data
def filter_unknown(_in, _vocab):
for tweet in _in:
for i in range(len(tweet)):
if not tweet[i] in _vocab:
tweet[i] = "[&UNK&]"
filter_unknown(tweets_train, vocab)
filter_unknown(tweets_test, vocab)
```
Using gensim word2vec to get a good word embedding.
```
# train the embedding
embedding_dims = 128
embedding = gensim.models.Word2Vec(tweets_train,
size=embedding_dims, min_count=0)
def tokenize(_in, _vocab):
_out = []
for i in range(len(_in)):
tweet = _in[i]
wordlist = []
for word in tweet:
wordlist.append(_vocab[word].index)
_out.append(wordlist)
return _out
tokens_train = tokenize(tweets_train, embedding.wv.vocab)
tokens_test = tokenize(tweets_test, embedding.wv.vocab)
```
Creating modules and defining the model.
```
class SequenceCollapseAttention(tf.Module):
'''
Collapses a sequence of arbitrary length into num_out_entries entries from
the sequence according to dot-product attention. So, a variable length
sequence is reduced to a sequence of a fixed, known length.
'''
def __init__(self,
num_out_entries,
initializer=tf.keras.initializers.HeNormal,
name=None):
super().__init__(name=name)
self.is_built = False
self.num_out_entries = num_out_entries
self.initializer = initializer()
def __call__(self, keys, query):
if not self.is_built:
self.weights = tf.Variable(
self.initializer([query.shape[-1], self.num_out_entries]),
trainable=True)
self.biases = tf.Variable(tf.zeros([self.num_out_entries]),
trainable=True)
self.is_built = True
scores = tf.linalg.matmul(query, self.weights) + self.biases
scores = tf.transpose(scores, perm=(0, 2, 1))
scores = tf.nn.softmax(scores)
output = tf.linalg.matmul(scores, keys)
return output
class WordEmbedding(tf.Module):
'''
Creates a word-embedding module from a provided embedding matrix.
'''
def __init__(self, embedding_matrix, trainable=False, name=None):
super().__init__(name=name)
self.embedding = tf.Variable(embedding_matrix, trainable=trainable)
def __call__(self, x):
return tf.nn.embedding_lookup(self.embedding, x)
testvar = None
class PositionalEncoding1D(tf.Module):
'''
Positional encoding as in the Attention Is All You Need paper. I hope.
For experimentation, the weight by which the positional information is mixed
into the input vectors is learned.
'''
def __init__(self, axis=-2, base=1000, name=None):
super().__init__(name=name)
self.axis = axis
self.base = base
self.encoding_weight = tf.Variable([2.0], trainable=True)
testvar = self.encoding_weight
def __call__(self, x):
sequence_length = tf.shape(x)[self.axis]
d = tf.shape(x)[-1]
T = tf.shape(x)[self.axis]
pos_enc = tf.range(0, d / 2, delta=1, dtype=tf.float32)
pos_enc = (-2.0 / tf.cast(d, dtype=tf.float32)) * pos_enc
base = tf.cast(tf.fill(tf.shape(pos_enc), self.base), dtype=tf.float32)
pos_enc = tf.math.pow(base, pos_enc)
pos_enc = tf.expand_dims(pos_enc, axis=0)
pos_enc = tf.tile(pos_enc, [T, 1])
t = tf.expand_dims(tf.range(1, T+1, delta=1, dtype=tf.float32), axis=-1)
pos_enc = tf.math.multiply(pos_enc, t)
pos_enc_sin = tf.expand_dims(tf.math.sin(pos_enc), axis=-1)
pos_enc_cos = tf.expand_dims(tf.math.cos(pos_enc), axis=-1)
pos_enc = tf.concat((pos_enc_sin, pos_enc_cos), axis=-1)
pos_enc = tf.reshape(pos_enc, [T, d])
return x + (pos_enc * self.encoding_weight)
class MLP_Block(tf.Module):
'''
With batch normalization before the activations.
A regular old multilayer perceptron, hidden shapes are defined by the
"shapes" argument.
'''
def __init__(self,
shapes,
initializer=tf.keras.initializers.HeNormal,
name=None,
activation=tf.nn.swish,
trainable_batch_norms=False):
super().__init__(name=name)
self.is_built = False
self.shapes = shapes
self.initializer = initializer()
self.weights = [None] * len(shapes)
self.biases = [None] * len(shapes)
self.bnorms = [None] * len(shapes)
self.activation = activation
self.trainable_batch_norms = trainable_batch_norms
def _build(self, x):
for n in range(0, len(self.shapes)):
in_shape = x.shape[-1] if n == 0 else self.shapes[n - 1]
factor = 1 if self.activation != tf.nn.crelu or n == 0 else 2
self.weights[n] = tf.Variable(
self.initializer([in_shape * factor, self.shapes[n]]),
trainable=True)
self.biases[n] = tf.Variable(tf.zeros([self.shapes[n]]),
trainable=True)
self.bnorms[n] = tf.keras.layers.BatchNormalization(
trainable=self.trainable_batch_norms)
self.is_built = True
def __call__(self, x, training=False):
if not self.is_built:
self._build(x)
h = x
for n in range(len(self.shapes)):
h = tf.linalg.matmul(h, self.weights[n]) + self.biases[n]
h = self.bnorms[n](h, training=training)
h = self.activation(h)
return h
class SyntheticGradient(tf.Module):
'''
An implementation of synthetic gradients. When added to a model, this
module will intercept incoming gradients and replace them by learned,
synthetic ones.
If you encounter NANs, try setting the sg_output_scale parameter to a lower
value, or increase the number of initial_epochs or epochs.
When the model using this module does not learn, the generator might be too
simple, the sg_output_scale might be too low, the learning rate of the
generator might be too large or too low, or the number of epochs might be
too large or too low.
If the number of initial epochs is too large, the generator can get stuck
in a local minimum and fail to learn.
The relative_generator_hidden_shapes list defines the shapes of the hidden
layers of the generator as a multiple of its input dimension. For an affine
transformation, pass an empty list.
'''
def __init__(self,
initializer=tf.keras.initializers.GlorotUniform,
activation=tf.nn.tanh,
relative_generator_hidden_shapes=[6, ],
learning_rate=0.01,
epochs=1,
initial_epochs=16,
sg_output_scale=1,
name=None):
super().__init__(name=name)
self.is_built = False
self.initializer = initializer
self.activation = activation
self.relative_generator_hidden_shapes = relative_generator_hidden_shapes
self.initial_epochs = initial_epochs
self.epochs = epochs
self.sg_output_scale = sg_output_scale
self.optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)
def build(self, xy, dy):
'''
Builds the gradient generator on its first run, and trains on the first
incoming batch of gradients for a number of epochs to avoid bad results
(including NANs) in the first few batches where the generator still
outputs bad approximations. To further reduce NANs due to bad gradients,
a fixed scaler for the outputs of the generator is computed based on the
first batch.
'''
if self.is_built:
return
if len(self.relative_generator_hidden_shapes) > 0:
generator_shape = [
xy.shape[-1] * mult
for mult in
self.relative_generator_hidden_shapes]
self.generator_hidden = MLP_Block(
generator_shape,
activation=self.activation,
initializer=self.initializer,
trainable_batch_norms=False)
else:
self.generator_hidden = tf.identity
self.generator_out = MLP_Block(
[dy.shape[-1]],
activation=tf.identity,
initializer=self.initializer,
trainable_batch_norms=False)
# calculate a static scaler for the generated gradients to avoid
# overflows due to too large gradients
self.generator_out_scale = 1.0
x = self.generate_gradient(xy) / self.sg_output_scale
mag_y = tf.math.sqrt(tf.math.reduce_sum(tf.math.square(dy), axis=-1))
mag_x = tf.math.sqrt(tf.math.reduce_sum(tf.math.square(x), axis=-1))
mag_scale = tf.math.reduce_mean(mag_y / mag_x,
axis=tf.range(0, tf.rank(dy) - 1))
self.generator_out_scale = tf.Variable(mag_scale, trainable=False)
# train for a number of epochs on the first run, by default 16, to avoid
# bad results in the beginning of training.
for i in range(self.initial_epochs):
self.train_generator(xy, dy)
self.is_built = True
def generate_gradient(self, x):
'''
Just an MLP, or an affine transformation if the hidden shape in the
constructor is set to be empty.
'''
x = self.generator_hidden(x)
out = self.generator_out(x)
out = out * self.generator_out_scale
return out * self.sg_output_scale
def train_generator(self, x, target):
'''
Gradient descent for the gradient generator. This is called every time a
gradient comes in, although in theory (especially with deeper gradient
generators) once the gradients are modeled sufficiently, it could be OK
to stop training on incoming gradients, thus fully decoupling the lower
parts of the network from the upper parts relative to this SG module.
'''
with tf.GradientTape() as tape:
l2_loss = target - self.generate_gradient(x)
l2_loss = tf.math.reduce_sum(tf.math.square(l2_loss), axis=-1)
# l2_loss = tf.math.sqrt(l2_dist)
grads = tape.gradient(l2_loss, self.trainable_variables)
self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
@tf.custom_gradient
def sg(self, x, y):
'''
In the forward pass it is essentially a no-op (identity). In the
backwards pass it replaces the incoming gradient by a synthetic one.
'''
x = tf.identity(x)
def grad(dy):
# concat x and the label to be inputs for the generator:
xy = self.concat_x_and_y(x, y)
if not self.is_built:
self.build(xy, dy)
# train the generator on the incoming gradient:
for i in range(self.epochs):
self.train_generator(xy, dy)
# return the gradient. The second return value is the gradient for y
# which should be zero since we only need y (labels) to generate the
# synthetic gradients
dy = self.generate_gradient(xy)
return dy, tf.zeros(tf.shape(y))
return x, grad
def __call__(self, x, y):
return self.sg(x, y)
def concat_x_and_y(self, x, y):
'''
Probably an overly complex yet incomplete solution to a rather small
inconvenience.
Inconvenience: The gradient generators take the output of the last
module AND the target/labels of the network as inputs. But those two
tensors can be of different shapes. The obvious solution would be to
manually reshape the targets so they can be concatenated with the
outputs of the past state. But because I wanted this SG module to be as
"plug-and-play" as possible, I attempted automatic reshaping.
Should work for 1d->1d, and 1d-sequence -> 1d, possibly 1d seq->seq,
unsure about the rest.
'''
# insert as many dims before the last dim of y to give it the same rank
# as x
amount = tf.math.maximum(tf.rank(x) - tf.rank(y), 0)
new_shape = tf.concat((tf.shape(y)[:-1],
tf.tile([1], [amount]),
[tf.shape(y)[-1]]), axis=-1)
y = tf.reshape(y, new_shape)
# tile the added dims such that x and y can be concatenated
# In order to tile only the added dims, I need to set the dimensions
# with a length of 1 (except the last) to the length of the
# corresponding dimensions in x, while setting the rest to 1.
# This is waiting to break.
mask = tf.cast(tf.math.less_equal(tf.shape(y),
tf.constant([1])), dtype=tf.int32)
# ignore the last dim
mask = tf.concat([mask[:-1], tf.constant([0])], axis=-1)
zeros_to_ones = tf.math.subtract(
tf.ones(tf.shape(mask), dtype=tf.int32),
mask)
# has ones where there is a one in the shape, now the 1s are set to the
# length in x
mask = tf.math.multiply(mask, tf.shape(x))
# add ones to all other dimensions to preserve their shape
mask = tf.math.add(zeros_to_ones, mask)
# tile
y = tf.tile(y, mask)
return tf.concat((x, y), axis=-1)
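# Shape sketch (illustrative example only): for x of shape [batch, T, D] and
# y of shape [batch, 1], y is first reshaped to [batch, 1, 1], then tiled to
# [batch, T, 1], and the final concat yields a generator input of shape
# [batch, T, D + 1].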
class FlattenL2D(tf.Module):
"Flattens the last two dimensions only"
def __init__(self, name=None):
super().__init__(name=name)
def __call__(self, x):
new_shape = tf.concat(
(tf.shape(x)[:-2], [(tf.shape(x)[-1]) * (tf.shape(x)[-2])]),
axis=-1)
return tf.reshape(x, new_shape)
initializer = tf.keras.initializers.HeNormal
class SentimentAnalysisWithAttention(tf.Module):
def __init__(self, name=None):
super().__init__(name=name)
# Structure and the idea behind it:
# 1: The input sequence is embedded and is positionally encoded.
# 2.1: An MLP block ('query') computes scores for the following
# attention layer for each entry in the sequence. Ie, it decides
# which words are worth a closer look.
# 2.2: An attention layer selects n positionally encoded word
# embeddings from the input sequence based on the learned queries.
# 3: The result is flattened into a tensor of known shape and a number
# of dense layers compute the final classification.
self.embedding = WordEmbedding(embedding.wv.vectors)
self.batch_norm = tf.keras.layers.BatchNormalization(trainable=True)
self.pos_enc = PositionalEncoding1D()
self.query = MLP_Block([256, 128], initializer=initializer)
self.attention = SequenceCollapseAttention(num_out_entries=9,
initializer=initializer)
self.flatten = FlattenL2D()
self.dense = MLP_Block([512, 256, 128, 64],
initializer=initializer,
trainable_batch_norms=True)
self.denseout = MLP_Block([1],
initializer=initializer,
activation=tf.nn.sigmoid,
trainable_batch_norms=True)
# Synthetic gradient modules for the various layers.
self.sg_query = SyntheticGradient(relative_generator_hidden_shapes=[9])
self.sg_attention = SyntheticGradient()
self.sg_dense = SyntheticGradient()
def __call__(self, x, y=tf.constant([]), training=False):
x = self.embedding(x)
x = self.pos_enc(x)
x = self.batch_norm(x, training=training)
q = self.query(x, training=training)
# q = self.sg_query(q, y) # SG
x = self.attention(x, q)
x = self.flatten(x)
x = self.sg_attention(x, y) # SG
x = self.dense(x, training=training)
x = self.sg_dense(x, y) # SG
output = self.denseout(x, training=training)
return output
model = SentimentAnalysisWithAttention()
class BatchGenerator(tf.keras.utils.Sequence):
'''
Creates batches from the given data; specifically, it pads the sequences
in each batch only as much as necessary to make every sequence within that
batch the same length.
'''
def __init__(self, inputs, labels, padding, batch_size):
self.batch_size = batch_size
self.labels = labels
self.inputs = inputs
self.padding = padding
# self.on_epoch_end()
def __len__(self):
return int(np.floor(len(self.inputs) / self.batch_size))
def __getitem__(self, index):
max_length = 0
start_index = index * self.batch_size
end_index = start_index + self.batch_size
for i in range(start_index, end_index):
l = len(self.inputs[i])
if l > max_length:
max_length = l
out_x = np.empty([self.batch_size, max_length], dtype='int32')
out_y = np.empty([self.batch_size, 1], dtype='float32')
for i in range(self.batch_size):
out_y[i] = self.labels[start_index + i]
tweet = self.inputs[start_index + i]
l = len(tweet)
l = min(l, max_length)
for j in range(0, l):
out_x[i][j] = tweet[j]
for j in range(l, max_length):
out_x[i][j] = self.padding
return out_x, out_y
```
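As a quick illustration of the per-batch padding, here is a toy sketch; the token ids, labels, and the padding index `0` below are made up purely for illustration:
```
toy_inputs = [[5, 7], [1, 2, 3, 4], [9]]
toy_labels = [1.0, 0.0, 1.0]
toy_gen = BatchGenerator(toy_inputs, toy_labels, padding=0, batch_size=3)
x0, y0 = toy_gen[0]
print(x0.shape)  # (3, 4): padded only to the longest sequence in this batch
print(x0)        # shorter sequences are right-padded with the padding index
```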
Training the model
```
def train_validation_loop(model_caller, data_generator, epochs, metrics=[]):
batch_time = -1
for epoch in range(epochs):
start_e = time.time()
start_p = time.time()
num_batches = len(data_generator)
predictions = [None] * num_batches
for b in range(num_batches):
start_b = time.time()
x_batch, y_batch = data_generator[b]
predictions[b] = model_caller(x_batch, y_batch, metrics=metrics)
# progress output
elapsed_t = time.time() - start_b
if batch_time != -1:
batch_time = 0.05 * elapsed_t + 0.95 * batch_time
else:
batch_time = elapsed_t
if int(time.time() - start_p) >= 1 or b == (num_batches - 1):
start_p = time.time()
eta = int((num_batches - b) * batch_time)
ela = int(time.time() - start_e)
out_string = "\rEpoch %d/%d,\tbatch %d/%d,\telapsed: %d/%ds" % (
(epoch + 1), epochs, b + 1, num_batches, ela, ela + eta)
for metric in metrics:
out_string += "\t %s: %f" % (metric.name,
float(metric.result()))
out_length = len(out_string)
sys.stdout.write(out_string)
sys.stdout.flush()
for metric in metrics:
metric.reset_states()
sys.stdout.write("\n")
return np.concatenate(predictions)
def trainer(model, loss, optimizer):
@tf.function(experimental_relax_shapes=True)
def training_step(x_batch,
y_batch,
model=model,
loss=loss,
optimizer=optimizer,
metrics=[]):
with tf.GradientTape() as tape:
predictions = model(x_batch, y_batch, training=True)
losses = loss(y_batch, predictions)
grads = tape.gradient(losses, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
for metric in metrics:
metric.update_state(y_batch, predictions)
return predictions
return training_step
# The model's output layer applies a sigmoid, so the loss and metrics receive
# probabilities rather than logits.
loss = tf.keras.losses.BinaryCrossentropy(from_logits=False)
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9)
metrics = (tf.keras.metrics.BinaryCrossentropy(from_logits=False),
tf.keras.metrics.BinaryAccuracy())
batch_size = 512
epochs = 4
padding = embedding.wv.vocab["[&END&]"].index
training_generator = BatchGenerator(tokens_train,
y_train,
padding,
batch_size=batch_size)
train_validation_loop(trainer(model, loss, optimizer),
training_generator,
epochs,
metrics)
```
Testing it on validation data
```
def validator(model):
@tf.function(experimental_relax_shapes=True)
def validation_step(x_batch, y_batch, model=model, metrics=[]):
predictions = model(x_batch, training=False)
for metric in metrics:
metric.update_state(y_batch, predictions)
return predictions
return validation_step
testing_generator = BatchGenerator(tokens_test,
y_test,
padding,
batch_size=batch_size)
predictions = train_validation_loop(validator(model),
testing_generator,
1,
metrics)
```
Get some example results from the test data.
```
most_evil_tweet=None
most_evil_evilness=1
most_cool_tweet=None
most_cool_coolness=1
most_angelic_tweet=None
most_angelic_angelicness=0
y_pred = np.concatenate(predictions)
for i in range(0,len(y_pred)):
judgement = y_pred[i]
polarity = abs(judgement-0.5)*2
if judgement>=most_angelic_angelicness:
most_angelic_angelicness = judgement
most_angelic_tweet = x_test[i]
if judgement<=most_evil_evilness:
most_evil_evilness = judgement
most_evil_tweet = x_test[i]
if polarity<=most_cool_coolness:
most_cool_coolness = polarity
most_cool_tweet = x_test[i]
print("The evilest tweet known to humankind:\n\t", most_evil_tweet)
print("Evilness: ", 1.0-most_evil_evilness)
print("\n")
print("The most angelic tweet any mortal has ever laid eyes upon:\n\t",
most_angelic_tweet)
print("Angelicness: ", most_angelic_angelicness)
print("\n")
print("This tweet is too cool for you, don't read:\n\t", most_cool_tweet)
print("Coolness: ", 1.0-most_cool_coolness)
```
| github_jupyter |
<a href="https://colab.research.google.com/github/DeepInsider/playground-data/blob/master/docs/articles/deeplearningdat.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
##### Copyright 2019 Digital Advantage - Deep Insider.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Notebook for the series "Introduction to Machine Learning & Deep Learning (Data Structures Edition)"
<table valign="middle">
<td>
<a target="_blank" href="https://deepinsider.jp/tutor/deeplearningdat"> <img src="https://re.deepinsider.jp/img/ml-logo/manabu.svg"/>Deep Insiderで記事を読む</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/DeepInsider/playground-data/blob/master/docs/articles/deeplearningdat.ipynb"> <img src="https://re.deepinsider.jp/img/ml-logo/gcolab.svg" />Google Colabで実行する</a>
</td>
<td>
<a target="_blank" href="https://github.com/DeepInsider/playground-data/blob/master/docs/articles/deeplearningdat.ipynb"> <img src="https://re.deepinsider.jp/img/ml-logo/github.svg" />GitHubでソースコードを見る</a>
</td>
</table>
Note: Run the cells in order from the top. Some cells reuse results from code executed above, so running only part of the notebook may cause errors.
To run all of the code at once, click [Runtime] - [Run all] in the menu bar.
Note: This notebook is written to also run on Python 2, but using Python 3 is recommended.
To use Python 3, choose [Runtime] - [Change runtime type] in the menu bar, select "Python 3" in the [Runtime type] field of the [Notebook settings] dialog that appears, and click the [Save] button at its bottom right.
```
# Support for Python version 2
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import sys
print(sys.version_info.major) # 3 # major version
print(sys.version_info.minor) # 6 # minor version
```
## Data structures in the Python language
### Representing a "single" piece of data in Python
#### Listing 1-1: Code representing a "single" data value
```
height = 177.2
print(height) # Prints 177.2
```
#### Listing 1-2: Outputting an object's evaluation result by writing just the variable name
```
height # Outputs 177.2
```
#### Listing 1-3: The difference between object-evaluation output and print() output
```
import numpy as np
array2d = np.array([ [ 165.5, 58.4 ],
[ 177.2, 67.8 ],
[ 183.2, 83.7 ] ])
print(array2d) # [[165.5 58.4]
# [177.2 67.8]
# [183.2 83.7]]
array2d # array([[165.5, 58.4],
# [177.2, 67.8],
# [183.2, 83.7]])
```
#### Listing 2-1: Representing multiple values as several "single" data items
```
hana_height = 165.5
taro_height = 177.2
jiro_height = 183.2
hana_height, taro_height, jiro_height # (165.5, 177.2, 183.2)
```
#### Listing 2-2: Code representing "multiple (1D)" data
```
heights = [ 165.5, 177.2, 183.2 ]
heights # [165.5, 177.2, 183.2]
```
### Representing "multiple (2D)" data in Python
#### Listing 3: Code representing "multiple (2D)" data
```
people = [ [ 165.5, 58.4 ],
[ 177.2, 67.8 ],
[ 183.2, 83.7 ] ]
people # [[165.5, 58.4], [177.2, 67.8], [183.2, 83.7]]
```
### Representing "multiple (multi-dimensional)" data in Python
#### Listing 4: Code representing "multiple (3D)" data
```
list3d = [
[ [ 165.5, 58.4 ], [ 177.2, 67.8 ], [ 183.2, 83.7 ] ],
[ [ 155.5, 48.4 ], [ 167.2, 57.8 ], [ 173.2, 73.7 ] ],
[ [ 145.5, 38.4 ], [ 157.2, 47.8 ], [ 163.2, 63.7 ] ]
]
list3d # [[[165.5, 58.4], [177.2, 67.8], [183.2, 83.7]],
# [[155.5, 48.4], [167.2, 57.8], [173.2, 73.7]],
# [[145.5, 38.4], [157.2, 47.8], [163.2, 63.7]]]
```
## Data structures in AI programs (basics)
### Installing NumPy
#### Listing 5-1: Shell command to install the `numpy` package
```
!pip install numpy
```
### Importing the numpy module
#### Listing 5-2: Example code importing the `numpy` module
```
import numpy as np
```
### Creating objects of NumPy's "multidimensional array" data type
#### Listing 5-3: Creating a multidimensional array with the `array` function (using literal values)
```
array2d = np.array([ [ 165.5, 58.4 ],
[ 177.2, 67.8 ],
[ 183.2, 83.7 ] ])
array2d # array([[165.5, 58.4],
# [177.2, 67.8],
# [183.2, 83.7]])
```
#### Listing 5-4: Creating a multidimensional array with the `array` function (using a variable)
```
array3d = np.array(list3d)
array3d # array([[[165.5, 58.4],
# [177.2, 67.8],
# [183.2, 83.7]],
#
# [[155.5, 48.4],
# [167.2, 57.8],
# [173.2, 73.7]],
#
# [[145.5, 38.4],
# [157.2, 47.8],
# [163.2, 63.7]]])
```
#### Listing 5-5: Converting to a multidimensional list with the `ndarray` class's `tolist()` method
```
tolist3d = array3d.tolist()
tolist3d # [[[165.5, 58.4], [177.2, 67.8], [183.2, 83.7]],
# [[155.5, 48.4], [167.2, 57.8], [173.2, 73.7]],
# [[145.5, 38.4], [157.2, 47.8], [163.2, 63.7]]]
```
## Data structures in AI programs (advanced)
### Installing Pandas
#### Listing 6: Shell command to install the `pandas` package
```
!pip install pandas
```
#### Figure 7-1: Displaying NumPy data as a table with Pandas
```
import pandas as pd
df = pd.DataFrame(array2d, columns=['身長', '体重'])
df
```
## Computing with data in AI programs
### Why AI and deep learning use mathematics
#### Listing 7-1: Computing the average height of three people (using individual values)
```
# hana_height, taro_height, jiro_height = 165.5, 177.2, 183.2 # already declared in Listing 2-1 of Lesson 1
average_height = (
hana_height +
taro_height +
jiro_height
) / 3
print(average_height) # 175.29999999999998
```
#### Listing 7-2: Computing the average height of the three people (using a NumPy array)
```
import numpy as np
array1d = np.array([ 165.5, 177.2, 183.2 ])
average_height = np.average(array1d)
average_height # 175.29999999999998
```
### Calculations with NumPy
#### Listing 8-1: Displaying various properties of a 3x2 matrix
```
array2d = np.array([ [ 165.5, 58.4 ],
[ 177.2, 67.8 ],
[ 183.2, 83.7 ] ])
print(array2d.shape) # (3, 2)
print(array2d.ndim) # 2
print(array2d.size) # 6
```
#### Listing 8-2: Matrix computation with NumPy
```
diet = np.array([ [ 1.0, 0.0 ],
[ 0.0, 0.9 ] ])
lose_weights = diet @ array2d.T
# For Python 3.5 and later. For earlier versions such as Python 2, use the matmul function below instead.
#lose_weights = np.matmul(diet, array2d.T)
print(lose_weights.T) # [[165.5 52.56]
# [177.2 61.02]
# [183.2 75.33]]
```
#### Listing 8-3: Computing the average of all elements (not split by height/weight)
```
averages = np.average(array2d)
averages # 122.63333333333334
```
#### Listing 8-4: Computing separate averages for height and weight
```
averages = np.average(array2d, axis=0)
averages # array([175.3 , 69.96666667])
```
#### Listing 8-5: Computing per-group height/weight averages from a 3D array
```
array3d = np.array(
[ [ [ 165.5, 58.4 ], [ 177.2, 67.8 ], [ 183.2, 83.7 ] ],
[ [ 155.5, 48.4 ], [ 167.2, 57.8 ], [ 173.2, 73.7 ] ],
[ [ 145.5, 38.4 ], [ 157.2, 47.8 ], [ 163.2, 63.7 ] ] ]
)
avr3d = np.average(array3d, axis=1)
print(avr3d) # [[175.3 69.96666667]
# [165.3 59.96666667]
# [155.3 49.96666667]]
```
## Well done! You have completed the data structures lesson.
| github_jupyter |
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Time series forecasting
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/structured_data/time_series"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/structured_data/time_series.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/structured_data/time_series.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/structured_data/time_series.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This tutorial is an introduction to time series forecasting using TensorFlow. It builds a few different styles of models including Convolutional and Recurrent Neural Networks (CNNs and RNNs).
This is covered in two main parts, with subsections:
* Forecast for a single timestep:
* A single feature.
* All features.
* Forecast multiple steps:
* Single-shot: Make the predictions all at once.
* Autoregressive: Make one prediction at a time and feed the output back to the model.
## Setup
```
import os
import datetime
import IPython
import IPython.display
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import tensorflow as tf
mpl.rcParams['figure.figsize'] = (8, 6)
mpl.rcParams['axes.grid'] = False
```
## The weather dataset
This tutorial uses a <a href="https://www.bgc-jena.mpg.de/wetter/" class="external">weather time series dataset</a> recorded by the <a href="https://www.bgc-jena.mpg.de" class="external">Max Planck Institute for Biogeochemistry</a>.
This dataset contains 14 different features such as air temperature, atmospheric pressure, and humidity. These were collected every 10 minutes, beginning in 2003. For efficiency, you will use only the data collected between 2009 and 2016. This section of the dataset was prepared by François Chollet for his book [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python).
```
zip_path = tf.keras.utils.get_file(
origin='https://storage.googleapis.com/tensorflow/tf-keras-datasets/jena_climate_2009_2016.csv.zip',
fname='jena_climate_2009_2016.csv.zip',
extract=True)
csv_path, _ = os.path.splitext(zip_path)
```
This tutorial will just deal with **hourly predictions**, so start by sub-sampling the data from 10 minute intervals to 1h:
```
df = pd.read_csv(csv_path)
# slice [start:stop:step], starting from index 5 take every 6th record.
df = df[5::6]
date_time = pd.to_datetime(df.pop('Date Time'), format='%d.%m.%Y %H:%M:%S')
```
Let's take a glance at the data. Here are the first few rows:
```
df.head()
```
Here is the evolution of a few features over time.
```
plot_cols = ['T (degC)', 'p (mbar)', 'rho (g/m**3)']
plot_features = df[plot_cols]
plot_features.index = date_time
_ = plot_features.plot(subplots=True)
plot_features = df[plot_cols][:480]
plot_features.index = date_time[:480]
_ = plot_features.plot(subplots=True)
```
### Inspect and cleanup
Next look at the statistics of the dataset:
```
df.describe().transpose()
```
#### Wind velocity
One thing that should stand out is the `min` value of the wind velocity, `wv (m/s)` and `max. wv (m/s)` columns. This `-9999` is likely erroneous. There's a separate wind direction column, so the velocity should be `>=0`. Replace it with zeros:
```
wv = df['wv (m/s)']
bad_wv = wv == -9999.0
wv[bad_wv] = 0.0
max_wv = df['max. wv (m/s)']
bad_max_wv = max_wv == -9999.0
max_wv[bad_max_wv] = 0.0
# The above inplace edits are reflected in the DataFrame
df['wv (m/s)'].min()
```
### Feature engineering
Before diving in to build a model it's important to understand your data, and be sure that you're passing the model appropriately formatted data.
#### Wind
The last column of the data, `wd (deg)`, gives the wind direction in units of degrees. Angles do not make good model inputs: 360° and 0° should be close to each other and wrap around smoothly. Direction shouldn't matter if the wind is not blowing.
Right now the distribution of wind data looks like this:
```
plt.hist2d(df['wd (deg)'], df['wv (m/s)'], bins=(50, 50), vmax=400)
plt.colorbar()
plt.xlabel('Wind Direction [deg]')
plt.ylabel('Wind Velocity [m/s]')
```
But this will be easier for the model to interpret if you convert the wind direction and velocity columns to a wind **vector**:
```
wv = df.pop('wv (m/s)')
max_wv = df.pop('max. wv (m/s)')
# Convert to radians.
wd_rad = df.pop('wd (deg)')*np.pi / 180
# Calculate the wind x and y components.
df['Wx'] = wv*np.cos(wd_rad)
df['Wy'] = wv*np.sin(wd_rad)
# Calculate the max wind x and y components.
df['max Wx'] = max_wv*np.cos(wd_rad)
df['max Wy'] = max_wv*np.sin(wd_rad)
```
The distribution of wind vectors is much simpler for the model to correctly interpret.
```
plt.hist2d(df['Wx'], df['Wy'], bins=(50, 50), vmax=400)
plt.colorbar()
plt.xlabel('Wind X [m/s]')
plt.ylabel('Wind Y [m/s]')
ax = plt.gca()
ax.axis('tight')
```
#### Time
Similarly the `Date Time` column is very useful, but not in this string form. Start by converting it to seconds:
```
timestamp_s = date_time.map(datetime.datetime.timestamp)
```
Similar to the wind direction, the time in seconds is not a useful model input. Being weather data, it has clear daily and yearly periodicity. There are many ways you could deal with periodicity.
A simple approach to convert it to a usable signal is to use `sin` and `cos` to convert the time to clear "Time of day" and "Time of year" signals:
```
day = 24*60*60
year = (365.2425)*day
df['Day sin'] = np.sin(timestamp_s * (2 * np.pi / day))
df['Day cos'] = np.cos(timestamp_s * (2 * np.pi / day))
df['Year sin'] = np.sin(timestamp_s * (2 * np.pi / year))
df['Year cos'] = np.cos(timestamp_s * (2 * np.pi / year))
plt.plot(np.array(df['Day sin'])[:25])
plt.plot(np.array(df['Day cos'])[:25])
plt.xlabel('Time [h]')
plt.title('Time of day signal')
```
This gives the model access to the most important frequency features. In this case you knew ahead of time which frequencies were important.
If you didn't know, you can determine which frequencies are important using an `fft`. To check our assumptions, here is the `tf.signal.rfft` of the temperature over time. Note the obvious peaks at frequencies near `1/year` and `1/day`:
```
fft = tf.signal.rfft(df['T (degC)'])
f_per_dataset = np.arange(0, len(fft))
n_samples_h = len(df['T (degC)'])
hours_per_year = 24*365.2524
years_per_dataset = n_samples_h/(hours_per_year)
f_per_year = f_per_dataset/years_per_dataset
plt.step(f_per_year, np.abs(fft))
plt.xscale('log')
plt.ylim(0, 400000)
plt.xlim([0.1, max(plt.xlim())])
plt.xticks([1, 365.2524], labels=['1/Year', '1/day'])
_ = plt.xlabel('Frequency (log scale)')
```
### Split the data
We'll use a `(70%, 20%, 10%)` split for the training, validation, and test sets. Note the data is **not** being randomly shuffled before splitting. This is for two reasons.
1. It ensures that chopping the data into windows of consecutive samples is still possible.
2. It ensures that the validation/test results are more realistic, being evaluated on data collected after the model was trained.
```
column_indices = {name: i for i, name in enumerate(df.columns)}
n = len(df)
train_df = df[0:int(n*0.7)]
val_df = df[int(n*0.7):int(n*0.9)]
test_df = df[int(n*0.9):]
num_features = df.shape[1]
```
### Normalize the data
It is important to scale features before training a neural network. Normalization is a common way of doing this scaling. Subtract the mean and divide by the standard deviation of each feature.
The mean and standard deviation should only be computed using the training data so that the models have no access to the values in the validation and test sets.
It's also arguable that the model shouldn't have access to future values in the training set when training, and that this normalization should be done using moving averages. That's not the focus of this tutorial, and the validation and test sets ensure that you get (somewhat) honest metrics. So in the interest of simplicity this tutorial uses a simple average.
```
train_mean = train_df.mean()
train_std = train_df.std()
train_df = (train_df - train_mean) / train_std
val_df = (val_df - train_mean) / train_std
test_df = (test_df - train_mean) / train_std
```
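For reference, the moving-average variant mentioned above could look roughly like the sketch below. Here `raw_train_df` is a hypothetical name standing for the un-normalized training split, and the window length (about one month of hourly samples) is an arbitrary choice; this approach is not used anywhere else in the tutorial.
```
# Sketch: normalize each row using statistics from a trailing window,
# so that no row sees future values. Illustrative only.
rolling_mean = raw_train_df.rolling(window=24 * 30, min_periods=1).mean()
rolling_std = raw_train_df.rolling(window=24 * 30, min_periods=1).std().fillna(1.0)
train_df_moving = (raw_train_df - rolling_mean) / rolling_std
```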
Now peek at the distribution of the features. Some features do have long tails, but there are no obvious errors like the `-9999` wind velocity value.
```
df_std = (df - train_mean) / train_std
df_std = df_std.melt(var_name='Column', value_name='Normalized')
plt.figure(figsize=(12, 6))
ax = sns.violinplot(x='Column', y='Normalized', data=df_std)
_ = ax.set_xticklabels(df.keys(), rotation=90)
```
## Data windowing
The models in this tutorial will make a set of predictions based on a window of consecutive samples from the data.
The main features of the input windows are:
* The width (number of time steps) of the input and label windows
* The time offset between them.
* Which features are used as inputs, labels, or both.
This tutorial builds a variety of models (including Linear, DNN, CNN and RNN models), and uses them for both:
* *Single-output*, and *multi-output* predictions.
* *Single-time-step* and *multi-time-step* predictions.
This section focuses on implementing the data windowing so that it can be reused for all of those models.
Depending on the task and type of model you may want to generate a variety of data windows. Here are some examples:
1. To make a single prediction 24h into the future, given 24h of history, you might define a window like this:

2. A model that makes a prediction 1h into the future, given 6h of history would need a window like this:

The rest of this section defines a `WindowGenerator` class. This class can:
1. Handle the indexes and offsets as shown in the diagrams above.
2. Split windows of features into `(features, labels)` pairs.
3. Plot the content of the resulting windows.
4. Efficiently generate batches of these windows from the training, evaluation, and test data, using `tf.data.Dataset`s.
### 1. Indexes and offsets
Start by creating the `WindowGenerator` class. The `__init__` method includes all the necessary logic for the input and label indices.
It also takes the train, eval, and test dataframes as input. These will be converted to `tf.data.Dataset`s of windows later.
```
class WindowGenerator():
def __init__(self, input_width, label_width, shift,
train_df=train_df, val_df=val_df, test_df=test_df,
label_columns=None):
# Store the raw data.
self.train_df = train_df
self.val_df = val_df
self.test_df = test_df
# Work out the label column indices.
self.label_columns = label_columns
if label_columns is not None:
self.label_columns_indices = {name: i for i, name in
enumerate(label_columns)}
self.column_indices = {name: i for i, name in
enumerate(train_df.columns)}
# Work out the window parameters.
self.input_width = input_width
self.label_width = label_width
self.shift = shift
self.total_window_size = input_width + shift
self.input_slice = slice(0, input_width)
self.input_indices = np.arange(self.total_window_size)[self.input_slice]
self.label_start = self.total_window_size - self.label_width
self.labels_slice = slice(self.label_start, None)
self.label_indices = np.arange(self.total_window_size)[self.labels_slice]
def __repr__(self):
return '\n'.join([
f'Total window size: {self.total_window_size}',
f'Input indices: {self.input_indices}',
f'Label indices: {self.label_indices}',
f'Label column name(s): {self.label_columns}'])
```
Here is code to create the 2 windows shown in the diagrams at the start of this section:
```
w1 = WindowGenerator(input_width=24, label_width=1, shift=24,
label_columns=['T (degC)'])
w1
w2 = WindowGenerator(input_width=6, label_width=1, shift=1,
label_columns=['T (degC)'])
w2
```
### 2. Split
Given a list of consecutive inputs, the `split_window` method will convert them to a window of inputs and a window of labels.
The example `w2`, above, will be split like this:

This diagram doesn't show the `features` axis of the data, but this `split_window` function also handles the `label_columns` so it can be used for both the single output and multi-output examples.
```
def split_window(self, features):
inputs = features[:, self.input_slice, :]
labels = features[:, self.labels_slice, :]
if self.label_columns is not None:
labels = tf.stack(
[labels[:, :, self.column_indices[name]] for name in self.label_columns],
axis=-1)
# Slicing doesn't preserve static shape information, so set the shapes
# manually. This way the `tf.data.Datasets` are easier to inspect.
inputs.set_shape([None, self.input_width, None])
labels.set_shape([None, self.label_width, None])
return inputs, labels
WindowGenerator.split_window = split_window
```
Try it out:
```
# Stack three slices, the length of the total window:
example_window = tf.stack([np.array(train_df[:w2.total_window_size]),
np.array(train_df[100:100+w2.total_window_size]),
np.array(train_df[200:200+w2.total_window_size])])
example_inputs, example_labels = w2.split_window(example_window)
print('All shapes are: (batch, time, features)')
print(f'Window shape: {example_window.shape}')
print(f'Inputs shape: {example_inputs.shape}')
print(f'labels shape: {example_labels.shape}')
```
Typically data in TensorFlow is packed into arrays where the outermost index is across examples (the "batch" dimension). The middle indices are the "time" or "space" (width, height) dimension(s). The innermost indices are the features.
The code above took a batch of 3, 7-timestep windows, with 19 features at each time step. It split them into a batch of 6-timestep, 19 feature inputs, and a 1-timestep 1-feature label. The label only has one feature because the `WindowGenerator` was initialized with `label_columns=['T (degC)']`. Initially this tutorial will build models that predict single output labels.
### 3. Plot
Here is a plot method that allows a simple visualization of the split window:
```
w2.example = example_inputs, example_labels
def plot(self, model=None, plot_col='T (degC)', max_subplots=3):
inputs, labels = self.example
plt.figure(figsize=(12, 8))
plot_col_index = self.column_indices[plot_col]
max_n = min(max_subplots, len(inputs))
for n in range(max_n):
plt.subplot(3, 1, n+1)
plt.ylabel(f'{plot_col} [normed]')
plt.plot(self.input_indices, inputs[n, :, plot_col_index],
label='Inputs', marker='.', zorder=-10)
if self.label_columns:
label_col_index = self.label_columns_indices.get(plot_col, None)
else:
label_col_index = plot_col_index
if label_col_index is None:
continue
plt.scatter(self.label_indices, labels[n, :, label_col_index],
edgecolors='k', label='Labels', c='#2ca02c', s=64)
if model is not None:
predictions = model(inputs)
plt.scatter(self.label_indices, predictions[n, :, label_col_index],
marker='X', edgecolors='k', label='Predictions',
c='#ff7f0e', s=64)
if n == 0:
plt.legend()
plt.xlabel('Time [h]')
WindowGenerator.plot = plot
```
This plot aligns inputs, labels, and (later) predictions based on the time that the item refers to:
```
w2.plot()
```
You can plot the other columns, but the example window `w2` configuration only has labels for the `T (degC)` column.
```
w2.plot(plot_col='p (mbar)')
```
### 4. Create `tf.data.Dataset`s
Finally this `make_dataset` method will take a time series `DataFrame` and convert it to a `tf.data.Dataset` of `(input_window, label_window)` pairs using the `preprocessing.timeseries_dataset_from_array` function.
```
def make_dataset(self, data):
data = np.array(data, dtype=np.float32)
ds = tf.keras.preprocessing.timeseries_dataset_from_array(
data=data,
targets=None,
sequence_length=self.total_window_size,
sequence_stride=1,
shuffle=True,
batch_size=32,)
ds = ds.map(self.split_window)
return ds
WindowGenerator.make_dataset = make_dataset
```
The `WindowGenerator` object holds training, validation and test data. Add properties for accessing them as `tf.data.Datasets` using the above `make_dataset` method. Also add a standard example batch for easy access and plotting:
```
@property
def train(self):
return self.make_dataset(self.train_df)
@property
def val(self):
return self.make_dataset(self.val_df)
@property
def test(self):
return self.make_dataset(self.test_df)
@property
def example(self):
"""Get and cache an example batch of `inputs, labels` for plotting."""
result = getattr(self, '_example', None)
if result is None:
# No example batch was found, so get one from the `.train` dataset
result = next(iter(self.train))
# And cache it for next time
self._example = result
return result
WindowGenerator.train = train
WindowGenerator.val = val
WindowGenerator.test = test
WindowGenerator.example = example
```
Now the `WindowGenerator` object gives you access to the `tf.data.Dataset` objects, so you can easily iterate over the data.
The `Dataset.element_spec` property tells you the structure, `dtypes` and shapes of the dataset elements.
```
# Each element is an (inputs, label) pair
w2.train.element_spec
```
Iterating over a `Dataset` yields concrete batches:
```
for example_inputs, example_labels in w2.train.take(1):
print(f'Inputs shape (batch, time, features): {example_inputs.shape}')
print(f'Labels shape (batch, time, features): {example_labels.shape}')
```
## Single step models
The simplest model you can build on this sort of data is one that predicts a single feature's value, 1 timestep (1h) in the future based only on the current conditions.
So start by building models to predict the `T (degC)` value 1h into the future.

Configure a `WindowGenerator` object to produce these single-step `(input, label)` pairs:
```
single_step_window = WindowGenerator(
input_width=1, label_width=1, shift=1,
label_columns=['T (degC)'])
single_step_window
```
The `window` object creates `tf.data.Datasets` from the training, validation, and test sets, allowing you to easily iterate over batches of data.
```
for example_inputs, example_labels in single_step_window.train.take(1):
print(f'Inputs shape (batch, time, features): {example_inputs.shape}')
print(f'Labels shape (batch, time, features): {example_labels.shape}')
```
### Baseline
Before building a trainable model it would be good to have a performance baseline as a point for comparison with the later more complicated models.
This first task is to predict temperature 1h in the future given the current value of all features. The current values include the current temperature.
So start with a model that just returns the current temperature as the prediction, predicting "No change". This is a reasonable baseline since temperature changes slowly. Of course, this baseline will work less well if you make a prediction further in the future.

```
class Baseline(tf.keras.Model):
def __init__(self, label_index=None):
super().__init__()
self.label_index = label_index
def call(self, inputs):
if self.label_index is None:
return inputs
result = inputs[:, :, self.label_index]
return result[:, :, tf.newaxis]
```
Instantiate and evaluate this model:
```
baseline = Baseline(label_index=column_indices['T (degC)'])
baseline.compile(loss=tf.losses.MeanSquaredError(),
metrics=[tf.metrics.MeanAbsoluteError()])
val_performance = {}
performance = {}
val_performance['Baseline'] = baseline.evaluate(single_step_window.val)
performance['Baseline'] = baseline.evaluate(single_step_window.test, verbose=0)
```
That printed some performance metrics, but those don't give you a feeling for how well the model is doing.
The `WindowGenerator` has a plot method, but the plots won't be very interesting with only a single sample. So, create a wider `WindowGenerator` that generates windows of 24h of consecutive inputs and labels at a time.
The `wide_window` doesn't change the way the model operates. The model still makes predictions 1h into the future based on a single input time step. Here the `time` axis acts like the `batch` axis: Each prediction is made independently with no interaction between time steps.
```
wide_window = WindowGenerator(
input_width=24, label_width=24, shift=1,
label_columns=['T (degC)'])
wide_window
```
This expanded window can be passed directly to the same `baseline` model without any code changes. This is possible because the inputs and labels have the same number of timesteps, and the baseline just forwards the input to the output:

```
print('Input shape:', wide_window.example[0].shape)
print('Output shape:', baseline(wide_window.example[0]).shape)
```
Plotting the baseline model's predictions you can see that it is simply the labels, shifted right by 1h.
```
wide_window.plot(baseline)
```
In the above plots of three examples the single step model is run over the course of 24h. This deserves some explanation:
* The blue "Inputs" line shows the input temperature at each time step. The model receives all features; this plot only shows the temperature.
* The green "Labels" dots show the target prediction value. These dots are shown at the prediction time, not the input time. That is why the range of labels is shifted 1 step relative to the inputs.
* The orange "Predictions" crosses are the model's predictions for each output time step. If the model were predicting perfectly the predictions would land directly on the "Labels".
### Linear model
The simplest **trainable** model you can apply to this task is to insert a linear transformation between the input and output. In this case the output from a time step only depends on that step:

A `layers.Dense` with no `activation` set is a linear model. The layer only transforms the last axis of the data from `(batch, time, inputs)` to `(batch, time, units)`; it is applied independently to every item across the `batch` and `time` axes.
```
linear = tf.keras.Sequential([
tf.keras.layers.Dense(units=1)
])
print('Input shape:', single_step_window.example[0].shape)
print('Output shape:', linear(single_step_window.example[0]).shape)
```
This tutorial trains many models, so package the training procedure into a function:
```
MAX_EPOCHS = 20
def compile_and_fit(model, window, patience=2):
early_stopping = tf.keras.callbacks.EarlyStopping(monitor='val_loss',
patience=patience,
mode='min')
model.compile(loss=tf.losses.MeanSquaredError(),
optimizer=tf.optimizers.Adam(),
metrics=[tf.metrics.MeanAbsoluteError()])
history = model.fit(window.train, epochs=MAX_EPOCHS,
validation_data=window.val,
callbacks=[early_stopping])
return history
```
Train the model and evaluate its performance:
```
history = compile_and_fit(linear, single_step_window)
val_performance['Linear'] = linear.evaluate(single_step_window.val)
performance['Linear'] = linear.evaluate(single_step_window.test, verbose=0)
```
Like the `baseline` model, the linear model can be called on batches of wide windows. Used this way, the model makes a set of independent predictions on consecutive time steps. The `time` axis acts like another `batch` axis. There are no interactions between the predictions at each time step.

```
print('Input shape:', wide_window.example[0].shape)
print('Output shape:', linear(wide_window.example[0]).shape)
```
Here is the plot of its example predictions on the `wide_window`; note how in many cases the prediction is clearly better than just returning the input temperature, but in a few cases it's worse:
```
wide_window.plot(linear)
```
One advantage to linear models is that they're relatively simple to interpret.
You can pull out the layer's weights, and see the weight assigned to each input:
```
plt.bar(x = range(len(train_df.columns)),
height=linear.layers[0].kernel[:,0].numpy())
axis = plt.gca()
axis.set_xticks(range(len(train_df.columns)))
_ = axis.set_xticklabels(train_df.columns, rotation=90)
```
Sometimes the model doesn't even place the most weight on the input `T (degC)`. This is one of the risks of random initialization.
### Dense
Before applying models that actually operate on multiple time-steps, it's worth checking the performance of deeper, more powerful, single input step models.
Here's a model similar to the `linear` model, except it stacks a few `Dense` layers between the input and the output:
```
dense = tf.keras.Sequential([
tf.keras.layers.Dense(units=64, activation='relu'),
tf.keras.layers.Dense(units=64, activation='relu'),
tf.keras.layers.Dense(units=1)
])
history = compile_and_fit(dense, single_step_window)
val_performance['Dense'] = dense.evaluate(single_step_window.val)
performance['Dense'] = dense.evaluate(single_step_window.test, verbose=0)
```
### Multi-step dense
A single-time-step model has no context for the current values of its inputs. It can't see how the input features are changing over time. To address this issue the model needs access to multiple time steps when making predictions:

The `baseline`, `linear` and `dense` models handled each time step independently. Here the model will take multiple time steps as input to produce a single output.
Create a `WindowGenerator` that will produce batches of 3h of inputs and 1h of labels:
Note that the `Window`'s `shift` parameter is relative to the end of the two windows.
```
CONV_WIDTH = 3
conv_window = WindowGenerator(
input_width=CONV_WIDTH,
label_width=1,
shift=1,
label_columns=['T (degC)'])
conv_window
conv_window.plot()
plt.title("Given 3h as input, predict 1h into the future.")
```
You could train a `dense` model on a multiple-input-step window by adding a `layers.Flatten` as the first layer of the model:
```
multi_step_dense = tf.keras.Sequential([
# Shape: (time, features) => (time*features)
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(units=32, activation='relu'),
tf.keras.layers.Dense(units=32, activation='relu'),
tf.keras.layers.Dense(units=1),
# Add back the time dimension.
# Shape: (outputs) => (1, outputs)
tf.keras.layers.Reshape([1, -1]),
])
print('Input shape:', conv_window.example[0].shape)
print('Output shape:', multi_step_dense(conv_window.example[0]).shape)
history = compile_and_fit(multi_step_dense, conv_window)
IPython.display.clear_output()
val_performance['Multi step dense'] = multi_step_dense.evaluate(conv_window.val)
performance['Multi step dense'] = multi_step_dense.evaluate(conv_window.test, verbose=0)
conv_window.plot(multi_step_dense)
```
The main down-side of this approach is that the resulting model can only be executed on input windows of exactly this shape.
```
print('Input shape:', wide_window.example[0].shape)
try:
print('Output shape:', multi_step_dense(wide_window.example[0]).shape)
except Exception as e:
print(f'\n{type(e).__name__}:{e}')
```
The convolutional models in the next section fix this problem.
### Convolutional neural network
A convolution layer (`layers.Conv1D`) also takes multiple time steps as input to each prediction.
Below is the **same** model as `multi_step_dense`, re-written with a convolution.
Note the changes:
* The `layers.Flatten` and the first `layers.Dense` are replaced by a `layers.Conv1D`.
* The `layers.Reshape` is no longer necessary since the convolution keeps the time axis in its output.
```
conv_model = tf.keras.Sequential([
tf.keras.layers.Conv1D(filters=32,
kernel_size=(CONV_WIDTH,),
activation='relu'),
tf.keras.layers.Dense(units=32, activation='relu'),
tf.keras.layers.Dense(units=1),
])
```
Run it on an example batch to see that the model produces outputs with the expected shape:
```
print("Conv model on `conv_window`")
print('Input shape:', conv_window.example[0].shape)
print('Output shape:', conv_model(conv_window.example[0]).shape)
```
Train and evaluate it on the `conv_window` and it should give performance similar to the `multi_step_dense` model.
```
history = compile_and_fit(conv_model, conv_window)
IPython.display.clear_output()
val_performance['Conv'] = conv_model.evaluate(conv_window.val)
performance['Conv'] = conv_model.evaluate(conv_window.test, verbose=0)
```
The difference between this `conv_model` and the `multi_step_dense` model is that the `conv_model` can be run on inputs of any length. The convolutional layer is applied to a sliding window of inputs:

If you run it on wider input, it produces wider output:
```
print("Wide window")
print('Input shape:', wide_window.example[0].shape)
print('Labels shape:', wide_window.example[1].shape)
print('Output shape:', conv_model(wide_window.example[0]).shape)
```
Note that the output is shorter than the input. To make training or plotting work, you need the labels and predictions to have the same length. So build a `WindowGenerator` to produce wide windows with a few extra input time steps so the label and prediction lengths match:
```
LABEL_WIDTH = 24
INPUT_WIDTH = LABEL_WIDTH + (CONV_WIDTH - 1)
wide_conv_window = WindowGenerator(
input_width=INPUT_WIDTH,
label_width=LABEL_WIDTH,
shift=1,
label_columns=['T (degC)'])
wide_conv_window
print("Wide conv window")
print('Input shape:', wide_conv_window.example[0].shape)
print('Labels shape:', wide_conv_window.example[1].shape)
print('Output shape:', conv_model(wide_conv_window.example[0]).shape)
```
Now you can plot the model's predictions on a wider window. Note the 3 input time steps before the first prediction. Every prediction here is based on the 3 preceding timesteps:
```
wide_conv_window.plot(conv_model)
```
### Recurrent neural network
A Recurrent Neural Network (RNN) is a type of neural network well-suited to time series data. RNNs process a time series step-by-step, maintaining an internal state from time-step to time-step.
For more details, read the [text generation tutorial](https://www.tensorflow.org/tutorials/text/text_generation) or the [RNN guide](https://www.tensorflow.org/guide/keras/rnn).
In this tutorial, you will use an RNN layer called Long Short Term Memory ([LSTM](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers/LSTM)).
An important constructor argument for all keras RNN layers is the `return_sequences` argument. This setting can configure the layer in one of two ways.
1. If `False`, the default, the layer only returns the output of the final timestep, giving the model time to warm up its internal state before making a single prediction:

2. If `True` the layer returns an output for each input. This is useful for:
* Stacking RNN layers.
* Training a model on multiple timesteps simultaneously.

```
lstm_model = tf.keras.models.Sequential([
# Shape [batch, time, features] => [batch, time, lstm_units]
tf.keras.layers.LSTM(32, return_sequences=True),
# Shape => [batch, time, features]
tf.keras.layers.Dense(units=1)
])
```
With `return_sequences=True` the model can be trained on 24h of data at a time.
Note: This will give a pessimistic view of the model's performance. On the first timestep the model has no access to previous steps, and so can't do any better than the simple `linear` and `dense` models shown earlier.
```
print('Input shape:', wide_window.example[0].shape)
print('Output shape:', lstm_model(wide_window.example[0]).shape)
history = compile_and_fit(lstm_model, wide_window)
IPython.display.clear_output()
val_performance['LSTM'] = lstm_model.evaluate(wide_window.val)
performance['LSTM'] = lstm_model.evaluate(wide_window.test, verbose=0)
wide_window.plot(lstm_model)
```
### Performance
With this dataset typically each of the models does slightly better than the one before it.
```
x = np.arange(len(performance))
width = 0.3
metric_name = 'mean_absolute_error'
metric_index = lstm_model.metrics_names.index('mean_absolute_error')
val_mae = [v[metric_index] for v in val_performance.values()]
test_mae = [v[metric_index] for v in performance.values()]
plt.ylabel('mean_absolute_error [T (degC), normalized]')
plt.bar(x - 0.17, val_mae, width, label='Validation')
plt.bar(x + 0.17, test_mae, width, label='Test')
plt.xticks(ticks=x, labels=performance.keys(),
rotation=45)
_ = plt.legend()
for name, value in performance.items():
print(f'{name:12s}: {value[1]:0.4f}')
```
### Multi-output models
The models so far all predicted a single output feature, `T (degC)`, for a single time step.
All of these models can be converted to predict multiple features just by changing the number of units in the output layer and adjusting the training windows to include all features in the `labels`.
```
single_step_window = WindowGenerator(
# `WindowGenerator` returns all features as labels if you
# don't set the `label_columns` argument.
input_width=1, label_width=1, shift=1)
wide_window = WindowGenerator(
input_width=24, label_width=24, shift=1)
for example_inputs, example_labels in wide_window.train.take(1):
print(f'Inputs shape (batch, time, features): {example_inputs.shape}')
print(f'Labels shape (batch, time, features): {example_labels.shape}')
```
Note above that the `features` axis of the labels now has the same depth as the inputs, instead of 1.
#### Baseline
The same baseline model can be used here, but this time repeating all features instead of selecting a specific `label_index`.
```
baseline = Baseline()
baseline.compile(loss=tf.losses.MeanSquaredError(),
metrics=[tf.metrics.MeanAbsoluteError()])
val_performance = {}
performance = {}
val_performance['Baseline'] = baseline.evaluate(wide_window.val)
performance['Baseline'] = baseline.evaluate(wide_window.test, verbose=0)
```
#### Dense
```
dense = tf.keras.Sequential([
tf.keras.layers.Dense(units=64, activation='relu'),
tf.keras.layers.Dense(units=64, activation='relu'),
tf.keras.layers.Dense(units=num_features)
])
history = compile_and_fit(dense, single_step_window)
IPython.display.clear_output()
val_performance['Dense'] = dense.evaluate(single_step_window.val)
performance['Dense'] = dense.evaluate(single_step_window.test, verbose=0)
```
#### RNN
```
%%time
wide_window = WindowGenerator(
input_width=24, label_width=24, shift=1)
lstm_model = tf.keras.models.Sequential([
# Shape [batch, time, features] => [batch, time, lstm_units]
tf.keras.layers.LSTM(32, return_sequences=True),
# Shape => [batch, time, features]
tf.keras.layers.Dense(units=num_features)
])
history = compile_and_fit(lstm_model, wide_window)
IPython.display.clear_output()
val_performance['LSTM'] = lstm_model.evaluate( wide_window.val)
performance['LSTM'] = lstm_model.evaluate( wide_window.test, verbose=0)
print()
```
<a id="residual"></a>
#### Advanced: Residual connections
The `Baseline` model from earlier took advantage of the fact that the sequence doesn't change drastically from time step to time step. Every model trained in this tutorial so far was randomly initialized, and then had to learn that the output is a small change from the previous time step.
While you can get around this issue with careful initialization, it's simpler to build this into the model structure.
It's common in time series analysis to build models that instead of predicting the next value, predict how the value will change in the next timestep.
Similarly, "Residual networks" or "ResNets" in deep learning refer to architectures where each layer adds to the model's accumulating result.
That is how you take advantage of the knowledge that the change should be small.

Essentially this initializes the model to match the `Baseline`. For this task it helps models converge faster, with slightly better performance.
This approach can be used in conjunction with any model discussed in this tutorial.
Here it is being applied to the LSTM model, note the use of the `tf.initializers.zeros` to ensure that the initial predicted changes are small, and don't overpower the residual connection. There are no symmetry-breaking concerns for the gradients here, since the `zeros` are only used on the last layer.
```
class ResidualWrapper(tf.keras.Model):
def __init__(self, model):
super().__init__()
self.model = model
def call(self, inputs, *args, **kwargs):
delta = self.model(inputs, *args, **kwargs)
# The prediction for each timestep is the input
# from the previous time step plus the delta
# calculated by the model.
return inputs + delta
%%time
residual_lstm = ResidualWrapper(
tf.keras.Sequential([
tf.keras.layers.LSTM(32, return_sequences=True),
tf.keras.layers.Dense(
num_features,
# The predicted deltas should start small
# So initialize the output layer with zeros
kernel_initializer=tf.initializers.zeros)
]))
history = compile_and_fit(residual_lstm, wide_window)
IPython.display.clear_output()
val_performance['Residual LSTM'] = residual_lstm.evaluate(wide_window.val)
performance['Residual LSTM'] = residual_lstm.evaluate(wide_window.test, verbose=0)
print()
```
#### Performance
Here is the overall performance for these multi-output models.
```
x = np.arange(len(performance))
width = 0.3
metric_name = 'mean_absolute_error'
metric_index = lstm_model.metrics_names.index('mean_absolute_error')
val_mae = [v[metric_index] for v in val_performance.values()]
test_mae = [v[metric_index] for v in performance.values()]
plt.bar(x - 0.17, val_mae, width, label='Validation')
plt.bar(x + 0.17, test_mae, width, label='Test')
plt.xticks(ticks=x, labels=performance.keys(),
rotation=45)
plt.ylabel('MAE (average over all outputs)')
_ = plt.legend()
for name, value in performance.items():
print(f'{name:15s}: {value[1]:0.4f}')
```
The above performances are averaged across all model outputs.
## Multi-step models
Both the single-output and multiple-output models in the previous sections made **single time step predictions**, 1h into the future.
This section looks at how to expand these models to make **multiple time step predictions**.
In a multi-step prediction, the model needs to learn to predict a range of future values. Thus, unlike a single step model, where only a single future point is predicted, a multi-step model predicts a sequence of the future values.
There are two rough approaches to this:
1. Single shot predictions where the entire time series is predicted at once.
2. Autoregressive predictions where the model only makes single step predictions and its output is fed back as its input.
In this section all the models will predict **all the features across all output time steps**.
For the multi-step model, the training data again consists of hourly samples. However, here, the models will learn to predict 24h of the future, given 24h of the past.
Here is a `Window` object that generates these slices from the dataset:
```
OUT_STEPS = 24
multi_window = WindowGenerator(input_width=24,
label_width=OUT_STEPS,
shift=OUT_STEPS)
multi_window.plot()
multi_window
```
### Baselines
A simple baseline for this task is to repeat the last input time step for the required number of output timesteps:

```
class MultiStepLastBaseline(tf.keras.Model):
def call(self, inputs):
return tf.tile(inputs[:, -1:, :], [1, OUT_STEPS, 1])
last_baseline = MultiStepLastBaseline()
last_baseline.compile(loss=tf.losses.MeanSquaredError(),
metrics=[tf.metrics.MeanAbsoluteError()])
multi_val_performance = {}
multi_performance = {}
multi_val_performance['Last'] = last_baseline.evaluate(multi_window.val)
multi_performance['Last'] = last_baseline.evaluate(multi_window.test, verbose=0)
multi_window.plot(last_baseline)
```
Since this task is to predict 24h given 24h, another simple approach is to repeat the previous day, assuming tomorrow will be similar:

```
class RepeatBaseline(tf.keras.Model):
def call(self, inputs):
return inputs
repeat_baseline = RepeatBaseline()
repeat_baseline.compile(loss=tf.losses.MeanSquaredError(),
metrics=[tf.metrics.MeanAbsoluteError()])
multi_val_performance['Repeat'] = repeat_baseline.evaluate(multi_window.val)
multi_performance['Repeat'] = repeat_baseline.evaluate(multi_window.test, verbose=0)
multi_window.plot(repeat_baseline)
```
### Single-shot models
One high-level approach to this problem is to use a "single-shot" model, where the model makes the entire sequence prediction in a single step.
This can be implemented efficiently as a `layers.Dense` with `OUT_STEPS*features` output units. The model just needs to reshape that output to the required `(OUT_STEPS, features)`.
#### Linear
A simple linear model based on the last input time step does better than either baseline, but is underpowered. The model needs to predict `OUT_STEPS` time steps from a single input time step with a linear projection. It can only capture a low-dimensional slice of the behavior, likely based mainly on the time of day and time of year.

```
multi_linear_model = tf.keras.Sequential([
# Take the last time-step.
# Shape [batch, time, features] => [batch, 1, features]
tf.keras.layers.Lambda(lambda x: x[:, -1:, :]),
# Shape => [batch, 1, out_steps*features]
tf.keras.layers.Dense(OUT_STEPS*num_features,
kernel_initializer=tf.initializers.zeros),
# Shape => [batch, out_steps, features]
tf.keras.layers.Reshape([OUT_STEPS, num_features])
])
history = compile_and_fit(multi_linear_model, multi_window)
IPython.display.clear_output()
multi_val_performance['Linear'] = multi_linear_model.evaluate(multi_window.val)
multi_performance['Linear'] = multi_linear_model.evaluate(multi_window.test, verbose=0)
multi_window.plot(multi_linear_model)
```
#### Dense
Adding a `layers.Dense` between the input and output gives the linear model more power, but is still only based on a single input timestep.
```
multi_dense_model = tf.keras.Sequential([
# Take the last time step.
# Shape [batch, time, features] => [batch, 1, features]
tf.keras.layers.Lambda(lambda x: x[:, -1:, :]),
# Shape => [batch, 1, dense_units]
tf.keras.layers.Dense(512, activation='relu'),
# Shape => [batch, out_steps*features]
tf.keras.layers.Dense(OUT_STEPS*num_features,
kernel_initializer=tf.initializers.zeros),
# Shape => [batch, out_steps, features]
tf.keras.layers.Reshape([OUT_STEPS, num_features])
])
history = compile_and_fit(multi_dense_model, multi_window)
IPython.display.clear_output()
multi_val_performance['Dense'] = multi_dense_model.evaluate(multi_window.val)
multi_performance['Dense'] = multi_dense_model.evaluate(multi_window.test, verbose=0)
multi_window.plot(multi_dense_model)
```
#### CNN
A convolutional model makes predictions based on a fixed-width history, which may lead to better performance than the dense model since it can see how things are changing over time:

```
CONV_WIDTH = 3
multi_conv_model = tf.keras.Sequential([
# Shape [batch, time, features] => [batch, CONV_WIDTH, features]
tf.keras.layers.Lambda(lambda x: x[:, -CONV_WIDTH:, :]),
# Shape => [batch, 1, conv_units]
tf.keras.layers.Conv1D(256, activation='relu', kernel_size=(CONV_WIDTH)),
# Shape => [batch, 1, out_steps*features]
tf.keras.layers.Dense(OUT_STEPS*num_features,
kernel_initializer=tf.initializers.zeros),
# Shape => [batch, out_steps, features]
tf.keras.layers.Reshape([OUT_STEPS, num_features])
])
history = compile_and_fit(multi_conv_model, multi_window)
IPython.display.clear_output()
multi_val_performance['Conv'] = multi_conv_model.evaluate(multi_window.val)
multi_performance['Conv'] = multi_conv_model.evaluate(multi_window.test, verbose=0)
multi_window.plot(multi_conv_model)
```
#### RNN
A recurrent model can learn to use a long history of inputs, if it's relevant to the predictions the model is making. Here the model will accumulate internal state for 24h, before making a single prediction for the next 24h.
In this single-shot format, the LSTM only needs to produce an output at the last time step, so set `return_sequences=False`.

```
multi_lstm_model = tf.keras.Sequential([
# Shape [batch, time, features] => [batch, lstm_units]
# Adding more `lstm_units` just overfits more quickly.
tf.keras.layers.LSTM(32, return_sequences=False),
# Shape => [batch, out_steps*features]
tf.keras.layers.Dense(OUT_STEPS*num_features,
kernel_initializer=tf.initializers.zeros),
# Shape => [batch, out_steps, features]
tf.keras.layers.Reshape([OUT_STEPS, num_features])
])
history = compile_and_fit(multi_lstm_model, multi_window)
IPython.display.clear_output()
multi_val_performance['LSTM'] = multi_lstm_model.evaluate(multi_window.val)
multi_performance['LSTM'] = multi_lstm_model.evaluate(multi_window.test, verbose=0)
multi_window.plot(multi_lstm_model)
```
### Advanced: Autoregressive model
The above models all predict the entire output sequence in a single step.
In some cases it may be helpful for the model to decompose this prediction into individual time steps. Then the model's output can be fed back into itself at each step, and predictions can be made conditioned on the previous one, like in the classic [Generating Sequences With Recurrent Neural Networks](https://arxiv.org/abs/1308.0850).
One clear advantage to this style of model is that it can be set up to produce output with a varying length.
You could take any of the single-step multi-output models trained in the first half of this tutorial and run it in an autoregressive feedback loop, but here you'll focus on building a model that's been explicitly trained to do that.

#### RNN
This tutorial only builds an autoregressive RNN model, but this pattern could be applied to any model that was designed to output a single timestep.
The model will have the same basic form as the single-step `LSTM` models: An `LSTM` followed by a `layers.Dense` that converts the `LSTM` outputs to model predictions.
A `layers.LSTM` is a `layers.LSTMCell` wrapped in the higher level `layers.RNN` that manages the state and sequence results for you (See [Keras RNNs](https://www.tensorflow.org/guide/keras/rnn) for details).
In this case the model has to manually manage the inputs for each step so it uses `layers.LSTMCell` directly for the lower level, single time step interface.
```
class FeedBack(tf.keras.Model):
def __init__(self, units, out_steps):
super().__init__()
self.out_steps = out_steps
self.units = units
self.lstm_cell = tf.keras.layers.LSTMCell(units)
# Also wrap the LSTMCell in an RNN to simplify the `warmup` method.
self.lstm_rnn = tf.keras.layers.RNN(self.lstm_cell, return_state=True)
self.dense = tf.keras.layers.Dense(num_features)
feedback_model = FeedBack(units=32, out_steps=OUT_STEPS)
```
The first method this model needs is a `warmup` method to initialize its internal state based on the inputs. Once trained this state will capture the relevant parts of the input history. This is equivalent to the single-step `LSTM` model from earlier:
```
def warmup(self, inputs):
# inputs.shape => (batch, time, features)
# x.shape => (batch, lstm_units)
x, *state = self.lstm_rnn(inputs)
# predictions.shape => (batch, features)
prediction = self.dense(x)
return prediction, state
FeedBack.warmup = warmup
```
This method returns a single time-step prediction, and the internal state of the LSTM:
```
prediction, state = feedback_model.warmup(multi_window.example[0])
prediction.shape
```
With the `RNN`'s state and an initial prediction, you can now continue iterating the model, feeding the predictions at each step back as the input.
The simplest approach to collecting the output predictions is to use a python list, and `tf.stack` after the loop.
Note: Stacking a python list like this only works with eager-execution, using `Model.compile(..., run_eagerly=True)` for training, or with a fixed length output. For a dynamic output length you would need to use a `tf.TensorArray` instead of a python list, and `tf.range` instead of the python `range`.
```
def call(self, inputs, training=None):
# Collect the dynamically unrolled outputs in a python list (see the note above).
predictions = []
# Initialize the lstm state
prediction, state = self.warmup(inputs)
# Insert the first prediction
predictions.append(prediction)
# Run the rest of the prediction steps
for n in range(1, self.out_steps):
# Use the last prediction as input.
x = prediction
# Execute one lstm step.
x, state = self.lstm_cell(x, states=state,
training=training)
# Convert the lstm output to a prediction.
prediction = self.dense(x)
# Add the prediction to the output
predictions.append(prediction)
# predictions.shape => (time, batch, features)
predictions = tf.stack(predictions)
# predictions.shape => (batch, time, features)
predictions = tf.transpose(predictions, [1, 0, 2])
return predictions
FeedBack.call = call
```
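As the note above mentions, a dynamic output length would require `tf.TensorArray` and `tf.range`. Below is a minimal sketch of what that variant might look like; it is illustrative only, not part of the tutorial's model, and assumes the same `FeedBack` attributes (`warmup`, `lstm_cell`, `dense`):
```
# Illustrative sketch only: a dynamic-length variant of `call` using tf.TensorArray.
def call_dynamic(self, inputs, out_steps, training=None):
  # The TensorArray collects one (batch, features) tensor per output step.
  outputs = tf.TensorArray(tf.float32, size=out_steps)
  prediction, state = self.warmup(inputs)
  outputs = outputs.write(0, prediction)
  for n in tf.range(1, out_steps):
    # Feed the previous prediction back in as the input.
    x, state = self.lstm_cell(prediction, states=state, training=training)
    prediction = self.dense(x)
    outputs = outputs.write(n, prediction)
  # stack() => (time, batch, features); transpose to (batch, time, features).
  return tf.transpose(outputs.stack(), [1, 0, 2])
```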
Test run this model on the example inputs:
```
print('Output shape (batch, time, features): ', feedback_model(multi_window.example[0]).shape)
```
Now train the model:
```
history = compile_and_fit(feedback_model, multi_window)
IPython.display.clear_output()
multi_val_performance['AR LSTM'] = feedback_model.evaluate(multi_window.val)
multi_performance['AR LSTM'] = feedback_model.evaluate(multi_window.test, verbose=0)
multi_window.plot(feedback_model)
```
### Performance
There are clearly diminishing returns as a function of model complexity on this problem.
```
x = np.arange(len(multi_performance))
width = 0.3
metric_name = 'mean_absolute_error'
metric_index = lstm_model.metrics_names.index('mean_absolute_error')
val_mae = [v[metric_index] for v in multi_val_performance.values()]
test_mae = [v[metric_index] for v in multi_performance.values()]
plt.bar(x - 0.17, val_mae, width, label='Validation')
plt.bar(x + 0.17, test_mae, width, label='Test')
plt.xticks(ticks=x, labels=multi_performance.keys(),
rotation=45)
plt.ylabel(f'MAE (average over all times and outputs)')
_ = plt.legend()
```
The metrics for the multi-output models in the first half of this tutorial show the performance averaged across all output features. These performances are similar, but they are also averaged across output timesteps.
```
for name, value in multi_performance.items():
print(f'{name:8s}: {value[1]:0.4f}')
```
The gains achieved going from a dense model to convolutional and recurrent models are only a few percent (if any), and the autoregressive model performed clearly worse. So these more complex approaches may not be worthwhile on **this** problem, but there was no way to know without trying, and these models could be helpful for **your** problem.
## Next steps
This tutorial was a quick introduction to time series forecasting using TensorFlow.
* For further understanding, see:
* Chapter 15 of [Hands-on Machine Learning with Scikit-Learn, Keras, and TensorFlow](https://www.oreilly.com/library/view/hands-on-machine-learning/9781492032632/), 2nd Edition
* Chapter 6 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python).
* Lesson 8 of [Udacity's intro to TensorFlow for deep learning](https://www.udacity.com/course/intro-to-tensorflow-for-deep-learning--ud187), and the [exercise notebooks](https://github.com/tensorflow/examples/tree/master/courses/udacity_intro_to_tensorflow_for_deep_learning)
* Also remember that you can implement any [classical time series model](https://otexts.com/fpp2/index.html) in TensorFlow; this tutorial just focuses on TensorFlow's built-in functionality.
| github_jupyter |
# Word Embeddings in MySQL
This example uses the official MySQL Connector within Python3 to store and retrieve various amounts of Word Embeddings.
We will use a local MySQL database running as a Docker Container for testing purposes. To start the database run:
```
docker run -ti --rm --name ohmysql -e MYSQL_ROOT_PASSWORD=mikolov -e MYSQL_DATABASE=embeddings -p 3306:3306 mysql:5.7
```
```
import mysql.connector
import io
import time
import numpy
import plotly
from tqdm import tqdm_notebook as tqdm
```
# Dummy Embeddings
For testing purposes we will use randomly generated numpy arrays as dummy embeddings.
```
def embeddings(n=1000, dim=300):
"""
Yield *n* (key, vector) tuples, where each vector is a random numpy array of length *dim*.
"""
idx = 0
while idx < n:
yield (str(idx), numpy.random.rand(dim))
idx += 1
```
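A quick look at what the generator yields (an illustrative snippet; the sizes here are arbitrary):
```
# Peek at a single dummy embedding: the generator yields (key, vector) tuples.
key, vector = next(embeddings(n=1, dim=5))
print(key, vector.shape)
```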
# Conversion Functions
Since we can't just save a NumPy array into the database, we will convert it into a BLOB.
```
def adapt_array(array):
"""
Using the numpy.save function to save a binary version of the array,
and BytesIO to catch the stream of data and convert it into a BLOB.
"""
out = io.BytesIO()
numpy.save(out, array)
out.seek(0)
return out.read()
def convert_array(blob):
"""
Using BytesIO to convert the binary version of the array back into a numpy array.
"""
out = io.BytesIO(blob)
out.seek(0)
return numpy.load(out)
connection = mysql.connector.connect(user='root', password='mikolov',
host='127.0.0.1',
database='embeddings')
cursor = connection.cursor()
cursor.execute('CREATE TABLE IF NOT EXISTS `embeddings` (`key` TEXT, `embedding` BLOB);')
connection.commit()
%%time
for key, emb in embeddings():
arr = adapt_array(emb)
cursor.execute('INSERT INTO `embeddings` (`key`, `embedding`) VALUES (%s, %s);', (key, arr))
connection.commit()
%%time
for key, _ in embeddings():
cursor.execute('SELECT embedding FROM `embeddings` WHERE `key`=%s;', (key,))
data = cursor.fetchone()
arr = convert_array(data[0])
```
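As a quick sanity check (an addition, not part of the original notebook), the two conversion helpers should round-trip an array without any loss:
```
# adapt_array followed by convert_array should reproduce the array exactly.
original = numpy.random.rand(300)
restored = convert_array(adapt_array(original))
assert (original == restored).all()
```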
# Sample some data
To test the I/O we will write and read some data from the database. This may take a while.
```
write_times = []
read_times = []
counts = [500, 1000, 2000, 3000, 4000, 5000]
for c in counts:
print(c)
cursor.execute('DROP TABLE IF EXISTS `embeddings`;')
cursor.execute('CREATE TABLE IF NOT EXISTS `embeddings` (`key` TEXT, `embedding` BLOB);')
connection.commit()
start_time_write = time.time()
for key, emb in tqdm(embeddings(c), total=c):
arr = adapt_array(emb)
cursor.execute('INSERT INTO `embeddings` (`key`, `embedding`) VALUES (%s, %s);', (key, arr))
connection.commit()
write_times.append(time.time() - start_time_write)
start_time_read = time.time()
for key, emb in embeddings(c):
cursor.execute('SELECT embedding FROM `embeddings` WHERE `key`=%s;', (key,))
data = cursor.fetchone()
arr = convert_array(data[0])
read_times.append(time.time() - start_time_read)
print('DONE')
```
# Results
```
plotly.offline.init_notebook_mode(connected=True)
trace = plotly.graph_objs.Scatter(
y = write_times,
x = counts,
mode = 'lines+markers'
)
layout = plotly.graph_objs.Layout(title="MySQL Write Times",
yaxis=dict(title='Time in Seconds'),
xaxis=dict(title='Embedding Count'))
data = [trace]
fig = plotly.graph_objs.Figure(data=data, layout=layout)
plotly.offline.iplot(fig, filename='jupyter-scatter-write')
plotly.offline.init_notebook_mode(connected=True)
trace = plotly.graph_objs.Scatter(
y = read_times,
x = counts,
mode = 'lines+markers'
)
layout = plotly.graph_objs.Layout(title="MySQL Read Times",
yaxis=dict(title='Time in Seconds'),
xaxis=dict(title='Embedding Count'))
data = [trace]
fig = plotly.graph_objs.Figure(data=data, layout=layout)
plotly.offline.iplot(fig, filename='jupyter-scatter-read')
```
| github_jupyter |
```
import glob
import itertools
from ipywidgets import widgets, Layout
import numpy as np
import os
import pandas as pd
import plotly.io as pio
import plotly.graph_objects as go
from apex_performance_plotter.apex_performance_plotter.load_logfiles import load_logfiles
pio.templates.default = "plotly_white"
from IPython.core.interactiveshell import InteractiveShell
# Move to the experiment output folder and collect the available log files
os.chdir('../../../../experiment')
logfiles = glob.glob('{}*'.format('log'))
selected_logfiles = widgets.SelectMultiple(
options=logfiles,
description='Experiments',
disabled=False,
layout=Layout(width='100%')
)
display(selected_logfiles)
# Select the experiments to plot
# Display selected experiment properties
InteractiveShell.ast_node_interactivity = "all"
headers, dataframes = load_logfiles(selected_logfiles)
for idx, header in enumerate(headers):
display(header)
InteractiveShell.ast_node_interactivity = "last"
colors = ['#4363d8','#800000','#f58231','#e6beff']
# Plot latencies
figure_latencies = go.FigureWidget()
figure_latencies.layout.xaxis.title = 'Time (s)'
figure_latencies.layout.yaxis.title = 'Latencies (ms)'
for idx, experiment in enumerate(dataframes):
figure_latencies.add_scatter(x=experiment['T_experiment'],
y=experiment['latency_max (ms)'],
mode='markers', marker_color=colors[idx],
marker_symbol='x',
name= 'latency_max',
text=headers[idx]['Logfile name']);
figure_latencies.add_scatter(x=experiment['T_experiment'],
y=experiment['latency_mean (ms)'],
mode='markers', marker_color=colors[idx],
marker_symbol='triangle-up',
name='latency_mean',
text=headers[idx]['Logfile name']);
figure_latencies.add_scatter(x=experiment['T_experiment'],
y=experiment['latency_min (ms)'],
mode='markers', marker_color=colors[idx],
name='latency_min',
text=headers[idx]['Logfile name'])
figure_latencies
# Plot CPU usage
figure_cpu_usage = go.FigureWidget()
figure_cpu_usage.layout.xaxis.title = 'Time (s)'
figure_cpu_usage.layout.yaxis.title = 'CPU usage (%)'
for idx, experiment in enumerate(dataframes):
figure_cpu_usage.add_scatter(x=experiment['T_experiment'],
y=experiment['cpu_usage (%)'],
mode='markers', marker_color=colors[idx],
marker_symbol='x',
name= 'cpu_usage',
text=headers[idx]['Logfile name']);
figure_cpu_usage
# Plot memory consumption
figure_memory_usage = go.FigureWidget()
figure_memory_usage.layout.xaxis.title = 'Time (s)'
figure_memory_usage.layout.yaxis.title = 'Memory consumption (MB)'
for idx, experiment in enumerate(dataframes):
figure_memory_usage.add_scatter(x=experiment['T_experiment'],
y=experiment['ru_maxrss']/1024,
mode='markers', marker_color=colors[idx],
marker_symbol='x',
name= 'ru_maxrss',
text=headers[idx]['Logfile name']);
figure_memory_usage
```
| github_jupyter |
Copyright 2021 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
# Quantum Neural Networks
This notebook provides an introduction to Quantum Neural Networks (QNNs) using Cirq. The presentation mostly follows [Farhi and Neven](https://arxiv.org/abs/1802.06002). We will construct a simple network for classification to demonstrate its utility on some randomly generated toy data.
First we need to install cirq, which has to be done each time this notebook is run. Executing the following cell will do that.
```
# install published dev version
# !pip install cirq~=0.4.0.dev
# install directly from HEAD:
!pip install git+https://github.com/quantumlib/Cirq.git@8c59dd97f8880ac5a70c39affa64d5024a2364d0
```
To verify that Cirq is installed in your environment, try to `import cirq` and print out a diagram of the Foxtail device. It should produce a 2x11 grid of qubits.
```
import cirq
import numpy as np
import matplotlib.pyplot as plt
print(cirq.google.Foxtail)
```
### The QNN Idea
We'll begin by describing here the QNN model we are pursuing. We'll discuss the quantum circuit describing a very simple neuron, and how it can be trained.
As in an ordinary neural network, a QNN takes in data, processes that data, and then returns an answer. In the quantum case, the data will be encoded into the initial quantum state, and the processing step is the action of a quantum circuit on that quantum state. At the end we will measure one or more of the qubits, and the statistics of those measurements are the output of the net.
#### Classical vs Quantum
An ordinary neural network can only handle classical input. The input to a QNN, though, is a quantum state, which consists of $2^n$ complex amplitudes for $n$ qubits. If you attached your quantum computer directly to some physics experiment, for example, then you could have a QNN do some post-processing on the experimental wavefunction in lieu of a more traditional measurement. There are some very exciting possibilities there, but unfortunately we will not be considering them in this Colab. It requires significantly more quantum background to understand what's going on, and it's harder to give examples because the input states themselves can be quite complicated. For recent examples of that kind of network, though, check out [this](https://arxiv.org/abs/1805.08654) paper and [this](https://arxiv.org/abs/1810.03787) paper. The basic ingredients are similar to what we'll cover here.
In this Colab we'll focus on classical inputs, by which I mean the specification of one of the computational basis states as the initial state. There are a total of $2^n$ of these states for $n$ qubits. Note the crucial difference between this case and the quantum case: in the quantum case the input is $2^n$-*dimensional*, while in the classical case there are $2^n$ *possible* inputs. The quantum neural network can process these inputs in a "quantum" way, meaning that it may be able to evaluate certain functions on these inputs more efficiently than a classical network. Whether the "quantum" processing is actually useful in practice remains to be seen, and in this Colab we will not have time to really get into that aspect of a QNN.
#### Data Processing
Given the classical input state, what will we do with it? At this stage it's helpful to be more specific and definite about the problem we are trying to solve. The problem we're going to focus on in this Colab is __two-category classification__. That means that after the quantum circuit has finished running, we measure one of the qubits, the *readout* qubit, and the value of that qubit will tell us which of the two categories our classical input state belonged to. Since this is quantum, the output of that qubit is going to be random according to some probability distribution. So really we're going to repeat the computation many times and take a majority vote.
Our classical input data is a bitstring that is converted into a computational basis state. We want to influence the readout qubit in a way that depends on this state. Our main tool for this is a gate we call the $ZX$-gate, which acts on two qubits as
$$
\exp(i \pi w Z \otimes X) = \begin{bmatrix}
\cos \pi w & i\sin\pi w &0&0\\
i\sin\pi w & \cos \pi w &0&0\\
0&0& \cos \pi w & -i\sin\pi w \\
0&0 & -i\sin\pi w & \cos\pi w
\end{bmatrix},
$$
where $w$ is a free parameter ($w$ stands for weight). This gate rotates the second qubit around the $X$-axis (on the Bloch sphere) either clockwise or counterclockwise depending on the state of the first qubit as seen in the computational basis. The amount of the rotation is determined by $w$.
If we connect each of our input qubits to the readout qubit using one of these gates, then the result is that the readout qubit will be rotated in a way that depends on the initial state in a straightforward way. This rotation is in the $YZ$ plane, so it will change the statistics of measurements in either the $Z$ basis or the $Y$ basis for the readout qubit. We're going to choose the initial state of the readout qubit to be a standard computational basis state as usual, which is a $Z$ eigenstate but "neutral" with respect to $Y$ (i.e., 50/50 probability of $Y=+1$ or $Y=-1$). Then after all of the rotations are complete we'll measure the readout qubit in the $Y$ basis. If all goes well, then the net rotation induced by the $ZX$ gates will place the readout qubit near one of the two $Y$ eigenstates in a way that depends on the initial data.
To summarize, here is our strategy for processing the two-category classification problem:
1) Prepare a computational basis state corresponding to the input that should be categorized.
2) Use $ZX$ gates to rotate the state of the readout qubit in a way that depends on the input.
3) Measure the readout qubit in the $Y$ basis to get the predicted label. Take a majority vote after many repetitions.
This is the simplest possible kind of network, and really only corresponds to a single neuron. We'll talk about more complicated possibilities after understanding how to implement this one.
### Custom Two-Qubit Gate
Our first task is to code up the $ZX$ gate described above, which is given by the matrix
$$
\exp(i \pi w Z \otimes X) = \begin{bmatrix}
\cos \pi w & i\sin\pi w &0&0\\
i\sin\pi w & \cos \pi w &0&0\\
0&0& \cos \pi w & -i\sin\pi w \\
0&0 & -i\sin\pi w & \cos\pi w
\end{bmatrix},
$$
Just from the form of the gate we can see that it performs a rotation by angle $\pm \pi w$ on the second qubit depending on the value of the first qubit. If we only had one or the other of these two blocks, then this gate would literally be a controlled rotation. For example, using the Cirq conventions,
$$
CR_X(\theta) = \begin{bmatrix}
1 & 0 &0&0\\
0 & 1 &0&0\\
0&0& \cos \theta/2 & -i\sin \theta/2 \\
0&0 & -i\sin\theta/2 & \cos\theta/2
\end{bmatrix},
$$
which means that setting $\theta = 2\pi w$ should give us part of our desired transformation.
```
a = cirq.NamedQubit("a")
b = cirq.NamedQubit("b")
w = .25 # Put your own weight here.
angle = 2*np.pi*w
circuit = cirq.Circuit.from_ops(cirq.ControlledGate(cirq.Rx(angle)).on(a,b))
print(circuit)
circuit.to_unitary_matrix().round(2)
```
__Question__: The rotation in the upper-left block is by the opposite angle. But how do we get the rotation to happen in the upper-left block of the $4\times 4$ matrix in the first place? What is the circuit?
#### Solution
Switching the upper-left and lower-right blocks of a controlled gate corresponds to activating when the control qubit is in the $|0\rangle$ state instead of the $|1\rangle$ state. We can arrange this to happen by taking the control gate we already have and conjugating the control qubit by $X$ gates (which implement the NOT operation). Don't forget to also rotate by the opposite angle.
```
a = cirq.NamedQubit("a")
b = cirq.NamedQubit("b")
w = 0.25 # Put your own weight here.
angle = 2*np.pi*w
circuit = cirq.Circuit.from_ops([cirq.X(a),
cirq.ControlledGate(cirq.Rx(-angle)).on(a,b),
cirq.X(a)])
print(circuit)
circuit.to_unitary_matrix().round(2)
```
#### The Full $ZX$ Gate
We can put together the two controlled rotations to make the full $ZX$ gate. Having discussed the decomposition already, we can make our own class and specify its action using the `_decompose_` method. Fill in the following code block to implement this gate.
```
class ZXGate(cirq.ops.gate_features.TwoQubitGate):
"""ZXGate with variable weight."""
def __init__(self, weight=1):
"""Initializes the ZX Gate up to phase.
Args:
weight: rotation angle, period 2
"""
self.weight = weight
def _decompose_(self, qubits):
a, b = qubits
## YOUR CODE HERE
# This lets the weight be a Symbol. Useful for parameterization.
def _resolve_parameters_(self, param_resolver):
return ZXGate(weight=param_resolver.value_of(self.weight))
# How should the gate look in ASCII diagrams?
def _circuit_diagram_info_(self, args):
return cirq.protocols.CircuitDiagramInfo(
wire_symbols=('Z', 'X'),
exponent=self.weight)
```
#### Solution
```
class ZXGate(cirq.ops.gate_features.TwoQubitGate):
"""ZXGate with variable weight."""
def __init__(self, weight=1):
"""Initializes the ZX Gate up to phase.
Args:
weight: rotation angle, period 2
"""
self.weight = weight
def _decompose_(self, qubits):
a, b = qubits
yield cirq.ControlledGate(cirq.Rx(2*np.pi*self.weight)).on(a,b)
yield cirq.X(a)
yield cirq.ControlledGate(cirq.Rx(-2*np.pi*self.weight)).on(a,b)
yield cirq.X(a)
# This lets the weight be a Symbol. Useful for paramterization.
def _resolve_parameters_(self, param_resolver):
return ZXGate(weight=param_resolver.value_of(self.weight))
# How should the gate look in ASCII diagrams?
def _circuit_diagram_info_(self, args):
return cirq.protocols.CircuitDiagramInfo(
wire_symbols=('Z', 'X'),
exponent=self.weight)
```
#### EigenGate Implementation
Another way to specify how a gate works is by an explicit eigen-action. In our case that is also easy, since we know that the gate acts as a phase (the eigenvalue) when the first qubit is in a $Z$ eigenstate (i.e., a computational basis state) and the second qubit is in an $X$ eigenstate.
The way we specify eigen-actions in Cirq is through the `_eigen_components` method, where we need to specify the eigenvalue as a phase together with a projector onto the eigenspace of that phase. We also conventionally specify the gate at $w=1$ and set $w$ internally to be the `exponent` of the gate, which automatically implements other values of $w$ for us.
In the case of the $ZX$ gate with $w=1$, one of our eigenvalues is $\exp(+i\pi)$, which is specified as $1$ in Cirq. (Because $1$ is the coefficient of $i\pi$ in the exponential.) This is the phase when the first qubit is in the $Z=+1$ state and the second qubit is in the $X=+1$ state, or when the first qubit is in the $Z=-1$ state and the second qubit is in the $X=-1$ state. The projector onto these states is
$$
\begin{align}
P &= |0+\rangle \langle 0{+}| + |1-\rangle \langle 1{-}|\\
&= \frac{1}{2}\big(|00\rangle \langle 00| +|00\rangle \langle 01|+|01\rangle \langle 00|+|01\rangle \langle 01|+ |10\rangle \langle 10|-|10\rangle \langle 11|-|11\rangle \langle 10|+|11\rangle \langle 11|\big)\\
&=\frac{1}{2}\begin{bmatrix}
1 & 1 &0&0\\
1 & 1 &0&0\\
0&0& 1 & -1 \\
0&0 & -1 & 1
\end{bmatrix}
\end{align}
$$
A similar formula holds for the eigenvalue $\exp(-i\pi)$ with the two blocks in the projector flipped.
__Exercise__: Implement the $ZX$ gate as an `EigenGate` using this decomposition. The following code block will get you started.
```
class ZXGate(cirq.ops.eigen_gate.EigenGate,
cirq.ops.gate_features.TwoQubitGate):
"""ZXGate with variable weight."""
def __init__(self, weight=1):
"""Initializes the ZX Gate up to phase.
Args:
weight: rotation angle, period 2
"""
self.weight = weight
super().__init__(exponent=weight) # Automatically handles weights other than 1
def _eigen_components(self):
return [
(1, np.array([[0.5, 0.5, 0, 0],
[ 0.5, 0.5, 0, 0],
[0, 0, 0.5, -0.5],
[0, 0, -0.5, 0.5]])),
(??, ??) # YOUR CODE HERE: phase and projector for the other eigenvalue
]
# This lets the weight be a Symbol. Useful for parameterization.
def _resolve_parameters_(self, param_resolver):
return ZXGate(weight=param_resolver.value_of(self.weight))
# How should the gate look in ASCII diagrams?
def _circuit_diagram_info_(self, args):
return cirq.protocols.CircuitDiagramInfo(
wire_symbols=('Z', 'X'),
exponent=self.weight)
```
#### Solution
```
class ZXGate(cirq.ops.eigen_gate.EigenGate,
cirq.ops.gate_features.TwoQubitGate):
"""ZXGate with variable weight."""
def __init__(self, weight=1):
"""Initializes the ZX Gate up to phase.
Args:
weight: rotation angle, period 2
"""
self.weight = weight
super().__init__(exponent=weight) # Automatically handles weights other than 1
def _eigen_components(self):
return [
(1, np.array([[0.5, 0.5, 0, 0],
[ 0.5, 0.5, 0, 0],
[0, 0, 0.5, -0.5],
[0, 0, -0.5, 0.5]])),
(-1, np.array([[0.5, -0.5, 0, 0],
[ -0.5, 0.5, 0, 0],
[0, 0, 0.5, 0.5],
[0, 0, 0.5, 0.5]]))
]
# This lets the weight be a Symbol. Useful for parameterization.
def _resolve_parameters_(self, param_resolver):
return ZXGate(weight=param_resolver.value_of(self.weight))
# How should the gate look in ASCII diagrams?
def _circuit_diagram_info_(self, args):
return cirq.protocols.CircuitDiagramInfo(
wire_symbols=('Z', 'X'),
exponent=self.weight)
```
#### Testing the Gate
__BEFORE MOVING ON__ make sure you've executed the `EigenGate` solution of the $ZX$ gate implementation. That's the one assumed for the code below, though other implementations may work just as well. In general, the cells in this Colab may depend on previous cells.
Let's test out our gate. First we'll make a simple test circuit to see that the ASCII diagrams are displaying properly:
```
a = cirq.NamedQubit("a")
b = cirq.NamedQubit("b")
w = .15 # Put your own weight here. Try using a cirq.Symbol.
circuit = cirq.Circuit.from_ops(ZXGate(w).on(a,b))
print(circuit)
```
We should also check that the matrix is what we expect:
```
test_matrix = np.array([[np.cos(np.pi*w), 1j*np.sin(np.pi*w), 0, 0],
[1j*np.sin(np.pi*w), np.cos(np.pi*w), 0, 0],
[0, 0, np.cos(np.pi*w), -1j*np.sin(np.pi*w)],
[0, 0, -1j*np.sin(np.pi*w),np.cos(np.pi*w)]])
# Test for five digits of accuracy. Won't work with cirq.Symbol
assert (circuit.to_unitary_matrix().round(5) == test_matrix.round(5)).all()
```
### Create Circuit
Now we have to create the QNN circuit. We are simply going to let a $ZX$ gate act between each data qubit and the readout qubit. For simplicity, let's share a single weight between all of the gates. You are invited to experiment with making the weights different, but in our example problem below we can set them all equal by symmetry.
__Question__: What about the order of these actions? Which data qubits should interact with the readout qubit first?
Remember that we also want to measure the readout qubit in the $Y$ basis. Fundamentally speaking, all measurements in Cirq are computational basis measurements, and so we have to implement the change of basis by hand.
__Question__: What is the circuit for a basis transformation from the $Y$ basis to the computational basis? We want to choose our transformation so that an eigenstate with $Y=+1$ becomes an eigenstate with $Z=+1$ prior to measurement.
#### Solutions
* The $ZX$ gates all commute with each other, so the order of implementation doesn't matter!
* We want a transformation that maps $\big(|0\rangle + i |1\rangle\big)/\sqrt{2}$ to $|0\rangle$ and $\big(|0\rangle - i |1\rangle\big)/\sqrt{2}$ to $|1\rangle$. Recall that the phase gate $S$ is given in matrix form by
$$
S = \begin{bmatrix}
1 & 0 \\
0 & i
\end{bmatrix},
$$
and the Hadamard transform is given by
$$
H = \frac{1}{\sqrt{2}}\begin{bmatrix}
1 & 1 \\
1 & -1
\end{bmatrix},
$$
So acting with $S^{-1}$ and then $H$ gives what we want. We'll add these two gates to the end of the circuit on the readout qubit so that the final measurement effectively occurs in the $Y$ basis.
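A quick numerical check of this claim (a small sketch added here, not part of the original notebook): applying $H S^{-1}$ to the two $Y$ eigenstates should produce the two computational basis states.
```
# Verify that H @ S^-1 maps the Y=+1 eigenstate to |0> and the Y=-1 eigenstate to |1>.
S_inv = np.array([[1, 0], [0, -1j]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
y_plus = np.array([1, 1j]) / np.sqrt(2)   # Y = +1 eigenstate
y_minus = np.array([1, -1j]) / np.sqrt(2) # Y = -1 eigenstate
print((H @ S_inv @ y_plus).round(5))   # expect [1, 0]
print((H @ S_inv @ y_minus).round(5))  # expect [0, 1]
```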
#### Make Circuit
A clean way of making circuits is to define generators for logically-related circuit elements, and then `append` those to the circuit you want to make. Here is a code snippet that initializes our qubits and defines a generator for a single layer of $ZX$ gates:
```
# Total number of data qubits
INPUT_SIZE = 9
data_qubits = cirq.LineQubit.range(INPUT_SIZE)
readout = cirq.NamedQubit('r')
# Initialize parameters of the circuit
params = {'w': 0}
def ZX_layer():
"""Adds a ZX gate between each data qubit and the readout.
All gates are given the same cirq.Symbol for a weight."""
for qubit in data_qubits:
yield ZXGate(cirq.Symbol('w')).on(qubit, readout)
```
Use this generator to create the QNN circuit. Don't forget to add the basis change for the readout qubit at the end!
```
qnn = cirq.Circuit()
qnn.append(???) # YOUR CODE HERE
```
#### Solution
```
qnn = cirq.Circuit()
qnn.append(ZX_layer())
qnn.append([cirq.S(readout)**-1, cirq.H(readout)]) # Basis transformation
```
#### View the Circuit
It's usually a good idea to view the ASCII diagram of your circuit to make sure it's doing what you want. This can be displayed by printing the circuit.
```
print(qnn)
```
You can experiment with adding more layers of $ZX$ gates (or adding other kinds of transformations!) to your QNN, but we can use this simplest kind of circuit to analyze a simple toy problem, which is what we will do next.
### A Toy Problem: Biased Coin Flips
As a toy problem, let's get our quantum neuron to decide whether a coin is biased toward heads or toward tails based on a sequence of coin flips.
To be specific, let's try to train a QNN to distinguish between a coin that yields "heads" with probability $p$, and one that yields "heads" with probability $1-p$. Without loss of generality, let's say that $p\leq 0.5$. We don't need a fancy QNN to come up with a winning strategy: given a series of coin flips, you guess the $p$ coin if the majority of flips are "tails" and the $1-p$ coin if the majority are "heads." But for purposes of illustration, let's do it the fancy way.
To translate this problem into our QNN language, we need to encode the sequence of coin flips into a computational basis state. Let's associate $0$ with tails and $1$ with heads. So the sequence of coin flips becomes a sequence of $0$s and $1$s, and these define a computational basis state.
We also need to define a convention for our labeling of the two coins. We'll say that the $p$ coin (majority tails) gets the label $-1$ and the $1-p$ coin (majority heads) gets the label $+1$. So when we measure $Y$ at the end of the computation we can say that the majority-vote of the $Y$ outcome is our predicted label.
To be a little more nuanced (and to aid the formulation of the problem), let's say that the expectation value $\langle Y \rangle$ for a given input state defines our estimator for the label of that state. We're going to use that to define a loss function for training next.
### Define Loss Function
Suppose we have a collection of $N$ (bitstring, label) pairs. A useful loss function to characterize the effectiveness of our QNN on this collection is
$$
\text{Loss} = \frac{1}{2N}\sum_{j=1}^N (1- \ell_j\langle Y \rangle_j),
$$
where $\ell_j$ is the label of the $j$th pair and $\langle Y \rangle_j$ is the expectation value of $Y$ on the readout qubit using the $j$th bitstring as input. If the network is perfect, the loss is equal to zero. If the network is maximally unsure about the labels (so that $\langle Y \rangle_j = 0$ for all $j$) then the loss is equal to $1/2$. And if the network gets everything wrong, then the loss is equal to $1$. We're going to train our network using this loss function, so next we'll write some functions to compute the loss.
Another useful function to have around is the average classification error. Recall that our prescription was to execute the quantum circuit many times and take a majority vote to compute the predicted label. The majority vote for the readout is the same as $\text{sign}(\langle Y \rangle)$, so we can write a formula for the error in this procedure as
$$
\text{Error} = \frac{1}{2N}\sum_{j=1}^N \big(1- \ell_j\text{sign}\big(\langle Y \rangle_j\big)\big).
$$
This is not so useful as a loss function because it is not smooth and does not provide an incentive to make $|\langle Y \rangle|$ large, but it can be an informative quantity to compute.
__Question__: Why would we want $|\langle Y \rangle|$ to be large?
#### Solution
When we implement this algorithm on the actual hardware, $\langle Y \rangle$ can only be estimated by repeatedly executing the circuit and measuring the result. The more measurements we make, the better our estimate of $\langle Y \rangle$ will be. Even if we are only interested in $\text{sign}\big(\langle Y \rangle\big)$, we will need to make enough measurements to be sure that our estimate has the correct sign, and if $|\langle Y \rangle|$ is large then fewer measurements will be required to have high confidence in the sign.
Furthermore, if the machine is noisy (which it will be), then the noise will induce some errors in our estimate of $\langle Y \rangle$. If $|\langle Y \rangle|$ is small then it's likely that the noise will lead to the wrong sign.
#### Expectation Value
Our first function computes the expectation value of the readout qubit for our circuit given a specification of the initial state. Rather than a bitstring, we'll specify the initial state as an array of $0$s and $1$s. These are the outputs of the coin flips in our toy problem. We'll compute the expectation value exactly using the wavefunction for now.
```
def readout_expectation(state):
"""Takes in a specification of a state as an array of 0s and 1s
and returns the expectation value of Z on the readout qubit.
Uses the Simulator to calculate the wavefunction exactly."""
# A convenient representation of the state as an integer
state_num = int(np.sum(state*2**np.arange(len(state))))
resolver = cirq.ParamResolver(params)
simulator = cirq.Simulator()
# Specify an explicit qubit order so that we know which qubit is the readout
result = simulator.simulate(qnn, resolver, qubit_order=[readout]+data_qubits,
initial_state=state_num)
wf = result.final_state
# Because we specified qubit order, the Z value of the readout is the most
# significant bit.
Z_readout = np.append(np.ones(2**INPUT_SIZE), -np.ones(2**INPUT_SIZE))
return np.sum(np.abs(wf)**2 * Z_readout)
```
#### Loss and Error
The next functions take a list of states (each specified as an array of $0$s and $1$s as before) and a corresponding list of labels and computes the loss and error, respectively, of that list.
```
def loss(states, labels):
loss=0
for state, label in zip(states,labels):
loss += 1 - label*readout_expectation(state)
return loss/(2*len(states))
def classification_error(states, labels):
error=0
for state,label in zip(states,labels):
error += 1 - label*np.sign(readout_expectation(state))
return error/(2*len(states))
```
#### Generating Data
For our toy problem we'll want to be able to generate a batch of data. Here is a helper function for that task:
```
def make_batch():
"""Generates a set of labels, then uses those labels to generate inputs.
label = -1 corresponds to majority 0 in the state, label = +1 corresponds to
majority 1.
"""
np.random.seed(0) # For consistency in demo
labels = (-1)**np.random.choice(2, size=100) # Smaller batch sizes will speed up computation
states = []
for label in labels:
states.append(np.random.choice(2, size=INPUT_SIZE, p=[0.5-label*0.2,0.5+label*0.2]))
return states, labels
states, labels = make_batch()
```
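For comparison (an addition, not in the original notebook), the classical majority-vote strategy described earlier can be evaluated directly on this batch; it gives a baseline for the QNN results below.
```
# Classical baseline: guess +1 if the majority of flips are heads (1s), else -1.
# With INPUT_SIZE = 9 there are no ties.
majority_votes = np.array([1 if state.sum() > len(state) / 2 else -1 for state in states])
print('Classical majority-vote error rate: {}'.format(np.mean(majority_votes != labels)))
```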
### Training
Now we'll try to find the optimal weight to solve our toy problem. For illustration, we'll do both a brute-force search of the parameter space and a stochastic gradient descent.
#### Brute Force Search
Let's compute both the loss and error rate on a batch of data as a function of the shared weight between all the gates.
```
# Using cirq.Simulator with the EigenGate implementation of ZX, this takes
# about 30s to run. Using the XmonSimulator took about 40 minutes the last
# time I tried it!
%%time
linspace = np.linspace(start=-1, stop=1, num=80)
train_losses = []
error_rates = []
for p in linspace:
params = {'w': p}
train_losses.append(loss(states, labels))
error_rates.append(classification_error(states, labels))
plt.plot(linspace, train_losses)
plt.xlabel('Weight')
plt.ylabel('Loss')
plt.title('Loss as a Function of Weight')
plt.show()
plt.plot(linspace, error_rates)
plt.xlabel('Weight')
plt.ylabel('Error Rate')
plt.title('Error Rate as a Function of Weight')
plt.show()
```
__Question__: Why are the loss and error functions periodic with period $1$ when the $ZX$ gate is periodic with period $2$?
#### Solution
This kind of "halving" of the periodicity of $\langle Y \rangle$ compared to the period of the gate itself is typical of qubit systems. We can analyze how it works mathematically in a simpler setting. Instead of the $ZX$ gate, let's just imagine that we rotate the readout qubit around the $X$ axis by some fixed amount. This is the effective calculation for a single fixed data input.
$$
\begin{align}
\langle Y \rangle &= \langle 0 |\exp(-i \pi w X) Y \exp(i \pi w X) |0 \rangle\\
&= \langle 0 |\big(\cos \pi w - i X\sin \pi w \big) Y \big(\cos \pi w + i X \sin \pi w \big) |0 \rangle\\
&= \langle 0 |\big(Y\cos 2\pi w +Z \sin 2\pi w \big) |0 \rangle\\
&= \sin 2\pi w.
\end{align}
$$
#### Stochastic Gradient Descent
To train the network we'll use stochastic gradient descent. Note that this isn't necessarily a good idea since the loss function is far from convex, and there's a good chance we'll get stuck in a very inefficient local minimum if we initialize the parameters randomly. But as an exercise we'll do it anyway. In the next section we'll discuss other ways to train these sorts of networks.
We'll compute the gradient of the loss function using a symmetric finite-difference approximation: $f'(x) \approx (f(x + \epsilon) - f(x-\epsilon))/2\epsilon$. This is the most straightforward way to do it using the quantum computer. We'll also generate a new instance of the problem each time.
```
def stochastic_grad_loss():
"""Generates a new data point and computes the gradient of the loss
using that data point."""
# Randomly generate the data point.
label = (-1)**np.random.choice(2)
state = np.random.choice(2, size=INPUT_SIZE, p=[0.5-label*0.2,0.5+label*0.2])
# Compute the gradient using finite difference
eps = 10**-5 # Discretization of gradient. Try different values.
params['w'] -= eps
loss1 = loss([state],[label])
params['w'] += 2*eps
grad = (loss([state],[label])-loss1)/(2*eps)
params['w'] -= eps # Reset the parameter value
return grad
```
We can apply this function repeatedly to flow toward the minimum:
```
eta = 10**-4 # Learning rate. Try different values.
params = {'w': 0} # Initialize weight. Try different values.
for i in range(201):
if not i%25:
print('Step: {} Loss: {}'.format(i, loss(states, labels)))
grad = stochastic_grad_loss()
params['w'] += -eta*grad
print('Final Weight: {}'.format(params['w']))
```
### Use Sampling Instead of Calculating from the Wavefunction
On real hardware we will have to use sampling to find results instead of computing the exact wavefunction. Rewrite the `readout_expectation` function to compute the expectation value using sampling instead. Unlike with the wavefunction calculation, we also need to build our circuit in a way that accounts for the desired input state (we are always assumed to start in the all-$|0\rangle$ state).
```
def readout_expectation_sample(state):
"""Takes in a specification of a state as an array of 0s and 1s
and returns the expectation value of Z on the readout qubit.
Uses the XmonSimulator to sample the final wavefunction."""
# We still need to resolve the parameters in the circuit.
resolver = cirq.ParamResolver(params)
# Make a copy of the QNN to avoid making changes to the global variable.
measurement_circuit = qnn.copy()
# Modify the measurement circuit to account for the desired input state.
# YOUR CODE HERE
# Add appropriate measurement gate(s) to the circuit.
# YOUR CODE HERE
simulator = cirq.google.XmonSimulator()
result = simulator.run(measurement_circuit, resolver, repetitions=10**6) # Try adjusting the repetitions
# Return the Z expectation value
return ((-1)**result.measurements['m']).mean()
```
#### Solution
```
def readout_expectation_sample(state):
"""Takes in a specification of a state as an array of 0s and 1s
and returns the expectation value of Z on the readout qubit.
Uses the Simulator to sample measurement outcomes."""
# We still need to resolve the parameters in the circuit.
resolver = cirq.ParamResolver(params)
# Make a copy of the QNN to avoid making changes to the global variable.
measurement_circuit = qnn.copy()
# Modify the measurement circuit to account for the desired input state.
for i, qubit in enumerate(data_qubits):
if state[i]:
measurement_circuit.insert(0,cirq.X(qubit))
# Add appropriate measurement gate(s) to the circuit.
measurement_circuit.append(cirq.measure(readout, key='m'))
simulator = cirq.Simulator()
result = simulator.run(measurement_circuit, resolver, repetitions=10**6) # Try adjusting the repetitions
# Return the Z expectation value
return ((-1)**result.measurements['m']).mean()
```
#### Comparison of Sampling with the Exact Wavefunction
Just to illustrate the difference between sampling and using the wavefunction, try running the two methods several times on identical input:
```
state = [0,0,0,1,0,1,1,0,1] # Try different initial states.
params = {'w': 0.05} # Try different weights.
print("Exact expectation value: {}".format(readout_expectation(state)))
print("Estimates from sampling:")
for _ in range(5):
print(readout_expectation_sample(state))
```
As an exercise, try repeating some of the above calculations (e.g., the SGD optimization) using `readout_expectation_sample` in place of `readout_expectation`. How many repetitions should you use? How should the hyperparameters `eps` and `eta` be adjusted in response to the number of repetitions?
### Optimizing For Hardware
There are more issues to think about if you want to run your network on real hardware. First is the connectivity issue, and second is minimizing the number of two-qubit operations.
Consider the Foxtail device:
```
print(cirq.google.Foxtail)
```
The qubits are arranged in two rows of eleven qubits each, and qubits can only communicate with their nearest neighbors along the horizontal and vertical connections. That does not mesh well with the QNN we designed, where all of the data qubits need to interact with the readout qubit.
There is no *in-principle* restriction on the kinds of algorithms you are allowed to run. The solution to the connectivity problem is to make use of SWAP gates, which have the effect of exchanging the states of two (neighboring) qubits. It's equivalent to what you would get if you physically exchanged the positions of two of the qubits in the grid. The problem is that each SWAP operation is costly, so you want to avoid swapping as much as possible. We need to think carefully about our algorithm design to minimize the number of SWAPs performed as the circuit is executed.
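For reference, the SWAP unitary can be inspected with the same pattern used earlier in this notebook (a small illustrative snippet):
```
# The SWAP gate exchanges the states of two qubits: its unitary permutes |01> and |10>.
q0, q1 = cirq.LineQubit.range(2)
print(cirq.Circuit.from_ops(cirq.SWAP(q0, q1)).to_unitary_matrix().round(2))
```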
__Question__: How should we modify our QNN circuit so that it can runs efficiently on the Foxtail device?
#### Solution
One strategy is to move the readout qubit around as it talks to the other qubits. Suppose the readout qubit starts in the $(0,0)$ position. First it can interact with the qubits in the $(1,0)$ and $(0,1)$ positions like normal, then SWAP with the $(0,1)$ qubit. Now the readout qubit is in the $(0,1)$ position and can interact with the $(1,1)$ and $(0,2)$ qubits before swapping with the $(0,2)$ qubit. It continues down the line in this fashion.
Let's code up this circuit:
```
qnn_fox = cirq.Circuit()
w = 0.2 # Want an explicit numerical weight for later
for i in range(10):
qnn_fox.append([ZXGate(w).on(cirq.GridQubit(1,i), cirq.GridQubit(0,i)),
ZXGate(w).on(cirq.GridQubit(0,i+1), cirq.GridQubit(0,i)),
cirq.SWAP(cirq.GridQubit(0,i), cirq.GridQubit(0,i+1))])
qnn_fox.append(ZXGate(w).on(cirq.GridQubit(1,10), cirq.GridQubit(0,10)))
qnn_fox.append([(cirq.S**-1)(cirq.GridQubit(0,10)),cirq.H(cirq.GridQubit(0,10)),
cirq.measure(cirq.GridQubit(0,10))])
print(qnn_fox)
```
As coded, this circuit still won't run on the Foxtail device. That's because the gates we've defined are not native gates. Cirq has a built-in method that will convert our gates to Xmon gates (which are native for the Foxtail device) and attempt to optimize the circuit by reducing the total number of gates:
```
cirq.google.optimized_for_xmon(qnn_fox, new_device=cirq.google.Foxtail, allow_partial_czs=True)
```
Notice how we were able to pass in the `new_device` argument without getting an error message. That means the circuit will run properly on the Foxtail.
__Question__: We were smart to place the SWAP gates and $ZX$ gates next to each other where possible. Why?
__Question__: Can you see any ways to further optimize this circuit by hand? Hint: not all of the qubits are being measured.
#### Solutions
* Placing the SWAP and $ZX$ gates next to each other lets the optimizer treat the combination of them as a single gate, which leads to fewer total two-qubit gates.
* The state of any qubit which is not being measured does not matter. In particular, any single-qubit gate acting on a non-measured qubit after the last two-qubit gate acting on that qubit will not affect the state of the measured qubit and so can be dropped.
### Exercise: Multiple Weights
Instead of just a single weight, create a neuron with multiple weights. How will you optimize those weights?
### Exercise: Analytic Calculation
Because we stuck to such a simple example, essentially everything in this notebook can be calculated analytically. Do those calculations.
### Exercise: Add More "Quantum" Operations
The neuron we constructed essentially does a classical calculation. You can add more ingredients that make the data processing more "quantum." For example, you can add layers of Hadamard gates in between additional layers of $ZX$ gates. This sort of thing was explored in [Farhi and Neven](https://arxiv.org/abs/1802.06002). Try playing around with it.
| github_jupyter |
----
<img src="../../../files/refinitiv.png" width="20%" style="vertical-align: top;">
# Data Library for Python
----
## Content layer - News
This notebook demonstrates how to retrieve News.
#### Learn more
To learn more about the Refinitiv Data Library for Python please join the Refinitiv Developer Community. By [registering](https://developers.refinitiv.com/iam/register) and [logging](https://developers.refinitiv.com/content/devportal/en_us/initCookie.html) into the Refinitiv Developer Community portal you will have free access to a number of learning materials like
[Quick Start guides](https://developers.refinitiv.com/en/api-catalog/refinitiv-data-platform/refinitiv-data-library-for-python/quick-start),
[Tutorials](https://developers.refinitiv.com/en/api-catalog/refinitiv-data-platform/refinitiv-data-library-for-python/learning),
[Documentation](https://developers.refinitiv.com/en/api-catalog/refinitiv-data-platform/refinitiv-data-library-for-python/docs)
and much more.
#### Getting Help and Support
If you have any questions regarding using the API, please post them on
the [Refinitiv Data Q&A Forum](https://community.developers.refinitiv.com/spaces/321/index.html).
The Refinitiv Developer Community will be happy to help.
## Set the configuration file location
For ease of use, you have the option to set initialization parameters of the Refinitiv Data Library in the _refinitiv-data.config.json_ configuration file. This file must be located beside your notebook, in your user folder, or in a folder defined by the _RD_LIB_CONFIG_PATH_ environment variable. The _RD_LIB_CONFIG_PATH_ environment variable is the option used by this series of examples. The following code sets this environment variable.
```
import os
os.environ["RD_LIB_CONFIG_PATH"] = "../../../Configuration"
```
## Some Imports to start with
```
import refinitiv.data as rd
from refinitiv.data.content import news
from datetime import timedelta
```
## Open the data session
The open_session() function creates and opens a session based on the information contained in the refinitiv-data.config.json configuration file. Please edit this file to set the session type and other parameters required for the session you want to open.
```
rd.open_session('platform.rdp')
```
## Retrieve data
### Headlines
#### Get headlines
```
response = news.headlines.Definition("Apple").get_data()
response.data.df
```
#### Get headlines within a range of dates
```
response = news.headlines.Definition(
query="Refinitiv",
date_from="20.03.2021",
date_to=timedelta(days=-4),
count=3
).get_data()
response.data.df
```
#### Get a limited number of headlines
```
response = news.headlines.Definition(query = "Google", count = 350).get_data()
response.data.df
```
### Story
```
response = news.story.Definition("urn:newsml:reuters.com:20211003:nNRAgvhyiu:1").get_data()
print(response.data.story.title, '\n')
print(response.data.story.content)
```
## Close the session
```
rd.close_session()
```
| github_jupyter |
# Migrating scripts from Framework Mode to Script Mode
This notebook focuses on how to migrate scripts using Framework Mode to Script Mode. The original notebook using Framework Mode can be found here: https://github.com/awslabs/amazon-sagemaker-examples/blob/4c2a93114104e0b9555d7c10aaab018cac3d7c04/sagemaker-python-sdk/tensorflow_distributed_mnist/tensorflow_local_mode_mnist.ipynb
### Set up the environment
```
import os
import subprocess
import sagemaker
from sagemaker import get_execution_role
sagemaker_session = sagemaker.Session()
role = get_execution_role()
```
### Download the MNIST dataset
```
import utils
from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf
data_sets = input_data.read_data_sets('data', dtype=tf.uint8, reshape=False, validation_size=5000)
utils.convert_to(data_sets.train, 'train', 'data')
utils.convert_to(data_sets.validation, 'validation', 'data')
utils.convert_to(data_sets.test, 'test', 'data')
```
### Upload the data
We use the ```sagemaker.Session.upload_data``` function to upload our datasets to an S3 location. The return value ```inputs``` identifies the location -- we will use it later when we start the training job.
```
inputs = sagemaker_session.upload_data(path='data', key_prefix='data/mnist')
```
# Construct an entry point script for training
In this example, we assume that you already have a Framework Mode training script named `mnist.py`:
```
!pygmentize 'mnist.py'
```
The training script `mnist.py` includes the Framework Mode functions ```model_fn```, ```train_input_fn```, ```eval_input_fn```, and ```serving_input_fn```. We need to create an entry point script that uses these functions to create a ```tf.estimator```:
```
%%writefile train.py
import argparse
# import original framework mode script
import mnist
import tensorflow as tf
if __name__ == '__main__':
parser = argparse.ArgumentParser()
# read hyperparameters as script arguments
parser.add_argument('--training_steps', type=int)
parser.add_argument('--evaluation_steps', type=int)
args, _ = parser.parse_known_args()
# creates a tf.Estimator using `model_fn` that saves models to /opt/ml/model
estimator = tf.estimator.Estimator(model_fn=mnist.model_fn, model_dir='/opt/ml/model')
# creates parameterless input_fn function required by the estimator
def input_fn():
return mnist.train_input_fn(training_dir='/opt/ml/input/data/training', params=None)
train_spec = tf.estimator.TrainSpec(input_fn, max_steps=args.training_steps)
# creates parameterless serving_input_receiver_fn function required by the exporter
def serving_input_receiver_fn():
return mnist.serving_input_fn(params=None)
exporter = tf.estimator.LatestExporter('Servo',
serving_input_receiver_fn=serving_input_receiver_fn)
# creates parameterless input_fn function required by the evaluation
def input_fn():
return mnist.eval_input_fn(training_dir='/opt/ml/input/data/training', params=None)
eval_spec = tf.estimator.EvalSpec(input_fn, steps=args.evaluation_steps, exporters=exporter)
# start training and evaluation
tf.estimator.train_and_evaluate(estimator=estimator, train_spec=train_spec, eval_spec=eval_spec)
```
## Changes in the SageMaker TensorFlow estimator
We need to create a TensorFlow estimator pointing to ```train.py``` as the entrypoint:
```
from sagemaker.tensorflow import TensorFlow
mnist_estimator = TensorFlow(entry_point='train.py',
dependencies=['mnist.py'],
role='SageMakerRole',
framework_version='1.13',
hyperparameters={'training_steps':10, 'evaluation_steps':10},
py_version='py3',
train_instance_count=1,
train_instance_type='local')
mnist_estimator.fit(inputs)
```
# Deploy the trained model to prepare for predictions
The deploy() method creates an endpoint (in this case locally) which serves prediction requests in real-time.
```
mnist_predictor = mnist_estimator.deploy(initial_instance_count=1, instance_type='local')
```
# Invoking the endpoint
```
import numpy as np
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
for i in range(10):
data = mnist.test.images[i].tolist()
predict_response = mnist_predictor.predict(data)
print("========================================")
label = np.argmax(mnist.test.labels[i])
print("label is {}".format(label))
print("prediction is {}".format(predict_response))
```
# Clean-up
Deleting the local endpoint when you're finished is important since you can only run one local endpoint at a time.
```
mnist_estimator.delete_endpoint()
```
| github_jupyter |
## Coding Matrices
Here are a few exercises to get you started with coding matrices. The exercises start off with vectors and then get more challenging.
### Vectors
```
### TODO: Assign the vector <5, 10, 2, 6, 1> to the variable v
v = []
```
The v variable contains a Python list. This list could also be thought of as a 1x5 matrix with 1 row and 5 columns. How would you represent this list as a matrix?
```
### TODO: Assign the vector <5, 10, 2, 6, 1> to the variable mv
### The difference between a vector and a matrix in Python is that
### a matrix is a list of lists.
### Hint: See the last quiz on the previous page
mv = [[]]
```
How would you represent this vector in its vertical form with 5 rows and 1 column? When defining matrices in Python, each row is a list. So in this case, you have 5 rows and thus will need 5 lists.
As an example, this is what the vector $$<5, 7>$$ would look like as a 1x2 matrix in Python:
```python
matrix1by2 = [
[5, 7]
]
```
And here is what the same vector would look like as a 2x1 matrix:
```python
matrix2by1 = [
[5],
[7]
]
```
```
### TODO: Assign the vector <5, 10, 2, 6, 1> to the variable vT
### vT is a 5x1 matrix
vT = []
```
### Assigning Matrices to Variables
```
### TODO: Assign the following matrix to the variable m
### 8 7 1 2 3
### 1 5 2 9 0
### 8 2 2 4 1
m = [[]]
```
### Accessing Matrix Values
```
### TODO: In matrix m, change the value
### in the second row last column from 0 to 5
### Hint: You do not need to rewrite the entire matrix
```
### Looping through Matrices to do Math
Coding mathematical operations with matrices can be tricky. Because matrices are lists of lists, you will need to use a for loop inside another for loop. The outside for loop iterates over the rows and the inside for loop iterates over the columns.
Here is some pseudo code
```python
for i in number of rows:
for j in number of columns:
mymatrix[i][j]
```
To figure out how many times to loop over the matrix, you need to know the number of rows and number of columns.
If you have a variable with a matrix in it, how could you figure out the number of rows? How could you figure out the number of columns? The [len](https://docs.python.org/2/library/functions.html#len) function in Python might be helpful.
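For example (a minimal sketch of the idea), if a matrix is stored as a list of lists:
```python
matrix = [
    [8, 7, 1, 2, 3],
    [1, 5, 2, 9, 0]
]
number_of_rows = len(matrix)        # the number of inner lists -> 2
number_of_columns = len(matrix[0])  # the length of any one row -> 5
```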
### Scalar Multiplication
```
### TODO: Use for loops to multiply each matrix element by 5
### Store the answer in the r variable. This is called scalar
### multiplication
###
### HINT: First write a for loop that iterates through the rows
### one row at a time
###
### Then write another for loop within the for loop that
### iterates through the columns
###
### If you used the variable i to represent rows and j
### to represent columns, then m[i][j] would give you
### access to each element in the matrix
###
### Because r is an empty list, you cannot directly assign
### a value like r[i][j] = m[i][j]. You might have to
### work on one row at a time and then use r.append(row).
r = []
```
### Printing Out a Matrix
```
### TODO: Write a function called matrix_print()
### that prints out a matrix in
### a way that is easy to read.
### Each element in a row should be separated by a tab
### And each row should have its own line
### You can test our your results with the m matrix
### HINT: You can use a for loop within a for loop
### In Python, the print() function will be useful
### print(5, '\t', end = '') will print out the integer 5,
### then add a tab after the 5. The end = '' makes sure that
### the print function does not print out a new line if you do
### not want a new line.
### Your output should look like this
### 8 7 1 2 3
### 1 5 2 9 5
### 8 2 2 4 1
def matrix_print(matrix):
return
m = [
[8, 7, 1, 2, 3],
[1, 5, 2, 9, 5],
[8, 2, 2, 4, 1]
]
matrix_print(m)
```
### Test Your Results
```
### You can run these tests to see if you have the expected
### results. If everything is correct, this cell has no output
assert v == [5, 10, 2, 6, 1]
assert mv == [
[5, 10, 2, 6, 1]
]
assert vT == [
[5],
[10],
[2],
[6],
[1]]
assert m == [
[8, 7, 1, 2, 3],
[1, 5, 2, 9, 5],
[8, 2, 2, 4, 1]
]
assert r == [
[40, 35, 5, 10, 15],
[5, 25, 10, 45, 25],
[40, 10, 10, 20, 5]
]
```
### Print Out Your Results
```
### Run this cell to print out your answers
print(v)
print(mv)
print(vT)
print(m)
print(r)
```
| github_jupyter |
```
%load_ext sql
%sql sqlite:///flights.db
```
Homework 1
=======
### Notes:
**_Please read carefully_**
* The `prettytable` module must be installed before the scripts can run. (Installation: `pip install --user prettytable`)
* The `flights.db` file must be in the same directory as this homework Jupyter notebook (if you don't have it, download it [here](http://open.gnu.ac.kr/lecslides/2018-2-DB/Assignments1/flights.db.zip)) and unzip it. In the directory containing `flights.db.zip`, run `unzip flights.db.zip`.
* After downloading the `flights.db` database, run the command in the topmost cell.
* Creating new cells for testing, debugging, exploring, etc. is strongly encouraged.
* If you run a cell and `In [*]:` is shown on its left side, the cell is _still running_.
* **If a cell seems stuck and does not return a result for a long time: restart the Python kernel so it reconnects to SQL.**
* How to restart the kernel: "Kernel >> Restart & Clear Output", then run the cells one by one from the top.
* Likewise, to load a different version of the database you must create a new connection.
* Remember:
* `%sql [SQL query;]` is used for _single-line_ SQL queries
* `%%sql [SQL query;]` is used for _multi-line_ SQL queries
* Running `submit.py` processes your queries and prints the results.
* The expected output is given in the `correct_output.txt` file.
* To compare your results, run `python sanity_check.py`, or run the following command in a terminal: `python submit.py > my_output; diff my_output correct_output.txt`
* **The `submit.py` file you write for this homework must strictly follow the format below.** By "format" we mean:
* Column names must be **exactly the same names** as those shown in `correct_output.txt`
* Columns must appear in **exactly the same order** as shown in `correct_output.txt`
### How to submit:
* Do not submit the iPython notebook itself
* Instead, copy and paste each query into `submit.py` under its matching question number
* Do not include the `%sql` or `%%sql` commands in your SQL statements
* Your submitted queries will be evaluated by running them on the same schema with randomly chosen values, so do not hard-code constants to reproduce the expected answers
* **Submit your answers as described in `submission_instructions.txt`**
_Let's have fun getting started!_
Overview: Flight delays
------------------------
Nothing is as annoying as a delayed flight, right?
We are looking for new ways to keep our travel plans from being delayed. Data we recently found explains why flights get delayed and what we should be willing to give up.
Let's use SQL to find out those reasons.
----
This assignment uses flight delay data for July 2017. Let's first look at the basic relations in the database.
```
%%sql
SELECT *
FROM flight_delays
LIMIT 1;
```
You can see there are quite a lot of columns. So how many rows are there?
```
%%sql
SELECT COUNT(*) AS num_rows
FROM flight_delays
```
That's a lot of data! We won't find the answers by hand and head alone.
We don't need to load any more data into the database. To learn what the columns mean, follow [this link](https://www.transtats.bts.gov/DL_SelectFields.asp?Table_ID=236).
A few additional tables are also included. Using these tables you can convert `airline_id`, `airport_id`, and `day_of_week` into human-readable information.
Use the cell below to check the contents of the `airlines` and `weekdays` tables:
```
%%sql
```
Good. Now let's get started.
# SQL Queries
Query 1: What is the average flight delay?
------------------------
To get a feel for the data, let's write a simple query.
In the cell below, write a query that computes the average delay of all flights during July 2017.
```
%%sql
```
Query 2: What is the longest delay?
------------------------
The average is not that large. But what about the _longest_ delay?
In the cell below, write a query that finds the longest arrival delay during July 2017.
```
%%sql
```
Query 3: Which flight should you avoid for your sanity?
------------------------
Which flight was the most delayed?
In the cell below, write a query that prints the carrier (`carrier`), flight number, origin city, destination city, and flight date of the most delayed arrival in July 2017. Do not plug the value obtained above into your query as a constant; use a nested query instead.
```
%%sql
```
Query 4: Which day of the week is the worst for traveling?
------------------------
Since the semester has started you can't travel far, but business trips are still unavoidable. Which day of the week is the worst for flying?
In the cell below, write a query that outputs the average delay for each day of the week, sorted in descending order. The output schema should be of the form (`weekday_name`, `average_delay`).
**Note: do not output the raw weekday IDs.** (Hint: join with the `weekdays` table to output the weekday names.)
```
%%sql
```
Query 5: Which airline departing from SFO has the longest delays?
------------------------
Now that we know which day of the week to avoid, we need to pick one of the airlines departing from SFO. Since we haven't said where we're going, let's compute the average delay over all flights for every airline departing from SFO.
In the cell below, write a query that computes, for each airline departing from SFO in July 2017, the average delay over all of its flights, sorted in descending order.
**Note: do not output the raw airline IDs.** (Hint: join with the `airlines` table in a nested query to output the airline names.)
```
%%sql
```
Query 6: Let's look at the fraction of delayed airlines
------------------------
Many flights are delayed. Let's find out which airlines have long delays.
In the cell below, compute the fraction of airlines whose flights were delayed by more than 10 minutes on average. Do not count the total number of airlines separately and hard-code it into your query, and use at least one `HAVING` clause.
Note: since `COUNT(*)` in sqlite returns an integer, you must use `SELECT CAST (COUNT(*) AS float)` or `COUNT(*)*1.0` at least once to get a floating-point result.
```
%%sql
```
Query 7: How does departure delay affect arrival delay?
------------------------
We want to know how much a delayed departure affects the arrival time.
The [sample covariance](https://en.wikipedia.org/wiki/Covariance) is a statistic that measures how two variables vary together and tells us whether they are correlated. The larger the covariance, the stronger the correlation; a negative value indicates an inverse correlation. The sample covariance is computed as:
$$
Cov(X,Y) = \frac{1}{n-1} \sum_{i=1}^n (x_i-\bar{x})(y_i-\bar{y})
$$
where $x_i$ is the $i$-th value of $X$ and $y_i$ is the $i$-th value of $Y$. The means of $X$ and $Y$ are written $\bar{x}$ and $\bar{y}$.
In the cell below, write a single query that computes the covariance between arrival delay and departure delay.
*Note: you could also use the [Pearson correlation coefficient](https://en.wikipedia.org/wiki/Pearson_correlation_coefficient), which is normalized and reports the correlation as a value between 1 and -1. However, SQLite has no square-root function, so that formula cannot be used here. Other common databases (PostgreSQL and MySQL) do implement a square-root function.*
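Expanding the product gives an equivalent form of the sample covariance that maps more directly onto SQL aggregates:
$$
Cov(X,Y) = \frac{1}{n-1}\left(\sum_{i=1}^n x_i y_i - n\,\bar{x}\,\bar{y}\right)
$$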
```
%%sql
```
Query 8: It was a rough week...
------------------------
Which airlines had a higher average delay in the last week of July (from the 24th on) than in the earlier weeks (before the 24th), in absolute terms?
In the cell below, write a query that prints the names of the airlines whose average delay between the 24th and the 31st was higher, in absolute terms, than their average delay between the 1st and the 23rd.
Note: following [date handling in sqlite](http://www.sqlite.org/lang_datefunc.html), it will be convenient to write the query using `day_of_month`.
Note 2: this may be the hardest query in the assignment. It is best to write the query in small pieces, solve one part at a time, and then combine the pieces into the final query.
Hint: you can compute it with two subqueries. One subquery computes the average arrival delay from the 24th on, the other computes the average arrival delay before the 24th, and joining the two lets you compute the difference in delays.
```
%%sql
```
Query 9: Progressive and revolutionary
------------------------
We want to go to Portland (PDX) and Eugene (EUG), but it's hard to do in one trip. To build up frequent-flyer miles, we want to fly to each city with the same airline. We want to know whether any airline flies both SFO -> PDX and SFO -> EUG.
In the cell below, write a single SQL query that prints the distinct names (no duplicates, and not the IDs) of the airlines that flew both SFO -> PDX and SFO -> EUG in July 2017.
```
%%sql
```
Query 10: Deciding between fatigue and equal distance
------------------------
We want to travel from Chicago to California. We would like to fly from Midway (MDW) or O'Hare (ORD) to San Francisco (SFO), San Jose (SJC), or Oakland (OAK). If this month were July, which route departing Chicago at 2pm local time has the shortest delay?
In the cell below, write a single query that computes the average delay of flights departing MDW or ORD at 2pm local time (`crs_dep_time`) and arriving at SFO, SJC, or OAK. Group by the departure and arrival airports and output the results in descending order of delay.
Note: the `crs_dep_time` field is an integer in hhmm format (e.g. 4:15pm is 1615).
```
%%sql
```
## All done. Now let's submit.
* See the instructions at the top for how to submit.
| github_jupyter |
```
import os
import urllib
from zipfile import ZipFile
import fileinput
import numpy as np
import gc
import urllib.request
if not os.path.exists('glove.840B.300d.txt'):
if not os.path.exists('glove.840B.300d.zip'):
print('downloading GloVe')
urllib.request.urlretrieve("http://nlp.stanford.edu/data/glove.840B.300d.zip", "glove.840B.300d.zip")
zip = ZipFile('glove.840B.300d.zip')
zip.extractall()
import torch
from torchtext import data
from torchtext import datasets
from torchtext.vocab import GloVe
import fileinput
import numpy as np
from cove import MTLSTM
inputs = data.Field(lower=True, include_lengths=True, batch_first=True)
answers = data.Field(sequential=False)
print('Generating train, dev, test splits')
train, dev, test = datasets.SNLI.splits(inputs, answers)
print('Building vocabulary')
inputs.build_vocab(train, dev, test)
g = GloVe(name='840B', dim=300)
gc.collect()
inputs.vocab.load_vectors(vectors=g)
gc.collect()
answers.build_vocab(train)
model = MTLSTM(n_vocab=len(inputs.vocab), vectors=inputs.vocab.vectors)
model.cuda(0)
train_iter, dev_iter, test_iter = data.BucketIterator.splits(
(train, dev, test), batch_size=100, device=0)
train_iter.init_epoch()
from keras.models import load_model
import tensorflow as tf
# To prevent Tensorflow from being greedy and allocating all GPU memory for itself
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
# Loading the saved Keras CoVe model
cove_model = load_model('Keras_CoVe.h5')
TOTAL_NUM_TEST_SENTENCE = 10000
print('Comparing Keras CoVe prediction with Pytorch CoVe')
abs_error_per_dim = 0
total_num_of_dim = 0
num_test_sentence = 0
model.train()
for batch_idx, batch in enumerate(train_iter):
if num_test_sentence > TOTAL_NUM_TEST_SENTENCE:
# It takes a long time to run through all examples hence restricting the test set
break
cove_premise = model(*batch.premise)
#cove_hypothesis = model(*batch.hypothesis)
sentence_sparse_vector = batch.premise[0].data.cpu().numpy()
for i in range(len(sentence_sparse_vector)):
sentence = sentence_sparse_vector[i]
sentence_glove = []
for word in sentence:
sentence_glove.append(inputs.vocab.vectors[word].numpy())
sentence_glove = np.expand_dims(np.array(sentence_glove),0)
if np.any(np.sum(sentence_glove,axis=2)==0):
break
keras_cove_sentence = cove_model.predict(sentence_glove)
keras_cove_sentence = np.squeeze(keras_cove_sentence,0)
pytorch_cove_sentence = cove_premise.data.cpu().numpy()[i]
abs_error_per_dim+=np.sum(np.abs(keras_cove_sentence - pytorch_cove_sentence))
total_num_of_dim+=np.prod(sentence_glove.shape)
num_test_sentence+=1
abs_error_per_dim/=total_num_of_dim
print('abs error per dim:'+str(abs_error_per_dim))
```
| github_jupyter |
```
from HARK.ConsumptionSaving.ConsLaborModel import (
LaborIntMargConsumerType,
init_labor_lifecycle,
)
import numpy as np
import matplotlib.pyplot as plt
from time import process_time
mystr = lambda number: "{:.4f}".format(number) # Format numbers as strings
do_simulation = True
# Make and solve a labor intensive margin consumer i.e. a consumer with utility for leisure
LaborIntMargExample = LaborIntMargConsumerType(verbose=0)
LaborIntMargExample.cycles = 0
t_start = process_time()
LaborIntMargExample.solve()
t_end = process_time()
print(
"Solving a labor intensive margin consumer took "
+ str(t_end - t_start)
+ " seconds."
)
t = 0
bMin_orig = 0.0
bMax = 100.0
# Plot the consumption function at various transitory productivity shocks
TranShkSet = LaborIntMargExample.TranShkGrid[t]
bMin = bMin_orig
B = np.linspace(bMin, bMax, 300)
bMin = bMin_orig
for Shk in TranShkSet:
B_temp = B + LaborIntMargExample.solution[t].bNrmMin(Shk)
C = LaborIntMargExample.solution[t].cFunc(B_temp, Shk * np.ones_like(B_temp))
plt.plot(B_temp, C)
bMin = np.minimum(bMin, B_temp[0])
plt.xlabel("Beginning of period bank balances")
plt.ylabel("Normalized consumption level")
plt.xlim(bMin, bMax - bMin_orig + bMin)
plt.ylim(0.0, None)
plt.show()
# Plot the marginal consumption function at various transitory productivity shocks
TranShkSet = LaborIntMargExample.TranShkGrid[t]
bMin = bMin_orig
B = np.linspace(bMin, bMax, 300)
for Shk in TranShkSet:
B_temp = B + LaborIntMargExample.solution[t].bNrmMin(Shk)
C = LaborIntMargExample.solution[t].cFunc.derivativeX(
B_temp, Shk * np.ones_like(B_temp)
)
plt.plot(B_temp, C)
bMin = np.minimum(bMin, B_temp[0])
plt.xlabel("Beginning of period bank balances")
plt.ylabel("Marginal propensity to consume")
plt.xlim(bMin, bMax - bMin_orig + bMin)
plt.ylim(0.0, 1.0)
plt.show()
# Plot the labor function at various transitory productivity shocks
TranShkSet = LaborIntMargExample.TranShkGrid[t]
bMin = bMin_orig
B = np.linspace(0.0, bMax, 300)
for Shk in TranShkSet:
B_temp = B + LaborIntMargExample.solution[t].bNrmMin(Shk)
Lbr = LaborIntMargExample.solution[t].LbrFunc(B_temp, Shk * np.ones_like(B_temp))
bMin = np.minimum(bMin, B_temp[0])
plt.plot(B_temp, Lbr)
plt.xlabel("Beginning of period bank balances")
plt.ylabel("Labor supply")
plt.xlim(bMin, bMax - bMin_orig + bMin)
plt.ylim(0.0, 1.0)
plt.show()
# Plot the marginal value function at various transitory productivity shocks
pseudo_inverse = True
TranShkSet = LaborIntMargExample.TranShkGrid[t]
bMin = bMin_orig
B = np.linspace(0.0, bMax, 300)
for Shk in TranShkSet:
B_temp = B + LaborIntMargExample.solution[t].bNrmMin(Shk)
if pseudo_inverse:
vP = LaborIntMargExample.solution[t].vPfunc.cFunc(
B_temp, Shk * np.ones_like(B_temp)
)
else:
vP = LaborIntMargExample.solution[t].vPfunc(B_temp, Shk * np.ones_like(B_temp))
bMin = np.minimum(bMin, B_temp[0])
plt.plot(B_temp, vP)
plt.xlabel("Beginning of period bank balances")
if pseudo_inverse:
plt.ylabel("Pseudo inverse marginal value")
else:
plt.ylabel("Marginal value")
plt.xlim(bMin, bMax - bMin_orig + bMin)
plt.ylim(0.0, None)
plt.show()
if do_simulation:
t_start = process_time()
LaborIntMargExample.T_sim = 120 # Set number of simulation periods
LaborIntMargExample.track_vars = ["bNrm", 'cNrm']
LaborIntMargExample.initialize_sim()
LaborIntMargExample.simulate()
t_end = process_time()
print(
"Simulating "
+ str(LaborIntMargExample.AgentCount)
+ " intensive-margin labor supply consumers for "
+ str(LaborIntMargExample.T_sim)
+ " periods took "
+ mystr(t_end - t_start)
+ " seconds."
)
N = LaborIntMargExample.AgentCount
CDF = np.linspace(0.0, 1, N)
plt.plot(np.sort(LaborIntMargExample.controls['cNrm']), CDF)
plt.xlabel(
"Consumption cNrm in " + str(LaborIntMargExample.T_sim) + "th simulated period"
)
plt.ylabel("Cumulative distribution")
plt.xlim(0.0, None)
plt.ylim(0.0, 1.0)
plt.show()
plt.plot(np.sort(LaborIntMargExample.controls['Lbr']), CDF)
plt.xlabel(
"Labor supply Lbr in " + str(LaborIntMargExample.T_sim) + "th simulated period"
)
plt.ylabel("Cumulative distribution")
plt.xlim(0.0, 1.0)
plt.ylim(0.0, 1.0)
plt.show()
plt.plot(np.sort(LaborIntMargExample.state_now['aNrm']), CDF)
plt.xlabel(
"End-of-period assets aNrm in "
+ str(LaborIntMargExample.T_sim)
+ "th simulated period"
)
plt.ylabel("Cumulative distribution")
plt.xlim(0.0, 20.0)
plt.ylim(0.0, 1.0)
plt.show()
# Make and solve a labor intensive margin consumer with a finite lifecycle
LifecycleExample = LaborIntMargConsumerType(**init_labor_lifecycle)
LifecycleExample.cycles = (
1 # Make this consumer live a sequence of periods exactly once
)
start_time = process_time()
LifecycleExample.solve()
end_time = process_time()
print(
"Solving a lifecycle labor intensive margin consumer took "
+ str(end_time - start_time)
+ " seconds."
)
LifecycleExample.unpack('cFunc')
bMax = 20.0
# Plot the consumption function in each period of the lifecycle, using median shock
B = np.linspace(0.0, bMax, 300)
b_min = np.inf
b_max = -np.inf
for t in range(LifecycleExample.T_cycle):
TranShkSet = LifecycleExample.TranShkGrid[t]
Shk = TranShkSet[int(len(TranShkSet) // 2)] # Use the median shock, more or less
B_temp = B + LifecycleExample.solution[t].bNrmMin(Shk)
C = LifecycleExample.solution[t].cFunc(B_temp, Shk * np.ones_like(B_temp))
plt.plot(B_temp, C)
b_min = np.minimum(b_min, B_temp[0])
b_max = np.maximum(b_min, B_temp[-1])
plt.title("Consumption function across periods of the lifecycle")
plt.xlabel("Beginning of period bank balances")
plt.ylabel("Normalized consumption level")
plt.xlim(b_min, b_max)
plt.ylim(0.0, None)
plt.show()
# Plot the marginal consumption function in each period of the lifecycle, using median shock
B = np.linspace(0.0, bMax, 300)
b_min = np.inf
b_max = -np.inf
for t in range(LifecycleExample.T_cycle):
TranShkSet = LifecycleExample.TranShkGrid[t]
Shk = TranShkSet[int(len(TranShkSet) // 2)] # Use the median shock, more or less
B_temp = B + LifecycleExample.solution[t].bNrmMin(Shk)
MPC = LifecycleExample.solution[t].cFunc.derivativeX(
B_temp, Shk * np.ones_like(B_temp)
)
plt.plot(B_temp, MPC)
b_min = np.minimum(b_min, B_temp[0])
b_max = np.maximum(b_min, B_temp[-1])
plt.title("Marginal consumption function across periods of the lifecycle")
plt.xlabel("Beginning of period bank balances")
plt.ylabel("Marginal propensity to consume")
plt.xlim(b_min, b_max)
plt.ylim(0.0, 1.0)
plt.show()
# Plot the labor supply function in each period of the lifecycle, using median shock
B = np.linspace(0.0, bMax, 300)
b_min = np.inf
b_max = -np.inf
for t in range(LifecycleExample.T_cycle):
TranShkSet = LifecycleExample.TranShkGrid[t]
Shk = TranShkSet[int(len(TranShkSet) // 2)] # Use the median shock, more or less
B_temp = B + LifecycleExample.solution[t].bNrmMin(Shk)
L = LifecycleExample.solution[t].LbrFunc(B_temp, Shk * np.ones_like(B_temp))
plt.plot(B_temp, L)
b_min = np.minimum(b_min, B_temp[0])
b_max = np.maximum(b_min, B_temp[-1])
plt.title("Labor supply function across periods of the lifecycle")
plt.xlabel("Beginning of period bank balances")
plt.ylabel("Labor supply")
plt.xlim(b_min, b_max)
plt.ylim(0.0, 1.01)
plt.show()
# Plot the marginal value function at various transitory productivity shocks
pseudo_inverse = True
TranShkSet = LifecycleExample.TranShkGrid[t]
B = np.linspace(0.0, bMax, 300)
b_min = np.inf
b_max = -np.inf
for t in range(LifecycleExample.T_cycle):
TranShkSet = LifecycleExample.TranShkGrid[t]
Shk = TranShkSet[int(len(TranShkSet) / 2)] # Use the median shock, more or less
B_temp = B + LifecycleExample.solution[t].bNrmMin(Shk)
if pseudo_inverse:
vP = LifecycleExample.solution[t].vPfunc.cFunc(
B_temp, Shk * np.ones_like(B_temp)
)
else:
vP = LifecycleExample.solution[t].vPfunc(B_temp, Shk * np.ones_like(B_temp))
plt.plot(B_temp, vP)
b_min = np.minimum(b_min, B_temp[0])
b_max = np.maximum(b_min, B_temp[-1])
plt.xlabel("Beginning of period bank balances")
if pseudo_inverse:
plt.ylabel("Pseudo inverse marginal value")
else:
plt.ylabel("Marginal value")
plt.title("Marginal value across periods of the lifecycle")
plt.xlim(b_min, b_max)
plt.ylim(0.0, None)
plt.show()
```
| github_jupyter |
# HistGradientBoostingClassifier with MaxAbsScaler
This code template is for classification analysis using a HistGradientBoostingClassifier and the feature rescaling technique called MaxAbsScaler.
### Required Packages
```
import warnings
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as se
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from sklearn.experimental import enable_hist_gradient_boosting
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.metrics import classification_report,plot_confusion_matrix
from sklearn.preprocessing import MaxAbsScaler
warnings.filterwarnings('ignore')
```
### Initialization
Filepath of CSV file
```
#filepath
file_path=""
```
List of features which are required for model training .
```
#x_values
features = []
```
Target feature for prediction.
```
#y_value
target=''
```
### Data Fetching
Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.
We will use the pandas library to read the CSV file from its storage path, and we use the head function to display the first few rows.
```
df=pd.read_csv(file_path)
df.head()
```
### Feature Selections
Feature selection is the process of reducing the number of input variables when developing a predictive model. It is used both to reduce the computational cost of modelling and, in some cases, to improve the performance of the model.
We will assign all the required input features to X and target/outcome to Y.
```
X = df[features]
Y = df[target]
```
### Data Preprocessing
Since the majority of the machine learning models in the Sklearn library don't handle string categorical data and null values, we have to explicitly remove or replace them. The snippet below has functions that remove null values if any exist and convert string class data in the dataset by encoding it into integer classes.
```
def NullClearner(df):
if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
df.fillna(df.mean(),inplace=True)
return df
elif(isinstance(df, pd.Series)):
df.fillna(df.mode()[0],inplace=True)
return df
else:return df
def EncodeX(df):
return pd.get_dummies(df)
def EncodeY(df):
if len(df.unique())<=2:
return df
else:
un_EncodedT=np.sort(pd.unique(df), axis=-1, kind='mergesort')
df=LabelEncoder().fit_transform(df)
EncodedT=[xi for xi in range(len(un_EncodedT))]
print("Encoded Target: {} to {}".format(un_EncodedT,EncodedT))
return df
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=EncodeY(NullClearner(Y))
X.head()
```
#### Correlation Map
In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
```
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
```
#### Distribution Of Target Variable
```
plt.figure(figsize = (10,6))
se.countplot(Y)
```
### Data Splitting
The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
```
x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123)
```
### Data Rescaling
sklearn.preprocessing.MaxAbsScaler is used
Scale each feature by its maximum absolute value.
Read more at [scikit-learn.org](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MaxAbsScaler.html)
```
Scaler=MaxAbsScaler()
x_train=Scaler.fit_transform(x_train)
x_test=Scaler.transform(x_test)
```
### Model
Histogram-based Gradient Boosting Classification Tree. This estimator is much faster than GradientBoostingClassifier for big datasets (n_samples >= 10 000). This estimator has native support for missing values (NaNs).
[Reference](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.HistGradientBoostingClassifier.html#sklearn.ensemble.HistGradientBoostingClassifier)
> **loss**: The loss function to use in the boosting process. ‘binary_crossentropy’ (also known as logistic loss) is used for binary classification and generalizes to ‘categorical_crossentropy’ for multiclass classification. ‘auto’ will automatically choose either loss depending on the nature of the problem.
> **learning_rate**: The learning rate, also known as shrinkage. This is used as a multiplicative factor for the leaves values. Use 1 for no shrinkage.
> **max_iter**: The maximum number of iterations of the boosting process, i.e. the maximum number of trees.
> **max_depth**: The maximum depth of each tree. The depth of a tree is the number of edges to go from the root to the deepest leaf. Depth isn’t constrained by default.
> **l2_regularization**: The L2 regularization parameter. Use 0 for no regularization (default).
> **early_stopping**: If ‘auto’, early stopping is enabled if the sample size is larger than 10000. If True, early stopping is enabled, otherwise early stopping is disabled.
> **n_iter_no_change**: Used to determine when to “early stop”. The fitting process is stopped when none of the last n_iter_no_change scores are better than the n_iter_no_change - 1 -th-to-last one, up to some tolerance. Only used if early stopping is performed.
> **tol**: The absolute tolerance to use when comparing scores during early stopping. The higher the tolerance, the more likely we are to early stop: higher tolerance means that it will be harder for subsequent iterations to be considered an improvement upon the reference score.
> **scoring**: Scoring parameter to use for early stopping.
```
model = HistGradientBoostingClassifier(random_state = 123)
model.fit(x_train, y_train)
```
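If you want to set some of the hyperparameters described above explicitly, a minimal sketch could look like the following (the values shown are the library defaults, used only for illustration and not tuned for this dataset):
```
model = HistGradientBoostingClassifier(
    learning_rate=0.1,       # shrinkage applied to the leaf values
    max_iter=100,            # maximum number of boosting iterations (trees)
    max_depth=None,          # tree depth is unconstrained by default
    l2_regularization=0.0,   # 0 means no L2 regularization
    early_stopping='auto',   # enabled automatically for large sample sizes
    random_state=123)
model.fit(x_train, y_train)
```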
#### Model Accuracy
score() method return the mean accuracy on the given test data and labels.
In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.
```
print("Accuracy score {:.2f} %\n".format(model.score(x_test,y_test)*100))
```
#### Confusion Matrix
A confusion matrix is utilized to understand the performance of the classification model or algorithm in machine learning for a given test set where results are known.
```
plot_confusion_matrix(model,x_test,y_test,cmap=plt.cm.Blues)
```
#### Classification Report
A Classification report is used to measure the quality of predictions from a classification algorithm: how many predictions are true and how many are false.
* **where**:
 - Precision: accuracy of the positive predictions.
 - Recall: fraction of positives that were correctly identified.
 - f1-score: harmonic mean of precision and recall.
 - support: the number of actual occurrences of the class in the specified dataset.
```
print(classification_report(y_test,model.predict(x_test)))
```
#### Creator: Snehaan Bhawal, Github: [Profile](https://github.com/Sbhawal)
| github_jupyter |
# Guided Practice: `GridSearchCV` Demonstration
Let's use the iris dataset... which we already know well.
We will see how to use `GridSearchCV` to optimize the hyperparameter `k` of the nearest neighbors algorithm.
[Here](http://rcs.chemometrics.ru/Tutorials/classification/Fisher.pdf) is a link to the paper by Ronald Fisher, who used this dataset in 1936.
```
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score, train_test_split
import matplotlib.pyplot as plt
%matplotlib inline
df = load_iris()
X = df.data
y = df.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.20, random_state=98)
len(X_train), len(X_test), len(y_train), len(y_test)
```
## 1. Writing the parameters by hand
Of course, depending on the model, the hyperparameters can have a considerable effect on the quality of the prediction.
Let's see how the accuracy varies when predicting the flower species for different values of K.
```
k_range = list(range(1, 100))
k_scores = []
for k in k_range:
knn = KNeighborsClassifier(n_neighbors=k)
scores = cross_val_score(knn, X_train, y_train, cv=10, scoring='accuracy')
k_scores.append(scores.mean())
k_scores
plt.plot(k_range, k_scores)
plt.xlabel('Value of K for KNN')
plt.ylabel('Cross-Validated Accuracy');
```
As always, we observe that performance changes for different values of the hyperparameter. <br />
How can we systematize this search and add more hyperparameters to the exploration?
## 2. Using `GridSearch`
```
from sklearn.model_selection import GridSearchCV
```
A list of parameters to be tested is defined.
```
k_range = list(range(1, 31))
knn = KNeighborsClassifier()
range(1, 31)
param_grid = dict(n_neighbors=range(1, 31))
print(param_grid)
```
Instantiate `GridSearchCV`
```
grid = GridSearchCV(knn, param_grid, cv=10, scoring='accuracy', n_jobs=-1)
```
Do the fit
```
grid.fit(X_train, y_train)
```
`GridSearchCV` returns a dict with a lot of information, from the setting of each parameter to the mean scores (via cross-validation). It also provides the scores on each train and test split of the K-Fold cross-validation.
```
grid.cv_results_.keys()
pd.DataFrame(grid.cv_results_).columns
pd.DataFrame(grid.cv_results_)
```
Let's look at the best model:
```
grid.best_params_
grid.best_estimator_, grid.best_score_, grid.best_params_
```
### 2.1 Adding other parameters to tune
Let's add the binary weight parameter of the knn algorithm, which determines whether some neighbors carry more weight than others at classification time. The value 'distance' means the weight is inversely proportional to the distance.
GridSearchCV requires the grid of parameters to be checked to come as a dictionary with the parameter names and the list of possible values.
Note that GridSearchCV has all the methods the sklearn API offers for predictive models: fit, predict, predict_proba, etc.
```
k_range = list(range(1, 31))
weight_options = ['uniform', 'distance']
```
Now the optimization will be done by iterating over and alternating `weights` and `k` (the number of nearest neighbors).
```
param_grid = dict(n_neighbors=k_range, weights=weight_options)
print(param_grid)
```
**Check:**
1. How will the search process be carried out?
2. How many times will the algorithm have to be run?
Fit the models
```
grid = GridSearchCV(knn, param_grid, cv=10, scoring='accuracy')
grid.fit(X_train, y_train)
pd.DataFrame(grid.cv_results_)
```
Choose the best model
```
print (grid.best_estimator_)
print(grid.best_score_)
print(grid.best_params_)
```
## 3. Use the best models to run predictions
```
knn = KNeighborsClassifier(n_neighbors=8, weights='uniform')
knn.fit(X_train, y_train)
y_pred = knn.predict(X_test)
from sklearn.metrics import classification_report, confusion_matrix
import seaborn as sns
print (classification_report(y_test, y_pred))
sns.heatmap(confusion_matrix(y_test, y_pred),annot=True)
```
We can use the shortcut that `GridSearchCV` provides: calling the `predict` method on the `grid` object.
```
grid.predict(X_test)
```
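As noted above, the fitted `grid` object also exposes the rest of the estimator API. Since `KNeighborsClassifier` provides probability estimates, here is a quick sketch using `predict_proba` on the refitted best estimator:
```
# class membership probabilities for the first five test samples
grid.predict_proba(X_test)[:5]
```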
| github_jupyter |
# **Deep-STORM (2D)**
---
<font size = 4>Deep-STORM is a neural network capable of image reconstruction from high-density single-molecule localization microscopy (SMLM) data, first published in 2018 by [Nehme *et al.* in Optica](https://www.osapublishing.org/optica/abstract.cfm?uri=optica-5-4-458). The architecture used here is a U-Net based network without skip connections. This network allows reconstruction of 2D super-resolution images in a supervised training manner. The network is trained using simulated high-density SMLM data for which the ground-truth is available. These simulations are obtained from random distributions of single molecules in a field-of-view and therefore do not imprint structural priors during training. The network outputs a super-resolution image with increased pixel density (typically an upsampling factor of 8 in each dimension).
Deep-STORM has **two key advantages**:
- SMLM reconstruction at high density of emitters
- fast prediction (reconstruction) once the model is trained appropriately, compared to more common multi-emitter fitting processes.
---
<font size = 4>*Disclaimer*:
<font size = 4>This notebook is part of the *Zero-Cost Deep-Learning to Enhance Microscopy* project (https://github.com/HenriquesLab/DeepLearning_Collab/wiki). Jointly developed by the Jacquemet (link to https://cellmig.org/) and Henriques (https://henriqueslab.github.io/) laboratories.
<font size = 4>This notebook is based on the following paper:
<font size = 4>**Deep-STORM: super-resolution single-molecule microscopy by deep learning**, Optica (2018) by *Elias Nehme, Lucien E. Weiss, Tomer Michaeli, and Yoav Shechtman* (https://www.osapublishing.org/optica/abstract.cfm?uri=optica-5-4-458)
<font size = 4>And source code found in: https://github.com/EliasNehme/Deep-STORM
<font size = 4>**Please also cite this original paper when using or developing this notebook.**
# **How to use this notebook?**
---
<font size = 4>Videos describing how to use our notebooks are available on YouTube:
- [**Video 1**](https://www.youtube.com/watch?v=GzD2gamVNHI&feature=youtu.be): Full run through of the workflow to obtain the notebooks and the provided test datasets as well as a common use of the notebook
- [**Video 2**](https://www.youtube.com/watch?v=PUuQfP5SsqM&feature=youtu.be): Detailed description of the different sections of the notebook
---
###**Structure of a notebook**
<font size = 4>The notebook contains two types of cell:
<font size = 4>**Text cells** provide information and can be modified by double-clicking the cell. You are currently reading a text cell. You can create a new text cell by clicking `+ Text`.
<font size = 4>**Code cells** contain code that can be modified by selecting the cell. To execute the cell, move your cursor over the `[ ]`-mark on the left side of the cell (a play button appears). Click to execute the cell. After execution is done, the animation of the play button stops. You can create a new code cell by clicking `+ Code`.
---
###**Table of contents, Code snippets** and **Files**
<font size = 4>On the top left side of the notebook you find three tabs which contain from top to bottom:
<font size = 4>*Table of contents* = contains structure of the notebook. Click the content to move quickly between sections.
<font size = 4>*Code snippets* = contain examples how to code certain tasks. You can ignore this when using this notebook.
<font size = 4>*Files* = contain all available files. After mounting your google drive (see section 1.) you will find your files and folders here.
<font size = 4>**Remember that all uploaded files are purged after changing the runtime.** All files saved in Google Drive will remain. You do not need to use the Mount Drive-button; your Google Drive is connected in section 1.2.
<font size = 4>**Note:** The "sample data" in "Files" contains default files. Do not upload anything in here!
---
###**Making changes to the notebook**
<font size = 4>**You can make a copy** of the notebook and save it to your Google Drive. To do this click file -> save a copy in drive.
<font size = 4>To **edit a cell**, double click on the text. This will show you either the source code (in code cells) or the source text (in text cells).
You can use the `#`-mark in code cells to comment out parts of the code. This allows you to keep the original code piece in the cell as a comment.
#**0. Before getting started**
---
<font size = 4> Deep-STORM is able to train on simulated SMLM datasets (see https://www.osapublishing.org/optica/abstract.cfm?uri=optica-5-4-458 for more info). Here, we provide a simulator that will generate the training dataset (section 3.1.b). A few parameters will allow you to match the simulation to your experimental data. Similarly to what is described in the paper, simulations obtained from ThunderSTORM can also be loaded here (section 3.1.a).
---
<font size = 4>**Important note**
<font size = 4>- If you wish to **Train a network from scratch** using your own dataset (and we encourage everyone to do that), you will need to run **sections 1 - 4**, then use **section 5** to assess the quality of your model and **section 6** to run predictions using the model that you trained.
<font size = 4>- If you wish to **Evaluate your model** using a model previously generated and saved on your Google Drive, you will only need to run **sections 1 and 2** to set up the notebook, then use **section 5** to assess the quality of your model.
<font size = 4>- If you only wish to **run predictions** using a model previously generated and saved on your Google Drive, you will only need to run **sections 1 and 2** to set up the notebook, then use **section 6** to run the predictions on the desired model.
---
# **1. Install Deep-STORM and dependencies**
---
```
Notebook_version = '1.13'
Network = 'Deep-STORM'
from builtins import any as b_any
def get_requirements_path():
# Store requirements file in 'contents' directory
current_dir = os.getcwd()
dir_count = current_dir.count('/') - 1
path = '../' * (dir_count) + 'requirements.txt'
return path
def filter_files(file_list, filter_list):
filtered_list = []
for fname in file_list:
if b_any(fname.split('==')[0] in s for s in filter_list):
filtered_list.append(fname)
return filtered_list
def build_requirements_file(before, after):
path = get_requirements_path()
# Exporting requirements.txt for local run
!pip freeze > $path
# Get minimum requirements file
df = pd.read_csv(path, delimiter = "\n")
mod_list = [m.split('.')[0] for m in after if not m in before]
req_list_temp = df.values.tolist()
req_list = [x[0] for x in req_list_temp]
# Replace with package name and handle cases where import name is different to module name
mod_name_list = [['sklearn', 'scikit-learn'], ['skimage', 'scikit-image']]
mod_replace_list = [[x[1] for x in mod_name_list] if s in [x[0] for x in mod_name_list] else s for s in mod_list]
filtered_list = filter_files(req_list, mod_replace_list)
file=open(path,'w')
for item in filtered_list:
file.writelines(item + '\n')
file.close()
import sys
before = [str(m) for m in sys.modules]
#@markdown ##Install Deep-STORM and dependencies
# %% Model definition + helper functions
!pip install fpdf
# Import keras modules and libraries
from tensorflow import keras
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Activation, UpSampling2D, Convolution2D, MaxPooling2D, BatchNormalization, Layer
from tensorflow.keras.callbacks import Callback
from tensorflow.keras import backend as K
from tensorflow.keras import optimizers, losses
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.callbacks import ReduceLROnPlateau
from skimage.transform import warp
from skimage.transform import SimilarityTransform
from skimage.metrics import structural_similarity
from skimage.metrics import peak_signal_noise_ratio as psnr
from scipy.signal import fftconvolve
# Import common libraries
import tensorflow as tf
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import h5py
import scipy.io as sio
from os.path import abspath
from sklearn.model_selection import train_test_split
from skimage import io
import time
import os
import shutil
import csv
from PIL import Image
from PIL.TiffTags import TAGS
from scipy.ndimage import gaussian_filter
import math
from astropy.visualization import simple_norm
from sys import getsizeof
from fpdf import FPDF, HTMLMixin
from pip._internal.operations.freeze import freeze
import subprocess
from datetime import datetime
# For sliders and dropdown menu, progress bar
from ipywidgets import interact
import ipywidgets as widgets
from tqdm import tqdm
# For Multi-threading in simulation
from numba import njit, prange
# define a function that projects and rescales an image to the range [0,1]
def project_01(im):
im = np.squeeze(im)
min_val = im.min()
max_val = im.max()
return (im - min_val)/(max_val - min_val)
# normalize image given mean and std
def normalize_im(im, dmean, dstd):
im = np.squeeze(im)
im_norm = np.zeros(im.shape,dtype=np.float32)
im_norm = (im - dmean)/dstd
return im_norm
# Define the loss history recorder
class LossHistory(Callback):
def on_train_begin(self, logs={}):
self.losses = []
def on_batch_end(self, batch, logs={}):
self.losses.append(logs.get('loss'))
# Define a matlab like gaussian 2D filter
def matlab_style_gauss2D(shape=(7,7),sigma=1):
"""
2D gaussian filter - should give the same result as:
MATLAB's fspecial('gaussian',[shape],[sigma])
"""
m,n = [(ss-1.)/2. for ss in shape]
y,x = np.ogrid[-m:m+1,-n:n+1]
h = np.exp( -(x*x + y*y) / (2.*sigma*sigma) )
h.astype(dtype=K.floatx())
h[ h < np.finfo(h.dtype).eps*h.max() ] = 0
sumh = h.sum()
if sumh != 0:
h /= sumh
h = h*2.0
h = h.astype('float32')
return h
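# Note: with the default arguments, matlab_style_gauss2D((7,7), sigma=1) returns a 7x7
# float32 kernel whose entries sum to 2.0 because of the h*2.0 scaling above; it is
# reshaped below into `gfilter` and used to blur the predicted spikes inside the L1L2 loss.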
# Expand the filter dimensions
psf_heatmap = matlab_style_gauss2D(shape = (7,7),sigma=1)
gfilter = tf.reshape(psf_heatmap, [7, 7, 1, 1])
# Combined MSE + L1 loss
def L1L2loss(input_shape):
def bump_mse(heatmap_true, spikes_pred):
# generate the heatmap corresponding to the predicted spikes
heatmap_pred = K.conv2d(spikes_pred, gfilter, strides=(1, 1), padding='same')
# heatmaps MSE
loss_heatmaps = losses.mean_squared_error(heatmap_true,heatmap_pred)
# l1 on the predicted spikes
loss_spikes = losses.mean_absolute_error(spikes_pred,tf.zeros(input_shape))
return loss_heatmaps + loss_spikes
return bump_mse
# Define the concatenated conv2, batch normalization, and relu block
def conv_bn_relu(nb_filter, rk, ck, name):
def f(input):
conv = Convolution2D(nb_filter, kernel_size=(rk, ck), strides=(1,1),\
padding="same", use_bias=False,\
kernel_initializer="Orthogonal",name='conv-'+name)(input)
conv_norm = BatchNormalization(name='BN-'+name)(conv)
conv_norm_relu = Activation(activation = "relu",name='Relu-'+name)(conv_norm)
return conv_norm_relu
return f
# Define the model architechture
def CNN(input,names):
Features1 = conv_bn_relu(32,3,3,names+'F1')(input)
pool1 = MaxPooling2D(pool_size=(2,2),name=names+'Pool1')(Features1)
Features2 = conv_bn_relu(64,3,3,names+'F2')(pool1)
pool2 = MaxPooling2D(pool_size=(2, 2),name=names+'Pool2')(Features2)
Features3 = conv_bn_relu(128,3,3,names+'F3')(pool2)
pool3 = MaxPooling2D(pool_size=(2, 2),name=names+'Pool3')(Features3)
Features4 = conv_bn_relu(512,3,3,names+'F4')(pool3)
up5 = UpSampling2D(size=(2, 2),name=names+'Upsample1')(Features4)
Features5 = conv_bn_relu(128,3,3,names+'F5')(up5)
up6 = UpSampling2D(size=(2, 2),name=names+'Upsample2')(Features5)
Features6 = conv_bn_relu(64,3,3,names+'F6')(up6)
up7 = UpSampling2D(size=(2, 2),name=names+'Upsample3')(Features6)
Features7 = conv_bn_relu(32,3,3,names+'F7')(up7)
return Features7
# Define the Model building for an arbitrary input size
def buildModel(input_dim, initial_learning_rate = 0.001):
input_ = Input (shape = (input_dim))
act_ = CNN (input_,'CNN')
density_pred = Convolution2D(1, kernel_size=(1, 1), strides=(1, 1), padding="same",\
activation="linear", use_bias = False,\
kernel_initializer="Orthogonal",name='Prediction')(act_)
model = Model (inputs= input_, outputs=density_pred)
opt = optimizers.Adam(lr = initial_learning_rate)
model.compile(optimizer=opt, loss = L1L2loss(input_dim))
return model
# define a function that trains a model for a given data SNR and density
def train_model(patches, heatmaps, modelPath, epochs, steps_per_epoch, batch_size, upsampling_factor=8, validation_split = 0.3, initial_learning_rate = 0.001, pretrained_model_path = '', L2_weighting_factor = 100):
"""
This function trains a CNN model on the desired training set, given the
upsampled training images and labels generated in MATLAB.
# Inputs
# TO UPDATE ----------
# Outputs
function saves the weights of the trained model to a hdf5, and the
normalization factors to a mat file. These will be loaded later for testing
the model in test_model.
"""
# for reproducibility
np.random.seed(123)
X_train, X_test, y_train, y_test = train_test_split(patches, heatmaps, test_size = validation_split, random_state=42)
print('Number of training examples: %d' % X_train.shape[0])
print('Number of validation examples: %d' % X_test.shape[0])
# Setting type
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
y_train = y_train.astype('float32')
y_test = y_test.astype('float32')
#===================== Training set normalization ==========================
# normalize training images to be in the range [0,1] and calculate the
# training set mean and std
mean_train = np.zeros(X_train.shape[0],dtype=np.float32)
std_train = np.zeros(X_train.shape[0], dtype=np.float32)
for i in range(X_train.shape[0]):
X_train[i, :, :] = project_01(X_train[i, :, :])
mean_train[i] = X_train[i, :, :].mean()
std_train[i] = X_train[i, :, :].std()
# resulting normalized training images
mean_val_train = mean_train.mean()
std_val_train = std_train.mean()
X_train_norm = np.zeros(X_train.shape, dtype=np.float32)
for i in range(X_train.shape[0]):
X_train_norm[i, :, :] = normalize_im(X_train[i, :, :], mean_val_train, std_val_train)
# patch size
psize = X_train_norm.shape[1]
# Reshaping
X_train_norm = X_train_norm.reshape(X_train.shape[0], psize, psize, 1)
# ===================== Test set normalization ==========================
# normalize test images to be in the range [0,1] and calculate the test set
# mean and std
mean_test = np.zeros(X_test.shape[0],dtype=np.float32)
std_test = np.zeros(X_test.shape[0], dtype=np.float32)
for i in range(X_test.shape[0]):
X_test[i, :, :] = project_01(X_test[i, :, :])
mean_test[i] = X_test[i, :, :].mean()
std_test[i] = X_test[i, :, :].std()
# resulting normalized test images
mean_val_test = mean_test.mean()
std_val_test = std_test.mean()
X_test_norm = np.zeros(X_test.shape, dtype=np.float32)
for i in range(X_test.shape[0]):
X_test_norm[i, :, :] = normalize_im(X_test[i, :, :], mean_val_test, std_val_test)
# Reshaping
X_test_norm = X_test_norm.reshape(X_test.shape[0], psize, psize, 1)
# Reshaping labels
Y_train = y_train.reshape(y_train.shape[0], psize, psize, 1)
Y_test = y_test.reshape(y_test.shape[0], psize, psize, 1)
# Save datasets to a matfile to open later in matlab
mdict = {"mean_test": mean_val_test, "std_test": std_val_test, "upsampling_factor": upsampling_factor, "Normalization factor": L2_weighting_factor}
sio.savemat(os.path.join(modelPath,"model_metadata.mat"), mdict)
# Set the dimensions ordering according to tensorflow consensous
# K.set_image_dim_ordering('tf')
K.set_image_data_format('channels_last')
# Save the model weights after each epoch if the validation loss decreased
checkpointer = ModelCheckpoint(filepath=os.path.join(modelPath,"weights_best.hdf5"), verbose=1,
save_best_only=True)
# Change learning when loss reaches a plataeu
change_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=5, min_lr=0.00005)
# Model building and compilation
model = buildModel((psize, psize, 1), initial_learning_rate = initial_learning_rate)
model.summary()
# Load pretrained model
if not pretrained_model_path:
print('Using random initial model weights.')
else:
print('Loading model weights from '+pretrained_model_path)
model.load_weights(pretrained_model_path)
# Create an image data generator for real time data augmentation
datagen = ImageDataGenerator(
featurewise_center=False, # set input mean to 0 over the dataset
samplewise_center=False, # set each sample mean to 0
featurewise_std_normalization=False, # divide inputs by std of the dataset
samplewise_std_normalization=False, # divide each input by its std
zca_whitening=False, # apply ZCA whitening
rotation_range=0., # randomly rotate images in the range (degrees, 0 to 180)
width_shift_range=0., # randomly shift images horizontally (fraction of total width)
height_shift_range=0., # randomly shift images vertically (fraction of total height)
zoom_range=0.,
shear_range=0.,
horizontal_flip=False, # randomly flip images
vertical_flip=False, # randomly flip images
fill_mode='constant',
data_format=K.image_data_format())
# Fit the image generator on the training data
datagen.fit(X_train_norm)
# loss history recorder
history = LossHistory()
# Inform user training begun
print('-------------------------------')
print('Training model...')
# Fit model on the batches generated by datagen.flow()
train_history = model.fit_generator(datagen.flow(X_train_norm, Y_train, batch_size=batch_size),
steps_per_epoch=steps_per_epoch, epochs=epochs, verbose=1,
validation_data=(X_test_norm, Y_test),
callbacks=[history, checkpointer, change_lr])
# Inform user training ended
print('-------------------------------')
print('Training Complete!')
# Save the last model
model.save(os.path.join(modelPath, 'weights_last.hdf5'))
# convert the history.history dict to a pandas DataFrame:
lossData = pd.DataFrame(train_history.history)
if os.path.exists(os.path.join(modelPath,"Quality Control")):
shutil.rmtree(os.path.join(modelPath,"Quality Control"))
os.makedirs(os.path.join(modelPath,"Quality Control"))
# The training evaluation.csv is saved (overwrites the Files if needed).
lossDataCSVpath = os.path.join(modelPath,"Quality Control/training_evaluation.csv")
with open(lossDataCSVpath, 'w') as f:
writer = csv.writer(f)
writer.writerow(['loss','val_loss','learning rate'])
for i in range(len(train_history.history['loss'])):
writer.writerow([train_history.history['loss'][i], train_history.history['val_loss'][i], train_history.history['lr'][i]])
return
# Normalization functions from Martin Weigert used in CARE
def normalize(x, pmin=3, pmax=99.8, axis=None, clip=False, eps=1e-20, dtype=np.float32):
"""This function is adapted from Martin Weigert"""
"""Percentile-based image normalization."""
mi = np.percentile(x,pmin,axis=axis,keepdims=True)
ma = np.percentile(x,pmax,axis=axis,keepdims=True)
return normalize_mi_ma(x, mi, ma, clip=clip, eps=eps, dtype=dtype)
def normalize_mi_ma(x, mi, ma, clip=False, eps=1e-20, dtype=np.float32):#dtype=np.float32
"""This function is adapted from Martin Weigert"""
if dtype is not None:
x = x.astype(dtype,copy=False)
mi = dtype(mi) if np.isscalar(mi) else mi.astype(dtype,copy=False)
ma = dtype(ma) if np.isscalar(ma) else ma.astype(dtype,copy=False)
eps = dtype(eps)
try:
import numexpr
x = numexpr.evaluate("(x - mi) / ( ma - mi + eps )")
except ImportError:
x = (x - mi) / ( ma - mi + eps )
if clip:
x = np.clip(x,0,1)
return x
def norm_minmse(gt, x, normalize_gt=True):
"""This function is adapted from Martin Weigert"""
"""
normalizes and affinely scales an image pair such that the MSE is minimized
Parameters
----------
gt: ndarray
the ground truth image
x: ndarray
the image that will be affinely scaled
normalize_gt: bool
set to True of gt image should be normalized (default)
Returns
-------
gt_scaled, x_scaled
"""
if normalize_gt:
gt = normalize(gt, 0.1, 99.9, clip=False).astype(np.float32, copy = False)
x = x.astype(np.float32, copy=False) - np.mean(x)
#x = x - np.mean(x)
gt = gt.astype(np.float32, copy=False) - np.mean(gt)
#gt = gt - np.mean(gt)
scale = np.cov(x.flatten(), gt.flatten())[0, 1] / np.var(x.flatten())
return gt, scale * x
# Multi-threaded Erf-based image construction
@njit(parallel=True)
def FromLoc2Image_Erf(xc_array, yc_array, photon_array, sigma_array, image_size = (64,64), pixel_size = 100):
w = image_size[0]
h = image_size[1]
erfImage = np.zeros((w, h))
for ij in prange(w*h):
j = int(ij/w)
i = ij - j*w
for (xc, yc, photon, sigma) in zip(xc_array, yc_array, photon_array, sigma_array):
# Don't bother if the emitter has photons <= 0 or if Sigma <= 0
if (sigma > 0) and (photon > 0):
S = sigma*math.sqrt(2)
x = i*pixel_size - xc
y = j*pixel_size - yc
# Don't bother if the emitter is further than 4 sigma from the centre of the pixel
if (x+pixel_size/2)**2 + (y+pixel_size/2)**2 < 16*sigma**2:
ErfX = math.erf((x+pixel_size)/S) - math.erf(x/S)
ErfY = math.erf((y+pixel_size)/S) - math.erf(y/S)
erfImage[j][i] += 0.25*photon*ErfX*ErfY
return erfImage
@njit(parallel=True)
def FromLoc2Image_SimpleHistogram(xc_array, yc_array, image_size = (64,64), pixel_size = 100):
w = image_size[0]
h = image_size[1]
locImage = np.zeros((image_size[0],image_size[1]) )
n_locs = len(xc_array)
for e in prange(n_locs):
locImage[int(max(min(round(yc_array[e]/pixel_size),w-1),0))][int(max(min(round(xc_array[e]/pixel_size),h-1),0))] += 1
return locImage
def getPixelSizeTIFFmetadata(TIFFpath, display=False):
with Image.open(TIFFpath) as img:
meta_dict = {TAGS[key] : img.tag[key] for key in img.tag.keys()}
# TIFF tags
# https://www.loc.gov/preservation/digital/formats/content/tiff_tags.shtml
# https://www.awaresystems.be/imaging/tiff/tifftags/resolutionunit.html
ResolutionUnit = meta_dict['ResolutionUnit'][0] # unit of resolution
width = meta_dict['ImageWidth'][0]
height = meta_dict['ImageLength'][0]
xResolution = meta_dict['XResolution'][0] # number of pixels / ResolutionUnit
if len(xResolution) == 1:
xResolution = xResolution[0]
elif len(xResolution) == 2:
xResolution = xResolution[0]/xResolution[1]
else:
print('Image resolution not defined.')
xResolution = 1
if ResolutionUnit == 2:
# Units given are in inches (1 inch = 0.0254 m)
pixel_size = 0.0254*1e9/xResolution
elif ResolutionUnit == 3:
# Units given are in cm
pixel_size = 0.01*1e9/xResolution
else:
# ResolutionUnit is therefore 1
print('Resolution unit not defined. Assuming: um')
pixel_size = 1e3/xResolution
if display:
print('Pixel size obtained from metadata: '+str(pixel_size)+' nm')
print('Image size: '+str(width)+'x'+str(height))
return (pixel_size, width, height)
def saveAsTIF(path, filename, array, pixel_size):
"""
Image saving using PIL to save as .tif format
# Input
path - path where it will be saved
filename - name of the file to save (no extension)
array - numpy array containing the data in the required format
pixel_size - physical size of pixels in nanometers (identical for x and y)
"""
# print('Data type: '+str(array.dtype))
if (array.dtype == np.uint16):
mode = 'I;16'
elif (array.dtype == np.uint32):
mode = 'I'
else:
mode = 'F'
# Rounding the pixel size to the nearest number that divides exactly 1cm.
# Resolution needs to be a rational number --> see TIFF format
# pixel_size = 10000/(round(10000/pixel_size))
if len(array.shape) == 2:
im = Image.fromarray(array)
im.save(os.path.join(path, filename+'.tif'),
mode = mode,
resolution_unit = 3,
resolution = 0.01*1e9/pixel_size)
elif len(array.shape) == 3:
imlist = []
for frame in array:
imlist.append(Image.fromarray(frame))
imlist[0].save(os.path.join(path, filename+'.tif'), save_all=True,
append_images=imlist[1:],
mode = mode,
resolution_unit = 3,
resolution = 0.01*1e9/pixel_size)
return
class Maximafinder(Layer):
def __init__(self, thresh, neighborhood_size, use_local_avg, **kwargs):
super(Maximafinder, self).__init__(**kwargs)
self.thresh = tf.constant(thresh, dtype=tf.float32)
self.nhood = neighborhood_size
self.use_local_avg = use_local_avg
def build(self, input_shape):
if self.use_local_avg is True:
self.kernel_x = tf.reshape(tf.constant([[-1,0,1],[-1,0,1],[-1,0,1]], dtype=tf.float32), [3, 3, 1, 1])
self.kernel_y = tf.reshape(tf.constant([[-1,-1,-1],[0,0,0],[1,1,1]], dtype=tf.float32), [3, 3, 1, 1])
self.kernel_sum = tf.reshape(tf.constant([[1,1,1],[1,1,1],[1,1,1]], dtype=tf.float32), [3, 3, 1, 1])
def call(self, inputs):
# local maxima positions
max_pool_image = MaxPooling2D(pool_size=(self.nhood,self.nhood), strides=(1,1), padding='same')(inputs)
cond = tf.math.greater(max_pool_image, self.thresh) & tf.math.equal(max_pool_image, inputs)
indices = tf.where(cond)
bind, xind, yind = indices[:, 0], indices[:, 2], indices[:, 1]
confidence = tf.gather_nd(inputs, indices)
# local CoG estimator
if self.use_local_avg:
x_image = K.conv2d(inputs, self.kernel_x, padding='same')
y_image = K.conv2d(inputs, self.kernel_y, padding='same')
sum_image = K.conv2d(inputs, self.kernel_sum, padding='same')
confidence = tf.cast(tf.gather_nd(sum_image, indices), dtype=tf.float32)
x_local = tf.math.divide(tf.gather_nd(x_image, indices),tf.gather_nd(sum_image, indices))
y_local = tf.math.divide(tf.gather_nd(y_image, indices),tf.gather_nd(sum_image, indices))
xind = tf.cast(xind, dtype=tf.float32) + tf.cast(x_local, dtype=tf.float32)
yind = tf.cast(yind, dtype=tf.float32) + tf.cast(y_local, dtype=tf.float32)
else:
xind = tf.cast(xind, dtype=tf.float32)
yind = tf.cast(yind, dtype=tf.float32)
return bind, xind, yind, confidence
def get_config(self):
# Implement get_config to enable serialization. This is optional.
base_config = super(Maximafinder, self).get_config()
config = {}
return dict(list(base_config.items()) + list(config.items()))
# ------------------------------- Prediction with postprocessing function-------------------------------
def batchFramePredictionLocalization(dataPath, filename, modelPath, savePath, batch_size=1, thresh=0.1, neighborhood_size=3, use_local_avg = False, pixel_size = None):
"""
This function tests a trained model on the desired test set, given the
tiff stack of test images, learned weights, and normalization factors.
# Inputs
dataPath - the path to the folder containing the tiff stack(s) to run prediction on
filename - the name of the file to process
modelPath - the path to the folder containing the weights file and the mean and standard deviation file generated in train_model
savePath - the path to the folder where to save the prediction
batch_size - the number of frames to predict on for each iteration
thresh - threshold percentage from the maximum of the gaussian scaling
neighborhood_size - the size of the neighborhood for local maxima finding
use_local_avg - Boolean, whether to perform local averaging or not
"""
# load mean and std
matfile = sio.loadmat(os.path.join(modelPath,'model_metadata.mat'))
test_mean = np.array(matfile['mean_test'])
test_std = np.array(matfile['std_test'])
upsampling_factor = np.array(matfile['upsampling_factor'])
upsampling_factor = upsampling_factor.item() # convert to scalar
L2_weighting_factor = np.array(matfile['Normalization factor'])
L2_weighting_factor = L2_weighting_factor.item() # convert to scalar
# Read in the raw file
Images = io.imread(os.path.join(dataPath, filename))
if pixel_size is None:
pixel_size, _, _ = getPixelSizeTIFFmetadata(os.path.join(dataPath, filename), display=True)
pixel_size_hr = pixel_size/upsampling_factor
# get dataset dimensions
(nFrames, M, N) = Images.shape
print('Input image is '+str(N)+'x'+str(M)+' with '+str(nFrames)+' frames.')
# Build the model for a bigger image
model = buildModel((upsampling_factor*M, upsampling_factor*N, 1))
# Load the trained weights
model.load_weights(os.path.join(modelPath,'weights_best.hdf5'))
# add a post-processing module
max_layer = Maximafinder(thresh*L2_weighting_factor, neighborhood_size, use_local_avg)
# Initialise the results: lists will be used to collect all the localizations
frame_number_list, x_nm_list, y_nm_list, confidence_au_list = [], [], [], []
# Initialise the results
Prediction = np.zeros((M*upsampling_factor, N*upsampling_factor), dtype=np.float32)
Widefield = np.zeros((M, N), dtype=np.float32)
# run model in batches
n_batches = math.ceil(nFrames/batch_size)
for b in tqdm(range(n_batches)):
nF = min(batch_size, nFrames - b*batch_size)
Images_norm = np.zeros((nF, M, N),dtype=np.float32)
Images_upsampled = np.zeros((nF, M*upsampling_factor, N*upsampling_factor), dtype=np.float32)
# Upsampling using a simple nearest neighbor interp and calculating - MULTI-THREAD this?
for f in range(nF):
Images_norm[f,:,:] = project_01(Images[b*batch_size+f,:,:])
Images_norm[f,:,:] = normalize_im(Images_norm[f,:,:], test_mean, test_std)
Images_upsampled[f,:,:] = np.kron(Images_norm[f,:,:], np.ones((upsampling_factor,upsampling_factor)))
Widefield += Images[b*batch_size+f,:,:]
# Reshaping
Images_upsampled = np.expand_dims(Images_upsampled,axis=3)
# Run prediction and local maxima finding
predicted_density = model.predict_on_batch(Images_upsampled)
predicted_density[predicted_density < 0] = 0
Prediction += predicted_density.sum(axis = 3).sum(axis = 0)
bind, xind, yind, confidence = max_layer(predicted_density)
# normalizing the confidence by the L2_weighting_factor
confidence /= L2_weighting_factor
# convert indices to nm and append to the results
xind, yind = xind*pixel_size_hr, yind*pixel_size_hr
frmind = (bind.numpy() + b*batch_size + 1).tolist()
xind = xind.numpy().tolist()
yind = yind.numpy().tolist()
confidence = confidence.numpy().tolist()
frame_number_list += frmind
x_nm_list += xind
y_nm_list += yind
confidence_au_list += confidence
# Open and create the csv file that will contain all the localizations
if use_local_avg:
ext = '_avg'
else:
ext = '_max'
with open(os.path.join(savePath, 'Localizations_' + os.path.splitext(filename)[0] + ext + '.csv'), "w", newline='') as file:
writer = csv.writer(file)
writer.writerow(['frame', 'x [nm]', 'y [nm]', 'confidence [a.u]'])
locs = list(zip(frame_number_list, x_nm_list, y_nm_list, confidence_au_list))
writer.writerows(locs)
# Save the prediction and widefield image
Widefield = np.kron(Widefield, np.ones((upsampling_factor,upsampling_factor)))
Widefield = np.float32(Widefield)
# io.imsave(os.path.join(savePath, 'Predicted_'+os.path.splitext(filename)[0]+'.tif'), Prediction)
# io.imsave(os.path.join(savePath, 'Widefield_'+os.path.splitext(filename)[0]+'.tif'), Widefield)
saveAsTIF(savePath, 'Predicted_'+os.path.splitext(filename)[0], Prediction, pixel_size_hr)
saveAsTIF(savePath, 'Widefield_'+os.path.splitext(filename)[0], Widefield, pixel_size_hr)
return
# Colors for the warning messages
class bcolors:
WARNING = '\033[31m'
NORMAL = '\033[0m' # white (normal)
def list_files(directory, extension):
return (f for f in os.listdir(directory) if f.endswith('.' + extension))
# @njit(parallel=True)
def subPixelMaxLocalization(array, method = 'CoM', patch_size = 3):
xMaxInd, yMaxInd = np.unravel_index(array.argmax(), array.shape, order='C')
centralPatch = array[(xMaxInd-patch_size):(xMaxInd+patch_size+1),(yMaxInd-patch_size):(yMaxInd+patch_size+1)]
if (method == 'MAX'):
x0 = xMaxInd
y0 = yMaxInd
elif (method == 'CoM'):
# Centre of mass computed over centralPatch, the neighbourhood extracted around the maximum
x0 = 0
y0 = 0
S = 0
patch_dim = centralPatch.shape[0]
for xy in range(patch_dim*patch_dim):
y = math.floor(xy/patch_dim)
x = xy - y*patch_dim
x0 += x*centralPatch[x,y]
y0 += y*centralPatch[x,y]
S += centralPatch[x,y]
x0 = x0/S - patch_size + xMaxInd
y0 = y0/S - patch_size + yMaxInd
elif (method == 'Radiality'):
# Not implemented yet
x0 = xMaxInd
y0 = yMaxInd
return (x0, y0)
@njit(parallel=True)
def correctDriftLocalization(xc_array, yc_array, frames, xDrift, yDrift):
n_locs = xc_array.shape[0]
xc_array_Corr = np.empty(n_locs)
yc_array_Corr = np.empty(n_locs)
for loc in prange(n_locs):
xc_array_Corr[loc] = xc_array[loc] - xDrift[frames[loc]]
yc_array_Corr[loc] = yc_array[loc] - yDrift[frames[loc]]
return (xc_array_Corr, yc_array_Corr)
print('--------------------------------')
print('DeepSTORM installation complete.')
# Check if this is the latest version of the notebook
All_notebook_versions = pd.read_csv("https://raw.githubusercontent.com/HenriquesLab/ZeroCostDL4Mic/master/Colab_notebooks/Latest_Notebook_versions.csv", dtype=str)
print('Notebook version: '+Notebook_version)
Latest_Notebook_version = All_notebook_versions[All_notebook_versions["Notebook"] == Network]['Version'].iloc[0]
print('Latest notebook version: '+Latest_Notebook_version)
if Notebook_version == Latest_Notebook_version:
print("This notebook is up-to-date.")
else:
print(bcolors.WARNING +"A new version of this notebook has been released. We recommend that you download it at https://github.com/HenriquesLab/ZeroCostDL4Mic/wiki")
# Latest_notebook_version = pd.read_csv("https://raw.githubusercontent.com/HenriquesLab/ZeroCostDL4Mic/master/Colab_notebooks/Latest_ZeroCostDL4Mic_Release.csv")
# if Notebook_version == list(Latest_notebook_version.columns):
# print("This notebook is up-to-date.")
# if not Notebook_version == list(Latest_notebook_version.columns):
# print(bcolors.WARNING +"A new version of this notebook has been released. We recommend that you download it at https://github.com/HenriquesLab/ZeroCostDL4Mic/wiki")
def pdf_export(trained = False, raw_data = False, pretrained_model = False):
class MyFPDF(FPDF, HTMLMixin):
pass
pdf = MyFPDF()
pdf.add_page()
pdf.set_right_margin(-1)
pdf.set_font("Arial", size = 11, style='B')
#model_name = 'little_CARE_test'
day = datetime.now()
datetime_str = str(day)[0:10]
Header = 'Training report for '+Network+' model ('+model_name+')\nDate: '+datetime_str
pdf.multi_cell(180, 5, txt = Header, align = 'L')
# add another cell
if trained:
training_time = "Training time: "+str(hours)+ "hour(s) "+str(minutes)+"min(s) "+str(round(seconds))+"sec(s)"
pdf.cell(190, 5, txt = training_time, ln = 1, align='L')
pdf.ln(1)
Header_2 = 'Information for your materials and method:'
pdf.cell(190, 5, txt=Header_2, ln=1, align='L')
all_packages = ''
for requirement in freeze(local_only=True):
all_packages = all_packages+requirement+', '
#print(all_packages)
#Main Packages
main_packages = ''
version_numbers = []
for name in ['tensorflow','numpy','Keras']:
find_name=all_packages.find(name)
main_packages = main_packages+all_packages[find_name:all_packages.find(',',find_name)]+', '
#Version numbers only here:
version_numbers.append(all_packages[find_name+len(name)+2:all_packages.find(',',find_name)])
cuda_version = subprocess.run('nvcc --version',stdout=subprocess.PIPE, shell=True)
cuda_version = cuda_version.stdout.decode('utf-8')
cuda_version = cuda_version[cuda_version.find(', V')+3:-1]
gpu_name = subprocess.run('nvidia-smi',stdout=subprocess.PIPE, shell=True)
gpu_name = gpu_name.stdout.decode('utf-8')
gpu_name = gpu_name[gpu_name.find('Tesla'):gpu_name.find('Tesla')+10]
#print(cuda_version[cuda_version.find(', V')+3:-1])
#print(gpu_name)
if raw_data == True:
shape = (M,N)
else:
shape = (int(FOV_size/pixel_size),int(FOV_size/pixel_size))
#dataset_size = len(os.listdir(Training_source))
text = 'The '+Network+' model was trained from scratch for '+str(number_of_epochs)+' epochs on '+str(n_patches)+' paired image patches (image dimensions: '+str(patch_size)+', patch size (upsampled): ('+str(int(patch_size))+','+str(int(patch_size))+') with a batch size of '+str(batch_size)+', using the '+Network+' ZeroCostDL4Mic notebook (v '+Notebook_version[0]+') (von Chamier & Laine et al., 2020). Losses were calculated using MSE for the heatmaps and L1 loss for the spike prediction. Key python packages used include tensorflow (v '+version_numbers[0]+'), numpy (v '+version_numbers[1]+'), Keras (v '+version_numbers[2]+'), cuda (v '+cuda_version+'). The training was accelerated using a '+gpu_name+' GPU.'
if pretrained_model:
text = 'The '+Network+' model was trained from scratch for '+str(number_of_epochs)+' epochs on '+str(n_patches)+' paired image patches (image dimensions: '+str(patch_size)+', patch size (upsampled): ('+str(int(patch_size))+','+str(int(patch_size))+') with a batch size of '+str(batch_size)+', using the '+Network+' ZeroCostDL4Mic notebook (v '+Notebook_version[0]+') (von Chamier & Laine et al., 2020). Losses were calculated using MSE for the heatmaps and L1 loss for the spike prediction. The models was retrained from a pretrained model. Key python packages used include tensorflow (v '+version_numbers[0]+'), numpy (v '+version_numbers[1]+'), Keras (v '+version_numbers[2]+'), cuda (v '+cuda_version+'). The training was accelerated using a '+gpu_name+' GPU.'
pdf.set_font('')
pdf.set_font_size(10.)
pdf.multi_cell(180, 5, txt = text, align='L')
pdf.ln(1)
pdf.set_font('')
pdf.set_font("Arial", size = 11, style='B')
pdf.ln(1)
pdf.cell(190, 5, txt = 'Training dataset', align='L', ln=1)
pdf.set_font('')
pdf.set_font_size(10.)
if raw_data==False:
simul_text = 'The training dataset was created in the notebook using the following simulation settings:'
pdf.cell(200, 5, txt=simul_text, align='L')
pdf.ln(1)
html = """
<table width=60% style="margin-left:0px;">
<tr>
<th width = 50% align="left">Setting</th>
<th width = 50% align="left">Simulated Value</th>
</tr>
<tr>
<td width = 50%>FOV_size</td>
<td width = 50%>{0}</td>
</tr>
<tr>
<td width = 50%>pixel_size</td>
<td width = 50%>{1}</td>
</tr>
<tr>
<td width = 50%>ADC_per_photon_conversion</td>
<td width = 50%>{2}</td>
</tr>
<tr>
<td width = 50%>ReadOutNoise_ADC</td>
<td width = 50%>{3}</td>
</tr>
<tr>
<td width = 50%>ADC_offset</td>
<td width = 50%>{4}</td>
</tr>
<tr>
<td width = 50%>emitter_density</td>
<td width = 50%>{5}</td>
</tr>
<tr>
<td width = 50%>emitter_density_std</td>
<td width = 50%>{6}</td>
</tr>
<tr>
<td width = 50%>number_of_frames</td>
<td width = 50%>{7}</td>
</tr>
<tr>
<td width = 50%>sigma</td>
<td width = 50%>{8}</td>
</tr>
<tr>
<td width = 50%>sigma_std</td>
<td width = 50%>{9}</td>
</tr>
<tr>
<td width = 50%>n_photons</td>
<td width = 50%>{10}</td>
</tr>
<tr>
<td width = 50%>n_photons_std</td>
<td width = 50%>{11}</td>
</tr>
</table>
""".format(FOV_size, pixel_size, ADC_per_photon_conversion, ReadOutNoise_ADC, ADC_offset, emitter_density, emitter_density_std, number_of_frames, sigma, sigma_std, n_photons, n_photons_std)
pdf.write_html(html)
else:
simul_text = 'The training dataset was simulated using ThunderSTORM and loaded into the notebook.'
pdf.multi_cell(190, 5, txt=simul_text, align='L')
pdf.set_font("Arial", size = 11, style='B')
#pdf.ln(1)
#pdf.cell(190, 5, txt = 'Training Dataset', align='L', ln=1)
pdf.set_font('')
pdf.set_font('Arial', size = 10, style = 'B')
pdf.cell(29, 5, txt= 'ImageData_path', align = 'L', ln=0)
pdf.set_font('')
pdf.multi_cell(170, 5, txt = ImageData_path, align = 'L')
pdf.set_font('')
pdf.set_font('Arial', size = 10, style = 'B')
pdf.cell(28, 5, txt= 'LocalizationData_path:', align = 'L', ln=0)
pdf.set_font('')
pdf.multi_cell(170, 5, txt = LocalizationData_path, align = 'L')
pdf.set_font('Arial', size = 10, style = 'B')
pdf.cell(28, 5, txt= 'pixel_size:', align = 'L', ln=0)
pdf.set_font('')
pdf.multi_cell(170, 5, txt = str(pixel_size), align = 'L')
#pdf.cell(190, 5, txt=aug_text, align='L', ln=1)
pdf.set_font('Arial', size = 11, style = 'B')
pdf.ln(1)
pdf.cell(180, 5, txt = 'Parameters', align='L', ln=1)
pdf.set_font('')
pdf.set_font_size(10.)
# if Use_Default_Advanced_Parameters:
# pdf.cell(200, 5, txt='Default Advanced Parameters were enabled')
pdf.cell(200, 5, txt='The following parameters were used to generate patches:')
pdf.ln(1)
html = """
<table width=70% style="margin-left:0px;">
<tr>
<th width = 50% align="left">Patch Parameter</th>
<th width = 50% align="left">Value</th>
</tr>
<tr>
<td width = 50%>patch_size</td>
<td width = 50%>{0}</td>
</tr>
<tr>
<td width = 50%>upsampling_factor</td>
<td width = 50%>{1}</td>
</tr>
<tr>
<td width = 50%>num_patches_per_frame</td>
<td width = 50%>{2}</td>
</tr>
<tr>
<td width = 50%>min_number_of_emitters_per_patch</td>
<td width = 50%>{3}</td>
</tr>
<tr>
<td width = 50%>max_num_patches</td>
<td width = 50%>{4}</td>
</tr>
<tr>
<td width = 50%>gaussian_sigma</td>
<td width = 50%>{5}</td>
</tr>
<tr>
<td width = 50%>Automatic_normalization</td>
<td width = 50%>{6}</td>
</tr>
<tr>
<td width = 50%>L2_weighting_factor</td>
<td width = 50%>{7}</td>
</tr>
""".format(str(patch_size)+'x'+str(patch_size), upsampling_factor, num_patches_per_frame, min_number_of_emitters_per_patch, max_num_patches, gaussian_sigma, Automatic_normalization, L2_weighting_factor)
pdf.write_html(html)
pdf.ln(3)
pdf.set_font('Arial', size=10)
pdf.cell(200, 5, txt='The following parameters were used for training:')
pdf.ln(1)
html = """
<table width=70% style="margin-left:0px;">
<tr>
<th width = 50% align="left">Training Parameter</th>
<th width = 50% align="left">Value</th>
</tr>
<tr>
<td width = 50%>number_of_epochs</td>
<td width = 50%>{0}</td>
</tr>
<tr>
<td width = 50%>batch_size</td>
<td width = 50%>{1}</td>
</tr>
<tr>
<td width = 50%>number_of_steps</td>
<td width = 50%>{2}</td>
</tr>
<tr>
<td width = 50%>percentage_validation</td>
<td width = 50%>{3}</td>
</tr>
<tr>
<td width = 50%>initial_learning_rate</td>
<td width = 50%>{4}</td>
</tr>
</table>
""".format(number_of_epochs,batch_size,number_of_steps,percentage_validation,initial_learning_rate)
pdf.write_html(html)
pdf.ln(1)
# pdf.set_font('')
pdf.set_font('Arial', size = 10, style = 'B')
pdf.cell(21, 5, txt= 'Model Path:', align = 'L', ln=0)
pdf.set_font('')
pdf.multi_cell(170, 5, txt = model_path+'/'+model_name, align = 'L')
pdf.ln(1)
pdf.cell(60, 5, txt = 'Example Training Images', ln=1)
pdf.ln(1)
exp_size = io.imread('/content/TrainingDataExample_DeepSTORM2D.png').shape
pdf.image('/content/TrainingDataExample_DeepSTORM2D.png', x = 11, y = None, w = round(exp_size[1]/8), h = round(exp_size[0]/8))
pdf.ln(1)
ref_1 = 'References:\n - ZeroCostDL4Mic: von Chamier, Lucas & Laine, Romain, et al. "Democratising deep learning for microscopy with ZeroCostDL4Mic." Nature Communications (2021).'
pdf.multi_cell(190, 5, txt = ref_1, align='L')
ref_2 = '- Deep-STORM: Nehme, Elias, et al. "Deep-STORM: super-resolution single-molecule microscopy by deep learning." Optica 5.4 (2018): 458-464.'
pdf.multi_cell(190, 5, txt = ref_2, align='L')
# if Use_Data_augmentation:
# ref_3 = '- Augmentor: Bloice, Marcus D., Christof Stocker, and Andreas Holzinger. "Augmentor: an image augmentation library for machine learning." arXiv preprint arXiv:1708.04680 (2017).'
# pdf.multi_cell(190, 5, txt = ref_3, align='L')
pdf.ln(3)
reminder = 'Important:\nRemember to perform the quality control step on all newly trained models\nPlease consider depositing your training dataset on Zenodo'
pdf.set_font('Arial', size = 11, style='B')
pdf.multi_cell(190, 5, txt=reminder, align='C')
pdf.output(model_path+'/'+model_name+'/'+model_name+'_training_report.pdf')
print('------------------------------')
print('PDF report exported in '+model_path+'/'+model_name+'/')
def qc_pdf_export():
class MyFPDF(FPDF, HTMLMixin):
pass
pdf = MyFPDF()
pdf.add_page()
pdf.set_right_margin(-1)
pdf.set_font("Arial", size = 11, style='B')
Network = 'Deep-STORM'
#model_name = os.path.basename(full_QC_model_path)
day = datetime.now()
datetime_str = str(day)[0:10]
Header = 'Quality Control report for '+Network+' model ('+os.path.basename(QC_model_path)+')\nDate: '+datetime_str
pdf.multi_cell(180, 5, txt = Header, align = 'L')
all_packages = ''
for requirement in freeze(local_only=True):
all_packages = all_packages+requirement+', '
pdf.set_font('')
pdf.set_font('Arial', size = 11, style = 'B')
pdf.ln(2)
pdf.cell(190, 5, txt = 'Loss curves', ln=1, align='L')
pdf.ln(1)
if os.path.exists(savePath+'/lossCurvePlots.png'):
exp_size = io.imread(savePath+'/lossCurvePlots.png').shape
pdf.image(savePath+'/lossCurvePlots.png', x = 11, y = None, w = round(exp_size[1]/10), h = round(exp_size[0]/10))
else:
pdf.set_font('')
pdf.set_font('Arial', size=10)
pdf.cell(190, 5, txt='If you would like to see the evolution of the loss function during training please play the first cell of the QC section in the notebook.')
pdf.ln(2)
pdf.set_font('')
pdf.set_font('Arial', size = 10, style = 'B')
pdf.ln(3)
pdf.cell(80, 5, txt = 'Example Quality Control Visualisation', ln=1)
pdf.ln(1)
exp_size = io.imread(savePath+'/QC_example_data.png').shape
pdf.image(savePath+'/QC_example_data.png', x = 16, y = None, w = round(exp_size[1]/8), h = round(exp_size[0]/8))
pdf.ln(1)
pdf.set_font('')
pdf.set_font('Arial', size = 11, style = 'B')
pdf.ln(1)
pdf.cell(180, 5, txt = 'Quality Control Metrics', align='L', ln=1)
pdf.set_font('')
pdf.set_font_size(10.)
pdf.ln(1)
html = """
<body>
<font size="7" face="Courier New" >
<table width=94% style="margin-left:0px;">"""
with open(savePath+'/'+os.path.basename(QC_model_path)+'_QC_metrics.csv', 'r') as csvfile:
metrics = csv.reader(csvfile)
header = next(metrics)
image = header[0]
mSSIM_PvsGT = header[1]
mSSIM_SvsGT = header[2]
NRMSE_PvsGT = header[3]
NRMSE_SvsGT = header[4]
PSNR_PvsGT = header[5]
PSNR_SvsGT = header[6]
header = """
<tr>
<th width = 10% align="left">{0}</th>
<th width = 15% align="left">{1}</th>
<th width = 15% align="center">{2}</th>
<th width = 15% align="left">{3}</th>
<th width = 15% align="center">{4}</th>
<th width = 15% align="left">{5}</th>
<th width = 15% align="center">{6}</th>
</tr>""".format(image,mSSIM_PvsGT,mSSIM_SvsGT,NRMSE_PvsGT,NRMSE_SvsGT,PSNR_PvsGT,PSNR_SvsGT)
html = html+header
for row in metrics:
image = row[0]
mSSIM_PvsGT = row[1]
mSSIM_SvsGT = row[2]
NRMSE_PvsGT = row[3]
NRMSE_SvsGT = row[4]
PSNR_PvsGT = row[5]
PSNR_SvsGT = row[6]
cells = """
<tr>
<td width = 10% align="left">{0}</td>
<td width = 15% align="center">{1}</td>
<td width = 15% align="center">{2}</td>
<td width = 15% align="center">{3}</td>
<td width = 15% align="center">{4}</td>
<td width = 15% align="center">{5}</td>
<td width = 15% align="center">{6}</td>
</tr>""".format(image,str(round(float(mSSIM_PvsGT),3)),str(round(float(mSSIM_SvsGT),3)),str(round(float(NRMSE_PvsGT),3)),str(round(float(NRMSE_SvsGT),3)),str(round(float(PSNR_PvsGT),3)),str(round(float(PSNR_SvsGT),3)))
html = html+cells
html = html+"""</body></table>"""
pdf.write_html(html)
pdf.ln(1)
pdf.set_font('')
pdf.set_font_size(10.)
ref_1 = 'References:\n - ZeroCostDL4Mic: von Chamier, Lucas & Laine, Romain, et al. "Democratising deep learning for microscopy with ZeroCostDL4Mic." Nature Communications (2021).'
pdf.multi_cell(190, 5, txt = ref_1, align='L')
ref_2 = '- Deep-STORM: Nehme, Elias, et al. "Deep-STORM: super-resolution single-molecule microscopy by deep learning." Optica 5.4 (2018): 458-464.'
pdf.multi_cell(190, 5, txt = ref_2, align='L')
pdf.ln(3)
reminder = 'To find the parameters and other information about how this model was trained, go to the training_report.pdf of this model which should be in the folder of the same name.'
pdf.set_font('Arial', size = 11, style='B')
pdf.multi_cell(190, 5, txt=reminder, align='C')
pdf.output(savePath+'/'+os.path.basename(QC_model_path)+'_QC_report.pdf')
print('------------------------------')
print('QC PDF report exported as '+savePath+'/'+os.path.basename(QC_model_path)+'_QC_report.pdf')
# Build requirements file for local run
after = [str(m) for m in sys.modules]
build_requirements_file(before, after)
```
# **2. Complete the Colab session**
---
## **2.1. Check for GPU access**
---
By default, the session should be using Python 3 and GPU acceleration, but it is possible to ensure that these are set properly by doing the following:
<font size = 4>Go to **Runtime -> Change the Runtime type**
<font size = 4>**Runtime type: Python 3** *(Python 3 is the programming language in which this notebook is written)*
<font size = 4>**Accelerator: GPU** *(Graphics processing unit)*
```
#@markdown ##Run this cell to check if you have GPU access
# %tensorflow_version 1.x
import tensorflow as tf
# if tf.__version__ != '2.2.0':
# !pip install tensorflow==2.2.0
if tf.test.gpu_device_name()=='':
print('You do not have GPU access.')
print('Did you change your runtime?')
print('If the runtime settings are correct then Google did not allocate a GPU to your session.')
print('Expect slow performance. To access a GPU, try reconnecting later.')
else:
print('You have GPU access')
!nvidia-smi
# from tensorflow.python.client import device_lib
# device_lib.list_local_devices()
# print the tensorflow version
print('Tensorflow version is ' + str(tf.__version__))
```
## **2.2. Mount your Google Drive**
---
<font size = 4> To use this notebook on the data present in your Google Drive, you need to mount your Google Drive to this notebook.
<font size = 4> Play the cell below to mount your Google Drive and follow the link. In the new browser window, select your drive and select 'Allow', copy the code, paste it into the cell and press Enter. This will give Colab access to the data on your drive.
<font size = 4> Once this is done, your data are available in the **Files** tab on the top left of the notebook.
```
#@markdown ##Run this cell to connect your Google Drive to Colab
#@markdown * Click on the URL.
#@markdown * Sign in to your Google Account.
#@markdown * Copy the authorization code.
#@markdown * Enter the authorization code.
#@markdown * Click on the "Files" tab on the left. Refresh it. Your Google Drive folder should now be available there as "gdrive".
#mounts user's Google Drive to Google Colab.
from google.colab import drive
drive.mount('/content/gdrive')
```
# **3. Generate patches for training**
---
For Deep-STORM the training data can be obtained in two ways:
* Simulated using ThunderSTORM or another simulation tool and loaded here (**using Section 3.1.a**)
* Directly simulated in this notebook (**using Section 3.1.b**)
## **3.1.a Load training data**
---
Here you can load your simulated data along with its corresponding localization file; a short sketch of the expected localization table layout is given below.
* The `pixel_size` is defined in nanometer (nm).
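As a reference, here is a minimal sketch of the kind of localization table this section expects (the same column convention as the dataset simulated in Section 3.1.b); the values and file path are illustrative assumptions only.
```
# Illustrative sketch of the expected localization table layout (same columns as
# the dataset simulated in Section 3.1.b). The values and path are assumptions.
import pandas as pd

LocData_example = pd.DataFrame({
    'frame': [1, 1, 2],
    'x [nm]': [350.0, 1220.5, 1900.2],
    'y [nm]': [540.3, 800.9, 2210.7],
    'Photon #': [2250.0, 2100.0, 2400.0],
    'Sigma [nm]': [110.0, 112.5, 108.0]
})
LocData_example.index += 1  # indices start at 1, as in ThunderSTORM
LocData_example.to_csv('ExampleLocalizations.csv')  # illustrative output path
print(LocData_example)
```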
```
#@markdown ##Load raw data
load_raw_data = True
# Get user input
ImageData_path = "" #@param {type:"string"}
LocalizationData_path = "" #@param {type: "string"}
#@markdown Get pixel size from file?
get_pixel_size_from_file = True #@param {type:"boolean"}
#@markdown Otherwise, use this value:
pixel_size = 100 #@param {type:"number"}
if get_pixel_size_from_file:
pixel_size,_,_ = getPixelSizeTIFFmetadata(ImageData_path, True)
# load the tiff data
Images = io.imread(ImageData_path)
# get dataset dimensions
if len(Images.shape) == 3:
(number_of_frames, M, N) = Images.shape
elif len(Images.shape) == 2:
(M, N) = Images.shape
number_of_frames = 1
print('Loaded images: '+str(M)+'x'+str(N)+' with '+str(number_of_frames)+' frames')
# Interactive display of the stack
def scroll_in_time(frame):
f=plt.figure(figsize=(6,6))
plt.imshow(Images[frame-1], interpolation='nearest', cmap = 'gray')
plt.title('Training source at frame = ' + str(frame))
plt.axis('off');
if number_of_frames > 1:
interact(scroll_in_time, frame=widgets.IntSlider(min=1, max=Images.shape[0], step=1, value=0, continuous_update=False));
else:
f=plt.figure(figsize=(6,6))
plt.imshow(Images, interpolation='nearest', cmap = 'gray')
plt.title('Training source')
plt.axis('off');
# Load the localization file and display the last rows
LocData = pd.read_csv(LocalizationData_path, index_col=0)
LocData.tail()
```
## **3.1.b Simulate training data**
---
This simulation tool allows you to generate SMLM data of randomly distributed emitters in a field-of-view.
The assumptions are as follows:
* Gaussian Point Spread Function (PSF) with standard deviation defined by `sigma`. The nominal value of `sigma` can be evaluated using `sigma = 0.21 x Lambda / NA` (from [Zhang *et al.*, Applied Optics 2007](https://doi.org/10.1364/AO.46.001819)); a short sketch of this calculation is given after these notes.
* Each emitter will emit `n_photons` per frame, and generate their equivalent Poisson noise.
* The camera will contribute Gaussian noise to the signal with a standard deviation defined by `ReadOutNoise_ADC` (in ADC counts).
* The `emitter_density` is defined as the number of emitters / um^2 on any given frame. Variability in the emitter density can be applied by adjusting `emitter_density_std`, which is the standard deviation of the normal distribution that the density is drawn from for each individual frame.
* The `n_photons` and `sigma` can additionally include some Gaussian variability by setting `n_photons_std` and `sigma_std`.
Important note:
- All dimensions are in nanometer (e.g. `FOV_size` = 6400 represents a field of view of 6.4 um x 6.4 um).
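The nominal PSF width used by the simulator can be estimated directly from the imaging parameters, as in the short sketch below (the wavelength and NA values are illustrative assumptions; the commented-out `NA`/`wavelength` parameters in the cell below follow the same relation).
```
# Minimal sketch: estimating the Gaussian PSF sigma from the emission wavelength
# and the numerical aperture, using sigma = 0.21 x Lambda / NA (Zhang et al., 2007).
# The example values are illustrative assumptions.
wavelength = 680  # emission wavelength in nm
NA = 1.49         # numerical aperture of the objective
sigma = 0.21*wavelength/NA
print('Estimated PSF sigma: '+str(round(sigma,1))+' nm')
```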
```
load_raw_data = False
# ---------------------------- User input ----------------------------
#@markdown Run the simulation
#@markdown ---
#@markdown Camera settings:
FOV_size = 6400#@param {type:"number"}
pixel_size = 100#@param {type:"number"}
ADC_per_photon_conversion = 1 #@param {type:"number"}
ReadOutNoise_ADC = 4.5#@param {type:"number"}
ADC_offset = 50#@param {type:"number"}
#@markdown Acquisition settings:
emitter_density = 6#@param {type:"number"}
emitter_density_std = 0#@param {type:"number"}
number_of_frames = 20#@param {type:"integer"}
sigma = 110 #@param {type:"number"}
sigma_std = 5 #@param {type:"number"}
# NA = 1.1 #@param {type:"number"}
# wavelength = 800#@param {type:"number"}
# wavelength_std = 150#@param {type:"number"}
n_photons = 2250#@param {type:"number"}
n_photons_std = 250#@param {type:"number"}
# ---------------------------- Variable initialisation ----------------------------
# Start the clock to measure how long it takes
start = time.time()
print('-----------------------------------------------------------')
n_molecules = emitter_density*FOV_size*FOV_size/10**6
n_molecules_std = emitter_density_std*FOV_size*FOV_size/10**6
print('Number of molecules / FOV: '+str(round(n_molecules,2))+' +/- '+str((round(n_molecules_std,2))))
# sigma = 0.21*wavelength/NA
# sigma_std = 0.21*wavelength_std/NA
# print('Gaussian PSF sigma: '+str(round(sigma,2))+' +/- '+str(round(sigma_std,2))+' nm')
M = N = round(FOV_size/pixel_size)
FOV_size = M*pixel_size
print('Final image size: '+str(M)+'x'+str(M)+' ('+str(round(FOV_size/1000, 3))+'um x'+str(round(FOV_size/1000,3))+' um)')
np.random.seed(1)
display_upsampling = 8 # used to display the loc map here
NoiseFreeImages = np.zeros((number_of_frames, M, M))
locImage = np.zeros((number_of_frames, display_upsampling*M, display_upsampling*N))
frames = []
all_xloc = []
all_yloc = []
all_photons = []
all_sigmas = []
# ---------------------------- Main simulation loop ----------------------------
print('-----------------------------------------------------------')
for f in tqdm(range(number_of_frames)):
# Define the coordinates of emitters by randomly distributing them across the FOV
n_mol = int(max(round(np.random.normal(n_molecules, n_molecules_std, size=1)[0]), 0))
x_c = np.random.uniform(low=0.0, high=FOV_size, size=n_mol)
y_c = np.random.uniform(low=0.0, high=FOV_size, size=n_mol)
photon_array = np.random.normal(n_photons, n_photons_std, size=n_mol)
sigma_array = np.random.normal(sigma, sigma_std, size=n_mol)
# x_c = np.linspace(0,3000,5)
# y_c = np.linspace(0,3000,5)
all_xloc += x_c.tolist()
all_yloc += y_c.tolist()
frames += ((f+1)*np.ones(x_c.shape[0])).tolist()
all_photons += photon_array.tolist()
all_sigmas += sigma_array.tolist()
locImage[f] = FromLoc2Image_SimpleHistogram(x_c, y_c, image_size = (N*display_upsampling, M*display_upsampling), pixel_size = pixel_size/display_upsampling)
# # Get the approximated locations according to the grid pixel size
# Chr_emitters = [int(max(min(round(display_upsampling*x_c[i]/pixel_size),N*display_upsampling-1),0)) for i in range(len(x_c))]
# Rhr_emitters = [int(max(min(round(display_upsampling*y_c[i]/pixel_size),M*display_upsampling-1),0)) for i in range(len(y_c))]
# # Build Localization image
# for (r,c) in zip(Rhr_emitters, Chr_emitters):
# locImage[f][r][c] += 1
NoiseFreeImages[f] = FromLoc2Image_Erf(x_c, y_c, photon_array, sigma_array, image_size = (M,M), pixel_size = pixel_size)
# ---------------------------- Create DataFrame for localization file ----------------------------
# Table with localization info as dataframe output
LocData = pd.DataFrame()
LocData["frame"] = frames
LocData["x [nm]"] = all_xloc
LocData["y [nm]"] = all_yloc
LocData["Photon #"] = all_photons
LocData["Sigma [nm]"] = all_sigmas
LocData.index += 1 # set indices to start at 1 and not 0 (same as ThunderSTORM)
# ---------------------------- Estimation of SNR ----------------------------
n_frames_for_SNR = 100
M_SNR = 10
x_c = np.random.uniform(low=0.0, high=pixel_size*M_SNR, size=n_frames_for_SNR)
y_c = np.random.uniform(low=0.0, high=pixel_size*M_SNR, size=n_frames_for_SNR)
photon_array = np.random.normal(n_photons, n_photons_std, size=n_frames_for_SNR)
sigma_array = np.random.normal(sigma, sigma_std, size=n_frames_for_SNR)
SNR = np.zeros(n_frames_for_SNR)
for i in range(n_frames_for_SNR):
SingleEmitterImage = FromLoc2Image_Erf(np.array([x_c[i]]), np.array([y_c[i]]), np.array([photon_array[i]]), np.array([sigma_array[i]]), (M_SNR, M_SNR), pixel_size)
Signal_photon = np.max(SingleEmitterImage)
Noise_photon = math.sqrt((ReadOutNoise_ADC/ADC_per_photon_conversion)**2 + Signal_photon)
SNR[i] = Signal_photon/Noise_photon
print('SNR: '+str(round(np.mean(SNR),2))+' +/- '+str(round(np.std(SNR),2)))
# ---------------------------- ----------------------------
# Table with info
simParameters = pd.DataFrame()
simParameters["FOV size (nm)"] = [FOV_size]
simParameters["Pixel size (nm)"] = [pixel_size]
simParameters["ADC/photon"] = [ADC_per_photon_conversion]
simParameters["Read-out noise (ADC)"] = [ReadOutNoise_ADC]
simParameters["Constant offset (ADC)"] = [ADC_offset]
simParameters["Emitter density (emitters/um^2)"] = [emitter_density]
simParameters["STD of emitter density (emitters/um^2)"] = [emitter_density_std]
simParameters["Number of frames"] = [number_of_frames]
# simParameters["NA"] = [NA]
# simParameters["Wavelength (nm)"] = [wavelength]
# simParameters["STD of wavelength (nm)"] = [wavelength_std]
simParameters["Sigma (nm))"] = [sigma]
simParameters["STD of Sigma (nm))"] = [sigma_std]
simParameters["Number of photons"] = [n_photons]
simParameters["STD of number of photons"] = [n_photons_std]
simParameters["SNR"] = [np.mean(SNR)]
simParameters["STD of SNR"] = [np.std(SNR)]
# ---------------------------- Finish simulation ----------------------------
# Calculating the noisy image
Images = ADC_per_photon_conversion * np.random.poisson(NoiseFreeImages) + ReadOutNoise_ADC * np.random.normal(size = (number_of_frames, M, N)) + ADC_offset
Images[Images <= 0] = 0
# Convert to 16-bit or 32-bits integers
if Images.max() < (2**16-1):
Images = Images.astype(np.uint16)
else:
Images = Images.astype(np.uint32)
# ---------------------------- Display ----------------------------
# Displaying the time elapsed for simulation
dt = time.time() - start
minutes, seconds = divmod(dt, 60)
hours, minutes = divmod(minutes, 60)
print("Time elapsed:",hours, "hour(s)",minutes,"min(s)",round(seconds,1),"sec(s)")
# Interactively display the results using Widgets
def scroll_in_time(frame):
f = plt.figure(figsize=(18,6))
plt.subplot(1,3,1)
plt.imshow(locImage[frame-1], interpolation='bilinear', vmin = 0, vmax=0.1)
plt.title('Localization image')
plt.axis('off');
plt.subplot(1,3,2)
plt.imshow(NoiseFreeImages[frame-1], interpolation='nearest', cmap='gray')
plt.title('Noise-free simulation')
plt.axis('off');
plt.subplot(1,3,3)
plt.imshow(Images[frame-1], interpolation='nearest', cmap='gray')
plt.title('Noisy simulation')
plt.axis('off');
interact(scroll_in_time, frame=widgets.IntSlider(min=1, max=Images.shape[0], step=1, value=0, continuous_update=False));
# Display the last rows of the dataframe with localizations
LocData.tail()
#@markdown ---
#@markdown ##Play this cell to save the simulated stack
#@markdown Please select a path to the folder where to save the simulated data. It is not necessary to save the data to run the training, but keeping the simulated data for your own records can be useful to check its validity.
Save_path = "" #@param {type:"string"}
if not os.path.exists(Save_path):
os.makedirs(Save_path)
print('Folder created.')
else:
print('Training data already exists in folder: Data overwritten.')
saveAsTIF(Save_path, 'SimulatedDataset', Images, pixel_size)
# io.imsave(os.path.join(Save_path, 'SimulatedDataset.tif'),Images)
LocData.to_csv(os.path.join(Save_path, 'SimulatedDataset.csv'))
simParameters.to_csv(os.path.join(Save_path, 'SimulatedParameters.csv'))
print('Training dataset saved.')
```
## **3.2. Generate training patches**
---
Training patches need to be created from the training data generated above.
* The `patch_size` needs to give sufficient contextual information and for most cases a `patch_size` of 26 (corresponding to patches of 26x26 pixels) works fine. **DEFAULT: 26**
* The `upsampling_factor` defines the effective magnification of the final super-resolved image compared to the input image (this is called magnification in ThunderSTORM). This is used to generate the super-resolved patches as the target dataset. Using an `upsampling_factor` of 16 will require more memory and it may be necessary to decrease the `patch_size` to 16, for example. **DEFAULT: 8**
* The `num_patches_per_frame` defines the number of patches extracted from each frame generated in section 3.1. **DEFAULT: 500**
* The `min_number_of_emitters_per_patch` defines the minimum number of emitters that need to be present in the patch to be a valid patch. An empty patch does not contain useful information for the network to learn from. **DEFAULT: 7**
* The `max_num_patches` defines the maximum number of patches to generate. Fewer may be generated depending on how many patches are rejected and how many frames are available. **DEFAULT: 10000**
* The `gaussian_sigma` defines the Gaussian standard deviation (in magnified pixels) applied to generate the super-resolved target image; a short sketch of how the target is built is given after this list. **DEFAULT: 1**
* The `L2_weighting_factor` is a normalization factor used in the loss function. It helps balance the loss from the L2 norm. When using higher densities, this factor should be decreased and vice-versa. It can be automatically calculated using an empirical formula. **DEFAULT: 100**
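To make the target-generation step concrete, the sketch below shows in isolation how a super-resolved target is built from a list of localizations: spikes are binned onto the upsampled grid and blurred with `gaussian_sigma`. The coordinates and sizes are illustrative assumptions; the cell below performs this for every frame and patch.
```
# Minimal sketch of how training targets are built: localizations are binned onto
# the upsampled grid as spikes, then blurred with gaussian_sigma to give the heatmap
# target used by the L2 loss. Coordinates and sizes are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

upsampling_factor = 8
pixel_size = 100                            # camera pixel size in nm
pixel_size_hr = pixel_size/upsampling_factor
gaussian_sigma = 1                          # in magnified (upsampled) pixels
grid_size = 26*upsampling_factor            # one patch-sized grid, in upsampled pixels

x_nm = np.array([350.0, 1220.5, 1900.2])    # illustrative localizations (nm)
y_nm = np.array([540.3, 800.9, 2210.7])

spikes = np.zeros((grid_size, grid_size))
cols = np.clip(np.round(x_nm/pixel_size_hr).astype(int), 0, grid_size-1)
rows = np.clip(np.round(y_nm/pixel_size_hr).astype(int), 0, grid_size-1)
spikes[rows, cols] = 1
heatmap = gaussian_filter(spikes, gaussian_sigma)   # super-resolved target
```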
```
#@markdown ## **Provide patch parameters**
# -------------------- User input --------------------
patch_size = 26 #@param {type:"integer"}
upsampling_factor = 8 #@param ["4", "8", "16"] {type:"raw"}
num_patches_per_frame = 500#@param {type:"integer"}
min_number_of_emitters_per_patch = 7#@param {type:"integer"}
max_num_patches = 10000#@param {type:"integer"}
gaussian_sigma = 1#@param {type:"integer"}
#@markdown Estimate the optimal normalization factor automatically?
Automatic_normalization = True #@param {type:"boolean"}
#@markdown Otherwise, it will use the following value:
L2_weighting_factor = 100 #@param {type:"number"}
# -------------------- Prepare variables --------------------
# Start the clock to measure how long it takes
start = time.time()
# Initialize some parameters
pixel_size_hr = pixel_size/upsampling_factor # in nm
n_patches = min(number_of_frames*num_patches_per_frame, max_num_patches)
patch_size = patch_size*upsampling_factor
# Dimensions of the high-res grid
Mhr = upsampling_factor*M # in pixels
Nhr = upsampling_factor*N # in pixels
# Initialize the training patches and labels
patches = np.zeros((n_patches, patch_size, patch_size), dtype = np.float32)
spikes = np.zeros((n_patches, patch_size, patch_size), dtype = np.float32)
heatmaps = np.zeros((n_patches, patch_size, patch_size), dtype = np.float32)
# Run over all frames and construct the training examples
k = 1 # current patch count
skip_counter = 0 # number of patches skipped due to low density
id_start = 0 # id position in LocData for current frame
print('Generating '+str(n_patches)+' patches of '+str(patch_size)+'x'+str(patch_size))
n_locs = len(LocData.index)
print('Total number of localizations: '+str(n_locs))
density = n_locs/(M*N*number_of_frames*(0.001*pixel_size)**2)
print('Density: '+str(round(density,2))+' locs/um^2')
n_locs_per_patch = patch_size**2*density
if Automatic_normalization:
# This empirical formula attempts to balance the L2 loss between the background and the bright spikes
# A value of 100 was originally chosen to balance L2 for a patch size of 2.6x2.6 um^2, a 0.1 um pixel size and a density of 3 (hence the 20.28), at upsampling_factor = 8
L2_weighting_factor = 100/math.sqrt(min(n_locs_per_patch, min_number_of_emitters_per_patch)*8**2/(upsampling_factor**2*20.28))
print('Normalization factor: '+str(round(L2_weighting_factor,2)))
# -------------------- Patch generation loop --------------------
print('-----------------------------------------------------------')
for (f, thisFrame) in enumerate(tqdm(Images)):
# Upsample the frame
upsampledFrame = np.kron(thisFrame, np.ones((upsampling_factor,upsampling_factor)))
# Read all the provided high-resolution locations for current frame
DataFrame = LocData[LocData['frame'] == f+1].copy()
# Get the approximated locations according to the high-res grid pixel size
Chr_emitters = [int(max(min(round(DataFrame['x [nm]'][i]/pixel_size_hr),Nhr-1),0)) for i in range(id_start+1,id_start+1+len(DataFrame.index))]
Rhr_emitters = [int(max(min(round(DataFrame['y [nm]'][i]/pixel_size_hr),Mhr-1),0)) for i in range(id_start+1,id_start+1+len(DataFrame.index))]
id_start += len(DataFrame.index)
# Build Localization image
LocImage = np.zeros((Mhr,Nhr))
LocImage[(Rhr_emitters, Chr_emitters)] = 1
# Here, there's a choice between the original Gaussian (classification approach) and using the erf function
HeatMapImage = L2_weighting_factor*gaussian_filter(LocImage, gaussian_sigma)
# HeatMapImage = L2_weighting_factor*FromLoc2Image_MultiThreaded(np.array(list(DataFrame['x [nm]'])), np.array(list(DataFrame['y [nm]'])),
# np.ones(len(DataFrame.index)), pixel_size_hr*gaussian_sigma*np.ones(len(DataFrame.index)),
# Mhr, pixel_size_hr)
# Generate random position for the top left corner of the patch
xc = np.random.randint(0, Mhr-patch_size, size=num_patches_per_frame)
yc = np.random.randint(0, Nhr-patch_size, size=num_patches_per_frame)
for c in range(len(xc)):
if LocImage[xc[c]:xc[c]+patch_size, yc[c]:yc[c]+patch_size].sum() < min_number_of_emitters_per_patch:
skip_counter += 1
continue
else:
# Limit maximal number of training examples to 15k
if k > max_num_patches:
break
else:
# Assign the patches to the right part of the images
patches[k-1] = upsampledFrame[xc[c]:xc[c]+patch_size, yc[c]:yc[c]+patch_size]
spikes[k-1] = LocImage[xc[c]:xc[c]+patch_size, yc[c]:yc[c]+patch_size]
heatmaps[k-1] = HeatMapImage[xc[c]:xc[c]+patch_size, yc[c]:yc[c]+patch_size]
k += 1 # increment current patch count
# Remove the empty data
patches = patches[:k-1]
spikes = spikes[:k-1]
heatmaps = heatmaps[:k-1]
n_patches = k-1
# -------------------- Failsafe --------------------
# Check if the size of the training set is smaller than 5k to notify user to simulate more images using ThunderSTORM
if ((k-1) < 5000):
# W = '\033[0m' # white (normal)
# R = '\033[31m' # red
print(bcolors.WARNING+'!! WARNING: Training set size is below 5K - Consider simulating more images in ThunderSTORM. !!'+bcolors.NORMAL)
# -------------------- Displays --------------------
print('Number of patches skipped due to low density: '+str(skip_counter))
# dataSize = int((getsizeof(patches)+getsizeof(heatmaps)+getsizeof(spikes))/(1024*1024)) #rounded in MB
# print('Size of patches: '+str(dataSize)+' MB')
print(str(n_patches)+' patches were generated.')
# Displaying the time elapsed for training
dt = time.time() - start
minutes, seconds = divmod(dt, 60)
hours, minutes = divmod(minutes, 60)
print("Time elapsed:",hours, "hour(s)",minutes,"min(s)",round(seconds),"sec(s)")
# Display patches interactively with a slider
def scroll_patches(patch):
f = plt.figure(figsize=(16,6))
plt.subplot(1,3,1)
plt.imshow(patches[patch-1], interpolation='nearest', cmap='gray')
plt.title('Raw data (frame #'+str(patch)+')')
plt.axis('off');
plt.subplot(1,3,2)
plt.imshow(heatmaps[patch-1], interpolation='nearest')
plt.title('Heat map')
plt.axis('off');
plt.subplot(1,3,3)
plt.imshow(spikes[patch-1], interpolation='nearest')
plt.title('Localization map')
plt.axis('off');
plt.savefig('/content/TrainingDataExample_DeepSTORM2D.png',bbox_inches='tight',pad_inches=0)
interact(scroll_patches, patch=widgets.IntSlider(min=1, max=patches.shape[0], step=1, value=0, continuous_update=False));
```
# **4. Train the network**
---
## **4.1. Select your paths and parameters**
---
<font size = 4>**`model_path`**: Enter the path where your model will be saved once trained (for instance your result folder).
<font size = 4>**`model_name`:** Use a my_model-style name, not my-model (use "_", not "-"), and do not use spaces in the name. Avoid using the name of an existing model (saved in the same folder), as it would be overwritten.
<font size = 5>**Training parameters**
<font size = 4>**`number_of_epochs`:** Input how many epochs (rounds) the network will be trained for. Preliminary results can already be observed after a few (10-30) epochs, but a full training should run for ~100 epochs. Evaluate the performance after training (see Section 5). **Default value: 80**
<font size =4>**`batch_size:`** This parameter defines the number of patches seen in each training step. Reducing or increasing the **batch size** may slow or speed up your training, respectively, and can influence network performance. **Default value: 16**
<font size = 4>**`number_of_steps`:** Define the number of training steps per epoch. **If this value is set to 0**, this parameter is calculated by default so that each patch is seen at least once per epoch. **Default value: Number of patches / batch_size**
<font size = 4>**`percentage_validation`:** Input the percentage of your training dataset you want to use to validate the network during training. **Default value: 30**
<font size = 4>**`initial_learning_rate`:** This parameter represents the initial value to be used as learning rate in the optimizer. **Default value: 0.001**
```
#@markdown ###Path to training images and parameters
model_path = "" #@param {type: "string"}
model_name = "" #@param {type: "string"}
number_of_epochs = 80#@param {type:"integer"}
batch_size = 16#@param {type:"integer"}
number_of_steps = 0#@param {type:"integer"}
percentage_validation = 30 #@param {type:"number"}
initial_learning_rate = 0.001 #@param {type:"number"}
percentage_validation /= 100
if number_of_steps == 0:
number_of_steps = int((1-percentage_validation)*n_patches/batch_size)
print('Number of steps: '+str(number_of_steps))
# Pretrained model path initialised here so next cell does not need to be run
h5_file_path = ''
Use_pretrained_model = False
if not ('patches' in locals()):
# W = '\033[0m' # white (normal)
# R = '\033[31m' # red
print(bcolors.WARNING+'!! WARNING: No patches were found in memory currently. !!'+bcolors.NORMAL)
Save_path = os.path.join(model_path, model_name)
if os.path.exists(Save_path):
print(bcolors.WARNING+'The model folder already exists and will be overwritten.'+bcolors.NORMAL)
print('-----------------------------')
print('Training parameters set.')
```
## **4.2. Using weights from a pre-trained model as initial weights**
---
<font size = 4> Here, you can set the path to a pre-trained model from which the weights can be extracted and used as a starting point for this training session. **This pre-trained model needs to be a Deep-STORM 2D model**.
<font size = 4> This option allows you to perform training over multiple Colab runtimes or to do transfer learning using models trained outside of ZeroCostDL4Mic. **You do not need to run this section if you want to train a network from scratch**.
<font size = 4> In order to continue training from the point where the pre-trained model left off, it is advisable to also **load the learning rate** that was used when the training ended. This is automatically saved for models trained with ZeroCostDL4Mic and will be loaded here. If no learning rate can be found in the model folder provided, the default learning rate will be used.
```
# @markdown ##Loading weights from a pre-trained network
Use_pretrained_model = False #@param {type:"boolean"}
pretrained_model_choice = "Model_from_file" #@param ["Model_from_file"]
Weights_choice = "best" #@param ["last", "best"]
#@markdown ###If you chose "Model_from_file", please provide the path to the model folder:
pretrained_model_path = "" #@param {type:"string"}
# --------------------- Check if we load a previously trained model ------------------------
if Use_pretrained_model:
# --------------------- Load the model from the choosen path ------------------------
if pretrained_model_choice == "Model_from_file":
h5_file_path = os.path.join(pretrained_model_path, "weights_"+Weights_choice+".hdf5")
# --------------------- Download the a model provided in the XXX ------------------------
if pretrained_model_choice == "Model_name":
pretrained_model_name = "Model_name"
pretrained_model_path = "/content/"+pretrained_model_name
print("Downloading the 2D_Demo_Model_from_Stardist_2D_paper")
if os.path.exists(pretrained_model_path):
shutil.rmtree(pretrained_model_path)
os.makedirs(pretrained_model_path)
wget.download("", pretrained_model_path)
wget.download("", pretrained_model_path)
wget.download("", pretrained_model_path)
wget.download("", pretrained_model_path)
h5_file_path = os.path.join(pretrained_model_path, "weights_"+Weights_choice+".hdf5")
# --------------------- Add additional pre-trained models here ------------------------
# --------------------- Check the model exist ------------------------
# If the model path chosen does not contain a pretrain model then use_pretrained_model is disabled,
if not os.path.exists(h5_file_path):
print(bcolors.WARNING+'WARNING: weights_'+Weights_choice+'.hdf5 pretrained model does not exist'+bcolors.NORMAL)
Use_pretrained_model = False
# If the model path contains a pretrained model, we load the learning rate,
if os.path.exists(h5_file_path):
#Here we check if the learning rate can be loaded from the quality control folder
if os.path.exists(os.path.join(pretrained_model_path, 'Quality Control', 'training_evaluation.csv')):
with open(os.path.join(pretrained_model_path, 'Quality Control', 'training_evaluation.csv'),'r') as csvfile:
csvRead = pd.read_csv(csvfile, sep=',')
#print(csvRead)
if "learning rate" in csvRead.columns: #Here we check that the learning rate column exist (compatibility with model trained un ZeroCostDL4Mic bellow 1.4)
print("pretrained network learning rate found")
#find the last learning rate
lastLearningRate = csvRead["learning rate"].iloc[-1]
#Find the learning rate corresponding to the lowest validation loss
min_val_loss = csvRead[csvRead['val_loss'] == min(csvRead['val_loss'])]
#print(min_val_loss)
bestLearningRate = min_val_loss['learning rate'].iloc[-1]
if Weights_choice == "last":
print('Last learning rate: '+str(lastLearningRate))
if Weights_choice == "best":
print('Learning rate of best validation loss: '+str(bestLearningRate))
if not "learning rate" in csvRead.columns: #if the column does not exist, then initial learning rate is used instead
bestLearningRate = initial_learning_rate
lastLearningRate = initial_learning_rate
print(bcolors.WARNING+'WARNING: The learning rate cannot be identified from the pretrained network. Default learning rate of '+str(bestLearningRate)+' will be used instead.'+bcolors.NORMAL)
#Compatibility with models trained outside ZeroCostDL4Mic but default learning rate will be used
if not os.path.exists(os.path.join(pretrained_model_path, 'Quality Control', 'training_evaluation.csv')):
print(bcolors.WARNING+'WARNING: The learning rate cannot be identified from the pretrained network. Default learning rate of '+str(initial_learning_rate)+' will be used instead'+bcolors.NORMAL)
bestLearningRate = initial_learning_rate
lastLearningRate = initial_learning_rate
# Display info about the pretrained model to be loaded (or not)
if Use_pretrained_model:
print('Weights found in:')
print(h5_file_path)
print('will be loaded prior to training.')
else:
print('No pretrained network will be used.')
h5_file_path = ''
```
## **4.4. Start Training**
---
<font size = 4>When playing the cell below you should see updates after each epoch (round). Network training can take some time.
<font size = 4>* **CRITICAL NOTE:** Google Colab has a time limit for processing (to prevent using GPU power for data mining). Training time must be less than 12 hours! If training takes longer than 12 hours, please decrease the number of epochs or the number of patches.
<font size = 4>Once training is complete, the trained model is automatically saved on your Google Drive, in the **model_path** folder that was selected in Section 4.1. It is however wise to download the folder from Google Drive, as all data can be erased at the next training if the same folder is used.
```
#@markdown ##Start training
# Start the clock to measure how long it takes
start = time.time()
# --------------------- Using pretrained model ------------------------
# Here we ensure that the learning rate is set correctly when using pre-trained models
if Use_pretrained_model:
if Weights_choice == "last":
initial_learning_rate = lastLearningRate
if Weights_choice == "best":
initial_learning_rate = bestLearningRate
# --------------------- ---------------------- ------------------------
# Here we check that no model with the same name already exists; if so, it is deleted
if os.path.exists(Save_path):
shutil.rmtree(Save_path)
# Create the model folder!
os.makedirs(Save_path)
# Export pdf summary
pdf_export(raw_data = load_raw_data, pretrained_model = Use_pretrained_model)
# Let's go !
train_model(patches, heatmaps, Save_path,
steps_per_epoch=number_of_steps, epochs=number_of_epochs, batch_size=batch_size,
upsampling_factor = upsampling_factor,
validation_split = percentage_validation,
initial_learning_rate = initial_learning_rate,
pretrained_model_path = h5_file_path,
L2_weighting_factor = L2_weighting_factor)
# # Show info about the GPU memory usage
# !nvidia-smi
# Displaying the time elapsed for training
dt = time.time() - start
minutes, seconds = divmod(dt, 60)
hours, minutes = divmod(minutes, 60)
print("Time elapsed:",hours, "hour(s)",minutes,"min(s)",round(seconds),"sec(s)")
# export pdf after training to update the existing document
pdf_export(trained = True, raw_data = load_raw_data, pretrained_model = Use_pretrained_model)
```
# **5. Evaluate your model**
---
<font size = 4>This section allows the user to perform important quality checks on the validity and generalisability of the trained model.
<font size = 4>**We highly recommend to perform quality control on all newly trained models.**
```
# model name and path
#@markdown ###Do you want to assess the model you just trained ?
Use_the_current_trained_model = True #@param {type:"boolean"}
#@markdown ###If not, please provide the path to the model folder:
#@markdown #####During training, the model files are automatically saved inside a folder named after the parameter `model_name` (see section 4.1). Provide the name of this folder as `QC_model_path` .
QC_model_path = "" #@param {type:"string"}
if (Use_the_current_trained_model):
QC_model_path = os.path.join(model_path, model_name)
if os.path.exists(QC_model_path):
print("The "+os.path.basename(QC_model_path)+" model will be evaluated")
else:
print(bcolors.WARNING+'!! WARNING: The chosen model does not exist !!'+bcolors.NORMAL)
print('Please make sure you provide a valid model path before proceeding further.')
```
## **5.1. Inspection of the loss function**
---
<font size = 4>First, it is good practice to evaluate the training progress by comparing the training loss with the validation loss. The latter is a metric which shows how well the network performs on a subset of unseen data which is set aside from the training dataset. For more information on this, see for example [this review](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6381354/) by Nichols *et al.*
<font size = 4>**Training loss** describes an error value after each epoch for the difference between the model's prediction and its ground-truth target.
<font size = 4>**Validation loss** describes the same error value between the model's prediction on a validation image and its target.
<font size = 4>During training both values should decrease before reaching a minimal value which does not decrease further even after more training. Comparing the development of the validation loss with the training loss can give insights into the model's performance.
<font size = 4>Decreasing **Training loss** and **Validation loss** indicates that training is still necessary and increasing the `number_of_epochs` is recommended. Note that the curves can look flat towards the right side just because of the y-axis scaling. The network has reached convergence once the curves flatten out; after this point no further training is required. If the **Validation loss** suddenly increases again while the **Training loss** simultaneously goes towards zero, the network is overfitting to the training data. In other words, the network is remembering the exact patterns from the training data and no longer generalizes well to unseen data. In this case the training dataset has to be increased.
```
#@markdown ##Play the cell to show a plot of training errors vs. epoch number
lossDataFromCSV = []
vallossDataFromCSV = []
with open(os.path.join(QC_model_path,'Quality Control/training_evaluation.csv'),'r') as csvfile:
csvRead = csv.reader(csvfile, delimiter=',')
next(csvRead)
for row in csvRead:
if row:
lossDataFromCSV.append(float(row[0]))
vallossDataFromCSV.append(float(row[1]))
epochNumber = range(len(lossDataFromCSV))
plt.figure(figsize=(15,10))
plt.subplot(2,1,1)
plt.plot(epochNumber,lossDataFromCSV, label='Training loss')
plt.plot(epochNumber,vallossDataFromCSV, label='Validation loss')
plt.title('Training loss and validation loss vs. epoch number (linear scale)')
plt.ylabel('Loss')
plt.xlabel('Epoch number')
plt.legend()
plt.subplot(2,1,2)
plt.semilogy(epochNumber,lossDataFromCSV, label='Training loss')
plt.semilogy(epochNumber,vallossDataFromCSV, label='Validation loss')
plt.title('Training loss and validation loss vs. epoch number (log scale)')
plt.ylabel('Loss')
plt.xlabel('Epoch number')
plt.legend()
plt.savefig(os.path.join(QC_model_path,'Quality Control/lossCurvePlots.png'), bbox_inches='tight', pad_inches=0)
plt.show()
```
## **5.2. Error mapping and quality metrics estimation**
---
<font size = 4>This section will display SSIM maps and RSE maps, as well as calculate total SSIM, NRMSE and PSNR metrics for all the images provided in "QC_image_folder", using the corresponding localization data contained in "QC_loc_folder".
<font size = 4>**1. The SSIM (structural similarity) map**
<font size = 4>The SSIM metric is used to evaluate whether two images contain the same structures. It is a normalized metric and an SSIM of 1 indicates a perfect similarity between two images. Therefore for SSIM, the closer to 1, the better. The SSIM maps are constructed by calculating the SSIM metric in each pixel by considering the surrounding structural similarity in the neighbourhood of that pixel (currently defined as a window of 11 pixels with a Gaussian weighting of 1.5 pixel standard deviation, see our Wiki for more info).
<font size=4>**mSSIM** is the SSIM value averaged across the entire image.
<font size=4>**The output below shows the SSIM maps with the mSSIM**
<font size = 4>**2. The RSE (Root Squared Error) map**
<font size = 4>This is a display of the root of the squared difference between the normalized prediction and target, or between the source and the target. In this case, a smaller RSE is better. A perfect agreement between target and prediction will lead to an RSE map showing zeros everywhere (dark).
<font size =4>**NRMSE (normalised root mean squared error)** gives the average difference between all corresponding pixels in the two images. Good agreement yields low NRMSE scores.
<font size = 4>**PSNR (Peak signal-to-noise ratio)** is a metric that gives the difference between the ground truth and prediction (or source input) in decibels, using the peak pixel values of the prediction and the MSE between the images. The higher the score, the better the agreement.
<font size=4>**The output below shows the RSE maps with the NRMSE and PSNR values.**
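<font size = 4>For reference, a minimal sketch of how an SSIM map and a PSNR value can be computed with scikit-image is shown below. The random test images and the window settings (`gaussian_weights=True`, `sigma=1.5`, which implies an 11-pixel window) are illustrative assumptions matching the description above, not necessarily the exact settings used in the QC cell that follows.
```
# Illustrative only: computing an SSIM map and a PSNR value with scikit-image
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

rng = np.random.default_rng(0)
target = rng.random((64, 64)).astype(np.float32)                                   # stand-in for the ground-truth image
prediction = np.clip(target + 0.05 * rng.standard_normal((64, 64)), 0, 1).astype(np.float32)

mssim, ssim_map = structural_similarity(target, prediction, data_range=1.0,
                                        gaussian_weights=True, sigma=1.5, full=True)
psnr_value = peak_signal_noise_ratio(target, prediction, data_range=1.0)
print('mSSIM:', mssim, '| PSNR (dB):', psnr_value)
```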
```
# ------------------------ User input ------------------------
#@markdown ##Choose the folders that contain your Quality Control dataset
QC_image_folder = "" #@param{type:"string"}
QC_loc_folder = "" #@param{type:"string"}
#@markdown Get pixel size from file?
get_pixel_size_from_file = True #@param {type:"boolean"}
#@markdown Otherwise, use this value:
pixel_size = 100 #@param {type:"number"}
if get_pixel_size_from_file:
pixel_size_INPUT = None
else:
pixel_size_INPUT = pixel_size
# ------------------------ QC analysis loop over provided dataset ------------------------
savePath = os.path.join(QC_model_path, 'Quality Control')
# Open and create the csv file that will contain all the QC metrics
with open(os.path.join(savePath, os.path.basename(QC_model_path)+"_QC_metrics.csv"), "w", newline='') as file:
writer = csv.writer(file)
# Write the header in the csv file
writer.writerow(["image #","Prediction v. GT mSSIM","WF v. GT mSSIM", "Prediction v. GT NRMSE","WF v. GT NRMSE", "Prediction v. GT PSNR", "WF v. GT PSNR"])
# These lists will be used to collect all the metrics values per slice
file_name_list = []
slice_number_list = []
mSSIM_GvP_list = []
mSSIM_GvWF_list = []
NRMSE_GvP_list = []
NRMSE_GvWF_list = []
PSNR_GvP_list = []
PSNR_GvWF_list = []
# Let's loop through the provided dataset in the QC folders
for (imageFilename, locFilename) in zip(list_files(QC_image_folder, 'tif'), list_files(QC_loc_folder, 'csv')):
print('--------------')
print(imageFilename)
print(locFilename)
# Get the prediction
batchFramePredictionLocalization(QC_image_folder, imageFilename, QC_model_path, savePath, pixel_size = pixel_size_INPUT)
# test_model(QC_image_folder, imageFilename, QC_model_path, savePath, display=False);
thisPrediction = io.imread(os.path.join(savePath, 'Predicted_'+imageFilename))
thisWidefield = io.imread(os.path.join(savePath, 'Widefield_'+imageFilename))
Mhr = thisPrediction.shape[0]
Nhr = thisPrediction.shape[1]
if pixel_size_INPUT is None:
pixel_size, N, M = getPixelSizeTIFFmetadata(os.path.join(QC_image_folder,imageFilename))
upsampling_factor = int(Mhr/M)
print('Upsampling factor: '+str(upsampling_factor))
pixel_size_hr = pixel_size/upsampling_factor # in nm
# Load the localization file and display the first
LocData = pd.read_csv(os.path.join(QC_loc_folder,locFilename), index_col=0)
x = np.array(list(LocData['x [nm]']))
y = np.array(list(LocData['y [nm]']))
locImage = FromLoc2Image_SimpleHistogram(x, y, image_size = (Mhr,Nhr), pixel_size = pixel_size_hr)
# Remove extension from filename
imageFilename_no_extension = os.path.splitext(imageFilename)[0]
# io.imsave(os.path.join(savePath, 'GT_image_'+imageFilename), locImage)
saveAsTIF(savePath, 'GT_image_'+imageFilename_no_extension, locImage, pixel_size_hr)
# Normalize the images wrt each other by minimizing the MSE between GT and prediction
test_GT_norm, test_prediction_norm = norm_minmse(locImage, thisPrediction, normalize_gt=True)
# Normalize the images wrt each other by minimizing the MSE between GT and Source image
test_GT_norm, test_wf_norm = norm_minmse(locImage, thisWidefield, normalize_gt=True)
# -------------------------------- Calculate the metric maps and save them --------------------------------
# Calculate the SSIM maps
index_SSIM_GTvsPrediction, img_SSIM_GTvsPrediction = structural_similarity(test_GT_norm, test_prediction_norm, data_range=1., full=True)
index_SSIM_GTvsWF, img_SSIM_GTvsWF = structural_similarity(test_GT_norm, test_wf_norm, data_range=1., full=True)
# Save ssim_maps
img_SSIM_GTvsPrediction_32bit = np.float32(img_SSIM_GTvsPrediction)
# io.imsave(os.path.join(savePath,'SSIM_GTvsPrediction_'+imageFilename),img_SSIM_GTvsPrediction_32bit)
saveAsTIF(savePath,'SSIM_GTvsPrediction_'+imageFilename_no_extension, img_SSIM_GTvsPrediction_32bit, pixel_size_hr)
img_SSIM_GTvsWF_32bit = np.float32(img_SSIM_GTvsWF)
# io.imsave(os.path.join(savePath,'SSIM_GTvsWF_'+imageFilename),img_SSIM_GTvsWF_32bit)
saveAsTIF(savePath,'SSIM_GTvsWF_'+imageFilename_no_extension, img_SSIM_GTvsWF_32bit, pixel_size_hr)
# Calculate the Root Squared Error (RSE) maps
img_RSE_GTvsPrediction = np.sqrt(np.square(test_GT_norm - test_prediction_norm))
img_RSE_GTvsWF = np.sqrt(np.square(test_GT_norm - test_wf_norm))
# Save SE maps
img_RSE_GTvsPrediction_32bit = np.float32(img_RSE_GTvsPrediction)
# io.imsave(os.path.join(savePath,'RSE_GTvsPrediction_'+imageFilename),img_RSE_GTvsPrediction_32bit)
saveAsTIF(savePath,'RSE_GTvsPrediction_'+imageFilename_no_extension, img_RSE_GTvsPrediction_32bit, pixel_size_hr)
img_RSE_GTvsWF_32bit = np.float32(img_RSE_GTvsWF)
# io.imsave(os.path.join(savePath,'RSE_GTvsWF_'+imageFilename),img_RSE_GTvsWF_32bit)
saveAsTIF(savePath,'RSE_GTvsWF_'+imageFilename_no_extension, img_RSE_GTvsWF_32bit, pixel_size_hr)
# -------------------------------- Calculate the RSE metrics and save them --------------------------------
# Normalised Root Mean Squared Error (here it's valid to take the mean of the image)
NRMSE_GTvsPrediction = np.sqrt(np.mean(img_RSE_GTvsPrediction))
NRMSE_GTvsWF = np.sqrt(np.mean(img_RSE_GTvsWF))
# We can also measure the peak signal to noise ratio between the images
PSNR_GTvsPrediction = psnr(test_GT_norm,test_prediction_norm,data_range=1.0)
PSNR_GTvsWF = psnr(test_GT_norm,test_wf_norm,data_range=1.0)
writer.writerow([imageFilename,str(index_SSIM_GTvsPrediction),str(index_SSIM_GTvsWF),str(NRMSE_GTvsPrediction),str(NRMSE_GTvsWF),str(PSNR_GTvsPrediction), str(PSNR_GTvsWF)])
# Collect values to display in dataframe output
file_name_list.append(imageFilename)
mSSIM_GvP_list.append(index_SSIM_GTvsPrediction)
mSSIM_GvWF_list.append(index_SSIM_GTvsWF)
NRMSE_GvP_list.append(NRMSE_GTvsPrediction)
NRMSE_GvWF_list.append(NRMSE_GTvsWF)
PSNR_GvP_list.append(PSNR_GTvsPrediction)
PSNR_GvWF_list.append(PSNR_GTvsWF)
# Table with metrics as dataframe output
pdResults = pd.DataFrame(index = file_name_list)
pdResults["Prediction v. GT mSSIM"] = mSSIM_GvP_list
pdResults["Wide-field v. GT mSSIM"] = mSSIM_GvWF_list
pdResults["Prediction v. GT NRMSE"] = NRMSE_GvP_list
pdResults["Wide-field v. GT NRMSE"] = NRMSE_GvWF_list
pdResults["Prediction v. GT PSNR"] = PSNR_GvP_list
pdResults["Wide-field v. GT PSNR"] = PSNR_GvWF_list
# ------------------------ Display ------------------------
print('--------------------------------------------')
@interact
def show_QC_results(file = list_files(QC_image_folder, 'tif')):
plt.figure(figsize=(15,15))
# Target (Ground-truth)
plt.subplot(3,3,1)
plt.axis('off')
img_GT = io.imread(os.path.join(savePath, 'GT_image_'+file))
plt.imshow(img_GT, norm = simple_norm(img_GT, percent = 99.5))
plt.title('Target',fontsize=15)
# Wide-field
plt.subplot(3,3,2)
plt.axis('off')
img_Source = io.imread(os.path.join(savePath, 'Widefield_'+file))
plt.imshow(img_Source, norm = simple_norm(img_Source, percent = 99.5))
plt.title('Widefield',fontsize=15)
#Prediction
plt.subplot(3,3,3)
plt.axis('off')
img_Prediction = io.imread(os.path.join(savePath, 'Predicted_'+file))
plt.imshow(img_Prediction, norm = simple_norm(img_Prediction, percent = 99.5))
plt.title('Prediction',fontsize=15)
#Setting up colours
cmap = plt.cm.CMRmap
#SSIM between GT and Source
plt.subplot(3,3,5)
#plt.axis('off')
plt.tick_params(
axis='both', # changes apply to the x-axis and y-axis
which='both', # both major and minor ticks are affected
bottom=False, # ticks along the bottom edge are off
top=False, # ticks along the top edge are off
left=False, # ticks along the left edge are off
right=False, # ticks along the right edge are off
labelbottom=False,
labelleft=False)
img_SSIM_GTvsWF = io.imread(os.path.join(savePath, 'SSIM_GTvsWF_'+file))
imSSIM_GTvsWF = plt.imshow(img_SSIM_GTvsWF, cmap = cmap, vmin=0, vmax=1)
plt.colorbar(imSSIM_GTvsWF,fraction=0.046, pad=0.04)
plt.title('Target vs. Widefield',fontsize=15)
plt.xlabel('mSSIM: '+str(round(pdResults.loc[file]["Wide-field v. GT mSSIM"],3)),fontsize=14)
plt.ylabel('SSIM maps',fontsize=20, rotation=0, labelpad=75)
#SSIM between GT and Prediction
plt.subplot(3,3,6)
#plt.axis('off')
plt.tick_params(
axis='both', # changes apply to the x-axis and y-axis
which='both', # both major and minor ticks are affected
bottom=False, # ticks along the bottom edge are off
top=False, # ticks along the top edge are off
left=False, # ticks along the left edge are off
right=False, # ticks along the right edge are off
labelbottom=False,
labelleft=False)
img_SSIM_GTvsPrediction = io.imread(os.path.join(savePath, 'SSIM_GTvsPrediction_'+file))
imSSIM_GTvsPrediction = plt.imshow(img_SSIM_GTvsPrediction, cmap = cmap, vmin=0,vmax=1)
plt.colorbar(imSSIM_GTvsPrediction,fraction=0.046, pad=0.04)
plt.title('Target vs. Prediction',fontsize=15)
plt.xlabel('mSSIM: '+str(round(pdResults.loc[file]["Prediction v. GT mSSIM"],3)),fontsize=14)
#Root Squared Error between GT and Source
plt.subplot(3,3,8)
#plt.axis('off')
plt.tick_params(
axis='both', # changes apply to the x-axis and y-axis
which='both', # both major and minor ticks are affected
bottom=False, # ticks along the bottom edge are off
top=False, # ticks along the top edge are off
left=False, # ticks along the left edge are off
right=False, # ticks along the right edge are off
labelbottom=False,
labelleft=False)
img_RSE_GTvsWF = io.imread(os.path.join(savePath, 'RSE_GTvsWF_'+file))
imRSE_GTvsWF = plt.imshow(img_RSE_GTvsWF, cmap = cmap, vmin=0, vmax = 1)
plt.colorbar(imRSE_GTvsWF,fraction=0.046,pad=0.04)
plt.title('Target vs. Widefield',fontsize=15)
plt.xlabel('NRMSE: '+str(round(pdResults.loc[file]["Wide-field v. GT NRMSE"],3))+', PSNR: '+str(round(pdResults.loc[file]["Wide-field v. GT PSNR"],3)),fontsize=14)
plt.ylabel('RSE maps',fontsize=20, rotation=0, labelpad=75)
#Root Squared Error between GT and Prediction
plt.subplot(3,3,9)
#plt.axis('off')
plt.tick_params(
axis='both', # changes apply to the x-axis and y-axis
which='both', # both major and minor ticks are affected
bottom=False, # ticks along the bottom edge are off
top=False, # ticks along the top edge are off
left=False, # ticks along the left edge are off
right=False, # ticks along the right edge are off
labelbottom=False,
labelleft=False)
img_RSE_GTvsPrediction = io.imread(os.path.join(savePath, 'RSE_GTvsPrediction_'+file))
imRSE_GTvsPrediction = plt.imshow(img_RSE_GTvsPrediction, cmap = cmap, vmin=0, vmax=1)
plt.colorbar(imRSE_GTvsPrediction,fraction=0.046,pad=0.04)
plt.title('Target vs. Prediction',fontsize=15)
plt.xlabel('NRMSE: '+str(round(pdResults.loc[file]["Prediction v. GT NRMSE"],3))+', PSNR: '+str(round(pdResults.loc[file]["Prediction v. GT PSNR"],3)),fontsize=14)
plt.savefig(QC_model_path+'/Quality Control/QC_example_data.png', bbox_inches='tight', pad_inches=0)
print('--------------------------------------------')
pdResults.head()
# Export pdf wth summary of QC results
qc_pdf_export()
```
# **6. Using the trained model**
---
<font size = 4>In this section, unseen data is processed using the trained model (from section 4). First, your unseen images are uploaded and prepared for prediction. After that, your trained model from section 4 is used to process these images, and the results are saved into your Google Drive.
## **6.1 Generate image prediction and localizations from unseen dataset**
---
<font size = 4>The current trained model (from section 4.2) can now be used to process images. If you want to use an older model, untick the **Use_the_current_trained_model** box and enter the name and path of the model to use. Predicted output images are saved in your **Result_folder** folder as restored image stacks (ImageJ-compatible TIFF images).
<font size = 4>**`Data_folder`:** This folder should contain the images that you want to use your trained network on for processing.
<font size = 4>**`Result_folder`:** This folder will contain the found localizations csv.
<font size = 4>**`batch_size`:** This parameter determines how many frames are processed in a single pass on the GPU. A higher `batch_size` will make the prediction faster but will use more GPU memory. If an OutOfMemory (OOM) error occurs, decrease the `batch_size`. **DEFAULT: 4**
<font size = 4>**`threshold`:** This parameter determines the threshold for local maxima finding. The value is expected to lie in the range **[0,1]**. A higher `threshold` will result in fewer localizations. **DEFAULT: 0.1**
<font size = 4>**`neighborhood_size`:** This parameter determines the size of the neighborhood within which the prediction needs to be a local maximum, in recovery pixels (CCD pixel/upsampling_factor). A high `neighborhood_size` will make the prediction slower and potentially discard nearby localizations. **DEFAULT: 3**
<font size = 4>**`use_local_average`:** This parameter determines whether to locally average the prediction in a 3x3 neighborhood to obtain the final localizations. If set to **True** it will make inference slightly slower, depending on the size of the FOV. **DEFAULT: True**
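<font size = 4>To make these parameters concrete, here is a small illustrative sketch of threshold-based local-maxima finding with an optional centre-of-mass refinement. The function name `toy_find_localizations` and the example array are made up for illustration; this is not the implementation used by `batchFramePredictionLocalization`.
```
# Illustrative only: how threshold, neighborhood_size and use_local_average act on a prediction map
import numpy as np
from scipy.ndimage import maximum_filter

def toy_find_localizations(pred, threshold=0.1, neighborhood_size=3, use_local_average=True):
    # keep pixels that are the maximum of their neighborhood AND above the confidence threshold
    is_peak = (maximum_filter(pred, size=neighborhood_size) == pred) & (pred > threshold)
    coords = []
    for r, c in np.argwhere(is_peak):
        if use_local_average and 1 <= r < pred.shape[0] - 1 and 1 <= c < pred.shape[1] - 1:
            patch = pred[r-1:r+2, c-1:c+2]
            dy, dx = np.meshgrid([-1, 0, 1], [-1, 0, 1], indexing='ij')
            r = r + (dy * patch).sum() / patch.sum()   # sub-pixel refinement by centre of mass
            c = c + (dx * patch).sum() / patch.sum()
        coords.append((r, c))
    return coords

example = np.zeros((16, 16)); example[5, 7] = 0.8; example[5, 8] = 0.4
print(toy_find_localizations(example))
```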
```
# ------------------------------- User input -------------------------------
#@markdown ### Data parameters
Data_folder = "" #@param {type:"string"}
Result_folder = "" #@param {type:"string"}
#@markdown Get pixel size from file?
get_pixel_size_from_file = True #@param {type:"boolean"}
#@markdown Otherwise, use this value (in nm):
pixel_size = 100 #@param {type:"number"}
#@markdown ### Model parameters
#@markdown Do you want to use the model you just trained?
Use_the_current_trained_model = True #@param {type:"boolean"}
#@markdown Otherwise, please provide path to the model folder below
prediction_model_path = "" #@param {type:"string"}
#@markdown ### Prediction parameters
batch_size = 4#@param {type:"integer"}
#@markdown ### Post processing parameters
threshold = 0.1#@param {type:"number"}
neighborhood_size = 3#@param {type:"integer"}
#@markdown Do you want to locally average the model output with CoG estimator ?
use_local_average = True #@param {type:"boolean"}
if get_pixel_size_from_file:
pixel_size = None
if (Use_the_current_trained_model):
prediction_model_path = os.path.join(model_path, model_name)
if os.path.exists(prediction_model_path):
print("The "+os.path.basename(prediction_model_path)+" model will be used.")
else:
print(bcolors.WARNING+'!! WARNING: The chosen model does not exist !!'+bcolors.NORMAL)
print('Please make sure you provide a valid model path before proceeding further.')
# inform user whether local averaging is being used
if use_local_average:
print('Using local averaging')
if not os.path.exists(Result_folder):
print('Result folder was created.')
os.makedirs(Result_folder)
# ------------------------------- Run predictions -------------------------------
start = time.time()
#%% This script tests the trained fully convolutional network based on the
# saved training weights, and normalization created using train_model.
if os.path.isdir(Data_folder):
for filename in list_files(Data_folder, 'tif'):
# run the testing/reconstruction process
print("------------------------------------")
print("Running prediction on: "+ filename)
batchFramePredictionLocalization(Data_folder, filename, prediction_model_path, Result_folder,
batch_size,
threshold,
neighborhood_size,
use_local_average,
pixel_size = pixel_size)
elif os.path.isfile(Data_folder):
batchFramePredictionLocalization(os.path.dirname(Data_folder), os.path.basename(Data_folder), prediction_model_path, Result_folder,
batch_size,
threshold,
neighborhood_size,
use_local_average,
pixel_size = pixel_size)
print('--------------------------------------------------------------------')
# Displaying the time elapsed for training
dt = time.time() - start
minutes, seconds = divmod(dt, 60)
hours, minutes = divmod(minutes, 60)
print("Time elapsed:",hours, "hour(s)",minutes,"min(s)",round(seconds),"sec(s)")
# ------------------------------- Interactive display -------------------------------
print('--------------------------------------------------------------------')
print('---------------------------- Previews ------------------------------')
print('--------------------------------------------------------------------')
if os.path.isdir(Data_folder):
@interact
def show_QC_results(file = list_files(Data_folder, 'tif')):
plt.figure(figsize=(15,7.5))
# Wide-field
plt.subplot(1,2,1)
plt.axis('off')
img_Source = io.imread(os.path.join(Result_folder, 'Widefield_'+file))
plt.imshow(img_Source, norm = simple_norm(img_Source, percent = 99.5))
plt.title('Widefield', fontsize=15)
# Prediction
plt.subplot(1,2,2)
plt.axis('off')
img_Prediction = io.imread(os.path.join(Result_folder, 'Predicted_'+file))
plt.imshow(img_Prediction, norm = simple_norm(img_Prediction, percent = 99.5))
plt.title('Predicted',fontsize=15)
if os.path.isfile(Data_folder):
plt.figure(figsize=(15,7.5))
# Wide-field
plt.subplot(1,2,1)
plt.axis('off')
img_Source = io.imread(os.path.join(Result_folder, 'Widefield_'+os.path.basename(Data_folder)))
plt.imshow(img_Source, norm = simple_norm(img_Source, percent = 99.5))
plt.title('Widefield', fontsize=15)
# Prediction
plt.subplot(1,2,2)
plt.axis('off')
img_Prediction = io.imread(os.path.join(Result_folder, 'Predicted_'+os.path.basename(Data_folder)))
plt.imshow(img_Prediction, norm = simple_norm(img_Prediction, percent = 99.5))
plt.title('Predicted',fontsize=15)
```
## **6.2 Drift correction**
---
<font size = 4>The visualization above is the raw output of the network and displayed at the `upsampling_factor` chosen during model training. The display is a preview without any drift correction applied. This section performs drift correction using cross-correlation between time bins to estimate the drift.
<font size = 4>**`Loc_file_path`:** is the path to the localization file to use for visualization.
<font size = 4>**`original_image_path`:** is the path to the original image. This only serves to extract the original image size and pixel size to shape the visualization properly.
<font size = 4>**`visualization_pixel_size`:** This parameter corresponds to the pixel size to use for the image reconstructions used for the drift correction estimation (in **nm**). A smaller pixel size will be more precise but will take longer to compute. **DEFAULT: 20**
<font size = 4>**`number_of_bins`:** This parameter defines how many temporal bins are used across the full dataset. All localizations in each bin are used to build an image. This image is used to find the drift with respect to the image obtained from the very first bin. A typical value would correspond to about 500 frames per bin. **DEFAULT: Total number of frames / 500**
<font size = 4>**`polynomial_fit_degree`:** The drift obtained for each temporal bin needs to be interpolated to every single frame. This is performed by a polynomial fit, the degree of which is defined here. **DEFAULT: 4**
<font size = 4> The drift-corrected localization data is automatically saved in the `save_path` folder.
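<font size = 4>As a rough illustration only (the frame count and drift below are made-up assumptions and the variable names are hypothetical), this is the idea behind the bin size suggestion and the cross-correlation drift estimate:
```
# Illustrative only: choosing the number of temporal bins and estimating a shift by cross-correlation
import numpy as np
from scipy.signal import fftconvolve

nFrames_example = 25000                                             # assumed total number of frames
number_of_bins_suggested = max(1, round(nFrames_example / 500))     # ~500 frames per bin -> 50 here

ref = np.random.rand(128, 128)                            # image rendered from the first bin
moved = np.roll(ref, (4, -3), axis=(0, 1))                # pretend a later bin drifted by (4, -3) pixels
xc = fftconvolve(np.rot90(ref, k=2), moved, mode='same')  # convolving with the 180-degree-rotated
                                                          # reference is equivalent to cross-correlation
peak = np.unravel_index(np.argmax(xc), xc.shape)
print('Suggested bins:', number_of_bins_suggested, '| correlation peak at', peak)
# the displacement of the peak from the image centre approximates the drift, in rendered pixels
```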
```
# @markdown ##Data parameters
Loc_file_path = "" #@param {type:"string"}
# @markdown Provide information about original data. Get the info automatically from the raw data?
Get_info_from_file = True #@param {type:"boolean"}
# Loc_file_path = "/content/gdrive/My Drive/Colab notebooks testing/DeepSTORM/Glia data from CL/Results from prediction/20200615-M6 with CoM localizations/Localizations_glia_actin_2D - 1-500fr_avg.csv" #@param {type:"string"}
original_image_path = "" #@param {type:"string"}
# @markdown Otherwise, please provide image width, height (in pixels) and pixel size (in nm)
image_width = 256#@param {type:"integer"}
image_height = 256#@param {type:"integer"}
pixel_size = 100 #@param {type:"number"}
# @markdown ##Drift correction parameters
visualization_pixel_size = 20#@param {type:"number"}
number_of_bins = 50#@param {type:"integer"}
polynomial_fit_degree = 4#@param {type:"integer"}
# @markdown ##Saving parameters
save_path = '' #@param {type:"string"}
# Let's go !
start = time.time()
# Get info from the raw file if selected
if Get_info_from_file:
pixel_size, image_width, image_height = getPixelSizeTIFFmetadata(original_image_path, display=True)
# Read the localizations in
LocData = pd.read_csv(Loc_file_path)
# Calculate a few variables
Mhr = int(math.ceil(image_height*pixel_size/visualization_pixel_size))
Nhr = int(math.ceil(image_width*pixel_size/visualization_pixel_size))
nFrames = max(LocData['frame'])
x_max = max(LocData['x [nm]'])
y_max = max(LocData['y [nm]'])
image_size = (Mhr, Nhr)
n_locs = len(LocData.index)
print('Image size: '+str(image_size))
print('Number of frames in data: '+str(nFrames))
print('Number of localizations in data: '+str(n_locs))
blocksize = math.ceil(nFrames/number_of_bins)
print('Number of frames per block: '+str(blocksize))
blockDataFrame = LocData[(LocData['frame'] < blocksize)].copy()
xc_array = blockDataFrame['x [nm]'].to_numpy(dtype=np.float32)
yc_array = blockDataFrame['y [nm]'].to_numpy(dtype=np.float32)
# Preparing the Reference image
photon_array = np.ones(yc_array.shape[0])
sigma_array = np.ones(yc_array.shape[0])
ImageRef = FromLoc2Image_SimpleHistogram(xc_array, yc_array, image_size = image_size, pixel_size = visualization_pixel_size)
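# Note: the 180-degree rotation of the reference (next line) makes the fftconvolve call further below behave as a cross-correlation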
ImagesRef = np.rot90(ImageRef, k=2)
xDrift = np.zeros(number_of_bins)
yDrift = np.zeros(number_of_bins)
filename_no_extension = os.path.splitext(os.path.basename(Loc_file_path))[0]
with open(os.path.join(save_path, filename_no_extension+"_DriftCorrectionData.csv"), "w", newline='') as file:
writer = csv.writer(file)
# Write the header in the csv file
writer.writerow(["Block #", "x-drift [nm]","y-drift [nm]"])
for b in tqdm(range(number_of_bins)):
blockDataFrame = LocData[(LocData['frame'] >= (b*blocksize)) & (LocData['frame'] < ((b+1)*blocksize))].copy()
xc_array = blockDataFrame['x [nm]'].to_numpy(dtype=np.float32)
yc_array = blockDataFrame['y [nm]'].to_numpy(dtype=np.float32)
photon_array = np.ones(yc_array.shape[0])
sigma_array = np.ones(yc_array.shape[0])
ImageBlock = FromLoc2Image_SimpleHistogram(xc_array, yc_array, image_size = image_size, pixel_size = visualization_pixel_size)
XC = fftconvolve(ImagesRef, ImageBlock, mode = 'same')
yDrift[b], xDrift[b] = subPixelMaxLocalization(XC, method = 'CoM')
# saveAsTIF(save_path, 'ImageBlock'+str(b), ImageBlock, visualization_pixel_size)
# saveAsTIF(save_path, 'XCBlock'+str(b), XC, visualization_pixel_size)
writer.writerow([str(b), str((xDrift[b]-xDrift[0])*visualization_pixel_size), str((yDrift[b]-yDrift[0])*visualization_pixel_size)])
print('--------------------------------------------------------------------')
# Displaying the time elapsed for training
dt = time.time() - start
minutes, seconds = divmod(dt, 60)
hours, minutes = divmod(minutes, 60)
print("Time elapsed:",hours, "hour(s)",minutes,"min(s)",round(seconds),"sec(s)")
print('Fitting drift data...')
bin_number = np.arange(number_of_bins)*blocksize + blocksize/2
xDrift = (xDrift-xDrift[0])*visualization_pixel_size
yDrift = (yDrift-yDrift[0])*visualization_pixel_size
xDriftCoeff = np.polyfit(bin_number, xDrift, polynomial_fit_degree)
yDriftCoeff = np.polyfit(bin_number, yDrift, polynomial_fit_degree)
xDriftFit = np.poly1d(xDriftCoeff)
yDriftFit = np.poly1d(yDriftCoeff)
bins = np.arange(nFrames)
xDriftInterpolated = xDriftFit(bins)
yDriftInterpolated = yDriftFit(bins)
# ------------------ Displaying the image results ------------------
plt.figure(figsize=(15,10))
plt.plot(bin_number,xDrift, 'r+', label='x-drift')
plt.plot(bin_number,yDrift, 'b+', label='y-drift')
plt.plot(bins,xDriftInterpolated, 'r-', label='y-drift (fit)')
plt.plot(bins,yDriftInterpolated, 'b-', label='y-drift (fit)')
plt.title('Cross-correlation estimated drift')
plt.ylabel('Drift [nm]')
plt.xlabel('Bin number')
plt.legend();
dt = time.time() - start
minutes, seconds = divmod(dt, 60)
hours, minutes = divmod(minutes, 60)
print("Time elapsed:", hours, "hour(s)",minutes,"min(s)",round(seconds),"sec(s)")
# ------------------ Actual drift correction -------------------
print('Correcting localization data...')
xc_array = LocData['x [nm]'].to_numpy(dtype=np.float32)
yc_array = LocData['y [nm]'].to_numpy(dtype=np.float32)
frames = LocData['frame'].to_numpy(dtype=np.int32)
xc_array_Corr, yc_array_Corr = correctDriftLocalization(xc_array, yc_array, frames, xDriftInterpolated, yDriftInterpolated)
ImageRaw = FromLoc2Image_SimpleHistogram(xc_array, yc_array, image_size = image_size, pixel_size = visualization_pixel_size)
ImageCorr = FromLoc2Image_SimpleHistogram(xc_array_Corr, yc_array_Corr, image_size = image_size, pixel_size = visualization_pixel_size)
# ------------------ Displaying the image results ------------------
plt.figure(figsize=(15,7.5))
# Raw
plt.subplot(1,2,1)
plt.axis('off')
plt.imshow(ImageRaw, norm = simple_norm(ImageRaw, percent = 99.5))
plt.title('Raw', fontsize=15);
# Corrected
plt.subplot(1,2,2)
plt.axis('off')
plt.imshow(ImageCorr, norm = simple_norm(ImageCorr, percent = 99.5))
plt.title('Corrected',fontsize=15);
# ------------------ Table with info -------------------
driftCorrectedLocData = pd.DataFrame()
driftCorrectedLocData['frame'] = frames
driftCorrectedLocData['x [nm]'] = xc_array_Corr
driftCorrectedLocData['y [nm]'] = yc_array_Corr
driftCorrectedLocData['confidence [a.u]'] = LocData['confidence [a.u]']
driftCorrectedLocData.to_csv(os.path.join(save_path, filename_no_extension+'_DriftCorrected.csv'))
print('-------------------------------')
print('Corrected localizations saved.')
```
## **6.3 Visualization of the localizations**
---
<font size = 4>The visualization in section 6.1 is the raw output of the network and displayed at the `upsampling_factor` chosen during model training. This section performs visualization of the result by plotting the localizations as a simple histogram.
<font size = 4>**`Loc_file_path`:** is the path to the localization file to use for visualization.
<font size = 4>**`original_image_path`:** is the path to the original image. This only serves to extract the original image size and pixel size to shape the visualization properly.
<font size = 4>**`visualization_pixel_size`:** This parameter corresponds to the pixel size to use for the final image reconstruction (in **nm**). **DEFAULT: 10**
<font size = 4>**`visualization_mode`:** This parameter defines what visualization method is used to visualize the final image. NOTES: The Integrated Gaussian can be quite slow. **DEFAULT: Simple histogram.**
```
# @markdown ##Data parameters
Use_current_drift_corrected_localizations = True #@param {type:"boolean"}
# @markdown Otherwise provide a localization file path
Loc_file_path = "" #@param {type:"string"}
# @markdown Provide information about original data. Get the info automatically from the raw data?
Get_info_from_file = True #@param {type:"boolean"}
# Loc_file_path = "/content/gdrive/My Drive/Colab notebooks testing/DeepSTORM/Glia data from CL/Results from prediction/20200615-M6 with CoM localizations/Localizations_glia_actin_2D - 1-500fr_avg.csv" #@param {type:"string"}
original_image_path = "" #@param {type:"string"}
# @markdown Otherwise, please provide image width, height (in pixels) and pixel size (in nm)
image_width = 256#@param {type:"integer"}
image_height = 256#@param {type:"integer"}
pixel_size = 100#@param {type:"number"}
# @markdown ##Visualization parameters
visualization_pixel_size = 10#@param {type:"number"}
visualization_mode = "Simple histogram" #@param ["Simple histogram", "Integrated Gaussian (SLOW!)"]
if not Use_current_drift_corrected_localizations:
filename_no_extension = os.path.splitext(os.path.basename(Loc_file_path))[0]
if Get_info_from_file:
pixel_size, image_width, image_height = getPixelSizeTIFFmetadata(original_image_path, display=True)
if Use_current_drift_corrected_localizations:
LocData = driftCorrectedLocData
else:
LocData = pd.read_csv(Loc_file_path)
Mhr = int(math.ceil(image_height*pixel_size/visualization_pixel_size))
Nhr = int(math.ceil(image_width*pixel_size/visualization_pixel_size))
nFrames = max(LocData['frame'])
x_max = max(LocData['x [nm]'])
y_max = max(LocData['y [nm]'])
image_size = (Mhr, Nhr)
print('Image size: '+str(image_size))
print('Number of frames in data: '+str(nFrames))
print('Number of localizations in data: '+str(len(LocData.index)))
xc_array = LocData['x [nm]'].to_numpy()
yc_array = LocData['y [nm]'].to_numpy()
if (visualization_mode == 'Simple histogram'):
locImage = FromLoc2Image_SimpleHistogram(xc_array, yc_array, image_size = image_size, pixel_size = visualization_pixel_size)
elif (visualization_mode == 'Shifted histogram'):
print(bcolors.WARNING+'Method not implemented yet!'+bcolors.NORMAL)
locImage = np.zeros(image_size)
elif (visualization_mode == 'Integrated Gaussian (SLOW!)'):
photon_array = np.ones(xc_array.shape)
sigma_array = np.ones(xc_array.shape)
locImage = FromLoc2Image_Erf(xc_array, yc_array, photon_array, sigma_array, image_size = image_size, pixel_size = visualization_pixel_size)
print('--------------------------------------------------------------------')
# Displaying the time elapsed for training
dt = time.time() - start
minutes, seconds = divmod(dt, 60)
hours, minutes = divmod(minutes, 60)
print("Time elapsed:",hours, "hour(s)",minutes,"min(s)",round(seconds),"sec(s)")
# Display
plt.figure(figsize=(20,10))
plt.axis('off')
# plt.imshow(locImage, cmap='gray');
plt.imshow(locImage, norm = simple_norm(locImage, percent = 99.5));
LocData.head()
# @markdown ---
# @markdown #Play this cell to save the visualization
# @markdown ####Please select a path to the folder where to save the visualization.
save_path = "" #@param {type:"string"}
if not os.path.exists(save_path):
os.makedirs(save_path)
print('Folder created.')
saveAsTIF(save_path, filename_no_extension+'_Visualization', locImage, visualization_pixel_size)
print('Image saved.')
```
## **6.4. Download your predictions**
---
<font size = 4>**Store your data** and ALL its results elsewhere by downloading them from Google Drive, and then clean the original folder tree (datasets, results, trained model, etc.) if you plan to train or use new networks. Please note that the notebook will otherwise **OVERWRITE** all files which have the same name.
# **7. Version log**
---
<font size = 4>**v1.13**:
* The section 1 and 2 are now swapped for better export of *requirements.txt*.
* This version also now includes built-in version check and the version log that you're reading now.
---
#**Thank you for using Deep-STORM 2D!**
| github_jupyter |
# Microstructure classification using Neural Networks
In this example, we will generate microstructures of 4 different types with different grain sizes.
Then we will split the dataset into training and testing sets.
Finally, we will train the neural network using CrysX-NN to make predictions.
## Run the following cell for Google colab
then restart runtime
```
! pip install --upgrade --no-cache-dir https://github.com/manassharma07/crysx_nn/tarball/main
! pip install pymks
! pip install IPython==7.7.0
! pip install fsspec>=0.3.3
```
## Import necessary libraries
We will use PyMKS for generating artificial microstructures.
```
from pymks import (
generate_multiphase,
plot_microstructures,
# PrimitiveTransformer,
# TwoPointCorrelation,
# FlattenTransformer,
# GenericTransformer
)
import numpy as np
import matplotlib.pyplot as plt
# For GPU
import cupy as cp
```
## Define some parameters
like number of samples per type, the width and height of a microstructure image in pixels.
[For Google Colab, generating 10,000 samples of each type results in an out-of-memory error; 8,000 seems to work fine.]
```
nSamples_per_type = 10000
width = 100
height = 100
```
## Generate microstructures
The following code will generate microstructures of 4 different types.
The first type has 6 times more grain boundaries along the x-axis than the y-axis.
The second type has 4 times more grain boundaries along the y-axis than the x-axis.
The third type has the same number of grain boundaries along the x-axis and the y-axis.
The fourth type has 6 times more grain boundaries along the y-axis than the x-axis.
```
grain_sizes = [(30, 5), (10, 40), (15, 15), (5, 30)]
seeds = [10, 99, 4, 36]
data_synth = np.concatenate([
generate_multiphase(shape=(nSamples_per_type, width, height), grain_size=grain_size,
volume_fraction=(0.5, 0.5),
percent_variance=0.2,
seed=seed
)
for grain_size, seed in zip(grain_sizes, seeds)
])
```
## Plot a microstructure of each type
```
plot_microstructures(*data_synth[::nSamples_per_type+0], colorbar=True)
# plot_microstructures(*data_synth[::nSamples_per_type+1], colorbar=True)
# plot_microstructures(*data_synth[::nSamples_per_type+2], colorbar=True)
# plot_microstructures(*data_synth[::nSamples_per_type+3], colorbar=True)
#plt.savefig("Microstructures.png",dpi=600,transparent=True)
plt.show()
```
## Check the shape of the data generated
The first dimension corresponds to the total number of samples, the second and third axes are for width and height.
```
# Print shape of the array
print(data_synth.shape)
print(type(data_synth))
```
## Rename the generated data --> `X_data` as it is the input data
```
X_data = np.array(data_synth)
print(X_data.shape)
```
## Create the target/true labels for the data
The microstructure data we have generated is such that the samples of different types are grouped together. Furthermore, their order is the same as the one we provided when generating the data.
Therefore, we can generate the true labels quite easily by making a numpy array whose first `nSamples_per_type` elements correspond to type 0, and so on up to type 3.
```
Y_data = np.concatenate([np.ones(nSamples_per_type)*0,np.ones(nSamples_per_type)*1,np.ones(nSamples_per_type)*2,np.ones(nSamples_per_type)*3])
print(Y_data)
print(Y_data.shape)
```
## Plot some samples taken from the data randomly as well as their labels that we created for confirmation
```
rng = np.random.default_rng()
### Plot examples
fig, axes = plt.subplots(nrows=2, ncols=6, figsize=(15., 6.))
for axes_row in axes:
for ax in axes_row:
test_index = rng.integers(0, len(Y_data))
image = X_data[test_index]
orig_label = Y_data[test_index]
ax.set_axis_off()
ax.imshow(image)
ax.set_title('True: %i' % orig_label)
```
## Use sklearn to split the data into train and test sets
```
from sklearn.model_selection import train_test_split
# Split into train and test
X_train_orig, X_test_orig, Y_train_orig, Y_test_orig = train_test_split(X_data, Y_data, test_size=0.20, random_state=1)
```
## Some statistics of the training data
```
print('Training data MIN',X_train_orig.min())
print('Training data MAX',X_train_orig.max())
print('Training data MEAN',X_train_orig.mean())
print('Training data STD',X_train_orig.std())
```
## Check some shapes
```
print(X_train_orig.shape)
print(Y_train_orig.shape)
print(X_test_orig.shape)
print(Y_test_orig.shape)
```
## Flatten the input pixel data by reshaping each sample's 2d array of size `100,100` to a 1d array of size `100*100`
```
X_train = X_train_orig.reshape(X_train_orig.shape[0], width*height)
X_test = X_test_orig.reshape(X_test_orig.shape[0], width*height)
```
## Check the shapes
```
print(X_train.shape)
print(X_test.shape)
```
## Use a utility from CrysX-NN to one-hot encode the target/true labels
This means that a sample with type 3 will be represented as an array [0,0,0,1]
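The snippet below is only a plain-NumPy illustration of what the encoding produces (it is not the `crysx_nn` implementation):
```
# Illustration only: one-hot encoding class indices with plain NumPy
import numpy as np
labels = np.array([0, 3, 1])
one_hot = np.eye(4)[labels]    # 4 classes -> rows of the identity matrix
print(one_hot)                 # type 3 becomes [0. 0. 0. 1.]
```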
```
from crysx_nn import mnist_utils as mu
Y_train = mu.one_hot_encode(Y_train_orig, 4)
Y_test = mu.one_hot_encode(Y_test_orig, 4)
print(Y_train.shape)
print(Y_test.shape)
```
## Standardize the training and testing input data using the mean and standard deviation of the training data
```
X_train = (X_train - np.mean(X_train_orig)) / np.std(X_train_orig)
X_test = (X_test - np.mean(X_train_orig)) / np.std(X_train_orig)
# Some statistics after standardization
print('Training data MIN',X_train.min())
print('Training data MAX',X_train.max())
print('Training data MEAN',X_train.mean())
print('Training data STD',X_train.std())
print('Testing data MIN',X_test.min())
print('Testing data MAX',X_test.max())
print('Testing data MEAN',X_test.mean())
print('Testing data STD',X_test.std())
```
## Finally we will begin creating a neural network
Set some important parameters for the Neural Network.
**Note**: In some cases I got NAN values while training. The issue could be circumvented by choosing a different batch size.
```
nInputs = width*height # No. of nodes in the input layer
neurons_per_layer = [500, 4] # Neurons per layer (excluding the input layer)
activation_func_names = ['ReLU', 'Softmax']
nLayers = len(neurons_per_layer)
nEpochs = 4
batchSize = 32 # No. of input samples to process at a time for optimization
```
## Create the neural network model
Use the parameters defined above to create the model.
```
from crysx_nn import network
model = network.nn_model(nInputs=nInputs, neurons_per_layer=neurons_per_layer, activation_func_names=activation_func_names, batch_size=batchSize, device='GPU', init_method='Xavier')
model.lr = 0.02
```
## Check the details of the Neural Network
```
model.details()
```
## Visualize the neural network
```
model.visualize()
```
## Begin optimization/training
We will use `float32` precision, so convert the input and output arrays.
We will use Categorical Cross Entropy for the loss function.
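For reference, here is a minimal NumPy sketch of categorical cross-entropy for one-hot targets (an illustration only, not `crysx_nn`'s internal loss code):
```
# Illustration only: categorical cross-entropy for one-hot encoded targets
import numpy as np

def categorical_cross_entropy(y_true, y_pred, eps=1e-12):
    # mean over samples of -sum_k y_true[k] * log(y_pred[k])
    return -np.mean(np.sum(y_true * np.log(y_pred + eps), axis=1))

y_true = np.array([[0., 0., 0., 1.]])
y_pred = np.array([[0.1, 0.1, 0.1, 0.7]])
print(categorical_cross_entropy(y_true, y_pred))   # ~0.357
```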
```
inputs = cp.array(X_train.astype(np.float32))
outputs = cp.array(Y_train.astype(np.float32))
# Run optimization
# model.optimize(inputs, outputs, lr=0.02,nEpochs=nEpochs,loss_func_name='CCE', miniterEpoch=1, batchProgressBar=True, miniterBatch=100)
# To get accuracies at each epoch
model.optimize(inputs, outputs, lr=0.02,nEpochs=nEpochs,loss_func_name='CCE', miniterEpoch=1, batchProgressBar=True, miniterBatch=100, get_accuracy=True)
```
## Error at each epoch
```
print(model.errors)
```
## Accuracy at each epoch
```
print(model.accuracy)
```
## Save model weights and biases
```
# Save weights
model.save_model_weights('NN_crysx_microstructure_96_weights_cupy')
# Save biases
model.save_model_biases('NN_crysx_microstructure_96_biases_cupy')
```
## Load model weights and biases from files
```
model.load_model_weights('NN_crysx_microstructure_96_weights_cupy')
model.load_model_biases('NN_crysx_microstructure_96_biases_cupy')
```
## Performance on Test data
```
## Convert to float32 arrays
inputs = cp.array(X_test.astype(np.float32))
outputs = cp.array(Y_test.astype(np.float32))
# predictions, error = model.predict(inputs, outputs, loss_func_name='BCE')
# print('Error:',error)
# print(predictions)
predictions, error, accuracy = model.predict(inputs, outputs, loss_func_name='CCE', get_accuracy=True)
print('Error:',error)
print('Accuracy %:',accuracy*100)
```
## Confusion matrix
```
from crysx_nn import utils
# Convert predictions to numpy array for using the utility function
predictions = cp.asnumpy(predictions)
# Get the indices of the maximum probabilities for each sample in the predictions array
pred_type = np.argmax(predictions, axis=1)
# Get the type index from the one-hot encoded array
true_type = np.argmax(Y_test, axis=1)
# Calculate the confusion matrix
cm = utils.compute_confusion_matrix(pred_type, true_type)
print('Confusion matrix:\n',cm)
# Plot the confusion matrix
utils.plot_confusion_matrix(cm)
```
## Draw some random images from the test dataset and compare the true labels to the network outputs
```
### Draw some random images from the test dataset and compare the true labels to the network outputs
fig, axes = plt.subplots(nrows=2, ncols=6, figsize=(15., 6.))
### Loop over subplots
for axes_row in axes:
for ax in axes_row:
### Draw the images
test_index = rng.integers(0, len(Y_test_orig))
image = X_test[test_index].reshape(width, height) # Use X_test instead of X_test_orig as X_test_orig is not standardized
orig_label = Y_test_orig[test_index]
### Compute the predictions
input_array = cp.array(image.reshape([1,width*height]))
output = model.predict(input_array)
# Get the maximum probability
certainty = np.max(output)
# Get the index of the maximum probability
output = np.argmax(output)
### Show image
ax.set_axis_off()
ax.imshow(image)
ax.set_title('True: %i, predicted: %i\nat %f ' % (orig_label, output, certainty*100))
```
| github_jupyter |
# <center> Pandas*</center>
*pandas is short for Python Data Analysis Library
<img src="https://welovepandas.club/wp-content/uploads/2019/02/panda-bamboo1550035127.jpg" height=350 width=400>
```
import pandas as pd
```
In pandas you need to work with DataFrames and Series. According to [the documentation of pandas](https://pandas.pydata.org/pandas-docs/stable/):
* **DataFrame**: Two-dimensional, size-mutable, potentially heterogeneous tabular data. Data structure also contains labeled axes (rows and columns). Arithmetic operations align on both row and column labels. Can be thought of as a dict-like container for Series objects. The primary pandas data structure.
* **Series**: One-dimensional ndarray with axis labels (including time series).
```
pd.Series([5, 6, 7, 8, 9, 10])
pd.DataFrame([1])
some_data = {'Student': ['1', '2'], 'Name': ['Alice', 'Michael'], 'Surname': ['Brown', 'Williams']}
pd.DataFrame(some_data)
some_data = [{'Student': ['1', '2'], 'Name': ['Alice', 'Michael'], 'Surname': ['Brown', 'Williams']}]
pd.DataFrame(some_data)
pd.DataFrame([{'Student': '1', 'Name': 'Alice', 'Surname': 'Brown'},
{'Student': '2', 'Name': 'Anna', 'Surname': 'White'}])
```
Check how to create it:
* pd.DataFrame.from_records()
* pd.DataFrame.from_dict()
```
pd.DataFrame.from_records(some_data)
pd.DataFrame.from_dict(some_data[0])  # from_dict expects a dictionary, so pass the dict itself
```
This data set is too big for github, download it from [here](https://www.kaggle.com/START-UMD/gtd). You will need to register on Kaggle first.
```
df = pd.read_csv('globalterrorismdb_0718dist.csv', encoding='ISO-8859-1')
```
Let's explore the second set of data. How many rows and columns are there?
```
df.shape
```
General information on this data set:
```
df.info()
```
Let's take a look at the dataset information. In .info(), you can pass additional parameters, including:
* **verbose**: whether to print information about the DataFrame in full (if the table is very large, then some information may be lost);
* **memory_usage**: whether to print memory consumption (the default is True, but you can put either False, which will remove memory consumption, or 'deep', which will calculate the memory consumption more accurately);
* **null_counts**: Whether to count the number of empty elements (default is True).
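For example, the options above can be combined as follows (in recent pandas versions the `null_counts` argument has been renamed `show_counts`):
```
# Print full column information and a more accurate (deep) memory estimate
df.info(verbose=True, memory_usage='deep')
```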
```
df.describe()
df.describe(include=['object', 'int'])
```
The describe method shows the basic statistical characteristics of the data for each numeric feature (int64 and float64 types): the number of non-missing values, mean, standard deviation, range, median, 0.25 and 0.75 quartiles.
How to look only at the column names, index:
```
df.columns
df.index
```
How to look at the first 10 lines?
```
df.head(10)
```
How to look at the last 15 lines?
```
df.tail(15)
```
How to request only one particular line (by counting lines)?
```
df.head(4)
#the first 3 lines
df.iloc[:3] # the number of rows by counting them
```
How to request only one particular line by its index?
```
# the first lines till the row with the index 3
df.loc[:3] # 3 is treated as an index
```
Look only at the unique values of some columns.
```
list(df['city'].unique())
```
How many unique values are there in the ```city``` column? In other words, on how many cities does this data set hold information on terrorist attacks?
```
df['city'].nunique()
```
In what years did the largest number of terrorist attacks occur (according only to this data set)?
```
df['iyear'].value_counts().head(5)
df['iyear'].value_counts()[:5]
```
How can we sort all the data by year in descending order?
```
df['iyear'].sort_values()
df.sort_values(by='iyear', ascending=False)
```
Which data types do we have in each column?
```
dict(df.dtypes)
```
How to check the missing values?
```
df
df.isna()
dict(df.isna().sum())
df.dropna(axis=1)
df.head(5)
df['attacktype2'].min()
df['attacktype2'].max()
df['attacktype2'].mode()
df['attacktype2'].median()
df['attacktype2'].mean()
df['attacktype2'].fillna(df['attacktype2'].mode())
```
Let's delete a column ```approxdate``` from this data set, because it contains a lot of missing values:
```
df.drop(['approxdate'], axis=1, inplace=True)
```
Create a new variable ```casualties``` by summing up the values in the ```nkill``` (Killed) and ```nwound``` (Wounded) columns.
```
set(df.columns)
df['casualties'] = df['nwound'] + df['nkill']
df.head()
```
Rename a column ```iyear``` to ```Year```:
```
df.rename({'iyear' : 'Year'}, axis='columns', inplace=True)
df
```
How to drop all missing values? Replace these missing values with others?
```
df.dropna(inplace=True)
```
**Task!** Use a function to replace NaNs (= missing values) with the string 'None' in the ```related``` column
```
# TODO
```
For a selected column, show its mean, median (and/or mode).
```
df['Year'].mean()
```
Min, max and sum:
```
df['Year'].sum()
sum(df['Year'])
max('word')
```
Filter the dataset to look only at the attacks after the year 2015
```
df[df.Year > 2015]
```
What if we have several conditions? Try it out
```
df[(df.Year > 2015) & (df.extended == 1)]
```
Additional materials:
* https://www.kaggle.com/START-UMD/gtd/code?datasetId=504&sortBy=voteCount
| github_jupyter |
# Matplotlib
Matplotlib is a powerful tool for generating scientific charts of various sorts.
This presentation only touches on some features of matplotlib. Please see
<a href="https://jakevdp.github.io/PythonDataScienceHandbook/index.html">
https://jakevdp.github.io/PythonDataScienceHandbook/index.html</a> or many other
resources for a more
detailed discussion,
The following notebook shows how to use matplotlib to examine a simple univariate function.
Please refer to the quick reference notebook for introductions to some of the methods used.
Note there are some FILL_IN_THE_BLANK placeholders where you are expected
to change the notebook to make it work. There may also be bugs purposefully
introduced in the code
samples which you will need to fix.
Consider the function
$$
f(x) = 0.1 * x ^ 2 + \sin(x+1) - 0.5
$$
What does it look like between -2 and 2?
```
# Import numpy and matplotlib modules
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
import numpy as np
# Get x values between -2 and 2
xs = np.linspace(-2, 2, 21)
xs
# Compute array of f values for x values
fs = 0.2 * xs * xs + np.sin(xs + 1) - 0.5
fs
# Make a figure and plot x values against f values
fig = plt.figure()
ax = plt.axes()
ax.plot(xs, fs);
```
# Solving an equation
At what value of $x$ in $[-2, 2]$ does $f(x) = 0$?
Let's look at different plots for $f$ using functions to automate things.
```
def f(x):
return 0.2 * x ** 2 + np.sin(x + 1) - 0.5
def plot_f(low_x=-2, high_x=2, number_of_samples=30):
# Get an array of x values between low_x and high_x of length number_of_samples
xs = FILL_IN_THE_BLANK
fs = f(xs)
fig = plt.figure()
ax = plt.axes()
ax.plot(xs, fs);
plot_f()
plot_f(-1.5, 0.5)
```
# Interactive plots
We can make an interactive figure where we can try to locate the crossing point visually
```
from ipywidgets import interact
interact(plot_f, low_x=(-2.,2), high_x=(-2.,2))
# But we really should do it using an algorithm like binary search:
def find_x_at_zero(some_function, x_below_zero, x_above_zero, iteration_limit=10):
"""
Given f(x_below_zero)<=0 and f(x_above_zero) >= 0 iteratively use the
midpoint between the current boundary points to approximate f(x) == 0.
"""
for count in range(iteration_limit):
# check arguments
y_below_zero = some_function(x_below_zero)
assert y_below_zero < 0, "y_below_zero should stay at or below zero"
y_above_zero = some_function(x_above_zero)
assert y_above_zero < 0, "y_above_zero should stay at or above zero"
# get x in the middle of x_below and x_above
x_middle = 0.5 * (x_below_zero + x_above_zero)
f_middle = some_function(x_middle)
print(" at ", count, "looking at x=", x_middle, "with f(x)", f_middle)
if f_middle < 0:
FILL_IN_THE_BLANK
else:
FILL_IN_THE_BLANK
print ("final estimate after", iteration_limit, "iterations:")
print ("x at zero is between", x_below_zero, x_above_zero)
print ("with current f(x) at", f_middle)
find_x_at_zero(f, -2, 2)
# Exercise: For the following function:
def g(x):
return np.sqrt(x) + np.cos(x + 1) - 1
# Part1: Make a figure and plot x values against g(x) values
# Part 2: find an approximate value of x where g(x) is near 0.
# Part 3: Use LaTeX math notation to display the function g nicely formatted in a Markdown cell.
```
| github_jupyter |
### Stock Prediction using fb Prophet
Prophet is a procedure for forecasting time series data based on an additive model where non-linear trends are fit with yearly, weekly, and daily seasonality, plus holiday effects. It works best with time series that have strong seasonal effects and several seasons of historical data. Prophet is robust to missing data and shifts in the trend, and typically handles outliers well.
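In its additive formulation, Prophet decomposes the forecast as
$$ y(t) = g(t) + s(t) + h(t) + \epsilon_t $$
where $g(t)$ is the trend, $s(t)$ the periodic seasonal component, $h(t)$ the holiday effects and $\epsilon_t$ the error term.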
```
import pandas as pd
import numpy as np
import os
import matplotlib.pyplot as plt
from alpha_vantage.timeseries import TimeSeries
from fbprophet import Prophet
os.chdir(r'N:\STOCK ADVISOR BOT')
ALPHA_VANTAGE_API_KEY = 'XAGC5LBB1SI9RDLW'
ts = TimeSeries(key= ALPHA_VANTAGE_API_KEY, output_format='pandas')
df_Stock, Stock_info = ts.get_daily('MSFT', outputsize='full')
df_Stock = df_Stock.rename(columns={'1. open' : 'Open', '2. high': 'High', '3. low':'Low', '4. close': 'Close', '5. volume': 'Volume' })
df_Stock = df_Stock.rename_axis(['Date'])
Stock = df_Stock.sort_index(ascending=True, axis=0)
#slicing the data for 15 years from '2004-01-02' to today
Stock = Stock.loc['2004-01-02':]
Stock
Stock = Stock.drop(columns=['Open', 'High', 'Low', 'Volume'])
Stock.index = pd.to_datetime(Stock.index)
Stock.info()
#NFLX.resample('D').ffill()
Stock = Stock.reset_index()
Stock
Stock.columns = ['ds', 'y']
prophet_model = Prophet(yearly_seasonality=True, daily_seasonality=True)
prophet_model.add_country_holidays(country_name='US')
prophet_model.add_seasonality(name='monthly', period=30.5, fourier_order=5)
prophet_model.fit(Stock)
future = prophet_model.make_future_dataframe(periods=30)
future.tail()
forcast = prophet_model.predict(future)
forcast.tail()
prophet_model.plot(forcast);
```
If you want to visualize the individual forecast components, we can use Prophet’s built-in plot_components method like below
```
prophet_model.plot_components(forcast);
forcast.shape
forcast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail()
```
### Prediction Performance
The performance_metrics utility can be used to compute some useful statistics of the prediction performance (yhat, yhat_lower, and yhat_upper compared to y), as a function of the distance from the cutoff (how far into the future the prediction was). The statistics computed are mean squared error (MSE), root mean squared error (RMSE), mean absolute error (MAE), mean absolute percent error (MAPE), and coverage of the yhat_lower and yhat_upper estimates.
```
from fbprophet.diagnostics import cross_validation, performance_metrics
df_cv = cross_validation(prophet_model, horizon='180 days')
df_cv.head()
df_cv
df_p = performance_metrics(df_cv)
df_p.head()
df_p
from fbprophet.plot import plot_cross_validation_metric
fig = plot_cross_validation_metric(df_cv, metric='mape')
```
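The cutoffs used by `cross_validation` can also be controlled explicitly. A hedged variant is sketched below; the `initial` and `period` values are illustrative choices, not part of the original analysis:
```
# Illustrative only: train on the first three years, place a new cutoff every
# 90 days, and keep the 180-day forecast horizon
df_cv_explicit = cross_validation(prophet_model, initial='1095 days', period='90 days', horizon='180 days')
df_cv_explicit.head()
```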
### License
MIT License
Copyright (c) 2020 Avinash Chourasiya
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
| github_jupyter |
```
library(repr) ; options(repr.plot.res = 100, repr.plot.width=5, repr.plot.height= 5) # Change plot sizes (in cm) - this bit of code is only relevant if you are using a jupyter notebook - ignore otherwise
```
<!--NAVIGATION-->
< [Multiple Explanatory Variables](16-MulExpl.ipynb) | [Main Contents](Index.ipynb) | [Model Simplification](18-ModelSimp.ipynb)>
# Linear Models: Multiple variables with interactions <span class="tocSkip">
<h1>Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Introduction" data-toc-modified-id="Introduction-1"><span class="toc-item-num">1 </span>Introduction</a></span><ul class="toc-item"><li><span><a href="#Chapter-aims" data-toc-modified-id="Chapter-aims-1.1"><span class="toc-item-num">1.1 </span>Chapter aims</a></span></li><li><span><a href="#Formulae-with-interactions-in-R" data-toc-modified-id="Formulae-with-interactions-in-R-1.2"><span class="toc-item-num">1.2 </span>Formulae with interactions in R</a></span></li></ul></li><li><span><a href="#Model-1:-Mammalian-genome-size" data-toc-modified-id="Model-1:-Mammalian-genome-size-2"><span class="toc-item-num">2 </span>Model 1: Mammalian genome size</a></span></li><li><span><a href="#Model-2-(ANCOVA):-Body-Weight-in-Odonata" data-toc-modified-id="Model-2-(ANCOVA):-Body-Weight-in-Odonata-3"><span class="toc-item-num">3 </span>Model 2 (ANCOVA): Body Weight in Odonata</a></span></li></ul></div>
# Introduction
Here you will build on your skills in fitting linear models with multiple explanatory variables to data. You will learn about another commonly used Linear Model fitting technique: ANCOVA.
We will build two models in this chapter:
* **Model 1**: Is mammalian genome size predicted by interactions between trophic level and whether species are ground dwelling?
* **ANCOVA**: Is body size in Odonata predicted by interactions between genome size and taxonomic suborder?
So far, we have only looked at the independent effects of variables. For example, in the trophic level and ground dwelling model from [the first multiple explanatory variables chapter](16-MulExpl.ipynb), we only looked for specific differences for being a omnivore *or* being ground dwelling, not for being
specifically a *ground dwelling omnivore*. These independent effects of a variable are known as *main effects* and the effects of combinations of variables acting together are known as *interactions* — they describe how the variables *interact*.
## Chapter aims
The aims of this chapter are[$^{[1]}$](#fn1):
* Creating more complex Linear Models with multiple explanatory variables
* Including the effects of interactions between multiple variables in a linear model
* Plotting predictions from more complex (multiple explanatory variables) linear models
## Formulae with interactions in R
We've already seen a number of different model formulae in R. They all use this syntax:
`response variable ~ explanatory variable(s)`
But we are now going to see two extra pieces of syntax:
* `y ~ a + b + a:b`: The `a:b` means the interaction between `a` and `b` — do combinations of these variables lead to different outcomes?
* `y ~ a * b`: This a shorthand for the model above. The means fit `a` and `b` as main effects and their interaction `a:b`.
# Model 1: Mammalian genome size
$\star$ Make sure you have changed the working directory to `Code` in your stats coursework directory.
$\star$ Create a new blank script called 'Interactions.R' and add some introductory comments.
$\star$ Load the data:
```
load('../data/mammals.Rdata')
```
If `mammals.Rdata` is missing, just import the data again using `read.csv`. You will then have to add the log C Value column to the imported data frame again.
Let's refit the model from [the first multiple explanatory variables chapter](16-MulExpl.ipynb), but including the interaction between trophic level and ground dwelling. We'll immediately check the model is appropriate:
```
model <- lm(logCvalue ~ TrophicLevel * GroundDwelling, data= mammals)
par(mfrow=c(2,2), mar=c(3,3,1,1), mgp=c(2, 0.8,0))
plot(model)
```
Now, examine the `anova` and `summary` outputs for the model:
```
anova(model)
```
Compared to the model from [the first multiple explanatory variables chapter](16-MulExpl.ipynb), there is an extra line at the bottom. The top two are the same and show that trophic level and ground dwelling both have independent main effects. The extra line
shows that there is also an interaction between the two. It doesn't explain a huge amount of variation, about half as much as trophic level, but it is significant.
Again, we can calculate the $r^2$ for the model: $\frac{0.81 + 2.75 + 0.43}{0.81+2.75+0.43+12.77} = 0.238$
The model from [the first multiple explanatory variables chapter](16-MulExpl.ipynb) without the interaction had an $r^2 = 0.212$ — our new
model explains 2.6% more of the variation in the data.
The summary table is as follows:
```
summary(model)
```
The lines in this output are:
1. The reference level (intercept) for non ground dwelling carnivores. (The reference level is decided just by the alphabetic order of the levels)
2. Two differences for being in different trophic levels.
3. One difference for being ground dwelling
4. Two new differences that give specific differences for ground dwelling herbivores and omnivores.
The first four lines are as in the model from the [ANOVA chapter](15-anova.ipynb), and would allow us to find the predicted values for each group *if the size of the differences did not vary between levels because of the interactions*. That is, this part of the model only includes a single difference between ground and non-ground species, which has to be the same for each trophic group because it ignores interactions between trophic level and the ground / non-ground identity of each species. The last two lines then give the estimated coefficients associated with the interaction terms, which allow the size of the differences to vary between levels because of the further effects of interactions.
The table below show how these combine to give the predictions for each group combination, with those two new lines show in red:
$\begin{array}{|r|r|r|}
\hline
& \textrm{Not ground} & \textrm{Ground} \\
\hline
\textrm{Carnivore} & 0.96 = 0.96 & 0.96+0.25=1.21 \\
\textrm{Herbivore} & 0.96 + 0.05 = 1.01 & 0.96+0.05+0.25{\color{red}+0.03}=1.29\\
\textrm{Omnivore} & 0.96 + 0.23 = 1.19 & 0.96+0.23+0.25{\color{red}-0.15}=1.29\\
\hline
\end{array}$
So why are there two new coefficients? For interactions between two factors, there are always $(n-1)\times(m-1)$ new coefficients, where $n$ and $m$ are the number of levels in the two factors (Ground dwelling or not: 2 levels and trophic level: 3 levels, in our current example). So in this model, $(3-1) \times (2-1) =2$. It is easier to understand why
graphically: the prediction for the white boxes below can be found by adding the main effects together but for the grey boxes we need to find specific differences and so there are $(n-1)\times(m-1)$ interaction coefficients to add.
<a id="fig:interactionsdiag"></a>
<figure>
<img src="./graphics/interactionsdiag.png" alt="interactionsdiag" style="width:50%">
<small>
<center>
<figcaption>
Figure 2
</figcaption>
</center>
</small>
</figure>
If we put this together, what is the model telling us?
* Herbivores have the same genome sizes as carnivores, but omnivores have larger genomes.
* Ground dwelling mammals have larger genomes.
These two findings suggest that ground dwelling omnivores should have extra big genomes. However, the interaction shows they are smaller than expected and are, in fact, similar to ground dwelling herbivores.
Note that although the interaction term in the `anova` output is significant, neither of the two coefficients in the `summary` has a $p<0.05$. There are two weak differences (one
very weak, one nearly significant) that together explain significant
variance in the data.
$\star$ Copy the code above into your script and run the model.
Make sure you understand the output!
Just to make sure the sums above are correct, we'll use the same code as
in [the first multiple explanatory variables chapter](16-MulExpl.ipynb) to get R to calculate predictions for us, similar to the way we did [before](16-MulExpl.ipynb):
```
# a data frame of combinations of variables
gd <- rep(levels(mammals$GroundDwelling), times = 3)
print(gd)
tl <- rep(levels(mammals$TrophicLevel), each = 2)
print(tl)
# New data frame
predVals <- data.frame(GroundDwelling = gd, TrophicLevel = tl)
# predict using the new data frame
predVals$predict <- predict(model, newdata = predVals)
print(predVals)
```
$\star$ Include and run the code for generating these predictions in your script.
If we plot these data points onto the barplot from [the first multiple explanatory variables chapter](16-MulExpl.ipynb), they now lie exactly on the mean values, because we've allowed for interactions. The triangle on this plot shows the predictions for ground dwelling omnivores from the main effects ($0.96 + 0.23 + 0.25 = 1.44$), the interaction of $-0.15$ pushes the prediction back down.
<a id="fig:predPlot"></a>
<figure>
<img src="./graphics/predPlot.svg" alt="predPlot" style="width:70%">
</figure>
# Model 2 (ANCOVA): Body Weight in Odonata
We'll go all the way back to the regression analyses from the [Regression chapter](14-regress.ipynb). Remember that we fitted two separate regression lines to the data for damselflies and dragonflies. We'll now use an interaction to fit these in a single model. This kind of linear model — with a mixture of continuous variables and factors — is often called an *analysis of covariance*, or ANCOVA. That is, ANCOVA is a type of linear model that blends ANOVA and regression. ANCOVA evaluates whether population means of a dependent variable are equal across levels of a categorical independent variable, while statistically controlling for the effects of other continuous variables that are not of primary interest, known as covariates.
*Thus, ANCOVA is a linear model with one categorical and one or more continuous predictors*.
We will use the odonates data that we have worked with [before](12-ExpDesign.ipynb).
$\star$ First load the data:
```
odonata <- read.csv('../data/GenomeSize.csv')
```
$\star$ Now create two new variables in the `odonata` data set called `logGS` and `logBW` containing log genome size and log body weight:
```
odonata$logGS <- log(odonata$GenomeSize)
odonata$logBW <- log(odonata$BodyWeight)
```
The models we fitted [before](12-ExpDesign.ipynb) looked like this:
<a id="fig:dragonData"></a>
<figure>
<img src="./graphics/dragonData.svg" alt="dragonData" style="width:60%">
<small>
<center>
<figcaption>
</figcaption>
</center>
</small>
</figure>
We can now fit the model of body weight as a function of both genome size and suborder:
```
odonModel <- lm(logBW ~ logGS * Suborder, data = odonata)
```
Again, we'll look at the <span>anova</span> table first:
```
anova(odonModel)
```
Interpreting this:
* There is no significant main effect of log genome size. The *main* effect is the important thing here — genome size is hugely important but does very different things for the two different suborders. If we ignored `Suborder`, there isn't an overall relationship: the average of those two lines is pretty much flat.
* There is a very strong main effect of Suborder: the mean body weight in the two groups are very different.
* There is a strong interaction between suborder and genome size. This is an interaction between a factor and a continuous variable and shows that the *slopes* are different for the different factor levels.
Now for the summary table:
```
summary(odonModel)
```
* The first thing to note is that the $r^2$ value is really high. The model explains three quarters (0.752) of the variation in the data.
* Next, there are four coefficients:
* The intercept is for the first level of `Suborder`, which is Anisoptera (dragonflies).
* The next line, for `log genome size`, is the slope for Anisoptera.
* We then have a coefficient for the second level of `Suborder`, which is Zygoptera (damselflies). As with the first model, this difference in factor levels is a difference in mean values and shows the difference in the intercept for Zygoptera.
* The last line is the interaction between `Suborder` and `logGS`. This shows how the slope for Zygoptera differs from the slope for Anisoptera.
How do these hang together to give the two lines shown in the model? We can calculate these by hand:
$\begin{aligned}
\textrm{Body Weight} &= -2.40 + 1.01 \times \textrm{logGS} & \textrm{[Anisoptera]}\\
\textrm{Body Weight} &= (-2.40 -2.25) + (1.01 - 2.15) \times \textrm{logGS} & \textrm{[Zygoptera]}\\
&= -4.65 - 1.14 \times \textrm{logGS} \\\end{aligned}$
$\star$ Add the above code into your script and check that you understand the outputs.
We'll use the `predict` function again to get the predicted values from the model and add lines to the plot above.
First, we'll create a set of numbers spanning the range of genome size:
```
#get the range of the data:
rng <- range(odonata$logGS)
#get a sequence from the min to the max with 100 equally spaced values:
LogGSForFitting <- seq(rng[1], rng[2], length = 100)
```
Have a look at these numbers:
```
print(LogGSForFitting)
```
We can now use the model to predict the values of body weight at each of those points for each of the two suborders:
```
#get a data frame of new data for the order
ZygoVals <- data.frame(logGS = LogGSForFitting, Suborder = "Zygoptera")
#get the predictions and standard error
ZygoPred <- predict(odonModel, newdata = ZygoVals, se.fit = TRUE)
#repeat for anisoptera
AnisoVals <- data.frame(logGS = LogGSForFitting, Suborder = "Anisoptera")
AnisoPred <- predict(odonModel, newdata = AnisoVals, se.fit = TRUE)
```
We've added `se.fit=TRUE` to the function to get the standard error around the regression lines. Both `AnisoPred` and `ZygoPred` contain predicted values (called `fit`) and standard error values (called `se.fit`) for each of the values in our generated values in `LogGSForFitting` for each of the two suborders.
We can add the predictions onto a plot like this:
```
# plot the scatterplot of the data
plot(logBW ~ logGS, data = odonata, col = Suborder)
# add the predicted lines
lines(AnisoPred$fit ~ LogGSForFitting, col = "black")
lines(AnisoPred$fit + AnisoPred$se.fit ~ LogGSForFitting, col = "black", lty = 2)
lines(AnisoPred$fit - AnisoPred$se.fit ~ LogGSForFitting, col = "black", lty = 2)
```
$\star$ Copy the prediction code into your script and run the plot above.
Copy and modify the last three lines to add the lines for the Zygoptera. Your final plot should look like this.
<a id="fig:odonPlot"></a>
<figure>
<img src="./graphics/odonPlot.svg" alt="odonPlot" style="width:70%">
<small>
<center>
<figcaption>
Figure 4
</figcaption>
</center>
</small>
</figure>
---
<a id="fn1"></a>
[1]: Here you work with the script file `MulExplInter.R`
| github_jupyter |
# <span style='color:darkred'> 2 Protein Visualization </span>
***
For the purposes of this tutorial, we will use the HIV-1 protease structure (PDB ID: 1HSG). It is a homodimer with two chains of 99 residues each. Before starting to perform any simulations and data analysis, we need to observe the protein of interest and familiarize ourselves with it.
There are various software packages for visualizing molecular systems, but here we will guide you through using two of them, NGLView and VMD:
* [NGLView](http://nglviewer.org/#nglview): An IPython/Jupyter widget to interactively view molecular structures and trajectories.
* [VMD](https://www.ks.uiuc.edu/Research/vmd/): VMD is a molecular visualization program for displaying, animating, and analyzing large biomolecular systems using 3-D graphics and built-in scripting.
You could either take your time to familiarize yourself with both, or select the one you prefer to delve into.
NGLView is great for looking at things directly within a Jupyter notebook, while VMD is a more powerful tool for visualization: it can generate high-quality images and videos, and it is also well suited to analysing simulation trajectories.
## <span style='color:darkred'> 2.0 Obtain the protein structure </span>
The first step is to obtain the crystal structure of the HIV-1 protease.
Start your web-browser and go to the [protein data bank](https://www.rcsb.org/). Enter the pdb code 1HSG in the site search box at the top and hit the site search button. The protein should come up. Select download from the top right hand menu and save the .pdb file to the current working directory.
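If you prefer to fetch the file programmatically, a minimal sketch is shown below (it assumes the standard RCSB download URL pattern and an active internet connection):
```
# Fetch the 1HSG structure directly from the RCSB file server
import urllib.request
urllib.request.urlretrieve('https://files.rcsb.org/download/1HSG.pdb', '1hsg.pdb')
```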
## <span style='color:darkred'> 2.1 VMD (optional) </span>
You can now open the pdb structure with VMD (the following file name might be uppercase depending on how you downloaded it):
`% vmd 1hsg.pdb`
You should experiment with the menu system and try various representations of the protein such as `Trace`, `NewCartoon` and `Ribbons` for example.
Go to `Graphics` and then `Graphical Representations` and from the `Drawing Method` drop-down list, select `Trace`. Similarly, you can explore other drawing methods.
<span style='color:Blue'> **Questions** </span>
* Can you find the indinavir drug?
*Hint: At the `Graphical Representations` menu, click `Create Rep` and type "all and not protein" and hit Enter. Change the `Drawing Method` to `Licorice`.*
* Give the protein the Trace representation and then make the polar residues in vdw format as an additional representation. Repeat with the hydrophobic residues. What do you notice?
*Hint: Explore the `Selections` tab and the options provided as singlewords.*
*Hint: To hide a representation, double-click on it. Double-click again if you want to make it reappear.*
Take your time to explore the features of VMD and to observe the protein. Once you are happy, you can exit VMD, either by clicking on `File` and then `Quit` or by typing `quit` in the terminal box.
***
## <span style='color:darkred'> 2.2 NGLView </span>
You have already been introduced to NGLView during the Python tutorial. You can now spend more time to navigate through its features.
```
# Import NGLView
import nglview
# Select as your protein the 1HSG pdb entry
protein_view = nglview.show_pdbid('1hsg')
protein_view.gui_style = 'ngl'
#Uncomment the command below to add a hyperball representation of the crystal water oxygens in grey
#protein_view.add_hyperball('HOH', color='grey', opacity=1.0)
#Uncomment the command below to color the protein according to its secondary structure with opacity 0.6
#protein_view.update_cartoon(color='sstruc', opacity=0.6)
# Let's change the display a little bit
protein_view.parameters = dict(camera_type='orthographic', clip_dist=0)
# Set the background colour to black
protein_view.background = 'black'
# Call protein_view to visualise the trajectory
protein_view
```
<span style='color:Blue'> **Questions** </span>
* When you load the structure, can you see the two subunits that form the dimer?
* Can you locate the drug in the binding pocket?
*Hint: Go to `View` and then `Full screen` to expand the viewing window.*
* Can you hide all the other representations and view only the drug? (If you get stuck, one possible approach is sketched below.)
*Hint: Use your mouse to rotate, translate and zoom in and out.*
*Hint: You can hide/show a representation by clicking on the "eye" symbol on the right panel.*
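A minimal sketch of one approach to the last question is given below; it assumes that the `ligand` selection keyword and the `clear_representations`, `add_licorice` and `center` helpers behave as in recent NGLView releases:
```
# Remove the existing representations, then show only the bound drug
protein_view.clear_representations()
protein_view.add_licorice('ligand')
protein_view.center('ligand')
```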
***
Explore the [NGLView documentation](http://nglviewer.org/nglview/latest/api.html), and play around with different representations, selections, colors etc. Take as much time as you want in this step.
***
## <span style='color:darkred'> Next Step </span>
You can now open the `03_Running_an_MD_simulation.ipynb` notebook to setup and perform a Molecular Dynamics simulation of your protein.
| github_jupyter |
## Plotting very large datasets meaningfully, using `datashader`
There are a variety of approaches for plotting large datasets, but most of them are very unsatisfactory. Here we first show some of the issues, then demonstrate how the `datashader` library helps make large datasets truly practical.
We'll use part of the well-studied [NYC Taxi trip database](http://www.nyc.gov/html/tlc/html/about/trip_record_data.shtml), with the locations of all NYC taxi pickups and dropoffs from the month of January 2015. Although we know what the data is, let's approach it as if we are doing data mining, and see what it takes to understand the dataset from scratch.
### Load NYC Taxi data
(takes 10-20 seconds, since it's in the inefficient but widely supported CSV file format...)
```
import pandas as pd
%time df = pd.read_csv('../data/nyc_taxi.csv',usecols= \
['pickup_x', 'pickup_y', 'dropoff_x','dropoff_y', 'passenger_count','tpep_pickup_datetime'])
df.tail()
```
As you can see, this file contains about 12 million pickup and dropoff locations (in Web Mercator coordinates), with passenger counts.
### Define a simple plot
```
from bokeh.models import BoxZoomTool
from bokeh.plotting import figure, output_notebook, show
output_notebook()
NYC = x_range, y_range = ((-8242000,-8210000), (4965000,4990000))
plot_width = int(750)
plot_height = int(plot_width//1.2)
def base_plot(tools='pan,wheel_zoom,reset',plot_width=plot_width, plot_height=plot_height, **plot_args):
p = figure(tools=tools, plot_width=plot_width, plot_height=plot_height,
x_range=x_range, y_range=y_range, outline_line_color=None,
min_border=0, min_border_left=0, min_border_right=0,
min_border_top=0, min_border_bottom=0, **plot_args)
p.axis.visible = False
p.xgrid.grid_line_color = None
p.ygrid.grid_line_color = None
p.add_tools(BoxZoomTool(match_aspect=True))
return p
options = dict(line_color=None, fill_color='blue', size=5)
```
### 1000-point scatterplot: undersampling
Any plotting program should be able to handle a plot of 1000 datapoints. Here the points are initially overplotting each other, but if you hit the Reset button (top right of plot) to zoom in a bit, nearly all of them should be clearly visible in the following Bokeh plot of a random 1000-point sample. If you know what to look for, you can even see the outline of Manhattan Island and Central Park from the pattern of dots. We've included geographic map data here to help get you situated, though for a genuine data mining task in an abstract data space you might not have any such landmarks. In any case, because this plot is discarding 99.99% of the data, it reveals very little of what might be contained in the dataset, a problem called *undersampling*.
```
%%time
from bokeh.tile_providers import STAMEN_TERRAIN
samples = df.sample(n=1000)
p = base_plot()
p.add_tile(STAMEN_TERRAIN)
p.circle(x=samples['dropoff_x'], y=samples['dropoff_y'], **options)
show(p)
```
### 10,000-point scatterplot: overplotting
We can of course plot more points to reduce the amount of undersampling. However, even if we only try to plot 0.1% of the data, ignoring the other 99.9%, we will find major problems with *overplotting*, such that the true density of dropoffs in central Manhattan is impossible to see due to occlusion:
```
%%time
samples = df.sample(n=10000)
p = base_plot()
p.circle(x=samples['dropoff_x'], y=samples['dropoff_y'], **options)
show(p)
```
Overplotting is reduced if you zoom in on a particular region (may need to click to enable the wheel-zoom tool in the upper right of the plot first, then use the scroll wheel). However, the problem then switches back to serious undersampling, as the too-sparsely sampled datapoints get revealed for zoomed-in regions, even though much more data is available.
### 100,000-point scatterplot: saturation
If you make the dot size smaller, you can reduce the overplotting that occurs when you try to combat undersampling. Even so, with enough opaque data points, overplotting will be unavoidable in popular dropoff locations. So you can then adjust the alpha (opacity) parameter of most plotting programs, so that multiple points need to overlap before full color saturation is achieved. With enough data, such a plot can approximate the probability density function for dropoffs, showing where dropoffs were most common:
```python
%%time
options = dict(line_color=None, fill_color='blue', size=1, alpha=0.1)
samples = df.sample(n=100000)
p = base_plot(webgl=True)
p.circle(x=samples['dropoff_x'], y=samples['dropoff_y'], **options)
show(p)
```
<img src="../assets/images/nyc_taxi_100k.png">
[*Here we've shown static output as a PNG rather than a live Bokeh plot, to reduce the file size for distributing full notebooks and because some browsers will have trouble with plots this large. The above cell can be converted into code and executed to get the full interactive plot.*]
However, it's very tricky to set the size and alpha parameters. How do we know if certain regions are saturating, unable to show peaks in dropoff density? Here we've manually set the alpha to show a clear structure of streets and blocks, as one would intuitively expect to see, but the density of dropoffs still seems approximately the same on nearly all Manhattan streets (just wider in some locations), which is unlikely to be true. We can of course reduce the alpha value to reduce saturation further, but there's no way to tell when it's been set correctly, and it's already low enough that nothing other than Manhattan and La Guardia is showing up at all. Plus, this alpha value will only work reasonably well at the one zoom level shown. Try zooming in (may need to enable the wheel zoom tool in the upper right) to see that at higher zooms, there is less overlap between dropoff locations, so that the points *all* start to become transparent due to lack of overlap. Yet without setting the size and alpha to a low value in the first place, the structure is invisible when zoomed out, due to overplotting. Thus even though Bokeh provides rich support for interactively revealing structure by zooming, it is of limited utility for large data; either the data is invisible when zoomed in, or there's no large-scale structure when zoomed out, which is necessary to indicate where zooming would be informative.
Moreover, we're still ignoring 99% of the data. Many plotting programs will have trouble with plots even this large, but Bokeh can handle 100-200,000 points in most browsers. Here we've enabled Bokeh's WebGL support, which gives smoother zooming behavior, but the non-WebGL mode also works well. Still, for such large sizes the plots become slow due to the large HTML file sizes involved, because each data point is encoded as text in the web page, and for even larger samples the browser will fail to render the page at all.
### 10-million-point datashaded plots: auto-ranging, but limited dynamic range
To let us work with truly large datasets without discarding most of the data, we can take an entirely different approach. Instead of using a Bokeh scatterplot, which encodes every point into JSON and stores it in the HTML file read by the browser, we can use the [datashader](https://github.com/bokeh/datashader) library to render the entire dataset into a pixel buffer in a separate Python process, and then provide a fixed-size image to the browser containing only the data currently visible. This approach decouples the data processing from the visualization. The data processing is then limited only by the computational power available, while the visualization has much more stringent constraints determined by your display device (a web browser and your particular monitor, in this case). This approach works particularly well when your data is in a far-off server, but it is also useful whenever your dataset is larger than your display device can render easily.
Because the number of points involved is no longer a limiting factor, you can now use the entire dataset (including the full 150 million trips that have been made public, if you download that data separately). Most importantly, because datashader allows computation on the intermediate stages of plotting, you can easily define operations like auto-ranging (which is on by default), so that we can be sure there is no overplotting or saturation and no need to set parameters like alpha.
The steps involved in datashading are (1) create a Canvas object with the shape of the eventual plot (i.e. having one storage bin for collecting points, per final pixel), (2) aggregating all points into that set of bins, incrementally counting them, and (3) mapping the resulting counts into a visible color from a specified range to make an image:
```
import datashader as ds
from datashader import transfer_functions as tf
from datashader.colors import Greys9
Greys9_r = list(reversed(Greys9))[:-2]
%%time
cvs = ds.Canvas(plot_width=plot_width, plot_height=plot_height, x_range=x_range, y_range=y_range)
agg = cvs.points(df, 'dropoff_x', 'dropoff_y', ds.count('passenger_count'))
img = tf.shade(agg, cmap=["white", 'darkblue'], how='linear')
```
The resulting image is similar to the 100,000-point Bokeh plot above, but (a) makes use of all 12 million datapoints, (b) is computed in only a tiny fraction of the time, (c) does not require any magic-number parameters like size and alpha, and (d) automatically ensures that there is no saturation or overplotting:
```
img
```
This plot renders the count at every pixel as a color from the specified range (here from white to dark blue), mapped linearly. If your display device were linear, and the data were distributed evenly across this color range, then the result of such linear, auto-ranged processing would be an effective, parameter-free way to visualize your dataset.
However, real display devices are not typically linear, and more importantly, real data is rarely distributed evenly. Here, it is clear that there are "hotspots" in dropoffs, with a very high count for areas around Penn Station and Madison Square Garden, relatively low counts for the rest of Manhattan's streets, and apparently no dropoffs anywhere else but La Guardia airport. NYC taxis definitely cover a larger geographic range than this, so what is the problem? To see, let's look at the histogram of counts for the above image:
```
import numpy as np
def histogram(x,colors=None):
hist,edges = np.histogram(x, bins=100)
p = figure(y_axis_label="Pixels",
tools='', height=130, outline_line_color=None,
min_border=0, min_border_left=0, min_border_right=0,
min_border_top=0, min_border_bottom=0)
p.quad(top=hist[1:], bottom=0, left=edges[1:-1], right=edges[2:])
print("min: {}, max: {}".format(np.min(x),np.max(x)))
show(p)
histogram(agg.values)
```
Clearly, most of the pixels have very low counts (under 3000), while a very few pixels have much larger counts (up to 22000, in this case). When these values are mapped into colors for display, nearly all of the pixels will end up being colored with the lowest colors in the range, i.e. white or nearly white, while the other colors in the available range will be used for only a few dozen pixels at most. Thus most of the pixels in this plot convey very little information about the data, wasting nearly all of dynamic range available on your display device. It's thus very likely that we are missing a lot of the structure in this data that we could be seeing.
### 10-million-point datashaded plots: high dynamic range
For the typical case of data that is distributed nonlinearly over the available range, we can use nonlinear scaling to map the data range into the visible color range. E.g. first transforming the values via a log function will help flatten out this histogram and reveal much more of the structure of this data:
```
histogram(np.log1p(agg.values))
tf.shade(agg, cmap=Greys9_r, how='log')
```
We can now see that there is rich structure throughout this dataset -- geographic features like streets and buildings are clearly modulating the values in both the high-dropoff regions in Manhattan and the relatively low-dropoff regions in the surrounding areas. Still, this choice is arbitrary -- why the log function in particular? It clearly flattened the histogram somewhat, but it was just a guess. We can instead explicitly equalize the histogram of the data before building the image, making structure visible at every data level (and thus at all the geographic locations covered) in a general way:
```
histogram(tf.eq_hist(agg.values))
tf.shade(agg, cmap=Greys9_r, how='eq_hist')
```
The histogram is now fully flat (apart from the spacing of bins caused by the discrete nature of integer counting). Effectively, the visualization now shows a rank-order or percentile distribution of the data. I.e., pixels are now colored according to where their corresponding counts fall in the distribution of all counts, with one end of the color range for the lowest counts, one end for the highest ones, and every colormap step in between having similar numbers of counts. Such a visualization preserves the ordering between count values, faithfully displaying local differences in these counts, but discards absolute magnitudes (as the top 1% of the color range will be used for the top 1% of the data values, whatever those may be).
Now that the data is visible at every level, we can immediately see that there are some clear problems with the quality of the data -- there is a surprising number of trips that claim to drop off in the water or in the roadless areas of Central park, as well as in the middle of most of the tallest buildings in central Manhattan. These locations are likely to be GPS errors being made visible, perhaps partly because of poor GPS performance in between the tallest buildings.
Histogram equalization does not require any magic parameters, and in theory it should convey the maximum information available about the relative values between pixels, by mapping each of the observed ranges of values into visibly discriminable colors. And it's clearly a good start in practice, because it shows both low values (avoiding undersaturation) and relatively high values clearly, without arbitrary settings.
Even so, the results will depend on the nonlinearities of your visual system, your specific display device, and any automatic compensation or calibration being applied to your display device. Thus in practice, the resulting range of colors may not map directly into a linearly perceivable range for your particular setup, and so you may want to further adjust the values to more accurately reflect the underlying structure, by adding additional calibration or compensation steps.
Moreover, at this point you can now bring in your human-centered goals for the visualization -- once the overall structure has been clearly revealed, you can select specific aspects of the data to highlight or bring out, based on your own questions about the data. These questions can be expressed at whatever level of the pipeline is most appropriate, as shown in the examples below. For instance, histogram equalization was done on the counts in the aggregate array, because if we waited until the image had been created, we would have been working with data truncated to the 256 color levels available per channel in most display devices, greatly reducing precision. Or you may want to focus specifically on the highest peaks (as shown below), which again should be done at the aggregate level so that you can use the full color range of your display device to represent the narrow range of data that you are interested in. Throughout, the goal is to map from the data of interest into the visible, clearly perceptible range available on your display device.
### 10-million-point datashaded plots: interactive
Although the above plots reveal the entire dataset at once, the full power of datashading requires an interactive plot, because a big dataset will usually have structure at very many different levels (such as different geographic regions). Datashading allows auto-ranging and other automatic operations to be recomputed dynamically for the specific selected viewport, automatically revealing local structure that may not be visible from a global view. Here we'll embed the generated images into a Bokeh plot to support fully interactive zooming. For the highest detail on large monitors, you should increase the plot width and height above.
```
import datashader as ds
from datashader.bokeh_ext import InteractiveImage
from functools import partial
from datashader.utils import export_image
from datashader.colors import colormap_select, Greys9, Hot, inferno
background = "black"
export = partial(export_image, export_path="export", background=background)
cm = partial(colormap_select, reverse=(background=="black"))
def create_image(x_range, y_range, w=plot_width, h=plot_height):
cvs = ds.Canvas(plot_width=w, plot_height=h, x_range=x_range, y_range=y_range)
agg = cvs.points(df, 'dropoff_x', 'dropoff_y', ds.count('passenger_count'))
img = tf.shade(agg, cmap=Hot, how='eq_hist')
return tf.dynspread(img, threshold=0.5, max_px=4)
p = base_plot(background_fill_color=background)
export(create_image(*NYC),"NYCT_hot")
InteractiveImage(p, create_image)
```
You can now zoom in interactively to this plot, seeing all the points available in that viewport, without ever needing to change the plot parameters for that specific zoom level. Each time you zoom or pan, a new image is rendered (which takes a few seconds for large datasets) and displayed overlaid on any other plot elements, providing full access to all of your data. Here we've used the optional `tf.dynspread` function to automatically enlarge the size of each datapoint once you've zoomed in so far that datapoints no longer have nearby neighbors.
### Customizing datashader
One of the most important features of datashading is that each of the stages of the datashader pipeline can be modified or replaced, either for personal preferences or to highlight specific aspects of the data. Here we'll use a high-level `Pipeline` object that encapsulates the typical series of steps in the above `create_image` function, and then we'll customize it. The default values of this pipeline are the same as the plot above, but here we'll add a special colormap to make the values stand out against an underlying map, and only plot hotspots (defined here as pixels (aggregation bins) that are in the 90th percentile by count):
```
import numpy as np
from functools import partial
def create_image90(x_range, y_range, w=plot_width, h=plot_height):
cvs = ds.Canvas(plot_width=w, plot_height=h, x_range=x_range, y_range=y_range)
agg = cvs.points(df, 'dropoff_x', 'dropoff_y', ds.count('passenger_count'))
img = tf.shade(agg.where(agg>np.percentile(agg,90)), cmap=inferno, how='eq_hist')
return tf.dynspread(img, threshold=0.3, max_px=4)
p = base_plot()
p.add_tile(STAMEN_TERRAIN)
export(create_image(*NYC),"NYCT_90th")
InteractiveImage(p, create_image90)
```
If you zoom in to the plot above, you can see that the 90th-percentile criterion at first highlights the most active areas in the entire dataset, and then highlights the most active areas in each subsequent viewport. Here yellow has been chosen to highlight the strongest peaks, and if you zoom in on one of those peaks you can see the most active areas in that particular geographic region, according to this dynamically evaluated definition of "most active".
The above plots each followed a roughly standard series of steps useful for many datasets, but you can instead fully customize the computations involved. This capability lets you do novel operations on the data once it has been aggregated into pixel-shaped bins. For instance, you might want to plot all the pixels where there were more dropoffs than pickups in blue, and all those where there were more pickups than dropoffs in red. To do this, just write your own function that will create an image, when given x and y ranges, a resolution (w x h), and any optional arguments needed. You can then either call the function yourself, or pass it to `InteractiveImage` to make an interactive Bokeh plot:
```
def merged_images(x_range, y_range, w=plot_width, h=plot_height, how='log'):
cvs = ds.Canvas(plot_width=w, plot_height=h, x_range=x_range, y_range=y_range)
picks = cvs.points(df, 'pickup_x', 'pickup_y', ds.count('passenger_count'))
drops = cvs.points(df, 'dropoff_x', 'dropoff_y', ds.count('passenger_count'))
drops = drops.rename({'dropoff_x': 'x', 'dropoff_y': 'y'})
picks = picks.rename({'pickup_x': 'x', 'pickup_y': 'y'})
more_drops = tf.shade(drops.where(drops > picks), cmap=["darkblue", 'cornflowerblue'], how=how)
more_picks = tf.shade(picks.where(picks > drops), cmap=["darkred", 'orangered'], how=how)
img = tf.stack(more_picks, more_drops)
return tf.dynspread(img, threshold=0.3, max_px=4)
p = base_plot(background_fill_color=background)
export(merged_images(*NYC),"NYCT_pickups_vs_dropoffs")
InteractiveImage(p, merged_images)
```
Now you can see that pickups are more common on major roads, as you'd expect, and dropoffs are more common on side streets. In Manhattan, roads running along the island are more common for pickups. If you zoom in to any location, the data will be re-aggregated to the new resolution automatically, again calculating for each newly defined pixel whether pickups or dropoffs were more likely in that pixel. The interactive features of Bokeh are now fully usable with this large dataset, allowing you to uncover new structure at every level.
We can also use other columns in the dataset as additional dimensions in the plot. For instance, we might want to see whether certain areas are more likely to have pickups at certain hours (e.g. areas with bars and restaurants might have pickups in the evening, while apartment buildings may have pickups in the morning). One way to do this is to use the hour of the day as a category, and then colorize each hour:
```
df['hour'] = pd.to_datetime(df['tpep_pickup_datetime']).dt.hour.astype('category')
colors = ["#FF0000","#FF3F00","#FF7F00","#FFBF00","#FFFF00","#BFFF00","#7FFF00","#3FFF00",
"#00FF00","#00FF3F","#00FF7F","#00FFBF","#00FFFF","#00BFFF","#007FFF","#003FFF",
"#0000FF","#3F00FF","#7F00FF","#BF00FF","#FF00FF","#FF00BF","#FF007F","#FF003F",]
def colorized_images(x_range, y_range, w=plot_width, h=plot_height, dataset="pickup"):
cvs = ds.Canvas(plot_width=w, plot_height=h, x_range=x_range, y_range=y_range)
agg = cvs.points(df, dataset+'_x', dataset+'_y', ds.count_cat('hour'))
img = tf.shade(agg, color_key=colors)
return tf.dynspread(img, threshold=0.3, max_px=4)
p = base_plot(background_fill_color=background)
#p.add_tile(STAMEN_TERRAIN)
export(colorized_images(*NYC, dataset="pickup"),"NYCT_pickup_times")
InteractiveImage(p, colorized_images, dataset="pickup")
export(colorized_images(*NYC, dataset="dropoff"),"NYCT_dropoff_times")
p = base_plot(background_fill_color=background)
InteractiveImage(p, colorized_images, dataset="dropoff")
```
Here the order of colors is roughly red (midnight), yellow (4am), green (8am), cyan (noon), blue (4pm), purple (8pm), and back to red (since hours and colors are both cyclic). There are clearly hotspots by hour that can now be investigated, and perhaps compared with the underlying map data. And you can try first filtering the dataframe to only have weekdays or weekends, or only during certain public events, etc., or filtering the resulting pixels to have only those in a certain range of interest. The system is very flexible, and it should be straightforward to express a very large range of possible queries and visualizations with very little code.
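For example, a minimal sketch of the weekday/weekend idea is shown below; the `dow` column and `weekend_df` name are introduced purely for illustration, and you would aggregate the filtered frame instead of `df` inside the image-creating function:
```
# Hypothetical filter: keep only weekend pickups (Monday=0 ... Sunday=6)
df['dow'] = pd.to_datetime(df['tpep_pickup_datetime']).dt.dayofweek
weekend_df = df[df['dow'] >= 5]
```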
The above examples each used pre-existing components provided for the datashader pipeline, but you can implement any components you like and substitute them, allowing you to easily explore and highlight specific aspects of your data. Have fun datashading!
| github_jupyter |
In-Person Workshop --- Programming in Python
===
The Hadoop MapReduce algorithm is shown in the following figure.
<img src="https://raw.githubusercontent.com/jdvelasq/datalabs/master/images/map-reduce.jpg"/>
We want to write a program that performs a word count using the MapReduce algorithm.
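Before writing the workshop functions, the four stages are illustrated below on a toy in-memory example (this is only an illustration of the pattern, not the workshop solution, which must read and write real files):
```
# Illustrative only: map -> shuffle & sort -> reduce on a tiny list of lines
lines = ["big data is big", "data is data"]

pairs = [(word, 1) for line in lines for word in line.split()]   # map
pairs.sort(key=lambda kv: kv[0])                                 # shuffle & sort

counts = []                                                      # reduce
for key, value in pairs:
    if counts and counts[-1][0] == key:
        counts[-1] = (key, counts[-1][1] + value)
    else:
        counts.append((key, value))

print(counts)   # [('big', 2), ('data', 3), ('is', 2)]
```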
```
#
# The following creates the folders /tmp/input and /tmp/output, plus three test files
#
!rm -rf /tmp/input /tmp/output
!mkdir /tmp/input
!mkdir /tmp/output
%%writefile /tmp/input/text0.txt
Analytics is the discovery, interpretation, and communication of meaningful patterns
in data. Especially valuable in areas rich with recorded information, analytics relies
on the simultaneous application of statistics, computer programming and operations research
to quantify performance.
Organizations may apply analytics to business data to describe, predict, and improve business
performance. Specifically, areas within analytics include predictive analytics, prescriptive
analytics, enterprise decision management, descriptive analytics, cognitive analytics, Big
Data Analytics, retail analytics, store assortment and stock-keeping unit optimization,
marketing optimization and marketing mix modeling, web analytics, call analytics, speech
analytics, sales force sizing and optimization, price and promotion modeling, predictive
science, credit risk analysis, and fraud analytics. Since analytics can require extensive
computation (see big data), the algorithms and software used for analytics harness the most
current methods in computer science, statistics, and mathematics
%%writefile /tmp/input/text1.txt
The field of data analysis. Analytics often involves studying past historical data to
research potential trends, to analyze the effects of certain decisions or events, or to
evaluate the performance of a given tool or scenario. The goal of analytics is to improve
the business by gaining knowledge which can be used to make improvements or changes.
%%writefile /tmp/input/text2.txt
Data analytics (DA) is the process of examining data sets in order to draw conclusions
about the information they contain, increasingly with the aid of specialized systems
and software. Data analytics technologies and techniques are widely used in commercial
industries to enable organizations to make more-informed business decisions and by
scientists and researchers to verify or disprove scientific models, theories and
hypotheses.
#
# Write the function load_input, which receives a folder as a parameter and
# returns a list of tuples where the first element of each tuple is the file
# name and the second is a line of that file. The function converts every
# line of every file into a tuple. The function is generic and must read all
# the files in the folder passed as a parameter.
#
# For example:
# [
# ('text0.txt', 'Analytics is the discovery, inter ...'),
# ('text0.txt', 'in data. Especially valuable in ar...'),
# ...
# ('text2.txt', 'hypotheses.')
# ]
#
def load_input(input_directory):
pass
#
# Write a function called mapper that receives the list of tuples produced by
# the previous function and returns a list of (key, value) tuples. In this
# case, the key is each word and the value is 1, since we are counting.
#
# [
# ('Analytics', 1),
# ('is', 1),
# ...
# ]
#
def mapper(sequence):
pass
#
# Write the function shuffle_and_sort, which receives the list of tuples
# produced by the mapper and returns a list with the same content sorted by
# key.
#
# [
# ('Analytics', 1),
# ('Analytics', 1),
# ...
# ]
#
def shuffle_and_sort(sequence):
pass
#
# Write the function reducer, which receives the result of shuffle_and_sort
# and reduces the values associated with each key by summing them. As a
# result, for example, the reduction indicates how many times the word
# analytics appears in the text.
#
def reducer(sequence):
pass
#
# Write the function save_output, which takes the list returned by the reducer
# and writes the files 'part-0.txt', 'part-1.txt', etc. to the /tmp/output/
# folder. The first file contains the first 20 counted words, the second file
# words 21 to 40, and so on. Each line of each file contains the word and the
# number of times it appears, separated by a tab.
#
def save_output(sequence, output_directory):
pass
```
| github_jupyter |
**author**: lukethompson@gmail.com<br>
**date**: 7 Oct 2017<br>
**language**: Python 3.5<br>
**license**: BSD3<br>
## alpha_diversity_90bp_100bp_150bp.ipynb
```
import pandas as pd
import math
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from empcolors import get_empo_cat_color
%matplotlib inline
```
*** Choose 2k or qc-filtered subset (one or the other) ***
```
path_map = '../../data/mapping-files/emp_qiime_mapping_subset_2k.tsv' # already has 90bp alpha-div data
version = '2k'
path_map = '../../data/mapping-files/emp_qiime_mapping_qc_filtered.tsv' # already has 90bp alpha-div data
version = 'qc'
```
*** Merged mapping file and alpha-div ***
```
path_adiv100 = '../../data/alpha-div/emp.100.min25.deblur.withtax.onlytree_5000.txt'
path_adiv150 = '../../data/alpha-div/emp.150.min25.deblur.withtax.onlytree_5000.txt'
df_map = pd.read_csv(path_map, sep='\t', index_col=0)
df_adiv100 = pd.read_csv(path_adiv100, sep='\t', index_col=0)
df_adiv150 = pd.read_csv(path_adiv150, sep='\t', index_col=0)
df_adiv100.columns = ['adiv_chao1_100bp', 'adiv_observed_otus_100bp', 'adiv_faith_pd_100bp', 'adiv_shannon_100bp']
df_adiv150.columns = ['adiv_chao1_150bp', 'adiv_observed_otus_150bp', 'adiv_faith_pd_150bp', 'adiv_shannon_150bp']
df_merged = pd.concat([df_adiv100, df_adiv150, df_map], axis=1, join='outer')
```
*** Removing all samples without 150bp alpha-div results ***
```
df1 = df_merged[['empo_3', 'adiv_observed_otus', 'adiv_observed_otus_100bp', 'adiv_observed_otus_150bp']]
df1.columns = ['empo_3', 'observed_tag_sequences_90bp', 'observed_tag_sequences_100bp', 'observed_tag_sequences_150bp']
df1.dropna(axis=0, inplace=True)
g = sns.PairGrid(df1, hue='empo_3', palette=get_empo_cat_color(returndict=True))
g = g.map(plt.scatter, alpha=0.5)
for i in [0, 1, 2]:
for j in [0, 1, 2]:
g.axes[i][j].set_xscale('log')
g.axes[i][j].set_yscale('log')
g.axes[i][j].set_xlim([1e0, 1e4])
g.axes[i][j].set_ylim([1e0, 1e4])
g.savefig('adiv_%s_scatter.pdf' % version)
sns.lmplot(x='observed_tag_sequences_90bp', y='observed_tag_sequences_150bp', col='empo_3', hue="empo_3", data=df1,
col_wrap=4, palette=get_empo_cat_color(returndict=True), size=3, markers='o',
scatter_kws={"s": 20, "alpha": 1}, fit_reg=True)
plt.xlim([0, 3000])
plt.ylim([0, 3000])
plt.savefig('adiv_%s_lmplot.pdf' % version)
df1melt = pd.melt(df1, id_vars='empo_3')
empo_list = list(set(df1melt.empo_3))
empo_list = [x for x in empo_list if type(x) is str]
empo_list.sort()
empo_colors = [get_empo_cat_color(returndict=True)[x] for x in empo_list]
for var in ['observed_tag_sequences_90bp', 'observed_tag_sequences_100bp', 'observed_tag_sequences_150bp']:
list_of = [0] * len(empo_list)
df1melt2 = df1melt[df1melt['variable'] == var].drop('variable', axis=1)
for empo in np.arange(len(empo_list)):
list_of[empo] = list(df1melt2.pivot(columns='empo_3')['value'][empo_list[empo]].dropna())
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(2.5,2.5))
plt.hist(list_of, color=empo_colors,
bins=np.logspace(np.log10(1e0),np.log10(1e4), 20),
stacked=True)
plt.xscale('log')
fig.savefig('adiv_%s_hist_%s.pdf' % (version, var))
```
| github_jupyter |
# Modeling
@Author: Bruno Vieira
Goals: Create a classification model able to identify a BOT account on twitter, using only profile-based features.
```
# Libs
import os
import numpy as np
import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.metrics import classification_report, precision_score, recall_score, roc_auc_score, average_precision_score, f1_score
from sklearn.preprocessing import OneHotEncoder, OrdinalEncoder, MinMaxScaler, StandardScaler, FunctionTransformer
from sklearn.inspection import permutation_importance
from sklearn.model_selection import cross_validate, StratifiedKFold, train_test_split
import cloudpickle
from sklearn.model_selection import learning_curve
import matplotlib.pyplot as plt
from sklearn.svm import SVC
import utils.dev.model as mdl
import importlib
import warnings
warnings.filterwarnings('ignore')
pd.set_option('display.max_columns', 100)
# Paths and Filenames
DATA_INPUT_PATH = 'data/interim'
DATA_INPUT_TRAIN_NAME = 'train_selected_features.csv'
DATA_INPUT_TEST_NAME = 'test.csv'
MODEL_OUTPUT_PATH = 'models'
MODEL_NAME = 'model_bot_classifier_v0.pkl'
df_twitter_train = pd.read_csv(os.path.join('..',DATA_INPUT_PATH, DATA_INPUT_TRAIN_NAME))
df_twitter_test = pd.read_csv(os.path.join('..',DATA_INPUT_PATH, DATA_INPUT_TEST_NAME))
df_twitter_train.replace({False:'FALSE', True:'TRUE'}, inplace=True)
df_twitter_test.replace({False:'FALSE', True:'TRUE'}, inplace=True)
```
# 1) Training
```
X_train = df_twitter_train.drop('label', axis=1)
y_train = df_twitter_train['label']
cat_columns = df_twitter_train.select_dtypes(include=['bool', 'object']).columns.tolist()
num_columns = df_twitter_train.select_dtypes(include=['int32','int64','float32', 'float64']).columns.tolist()
num_columns.remove('label')
skf = StratifiedKFold(n_splits=10)
cat_preprocessor = Pipeline(steps=[('imputer', SimpleImputer(strategy='most_frequent')),
('encoder', OneHotEncoder(handle_unknown='ignore'))])
num_preprocessor = Pipeline(steps=[('imputer', SimpleImputer(strategy='constant', fill_value=0)),
('scaler', StandardScaler())])
pipe_transformer = ColumnTransformer(transformers=[('num_pipe_preprocessor', num_preprocessor, num_columns),
('cat_pipe_preprocessor', cat_preprocessor, cat_columns)])
pipe_model = Pipeline(steps=[('pre_processor', pipe_transformer),
('model', SVC(random_state=23, kernel='rbf', gamma='scale', C=1, probability=True))])
cross_validation_results = cross_validate(pipe_model, X=X_train, y=y_train, scoring=['average_precision', 'roc_auc', 'precision', 'recall'], cv=skf, n_jobs=-1, verbose=0, return_train_score=True)
pipe_model.fit(X_train, y_train)
```
# 2) Evaluation
## 2.1) Cross Validation
```
cross_validation_results = pd.DataFrame(cross_validation_results)
cross_validation_results
1.96*cross_validation_results['train_average_precision'].std()
print(f"Avg Train Avg Precision:{np.round(cross_validation_results['train_average_precision'].mean(), 2)} +/- {np.round(1.96*cross_validation_results['train_average_precision'].std(), 2)}")
print(f"Avg Test Avg Precision:{np.round(cross_validation_results['test_average_precision'].mean(), 2)} +/- {np.round(1.96*cross_validation_results['test_average_precision'].std(), 2)}")
print(f"ROC - AUC Train:{np.round(cross_validation_results['train_roc_auc'].mean(), 2)} +/- {np.round(1.96*cross_validation_results['train_roc_auc'].std(), 2)}")
print(f"ROC - AUC Test:{np.round(cross_validation_results['test_roc_auc'].mean(), 2)} +/- {np.round(1.96*cross_validation_results['test_roc_auc'].std(), 2)}")
```
## 2.2) Test Set
```
def build_features(df):
list_columns_colors = df.filter(regex='color').columns.tolist()
df = df.replace({'false':'FALSE', 'true':'TRUE', False:'FALSE', True:'TRUE'})
df['name'] = df['name'].apply(lambda x: len(x) if x is not np.nan else 0)
df['profile_location'] = df['profile_location'].apply(lambda x: 'TRUE' if x is not np.nan else 'FALSE')
df['rate_friends_followers'] = df['friends_count']/df['followers_count']
df['rate_friends_followers'] = df['rate_friends_followers'].replace({np.inf: 0, np.nan: 0})  # replace (not map) so finite ratios are preserved
df['unique_colors'] = df[list_columns_colors].stack().groupby(level=0).nunique()
return df
df_twitter_test = build_features(df_twitter_test)
columns_to_predict = df_twitter_train.columns.tolist()
df_twitter_test = df_twitter_test.loc[:,columns_to_predict]
X_test = df_twitter_test.drop('label', axis=1)
y_test = df_twitter_test['label']
```
## 2.3) Metrics
```
y_train_predict = pipe_model.predict_proba(X_train)
y_test_predict = pipe_model.predict_proba(X_test)
df_metrics_train = mdl.eval_thresh(y_real = y_train, y_proba = y_train_predict[:,1])
df_metrics_test = mdl.eval_thresh(y_real = y_test, y_proba = y_test_predict[:,1])
importlib.reload(mdl)
mdl.plot_metrics(df_metrics_train)
mdl.plot_metrics(df_metrics_test)
```
## 2.4) Learning Curve
```
train_sizes, train_scores, validation_scores = learning_curve(estimator = pipe_model,
X = X_train,
y = y_train,
cv = 5,
train_sizes=np.linspace(0.1, 1, 10),
scoring = 'neg_log_loss')
train_scores_mean = train_scores.mean(axis=1)
validation_scores_mean = validation_scores.mean(axis=1)
plt.style.use('seaborn')
plt.plot(train_sizes, train_scores_mean, label = 'Training error')
plt.plot(train_sizes, validation_scores_mean, label = 'Validation error')
plt.ylabel('Negative log loss', fontsize = 14)
plt.xlabel('Training set size', fontsize = 14)
plt.title('Learning curves', fontsize = 18, y = 1.03)
plt.legend()
plt.show()
```
## 2.5) Ordering
## 2.6) Calibration
# 3) Saving the Model
```
with open(os.path.join('..', MODEL_OUTPUT_PATH, MODEL_NAME), 'wb') as f:
cloudpickle.dump(pipe_model, f)
```
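As a quick sanity check (a minimal sketch, not part of the original notebook), the persisted pipeline can be reloaded and used on raw feature rows; the standard `pickle` module can read objects written by `cloudpickle`:
```
import pickle

with open(os.path.join('..', MODEL_OUTPUT_PATH, MODEL_NAME), 'rb') as f:
    loaded_model = pickle.load(f)

# The pipeline bundles the preprocessing, so raw feature rows can be scored directly
print(loaded_model.predict_proba(X_test.head())[:, 1])
```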
| github_jupyter |
```
from IPython import display
from utils import Logger
import torch
from torch import nn
from torch.optim import Adam
from torch.autograd import Variable
from torchvision import transforms, datasets
DATA_FOLDER = './torch_data/VGAN/MNIST'
```
## Load Data
```
def mnist_data():
compose = transforms.Compose(
[transforms.ToTensor(),
         transforms.Normalize((.5,), (.5,))  # MNIST images have a single channel
])
out_dir = '{}/dataset'.format(DATA_FOLDER)
return datasets.MNIST(root=out_dir, train=True, transform=compose, download=True)
data = mnist_data()
batch_size = 100
data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size, shuffle=True)
num_batches = len(data_loader)
```
## Networks
```
class DiscriminativeNet(torch.nn.Module):
"""
    A three hidden-layer discriminative neural network
"""
def __init__(self):
super(DiscriminativeNet, self).__init__()
n_features = 784
n_out = 1
self.hidden0 = nn.Sequential(
nn.Linear(n_features, 1024),
nn.LeakyReLU(0.2),
nn.Dropout(0.3)
)
self.hidden1 = nn.Sequential(
nn.Linear(1024, 512),
nn.LeakyReLU(0.2),
nn.Dropout(0.3)
)
self.hidden2 = nn.Sequential(
nn.Linear(512, 256),
nn.LeakyReLU(0.2),
nn.Dropout(0.3)
)
self.out = nn.Sequential(
torch.nn.Linear(256, n_out),
torch.nn.Sigmoid()
)
def forward(self, x):
x = self.hidden0(x)
x = self.hidden1(x)
x = self.hidden2(x)
x = self.out(x)
return x
def images_to_vectors(images):
return images.view(images.size(0), 784)
def vectors_to_images(vectors):
return vectors.view(vectors.size(0), 1, 28, 28)
class GenerativeNet(torch.nn.Module):
"""
A three hidden-layer generative neural network
"""
def __init__(self):
super(GenerativeNet, self).__init__()
n_features = 100
n_out = 784
self.hidden0 = nn.Sequential(
nn.Linear(n_features, 256),
nn.LeakyReLU(0.2)
)
self.hidden1 = nn.Sequential(
nn.Linear(256, 512),
nn.LeakyReLU(0.2)
)
self.hidden2 = nn.Sequential(
nn.Linear(512, 1024),
nn.LeakyReLU(0.2)
)
self.out = nn.Sequential(
nn.Linear(1024, n_out),
nn.Tanh()
)
def forward(self, x):
x = self.hidden0(x)
x = self.hidden1(x)
x = self.hidden2(x)
x = self.out(x)
return x
# Noise
def noise(size):
n = Variable(torch.randn(size, 100))
    if torch.cuda.is_available(): return n.cuda()
return n
discriminator = DiscriminativeNet()
generator = GenerativeNet()
if torch.cuda.is_available():
discriminator.cuda()
generator.cuda()
```
## Optimization
```
# Optimizers
d_optimizer = Adam(discriminator.parameters(), lr=0.0002)
g_optimizer = Adam(generator.parameters(), lr=0.0002)
# Loss function
loss = nn.BCELoss()
# Number of steps to apply to the discriminator
d_steps = 1  # In Goodfellow et al. (2014) this is set to 1; note it is not used in the training loop below
# Number of epochs
num_epochs = 200
```
## Training
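For reference, the two training functions below implement the standard GAN game from Goodfellow et al. (2014), $\min_G \max_D \; \mathbb{E}_{x \sim p_{data}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]$, with the non-saturating generator loss: rather than minimizing $\log(1 - D(G(z)))$, `train_generator` maximizes $\log D(G(z))$ by labelling the generated samples as real inside the BCE loss.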
```
def real_data_target(size):
'''
Tensor containing ones, with shape = size
'''
data = Variable(torch.ones(size, 1))
if torch.cuda.is_available(): return data.cuda()
return data
def fake_data_target(size):
'''
Tensor containing zeros, with shape = size
'''
data = Variable(torch.zeros(size, 1))
if torch.cuda.is_available(): return data.cuda()
return data
def train_discriminator(optimizer, real_data, fake_data):
# Reset gradients
optimizer.zero_grad()
# 1.1 Train on Real Data
prediction_real = discriminator(real_data)
# Calculate error and backpropagate
error_real = loss(prediction_real, real_data_target(real_data.size(0)))
error_real.backward()
# 1.2 Train on Fake Data
prediction_fake = discriminator(fake_data)
# Calculate error and backpropagate
error_fake = loss(prediction_fake, fake_data_target(real_data.size(0)))
error_fake.backward()
# 1.3 Update weights with gradients
optimizer.step()
# Return error
return error_real + error_fake, prediction_real, prediction_fake
def train_generator(optimizer, fake_data):
# 2. Train Generator
# Reset gradients
optimizer.zero_grad()
    # Score the generated (fake) data with the discriminator
prediction = discriminator(fake_data)
# Calculate error and backpropagate
error = loss(prediction, real_data_target(prediction.size(0)))
error.backward()
# Update weights with gradients
optimizer.step()
# Return error
return error
```
### Generate Samples for Testing
```
num_test_samples = 16
test_noise = noise(num_test_samples)
```
### Start training
```
logger = Logger(model_name='VGAN', data_name='MNIST')
for epoch in range(num_epochs):
for n_batch, (real_batch,_) in enumerate(data_loader):
# 1. Train Discriminator
real_data = Variable(images_to_vectors(real_batch))
if torch.cuda.is_available(): real_data = real_data.cuda()
# Generate fake data
fake_data = generator(noise(real_data.size(0))).detach()
# Train D
d_error, d_pred_real, d_pred_fake = train_discriminator(d_optimizer,
real_data, fake_data)
# 2. Train Generator
# Generate fake data
fake_data = generator(noise(real_batch.size(0)))
# Train G
g_error = train_generator(g_optimizer, fake_data)
# Log error
logger.log(d_error, g_error, epoch, n_batch, num_batches)
# Display Progress
if (n_batch) % 100 == 0:
display.clear_output(True)
# Display Images
test_images = vectors_to_images(generator(test_noise)).data.cpu()
logger.log_images(test_images, num_test_samples, epoch, n_batch, num_batches);
# Display status Logs
logger.display_status(
epoch, num_epochs, n_batch, num_batches,
d_error, g_error, d_pred_real, d_pred_fake
)
# Model Checkpoints
logger.save_models(generator, discriminator, epoch)
```
| github_jupyter |
```
from simforest import SimilarityForestClassifier, SimilarityForestRegressor
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.datasets import load_svmlight_file
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import f1_score
from scipy.stats import pearsonr
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from bias import create_numerical_feature_classification, create_categorical_feature_classification
from bias import create_numerical_feature_regression, create_categorical_feature_regression
from bias import get_permutation_importances, bias_experiment, plot_bias
sns.set_style('whitegrid')
SEED = 42
import warnings
warnings.filterwarnings('ignore')
```
# Read the data
```
X, y = load_svmlight_file('data/heart')
X = X.toarray().astype(np.float32)
y[y==-1] = 0
features = [f'f{i+1}' for i in range(X.shape[1])]
df = pd.DataFrame(X, columns=features)
df.head()
```
# Add new numerical feature
Create a synthetic column that is strongly correlated with the target.
Each value is calculated according to the formula:
`v = y * a + random(-b, b)`
so it is a scaled target value with some added noise.
A fraction of the values is then permuted to reduce the correlation.
In this case, a=10, b=5, fraction=0.05.
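The helper used below lives in the project's `bias` module; as a rough, hypothetical sketch of the recipe above (not the actual implementation), it could look like this:
```
# Hypothetical sketch of create_numerical_feature_classification;
# the real implementation lives in bias.py
import numpy as np
from scipy.stats import pearsonr

def make_noisy_feature(y, a=10, b=5, fraction=0.05, seed=42):
    rng = np.random.RandomState(seed)
    # Scaled target plus uniform noise in [-b, b]
    feature = y * a + rng.uniform(-b, b, size=len(y))
    # Permute a fraction of the values to weaken the correlation
    idx = rng.choice(len(y), size=int(fraction * len(y)), replace=False)
    feature[idx] = rng.permutation(feature[idx])
    corr, _ = pearsonr(feature, y)
    return feature, corr
```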
```
if 'new_feature' in df.columns:
df.pop('new_feature')
new_feature, corr = create_numerical_feature_classification(y, fraction=0.05, seed=SEED, verbose=True)
df = pd.concat([pd.Series(new_feature, name='new_feature'), df], axis=1)
plt.scatter(new_feature, y, alpha=0.3)
plt.xlabel('Feature value')
plt.ylabel('Target')
plt.title('Synthetic numerical feature');
```
# Random Forest feature importance
Random Forest offers a simple way to measure feature importance: a feature is considered important if it frequently reduced node impurity while the trees were being fit.
Adding a feature strongly correlated with the target improved the model's performance compared to the results obtained without it. Moreover, the new feature dominates the predictions: the plot shows that it is far more important than any of the original features.
```
X_train, X_test, y_train, y_test = train_test_split(
df, y, test_size=0.3, random_state=SEED)
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
rf = RandomForestClassifier(random_state=SEED)
rf.fit(X_train, y_train)
rf_pred = rf.predict(X_test)
print(f'Random Forest f1 score: {round(f1_score(y_test, rf_pred), 3)}')
df_rf_importances = pd.DataFrame(rf.feature_importances_, index=df.columns.values, columns=['importance'])
df_rf_importances = df_rf_importances.sort_values(by='importance', ascending=False)
df_rf_importances.plot()
plt.title('Biased Random Forest feature importance');
```
# Permutation feature importance
The impurity-based feature importance of Random Forests suffers from being computed on statistics derived from the training dataset: the importances can be high even for features that are not predictive of the target variable, as long as the model has the capacity to use them to overfit.
Furthermore, Random Forest feature importance is biased towards high-cardinality numerical features.
In this experiment, we will use permutation feature importance to assess how strongly Random Forest and Similarity Forest
depend on the synthetic feature. This method is more reliable, and it also works for Similarity Forest, which does not expose an impurity-based feature importance measure.
Source: https://scikit-learn.org/stable/auto_examples/inspection/plot_permutation_importance.html
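For comparison, here is a minimal sketch of the same idea using scikit-learn's built-in `permutation_importance` (the notebook's `get_permutation_importances` helper presumably wraps a similar computation):
```
from sklearn.inspection import permutation_importance

# Shuffle each column of the test set and measure the drop in score
result = permutation_importance(rf, X_test, y_test, n_repeats=10, random_state=SEED)
for name, importance in sorted(zip(df.columns, result.importances_mean),
                               key=lambda pair: pair[1], reverse=True)[:5]:
    print(f'{name}: {importance:.3f}')
```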
```
sf = SimilarityForestClassifier(n_estimators=100, random_state=SEED).fit(X_train, y_train)
perm_importance_results = get_permutation_importances(rf, sf,
X_train, y_train, X_test, y_test,
corr, df.columns.values, plot=True)
fraction_range = [0.0, 0.02, 0.05, 0.08, 0.1, 0.15, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 1.0]
correlations, rf_scores, sf_scores, permutation_importances = bias_experiment(df, y,
'classification', 'numerical',
fraction_range, SEED)
plot_bias(fraction_range, correlations,
rf_scores, sf_scores,
permutation_importances, 'heart')
```
# New categorical feature
```
if 'new_feature' in df.columns:
df.pop('new_feature')
new_feature, corr = create_categorical_feature_classification(y, fraction=0.05, seed=SEED, verbose=True)
df = pd.concat([pd.Series(new_feature, name='new_feature'), df], axis=1)
df_category = pd.concat([pd.Series(new_feature, name='new_feature'), pd.Series(y, name='y')], axis=1)
fig = plt.figure(figsize=(8, 6))
sns.countplot(data=df_category, x='new_feature', hue='y')
plt.xlabel('Feature value, grouped by class')
plt.ylabel('Count')
plt.title('Synthetic categorical feature', fontsize=16);
X_train, X_test, y_train, y_test = train_test_split(
df, y, test_size=0.3, random_state=SEED)
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
rf = RandomForestClassifier(random_state=SEED).fit(X_train, y_train)
sf = SimilarityForestClassifier(n_estimators=100, random_state=SEED).fit(X_train, y_train)
perm_importance_results = get_permutation_importances(rf, sf,
X_train, y_train, X_test, y_test,
corr, df.columns.values, plot=True)
correlations, rf_scores, sf_scores, permutation_importances = bias_experiment(df, y,
'classification', 'categorical',
fraction_range, SEED)
plot_bias(fraction_range, correlations,
rf_scores, sf_scores,
permutation_importances, 'heart')
```
# Regression, numerical feature
```
X, y = load_svmlight_file('data/mpg')
X = X.toarray().astype(np.float32)
features = [f'f{i+1}' for i in range(X.shape[1])]
df = pd.DataFrame(X, columns=features)
df.head()
if 'new_feature' in df.columns:
df.pop('new_feature')
new_feature, corr = create_numerical_feature_regression(y, fraction=0.2, seed=SEED, verbose=True)
df = pd.concat([pd.Series(new_feature, name='new_feature'), df], axis=1)
plt.scatter(new_feature, y, alpha=0.3)
plt.xlabel('Feature value')
plt.ylabel('Target')
plt.title('Synthetic numerical feature');
X_train, X_test, y_train, y_test = train_test_split(
df, y, test_size=0.3, random_state=SEED)
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
rf = RandomForestRegressor(random_state=SEED).fit(X_train, y_train)
sf = SimilarityForestRegressor(n_estimators=100, random_state=SEED).fit(X_train, y_train)
perm_importance_results = get_permutation_importances(rf, sf,
X_train, y_train, X_test, y_test,
corr, df.columns.values, plot=True)
correlations, rf_scores, sf_scores, permutation_importances = bias_experiment(df, y,
'regression', 'numerical',
fraction_range, SEED)
plot_bias(fraction_range, correlations,
rf_scores, sf_scores,
permutation_importances, 'mpg')
```
# Regression, categorical feature
```
if 'new_feature' in df.columns:
df.pop('new_feature')
new_feature, corr = create_categorical_feature_regression(y, fraction=0.15, seed=SEED, verbose=True)
df = pd.concat([pd.Series(new_feature, name='new_feature'), df], axis=1)
plt.scatter(new_feature, y, alpha=0.3)
plt.xlabel('Feature value')
plt.ylabel('Target')
plt.title('Synthetic categorical feature');
X_train, X_test, y_train, y_test = train_test_split(
df, y, test_size=0.3, random_state=SEED)
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
rf = RandomForestRegressor(random_state=SEED).fit(X_train, y_train)
sf = SimilarityForestRegressor(n_estimators=100, random_state=SEED).fit(X_train, y_train)
perm_importance_results = get_permutation_importances(rf, sf,
X_train, y_train, X_test, y_test,
corr, df.columns.values, plot=True)
correlations, rf_scores, sf_scores, permutation_importances = bias_experiment(df, y,
'regression', 'categorical',
fraction_range, SEED)
plot_bias(fraction_range, correlations,
rf_scores, sf_scores,
permutation_importances, 'mpg')
```
| github_jupyter |
# Heat Maps
A heat map is a two-dimensional representation of data in which values are represented by colors. A simple heat map provides an immediate visual summary of information.
```
from beakerx import *
data = [[533.08714795974, 484.92105712087596, 451.63070008303896, 894.4451947886148, 335.44965728686225, 640.9424094527392, 776.2709495045433, 621.8819257981404, 793.2905673902735, 328.97078791524234, 139.26962328268513, 800.9314566259062, 629.0795214099808, 418.90954534196544, 513.8036215424278, 742.9834968485734, 542.9393528649774, 671.4256827205828, 507.1129322933082, 258.8238039352692, 581.0354187924672, 190.1830169180297, 480.461111816312, 621.621218137835, 650.6023460248642, 635.7577683708486, 605.5201537254429, 364.55368485516846, 554.807212844458, 526.1823154945637], [224.1432052432479, 343.26660237811336, 228.29828973027486, 550.3809606942758, 340.16890889700994, 214.05332637480836, 461.3159325548031, 471.2546571575069, 503.071081294441, 757.4281483575993, 493.82140462579406, 579.4302306011925, 459.76905409338497, 580.1282535427403, 378.8722877921564, 442.8806517248869, 573.9346962907078, 449.0587543606964, 383.50503527041144, 378.90761994599256, 755.1883447435789, 581.6815170672886, 426.56807864689773, 602.6727518023347, 555.6481983927658, 571.1201152862207, 372.24744704437876, 424.73180136220844, 739.9173564499195, 462.3257604373609], [561.8684320610753, 604.2859791599086, 518.3421287392559, 524.6887104615442, 364.41920277904774, 433.37737233751386, 565.0508404421712, 533.6030951907703, 306.68809206630397, 738.7229466356732, 766.9678519097575, 699.8457506281374, 437.0340850742263, 802.4400914789037, 417.38754410115075, 907.5825538527938, 521.4281410545287, 318.6109350534576, 435.8275858900637, 463.82924688853524, 533.4069709666686, 404.50516534982546, 332.6966202103611, 560.0346672408426, 436.9691072984075, 631.3453929454839, 585.1581992195356, 522.3209865675237, 497.57041075817443, 525.8867246757814], [363.4020792898871, 457.31257834906256, 333.21325206873564, 508.0466632081777, 457.1905718373847, 611.2168422907173, 515.2088862309242, 674.5569500790505, 748.0512665828364, 889.7281605626981, 363.6454276219251, 647.0396659692233, 574.150119779024, 721.1853645071792, 309.5388283799724, 450.51745569875845, 339.1271937333267, 630.6976744426033, 630.1571298446103, 615.0700456998867, 780.7843408745639, 205.13803869051543, 784.5916902014255, 498.10545868387925, 553.936345186856, 207.59216580556847, 488.12270849418735, 422.6667046886397, 292.1061953879919, 565.1595338825396], [528.5186504364794, 642.5542319036714, 563.8776991112292, 537.0271437681837, 430.4056097950834, 384.50193545472877, 693.3404035076994, 573.0278734604005, 261.2443087970927, 563.412635691231, 258.13860041989085, 550.150017102056, 477.70582135030617, 509.4311099345934, 661.3308013433317, 523.1175760654914, 370.29659041946326, 557.8704186019502, 353.66591951113645, 510.5389425077261, 469.11212447314324, 626.2863927887214, 318.5642686423241, 141.13900677851177, 486.00711121264453, 542.0075639686526, 448.7161764573215, 376.65492084577164, 166.56246586635706, 718.6147921685923], [435.403218786657, 470.74259129379413, 615.3542648093958, 483.61792559031693, 607.9455289424717, 454.9949861614464, 869.45041758392, 750.3595195751914, 754.7958625343501, 508.38715645396553, 368.2779213892305, 662.23752125613, 350.46366230046397, 619.8010888063362, 497.9560438683688, 420.64163974607766, 487.16698403905633, 273.3352931767504, 354.02637708217384, 457.9408818614016, 496.2986534025747, 364.84710143814976, 458.29907844925157, 634.073520178434, 558.7161089429649, 603.6634230782621, 514.1019407724017, 539.6741842214251, 585.0639516732675, 488.3003071211236], [334.0264519516021, 459.5702037859653, 543.8547654459309, 471.6623772418301, 
500.98627686914386, 740.3857774449933, 487.4853744264201, 664.5373560191691, 573.764159193263, 471.32565842016527, 448.8845519093864, 729.3173859836543, 453.34766656988694, 428.4975196541853, 575.1404740691066, 190.18782164376034, 243.90403003048107, 430.03959300145215, 429.08666492876233, 508.89662188951297, 669.6400651031191, 516.2894766192492, 441.39320293407405, 653.1948574772491, 529.6831617222962, 176.0833629734244, 568.7136007686755, 461.66494617366294, 443.39303344518356, 840.642834252332], [347.676690455591, 475.0701395711058, 383.94468812449156, 456.7512619303556, 547.1719187673109, 224.69458657065758, 458.98685335259506, 599.8561007491281, 231.02565460233575, 610.5318803183029, 763.3423474509603, 548.8104762105211, 445.95788564834953, 844.6566709331175, 591.2236009653337, 586.0438760821825, 399.6820689195621, 395.17360423878256, 535.9853351258233, 332.27242110850426, 801.7584039310705, 190.6337233666032, 805.700536966829, 799.6824375238089, 346.29917202656327, 611.7423892505719, 705.8824305058062, 535.9691379719488, 488.1708623023391, 604.3772264289142], [687.7108994865216, 483.44749361779685, 661.8182197739575, 591.5452701990528, 151.60961549943875, 524.1475889465452, 745.1142999852398, 665.6103992924466, 701.3015233859578, 648.9854638583182, 403.08097902196505, 384.97216329583586, 442.52161997463816, 590.5026536093199, 219.04366558018955, 899.2103705796073, 562.4908789323547, 666.088957218587, 496.97593850278065, 777.9572405840922, 531.7316118485633, 500.7782009017233, 646.4095967934252, 633.5713368259554, 608.1857007168994, 585.4020395597571, 490.06193749044934, 463.884131549627, 632.7981360348942, 634.8055942938928], [482.5550451528366, 691.7011356960619, 496.2851035642388, 529.4040886765091, 444.3593296445004, 198.06208336708823, 365.6472909266031, 391.3885069938369, 859.494451604626, 275.19483951927816, 568.4478784631463, 203.74971298680123, 676.2053582803082, 527.9859302404323, 714.4565600799949, 288.9012675397431, 629.6056652113498, 326.2525932990075, 519.5740740263301, 696.8119752318905, 347.1796230415255, 388.6576994098651, 357.54758351840974, 873.5528483422207, 507.0189947052724, 508.1981784529926, 536.9527958233257, 871.2838601964829, 361.93416709279154, 496.5981745168124]]
data2 = [[103,104,104,105,105,106,106,106,107,107,106,106,105,105,104,104,104,104,105,107,107,106,105,105,107,108,109,110,110,110,110,110,110,109,109,109,109,109,109,108,107,107,107,107,106,106,105,104,104,104,104,104,104,104,103,103,103,103,102,102,101,101,100,100,100,100,100,99,98,97,97,96,96,96,96,96,96,96,95,95,95,94,94,94,94,94,94], [104,104,105,105,106,106,107,107,107,107,107,107,107,106,106,106,106,106,106,108,108,108,106,106,108,109,110,110,112,112,113,112,111,110,110,110,110,109,109,109,108,107,107,107,107,106,106,105,104,104,104,104,104,104,104,103,103,103,103,102,102,101,101,100,100,100,100,99,99,98,97,97,96,96,96,96,96,96,96,95,95,95,94,94,94,94,94], [104,105,105,106,106,107,107,108,108,108,108,108,108,108,108,108,108,108,108,108,110,110,110,110,110,110,110,111,113,115,116,115,113,112,110,110,110,110,110,110,109,108,108,108,108,107,106,105,105,105,105,105,105,104,104,104,104,103,103,103,102,102,102,101,100,100,100,99,99,98,97,97,96,96,96,96,96,96,96,96,95,95,94,94,94,94,94], [105,105,106,106,107,107,108,108,109,109,109,109,109,110,110,110,110,110,110,110,111,112,115,115,115,115,115,116,116,117,119,118,117,116,114,113,112,110,110,110,110,110,110,109,109,108,107,106,106,106,106,106,105,105,105,104,104,104,103,103,103,102,102,102,101,100,100,99,99,98,97,97,96,96,96,96,96,96,96,96,95,95,94,94,94,94,94], [105,106,106,107,107,108,108,109,109,110,110,110,110,111,110,110,110,110,111,114,115,116,121,121,121,121,121,122,123,124,124,123,121,119,118,117,115,114,112,111,110,110,110,110,110,110,109,109,108,109,107,107,106,106,105,105,104,104,104,104,103,103,102,102,102,101,100,100,99,99,98,97,96,96,96,96,96,96,96,96,95,95,94,94,94,94,94], [106,106,107,107,107,108,109,109,110,110,111,111,112,113,112,111,111,112,115,118,118,119,126,128,128,127,128,128,129,130,129,128,127,125,122,120,118,117,115,114,112,110,110,110,110,110,111,110,110,110,109,109,108,107,106,105,105,105,104,104,104,103,103,102,102,102,101,100,99,99,98,97,96,96,96,96,96,96,96,96,95,95,94,94,94,94,94], [106,107,107,108,108,108,109,110,110,111,112,113,114,115,114,115,116,116,119,123,125,130,133,134,134,134,134,135,135,136,135,134,132,130,128,124,121,119,118,116,114,112,111,111,111,112,112,111,110,110,110,109,108,108,107,108,107,106,105,104,104,104,103,103,103,102,101,100,99,99,98,97,96,96,96,96,96,96,96,96,95,95,95,94,94,94,94], [107,107,108,108,109,109,110,110,112,113,114,115,116,117,117,120,120,121,123,129,134,136,138,139,139,139,140,142,142,141,141,140,137,134,131,127,124,122,120,118,117,115,113,114,113,114,114,113,112,111,110,110,109,108,107,106,105,105,105,104,104,104,103,103,103,101,100,100,99,99,98,97,96,96,96,96,96,96,96,96,96,95,95,94,94,94,94], [107,108,108,109,109,110,111,112,114,115,116,117,118,119,121,125,125,127,131,136,140,141,142,144,144,145,148,149,148,147,146,144,140,138,136,130,127,125,123,121,119,118,117,117,116,116,116,115,114,113,113,111,110,109,108,107,106,105,105,103,103,102,102,102,103,101,100,100,100,99,98,98,97,96,96,96,96,96,96,96,96,95,95,95,94,94,94], [107,108,109,109,110,110,110,113,115,117,118,119,120,123,126,129,131,134,139,142,144,145,147,148,150,152,154,154,153,154,151,149,146,143,140,136,130,128,126,124,122,121,120,119,118,117,117,117,116,116,115,113,112,110,109,108,107,106,106,105,104,103,102,101,101,100,100,100,100,99,99,98,97,96,96,96,96,96,96,96,96,95,95,95,94,94,94], 
[107,108,109,109,110,110,110,112,115,117,119,122,125,127,130,133,137,141,143,145,148,149,152,155,157,159,160,160,161,162,159,156,153,149,146,142,139,134,130,128,126,125,122,120,120,120,119,119,119,118,117,115,113,111,110,110,109,108,107,106,106,105,104,104,103,102,100,100,100,99,99,98,97,96,96,96,96,96,96,96,96,95,95,95,95,94,94], [108,108,109,109,110,110,110,112,115,118,121,125,128,131,134,138,141,145,147,149,152,157,160,161,163,166,169,170,170,171,168,162,158,155,152,148,144,140,136,132,129,127,124,122,121,120,120,120,120,120,119,117,115,113,110,110,110,110,109,108,108,107,107,106,105,104,102,100,100,100,99,98,97,96,96,96,96,96,96,96,96,96,95,95,95,94,94], [108,109,109,110,110,111,112,114,117,120,124,128,131,135,138,142,145,149,152,155,158,163,166,167,170,173,175,175,175,173,171,169,164,160,156,153,149,144,140,136,131,129,126,124,123,123,122,121,120,120,120,119,117,115,111,110,110,110,110,110,109,109,110,109,108,106,103,101,100,100,100,98,97,96,96,96,96,96,96,96,96,96,95,95,95,95,94], [108,109,110,110,110,113,114,116,119,122,126,131,134,138,141,145,149,152,156,160,164,169,171,174,177,175,178,179,177,175,174,172,168,163,160,157,151,147,143,138,133,130,128,125,125,124,123,122,121,121,120,120,118,116,115,111,110,110,110,110,113,114,113,112,110,107,105,102,100,100,100,98,97,96,96,96,96,96,96,96,96,96,96,95,95,95,94], [108,109,110,110,112,115,116,118,122,125,129,133,137,140,144,149,152,157,161,165,169,173,176,179,179,180,180,180,178,178,176,175,171,165,163,160,153,148,143,139,135,132,129,128,127,125,124,124,123,123,122,122,120,118,117,118,115,117,118,118,119,117,116,115,112,109,107,105,100,100,100,100,97,96,96,96,96,96,96,96,96,96,96,95,95,95,95], [108,109,110,111,114,116,118,122,127,130,133,136,140,144,148,153,157,161,165,169,173,177,180,180,180,180,181,180,180,180,179,178,173,168,165,161,156,149,143,139,136,133,130,129,128,126,126,125,125,125,125,124,122,121,120,120,120,120,121,122,123,122,120,117,114,111,108,106,105,100,100,100,100,96,96,96,96,96,96,96,96,96,96,96,95,95,95], [107,108,110,113,115,118,121,126,131,134,137,140,143,148,152,157,162,165,169,173,177,181,181,181,180,181,181,181,180,180,180,178,176,170,167,163,158,152,145,140,137,134,132,130,129,127,127,126,127,128,128,126,125,125,125,123,126,128,129,130,130,125,124,119,116,114,112,110,107,106,105,100,100,100,96,96,96,96,96,96,96,96,96,96,96,95,95], [107,109,111,116,119,122,125,130,135,137,140,144,148,152,156,161,165,168,172,177,181,184,181,181,181,180,180,180,180,180,180,178,178,173,168,163,158,152,146,141,138,136,134,132,130,129,128,128,130,130,130,129,128,129,129,130,132,133,133,134,134,132,128,122,119,116,114,112,108,106,105,105,100,100,100,97,97,97,97,97,97,97,96,96,96,96,95], [108,110,112,117,122,126,129,135,139,141,144,149,153,156,160,165,168,171,177,181,184,185,182,180,180,179,178,178,180,179,179,178,176,173,168,163,157,152,148,143,139,137,135,133,131,130,130,131,132,132,132,131,132,132,133,134,136,137,137,137,136,134,131,124,121,118,116,114,111,109,107,106,105,100,100,100,97,97,97,97,97,97,97,96,96,96,96], [108,110,114,120,126,129,134,139,142,144,146,152,158,161,164,168,171,175,181,184,186,186,183,179,178,178,177,175,178,177,177,176,175,173,168,162,156,153,149,145,142,140,138,136,133,132,132,132,134,134,134,134,135,136,137,138,140,140,140,140,139,137,133,127,123,120,118,115,112,108,108,106,106,105,100,100,100,98,98,98,98,98,98,97,96,96,96], 
[108,110,116,122,128,133,137,141,143,146,149,154,161,165,168,172,175,180,184,188,189,187,182,178,176,176,175,173,174,173,175,174,173,171,168,161,157,154,150,148,145,143,141,138,135,135,134,135,135,136,136,137,138,139,140,140,140,140,140,140,140,139,135,130,126,123,120,117,114,111,109,108,107,106,105,100,100,100,99,99,98,98,98,98,97,97,96], [110,112,118,124,130,135,139,142,145,148,151,157,163,169,172,176,179,183,187,190,190,186,180,177,175,173,170,169,169,170,171,172,170,170,167,163,160,157,154,152,149,147,144,140,137,137,136,137,138,138,139,140,141,140,140,140,140,140,140,140,140,138,134,131,128,124,121,118,115,112,110,109,108,107,106,105,100,100,100,99,99,99,98,98,98,97,97], [110,114,120,126,131,136,140,143,146,149,154,159,166,171,177,180,182,186,190,190,190,185,179,174,171,168,166,163,164,163,166,169,170,170,168,164,162,161,158,155,153,150,147,143,139,139,139,139,140,141,141,142,142,141,140,140,140,140,140,140,140,137,134,131,128,125,122,119,116,114,112,110,109,109,108,107,105,100,100,100,99,99,99,98,98,97,97], [110,115,121,127,132,136,140,144,148,151,157,162,169,174,178,181,186,188,190,191,190,184,177,172,168,165,162,159,158,158,159,161,166,167,169,166,164,163,161,159,156,153,149,146,142,142,141,142,143,143,143,143,144,142,141,140,140,140,140,140,140,138,134,131,128,125,123,120,117,116,114,112,110,109,108,107,106,105,102,101,100,99,99,99,98,98,97], [110,116,121,127,132,136,140,144,148,154,160,166,171,176,180,184,189,190,191,191,191,183,176,170,166,163,159,156,154,155,155,158,161,165,170,167,166,165,163,161,158,155,152,150,146,145,145,145,146,146,144,145,145,144,142,141,140,140,140,140,138,136,134,131,128,125,123,121,119,117,115,113,112,111,111,110,108,106,105,102,100,100,99,99,99,98,98], [110,114,119,126,131,135,140,144,149,158,164,168,172,176,183,184,189,190,191,191,190,183,174,169,165,161,158,154,150,151,152,155,159,164,168,168,168,167,165,163,160,158,155,153,150,148,148,148,148,148,147,146,146,145,143,142,141,140,139,138,136,134,132,131,128,126,124,122,120,118,116,114,113,113,112,111,108,107,106,105,104,102,100,99,99,99,99], [110,113,119,125,131,136,141,145,150,158,164,168,172,177,183,187,189,191,192,191,190,183,174,168,164,160,157,153,150,149,150,154,158,162,166,170,170,168,166,164,162,160,158,155,152,151,151,151,151,151,149,148,147,146,145,143,142,140,139,137,135,134,132,131,129,127,125,123,121,119,117,116,114,114,113,112,110,108,107,105,103,100,100,100,100,99,99], [110,112,118,124,130,136,142,146,151,157,163,168,174,178,183,187,189,190,191,192,189,182,174,168,164,160,157,153,149,148,149,153,157,161,167,170,170,170,168,166,165,163,159,156,154,153,155,155,155,155,152,150,149,147,145,143,141,140,139,138,136,134,133,131,130,128,126,124,122,120,119,117,116,115,114,113,111,110,107,106,105,105,102,101,100,100,100], [110,111,116,122,129,137,142,146,151,158,164,168,172,179,183,186,189,190,192,193,188,182,174,168,164,161,157,154,151,149,151,154,158,161,167,170,170,170,170,169,168,166,160,157,156,156,157,158,159,159,156,153,150,148,146,144,141,140,140,138,136,135,134,133,131,129,127,125,123,122,120,118,117,116,115,114,112,111,110,108,107,106,105,104,102,100,100], [108,110,115,121,131,137,142,147,152,159,163,167,170,177,182,184,187,189,192,194,189,183,174,169,165,161,158,156,154,153,154,157,160,164,167,171,172,174,174,173,171,168,161,159,158,158,159,161,161,160,158,155,151,149,147,144,142,141,140,138,137,136,135,134,132,130,128,126,125,123,121,119,118,117,116,115,113,112,112,111,110,109,108,107,105,101,100], 
[108,110,114,120,128,134,140,146,152,158,162,166,169,175,180,183,186,189,193,195,190,184,176,171,167,163,160,158,157,156,157,159,163,166,170,174,176,178,178,176,172,167,164,161,161,160,161,163,163,163,160,157,153,150,148,146,144,142,141,140,139,138,136,135,134,133,129,127,126,124,122,121,119,118,117,116,114,113,112,111,110,110,109,109,107,104,100], [107,110,115,119,123,129,135,141,146,156,161,165,168,173,179,182,186,189,193,194,191,184,179,175,170,166,162,161,160,160,161,162,165,169,172,176,178,179,179,176,172,168,165,163,163,163,163,165,166,164,161,158,155,152,150,147,146,144,143,142,141,139,139,138,137,135,131,128,127,125,124,122,121,119,118,116,115,113,112,111,111,110,110,109,109,105,100], [107,110,114,117,121,126,130,135,142,151,159,163,167,171,177,182,185,189,192,193,191,187,183,179,174,169,167,166,164,164,165,166,169,171,174,178,179,180,180,178,173,169,166,165,165,166,165,168,169,166,163,159,157,154,152,149,148,147,146,145,143,142,141,140,139,138,133,130,128,127,125,124,122,120,118,117,115,112,111,111,111,111,110,109,108,106,100], [107,109,113,118,122,126,129,134,139,150,156,160,165,170,175,181,184,188,191,192,192,189,185,181,177,173,171,169,168,167,169,170,172,174,176,178,179,180,180,179,175,170,168,166,166,168,168,170,170,168,164,160,158,155,152,151,150,149,149,148,147,145,144,143,142,141,136,133,130,129,127,125,123,120,119,118,115,112,111,111,111,110,109,109,109,105,100], [105,107,111,117,121,124,127,131,137,148,154,159,164,168,174,181,184,187,190,191,191,190,187,184,180,178,175,174,172,171,173,173,173,176,178,179,180,180,180,179,175,170,168,166,168,169,170,170,170,170,166,161,158,156,154,153,151,150,150,150,150,148,147,146,145,143,139,135,133,131,129,126,124,121,120,118,114,111,111,111,110,110,109,107,106,104,100], [104,106,110,114,118,121,125,129,135,142,150,157,162,167,173,180,183,186,188,190,190,190,189,184,183,181,180,179,179,176,177,176,176,177,178,179,180,180,179,177,173,169,167,166,167,169,170,170,170,170,167,161,159,157,155,153,151,150,150,150,150,150,150,149,147,145,141,138,135,133,130,127,125,123,121,118,113,111,110,110,109,109,107,106,105,103,100], [104,106,108,111,115,119,123,128,134,141,148,154,161,166,172,179,182,184,186,189,190,190,190,187,185,183,180,180,180,179,179,177,176,177,178,178,178,177,176,174,171,168,166,164,166,168,170,170,170,170,168,162,159,157,155,153,151,150,150,150,150,150,150,150,150,148,144,140,137,134,132,129,127,125,122,117,111,110,107,107,106,105,104,103,102,101,100], [103,105,107,110,114,118,122,127,132,140,146,153,159,165,171,176,180,183,185,186,189,190,188,187,184,182,180,180,180,179,178,176,176,176,176,174,174,173,172,170,168,167,165,163,164,165,169,170,170,170,166,162,159,157,155,153,151,150,150,150,150,150,150,150,150,150,146,142,139,136,133,131,128,125,122,117,110,108,106,105,104,103,103,101,101,101,101], [102,103,106,108,112,116,121,125,130,138,145,151,157,163,170,174,178,181,181,184,186,186,187,186,184,181,180,180,180,179,178,174,173,173,171,170,170,169,168,167,166,164,163,162,161,164,167,169,170,168,164,160,158,157,155,153,151,150,150,150,150,150,150,150,150,150,147,144,141,138,135,133,128,125,122,116,109,107,104,104,103,102,101,101,101,101,101], [101,102,105,107,110,115,120,124,129,136,143,149,155,162,168,170,174,176,178,179,181,182,184,184,183,181,180,180,179,177,174,172,170,168,166,165,164,164,164,164,162,160,159,159,158,160,162,164,166,166,163,159,157,156,155,153,151,150,150,150,150,150,150,150,150,150,149,146,143,140,137,133,129,124,119,112,108,105,103,103,102,101,101,101,101,100,100], 
[101,102,104,106,109,113,118,122,127,133,141,149,155,161,165,168,170,172,175,176,177,179,181,181,181,180,180,179,177,174,171,167,165,163,161,160,160,160,160,160,157,155,155,154,154,155,157,159,161,161,161,159,156,154,154,153,151,150,150,150,150,150,150,150,150,150,149,147,144,141,137,133,129,123,116,110,107,104,102,102,101,101,101,100,100,100,100], [102,103,104,106,108,112,116,120,125,129,137,146,154,161,163,165,166,169,172,173,174,175,177,178,178,178,178,177,174,171,168,164,160,158,157,157,156,156,156,155,152,151,150,150,151,151,152,154,156,157,157,156,155,153,152,152,151,150,150,150,150,150,150,150,150,150,150,147,144,141,138,133,127,120,113,109,106,103,101,101,101,100,100,100,100,100,100], [103,104,105,106,108,110,114,118,123,127,133,143,150,156,160,160,161,162,167,170,171,172,173,175,175,174,174,173,171,168,164,160,156,155,154,153,153,152,152,150,149,148,148,148,148,148,149,149,150,152,152,152,152,151,150,150,150,150,150,150,150,150,150,150,150,150,149,147,144,141,138,132,125,118,111,108,105,103,102,101,101,101,100,100,100,100,100], [104,105,106,107,108,110,113,117,120,125,129,138,145,151,156,156,157,158,160,164,166,168,170,171,172,171,171,169,166,163,160,156,153,151,150,150,149,149,149,148,146,146,146,146,146,146,146,147,148,148,149,149,149,148,148,148,148,149,149,150,150,150,150,150,150,150,148,146,143,141,136,129,123,117,110,108,105,104,103,102,102,101,101,100,100,100,100], [103,104,105,106,107,109,111,115,118,122,127,133,140,143,150,152,153,155,157,159,162,164,167,168,168,168,167,166,163,160,157,153,150,148,148,147,147,147,145,145,144,143,143,143,144,144,144,144,145,145,145,145,146,146,146,146,146,147,147,148,149,150,150,150,150,149,147,145,143,141,134,127,123,117,111,108,105,105,104,104,103,103,102,101,100,100,100], [102,103,104,105,106,107,109,113,116,120,125,129,133,137,143,147,149,151,152,154,158,161,164,165,164,164,163,163,160,157,154,151,149,147,145,145,144,143,141,140,141,141,141,141,141,142,142,142,142,142,142,142,143,143,143,144,144,145,146,146,146,147,148,148,148,148,145,143,142,140,134,128,123,117,112,108,106,105,105,104,104,103,102,101,100,100,99], [102,103,104,105,105,106,108,110,113,118,123,127,129,132,137,141,142,142,145,150,154,157,161,161,160,160,160,159,157,154,151,148,146,145,143,142,142,139,137,136,137,137,138,138,139,139,139,139,139,139,139,139,140,140,141,142,142,143,144,144,144,145,145,145,145,145,144,142,140,139,136,129,124,119,113,109,106,106,105,104,103,102,101,101,100,99,99], [102,103,104,104,105,106,107,108,111,116,121,124,126,128,131,134,135,137,139,143,147,152,156,157,157,157,156,155,153,151,148,146,143,142,141,140,138,135,133,132,132,133,133,133,134,135,135,135,135,136,136,137,137,138,138,139,140,141,141,142,142,143,142,142,141,141,140,139,137,134,133,129,125,121,114,110,107,106,106,104,103,102,101,100,99,99,99], [102,103,104,104,105,105,106,108,110,113,118,121,124,126,128,130,132,134,136,139,143,147,150,154,154,154,153,151,149,148,146,143,141,139,137,136,132,130,128,128,128,129,129,130,130,131,132,132,132,133,134,134,135,135,136,137,138,139,139,140,140,140,139,139,138,137,137,135,132,130,129,127,124,120,116,112,109,106,105,103,102,101,101,100,99,99,99], [101,102,103,104,104,105,106,107,108,110,114,119,121,124,126,128,129,132,134,137,140,143,147,149,151,151,151,149,147,145,143,141,138,136,134,131,128,126,124,125,125,126,126,127,128,128,129,129,130,130,131,131,132,132,133,134,135,135,136,136,137,137,136,136,135,134,133,131,129,128,127,126,123,119,115,111,109,107,105,104,103,102,101,100,100,100,99], 
[101,102,103,103,104,104,105,106,108,110,112,116,119,121,124,125,127,130,132,135,137,140,143,147,149,149,149,147,145,143,141,139,136,133,131,128,125,122,121,122,122,122,123,125,125,126,127,127,127,128,128,128,129,129,130,131,131,132,132,133,133,133,132,132,131,131,130,129,128,126,125,124,121,117,111,109,108,106,105,104,103,102,101,101,100,100,100], [100,101,102,103,103,104,105,106,107,108,110,114,117,119,121,123,126,128,130,133,136,139,141,144,146,147,146,145,143,141,138,136,133,130,127,124,121,120,120,120,120,120,121,122,123,124,124,125,125,126,126,125,126,126,126,125,126,127,128,128,129,129,128,128,128,128,128,128,126,125,123,122,119,114,109,108,107,106,105,104,103,103,102,102,101,100,100], [100,101,102,103,104,105,106,107,108,109,110,112,115,117,120,122,125,127,130,132,135,137,139,142,144,144,144,142,140,138,136,132,129,126,123,120,120,119,119,118,119,119,120,120,120,121,122,122,123,123,123,123,122,123,122,122,121,122,122,122,123,123,123,124,125,125,126,126,125,124,122,120,116,113,109,107,106,105,104,104,103,102,102,101,101,100,100], [100,101,102,103,104,105,106,107,108,109,110,112,114,117,119,122,124,127,129,131,134,136,138,140,142,142,142,140,138,136,133,129,125,122,120,119,118,118,117,116,117,117,118,119,119,120,120,120,121,121,121,122,121,120,120,120,119,119,120,120,120,120,120,120,123,123,124,124,124,123,121,119,114,112,108,106,106,104,104,103,102,102,101,101,100,100,99], [101,102,103,104,105,106,107,108,109,110,111,113,114,116,119,121,124,126,128,130,133,135,137,138,140,140,139,137,135,133,131,127,122,120,118,118,117,117,116,115,116,116,117,118,118,118,119,119,120,120,121,121,120,119,119,118,117,117,118,119,118,118,118,119,120,122,123,123,123,122,120,117,113,110,108,106,105,104,103,103,102,101,101,100,100,99,99], [101,102,103,104,105,106,107,108,109,110,111,111,113,115,118,121,123,125,127,129,131,133,135,137,138,138,137,134,132,130,127,122,120,118,116,116,116,116,115,113,114,115,116,117,117,118,118,119,119,119,120,120,119,118,117,117,116,116,117,117,117,118,119,119,119,120,121,121,121,121,119,116,113,110,107,105,105,103,103,103,102,101,100,100,99,99,99], [101,102,103,104,105,106,107,108,109,110,111,112,114,116,117,120,122,124,126,129,130,132,133,135,136,136,134,132,129,126,122,120,118,116,114,114,114,114,114,113,113,114,115,116,116,117,117,117,118,118,119,119,118,117,116,116,115,115,116,116,116,117,117,118,118,119,120,120,120,120,119,116,113,109,106,104,104,103,102,102,101,101,100,99,99,99,98], [101,102,103,104,105,106,107,108,109,110,111,113,115,117,117,118,121,123,126,128,130,130,131,132,133,134,131,129,125,122,120,118,116,114,113,112,112,113,112,112,111,112,113,113,114,115,116,116,117,117,118,118,116,116,115,115,115,114,114,115,116,116,117,117,118,118,119,119,120,120,117,115,112,108,106,104,103,102,102,102,101,100,99,99,99,98,98], [101,102,103,104,105,105,106,107,108,109,110,111,113,115,117,118,120,122,125,126,127,128,129,130,131,131,128,125,121,120,118,116,114,113,113,111,111,111,111,110,109,110,111,112,113,113,114,115,115,116,117,117,116,115,114,114,113,113,114,114,115,115,116,116,117,118,118,119,119,118,116,114,112,108,105,103,103,102,101,101,100,100,99,99,98,98,97], [100,101,102,103,104,105,106,107,108,109,110,110,111,113,115,118,120,121,122,124,125,125,126,127,128,127,124,121,120,118,116,114,113,112,112,110,109,109,108,108,108,109,110,111,112,112,113,114,114,115,116,116,115,114,113,112,112,113,113,114,114,115,115,116,116,117,117,118,118,117,115,113,111,107,105,103,102,101,101,100,100,100,99,99,98,98,97], 
[100,101,102,103,104,105,105,106,107,108,109,110,110,111,114,116,118,120,120,121,122,122,123,124,123,123,120,118,117,115,114,115,113,111,110,109,108,108,107,107,107,108,109,110,111,111,112,113,113,114,115,115,114,113,112,111,111,112,112,112,113,114,114,115,115,116,116,117,117,116,114,112,109,106,104,102,101,100,100,99,99,99,99,98,98,97,97]]
data3 = [[16,29, 12, 14, 16, 5, 9, 43, 25, 49, 57, 61, 37, 66, 79, 55, 51, 55, 17, 29, 9, 4, 9, 12, 9], [22,6, 2, 12, 23, 9, 2, 4, 11, 28, 49, 51, 47, 38, 65, 69, 59, 65, 59, 22, 11, 12, 9, 9, 13], [2, 5, 8, 44, 9, 22, 2, 5, 12, 34, 43, 54, 44, 49, 48, 54, 59, 69, 51, 21, 16, 9, 5, 4, 7], [3, 9, 9, 34, 9, 9, 2, 4, 13, 26, 58, 61, 59, 53, 54, 64, 55, 52, 53, 18, 3, 9, 12, 2, 8], [4, 2, 9, 8, 2, 23, 2, 4, 14, 31, 48, 46, 59, 66, 54, 56, 67, 54, 23, 14, 6, 8, 7, 9, 8], [5, 2, 23, 2, 9, 9, 9, 4, 8, 8, 6, 14, 12, 9, 14, 9, 21, 22, 34, 12, 9, 23, 9, 11, 13], [6, 7, 23, 23, 9, 4, 7, 4, 23, 11, 32, 2, 2, 5, 34, 9, 4, 12, 15, 19, 45, 9, 19, 9, 4]]
HeatMap(data = data)
HeatMap(title= "Heatmap Second Example",
xLabel= "X Label",
yLabel= "Y Label",
data = data,
legendPosition = LegendPosition.TOP)
HeatMap(title = "Green Yellow White",
data = data2,
showLegend = False,
color = GradientColor.GREEN_YELLOW_WHITE)
colors = [Color.black, Color.yellow, Color.red]
HeatMap(title= "Custom Gradient Example",
data= data3,
color= GradientColor(colors))
HeatMap(initWidth= 900,
initHeight= 300,
title= "Custom size, no tooltips",
data= data3,
useToolTip= False,
showLegend= False,
color= GradientColor.WHITE_BLUE)
```
| github_jupyter |
# Adversarial Examples
Let's start out by importing all the required libraries
```
import os
import sys
sys.path.append(os.path.join(os.getcwd(), "venv"))
import numpy as np
import torch
import torchvision.transforms as transforms
from matplotlib import pyplot as plt
from torch import nn
from torch.autograd import Variable
from torch.utils.data import DataLoader
from torchvision.datasets import MNIST
```
## MNIST
PyTorch expects `Dataset` objects as input. Luckily, for MNIST (and a few other datasets such as CIFAR and SVHN), torchvision has a ready-made function that wraps the dataset in a PyTorch `Dataset` object. Keep in mind that these functions return `PIL` images, so you will have to apply a transformation to them.
```
path = os.path.join(os.getcwd(), "MNIST")
transform = transforms.Compose([transforms.ToTensor()])
train_mnist = MNIST(path, train=True, transform=transform)
test_mnist = MNIST(path, train=False, transform=transform)
```
### Visualize Dataset
Set `batch_size` to 1 to visualize the dataset.
```
batch_size = 1
train_set = DataLoader(train_mnist, batch_size=batch_size, shuffle=True)
test_set = DataLoader(test_mnist, batch_size=batch_size, shuffle=True)
num_images = 2
for i, (image, label) in enumerate(train_set):
if i == num_images:
break
#Pytorch returns batch_size x num_channels x 28 x 28
plt.imshow(image[0][0])
plt.show()
print("label: " + str(label))
```
### Train a Model
Set `batch_size` to start training a model on the dataset.
```
batch_size = 64
train_set = DataLoader(train_mnist, batch_size=batch_size, shuffle=True)
```
Define a `SimpleCNN` model to train on MNIST
```
def identity():
return lambda x: x
class CustomConv2D(nn.Module):
def __init__(self, in_channels, out_channels, kernel_size,
activation, stride):
super().__init__()
self.conv = nn.Conv2d(in_channels, out_channels, kernel_size, stride, kernel_size-2)
self.activation = activation
def forward(self, x):
h = self.conv(x)
return self.activation(h)
class SimpleCNN(nn.Module):
def __init__(self, in_channels=1, out_base=2, kernel_size=3, activation=identity(),
stride=2, num_classes=10):
super().__init__()
self.conv1 = CustomConv2D(in_channels, out_base, kernel_size, activation, stride)
self.pool1 = nn.MaxPool2d((2, 2))
self.conv2 = CustomConv2D(out_base, out_base, kernel_size, activation, stride)
self.pool2 = nn.MaxPool2d((2, 2))
self.linear = nn.Linear(4 * out_base, num_classes, bias=True)
self.log_softmax = nn.LogSoftmax(dim=-1)
def forward(self, x):
h = self.conv1(x)
h = self.pool1(h)
h = self.conv2(h)
h = self.pool2(h)
h = h.view([x.size(0), -1])
return self.log_softmax(self.linear(h))
```
Create 4 model variations:
- `identity_model`: SimpleCNN model with identity activation functions
- `relu_model`: SimpleCNN model with ReLU activation functions
- `sig_model`: SimpleCNN model with sigmoid activation functions
- `tanh_model`: SimpleCNN model with tanh activation functions
```
identity_model = SimpleCNN()
relu_model = SimpleCNN(activation=nn.ReLU())
sig_model = SimpleCNN(activation=nn.Sigmoid())
tanh_model = SimpleCNN(activation=nn.Tanh())
```
Create a function to train the model
```
def train_model(model, train_set, num_epochs):
optimizer = torch.optim.Adam(lr=0.001, params=model.parameters())
for epoch in range(num_epochs):
epoch_accuracy, epoch_loss = 0, 0
train_set_size = 0
for images, labels in train_set:
batch_size = images.size(0)
images_var, labels_var = Variable(images), Variable(labels)
log_probs = model(images_var)
_, preds = torch.max(log_probs, dim=-1)
loss = nn.NLLLoss()(log_probs, labels_var)
epoch_loss += loss.data.numpy()[0] * batch_size
accuracy = preds.eq(labels_var).float().mean().data.numpy()[0] * 100.0
epoch_accuracy += accuracy * batch_size
train_set_size += batch_size
optimizer.zero_grad()
loss.backward()
optimizer.step()
epoch_accuracy = epoch_accuracy / train_set_size
epoch_loss = epoch_loss / train_set_size
print("epoch {}: loss= {:.3}, accuracy= {:.4}".format(epoch + 1, epoch_loss, epoch_accuracy))
return model
trained_model = train_model(relu_model, train_set, 10)
```
## Generating Adversarial Examples
Now that we have a trained model, we can generate adversarial examples.
### Gradient Ascent
Use Gradient Ascent to generate a targeted adversarial example.
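Concretely, the code below learns an additive perturbation $\delta$ (the `params` tensor in `AttackNet`) by minimizing the negative log-likelihood of the chosen target class on the clipped image, $\min_{\delta} \; \mathrm{NLL}\big(f(\mathrm{clip}(x + \delta,\, 0,\, 1)),\; y_{target}\big)$, which is equivalent to performing gradient ascent on the log-probability of the target class.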
```
def np_val(torch_var):
return torch_var.data.numpy()[0]
class AttackNet(nn.Module):
def __init__(self, model, image_size):
super().__init__()
self.model = model
self.params = nn.Parameter(torch.zeros(image_size), requires_grad=True)
def forward(self, image):
# clamp parameters here? or in backward?
x = image + self.params
x = torch.clamp(x, 0, 1)
log_probs = self.model(x)
return log_probs
class GradientAscent(object):
def __init__(self, model, confidence=0):
super().__init__()
self.model = model
self.num_steps = 10000
self.confidence = confidence
def attack(self, image, label, target=None):
image_var = Variable(image)
attack_net = AttackNet(self.model, image.shape)
optimizer = torch.optim.Adam(lr=0.01, params=[attack_net.params])
target = Variable(torch.from_numpy(np.array([target], dtype=np.int64))
) if target is not None else None
log_probs = attack_net(image_var)
confidence, predictions = torch.max(torch.exp(log_probs), dim=-1)
if label.numpy()[0] != np_val(predictions):
print("model prediction does not match label")
return None, (None, None), (None, None)
else:
for step in range(self.num_steps):
stop_training = self.perturb(image_var, attack_net, target, optimizer)
if stop_training:
print("Adversarial attack succeeded after {} steps!".format(
step + 1))
break
if stop_training is False:
print("Adversarial attack failed")
log_probs = attack_net(image_var)
adv_confidence, adv_predictions = torch.max(torch.exp(log_probs), dim=-1)
return attack_net.params, (confidence, predictions), (adv_confidence,
adv_predictions)
def perturb(self, image, attack_net, target, optimizer):
log_probs = attack_net(image)
confidence, predictions = torch.max(torch.exp(log_probs), dim=-1)
if (np_val(predictions) == np_val(target) and
np_val(confidence) >= self.confidence):
return True
loss = nn.NLLLoss()(log_probs, target)
optimizer.zero_grad()
loss.backward()
optimizer.step()
return False
```
Define a `GradientAscent` object
```
gradient_ascent = GradientAscent(trained_model)
```
Define a function to help plot the results
```
%matplotlib inline
def plot_results(image, perturbation, orig_pred, orig_con, adv_pred, adv_con):
plot_image = image.numpy()[0][0]
plot_perturbation = perturbation.data.numpy()[0][0]
fig_size = plt.rcParams["figure.figsize"]
fig_size[0] = 10
fig_size[1] = 5
plt.rcParams["figure.figsize"] = fig_size
ax = plt.subplot(131)
ax.set_title("Original: " + str(np_val(orig_pred)) + " @ " +
str(np.round(np_val(orig_con) * 100, decimals=1)) + "%")
plt.imshow(plot_image)
plt.subplot(132)
plt.imshow(plot_perturbation)
ax = plt.subplot(133)
plt.imshow(plot_image + plot_perturbation)
ax.set_title("Adversarial: " + str(np_val(adv_pred)) + " @ " +
str(np.round(np_val(adv_con) * 100, decimals=1)) + "%")
plt.show()
```
Let's generate some adversarial examples!
```
num_images = 2
for i, (test_image, test_label) in enumerate(test_set):
if i == num_images:
break
target_classes = list(range(10))
target_classes.remove(test_label.numpy()[0])
target = np.random.choice(target_classes)
perturbation, (orig_con, orig_pred), (
adv_con, adv_pred) = gradient_ascent.attack(test_image, test_label, target)
if perturbation is not None:
plot_results(test_image, perturbation, orig_pred, orig_con, adv_pred, adv_con)
```
### Fast Gradient
Now let's use the Fast Gradient Sign Method to generate untargeted adversarial examples.
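FGSM takes a single step of size $\alpha$ in the direction of the sign of the input gradient of the loss with respect to the true label, $x_{adv} = \mathrm{clip}\big(x + \alpha \cdot \mathrm{sign}(\nabla_x \mathcal{L}(f(x), y)),\, 0,\, 1\big)$, which is what the `attack` method below computes with `alpha=0.1`.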
```
class FastGradient(object):
def __init__(self, model, confidence=0, alpha=0.1):
super().__init__()
self.model = model
self.confidence = confidence
self.alpha = alpha
def attack(self, image, label):
image_var = Variable(image, requires_grad=True)
target = Variable(torch.from_numpy(np.array([label], dtype=np.int64))
) if label is not None else None
log_probs = self.model(image_var)
confidence, predictions = torch.max(torch.exp(log_probs), dim=-1)
if label.numpy()[0] != np_val(predictions):
print("model prediction does not match label")
return None, (None, None), (None, None)
else:
loss = nn.NLLLoss()(log_probs, target)
loss.backward()
x_grad = torch.sign(image_var.grad.data)
adv_image = torch.clamp(image_var.data + self.alpha * x_grad, 0, 1)
delta = adv_image - image_var.data
adv_log_probs = self.model(Variable(adv_image))
adv_confidence, adv_predictions = torch.max(torch.exp(adv_log_probs),
dim=-1)
if (np_val(adv_predictions) != np_val(predictions) and
np_val(adv_confidence) >= self.confidence):
print("Adversarial attack succeeded!")
else:
print("Adversarial attack failed")
return Variable(delta), (confidence, predictions), (adv_confidence,
adv_predictions)
```
Define a `FastGradient` object
```
fast_gradient = FastGradient(trained_model)
```
Let's generate some adversarial examples!
```
num_images = 20
for i, (test_image, test_label) in enumerate(test_set):
if i == num_images:
break
perturbation, (orig_con, orig_pred), (
adv_con, adv_pred) = fast_gradient.attack(test_image, test_label)
if perturbation is not None:
plot_results(test_image, perturbation, orig_pred, orig_con, adv_pred, adv_con)
```
| github_jupyter |
# Pi Estimation Using Monte Carlo
In this exercise, we will use MapReduce and a Monte Carlo simulation to estimate $\Pi$.
The image from this [blog](https://towardsdatascience.com/how-to-make-pi-part-1-d0b41a03111f) shows a unit circle inscribed in a square with side length 2:

The area:
- for the circle is $A_{circle} = \Pi*r^2 = \Pi * 1*1 = \Pi$
- for the square is $A_{square} = d^2 = (2*r)^2 = 4$
The ratio of the two areas is therefore $\frac{A_{circle}}{A_{square}} = \frac{\Pi}{4}$
The Monte Carlo simulation draws many points in the square, uniformly at random. For every point, we check whether it lies within the circle or not.
This gives the approximation:
$\frac{\Pi}{4} \approx \frac{\text{points_in_circle}}{\text{total_points}}$
or
$\Pi \approx 4* \frac{\text{points_in_circle}}{\text{total_points}}$
If we have a point $x_1,y_1$ and we want to figure out if it lies in a circle with radius $1$ we can use the following formula:
$\text{is_in_circle}(x_1,y_1) =
\begin{cases}
1,& \text{if } (x_1)^2 + (y_1)^2 \leq 1\\
0, & \text{otherwise}
\end{cases}$
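Before turning this into a MapReduce job, the estimator can be sketched in a few lines of plain Python (a minimal illustration, with freely chosen names):
```
from random import uniform

def estimate_pi(total_points=100_000):
    points_in_circle = 0
    for _ in range(total_points):
        x = uniform(-1, 1)
        y = uniform(-1, 1)
        if x * x + y * y <= 1:
            points_in_circle += 1
    return 4 * points_in_circle / total_points

print(estimate_pi())
```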
## Implementation
Write a MapReduce algorithm for estimating $\Pi$
```
%%writefile pi.py
#!/usr/bin/python3
from mrjob.job import MRJob
from random import uniform
class MyJob(MRJob):
def mapper(self, _, line):
        for _ in range(100):
            x = uniform(-1, 1)
            y = uniform(-1, 1)
in_circle = x*x + y*y <=1
yield None, in_circle
def reducer(self, key, values):
values = list(values)
yield "Pi", 4 * sum(values) / len(values)
yield "number of values", len(values)
# for v in values:
# yield key, v
if __name__ == '__main__':
MyJob.run()
```
## Another Approach
Computing the mean in the mapper
```
%%writefile pi.py
#!/usr/bin/python3
from mrjob.job import MRJob
from random import uniform
class MyJob(MRJob):
def mapper(self, _, line):
num_samples = 100
in_circles_list = []
        for _ in range(num_samples):
            x = uniform(-1, 1)
            y = uniform(-1, 1)
in_circle = x*x + y*y <=1
in_circles_list.append(in_circle)
yield None, [num_samples, sum(in_circles_list)/num_samples]
def reducer(self, key, numSamples_sum_pairs):
total_samples = 0
weighted_numerator_sum = 0
for (num_samples, current_sum) in numSamples_sum_pairs:
total_samples += num_samples
weighted_numerator_sum += num_samples*current_sum
yield "Pi", 4 * weighted_numerator_sum / total_samples
yield "weighted_numerator_sum", weighted_numerator_sum
yield "total_samples", total_samples
if __name__ == '__main__':
MyJob.run()
```
### Running the Job
Unfortunately, the library does not work without an input file. This is probably because the Hadoop Streaming library does not support input-less jobs either; see [stack overflow](https://stackoverflow.com/questions/22821005/hadoop-streaming-job-with-no-input-file).
We fake the number of mappers with different input files. Not the most elegant solution :/
```
!python pi.py /data/dataset/text/small.txt
!python pi.py /data/dataset/text/holmes.txt
```
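A possible alternative (just a suggestion, not something the original setup does): write a throwaway input file with a known number of lines. Since `mapper` is called once per input line and draws 100 samples per call, the length of the file directly controls the total number of samples:
```
# Hypothetical helper: 1000 lines -> 1000 mapper calls -> 100,000 samples
with open('dummy_input.txt', 'w') as f:
    f.write('x\n' * 1000)
```
Then run `!python pi.py dummy_input.txt` as before.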
| github_jupyter |
```
import tensorflow as tf
# You'll generate plots of attention in order to see which parts of an image
# our model focuses on during captioning
import matplotlib.pyplot as plt
# Scikit-learn includes many helpful utilities
from sklearn.model_selection import train_test_split
from sklearn.utils import shuffle
import re
import numpy as np
import os
import time
import json
from glob import glob
from PIL import Image
import pickle
# mount drive
from google.colab import drive
drive.mount('/gdrive')
#set up pickle and checkpoints folder
!ls /gdrive
checkpoint_path = "/gdrive/My Drive/checkpoints/train"
if not os.path.exists(checkpoint_path):
os.mkdir("/gdrive/My Drive/checkpoints")
os.mkdir("/gdrive/My Drive/checkpoints/train")
if not os.path.exists("/gdrive/My Drive/pickles"):
os.mkdir("/gdrive/My Drive/pickles")
```
## Download and prepare the MS-COCO dataset
You will use the [MS-COCO dataset](http://cocodataset.org/#home) to train our model. The dataset contains over 82,000 images, each of which has at least 5 different caption annotations. The code below downloads and extracts the dataset automatically.
**Caution: large download ahead**. You'll use the training set, which is a 13GB file.
```
# Download caption annotation files
annotation_folder = '/annotations/'
if not os.path.exists(os.path.abspath('.') + annotation_folder):
annotation_zip = tf.keras.utils.get_file('captions.zip',
cache_subdir=os.path.abspath('.'),
origin = 'http://images.cocodataset.org/annotations/annotations_trainval2014.zip',
extract = True)
annotation_file = os.path.dirname(annotation_zip)+'/annotations/captions_train2014.json'
os.remove(annotation_zip)
# Download image files
image_folder = '/train2014/'
if not os.path.exists(os.path.abspath('.') + image_folder):
image_zip = tf.keras.utils.get_file('train2014.zip',
cache_subdir=os.path.abspath('.'),
origin = 'http://images.cocodataset.org/zips/train2014.zip',
extract = True)
PATH = os.path.dirname(image_zip) + image_folder
os.remove(image_zip)
else:
PATH = os.path.abspath('.') + image_folder
#Limiting size of dataset to 50000
# Read the json file
with open(annotation_file, 'r') as f:
annotations = json.load(f)
# Store captions and image names in vectors
all_captions = []
all_img_name_vector = []
for annot in annotations['annotations']:
caption = '<start> ' + annot['caption'] + ' <end>'
image_id = annot['image_id']
full_coco_image_path = PATH + 'COCO_train2014_' + '%012d.jpg' % (image_id)
all_img_name_vector.append(full_coco_image_path)
all_captions.append(caption)
# Shuffle captions and image_names together
# Set a random state
train_captions, img_name_vector = shuffle(all_captions,
all_img_name_vector,
random_state=1)
# Select the first 50000 captions from the shuffled set
num_examples = 50000
train_captions = train_captions[:num_examples]
img_name_vector = img_name_vector[:num_examples]
len(train_captions), len(all_captions)
```
## Preprocess the images using InceptionV3
Next, you will use InceptionV3 (which is pretrained on Imagenet) to classify each image. You will extract features from the last convolutional layer.
First, you will convert the images into InceptionV3's expected format by:
* Resizing the image to 299px by 299px
* [Preprocess the images](https://cloud.google.com/tpu/docs/inception-v3-advanced#preprocessing_stage) using the [preprocess_input](https://www.tensorflow.org/api_docs/python/tf/keras/applications/inception_v3/preprocess_input) method to normalize the image so that it contains pixels in the range of -1 to 1, which matches the format of the images used to train InceptionV3.
```
def load_image(image_path):
img = tf.io.read_file(image_path)
img = tf.image.decode_jpeg(img, channels=3)
img = tf.image.resize(img, (299, 299))
img = tf.keras.applications.inception_v3.preprocess_input(img)
return img, image_path
```
## Initialize InceptionV3 and load the pretrained Imagenet weights
Now you'll create a tf.keras model where the output layer is the last convolutional layer in the InceptionV3 architecture. The shape of the output of this layer is ```8x8x2048```. You use the last convolutional layer because you are using attention in this example. You don't perform this initialization during training because it could become a bottleneck.
* You forward each image through the network and store the resulting vector in a dictionary (image_name --> feature_vector).
* After all the images are passed through the network, you pickle the dictionary and save it to disk.
```
image_model = tf.keras.applications.InceptionV3(include_top=False,
weights='imagenet')
new_input = image_model.input
hidden_layer = image_model.layers[-1].output
image_features_extract_model = tf.keras.Model(new_input, hidden_layer)
```
## Caching the features extracted from InceptionV3
You will pre-process each image with InceptionV3 and cache the output to disk. Caching the output in RAM would be faster but also memory intensive, requiring 8 \* 8 \* 2048 floats per image. At the time of writing, this exceeds the memory limitations of Colab (currently 12GB of memory).
Performance could be improved with a more sophisticated caching strategy (for example, by sharding the images to reduce random access disk I/O), but that would require more code.
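For illustration only, one such sketch (an addition, not part of the tutorial) is to bucket the cached feature files into a fixed number of subdirectories chosen by hashing the image path, so that no single directory grows too large:
```
import hashlib
import os

def sharded_cache_path(image_path, cache_root='feature_cache', num_shards=64):
    # Pick a shard deterministically from the image path
    shard = int(hashlib.md5(image_path.encode()).hexdigest(), 16) % num_shards
    shard_dir = os.path.join(cache_root, 'shard_%03d' % shard)
    os.makedirs(shard_dir, exist_ok=True)
    return os.path.join(shard_dir, os.path.basename(image_path) + '.npy')
```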
The caching will take about 10 minutes to run in Colab with a GPU. If you'd like to see a progress bar, you can:
1. install [tqdm](https://github.com/tqdm/tqdm):
`!pip install tqdm`
2. Import tqdm:
`from tqdm import tqdm`
3. Change the following line:
`for img, path in image_dataset:`
to:
`for img, path in tqdm(image_dataset):`
```
# Get unique images
encode_train = sorted(set(img_name_vector))
# Feel free to change batch_size according to your system configuration
image_dataset = tf.data.Dataset.from_tensor_slices(encode_train)
image_dataset = image_dataset.map(
load_image, num_parallel_calls=tf.data.experimental.AUTOTUNE).batch(16)
for img, path in image_dataset:
batch_features = image_features_extract_model(img)
batch_features = tf.reshape(batch_features,
(batch_features.shape[0], -1, batch_features.shape[3]))
for bf, p in zip(batch_features, path):
path_of_feature = p.numpy().decode("utf-8")
np.save(path_of_feature, bf.numpy())
```
## Preprocess and tokenize the captions
* First, you'll tokenize the captions (for example, by splitting on spaces). This gives us a vocabulary of all of the unique words in the data (for example, "surfing", "football", and so on).
* Next, you'll limit the vocabulary size to the top 5,000 words (to save memory). You'll replace all other words with the token "UNK" (unknown).
* You then create word-to-index and index-to-word mappings.
* Finally, you pad all sequences to be the same length as the longest one.
```
# Find the maximum length of any caption in our dataset
def calc_max_length(tensor):
return max(len(t) for t in tensor)
# Choose the top 5000 words from the vocabulary
top_k = 5000
tokenizer = tf.keras.preprocessing.text.Tokenizer(num_words=top_k,
oov_token="<unk>",
filters='!"#$%&()*+.,-/:;=?@[\]^_`{|}~ ')
tokenizer.fit_on_texts(train_captions)
train_seqs = tokenizer.texts_to_sequences(train_captions)
tokenizer.word_index['<pad>'] = 0
tokenizer.index_word[0] = '<pad>'
pickle.dump( tokenizer, open( "tokeniser.pkl", "wb" ) )
!cp tokeniser.pkl "/gdrive/My Drive/pickles/tokeniser.pkl"
# Create the tokenized vectors
train_seqs = tokenizer.texts_to_sequences(train_captions)
# Pad each vector to the max_length of the captions
# If you do not provide a max_length value, pad_sequences calculates it automatically
cap_vector = tf.keras.preprocessing.sequence.pad_sequences(train_seqs, padding='post')
# Calculates the max_length, which is used to store the attention weights
max_length = calc_max_length(train_seqs)
print(max_length)
#pickle.dump( max_length, open( "/gdrive/My Drive/max_length.p", "wb" ) )
pickle.dump( max_length, open( "max_length.pkl", "wb" ) )
!cp max_length.pkl "/gdrive/My Drive/pickles/max_length.pkl"
#assert(False)
```
## Split the data into training and testing
```
# Create training and validation sets using an 80-20 split
img_name_train, img_name_val, cap_train, cap_val = train_test_split(img_name_vector,
cap_vector,
test_size=0.2,
random_state=0)
len(img_name_train), len(cap_train), len(img_name_val), len(cap_val)
```
## Create a tf.data dataset for training
```
# Feel free to change these parameters according to your system's configuration
BATCH_SIZE = 64
BUFFER_SIZE = 1000
embedding_dim = 256
units = 512
vocab_size = top_k + 1
num_steps = len(img_name_train) // BATCH_SIZE
# Shape of the vector extracted from InceptionV3 is (64, 2048)
# These two variables represent that vector shape
features_shape = 2048
attention_features_shape = 64
# Load the numpy files
def map_func(img_name, cap):
img_tensor = np.load(img_name.decode('utf-8')+'.npy')
return img_tensor, cap
dataset = tf.data.Dataset.from_tensor_slices((img_name_train, cap_train))
# Use map to load the numpy files in parallel
dataset = dataset.map(lambda item1, item2: tf.numpy_function(
map_func, [item1, item2], [tf.float32, tf.int32]),
num_parallel_calls=tf.data.experimental.AUTOTUNE)
# Shuffle and batch
dataset = dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
dataset = dataset.prefetch(buffer_size=tf.data.experimental.AUTOTUNE)
```
## Model
Fun fact: the decoder below is identical to the one in the example for [Neural Machine Translation with Attention](../sequences/nmt_with_attention.ipynb).
The model architecture is inspired by the [Show, Attend and Tell](https://arxiv.org/pdf/1502.03044.pdf) paper.
* In this example, you extract the features from the lower convolutional layer of InceptionV3 giving us a vector of shape (8, 8, 2048).
* You squash that to a shape of (64, 2048).
* This vector is then passed through the CNN Encoder (which consists of a single Fully connected layer).
* The RNN (here GRU) attends over the image to predict the next word.
```
class BahdanauAttention(tf.keras.Model):
def __init__(self, units):
super(BahdanauAttention, self).__init__()
self.W1 = tf.keras.layers.Dense(units)
self.W2 = tf.keras.layers.Dense(units)
self.V = tf.keras.layers.Dense(1)
def call(self, features, hidden):
# features(CNN_encoder output) shape == (batch_size, 64, embedding_dim)
# hidden shape == (batch_size, hidden_size)
# hidden_with_time_axis shape == (batch_size, 1, hidden_size)
hidden_with_time_axis = tf.expand_dims(hidden, 1)
# score shape == (batch_size, 64, hidden_size)
score = tf.nn.tanh(self.W1(features) + self.W2(hidden_with_time_axis))
# attention_weights shape == (batch_size, 64, 1)
# you get 1 at the last axis because you are applying score to self.V
attention_weights = tf.nn.softmax(self.V(score), axis=1)
# context_vector shape after sum == (batch_size, hidden_size)
context_vector = attention_weights * features
context_vector = tf.reduce_sum(context_vector, axis=1)
return context_vector, attention_weights
class CNN_Encoder(tf.keras.Model):
# Since you have already extracted the features and saved them as .npy files
# This encoder passes those features through a Fully connected layer
def __init__(self, embedding_dim):
super(CNN_Encoder, self).__init__()
# shape after fc == (batch_size, 64, embedding_dim)
self.fc = tf.keras.layers.Dense(embedding_dim)
def call(self, x):
x = self.fc(x)
x = tf.nn.relu(x)
return x
class RNN_Decoder(tf.keras.Model):
def __init__(self, embedding_dim, units, vocab_size):
super(RNN_Decoder, self).__init__()
self.units = units
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = tf.keras.layers.GRU(self.units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
#self.bi = tf.keras.layers.LSTM(self.units,
# return_sequences=True,
# return_state=True,
# recurrent_initializer='glorot_uniform')
#self.fc0 = tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(self.units, activation='sigmoid'))
self.fc1 = tf.keras.layers.Dense(self.units)
self.fc2 = tf.keras.layers.Dense(vocab_size)
self.attention = BahdanauAttention(self.units)
def call(self, x, features, hidden):
# defining attention as a separate model
context_vector, attention_weights = self.attention(features, hidden)
# x shape after passing through embedding == (batch_size, 1, embedding_dim)
x = self.embedding(x)
# x shape after concatenation == (batch_size, 1, embedding_dim + hidden_size)
x = tf.concat([tf.expand_dims(context_vector, 1), x], axis=-1)
# passing the concatenated vector to the GRU
output, state = self.gru(x)
#x = self.fc0(output)
# shape == (batch_size, max_length, hidden_size)
x = self.fc1(output)
# x shape == (batch_size * max_length, hidden_size)
x = tf.reshape(x, (-1, x.shape[2]))
# output shape == (batch_size * max_length, vocab)
x = self.fc2(x)
return x, state, attention_weights
def reset_state(self, batch_size):
return tf.zeros((batch_size, self.units))
encoder = CNN_Encoder(embedding_dim)
decoder = RNN_Decoder(embedding_dim, units, vocab_size)
optimizer = tf.keras.optimizers.Adam()
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True, reduction='none')
def loss_function(real, pred):
mask = tf.math.logical_not(tf.math.equal(real, 0))
loss_ = loss_object(real, pred)
mask = tf.cast(mask, dtype=loss_.dtype)
loss_ *= mask
return tf.reduce_mean(loss_)
```
## Checkpoint
```
checkpoint_path = "/gdrive/My Drive/checkpoints/train"
if not os.path.exists(checkpoint_path):
os.mkdir(checkpoint_path)
ckpt = tf.train.Checkpoint(encoder=encoder,
decoder=decoder,
optimizer = optimizer)
ckpt_manager = tf.train.CheckpointManager(ckpt, checkpoint_path, max_to_keep=5)
start_epoch = 0
if ckpt_manager.latest_checkpoint:
start_epoch = int(ckpt_manager.latest_checkpoint.split('-')[-1])
# restoring the latest checkpoint in checkpoint_path
ckpt.restore(ckpt_manager.latest_checkpoint)
```
## Training
* You extract the features stored in the respective `.npy` files and then pass those features through the encoder.
* The encoder output, the hidden state (initialized to 0), and the decoder input (which is the start token) are passed to the decoder.
* The decoder returns the predictions and the decoder hidden state.
* The decoder hidden state is then passed back into the model and the predictions are used to calculate the loss.
* Use teacher forcing to decide the next input to the decoder.
* Teacher forcing is the technique where the target word is passed as the next input to the decoder.
* The final step is to calculate the gradients, apply them with the optimizer, and backpropagate.
```
# adding this in a separate cell because if you run the training cell
# many times, the loss_plot array will be reset
loss_plot = []
@tf.function
def train_step(img_tensor, target):
loss = 0
# initializing the hidden state for each batch
# because the captions are not related from image to image
hidden = decoder.reset_state(batch_size=target.shape[0])
dec_input = tf.expand_dims([tokenizer.word_index['<start>']] * target.shape[0], 1)
with tf.GradientTape() as tape:
features = encoder(img_tensor)
for i in range(1, target.shape[1]):
# passing the features through the decoder
predictions, hidden, _ = decoder(dec_input, features, hidden)
loss += loss_function(target[:, i], predictions)
# using teacher forcing
dec_input = tf.expand_dims(target[:, i], 1)
total_loss = (loss / int(target.shape[1]))
trainable_variables = encoder.trainable_variables + decoder.trainable_variables
gradients = tape.gradient(loss, trainable_variables)
optimizer.apply_gradients(zip(gradients, trainable_variables))
return loss, total_loss
EPOCHS = 40
for epoch in range(start_epoch, EPOCHS):
start = time.time()
total_loss = 0
for (batch, (img_tensor, target)) in enumerate(dataset):
batch_loss, t_loss = train_step(img_tensor, target)
total_loss += t_loss
if batch % 100 == 0:
print ('Epoch {} Batch {} Loss {:.4f}'.format(
epoch + 1, batch, batch_loss.numpy() / int(target.shape[1])))
# storing the epoch end loss value to plot later
loss_plot.append(total_loss / num_steps)
if epoch % 5 == 0:
ckpt_manager.save()
print ('Epoch {} Loss {:.6f}'.format(epoch + 1,
total_loss/num_steps))
print ('Time taken for 1 epoch {} sec\n'.format(time.time() - start))
plt.plot(loss_plot)
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.title('Loss Plot')
plt.show()
pickle.dump( loss_plot, open( "/gdrive/My Drive/loss_plot_save.p", "wb" ) )
```
| github_jupyter |
# 2019 Formula One World Championship
<div style="text-align: justify">
A Formula One season consists of a series of races, known as Grands Prix (French for 'grand prizes' or 'great prizes'), which take place worldwide on purpose-built circuits and on public roads. The results of each race are evaluated using a points system to determine two annual World Championships: one for drivers, the other for constructors. Drivers must hold valid Super Licences, the highest class of racing licence issued by the FIA. The races must run on tracks graded "1" (formerly "A"), the highest grade-rating issued by the FIA. Most events occur in rural locations on purpose-built tracks, but several events take place on city streets.
There are a number of F1 races coming up:
Singapore GP: Date: Sun, Sep 22, 8:10 AM
Russian GP: Date: Sun, Sep 29, 7:10 AM
Japanese GP: Date: Sun, Oct 13, 1:10 AM
Mexican GP: Date: Sun, Oct 13, 1:10 AM
The Singaporean Grand Prix is this weekend and the Russian Grand Prix is the weekend after, as you can see here.
The 2019 driver standings are given here. Given these standings:
</div>
# Let's answer a few fun questions
```
#A Probability Distribution; an {outcome: probability} mapping.
# Make probabilities sum to 1.0; assert no negative probabilities
class ProbDist(dict):
"""A Probability Distribution; an {outcome: probability} mapping."""
def __init__(self, mapping=(), **kwargs):
self.update(mapping, **kwargs)
total = sum(self.values())
for outcome in self:
self[outcome] = self[outcome] / total
assert self[outcome] >= 0
def p(event, space):
"""The probability of an event, given a sample space of outcomes.
event: a collection of outcomes, or a predicate that is true of outcomes in the event.
space: a set of outcomes or a probability distribution of {outcome: frequency} pairs."""
# if event is a predicate, "unroll" it as a collection
if is_predicate(event):
event = such_that(event, space)
# if space is not an equiprobable collection (a simple set),
# but a probability distribution instead (a dictionary set),
# then add (union) the probabilities for all favorable outcomes
if isinstance(space, ProbDist):
return sum(space[o] for o in space if o in event)
# simplest case: what we played with in our previous lesson
else:
return Fraction(len(event & space), len(space))
is_predicate = callable
# Here we either return a simple collection in the case of equiprobable outcomes, or a dictionary collection in the
# case of non-equiprobable outcomes
def such_that(predicate, space):
"""The outcomes in the sample pace for which the predicate is true.
If space is a set, return a subset {outcome,...} with outcomes where predicate(element) is true;
if space is a ProbDist, return a ProbDist {outcome: frequency,...} with outcomes where predicate(element) is true."""
if isinstance(space, ProbDist):
return ProbDist({o:space[o] for o in space if predicate(o)})
else:
return {o for o in space if predicate(o)}
```
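As a quick illustration (added here, not part of the original notebook), `ProbDist` normalizes raw counts into probabilities, and `p` accepts either a collection of outcomes or a predicate:
```
# Toy example: a biased coin built from raw counts
coin = ProbDist(heads=3, tails=1)            # -> {'heads': 0.75, 'tails': 0.25}
p(lambda outcome: outcome == 'heads', coin)  # -> 0.75
```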
# Question Set 1
1. What is the Probability Distribution for each F1 driver to win the Singaporean Grand Prix?
2. What is the Probability Distribution for each F1 driver to win both the Singaporean and Russian Grand Prix?
3. What is the probability for Mercedes to win both races?
4. What is the probability for Mercedes to win at least one race?
Note that Mercedes, and each other racing team, has two drivers per race.
# Solution
1. What is the Probability Distribution for each F1 driver to win the Singaporean Grand Prix?
```
SGP = ProbDist(LH=284,VB=221,CL=182,MV=185,SV=169,PG=65,CS=58,AA=34,DR=34,DK=33,NH=31,LN=25,KR=31,SP=27,LS=19,KM=18,RG=8,AG=3,RK=1,
GR=0)
print ("The probability of each driver winnning Singaporean Grand Prix ")
SGP #Driver standing divided by / total of all driver standings, SGP returns total probability as 1
```
2. What is the Probability Distribution for each F1 driver to win both the Singaporean and Russian Grand Prix?
```
SGP = ProbDist(
LH=284,VB=221,CL=182,MV=185,SV=169,PG=65,CS=58,AA=34,DR=34,DK=33,NH=31,LN=25,KR=31,SP=27,LS=19,KM=18,
RG=8,AG=3,RK=1,GR=0) # data taken on saturday before race starts for Singapore
RGP = ProbDist(
LH=296,VB=231,CL=200,MV=200,SV=194,PG=69,CS=58,AA=42,DR=34,DK=33,NH=33,LN=31,KR=31,SP=27,LS=19,KM=18,
RG=8,AG=4,RK=1,GR=0) # data taken on saturday before race starts for Russia
# computes the joint distribution of the SGP and RGP probability distributions
def joint(A, B, sep=''):
"""The joint distribution of two independent probability distributions.
Result is all entries of the form {a+sep+b: P(a)*P(b)}"""
return ProbDist({a + sep + b: A[a] * B[b]
for a in A
for b in B})
bothSGPRGP= joint(SGP, RGP, ' ')
print ("The probability of each driver winnning Singaporean Grand Prix and Russian Grand Prix")
bothSGPRGP
```
3. What is the probability for Mercedes to win both races?
```
def mercedes_T(outcome): return outcome == "VB" or outcome == "LH"
mercedesWinningSGPRace = p(mercedes_T, SGP)
# calculate the probability of Mercedes winning the Singapore Grand Prix
def mercedes_T(outcome): return outcome == "VB" or outcome == "LH"
mercedesWinningRGPRace = p(mercedes_T, RGP)
# calculate the probability of Mercedes winning the Russian Grand Prix
print ("The probability of mercedes winnning both the races ")
mercedesWinningBothRaces = mercedesWinningRGPRace * mercedesWinningSGPRace
mercedesWinningBothRaces
# probability of two independent events occurring together: P = P1 * P2
```
4. What is the probability for Mercedes to win at least one race?
```
def p(event, space):
"""The probability of an event, given a sample space of outcomes.
event: a collection of outcomes, or a predicate that is true of outcomes in the event.
space: a set of outcomes or a probability distribution of {outcome: frequency} pairs."""
# if event is a predicate, "unroll" it as a collection
if is_predicate(event):
event = such_that(event, space)
# if space is not an equiprobable collection (a simple set),
# but a probability distribution instead (a dictionary set),
# then add (union) the probabilities for all favorable outcomes
if isinstance(space, ProbDist):
return sum(space[o] for o in space if o in event)
# simplest case: what we played with in our previous lesson
else:
return Fraction(len(event & space), len(space))
is_predicate = callable
# Here we either return a simple collection in the case of equiprobable outcomes, or a dictionary collection in the
# case of non-equiprobable outcomes
def such_that(predicate, space):
"""The outcomes in the sample pace for which the predicate is true.
If space is a set, return a subset {outcome,...} with outcomes where predicate(element) is true;
if space is a ProbDist, return a ProbDist {outcome: frequency,...} with outcomes where predicate(element) is true."""
if isinstance(space, ProbDist):
return ProbDist({o:space[o] for o in space if predicate(o)})
else:
return {o for o in space if predicate(o)}
mercedesWinningAtleastOneRace = mercedesWinningBothRaces + (mercedesWinningRGPRace * (1 - mercedesWinningSGPRace))+mercedesWinningSGPRace * (1 - mercedesWinningRGPRace)
print ("The probability of mercedes winnning at least one of the races ")
mercedesWinningAtleastOneRace
# the probability of an event occurring at least once is the complement of the probability of the event never occurring
```
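The same result can be cross-checked with the complement rule (an added verification, not part of the original notebook): the probability of at least one win equals one minus the probability of winning neither race.
```
# Cross-check: P(at least one win) = 1 - (1 - P(win SGP)) * (1 - P(win RGP))
1 - (1 - mercedesWinningSGPRace) * (1 - mercedesWinningRGPRace)
```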
# Question Set 2
If Mercedes wins the first race, what is the probability that Mercedes wins the next one?
If Mercedes wins at least one of these two races, what is the probability Mercedes wins both races?
How about Ferrari, Red Bull, and Renault?
# Solution
If Mercedes wins the first race, what is the probability that Mercedes wins the next one? If Mercedes wins at least one of these two races, what is the probability Mercedes wins both races? How about Ferrari, Red Bull, and Renault?
```
SGP = ProbDist(
LH=284,VB=221,CL=182,MV=185,SV=169,PG=65,CS=58,AA=34,DR=34,DK=33,NH=31,LN=25,KR=31,SP=27,LS=19,KM=18,
RG=8,AG=3,RK=1,GR=0)
RGP = ProbDist(
LH=296,VB=231,CL=200,MV=200,SV=194,PG=69,CS=58,AA=42,DR=34,DK=33,NH=33,LN=31,KR=31,SP=27,LS=19,KM=18,
RG=8,AG=4,RK=1,GR=0)
Weather = ProbDist(RA=1, SU=1, SN=1, CL=1, FO=1)
def Mercedes_Win_First(outcome): return outcome.startswith('LH') or outcome.startswith('VB') #choose prob of first set
def Mercedes_Win_Second(outcome): return outcome.endswith('LH') or outcome.endswith('VB')
p(Mercedes_Win_Second, such_that(Mercedes_Win_First,bothSGPRGP)) # P(Mercedes wins the second race | Mercedes wins the first race)
def Mercedes_WinBoth(outcome): return 'LH LH' in outcome or 'LH VB' in outcome or 'VB LH' in outcome or 'VB VB' in outcome
def Mercedes_Win(outcome): return 'LH' in outcome or 'VB' in outcome
p(Mercedes_WinBoth, such_that(Mercedes_Win,bothSGPRGP)) # P(Mercedes wins both | Mercedes wins at least one); favorable pairs: LH LH, LH VB, VB LH, VB VB
```
If Ferrari wins at least one of the two races, what is the probability that Ferrari wins both?
```
def Ferrari_WinBoth(outcome): return 'CL CL' in outcome or 'CL SV' in outcome or 'SV SV' in outcome or 'SV CL' in outcome
def Ferrari_Win(outcome): return 'CL' in outcome or 'SV' in outcome
p(Ferrari_WinBoth, such_that(Ferrari_Win,bothSGPRGP))
```
If Red Bull wins at least one of the two races, what is the probability that Red Bull wins both?
```
def RedBull_WinBoth(outcome): return 'MV MV' in outcome or 'MV AA' in outcome or 'AA AA' in outcome or 'AA MV' in outcome
def RedBull_Win(outcome): return 'MV' in outcome or 'AA' in outcome
p(RedBull_WinBoth, such_that(RedBull_Win,bothSGPRGP))
```
If Renault wins at least one of the two races, what is the probability that Renault wins both?
```
def Renault_WinBoth(outcome): return 'DR DR' in outcome or 'DR NH' in outcome or 'NH NH' in outcome or 'NH DR' in outcome
def Renault_Win(outcome): return 'DR' in outcome or 'NH' in outcome
p(Renault_WinBoth, such_that(Renault_Win,bothSGPRGP))
```
# Question Set 3
Mercedes wins one of these two races on a rainy day.
What is the probability Mercedes wins both races, assuming races can be held on either rainy, sunny, cloudy, snowy or foggy days?
Assume that rain, sun, clouds, snow, and fog are the only possible weather conditions on race tracks.
# Solution
Mercedes wins one of these two races on a rainy day. What is the probability Mercedes wins both races, assuming races can be held on either rainy, sunny, cloudy, snowy or foggy days? Assume that rain, sun, clouds, snow, and fog are the only possible weather conditions on race tracks.
```
# create a Probability Distribution for the given weather conditions, where p(weather) = 0.20 for each condition
GivenFiveWeatherConditons = ProbDist(
RainyDay=1,
SunnyDay=1,
CloudyDay=1,
SnowyDay=1,
FoggyDay=1
)
GivenFiveWeatherConditons
#perfoms joint probabilities on SGP & weather and RGP & weather probability distributions Respectively
def joint(A, B, A1, B1, sep=''):
"""The joint distribution of two independent probability distributions.
Result is all entries of the form {a+sep+b: P(a)*P(b)}"""
return ProbDist({a + sep + a1 + sep + b + sep + b1: A[a] * B[b] *A1[a1] * B1[b1]
for a in A
for b in B
for a1 in A1
for b1 in B1})
bothSGPRGPWeather= joint(SGP, RGP, GivenFiveWeatherConditons,GivenFiveWeatherConditons, ' ')
bothSGPRGPWeather
def Mercedes_Wins_Race_On_Any_Rainy(outcome): return ('LH R' in outcome or 'VB R' in outcome)
such_that(Mercedes_Wins_Race_On_Any_Rainy, bothSGPRGPWeather)
def Mercedes_Wins_Race_On_Both_Rain(outcome): return ('LH' in outcome and 'VB' in outcome) or (outcome.count('LH')==2 ) or (outcome.count('VB')==2 )
p(Mercedes_Wins_Race_On_Both_Rain, such_that(Mercedes_Wins_Race_On_Any_Rainy, bothSGPRGPWeather))
```
End!
| github_jupyter |
<a href="https://colab.research.google.com/github/JSJeong-me/KOSA-Big-Data_Vision/blob/main/Model/99_kaggle_credit_card_analysis_and_prediction.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Importing Packages
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
import os
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from imblearn.over_sampling import SMOTE
from sklearn.metrics import confusion_matrix,ConfusionMatrixDisplay,classification_report,plot_roc_curve,accuracy_score
pd.set_option('display.max_columns',25)
warnings.filterwarnings('ignore')
# Importing Dataset
data = pd.read_csv(r'./credit_cards_dataset.csv')
data.head(10)
data.info()
# info shows that there are no null values and all the features are numeric
data.describe(include='all') # Descriptive analysis
data.rename(columns={'PAY_0':'PAY_1','default.payment.next.month':'def_pay'},inplace=True)
# rename a few columns
```
# Exploratory Data Analysis
```
plt.figure(figsize=(10,6))
data.groupby('def_pay')['AGE'].hist(legend=True)
plt.show()
# here we can see that most of the clients fall between ages 20 and 45
sns.distplot(data['AGE'])
plt.title('Age Distribution')
sns.boxplot('def_pay','LIMIT_BAL',data=data)
data[data['LIMIT_BAL']>700000].sort_values(ascending=False,by='LIMIT_BAL')
data[data['LIMIT_BAL']>700000].value_counts().sum()
plt.figure(figsize=(16,5))
plt.subplot(121)
sns.boxplot(x='SEX', y= 'AGE',data = data)
sns.stripplot(x='SEX', y= 'AGE',data = data,linewidth = 0.9)
plt.title ('Sex vs AGE')
plt.subplot(122)
ax = sns.countplot(x='EDUCATION',data = data, order= data['EDUCATION'].value_counts().index)
plt.title ('EDUCATION')
labels = data['EDUCATION'].value_counts()
for i, v in enumerate(labels):
ax.text(i,v+100,v, horizontalalignment='center')
plt.show()
plt.figure(figsize=(20,5))
plt.subplot(121)
sns.boxplot(x='def_pay', y= 'AGE',data = data)
sns.stripplot(x='def_pay', y= 'AGE',data = data,linewidth = 0.9)
plt.title ('Age vs def_pay')
ax2=plt.subplot(1,2,2)
pay_edu = data.groupby('EDUCATION')['def_pay'].value_counts(normalize=True).unstack()
pay_edu = pay_edu.sort_values(ascending=False,by=1)
pay_edu.plot(kind='bar',stacked= True,color=["#3f3e6fd1", "#85c6a9"], ax = ax2)
plt.legend(loc=(1.04,0))
plt.title('Education vs def_pay')
plt.show()
# function for Multivariate analysis
# This method is used to show point estimates and confidence intervals using scatter plot graphs
def plotfig(df1,col11,col22,deft1):
plt.figure(figsize=(16,6))
plt.subplot(121)
sns.pointplot(df1[col11], df1[deft1],hue = df1[col22])
plt.subplot(122)
sns.countplot(df1[col11], hue = df1[col22])
plt.show()
def varplot(df2, col1, col2, deft, bin=3, unique=10):
df=df2.copy()
if len(df[col1].unique())>unique:
df[col1+'cut']= pd.qcut(df[col1],bin)
if len(df[col2].unique())>unique:
df[col2+'cut']= pd.qcut(df[col2],bin)
return plotfig(df,col1+'cut',col2+'cut',deft)
else:
df[col2+'cut']= df[col2]
return plotfig(df,col1+'cut',col2+'cut',deft)
else:
return plotfig(df,col1,col2,deft)
varplot(data,'AGE','SEX','def_pay',3)
varplot(data,'LIMIT_BAL','AGE','def_pay',3)
# Univariate Analysis
df = data.drop('ID',1)
nuniq = df.nunique()
df = data[[col for col in df if nuniq[col]>1 and nuniq[col]<50]]
row, cols = df.shape
colnames = list(df)
graph_perrow = 5
graph_row = (cols+graph_perrow-1)/ graph_perrow
max_graph = 20
plt.figure(figsize=(graph_perrow*12,graph_row*8))
for i in range(min(cols,max_graph)):
plt.subplot(graph_row,graph_perrow,i+1)
coldf = df.iloc[:,i]
if (not np.issubdtype(type(coldf),np.number)):
sns.countplot(colnames[i],data= df, order= df[colnames[i]].value_counts().index)
else:
coldf.hist()
plt.title(colnames[i])
plt.show()
cont_var = df.select_dtypes(exclude='object').columns
nrow = (len(cont_var)+5-1)/5
plt.figure(figsize=(12*5,6*2))
for i,j in enumerate(cont_var):
plt.subplot(nrow,5,i+1)
sns.distplot(data[j])
plt.show()
# From the above, we can see that most clients are in the 20-30 age group, followed by 31-40.
# With increasing age, the number of clients that default on the payment next month decreases.
# Hence we can see that age is an important feature for predicting the default payment next month.
plt.subplots(figsize=(26,20))
corr = data.corr()
sns.heatmap(corr,annot=True)
plt.show()
from statsmodels.stats.outliers_influence import variance_inflation_factor
df= data.drop(['def_pay','ID'],1)
vif = pd.DataFrame()
vif['Features']= df.columns
vif['vif']= [variance_inflation_factor(df.values,i) for i in range(df.shape[1])]
vif
# From this heatmap and the VIF we can see that there is some multicollinearity (values > 10) in the data, which we can handle
# simply by feature-engineering some columns
bill_tot = pd.DataFrame(data['BILL_AMT1']+data['BILL_AMT2']+data['BILL_AMT3']+data['BILL_AMT4']+data['BILL_AMT5']+data['BILL_AMT6'],columns=['bill_tot'])
pay_tot =pd.DataFrame(data['PAY_1']+data['PAY_2']+data['PAY_3']+data['PAY_4']+data['PAY_5']+data['PAY_6'],columns=['pay_tot'])
pay_amt_tot = pd.DataFrame(data['PAY_AMT1']+data['PAY_AMT2']+data['PAY_AMT3']+data['PAY_AMT4']+data['PAY_AMT5']+data['PAY_AMT6'],columns=['pay_amt_tot'])
frames=[bill_tot,pay_tot,pay_amt_tot,data['def_pay']]
tot = pd.concat(frames,axis=1)
plt.figure(figsize=(20,4))
plt.subplot(131)
sns.boxplot(x='def_pay',y='pay_tot',data = tot)
sns.stripplot(x='def_pay',y='pay_tot',data = tot,linewidth=1)
plt.subplot(132)
sns.boxplot(x='def_pay', y='bill_tot',data=tot)
sns.stripplot(x='def_pay', y='bill_tot',data=tot,linewidth=1)
plt.subplot(133)
sns.boxplot(x='def_pay', y='pay_amt_tot',data=tot)
sns.stripplot(x='def_pay', y='pay_amt_tot',data=tot,linewidth=1)
plt.show()
sns.pairplot(tot[['bill_tot','pay_amt_tot','pay_tot','def_pay']],hue='def_pay')
plt.show()
sns.violinplot(x=tot['def_pay'], y= tot['bill_tot'])
tot.drop('def_pay',1,inplace=True)
data1 = pd.concat([data,tot],1)
data1.groupby('def_pay')['EDUCATION'].hist(legend=True)
plt.show()
data1.groupby('def_pay')['AGE'].hist()
plt.figure(figsize=(12,6))
# the BILL_AMT columns are the most strongly correlated, so we replace them with their total (bill_tot)
df= pd.concat([bill_tot,df],1)
df1 = df.drop(['BILL_AMT1','BILL_AMT2','BILL_AMT3','BILL_AMT4','BILL_AMT5','BILL_AMT6'],1)
vif = pd.DataFrame()
vif['Features']= df1.columns
vif['vif']= [variance_inflation_factor(df1.values,i) for i in range(df1.shape[1])]
vif
# above we can see that the data no longer has multicollinearity (no values > 10)
data2 = df1.copy()
# using the above plot we can create age bins
age = [20,27,32,37,42,48,58,64,80]
lab = [8,7,6,5,4,3,2,1]
data2['AGE'] = pd.cut(data2['AGE'],bins= age,labels=lab)
data2 = pd.concat([data2,data['def_pay']],1)
data2
data2.groupby('def_pay')['AGE'].hist()
plt.figure(figsize=(12,6))
sns.countplot(data2['AGE'])
data2.groupby('def_pay')['LIMIT_BAL'].hist(legend=True)
plt.show()
data2.columns
```
# Model Creation
#### We know that we have a dataset with an imbalanced target variable.
#### You get a pretty high accuracy just by predicting the majority class, but you fail to capture the minority class,
#### which is most often the point of creating the model in the first place.
#### Hence we try several models to get the best results.
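A quick way to see the imbalance (a small added check, not part of the original notebook) is to look at the class proportions of the target:
```
# Share of each class in the target; the default class (1) is the clear minority
data2['def_pay'].value_counts(normalize=True)
```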
```
x= data2.drop(['def_pay'],1)
y = data2['def_pay']
x_train,x_test, y_train, y_test = train_test_split(x,y,test_size=0.30, random_state=1)
sc = StandardScaler()
x_train = sc.fit_transform(x_train)
x_test = sc.transform(x_test)
# Accuracy is not the best metric to use when evaluating imbalanced datasets as it can be misleading.
# hence we are using Classification Report and Confusion Matrix
# function for accuracy and confusion matrix
def res(y_test_valid,y_train_valid):
cm_log = confusion_matrix(y_test,y_test_valid)
ConfusionMatrixDisplay(cm_log).plot()
print(classification_report(y_test,y_test_valid))
print('train_accuracy:',accuracy_score(y_train,y_train_valid))
print('test_accuracy:',accuracy_score(y_test,y_test_valid))
```
# Logistic model
```
log_model= LogisticRegression()
log_model.fit(x_train,y_train)
y_pred_log = log_model.predict(x_test)
y_pred_train = log_model.predict(x_train)
res(y_pred_log,y_pred_train)
plot_roc_curve(log_model,x_test,y_test)
plt.show()
# log model using Threshold
threshold = 0.36
y_log_prob = log_model.predict_proba(x_test)
y_train_log_prob = log_model.predict_proba(x_train)
y_log_prob=y_log_prob[:,1]
y_train_log_prob= y_train_log_prob[:,1]
y_pred_log_prob = np.where(y_log_prob>threshold,1,0)
y_pred_log_prob_train = np.where(y_train_log_prob>threshold,1,0)
res(y_pred_log_prob,y_pred_log_prob_train)
```
# using Decision Tree model
```
dec_model = DecisionTreeClassifier()
dec_model.fit(x_train,y_train)
y_pred_dec = dec_model.predict(x_test)
y_pred_dec_train = dec_model.predict(x_train)
res(y_pred_dec,y_pred_dec_train)
```
### Hyperparameter tuning for the Decision Tree
```
parameters = {'max_depth':[1,2,3,4,5,6],'min_samples_split':[3,4,5,6,7],'min_samples_leaf':[1,2,3,4,5,6]}
tree = GridSearchCV(dec_model, parameters,cv=10)
tree.fit(x_train,y_train)
tree.best_params_
# We know that a decision tree has high variance and tends to overfit; we can reduce this by pruning,
# using the best parameters found by GridSearchCV
dec_model1 = DecisionTreeClassifier(max_depth=4,min_samples_split=10,min_samples_leaf=1)
dec_model1.fit(x_train,y_train)
y_pred_dec1 = dec_model1.predict(x_test)
y_pred_dec_train1 = dec_model1.predict(x_train)
res(y_pred_dec1,y_pred_dec_train1)
```
# Random Forest Model
```
rf_model = RandomForestClassifier(n_estimators=200, criterion='entropy', max_features='log2', max_depth=15, random_state=42)
rf_model.fit(x_train,y_train)
y_pred_rf = rf_model.predict(x_test)
y_pred_rf_train = rf_model.predict(x_train)
#res(y_pred_rf,y_pred_rf_train)
from sklearn.metrics import confusion_matrix
import itertools
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
cnf_matrix = confusion_matrix(y_test, y_pred_rf)
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=['Non_Default','Default'], normalize=False,
title='Non Normalized confusion matrix')
from sklearn.metrics import recall_score
print("Recall score:"+ str(recall_score(y_test, y_pred_rf)))
```
### Hyperparameter tuning for the Random Forest
```
parameters = {'n_estimators':[60,70,80],'max_depth':[1,2,3,4,5,6],'min_samples_split':[3,4,5,6,7],
'min_samples_leaf':[1,2,3,4,5,6]}
clf = GridSearchCV(rf_model, parameters,cv=10)
clf.fit(x_train,y_train)
clf.best_params_
# {'max_depth': 5,
# 'min_samples_leaf': 4,
# 'min_samples_split': 3,
# 'n_estimators': 70}
# Decision trees frequently perform well on imbalanced data, so a Random Forest, which bags many such trees, should be a better idea.
rf_model = RandomForestClassifier(n_estimators=80, max_depth=6, min_samples_leaf=2, min_samples_split=5)
rf_model.fit(x_train,y_train)
y_pred_rf = rf_model.predict(x_test)
y_pred_rf_train = rf_model.predict(x_train)
#res(y_pred_rf,y_pred_rf_train)
cnf_matrix = confusion_matrix(y_test, y_pred_rf)
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=['Non_Default','Default'], normalize=False,
title='Non Normalized confusion matrix')
print("Recall score:"+ str(recall_score(y_test, y_pred_rf)))
```
# KNN model
```
# finding the K value
error = []
for i in range(1,21,2):
knn = KNeighborsClassifier(n_neighbors=i)
knn.fit(x_train,y_train)
preds = knn.predict(x_test)
error.append(np.mean(preds!=y_test))
plt.plot(range(1,21,2), error, linestyle = 'dashed', marker ='o', mfc= 'red')
# From the elbow graph we can see that k=5 performs well, so we use n_neighbors=5
knn_model = KNeighborsClassifier(n_neighbors=5)
knn_model.fit(x_train,y_train)
y_pred_knn = knn_model.predict(x_test)
y_pred_knn_train = knn_model.predict(x_train)
res(y_pred_knn,y_pred_knn_train)
```
# SVM Model
```
# use penalized learning algorithms that increase the cost of classification mistakes on the minority class.
svm_model = SVC(class_weight='balanced', probability=True)
svm_model.fit(x_train,y_train)
y_pred_svm = svm_model.predict(x_test)
y_pred_svm_train = svm_model.predict(x_train)
res(y_pred_svm,y_pred_svm_train)
# with SVM the recall on the target class is 0.56, the best we have obtained so far
```
# Naive Bayes
```
nb_model = GaussianNB()
nb_model.fit(x_train,y_train)
y_pred_nb = nb_model.predict(x_test)
y_pred_nb_train = nb_model.predict(x_train)
res(y_pred_nb,y_pred_nb_train)
# Naive Bayes outperforms every other model here; although overall accuracy is only acceptable, check out the recall
```
# Boosting model XGB Classifier
```
from xgboost import XGBClassifier
xgb_model = XGBClassifier()
xgb_model.fit(x_train, y_train)
xgb_y_predict = xgb_model.predict(x_test)
xgb_y_predict_train = xgb_model.predict(x_train)
res(xgb_y_predict,xgb_y_predict_train)
# Even the boosting technique gives low recall for our target class
# So from the above models we can conclude that the data imbalance is playing a major part
# Hence we try to fix that with resampling techniques
```
# Random under-sampling
### Let’s apply some of these resampling techniques, using the Python library imbalanced-learn.
```
from collections import Counter
from imblearn.under_sampling import RandomUnderSampler
from imblearn.over_sampling import RandomOverSampler
from imblearn.under_sampling import TomekLinks
x= data2.drop(['def_pay'],1)
y = data2['def_pay']
rus = RandomUnderSampler(random_state=1)
x_rus, y_rus = rus.fit_resample(x,y)
print('original dataset shape:', Counter(y))
print('Resample dataset shape', Counter(y_rus))
x_train,x_test, y_train, y_test = train_test_split(x_rus,y_rus,test_size=0.20, random_state=1)
x_train = sc.fit_transform(x_train)
x_test = sc.transform(x_test)
# again we try to predict using Random Forest
rf_model_rus = RandomForestClassifier(n_estimators=70, max_depth=5, min_samples_leaf=4, min_samples_split=3,random_state=1)
rf_model_rus.fit(x_train,y_train)
y_pred_rf_rus = rf_model_rus.predict(x_test)
y_pred_rf_rus_train = rf_model_rus.predict(x_train)
res(y_pred_rf_rus,y_pred_rf_rus_train)
```
# Random over-sampling
```
x= data2.drop(['def_pay'],1)
y = data2['def_pay']
ros = RandomOverSampler(random_state=42)
x_ros, y_ros = ros.fit_resample(x, y)
print('Original dataset shape', Counter(y))
print('Resample dataset shape', Counter(y_ros))
x_train,x_test, y_train, y_test = train_test_split(x_ros,y_ros,test_size=0.20, random_state=1)
x_train = sc.fit_transform(x_train)
x_test = sc.transform(x_test)
rf_model_ros = RandomForestClassifier(n_estimators=70, max_depth=5, min_samples_leaf=4, min_samples_split=3,random_state=1)
rf_model_ros.fit(x_train,y_train)
y_pred_rf_ros = rf_model_ros.predict(x_test)
y_pred_rf_ros_train = rf_model_ros.predict(x_train)
res(y_pred_rf_ros,y_pred_rf_ros_train)
```
# Under-sampling: Tomek links
```
x= data2.drop(['def_pay'],1)
y = data2['def_pay']
tl = TomekLinks(sampling_strategy='majority')
x_tl, y_tl = tl.fit_resample(x,y)
print('Original dataset shape', Counter(y))
print('Resample dataset shape', Counter(y_tl))
x_train,x_test, y_train, y_test = train_test_split(x_tl,y_tl,test_size=0.20, random_state=1)
x_train = sc.fit_transform(x_train)
x_test = sc.transform(x_test)
rf_model_tl = RandomForestClassifier(n_estimators=70, max_depth=5, min_samples_leaf=4, min_samples_split=3,random_state=1)
rf_model_tl.fit(x_train,y_train)
y_pred_rf_tl = rf_model_tl.predict(x_test)
y_pred_rf_tl_train = rf_model_tl.predict(x_train)
res(y_pred_rf_tl,y_pred_rf_tl_train)
```
# Synthetic Minority Oversampling Technique (SMOTE)
```
from imblearn.over_sampling import SMOTE
smote = SMOTE()
x_smote, y_smote = smote.fit_resample(x, y)
print('Original dataset shape', Counter(y))
print('Resample dataset shape', Counter(y_smote))
x_train,x_test, y_train, y_test = train_test_split(x_smote,y_smote,test_size=0.20, random_state=1)
x_train = sc.fit_transform(x_train)
x_test = sc.transform(x_test)
x_train = pd.DataFrame(x_train).fillna(0)
x_test = pd.DataFrame(x_test).fillna(0)
rf_model_smote = RandomForestClassifier(n_estimators=70, max_depth=5, min_samples_leaf=4, min_samples_split=3,random_state=1)
rf_model_smote.fit(x_train,y_train)
y_pred_rf_smote = rf_model_smote.predict(x_test)
y_pred_rf_smote_train = rf_model_smote.predict(x_train)
res(y_pred_rf_smote,y_pred_rf_smote_train)
```
### Finally, with SMOTE we can see that accuracy, recall, and precision are all at comparable levels
### Though all the above models perform well based on accuracy, in an imbalanced dataset like this
#### we actually prefer to change the performance metrics
### We get better results with SVM and Naive Bayes on the original data,
### and those models show neither too much variance nor too much bias
### But when we over- or under-sample the data, other metrics like sensitivity and specificity are better
### Hence we can conclude that using resampling techniques gives better results
| github_jupyter |
# Sequence to Sequence Learning
:label:`sec_seq2seq`
As we have seen in :numref:`sec_machine_translation`,
in machine translation
both the input and output are a variable-length sequence.
To address this type of problem,
we have designed a general encoder-decoder architecture
in :numref:`sec_encoder-decoder`.
In this section,
we will
use two RNNs to design
the encoder and the decoder of
this architecture
and apply it to *sequence to sequence* learning
for machine translation
:cite:`Sutskever.Vinyals.Le.2014,Cho.Van-Merrienboer.Gulcehre.ea.2014`.
Following the design principle
of the encoder-decoder architecture,
the RNN encoder can
take a variable-length sequence as the input and transforms it into a fixed-shape hidden state.
In other words,
information of the input (source) sequence
is *encoded* in the hidden state of the RNN encoder.
To generate the output sequence token by token,
a separate RNN decoder
can predict the next token based on
what tokens have been seen (such as in language modeling) or generated,
together with the encoded information of the input sequence.
:numref:`fig_seq2seq` illustrates
how to use two RNNs
for sequence to sequence learning
in machine translation.

:label:`fig_seq2seq`
In :numref:`fig_seq2seq`,
the special "<eos>" token
marks the end of the sequence.
The model can stop making predictions
once this token is generated.
At the initial time step of the RNN decoder,
there are two special design decisions.
First, the special beginning-of-sequence "<bos>" token is an input.
Second,
the final hidden state of the RNN encoder is used
to initiate the hidden state of the decoder.
In designs such as :cite:`Sutskever.Vinyals.Le.2014`,
this is exactly
how the encoded input sequence information
is fed into the decoder for generating the output (target) sequence.
In some other designs such as :cite:`Cho.Van-Merrienboer.Gulcehre.ea.2014`,
the final hidden state of the encoder
is also fed into the decoder as
part of the inputs
at every time step as shown in :numref:`fig_seq2seq`.
Similar to the training of language models in
:numref:`sec_language_model`,
we can allow the labels to be the original output sequence,
shifted by one token:
"<bos>", "Ils", "regardent", "." $\rightarrow$
"Ils", "regardent", ".", "<eos>".
In the following,
we will explain the design of :numref:`fig_seq2seq`
in greater detail.
We will train this model for machine translation
on the English-French dataset as introduced in
:numref:`sec_machine_translation`.
```
import collections
import math
from mxnet import autograd, gluon, init, np, npx
from mxnet.gluon import nn, rnn
from d2l import mxnet as d2l
npx.set_np()
```
## Encoder
Technically speaking,
the encoder transforms an input sequence of variable length into a fixed-shape *context variable* $\mathbf{c}$, and encodes the input sequence information in this context variable.
As depicted in :numref:`fig_seq2seq`,
we can use an RNN to design the encoder.
Let us consider a sequence example (batch size: 1).
Suppose that
the input sequence is $x_1, \ldots, x_T$, such that $x_t$ is the $t^{\mathrm{th}}$ token in the input text sequence.
At time step $t$, the RNN transforms
the input feature vector $\mathbf{x}_t$ for $x_t$
and the hidden state $\mathbf{h} _{t-1}$ from the previous time step
into the current hidden state $\mathbf{h}_t$.
We can use a function $f$ to express the transformation of the RNN's recurrent layer:
$$\mathbf{h}_t = f(\mathbf{x}_t, \mathbf{h}_{t-1}). $$
In general,
the encoder transforms the hidden states at
all the time steps
into the context variable through a customized function $q$:
$$\mathbf{c} = q(\mathbf{h}_1, \ldots, \mathbf{h}_T).$$
For example, when choosing $q(\mathbf{h}_1, \ldots, \mathbf{h}_T) = \mathbf{h}_T$ such as in :numref:`fig_seq2seq`,
the context variable is just the hidden state $\mathbf{h}_T$
of the input sequence at the final time step.
So far we have used a unidirectional RNN
to design the encoder,
where
a hidden state only depends on
the input subsequence at and before the time step of the hidden state.
We can also construct encoders using bidirectional RNNs. In this case, a hidden state depends on
the subsequence before and after the time step (including the input at the current time step), which encodes the information of the entire sequence.
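As a concrete aside (this snippet is an added sketch, not part of the original implementation, and the decoder below still has to stay unidirectional since it generates the output from left to right), a bidirectional encoder in Gluon only requires the `bidirectional` flag:
```
# Added sketch: a bidirectional GRU layer for the encoder. Forward and backward
# hidden states are concatenated, so each time step outputs 2 * 16 = 32 units.
bi_gru = rnn.GRU(16, num_layers=2, bidirectional=True)
bi_gru.initialize()
X_demo = np.zeros((7, 4, 8))  # (`num_steps`, `batch_size`, `embed_size`)
state = bi_gru.begin_state(batch_size=4)
output, state = bi_gru(X_demo, state)
output.shape  # (7, 4, 32)
```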
Now let us [**implement the RNN encoder**].
Note that we use an *embedding layer*
to obtain the feature vector for each token in the input sequence.
The weight
of an embedding layer
is a matrix
whose number of rows equals to the size of the input vocabulary (`vocab_size`)
and number of columns equals to the feature vector's dimension (`embed_size`).
For any input token index $i$,
the embedding layer
fetches the $i^{\mathrm{th}}$ row (starting from 0) of the weight matrix
to return its feature vector.
Besides,
here we choose a multilayer GRU to
implement the encoder.
```
#@save
class Seq2SeqEncoder(d2l.Encoder):
"""The RNN encoder for sequence to sequence learning."""
def __init__(self, vocab_size, embed_size, num_hiddens, num_layers,
dropout=0, **kwargs):
super(Seq2SeqEncoder, self).__init__(**kwargs)
# Embedding layer
self.embedding = nn.Embedding(vocab_size, embed_size)
self.rnn = rnn.GRU(num_hiddens, num_layers, dropout=dropout)
def forward(self, X, *args):
# The output `X` shape: (`batch_size`, `num_steps`, `embed_size`)
X = self.embedding(X)
# In RNN models, the first axis corresponds to time steps
X = X.swapaxes(0, 1)
state = self.rnn.begin_state(batch_size=X.shape[1], ctx=X.ctx)
output, state = self.rnn(X, state)
# `output` shape: (`num_steps`, `batch_size`, `num_hiddens`)
# `state[0]` shape: (`num_layers`, `batch_size`, `num_hiddens`)
return output, state
```
The returned variables of recurrent layers
have been explained in :numref:`sec_rnn-concise`.
Let us still use a concrete example
to [**illustrate the above encoder implementation.**]
Below
we instantiate a two-layer GRU encoder
whose number of hidden units is 16.
Given
a minibatch of sequence inputs `X`
(batch size: 4, number of time steps: 7),
the hidden states of the last layer
at all the time steps
(`output` return by the encoder's recurrent layers)
are a tensor
of shape
(number of time steps, batch size, number of hidden units).
```
encoder = Seq2SeqEncoder(vocab_size=10, embed_size=8, num_hiddens=16,
num_layers=2)
encoder.initialize()
X = np.zeros((4, 7))
output, state = encoder(X)
output.shape
```
Since a GRU is employed here,
the shape of the multilayer hidden states
at the final time step
is
(number of hidden layers, batch size, number of hidden units).
If an LSTM is used,
memory cell information will also be contained in `state`.
```
len(state), state[0].shape
```
## [**Decoder**]
:label:`sec_seq2seq_decoder`
As we just mentioned,
the context variable $\mathbf{c}$ of the encoder's output encodes the entire input sequence $x_1, \ldots, x_T$. Given the output sequence $y_1, y_2, \ldots, y_{T'}$ from the training dataset,
for each time step $t'$
(the symbol differs from the time step $t$ of input sequences or encoders),
the probability of the decoder output $y_{t'}$
is conditional
on the previous output subsequence
$y_1, \ldots, y_{t'-1}$ and
the context variable $\mathbf{c}$, i.e., $P(y_{t'} \mid y_1, \ldots, y_{t'-1}, \mathbf{c})$.
To model this conditional probability on sequences,
we can use another RNN as the decoder.
At any time step $t^\prime$ on the output sequence,
the RNN takes the output $y_{t^\prime-1}$ from the previous time step
and the context variable $\mathbf{c}$ as its input,
then transforms
them and
the previous hidden state $\mathbf{s}_{t^\prime-1}$
into the
hidden state $\mathbf{s}_{t^\prime}$ at the current time step.
As a result, we can use a function $g$ to express the transformation of the decoder's hidden layer:
$$\mathbf{s}_{t^\prime} = g(y_{t^\prime-1}, \mathbf{c}, \mathbf{s}_{t^\prime-1}).$$
:eqlabel:`eq_seq2seq_s_t`
After obtaining the hidden state of the decoder,
we can use an output layer and the softmax operation to compute the conditional probability distribution
$P(y_{t^\prime} \mid y_1, \ldots, y_{t^\prime-1}, \mathbf{c})$ for the output at time step $t^\prime$.
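Concretely (an added note; $\mathbf{W}_o$ and $\mathbf{b}_o$ are introduced here to denote the weight and bias of the fully-connected output layer used in the implementation below), this amounts to
$$P(y_{t^\prime} \mid y_1, \ldots, y_{t^\prime-1}, \mathbf{c}) = \mathrm{softmax}(\mathbf{W}_o \mathbf{s}_{t^\prime} + \mathbf{b}_o).$$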
Following :numref:`fig_seq2seq`,
when implementing the decoder as follows,
we directly use the hidden state at the final time step
of the encoder
to initialize the hidden state of the decoder.
This requires that the RNN encoder and the RNN decoder have the same number of layers and hidden units.
To further incorporate the encoded input sequence information,
the context variable is concatenated
with the decoder input at all the time steps.
To predict the probability distribution of the output token,
a fully-connected layer is used to transform
the hidden state at the final layer of the RNN decoder.
```
class Seq2SeqDecoder(d2l.Decoder):
"""The RNN decoder for sequence to sequence learning."""
def __init__(self, vocab_size, embed_size, num_hiddens, num_layers,
dropout=0, **kwargs):
super(Seq2SeqDecoder, self).__init__(**kwargs)
self.embedding = nn.Embedding(vocab_size, embed_size)
self.rnn = rnn.GRU(num_hiddens, num_layers, dropout=dropout)
self.dense = nn.Dense(vocab_size, flatten=False)
def init_state(self, enc_outputs, *args):
return enc_outputs[1]
def forward(self, X, state):
# The output `X` shape: (`num_steps`, `batch_size`, `embed_size`)
X = self.embedding(X).swapaxes(0, 1)
# `context` shape: (`batch_size`, `num_hiddens`)
context = state[0][-1]
# Broadcast `context` so it has the same `num_steps` as `X`
context = np.broadcast_to(context, (
X.shape[0], context.shape[0], context.shape[1]))
X_and_context = np.concatenate((X, context), 2)
output, state = self.rnn(X_and_context, state)
output = self.dense(output).swapaxes(0, 1)
# `output` shape: (`batch_size`, `num_steps`, `vocab_size`)
# `state[0]` shape: (`num_layers`, `batch_size`, `num_hiddens`)
return output, state
```
To [**illustrate the implemented decoder**],
below we instantiate it with the same hyperparameters from the aforementioned encoder.
As we can see, the output shape of the decoder becomes (batch size, number of time steps, vocabulary size),
where the last dimension of the tensor stores the predicted token distribution.
```
decoder = Seq2SeqDecoder(vocab_size=10, embed_size=8, num_hiddens=16,
num_layers=2)
decoder.initialize()
state = decoder.init_state(encoder(X))
output, state = decoder(X, state)
output.shape, len(state), state[0].shape
```
To summarize,
the layers in the above RNN encoder-decoder model are illustrated in :numref:`fig_seq2seq_details`.

:label:`fig_seq2seq_details`
## Loss Function
At each time step, the decoder
predicts a probability distribution for the output tokens.
Similar to language modeling,
we can apply softmax to obtain the distribution
and calculate the cross-entropy loss for optimization.
Recall :numref:`sec_machine_translation`
that the special padding tokens
are appended to the end of sequences
so sequences of varying lengths
can be efficiently loaded
in minibatches of the same shape.
However,
prediction of padding tokens
should be excluded from loss calculations.
To this end,
we can use the following
`sequence_mask` function
to [**mask irrelevant entries with zero values**]
so later
multiplication of any irrelevant prediction
with zero equals to zero.
For example,
if the valid length of two sequences
excluding padding tokens
are one and two, respectively,
the remaining entries after
the first one
and the first two entries are cleared to zeros.
```
X = np.array([[1, 2, 3], [4, 5, 6]])
npx.sequence_mask(X, np.array([1, 2]), True, axis=1)
```
(**We can also mask all the entries across the last
few axes.**)
If you like, you may even specify
to replace such entries with a non-zero value.
```
X = np.ones((2, 3, 4))
npx.sequence_mask(X, np.array([1, 2]), True, value=-1, axis=1)
```
Now we can [**extend the softmax cross-entropy loss
to allow the masking of irrelevant predictions.**]
Initially,
masks for all the predicted tokens are set to one.
Once the valid length is given,
the mask corresponding to any padding token
will be cleared to zero.
In the end,
the loss for all the tokens
will be multiplied by the mask to filter out
irrelevant predictions of padding tokens in the loss.
```
#@save
class MaskedSoftmaxCELoss(gluon.loss.SoftmaxCELoss):
"""The softmax cross-entropy loss with masks."""
# `pred` shape: (`batch_size`, `num_steps`, `vocab_size`)
# `label` shape: (`batch_size`, `num_steps`)
# `valid_len` shape: (`batch_size`,)
def forward(self, pred, label, valid_len):
# `weights` shape: (`batch_size`, `num_steps`, 1)
weights = np.expand_dims(np.ones_like(label), axis=-1)
weights = npx.sequence_mask(weights, valid_len, True, axis=1)
return super(MaskedSoftmaxCELoss, self).forward(pred, label, weights)
```
For [**a sanity check**], we can create three identical sequences.
Then we can
specify that the valid lengths of these sequences
are 4, 2, and 0, respectively.
As a result,
the loss of the first sequence
should be twice as large as that of the second sequence,
while the third sequence should have a zero loss.
```
loss = MaskedSoftmaxCELoss()
loss(np.ones((3, 4, 10)), np.ones((3, 4)), np.array([4, 2, 0]))
```
## [**Training**]
:label:`sec_seq2seq_training`
In the following training loop,
we concatenate the special beginning-of-sequence token
and the original output sequence excluding the final token as
the input to the decoder, as shown in :numref:`fig_seq2seq`.
This is called *teacher forcing* because
the original output sequence (token labels) is fed into the decoder.
Alternatively,
we could also feed the *predicted* token
from the previous time step
as the current input to the decoder.
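For illustration (a toy example with made-up token IDs, separate from the training code below), the teacher-forcing decoder input is simply the `<bos>` token followed by the label sequence with its final token dropped:
```
# Hypothetical token IDs for one target sequence ending in <eos>
bos_id = 1
label_ids = [7, 4, 9, 2]                   # 2 standing in for the <eos> ID
dec_input_ids = [bos_id] + label_ids[:-1]  # prepend <bos>, drop the final token
print(dec_input_ids)                       # [1, 7, 4, 9]
```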
```
#@save
def train_seq2seq(net, data_iter, lr, num_epochs, tgt_vocab, device):
"""Train a model for sequence to sequence."""
net.initialize(init.Xavier(), force_reinit=True, ctx=device)
trainer = gluon.Trainer(net.collect_params(), 'adam',
{'learning_rate': lr})
loss = MaskedSoftmaxCELoss()
animator = d2l.Animator(xlabel='epoch', ylabel='loss',
xlim=[10, num_epochs])
for epoch in range(num_epochs):
timer = d2l.Timer()
metric = d2l.Accumulator(2) # Sum of training loss, no. of tokens
for batch in data_iter:
X, X_valid_len, Y, Y_valid_len = [
x.as_in_ctx(device) for x in batch]
bos = np.array(
[tgt_vocab['<bos>']] * Y.shape[0], ctx=device).reshape(-1, 1)
dec_input = np.concatenate([bos, Y[:, :-1]], 1) # Teacher forcing
with autograd.record():
Y_hat, _ = net(X, dec_input, X_valid_len)
l = loss(Y_hat, Y, Y_valid_len)
l.backward()
d2l.grad_clipping(net, 1)
num_tokens = Y_valid_len.sum()
trainer.step(num_tokens)
metric.add(l.sum(), num_tokens)
if (epoch + 1) % 10 == 0:
animator.add(epoch + 1, (metric[0] / metric[1],))
print(f'loss {metric[0] / metric[1]:.3f}, {metric[1] / timer.stop():.1f} '
f'tokens/sec on {str(device)}')
```
Now we can [**create and train an RNN encoder-decoder model**]
for sequence to sequence learning on the machine translation dataset.
```
embed_size, num_hiddens, num_layers, dropout = 32, 32, 2, 0.1
batch_size, num_steps = 64, 10
lr, num_epochs, device = 0.005, 300, d2l.try_gpu()
train_iter, src_vocab, tgt_vocab = d2l.load_data_nmt(batch_size, num_steps)
encoder = Seq2SeqEncoder(
len(src_vocab), embed_size, num_hiddens, num_layers, dropout)
decoder = Seq2SeqDecoder(
len(tgt_vocab), embed_size, num_hiddens, num_layers, dropout)
net = d2l.EncoderDecoder(encoder, decoder)
train_seq2seq(net, train_iter, lr, num_epochs, tgt_vocab, device)
```
## [**Prediction**]
To predict the output sequence
token by token,
at each decoder time step
the predicted token from the previous
time step is fed into the decoder as an input.
Similar to training,
at the initial time step
the beginning-of-sequence (`"<bos>"`) token
is fed into the decoder.
This prediction process
is illustrated in :numref:`fig_seq2seq_predict`.
When the end-of-sequence (`"<eos>"`) token is predicted,
the prediction of the output sequence is complete.

:label:`fig_seq2seq_predict`
We will introduce different
strategies for sequence generation in
:numref:`sec_beam-search`.
```
#@save
def predict_seq2seq(net, src_sentence, src_vocab, tgt_vocab, num_steps,
device, save_attention_weights=False):
"""Predict for sequence to sequence."""
src_tokens = src_vocab[src_sentence.lower().split(' ')] + [
src_vocab['<eos>']]
enc_valid_len = np.array([len(src_tokens)], ctx=device)
src_tokens = d2l.truncate_pad(src_tokens, num_steps, src_vocab['<pad>'])
# Add the batch axis
enc_X = np.expand_dims(np.array(src_tokens, ctx=device), axis=0)
enc_outputs = net.encoder(enc_X, enc_valid_len)
dec_state = net.decoder.init_state(enc_outputs, enc_valid_len)
# Add the batch axis
dec_X = np.expand_dims(np.array([tgt_vocab['<bos>']], ctx=device), axis=0)
output_seq, attention_weight_seq = [], []
for _ in range(num_steps):
Y, dec_state = net.decoder(dec_X, dec_state)
# We use the token with the highest prediction likelihood as the input
# of the decoder at the next time step
dec_X = Y.argmax(axis=2)
pred = dec_X.squeeze(axis=0).astype('int32').item()
# Save attention weights (to be covered later)
if save_attention_weights:
attention_weight_seq.append(net.decoder.attention_weights)
# Once the end-of-sequence token is predicted, the generation of the
# output sequence is complete
if pred == tgt_vocab['<eos>']:
break
output_seq.append(pred)
return ' '.join(tgt_vocab.to_tokens(output_seq)), attention_weight_seq
```
## Evaluation of Predicted Sequences
We can evaluate a predicted sequence
by comparing it with the
label sequence (the ground-truth).
BLEU (Bilingual Evaluation Understudy),
though originally proposed for evaluating
machine translation results :cite:`Papineni.Roukos.Ward.ea.2002`,
has been extensively used in measuring
the quality of output sequences for different applications.
In principle, for any $n$-gram in the predicted sequence,
BLEU evaluates whether this $n$-gram appears
in the label sequence.
Denote by $p_n$
the precision of $n$-grams,
which is
the ratio of
the number of matched $n$-grams in
the predicted and label sequences
to
the number of $n$-grams in the predicted sequence.
To explain,
given a label sequence $A$, $B$, $C$, $D$, $E$, $F$,
and a predicted sequence $A$, $B$, $B$, $C$, $D$,
we have $p_1 = 4/5$, $p_2 = 3/4$, $p_3 = 1/3$, and $p_4 = 0$.
Besides,
let $\mathrm{len}_{\text{label}}$ and $\mathrm{len}_{\text{pred}}$
be
the numbers of tokens in the label sequence and the predicted sequence, respectively.
Then, BLEU is defined as
$$ \exp\left(\min\left(0, 1 - \frac{\mathrm{len}_{\text{label}}}{\mathrm{len}_{\text{pred}}}\right)\right) \prod_{n=1}^k p_n^{1/2^n},$$
:eqlabel:`eq_bleu`
where $k$ is the length of the longest $n$-gram used for matching.
Based on the definition of BLEU in :eqref:`eq_bleu`,
whenever the predicted sequence is the same as the label sequence, BLEU is 1.
Moreover,
since matching longer $n$-grams is more difficult,
BLEU assigns a greater weight
to a longer $n$-gram precision.
Specifically, when $p_n$ is fixed,
$p_n^{1/2^n}$ increases as $n$ grows (the original paper uses $p_n^{1/n}$).
Furthermore,
since
predicting shorter sequences
tends to obtain a higher $p_n$ value,
the coefficient before the multiplication term in :eqref:`eq_bleu`
penalizes shorter predicted sequences.
For example, when $k=2$,
given the label sequence $A$, $B$, $C$, $D$, $E$, $F$ and the predicted sequence $A$, $B$,
although $p_1 = p_2 = 1$, the penalty factor $\exp(1-6/2) \approx 0.14$ lowers the BLEU.
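Before looking at the full implementation, the following standalone sketch (plain Python, independent of the `bleu` function below) verifies the two worked examples above, i.e., the $n$-gram precisions and the brevity penalty:
```
import collections, math

label_tokens = 'A B C D E F'.split()
pred_tokens = 'A B B C D'.split()

def ngram_precision(pred_tokens, label_tokens, n):
    # Clipped n-gram matching: each label n-gram can be matched at most once
    label_counts = collections.Counter(
        ' '.join(label_tokens[i: i + n]) for i in range(len(label_tokens) - n + 1))
    num_matches = 0
    for i in range(len(pred_tokens) - n + 1):
        ngram = ' '.join(pred_tokens[i: i + n])
        if label_counts[ngram] > 0:
            num_matches += 1
            label_counts[ngram] -= 1
    return num_matches / (len(pred_tokens) - n + 1)

print([ngram_precision(pred_tokens, label_tokens, n) for n in range(1, 5)])
# [0.8, 0.75, 0.333..., 0.0], i.e., 4/5, 3/4, 1/3, 0
print(math.exp(min(0, 1 - 6 / 2)))  # brevity penalty for the 2-token prediction, about 0.14
```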
We [**implement the BLEU measure**] as follows.
```
def bleu(pred_seq, label_seq, k): #@save
"""Compute the BLEU."""
pred_tokens, label_tokens = pred_seq.split(' '), label_seq.split(' ')
len_pred, len_label = len(pred_tokens), len(label_tokens)
score = math.exp(min(0, 1 - len_label / len_pred))
for n in range(1, k + 1):
num_matches, label_subs = 0, collections.defaultdict(int)
for i in range(len_label - n + 1):
label_subs[' '.join(label_tokens[i: i + n])] += 1
for i in range(len_pred - n + 1):
if label_subs[' '.join(pred_tokens[i: i + n])] > 0:
num_matches += 1
label_subs[' '.join(pred_tokens[i: i + n])] -= 1
score *= math.pow(num_matches / (len_pred - n + 1), math.pow(0.5, n))
return score
```
In the end,
we use the trained RNN encoder-decoder
to [**translate a few English sentences into French**]
and compute the BLEU of the results.
```
engs = ['go .', "i lost .", 'he\'s calm .', 'i\'m home .']
fras = ['va !', 'j\'ai perdu .', 'il est calme .', 'je suis chez moi .']
for eng, fra in zip(engs, fras):
translation, attention_weight_seq = predict_seq2seq(
net, eng, src_vocab, tgt_vocab, num_steps, device)
print(f'{eng} => {translation}, bleu {bleu(translation, fra, k=2):.3f}')
```
## Summary
* Following the design of the encoder-decoder architecture, we can use two RNNs to design a model for sequence to sequence learning.
* When implementing the encoder and the decoder, we can use multilayer RNNs.
* We can use masks to filter out irrelevant computations, such as when calculating the loss.
* In encoder-decoder training, the teacher forcing approach feeds original output sequences (in contrast to predictions) into the decoder.
* BLEU is a popular measure for evaluating output sequences by matching $n$-grams between the predicted sequence and the label sequence.
## Exercises
1. Can you adjust the hyperparameters to improve the translation results?
1. Rerun the experiment without using masks in the loss calculation. What results do you observe? Why?
1. If the encoder and the decoder differ in the number of layers or the number of hidden units, how can we initialize the hidden state of the decoder?
1. In training, replace teacher forcing with feeding the prediction at the previous time step into the decoder. How does this influence the performance?
1. Rerun the experiment by replacing GRU with LSTM.
1. Are there any other ways to design the output layer of the decoder?
[Discussions](https://discuss.d2l.ai/t/345)
| github_jupyter |
# Seaborn In Action
Seaborn is a data visualization library that is based on **Matplotlib**. It is tightly integrated with the Pandas library and provides a high-level interface for making attractive and informative statistical graphics in Python.
This notebook introduces the basic and essential functions in the seaborn library. Let's go ahead and import the relevant libraries for this tutorial.
```
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set()
?sns.relplot
```
## Loading the Data and Inspection
```
cs = pd.read_csv('data/c_scores.csv')
cs
cs.sample(5)
cs.info()
```
## Scatter Plots
We shall plot the **age** and **credit_amount** columns using the **jointplot** function.
```
sns.jointplot(x='age', y='credit_amount', data=cs)
```
Let's plot **age** and **credit_amount** again, but this time let's break that down by **job**. For this, we shall use **relplot()**. This function provides access to several axes-level functions that show the relationship between two variables, optionally with semantic mappings. It also comes with the **kind** parameter, which can be used to specify whether you want a **lineplot** or a **scatterplot**. The default is **scatterplot**.
The visualization below shows the relation between the credit amount given to people and their ages. In addition, we are comparing it over the kind of job. Seaborn uses coloring to show which of the points represent which kind of job. The **height** and **aspect** parameters are used to adjust the height and width of the FacetGrid. The **hue** parameter groups the variable that will produce elements with different colors. The **data** parameter represents the dataset of interest.
```
sns.relplot(x="age", y="credit_amount", height= 8, aspect=1, hue="job", data=cs)
```
We can also plot the above visualization where we compare what it looks like over two or more categorical elements. For example, in the below visualization, we shall compare the above visualization over **class** using the **col** parameter in the **relplot()** function.
```
sns.relplot(x="age", y="credit_amount", height= 8, aspect=1, hue="job", col='class',data=cs)
```
## Boxplots
A boxplot is used to show the distribution of numerical variables and facilitates comparisons across multiple categorical variables.
We would like to visualize the distribution of **age** of the customers with respect to **class**.
```
sns.boxplot(x='class', y='age', data=cs)
```
Let's visualize the distribution of **credit_amount** with respect to **purpose** using **class** as the **hue**
```
fig, ax = plt.subplots(figsize=(18,7))
sns.boxplot(x='purpose', y='credit_amount', hue='class', ax = ax, data=cs)
```
## Histogram
A histogram represents the distribution of data by forming bins along the range of the data and drawing bars to represent the number of observations that fall within each bin.
Let's plot the histogram of **credit_amount**.
```
sns.distplot(cs['credit_amount'])
```
Let's plot the histogram of the **age**
```
sns.distplot(cs['age'])
```
Let's get the histogram of the **credit_amount** of the customers, this time across the **class** dimension as a faceted histogram.
```
facet = sns.FacetGrid(cs, height=6, col='class')
facet = facet.map(sns.distplot, 'credit_amount', color='r')
```
It is, however, instructive to compare the distributions of **credit_amount** across **class** overlaid on the same plot.
```
facet = sns.FacetGrid(cs, height=6, hue='class')
facet = facet.map(sns.distplot, 'credit_amount')
```
## Line Plots
To make meaningful line plots, we are going to generate a dataframe to be used to help us understand line plots.
We will generate 36 year-end dates starting from 1970-12-31 at 12-month intervals. We will then select the first 36 rows of the **duration** and **age** columns to form our new dataframe.
```
new_series = pd.DataFrame({'time': pd.date_range('1970-12-31', periods=36, freq='12M'),
'duration': cs['duration'].iloc[0:36],
'age': cs['age'].iloc[0:36]})
new_series.head()
```
Next, we are going to move the **duration** and the **age** columns to rows so that we can plot both on the same graph. We are going to do that using the pandas **melt()** method.
The **melt()** method allows us to unpivot a dataframe from a wide to a long format, optionally leaving identifier columns set. It takes the dataframe you want to unpivot, **id_vars** (the identifier variable, either a single column or a list of columns), **var_name** (the name of the variable column that will hold the unpivoted column names), and **value_name** (the name of the value column).
```
series = pd.melt(new_series, id_vars=['time'],
var_name='Variables',
value_name='values')
series.sample(10)
lp = sns.lineplot(x='time', y='values', hue='Variables', data=series)
#Position the legend out the graph
lp.legend(bbox_to_anchor=(1.02, 1),
loc=2,
borderaxespad=0.0);
lp.set(title='Line plot of Duration and Age', xlabel='Year', ylabel='Values')
```
## Regression Plot
For regression plotting, we are going to use **lmplot()**. This function combines **regplot()** and FacetGrid. It is intended as a convenient interface for fitting regression models across subsets of a dataset.
We will use the famous iris flower dataset for the regression plot. It is available in the seaborn module.
```
iris = sns.load_dataset('iris')
iris.sample(8)
```
Let's plot **petal_length** vs. **petal_width** only
```
g = sns.lmplot(x='petal_length', y='petal_width', order=1, data=iris)
g.set_axis_labels("Petal Length(mm)", "Petal Width(mm)" )
```
Using the species, let's break the regression line out by species and fit a first-order regression to each species' respective data points.
```
g = sns.lmplot(x='petal_length', y='petal_width', hue='species',height=8, order=1, data=iris)
g.set_axis_labels("Petal Length(mm)", "Petal Width(mm)" )
```
Now, let's use **species** as the **col** (column) parameter
```
g = sns.lmplot(x='petal_length', y='petal_width', col='species',height=10, order=1, data=iris)
g.set_axis_labels("Petal Length(mm)", "Petal Width(mm)" )
```
### References
1. https://seaborn.pydata.org/index.html
2. https://www.featureranking.com/tutorials/python-tutorials/seaborn/
| github_jupyter |
<font size = "5"> **[Image Tools](2_Image_Tools.ipynb)** </font>
<hr style="height:2px;border-top:4px solid #FF8200" />
# Selective Fourier Transform
part of
<font size = "4"> **pyTEMlib**, a **pycroscopy** library </font>
Notebook by
Gerd Duscher
Materials Science & Engineering<br>
Joint Institute of Advanced Materials<br>
The University of Tennessee, Knoxville
An introduction into Fourier Filtering of images.
## Install pyTEMlib
If you have not done so in the [Introduction Notebook](_.ipynb), please test and install [pyTEMlib](https://github.com/gduscher/pyTEMlib) and other important packages with the code cell below.
```
import sys
from pkg_resources import get_distribution, DistributionNotFound
def test_package(package_name):
"""Test if package exists and returns version or -1"""
try:
version = (get_distribution(package_name).version)
except (DistributionNotFound, ImportError) as err:
version = '-1'
return version
# Colab setup ------------------
if 'google.colab' in sys.modules:
!pip install git+https://github.com/pycroscopy/pyTEMlib/ -q
# pyTEMlib setup ------------------
else:
if test_package('sidpy') < '0.0.7':
print('installing sidpy')
!{sys.executable} -m pip install --upgrade sidpy -q
if test_package('pyNSID') < '0.0.3':
print('installing pyNSID')
!{sys.executable} -m pip install --upgrade pyNSID -q
if test_package('pyTEMlib') < '0.2022.10.1':
print('installing pyTEMlib')
!{sys.executable} -m pip install --upgrade pyTEMlib -q
# ------------------------------
print('done')
```
## Loading of necessary libraries
Please note that we only need to load the pyTEMlib library, which is based on sidpy Datasets.
```
%pylab notebook
from matplotlib.widgets import RectangleSelector
sys.path.insert(0,'../../')
import pyTEMlib
import pyTEMlib.file_tools as ft
import pyTEMlib.image_tools as it
print('pyTEMlib version: ', pyTEMlib.__version__)
note_book_version = '2021.10.25'
note_book_name='pyTEMib/notebooks/Imaging/Adaptive_Fourier_Filter'
```
## Open File
These datasets are stored in the pyNSID data format (extension: hf5) automatically.
All results can be stored in that file.
First we select the file
```
file_widget = ft.FileWidget()
```
Now, we open and plot the dataset.
Select an area with the mouse; a rectangle will appear!
```
try:
dataset.h5_dataset.file.close()
except:
pass
dataset= ft.open_file(file_widget.file_name)
print(file_widget.file_name)
if dataset.data_type.name != 'IMAGE':
print('We really would need an image here')
dataset.plot()
selector = RectangleSelector(dataset.view.axis, None,interactive=True , drawtype='box')
def get_selection(dataset, extents):
if (np.array(extents) <2).all():
return dataset
xmin, xmax, ymin, ymax = selector.extents/(dataset.x[1]-dataset.x[0])
return dataset.like_data(dataset[int(xmin):int(xmax), int(ymin):int(ymax)])
selection = it.get_selection(dataset, selector.extents)
selection.plot()
```
## Power Spectrum of Image
```
power_spectrum = it.power_spectrum(selection, smoothing=1)
power_spectrum.view_metadata()
print('source: ', power_spectrum.source)
power_spectrum.plot()
```
## Spot Detection in Fourier Transform
```
# ------Input----------
spot_threshold=0.1
# ---------------------
spots = it.diffractogram_spots(power_spectrum, spot_threshold=spot_threshold)
spots = spots[np.linalg.norm(spots[:,:2],axis=1)<8,:]
spots = spots[np.linalg.norm(spots[:,:2],axis=1)>0.5,:]
power_spectrum.plot()
plt.gca().scatter(spots[:,0],spots[:,1], color='red', alpha=0.4);
#print(spots[:,:2])
#print(np.round(np.linalg.norm(spots[:,:2], axis=1),2))
#print(np.round(np.degrees(np.arctan2(spots[:,0], spots[:,1])+np.pi)%180,2))
angles=np.arctan2(spots[:,0], spots[:,1])
radius= np.linalg.norm(spots[:,:2], axis=1)
args = angles>0
radii = radius[angles>0]
angles = angles[angles>0]
print(radii, np.degrees(angles))
#print(np.degrees(angles[1]-angles[0]), np.degrees(angles[2]-angles[0]))
#print(1/radii)
new_angles = np.round(np.degrees(angles+np.pi-angles[0]+0.0000001)%180,2)
print(new_angles)
print(np.degrees(angles[1]-angles[0]), np.degrees(angles[2]-angles[0]))
angles=np.arctan2(spots[:,0], spots[:,1])
radius= np.linalg.norm(spots[:,:2], axis=1)
args = angles>0
radii = radius[angles>0]
angles = angles[angles>0]
print(radii, np.degrees(angles))
# clockwise from up
angles =(-np.degrees(np.arctan2(spots[:,0], spots[:,1]))+180) % 360
spots = spots[np.argsort(angles)]
angles =(-np.degrees(np.arctan2(spots[:,0], spots[:,1]))+180) % 360
plane_distances = 1/np.linalg.norm(spots[:,:2],axis=1)
rolled_angles= np.roll(angles,1) %360
rolled_angles[0] -= 360
relative_angles = angles - rolled_angles
print(np.round(plane_distances,3))
print(np.round(relative_angles,1))
import pyTEMlib.kinematic_scattering as ks
#Initialize the dictionary of the input
tags_simulation = {}
### Define Crystal
tags_simulation = ft.read_poscar('./POSCAR.mp-2418_PdSe2')
### Define experimental parameters:
tags_simulation['acceleration_voltage_V'] = 200.0 *1000.0 #V
tags_simulation['new_figure'] = False
tags_simulation['plot FOV'] = 30
tags_simulation['convergence_angle_mrad'] = 0
tags_simulation['zone_hkl'] = np.array([0,0,1])  # incident nearest zone axis: defines Laue Zones!!!!
tags_simulation['mistilt'] = np.array([0,0,0]) # mistilt in degrees
tags_simulation['Sg_max'] = .2 # 1/nm maximum allowed excitation error ; This parameter is related to the thickness
tags_simulation['hkl_max'] = 6 # Highest evaluated Miller indices
######################################
# Diffraction Simulation of Crystal #
######################################
import itertools
hkl_list = [list([0, 0, 0])]
spot_dict = {}
for hkl in itertools.product(range(6), repeat=3):
if list(hkl) not in hkl_list:
#print(hkl, hkl_list)
tags_simulation['zone_hkl'] = hkl
ks.kinematic_scattering(tags_simulation, verbose = False)
if list(tags_simulation['nearest_zone_axes']['0']['hkl']) not in hkl_list:
print('- ', tags_simulation['nearest_zone_axes']['0']['hkl'])
spots = tags_simulation['allowed']['g'][np.linalg.norm(tags_simulation['allowed']['g'][:,:2], axis=1)<4.7,:2]
angles=np.arctan2(spots[:,0], spots[:,1])
radius= np.linalg.norm(spots[:,:2], axis=1)
args = angles>0
radii = radius[angles>0]
angles = angles[angles>0]
spot_dict[hkl] = {"radii": radii, "angles": angles}
print(radii, np.degrees(angles%np.pi))
hkl_list.append(list(hkl))
spot_dict
for hkl, refl in spot_dict.items():
if len(refl['radii'])>4:
print(hkl, 1/refl['radii'])
```
## Log the result
```
# results_channel = ft.log_results(dataset.h5_dataset.parent.parent, filtered_dataset)
```
A tree-like plot of the file
```
ft.h5_tree(dataset.h5_dataset.file)
```
## Close File
let's close the file but keep the filename
```
dataset.h5_dataset.file.close()
```
| github_jupyter |
If you're opening this Notebook on colab, you will probably need to install 🤗 Transformers and 🤗 Datasets. Right now this requires the current master branch of both. Uncomment the following cell and run it.
```
#! pip install git+https://github.com/huggingface/transformers.git
#! pip install git+https://github.com/huggingface/datasets.git
```
If you're opening this notebook locally, make sure your environment has an install from the last version of those libraries.
To be able to share your model with the community, there are a few more steps to follow.
First you have to store your authentication token from the Hugging Face website (sign up [here](https://huggingface.co/join) if you haven't already!) then uncomment the following cell and input your username and password (this only works on Colab, in a regular notebook, you need to do this in a terminal):
```
from huggingface_hub import notebook_login
notebook_login()
```
Then you need to install Git-LFS and set up Git if you haven't already. Uncomment the following instructions and adapt them with your name and email:
```
# !apt install git-lfs
# !git config --global user.email "you@example.com"
# !git config --global user.name "Your Name"
```
Make sure your version of Transformers is at least 4.8.1 since the functionality was introduced in that version:
```
import transformers
print(transformers.__version__)
```
# Fine-tuning a model on a multiple choice task
In this notebook, we will see how to fine-tune one of the [🤗 Transformers](https://github.com/huggingface/transformers) models on a multiple choice task, which is the task of selecting the most plausible option from a given set of candidates. The dataset used here is [SWAG](https://www.aclweb.org/anthology/D18-1009/), but you can adapt the pre-processing to any other multiple choice dataset you like, or to your own data. SWAG is a dataset about commonsense reasoning, where each example describes a situation and then proposes four options that could go after it.
This notebook is built to run with any model checkpoint from the [Model Hub](https://huggingface.co/models) as long as that model has a version with a multiple choice head. Depending on your model and the GPU you are using, you might need to adjust the batch size to avoid out-of-memory errors. Set those two parameters, then the rest of the notebook should run smoothly:
```
model_checkpoint = "bert-base-uncased"
batch_size = 16
```
## Loading the dataset
We will use the [🤗 Datasets](https://github.com/huggingface/datasets) library to download the data. This can be easily done with the function `load_dataset`.
```
from datasets import load_dataset, load_metric
```
`load_dataset` will cache the dataset to avoid downloading it again the next time you run this cell.
```
datasets = load_dataset("swag", "regular")
```
The `dataset` object itself is a [`DatasetDict`](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasetdict), which contains one key each for the training, validation, and test set (with more keys for the mismatched validation and test sets in the special case of `mnli`).
```
datasets
```
To access an actual element, you need to select a split first, then give an index:
```
datasets["train"][0]
```
To get a sense of what the data looks like, the following function will show some examples picked randomly in the dataset.
```
from datasets import ClassLabel
import random
import pandas as pd
from IPython.display import display, HTML
def show_random_elements(dataset, num_examples=10):
assert num_examples <= len(
dataset
), "Can't pick more elements than there are in the dataset."
picks = []
for _ in range(num_examples):
pick = random.randint(0, len(dataset) - 1)
while pick in picks:
pick = random.randint(0, len(dataset) - 1)
picks.append(pick)
df = pd.DataFrame(dataset[picks])
for column, typ in dataset.features.items():
if isinstance(typ, ClassLabel):
df[column] = df[column].transform(lambda i: typ.names[i])
display(HTML(df.to_html()))
show_random_elements(datasets["train"])
```
Each example in the dataset has a context composed of a first sentence (in the field `sent1`) and an introduction to the second sentence (in the field `sent2`). Then four possible endings are given (in the fields `ending0`, `ending1`, `ending2` and `ending3`) and the model must pick the right one (indicated in the field `label`). The following function lets us visualize a given example a bit better:
```
def show_one(example):
print(f"Context: {example['sent1']}")
print(f" A - {example['sent2']} {example['ending0']}")
print(f" B - {example['sent2']} {example['ending1']}")
print(f" C - {example['sent2']} {example['ending2']}")
print(f" D - {example['sent2']} {example['ending3']}")
print(f"\nGround truth: option {['A', 'B', 'C', 'D'][example['label']]}")
show_one(datasets["train"][0])
show_one(datasets["train"][15])
```
## Preprocessing the data
Before we can feed those texts to our model, we need to preprocess them. This is done by a 🤗 Transformers `Tokenizer` which will (as the name indicates) tokenize the inputs (including converting the tokens to their corresponding IDs in the pretrained vocabulary) and put it in a format the model expects, as well as generate the other inputs that model requires.
To do all of this, we instantiate our tokenizer with the `AutoTokenizer.from_pretrained` method, which will ensure:
- we get a tokenizer that corresponds to the model architecture we want to use,
- we download the vocabulary used when pretraining this specific checkpoint.
That vocabulary will be cached, so it's not downloaded again the next time we run the cell.
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
```
You can directly call this tokenizer on one sentence or a pair of sentences:
```
tokenizer("Hello, this one sentence!", "And this sentence goes with it.")
```
Depending on the model you selected, you will see different keys in the dictionary returned by the cell above. They don't matter much for what we're doing here (just know they are required by the model we will instantiate later), you can learn more about them in [this tutorial](https://huggingface.co/transformers/preprocessing.html) if you're interested.
To preprocess our dataset, we will thus need the names of the columns containing the sentence(s).
We can then write the function that will preprocess our samples. The tricky part is to put all the possible pairs of sentences in two big lists before passing them to the tokenizer, then un-flatten the result so that each example has four input IDs, attention masks, etc.
When calling the `tokenizer`, we use the argument `truncation=True`. This will ensure that an input longer than what the selected model can handle will be truncated to the maximum length accepted by the model.
```
ending_names = ["ending0", "ending1", "ending2", "ending3"]
def preprocess_function(examples):
# Repeat each first sentence four times to go with the four possibilities of second sentences.
first_sentences = [[context] * 4 for context in examples["sent1"]]
# Grab all second sentences possible for each context.
question_headers = examples["sent2"]
second_sentences = [
[f"{header} {examples[end][i]}" for end in ending_names]
for i, header in enumerate(question_headers)
]
# Flatten everything
first_sentences = sum(first_sentences, [])
second_sentences = sum(second_sentences, [])
# Tokenize
tokenized_examples = tokenizer(first_sentences, second_sentences, truncation=True)
# Un-flatten
return {
k: [v[i : i + 4] for i in range(0, len(v), 4)]
for k, v in tokenized_examples.items()
}
```
This function works with one or several examples. In the case of several examples, the tokenizer will return a list of lists of lists for each key: a list of all examples (here 5), then a list of all choices (4) and a list of input IDs (length varying here since we did not apply any padding):
```
examples = datasets["train"][:5]
features = preprocess_function(examples)
print(
len(features["input_ids"]),
len(features["input_ids"][0]),
[len(x) for x in features["input_ids"][0]],
)
```
To check we didn't do anything wrong when grouping all possibilities and then un-flattening, let's have a look at the decoded inputs for a given example:
```
idx = 3
[tokenizer.decode(features["input_ids"][idx][i]) for i in range(4)]
```
We can compare it to the ground truth:
```
show_one(datasets["train"][3])
```
This seems alright, so we can apply this function to all the examples in our dataset; we just use the `map` method of the `dataset` object we created earlier. This will apply the function to all the elements of all the splits in `dataset`, so our training, validation, and testing data will be preprocessed in one single command.
```
encoded_datasets = datasets.map(preprocess_function, batched=True)
```
Even better, the results are automatically cached by the 🤗 Datasets library to avoid spending time on this step the next time you run your notebook. The 🤗 Datasets library is normally smart enough to detect when the function you pass to map has changed (and thus requires not using the cached data). For instance, it will properly detect if you change the task in the first cell and rerun the notebook. 🤗 Datasets warns you when it uses cached files; you can pass `load_from_cache_file=False` in the call to `map` to not use the cached files and force the preprocessing to be applied again.
Note that we passed `batched=True` to encode the texts by batches together. This is to leverage the full benefit of the fast tokenizer we loaded earlier, which will use multi-threading to treat the texts in a batch concurrently.
## Fine-tuning the model
Now that our data is ready, we can download the pretrained model and fine-tune it. Since our task is multiple choice, we use the `TFAutoModelForMultipleChoice` class. Like with the tokenizer, the `from_pretrained` method will download and cache the model for us.
```
from transformers import TFAutoModelForMultipleChoice
model = TFAutoModelForMultipleChoice.from_pretrained(model_checkpoint)
```
The warning is telling us we are throwing away some weights (the `vocab_transform` and `vocab_layer_norm` layers) and randomly initializing some other (the `pre_classifier` and `classifier` layers). This is absolutely normal in this case, because we are removing the head used to pretrain the model on a masked language modeling objective and replacing it with a new head for which we don't have pretrained weights, so the library warns us we should fine-tune this model before using it for inference, which is exactly what we are going to do.
Next, we set some names and hyperparameters for the model. The first two variables are used so we can push the model to the [Hub](https://huggingface.co/models) at the end of training. Remove the two of them if you didn't follow the installation steps at the top of the notebook, otherwise you can change the value of `push_to_hub_model_id` to something you would prefer.
```
model_name = model_checkpoint.split("/")[-1]
push_to_hub_model_id = f"{model_name}-finetuned-swag"
learning_rate = 5e-5
batch_size = batch_size
num_train_epochs = 2
weight_decay = 0.01
```
Next we need to tell our `Dataset` how to form batches from the pre-processed inputs. We haven't done any padding yet because we will pad each batch to the maximum length inside the batch (instead of doing so with the maximum length of the whole dataset). This will be the job of the *data collator*. A data collator takes a list of examples and converts them to a batch (by, in our case, applying padding). Since there is no data collator in the library that works on our specific problem, we will write one, adapted from the `DataCollatorWithPadding`:
```
from dataclasses import dataclass
from transformers.tokenization_utils_base import (
PreTrainedTokenizerBase,
PaddingStrategy,
)
from typing import Optional, Union
import tensorflow as tf
@dataclass
class DataCollatorForMultipleChoice:
"""
Data collator that will dynamically pad the inputs for multiple choice received.
"""
tokenizer: PreTrainedTokenizerBase
padding: Union[bool, str, PaddingStrategy] = True
max_length: Optional[int] = None
pad_to_multiple_of: Optional[int] = None
def __call__(self, features):
label_name = "label" if "label" in features[0].keys() else "labels"
labels = [feature.pop(label_name) for feature in features]
batch_size = len(features)
num_choices = len(features[0]["input_ids"])
flattened_features = [
[{k: v[i] for k, v in feature.items()} for i in range(num_choices)]
for feature in features
]
flattened_features = sum(flattened_features, [])
batch = self.tokenizer.pad(
flattened_features,
padding=self.padding,
max_length=self.max_length,
pad_to_multiple_of=self.pad_to_multiple_of,
return_tensors="tf",
)
# Un-flatten
batch = {
k: tf.reshape(v, (batch_size, num_choices, -1)) for k, v in batch.items()
}
# Add back labels
batch["labels"] = tf.convert_to_tensor(labels, dtype=tf.int64)
return batch
```
When called on a list of examples, it will flatten all the inputs/attentions masks etc. in big lists that it will pass to the `tokenizer.pad` method. This will return a dictionary with big tensors (of shape `(batch_size * 4) x seq_length`) that we then unflatten.
We can check this data collator works on a list of features, we just have to make sure to remove all features that are not inputs accepted by our model (something the `Trainer` will do automatically for us after):
```
accepted_keys = ["input_ids", "attention_mask", "label"]
features = [
{k: v for k, v in encoded_datasets["train"][i].items() if k in accepted_keys}
for i in range(10)
]
batch = DataCollatorForMultipleChoice(tokenizer)(features)
encoded_datasets["train"].features["attention_mask"].feature.feature
```
Again, all those flatten/un-flatten are sources of potential errors so let's make another sanity check on our inputs:
```
[tokenizer.decode(batch["input_ids"][8][i].numpy().tolist()) for i in range(4)]
show_one(datasets["train"][8])
```
All good! Now we can use this collator as a collation function for our dataset. The best way to do this is with the `to_tf_dataset()` method. This converts our dataset to a `tf.data.Dataset` that Keras can take as input. It also applies our collation function to each batch.
```
data_collator = DataCollatorForMultipleChoice(tokenizer)
train_set = encoded_datasets["train"].to_tf_dataset(
columns=["attention_mask", "input_ids", "labels"],
shuffle=True,
batch_size=batch_size,
collate_fn=data_collator,
)
validation_set = encoded_datasets["validation"].to_tf_dataset(
columns=["attention_mask", "input_ids", "labels"],
shuffle=False,
batch_size=batch_size,
collate_fn=data_collator,
)
```
Now we can create our model. First, we specify an optimizer. Using the `create_optimizer` function we can get a nice `AdamW` optimizer with weight decay and a learning rate decay schedule set up for free - but to compute that schedule, it needs to know how long training will take.
```
from transformers import create_optimizer
total_train_steps = (len(encoded_datasets["train"]) // batch_size) * num_train_epochs
optimizer, schedule = create_optimizer(
init_lr=learning_rate, num_warmup_steps=0, num_train_steps=total_train_steps
)
```
All Transformers models have a `loss` output head, so we can simply leave the loss argument to `compile()` blank to train on it.
```
import tensorflow as tf
model.compile(optimizer=optimizer)
```
Now we can train our model. We can also add a callback to sync up our model with the Hub - this allows us to resume training from other machines and even test the model's inference quality midway through training! Make sure to change the `username` if you do. If you don't want to do this, simply remove the callbacks argument in the call to `fit()`.
```
from transformers.keras_callbacks import PushToHubCallback
username = "Rocketknight1"
callback = PushToHubCallback(
output_dir="./mc_model_save",
tokenizer=tokenizer,
hub_model_id=f"{username}/{push_to_hub_model_id}",
)
model.fit(
train_set,
validation_data=validation_set,
epochs=num_train_epochs,
callbacks=[callback],
)
```
One downside of using the internal loss, however, is that we can't use Keras metrics with it. So let's compute accuracy after the fact, to see how our model is performing. First, we need to get our model's predicted answers on the validation set.
```
predictions = model.predict(validation_set)["logits"]
labels = encoded_datasets["validation"]["label"]
```
And now we can compute our accuracy with Numpy.
```
import numpy as np
preds = np.argmax(predictions, axis=1)
print({"accuracy": (preds == labels).astype(np.float32).mean().item()})
```
If you used the callback above, you can now share this model with all your friends, family, favorite pets: they can all load it with the identifier `"your-username/the-name-you-picked"` so for instance:
```python
from transformers import AutoModelForMultipleChoice
model = AutoModelForMultipleChoice.from_pretrained("your-username/my-awesome-model")
```
| github_jupyter |
**Note**: Click on "*Kernel*" > "*Restart Kernel and Run All*" in [JupyterLab](https://jupyterlab.readthedocs.io/en/stable/) *after* finishing the exercises to ensure that your solution runs top to bottom *without* any errors. If you cannot run this file on your machine, you may want to open it [in the cloud <img height="12" style="display: inline-block" src="../static/link/to_mb.png">](https://mybinder.org/v2/gh/webartifex/intro-to-python/develop?urlpath=lab/tree/01_elements/01_exercises.ipynb).
# Chapter 1: Elements of a Program (Coding Exercises)
The exercises below assume that you have read the [first part <img height="12" style="display: inline-block" src="../static/link/to_nb.png">](https://nbviewer.jupyter.org/github/webartifex/intro-to-python/blob/develop/01_elements/00_content.ipynb) of Chapter 1.
The `...`'s in the code cells indicate where you need to fill in code snippets. The number of `...`'s within a code cell give you a rough idea of how many lines of code are needed to solve the task. You should not need to create any additional code cells for your final solution. However, you may want to use temporary code cells to try out some ideas.
## Printing Output
**Q1**: *Concatenate* `greeting` and `audience` below with the `+` operator and print out the resulting message `"Hello World"` with only *one* call of the built-in [print() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#print) function!
Hint: You may have to "add" a space character in between `greeting` and `audience`.
```
greeting = "Hello"
audience = "World"
print(...)
```
**Q2**: How is your answer to **Q1** an example of the concept of **operator overloading**?
< your answer >
**Q3**: Read the documentation on the built-in [print() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#print) function! How can you print the above message *without* concatenating `greeting` and `audience` first in *one* call of [print() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#print)?
Hint: The `*objects` in the documentation implies that we can put several *expressions* (i.e., variables) separated by commas within the same call of the [print() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#print) function.
```
print(...)
```
**Q4**: What does the `sep=" "` mean in the documentation on the built-in [print() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#print) function? Adjust and use it to print out the three names referenced by `first`, `second`, and `third` on *one* line separated by *commas* with only *one* call of the [print() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#print) function!
```
first = "Anthony"
second = "Berta"
third = "Christian"
print(...)
```
**Q5**: Lastly, what does the `end="\n"` mean in the documentation? Adjust and use it within the `for`-loop to print the numbers `1` through `10` on *one* line with only *one* call of the [print() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#print) function!
```
for number in [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]:
print(...)
```
| github_jupyter |
# Introduction to Machine Learning
(The examples in this notebook were inspired by my work for EmergentAlliance, the Scikit-Learn documentation and Jason Brownlee's "Machine Learning Mastery with Python")
In this short intro course we will focus on predictive modeling. That means that we want to use the models to make predictions, e.g. a system's future behaviour or a system's response to specific inputs, aka classification and regression.
From the various categories of machine learning we will look at **supervised learning**, i.e., we will train a model based on labelled training data. For example, when training an image recognition model to distinguish cats vs. dogs, you need to label a lot of pictures for training purposes upfront.

The other categories cover **unsupervised learning**, e.g. clustering, and **reinforcement learning**, e.g. DeepMind's AlphaGo.

## Datasets:
We will look at two different datasets:
1. Iris Flower Dataset
2. Boston Housing Prices
These datasets are so-called toy datasets, well-known machine learning examples, and already included in the Python machine learning library scikit-learn https://scikit-learn.org/stable/datasets/toy_dataset.html. The Iris Flower dataset is an example of a classification problem, whereas the Boston Housing Price dataset is a regression example.
## What does an ML project always look like?
* Idea --> Problem Definition / Hypothesis formulation
* Analyze and Visualize your data
- Understand your data (dimensions, data types, class distributions (bias!), data summary, correlations, skewness)
- Visualize your data (box and whisker / violin / distribution / scatter matrix)
* Data Preprocessing including data cleansing, data wrangling, data compilation, normalization, standardization
* Apply algorithms and make predictions
* Improve, validate and present results
## Let's get started
Load some libraries
```
import pandas as pd # data analysis
import numpy as np # math operations on arrays and vectors
import matplotlib.pyplot as plt # plotting
# display plots directly in the notebook
%matplotlib inline
import sklearn # the library we use for all ML related functions, algorithms
```
## Example 1: Iris flower dataset
https://scikit-learn.org/stable/datasets/toy_dataset.html#iris-dataset
4 numeric, predictive attributes (sepal length in cm, sepal width in cm, petal length in cm, petal width in cm) and the class (Iris-Setosa, Iris-Versicolour, Iris-Virginica)
**Hypothesis:** One can predict the class of an Iris flower based on its attributes.
Here this is just one sentence, but formulating this hypothesis is a non-trivial, iterative task, which is the basis for data and feature selection and extremely important for the overall success!
### 1. Load the data
```
# check here again with autocompletion --> then you can see all available datasets
# https://scikit-learn.org/stable/datasets/toy_dataset.html
from sklearn.datasets import load_iris
(data, target) =load_iris(return_X_y=True, as_frame=True)
data
target
```
We will combine this now into one dataframe and check the classes
```
data["class"]=target
data
```
### 2. Understand your data
```
data.describe()
```
This is a classification problem, so we will check the class distribution. This is important to avoid bias due to over- or underrepresentation of classes. Well-known examples of this problem are predictive maintenance (very few failure cases compared to normal runs) and Amazon's hiring AI https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G.
```
class_counts = data.groupby('class').size()
class_counts
```
Now let's check for correlations
Correlation describes the relationship between two variables and how they may or may not change together.
There are different methods available (--> check with ?data.corr)
```
correlations = data.corr(method='pearson')
correlations
```
Let's do a heatmap plot for the correlation matrix (pandas built-in)
```
correlations.style.background_gradient(cmap='coolwarm').set_precision(2)
```
Now we will also check the skewness of the distributions, i.e., how much they deviate from a symmetric Gaussian distribution.
The skew results show a positive (right) or negative (left) skew. Values closer to zero show less skew.
```
skew=data.skew()
skew
```
## 2. Visualize your data
- Histogram
- Pairplot
- Density
```
data.hist()
data.plot(kind="density", subplots=True, layout=(3,2),sharex=False)
```
Another nice plot is the box and whisker plot, visualizing the quartiles of a distribution
```
data.plot(kind="box", subplots=True, layout=(3,2),sharex=False)
```
Another option is seaborn's violin plot, which gives a more intuitive feeling about the distribution of values.
```
import seaborn as sns
sns.violinplot(data=data,x="class", y="sepal length (cm)")
```
And last but not least, a scatterplot matrix, similar to the pairplot we did already in the last session. This should also give insights about correlations.
```
sns.pairplot(data)
```
## 3. Data Preprocessing
For this dataset, there are already some steps we don't need to take, like:
conglomerating multiple data sources into one table, including the adaptation of formats and granularities. Also, we don't need to take care of missing values or NaNs. But preprocessing also includes:
- Rescaling
- Normalization
The goal of these transformations is to bring the data into a format that is most beneficial for the algorithms applied later. For example, optimization algorithms for multivariate problems perform better when all attributes/parameters have the same scale, and other methods assume that input variables have a Gaussian distribution, so it is better to transform the input parameters to meet these requirements.
First we look at **rescaling**. This is done to rescale all attributes (parameters) into the same range, most of the time the range [0,1].
To apply these preprocessing steps, we first need to transform the dataframe into an array and split the array into input and output values, here the descriptive parameters and the class.
```
# transform into array
array = data.values
array
# separate array into input and output components
X = array[:,0:4]
Y = array[:,4]
# Now we apply the MinMaxScaler with a range of [0,1], so that afterwards all columns have a min of 0 and a max of 1.
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler(feature_range=(0, 1))
rescaledX = scaler.fit_transform(X)
rescaledX
```
Now we will apply normalization using the StandardScaler, which means that each column (each attribute/parameter) will be transformed such that afterwards each attribute follows a standard normal distribution with mean = 0 and std. dev. = 1.
Given the distribution of the data, each value in the dataset has the mean subtracted and is then divided by the standard deviation of the whole dataset (or of the feature in the multivariate case).
```
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit(X)
rescaledX = scaler.transform(X)
rescaledX
```
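As a quick sanity check (assuming `X` and `rescaledX` from the cells above are still defined), the same transformation can be computed by hand by subtracting each column's mean and dividing by its standard deviation:
```
import numpy as np
# Manual standardization: (value - column mean) / column standard deviation
manual = (X - X.mean(axis=0)) / X.std(axis=0)
print(np.allclose(manual, rescaledX))   # should print True
print(rescaledX.mean(axis=0).round(3))  # approximately 0 for every column
print(rescaledX.std(axis=0).round(3))   # approximately 1 for every column
```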
## 4. Feature Selection (Parameter Sensitivity)
Now we come to an extremely interesting part, which is about finding out which parameters really have an impact on the outputs. This is the first time we can validate our assumptions: we will get a qualitative and a quantitative answer to the question of which parameters are important. This also matters because having irrelevant features in your data can decrease the accuracy of many models and increase the training time.
```
# Feature Extraction with Univariate Statistical Tests (Chi-squared for classification)
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import chi2
# feature extraction
test = SelectKBest(score_func=chi2, k=3)
fit = test.fit(X, Y)
# summarize scores
print(fit.scores_)
features = fit.transform(X)
# summarize selected features
print(features[0:5,:])
```
Here we can see the scores of the features. The higher the score, the more impact they have. As we have chosen to keep 3 attributes, the transformed data contains the values of the three selected features out of (sepal length (cm), sepal width (cm), petal length (cm), petal width (cm)). This result also makes sense when remembering the correlation heatmap...
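To make the mapping between scores and feature names explicit (assuming the `data` dataframe and the `fit` object from the cell above are still available), we can pair them up:
```
# Pair each chi-squared score with the corresponding feature name
for name, score in zip(data.columns[:4], fit.scores_):
    print("%s: %.2f" % (name, score))
```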
Another very interesting transformation, which fulfills the same job as feature selection in terms of data reduction, is PCA (Principal Component Analysis). Here the complete dataset is transformed into a reduced dataset (you set the number of resulting principal components). A singular value decomposition of the data is performed to project it to a lower-dimensional space.
```
from sklearn.decomposition import PCA
pca = PCA(n_components=3)
fit = pca.fit(X)
# summarize components
print("Explained Variance: %s" % fit.explained_variance_ratio_)
print(fit.components_)
```
Of course, there are even more possibilities, especially when you consider that applying ML algorithms themselves will give you feature importances. There are multiple built-in methods available in scikit-learn for this.
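For example, a minimal sketch of such a model-based method is recursive feature elimination (RFE), which repeatedly fits a model and drops the weakest feature (shown here with logistic regression and assuming `X` and `Y` from above):
```
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

# Recursive feature elimination: keep the 3 strongest features
rfe = RFE(estimator=LogisticRegression(solver='liblinear'), n_features_to_select=3)
rfe_fit = rfe.fit(X, Y)
print("Selected features:", rfe_fit.support_)
print("Feature ranking:  ", rfe_fit.ranking_)
```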
## 5. Apply ML algorithms
- The first step is to split our data into **training and testing data**. We need a separate testing dataset, which was not used for training, to validate the performance and accuracy of our trained model.
- **Which algorithm to take?** There is no simple answer to that. Based on your problem (classification vs. regression), there are different classes of algorithms, but you cannot know beforehand which algorithm will perform best on your data. So it is always a good idea to try different algorithms and check their performance.
- How to evaluate the performance? There are different metrics available to check the **performance of an ML model**
```
# specifying the size of the testing data set
# seed: reproducible random split --> especially important when comparing different algorithms with each other.
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
test_size = 0.33
seed = 7 # we set a seed to get a reproducible split - especially important when you want to compare diff. algorithms with each other
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=test_size,
random_state=seed)
model = LogisticRegression(solver='liblinear')
model.fit(X_train, Y_train)
result = model.score(X_test, Y_test)
print("Accuracy: %.3f%%" % (result*100.0))
# Let's compare the accuracy, when we use the same data for training and testing
model = LogisticRegression(solver='liblinear')
model.fit(X, Y)
result = model.score(X, Y)
print("Accuracy: %.3f%%" % (result*100.0))
# get importance
model = LogisticRegression(solver='liblinear')
model.fit(X_train, Y_train)
importance = model.coef_[0]
# summarize feature importance
for i,v in enumerate(importance):
print('Feature: %0d, Score: %.5f' % (i,v))
# print("Feature: "+str(i)+", Score: "+str(v))
# plot feature importance
plt.bar([x for x in range(len(importance))], importance)
# decision tree feature importance (here a regression tree fit on the numeric class labels)
from sklearn.tree import DecisionTreeRegressor
model = DecisionTreeRegressor()
# fit the model
model.fit(X_train, Y_train)
# get importance
importance = model.feature_importances_
# summarize feature importance
for i,v in enumerate(importance):
print('Feature: %0d, Score: %.5f' % (i,v))
# plot feature importance
plt.bar([x for x in range(len(importance))], importance)
```
### Test-Train-Splits
Performing just one train-test split and checking the performance or feature importance might not be good enough, as the result could be very good or very bad by coincidence due to that specific split. The easiest solution is to repeat this process several times and check the averaged accuracy, or to use some of the ready-to-use built-in tools in scikit-learn, like KFold, cross_val_score, LeaveOneOut, or ShuffleSplit.
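As a small sketch of this idea (assuming `X` and `Y` from the Iris example above), `ShuffleSplit` repeats a random train-test split several times and `cross_val_score` collects the resulting accuracies:
```
from sklearn.model_selection import ShuffleSplit, cross_val_score
from sklearn.linear_model import LogisticRegression

# Repeat a random 67/33 train-test split 10 times and average the accuracy
shuffle_split = ShuffleSplit(n_splits=10, test_size=0.33, random_state=7)
model = LogisticRegression(solver='liblinear')
results = cross_val_score(model, X, Y, cv=shuffle_split)  # default scoring: accuracy
print("Accuracy: %.3f (+/- %.3f)" % (results.mean(), results.std()))
```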
### Which ML model to use?
Here is just a tiny overview of some models one can use for classification and regression problems. For more models that are built into scikit-learn, please refer to https://scikit-learn.org/stable/index.html and https://machinelearningmastery.com
- Logistic / Linear Regression
- k-nearest neighbour
- Classification and Regression Trees
- Support Vector Machines
- Neural Networks
In the following we will just use logistic regression (https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression) for our classification example and linear regression (https://scikit-learn.org/stable/modules/linear_model.html#generalized-linear-regression) for our regression example.
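That said, spot-checking a few of these models with the same cross-validation setup is straightforward; a sketch on the Iris data (assuming `X` and `Y` from above) might look like this:
```
from sklearn.model_selection import KFold, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC

# Candidate classifiers to compare with identical 10-fold cross-validation
models = {
    'Logistic Regression': LogisticRegression(solver='liblinear'),
    'k-Nearest Neighbours': KNeighborsClassifier(),
    'Decision Tree (CART)': DecisionTreeClassifier(),
    'Support Vector Machine': SVC(),
}
kfold = KFold(n_splits=10, random_state=7, shuffle=True)
for name, model in models.items():
    scores = cross_val_score(model, X, Y, cv=kfold, scoring='accuracy')
    print("%s: %.3f (%.3f)" % (name, scores.mean(), scores.std()))
```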
### ML model evaluation
For evaluating the model performance, there are different metrics available, depending on your type of problem (classification vs regression)
For classification, there are for example:
- Classification accuracy
- Logistic Loss
- Confusion Matrix
- ...
For regression, there are for example:
- Mean Absolute Error
- Mean Squared Error (R)MSE
- R^2
So accuracy alone by far does not tell you the whole story; you need to check other metrics as well!
The confusion matrix is a handy presentation of the accuracy of a model with two or more classes. The table presents predictions on one axis and true outcomes on the other, which reveals false negatives and false positives.
https://www.dataschool.io/simple-guide-to-confusion-matrix-terminology/
```
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.metrics import confusion_matrix
#Lets have a look at our classification problem:
kfold = KFold(n_splits=10, random_state=7, shuffle=True)
model = LogisticRegression(solver='liblinear')
# Classification accuracy:
scoring = 'accuracy'
results = cross_val_score(model, X, Y, cv=kfold, scoring=scoring)
print("Accuracy: %.3f (%.3f)" % (results.mean(), results.std()))
# Logistic Loss
scoring = 'neg_log_loss'
results = cross_val_score(model, X, Y, cv=kfold, scoring=scoring)
print("Logloss: %.3f (%.3f)" % (results.mean(), results.std()))
# Confusion Matrix
model.fit(X_train, Y_train)
predicted = model.predict(X_test)
matrix = confusion_matrix(Y_test, predicted)
print(matrix)
```
## Regression Example: Boston Housing Example
```
import sklearn
from sklearn.datasets import load_boston
data =load_boston(return_X_y=False)
print(data.DESCR)
df=pd.DataFrame(data.data)
df.columns=data.feature_names
df
df["MEDV"]=data.target
df
```
Now we start again with our procedure:
* Hypothesis
* Understand and visualize the data
* Preprocessing
* Feature Selection
* Apply Model
* Evaluate Results
Our **Hypothesis** here is that we can actually predict the price of a house based on attributes of the geographic area, the population, and the property.
```
df.describe()
sns.pairplot(df[["DIS","RM","CRIM","LSTAT","MEDV"]])
from sklearn.linear_model import LinearRegression
# Now we do the
# preprocessing
# feature selection
# training-test-split
# ML model application
# evaluation
array = df.values
X = array[:,0:13]
Y = array[:,13]
# preprocessing
scaler = StandardScaler().fit(X)
rescaledX = scaler.transform(X)
# feature selection
test = SelectKBest(k=6)
fit = test.fit(rescaledX, Y)
features = fit.transform(X)
# train-test-split
X_train, X_test, Y_train, Y_test = train_test_split(features, Y, test_size=0.3,
random_state=5)
# build model
kfold = KFold(n_splits=10, random_state=7, shuffle=True)
model = LinearRegression()
model.fit(X_train,Y_train)
acc = model.score(X_test, Y_test)  # for a regressor, .score() returns the R^2 value, not a classification accuracy
# evaluate model
model = LinearRegression()
scoring = 'neg_mean_squared_error'  # scikit-learn reports the negative MSE, so values closer to 0 are better
results = cross_val_score(model, X, Y, cv=kfold, scoring=scoring)
print("Accuracy: %.3f%%" % (acc*100.0))
print("MSE: %.3f (%.3f)" % (results.mean(), results.std()))
# And now:
# Make predictions
# make predictions
# model.predict(new_data)
```
### What comes next?
---> Hyperparameter optimization.
For advanced ML algorithms you have to provide options and settings (so-called hyperparameters) yourself. These of course also have an impact on your model performance and accuracy. Here you can perform so-called grid searches to find the optimal settings for your dataset.
**GridSearchCV**
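A minimal sketch of such a grid search (reusing `X_train` and `Y_train` from the classification example; the parameter grid itself is just an illustrative choice):
```
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import LogisticRegression

param_grid = {
    'C': [0.01, 0.1, 1, 10],   # inverse regularization strength
    'penalty': ['l1', 'l2'],   # both are supported by the liblinear solver
}
grid = GridSearchCV(LogisticRegression(solver='liblinear'), param_grid, scoring='accuracy', cv=5)
grid.fit(X_train, Y_train)
print("Best parameters:", grid.best_params_)
print("Best CV accuracy: %.3f" % grid.best_score_)
```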
## What does a typical project look like:
* Data engineering - **A LOT**
* Applying actual ML algorithms - 5% of the time.
(If you have your dataset ready for applying algorithms, you have already done pretty much all of the work. Of course, afterwards you still need to validate and present your results.)

### Example: Emergent Alliance - Health Risk Index for Europe
https://emergentalliance.org
What we wanted to do: predict the risk of getting infected when travelling to a specific region.
We actually spent weeks formulating and reformulating our hypothesis to (re-)consider influencing attributes, trying to distinguish between causes and effects.
In the end we spent most of the time on data engineering for:
Population density, intensive care units, mobility, case numbers, sentiment, acceptance of government orders.
The bulk of that time went into checking data sources, getting the data, reading data dictionaries and understanding the data, creating automatic downloads and data pipelines, preprocessing the data, and bringing the preprocessed data into a database. We had to fight lots of issues with data quality and data granularity (in time and geography) for different countries.
Afterwards, the visual and textual processing and presentation also took quite some time (writing blogs, building dashboards, cleaning up databases, ...).
## Image Recognition
It is actually quite easy to build a simple image classification model (e.g. cats vs. dogs), so if you are interested in applying something like this to your experimental data (bubble column pictures or post-processing contour plots), here are some links to get started:
https://medium.com/@nina95dan/simple-image-classification-with-resnet-50-334366e7311a
https://medium.com/abraia/getting-started-with-image-recognition-and-convolutional-neural-networks-in-5-minutes-28c1dfdd401
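As a rough illustration of the transfer-learning approach described in those articles (a hedged sketch, not taken from them; it assumes TensorFlow/Keras is installed and that your images live in class-named subfolders under a hypothetical `data/train` directory):
```
import tensorflow as tf

# Pretrained convolutional backbone; we only train a small classification head on top
base = tf.keras.applications.ResNet50(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation='softmax')  # e.g. two hypothetical classes of flow regime
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

# Hypothetical directory layout: data/train/<class_name>/*.png
# (for better results you would typically also apply tf.keras.applications.resnet50.preprocess_input)
train_ds = tf.keras.preprocessing.image_dataset_from_directory('data/train', image_size=(224, 224))
model.fit(train_ds, epochs=5)
```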
| github_jupyter |
# Data Distribution vs. Sampling Distribution: What You Need to Know
This notebook is accompanying the article [Data Distribution vs. Sampling Distribution: What You Need to Know](https://www.ealizadeh.com/blog/statistics-data-vs-sampling-distribution/).
Subscribe to **[my mailing list](https://www.ealizadeh.com/subscribe/)** to receive my posts on statistics, machine learning, and interesting Python libraries and tips & tricks.
You can also follow me on **[Medium](https://medium.com/@ealizadeh)**, **[LinkedIn](https://www.linkedin.com/in/alizadehesmaeil/)**, and **[Twitter]( https://twitter.com/es_alizadeh)**.
Copyright © 2021 [Esmaeil Alizadeh](https://ealizadeh.com)
```
from IPython.display import Image
Image("https://www.ealizadeh.com/wp-content/uploads/2021/01/data_dist_sampling_dist_featured_image.png", width=1200)
```
---
It is important to distinguish between the data distribution (aka population distribution) and the sampling distribution. The distinction is critical when working with the central limit theorem or other concepts like the standard deviation and standard error.
In this post, we will go over the above concepts as well as bootstrapping to estimate the sampling distribution. In particular, we will cover the following:
- Data distribution (aka population distribution)
- Sampling distribution
- Central limit theorem (CLT)
- Standard error and its relation with the standard deviation
- Bootstrapping
---
## Data Distribution
Much of statistics deals with inferring from samples drawn from a larger population. Hence, we need to distinguish between the analysis done on the original data and the analysis of its samples. First, let's go over the definition of the data distribution:
💡 **Data distribution:** *The frequency distribution of individual data points in the original dataset.*
### Generate Data
Let's first generate random skewed data that will result in a non-normal (non-Gaussian) data distribution. The reason behind generating non-normal data is to better illustrate the relation between data distribution and the sampling distribution.
So, let's import the Python plotting packages and generate right-skewed data.
```
# Plotting packages and initial setup
import seaborn as sns
sns.set_theme(palette="pastel")
sns.set_style("white")
import matplotlib.pyplot as plt
import matplotlib as mpl
mpl.rcParams["figure.dpi"] = 150
savefig_options = dict(format="png", dpi=150, bbox_inches="tight")
from scipy.stats import skewnorm
from sklearn.preprocessing import MinMaxScaler
num_data_points = 10000
max_value = 100
skewness = 15 # Positive values are right-skewed
skewed_random_data = skewnorm.rvs(a=skewness, loc=max_value, size=num_data_points, random_state=1)
skewed_data_scaled = MinMaxScaler().fit_transform(skewed_random_data.reshape(-1, 1))
```
Plotting the data distribution
```
fig, ax = plt.subplots(figsize=(10, 6))
ax.set_title("Data Distribution", fontsize=24, fontweight="bold")
sns.histplot(skewed_data_scaled, bins=30, stat="density", kde=True, legend=False, ax=ax)
# fig.savefig("original_skewed_data_distribution.png", **savefig_options)
```
## Sampling Distribution
To obtain the sampling distribution, you draw samples from the dataset and compute a statistic like the mean. It's very important to differentiate between the data distribution and the sampling distribution, as most confusion comes from whether an operation is done on the original dataset or on its (re)samples.
💡 **Sampling distribution:** *The frequency distribution of a sample statistic (aka metric) over many samples drawn from the dataset$^{[1]}$. Or to put it simply, the distribution of sample statistics is called the sampling distribution.*
The algorithm to obtain the sampling distribution is as follows:
1. Draw a sample from the dataset.
2. Compute a statistic/metric of the drawn sample in Step 1 and save it.
3. Repeat Steps 1 and 2 many times.
4. Plot the distribution (histogram) of the computed statistic.
```
import numpy as np
import random
sample_size = 50
sample_means = []
random.seed(1) # Setting the seed for reproducibility of the result
for _ in range(2000):
sample = random.sample(skewed_data_scaled.tolist(), sample_size)
sample_means.append(np.mean(sample))
print(
f"Mean: {np.mean(sample_means).round(5)}"
)
fig, ax = plt.subplots(figsize=(10, 6))
ax.set_title("Sampling Distribution", fontsize=24, fontweight="bold")
sns.histplot(sample_means, bins=30, stat="density", kde=True, legend=False)
# fig.savefig("sampling_distribution.png", **savefig_options)
```
The above sampling distribution is basically the histogram of the mean of each drawn sample (above, we draw samples of 50 elements over 2000 iterations). The mean of this sampling distribution is around 0.23, as can be noted from computing the mean of all sample means.
⚠️ *Do not confuse the sampling distribution with the sample distribution. The sampling distribution considers the distribution of sample statistics (e.g. mean), whereas the sample distribution is basically the distribution of the sample taken from the population.*
## Central Limit Theorem (CLT)
💡 **Central Limit Theorem:** *As the sample size gets larger, the sampling distribution tends to be more like a normal distribution (bell-curve shape).*
*In CLT, we analyze the sampling distribution and not a data distribution, an important distinction to be made.* CLT is popular in hypothesis testing and confidence interval analysis, and it's important to be aware of this concept, even though with the use of bootstrap in data science, this theorem is less talked about or considered in the practice of data science$^{[1]}$. More on bootstrapping is provided later in the post.
## Standard Error (SE)
The [standard error](https://en.wikipedia.org/wiki/Standard_error) is a metric to describe *the variability of a statistic in the sampling distribution*. We can compute the standard error as follows:
$$ \text{Standard Error} = SE = \frac{s}{\sqrt{n}} $$
where $s$ denotes the standard deviation of the sample values and $n$ denotes the sample size. It can be seen from the formula that *as the sample size increases, the SE decreases*.
We can estimate the standard error using the following approach$^{[1]}$:
1. Draw a new sample from a dataset.
2. Compute a statistic/metric (e.g., mean) of the drawn sample in Step 1 and save it.
3. Repeat Steps 1 and 2 several times.
4. An estimate of the standard error is obtained by computing the standard deviation of the previous steps' statistics.
While the above approach can be used to estimate the standard error, we can use bootstrapping instead, which is preferable. I will go over that in the next section.
⚠️ *Do not confuse the standard error with the standard deviation. The standard deviation captures the variability of the individual data points (how spread the data is), unlike the standard error that captures a sample statistic's variability.*
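As a small check (a sketch reusing `skewed_data_scaled`, `sample_size`, and `sample_means` from the cells above), the analytical standard error $s/\sqrt{n}$ should be close to the standard deviation of the collected sample means:
```
# Analytical standard error of the mean for samples of size 50
s = np.std(skewed_data_scaled, ddof=1)
analytical_se = s / np.sqrt(sample_size)

# Empirical estimate: the standard deviation of the sample means from the sampling distribution
empirical_se = np.std(sample_means, ddof=1)

print(f"Analytical SE: {analytical_se:.5f}")
print(f"Empirical SE:  {empirical_se:.5f}")
```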
## Bootstrapping
Bootstrapping is an easy way of estimating the sampling distribution by randomly drawing samples from the population (with replacement) and computing each resample's statistic. Bootstrapping does not depend on the CLT or other assumptions on the distribution, and it is the standard way of estimating SE$^{[1]}$.
Luckily, we can use [`bootstrap()`](https://rasbt.github.io/mlxtend/user_guide/evaluate/bootstrap/) functionality from the [MLxtend library](https://rasbt.github.io/mlxtend/) (You can read [my post](https://www.ealizadeh.com/blog/mlxtend-library-for-data-science/) on MLxtend library covering other interesting functionalities). This function also provides the flexibility to pass a custom sample statistic.
```
from mlxtend.evaluate import bootstrap
avg, std_err, ci_bounds = bootstrap(
skewed_data_scaled,
num_rounds=1000,
func=np.mean, # A function to compute a sample statistic can be passed here
ci=0.95,
seed=123 # Setting the seed for reproducibility of the result
)
print(
f"Mean: {avg.round(5)} \n"
f"Standard Error: +/- {std_err.round(5)} \n"
f"CI95: [{ci_bounds[0].round(5)}, {ci_bounds[1].round(5)}]"
)
```
## Conclusion
The main takeaway is to differentiate between computations done on the original dataset and computations done on samples of the dataset. Plotting a histogram of the data will result in the data distribution, whereas plotting a sample statistic computed over samples of the data will result in the sampling distribution. On a similar note, the standard deviation tells us how the data is spread, whereas the standard error tells us how a sample statistic is spread out.
Another takeaway is that even if the original data distribution is non-normal, the sampling distribution is approximately normal for sufficiently large samples (central limit theorem).
Thanks for reading!
___If you liked this post, you can [join my mailing list here](https://www.ealizadeh.com/subscribe/) to receive more posts about Data Science, Machine Learning, Statistics, and interesting Python libraries and tips & tricks. You can also follow me on my [website](https://ealizadeh.com/), [Medium](https://medium.com/@ealizadeh), [LinkedIn](https://www.linkedin.com/in/alizadehesmaeil/), or [Twitter](https://twitter.com/es_alizadeh).___
# References
[1] P. Bruce & A. Bruce (2017), Practical Statistics for Data Scientists, First Edition, O’Reilly
# Useful Links
[MLxtend: A Python Library with Interesting Tools for Data Science Tasks](https://www.ealizadeh.com/blog/mlxtend-library-for-data-science/)
| github_jupyter |
# Distributed data parallel BERT training with TensorFlow2 and SMDataParallel
SMDataParallel is a new capability in Amazon SageMaker to train deep learning models faster and cheaper. SMDataParallel is a distributed data parallel training framework for TensorFlow, PyTorch, and MXNet.
This notebook example shows how to use SMDataParallel with TensorFlow (version 2.3.1) on [Amazon SageMaker](https://aws.amazon.com/sagemaker/) to train a BERT model using [Amazon FSx for Lustre file-system](https://aws.amazon.com/fsx/lustre/) as the data source.
The outline of steps is as follows:
1. Stage the dataset in [Amazon S3](https://aws.amazon.com/s3/). The original dataset for BERT pretraining consists of text passages from BooksCorpus (800M words) (Zhu et al. 2015) and English Wikipedia (2,500M words). Please follow the original guidelines by NVIDIA to prepare the training data in HDF5 format:
https://github.com/NVIDIA/DeepLearningExamples/blob/master/PyTorch/LanguageModeling/BERT/README.md#getting-the-data
2. Create Amazon FSx Lustre file-system and import data into the file-system from S3
3. Build Docker training image and push it to [Amazon ECR](https://aws.amazon.com/ecr/)
4. Configure data input channels for SageMaker
5. Configure hyperparameters
6. Define training metrics
7. Define training job, set distribution strategy to SMDataParallel and start training
**NOTE:** With a large training dataset, we recommend using [Amazon FSx](https://aws.amazon.com/fsx/) as the input filesystem for the SageMaker training job. FSx file input significantly cuts down training start-up time because it avoids downloading the training data each time you start a training job (as is done with S3 input) and provides good data read throughput.
**NOTE:** This example requires SageMaker Python SDK v2.X.
## Amazon SageMaker Initialization
Initialize the notebook instance. Get the aws region, sagemaker execution role.
The IAM role arn used to give training and hosting access to your data. See the [Amazon SageMaker Roles](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html) for how to create these. Note, if more than one role is required for notebook instances, training, and/or hosting, please replace the sagemaker.get_execution_role() with the appropriate full IAM role arn string(s). As described above, since we will be using FSx, please make sure to attach `FSx Access` permission to this IAM role.
```
%%time
! python3 -m pip install --upgrade sagemaker
import sagemaker
from sagemaker import get_execution_role
from sagemaker.estimator import Estimator
import boto3
sagemaker_session = sagemaker.Session()
bucket = sagemaker_session.default_bucket()
role = get_execution_role() # provide a pre-existing role ARN as an alternative to creating a new role
print(f'SageMaker Execution Role:{role}')
client = boto3.client('sts')
account = client.get_caller_identity()['Account']
print(f'AWS account:{account}')
session = boto3.session.Session()
region = session.region_name
print(f'AWS region:{region}')
```
## Prepare SageMaker Training Images
1. SageMaker by default uses the latest [Amazon Deep Learning Container Images (DLC)](https://github.com/aws/deep-learning-containers/blob/master/available_images.md) TensorFlow training image. In this step, we use it as a base image and install the additional dependencies required for training the BERT model.
2. In the Github repository https://github.com/HerringForks/DeepLearningExamples.git we have made TensorFlow2-SMDataParallel BERT training script available for your use. This repository will be cloned in the training image for running the model training.
### Build and Push Docker Image to ECR
Run the command below to build the Docker image and push it to ECR.
```
image = "tf2-smdataparallel-bert-sagemaker" # Example: tf2-smdataparallel-bert-sagemaker
tag = "latest" # Example: latest
!pygmentize ./Dockerfile
!pygmentize ./build_and_push.sh
%%time
! chmod +x build_and_push.sh; bash build_and_push.sh {region} {image} {tag}
```
## Preparing FSx Input for SageMaker
1. Download and prepare your training dataset on S3.
2. Follow the steps listed here to create an FSx file system linked with the S3 bucket holding your training data - https://docs.aws.amazon.com/fsx/latest/LustreGuide/create-fs-linked-data-repo.html. Make sure to add an endpoint to your VPC allowing S3 access (a programmatic sketch of this step is shown right after this list).
3. Follow the steps listed here to configure your SageMaker training job to use FSx https://aws.amazon.com/blogs/machine-learning/speed-up-training-on-amazon-sagemaker-using-amazon-efs-or-amazon-fsx-for-lustre-file-systems/
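For reference, creating the S3-linked Lustre file system can also be done with `boto3`, roughly as follows (a hedged sketch only; the subnet, security group, capacity, and S3 path are placeholders you must replace):
```
import boto3

fsx = boto3.client('fsx')
response = fsx.create_file_system(
    FileSystemType='LUSTRE',
    StorageCapacity=1200,                 # GiB; must be a valid Lustre capacity for your region
    SubnetIds=['subnet-0f9XXXX'],         # same subnet as your SageMaker notebook / training job
    SecurityGroupIds=['sg-03ZZZZZZ'],
    LustreConfiguration={
        'ImportPath': 's3://<YOUR_BUCKET>/bert/hdf5'  # S3 prefix with the prepared training data
    }
)
print(response['FileSystem']['FileSystemId'])  # use this value as file_system_id further below
```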
### Important Caveats
1. You need to use the same `subnet`, `vpc`, and `security group` used with FSx when launching the SageMaker notebook instance. The same configuration will be used by your SageMaker training job.
2. Make sure you set appropriate inbound/outbound rules in the `security group`. Specifically, opening up these ports is necessary for SageMaker to access the FSx filesystem in the training job: https://docs.aws.amazon.com/fsx/latest/LustreGuide/limit-access-security-groups.html
3. Make sure the `SageMaker IAM Role` used to launch this SageMaker training job has access to `AmazonFSx`.
## SageMaker TensorFlow Estimator function options
In the following code block, you can update the estimator function to use a different instance type, instance count, and distribution strategy. You're also passing in the training script you reviewed in the previous cell.
**Instance types**
SMDataParallel supports model training on SageMaker with the following instance types only:
1. ml.p3.16xlarge
1. ml.p3dn.24xlarge [Recommended]
1. ml.p4d.24xlarge [Recommended]
**Instance count**
To get the best performance and the most out of SMDataParallel, you should use at least 2 instances, but you can also use 1 for testing this example.
**Distribution strategy**
Note that to use DDP mode, you update the `distribution` strategy and set it to use `smdistributed dataparallel`.
### Training script
In the Github repository https://github.com/HerringForks/deep-learning-models.git we have made reference TensorFlow-SMDataParallel BERT training script available for your use. Clone the repository.
```
# Clone herring forks repository for reference implementation BERT with TensorFlow2-SMDataParallel
!rm -rf deep-learning-models
!git clone --recursive https://github.com/HerringForks/deep-learning-models.git
import boto3
import sagemaker
sm = boto3.client('sagemaker')
notebook_instance_name = sm.list_notebook_instances()['NotebookInstances'][3]['NotebookInstanceName']
print(notebook_instance_name)
if notebook_instance_name != 'dsoaws':
print('****** ERROR: MUST FIND THE CORRECT NOTEBOOK ******')
exit()
notebook_instance = sm.describe_notebook_instance(NotebookInstanceName=notebook_instance_name)
notebook_instance
security_group_id = notebook_instance['SecurityGroups'][0]
print(security_group_id)
subnet_id = notebook_instance['SubnetId']
print(subnet_id)
from sagemaker.tensorflow import TensorFlow
print(account)
print(region)
print(image)
print(tag)
instance_type = "ml.p3dn.24xlarge" # Other supported instance type: ml.p3.16xlarge, ml.p4d.24xlarge
instance_count = 2 # You can use 2, 4, 8 etc.
docker_image = f"{account}.dkr.ecr.{region}.amazonaws.com/{image}:{tag}" # YOUR_ECR_IMAGE_BUILT_WITH_ABOVE_DOCKER_FILE
username = 'AWS'
subnets = [subnet_id] # Should be same as Subnet used for FSx. Example: subnet-0f9XXXX
security_group_ids = [security_group_id] # Should be same as Security group used for FSx. sg-03ZZZZZZ
job_name = 'smdataparallel-bert-tf2-fsx-2p3dn' # This job name is used as a prefix for the SageMaker training job. Makes it easy for you to find your training job in the SageMaker Training job console.
# TODO: Copy data to FSx/S3
!pip install datasets
# For loading datasets
from datasets import list_datasets, load_dataset
# To see all available dataset names
print(list_datasets())
# To load a dataset
wiki = load_dataset("wikipedia", "20200501.en", split='train')
file_system_id = '<FSX_ID>' # FSx file system ID with your training dataset. Example: 'fs-0bYYYYYY'
SM_DATA_ROOT = '/opt/ml/input/data/train'
hyperparameters={
"train_dir": '/'.join([SM_DATA_ROOT, 'tfrecords/train/max_seq_len_128_max_predictions_per_seq_20_masked_lm_prob_15']),
"val_dir": '/'.join([SM_DATA_ROOT, 'tfrecords/validation/max_seq_len_128_max_predictions_per_seq_20_masked_lm_prob_15']),
"log_dir": '/'.join([SM_DATA_ROOT, 'checkpoints/bert/logs']),
"checkpoint_dir": '/'.join([SM_DATA_ROOT, 'checkpoints/bert']),
"load_from": "scratch",
"model_type": "bert",
"model_size": "large",
"per_gpu_batch_size": 64,
"max_seq_length": 128,
"max_predictions_per_seq": 20,
"optimizer": "lamb",
"learning_rate": 0.005,
"end_learning_rate": 0.0003,
"hidden_dropout_prob": 0.1,
"attention_probs_dropout_prob": 0.1,
"gradient_accumulation_steps": 1,
"learning_rate_decay_power": 0.5,
"warmup_steps": 2812,
"total_steps": 2000,
"log_frequency": 10,
"run_name" : job_name,
"squad_frequency": 0
}
estimator = TensorFlow(entry_point='albert/run_pretraining.py',
role=role,
image_uri=docker_image,
source_dir='deep-learning-models/models/nlp',
framework_version='2.3.1',
py_version='py3',
instance_count=instance_count,
instance_type=instance_type,
sagemaker_session=sagemaker_session,
subnets=subnets,
hyperparameters=hyperparameters,
security_group_ids=security_group_ids,
debugger_hook_config=False,
# Training using SMDataParallel Distributed Training Framework
distribution={'smdistributed':{
'dataparallel':{
'enabled': True
}
}
}
)
```
# Configure FSx Input for the SageMaker Training Job
```
from sagemaker.inputs import FileSystemInput
# YOUR_MOUNT_PATH_FOR_TRAINING_DATA # NOTE: '/fsx/' will be the root mount path. Example: '/fsx/albert'
file_system_directory_path='/fsx/'
file_system_access_mode='rw'
file_system_type='FSxLustre'
train_fs = FileSystemInput(file_system_id=file_system_id,
file_system_type=file_system_type,
directory_path=file_system_directory_path,
file_system_access_mode=file_system_access_mode)
data_channels = {'train': train_fs}
# Submit SageMaker training job
estimator.fit(inputs=data_channels, job_name=job_name)
```
| github_jupyter |
# NewEgg.Com WebScraping Program For Laptops - Beta v1.0
### - April 2020
---
```
# Import dependencies.
import os
import re
import time
import glob
import random
import datetime
import requests
import pandas as pd
from re import search
from splinter import Browser
from playsound import playsound
from bs4 import BeautifulSoup as soup
```
## Functions & Classes Setup
---
```
# Build a function to return date throughout the program.
def return_dt():
global current_date
current_date = str(datetime.datetime.now()).replace(':','.').replace(' ','_')[:-7]
return current_date
"""
NewEgg WebScraper function that scrapes data, saves it into a csv file, and creates Laptop objects.
"""
def newegg_page_scraper(containers, turn_page):
page_nums = []
general_category = []
product_categories = []
images = []
product_brands = []
product_models = []
product_links = []
item_numbers = []
promotions = []
prices = []
shipping_terms = []
# Put this to avoid error that was being generated
global gen_category
"""
Loop through all the containers on the HTML, and scrap the following content into the following lists
"""
for con in containers:
try:
page_counter = turn_page
page_nums.append(int(turn_page))
gen_category = target_page_soup.find_all('div', class_="nav-x-body-top-bar fix")[0].text.split('\n')[5]
general_category.append(gen_category)
prod_category = target_page_soup.find_all('h1', class_="page-title-text")[0].text
product_categories.append(prod_category)
image = con.a.img["src"]
images.append(image)
prd_title = con.find_all('a', class_="item-title")[0].text
product_models.append(prd_title)
product_link = con.find_all('a', class_="item-title")[0]['href']
product_links.append(product_link)
shipping = con.find_all('li', class_='price-ship')[0].text.strip().split()[0]
if shipping != "Free":
shipping = shipping.replace('$', '')
shipping_terms.append(shipping)
else:
shipping = 0.00
shipping_terms.append(shipping)
brand_name = con.find_all('a', class_="item-brand")[0].img["title"]
product_brands.append(brand_name)
except (IndexError, ValueError) as e:
# If there are no item_brand container, take the Brand from product details.
product_brands.append(con.find_all('a', class_="item-title")[0].text.split()[0])
try:
current_promo = con.find_all("p", class_="item-promo")[0].text
promotions.append(current_promo)
except:
promotions.append('null')
try:
price = con.find_all('li', class_="price-current")[0].text.split()[0].replace('$','').replace(',', '')
prices.append(price)
except:
price = 'null / out of stock'
prices.append(price)
try:
item_num = con.find_all('a', class_="item-title")[0]['href'].split('p/')[1].split('?')[0]
item_numbers.append(item_num)
except (IndexError) as e:
item_num = con.find_all('a', class_="item-title")[0]['href'].split('p/')[1]
item_numbers.append(item_num)
# Convert all of the lists into a dataframe
df = pd.DataFrame({
'item_number': item_numbers,
'general_category': general_category,
'product_category': product_categories,
'brand': product_brands,
'model_specifications': product_models,
'price': prices,
'current_promotions': promotions,
'shipping': shipping_terms,
'page_number': page_nums,
'product_links': product_links,
'image_link': images
})
# Rearrange the dataframe columns into the following order.
df = df[['item_number', 'general_category','product_category', 'page_number' ,'brand','model_specifications' ,'current_promotions' ,'price' ,'shipping' ,'product_links','image_link']]
# Convert the dataframe into a dictionary.
global scraped_dict
scraped_dict = df.to_dict('records')
# Grab the subcategory "Laptop/Notebooks" and eliminate any special characters that may cause errors.
global pdt_category
pdt_category = df['product_category'].unique()[0]
# Eliminate special characters in a string if it exists.
pdt_category = ''.join(e for e in pdt_category if e.isalnum())
""" Count the number of items scraped by getting the length of a all the models for sale.
This parameter is always available for each item-container in the HTML
"""
global items_scraped
items_scraped = len(df['model_specifications'])
"""
Save the results into a csv file using Pandas
"""
df.to_csv(f'./processing/{current_date}_{pdt_category}_{items_scraped}_scraped_page{turn_page}.csv')
# Return these variables as they will be used.
return scraped_dict, items_scraped, pdt_category
# Function to return the total results pages.
def results_pages(target_page_soup):
# Use BeautifulSoup to extract the total results page number
results_pages = target_page_soup.find_all('span', class_="list-tool-pagination-text")[0].text.strip()
# Find and extract total pages + and add 1 to ensure proper length of total pages.
global total_results_pages
total_results_pages = int(re.split("/", results_pages)[1])
return total_results_pages
"""
Build a function to concatenate all pages that were scraped and saved in the processing folder.
Save the final output (1 csv file) all the results
"""
def concatenate(total_results_pages):
path = f'./processing\\'
scraped_pages = glob.glob(path + "/*.csv")
concatenate_pages = []
counter = 0
for page in scraped_pages:
df = pd.read_csv(page, index_col=0, header=0)
concatenate_pages.append(df)
compiled_data = pd.concat(concatenate_pages, axis=0, ignore_index=True)
total_items_scraped = len(compiled_data['brand'])
concatenated_output = compiled_data.to_csv(f"./finished_outputs/{current_date}_{total_items_scraped}_scraped_{total_results_pages}_pages_.csv")
return
"""
Build a function to clear out the entire processing files folder to avoid clutter.
Or the user can keep the processing files (page by page) for their own analysis.
"""
def clean_processing_fldr():
path = f'./processing\\'
scraped_pages = glob.glob(path + "/*.csv")
if len(scraped_pages) < 1:
print("There are no files in the folder to clear. \n")
else:
print(f"Clearing out a total of {len(scraped_pages)} scraped pages in the processing folder... \n")
clear_processing_files = []
for page in scraped_pages:
os.remove(page)
print('Clearing of "Processing" folder complete. \n')
return
def random_a_tag_mouse_over3():
x = random.randint(6, 10)
def rdm_slp_5_9(x):
time.sleep(x)
print(f"Mimic Humans - Sleeping for {x} seconds. ")
return x
working_try_atags = []
finally_atags = []
working_atags = []
not_working_atags = []
try_counter = 0
finally_counter = 0
time.sleep(1)
# Mouse over to header of the page "Laptops"
browser.find_by_tag("h1").mouse_over()
number_of_a_tags = len(browser.find_by_tag("a"))
# My observation has taught me that most of the actual laptop clickable links on the grid are in the <a> range 2000 to 2100.
if number_of_a_tags > 1900:
print(f"Found {number_of_a_tags} <a> tags when parsing html... ")
random_90_percent_plug = (random.randint(90, 94)/100.00)
start_a_tag = int(round((number_of_a_tags * random_90_percent_plug)))
end_a_tag = int(round((number_of_a_tags * .96)))
else:
# After proving you're human, clickable <a>'s reduced 300, so adjusting mouse_over for that scenario
print(f"Found {number_of_a_tags} <a> tags when parsing html... ")
random_40_percent_plug = (random.randint(40, 44)/100.00)
start_a_tag = int(round((number_of_a_tags * random_40_percent_plug)))
end_a_tag = int(round((number_of_a_tags * .46)))
step = random.randint(13, 23)
for i in range(start_a_tag, end_a_tag, step):
try: # try this as normal part of the program - SHORT
rdm_slp_5_9(x)
browser.find_by_tag("a")[i+2].mouse_over()
time.sleep(3)
except: # Execute this when there is an exception
print("EXCEPTION raised during mouse over. Going to break loop and proceed with moving to the next page. \n")
break
else: # execute this only if no exceptions are raised
working_try_atags.append(i+2)
working_atags.append(i+2)
try_counter += 1
print(f"<a> number = {i+2} | Current Attempts (Try Count): {try_counter} \n")
return
def g_recaptcha_check():
if browser.is_element_present_by_id('g-recaptcha') == True:
for sound in range(0, 2):
playsound('./sounds/user_alert.wav')
print("recaptcha - Check Alert! \n")
continue_scrape = input("Newegg system suspects you are a bot. \n Complete the recaptcha test to prove you're not a bot. After, enter in any key and press ENTER to continue the scrape. \n")
print("Continuing with scrape... \n")
return
def are_you_human_backend(target_page_soup):
if target_page_soup.find_all("title")[0].text == 'Are you a human?':
playsound('./sounds/user_alert.wav')
continue_scrape = input("Newegg notices you're a robot on the backend when requesting. REFRESH THE PAGE and you may have to perform a test to prove you're human. After you refresh, enter in any key, and press ENTER to continue the webscrape. \n")
print("Now will automatically will refresh the page 2 times, and target new URL. \n")
print("Refreshing three times in 12 seconds. Please wait... \n")
for i in range(0, 2):
browser.reload()
time.sleep(2)
browser.back()
time.sleep(4)
browser.forward()
time.sleep(3)
print("Targeting new url... ")
# After user passes test, target the new url, and return updated target_page_soup
target_url = browser.url
response_target = requests.get(target_url)
target_page_soup = soup(response_target.text, 'html.parser')
print("#"* 60)
print(target_page_soup)
print("#"* 60)
#target_page_soup
break_pedal = input("Does the soup say 'are you human?' in the text?' Enter 'y' or 'n'. ")
if break_pedal == 'y':
# recursion
are_you_human_backend(target_page_soup)
else:
#print("#"* 60)
target_url = browser.url
response_target = requests.get(target_url)
target_page_soup = soup(response_target.text, 'html.parser')
return target_page_soup
else:
print("Passed the 'Are you human?' check when requesting and parsing the html. Continuing with scrape ... \n")
# Otherwise, return the target_page_soup that was passed in.
return target_page_soup
def random_xpath_top_bottom():
x = random.randint(3, 8)
def rdm_slp_5_9(x):
time.sleep(x)
print(f"Slept for {x} seconds. \n")
return x
# Check if there are working links on the screen, otherwise alert the user.
if (browser.is_element_present_by_tag('h1')) == True:
print("(Check 1 - Random Xpath Top Bottom) Header is present and hoverable on page. \n")
else:
print("(Check 1 - ERROR - Random Xpath Top Bottom) Header is NOT present on page. \n")
for s in range(0, 1):
playsound('./sounds/user_alert.wav')
red_light = input("Program could not detect a clickable links to hover over, and click. Please use your mouse to refresh the page, and enter 'y' to continue the scrape. \n")
if (browser.is_element_present_by_tag("a")) == True:
print("(Check 2- Random Xpath Top Bottom) <a> tags are present on page. Will begin mouse-over thru the page, and click a link. \n")
else:
# If there isn't, pause the program. Have user click somewhere on the screen.
for s in range(0, 1):
playsound('./sounds/user_alert.wav')
red_light = input("Program could not detect a clickable links to hover over, and click. Please use your mouse to refresh the page, and enter 'y' to continue the scrape. \n")
# There are clickable links, then 'flip the coin' to choose top or bottom button
coin_toss_top_bottom = random.randint(0,1)
next_page_button_results = []
# If the coin toss is even, mouse_over and click the top page link.
if (coin_toss_top_bottom == 0):
print('Heads - Clicking "Next Page" Top Button. \n')
x = random.randint(3, 8)
print(f"Mimic human behavior by randomly sleeping for {x}. \n")
rdm_slp_5_9(x)
browser.find_by_xpath('/html/body/div[4]/section/div/div/div[2]/div/div/div/div[2]/div[1]/div[2]/div[1]/div[2]/div/div[2]/button').mouse_over()
time.sleep(1)
browser.find_by_xpath('/html/body/div[4]/section/div/div/div[2]/div/div/div/div[2]/div[1]/div[2]/div[1]/div[2]/div/div[2]/button').click()
next_page_button_results.append(coin_toss_top_bottom)
print('Heads - SUCCESSFUL "Next Page" Top Button. \n')
return
else:
next_page_button_results.append(coin_toss_top_bottom)
# try: # after you add item to cart and go back back - this is the bottom next page link
# /html/body/div[4]/section/div/div/div[2]/div/div/div/div[2]/div[1]/div[3]/div[8]/div/div/div[11]/button
# /html/body/div[4]/section/div/div/div[2]/div/div/div/div[2]/div[1]/div[3]/div[6]/div/div/div[11]/button
# /html/body/div[4]/section/div/div/div[2]/div/div/div/div[2]/div[1]/div[3]/div[6]/div/div/div[11]/button
# /html/body/div[4]/section/div/div/div[2]/div/div/div/div[2]/div[1]/div[3]/div[6]/div/div/div[11]/button
# /html/body/div[4]/section/div/div/div[2]/div/div/div/div[2]/div[1]/div[3]/div[6]/div/div/div[11]/button
try:
print('Tails - Clicking "Next Page" Xpath Bottom Button. \n')
x = random.randint(3, 8)
print(f"Mimic human behavior by randomly sleeping for {x}. \n")
rdm_slp_5_9(x)
browser.find_by_xpath('/html/body/div[4]/section/div/div/div[2]/div/div/div/div[2]/div[1]/div[2]/div[4]/div/div/div[11]/button').mouse_over()
time.sleep(4)
browser.find_by_xpath('/html/body/div[4]/section/div/div/div[2]/div/div/div/div[2]/div[1]/div[2]/div[4]/div/div/div[11]/button').click()
print('Tails - 1st Bottom Xpath - SUCCESSFUL "Next Page" Bottom Button. \n')
except:
print("EXCEPTION - 1st Bottom Xpath Failed. Sleep for 1 second then will try with 2nd Xpath bottom link. \n")
try:
time.sleep(4)
browser.find_by_xpath('/html/body/div[4]/section/div/div/div[2]/div/div/div/div[2]/div[1]/div[2]/div[3]/div/div/div[11]/button').mouse_over()
time.sleep(4)
browser.find_by_xpath('/html/body/div[4]/section/div/div/div[2]/div/div/div/div[2]/div[1]/div[2]/div[3]/div/div/div[11]/button').click()
print('(Exception Attempt) Tails - 2nd Bottom Xpath - SUCCESSFUL "Next Page" Bottom Button. \n')
except:
print("EXCEPTION - 2nd Bottom Xpath Failed. Trying with 3rd Xpath bottom link. \n")
try:
time.sleep(4)
browser.find_by_xpath('/html/body/div[5]/section/div/div/div[2]/div/div/div/div[2]/div[1]/div[2]/div[4]/div/div/div[11]/button').mouse_over()
time.sleep(4)
browser.find_by_xpath('/html/body/div[5]/section/div/div/div[2]/div/div/div/div[2]/div[1]/div[2]/div[4]/div/div/div[11]/button').click()
print('(Exception Attempt) Tails - 3rd Bottom Xpath - SUCCESSFUL "Next Page" Bottom Button. \n')
except:
print("3rd Bottom Link - Didn't work - INSPECT AND GRAB THE XPATH... \n")
break_pedeal = input("Pause. Enter anything to continue... ")
return
"""
This class takes in the dictionary from the webscraper function, and will be used in a list comprehension
to produce class "objects"
"""
class Laptops:
counter = 0
def __init__(self, **entries):
self.__dict__.update(entries)
Laptops.counter += 1
def count(self):
print(f"Total Laptops scraped: {Laptops.counter}")
"""
Originally modeled out parent/child inheritance object structure.
After careful research, I found it much easier to export the Pandas Dataframe of the results to a dictionary,
and then into a class object, which I will elaborate more down below.
"""
# class Product_catalog:
# all_prod_count = 0
# def __init__(self, general_category): # computer systems
# self.general_category = general_category
# Product_catalog.all_prod_count += 1
# def count_prod(self):
# return int(self.all_prod_count)
# #return '{}'.format(self.general_category)
# Sub_category was later changed to Laptops due to the scope of this project.
# class Sub_category(Product_catalog): # laptops/notebooks, gaming
# sub_category_ct = 0
# def __init__(self, general_category, sub_categ, item_num, brand, price, img_link, prod_link, model_specifications, current_promotions):
# super().__init__(general_category)
# Sub_category.sub_category_ct += 1
# self.sub_categ = sub_categ
# self.item_num = item_num
# self.brand = brand
# self.price = price
# self.img_link = img_link
# self.prod_link = prod_link
# self.model_specifications = model_specifications
# self.current_promotions = current_promotions
```
## Main Program Logic
---
```
""" Welcome to the program message!
"""
print("=== NewEgg.Com Laptop WebScraper Beta v1.0 ===")
print("=="*30)
print('Scope: This project is a beta and is only built to scrape the laptop section of NewEgg.com due to limited time. \n')
print("Instructions: \n")
return_dt()
print(f'Current Date And Time: {current_date} \n')
print("(1) Go to www.newegg.com, go to the laptop section, select your requirements (e.g. brand, screensize, and specifications - SSD size, processor brand and etc...) ")
print("(2) Copy and paste the url from your exact search when prompted ")
print('(3) After the webscraping is successful, you will have an option to concatenate all of the pages you scraped together into one csv file')
print('(4) Lastly, you will have an option to clear out the processing folder (data scraped by each page)')
print('(5) If you have any issues or errors, "PRESS CTRL + C" to quit the program in the terminal ')
print('(6) You may run the program in the background as the program will make an alert noise to flag when Newegg suspects there is a bot, and will pause the scrape until you finish proving you are human. ')
print('(7) Disclaimer: Newegg may ban you for 24 - 48 hours for webscraping their data, then you may resume. \n Also, please consider executing during the day, with tons of web traffic to their site in your respective area. \n')
print('Happy Scraping!')
# Set up Splinter requirements.
executable_path = {'executable_path': './chromedriver.exe'}
# Add an item to the cart first, then go to the user URL and scrape.
# Ask user to input in the laptop query link they would like to scrape.
url = input("Please copy and paste your laptop query that you want to webscrape, and press enter: \n")
browser = Browser('chrome', **executable_path, headless=False, incognito=True)
########################
# Throw a headfake first.
laptops_home_url = 'https://www.newegg.com/'
browser.visit(laptops_home_url)
# Load Time.
time.sleep(4)
#current_url = browser.url
browser.find_by_xpath('/html/body/header/div[1]/div[3]/div[1]/form/div/div[1]/input').mouse_over()
time.sleep(1)
browser.find_by_xpath('/html/body/header/div[1]/div[3]/div[1]/form/div/div[1]/input').click()
time.sleep(1)
# Type in laptops
intial_search = browser.find_by_xpath('/html/body/header/div[1]/div[3]/div[1]/form/div/div[1]/input').type('Lenovo Laptops intel', slowly=True)
for k in intial_search:
time.sleep(0.5)
pass
time.sleep(3)
# Click the search button
browser.find_by_xpath('/html/body/header/div[1]/div[3]/div[1]/form/div/div[3]/button').click()
print("Sleeping for 5 seconds. \n")
time.sleep(5)
# try to click on the first workable link
for i in range(2,4):
try:
browser.find_by_xpath(f'/html/body/div[4]/section/div/div/div[2]/div/div/div/div[2]/div[1]/div[3]/div[{i}]/div[1]/div[1]/a').mouse_over()
time.sleep(1)
browser.find_by_xpath(f'/html/body/div[4]/section/div/div/div[2]/div/div/div/div[2]/div[1]/div[3]/div[{i}]/div[1]/div[1]/a').click()
except:
print(f"i {i} - Exception occurred. Trying next link. ")
time.sleep(5)
browser.back()
time.sleep(4)
g_recaptcha_check()
#####################
print("Sleeping for 5 seconds. \n")
time.sleep(3)
# Go to the user intended url
browser.visit(url)
time.sleep(3)
g_recaptcha_check()
current_url = browser.url
# Allocating loading time.
time.sleep(4)
#current_url = browser.url
response = requests.get(current_url)
print(f"{response} \n")
target_page_soup = soup(response.text, 'html.parser')
are_you_human_backend(target_page_soup)
# Run the results_pages function to gather the total pages to be scraped.
results_pages(target_page_soup)
"""
This is the loop that performs the page by page scraping of data / results
of the user's query.
"""
# List set up for where class Laptop objects will be stored.
print("Beginning webscraping and activity log below... ")
print("="*60)
product_catalog = []
# "Stop" in range below is "total_results_pages+1" because we started at 1.
for turn_page in range(1, total_results_pages+1):
"""
If "reCAPTCHA" pops up, pause the program using an input. This allows the user to continue
to scrape after they're done completing the quiz by inputting any value.
"""
# Allocating loading time.
time.sleep(4)
g_recaptcha_check()
print(f"Beginning mouse over activity... \n")
# Set up "containers" to be passed into main scraping function.
if turn_page == 1:
containers = target_page_soup.find_all("div", class_="item-container")
else:
target_url = browser.url
# Use Request.get() - throw the boomerang at the target, retrieve the info, & return back to requestor
response_target = requests.get(target_url)
# Use BeautifulSoup to read grab all the HTML using the lxml parser
target_page_soup = soup(response_target.text, 'html.parser')
# Pass in target_page_soup to scan on the background (usually 10 pages in) if the html has text "Are you human?"
# If yes, the browser will refresh twice, and return a new target_page_soup that should have the scrapable items we want
are_you_human_backend(target_page_soup)
containers = target_page_soup.find_all("div", class_="item-container")
print(f"Scraping Current Page: {turn_page} \n")
# Execute webscraper function. Output is a csv file in the processing folder and dictionary.
newegg_page_scraper(containers, turn_page)
print("Creating laptop objects for this page... \n")
# Create instances of class objects of the laptops/notebooks using a list comprehension.
objects = [Laptops(**prod_obj) for prod_obj in scraped_dict]
print(f"Finished creating Laptop objects for page {turn_page} ... \n")
# Append all of the objects to the main product_catalog list (List of List of Objects).
print(f"Adding {len(objects)} to laptop catalog... \n")
product_catalog.append(objects)
random_a_tag_mouse_over3()
if turn_page == total_results_pages:
print(f"Completed scraping {turn_page} / {total_results_pages} pages. \n ")
# Exit the broswer once complete webscraping.
browser.quit()
else:
try:
y = random.randint(3, 5)
print(f"Current Page: {turn_page}) | SLEEPING FOR {y} SECONDS THEN will click next page. \n")
time.sleep(y)
random_xpath_top_bottom()
except:
z = random.randint(3, 5)
print(f" (EXCEPTION) Current Page: {turn_page}) | SLEEPING FOR {z} SECONDS - Will click next page, if applicable. \n")
time.sleep(z)
random_xpath_top_bottom()
time.sleep(1)
print("")
print("="*60)
print("")
# Prompt the user if they would like to concatenate all of the pages into one csv file
concat_y_n = input(f'All {total_results_pages} pages have been saved in the "processing" folder (1 page = 1 csv file). Would you like us to concatenate all the files into one? Enter "y" if so. Otherwise, enter any key to exit the program. \n')
if concat_y_n == 'y':
concatenate(total_results_pages)
print(f'WebScraping Complete! All {total_results_pages} have been scraped and saved as {current_date}_{pdt_category}_scraped_{total_results_pages}_pages_.csv in the "finished_outputs" folder \n')
# Prompt the user to if they would like to clear out processing folder function here - as delete everything to prevent clutter
clear_processing_y_n = input(f'The "processing" folder has {total_results_pages} csv files, one for each page that was scraped. Would you like to clear these files? Enter "y" if so. Otherwise, enter any key to exit the program. \n')
if clear_processing_y_n == 'y':
clean_processing_fldr()
print('Thank you checking out my project, and hope you found this useful! \n')
# /html/body/div[4]/section/div/div/div[2]/div/div/div/div[2]/div[1]/div[2]/div[1]/div[2]/div/div[2]/button
# /html/body/div[4]/section/div/div/div[2]/div/div/div/div[2]/div[1]/div[2]/div[4]/div/div/div[11]/button
# /html/body/div[4]/section/div/div/div[2]/div/div/div/div[2]/div[1]/div[2]/div[4]/div/div/div[11]/button
# /html/body/div[4]/section/div/div/div[2]/div/div/div/div[2]/div[1]/div[2]/div[4]/div/div/div[11]/button
# 20 pages
https://www.newegg.com/p/pl?N=100006740%20600004804%20600136700%20600165638%20601286795%20601346405%20600004341%20600004343&recaptcha=pass&LeftPriceRange=1000%201500
## 22 pages
https://www.newegg.com/p/pl?N=100006740%20600004804%20600136700%20600165638%20601286795%20601346405%20600004341%20600004343%20600440394%20601183480%20601307583&LeftPriceRange=1000%201500
# 35
https://www.newegg.com/p/pl?N=100006740%20600004804%20600136700%20600165638%20601286795%20601346405%20600004341%20600004343%20601183480%20601307583%20601286800%204814&LeftPriceRange=1000%201500
# 25 pages
https://www.newegg.com/p/pl?N=100006740%20600004804%20600136700%20600165638%20601286795%20601346405%20600004341%20600004343%20601183480%20601307583%20601286800%204814%20601296065%20601296059%20601296066&LeftPriceRange=1000%201500
# 15 pages
https://www.newegg.com/p/pl?N=100006740%20600004804%20600136700%20600165638%20601346405%20600004341%20600004343%20601183480%20601307583%20601286800%204814%20601296065%20601296059%20601296066&LeftPriceRange=1000%201500
# 26 pages
https://www.newegg.com/p/pl?N=100006740%20600004804%20600136700%20600165638%20601346405%20600004341%20600004343%20601183480%20601307583%20601286800%204814%20601296065%20601296059%20601296066%20601286795%20600440394&LeftPriceRange=1000%201500
# 28 pages
https://www.newegg.com/p/pl?N=100006740%20600004804%20600136700%20600165638%20601346405%20600004341%20600004343%20601183480%20601307583%20601286800%204814%20601296065%20601296059%20601296066%20601286795%20600440394%20600337010%20601107729%20601331008&LeftPriceRange=1000%201500
# 48 pages
https://www.newegg.com/p/pl?N=100006740%20600004804%20600136700%20600165638%20600004343%20601183480%20601307583%204814%20601296065%20601296059%20601296066%20601286795%20600440394%20600004344&LeftPriceRange=1000%201500
# 29
https://www.newegg.com/p/pl?N=100006740%20600004804%20600136700%20600165638%20600004343%20601183480%20601307583%204814%20601296066%20601286795%20600440394%20600004344%20601286800&LeftPriceRange=1000%201500
# 33 pages
https://www.newegg.com/p/pl?N=100006740%20600004804%20600136700%20600165638%20600004343%20601183480%20601307583%204814%20601296066%20601286795%20600440394%20600004344%20601286800%20600337010&LeftPriceRange=1000%201500
# 26 pages
https://www.newegg.com/p/pl?N=100006740%20600004804%20600136700%20600165638%20601183480%20601307583%204814%20601296066%20601286795%20600440394%20600004344%20601286800%20600337010%20601107729%20601331008&LeftPriceRange=1000%201500
# 11 pages
https://www.newegg.com/p/pl?N=100006740%20600004804%20600136700%20600165638%20601183480%204814%20601296066%20600440394%20600004344%20601286800%20600337010%20601107729%20601331008&LeftPriceRange=1000%201500
# 22 pages
https://www.newegg.com/p/pl?N=100006740%20600004804%20600136700%20600165638%20601183480%204814%20601296066%20600440394%20600004344%20601286800%20600337010%20601107729%20601331008%204023%204022%204084
# 33 pages
https://www.newegg.com/p/pl?N=100006740%20600004804%20600136700%20600165638%204814%20601296066%20600004344%204023%204022%204084
# 33 pages
https://www.newegg.com/p/pl?N=100006740%20600004804%20600136700%20600165638%204814%20601296066%204023%204022%2050001186%2050010418%2050010772
# 24 pages
https://www.newegg.com/p/pl?N=100006740%20600004804%20600165638%204814%20601296066%204023%204022%2050001186%2050010418%2050010772
# 15 pages
https://www.newegg.com/p/pl?N=100006740%20600004804%20600165638%204814%20601296066%204022%2050001186%2050010418%2050010772
# 17 pages
https://www.newegg.com/p/pl?N=100006740%20600004804%20600165638%204814%20601296066%204022%2050001186%2050010418%2050010772%2050001315%2050001312
# 18 pages
https://www.newegg.com/p/pl?N=100006740%20600004804%20600165638%204814%20601296066%204022%2050001186%2050010418%2050010772%2050001315%2050001312%2050001146
# 19 pages
https://www.newegg.com/p/pl?N=100006740%20600004804%20600165638%204814%20601296066%204022%2050001186%2050010418%2050010772%2050001315%2050001312%2050001146%2050001759%2050001149
# 25 pages
https://www.newegg.com/p/pl?N=100006740%20600004804%20600165638%204814%20601296066%204022%2050001186%2050010418%2050010772%2050001315%2050001312%2050001146%2050001759%2050001149%2050001077%20600136700
# /html/body/div[4]/section/div/div/div[2]/div/div/div/div[2]/div[1]/div[2]/div[4]/div/div/div[11]/button
# /html/body/div[4]/section/div/div/div[2]/div/div/div/div[2]/div[1]/div[2]/div[1]/div[2]/div/div[2]/button
# /html/body/div[4]/section/div/div/div[2]/div/div/div/div[2]/div[1]/div[2]/div[1]/div[2]/div/div[2]/button
# /html/body/div[4]/section/div/div/div[2]/div/div/div/div[2]/div[1]/div[2]/div[3]/div/div/div[11]/button
browser.find_by_xpath('/html/body/div[4]/section/div/div/div[2]/div/div/div/div[2]/div[1]/div[2]/div[3]/div/div/div[11]/button').click()
browser.find_by_xpath('/html/body/div[4]/section/div/div/div[2]/div/div/div/div[2]/div[1]/div[2]/div[3]/div/div/div[11]/button').click()
# /html/body/div[4]/section/div/div/div[2]/div/div/div/div[2]/div[1]/div[2]/div[1]/div[2]/div/div[2]/button
browser.find_by_xpath('/html/body/div[4]/section/div/div/div[2]/div/div/div/div[2]/div[1]/div[2]/div[1]/div[2]/div/div[2]/button').click()
target_page_soup.find_all("div", class_="item-container")
browser.find_by_xpath('/html/body/div[5]/section/div/div/div[2]/div/div/div/div[2]/div[1]/div[2]/div[4]/div/div/div[11]/button').click()
# 32 pages
# https://www.newegg.com/p/pl?N=100006740%20601346405%20601307583%20601107729%20600337010%20601313977%20601274231%20601331008%20600440394%20601183480%20600136700
# 29 pages
# https://www.newegg.com/p/pl?N=100006740%20601346405%20601307583%20601107729%20600337010%20601313977%20601274231%20601331008%20600136700
# 18 pages
# https://www.newegg.com/p/pl?N=100006740%20601346405%20601307583%20601107729%20601313977%20601274231%20601331008%20600136700
# 30 pages
#https://www.newegg.com/p/pl?N=100006740%20601346405%20601307583%20601107729%20601274231%20601331008%20600136700%20601346404%20600337010
# 28 pages
# https://www.newegg.com/p/pl?N=100006740%20601346405%20601307583%20601107729%20601274231%20601331008%20600136700%20600337010
# 21 Pages
# https://www.newegg.com/p/pl?N=100006740%20601307583%20601107729%20601274231%20601331008%20600136700%20600337010
# 13 pages
# https://www.newegg.com/p/pl?N=100006740%20601307583%20601107729%20601274231%20601331008%20600136700
# 23 pages
# https://www.newegg.com/p/pl?N=100006740%20601307583%20601107729%20601274231%20600136700%20601313977%20600337010%20600440394
```
| github_jupyter |
```
%load_ext autoreload
%autoreload 2
import sklearn
from sklearn import datasets
iris = datasets.load_iris()
iris
iris.feature_names
print(iris.data.shape, iris.data.dtype)
iris.target
iris.target_names
import numpy as np
from chainer_chemistry.datasets.numpy_tuple_dataset import NumpyTupleDataset
# Use the whole dataset for training, for simplicity
dataset = NumpyTupleDataset(iris.data.astype(np.float32), iris.target.astype(np.int32))
train = dataset
from chainer.functions import relu, dropout
from chainer_chemistry.models.mlp import MLP
from chainer_chemistry.models.prediction.classifier import Classifier
from chainer.functions import dropout
def activation_relu_dropout(h):
return dropout(relu(h), ratio=0.5)
out_dim = len(iris.target_names)
predictor = MLP(out_dim=out_dim, hidden_dim=48, n_layers=2, activation=activation_relu_dropout)
classifier = Classifier(predictor)
from chainer import iterators
from chainer import optimizers
from chainer import training
from chainer.training import extensions as E
def fit(model, dataset, batchsize=16, epoch=10, out='results/tmp', device=-1):
train_iter = iterators.SerialIterator(train, batchsize)
optimizer = optimizers.Adam()
optimizer.setup(model)
updater = training.StandardUpdater(
train_iter, optimizer, device=device)
trainer = training.Trainer(updater, (epoch, 'epoch'), out=out)
#trainer.extend(E.Evaluator(val_iter, classifier,
# device=device, converter=concat_mols))
trainer.extend(E.LogReport(), trigger=(10, 'epoch'))
trainer.extend(E.PrintReport([
'epoch', 'main/loss', 'main/accuracy', 'validation/main/loss',
'validation/main/accuracy', 'elapsed_time']))
trainer.run()
fit(classifier, train, batchsize=16, epoch=100)
```
## Saliency visualization
```
from chainer_chemistry.saliency.calculator.gradient_calculator import GradientCalculator
from chainer_chemistry.saliency.calculator.integrated_gradients_calculator import IntegratedGradientsCalculator
from chainer_chemistry.link_hooks.variable_monitor_link_hook import VariableMonitorLinkHook
# 1. instantiation
gradient_calculator = GradientCalculator(classifier)
#gradient_calculator = IntegratedGradientsCalculator(classifier, steps=3,
from chainer_chemistry.saliency.calculator.calculator_utils import GaussianNoiseSampler
# --- VanillaGrad ---
M = 30
# 2. compute
saliency_samples_vanilla = gradient_calculator.compute(
train, M=1,)
saliency_samples_smooth = gradient_calculator.compute(
train, M=M, noise_sampler=GaussianNoiseSampler())
saliency_samples_bayes = gradient_calculator.compute(
train, M=M, train=True)
# 3. aggregate
method = 'square'
saliency_vanilla = gradient_calculator.aggregate(
saliency_samples_vanilla, ch_axis=None, method=method)
saliency_smooth = gradient_calculator.aggregate(
saliency_samples_smooth, ch_axis=None, method=method)
saliency_bayes = gradient_calculator.aggregate(
saliency_samples_bayes, ch_axis=None, method=method)
from chainer_chemistry.saliency.visualizer.table_visualizer import TableVisualizer
from chainer_chemistry.saliency.visualizer.visualizer_utils import normalize_scaler
visualizer = TableVisualizer()
# Visualize saliency of `i`-th data
i = 0
visualizer.visualize(saliency_vanilla[i], feature_names=iris.feature_names,
scaler=normalize_scaler)
```
Visualize the saliency averaged over all data --> this can be considered a form of "feature importance"
```
saliency_mean = np.mean(saliency_vanilla, axis=0)
visualizer.visualize(saliency_mean, feature_names=iris.feature_names, num_visualize=-1,
scaler=normalize_scaler)
visualizer.visualize(saliency_mean, feature_names=iris.feature_names, num_visualize=-1,
scaler=normalize_scaler, save_filepath='results/iris_vanilla_{}.png'.format(method))
saliency_mean = np.mean(saliency_smooth, axis=0)
visualizer.visualize(saliency_mean, feature_names=iris.feature_names, num_visualize=-1,
scaler=normalize_scaler)
visualizer.visualize(saliency_mean, feature_names=iris.feature_names, num_visualize=-1,
scaler=normalize_scaler, save_filepath='results/iris_smooth_{}.png'.format(method))
saliency_mean = np.mean(saliency_bayes, axis=0)
visualizer.visualize(saliency_mean, feature_names=iris.feature_names, num_visualize=-1,
scaler=normalize_scaler)
visualizer.visualize(saliency_mean, feature_names=iris.feature_names, num_visualize=-1,
scaler=normalize_scaler, save_filepath='results/iris_bayes_{}.png'.format(method))
```
## sklearn random forest feature importance
Ref:
- https://qiita.com/TomokIshii/items/290adc16e2ca5032ca07
- https://stackoverflow.com/questions/44101458/random-forest-feature-importance-chart-using-python
```
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.ensemble import RandomForestClassifier
iris = load_iris()
X = iris.data
y = iris.target
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=0)
clf_rf = RandomForestClassifier()
clf_rf.fit(X_train, y_train)
y_pred = clf_rf.predict(X_test)
accu = accuracy_score(y_test, y_pred)
print('accuracy = {:>.4f}'.format(accu))
# Feature Importance
fti = clf_rf.feature_importances_
print('Feature Importances:')
for i, feat in enumerate(iris['feature_names']):
print('\t{0:20s} : {1:>.6f}'.format(feat, fti[i]))
import matplotlib.pyplot as plt
features = iris['feature_names']
importances = clf_rf.feature_importances_
indices = np.argsort(importances)
plt.title('Random forest feature importance')
plt.barh(range(len(indices)), importances[indices], color='b', align='center')
plt.yticks(range(len(indices)), [features[i] for i in indices])
plt.xlabel('Relative Importance')
plt.show()
```
| github_jupyter |
# Time series analysis of O'Hare taxi rides data
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import TimeSeriesSplit, cross_validate, GridSearchCV
pd.set_option('display.max_rows', 6)
plt.style.use('ggplot')
plt.rcParams.update({'font.size': 16,
'axes.labelweight': 'bold',
'figure.figsize': (8,6)})
from mealprep.mealprep import find_missing_ingredients
# pd.set_option('display.max_colwidth', None)
pd.set_option('display.max_rows', None)
import pickle
ORD_df = pd.read_csv('../data/ORD_train.csv').drop(columns=['Unnamed: 0', 'Unnamed: 0.1'])
ORD_df
```
## Tom's functions
```
# Custom functions
def lag_df(df, lag, cols):
return df.assign(**{f"{col}-{n}": df[col].shift(n) for n in range(1, lag + 1) for col in cols})
def ts_predict(input_data, model, n=20, responses=1):
predictions = []
n_features = input_data.size
for _ in range(n):
predictions = np.append(predictions,
model.predict(input_data.reshape(1, -1))) # make prediction
input_data = np.append(predictions[-responses:],
input_data[:n_features-responses]) # new input data
return predictions.reshape((-1, responses))
def plot_ts(ax, df_train, df_test, predictions, xlim, response_cols):
col_cycle = plt.rcParams['axes.prop_cycle'].by_key()['color']
for i, col in enumerate(response_cols):
ax.plot(df_train[col], '-', c=col_cycle[i], label = f'Train {col}')
ax.plot(df_test[col], '--', c=col_cycle[i], label = f'Validation {col}')
ax.plot(np.arange(df_train.index[-1] + 1,
df_train.index[-1] + 1 + len(predictions)),
predictions[:,i], c=col_cycle[-i-2], label = f'Prediction {col}')
ax.set_xlim(0, xlim+1)
ax.set_title(f"Train Shape = {len(df_train)}, Validation Shape = {len(df_test)}",
fontsize=16)
ax.set_ylabel(df_train.columns[0])
def plot_forecast(ax, df_train, predictions, xlim, response_cols):
col_cycle = plt.rcParams['axes.prop_cycle'].by_key()['color']
for i, col in enumerate(response_cols):
ax.plot(df_train[col], '-', c=col_cycle[i], label = f'Train {col}')
ax.plot(np.arange(df_train.index[-1] + 1,
df_train.index[-1] + 1 + len(predictions)),
predictions[:,i], '-', c=col_cycle[-i-2], label = f'Prediction {col}')
ax.set_xlim(0, xlim+len(predictions))
ax.set_title(f"{len(predictions)}-step forecast",
fontsize=16)
ax.set_ylabel(response_cols)
def create_rolling_features(df, columns, windows=[6, 12]):
for window in windows:
df["rolling_mean_" + str(window)] = df[columns].rolling(window=window).mean()
df["rolling_std_" + str(window)] = df[columns].rolling(window=window).std()
df["rolling_var_" + str(window)] = df[columns].rolling(window=window).var()
df["rolling_min_" + str(window)] = df[columns].rolling(window=window).min()
df["rolling_max_" + str(window)] = df[columns].rolling(window=window).max()
df["rolling_min_max_ratio_" + str(window)] = df["rolling_min_" + str(window)] / df["rolling_max_" + str(window)]
df["rolling_min_max_diff_" + str(window)] = df["rolling_max_" + str(window)] - df["rolling_min_" + str(window)]
df = df.replace([np.inf, -np.inf], np.nan)
df.fillna(0, inplace=True)
return df
lag = 3
ORD_train_lag = lag_df(ORD_df, lag=lag, cols=['seats']).dropna()
ORD_train_lag
find_missing_ingredients(ORD_train_lag)
lag = 3 # you can vary the number of lagged features in the model
n_splits = 5 # you can vary the number of train/validation splits
response_col = ['rides']
# df_lag = lag_df(df, lag, response_col).dropna()
tscv = TimeSeriesSplit(n_splits=n_splits) # define the splitter
model = RandomForestRegressor() # define the model
cv = cross_validate(model,
X = ORD_train_lag.drop(columns=response_col),
y = ORD_train_lag[response_col[0]],
scoring =('r2', 'neg_mean_squared_error'),
cv=tscv,
return_train_score=True)
# pd.DataFrame({'split': range(n_splits),
# 'train_r2': cv['train_score'],
# 'train_negrmse': cv['train_']
# 'validation_r2': cv['test_score']}).set_index('split')
pd.DataFrame(cv)
fig, ax = plt.subplots(n_splits, 1, figsize=(8,4*n_splits))
for i, (train_index, test_index) in enumerate(tscv.split(ORD_train_lag)):
df_train, df_test = ORD_train_lag.iloc[train_index], ORD_train_lag.iloc[test_index]
model = RandomForestRegressor().fit(df_train.drop(columns=response_col),
df_train[response_col[0]]) # train model
# Prediction loop
predictions = model.predict(df_test.drop(columns=response_col))[:,None]
# Plot
plot_ts(ax[i], df_train, df_test, predictions, xlim=ORD_train_lag.index[-1], response_cols=response_col)
ax[0].legend(facecolor='w')
ax[i].set_xlabel('time')
fig.tight_layout()
lag = 3 # you can vary the number of lagged features in the model
n_splits = 3 # you can vary the number of train/validation splits
response_col = ['rides']
# df_lag = lag_df(df, lag, response_col).dropna()
tscv = TimeSeriesSplit(n_splits=n_splits) # define the splitter
model = RandomForestRegressor() # define the model
param_grid = {'n_estimators': [50, 100, 150, 200],
'max_depth': [10,25,50,100, None]}
X = ORD_train_lag.drop(columns=response_col)
y = ORD_train_lag[response_col[0]]
gcv = GridSearchCV(model,
param_grid = param_grid,
# X = ORD_train_lag.drop(columns=response_col),
# y = ORD_train_lag[response_col[0]],
scoring ='neg_mean_squared_error',
cv=tscv,
return_train_score=True)
gcv.fit(X,y)
# pd.DataFrame({'split': range(n_splits),
# 'train_r2': cv['train_score'],
# 'train_negrmse': cv['train_']
# 'validation_r2': cv['test_score']}).set_index('split')
gcv.score(X,y)
filename = 'grid_search_model_1.sav'
pickle.dump(gcv, open(filename, 'wb'))
A = list(ORD_train_lag.columns)
A.remove('rides')
pd.DataFrame({'columns' : A, 'importance' : gcv.best_estimator_.feature_importances_}).sort_values('importance', ascending=False)
gcv.best_params_
pd.DataFrame(gcv.cv_results_)
gcv.best_estimator_
```
| github_jupyter |
# Lesson 04: Numpy
- Used for working with tensors
- Provides vectors, matrices, and tensors
- Provides mathematical functions that operate on vectors, matrices, and tensors
- Implemented in Fortran and C in the backend
```
import numpy as np
```
## Making Arrays
```
arr = np.array([1, 2, 3])
print(arr, type(arr), arr.shape, arr.dtype, arr.ndim)
matrix = np.array(
[[1, 2, 3],
[4, 5, 6.2]]
)
print(matrix, type(matrix), matrix.shape, matrix.dtype, matrix.ndim)
a = np.zeros((10, 2))
print(a)
a = np.ones((4, 5))
print(a)
a = np.full((2, 3, 5), 6)
print(a)
a = np.eye(4)
print(a)
a = np.random.random((5, 5))
print(a)
```
## Indexing
```
arr = np.array([
[1, 2, 3, 4, 5],
[6, 7, 8, 9, 10],
[11, 12, 13, 14, 15]
])
print(arr)
```
The indexing format is: [rows , columns]
You can then slice the individual dimension as follows: [start : end , start : end]
```
print(arr[1:, 2:4])
a = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])
print(a[ [0, 1, 2, 3], [1, 0, 2, 0] ])
print(a[0, 1], a[1, 0], a[2, 2], a[3, 0])
print(np.array([a[0, 1], a[1, 0], a[2, 2], a[3, 0]]))
b = np.array([1, 0, 2, 0])
print(a[np.arange(4), b])
a[np.arange(4), b] += 7
print(a)
a = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])
bool_a = (a > 5)
print(bool_a)
print(a[bool_a])
print(a[a>7])
```
## Data Types
```
b = np.array([1, 2, 3], dtype=np.float64)
print(b.dtype)
```
https://numpy.org/doc/stable/reference/arrays.dtypes.html
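The choice of dtype matters in practice. As a small illustration (a hedged sketch, not part of the original lesson), fixed-width integer types silently wrap around when a result exceeds their range, so either pick a wide enough type up front or convert with `astype`:
```
a = np.array([250, 251], dtype=np.uint8)
print(a + 10)                   # stays uint8 and wraps around: [4 5]
print(a.astype(np.int32) + 10)  # convert first to get [260 261]
```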
## Operations
```
x = np.array([
[1, 2],
[3, 4]
])
y = np.array([
[5, 6],
[7, 8]
])
print(x, x.shape)
print(y, y.shape)
print(x + y)
print(np.add(x, y))
print(x - y)
print(np.subtract(x, y))
print(x * y)
print(np.multiply(x, y))
print(x / y)
print(np.divide(x, y))
```
### Matrix Multiplication
```
w = np.array([2, 4])
v = np.array([4, 6])
print(x)
print(y)
print(w)
print(v)
```
#### Vector-vector multiplication
```
print(v.dot(w))
print(np.dot(v, w))
```
#### Matrix-vector multiplication
```
print(x.dot(w))
```
#### Matrix multiplication
```
print(x.dot(y))
print(np.dot(x, y))
```
### Transpose
```
print(x)
print(x.T)
```
http://docs.scipy.org/doc/numpy/reference/routines.array-manipulation.html
### Other Operations
```
print(x)
print(np.sum(x))
print(np.sum(x, axis=0))
print(np.sum(x, axis=1))
```
More array operations are listed here:
http://docs.scipy.org/doc/numpy/reference/routines.math.html
## Broadcasting
Broadcasting allows Numpy to perform operations on arrays of different shapes. Operations that would otherwise require explicit loops can be done without them, which speeds up your program.
```
x = np.array([
[1, 2, 3],
[4, 5, 6],
[7, 8, 9],
[10, 11, 12],
[13, 14, 15],
[16, 17, 18],
])
print(x, x.shape)
y = np.array([1, 2, 3])
print(y, y.shape)
```
### Loop Approach
```
z = np.empty_like(x)
print(z, z.shape)
for i in range(x.shape[0]):
z[i, :] = x[i, :] + y
print(z)
```
### Tile Approach
```
yy = np.tile(y, (6, 1))
print(yy, yy.shape)
print(x + yy)
```
### Broadcasting Approach
```
print(x, x.shape)
print(y, y.shape)
print(x + y)
```
- https://numpy.org/doc/stable/user/basics.broadcasting.html
- http://scipy.github.io/old-wiki/pages/EricsBroadcastingDoc
- http://docs.scipy.org/doc/numpy/reference/ufuncs.html#available-ufuncs
## Reshape
```
x = np.array([
[1, 2, 3],
[4, 5, 6]
])
y = np.array([2, 2])
print(x, x.shape)
print(y, y.shape)
```
### Transpose Approach
```
xT = x.T
print(xT)
xTw = xT + y
print(xTw)
x = xTw.T
print(x)
```
Transpose approach in one line
```
print( (x.T + y).T )
```
### Reshape Approach
```
print(y, y.shape, y.ndim)
y = np.reshape(y, (2, 1))
print(y, y.shape, y.ndim)
print(x + y)
```
# Resources
- http://docs.scipy.org/doc/numpy/reference/
- https://numpy.org/doc/stable/user/absolute_beginners.html
- https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md
| github_jupyter |
```
import torch
import numpy as np
import pandas as pd
import matchzoo as mz
print('matchzoo version', mz.__version__)
ranking_task = mz.tasks.Ranking(losses=mz.losses.RankHingeLoss())
ranking_task.metrics = [
mz.metrics.NormalizedDiscountedCumulativeGain(k=3),
mz.metrics.NormalizedDiscountedCumulativeGain(k=5),
mz.metrics.MeanAveragePrecision()
]
print("`ranking_task` initialized with metrics", ranking_task.metrics)
print('data loading ...')
train_pack_raw = mz.datasets.wiki_qa.load_data('train', task=ranking_task)
dev_pack_raw = mz.datasets.wiki_qa.load_data('dev', task=ranking_task, filtered=True)
test_pack_raw = mz.datasets.wiki_qa.load_data('test', task=ranking_task, filtered=True)
print('data loaded as `train_pack_raw` `dev_pack_raw` `test_pack_raw`')
preprocessor = mz.preprocessors.BasicPreprocessor(
truncated_length_left = 10,
truncated_length_right = 100,
filter_low_freq = 2
)
train_pack_processed = preprocessor.fit_transform(train_pack_raw)
dev_pack_processed = preprocessor.transform(dev_pack_raw)
test_pack_processed = preprocessor.transform(test_pack_raw)
preprocessor.context
glove_embedding = mz.datasets.embeddings.load_glove_embedding(dimension=100)
term_index = preprocessor.context['vocab_unit'].state['term_index']
embedding_matrix = glove_embedding.build_matrix(term_index)
l2_norm = np.sqrt((embedding_matrix * embedding_matrix).sum(axis=1))
embedding_matrix = embedding_matrix / l2_norm[:, np.newaxis]
trainset = mz.dataloader.Dataset(
data_pack=train_pack_processed,
mode='pair',
num_dup=2,
num_neg=1,
batch_size=20,
resample=True,
sort=False
)
testset = mz.dataloader.Dataset(
data_pack=test_pack_processed,
batch_size=20
)
padding_callback = mz.models.DRMMTKS.get_default_padding_callback()
trainloader = mz.dataloader.DataLoader(
dataset=trainset,
stage='train',
callback=padding_callback
)
testloader = mz.dataloader.DataLoader(
dataset=testset,
stage='dev',
callback=padding_callback
)
model = mz.models.DRMMTKS()
model.params['task'] = ranking_task
model.params['embedding'] = embedding_matrix
model.params['mask_value'] = 0
model.params['top_k'] = 10
model.params['mlp_activation_func'] = 'tanh'
model.build()
print(model)
print('Trainable params: ', sum(p.numel() for p in model.parameters() if p.requires_grad))
optimizer = torch.optim.Adadelta(model.parameters())
trainer = mz.trainers.Trainer(
model=model,
optimizer=optimizer,
trainloader=trainloader,
validloader=testloader,
validate_interval=None,
epochs=10
)
trainer.run()
```
| github_jupyter |
# Document embeddings in BigQuery
This notebook shows how to do use a pre-trained embedding as a vector representation of a natural language text column.
Given this embedding, we can use it in machine learning models.
## Embedding model for documents
We're going to use a model that has been pretrained on Google News. Here's an example of how it works in Python. We will use it directly in BigQuery, however.
```
import tensorflow as tf
import tensorflow_hub as tfhub
model = tf.keras.Sequential()
model.add(tfhub.KerasLayer("https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1",
output_shape=[20], input_shape=[], dtype=tf.string))
model.summary()
model.predict(["""
Long years ago, we made a tryst with destiny; and now the time comes when we shall redeem our pledge, not wholly or in full measure, but very substantially. At the stroke of the midnight hour, when the world sleeps, India will awake to life and freedom.
A moment comes, which comes but rarely in history, when we step out from the old to the new -- when an age ends, and when the soul of a nation, long suppressed, finds utterance.
"""])
```
## Loading model into BigQuery
The Swivel model above is already available in SavedModel format. But we need it on Google Cloud Storage before we can load it into BigQuery.
```
%%bash
BUCKET=ai-analytics-solutions-kfpdemo # CHANGE AS NEEDED
rm -rf tmp
mkdir tmp
FILE=swivel.tar.gz
wget --quiet -O tmp/swivel.tar.gz https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1?tf-hub-format=compressed
cd tmp
tar xvfz swivel.tar.gz
cd ..
mv tmp swivel
gsutil -m cp -R swivel gs://${BUCKET}/swivel
rm -rf swivel
echo "Model artifacts are now at gs://${BUCKET}/swivel/*"
```
Let's load the model into a BigQuery dataset named advdata (create it if necessary)
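If the dataset does not exist yet, you can create it from the BigQuery console or the `bq` CLI; a hedged sketch using the BigQuery Python client (assuming the `google-cloud-bigquery` package is installed and your credentials/default project are already configured) would be:
```
from google.cloud import bigquery

client = bigquery.Client()
# exists_ok=True makes this a no-op if the advdata dataset is already there
client.create_dataset("advdata", exists_ok=True)
```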
```
%%bigquery
CREATE OR REPLACE MODEL advdata.swivel_text_embed
OPTIONS(model_type='tensorflow', model_path='gs://ai-analytics-solutions-kfpdemo/swivel/*')
```
From the BigQuery web console, click on "schema" tab for the newly loaded model. We see that the input is called sentences and the output is called output_0:
<img src="swivel_schema.png" />
```
%%bigquery
SELECT output_0 FROM
ML.PREDICT(MODEL advdata.swivel_text_embed,(
SELECT "Long years ago, we made a tryst with destiny; and now the time comes when we shall redeem our pledge, not wholly or in full measure, but very substantially." AS sentences))
```
## Create lookup table
Let's create a lookup table of embeddings. We'll use the comments field of a storm reports table from NOAA.
This is an example of the Feature Store design pattern.
```
%%bigquery
CREATE OR REPLACE TABLE advdata.comments_embedding AS
SELECT
output_0 as comments_embedding,
comments
FROM ML.PREDICT(MODEL advdata.swivel_text_embed,(
SELECT comments, LOWER(comments) AS sentences
FROM `bigquery-public-data.noaa_preliminary_severe_storms.wind_reports`
))
```
For an example of using these embeddings in text similarity or document clustering, please see the following Medium blog post: https://medium.com/@lakshmanok/how-to-do-text-similarity-search-and-document-clustering-in-bigquery-75eb8f45ab65
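As a quick illustration of what such a similarity search does with these vectors (a hedged sketch, not code from that post): once two comments have been embedded, their closeness can be scored with cosine similarity.
```
import numpy as np

def cosine_similarity(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# e.g. two 20-dimensional Swivel embeddings pulled from advdata.comments_embedding
emb_a = np.random.rand(20)
emb_b = np.random.rand(20)
print(cosine_similarity(emb_a, emb_b))
```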
Copyright 2020 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
| github_jupyter |
<table width="100%"> <tr>
<td style="background-color:#ffffff;">
<a href="http://qworld.lu.lv" target="_blank"><img src="../images/qworld.jpg" width="35%" align="left"> </a></td>
<td style="background-color:#ffffff;vertical-align:bottom;text-align:right;">
prepared by Abuzer Yakaryilmaz (<a href="http://qworld.lu.lv/index.php/qlatvia/" target="_blank">QLatvia</a>)
<br>
updated by Melis Pahalı | December 5, 2019
<br>
updated by Özlem Salehi | September 17, 2020
</td>
</tr></table>
<table width="100%"><tr><td style="color:#bbbbbb;background-color:#ffffff;font-size:11px;font-style:italic;text-align:right;">This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. </td></tr></table>
$ \newcommand{\bra}[1]{\langle #1|} $
$ \newcommand{\ket}[1]{|#1\rangle} $
$ \newcommand{\braket}[2]{\langle #1|#2\rangle} $
$ \newcommand{\dot}[2]{ #1 \cdot #2} $
$ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $
$ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $
$ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $
$ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $
$ \newcommand{\mypar}[1]{\left( #1 \right)} $
$ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $
$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $
$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $
$ \newcommand{\onehalf}{\frac{1}{2}} $
$ \newcommand{\donehalf}{\dfrac{1}{2}} $
$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $
$ \newcommand{\vzero}{\myvector{1\\0}} $
$ \newcommand{\vone}{\myvector{0\\1}} $
$ \newcommand{\vhadamardzero}{\myvector{ \sqrttwo \\ \sqrttwo } } $
$ \newcommand{\vhadamardone}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $
$ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $
$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $
$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $
$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $
$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $
$ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $
<h2> <font color="blue"> Solutions for </font>Quantum Teleportation</h2>
<a id="task1"></a>
<h3> Task 1 </h3>
Calculate the new quantum state after this CNOT operator.
<h3>Solution</h3>
The state before CNOT is $ \sqrttwo \big( a\ket{000} + a \ket{011} + b\ket{100} + b \ket{111} \big) $.
CNOT(first_qubit,second_qubit) is applied.
If the value of the first qubit is 1, then the value of the second qubit is flipped.
Thus, the new quantum state after this CNOT is
$$ \sqrttwo \big( a\ket{000} + a \ket{011} + b\ket{110} + b \ket{101} \big). $$
<a id="task2"></a>
<h3> Task 2 </h3>
Calculate the new quantum state after this Hadamard operator.
Verify that the resulting quantum state can be written as follows:
$$
\frac{1}{2} \ket{00} \big( a\ket{0}+b\ket{1} \big) +
\frac{1}{2} \ket{01} \big( a\ket{1}+b\ket{0} \big) +
\frac{1}{2} \ket{10} \big( a\ket{0}-b\ket{1} \big) +
\frac{1}{2} \ket{11} \big( a\ket{1}-b\ket{0} \big) .
$$
<h3>Solution</h3>
The state before Hadamard is $ \sqrttwo \big( a\ket{000} + a \ket{011} + b\ket{110} + b \ket{101} \big). $
The effect of Hadamard to the first qubit is given below:
$ H \ket{0yz} \rightarrow \sqrttwo \ket{0yz} + \sqrttwo \ket{1yz} $
$ H \ket{1yz} \rightarrow \sqrttwo \ket{0yz} - \sqrttwo \ket{1yz} $
For each triple $ \ket{xyz} $ in the quantum state, we apply this transformation:
$
\frac{1}{2} \big( a\ket{000} + a\ket{100} \big) +
\frac{1}{2} \big( a\ket{011} + a\ket{111} \big) +
\frac{1}{2} \big( b\ket{010} - b\ket{110} \big) +
\frac{1}{2} \big( b\ket{001} - b\ket{101} \big) .
$
We can rearrange the summation so that we can separate Asja's qubits from Balvis' qubit:
$
\frac{1}{2} \big( a\ket{000}+b\ket{001} \big) +
\frac{1}{2} \big( a\ket{011}+b\ket{010} \big) +
\frac{1}{2} \big( a\ket{100} - b\ket{101} \big) +
\frac{1}{2} \big( a\ket{111}- b\ket{110} \big) $.
This is equivalent to
$$
\frac{1}{2} \ket{00} \big( a\ket{0}+b\ket{1} \big) +
\frac{1}{2} \ket{01} \big( a\ket{1}+b\ket{0} \big) +
\frac{1}{2} \ket{10} \big( a\ket{0}-b\ket{1} \big) +
\frac{1}{2} \ket{11} \big( a\ket{1}-b\ket{0} \big) .
$$
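This algebra can also be checked numerically. The following sketch (not part of the original solution) builds the three-qubit state with the first qubit as the leftmost tensor factor, applies the Hadamard to it, and compares the result with the grouped expression above for generic amplitudes $a$ and $b$:
```
import numpy as np

a, b = 0.6, 0.8  # any real amplitudes with a^2 + b^2 = 1
ket = lambda bits: np.eye(8)[int(bits, 2)]   # basis vector |xyz>

before = (a*ket('000') + a*ket('011') + b*ket('110') + b*ket('101')) / np.sqrt(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
after = np.kron(H, np.kron(I, I)) @ before   # Hadamard on the first (leftmost) qubit

expected = 0.5 * (a*ket('000') + b*ket('001') + a*ket('011') + b*ket('010')
                  + a*ket('100') - b*ket('101') + a*ket('111') - b*ket('110'))
print(np.allclose(after, expected))          # True
```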
<a id="task3"></a>
<h3> Task 3 </h3>
Asja sends the measurement outcomes to Balvis by using two classical bits: $ x $ and $ y $.
For each $ (x,y) $ pair, determine the quantum operator(s) that Balvis can apply to obtain $ \ket{v} = a\ket{0}+b\ket{1} $ exactly.
<h3>Solution</h3>
<b>Measurement outcome "00":</b> The state of Balvis' qubit is $ a\ket{0}+b\ket{1} $.
Balvis does not need to apply any extra operation.
<b>Measurement outcome "01":</b> The state of Balvis' qubit is $ a\ket{1}+b\ket{0} $.
If Balvis applies <u>NOT operator</u>, then the state becomes: $ a\ket{0}+b\ket{1} $.
<b>Measurement outcome "10":</b> The state of Balvis' qubit is $ a\ket{0}-b\ket{1} $.
If Balvis applies <u>Z operator</u>, then the state becomes: $ a\ket{0}+b\ket{1} $.
<b>Measurement outcome "11":</b> The state of Balvis' qubit is $ a\ket{1}-b\ket{0} $.
If Balvis applies <u>NOT operator</u> and <u>Z operator</u>, then the state becomes: $ a\ket{0}+b\ket{1} $.
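These corrections can be double-checked numerically. The sketch below (not part of the original solution) applies the $X$ and $Z$ matrices to the post-measurement states of Balvis' qubit for generic amplitudes:
```
import numpy as np

a, b = 0.6, 0.8                    # any amplitudes with a^2 + b^2 = 1
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
target = np.array([a, b])          # a|0> + b|1>

print(np.allclose(X @ np.array([b, a]), target))       # outcome "01": X recovers a|0> + b|1>
print(np.allclose(Z @ np.array([a, -b]), target))      # outcome "10": Z recovers a|0> + b|1>
print(np.allclose(Z @ X @ np.array([-b, a]), target))  # outcome "11": X then Z recovers it
```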
<a id="task4"></a>
<h3> Task 4 </h3>
Create a quantum circuit with three qubits and two classical bits.
Assume that Asja has the first two qubits and Balvis has the third qubit.
Implement the protocol given above until Balvis makes the measurement.
<ul>
<li>Create entanglement between Asja's second qubit and Balvis' qubit.</li>
<li>The state of Asja's first qubit can be initialized to a randomly picked angle.</li>
<li>Asja applies CNOT and Hadamard operators to her qubits.</li>
<li>Asja measures her own qubits and the results are stored in the classical registers. </li>
</ul>
At this point, read the state vector of the circuit by using "statevector_simulator".
<i> When a circuit that contains a measurement is simulated by "statevector_simulator", the simulator picks one of the outcomes, and so we see one of the states after the measurement.</i>
Verify that the state of Balvis' qubit is in one of these: $ \ket{v_{00}}$, $ \ket{v_{01}}$, $ \ket{v_{10}}$, and $ \ket{v_{11}}$.
<i> Follow the Qiskit order. That is, let qreg[2] be Asja's first qubit, qreg[1] be Asja's second qubit and let qreg[0] be Balvis' qubit.</i>
<h3>Solution</h3>
```
from qiskit import QuantumCircuit,QuantumRegister,ClassicalRegister,execute,Aer
from random import randrange
from math import sin,cos,pi
# We start with 3 quantum registers
# qreg[2]: Asja's first qubit - qubit to be teleported
# qreg[1]: Asja's second qubit
# qreg[0]: Balvis' qubit
qreg=QuantumRegister(3)
creg=ClassicalRegister(2) #Classical register with 2 bits is enough
qcir=QuantumCircuit(qreg,creg)
# Generation of the entangled state.
# Asja's second qubit is entangled with Balvis' qubit.
qcir.h(qreg[1])
qcir.cx(qreg[1],qreg[0])
qcir.barrier()
# We create a random qubit to teleport.
# We pick a random angle.
d=randrange(360)
r=2*pi*d/360
print("Picked angle is "+str(d)+" degrees, "+str(round(r,2))+" radians.")
# The amplitudes of the angle.
x=cos(r)
y=sin(r)
print("cos component of the angle: "+str(round(x,2))+", sin component of the angle: "+str(round(y,2)))
print("So to be teleported state is "+str(round(x,2))+"|0>+"+str(round(y,2))+"|1>.")
#Asja's qubit to be teleported
# Generation of random qubit by rotating the quantum register at the amount of picked angle.
qcir.ry(2*r,qreg[2])
qcir.barrier()
#CNOT operator by Asja where first qubit is the control and second qubit is the target
qcir.cx(qreg[2],qreg[1])
qcir.barrier()
#Hadamard operator by Asja on her first qubit
qcir.h(qreg[2])
qcir.barrier()
#Measurement by Asja stored in classical registers
qcir.measure(qreg[1],creg[0])
qcir.measure(qreg[2],creg[1])
print()
result=execute(qcir,Aer.get_backend('statevector_simulator'),optimization_level=0).result()
print("When you use statevector_simulator, one of the possible outcomes is picked randomly. Classical registers contain:")
print(result.get_counts())
print()
print("The final statevector.")
v=result.get_statevector()
for i in range(len(v)):
print(v[i].real)
print()
qcir.draw(output='mpl')
```
<a id="task5"></a>
<h3> Task 5 </h3>
Implement the protocol above by including the post-processing part done by Balvis, i.e., the measurement results by Asja are sent to Balvis and then he may apply $ X $ or $ Z $ gates depending on the measurement results.
We use classically controlled quantum operators.
Since we do not make a measurement on $ q[0] $ (Balvis' qubit), we define only 2 classical bits, each of which can also be defined separately.
```python
q = QuantumRegister(3)
c2 = ClassicalRegister(1,'c2')
c1 = ClassicalRegister(1,'c1')
qc = QuantumCircuit(q,c1,c2)
...
qc.measure(q[1],c1)
...
qc.x(q[0]).c_if(c1,1) # x-gate is applied to q[0] if the classical bit c1 is equal to 1
```
Read the state vector and verify that Balvis' state is $ \myvector{a \\ b} $ after the post-processing.
<h3>Solution</h3>
<i>Classically controlled</i> recovery operations are also added as follows. Below, the state vector is used to confirm that quantum teleportation is completed.
```
from qiskit import QuantumCircuit,QuantumRegister,ClassicalRegister,execute,Aer
from random import randrange
from math import sin,cos,pi
# We start with 3 quantum registers
# qreg[2]: Asja's first qubit - qubit to be teleported
# qreg[1]: Asja's second qubit
# qreg[0]: Balvis' qubit
qreg=QuantumRegister(3)
c1=ClassicalRegister(1)
c2=ClassicalRegister(1)
qcir=QuantumCircuit(qreg,c1,c2)
# Generation of the entangled state.
# Asja's second qubit is entangled with Balvis' qubit.
qcir.h(qreg[1])
qcir.cx(qreg[1],qreg[0])
qcir.barrier()
# We create a random qubit to teleport.
# We pick a random angle.
d=randrange(360)
r=2*pi*d/360
print("Picked angle is "+str(d)+" degrees, "+str(round(r,2))+" radians.")
# The amplitudes of the angle.
x=cos(r)
y=sin(r)
print("Cos component of the angle: "+str(round(x,2))+", sin component of the angle: "+str(round(y,2)))
print("So to be teleported state is "+str(round(x,2))+"|0>+"+str(round(y,2))+"|1>.")
#Asja's qubit to be teleported
# Generation of random qubit by rotating the quantum register at the amount of picked angle.
qcir.ry(2*r,qreg[2])
qcir.barrier()
#CNOT operator by Asja where first qubit is the control and second qubit is the target
qcir.cx(qreg[2],qreg[1])
qcir.barrier()
#Hadamard operator by Asja on the first qubit
qcir.h(qreg[2])
qcir.barrier()
#Measurement by Asja stored in classical registers
qcir.measure(qreg[1],c1)
qcir.measure(qreg[2],c2)
print()
#Post processing by Balvis
qcir.x(qreg[0]).c_if(c1,1)
qcir.z(qreg[0]).c_if(c2,1)
result2=execute(qcir,Aer.get_backend('statevector_simulator'),optimization_level=0).result()
print("When you use statevector_simulator, one of the possible outcomes is picked randomly. Classical registers contain:")
print(result2.get_counts()) #
print()
print("The final statevector.")
v=result2.get_statevector()
for i in range(len(v)):
print(v[i].real)
print()
qcir.draw(output='mpl')
```
| github_jupyter |
# DLISIO in a Nutshell
## Importing
```
%matplotlib inline
import os
import pandas as pd
import dlisio
import matplotlib.pyplot as plt
import numpy as np
import numpy.lib.recfunctions as rfn
import hvplot.pandas
import holoviews as hv
from holoviews import opts, streams
from holoviews.plotting.links import DataLink
hv.extension('bokeh', logo=None)
```
### You can work with a single file using the cell below - or by adding an additional for loop to the code below, you can work through a list of files. Another option is to use os.walk to get all .dlis files in a parent folder. Example:
for (root, dirs, files) in os.walk(folderpath):
for f in files:
filepath = os.path.join(root, f)
if filepath.endswith('.' + 'dlis'):
print(filepath)
### But for this example, we will work with a single .dlis file specified in the cell below. Note that there are some .dlis file formats that are not supported by DLISIO yet - it is good to catch them in a try/except block if you are reading files en masse, as sketched below.
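A minimal sketch of such a guarded scan (reusing `folderpath` from the os.walk example above, which you would need to define yourself) could look like this:
```
readable, unreadable = [], []
for (root, dirs, files) in os.walk(folderpath):
    for f in files:
        filepath = os.path.join(root, f)
        if filepath.endswith('.dlis'):
            try:
                with dlisio.dlis.load(filepath) as file:
                    readable.append(filepath)
            except Exception as e:
                # some .dlis flavours are not supported by DLISIO yet
                unreadable.append((filepath, str(e)))
```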
### We will load a dlis file from the open source Volve dataset available here: https://data.equinor.com/dataset/Volve
```
filepath = r""
```
## Query for specific curve
### Very quickly you can use regex to find certain curves in a file (helpful if you are scanning a lot of files for certain curves)
```
with dlisio.dlis.load(filepath) as file:
for d in file:
depth_channels = d.find('CHANNEL','DEPT')
for channel in depth_channels:
print(channel.name)
print(channel.curves())
```
## Examining internal files and frames
### Keep in mind that a .dlis file can contain multiple internal (logical) files and multiple frames. You can quickly get a numpy array of the curves in each frame below.
```
with dlisio.dlis.load(filepath) as file:
print(file.describe())
with dlisio.dlis.load(filepath) as file:
for d in file:
for fram in d.frames:
print(d.channels)
print(fram.curves())
```
## Metadata including Origin information (well name and header)
```
with dlisio.dlis.load(filepath) as file:
for d in file:
print(d.describe())
for fram in d.frames:
print(fram.describe())
for channel in d.channels:
print(channel.describe())
with dlisio.dlis.load(filepath) as file:
for d in file:
for origin in d.origins:
print(origin.describe())
```
## Reading a full dlis file
### But most likely we want a single data frame of every curve, no matter which frame it came from. So we write a bit more code to look through each frame, then look at each channel and get the curve name and unit information along with it. We will also save the information about which internal file and which frame each curve resides in.
```
curves_L = []
curves_name = []
longs = []
unit = []
files_L = []
files_num = []
frames = []
frames_num = []
with dlisio.dlis.load(filepath) as file:
for d in file:
files_L.append(d)
frame_count = 0
for fram in d.frames:
if frame_count == 0:
frames.append(fram)
frame_count = frame_count + 1
for channel in d.channels:
curves_name.append(channel.name)
longs.append(channel.long_name)
unit.append(channel.units)
files_num.append(len(files_L))
frames_num.append(len(frames))
curves = channel.curves()
curves_L.append(curves)
curve_index = pd.DataFrame(
{'Curve': curves_name,
'Long': longs,
'Unit': unit,
'Internal_File': files_num,
'Frame_Number': frames_num
})
curve_index
```
## Creating a Pandas dataframe for the entire .dlis file
### We have to be careful creating a dataframe for the whole .dlis file as often there are some curves that represent multiple values (numpy array of list values). So, you can use something like:
df = pd.DataFrame(data=curves_L, index=curves_name).T
### to view the full dlis file with lists as some of the curve values.
### Or we will use the code below to process each curve's 2D numpy array, stacking it if the curve contains multiple values per sample. Then we convert each curve into its own dataframe (uniquifying the column names by appending _1, _2, _3, etc.). Finally, to preserve the order of the curve index above, we append each dataframe in order to build the final full dlis dataframe.
```
def df_column_uniquify(df):
df_columns = df.columns
new_columns = []
for item in df_columns:
counter = 0
newitem = item
while newitem in new_columns:
counter += 1
newitem = "{}_{}".format(item, counter)
new_columns.append(newitem)
df.columns = new_columns
return df
curve_df = pd.DataFrame()
name_index = 0
for c in curves_L:
name = curves_name[name_index]
np.vstack(c)
try:
num_col = c.shape[1]
col_name = [name] * num_col
df = pd.DataFrame(data=c, columns=col_name)
name_index = name_index + 1
df = df_column_uniquify(df)
curve_df = pd.concat([curve_df, df], axis=1)
except:
num_col = 0
df = pd.DataFrame(data=c, columns=[name])
name_index = name_index + 1
curve_df = pd.concat([curve_df, df], axis=1)
continue
curve_df.head()
## If we have a simpler dlis file with a single logical file and single frame and with single data values in each channel.
with dlisio.dlis.load(filepath) as file:
logical_count = 0
for d in file:
frame_count = 0
for fram in d.frames:
            if frame_count == 0 and logical_count == 0:
curves = fram.curves()
curve_df = pd.DataFrame(curves, index=curves[fram.index])
curve_df.head()
```
### Then we can set the index and start making some plots.
```
curve_df = df_column_uniquify(curve_df)
curve_df['DEPTH_Calc_ft'] = curve_df.loc[:,'TDEP'] * 0.0083333 #0.1 inch/12 inches per foot
curve_df['DEPTH_ft'] = curve_df['DEPTH_Calc_ft']
curve_df = curve_df.set_index("DEPTH_Calc_ft")
curve_df.index.names = [None]
curve_df = curve_df.replace(-999.25,np.nan)
min_val = curve_df['DEPTH_ft'].min()
max_val = curve_df['DEPTH_ft'].max()
curve_list = list(curve_df.columns)
curve_list.remove('DEPTH_ft')
curve_df.head()
def curve_plot(log, df, depthname):
aplot = df.hvplot(x=depthname, y=log, invert=True, flip_yaxis=True, shared_axes=True,
height=600, width=300).opts(fontsize={'labels': 16,'xticks': 14, 'yticks': 14})
return aplot;
plotlist = [curve_plot(x, df=curve_df, depthname='DEPTH_ft') for x in curve_list]
well_section = hv.Layout(plotlist).cols(len(curve_list))
well_section
```
# Hopefully that is enough code to get you started working with DLISIO. There is much more functionality, which can be accessed with help(dlisio) or in the documentation on Read the Docs.
| github_jupyter |
```
# Import modules
import numpy as np
import pandas as pd
import tensorflow as tf
import matplotlib.pyplot as plt
import math
from sklearn.model_selection import train_test_split
import sklearn.metrics as metrics
#keras
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import BatchNormalization, Dense, Dropout, Flatten
from tensorflow.keras.layers import Conv2D, MaxPooling2D, AveragePooling2D
from tensorflow.keras.callbacks import LearningRateScheduler, EarlyStopping
from tensorflow.keras.metrics import top_k_categorical_accuracy
def reduce_mem_usage(df):
""" iterate through all the columns of a dataframe and modify the data type
to reduce memory usage.
"""
start_mem = df.memory_usage().sum() / 1024**2
print('Memory usage of dataframe is {:.2f} MB'.format(start_mem))
for col in df.columns:
col_type = df[col].dtype
if col_type != object:
c_min = df[col].min()
c_max = df[col].max()
if str(col_type)[:3] == 'int':
if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:
df[col] = df[col].astype(np.int8)
elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max:
df[col] = df[col].astype(np.int16)
elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max:
df[col] = df[col].astype(np.int32)
elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max:
df[col] = df[col].astype(np.int64)
else:
if c_min > np.finfo(np.float16).min and c_max < np.finfo(np.float16).max:
df[col] = df[col].astype(np.float16)
elif c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max:
df[col] = df[col].astype(np.float32)
else:
df[col] = df[col].astype(np.float64)
else:
df[col] = df[col].astype('category')
end_mem = df.memory_usage().sum() / 1024**2
print('Memory usage after optimization is: {:.2f} MB'.format(end_mem))
print('Decreased by {:.1f}%'.format(100 * (start_mem - end_mem) / start_mem))
return df
def import_data(file):
"""create a dataframe and optimize its memory usage"""
df = pd.read_csv(file, parse_dates=True, keep_date_col=True)
df = reduce_mem_usage(df)
return df
train = import_data(r'../input/emnist/emnist-letters-train.csv')
test = import_data(r'../input/emnist/emnist-letters-test.csv')
print("Train: %s, Test: %s" %(train.shape, test.shape))
# iam_1 = import_data('../input/iam-edited/iam_1_edit.csv')
# iam_2 = import_data('../input/iam-edited/iam_2_edit.csv')
# iam_3 = import_data('../input/iam-edited/iam_3_edit.csv')
# iam_4 = import_data('../input/iam-edited/iam_11_edit.csv')
# iam = pd.concat([iam_1,iam_2,iam_3,iam_4],axis=0)
# iam.columns = train.columns.values
# iam_35_labels = list(iam['35'].values)
# iam_labels = []
# for i in iam['35'].values:
# if i < 58:
# i -= 48
# iam_labels.append(i)
# elif 58 < i < 91:
# i -= 55
# iam_labels.append(i)
# elif 91 < i:
# i -= 61
# iam_labels.append(i)
# iam['35'].replace(dict(zip(iam_35_labels,iam_labels)),inplace=True)
# iam
mapp = pd.read_csv(
r'../input/emnist/emnist-letters-mapping.txt',
delimiter=' ',
index_col=0,
header=None,
squeeze=True
)
# train_half = pd.DataFrame(columns=list(train.columns.values))
# for label in mapp.values:
# train_label = train[train['35']==(label-48)]
# train_label = train_label.iloc[::2]
# train_half = pd.concat([train_half,train_label],axis=0)
# train_x_half = train_half.iloc[:,1:] # Get the images
# train_y_half = train_half.iloc[:,0] # Get the label
# del train_half
# train_x_half = np.asarray(train_x_half)
# train_x_half = np.apply_along_axis(rotate, 1, train_x_half)
# print ("train_x:",train_x_half.shape)
# iam_x = iam.iloc[:,1:] # Get the images
# iam_y = iam.iloc[:,0] # Get the label
# del iam
# iam_x = np.asarray(iam_x)
# iam_x = np.apply_along_axis(rotate, 1, iam_x)
# print ("iam_x:",iam_x.shape)
# train_x = np.concatenate((train_x_half,iam_x),axis=0)
# print(train_x.shape)
# train_y = np.concatenate((train_y_half,iam_y),axis=0)
# print(train_y.shape)
# del train_x_half
# del train_y_half
# del iam_x
# del iam_y
# train_new = pd.concat([train_half,iam],0)
# train_new.shape
# Constants
HEIGHT = 28
WIDTH = 28
# del train_half
# del iam
# Split x and y
train_x = train.iloc[:,1:] # Get the images
train_y = train.iloc[:,0] # Get the label
del train # free up some memory
test_x = test.iloc[:,1:]
test_y = test.iloc[:,0]
del test
# Reshape and rotate EMNIST images
def rotate(image):
image = image.reshape(HEIGHT, WIDTH)
image = np.fliplr(image)
image = np.rot90(image)
return image
# Flip and rotate image
train_x = np.asarray(train_x)
train_x = np.apply_along_axis(rotate, 1, train_x)
print ("train_x:",train_x.shape)
test_x = np.asarray(test_x)
test_x = np.apply_along_axis(rotate, 1, test_x)
print ("test_x:",test_x.shape)
# Normalize
train_x = train_x / 255.0
test_x = test_x / 255.0
print(type(train_x[0,0,0]))
print(type(test_x[0,0,0]))
# Plot image
for i in range(100,109):
    plt.subplot(330 + (i - 99))  # i runs from 100 to 108, so this fills the 3x3 grid positions 1-9
plt.subplots_adjust(hspace=0.5, top=1)
plt.imshow(train_x[i], cmap=plt.get_cmap('gray'))
plt.title(chr(mapp.iloc[train_y[i]-1,0]))
# Number of classes
num_classes = train_y.nunique() # .nunique() returns the number of unique objects
print(num_classes)
# One hot encoding
train_y = to_categorical(train_y-1, num_classes)
test_y = to_categorical(test_y-1, num_classes)
print("train_y: ", train_y.shape)
print("test_y: ", test_y.shape)
# partition to train and val
train_x, val_x, train_y, val_y = train_test_split(train_x,
train_y,
test_size=0.10,
random_state=7)
print(train_x.shape, val_x.shape, train_y.shape, val_y.shape)
# Reshape
train_x = train_x.reshape(-1, HEIGHT, WIDTH, 1)
test_x = test_x.reshape(-1, HEIGHT, WIDTH, 1)
val_x = val_x.reshape(-1, HEIGHT, WIDTH, 1)
# Create more images via data augmentation
datagen = ImageDataGenerator(
rotation_range = 10,
zoom_range = 0.10,
width_shift_range=0.1,
height_shift_range=0.1
)
train_gen = datagen.flow(train_x, train_y, batch_size=64)
val_gen = datagen.flow(val_x, val_y, batch_size=64)
# Building model
# ((Si - Fi + 2P)/S) + 1
model = Sequential()
model.add(Conv2D(32, kernel_size=3, activation='relu', input_shape=(HEIGHT, WIDTH, 1)))
model.add(BatchNormalization())
model.add(Conv2D(32, kernel_size=3,activation='relu'))
model.add(BatchNormalization())
model.add(Conv2D(32, kernel_size=5, strides=2, padding='same', activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.4))
model.add(Conv2D(64, kernel_size=3, activation='relu'))
model.add(BatchNormalization())
model.add(Conv2D(64, kernel_size=3, activation='relu'))
model.add(BatchNormalization())
model.add(Conv2D(64, kernel_size=5, strides=2, padding='same', activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.4))
model.add(Conv2D(128, kernel_size=4, activation='relu'))
model.add(BatchNormalization())
model.add(Flatten())
model.add(Dropout(0.4))
model.add(Dense(units=num_classes, activation='softmax'))
input_shape = (None, HEIGHT, WIDTH, 1)
model.build(input_shape)
model.summary()
my_callbacks = [
# Decrease learning rate
LearningRateScheduler(lambda x: 1e-3 * 0.95 ** x),
    # Training will stop if there is no improvement in val_accuracy after 3 epochs
    EarlyStopping(monitor="val_accuracy",
patience=3,
mode='max',
restore_best_weights=True)
]
# def top_3_accuracy(y_true, y_pred):
# return top_k_categorical_accuracy(y_true, y_pred, k=3)
# TRAIN NETWORKS
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# history = model.fit(train_x, train_y,
# epochs=100,
# verbose=1, validation_data=(val_x, val_y),
# callbacks=my_callbacks)
# With datagen
history = model.fit_generator(train_gen, steps_per_epoch=train_x.shape[0]//64, epochs=100,
validation_data=val_gen, validation_steps=val_x.shape[0]//64, callbacks=my_callbacks)
# plot accuracy and loss
def plotacc(epochs, acc, val_acc):
# Plot training & validation accuracy values
plt.plot(epochs, acc, 'b')
plt.plot(epochs, val_acc, 'r')
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Val'], loc='upper left')
plt.show()
def plotloss(epochs, acc, val_acc):
# Plot training & validation accuracy values
plt.plot(epochs, acc, 'b')
plt.plot(epochs, val_acc, 'r')
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Val'], loc='upper left')
plt.show()
#%%
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1,len(acc)+1)
# Accuracy curve
plotgraph(epochs, acc, val_acc)
# loss curve
plotloss(epochs, loss, val_loss)
# del train_x
# del train_y
score = model.evaluate(test_x, test_y, verbose=0)
print("Test loss:", score[0])
print("Test accuracy:", score[1])
model.save("emnist_model_letters_aug.h5")
model.save_weights("emnist_model_weights_letters_aug.h5")
y_pred = model.predict(test_x)
y_pred = (y_pred > 0.5)
cm = metrics.confusion_matrix(test_y.argmax(axis=1), y_pred.argmax(axis=1))
print(cm)
```
| github_jupyter |
# Example 5: Quantum-to-quantum transfer learning.
This is an example of a continuous variable (CV) quantum network for state classification, developed according to the *quantum-to-quantum transfer learning* scheme presented in [1].
## Introduction
In this proof-of-principle demonstration we consider two distinct toy datasets of Gaussian and non-Gaussian states. Such datasets can be generated according to the following simple prescriptions:
**Dataset A**:
- Class 0 (Gaussian): random Gaussian layer applied to the vacuum.
- Class 1 (non-Gaussian): random non-Gaussian Layer applied to the vacuum.
**Dataset B**:
- Class 0 (Gaussian): random Gaussian layer applied to a coherent state with amplitude $\alpha=1$.
- Class 1 (non-Gaussian): random Gaussian layer applied to a single photon Fock state $|1\rangle$.
**Variational Circuit A**:
Our starting point is a single-mode variational circuit [2] (a non-Gaussian layer), pre-trained on _Dataset A_. We assume that after the circuit is applied, the output mode is measured with an _on/off_ detector. By averaging over many shots, one can estimate the vacuum probability:
$$
p_0 = | \langle \psi_{\rm out} |0 \rangle|^2.
$$
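As a side remark, this shot-averaging estimate simply counts the fraction of "no click" events of the detector. A hedged sketch with synthetic detector data (separate from the exact simulation used later in this notebook):
```
import numpy as np

# synthetic on/off detector record: 0 = no click (vacuum outcome), 1 = click
true_p0 = 0.7
clicks = np.random.binomial(1, 1.0 - true_p0, size=10000)
p0_estimate = np.mean(clicks == 0)   # frequency of no-click shots approximates p_0
print(p0_estimate)
```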
We use _Dataset A_ and train the circuit to rotate Gaussian states towards the vacuum while pushing non-Gaussian states far away from the vacuum. For the final classification we use the simple decision rule:
$$
p_0 \ge 0.5 \longrightarrow {\rm Class=0.} \\
p_0 < 0.5 \longrightarrow {\rm Class=1.}
$$
**Variational Circuit B**:
Once _Circuit A_ has been optimized, we can use it as a pre-trained block
applicable also to the different _Dataset B_. In other words, we implement a _quantum-to-quantum_ transfer learning model:
_Circuit B_ = _Circuit A_ (pre-trained) followed by a sequence of _variational layers_ (to be trained).
Also in this case, after the application of _Circuit B_, we assume that the single mode is measured with an _on/off_ detector, and we apply a similar classification rule:
$$
p_0 \ge 0.5 \longrightarrow {\rm Class=1.} \\
p_0 < 0.5 \longrightarrow {\rm Class=0.}
$$
The motivation for this transfer learning approach is that, even if _Circuit A_ is optimized on a different dataset, it can still act as a good pre-processing block also for _Dataset B_. Indeed, as we are going to show, the application of _Circuit A_ can significantly improve the training efficiency of _Circuit B_.
## General setup
The main imported modules are: the `tensorflow` machine learning framework, the quantum CV
software `strawberryfields` [3] and the python plotting library `matplotlib`. All modules should be correctly installed in the system before running this notebook.
```
# Plotting
%matplotlib inline
import matplotlib.pyplot as plt
# TensorFlow
import tensorflow as tf
# Strawberryfields (simulation of CV quantum circuits)
import strawberryfields as sf
from strawberryfields.ops import Dgate, Kgate, Sgate, Rgate, Vgate, Fock, Ket
# Other modules
import numpy as np
import time
# System variables
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2' # avoid warning messages
os.environ['OMP_NUM_THREADS'] = '1' # set number of threads.
os.environ['CUDA_VISIBLE_DEVICES'] = '1' # select the GPU unit.
# Path with pre-trained parameters
weights_path = 'results/weights/'
```
Setting of the main parameters of the network model and of the training process.<br>
```
# Hilbert space cutoff
cutoff = 15
# Normalization cutoff (must be equal or smaller than cutoff dimension)
target_cutoff = 15
# Normalization weight
norm_weight = 0
# Batch size
batch_size = 8
# Number of batches (i.e. number training iterations)
num_batches = 500
# Number of state generation layers
g_depth = 1
# Number of pre-trained layers (for transfer learning)
pre_depth = 1
# Number of state classification layers
q_depth = 3
# Standard deviation of random state generation parameters
rot_sd = np.math.pi * 2
dis_sd = 0
sq_sd = 0.5
non_lin_sd = 0.5 # this is used as fixed non-linear constant.
# Standard deviation of initial trainable weights
active_sd = 0.001
passive_sd = 0.001
# Magnitude limit for trainable active parameters
clip = 1
# Learning rate
lr = 0.01
# Random seeds
tf.set_random_seed(0)
rng_data = np.random.RandomState(1)
# Reset TF graph
tf.reset_default_graph()
```
## Variational circuits for state generation and classification
### Input states: _Dataset B_
The dataset is introduced by defining the corresponding random variational circuit that generates input Gaussian and non-Gaussian states.
```
# Placeholders for class labels
batch_labels = tf.placeholder(dtype=tf.int64, shape = [batch_size])
batch_labels_fl = tf.to_float(batch_labels)
# State generation parameters
# Squeezing gate
sq_gen = tf.placeholder(dtype = tf.float32, shape = [batch_size,g_depth])
# Rotation gates
r1_gen = tf.placeholder(dtype = tf.float32, shape = [batch_size,g_depth])
r2_gen = tf.placeholder(dtype = tf.float32, shape = [batch_size,g_depth])
r3_gen = tf.placeholder(dtype = tf.float32, shape = [batch_size,g_depth])
# Explicit definitions of the ket tensors of |0> and |1>
np_ket0, np_ket1 = np.zeros((2, batch_size, cutoff))
np_ket0[:,0] = 1.0
np_ket1[:,1] = 1.0
ket0 = tf.constant(np_ket0, dtype = tf.float32, shape = [batch_size, cutoff])
ket1 = tf.constant(np_ket1, dtype = tf.float32, shape = [batch_size, cutoff])
# Ket of the quantum states associated to the label: i.e. |batch_labels>
ket_init = ket0 * (1.0 - tf.expand_dims(batch_labels_fl, 1)) + ket1 * tf.expand_dims(batch_labels_fl, 1)
# State generation layer
def layer_gen(i, qmode):
# If label is 0 (Gaussian) prepare a coherent state with alpha=1 otherwise prepare fock |1>
Ket(ket_init) | qmode
Dgate((1.0 - batch_labels_fl) * 1.0, 0) | qmode
# Random Gaussian operation (without displacement)
Rgate(r1_gen[:, i]) | qmode
Sgate(sq_gen[:, i], 0) | qmode
Rgate(r2_gen[:, i]) | qmode
return qmode
```
### Loading of pre-trained block (_Circuit A_)
We assume that _Circuit A_ has already been pre-trained (e.g. by running a dedicated Python script) and that the associated optimal weights have been saved to a NumPy file. Here we first load these parameters and then define _Circuit A_ as a constant pre-processing block.
```
# Loading of pre-trained weights
trained_params_npy = np.load('pre_trained/circuit_A.npy')
if trained_params_npy.shape[1] < pre_depth:
print("Error: circuit q_depth > trained q_depth.")
raise SystemExit(0)
# Convert numpy arrays to TF tensors
trained_params = tf.constant(trained_params_npy)
sq_pre = trained_params[0]
d_pre = trained_params[1]
r1_pre = trained_params[2]
r2_pre = trained_params[3]
r3_pre = trained_params[4]
kappa_pre = trained_params[5]
# Definition of the pre-trained Circuit A (single layer)
def layer_pre(i, qmode):
# Rotation gate
Rgate(r1_pre[i]) | qmode
# Squeezing gate
    Sgate(tf.clip_by_value(sq_pre[i], -clip, clip), 0) | qmode
# Rotation gate
Rgate(r2_pre[i]) | qmode
# Displacement gate
Dgate(tf.clip_by_value(d_pre[i], -clip, clip) , 0) | qmode
# Rotation gate
Rgate(r3_pre[i]) | qmode
# Cubic gate
Vgate(tf.clip_by_value(kappa_pre[i], -clip, clip) ) | qmode
return qmode
```
### Addition of trainable layers (_Circuit B_)
As discussed in the introduction, _Circuit B_ is obtained by adding some additional layers that we are going to train on _Dataset B_.
```
# Trainable variables
with tf.name_scope('variables'):
# Squeeze gate
sq_var = tf.Variable(tf.random_normal(shape=[q_depth], stddev=active_sd))
# Displacement gate
d_var = tf.Variable(tf.random_normal(shape=[q_depth], stddev=active_sd))
# Rotation gates
r1_var = tf.Variable(tf.random_normal(shape=[q_depth], stddev=passive_sd))
r2_var = tf.Variable(tf.random_normal(shape=[q_depth], stddev=passive_sd))
r3_var = tf.Variable(tf.random_normal(shape=[q_depth], stddev=passive_sd))
# Kerr gate
kappa_var = tf.Variable(tf.random_normal(shape=[q_depth], stddev=active_sd))
# 0-depth parameter (just to generate a gradient)
x_var = tf.Variable(0.0)
parameters = [sq_var, d_var, r1_var, r2_var, r3_var, kappa_var]
# Definition of a single trainable variational layer
def layer_var(i, qmode):
Rgate(r1_var[i]) | qmode
Sgate(tf.clip_by_value(sq_var[i], -clip, clip), 0) | qmode
Rgate(r2_var[i]) | qmode
Dgate(tf.clip_by_value(d_var[i], -clip, clip) , 0) | qmode
Rgate(r3_var[i]) | qmode
Vgate(tf.clip_by_value(kappa_var[i], -clip, clip) ) | qmode
return qmode
```
## Symbolic evaluation of the full network
We first instantiate a _StrawberryFields_ quantum simulator, tailored for simulating a single-mode quantum optical system. Then we symbolically evaluate a batch of output states.
```
prog = sf.Program(1)
eng = sf.Engine('tf', backend_options={'cutoff_dim': cutoff, 'batch_size': batch_size})
# Circuit B
with prog.context as q:
# State generation network
for k in range(g_depth):
layer_gen(k, q[0])
# Pre-trained network (Circuit A)
for k in range(pre_depth):
layer_pre(k, q[0])
# State classification network
for k in range(q_depth):
layer_var(k, q[0])
# Special case q_depth==0
if q_depth == 0:
Dgate(0.001, x_var ) | q[0] # almost identity operation just to generate a gradient.
# Symbolic computation of the output state
results = eng.run(prog, run_options={"eval": False})
out_state = results.state
# Batch state norms
out_norm = tf.to_float(out_state.trace())
# Batch mean energies
mean_n = out_state.mean_photon(0)
```
## Loss function, accuracy and optimizer.
As usual in machine learning, we need to define a loss function that we are going to minimize during the training phase.
As discussed in the introduction, we assume that only the vacuum state probability `p_0` is measured. Ideally, `p_0` should be large for non-Gaussian states (_label 1_), while it should be small for Gaussian states (_label 0_). The circuit can be trained for this task by minimizing the _cross entropy_ loss function defined in the next cell.
Moreover, if `norm_weight` is different from zero, a regularization term is also added to the full cost function in order to suppress quantum amplitudes beyond the target Hilbert space dimension `target_cutoff`.
```
# Batch vacuum probabilities
p0 = out_state.fock_prob([0])
# Complementary probabilities
q0 = 1.0 - p0
# Cross entropy loss function
eps = 0.0000001
main_loss = tf.reduce_mean(-batch_labels_fl * tf.log(p0 + eps) - (1.0 - batch_labels_fl) * tf.log(q0 + eps))
# Decision function
predictions = tf.sign(p0 - 0.5) * 0.5 + 0.5
# Accuracy between predictions and labels
accuracy = tf.reduce_mean((predictions + batch_labels_fl - 1.0) ** 2)
# Norm loss. This is monitored but not minimized.
norm_loss = tf.reduce_mean((out_norm - 1.0) ** 2)
# Cutoff loss regularization. This is monitored and minimized if norm_weight is nonzero.
c_in = out_state.all_fock_probs()
cut_probs = c_in[:, :target_cutoff]
cut_norms = tf.reduce_sum(cut_probs, axis=1)
cutoff_loss = tf.reduce_mean((cut_norms - 1.0) ** 2 )
# Full regularized loss function
full_loss = main_loss + norm_weight * cutoff_loss
# Optimization algorithm
optim = tf.train.AdamOptimizer(learning_rate=lr)
training = optim.minimize(full_loss)
```
## Training and testing
Up to now we have only defined the symbolic graph of the quantum network without numerically evaluating it. Now, after initializing a _TensorFlow_ session, we can finally run the actual training and testing phases.
```
# Function generating a dictionary of random parameters for a batch of states.
def random_dict():
param_dict = { # Labels (0 = Gaussian, 1 = non-Gaussian)
batch_labels: rng_data.randint(2, size=batch_size),
# Squeezing and rotation parameters
sq_gen: rng_data.uniform(low=-sq_sd, high=sq_sd, size=[batch_size, g_depth]),
r1_gen: rng_data.uniform(low=-rot_sd, high=rot_sd, size=[batch_size, g_depth]),
r2_gen: rng_data.uniform(low=-rot_sd, high=rot_sd, size=[batch_size, g_depth]),
r3_gen: rng_data.uniform(low=-rot_sd, high=rot_sd, size=[batch_size, g_depth]),
}
return param_dict
# TensorFlow session
with tf.Session() as session:
session.run(tf.global_variables_initializer())
train_loss = 0.0
train_loss_sum = 0.0
train_acc = 0.0
train_acc_sum = 0.0
test_loss = 0.0
test_loss_sum = 0.0
test_acc = 0.0
test_acc_sum = 0.0
# =========================================================
# Training Phase
# =========================================================
if q_depth > 0:
for k in range(num_batches):
rep_time = time.time()
# Training step
[_training,
_full_loss,
_accuracy,
_norm_loss] = session.run([ training,
full_loss,
accuracy,
norm_loss], feed_dict=random_dict())
train_loss_sum += _full_loss
train_acc_sum += _accuracy
train_loss = train_loss_sum / (k + 1)
train_acc = train_acc_sum / (k + 1)
# Training log
if ((k + 1) % 100) == 0:
print('Train batch: {:d}, Running loss: {:.4f}, Running acc {:.4f}, Norm loss {:.4f}, Batch time {:.4f}'
.format(k + 1, train_loss, train_acc, _norm_loss, time.time() - rep_time))
# =========================================================
# Testing Phase
# =========================================================
num_test_batches = min(num_batches, 1000)
for i in range(num_test_batches):
rep_time = time.time()
# Evaluation step
[_full_loss,
_accuracy,
_norm_loss,
_cutoff_loss,
_mean_n,
_parameters] = session.run([full_loss,
accuracy,
norm_loss,
cutoff_loss,
mean_n,
parameters], feed_dict=random_dict())
test_loss_sum += _full_loss
test_acc_sum += _accuracy
test_loss = test_loss_sum / (i + 1)
test_acc = test_acc_sum / (i + 1)
# Testing log
if ((i + 1) % 100) == 0:
print('Test batch: {:d}, Running loss: {:.4f}, Running acc {:.4f}, Norm loss {:.4f}, Batch time {:.4f}'
.format(i + 1, test_loss, test_acc, _norm_loss, time.time() - rep_time))
# Compute mean photon number of the last batch of states
mean_fock = np.mean(_mean_n)
print('Training and testing phases completed.')
print('RESULTS:')
print('{:>11s}{:>11s}{:>11s}{:>11s}{:>11s}{:>11s}'.format('train_loss', 'train_acc', 'test_loss', 'test_acc', 'norm_loss', 'mean_n'))
print('{:11f}{:11f}{:11f}{:11f}{:11f}{:11f}'.format(train_loss, train_acc, test_loss, test_acc, _norm_loss, mean_fock))
```
## References
[1] Andrea Mari, Thomas R. Bromley, Josh Izaac, Maria Schuld, and Nathan Killoran. _Transfer learning in hybrid classical-quantum neural networks_. [arXiv:1912.08278](https://arxiv.org/abs/1912.08278), (2019).
[2] Nathan Killoran, Thomas R. Bromley, Juan Miguel Arrazola, Maria Schuld, Nicolás Quesada, and Seth Lloyd. _Continuous-variable quantum neural networks_. [arXiv:1806.06871](https://arxiv.org/abs/1806.06871), (2018).
[3] Nathan Killoran, Josh Izaac, Nicolás Quesada, Ville Bergholm, Matthew Amy, and Christian Weedbrook. _Strawberry Fields: A Software Platform for Photonic Quantum Computing_. [Quantum, 3, 129 (2019)](https://doi.org/10.22331/q-2019-03-11-129).
| github_jupyter |
## Borehole lithology logs viewer
Interactive view of borehole data used for [exploratory lithology analysis](https://github.com/csiro-hydrogeology/pyela)
Powered by [Voila](https://github.com/QuantStack/voila), [ipysheet](https://github.com/QuantStack/ipysheet) and [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet)
### Data
The sample borehole data around Canberra, Australia is derived from the Australian Bureau of Meteorology [National Groundwater Information System](http://www.bom.gov.au/water/groundwater/ngis/index.shtml). You can check the licensing for these data; the short version is that use for demo and learning purposes is fine.
```
import os
import sys
import pandas as pd
import numpy as np
# from bqplot import Axis, Figure, Lines, LinearScale
# from bqplot.interacts import IndexSelector
# from ipyleaflet import basemaps, FullScreenControl, LayerGroup, Map, MeasureControl, Polyline, Marker, MarkerCluster, CircleMarker, WidgetControl
# from ipywidgets import Button, HTML, HBox, VBox, Checkbox, FileUpload, Label, Output, IntSlider, Layout, Image, link
from ipywidgets import Output, HTML
from ipyleaflet import Map, Marker, MarkerCluster, basemaps
import ipywidgets as widgets
import ipysheet
example_folder = "./examples"
# classified_logs_filename = os.path.join(cbr_datadir_out,'classified_logs.pkl')
# with open(classified_logs_filename, 'rb') as handle:
# df = pickle.load(handle)
# geoloc_filename = os.path.join(cbr_datadir_out,'geoloc.pkl')
# with open(geoloc_filename, 'rb') as handle:
# geoloc = pickle.load(handle)
df = pd.read_csv(os.path.join(example_folder,'classified_logs.csv'))
geoloc = pd.read_csv(os.path.join(example_folder,'geoloc.csv'))
DEPTH_FROM_COL = 'FromDepth'
DEPTH_TO_COL = 'ToDepth'
TOP_ELEV_COL = 'TopElev'
BOTTOM_ELEV_COL = 'BottomElev'
LITHO_DESC_COL = 'Description'
HYDRO_CODE_COL = 'HydroCode'
HYDRO_ID_COL = 'HydroID'
BORE_ID_COL = 'BoreID'
# if we want to keep boreholes that have more than one row
x = df[HYDRO_ID_COL].values
unique, counts = np.unique(x, return_counts=True)
multiple_counts = unique[counts > 1]
# len(multiple_counts), len(unique)
keep = set(df[HYDRO_ID_COL].values)
keep = set(multiple_counts)
s = geoloc[HYDRO_ID_COL]
geoloc = geoloc[s.isin(keep)]
class GlobalThing:
def __init__(self, bore_data, displayed_colnames = None):
self.marker_info = dict()
self.bore_data = bore_data
if displayed_colnames is None:
displayed_colnames = [BORE_ID_COL, DEPTH_FROM_COL, DEPTH_TO_COL, LITHO_DESC_COL] # 'Lithology_1', 'MajorLithCode']]
self.displayed_colnames = displayed_colnames
def add_marker_info(self, lat, lon, code):
self.marker_info[(lat, lon)] = code
def get_code(self, lat, lon):
return self.marker_info[(lat, lon)]
def data_for_hydroid(self, ident):
df_sub = self.bore_data.loc[df[HYDRO_ID_COL] == ident]
return df_sub[self.displayed_colnames]
def register_geolocations(self, geoloc):
for index, row in geoloc.iterrows():
self.add_marker_info(row.Latitude, row.Longitude, row.HydroID)
globalthing = GlobalThing(df, displayed_colnames = [BORE_ID_COL, DEPTH_FROM_COL, DEPTH_TO_COL, LITHO_DESC_COL, 'Lithology_1'])
globalthing.register_geolocations(geoloc)
def plot_map(geoloc, click_handler):
"""
Plot the markers for each borehole, and register a custom click_handler
"""
mean_lat = geoloc.Latitude.mean()
mean_lng = geoloc.Longitude.mean()
# create the map
m = Map(center=(mean_lat, mean_lng), zoom=12, basemap=basemaps.Stamen.Terrain)
m.layout.height = '600px'
# show trace
markers = []
for index, row in geoloc.iterrows():
message = HTML()
message.value = str(row.HydroID)
message.placeholder = ""
message.description = "HydroID"
marker = Marker(location=(row.Latitude, row.Longitude))
marker.on_click(click_handler)
marker.popup = message
markers.append(marker)
marker_cluster = MarkerCluster(
markers=markers
)
# not sure whether we could register once instead of each marker:
# marker_cluster.on_click(click_handler)
m.add_layer(marker_cluster);
# m.add_control(FullScreenControl())
return m
# If printing a data frame straight to an output widget
def raw_print(out, ident):
bore_data = globalthing.data_for_hydroid(ident)
out.clear_output()
with out:
print(ident)
print(bore_data)
def click_handler_rawprint(**kwargs):
blah = dict(**kwargs)
xy = blah['coordinates']
ident = globalthing.get_code(xy[0], xy[1])
raw_print(out, ident)
# to display using an ipysheet
def mk_sheet(d):
return ipysheet.pandas_loader.from_dataframe(d)
def update_display_df(ident):
bore_data = globalthing.data_for_hydroid(ident)
out.clear_output()
with out:
display(mk_sheet(bore_data))
def click_handler_ipysheet(**kwargs):
blah = dict(**kwargs)
xy = blah['coordinates']
ident = globalthing.get_code(xy[0], xy[1])
    update_display_df(ident)
out = widgets.Output(layout={'border': '1px solid black'})
```
Note: it may take a minute or two for the display to first appear....
Select a marker:
```
plot_map(geoloc, click_handler_ipysheet)
# plot_map(geoloc, click_handler_rawprint)
```
Descriptive lithology:
```
out
## Appendix A: qgrid; at best this ended up with "Model not available". It may not work yet with JupyterLab 1.0.x
# import qgrid
# d = data_for_hydroid(10062775)
# d
# import ipywidgets as widgets
# def build_qgrid():
# qgrid.set_grid_option('maxVisibleRows', 10)
# col_opts = {
# 'editable': False,
# }
# qgrid_widget = qgrid.show_grid(d, show_toolbar=False, column_options=col_opts)
# qgrid_widget.layout = widgets.Layout(width='920px')
# return qgrid_widget, qgrid
# qgrid_widget, qgrid = build_qgrid()
# display(qgrid_widget)
# pitch_app = widgets.VBox(qgrid_widget)
# display(pitch_app)
# def click_handler(**kwargs):
# blah = dict(**kwargs)
# xy = blah['coordinates']
# ident = globalthing.get_code(xy[0], xy[1])
# bore_data = data_for_hydroid(ident)
# grid.df = bore_data
## Appendix B: using striplog
# from striplog import Striplog, Interval, Component, Legend, Decor
# import matplotlib as mpl
# lithologies = ['shale', 'clay','granite','soil','sand', 'porphyry','siltstone','gravel', '']
# lithology_color_names = ['lightslategrey', 'olive', 'dimgray', 'chocolate', 'gold', 'tomato', 'teal', 'lavender', 'black']
# lithology_colors = [mpl.colors.cnames[clr] for clr in lithology_color_names]
# clrs = dict(zip(lithologies, lithology_colors))
# def mk_decor(lithology, component):
# dcor = {'color': clrs[lithology],
# 'component': component,
# 'width': 2}
# return Decor(dcor)
# def create_striplog_itvs(d):
# itvs = []
# dcrs = []
# for index, row in d.iterrows():
# litho = row.Lithology_1
# c = Component({'description':row.Description,'lithology': litho})
# decor = mk_decor(litho, c)
# itvs.append(Interval(row.FromDepth, row.ToDepth, components=[c]) )
# dcrs.append(decor)
# return itvs, dcrs
# def click_handler(**kwargs):
# blah = dict(**kwargs)
# xy = blah['coordinates']
# ident = globalthing.get_code(xy[0], xy[1])
# bore_data = data_for_hydroid(ident)
# itvs, dcrs = create_striplog_itvs(bore_data)
# s = Striplog(itvs)
# with out:
# print(ident)
# print(s.plot(legend = Legend(dcrs)))
# def plot_striplog(bore_data, ax=None):
# itvs, dcrs = create_striplog_itvs(bore_data)
# s = Striplog(itvs)
# s.plot(legend = Legend(dcrs), ax=ax)
# def plot_evaluation_metrics(bore_data):
# fig, ax = plt.subplots(figsize=(12, 3))
# # actual plotting
# plot_striplog(bore_data, ax=ax)
# # finalize
# fig.suptitle("Evaluation metrics with cutoff\n", va='bottom')
# plt.show()
# plt.close(fig)
# %matplotlib inline
# from ipywidgets import interactive
# import matplotlib.pyplot as plt
# import numpy as np
# def f(m, b):
# plt.figure(2)
# x = np.linspace(-10, 10, num=1000)
# plt.plot(x, m * x + b)
# plt.ylim(-5, 5)
# plt.show()
# interactive_plot = interactive(f, m=(-2.0, 2.0), b=(-3, 3, 0.5))
# output = interactive_plot.children[-1]
# output.layout.height = '350px'
# interactive_plot
# def update_sheet(s, d):
# print("before: %s"%(s.rows))
# s.rows = len(d)
# for i in range(len(d.columns)):
# s.cells[i].value = d[d.columns[i]].values
```
| github_jupyter |
# Classification
This notebook gives an overview of the classification metrics that can be used to evaluate a predictive model's generalization performance. Recall that in a classification setting, the vector `target` is categorical rather than continuous.
We will load the blood transfusion dataset.
```
import pandas as pd
blood_transfusion = pd.read_csv("../datasets/blood_transfusion.csv")
data = blood_transfusion.drop(columns="Class")
target = blood_transfusion["Class"]
```
<div class="admonition note alert alert-info">
<p class="first admonition-title" style="font-weight: bold;">Note</p>
<p class="last">If you want a deeper overview regarding this dataset, you can refer to the
Appendix - Datasets description section at the end of this MOOC.</p>
</div>
Let's start by checking the classes present in the target vector `target`.
```
import matplotlib.pyplot as plt
target.value_counts().plot.barh()
plt.xlabel("Number of samples")
_ = plt.title("Number of samples per classes present\n in the target")
```
We can see that the vector `target` contains two classes corresponding to
whether a subject gave blood. We will use a logistic regression classifier to
predict this outcome.
To focus on the metrics presentation, we will only use a single split instead
of cross-validation.
```
from sklearn.model_selection import train_test_split
data_train, data_test, target_train, target_test = train_test_split(
data, target, shuffle=True, random_state=0, test_size=0.5)
```
We will use a logistic regression classifier as a base model. We will train
the model on the train set, and later use the test set to compute the
different classification metric.
```
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression()
classifier.fit(data_train, target_train)
```
## Classifier predictions
Before we go into details regarding the metrics, we will recall what type
of predictions a classifier can provide.
For this reason, we will create a synthetic sample for a new potential donor:
he/she donated blood twice in the past (1000 c.c. each time). The last time
was 6 months ago, and the first time goes back to 20 months ago.
```
new_donor = [[6, 2, 1000, 20]]
```
We can get the class predicted by the classifier by calling the method
`predict`.
```
classifier.predict(new_donor)
```
With this information, our classifier predicts that this synthetic subject
is more likely to not donate blood again.
However, we cannot check whether the prediction is correct (we do not know
the true target value). That's the purpose of the testing set. First, we
predict whether a subject will give blood with the help of the trained
classifier.
```
target_predicted = classifier.predict(data_test)
target_predicted[:5]
```
## Accuracy as a baseline
Now that we have these predictions, we can compare them with the true target values (sometimes called the ground truth), which we did not use until now.
```
target_test == target_predicted
```
In the comparison above, a `True` value means that the value predicted by our
classifier is identical to the real value, while a `False` means that our
classifier made a mistake. One way of getting an overall rate representing
the generalization performance of our classifier would be to compute how many
times our classifier was right and divide it by the number of samples in our
set.
```
import numpy as np
np.mean(target_test == target_predicted)
```
This measure is called the accuracy. Here, our classifier is 78%
accurate at classifying if a subject will give blood. `scikit-learn` provides
a function that computes this metric in the module `sklearn.metrics`.
```
from sklearn.metrics import accuracy_score
accuracy = accuracy_score(target_test, target_predicted)
print(f"Accuracy: {accuracy:.3f}")
```
`LogisticRegression` also has a method named `score` (part of the standard
scikit-learn API), which computes the accuracy score.
```
classifier.score(data_test, target_test)
```
## Confusion matrix and derived metrics
The comparison that we did above and the accuracy that we calculated did not
take into account the type of error our classifier was making. Accuracy
is an aggregate of the errors made by the classifier. We may be interested
in finer granularity - to know independently what the error is for each of
the two following cases:
- we predicted that a person will give blood but she/he did not;
- we predicted that a person will not give blood but she/he did.
```
from sklearn.metrics import ConfusionMatrixDisplay
_ = ConfusionMatrixDisplay.from_estimator(classifier, data_test, target_test)
```
The in-diagonal numbers are related to predictions that were correct
while off-diagonal numbers are related to incorrect predictions
(misclassifications). We now know the four types of correct and erroneous
predictions:
* the top left corner contains the true positives (TP) and corresponds to people who gave blood and were predicted as such by the classifier;
* the bottom right corner contains the true negatives (TN) and corresponds to people who did not give blood and were predicted as such by the classifier;
* the top right corner contains the false negatives (FN) and corresponds to people who gave blood but were predicted not to have given blood;
* the bottom left corner contains the false positives (FP) and corresponds to people who did not give blood but were predicted to have given blood.
Once we have split this information, we can compute metrics to highlight the
generalization performance of our classifier in a particular setting. For
instance, we could be interested in the fraction of people who really gave
blood when the classifier predicted so or the fraction of people predicted to
have given blood out of the total population that actually did so.
The former metric, known as the precision, is defined as TP / (TP + FP)
and represents how likely the person actually gave blood when the classifier
predicted that they did.
The latter, known as the recall, is defined as TP / (TP + FN) and assesses how well the classifier is able to correctly identify people who did give blood.
We could, similarly to accuracy, manually compute these values,
however scikit-learn provides functions to compute these statistics.
```
from sklearn.metrics import precision_score, recall_score
precision = precision_score(target_test, target_predicted, pos_label="donated")
recall = recall_score(target_test, target_predicted, pos_label="donated")
print(f"Precision score: {precision:.3f}")
print(f"Recall score: {recall:.3f}")
```
These results are in line with what was seen in the confusion matrix. Looking
at the left column, more than half of the "donated" predictions were correct,
leading to a precision above 0.5. However, our classifier mislabeled a lot of
people who gave blood as "not donated", leading to a very low recall of
around 0.1.
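As a sanity check, precision and recall can also be recomputed by hand from the raw confusion-matrix counts. A minimal sketch, treating `'donated'` as the positive class and fixing the label order explicitly, could look like this:
```
from sklearn.metrics import confusion_matrix

# Rows are true labels and columns are predicted labels; `labels` fixes their order
cm = confusion_matrix(target_test, target_predicted,
                      labels=["donated", "not donated"])
tp, fn = cm[0, 0], cm[0, 1]
fp, tn = cm[1, 0], cm[1, 1]
print(f"Precision computed by hand: {tp / (tp + fp):.3f}")
print(f"Recall computed by hand: {tp / (tp + fn):.3f}")
```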
## The issue of class imbalance
At this stage, we could ask ourselves a reasonable question. While the accuracy did not look bad (i.e. 77%), the recall score is relatively low (i.e. 12%). As we mentioned, precision and recall only focus on samples predicted to be positive, while accuracy takes both classes into account. In addition, we did not look at the ratio of classes (labels). We could check this ratio in the training set.
```
target_train.value_counts(normalize=True).plot.barh()
plt.xlabel("Class frequency")
_ = plt.title("Class frequency in the training set")
```
We observe that the positive class, `'donated'`, comprises only 24% of the
samples. The good accuracy of our classifier is then linked to its ability to
correctly predict the negative class `'not donated'` which may or may not be
relevant, depending on the application. We can illustrate the issue using a
dummy classifier as a baseline.
```
from sklearn.dummy import DummyClassifier
dummy_classifier = DummyClassifier(strategy="most_frequent")
dummy_classifier.fit(data_train, target_train)
print(f"Accuracy of the dummy classifier: "
f"{dummy_classifier.score(data_test, target_test):.3f}")
```
With the dummy classifier, which always predicts the negative class `'not donated'`, we obtain an accuracy score of 76%. This means that this classifier, without learning anything from the data `data`, is capable of predicting as accurately as our logistic regression model.
The problem illustrated above is also known as the class imbalance problem.
When the classes are imbalanced, accuracy should not be used. In this case,
one should either use the precision and recall as presented above or the
balanced accuracy score instead of accuracy.
```
from sklearn.metrics import balanced_accuracy_score
balanced_accuracy = balanced_accuracy_score(target_test, target_predicted)
print(f"Balanced accuracy: {balanced_accuracy:.3f}")
```
The balanced accuracy is equivalent to accuracy in the context of balanced
classes. It is defined as the average recall obtained on each class.
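Since the balanced accuracy is the average of the per-class recalls, we can sketch an equivalent computation with `recall_score` and check that it matches the value printed above:
```
# Balanced accuracy recomputed as the mean of the recall obtained on each class
per_class_recalls = [
    recall_score(target_test, target_predicted, pos_label=label)
    for label in ["donated", "not donated"]
]
print(f"Mean per-class recall: {np.mean(per_class_recalls):.3f}")
```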
## Evaluation and different probability thresholds
All statistics that we presented up to now rely on `classifier.predict` which
outputs the most likely label. We haven't made use of the probability
associated with this prediction, which gives the confidence of the
classifier in this prediction. By default, the prediction of a classifier
corresponds to a threshold of 0.5 probability in a binary classification
problem. We can quickly check this relationship with the classifier that
we trained.
```
target_proba_predicted = pd.DataFrame(classifier.predict_proba(data_test),
columns=classifier.classes_)
target_proba_predicted[:5]
target_predicted = classifier.predict(data_test)
target_predicted[:5]
```
Since probabilities sum to 1 we can get the class with the highest
probability without using the threshold 0.5.
```
equivalence_pred_proba = (
target_proba_predicted.idxmax(axis=1).to_numpy() == target_predicted)
np.all(equivalence_pred_proba)
```
The default decision threshold (0.5) might not be the best threshold that
leads to optimal generalization performance of our classifier. In this case, one
can vary the decision threshold, and therefore the underlying prediction, and
compute the same statistics presented earlier. Usually, the two metrics recall and precision are computed and plotted on a graph: each metric is plotted on one graph axis, and each point on the graph corresponds to a specific decision threshold. Let's start by computing the precision-recall curve.
```
from sklearn.metrics import PrecisionRecallDisplay
disp = PrecisionRecallDisplay.from_estimator(
classifier, data_test, target_test, pos_label='donated',
marker="+"
)
_ = disp.ax_.set_title("Precision-recall curve")
```
<div class="admonition tip alert alert-warning">
<p class="first admonition-title" style="font-weight: bold;">Tip</p>
<p class="last">Scikit-learn will return a display containing all plotting element. Notably,
displays will expose a matplotlib axis, named <tt class="docutils literal">ax_</tt>, that can be used to add
new element on the axis.
You can refer to the documentation to have more information regarding the
<a class="reference external" href="https://scikit-learn.org/stable/visualizations.html#visualizations">visualizations in scikit-learn</a></p>
</div>
On this curve, each blue cross corresponds to a level of probability which we
used as a decision threshold. We can see that, by varying this decision
threshold, we get different precision vs. recall values.
A perfect classifier would have a precision of 1 for all recall values. A
metric characterizing the curve is linked to the area under the curve (AUC)
and is named average precision (AP). With an ideal classifier, the average
precision would be 1.
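To make the notion of a decision threshold concrete, here is a small sketch (using the probabilities computed earlier) that turns the predicted probability of the `'donated'` class into "hard" predictions for an arbitrarily chosen threshold of 0.3 instead of the default 0.5:
```
threshold = 0.3  # arbitrary value, chosen only for illustration
proba_donated = target_proba_predicted["donated"]
custom_predictions = np.where(proba_donated >= threshold, "donated", "not donated")
print(f"Precision at threshold {threshold}: "
      f"{precision_score(target_test, custom_predictions, pos_label='donated'):.3f}")
print(f"Recall at threshold {threshold}: "
      f"{recall_score(target_test, custom_predictions, pos_label='donated'):.3f}")
```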
The precision and recall metrics focus on the positive class; however, one might be interested in the compromise between accurately discriminating the positive class and accurately discriminating the negative classes. The
statistics used for this are sensitivity and specificity. Sensitivity is just
another name for recall. However, specificity measures the proportion of
correctly classified samples in the negative class defined as: TN / (TN +
FP). Similar to the precision-recall curve, sensitivity and specificity are
generally plotted as a curve called the receiver operating characteristic
(ROC) curve. Below is such a curve:
```
from sklearn.metrics import RocCurveDisplay
disp = RocCurveDisplay.from_estimator(
classifier, data_test, target_test, pos_label='donated',
marker="+")
disp = RocCurveDisplay.from_estimator(
dummy_classifier, data_test, target_test, pos_label='donated',
color="tab:orange", linestyle="--", ax=disp.ax_)
_ = disp.ax_.set_title("ROC AUC curve")
```
This curve was built using the same principle as the precision-recall curve: we vary the probability threshold used to determine "hard" predictions and compute the metrics. As with the precision-recall curve, we can compute the area under the ROC (ROC-AUC) to characterize the generalization performance of our classifier. However, it is important to observe that the lower bound of the ROC-AUC is 0.5. Indeed, we also plot the generalization performance of a dummy classifier (the orange dashed line) to show that even the worst generalization performance obtained will lie above this line.
| github_jupyter |
To enter presentation mode, run the following cell and press `-`
```
%reload_ext slide
```
<span class="notebook-slide-start"/>
# Proxy
This notebook covers the following topics:
- [Introduction](#Introduction)
- [Proxy server](#Proxy-server)
## Introduction
There is a lot of information available in software repositories.
Below is a *screenshot* of the `gems-uff/sapos` repository.
<img src="images/githubexample.png" alt="Página Inicial de Repositório no GitHub" width="auto"/>
In this image, we can see the organization and the repository name
<img src="images/githubexample1.png" alt="Página Inicial de Repositório no GitHub com nome do repositório selecionado" width="auto"/>
Stars, forks, watchers
<img src="images/githubexample2.png" alt="Página Inicial de Repositório no GitHub com watchers, star e fork selecionados" width="auto"/>
Number of issues and pull requests
<img src="images/githubexample3.png" alt="Página Inicial de Repositório no GitHub com numero de issues e pull requests selecionados" width="auto"/>
Number of commits, branches, releases, contributors and license <span class="notebook-slide-extra" data-count="1"/>
<img src="images/githubexample4.png" alt="Página Inicial de Repositório no GitHub com número de commits, branches, releases, contribuidores e licensa selecionados" width="auto"/>
Files
<img src="images/githubexample5.png" alt="Página Inicial de Repositório no GitHub com arquivos selecionados" width="auto"/>
Message and date of the last commits that changed these files
<img src="images/githubexample6.png" alt="Página Inicial de Repositório no GitHub com arquivos selecionados" width="auto"/>
We can extract information from software repositories in 3 ways:
- Crawling the repository website
- APIs that provide data
- Directly from the version control system
In this short course we will cover all 3 approaches, but we will give more attention to the GitHub API and to extracting data directly from Git.
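As a quick, illustrative taste of the API route, the sketch below queries the public GitHub REST endpoint `https://api.github.com/repos/{owner}/{repo}` (without authentication, so it is subject to the rate limits discussed in the next section) and prints some of the numbers shown in the screenshots above:
```python
import requests

# Repository metadata from the public GitHub REST API
info = requests.get("https://api.github.com/repos/gems-uff/sapos").json()

print("Stars:", info["stargazers_count"])
print("Forks:", info["forks_count"])
print("Open issues + pull requests:", info["open_issues_count"])
print("Watchers:", info["subscribers_count"])
```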
## Proxy server
Repository servers usually limit the number of requests we can make.
In general, this limitation does not affect occasional use of the services for mining very much. However, while we are developing something, we may exceed the limit with repeated requests.
To avoid this problem, we will set up a simple proxy server in Flask.
When we use a proxy server, instead of making requests directly to the destination site, we make requests to the proxy server, which then forwards the requests to the destination site.
Upon receiving the response, the proxy caches the result and returns it to us.
If a request has already been made through the proxy server, it simply returns the cached result to us.
### Proxy implementation
The proxy server implementation is in the file `proxy.py`. Since we want to run the proxy in parallel with the notebook, the server needs to be executed externally.
Nevertheless, the proxy code is explained here.
We start the file with the necessary imports.
```python
import hashlib
import requests
import simplejson
import os
import sys
from flask import Flask, request, Response
```
The `hashlib` library is used to hash the requests. The `requests` library is used to make requests to GitHub. The `simplejson` library is used to convert requests and responses to JSON. The `os` library is used to manipulate directory paths and check for the existence of files. The `sys` library is used to read the command-line arguments. Finally, `flask` is used as the server.
Next, we define the site we will proxy to and the headers to exclude from the received response, and we create a `Flask` `app`. Note that `SITE` is set to the program's first command-line argument, or to https://github.com/ if no argument is given.
```python
if len(sys.argv) > 1:
SITE = sys.argv[1]
else:
SITE = "https://github.com/"
EXCLUDED_HEADERS = ['content-encoding', 'content-length', 'transfer-encoding', 'connection']
app = Flask(__name__)
```
Then, we define a function to handle all possible routes and methods the server can receive.
```python
METHODS = ['GET', 'POST', 'PATCH', 'PUT', 'DELETE']
@app.route('/', defaults={'path': ''}, methods=METHODS)
@app.route('/<path:path>', methods=METHODS)
def catch_all(path):
```
Inside this function, we build a request dictionary based on the request received by `flask`.
```python
request_dict = {
"method": request.method,
"url": request.url.replace(request.host_url, SITE),
"headers": {key: value for (key, value) in request.headers if key != 'Host'},
"data": request.get_data(),
"cookies": request.cookies,
"allow_redirects": False
}
```
In this request, we replace the host with the destination site.
Next, we convert the dictionary to JSON and compute the SHA1 hash of the result.
```python
request_json = simplejson.dumps(request_dict, sort_keys=True)
sha1 = hashlib.sha1(request_json.encode("utf-8")).hexdigest()
path_req = os.path.join("cache", sha1 + ".req")
path_resp = os.path.join("cache", sha1 + ".resp")
```
In the `cache` directory we store files `{sha1}.req` and `{sha1}.resp` with the cached request and response.
With this, when a request arrives, we can check whether `{sha1}.req` exists. If it exists, we can compare it with our request (to avoid collisions). Finally, if they are equal, we can return the cached response.
```python
if os.path.exists(path_req):
with open(path_req, "r") as req:
req_read = req.read()
if req_read == request_json:
with open(path_resp, "r") as dump:
response = simplejson.load(dump)
return Response(
response["content"],
response["status_code"],
response["headers"]
)
```
If the request is not cached, we turn the request dictionary into a `requests` request to GitHub, exclude the headers that `flask` populates itself, and build a JSON for the response.
```python
resp = requests.request(**request_dict)
headers = [(name, value) for (name, value) in resp.raw.headers.items()
if name.lower() not in EXCLUDED_HEADERS]
response = {
"content": resp.content,
"status_code": resp.status_code,
"headers": headers
}
response_json = simplejson.dumps(response, sort_keys=True)
```
After that, we save the response to the cache and return it to the original client.
```python
with open(path_resp, "w") as dump:
dump.write(response_json)
with open(path_req, "w") as req:
req.write(request_json)
return Response(
response["content"],
response["status_code"],
response["headers"]
)
```
At the end of the script, we start the server.
```python
if __name__ == '__main__':
app.run(debug=True)
```
### Using the proxy
Run the following line in a terminal:
```bash
python proxy.py
```
Now, every request that we would have made to github.com, we make to localhost:5000 instead. For example, instead of accessing https://github.com/gems-uff/sapos, we access http://localhost:5000/gems-uff/sapos
### Request with requests
Next, we make a request to the proxy with requests. <span class="notebook-slide-extra" data-count="2"/>
```
SITE = "http://localhost:5000/" # Se não usar o proxy, alterar para https://github.com/
import requests
response = requests.get(SITE + "gems-uff/sapos")
response.headers['server'], response.status_code
```
<span class="notebook-slide-scroll" data-position="-1"/>
We can see that the result came from GitHub and that the request worked, since the status code was 200.
Continues in: [5.Crawling.ipynb](5.Crawling.ipynb)
| github_jupyter |
```
import pandas as pd
import numpy as np
import time
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn import preprocessing as pp
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import accuracy_score
from sklearn import preprocessing
import xgboost as xgb
from sklearn.ensemble import BaggingClassifier
import lightgbm as lgb
from sklearn.naive_bayes import GaussianNB
from sklearn import preprocessing as pp
from sklearn.neighbors import KNeighborsClassifier
from sklearn import tree
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import cross_val_predict
from statistics import mode
from sklearn.model_selection import cross_val_score, cross_validate, train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
import xgboost as xgb
import lightgbm as lgb
# All the libraries for the different algorithms
from sklearn.naive_bayes import GaussianNB
from sklearn.naive_bayes import ComplementNB
from sklearn.naive_bayes import BernoulliNB
from sklearn.naive_bayes import MultinomialNB
from sklearn.calibration import CalibratedClassifierCV
from sklearn.svm import LinearSVC
from sklearn.svm import OneClassSVM
from sklearn.svm import SVC
from sklearn.svm import NuSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
import matplotlib.pyplot as plt
import sklearn.metrics as metrics
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import BaggingClassifier
import statistics
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
import warnings
from mlxtend.classifier import StackingClassifier
from mlxtend.classifier import StackingCVClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import GradientBoostingClassifier
from pylab import rcParams
from collections import Counter
warnings.simplefilter('ignore')
data_train= pd.read_csv("./datos/train.csv",na_values=["?"])
data_test= pd.read_csv("./datos/test.csv",na_values=["?"])
data_trainCopia = data_train.copy()
data_testCopia = data_test.copy()
Nombre = LabelEncoder().fit(pd.read_csv("./datos/nombre.csv").Nombre)
Año = LabelEncoder().fit(pd.read_csv("./datos/ao.csv").Año)
Ciudad = LabelEncoder().fit(pd.read_csv("./datos/ciudad.csv").Ciudad)
Combustible = LabelEncoder().fit(pd.read_csv("./datos/combustible.csv").Combustible)
Consumo = LabelEncoder().fit(pd.read_csv("./datos/consumo.csv").Consumo)
Descuento = LabelEncoder().fit(pd.read_csv("./datos/descuento.csv").Descuento)
Kilometros = LabelEncoder().fit(pd.read_csv("./datos/kilometros.csv").Kilometros)
Mano = LabelEncoder().fit(pd.read_csv("./datos/mano.csv").Mano)
Potencia = LabelEncoder().fit(pd.read_csv("./datos/potencia.csv").Potencia)
Asientos = LabelEncoder().fit(pd.read_csv("./datos/asientos.csv").Asientos)
Motor_CC=LabelEncoder().fit(pd.read_csv("./datos/motor_cc.csv").Motor_CC)
Tipo_marchas=LabelEncoder().fit(pd.read_csv("./datos/Tipo_marchas.csv").Tipo_marchas)
data_trainCopia['Nombre']=data_trainCopia['Nombre'].fillna(mode(data_trainCopia['Nombre']))
data_trainCopia['Año']=data_trainCopia['Año'].fillna(mode(data_trainCopia['Año']))
data_trainCopia['Ciudad']=data_trainCopia['Ciudad'].fillna(mode(data_trainCopia['Ciudad']))
#data_trainCopia['Kilometros']=data_trainCopia['Kilometros'].fillna(mode(data_trainCopia['Kilometros']))
data_trainCopia['Combustible']=data_trainCopia['Combustible'].fillna(mode(data_trainCopia['Combustible']))
data_trainCopia['Tipo_marchas']=data_trainCopia['Tipo_marchas'].fillna(mode(data_trainCopia['Tipo_marchas']))
#data_trainCopia['Mano']=data_trainCopia['Mano'].fillna(mode(data_trainCopia['Mano']))
data_trainCopia['Consumo']=data_trainCopia['Consumo'].fillna(mode(data_trainCopia['Consumo']))
data_trainCopia['Motor_CC']=data_trainCopia['Motor_CC'].fillna(mode(data_trainCopia['Motor_CC']))
data_trainCopia['Potencia']=data_trainCopia['Potencia'].fillna(mode(data_trainCopia['Potencia']))
data_trainCopia['Asientos']=data_trainCopia['Asientos'].fillna(mode(data_trainCopia['Asientos']))
data_trainCopia['Descuento']=data_trainCopia['Descuento'].fillna(mode(data_trainCopia['Descuento']))
# Drop the columns we do not need
data_trainCopia=data_trainCopia.drop(['Descuento'], axis=1)
data_trainCopia=data_trainCopia.drop(['id'], axis=1)
data_trainCopia=data_trainCopia.drop(['Kilometros'], axis=1)
data_testCopia=data_testCopia.drop(['Descuento'], axis=1)
data_testCopia=data_testCopia.drop(['id'], axis=1)
data_testCopia=data_testCopia.drop(['Kilometros'], axis=1)
# Drop the remaining rows with NaN values
data_trainCopia=data_trainCopia.dropna()
data_testCopia=data_testCopia.dropna()
# Label-encode the columns
data_trainCopia.Nombre = Nombre.transform(data_trainCopia.Nombre)
data_trainCopia.Año = Año.transform(data_trainCopia.Año)
data_trainCopia.Ciudad = Ciudad.transform(data_trainCopia.Ciudad)
data_trainCopia.Combustible = Combustible.transform(data_trainCopia.Combustible)
data_trainCopia.Potencia = Potencia.transform(data_trainCopia.Potencia)
data_trainCopia.Consumo = Consumo.transform(data_trainCopia.Consumo)
#data_trainCopia.Kilometros = Kilometros.transform(data_trainCopia.Kilometros)
data_trainCopia.Mano = Mano.transform(data_trainCopia.Mano)
data_trainCopia.Motor_CC = Motor_CC.transform(data_trainCopia.Motor_CC)
data_trainCopia.Tipo_marchas = Tipo_marchas.transform(data_trainCopia.Tipo_marchas)
data_trainCopia.Asientos = Asientos.transform(data_trainCopia.Asientos)
#-------------------------------------------------------------------------------------------
data_testCopia.Nombre = Nombre.transform(data_testCopia.Nombre)
data_testCopia.Año = Año.transform(data_testCopia.Año)
data_testCopia.Ciudad = Ciudad.transform(data_testCopia.Ciudad)
data_testCopia.Combustible = Combustible.transform(data_testCopia.Combustible)
data_testCopia.Potencia = Potencia.transform(data_testCopia.Potencia)
data_testCopia.Consumo = Consumo.transform(data_testCopia.Consumo)
#data_testCopia.Kilometros = Kilometros.transform(data_testCopia.Kilometros)
data_testCopia.Mano = Mano.transform(data_testCopia.Mano)
data_testCopia.Tipo_marchas = Tipo_marchas.transform(data_testCopia.Tipo_marchas)
data_testCopia.Asientos = Asientos.transform(data_testCopia.Asientos)
data_testCopia.Motor_CC = Motor_CC.transform(data_testCopia.Motor_CC)
target = pd.read_csv('./datos/precio_cat.csv')
target_train=data_trainCopia['Precio_cat']
data_trainCopia=data_trainCopia.drop(['Precio_cat'], axis=1)
GradientBoostingClassifier(criterion='friedman_mse', init=None,
learning_rate=0.1, loss='deviance', max_depth=3,
max_features=None, max_leaf_nodes=None,
min_impurity_split=1e-07, min_samples_leaf=1,
min_samples_split=2, min_weight_fraction_leaf=0.0,
n_estimators=100, presort='auto', random_state=None,
subsample=1.0, verbose=0, warm_start=False)
from imblearn.over_sampling import SMOTE
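# Oversample the minority classes with SMOTE so that the price categories are balanced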
Xo, yo = SMOTE(random_state=42).fit_resample(data_trainCopia, target_train)
clf = GradientBoostingClassifier(learning_rate=0.07, n_estimators=700, max_depth=2)
scores = cross_val_score(clf, Xo, yo, cv=5, scoring='accuracy')
print("Score Validacion Cruzada", np.mean(scores)*100)
clfEntrenado = clf.fit(Xo, yo)
preclf = clfEntrenado.predict(data_testCopia)
clf = GradientBoostingClassifier(learning_rate=0.09, n_estimators=700, max_depth=2)
scores = cross_val_score(clf, Xo, yo, cv=5, scoring='accuracy')
print("Score Validacion Cruzada", np.mean(scores)*100)
clfEntrenado = clf.fit(Xo, yo)
preclf = clfEntrenado.predict(data_testCopia)
clf = GradientBoostingClassifier(learning_rate=0.9, n_estimators=750, max_depth=2)
scores = cross_val_score(clf, Xo, yo, cv=5, scoring='accuracy')
print("Score Validacion Cruzada", np.mean(scores)*100)
clfEntrenado = clf.fit(Xo, yo)
preclfOverGradient = clfEntrenado.predict(data_testCopia)
dfAux = pd.DataFrame({'id':data_test['id']})
dfAux.set_index('id', inplace=True)
dfFinal = pd.DataFrame({'id': data_test['id'], 'Precio_cat': preclfOverGradient}, columns=['id', 'Precio_cat'])
dfFinal.set_index('id', inplace=True)
dfFinal.to_csv("./soluciones/GradientOverSamplingConRandomStateScoreLocal895628.csv")
clf = GradientBoostingClassifier(learning_rate=0.055, n_estimators=2500, max_depth=2)
scores = cross_val_score(clf, Xo, yo, cv=5, scoring='accuracy')
print("Score Validacion Cruzada", np.mean(scores)*100)
clfEntrenado = clf.fit(Xo, yo)
preclfOverGradient = clfEntrenado.predict(data_testCopia)
clf = GradientBoostingClassifier(learning_rate=0.5, n_estimators=400)
scores = cross_val_score(clf, Xo, yo, cv=5, scoring='accuracy')
print("Score Validacion Cruzada", np.mean(scores)*100)
clfEntrenado = clf.fit(Xo, yo)
preclfOverGradient = clfEntrenado.predict(data_testCopia)
clf = GradientBoostingClassifier(learning_rate=0.5, n_estimators=100, max_depth=6)
scores = cross_val_score(clf, Xo, yo, cv=5, scoring='accuracy')
print("Score Validacion Cruzada", np.mean(scores)*100)
clfEntrenado = clf.fit(Xo, yo)
preclfOverGradient = clfEntrenado.predict(data_testCopia)
clf = GradientBoostingClassifier(learning_rate=0.5, n_estimators=100, max_depth=6)
scores = cross_val_score(clf, data_trainCopia, target_train, cv=5, scoring='accuracy')
print("Score Validacion Cruzada", np.mean(scores)*100)
clfEntrenado = clf.fit(data_trainCopia, target_train)
preclfOverGradient = clfEntrenado.predict(data_testCopia)
lgbm1 = lgb.LGBMClassifier(learning_rate=0.5, objective='binary', n_estimators=550, n_jobs=2,
num_leaves=11, max_depth=-1, reg_alpha=0.1)
scores = cross_val_score(lgbm1, Xo, yo, cv=5, scoring='accuracy')
print("Score Validacion Cruzada CON MODELO", np.mean(scores)*100)
lgbmEntrenado = lgbm1.fit(Xo, yo)
preMamaJuanca = lgbmEntrenado.predict(data_testCopia)
lgbm1 = lgb.LGBMClassifier(learning_rate=0.2, objective='binary', n_estimators=550, n_jobs=2,
num_leaves=11, max_depth=-1)
scores = cross_val_score(lgbm1, Xo, yo, cv=5, scoring='accuracy')
print("Score Validacion Cruzada CON MODELO", np.mean(scores)*100)
lgbmEntrenado = lgbm1.fit(Xo, yo)
preMamaJuanca = lgbmEntrenado.predict(data_testCopia)
lgbm1 = lgb.LGBMClassifier(learning_rate=0.2, objective='multiclassova', n_estimators=550, n_jobs=2,
num_leaves=11, max_depth=-1)
scores = cross_val_score(lgbm1, data_trainCopia, target_train, cv=5, scoring='accuracy')
print("Score Validacion Cruzada CON MODELO", np.mean(scores)*100)
lgbmEntrenado = lgbm1.fit(data_trainCopia, target_train)
preMamaJuanca = lgbmEntrenado.predict(data_testCopia)
lgbm1 = lgb.LGBMClassifier(learning_rate=0.3, objective='binary', n_estimators=500, n_jobs=2,
num_leaves=11, max_depth=-1)
scores = cross_val_score(lgbm1, Xo, yo, cv=5, scoring='accuracy')
print("Score Validacion Cruzada CON MODELO", np.mean(scores)*100)
lgbmEntrenado = lgbm1.fit(Xo, yo)
preMamaJuanca = lgbmEntrenado.predict(data_testCopia)
lgbm1 = lgb.LGBMClassifier(learning_rate=0.3, objective='binary', n_estimators=60, n_jobs=2, num_leaves=8, max_depth=8)
scores = cross_val_score(lgbm1, data_trainCopia, target_train, cv=5, scoring='accuracy')
print("Score Validacion Cruzada CON MODELO", np.mean(scores)*100)
lgbmEntrenado = lgbm1.fit(data_trainCopia, target_train)
preMamaJuanca = lgbmEntrenado.predict(data_testCopia)
# GRADIENT BOOSTING WITH GRID SEARCH PARAMETERS AND NON-NORMALIZED DATA
clf = GradientBoostingClassifier(learning_rate=0.3, n_estimators=70, max_depth=4)
scores = cross_val_score(clf, data_trainCopia, target_train, cv=5, scoring='accuracy')
print("Score Validacion Cruzada", np.mean(scores)*100)
clfEntrenado = clf.fit(data_trainCopia, target_train)
preclfOverGradient = clfEntrenado.predict(data_testCopia)
# THIS IS THE ONE THAT IS SUPPOSED TO IMPROVE MY SCORE
# GRADIENT BOOSTING WITH GRID SEARCH PARAMETERS AND NORMALIZED DATA
clf = GradientBoostingClassifier(learning_rate=0.5, n_estimators=70, max_depth=5, random_state=42)
scores = cross_val_score(clf, Xo, yo, cv=5, scoring='accuracy')
print("Score Validacion Cruzada", np.mean(scores)*100)
clfEntrenado = clf.fit(Xo, yo)
preclfOverGradient = clfEntrenado.predict(data_testCopia)
lgbm1 = lgb.LGBMClassifier(learning_rate=0.7, objective='binary', n_estimators=70, n_jobs=2, num_leaves=10, max_depth=4, random_state=42)
scores = cross_val_score(lgbm1,Xo, yo, cv=5, scoring='accuracy')
print("Score Validacion Cruzada CON MODELO", np.mean(scores)*100)
lgbmEntrenado = lgbm1.fit(Xo, yo)
preMamaJuanca = lgbmEntrenado.predict(data_testCopia)
```
| github_jupyter |
<p></p>
<p style="text-align:center"><font size="20">BRAIN IMAGING</font></p>
<p style="text-align:center"><font size="20">DATA STRUCTURE</font></p>
The dataset for this tutorial is structured according to the [Brain Imaging Data Structure (BIDS)](http://bids.neuroimaging.io/). BIDS is a simple and intuitive way to organize and describe your neuroimaging and behavioral data. Neuroimaging experiments result in complicated data that can be arranged in many different ways. So far there is no consensus on how to organize and share data obtained in neuroimaging experiments. BIDS tackles this problem by suggesting a new standard for the arrangement of neuroimaging datasets.
The idea of BIDS is that the file and folder names follow a strict set of rules:

Using the same structure for all of your studies will allow you to easily reuse all of your scripts between studies. But additionally, it also has the advantage that sharing code with and using scripts from other researchers will be much easier.
# Tutorial Dataset
For this tutorial, we will be using a subset of the [fMRI dataset (ds000114)](https://openfmri.org/dataset/ds000114/) publicly available on [openfmri.org](https://openfmri.org). **If you're using the suggested Docker image you probably have all data needed to run the tutorial within the Docker container.**
If you want to have the data locally you can use [Datalad](http://datalad.org/) to download a subset of the dataset, via the [datalad repository](http://datasets.datalad.org/?dir=/workshops/nih-2017/ds000114). In order to install the dataset with all subrepositories, you can run:
```
%%bash
cd /data
datalad install -r ///workshops/nih-2017/ds000114
```
In order to download the data, you can use the ``datalad get foldername`` command to download all files in the folder ``foldername``. For this tutorial we only want to download part of the dataset, i.e. the anatomical and the functional `fingerfootlips` images:
```
%%bash
cd /data/ds000114
datalad get -J 4 derivatives/fmriprep/sub-*/anat/*preproc.nii.gz \
sub-01/ses-test/anat \
sub-*/ses-test/func/*fingerfootlips*
```
So let's have a look at the tutorial dataset.
```
!tree -L 4 /data/ds000114/
```
As you can see, for every subject we have one anatomical T1w image, five functional images, and one diffusion-weighted image.
**Note**: If you used `datalad` or `git annex` to get the dataset, you can see symlinks for the image files.
# Behavioral Task
Subject from the ds000114 dataset did five behavioral tasks. In our dataset two of them are included.
The **motor task** consisted of ***finger tapping***, ***foot twitching*** and ***lip pouching*** interleaved with fixation at a cross.
The **landmark task** was designed to mimic the ***line bisection task*** used in neurological practice to diagnose spatial hemineglect. Two conditions were contrasted, specifically judging if a horizontal line had been bisected exactly in the middle, versus judging if a horizontal line was bisected at all. More about the dataset and studies you can find [here](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3641991/).
For each of the functional images above, we therefore also have a tab-separated values file (``tsv``), containing information such as stimulus onset, duration, type, etc. So let's have a look at one of them:
```
%%bash
cd /data/ds000114
datalad get sub-01/ses-test/func/sub-01_ses-test_task-linebisection_events.tsv
!cat /data/ds000114/sub-01/ses-test/func/sub-01_ses-test_task-linebisection_events.tsv
```
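If we want to work with these event files programmatically, a small sketch with `pandas` could look like the following (the required BIDS event columns are ``onset`` and ``duration``; any additional columns, such as the trial type, depend on the dataset):
```
import pandas as pd

events = pd.read_csv('/data/ds000114/sub-01/ses-test/func/sub-01_ses-test_task-linebisection_events.tsv',
                     sep='\t')
events.head()
```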
| github_jupyter |
STAT 453: Deep Learning (Spring 2021)
Instructor: Sebastian Raschka (sraschka@wisc.edu)
Course website: http://pages.stat.wisc.edu/~sraschka/teaching/stat453-ss2021/
GitHub repository: https://github.com/rasbt/stat453-deep-learning-ss21
---
```
%load_ext watermark
%watermark -a 'Sebastian Raschka' -v -p torch
```
- Runs on CPU or GPU (if available)
# A Convolutional ResNet and Residual Blocks
Please note that this example does not implement a really deep ResNet as described in the literature but rather illustrates how the residual blocks described in He et al. [1] can be implemented in PyTorch.
- [1] He, Kaiming, et al. "Deep residual learning for image recognition." *Proceedings of the IEEE conference on computer vision and pattern recognition*. 2016.
## Imports
```
import time
import numpy as np
import torch
from torch.utils.data import DataLoader
from torchvision import datasets
from torchvision import transforms
```
## Settings and Dataset
```
##########################
### SETTINGS
##########################
# Device
device = torch.device("cuda:1" if torch.cuda.is_available() else "cpu")
# Hyperparameters
random_seed = 123
learning_rate = 0.01
num_epochs = 10
batch_size = 128
# Architecture
num_classes = 10
##########################
### MNIST DATASET
##########################
# Note transforms.ToTensor() scales input images
# to 0-1 range
train_dataset = datasets.MNIST(root='data',
train=True,
transform=transforms.ToTensor(),
download=True)
test_dataset = datasets.MNIST(root='data',
train=False,
transform=transforms.ToTensor())
train_loader = DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=True)
test_loader = DataLoader(dataset=test_dataset,
batch_size=batch_size,
shuffle=False)
# Checking the dataset
for images, labels in train_loader:
print('Image batch dimensions:', images.shape)
print('Image label dimensions:', labels.shape)
break
```
## ResNet with identity blocks
The following code implements the residual blocks with skip connections such that the input passed via the shortcut matches the dimensions of the main path's output, which allows the network to learn identity functions. Such a residual block is illustrated below:

```
##########################
### MODEL
##########################
class ConvNet(torch.nn.Module):
def __init__(self, num_classes):
super(ConvNet, self).__init__()
#########################
### 1st residual block
#########################
self.block_1 = torch.nn.Sequential(
torch.nn.Conv2d(in_channels=1,
out_channels=4,
kernel_size=(1, 1),
stride=(1, 1),
padding=0),
torch.nn.BatchNorm2d(4),
torch.nn.ReLU(inplace=True),
torch.nn.Conv2d(in_channels=4,
out_channels=1,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
torch.nn.BatchNorm2d(1)
)
self.block_2 = torch.nn.Sequential(
torch.nn.Conv2d(in_channels=1,
out_channels=4,
kernel_size=(1, 1),
stride=(1, 1),
padding=0),
torch.nn.BatchNorm2d(4),
torch.nn.ReLU(inplace=True),
torch.nn.Conv2d(in_channels=4,
out_channels=1,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
torch.nn.BatchNorm2d(1)
)
#########################
### Fully connected
#########################
self.linear_1 = torch.nn.Linear(1*28*28, num_classes)
def forward(self, x):
#########################
### 1st residual block
#########################
shortcut = x
x = self.block_1(x)
x = torch.nn.functional.relu(x + shortcut)
#########################
### 2nd residual block
#########################
shortcut = x
x = self.block_2(x)
x = torch.nn.functional.relu(x + shortcut)
#########################
### Fully connected
#########################
logits = self.linear_1(x.view(-1, 1*28*28))
return logits
torch.manual_seed(random_seed)
model = ConvNet(num_classes=num_classes)
model = model.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
```
### Training
```
def compute_accuracy(model, data_loader):
correct_pred, num_examples = 0, 0
for i, (features, targets) in enumerate(data_loader):
features = features.to(device)
targets = targets.to(device)
logits = model(features)
_, predicted_labels = torch.max(logits, 1)
num_examples += targets.size(0)
correct_pred += (predicted_labels == targets).sum()
return correct_pred.float()/num_examples * 100
start_time = time.time()
for epoch in range(num_epochs):
model = model.train()
for batch_idx, (features, targets) in enumerate(train_loader):
features = features.to(device)
targets = targets.to(device)
### FORWARD AND BACK PROP
logits = model(features)
cost = torch.nn.functional.cross_entropy(logits, targets)
optimizer.zero_grad()
cost.backward()
### UPDATE MODEL PARAMETERS
optimizer.step()
### LOGGING
if not batch_idx % 250:
print ('Epoch: %03d/%03d | Batch %03d/%03d | Cost: %.4f'
%(epoch+1, num_epochs, batch_idx,
len(train_loader), cost))
model = model.eval() # eval mode to prevent upd. batchnorm params during inference
with torch.set_grad_enabled(False): # save memory during inference
print('Epoch: %03d/%03d training accuracy: %.2f%%' % (
epoch+1, num_epochs,
compute_accuracy(model, train_loader)))
print('Time elapsed: %.2f min' % ((time.time() - start_time)/60))
print('Total Training Time: %.2f min' % ((time.time() - start_time)/60))
print('Test accuracy: %.2f%%' % (compute_accuracy(model, test_loader)))
```
## ResNet with convolutional blocks for resizing
The following code implements the residual blocks with skip connections such that the input passed via the shortcut is resized (here via a 1×1 convolution with stride 2) to match the dimensions of the main path's output. Such a residual block is illustrated below:

```
class ResidualBlock(torch.nn.Module):
""" Helper Class"""
def __init__(self, channels):
super(ResidualBlock, self).__init__()
self.block = torch.nn.Sequential(
torch.nn.Conv2d(in_channels=channels[0],
out_channels=channels[1],
kernel_size=(3, 3),
stride=(2, 2),
padding=1),
torch.nn.BatchNorm2d(channels[1]),
torch.nn.ReLU(inplace=True),
torch.nn.Conv2d(in_channels=channels[1],
out_channels=channels[2],
kernel_size=(1, 1),
stride=(1, 1),
padding=0),
torch.nn.BatchNorm2d(channels[2])
)
self.shortcut = torch.nn.Sequential(
torch.nn.Conv2d(in_channels=channels[0],
out_channels=channels[2],
kernel_size=(1, 1),
stride=(2, 2),
padding=0),
torch.nn.BatchNorm2d(channels[2])
)
def forward(self, x):
shortcut = x
block = self.block(x)
shortcut = self.shortcut(x)
x = torch.nn.functional.relu(block+shortcut)
return x
##########################
### MODEL
##########################
class ConvNet(torch.nn.Module):
def __init__(self, num_classes):
super(ConvNet, self).__init__()
self.residual_block_1 = ResidualBlock(channels=[1, 4, 8])
self.residual_block_2 = ResidualBlock(channels=[8, 16, 32])
self.linear_1 = torch.nn.Linear(7*7*32, num_classes)
def forward(self, x):
out = self.residual_block_1(x)
out = self.residual_block_2(out)
logits = self.linear_1(out.view(-1, 7*7*32))
return logits
torch.manual_seed(random_seed)
model = ConvNet(num_classes=num_classes)
model.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
```
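As a quick sanity check on the dimensions (a minimal sketch, separate from the training loop below), we can push a dummy MNIST-sized batch through the network: each residual block halves the spatial resolution via its stride-2 convolutions, so a 28×28 input ends up as a 7×7 feature map with 32 channels before the fully connected layer.
```
with torch.no_grad():
    dummy = torch.randn(2, 1, 28, 28).to(device)   # batch of 2 fake MNIST images
    print(model.residual_block_1(dummy).shape)     # expected: torch.Size([2, 8, 14, 14])
    print(model(dummy).shape)                      # expected: torch.Size([2, 10]) class logits
```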
### Training
```
for epoch in range(num_epochs):
model = model.train()
for batch_idx, (features, targets) in enumerate(train_loader):
features = features.to(device)
targets = targets.to(device)
### FORWARD AND BACK PROP
logits = model(features)
cost = torch.nn.functional.cross_entropy(logits, targets)
optimizer.zero_grad()
cost.backward()
### UPDATE MODEL PARAMETERS
optimizer.step()
### LOGGING
if not batch_idx % 50:
print ('Epoch: %03d/%03d | Batch %03d/%03d | Cost: %.4f'
%(epoch+1, num_epochs, batch_idx,
len(train_dataset)//batch_size, cost))
model = model.eval() # eval mode to prevent upd. batchnorm params during inference
with torch.set_grad_enabled(False): # save memory during inference
print('Epoch: %03d/%03d training accuracy: %.2f%%' % (
epoch+1, num_epochs,
compute_accuracy(model, train_loader)))
print('Test accuracy: %.2f%%' % (compute_accuracy(model, test_loader)))
```
| github_jupyter |
# Hidden Markov Model
## What is a Hidden Markov Model?
A Hidden Markov Model (HMM) is a statistical Markov model in which the system being modeled is assumed to be a Markov process with **hidden** states.
An HMM allows us to talk about both observed events (like words that we see in the input) and hidden events (like Part-Of-Speech tags).
An HMM is specified by the following components:

**State Transition Probabilities** are the probabilities of moving from state i to state j.

**Observation Probability Matrix** also called emission probabilities, express the probability of an observation Ot being generated from a state i.

**Initial State Distribution** $\pi$<sub>i</sub> is the probability that the Markov chain will start in state i. A state j with $\pi$<sub>j</sub>=0 cannot be an initial state.
Hence, the entire Hidden Markov Model can be described as,

```
# Inorder to get the notebooks running in current directory
import os, sys, inspect
currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
parentdir = os.path.dirname(currentdir)
sys.path.insert(0, parentdir)
import hmm
```
Let us take a simple example with two hidden states and two observable states.
The **Hidden states** will be **Rainy** and **Sunny**.
The **Observable states** will be **Sad** and **Happy**.
The transition and emission matrices are given below.
The initial probabilities are obtained by computing the stationary distribution of the transition matrix.
This means that for a given transition matrix A, the stationary distribution $\pi$ satisfies
$\pi$A = $\pi$
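For instance, a minimal sketch of this computation with `numpy`, taking the left eigenvector of the transition matrix associated with eigenvalue 1 and normalizing it, would be:
```
import numpy as np

A = np.array([[0.5, 0.5], [0.3, 0.7]])            # transition matrix used below
eigvals, eigvecs = np.linalg.eig(A.T)             # left eigenvectors of A
stationary = eigvecs[:, np.isclose(eigvals, 1)].ravel().real
stationary /= stationary.sum()                    # normalize to a probability vector
print(stationary)                                 # approximately [0.375, 0.625]
```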
```
# Hidden
hidden_states = ["Rainy", "Sunny"]
transition_matrix = [[0.5, 0.5], [0.3, 0.7]]
# Observable
observable_states = ["Sad", "Happy"]
emission_matrix = [[0.8, 0.2], [0.4, 0.6]]
# Inputs
input_seq = [0, 0, 1]
model = hmm.HiddenMarkovModel(
observable_states, hidden_states, transition_matrix, emission_matrix
)
model.print_model_info()
model.visualize_model(output_dir="simple_demo", notebook=True)
```
Here the <span style="color: blue;">blue</span> lines indicate the hidden state transitions, and the <span style="color: red;">red</span> lines indicate the emission transitions.
# Problem 1:
Computing Likelihood: Given an HMM $\lambda$ = (A, B) and an observation sequence O, determine the likelihood P(O | $\lambda$)
## How It Is Calculated?
For our example, for the given **observed** sequence - (Sad, Sad, Happy) the probabilities will be calculated as,
<em>
P(Sad, Sad, Happy) =
P(Rainy) * P(Sad | Rainy) * P(Rainy | Rainy) * P(Sad | Rainy) * P(Rainy | Rainy) * P(Happy | Rainy)
+
P(Rainy) * P(Sad | Rainy) * P(Rainy | Rainy) * P(Sad | Rainy) * P(Sunny | Rainy) * P(Happy | Sunny)
+
P(Rainy) * P(Sad | Rainy) * P(Sunny | Rainy) * P(Sad | Sunny) * P(Rainy | Sunny) * P(Happy | Rainy)
+
P(Rainy) * P(Sad | Rainy) * P(Sunny | Rainy) * P(Sad | Sunny) * P(Sunny | Sunny) * P(Happy | Sunny)
+
P(Sunny) * P(Sad | Sunny) * P(Rainy | Sunny) * P(Sad | Rainy) * P(Rainy | Rainy) * P(Happy | Rainy)
+
P(Sunny) * P(Sad | Sunny) * P(Rainy | Sunny) * P(Sad | Rainy) * P(Sunny | Rainy) * P(Happy | Sunny)
+
P(Sunny) * P(Sad | Sunny) * P(Sunny | Sunny) * P(Sad | Sunny) * P(Rainy | Sunny) * P(Happy | Rainy)
+
P(Sunny) * P(Sad | Sunny) * P(Sunny | Sunny) * P(Sad | Sunny) * P(Sunny | Sunny) * P(Happy | Sunny)
</em>
## The Problems With This Method
This, however, is a naive way of computing the likelihood: the number of multiplications is of the order of 2TN<sup>T</sup>, where T is the length of the observed sequence and N is the number of hidden states.
This means that the time complexity increases exponentially as the number of hidden states increases.
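For illustration, the brute-force sum above can be written directly by enumerating every hidden-state path (a sketch assuming numpy arrays `pi`, `A` for transitions and `B` for emissions; it only serves to show where the 2TN<sup>T</sup> cost comes from):
```
import itertools
import numpy as np

def naive_likelihood(pi, A, B, obs):
    """Brute-force P(O | lambda): sum the joint probability over every hidden-state path."""
    n_states = len(pi)
    total = 0.0
    for path in itertools.product(range(n_states), repeat=len(obs)):
        p = pi[path[0]] * B[path[0], obs[0]]
        for t in range(1, len(obs)):
            p *= A[path[t - 1], path[t]] * B[path[t], obs[t]]
        total += p
    return total

# e.g. with the Rainy/Sunny example and its stationary initial distribution:
# naive_likelihood(np.array([0.375, 0.625]),
#                  np.array(transition_matrix), np.array(emission_matrix), [0, 0, 1])
```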
# Forward Algorithm
In the expansion above we compute *P(Rainy) * P(Sad | Rainy)* and *P(Sunny) * P(Sad | Sunny)* four times each.
Even parts like
*P(Rainy) * P(Sad | Rainy) * P(Rainy | Rainy) * P(Sad | Rainy)*,
*P(Rainy) * P(Sad | Rainy) * P(Sunny | Rainy) * P(Sad | Sunny)*,
*P(Sunny) * P(Sad | Sunny) * P(Rainy | Sunny) * P(Sad | Rainy)* and
*P(Sunny) * P(Sad | Sunny) * P(Sunny | Sunny) * P(Sad | Sunny)* are repeated.
We can avoid much of this repeated computation by using recurrence relations, with the help of **Dynamic Programming**.

In code, it can be written as:
```
alpha[:, 0] = self.pi * emission_matrix[:, input_seq[0]] # Initialize
for t in range(1, T):
for s in range(n_states):
alpha[s, t] = emission_matrix[s, input_seq[t]] * np.sum(
alpha[:, t - 1] * transition_matrix[:, s]
)
```
This will lead to the following computations:

```
alpha, a_probs = model.forward(input_seq)
hmm.print_forward_result(alpha, a_probs)
```
# Backward Algorithm
The Backward Algorithm is the time-reversed version of the Forward Algorithm.
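Its recursion can be sketched as follows, mirroring the variable names of the forward snippet above (an illustration of the recurrence, not necessarily the exact code inside the `hmm` module):
```
beta = np.zeros((n_states, T))
beta[:, T - 1] = 1.0  # Initialize: nothing remains to be generated after the last observation
for t in range(T - 2, -1, -1):  # walk backwards through time
    for s in range(n_states):
        beta[s, t] = np.sum(
            transition_matrix[s, :]
            * emission_matrix[:, input_seq[t + 1]]
            * beta[:, t + 1]
        )
# the likelihood is recovered from the first time step
b_prob = np.sum(self.pi * emission_matrix[:, input_seq[0]] * beta[:, 0])
```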
```
beta, b_probs = model.backward(input_seq)
hmm.print_backward_result(beta, b_probs)
```
# Problem 2:
Given an observation sequence O and an HMM λ = (A,B), discover the best hidden state sequence Q.
## Viterbi Algorithm
The Viterbi Algorithm steps through each time step, finding the maximum probability of any path that reaches state i at time t while also accounting for the observations up to time t.
The algorithm also keeps track of the most probable predecessor state at each stage. At the end of the sequence, the algorithm iterates backwards, selecting the winning state at each step, which reconstructs the most likely path, i.e. the sequence of hidden states that led to the sequence of observations.
In code, it is written as:
```
delta[:, 0] = self.pi * emission_matrix[:, input_seq[0]] # Initialize
for t in range(1, T):
for s in range(n_states):
delta[s, t] = (
np.max(delta[:, t - 1] * transition_matrix[:, s])
* emission_matrix[s, input_seq[t]]
)
phi[s, t] = np.argmax(delta[:, t - 1] * transition_matrix[:, s])
```
The Viterbi Algorithm is identical to the forward algorithm except that it takes the **max** over the
previous path probabilities whereas the forward algorithm takes the **sum**.
The code for the Backtrace is written as:
```
path[T - 1] = np.argmax(delta[:, T - 1]) # Initialize
for t in range(T - 2, -1, -1):
path[t] = phi[path[t + 1], [t + 1]]
```
```
path, delta, phi = model.viterbi(input_seq)
hmm.print_viterbi_result(input_seq, observable_states, hidden_states, path, delta, phi)
```
| github_jupyter |
# Deep Reinforcement Learning in Action
### by Alex Zai and Brandon Brown
#### Chapter 3
##### Listing 3.1
```
from Gridworld import Gridworld
game = Gridworld(size=4, mode='static')
import sys
game.display()
game.makeMove('d')
game.makeMove('d')
game.makeMove('d')
game.display()
game.reward()
game.board.render_np()
game.board.render_np().shape
```
##### Listing 3.2
```
import numpy as np
import torch
from Gridworld import Gridworld
import random
from matplotlib import pylab as plt
l1 = 64
l2 = 150
l3 = 100
l4 = 4
model = torch.nn.Sequential(
torch.nn.Linear(l1, l2),
torch.nn.ReLU(),
torch.nn.Linear(l2, l3),
torch.nn.ReLU(),
torch.nn.Linear(l3,l4)
)
loss_fn = torch.nn.MSELoss()
learning_rate = 1e-3
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
gamma = 0.9
epsilon = 1.0
action_set = {
0: 'u',
1: 'd',
2: 'l',
3: 'r',
}
```
##### Listing 3.3
```
epochs = 1000
losses = []
for i in range(epochs):
game = Gridworld(size=4, mode='static')
state_ = game.board.render_np().reshape(1,64) + np.random.rand(1,64)/10.0
state1 = torch.from_numpy(state_).float()
status = 1
while(status == 1):
qval = model(state1)
qval_ = qval.data.numpy()
if (random.random() < epsilon):
action_ = np.random.randint(0,4)
else:
action_ = np.argmax(qval_)
action = action_set[action_]
game.makeMove(action)
state2_ = game.board.render_np().reshape(1,64) + np.random.rand(1,64)/10.0
state2 = torch.from_numpy(state2_).float()
        reward = game.reward() # -1 while the game is still in progress; win/lose give a larger-magnitude reward
with torch.no_grad():
newQ = model(state2.reshape(1,64))
maxQ = torch.max(newQ)
if reward == -1: # if game still in play
Y = reward + (gamma * maxQ)
else:
Y = reward
Y = torch.Tensor([Y]).detach().squeeze()
X = qval.squeeze()[action_]
loss = loss_fn(X, Y)
optimizer.zero_grad()
loss.backward()
losses.append(loss.item())
optimizer.step()
state1 = state2
        if reward != -1: # game over (won or lost)
status = 0
if epsilon > 0.1:
epsilon -= (1/epochs)
plt.plot(losses)
m = torch.Tensor([2.0])
m.requires_grad=True
b = torch.Tensor([1.0])
b.requires_grad=True
def linear_model(x,m,b):
y = m @ x + b
return y
y = linear_model(torch.Tensor([4.]), m,b)
y
y.grad_fn
with torch.no_grad():
y = linear_model(torch.Tensor([4]),m,b)
y
y.grad_fn
y = linear_model(torch.Tensor([4.]), m,b)
y.backward()
m.grad
b.grad
```
##### Listing 3.4
```
def test_model(model, mode='static', display=True):
i = 0
test_game = Gridworld(mode=mode)
state_ = test_game.board.render_np().reshape(1,64) + np.random.rand(1,64)/10.0
state = torch.from_numpy(state_).float()
if display:
print("Initial State:")
print(test_game.display())
status = 1
while(status == 1):
qval = model(state)
qval_ = qval.data.numpy()
action_ = np.argmax(qval_)
action = action_set[action_]
if display:
print('Move #: %s; Taking action: %s' % (i, action))
test_game.makeMove(action)
state_ = test_game.board.render_np().reshape(1,64) + np.random.rand(1,64)/10.0
state = torch.from_numpy(state_).float()
if display:
print(test_game.display())
reward = test_game.reward()
if reward != -1: #if game is over
if reward > 0: #if game won
status = 2
if display:
print("Game won! Reward: %s" % (reward,))
else: #game is lost
status = 0
if display:
print("Game LOST. Reward: %s" % (reward,))
i += 1
if (i > 15):
if display:
print("Game lost; too many moves.")
break
win = True if status == 2 else False
return win
test_model(model, 'static')
```
##### Listing 3.5
```
from collections import deque
epochs = 5000
losses = []
mem_size = 1000
batch_size = 200
replay = deque(maxlen=mem_size)
max_moves = 50
h = 0
for i in range(epochs):
game = Gridworld(size=4, mode='random')
state1_ = game.board.render_np().reshape(1,64) + np.random.rand(1,64)/100.0
state1 = torch.from_numpy(state1_).float()
status = 1
mov = 0
while(status == 1):
mov += 1
qval = model(state1)
qval_ = qval.data.numpy()
if (random.random() < epsilon):
action_ = np.random.randint(0,4)
else:
action_ = np.argmax(qval_)
action = action_set[action_]
game.makeMove(action)
state2_ = game.board.render_np().reshape(1,64) + np.random.rand(1,64)/100.0
state2 = torch.from_numpy(state2_).float()
reward = game.reward()
done = True if reward > 0 else False
exp = (state1, action_, reward, state2, done)
replay.append(exp)
state1 = state2
if len(replay) > batch_size:
minibatch = random.sample(replay, batch_size)
state1_batch = torch.cat([s1 for (s1,a,r,s2,d) in minibatch])
action_batch = torch.Tensor([a for (s1,a,r,s2,d) in minibatch])
reward_batch = torch.Tensor([r for (s1,a,r,s2,d) in minibatch])
state2_batch = torch.cat([s2 for (s1,a,r,s2,d) in minibatch])
done_batch = torch.Tensor([d for (s1,a,r,s2,d) in minibatch])
Q1 = model(state1_batch)
with torch.no_grad():
Q2 = model(state2_batch)
Y = reward_batch + gamma * ((1 - done_batch) * torch.max(Q2,dim=1)[0])
X = \
Q1.gather(dim=1,index=action_batch.long().unsqueeze(dim=1)).squeeze()
loss = loss_fn(X, Y.detach())
optimizer.zero_grad()
loss.backward()
losses.append(loss.item())
optimizer.step()
if reward != -1 or mov > max_moves:
status = 0
mov = 0
losses = np.array(losses)
plt.plot(losses)
test_model(model,mode='random')
```
##### Listing 3.6
```
max_games = 1000
wins = 0
for i in range(max_games):
win = test_model(model, mode='random', display=False)
if win:
wins += 1
win_perc = float(wins) / float(max_games)
print("Games played: {0}, # of wins: {1}".format(max_games,wins))
print("Win percentage: {}".format(100.0*win_perc))
```
##### Listing 3.7
```
import copy
model = torch.nn.Sequential(
torch.nn.Linear(l1, l2),
torch.nn.ReLU(),
torch.nn.Linear(l2, l3),
torch.nn.ReLU(),
torch.nn.Linear(l3,l4)
)
model2 = copy.deepcopy(model)
model2.load_state_dict(model.state_dict())
sync_freq = 50
loss_fn = torch.nn.MSELoss()
learning_rate = 1e-3
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
```
##### Listing 3.8
```
from IPython.display import clear_output
from collections import deque
epochs = 5000
losses = []
mem_size = 1000
batch_size = 200
replay = deque(maxlen=mem_size)
max_moves = 50
h = 0
sync_freq = 500
j=0
for i in range(epochs):
game = Gridworld(size=4, mode='random')
state1_ = game.board.render_np().reshape(1,64) + np.random.rand(1,64)/100.0
state1 = torch.from_numpy(state1_).float()
status = 1
mov = 0
while(status == 1):
j+=1
mov += 1
qval = model(state1)
qval_ = qval.data.numpy()
if (random.random() < epsilon):
action_ = np.random.randint(0,4)
else:
action_ = np.argmax(qval_)
action = action_set[action_]
game.makeMove(action)
state2_ = game.board.render_np().reshape(1,64) + np.random.rand(1,64)/100.0
state2 = torch.from_numpy(state2_).float()
reward = game.reward()
done = True if reward > 0 else False
exp = (state1, action_, reward, state2, done)
replay.append(exp)
state1 = state2
if len(replay) > batch_size:
minibatch = random.sample(replay, batch_size)
state1_batch = torch.cat([s1 for (s1,a,r,s2,d) in minibatch])
action_batch = torch.Tensor([a for (s1,a,r,s2,d) in minibatch])
reward_batch = torch.Tensor([r for (s1,a,r,s2,d) in minibatch])
state2_batch = torch.cat([s2 for (s1,a,r,s2,d) in minibatch])
done_batch = torch.Tensor([d for (s1,a,r,s2,d) in minibatch])
Q1 = model(state1_batch)
with torch.no_grad():
Q2 = model2(state2_batch)
Y = reward_batch + gamma * ((1-done_batch) * \
torch.max(Q2,dim=1)[0])
X = Q1.gather(dim=1,index=action_batch.long() \
.unsqueeze(dim=1)).squeeze()
loss = loss_fn(X, Y.detach())
print(i, loss.item())
clear_output(wait=True)
optimizer.zero_grad()
loss.backward()
losses.append(loss.item())
optimizer.step()
if j % sync_freq == 0:
model2.load_state_dict(model.state_dict())
if reward != -1 or mov > max_moves:
status = 0
mov = 0
losses = np.array(losses)
plt.plot(losses)
test_model(model,mode='random')
```
| github_jupyter |
```
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
```
# 0. General note
* This notebook produces figures and calculations presented in [Ye et al. 2017, JGR](https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1002/2016JB013811).
* This notebook demonstrates how to correct pressure scales for the existing phase boundary data.
# 1. Global setup
```
import matplotlib.pyplot as plt
import numpy as np
from uncertainties import unumpy as unp
import pytheos as eos
```
# 2. Pressure calculations for PPv
* Data from Tateno2009
T (K) | Au-Tsuchiya | Pt-Holmes | MgO-Speziale
------|-------------|-----------|--------------
3500 | 120.4 | 137.7 | 135.6
2000 | 110.5 | 126.8 | 115.8
* Dorogokupets2007
T (K) | Au | Pt | MgO
------|-------------|-----------|--------------
3500 | 119.7 | 135.2 | 129.6
2000 | 108.9 | 123.2 | 113.2
<b>
* In conclusion, the PPv boundary discrepancy is not likely due to a pressure-scale problem.
</b>
```
t_ppv = np.asarray([3500., 2000.])
Au_T = eos.gold.Tsuchiya2003()
Au_D = eos.gold.Dorogokupets2007()
v = np.asarray([51.58,51.7])
p_Au_T_ppv = Au_T.cal_p(v, t_ppv)
p_Au_D_ppv = Au_D.cal_p(v, t_ppv)
print(p_Au_T_ppv, p_Au_D_ppv)
print('slopes: ', (p_Au_T_ppv[0]-p_Au_T_ppv[1])/(t_ppv[0]-t_ppv[1]),\
(p_Au_D_ppv[0]-p_Au_D_ppv[1])/(t_ppv[0]-t_ppv[1]) )
Pt_H = eos.platinum.Holmes1989()
Pt_D = eos.platinum.Dorogokupets2007()
v = np.asarray([48.06, 48.09])
p_Pt_H_ppv = Pt_H.cal_p(v, t_ppv)
p_Pt_D_ppv = Pt_D.cal_p(v, t_ppv)
print(p_Pt_H_ppv, p_Pt_D_ppv)
print('slopes: ', (p_Pt_H_ppv[0]-p_Pt_H_ppv[1])/(t_ppv[0]-t_ppv[1]),\
(p_Pt_D_ppv[0]-p_Pt_D_ppv[1])/(t_ppv[0]-t_ppv[1]) )
MgO_S = eos.periclase.Speziale2001()
MgO_D = eos.periclase.Dorogokupets2007()
v = np.asarray([52.87, 53.6])
p_MgO_S_ppv = MgO_S.cal_p(v, t_ppv)
p_MgO_D_ppv = MgO_D.cal_p(v, t_ppv)
print(p_MgO_S_ppv, p_MgO_D_ppv)
print('slopes: ', (p_MgO_S_ppv[0]-p_MgO_S_ppv[1])/(t_ppv[0]-t_ppv[1]), \
(p_MgO_D_ppv[0]-p_MgO_D_ppv[1])/(t_ppv[0]-t_ppv[1]) )
```
# 3. Post-spinel
Fei2004
Scales| PT | PT
------|------------|------------
MgO-S | 23.6, 1573 | 22.8, 2173
MgO-D | 23.1, 1573 | 22.0, 2173
Ye2014
Scales | PT | PT
-------|------------|------------
Pt-F | 25.2, 1550 | 23.2, 2380
Pt-D | 24.6, 1550 | 22.5, 2380
Au-F | 28.3, 1650 | 27.1, 2150
Au-D | 27.0, 1650 | 25.6, 2150
```
MgO_S = eos.periclase.Speziale2001()
MgO_D = eos.periclase.Dorogokupets2007()
v = np.asarray([68.75, 70.3])
t_MgO = np.asarray([1573.,2173.])
p_MgO_S = MgO_S.cal_p(v, t_MgO)
p_MgO_D = MgO_D.cal_p(v, t_MgO)
print(p_MgO_S, p_MgO_D)
print('slopes: ', (p_MgO_S[0]-p_MgO_S[1])/(t_MgO[0]-t_MgO[1]), (p_MgO_D[0]-p_MgO_D[1])/(t_MgO[0]-t_MgO[1]) )
Pt_F = eos.platinum.Fei2007bm3()
Pt_D = eos.platinum.Dorogokupets2007()
v = np.asarray([57.43, 58.85])
t_Pt = np.asarray([1550., 2380.])
p_Pt_F = Pt_F.cal_p(v, t_Pt)
p_Pt_D = Pt_D.cal_p(v, t_Pt)
print(p_Pt_F, p_Pt_D)
print('slopes: ', (p_Pt_F[0]-p_Pt_F[1])/(t_Pt[0]-t_Pt[1]), (p_Pt_D[0]-p_Pt_D[1])/(t_Pt[0]-t_Pt[1]) )
Au_F = eos.gold.Fei2007bm3()
Au_D = eos.gold.Dorogokupets2007()
v = np.asarray([62.33,63.53])
t_Au = np.asarray([1650., 2150.])
p_Au_F = Au_F.cal_p(v, t_Au)
p_Au_D = Au_D.cal_p(v, t_Au)
print(p_Au_F, p_Au_D)
print('slopes: ', (p_Au_F[0]-p_Au_F[1])/(t_Au[0]-t_Au[1]), (p_Au_D[0]-p_Au_D[1])/(t_Au[0]-t_Au[1]) )
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(10,3.5))
#ax.plot(unp.nominal_values(p_Au_T), t, c='b', ls='--', label='Au-Tsuchiya')
lw = 4
l_alpha = 0.3
ax1.plot(unp.nominal_values(p_Au_D), t_Au, c='b', ls='-', alpha=l_alpha, label='Au-D07', lw=lw)
ax1.annotate('Au-D07', xy=(25.7, 2100), xycoords='data',
xytext=(26.9, 2100), textcoords='data',
arrowprops=dict(facecolor='k', alpha=0.5, shrink=1, width = 0.1, headwidth=5),
horizontalalignment='right', verticalalignment='center')
ax1.plot(unp.nominal_values(p_Au_D-2.5), t_Au, c='b', ls='-', label='Au-mD07', lw=lw)
ax1.annotate('Au-D07,\n corrected', xy=(24.35, 1700), xycoords='data',
xytext=(24.8, 1700), textcoords='data',
arrowprops=dict(facecolor='k', alpha=0.5, shrink=1, width = 0.1, headwidth=5),
horizontalalignment='left', verticalalignment='center')
#ax.plot(unp.nominal_values(p_Pt_H), t, c='r', ls='--', label='Pt-Holmes')
ax1.plot(unp.nominal_values(p_Pt_D), t_Pt, c='r', ls='-', label='Pt-D07', lw=lw)
ax1.annotate('Pt-D07', xy=(22.7, 2300), xycoords='data',
xytext=(23.1, 2300), textcoords='data',
arrowprops=dict(facecolor='k', alpha=0.5, shrink=1, width = 0.1, headwidth=5),
horizontalalignment='left', verticalalignment='center')
ax1.plot(unp.nominal_values(p_MgO_S), t_MgO, c='k', ls='-', alpha=l_alpha, label='MgO-S01', lw=lw)
ax1.annotate('MgO-S01', xy=(22.9, 2150), xycoords='data',
xytext=(22.5, 2250), textcoords='data',
arrowprops=dict(facecolor='k', alpha=0.5, shrink=1, width = 0.1, headwidth=5),
horizontalalignment='right', verticalalignment='top')
ax1.plot(unp.nominal_values(p_MgO_D), t_MgO, c='k', ls='-', label='MgO-D07', lw=lw)
ax1.annotate('MgO-D07', xy=(22.7, 1800), xycoords='data',
xytext=(22.3, 1800), textcoords='data',
arrowprops=dict(facecolor='k', alpha=0.5, shrink=1, width = 0.1, headwidth=5),
horizontalalignment='right', verticalalignment='center')
ax1.fill([23.5,24,24,23.5], [1700,1700,2000,2000], 'k', alpha=0.2)
ax1.set_xlabel("Pressure (GPa)"); ax1.set_ylabel("Temperature (K)")
#l = ax1.legend(loc=3, fontsize=10, handlelength=2.5); l.get_frame().set_linewidth(0.5)
ax2.plot(unp.nominal_values(p_Au_T_ppv), t_ppv, c='b', ls='-', alpha=l_alpha, label='Au-T04', lw=lw)
ax2.annotate('Au-T04', xy=(120, 3400), xycoords='data',
xytext=(122, 3400), textcoords='data',
arrowprops=dict(facecolor='k', alpha=0.5, shrink=1, width = 0.1, headwidth=5),
horizontalalignment='left', verticalalignment='center')
ax2.plot(unp.nominal_values(p_Au_D_ppv), t_ppv, c='b', ls='-', label='Au-D07', lw=lw)
ax2.annotate('Au-D07', xy=(119, 3400), xycoords='data',
xytext=(117, 3400), textcoords='data',
arrowprops=dict(facecolor='k', alpha=0.5, shrink=1, width = 0.1, headwidth=5),
horizontalalignment='right', verticalalignment='center')
ax2.plot(unp.nominal_values(p_Pt_H_ppv), t_ppv, c='r', ls='-', alpha=l_alpha, label='Pt-H89', lw=lw)
ax2.annotate('Pt-H89', xy=(129, 2300), xycoords='data',
xytext=(132, 2300), textcoords='data',
arrowprops=dict(facecolor='k', alpha=0.5, shrink=1, width = 0.1, headwidth=5),
horizontalalignment='left', verticalalignment='center')
ax2.plot(unp.nominal_values(p_Pt_D_ppv), t_ppv, c='r', ls='-', label='Pt-D07', lw=lw)
ax2.annotate('Pt-D07', xy=(124, 2150), xycoords='data',
xytext=(123.7, 2300), textcoords='data',
arrowprops=dict(facecolor='k', alpha=0.5, shrink=1, width = 0.1, headwidth=5),
horizontalalignment='center', verticalalignment='bottom')
ax2.plot(unp.nominal_values(p_MgO_S_ppv), t_ppv, c='k', ls='-', alpha=l_alpha, label='MgO-S01', lw=lw)
ax2.annotate('MgO-S01', xy=(132, 3250), xycoords='data',
xytext=(132.2, 3550), textcoords='data',
arrowprops=dict(facecolor='k', alpha=0.5, shrink=1, width = 0.1, headwidth=5),
horizontalalignment='left', verticalalignment='bottom')
ax2.plot(unp.nominal_values(p_MgO_D_ppv), t_ppv, c='k', ls='-', label='MgO-D07', lw=lw)
ax2.annotate('MgO-D07', xy=(128, 3400), xycoords='data',
xytext=(128, 3550), textcoords='data',
arrowprops=dict(facecolor='k', alpha=0.5, shrink=1, width = 0.1, headwidth=5),
horizontalalignment='center', verticalalignment='bottom')
ax2.set_xlabel("Pressure (GPa)"); ax2.set_ylabel("Temperature (K)")
ax2.set_ylim(1900, 3700.)
#l = ax2.legend(loc=0, fontsize=10, handlelength=2.5); l.get_frame().set_linewidth(0.5)
ax1.text(0.05, 0.03, 'a', horizontalalignment='center',\
verticalalignment='bottom', transform = ax1.transAxes,\
fontsize = 32)
ax2.text(0.05, 0.03, 'b', horizontalalignment='center',\
verticalalignment='bottom', transform = ax2.transAxes,\
fontsize = 32)
ax1.set_yticks(ax1.get_yticks()[::2])
#ax2.set_yticks(ax2.get_yticks()[::2])
plt.tight_layout(pad=0.6)
plt.savefig('f-boundaries.pdf', bbox_inches='tight', \
pad_inches=0.1)
```
| github_jupyter |
# Generators
> Here we'll take a deeper dive into Python generators, including *generator expressions* and *generator functions*.
## Generator Expressions
> The difference between list comprehensions and generator expressions is sometimes confusing; here we'll quickly outline the differences between them:
### List comprehensions use square brackets, while generator expressions use parentheses
> This is a representative list comprehension:
```
[n ** 2 for n in range(12)]
```
> While this is a representative generator expression:
```
(n ** 2 for n in range(12))
```
> Notice that printing the generator expression does not print the contents; one way to print the contents of a generator expression is to pass it to the ``list`` constructor:
```
G = (n ** 2 for n in range(12))
list(G)
```
### A list is a collection of values, while a generator is a recipe for producing values
> When you create a list, you are actually building a collection of values, and there is some memory cost associated with that.
When you create a generator, you are not building a collection of values, but a recipe for producing those values.
Both expose the same iterator interface, as we can see here:
```
L = [n ** 2 for n in range(12)]
for val in L:
print(val, end=' ')
G = (n ** 2 for n in range(12))
for val in G:
print(val, end=' ')
```
> The difference is that a generator expression does not actually compute the values until they are needed.
This not only leads to memory efficiency, but to computational efficiency as well!
This also means that while the size of a list is limited by available memory, the size of a generator expression is unlimited!
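As a small illustration of the memory point (``sys.getsizeof`` reports only the size of the container object itself):
```
import sys

L = [n ** 2 for n in range(100000)]
G = (n ** 2 for n in range(100000))
print(sys.getsizeof(L))  # hundreds of kilobytes for the list object
print(sys.getsizeof(G))  # a small, constant size for the generator object
```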
> An example of an infinite generator expression can be created using the ``count`` iterator defined in ``itertools``:
```
from itertools import count
count()
for i in count():
print(i, end=' ')
if i >= 10: break
```
> The ``count`` iterator will go on happily counting forever until you tell it to stop; this makes it convenient to create generators that will also go on forever:
```
factors = [2, 3, 5, 7]
G = (i for i in count() if all(i % n > 0 for n in factors))
for val in G:
print(val, end=' ')
if val > 40: break
```
> You might see what we're getting at here: if we were to expand the list of factors appropriately, what we would have the beginnings of is a prime number generator, using the Sieve of Eratosthenes algorithm. We'll explore this more momentarily.
### A list can be iterated multiple times; a generator expression is single-use
> This is one of those potential gotchas of generator expressions.
With a list, we can straightforwardly do this:
```
L = [n ** 2 for n in range(12)]
for val in L:
print(val, end=' ')
print()
for val in L:
print(val, end=' ')
```
> A generator expression, on the other hand, is used-up after one iteration:
```
G = (n ** 2 for n in range(12))
list(G)
list(G)
```
> This can be very useful because it means iteration can be stopped and started:
```
G = (n**2 for n in range(12))
for n in G:
print(n, end=' ')
    if n > 30: break  # the generator pauses here
print("\ndoing something in between")
for n in G:  # iteration resumes where the generator left off
print(n, end=' ')
```
> One place I've found this useful is when working with collections of data files on disk; it means that you can quite easily analyze them in batches, letting the generator keep track of which ones you have yet to see.
## Generator Functions: Using ``yield``
> We saw in the previous section that list comprehensions are best used to create relatively simple lists, while using a normal ``for`` loop can be better in more complicated situations.
The same is true of generator expressions: we can make more complicated generators using *generator functions*, which make use of the ``yield`` statement.
> Here we have two ways of constructing the same list:
```
L1 = [n ** 2 for n in range(12)]
L2 = []
for n in range(12):
L2.append(n ** 2)
print(L1)
print(L2)
```
> Similarly, here we have two ways of constructing equivalent generators:
```
G1 = (n ** 2 for n in range(12))
def gen():
for n in range(12):
yield n ** 2
G2 = gen()
print(*G1)
print(*G2)
```
> A generator function is a function that, rather than using ``return`` to return a value once, uses ``yield`` to yield a (potentially infinite) sequence of values.
Just as in generator expressions, the state of the generator is preserved between partial iterations, but if we want a fresh copy of the generator we can simply call the function again.
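For example, calling ``gen()`` from the cell above twice gives two independent generators:
```
G1 = gen()
G2 = gen()
print(next(G1), next(G1))  # G1 has advanced: 0 1
print(next(G2))            # G2 starts fresh: 0
```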
## Example: Prime Number Generator
> Here I'll show my favorite example of a generator function: a function to generate an unbounded series of prime numbers.
A classic algorithm for this is the *Sieve of Eratosthenes*, which works something like this:
```
# generate a list of candidate numbers
L = [n for n in range(2, 40)]
print(L)
# remove all numbers divisible by the first element
L = [n for n in L if n == L[0] or n % L[0] > 0]
print(L)
# remove all numbers divisible by the second element
L = [n for n in L if n == L[1] or n % L[1] > 0]
print(L)
# remove all numbers divisible by the third element
L = [n for n in L if n == L[2] or n % L[2] > 0]
print(L)
```
> If we repeat this procedure enough times on a large enough list, we can generate as many primes as we wish.
> Let's encapsulate this logic in a generator function:
```
def gen_primes(N):
"""Generate primes up to N"""
    primes = set()  # a set to store the primes found so far
for n in range(2, N):
        if all(n % p > 0 for p in primes):  # no prime divides n -> n is prime
            primes.add(n)  # add n to the set of primes
            yield n  # yield the next prime
print(*gen_primes(100))
```
> That's all there is to it!
While this is certainly not the most computationally efficient implementation of the Sieve of Eratosthenes, it illustrates how convenient the generator function syntax can be for building more complicated sequences.
| github_jupyter |
<a href="https://colab.research.google.com/github/reallygooday/60daysofudacity/blob/master/Basic_Image_Classifier.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
hand-written digits dataset from UCI: http://archive.ics.uci.edu/ml/datasets/Optical+Recognition+of+Handwritten+Digits
```
# Importing load_digits() from the sklearn.datasets package
from sklearn.datasets import load_digits
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
digits_data = load_digits()
digits_data.keys()
labels = pd.Series(digits_data['target'])
data = pd.DataFrame(digits_data['data'])
data.head(1)
first_image = data.iloc[0]
np_image = first_image.values
np_image = np_image.reshape(8,8)
plt.imshow(np_image, cmap='gray_r')
f, axarr = plt.subplots(2, 4)
axarr[0, 0].imshow(data.iloc[0].values.reshape(8,8), cmap='gray_r')
axarr[0, 1].imshow(data.iloc[99].values.reshape(8,8), cmap='gray_r')
axarr[0, 2].imshow(data.iloc[199].values.reshape(8,8), cmap='gray_r')
axarr[0, 3].imshow(data.iloc[299].values.reshape(8,8), cmap='gray_r')
axarr[1, 0].imshow(data.iloc[999].values.reshape(8,8), cmap='gray_r')
axarr[1, 1].imshow(data.iloc[1099].values.reshape(8,8), cmap='gray_r')
axarr[1, 2].imshow(data.iloc[1199].values.reshape(8,8), cmap='gray_r')
axarr[1, 3].imshow(data.iloc[1299].values.reshape(8,8), cmap='gray_r')
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import KFold
# 4-fold cross-validation with KNN
def train_knn(nneighbors, train_features, train_labels):
knn = KNeighborsClassifier(n_neighbors = nneighbors)
knn.fit(train_features, train_labels)
return knn
def test(model, test_features, test_labels):
predictions = model.predict(test_features)
train_test_df = pd.DataFrame()
train_test_df['correct_label'] = test_labels
train_test_df['predicted_label'] = predictions
overall_accuracy = sum(train_test_df["predicted_label"] == train_test_df["correct_label"])/len(train_test_df)
return overall_accuracy
def cross_validate(k):
fold_accuracies = []
    kf = KFold(n_splits=4, shuffle=True, random_state=2)
for train_index, test_index in kf.split(data):
train_features, test_features = data.loc[train_index], data.loc[test_index]
train_labels, test_labels = labels.loc[train_index], labels.loc[test_index]
model = train_knn(k, train_features, train_labels)
overall_accuracy = test(model, test_features, test_labels)
fold_accuracies.append(overall_accuracy)
return fold_accuracies
knn_one_accuracies = cross_validate(1)
np.mean(knn_one_accuracies)
k_values = list(range(1,10))
k_overall_accuracies = []
for k in k_values:
k_accuracies = cross_validate(k)
k_mean_accuracy = np.mean(k_accuracies)
k_overall_accuracies.append(k_mean_accuracy)
plt.figure(figsize=(8,4))
plt.title("Mean Accuracy vs. k")
plt.plot(k_values, k_overall_accuracies)
#Neural Network With One Hidden Layer
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import KFold
# 4-fold cross-validation
def train_nn(neuron_arch, train_features, train_labels):
mlp = MLPClassifier(hidden_layer_sizes=neuron_arch)
mlp.fit(train_features, train_labels)
return mlp
def test(model, test_features, test_labels):
predictions = model.predict(test_features)
train_test_df = pd.DataFrame()
train_test_df['correct_label'] = test_labels
train_test_df['predicted_label'] = predictions
overall_accuracy = sum(train_test_df["predicted_label"] == train_test_df["correct_label"])/len(train_test_df)
return overall_accuracy
def cross_validate(neuron_arch):
fold_accuracies = []
    kf = KFold(n_splits=4, shuffle=True, random_state=2)
for train_index, test_index in kf.split(data):
train_features, test_features = data.loc[train_index], data.loc[test_index]
train_labels, test_labels = labels.loc[train_index], labels.loc[test_index]
model = train_nn(neuron_arch, train_features, train_labels)
overall_accuracy = test(model, test_features, test_labels)
fold_accuracies.append(overall_accuracy)
return fold_accuracies
from sklearn.neural_network import MLPClassifier
nn_one_neurons = [
(8,),
(16,),
(32,),
(64,),
(128,),
(256,)
]
nn_one_accuracies = []
for n in nn_one_neurons:
nn_accuracies = cross_validate(n)
nn_mean_accuracy = np.mean(nn_accuracies)
nn_one_accuracies.append(nn_mean_accuracy)
plt.figure(figsize=(8,4))
plt.title("Mean Accuracy vs. Neurons In Single Hidden Layer")
x = [i[0] for i in nn_one_neurons]
plt.plot(x, nn_one_accuracies)
# Neural Network With Two Hidden Layers
nn_two_neurons = [
(64,64),
(128, 128),
(256, 256)
]
nn_two_accuracies = []
for n in nn_two_neurons:
nn_accuracies = cross_validate(n)
nn_mean_accuracy = np.mean(nn_accuracies)
nn_two_accuracies.append(nn_mean_accuracy)
plt.figure(figsize=(8,4))
plt.title("Mean Accuracy vs. Neurons In Two Hidden Layers")
x = [i[0] for i in nn_two_neurons]
plt.plot(x, nn_two_accuracies)
nn_two_accuracies
#Neural Network With Three Hidden Layers
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import KFold
# 6-fold cross-validation
def train_nn(neuron_arch, train_features, train_labels):
mlp = MLPClassifier(hidden_layer_sizes=neuron_arch)
mlp.fit(train_features, train_labels)
return mlp
def test(model, test_features, test_labels):
predictions = model.predict(test_features)
train_test_df = pd.DataFrame()
train_test_df['correct_label'] = test_labels
train_test_df['predicted_label'] = predictions
overall_accuracy = sum(train_test_df["predicted_label"] == train_test_df["correct_label"])/len(train_test_df)
return overall_accuracy
def cross_validate_six(neuron_arch):
fold_accuracies = []
    kf = KFold(n_splits=6, shuffle=True, random_state=2)
for train_index, test_index in kf.split(data):
train_features, test_features = data.loc[train_index], data.loc[test_index]
train_labels, test_labels = labels.loc[train_index], labels.loc[test_index]
model = train_nn(neuron_arch, train_features, train_labels)
overall_accuracy = test(model, test_features, test_labels)
fold_accuracies.append(overall_accuracy)
return fold_accuracies
nn_three_neurons = [
(10, 10, 10),
(64, 64, 64),
(128, 128, 128)
]
nn_three_accuracies = []
for n in nn_three_neurons:
nn_accuracies = cross_validate_six(n)
nn_mean_accuracy = np.mean(nn_accuracies)
nn_three_accuracies.append(nn_mean_accuracy)
plt.figure(figsize=(8,4))
plt.title("Mean Accuracy vs. Neurons In Three Hidden Layers")
x = [i[0] for i in nn_three_neurons]
plt.plot(x, nn_three_accuracies)
nn_three_accuracies
```
#Image Classification with PyTorch
```
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.autograd import Variable
import torchvision
from torchvision import datasets, transforms
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
train_loader = torch.utils.data.DataLoader(
datasets.MNIST('./data', train=True, download=True,
transform=transforms.Compose([
transforms.ToTensor()
])),
batch_size=32, shuffle=False)
test_loader = torch.utils.data.DataLoader(
datasets.MNIST('./data', train=False,
transform=transforms.Compose([
transforms.ToTensor()
])),
batch_size=32, shuffle=False)
class BasicNN(nn.Module):
def __init__(self):
super(BasicNN, self).__init__()
self.net = nn.Linear(28 * 28, 10)
def forward(self, x):
batch_size = x.size(0)
x = x.view(batch_size, -1)
output = self.net(x)
        return F.softmax(output, dim=1)
model = BasicNN()
optimizer = optim.SGD(model.parameters(), lr=0.001)
def test():
total_loss = 0
correct = 0
for image, label in test_loader:
image, label = Variable(image), Variable(label)
output = model(image)
total_loss += F.cross_entropy(output, label)
correct += (torch.max(output, 1)[1].view(label.size()).data == label.data).sum()
    total_loss = total_loss.item() / len(test_loader)
accuracy = correct / len(test_loader.dataset)
return total_loss, accuracy
def train():
model.train()
for image, label in train_loader:
image, label = Variable(image), Variable(label)
optimizer.zero_grad()
output = model(image)
loss = F.cross_entropy(output, label)
loss.backward()
optimizer.step()
best_test_loss = None
for e in range(1, 150):
train()
test_loss, test_accuracy = test()
print("\n[Epoch: %d] Test Loss:%5.5f Test Accuracy:%5.5f" % (e, test_loss, test_accuracy))
# Save the model if the test_loss is the lowest
if not best_test_loss or test_loss < best_test_loss:
best_test_loss = test_loss
else:
break
print("\nFinal Results\n-------------\n""Loss:", best_test_loss, "Test Accuracy: ", test_accuracy)
```
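The loop above only remembers the best loss value; if you also want to keep the weights from that epoch, a minimal variant (the checkpoint filename here is an assumption) is:
```
best_test_loss = None
for e in range(1, 150):
    train()
    test_loss, test_accuracy = test()
    if not best_test_loss or test_loss < best_test_loss:
        best_test_loss = test_loss
        torch.save(model.state_dict(), "basic_nn_best.pt")  # checkpoint the improved model
    else:
        break

model.load_state_dict(torch.load("basic_nn_best.pt"))       # restore the best weights
```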
| github_jupyter |
<h1>REGIONE LOMBARDIA</h1>
Comparison of the deaths recorded by ISTAT and the COVID-19 deaths recorded by the Italian Civil Protection Department with the deaths predicted by the SARIMA model.
<h2>MONTHLY DEATHS IN THE LOMBARDIA REGION (ISTAT)</h2>
The DataFrame contains the monthly deaths of the <b>Lombardia</b> region from <b>2015</b> to <b>30 September 2020</b>.
```
import matplotlib.pyplot as plt
import pandas as pd
decessi_istat = pd.read_csv('../../csv/regioni/lombardia.csv')
decessi_istat.head()
decessi_istat['DATA'] = pd.to_datetime(decessi_istat['DATA'])
decessi_istat.TOTALE = pd.to_numeric(decessi_istat.TOTALE)
```
<h3>Extracting the data for the COVID-19 period</h3>
```
decessi_istat = decessi_istat[decessi_istat['DATA'] > '2020-02-29']
decessi_istat.head()
```
<h3>Building the time series of ISTAT deaths</h3>
```
decessi_istat = decessi_istat.set_index('DATA')
decessi_istat = decessi_istat.TOTALE
decessi_istat
```
<h2>MONTHLY COVID-19 DEATHS IN THE LOMBARDIA REGION</h2>
The DataFrame contains the data provided by the Civil Protection Department on the monthly deaths of the <b>Lombardia</b> region from <b>March 2020</b> to <b>30 September 2020</b>.
```
covid = pd.read_csv('../../csv/regioni_covid/lombardia.csv')
covid.head()
covid['data'] = pd.to_datetime(covid['data'])
covid.deceduti = pd.to_numeric(covid.deceduti)
covid = covid.set_index('data')
covid.head()
```
<h3>Building the time series of COVID-19 deaths</h3>
```
covid = covid.deceduti
```
<h2>PREDICTED MONTHLY DEATHS FOR THE REGION ACCORDING TO THE SARIMA MODEL</h2>
The DataFrame contains the monthly deaths of the <b>Lombardia</b> region as predicted by the fitted SARIMA model.
```
predictions = pd.read_csv('../../csv/pred/predictions_SARIMA_lombardia.csv')
predictions.head()
predictions.rename(columns={'Unnamed: 0': 'Data', 'predicted_mean':'Totale'}, inplace=True)
predictions.head()
predictions['Data'] = pd.to_datetime(predictions['Data'])
predictions.Totale = pd.to_numeric(predictions.Totale)
```
<h3>Extracting the data for the COVID-19 period</h3>
```
predictions = predictions[predictions['Data'] > '2020-02-29']
predictions.head()
predictions = predictions.set_index('Data')
predictions.head()
```
<h3>Building the time series of the model-predicted deaths</h3>
```
predictions = predictions.Totale
```
<h1>CONFIDENCE INTERVALS</h1>
<h3>Upper bound</h3>
```
upper = pd.read_csv('../../csv/upper/predictions_SARIMA_lombardia_upper.csv')
upper.head()
upper.rename(columns={'Unnamed: 0': 'Data', 'upper TOTALE':'Totale'}, inplace=True)
upper['Data'] = pd.to_datetime(upper['Data'])
upper.Totale = pd.to_numeric(upper.Totale)
upper.head()
upper = upper[upper['Data'] > '2020-02-29']
upper = upper.set_index('Data')
upper.head()
upper = upper.Totale
```
<h3>Lower bound
```
lower = pd.read_csv('../../csv/lower/predictions_SARIMA_lombardia_lower.csv')
lower.head()
lower.rename(columns={'Unnamed: 0': 'Data', 'lower TOTALE':'Totale'}, inplace=True)
lower['Data'] = pd.to_datetime(lower['Data'])
lower.Totale = pd.to_numeric(lower.Totale)
lower.head()
lower = lower[lower['Data'] > '2020-02-29']
lower = lower.set_index('Data')
lower.head()
lower = lower.Totale
```
<h1> COMPARISON OF THE TIME SERIES </h1>
Below is the graphical comparison of the time series of the <b>total monthly deaths</b>, the <b>COVID-19 deaths</b> and the <b>deaths predicted by the SARIMA model</b> for the <b>Lombardia</b> region.
<br />
The reference months are: <b>March</b>, <b>April</b>, <b>May</b>, <b>June</b>, <b>July</b>, <b>August</b> and <b>September</b>.
```
plt.figure(figsize=(15,4))
plt.title('LOMBARDIA - Confronto decessi totali, decessi causa covid e decessi del modello predittivo', size=18)
plt.plot(covid, label='decessi accertati covid')
plt.plot(decessi_istat, label='decessi totali')
plt.plot(predictions, label='predizione modello')
plt.legend(prop={'size': 12})
plt.show()
plt.figure(figsize=(15,4))
plt.title("LOMBARDIA - Confronto decessi totali ISTAT con decessi previsti dal modello", size=18)
plt.plot(predictions, label='predizione modello')
plt.plot(upper, label='limite massimo')
plt.plot(lower, label='limite minimo')
plt.plot(decessi_istat, label='decessi totali')
plt.legend(prop={'size': 12})
plt.show()
```
<h2>COVID-19 deaths according to the predictive model</h2>
Difference between the total deaths released by ISTAT and the deaths predicted by the SARIMA model.
```
n = decessi_istat - predictions
n_upper = decessi_istat - lower
n_lower = decessi_istat - upper
plt.figure(figsize=(15,4))
plt.title("LOMBARDIA - Confronto decessi accertati covid con decessi covid previsti dal modello", size=18)
plt.plot(covid, label='decessi covid accertati - Protezione Civile')
plt.plot(n, label='devessi covid previsti - modello SARIMA')
plt.plot(n_upper, label='limite massimo - modello SARIMA')
plt.plot(n_lower, label='limite minimo - modello SARIMA')
plt.legend(prop={'size': 12})
plt.show()
```
The <b>intervals</b> correspond to the difference between the total deaths provided by ISTAT for the months of March, April, May and June 2020 and the values of the <b>confidence intervals</b> (upper and lower bound) of the SARIMA predictive model for the same months.
```
d = decessi_istat.sum()
print("Decessi 2020:", d)
d_m = predictions.sum()
print("Decessi attesi dal modello 2020:", d_m)
d_lower = lower.sum()
print("Decessi attesi dal modello 2020 - livello mimino:", d_lower)
```
<h3>Total number of confirmed COVID-19 deaths for the Lombardia region</h3>
```
m = covid.sum()
print(int(m))
```
<h3>Total number of COVID-19 deaths predicted by the model for the Lombardia region</h3>
<h4>Mean value
```
total = n.sum()
print(int(total))
```
<h4>Maximum value
```
total_upper = n_upper.sum()
print(int(total_upper))
```
<h4>Minimum value
```
total_lower = n_lower.sum()
print(int(total_lower))
```
<h3>Number of COVID-19 deaths not recorded, according to the SARIMA predictive model, for the Lombardia region</h3>
<h4>Mean value
```
x = decessi_istat - predictions - covid
x = x.sum()
print(int(x))
```
<h4>Maximum value
```
x_upper = decessi_istat - lower - covid
x_upper = x_upper.sum()
print(int(x_upper))
```
<h4>Minimum value
```
x_lower = decessi_istat - upper - covid
x_lower = x_lower.sum()
print(int(x_lower))
```
| github_jupyter |
# Self-Driving Car Engineer Nanodegree
## Project: **Finding Lane Lines on the Road**
***
In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below.
Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.
In addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) that can be used to guide the writing process. Completing both the code in the Ipython notebook and the writeup template will cover all of the [rubric points](https://review.udacity.com/#!/rubrics/322/view) for this project.
---
Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.
**Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".**
---
**The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Tranform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is to piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.**
---
<figure>
<img src="examples/line-segments-example.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your output should look something like this (above) after detecting line segments using the helper functions below </p>
</figcaption>
</figure>
<p></p>
<figure>
<img src="examples/laneLines_thirdPass.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your goal is to connect/average/extrapolate line segments to get output like this</p>
</figcaption>
</figure>
**Run the cell below to import some packages. If you get an `import error` for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.**
## Import Packages
```
#importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
%matplotlib inline
```
## Read in an Image
```
#reading in an image
image = mpimg.imread('test_images/solidWhiteRight.jpg')
#printing out some stats and plotting
print('This image is:', type(image), 'with dimensions:', image.shape)
plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')
```
## Ideas for Lane Detection Pipeline
**Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:**
`cv2.inRange()` for color selection
`cv2.fillPoly()` for regions selection
`cv2.line()` to draw lines on an image given endpoints
`cv2.addWeighted()` to coadd / overlay two images
`cv2.cvtColor()` to grayscale or change color
`cv2.imwrite()` to output images to file
`cv2.bitwise_and()` to apply a mask to an image
**Check out the OpenCV documentation to learn about these and discover even more awesome functionality!**
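As a rough sketch of how a couple of these calls can be combined for color selection on the RGB image loaded above (the threshold values are illustrative, not tuned):
```
import numpy as np
import cv2
import matplotlib.pyplot as plt

# keep only near-white pixels, then mask the original image with them
white_mask = cv2.inRange(image, np.array([200, 200, 200]), np.array([255, 255, 255]))
white_only = cv2.bitwise_and(image, image, mask=white_mask)
plt.imshow(white_only)
```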
## Helper Functions
Below are some helper functions to help get you started. They should look familiar from the lesson!
```
import math
from scipy import stats
def grayscale(img):
"""Applies the Grayscale transform
This will return an image with only one color channel
but NOTE: to see the returned image as grayscale
(assuming your grayscaled image is called 'gray')
you should call plt.imshow(gray, cmap='gray')"""
return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# Or use BGR2GRAY if you read an image with cv2.imread()
# return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
def canny(img, low_threshold, high_threshold):
"""Applies the Canny transform"""
return cv2.Canny(img, low_threshold, high_threshold)
def gaussian_blur(img, kernel_size):
"""Applies a Gaussian Noise kernel"""
return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
def region_of_interest(img, vertices):
"""
Applies an image mask.
Only keeps the region of the image defined by the polygon
formed from `vertices`. The rest of the image is set to black.
`vertices` should be a numpy array of integer points.
"""
#defining a blank mask to start with
mask = np.zeros_like(img)
#defining a 3 channel or 1 channel color to fill the mask with depending on the input image
if len(img.shape) > 2:
channel_count = img.shape[2] # i.e. 3 or 4 depending on your image
ignore_mask_color = (255,) * channel_count
else:
ignore_mask_color = 255
#filling pixels inside the polygon defined by "vertices" with the fill color
cv2.fillPoly(mask, vertices, ignore_mask_color)
#returning the image only where mask pixels are nonzero
masked_image = cv2.bitwise_and(img, mask)
return masked_image
def draw_lines(img, lines, color=[255, 0, 0], thickness=2):
"""
NOTE: this is the function you might want to use as a starting point once you want to
average/extrapolate the line segments you detect to map out the full
extent of the lane (going from the result shown in raw-lines-example.mp4
to that shown in P1_example.mp4).
Think about things like separating line segments by their
slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left
line vs. the right line. Then, you can average the position of each of
the lines and extrapolate to the top and bottom of the lane.
This function draws `lines` with `color` and `thickness`.
Lines are drawn on the image inplace (mutates the image).
If you want to make the lines semi-transparent, think about combining
this function with the weighted_img() function below
"""
# for line in lines:
# for x1,y1,x2,y2 in line:
# cv2.line(img, (x1, y1), (x2, y2), color, thickness)
sizeY = img.shape[0]
sizeX = img.shape[1]
pointsLeft = []
pointsRight = []
for line in lines:
for x1,y1,x2,y2 in line:
#cv2.line(img, (x1 , y1) , (x2 , y2) , [0, 255, 0], thickness)
# Gets the midpoint of a line
posX = (x1 + x2) * 0.5
posY = (y1 + y2) * 0.5
# Determines whether the midpoint is loaded on the right or left side of the image and classifies it
if posX < sizeX * 0.5 :
pointsLeft.append((posX, posY))
else:
pointsRight.append((posX, posY))
# Get m and b from linear regression
left = stats.linregress(pointsLeft)
right = stats.linregress(pointsRight)
left_m = left.slope
right_m = right.slope
left_b = left.intercept
right_b = right.intercept
# Define the points of left line x = (y - b) / m
left_y1 = int(sizeY)
left_x1 = int((left_y1 - left_b) / left_m)
left_y2 = int(sizeY * 0.6)
left_x2 = int((left_y2 - left_b) / left_m)
# Define the points of right line x = (y - b) / m
right_y1 = int(sizeY)
right_x1 = int((right_y1 - right_b) / right_m)
right_y2 = int(sizeY * 0.6)
right_x2 = int((right_y2 - right_b) / right_m)
# Draw two lane lines
cv2.line(img, (left_x1 , left_y1 ) , (left_x2 , left_y2 ) , color, thickness)
cv2.line(img, (right_x1 , right_y1) , (right_x2 , right_y2) , color, thickness)
def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):
"""
`img` should be the output of a Canny transform.
Returns an image with hough lines drawn.
"""
lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
draw_lines(line_img, lines)
return line_img
# Python 3 has support for cool math symbols.
def weighted_img(img, initial_img, α=0.8, β=1., γ=0.):
"""
`img` is the output of the hough_lines(), An image with lines drawn on it.
Should be a blank image (all black) with lines drawn on it.
`initial_img` should be the image before any processing.
The result image is computed as follows:
initial_img * α + img * β + γ
NOTE: initial_img and img must be the same shape!
"""
return cv2.addWeighted(initial_img, α, img, β, γ)
```
## Test Images
Build your pipeline to work on the images in the directory "test_images"
**You should make sure your pipeline works well on these images before you try the videos.**
```
import os
os.listdir("test_images/")
```
## Build a Lane Finding Pipeline
Build the pipeline and run your solution on all test_images. Make copies into the `test_images_output` directory, and you can use the images in your writeup report.
Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.
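One possible way to batch-process every test image and save the results into `test_images_output` (a sketch assuming the `process_image` pipeline defined in the video section below has already been run):
```
import os
import cv2
import matplotlib.image as mpimg

os.makedirs("test_images_output", exist_ok=True)
for name in os.listdir("test_images/"):
    img = mpimg.imread(os.path.join("test_images", name))
    result = process_image(img)
    # mpimg reads RGB while cv2.imwrite expects BGR, so convert before saving
    cv2.imwrite(os.path.join("test_images_output", name),
                cv2.cvtColor(result, cv2.COLOR_RGB2BGR))
```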
```
# TODO: Build your pipeline that will draw lane lines on the test_images
# then save them to the test_images_output directory.
image = mpimg.imread("test_images/"+os.listdir("test_images/")[4])
weighted_image = process_image(image)
plt.imshow(weighted_image)
```
## Test on Videos
You know what's cooler than drawing lanes over images? Drawing lanes over video!
We can test our solution on two provided videos:
`solidWhiteRight.mp4`
`solidYellowLeft.mp4`
**Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.**
**If you get an error that looks like this:**
```
NeedDownloadError: Need ffmpeg exe.
You can download it by calling:
imageio.plugins.ffmpeg.download()
```
**Follow the instructions in the error message and check out [this forum post](https://discussions.udacity.com/t/project-error-of-test-on-videos/274082) for more troubleshooting tips across operating systems.**
```
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
def process_image(image):
# NOTE: The output you return should be a color image (3 channel) for processing video below
# TODO: put your pipeline here,
# you should return the final output (image where lines are drawn on lanes)
gray = grayscale(image)
kernel_size = 9
blur_gray = gaussian_blur(gray, kernel_size)
low_threshold = 100
high_threshold = 150
edges = canny(blur_gray, low_threshold, high_threshold)
ysize = image.shape[0]
xsize = image.shape[1]
vertices = np.array([[(xsize * 0.10 , ysize * 0.90),
(xsize * 0.46 , ysize * 0.60),
(xsize * 0.54 , ysize * 0.60),
(xsize * 0.90 , ysize * 0.90)]], dtype=np.int32)
# imshape = image.shape
# vertices = np.array([[(0,imshape[0]),(0, 0), (imshape[1], 0), (imshape[1],imshape[0])]], dtype=np.int32)
# vertices = np.array([[(0,imshape[0]),(450, 320), (490, 320), (imshape[1],imshape[0])]], dtype=np.int32)
masked_edges = region_of_interest(edges, vertices)
rho = 2 # distance resolution in pixels of the Hough grid
theta = np.pi/180 # angular resolution in radians of the Hough grid
threshold = 10 # minimum number of votes (intersections in Hough grid cell)
min_line_len = 5 #minimum number of pixels making up a line
max_line_gap = 5 # maximum gap in pixels between connectable line segments
line_image = np.copy(image)*0 # creating a blank to draw lines on
line_img = hough_lines(masked_edges, rho, theta, threshold, min_line_len, max_line_gap)
weighted_image = weighted_img(line_img, image)
return weighted_image
```
Let's try the one with the solid white lane on the right first ...
```
white_output = 'test_videos_output/solidWhiteRight.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5)
clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False)
```
Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
```
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(white_output))
```
## Improve the draw_lines() function
**At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".**
**Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.**
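The `draw_lines()` above splits segments by the x-position of their midpoints; a sketch of the slope-based separation suggested here (an alternative, not the implementation used in this notebook) could look like this inside `draw_lines()`:
```
# classify Hough segments by slope sign (y grows downward in image coordinates)
left_pts, right_pts = [], []
for line in lines:
    for x1, y1, x2, y2 in line:
        if x2 == x1:
            continue                       # skip vertical segments
        slope = (y2 - y1) / (x2 - x1)
        if slope < -0.3:                   # negative slope -> left lane line
            left_pts += [(x1, y1), (x2, y2)]
        elif slope > 0.3:                  # positive slope -> right lane line
            right_pts += [(x1, y1), (x2, y2)]
# left_pts / right_pts can then be fit and extrapolated as above
```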
Now for the one with the solid yellow lane on the left. This one's more tricky!
```
yellow_output = 'test_videos_output/solidYellowLeft.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4').subclip(0,5)
clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
%time yellow_clip.write_videofile(yellow_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(yellow_output))
```
## Writeup and Submission
If you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this Ipython notebook ready along with the writeup, it's time to submit for review! Here is a [link](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) to the writeup template file.
## Optional Challenge
Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!
```
challenge_output = 'test_videos_output/challenge.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
# clip3 = VideoFileClip('test_videos/challenge.mp4').subclip(0,5)
clip3 = VideoFileClip('test_videos/challenge.mp4')
challenge_clip = clip3.fl_image(process_image)
%time challenge_clip.write_videofile(challenge_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(challenge_output))
```
| github_jupyter |
```
from imp import reload
import autoargs; reload(autoargs);
```
## argparse made easy!
```
# pass your function and args from your sys.argv, and you're off to the races!
def myprint(arg1, arg2):
print("arg1:", arg1)
print("arg2:", arg2)
autoargs.autocall(myprint, ["first", "second"])
# if you want your arguments to be types, use any function that expects a string
# and returns the type you want in your arg annotation
def str_repeat(s: str, n: int):
print((s * n).strip())
autoargs.autocall(str_repeat, ["args are easy!\n", "3"])
# if your args value is a string, it gets split using shlex
autoargs.autocall(str_repeat, "'still easy!\n' 3")
import functools
import operator
# varargs are supported too!
def product(*args: float):
return functools.reduce(operator.mul, args, 1.0)
print(autoargs.autocall(product, ["5", "10", "0.5"]))
def join(delimiter, *args):
return delimiter.join(args)
print(autoargs.autocall(join, [", ", "pretty easy", "right?"]))
def aggregate(*args: float, op: {'sum', 'mul'}):
if op == "sum":
return sum(args)
elif op == "mul":
return product(*args)
autoargs.autocall(aggregate, ["--help"])
# kwargs are supported using command-line syntax
def land_of_defaults(a="default-a", argb="b default"):
print(a, argb)
autoargs.autocall(land_of_defaults, []) # => "" (no args in call)
autoargs.autocall(land_of_defaults, ['-aOverride!']) # => "-aOverride!"
autoargs.autocall(land_of_defaults, ['-a', 'Override!']) # => "-a Override!"
autoargs.autocall(land_of_defaults, ['--argb', 'Override!']) # => "--argb Override!"
# warning! if an argument has a default, it can only be given via this kwarg syntax
# if you want to require a kwarg, use a kwonly-arg
def required_arg(normal, default="boring", *, required):
print(normal, default, required)
autoargs.autocall(required_arg, ["normal", "--required", "val"])
autoargs.autocall(required_arg, ["normal"])
```
### Invalid Arg Handling
Speaking of errors, invalid arguments are caught by the parser, so you get the same kind of error messages a user would expect from a CLI interface.
```
def oops(arg: int):
return "%s is an integer!" % arg
autoargs.autocall(oops, [])
autoargs.autocall(oops, ["spam"])
autoargs.autocall(oops, ["20", "spam"])
```
## parser
```
# if you want access to the parser, go right ahead!
parser = autoargs.autoparser(myprint)
parser
parsed = parser.parse_args(["first", "second"])
parsed
vars(parsed)
```
## todo:
- parsing a whole module/object (fns become subparsers)
- using autoargs to call other module's fns from command line
- setup.py
- add to pypi
- proper docs
- all of the above with appropriate testing
stay tuned for these and (potentially) other ideas! feel free to add issues
| github_jupyter |
## Convolutional Layer
In this notebook, we visualize four filtered outputs (a.k.a. feature maps) of a convolutional layer.
### Import the image
```
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'images/udacity_sdc.png'
#img_path = 'C:/Users/oanag/Pictures/2019/FranceCoteDAzur_2019-04-26/FranceCoteDAzur-134.JPG'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
```
### Define and visualize the filters
```
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
#nicely print matrix
print(filter_vals)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# Print out the values of all four filters
print(filters)
### do not modify the code below this line ###
# visualize all four filters
fig = plt.figure(figsize=(10, 5))
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
width, height = filters[i].shape
for x in range(width):
for y in range(height):
ax.annotate(str(filters[i][x][y]), xy=(y,x),
horizontalalignment='center',
verticalalignment='center',
color='white' if filters[i][x][y]<0 else 'black')
```
### Define a convolutional layer
Initialize a single convolutional layer so that it contains all your created filters. Note that you are not training this network; you are initializing the weights in a convolutional layer so that you can visualize what happens after a forward pass through this network!
```
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a single convolutional layer with four filters
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# returns both layers
return conv_x, activated_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
```
### Visualize the output of each filter
First, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
```
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1, xticks=[], yticks=[])
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
```
Let's look at the output of a convolutional layer, before and after a ReLU activation function is applied.
```
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get the convolutional layer (pre and post activation)
conv_layer, activated_layer = model(gray_img_tensor)
# visualize the output of a conv layer
viz_layer(conv_layer)
# after a ReLU is applied
# visualize the output of an activated conv layer
viz_layer(activated_layer)
```
| github_jupyter |
# A Chaos Game with Triangles
John D. Cook [proposed](https://www.johndcook.com/blog/2017/07/08/the-chaos-game-and-the-sierpinski-triangle/) an interesting "game" from the book *[Chaos and Fractals](https://smile.amazon.com/Chaos-Fractals-New-Frontiers-Science/dp/0387202293)*: start at a vertex of an equilateral triangle. Then move to a new point halfway between the current point and one of the three vertexes of the triangle, chosen at random. Repeat to create *N* points, and plot them. What do you get?
I'll refactor Cook's code a bit and then we'll see:
```
import matplotlib.pyplot as plt
import random
def random_walk(vertexes, N):
"Walk halfway from current point towards a random vertex; repeat for N points."
points = [random.choice(vertexes)]
for _ in range(N-1):
points.append(midpoint(points[-1], random.choice(vertexes)))
return points
def show_walk(vertexes, N=5000):
"Walk halfway towards a random vertex for N points; show reults."
Xs, Ys = transpose(random_walk(vertexes, N))
Xv, Yv = transpose(vertexes)
plt.plot(Xs, Ys, 'r.')
plt.plot(Xv, Yv, 'bs')
plt.gca().set_aspect('equal')
plt.gcf().set_size_inches(9, 9)
plt.axis('off')
plt.show()
def midpoint(p, q): return ((p[0] + q[0])/2, (p[1] + q[1])/2)
def transpose(matrix): return zip(*matrix)
triangle = ((0, 0), (0.5, (3**0.5)/2), (1, 0))
show_walk(triangle, 20)
```
OK, the first 20 points don't tell me much. What if I try 20,000 points?
```
show_walk(triangle, 20000)
```
Wow! The [Sierpinski Triangle](https://en.wikipedia.org/wiki/Sierpinski_triangle)!
What happens if we start with a different set of vertexes, like a square?
```
square = ((0, 0), (0, 1), (1, 0), (1, 1))
show_walk(square)
```
There doesn't seem to be any structure there. Let's try again to make sure:
```
show_walk(square, 20000)
```
I'm still not seeing anything but random points. How about a right triangle?
```
right_triangle = ((0, 0), (0, 1), (1, 0))
show_walk(right_triangle, 20000)
```
We get a squished Sierpinski triangle. How about a pentagon? (I'm lazy so I had Wolfram Alpha [compute the vertexes](https://www.wolframalpha.com/input/?i=vertexes+of+regular+pentagon).)
```
pentagon = ((0.5, -0.688), (0.809, 0.262), (0., 0.850), (-0.809, 0.262), (-0.5, -0.688))
show_walk(pentagon)
```
To clarify, let's try again with different numbers of points:
```
show_walk(pentagon, 10000)
show_walk(pentagon, 20000)
```
I definitely see a central hole, and five secondary holes surrounding that, and then, maybe 15 holes surrounding those? Or maybe not 15; hard to tell. Is a "Sierpinski Pentagon" a thing? I hadn't heard of it but a [quick search](https://www.google.com/search?q=sierpinski+pentagon) reveals that yes indeed, it is [a thing](http://ecademy.agnesscott.edu/~lriddle/ifs/pentagon/sierngon.htm), and it does have 15 holes surrounding the 5 holes. Let's try the hexagon:
```
hexagon = ((0.5, -0.866), (1, 0), (0.5, 0.866), (-0.5, 0.866), (-1, 0), (-0.5, -0.866))
show_walk(hexagon)
show_walk(hexagon, 20000)
```
You can see a little of the six-fold symmetry, but it is not as clear as the triangle and pentagon.
| github_jupyter |
# Part 2: Intro to Private Training with Remote Execution
In the last section, we learned about PointerTensors, which create the underlying infrastructure we need for privacy preserving Deep Learning. In this section, we're going to see how to use these basic tools to train our first deep learning model using remote execution.
Authors:
- Yann Dupis - Twitter: [@YannDupis](https://twitter.com/YannDupis)
- Andrew Trask - Twitter: [@iamtrask](https://twitter.com/iamtrask)
### Why use remote execution?
Let's say you are an AI startup who wants to build a deep learning model to detect [diabetic retinopathy (DR)](https://ai.googleblog.com/2016/11/deep-learning-for-detection-of-diabetic.html), which is the fastest growing cause of blindness. Before training your model, the first step would be to acquire a dataset of retinopathy images with signs of DR. One approach could be to work with a hospital and ask them to send you a copy of this dataset. However because of the sensitivity of the patients' data, the hospital might be exposed to liability risks.
That's where remote execution comes into the picture. Instead of bringing training data to the model (a central server), you bring the model to the training data (wherever it may live). In this case, it would be the hospital.
The idea is that this allows whoever is creating the data to own the only permanent copy, and thus maintain control over who ever has access to it. Pretty cool, eh?
# Section 2.1 - Private Training on MNIST
For this tutorial, we will train a model on the [MNIST dataset](http://yann.lecun.com/exdb/mnist/) to classify digits based on images.
We can assume that we have a remote worker named Bob who owns the data.
```
import tensorflow as tf
import syft as sy
hook = sy.TensorFlowHook(tf)
bob = sy.VirtualWorker(hook, id="bob")
```
Let's download the MNIST data from `tf.keras.datasets`. Note that we are converting the data from numpy to `tf.Tensor` so that we can use the PySyft functionality.
```
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
x_train, y_train = tf.convert_to_tensor(x_train), tf.convert_to_tensor(y_train)
x_test, y_test = tf.convert_to_tensor(x_test), tf.convert_to_tensor(y_test)
```
As described in Part 1, we can send this data to Bob with the `send` method on the `tf.Tensor`.
```
x_train_ptr = x_train.send(bob)
y_train_ptr = y_train.send(bob)
```
Excellent! We have everything to start experimenting. To train our model on Bob's machine, we just have to perform the following steps:
- Define a model, including optimizer and loss
- Send the model to Bob
- Start the training process
- Get the trained model back
Let's do it!
```
# Define the model
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation='softmax')
])
# Compile with optimizer, loss and metrics
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
```
Once you have defined your model, you can simply send it to Bob calling the `send` method. It's the exact same process as sending a tensor.
```
model_ptr = model.send(bob)
model_ptr
```
Now, we have a pointer pointing to the model on Bob's machine. We can validate that's the case by inspecting the attribute `_objects` on the virtual worker.
```
bob._objects[model_ptr.id_at_location]
```
Everything is ready to start training our model on this remote dataset. You can call `fit` and pass `x_train_ptr` `y_train_ptr` which are pointing to Bob's data. Note that's the exact same interface as normal `tf.keras`.
```
model_ptr.fit(x_train_ptr, y_train_ptr, epochs=2, validation_split=0.2)
```
Fantastic! You have trained your model, achieving an accuracy greater than 95%.
You can get your trained model back by just calling `get` on it.
```
model_gotten = model_ptr.get()
model_gotten
```
It's good practice to see if your model can generalize by assessing its accuracy on a holdout dataset. You can simply call `evaluate`.
```
model_gotten.evaluate(x_test, y_test, verbose=2)
```
Boom! The model remotely trained on Bob's data is more than 95% accurate on this holdout dataset.
If your model doesn't fit into the Sequential paradigm, you can use Keras's functional API, or even subclass [tf.keras.Model](https://www.tensorflow.org/guide/keras/custom_layers_and_models#building_models) to create custom models.
```
class CustomModel(tf.keras.Model):
def __init__(self, num_classes=10):
super(CustomModel, self).__init__(name='custom_model')
self.num_classes = num_classes
self.flatten = tf.keras.layers.Flatten(input_shape=(28, 28))
self.dense_1 = tf.keras.layers.Dense(128, activation='relu')
self.dropout = tf.keras.layers.Dropout(0.2)
self.dense_2 = tf.keras.layers.Dense(num_classes, activation='softmax')
def call(self, inputs, training=False):
x = self.flatten(inputs)
x = self.dense_1(x)
x = self.dropout(x, training=training)
return self.dense_2(x)
model = CustomModel(10)
# need to call the model on dummy data before sending it
# in order to set the input shape (required when saving to SavedModel)
model.predict(tf.ones([1, 28, 28]))
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model_ptr = model.send(bob)
model_ptr.fit(x_train_ptr, y_train_ptr, epochs=2, validation_split=0.2)
```
## Well Done!
And voilà! We have trained a Deep Learning model on Bob's data by sending the model to him. Never in this process do we ever see or request access to the underlying training data! We preserve the privacy of Bob!!!
# Congratulations!!! - Time to Join the Community!
Congratulations on completing this notebook tutorial! If you enjoyed this and would like to join the movement toward privacy preserving, decentralized ownership of AI and the AI supply chain (data), you can do so in the following ways!
### Star PySyft on GitHub
The easiest way to help our community is just by starring the Repos! This helps raise awareness of the cool tools we're building.
- Star PySyft on GitHub! - [https://github.com/OpenMined/PySyft](https://github.com/OpenMined/PySyft)
- Star PySyft-TensorFlow on GitHub! - [https://github.com/OpenMined/PySyft-TensorFlow](https://github.com/OpenMined/PySyft-TensorFlow)
### Join our Slack!
The best way to keep up to date on the latest advancements is to join our community! You can do so by filling out the form at [http://slack.openmined.org](http://slack.openmined.org)
### Join a Code Project!
The best way to contribute to our community is to become a code contributor! At any time you can go to PySyft GitHub Issues page and filter for "Projects". This will show you all the top level Tickets giving an overview of what projects you can join! If you don't want to join a project, but you would like to do a bit of coding, you can also look for more "one off" mini-projects by searching for GitHub issues marked "good first issue".
- [PySyft Projects](https://github.com/OpenMined/PySyft/issues?q=is%3Aopen+is%3Aissue+label%3AProject)
- [Good First Issue Tickets](https://github.com/OpenMined/PySyft/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22)
### Donate
If you don't have time to contribute to our codebase, but would still like to lend support, you can also become a Backer on our Open Collective. All donations go toward our web hosting and other community expenses such as hackathons and meetups!
[OpenMined's Open Collective Page](https://opencollective.com/openmined)
| github_jupyter |
# MIDAS Examples
If you're reading this you probably already know that MIDAS stands for Mixed Data Sampling, and it is a technique for creating time-series forecast models that allows you to mix series of different frequencies (ie, you can use monthly data as predictors for a quarterly series, or daily data as predictors for a monthly series, etc.). The general approach has been described in a series of papers by Ghysels, Santa-Clara, Valkanov and others.
This notebook attempts to recreate some of the examples from the paper [_Forecasting with Mixed Frequencies_](https://research.stlouisfed.org/publications/review/2010/11/01/forecasting-with-mixed-frequencies/) by Michelle T. Armesto, Kristie M. Engemann, and Michael T. Owyang.
```
%matplotlib inline
import datetime
import numpy as np
import pandas as pd
from midas.mix import mix_freq
from midas.adl import estimate, forecast, midas_adl, rmse
```
# MIDAS ADL
This package currently implements the MIDAS ADL (autoregressive distributed lag) method. We'll start with an example using quarterly GDP and monthly payroll data. We'll then show the basic steps in setting up and fitting this type of model, although in practice you'll probably use the top-level __midas_adl__ function to do forecasts.
TODO: MIDAS equation and discussion
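As a rough sketch (this is the textbook form; exact timing and lag conventions vary across papers and may differ from this package's internals), a MIDAS-ADL regression of a low-frequency series $y_t$ on a high-frequency series $x^{(m)}$ sampled $m$ times per low-frequency period, with one autoregressive lag, can be written as

$$ y_t = \beta_0 + \lambda\, y_{t-1} + \beta_1 \sum_{j=1}^{K} w_j(\theta)\, x^{(m)}_{t-j/m} + \varepsilon_t $$

where the $K$ high-frequency lags enter through a low-dimensional weight function $w_j(\theta)$. One common choice is the normalized exponential Almon polynomial

$$ w_j(\theta_1, \theta_2) = \frac{\exp(\theta_1 j + \theta_2 j^2)}{\sum_{k=1}^{K} \exp(\theta_1 k + \theta_2 k^2)} $$

which corresponds to the `poly='expalmon'` option used later in this notebook; the default `poly='beta'` uses a beta-density-based weighting instead.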
# Example 1: GDP vs Non-Farm Payroll
```
gdp = pd.read_csv('../tests/data/gdp.csv', parse_dates=['DATE'], index_col='DATE')
pay = pd.read_csv('../tests/data/pay.csv', parse_dates=['DATE'], index_col='DATE')
gdp.tail()
pay.tail()
```
## Figure 1
This is a variation of Figure 1 from the paper comparing year-over-year growth of GDP and employment.
```
gdp_yoy = ((1. + (np.log(gdp.GDP) - np.log(gdp.GDP.shift(3)))) ** 4) - 1.
emp_yoy = ((1. + (np.log(pay.PAY) - np.log(pay.PAY.shift(1)))) ** 12) - 1.
df = pd.concat([gdp_yoy, emp_yoy], axis=1)
df.columns = ['gdp_yoy', 'emp_yoy']
df[['gdp_yoy','emp_yoy']].loc['1980-1-1':].plot(figsize=(15,4), style=['o','-'])
```
## Mixing Frequencies
The first step is to do the actual frequency mixing. In this case we're mixing monthly data (employment) with quarterly data (GDP). This may sometimes be useful to do directly, but again you'll probably use __midas_adl__ to do forecasting.
```
gdp['gdp_growth'] = (np.log(gdp.GDP) - np.log(gdp.GDP.shift(1))) * 100.
pay['emp_growth'] = (np.log(pay.PAY) - np.log(pay.PAY.shift(1))) * 100.
y, yl, x, yf, ylf, xf = mix_freq(gdp.gdp_growth, pay.emp_growth, "3m", 1, 3,
start_date=datetime.datetime(1985,1,1),
end_date=datetime.datetime(2009,1,1))
x.head()
```
The arguments here are as follows:
- First, the dependent (low frequency) and independent (high-frequency) data are given as Pandas series, and they are assumed to be indexed by date.
- xlag: The number of lags for the high-frequency variable
- ylag: The number of lags for the low-frequency variable (the autoregressive part)
- horizon: How much the high-frequency data is lagged before frequency mixing
- start_date, end_date: The start and end date over which the model is fitted. If these are outside the range of the low-frequency data, they will be adjusted
The _horizon_ argument is a little tricky (the argument name was retained from the MATLAB version). It is used both to align the data and to do _nowcasting_ (more on that later). For example, if it's September 2017 then the latest GDP data from FRED will be for Q2 and this will be dated 2017-04-01. The latest monthly data from non-farm payroll will be for August, which will be dated 2017-08-01. If we aligned just on dates, the payroll data for April (04-01), March (03-01), and February (02-01) would be aligned with Q2 (since xlag = "3m"), but what we want is June, May, and April, so here the horizon argument is 3, indicating that the high-frequency data should be lagged three months before being mixed with the quarterly data.
### Fitting the Model
Because of the form of the MIDAS model, fitting it requires non-linear least squares. For now, if you call the __estimate__ function directly, you'll get back a result of type scipy.optimize.optimize.OptimizeResult
```
res = estimate(y, yl, x, poly='beta')
res.x
```
You can also call __forecast__ directly. This will use the optimization results returned from __estimate__ to produce a forecast for every date in the index of the forecast inputs (here xf and ylf):
```
fc = forecast(xf, ylf, res, poly='beta')
forecast_df = fc.join(yf)
forecast_df['gap'] = forecast_df.yfh - forecast_df.gdp_growth
forecast_df
gdp.join(fc)[['gdp_growth','yfh']].loc['2005-01-01':].plot(style=['-o','-+'], figsize=(12, 4))
```
### Comparison against a univariate AR model
```
import statsmodels.tsa.api as sm
m = sm.AR(gdp['1975-01-01':'2011-01-01'].gdp_growth,)
r = m.fit(maxlag=1)
r.params
fc_ar = r.predict(start='2005-01-01')
fc_ar.name = 'xx'
df_p = gdp.join(fc)[['gdp_growth','yfh']]
df_p.join(fc_ar)[['gdp_growth','yfh','xx']].loc['2005-01-01':].plot(style=['-o','-+'], figsize=(12, 4))
```
## The midas_adl function
The __midas\_adl__ function wraps up frequency-mixing, fitting, and forecasting into one process. The default mode of forecasting is _fixed_, which means that the data between start_date and end_date will be used to fit the model, and then any data in the input beyond end_date will be used for forecasting. For example, here we're fitting from the beginning of 1985 to the end of 2008, but the gdp data extends to Q1 of 2011 so we get nine forecast points. Three monthly lags of the high-frequency data are specified along with one quarterly lag of GDP.
```
rmse_fc, fc = midas_adl(gdp.gdp_growth, pay.emp_growth,
start_date=datetime.datetime(1985,1,1),
end_date=datetime.datetime(2009,1,1),
xlag="3m",
ylag=1,
horizon=3)
rmse_fc
```
You can also change the polynomial used to weight the MIDAS coefficients. The default is 'beta', but you can also specify exponential Almon weighting ('expalmon') or beta with non-zero last term ('betann').
```
rmse_fc, fc = midas_adl(gdp.gdp_growth, pay.emp_growth,
start_date=datetime.datetime(1985,1,1),
end_date=datetime.datetime(2009,1,1),
xlag="3m",
ylag=1,
horizon=3,
poly='expalmon')
rmse_fc
```
### Rolling and Recursive Forecasting
As mentioned above, the default forecasting method is _fixed_, where the model is fit once and then all data after end_date is used for forecasting. Two other methods are supported: _rolling window_ and _recursive_. The _rolling window_ method is just what it sounds like: the start_date and end_date are used for the initial window, and then each new forecast moves that window forward by one period so that you're always doing one-step-ahead forecasts. Of course, to do anything useful this also assumes that the date range of the dependent data extends beyond end_date, accounting for the lags implied by _horizon_. Generally, you'll get lower RMSE values here since the forecasts are always one step ahead.
```
results = {h: midas_adl(gdp.gdp_growth, pay.emp_growth,
start_date=datetime.datetime(1985,10,1),
end_date=datetime.datetime(2009,1,1),
xlag="3m",
ylag=1,
horizon=3,
forecast_horizon=h,
poly='beta',
method='rolling') for h in (1, 2, 5)}
results[1][0]
```
The _recursive_ method is similar except that the start date does not change, so the range over which the fitting happens increases for each new forecast.
```
results = {h: midas_adl(gdp.gdp_growth, pay.emp_growth,
start_date=datetime.datetime(1985,10,1),
end_date=datetime.datetime(2009,1,1),
xlag="3m",
ylag=1,
horizon=3,
forecast_horizon=h,
poly='beta',
method='recursive') for h in (1, 2, 5)}
results[1][0]
```
## Nowcasting
Per the manual for the MATLAB Toolbox Version 1.0, you can do _nowcasting_ (or MIDAS with leads) basically by adjusting the _horizon_ parameter. For example, below we change the _horizon_ parameter to 1, so we're now forecasting with a one-month horizon rather than a one-quarter horizon:
```
rmse_fc, fc = midas_adl(gdp.gdp_growth, pay.emp_growth,
start_date=datetime.datetime(1985,1,1),
end_date=datetime.datetime(2009,1,1),
xlag="3m",
ylag=1,
horizon=1)
rmse_fc
```
Not surprisingly the RMSE drops considerably.
## CPI vs. Federal Funds Rate
__UNDER CONSTRUCTION: Note that these models take considerably longer to fit__
```
cpi = pd.read_csv('CPIAUCSL.csv', parse_dates=['DATE'], index_col='DATE')
ffr = pd.read_csv('DFF_2_Vintages_Starting_2009_09_28.txt', sep='\t', parse_dates=['observation_date'],
index_col='observation_date')
cpi.head()
ffr.head(10)
cpi_yoy = ((1. + (np.log(cpi.CPIAUCSL) - np.log(cpi.CPIAUCSL.shift(1)))) ** 12) - 1.
cpi_yoy.head()
df = pd.concat([cpi_yoy, ffr.DFF_20090928 / 100.], axis=1)
df.columns = ['cpi_growth', 'dff']
df.loc['1980-1-1':'2010-1-1'].plot(figsize=(15,4), style=['-+','-.'])
cpi_growth = (np.log(cpi.CPIAUCSL) - np.log(cpi.CPIAUCSL.shift(1))) * 100.
y, yl, x, yf, ylf, xf = mix_freq(cpi_growth, ffr.DFF_20090928, "1m", 1, 1,
start_date=datetime.datetime(1975,10,1),
end_date=datetime.datetime(1991,1,1))
x.head()
res = estimate(y, yl, x)
fc = forecast(xf, ylf, res)
fc.join(yf).head()
pd.concat([cpi_growth, fc],axis=1).loc['2008-01-01':'2010-01-01'].plot(style=['-o','-+'], figsize=(12, 4))
results = {h: midas_adl(cpi_growth, ffr.DFF_20090928,
start_date=datetime.datetime(1975,7,1),
end_date=datetime.datetime(1990,11,1),
xlag="1m",
ylag=1,
horizon=1,
forecast_horizon=h,
method='rolling') for h in (1, 2, 5)}
(results[1][0], results[2][0], results[5][0])
results[1][1].plot(figsize=(12,4))
results = {h: midas_adl(cpi_growth, ffr.DFF_20090928,
start_date=datetime.datetime(1975,10,1),
end_date=datetime.datetime(1991,1,1),
xlag="1m",
ylag=1,
horizon=1,
forecast_horizon=h,
method='recursive') for h in (1, 2, 5)}
results[1][0]
results[1][1].plot()
```
| github_jupyter |
```
import sys, os
if 'google.colab' in sys.modules and not os.path.exists('.setup_complete'):
!wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/spring20/setup_colab.sh -O- | bash
!wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/coursera/grading.py -O ../grading.py
!wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/coursera/week1_intro/submit.py
!touch .setup_complete
# This code creates a virtual display to draw game images on.
# It will have no effect if your machine has a monitor.
if type(os.environ.get("DISPLAY")) is not str or len(os.environ.get("DISPLAY")) == 0:
!bash ../xvfb start
os.environ['DISPLAY'] = ':1'
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
### OpenAI Gym
We're gonna spend the next several weeks learning algorithms that solve decision processes. So we need some interesting decision problems to test our algorithms on.
That's where OpenAI Gym comes into play. It's a Python library that wraps many classical decision problems, including robot control, videogames and board games.
So here's how it works:
```
import gym
env = gym.make("MountainCar-v0")
env.reset()
plt.imshow(env.render('rgb_array'))
print("Observation space:", env.observation_space)
print("Action space:", env.action_space)
```
Note: if you're running this on your local machine, you'll see a window pop up with the image above. Don't close it, just alt-tab away.
### Gym interface
The three main methods of an environment are
* __reset()__ - reset environment to initial state, _return first observation_
* __render()__ - show current environment state (a more colorful version :) )
* __step(a)__ - commit action __a__ and return (new observation, reward, is done, info)
* _new observation_ - an observation right after committing the action __a__
* _reward_ - a number representing your reward for committing action __a__
* _is done_ - True if the MDP has just finished, False if still in progress
* _info_ - some auxiliary stuff about what just happened. Ignore it ~~for now~~.
```
obs0 = env.reset()
print("initial observation code:", obs0)
# Note: in MountainCar, observation is just two numbers: car position and velocity
print("taking action 2 (right)")
new_obs, reward, is_done, _ = env.step(2)
print("new observation code:", new_obs)
print("reward:", reward)
print("is game over?:", is_done)
# Note: as you can see, the car has moved to the right slightly (around 0.0005)
```
### Play with it
Below is the code for a simple driving policy. If you simply use the default policy of always driving to the right, the car will not reach the flag at the far right due to gravity.
__Your task__ is to fix it. Find a strategy that reaches the flag.
You are not required to build any sophisticated algorithms for now, feel free to hard-code :)
```
from IPython import display
# Create env manually to set time limit. Please don't change this.
TIME_LIMIT = 250
env = gym.wrappers.TimeLimit(
gym.envs.classic_control.MountainCarEnv(),
max_episode_steps=TIME_LIMIT + 1,
)
actions = {'left': 0, 'stop': 1, 'right': 2}
def policy(obs, t):
# Write the code for your policy here. You can use the observation
# (a tuple of position and velocity), the current time step, or both,
# if you want.
position, velocity = obs
if velocity > 0:
a = actions['right']
else:
a = actions['left']
# Note: this example policy accelerates in the direction of the current velocity,
# building momentum by rocking back and forth until the car can reach the flag.
return a
plt.figure(figsize=(4, 3))
display.clear_output(wait=True)
obs = env.reset()
for t in range(TIME_LIMIT):
plt.gca().clear()
action = policy(obs, t) # Call your policy
obs, reward, done, _ = env.step(action) # Pass the action chosen by the policy to the environment
# We don't do anything with reward here because MountainCar is a very simple environment,
# and reward is a constant -1. Therefore, your goal is to end the episode as quickly as possible.
# Draw game image on display.
plt.imshow(env.render('rgb_array'))
display.clear_output(wait=True)
display.display(plt.gcf())
print(obs)
if done:
print("Well done!")
break
else:
print("Time limit exceeded. Try again.")
display.clear_output(wait=True)
from submit import submit_interface
submit_interface(policy, <EMAIL>, <TOKEN>)
```
| github_jupyter |

[](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/Certification_Trainings/Healthcare/25.Date_Normalizer.ipynb)
## Colab Setup
```
import json
from google.colab import files
license_keys = files.upload()
with open(list(license_keys.keys())[0]) as f:
license_keys = json.load(f)
license_keys['JSL_VERSION']
%%capture
for k,v in license_keys.items():
%set_env $k=$v
!wget https://raw.githubusercontent.com/JohnSnowLabs/spark-nlp-workshop/master/jsl_colab_setup.sh
!bash jsl_colab_setup.sh
import json
import os
from pyspark.ml import Pipeline, PipelineModel
from pyspark.sql import SparkSession
from sparknlp.annotator import *
from sparknlp_jsl.annotator import *
from sparknlp.base import *
import sparknlp_jsl
import sparknlp
from sparknlp.util import *
from sparknlp.pretrained import ResourceDownloader
from pyspark.sql import functions as F
import pandas as pd
spark = sparknlp_jsl.start(license_keys['SECRET'])
spark
```
# **Date Normalizer**
This is a new annotator that transforms date chunks into a normalized date with the format YYYY/MM/DD. It identifies dates in chunk annotations and transforms those dates to the format YYYY/MM/DD.
We are going to create date chunks with different formats:
```
dates = [
'08/02/2018',
'11/2018',
'11/01/2018',
'12Mar2021',
'Jan 30, 2018',
'13.04.1999',
'3April 2020',
'next monday',
'today',
'next week'
]
from pyspark.sql.types import StringType
df_dates = spark.createDataFrame(dates,StringType()).toDF('ner_chunk')
```
We are going to transform that text into documents in Spark NLP.
```
document_assembler = DocumentAssembler().setInputCol('ner_chunk').setOutputCol('document')
documents_DF = document_assembler.transform(df_dates)
```
After that, we are going to transform those documents into chunks.
```
from sparknlp.functions import map_annotations_col
chunks_df = map_annotations_col(documents_DF.select("document","ner_chunk"),
lambda x: [Annotation('chunk', a.begin, a.end, a.result, a.metadata, a.embeddings) for a in x], "document",
"chunk_date", "chunk")
chunks_df.select('chunk_date').show(truncate=False)
```
Now we are going to normalize those chunks using the DateNormalizer.
```
date_normalizer = DateNormalizer().setInputCols('chunk_date').setOutputCol('date')
date_normaliced_df = date_normalizer.transform(chunks_df)
```
We are going to show how the dates are normalized.
```
dateNormalizedClean = date_normaliced_df.selectExpr("ner_chunk","date.result as dateresult","date.metadata as metadata")
dateNormalizedClean.withColumn("dateresult", dateNormalizedClean["dateresult"]
.getItem(0)).withColumn("metadata", dateNormalizedClean["metadata"]
.getItem(0)['normalized']).show(truncate=False)
```
We can configure the `anchorDateYear`, `anchorDateMonth` and `anchorDateDay` used to resolve relative dates.
In the following example we will use 2021/02/27 as the anchor date. To make that possible we need to set the `anchorDateYear` to 2021, the `anchorDateMonth` to 2 and the `anchorDateDay` to 27, as shown in the following configuration.
```
date_normalizer = DateNormalizer().setInputCols('chunk_date').setOutputCol('date')\
.setAnchorDateDay(27)\
.setAnchorDateMonth(2)\
.setAnchorDateYear(2021)
date_normaliced_df = date_normalizer.transform(chunks_df)
dateNormalizedClean = date_normaliced_df.selectExpr("ner_chunk","date.result as dateresult","date.metadata as metadata")
dateNormalizedClean.withColumn("dateresult", dateNormalizedClean["dateresult"]
.getItem(0)).withColumn("metadata", dateNormalizedClean["metadata"]
.getItem(0)['normalized']).show(truncate=False)
```
As you can see, relative dates like `next monday`, `today` and `next week` take `2021/02/27` as the reference date.
| github_jupyter |
# Code to download The Guardian UK data and clean data for text analysis
@Jorge de Leon
This script allows you to download news articles that match your parameters from the Guardian newspaper, https://www.theguardian.com/us.
## Set-up
```
import os
import re
import glob
import json
import requests
import pandas as pd
from glob import glob
from os import makedirs
from textblob import TextBlob
from os.path import join, exists
from datetime import date, timedelta
os.chdir("..")
import nltk
nltk.download('punkt')
nltk.download('wordnet')
nltk.download('stopwords')
from nltk import sent_tokenize, word_tokenize
from nltk.stem.snowball import SnowballStemmer
from nltk.stem.wordnet import WordNetLemmatizer
from nltk.corpus import stopwords
```
## API and news articles requests
This section contains the code that will be used to download articles from the Guardian website.
the initial variables will be determined as user-defined parameters.
```
#Enter API and parameters - these parameters can be obtained by playing around with the Guardian API tool:
# https://open-platform.theguardian.com/explore/
# Set up initial and end date
start_date_global = date(2000, 1, 1)
end_date_global = date(2020, 5, 17)
query = "JPMorgan"
term = ('stock')
#Enter API key, endpoint and parameters
my_api_key = open("..\\input files\\creds_guardian.txt").read().strip()
api_endpoint = "http://content.guardianapis.com/search?"
my_params = {
'from-date': '',
'to-date': '',
'show-fields': 'bodyText',
'q': query,
'page-size': 200,
'api-key': my_api_key
}
articles_dir = join('theguardian','jpmorgan')
makedirs(articles_dir, exist_ok=True)
# day iteration from here:
# http://stackoverflow.com/questions/7274267/print-all-day-dates-between-two-dates
start_date = start_date_global
end_date = end_date_global
dayrange = range((end_date - start_date).days + 1)
for daycount in dayrange:
dt = start_date + timedelta(days=daycount)
datestr = dt.strftime('%Y-%m-%d')
fname = join(articles_dir, datestr + '.json')
if not exists(fname):
# then let's download it
print("Downloading", datestr)
all_results = []
my_params['from-date'] = datestr
my_params['to-date'] = datestr
current_page = 1
total_pages = 1
while current_page <= total_pages:
print("...page", current_page)
my_params['page'] = current_page
resp = requests.get(api_endpoint, my_params)
data = resp.json()
all_results.extend(data['response']['results'])
# if there is more than one page
current_page += 1
total_pages = data['response']['pages']
with open(fname, 'w') as f:
print("Writing to", fname)
# re-serialize it for pretty indentation
f.write(json.dumps(all_results, indent=2))
#Read all json files that will be concatenated
test_files = sorted(glob('theguardian/jpmorgan/*.json'))
# initialize an empty list that we will append dataframes to
all_files = []
# loop over each globbed file name; the end result will be a list of dataframes
for file in test_files:
try:
articles = pd.read_json(file)
all_files.append(articles)
except pd.errors.EmptyDataError:
print(f'Note: {file} was empty. Skipping.')
continue # skip the rest of the block and move to the next file
#create dataframe with data from json files
theguardian_rawdata = pd.concat(all_files, axis=0, ignore_index=True)
```
## Text Analysis
```
#Drop empty columns
theguardian_rawdata = theguardian_rawdata.iloc[:,0:12]
# show the types of media that were downloaded
theguardian_rawdata['type'].unique()
#filter only for articles
theguardian_rawdata = theguardian_rawdata[theguardian_rawdata['type'].str.match('article',na=False)]
#remove columns that do not contain relevant information for analysis
theguardian_dataset = theguardian_rawdata.drop(['apiUrl','id', 'isHosted', 'pillarId', 'pillarName',
'sectionId', 'sectionName', 'type','webTitle', 'webUrl'], axis=1)
#Modify the column webPublicationDate to Date and the fields to string and lower case
theguardian_dataset["date"] = pd.to_datetime(theguardian_dataset["webPublicationDate"]).dt.strftime('%Y-%m-%d')
theguardian_dataset['fields'] = theguardian_dataset['fields'].astype(str).str.lower()
# Clean the articles: remove HTML tags and punctuation.
theguardian_dataset['fields'] = theguardian_dataset['fields'].str.replace('<.*?>','') # remove HTML tags
theguardian_dataset['fields'] = theguardian_dataset['fields'].str.replace('[^\w\s]','') # remove punc.
#Generate sentiment analysis for each article
#Using TextBlob obtain polarity
theguardian_dataset['sentiment_polarity'] = theguardian_dataset['fields'].apply(lambda row: TextBlob(row).sentiment.polarity)
#Using TextBlob obtain subjectivity
theguardian_dataset['sentiment_subjectivity'] = theguardian_dataset['fields'].apply(lambda row: TextBlob(row).sentiment.subjectivity)
#Remove numbers from text
theguardian_dataset['fields'] = theguardian_dataset['fields'].str.replace('\d+','') # remove numbers
# Then tokenize each article and remove stop words
theguardian_dataset['tokenized_fields'] = theguardian_dataset.apply(lambda row: nltk.word_tokenize(row['fields']), axis=1)
#Stop words
stop_words=set(stopwords.words("english"))
#Remove stop words
theguardian_dataset['tokenized_fields'] = theguardian_dataset['tokenized_fields'].apply(lambda x: [item for item in x if item not in stop_words])
#Count number of words and create a column with the most common 5 words per article
from collections import Counter
theguardian_dataset['high_recurrence'] = theguardian_dataset['tokenized_fields'].apply(lambda x: [k for k, v in Counter(x).most_common(5)])
#Create a word count for the word "stock"
theguardian_dataset['word_ocurrence'] = theguardian_dataset['tokenized_fields'].apply(lambda x: [w for w in x if re.search(term, w)])
theguardian_dataset['word_count'] = theguardian_dataset['word_ocurrence'].apply(len)
#Create a count of the total number of words
theguardian_dataset['total_words'] = theguardian_dataset['tokenized_fields'].apply(len)
#Create new table with average polarity, subjectivity, count of the word "stock" per day
guardian_microsoft = theguardian_dataset.groupby('date')['sentiment_polarity','sentiment_subjectivity','word_count','total_words'].agg('mean')
#Create a variable for the number of articles per day
count_articles = theguardian_dataset
count_articles['no_articles'] = count_articles.groupby(['date'])['fields'].transform('count')
count_articles = count_articles[["date","no_articles"]]
count_articles_df = count_articles.drop_duplicates(subset = "date",
keep = "first", inplace=False)
#Join tables by date
guardian_microsoft = guardian_microsoft.merge(count_articles_df, on='date', how ='left')
#Save dataframes into CSV
theguardian_dataset.to_csv('theguardian/jpmorgan/theguardian_jpmorgan_text.csv', encoding='utf-8')
guardian_microsoft.to_csv('theguardian/jpmorgan/theguardian_jpmorgan_data.csv', encoding='utf-8')
```
| github_jupyter |
```
import torch
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from pathlib import Path
sns.color_palette("tab10")
sns.set(rc={
"figure.dpi": 150,
"text.usetex": True,
"xtick.labelsize": "small",
"ytick.labelsize": "small",
"axes.labelsize": "small",
"axes.titlesize": "small",
"figure.titlesize": "medium",
"axes.titlepad": 2.0,
"xtick.major.pad": -4.0,
#"figure.subplot.hspace": 0.0,
"figure.constrained_layout.use": True,
})
def attribute_to_matrix(sequences, attribute):
attribute_list = []
for seq in sequences:
attribute_list.append(seq[attribute])
return np.array(attribute_list)
def cold_starts(init_times):
num_activations = init_times.size
num_cold_starts = np.sum(init_times > 0)
return num_activations, num_cold_starts
def app_name(filename: str) -> str:
app_name = filename[filename.find(":")+4:filename.find("_fetched_")]
if "_rand" in filename:
app_name += "_rand"
return app_name
def get_index(applist: list, filename: str) -> int:
return applist.index(app_name(filename))
# Number of activations vs number of cold starts.
# Configuration
dir = "final_high_load_n_1000"
#dir = "final_batched_high_load_n_400"
files = Path(f"../data/{dir}").glob('*.pkl')
for f in sorted(files):
print(f)
with open(f, "rb") as stream:
data = torch.load(stream)
num_activations, num_cold_starts = cold_starts(attribute_to_matrix(data["sequences"], "init_times"))
#print(f"Number of activations: {num_activations}")
#print(f"Number of cold starts: {num_cold_starts}")
print(f"Cold start to activation ratio: {round(num_cold_starts / num_activations, 4)}")
# Average inter-event time for each step.
# Configuration
dir = "final_low_load_n_1000"
#dir = "final_high_load_n_1000"
save = False
def boxplot(applist: list, file2data: dict, title: str, ylabel: str) -> plt.Figure:
fig, ax = plt.subplots(len(applist), 1)
fig.set_size_inches(5.5, 6.8)
fig.suptitle(title)
for filename, data in file2data.items():
if app_name(filename) in applist:
sns.boxplot(ax=ax[get_index(applist, filename)], data=data, width=0.5, showfliers=False, linewidth=0.9)
ax[get_index(applist, filename)].set_title(app_name(filename).replace("_", "\_"))
ax[get_index(applist, filename)].set_ylabel(ylabel)
ax[get_index(applist, filename)].set_xticklabels(list(range(1, data.shape[-1] + 1)))
ax[-1].set_xlabel("i")
return fig
prestr = "ll_"
if "high_load" in dir: prestr = "hl_"
poststr = " (no cold starts)"
if "high_load" in dir: poststr = " (30\% cold starts)"
apps = ["sequence", "parallel_small", "tree_small", "fanout_small", "parallel_large", "tree_large", "fanout_large"]
apps_rand = ["sequence_rand", "parallel_small_rand", "tree_small_rand", "fanout_small_rand", "parallel_large_rand", "tree_large_rand", "fanout_large_rand"]
inter_dict = {}
init_dict = {}
wait_dict = {}
files = Path(f"../data/{dir}").glob('*.pkl')
for f in sorted(files):
with open(f, "rb") as stream:
data = torch.load(stream)
inter_dict[str(f)] = np.diff(attribute_to_matrix(data["sequences"], "arrival_times"), axis=-1)
init_dict[str(f)] = attribute_to_matrix(data["sequences"], "init_times")
wait_dict[str(f)] = attribute_to_matrix(data["sequences"], "wait_times") / 1000
title = fr"Distribution of inter-event time $\tau_i${poststr}"
ylabel = "ms"
fig1 = boxplot(apps, inter_dict, title, ylabel)
if save: plt.savefig(f"data_plots/{prestr}dist_inter.pdf")
fig2 = boxplot(apps_rand, inter_dict, title, ylabel)
if save: plt.savefig(f"data_plots/{prestr}dist_inter_rand.pdf")
if "high_load" in dir:
title = fr"Distribution of initTime $i_i${poststr}"
ylabel = "ms"
fig1 = boxplot(apps, init_dict, title, ylabel)
if save: plt.savefig(f"data_plots/{prestr}dist_init.pdf")
fig2 = boxplot(apps_rand, init_dict, title, ylabel)
if save: plt.savefig(f"data_plots/{prestr}dist_init_rand.pdf")
title = fr"Distribution of waitTime $w_i${poststr}"
ylabel = "sec"
fig1 = boxplot(apps, wait_dict, title, ylabel)
if save: plt.savefig(f"data_plots/{prestr}dist_wait.pdf")
fig2 = boxplot(apps_rand, wait_dict, title, ylabel)
if save: plt.savefig(f"data_plots/{prestr}dist_wait_rand.pdf")
# Distribution of inter-event, init and wait times.
# Configuration
dir = "final_low_load_n_1000"
#dir = "final_high_load_n_1000"
#dir = "final_batched_high_load_n_400"
plot_init_times = True
plot_wait_times = True
save = False
def compose_title(filename):
def app_name(filename):
app_name = filename[filename.find(":")+4:filename.find("_fetched_")]
return app_name.replace("_", "\_")
title = app_name(filename)
if "_rand" in filename:
title += "\_rand"
if "_b_" in filename:
title += " (30% cold starts)"
return title
apps = ["sequence", "parallel_small", "tree_small", "fanout_small", "parallel_large",
"tree_large", "fanout_large", "sequence_rand", "parallel_small_rand", "tree_small_rand",
"fanout_small_rand", "parallel_large_rand", "tree_large_rand", "fanout_large_rand"]
files = Path(f"../data/{dir}").glob('*.pkl')
fig_inter, ax_inter = plt.subplots(14, 1)
fig_inter.set_size_inches(6.4, 18)
fig_init, ax_init = plt.subplots(14, 1)
fig_init.set_size_inches(6.4, 18)
fig_wait, ax_wait = plt.subplots(14, 1)
fig_wait.set_size_inches(6.4, 18)
for i, f in enumerate(sorted(files)):
with open(f, "rb") as stream:
data = torch.load(stream)
if "small" in str(f):
numb_traces = 200
else:
numb_traces = 100
inter_event_times = np.diff(attribute_to_matrix(data["sequences"], "arrival_times"), axis=-1)[:numb_traces].reshape(-1)
init_times = attribute_to_matrix(data["sequences"], "init_times")[:numb_traces].reshape(-1)
wait_times = attribute_to_matrix(data["sequences"], "wait_times")[:numb_traces].reshape(-1) / 1000
max_range = int(np.quantile(inter_event_times, 0.99))
ax_inter[get_index(apps, str(f))].hist(inter_event_times, bins=300, range=(0.0, max_range), log=True)
ax_inter[get_index(apps, str(f))].set_xlabel("milliseconds")
ax_inter[get_index(apps, str(f))].set_ylabel("count")
ax_inter[get_index(apps, str(f))].title.set_text(f"Distribution over all inter-event times - {compose_title(str(f))}")
if plot_init_times:
max_range = int(np.quantile(init_times, 0.99))
ax_init[get_index(apps, str(f))].hist(init_times, bins=300, range=(0.0, max_range), log=True)
ax_init[get_index(apps, str(f))].set_xlabel("milliseconds")
ax_init[get_index(apps, str(f))].set_ylabel("count")
ax_init[get_index(apps, str(f))].title.set_text(f"Distribution over all initTimes - {compose_title(str(f))}")
if plot_wait_times:
max_range = int(np.quantile(wait_times, 0.99))
ax_wait[get_index(apps, str(f))].hist(wait_times, bins=300, range=(0.0, max_range), log=True)
ax_wait[get_index(apps, str(f))].set_xlabel("seconds")
ax_wait[get_index(apps, str(f))].set_ylabel("count")
ax_wait[get_index(apps, str(f))].title.set_text(f"Distribution over all waitTimes - {compose_title(str(f))}")
#fig_inter.tight_layout()
#fig_init.tight_layout()
#fig_wait.tight_layout()
if save:
filename = f"{str(f)[str(f).rfind('/')+1:str(f).rfind('.pkl')]}.png"
plt.savefig(filename)
# Standard deviation of inter-event, init and wait time
# Configuration
dir = "final_low_load_n_1000"
files = Path(f"../data/{dir}").glob('*.pkl')
def app_name(filename):
app_name = filename[filename.find(":")+4:filename.find("_fetched_")]
if "rand" in filename:
app_name += "_rand"
return app_name
for f in sorted(files):
with open(f, "rb") as stream:
data = torch.load(stream)
inter_event_times = np.diff(attribute_to_matrix(data["sequences"], "arrival_times"), axis=-1)
init_times = attribute_to_matrix(data["sequences"], "init_times")
wait_times = attribute_to_matrix(data["sequences"], "wait_times") #/ 1000
print(f"app={app_name(str(f))}\n"
f"std_inter_event={np.std(inter_event_times)}\n"
f"std_init={np.std(init_times)}\n"
f"std_wait={np.std(wait_times)}\n")
```
| github_jupyter |
Neuroon cross-validation
------------------------
Neuroon and PSG recordings were simultaneously collected over the course of two nights. This analysis will show whether Neuroon is able to accurately classify sleep stages. The PSG classification will be a benchmark against which Neuroon performance will be tested. "The AASM Manual for the Scoring of Sleep and Associated Events" identifies 5 sleep stages:
* Stage W (Wakefulness)
* Stage N1 (NREM 1)
* Stage N2 (NREM 2)
* Stage N3 (NREM 3)
* Stage R (REM)
<img src="images/sleep_stages.png">
These stages can be identified by following the guidelines in [1], either visually or digitally, using combined information from EEG, EOG and EMG. Extensive research is being conducted on developing automated and simpler methods for sleep stage classification suitable for everyday home use (for a review see [2]). Automatic methods based on single-channel EEG, which is the category Neuroon falls into, were shown to work accurately when compared to PSG scoring [3].
[1] Berry RB BR, Gamaldo CE, Harding SM, Lloyd RM, Marcus CL, Vaughn BV; for the American Academy of Sleep Medicine. The AASM Manual for the Scoring of Sleep and Associated Events: Rules, Terminology and Technical Specifications.,Version 2.0.3. Darien, IL: American Academy of Sleep Medicine; 2014.
[2] Van De Water, A. T. M., Holmes, A., & Hurley, D. a. (2011). Objective measurements of sleep for non-laboratory settings as alternatives to polysomnography - a systematic review. Journal of Sleep Research, 20, 183–200.
[3] Berthomier, C., Drouot, X., Herman-Stoïca, M., Berthomier, P., Prado, J., Bokar-Thire, D. d’Ortho, M.P. (2007). Automatic analysis of single-channel sleep EEG: validation in healthy individuals. Sleep, 30(11), 1587–1595.
Signal time-synchronization using cross-correlation
--------------------------------------------------
Neuroon and PSG were recorded on devices with (probably) unsynchronized clocks. First we will use a cross-correlation method [4] to find the time offset between the two recordings.
[4] Fridman, L., Brown, D. E., Angell, W., Abdić, I., Reimer, B., & Noh, H. Y. (2016). Automated synchronization of driving data using vibration and steering events. Pattern Recognition Letters, 75, 9-15.
Define the cross-correlation function - code from: (http://lexfridman.com/blogs/research/2015/09/18/fast-cross-correlation-and-time-series-synchronization-in-python/)
for other examples see: (http://stackoverflow.com/questions/4688715/find-time-shift-between-two-similar-waveforms)
```
%matplotlib inline
%load_ext autoreload
%autoreload 2
import numpy as np
import matplotlib.pyplot as plt
from itertools import tee
import pandas as pd
import seaborn as sns
from numpy.fft import fft, ifft, fft2, ifft2, fftshift
from collections import OrderedDict
from datetime import timedelta
plt.rcParams['figure.figsize'] = (9.0, 5.0)
from parse_signal import load_psg, load_neuroon
# Cross-correlation function. Equivalent to numpy.correlate(x, y, mode='full') but faster for large arrays
# This function was tested against other cross correlation methods in -- LINK TO OTHER NOTEBOOK
def cross_correlation_using_fft(x, y):
f1 = fft(x)
f2 = fft(np.flipud(y))
cc = np.real(ifft(f1 * f2))
return fftshift(cc)
# shift < 0 means that y starts 'shift' time steps before x
# shift > 0 means that y starts 'shift' time steps after x
def compute_shift(x, y):
assert len(x) == len(y)
c = cross_correlation_using_fft(x, y)
assert len(c) == len(x)
zero_index = int(len(x) / 2) - 1
shift = zero_index - np.argmax(c)
return shift,c
def cross_correlate():
# Load the signal from hdf database and parse it to pandas series with datetime index
psg_signal = load_psg('F3-A2')
neuroon_signal = load_neuroon()
# Resample the signal to 100hz, to have the same length for cross correlation
psg_10 = psg_signal.resample('10ms').mean()
neuroon_10 = neuroon_signal.resample('10ms').mean()
# Create ten minute intervals
dates_range = pd.date_range(psg_signal.head(1).index.get_values()[0], neuroon_signal.tail(1).index.get_values()[0], freq="10min")
# Convert datetime interval boundaries to string with only hours, minutes and seconds
dates_range = [d.strftime('%H:%M:%S') for d in dates_range]
all_coefs = []
# iterate over overlapping pairs of 10 minutes boundaries
for start, end in pairwise(dates_range):
# cut 10 minutes piece of signal
neuroon_cut = neuroon_10.between_time(start, end)
psg_cut = psg_10.between_time(start, end)
# Compute the correlation using fft convolution
shift, coeffs = compute_shift(neuroon_cut, psg_cut)
#normalize the coefficients because they will be shown on the same heatmap and need a common color scale
all_coefs.append((coeffs - coeffs.mean()) / coeffs.std())
#print('max corr at shift %s is at sample %i'%(start, shift))
all_coefs = np.array(all_coefs)
return all_coefs, dates_range
# This function is used to iterate over a list, taking two consecutive items at each iteration
def pairwise(iterable):
"s -> (s0,s1), (s1,s2), (s2, s3), ..."
a, b = tee(iterable)
next(b, None)
return zip(a, b)
# Construct a matrix where each row represents a 10 minute window from the recording
# and each column represent correlation coefficient between neuroon and psg signals offset by samples number.
# 0 samples offset coefficient is stored at the middle column -1. Negative offset and positive offset span left and right from the center.
# offset < 0 means that psg starts 'shift' time steps before neuroon
# offset > 0 means that psg starts 'shift' time steps after neuroon
coeffs_matrix, dates = cross_correlate()
from plotting_collection import plot_crosscorrelation_heatmap
#Plot part of the coefficients matrix centered around the max average correlation for all 10 minute windows
plot_crosscorrelation_heatmap(coeffs_matrix, dates)
```
Hypnogram time delay
--------------------
From the cross-correlation of the EEG signals we can see that the two devices are offset by 2 minutes 41 seconds. Now we'll check whether there is a time shift at which the hypnograms are most similar. The measure of hypnogram similarity will be the total time during which the two devices classified the same sleep stage.
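As a toy illustration of that measure (made-up stage labels, not part of the original recording): once both hypnograms are resampled onto a common epoch grid, the overlap is simply the number of epochs where the two labels agree multiplied by the epoch length.
```
import pandas as pd
# Hypothetical 30-second epochs scored by the two devices (toy data, for illustration only)
idx = pd.date_range('2016-01-01 23:00:00', periods=6, freq='30s')
psg_toy = pd.Series(['wake', 'N1', 'N2', 'N2', 'N3', 'N3'], index=idx)
neuroon_toy = pd.Series(['wake', 'N2', 'N2', 'N2', 'N3', 'rem'], index=idx)
# Time in the same stage = number of agreeing epochs * epoch length in seconds
agreement_seconds = (psg_toy == neuroon_toy).sum() * 30
print(agreement_seconds) # 4 agreeing epochs -> 120 seconds
```
The real computation below works on the actual hypnogram event tables instead of a fixed epoch grid.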
```
import parse_hipnogram as ph
def get_hipnogram_intersection(neuroon_hipnogram, psg_hipnogram, time_shift):
neuroon_hipnogram.index = neuroon_hipnogram.index + timedelta(seconds = int(time_shift))
combined = psg_hipnogram.join(neuroon_hipnogram, how = 'outer', lsuffix = '_psg', rsuffix = '_neuro')
combined.loc[:, ['stage_num_psg', 'stage_name_psg', 'stage_num_neuro', 'stage_name_neuro', 'event_number_psg', 'event_number_neuro']] = combined.loc[:, ['stage_num_psg', 'stage_name_psg', 'stage_num_neuro', 'stage_name_neuro', 'event_number_psg', 'event_number_neuro']].fillna( method = 'bfill')
combined.loc[:, ['stage_shift_psg', 'stage_shift_neuro']] = combined.loc[:, ['stage_shift_psg', 'stage_shift_neuro']].fillna( value = 'inside')
# Subtract the neuroon stage number from the psg stage number; zero means both devices scored the same stage.
combined['overlap'] = combined['stage_num_psg'] - combined['stage_num_neuro']
same_stage = combined.loc[combined['overlap'] == 0]
same_stage.loc[:, 'event_union'] = same_stage['event_number_psg'] + same_stage['event_number_neuro']
# common_window = np.array([neuroon_hipnogram.tail(1).index.get_values()[0] - psg_hipnogram.head(1).index.get_values()[0]],dtype='timedelta64[m]').astype(int)[0]
all_durations = OrderedDict()
for stage_name, intersection in same_stage.groupby('event_union'):
# Subtract the first row timestamp from the last to get the duration. Store as the duration in milliseconds.
duration = (intersection.index.to_series().iloc[-1]- intersection.index.to_series().iloc[0]).total_seconds()
stage_id = intersection.iloc[0, intersection.columns.get_loc('stage_name_neuro')]
# Keep appending results to a list stored in a dict. Check if the list exists, if not create it.
if stage_id not in all_durations.keys():
all_durations[stage_id] = [duration]
else:
all_durations[stage_id].append(duration)
means = OrderedDict()
stds = OrderedDict()
sums = OrderedDict()
stages_sum = 0
#Adding it here so its first in ordered dict and leftmost on the plot
sums['stages_sum'] = 0
for key, value in all_durations.items():
#if key != 'wake':
means[key] = np.array(value).mean()
stds[key] = np.array(value).std()
sums[key] = np.array(value).sum()
stages_sum += np.array(value).sum()
sums['stages_sum'] = stages_sum
# Divide total seconds by 60 to get minutes
#return stages_sum
return sums, means, stds
def intersect_with_shift():
psg_hipnogram = ph.parse_psg_stages()
neuroon_hipnogram = ph.parse_neuroon_stages()
intersection = OrderedDict([('wake', []), ('rem',[]), ('N1',[]), ('N2',[]), ('N3', []), ('stages_sum', [])])
shift_range = np.arange(-500, 100, 10)
for shift in shift_range:
sums, _, _ = get_hipnogram_intersection(neuroon_hipnogram.copy(), psg_hipnogram.copy(), shift)
for stage, intersect_dur in sums.items():
intersection[stage].append(intersect_dur)
return intersection, shift_range
def plot_intersection(intersection, shift_range):
psg_hipnogram = ph.parse_psg_stages()
neuroon_hipnogram = ph.parse_neuroon_stages()
stage_color_dict = {'N1' : 'royalblue', 'N2' :'forestgreen', 'N3' : 'coral', 'rem' : 'plum', 'wake' : 'lightgrey', 'stages_sum': 'dodgerblue'}
fig, axes = plt.subplots(2)
zscore_ax = axes[0].twinx()
for stage in ['rem', 'N2', 'N3', 'wake']:
intersect_sum = np.array(intersection[stage])
z_scored = (intersect_sum - intersect_sum.mean()) / intersect_sum.std()
zscore_ax.plot(shift_range, z_scored, color = stage_color_dict[stage], label = stage, alpha = 0.5, linestyle = '--')
max_overlap = shift_range[np.argmax(intersection['stages_sum'])]
fig.suptitle('max overlap at %i seconds offset'%max_overlap)
axes[0].plot(shift_range, intersection['stages_sum'], label = 'stages sum', color = 'dodgerblue')
axes[0].axvline(max_overlap, color='k', linestyle='--')
axes[0].set_ylabel('time in the same sleep stage')
axes[0].set_xlabel('offset in seconds')
axes[0].legend(loc = 'center right')
zscore_ax.grid(b=False)
zscore_ax.legend()
sums0, means0, stds0 = get_hipnogram_intersection(neuroon_hipnogram.copy(), psg_hipnogram.copy(), 0)
#
width = 0.35
ind = np.arange(5)
colors_inorder = ['dodgerblue', 'lightgrey', 'forestgreen', 'coral', 'plum']
#Plot the non shifted overlaps
axes[1].bar(left = ind, height = list(sums0.values()),width = width, alpha = 0.8,
tick_label =list(sums0.keys()), edgecolor = 'black', color= colors_inorder)
sumsMax, meansMax, stdsMax = get_hipnogram_intersection(neuroon_hipnogram.copy(), psg_hipnogram.copy(), max_overlap)
# Plot the shifted overlaps
axes[1].bar(left = ind +width, height = list(sumsMax.values()),width = width, alpha = 0.8,
tick_label =list(sumsMax.keys()), edgecolor = 'black', color = colors_inorder)
axes[1].set_xticks(ind + width)
plt.tight_layout()
intersection, shift_range = intersect_with_shift()
plot_intersection(intersection, shift_range)
```
The hypnogram analysis indicates the same direction of time delay: psg is 4 minutes 10 seconds before neuroon. The time delay estimated from the hypnograms is about 1 minute 30 seconds larger than the one estimated from the signals.
Todo:
* add second axis with percentages
* see if the overlap increased in proportion with offset
* plot parts of time corrected signals and hipnograms
* add different correlation tests notebook
* add spectral and pca analysis
| github_jupyter |
# PI-ICR analysis
Created on 17 July 2019 for the ISOLTRAP experiment
- V1.1 (24 June 2020): Maximum likelihood estimation was simplified based on SciPy PDFs and the CERN-ROOT6 minimizer via the iminuit package (→ great performance)
- V1.2 (20 February 2021): Preparations for scientific publication and iminuit v2 update integration
@author: Jonas Karthein<br>
@contact: jonas.karthein@cern.ch<br>
@license: MIT license
### References
[1]: https://doi.org/10.1007/s00340-013-5621-0
[2]: https://doi.org/10.1103/PhysRevLett.110.082501
[3]: https://doi.org/10.1007/s10751-019-1601-z
[4]: https://doi.org/10.1103/PhysRevLett.124.092502
[1] S. Eliseev, _et al._ Appl. Phys. B (2014) 114: 107.<br>
[2] S. Eliseev, _et al._ Phys. Rev. Lett. 110, 082501 (2013).<br>
[3] J. Karthein, _et al._ Hyperfine Interact (2019) 240: 61.<br>
### Application
The code was used to analyse data for the following publications:
[3] J. Karthein, _et al._ Hyperfine Interact (2019) 240: 61.<br>
[4] V. Manea and J. Karthein, _et al._ Phys. Rev. Lett. 124, 092502 (2020)<br>
[5] M. Mougeot, _et al._ in preparation (2020)<br>
### Introduction
The following code was written to reconstruct raw Phase-Imaging Ion-Cyclotron-Resonance (PI-ICR) data, to fit PI-ICR position information and calculate a frequency using the pattern 1/2 scheme described in Ref. [1], and to determine a frequency ratio between a measurement ion and a reference ion. Additionally, the code allows one to analyze isomeric states separated in pattern 2.
### Required software and libraries
The following code was written in Python 3.7. The required libraries are listed below with a rough description of their task in the code; this is not meant to be a full description of each library.
* pandas (data storage and calculation)
* numpy (calculation)
* matplotlib (plotting)
* scipy (PDFs, least squares estimation)
* configparser (configuration file processing)
* jupyter (Python notebook environment)
* iminuit (CERN-ROOT6 minimizer)
All packages can be fetched using pip:
```
!pip3 install --user pandas numpy matplotlib scipy configparser jupyter iminuit
```
Instead of the regular jupyter environment, one can also use CERN's SWAN service or Google Colab.
```
google_colab = False
if google_colab:
try:
from google.colab import drive
drive.mount('/content/drive')
%cd /content/drive/My\ Drive/Colab/pi-icr/
except:
%cd ~/cernbox/Documents/Colab/pi-icr/
```
### Data files
Specify whether the analysis involves one or two states separated in pattern 2 by commenting out the case that does not apply in line 10 or 11. Then enter the file paths for all your data files without the `*.txt` extension. In the following, `ioi` represents the ion of interest and `ref` the reference ion.
```
%config InlineBackend.figure_format ='retina'
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
import pickle, os
# analysis = {'ioi_g': {},'ref': {}}
analysis = {'ioi_g': {},'ioi_m': {},'ref': {}}
files_ioi_g = ['data/ioi_ground/85Rb_c_000',
'data/ioi_ground/85Rb_002',
'data/ioi_ground/85Rb_004',
'data/ioi_ground/85Rb_006']
# files_ioi_m = ['data/ioi_isomer/101In_c_000',
# 'data/ioi_isomer/101In_005']
files_ref = ['data/ref/133Cs_c_000',
'data/ref/133Cs_003',
'data/ref/133Cs_005',
'data/ref/133Cs_007']
latex_ioi_g = '$^{85}$Rb'
# latex_ioi_m = '$^{101}$In$^m$'
latex_ref = '$^{133}$Cs'
```
### Load pre-analyzed data from file or reconstruct raw data
All files are loaded and reconstructed into one big dictionary of dictionaries. Besides the positions and timestamps, it also contains information about the measurement conditions (excitation frequencies, rounds, etc.). One can load a whole beamtime at once. Center files must be indicated by a `_c_` in the name (e.g. regular name: `101In_001.txt` $\rightarrow$ center name `101In_c_000.txt`). At a later stage, all the data is saved in a `pickle` file. This enables quick loading of the data dictionary without the need to re-reconstruct the data.
The reconstruction code is parallelized and can be found in the subfolder `bin/reconstruction.py`
```
from bin.reconstruction import PIICR
piicr = PIICR()
if os.path.isfile('data/data-save.p'):
analysis = pickle.load(open('data/data-save.p','rb'))
print('\nLoading finished!')
else:
for file in files_ioi_g:
analysis['ioi_g'].update({file: piicr.prepare(file)})
if analysis['ioi_m'] != {}:
for file in files_ioi_m:
analysis['ioi_m'].update({file: piicr.prepare(file)})
for file in files_ref:
analysis['ref'].update({file: piicr.prepare(file)})
print('\nReconstruction finished!')
```
### Individual file selection
The analysis dictionary contains all files. The analysis however is intended to be performed on a file-by-file basis. Please select the individual files here in the variable `file_name`.
```
# load P1, P2 and C data in panda dataframes for selected file
# file_name = files_ioi_g[1]
# file_name = files_ioi_m[1]
# file_name = files_ref[1]
file_name = files_ioi_g[3]
print('Selected file:',file_name)
if 'ground' in file_name:
df_p1 = pd.DataFrame(analysis['ioi_g'][file_name]['p1'], columns=['event','x','y','time'])
df_p2 = pd.DataFrame(analysis['ioi_g'][file_name]['p2'], columns=['event','x','y','time'])
df_c = pd.DataFrame(analysis['ioi_g'][file_name.split('_0', 1)[0]+'_c_000']['c'],
columns=['event','x','y','time'])
elif 'isomer' in file_name:
df_p1 = pd.DataFrame(analysis['ioi_m'][file_name]['p1'], columns=['event','x','y','time'])
df_p2 = pd.DataFrame(analysis['ioi_m'][file_name]['p2'], columns=['event','x','y','time'])
df_c = pd.DataFrame(analysis['ioi_m'][file_name.split('_0', 1)[0]+'_c_000']['c'],
columns=['event','x','y','time'])
else:
df_p1 = pd.DataFrame(analysis['ref'][file_name]['p1'], columns=['event','x','y','time'])
df_p2 = pd.DataFrame(analysis['ref'][file_name]['p2'], columns=['event','x','y','time'])
df_c = pd.DataFrame(analysis['ref'][file_name.split('_0', 1)[0]+'_c_000']['c'],
columns=['event','x','y','time'])
```
### Manual space and time cut
Please perform a rough manual space cut for each file to improve the results of the automatic space-cutting tool. This is necessary if one deals with two states in pattern 2 or if there is a lot of background. This selection is ellipsoidal. Additionally, please perform a rough time-of-flight (ToF) cut.
```
# manual_space_cut = [x_peak_pos, x_peak_spread, y_peak_pos, y_peak_spread]
manual_space_cut = {'data/ioi_ground/85Rb_002': [150, 150, 100, 150],
'data/ioi_ground/85Rb_004': [150, 150, 100, 150],
'data/ioi_ground/85Rb_006': [150, 150, 100, 150],
'data/ref/133Cs_003': [120, 150, 80, 150],
'data/ref/133Cs_005': [120, 150, 80, 150],
'data/ref/133Cs_007': [120, 150, 80, 150]}
# manual_tof_cut = [tof_min, tof_max]
manual_tof_cut = [20, 50]
# manual_z_cut <= number of ions in the trap
manual_z_cut = 5
```
### Automatic time and space cuts based on Gaussian distribution
This section contains all cuts in time and space in different steps.
1. In the time domain, contaminants are removed by fitting a Gaussian distribution via maximum likelihood estimation to the largest peak in the ToF spectrum and cutting +/- 5 $\sigma$ (change the cut range in lines 70 & 71). The ToF distribution has to be binned before the maximum can be found, but the fit is performed on the unbinned data set.
2. Manual space cut is applied for pattern 1 and pattern 2 (not for the center spot)
3. Outliers/wrongly excited ions are removed beyond +/- 3 $\sigma$ around a simple mean in x and y after applying the manual cut (change cut range in lines).
4. Ejections with more than `manual_z_cut` number of ions in the trap (without taking into account the detector efficiency) are rejected (= z-class cut)
```
%config InlineBackend.figure_format ='retina'
import matplotlib as mpl
from scipy.stats import norm
from iminuit import Minuit
# Utopia LaTeX font with greek letters
mpl.rc('font', family='serif', serif='Linguistics Pro')
mpl.rc('text', usetex=False)
mpl.rc('mathtext', fontset='custom',
rm='Linguistics Pro',
it='Linguistics Pro:italic',
bf='Linguistics Pro:bold')
mpl.rcParams.update({'font.size': 18})
col = ['#FFCC00', '#FF2D55', '#00A2FF', '#61D935', 'k', 'grey', 'pink'] # yellow, red, blue, green
df_list = [df_p1, df_p2, df_c]
pattern = ['p1', 'p2', 'c']
bin_time_df = [0,0,0] # [p1,p2,c] list of dataframes containing the time-binned data
result_t = [0,0,0] # [p1,p2,c] list of MLE fit result dicts
cut_df = [0,0,0] # [p1,p2,c] list of dataframes containing the time- and space-cut data
excludes_df = [0,0,0] # [p1,p2,c] list of dataframes containing the time- and space-cut excluded data
fig, axes = plt.subplots(nrows=3, ncols=2, figsize=(12, 15))
for df_nr in range(len(df_list)):
##############################
### BINNING, FITTING TOF DISTR
##############################
bin_time_df[df_nr] = pd.DataFrame(pd.value_counts(pd.cut(df_list[df_nr].time, bins=np.arange(manual_tof_cut[0], manual_tof_cut[1],0.02))).sort_index()).rename(index=str, columns={'time': 'counts'}).reset_index(drop=True)
bin_time_df[df_nr]['time'] = np.arange(manual_tof_cut[0]+0.01,manual_tof_cut[1]-0.01,0.02)
# fit gaussian to time distribution using unbinned maximum likelihood estimation
def NLL_1D(mean, sig):
'''Negative log likelihood function for (n=1)-dimensional Gaussian distribution.'''
return( -np.sum(norm.logpdf(x=data_t,
loc=mean,
scale=sig)) )
def Start_Par(data):
'''Starting parameter based on simple mean of 1D numpy array.'''
return(np.array([data.mean(), # meanx
data.std()])) #rho
# minimize negative log likelihood function first for the symmetric case
data_t = df_list[df_nr][(df_list[df_nr].time > bin_time_df[df_nr].time[bin_time_df[df_nr].counts.idxmax()] - 1.0) &
(df_list[df_nr].time < bin_time_df[df_nr].time[bin_time_df[df_nr].counts.idxmax()] + 1.0)].time.to_numpy()
result_t[df_nr] = Minuit(NLL_1D, mean=Start_Par(data_t)[0], sig=Start_Par(data_t)[1])
result_t[df_nr].errors = (0.1, 0.1) # initial step size
result_t[df_nr].limits =[(None, None), (None, None)] # fit ranges
result_t[df_nr].errordef = Minuit.LIKELIHOOD # MLE definition (instead of Minuit.LEAST_SQUARES)
result_t[df_nr].migrad() # finds minimum of mle function
result_t[df_nr].hesse() # computes errors
for p in result_t[df_nr].parameters:
print("{} = {:3.5f} +/- {:3.5f}".format(p, result_t[df_nr].values[p], result_t[df_nr].errors[p]))
##############################
### VISUALIZE TOF DISTRIBUTION # kind='bar' is VERY time consuming -> use kind='line' instead!
##############################
# whole distribution
bin_time_df[df_nr].plot(x='time', y='counts', kind='line', xticks=np.arange(manual_tof_cut[0],manual_tof_cut[1]+1,5), ax=axes[df_nr,0])
# reduced peak plus fit
bin_time_df[df_nr][bin_time_df[df_nr].counts.idxmax()-50:bin_time_df[df_nr].counts.idxmax()+50].plot(x='time', y='counts', kind='line', ax=axes[df_nr,1])
pdf_x = np.arange(bin_time_df[df_nr].time[bin_time_df[df_nr].counts.idxmax()-50],
bin_time_df[df_nr].time[bin_time_df[df_nr].counts.idxmax()+51],
(bin_time_df[df_nr].time[bin_time_df[df_nr].counts.idxmax()+51]
-bin_time_df[df_nr].time[bin_time_df[df_nr].counts.idxmax()-50])/100)
pdf_y = norm.pdf(pdf_x, result_t[df_nr].values['mean'], result_t[df_nr].values['sig'])
axes[df_nr,0].plot(pdf_x, pdf_y/pdf_y.max()*bin_time_df[df_nr].counts.max(), 'r', label='PDF')
axes[df_nr,1].plot(pdf_x, pdf_y/pdf_y.max()*bin_time_df[df_nr].counts.max(), 'r', label='PDF')
# mark events in t that will be cut away (+/- 3 sigma = 99.73% of data)
bin_time_df[df_nr][(bin_time_df[df_nr].time < result_t[df_nr].values['mean'] - 3*result_t[df_nr].values['sig']) |
(bin_time_df[df_nr].time > result_t[df_nr].values['mean'] + 3*result_t[df_nr].values['sig'])].plot(x='time', y='counts', kind='scatter', ax=axes[df_nr,0], c='y', marker='x', s=50, label='excluded')
bin_time_df[df_nr][(bin_time_df[df_nr].time < result_t[df_nr].values['mean'] - 3*result_t[df_nr].values['sig']) |
(bin_time_df[df_nr].time > result_t[df_nr].values['mean'] + 3*result_t[df_nr].values['sig'])].plot(x='time', y='counts', kind='scatter', ax=axes[df_nr,1], c='y', marker='x', s=50, label='excluded')
# legend title shows total number of events and reduced number of events
axes[df_nr,0].legend(title='total: {}'.format(bin_time_df[df_nr].counts.sum()),loc='upper right', fontsize=16)
axes[df_nr,1].legend(title='considered: {}'.format(bin_time_df[df_nr].counts.sum()-bin_time_df[df_nr][(bin_time_df[df_nr].time < result_t[df_nr].values['mean'] - 3*result_t[df_nr].values['sig']) |
(bin_time_df[df_nr].time > result_t[df_nr].values['mean'] + 3*result_t[df_nr].values['sig'])].counts.sum()),loc='upper left', fontsize=16)
##############################
### APPYING ALL CUTS
##############################
# cutting in t: mean +/- 5 sigma
cut_df[df_nr] = df_list[df_nr][(df_list[df_nr].time > (result_t[df_nr].values['mean'] - 5*result_t[df_nr].values['sig']))&
(df_list[df_nr].time < (result_t[df_nr].values['mean'] + 5*result_t[df_nr].values['sig']))]
len1 = cut_df[df_nr].shape[0]
# applying manual cut in x and y:
if df_nr < 2: # only for p1 and p2, not for c
cut_df[df_nr] = cut_df[df_nr][((cut_df[df_nr].x-manual_space_cut[file_name][0])**2 + (cut_df[df_nr].y-manual_space_cut[file_name][2])**2) <
manual_space_cut[file_name][1]*manual_space_cut[file_name][3]]
len2 = cut_df[df_nr].shape[0]
# applying automatic cut in x and y: mean +/- 3 std in an ellipsoidal cut
cut_df[df_nr] = cut_df[df_nr][((cut_df[df_nr].x-cut_df[df_nr].x.mean())**2 + (cut_df[df_nr].y-cut_df[df_nr].y.mean())**2) <
3*cut_df[df_nr].x.std()*3*cut_df[df_nr].y.std()]
len3 = cut_df[df_nr].shape[0]
# applying automatic z-class-cut (= cut by number of ions per event) for z>5 ions per event to reduce space-charge effects:
cut_df[df_nr] = cut_df[df_nr][cut_df[df_nr].event.isin(cut_df[df_nr].event.value_counts()[cut_df[df_nr].event.value_counts() <= manual_z_cut].index)]
# printing the reduction of the number of ions per file in each of the cut steps
print('\n{}: data size: {} -> time cut: {} -> manual space cut: {} -> automatic space cut: {} -> z-class-cut: {}\n'.format(pattern[df_nr], df_list[df_nr].shape[0], len1, len2, len3, cut_df[df_nr].shape[0]))
# saves excluded data (allows visual checking later)
excludes_df[df_nr] = pd.concat([df_list[df_nr], cut_df[df_nr]]).drop_duplicates(keep=False).reset_index(drop=True)
plt.savefig('{}-tof.pdf'.format(file_name))
plt.show()
```
### Spot fitting
2D multivariate Gaussian maximum likelihood estimations of the cleaned pattern 1, pattern 2 and center spot positions are performed using SciPy PDFs and ROOT's minimizer. All uncut data are displayed as blue, transparent points. This allows displaying a density of points by the shade of blue without the need to bin the data (= reducing the information; also, binning is much more time-consuming). The cut data is displayed with a black "x" at the position of the blue point. These points are not considered in the fit (represented by the red 6-$\sigma$ band) but allow for an additional check of the cutting functions. The scale of the MCP-position plots is given in the time unit of the position-sensitive MCP data. There is no need to convert it into a mm unit since one is only interested in the angle.
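For reference, the covariance matrix constructed in `NLL_2D` below is parametrized by two widths and a rotation angle (this is just a restatement of the code, not additional information):

$$\Sigma(\sigma_x,\sigma_y,\theta) = R(\theta)\begin{pmatrix}\sigma_x^{2} & 0\\ 0 & \sigma_y^{2}\end{pmatrix}R(\theta)^{T},\qquad R(\theta)=\begin{pmatrix}\cos\theta & -\sin\theta\\ \sin\theta & \cos\theta\end{pmatrix},$$

and the fit minimizes the negative log-likelihood $-\sum_i \ln \mathcal{N}(\mathbf{x}_i \mid \boldsymbol{\mu}, \Sigma)$ over the five parameters $(\mu_x,\mu_y,\sigma_x,\sigma_y,\theta)$.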
```
%config InlineBackend.figure_format ='retina'
# activate interactive matplotlib plot -> uncomment line below!
# %matplotlib notebook
import pickle, os
from scipy.stats import multivariate_normal, linregress, pearsonr
from scipy.optimize import minimize
import numpy as np
from iminuit import Minuit
# open preanalyzed dataset if existing
if os.path.isfile('data/data-save.p'):
analysis = pickle.load(open('data/data-save.p','rb'))
df_list = [df_p1, df_p2, df_c]
result = [{},{},{}]
root_res = [0,0,0]
parameters = ['meanx', 'meany', 'sigx', 'sigy', 'theta']
fig2, axes2 = plt.subplots(nrows=3, ncols=1, figsize=(7.5, 20))
piicr_scheme_names = ['p1','p2','c']
##############################
### Prepare maximum likelihood estimation
##############################
def Rot(theta):
'''Rotation (matrix) of angle theta to cartesian coordinates.'''
return np.array([[np.cos(theta), -np.sin(theta)],
[np.sin(theta), np.cos(theta)]])
def NLL_2D(meanx, meany, sigx, sigy, theta):
'''Negative log likelihood function for (n=2)-dimensional Gaussian distribution for Minuit.'''
cov = Rot(theta) @ np.array([[np.power(sigx,2),0],[0,np.power(sigy,2)]]) @ Rot(theta).T
return( -np.sum(multivariate_normal.logpdf(x=data,
mean=np.array([meanx, meany]),
cov=cov,
allow_singular=True)) )
def NLL_2D_scipy(param):
'''Negative log likelihood function for (n=2)-dimensional Gaussian distribution for SciPy.'''
meanx, meany, sigx, sigy, theta = param
cov = Rot(theta) @ np.array([[np.power(sigx,2),0],[0,np.power(sigy,2)]]) @ Rot(theta).T
return( -np.sum(multivariate_normal.logpdf(x=data,
mean=np.array([meanx, meany]),
cov=cov,
allow_singular=True)) )
def Start_Par(data):
'''Starting parameter based on simple linear regression and 2D numpy array.'''
# simple linear regression to guess the rotation angle based on slope
slope, intercept, r_value, p_value, std_err = linregress(data[:, 0], data[:, 1])
theta_guess = -np.arctan(slope)
# data rotated based on theta guess
data_rotated_guess = np.dot(Rot(theta_guess), [data[:,0], data[:,1]])
first_guess = np.array([data[:,0].mean()+0.2, # meanx
data[:,1].mean()+0.2, # meany
data_rotated_guess[1].std(), # sigma-x
data_rotated_guess[0].std(), # sigma-y
theta_guess]) # rot. angle based on slope of lin. reg.
# based on a first guess, a minimization based on a robust simplex is performed
start_par = minimize(NLL_2D_scipy, first_guess, method='Nelder-Mead')
return(start_par['x'])
##############################
### Fitting and visualization of P1, P2, C
##############################
for df_nr in range(len(df_list)):
# minimize negative log likelihood function first for the symmetric case
data = cut_df[df_nr][['x', 'y']].to_numpy()
root_res[df_nr] = Minuit(NLL_2D, meanx=Start_Par(data)[0], meany=Start_Par(data)[1],
sigx=Start_Par(data)[2], sigy=Start_Par(data)[3],
theta=Start_Par(data)[4])
root_res[df_nr].errors = (0.1, 0.1, 0.1, 0.1, 0.1) # initial step size
root_res[df_nr].limits =[(None, None), (None, None), (None, None), (None, None), (None, None)] # fit ranges
root_res[df_nr].errordef = Minuit.LIKELIHOOD # MLE definition (instead of Minuit.LEAST_SQUARES)
root_res[df_nr].migrad() # finds minimum of mle function
root_res[df_nr].hesse() # computes errors
# plotting of data, excluded data, reference MCP circle, and fit results
axes2[df_nr].plot(df_list[df_nr].x.to_numpy(),df_list[df_nr].y.to_numpy(),'o',alpha=0.15,label='data',zorder=0)
axes2[df_nr].plot(excludes_df[df_nr].x.to_numpy(), excludes_df[df_nr].y.to_numpy(), 'x k',
label='excluded data',zorder=1)
mcp_circ = mpl.patches.Ellipse((0,0), 1500, 1500, edgecolor='k', fc='None', lw=2)
axes2[df_nr].add_patch(mcp_circ)
axes2[df_nr].scatter(root_res[df_nr].values['meanx'], root_res[df_nr].values['meany'], marker='o', color=col[1], linewidth=0, zorder=2)
sig = mpl.patches.Ellipse((root_res[df_nr].values['meanx'], root_res[df_nr].values['meany']),
3*root_res[df_nr].values['sigx'], 3*root_res[df_nr].values['sigy'],
np.degrees(root_res[df_nr].values['theta']),
edgecolor=col[1], fc='None', lw=2, label='6-$\sigma$ band (fit)', zorder=2)
axes2[df_nr].add_patch(sig)
axes2[df_nr].legend(title='fit(x) = {:1.0f}({:1.0f})\nfit(y) = {:1.0f}({:1.0f})'.format(root_res[df_nr].values['meanx'],root_res[df_nr].errors['meanx'],
root_res[df_nr].values['meany'],root_res[df_nr].errors['meany']),
loc='lower left', fontsize=14)
axes2[df_nr].axis([-750,750,-750,750])
axes2[df_nr].grid(True)
axes2[df_nr].text(-730, 660, '{}: {}'.format(file_name.split('/',1)[-1], piicr_scheme_names[df_nr]))
plt.tight_layout()
# save fit information for each parameter:
# 'parameter': [fitresult, fiterror, Hesse-covariance matrix]
for i in range(len(parameters)):
result[df_nr].update({'{}'.format(parameters[i]): [np.array(root_res[df_nr].values)[i],
np.array(root_res[df_nr].errors)[i],
root_res[df_nr].covariance]})
if 'ground' in file_name:
analysis['ioi_g'][file_name]['fit-{}'.format(piicr_scheme_names[df_nr])] = result[df_nr]
elif 'isomer' in file_name:
analysis['ioi_m'][file_name]['fit-{}'.format(piicr_scheme_names[df_nr])] = result[df_nr]
else:
analysis['ref'][file_name]['fit-{}'.format(piicr_scheme_names[df_nr])] = result[df_nr]
plt.savefig('{}-fit.pdf'.format(file_name))
plt.show()
# save all data using pickle
pickle.dump(analysis, open('data/data-save.p','wb'))
```
---
# !!! <font color='red'>REPEAT</font> CODE ABOVE FOR ALL INDIVIDUAL FILES !!!
---
<br>
<br>
### Save fit data to dataframe and *.csv file
<br>Continue here after analyzing all files individually. The following command saves all necessary data and fit information in a `*.csv` file.
```
calc_df = pd.DataFrame()
for key in analysis.keys():
for subkey in analysis[key].keys():
if '_c_' not in subkey:
calc_df = calc_df.append(pd.DataFrame({'file': subkey,
'p1_x': analysis[key][subkey]['fit-p1']['meanx'][0],
'p1_y': analysis[key][subkey]['fit-p1']['meany'][0],
'p2_x': analysis[key][subkey]['fit-p2']['meanx'][0],
'p2_y': analysis[key][subkey]['fit-p2']['meany'][0],
'c_x': analysis[key][subkey]['fit-c']['meanx'][0],
'c_y': analysis[key][subkey]['fit-c']['meany'][0],
'p1_x_unc': analysis[key][subkey]['fit-p1']['meanx'][1],
'p1_y_unc': analysis[key][subkey]['fit-p1']['meany'][1],
'p2_x_unc': analysis[key][subkey]['fit-p2']['meanx'][1],
'p2_y_unc': analysis[key][subkey]['fit-p2']['meany'][1],
'c_x_unc': analysis[key][subkey]['fit-c']['meanx'][1],
'c_y_unc': analysis[key][subkey]['fit-c']['meany'][1],
'cyc_freq_guess': analysis[key][subkey]['cyc_freq'],
'red_cyc_freq': analysis[key][subkey]['red_cyc_freq'],
'mag_freq': analysis[key][subkey]['mag_freq'],
'cyc_acc_time': analysis[key][subkey]['cyc_acc_time'],
'n_acc': analysis[key][subkey]['n_acc'],
'time_start': pd.to_datetime('{} {}'.format(analysis[key][subkey]['time-info'][0], analysis[key][subkey]['time-info'][1]), format='%m/%d/%Y %H:%M:%S', errors='ignore'),
'time_end': pd.to_datetime('{} {}'.format(analysis[key][subkey]['time-info'][2], analysis[key][subkey]['time-info'][3]), format='%m/%d/%Y %H:%M:%S', errors='ignore')}, index=[0]), ignore_index=True)
calc_df.to_csv('data/analysis-summary.csv')
calc_df
```
### Calculate $\nu_c$ from position fits
[1]: https://doi.org/10.1007/s00340-013-5621-0
[2]: https://doi.org/10.1103/PhysRevLett.110.082501
[3]: https://doi.org/10.1007/s10751-019-1601-z
This section can be run independently of everything above by loading the `analysis-summary.csv` file.<br> A detailed description of the $\nu_c$ calculation can be found in Refs. [1], [2] and [3].
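In compact form, the next cell computes (a summary of the code below, using the dataframe column names):

$$\alpha_{P1P2} = \operatorname{atan2}(y_{P1}-y_C,\,x_{P1}-x_C) - \operatorname{atan2}(y_{P2}-y_C,\,x_{P2}-x_C),\qquad \nu_c = \frac{\alpha_{P1P2} + 2\pi\, n_{\rm acc}}{2\pi\, t_{\rm acc}},$$

where $t_{\rm acc}$ is the phase accumulation time `cyc_acc_time` converted from µs to seconds, and the uncertainty of $\nu_c$ follows from propagating the fitted spot-position uncertainties through the partial derivatives of $\operatorname{atan2}$.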
```
import pandas as pd
import numpy as np
# load fit-data file, datetime has to be converted
calc_df = pd.read_csv('data/analysis-summary.csv', header=0, index_col=0)
# calculate angle between the P1-vector (P1_x/y - C_x/y) and the P2-vector (P2_x/y - C_x/y)
calc_df['p1p2_angle'] = np.arctan2(calc_df.p1_y - calc_df.c_y, calc_df.p1_x - calc_df.c_x) \
- np.arctan2(calc_df.p2_y - calc_df.c_y, calc_df.p2_x - calc_df.c_x)
# calculate the uncertainty on the angle between the P1/P2 vectors
# see https://en.wikipedia.org/wiki/Atan2
calc_df['p1p2_angle_unc'] = np.sqrt(
( calc_df.p1_x_unc * (calc_df.c_y - calc_df.p1_y) / ( (calc_df.p1_x - calc_df.c_x)**2 + (calc_df.p1_y - calc_df.c_y)**2 ) )**2
+ ( calc_df.p1_y_unc * (calc_df.p1_x - calc_df.c_x) / ( (calc_df.p1_x - calc_df.c_x)**2 + (calc_df.p1_y - calc_df.c_y)**2 ) )**2
+ ( calc_df.p2_x_unc * (calc_df.c_y - calc_df.p2_y) / ( (calc_df.p2_x - calc_df.c_x)**2 + (calc_df.p2_y - calc_df.c_y)**2 ) )**2
+ ( calc_df.p2_y_unc * (calc_df.p2_x - calc_df.c_x) / ( (calc_df.p2_x - calc_df.c_x)**2 + (calc_df.p2_y - calc_df.c_y)**2 ) )**2
+ ( calc_df.c_x_unc *
( -(calc_df.c_y - calc_df.p1_y) / ( (calc_df.p1_x - calc_df.c_x)**2 + (calc_df.p1_y - calc_df.c_y)**2 )
-(calc_df.c_y - calc_df.p2_y) / ( (calc_df.p2_x - calc_df.c_x)**2 + (calc_df.p2_y - calc_df.c_y)**2 ) ) )**2
+ ( calc_df.c_y_unc *
( (calc_df.p1_x - calc_df.c_x) / ( (calc_df.p1_x - calc_df.c_x)**2 + (calc_df.p1_y - calc_df.c_y)**2 )
+(calc_df.p2_x - calc_df.c_x) / ( (calc_df.p2_x - calc_df.c_x)**2 + (calc_df.p2_y - calc_df.c_y)**2 ) ) )**2 )
# calculate cyc freq: total phase divided by total time
calc_df['cyc_freq'] = (calc_df.p1p2_angle + 2*np.pi * calc_df.n_acc) / (2*np.pi * calc_df.cyc_acc_time * 0.000001)
calc_df['cyc_freq_unc'] = calc_df.p1p2_angle_unc / (2*np.pi * calc_df.cyc_acc_time * 0.000001)
calc_df.to_csv('data/analysis-summary.csv')
calc_df.head()
```
### Frequency-ratio calculation
[1]: https://doi.org/10.1007/s00340-013-5621-0
[2]: https://doi.org/10.1103/PhysRevLett.110.082501
[3]: https://doi.org/10.1007/s10751-019-1601-z
In order to determine the frequency ratio between the ioi and the ref, simultaneous fits with all polynomial degrees possible for the data set are performed. The code calculates the reduced $\chi^2_{red}$ for each fit and returns only the one with a $\chi^2_{red}$ closest to 1. A detailed description of the procedure can be found in Ref. [3]. If problems occur in the fitting, please try to vary the starting-parameter section in lines 125-135 of `~/bin/freq_ratio.py`
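The degree selection itself amounts to something like the following sketch (made-up $\chi^2_{red}$ values; the actual simultaneous-fit machinery lives in `bin/freq_ratio.py` and is not reproduced here):
```
def best_poly_degree(chi_sq_red_by_degree):
    '''Toy helper (not from freq_ratio.py): pick the polynomial degree whose reduced chi-square is closest to 1.'''
    return min(chi_sq_red_by_degree, key=lambda deg: abs(chi_sq_red_by_degree[deg] - 1.0))

# hypothetical reduced chi-square values for polynomial degrees 1..4
print(best_poly_degree({1: 3.2, 2: 1.1, 3: 0.7, 4: 0.4})) # -> 2
```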
```
import pandas as pd
import numpy as np
from bin.freq_ratio import Freq_ratio
freq = Freq_ratio()
# load fit-data file
calc_df = pd.read_csv('data/analysis-summary.csv', header=0, index_col=0)
# save average time of measurement: t_start+(t_end-t_start)/2
calc_df.time_start = pd.to_datetime(calc_df.time_start)
calc_df.time_end = pd.to_datetime(calc_df.time_end)
calc_df['time'] = calc_df.time_start + (calc_df.time_end - calc_df.time_start)/2
calc_df.to_csv('data/analysis-summary.csv')
# convert avg.time to difference in minutes from first measurement -> allows fitting with small number as x value
calc_df['time_delta'] = ((calc_df['time']-calc_df['time'].min())/np.timedelta64(1, 's')/60)
# selecting data for isotopes
df_ioi_g = calc_df[calc_df.file.str.contains('ground')][['time_delta','cyc_freq','cyc_freq_unc','time','file']]
df_ioi_m = calc_df[calc_df.file.str.contains('isomer')][['time_delta','cyc_freq','cyc_freq_unc','time','file']]
# allows to define a subset of reference frequencies for ground and isomer
df_ref_g = calc_df[calc_df.file.str.contains('ref')][['time_delta','cyc_freq','cyc_freq_unc','time','file']]
df_ref_m = calc_df[calc_df.file.str.contains('ref')][['time_delta','cyc_freq','cyc_freq_unc','time','file']]
# simultaneous polynomial fit, see https://doi.org/10.1007/s10751-019-1601-z
fit1, fit2, ratio1, ratio_unc1, chi_sq1 = freq.ratio_sim_fit(['ref', 'ioi_g'],
df_ref_g.time_delta.tolist(),
df_ref_g.cyc_freq.tolist(),
df_ref_g.cyc_freq_unc.tolist(),
df_ioi_g.time_delta.tolist(),
df_ioi_g.cyc_freq.tolist(),
df_ioi_g.cyc_freq_unc.tolist())
if len(df_ioi_m) > 0:
fit3, fit4, ratio2, ratio_unc2, chi_sq2 = freq.ratio_sim_fit(['ref', 'ioi_m'],
df_ref_m.time_delta.tolist(),
df_ref_m.cyc_freq.tolist(),
df_ref_m.cyc_freq_unc.tolist(),
df_ioi_m.time_delta.tolist(),
df_ioi_m.cyc_freq.tolist(),
df_ioi_m.cyc_freq_unc.tolist())
```
### Frequency-ratio plotting
```
%config InlineBackend.figure_format ='retina'
import matplotlib.pyplot as plt
import matplotlib as mpl
import pandas as pd
import numpy as np
mpl.rc('font', family='serif', serif='Linguistics Pro') # open source Utopia LaTeX font with greek letters
mpl.rc('text', usetex=False)
mpl.rc('mathtext', fontset='custom',
rm='Linguistics Pro',
it='Linguistics Pro:italic',
bf='Linguistics Pro:bold')
mpl.rcParams.update({'font.size': 18})
# prepare fit data
x1 = np.linspace(min([df_ioi_g.time_delta.min(),df_ref_g.time_delta.min()]),max([df_ioi_g.time_delta.max(),df_ref_g.time_delta.max()]),500)
t1 = pd.date_range(pd.Series([df_ioi_g.time.min(),df_ref_g.time.min()]).min(),pd.Series([df_ioi_g.time.max(),df_ref_g.time.max()]).max(),periods=500)
if len(df_ioi_m) > 0:
x2 = np.linspace(min([df_ioi_m.time_delta.min(),df_ref_m.time_delta.min()]),max([df_ioi_m.time_delta.max(),df_ref_m.time_delta.max()]),500)
t2 = pd.date_range(pd.Series([df_ioi_m.time.min(),df_ref_m.time.min()]).min(),pd.Series([df_ioi_m.time.max(),df_ref_m.time.max()]).max(),periods=500)
fit1_y = [np.polyval(fit1, i) for i in x1]
fit2_y = [np.polyval(fit2, i) for i in x1]
if len(df_ioi_m) > 0:
fit3_y = [np.polyval(fit3, i) for i in x2]
fit4_y = [np.polyval(fit4, i) for i in x2]
#########################
### PLOTTING ground state
#########################
if len(df_ioi_m) > 0:
fig, (ax1, ax3) = plt.subplots(figsize=(9,12),nrows=2, ncols=1)
else:
fig, ax1 = plt.subplots(figsize=(9,6),nrows=1, ncols=1)
ax1.errorbar(df_ref_g.time, df_ref_g.cyc_freq, yerr=df_ref_g.cyc_freq_unc, fmt='o', label='{}'.format(latex_ref), marker='d', c='#1E77B4', ms=10, elinewidth=2.5)
ax1.set_xlabel('Time', fontsize=24, fontweight='bold')
# Make the y-axis label, ticks and tick labels match the line color.
ax1.set_ylabel('Frequency (Hz)', fontsize=24, fontweight='bold')
ax1.tick_params('y', colors='#1E77B4')
ax1.plot(t1, fit1_y, ls=(5.5, (5, 1, 1, 1, 1, 1, 1, 1)),c='#1E77B4', label='poly-fit')
# Allowing two axes in one subplot
ax2 = ax1.twinx()
ax2.errorbar(df_ioi_g.time, df_ioi_g.cyc_freq, yerr=df_ioi_g.cyc_freq_unc, fmt='o', color='#D62728', label='{}'.format(latex_ioi_g), fillstyle='none', ms=10, elinewidth=2.5) # green: #2ca02c
ax2.tick_params('y', colors='#D62728')
ax2.plot(t1, fit2_y, ls=(0, (5, 3, 1, 3)),c='#D62728', label='poly-fit')
# adjust the y axes to be the same height
middle_y1 = df_ref_g.cyc_freq.min() + (df_ref_g.cyc_freq.max() - df_ref_g.cyc_freq.min())/2
middle_y2 = df_ioi_g.cyc_freq.min() + (df_ioi_g.cyc_freq.max() - df_ioi_g.cyc_freq.min())/2
range_y1 = df_ref_g.cyc_freq.max() - df_ref_g.cyc_freq.min() + 2 * df_ref_g.cyc_freq_unc.max()
range_y2 = df_ioi_g.cyc_freq.max() - df_ioi_g.cyc_freq.min() + 2 * df_ioi_g.cyc_freq_unc.max()
ax1.set_ylim(middle_y1 - 1.3 * max([range_y1, middle_y1*range_y2/middle_y2])/2, middle_y1 + 1.1 * max([range_y1, middle_y1*range_y2/middle_y2])/2) # outliers only
ax2.set_ylim(middle_y2 - 1.1 * max([middle_y2*range_y1/middle_y1, range_y2])/2, middle_y2 + 1.3 * max([middle_y2*range_y1/middle_y1, range_y2])/2) # most of the data
# plotting only hours without the date
ax2.xaxis.set_major_formatter(mpl.dates.DateFormatter('%H:%M'))
ax2.xaxis.set_minor_locator(mpl.dates.HourLocator())
handles1, labels1 = ax1.get_legend_handles_labels()
handles2, labels2 = ax2.get_legend_handles_labels()
handles_g = [handles1[1], handles2[1], (handles1[0], handles2[0])]
labels_g = [labels1[1], labels2[1], labels1[0]]
plt.legend(handles=handles_g, labels=labels_g,fontsize=18,title='Ratio: {:1.10f}\n $\\pm${:1.10f}'.format(ratio1, ratio_unc1), loc='upper right')
plt.text(0.03,0.03,'poly-{}: $\chi^2_{{red}}$ {:3.2f}'.format(len(fit1)-1, chi_sq1),transform=ax1.transAxes)
###########################
### PLOTTING isomeric state
###########################
if len(df_ioi_m) > 0:
ax3.errorbar(df_ref_m.time, df_ref_m.cyc_freq, yerr=df_ref_m.cyc_freq_unc, fmt='o', label='{}'.format(latex_ref), marker='d', c='#1E77B4', ms=10, elinewidth=2.5)
ax3.set_xlabel('Time', fontsize=24, fontweight='bold')
# Make the y-axis label, ticks and tick labels match the line color.
ax3.set_ylabel('Frequency (Hz)', fontsize=24, fontweight='bold')
ax3.tick_params('y', colors='#1E77B4')
ax3.plot(t2, fit3_y, ls=(5.5, (5, 1, 1, 1, 1, 1, 1, 1)),c='#1E77B4', label='poly-fit')
# Allowing two axes in one subplot
ax4 = ax3.twinx()
ax4.errorbar(df_ioi_m.time, df_ioi_m.cyc_freq, yerr=df_ioi_m.cyc_freq_unc, fmt='o', color='#D62728', label='{}'.format(latex_ioi_m), fillstyle='none', ms=10, elinewidth=2.5) # green: #2ca02c
ax4.tick_params('y', colors='#D62728')
ax4.plot(t2, fit4_y, ls=(0, (5, 3, 1, 3)),c='#D62728', label='poly-fit')
# adjust the y axes to be the same height
middle_y3 = df_ref_m.cyc_freq.min() + (df_ref_m.cyc_freq.max() - df_ref_m.cyc_freq.min())/2
middle_y4 = df_ioi_m.cyc_freq.min() + (df_ioi_m.cyc_freq.max() - df_ioi_m.cyc_freq.min())/2
range_y3 = df_ref_m.cyc_freq.max() - df_ref_m.cyc_freq.min() + 2 * df_ref_m.cyc_freq_unc.max()
range_y4 = df_ioi_m.cyc_freq.max() - df_ioi_m.cyc_freq.min() + 2 * df_ioi_m.cyc_freq_unc.max()
ax3.set_ylim(middle_y3 - 1.3 * max([range_y3, middle_y3*range_y4/middle_y4])/2, middle_y3 + 1.1 * max([range_y3, middle_y3*range_y4/middle_y4])/2) # outliers only
ax4.set_ylim(middle_y4 - 1.1 * max([middle_y4*range_y3/middle_y3, range_y4])/2, middle_y4 + 1.3 * max([middle_y4*range_y3/middle_y3, range_y4])/2) # most of the data
# plotting only hours without the date
ax4.xaxis.set_major_formatter(mpl.dates.DateFormatter('%H:%M'))
ax4.xaxis.set_minor_locator(mpl.dates.HourLocator())
handles3, labels3 = ax3.get_legend_handles_labels()
handles4, labels4 = ax4.get_legend_handles_labels()
handles_m = [handles3[1], handles4[1], (handles3[0], handles4[0])]
labels_m = [labels3[1], labels4[1], labels3[0]]
plt.legend(handles=handles_m, labels=labels_m, fontsize=18,title='Ratio: {:1.10f}\n $\\pm${:1.10f}'.format(ratio2, ratio_unc2), loc='upper right')
plt.text(0.03,0.03,'poly-{}: $\chi^2_{{red}}$ {:3.2f}'.format(len(fit3)-1, chi_sq2),transform=ax3.transAxes)
plt.tight_layout()
plt.savefig('data/freq-ratios.pdf')
plt.show()
```
| github_jupyter |
# Introduction to Jupyter Notebooks and Text Processing in Python
This 'document' is a Jupyter notebook. It allows you to combine explanatory **text** and **code** that executes to produce results you can see on the same page.
## Notebook Basics
### Text cells
The box this text is written in is called a *cell*. It is a *text cell* written in a very simple markup language called 'Markdown'. Here is a useful [Markdown cheatsheet](https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet). You can edit and then run cells to produce a result. Running this text cell produces formatted text.
### Code cells
The other main kind of cell is a *code cell*. The cell immediately below this one is a code cell. Running a code cell runs the code in the cell and produces a result.
```
# This is a comment in a code cell. Comments start with a # symbol. They are ignored and do not do anything.
# This box is a code cell. When this cell is run, the code below will execute and produce a result
3 + 4
```
## Simple String Manipulation in Python
This section introduces some very basic things you can do in Python to create and manipulate *strings*. A string is a simple sequence of characters, like `flabbergast`. This introduction is limited to those things that may be useful to know in order to understand the *Bughunt!* data mining in the following two notebooks.
### Creating and Storing Strings in Variables
Strings are simple to create in Python. You can simply write some characters in quote marks.
```
'Butterflies are important as pollinators.'
```
In order to do something useful with this string, other than print it out, we need to store it in a *variable* by using the assignment operator `=` (equals sign). Whatever is on the right-hand side of the `=` is stored in a variable with the name on the left-hand side.
```
# my_variable is the variable on the left
# 'Butterflies are important as pollinators.' is the string on the right that is stored in the variable my_variable
my_variable = 'Butterflies are important as pollinators.'
```
Notice that nothing is printed to the screen. That's because the string is stored in the variable `my_variable`. In order to see what is inside the variable `my_variable` we can simply write `my_variable` in a code cell, run it, and the interpreter will print it out for us.
```
my_variable
```
### Manipulating Bits of Strings
#### Accessing Individual Characters
A string is just a sequence (or list) of characters. You can access **individual characters** in a string by specifying which ones you want in square brackets. If you want the first character, you might try specifying `1`.
```
my_variable[1]
```
Hang on a minute! Why did it give us `u` instead of `B`?
In programming, everything tends to be *zero indexed*, which means that things are counted from 0 rather than 1. Thus, in the example above, `1` gives us the *second* character in the string.
If you want the first character in the string, you need to specify the index `0`!
```
my_variable[0]
```
#### Accessing a Range of Characters
You can also pick out a **range of characters** from within a string, by giving the *start index* followed by the *end index* with a colon (`:`) in between.
The example below gives us the character at index `0` all the way up to, *but not including*, the character at index `20`.
```
my_variable[0:20]
```
### Changing Whole Strings with Functions
Python has some built-in *functions* that allow you to change a whole string at once. You can change all characters to lowercase or uppercase:
```
my_variable.lower()
my_variable.upper()
```
NB: These functions do not change the original string but create a new one. Our original string is still the same as it was before:
```
my_variable
```
### Testing Strings
You can also test a string to see if it passes some test, e.g. does the string consist of alphabetic characters only?
```
my_variable.isalpha()
```
Does the string have the letter `p` in it?
```
'p' in my_variable
```
### Lists of Strings
Another important thing we can do with strings is to create a list of strings by listing them inside square brackets `[]`:
```
my_list = ['Butterflies are important as pollinators',
'Butterflies feed primarily on nectar from flowers',
'Butterflies are widely used in objects of art']
my_list
```
### Manipulating Lists of Strings
Just like with strings, we can access individual items inside a list by index number:
```
my_list[0]
```
And we can access a range of items inside a list by *slicing*:
```
my_list[0:2]
```
### Advanced: Creating Lists of Strings with List Comprehensions
We can create new lists in an elegant way by combining some of the things we have covered above. Here is an example where we have taken our original list `my_list` and created a new list `new_list` by going over each string in the list:
```
new_list = [string for string in my_list]
new_list
```
Why do this? If we combine it with a test, we can have a list that only contains strings with the letter `p` in them:
```
new_list_p = [string for string in my_list if 'p' in string]
new_list_p
```
This is a very powerful way to quickly create lists. We can even change all the strings to uppercase at the same time!
```
new_list_p_upper = [string.upper() for string in my_list if 'p' in string]
new_list_p_upper
```
| github_jupyter |
<font size="+1">This notebook will illustrate how to access DeepLabCut (DLC) results for IBL sessions and how to create short videos with DLC labels and the wheel angle printed onto them, starting by downloading data from the IBL flatiron server. It requires ibllib, a ONE account and the following script: https://github.com/int-brain-lab/iblapps/blob/master/DLC_labeled_video.py</font>
```
run '/home/mic/Dropbox/scripts/IBL/DLC_labeled_video.py'
one = ONE()
```
Let's first find IBL ephys sessions with DLC results:
```
eids= one.search(task_protocol='ephysChoiceworld', dataset_types=['camera.dlc'], details=False)
len(eids)
```
For a particular session, we can create a short labeled video by calling the function Viewer, specifying the eid of the desired session, the video type (there are 'left', 'right' and 'body' videos), and a range of trials for which the video should be created. Most sessions have around 700 trials. In the following, this is illustrated with session '3663d82b-f197-4e8b-b299-7b803a155b84', video type 'left', trials range [10,13] and without a zoom for the eye, such that nose, paw and tongue tracking is visible. The eye-zoom option shows only the four points delineating the pupil edges, which are too small to be visible in the normal view. Note that this automatically starts the download of the video from flatiron (in case it is not stored locally already), which may take a while since these videos are about 8 GB in size.
```
eid = eids[6]
Viewer(eid, 'left', [10,13], save_video=True, eye_zoom=False)
```
As usual when downloading IBL data from flatiron, the dimensions are listed. Below is one frame of the video for illustration. One can see one point for each paw, two points for the edges of the tongue, one point for the nose, and four points close together around the pupil edges. All points for which the DLC network had a confidence probability below 0.9 are hidden. For instance, when the mouse is not licking, there is no visible tongue, so the network cannot detect it and no points are shown.
The script will display and save the short video in your local folder.

Sections of the script <code>DLC_labeled_video.py</code> can be recycled to analyse DLC traces. For example, let's plot the x coordinate of the right paw in a <code>'left'</code> cam video for a given trial.
```
one = ONE()
dataset_types = ['camera.times','trials.intervals','camera.dlc']
video_type = 'left'
# get paths to load in data
D = one.load('3663d82b-f197-4e8b-b299-7b803a155b84',dataset_types=dataset_types, dclass_output=True)
alf_path = Path(D.local_path[0]).parent.parent / 'alf'
video_data = alf_path.parent / 'raw_video_data'
# get trials start and end times, camera time stamps (one for each frame, synced with DLC trace)
trials = alf.io.load_object(alf_path, '_ibl_trials')
cam0 = alf.io.load_object(alf_path, '_ibl_%sCamera' % video_type)
cam1 = alf.io.load_object(video_data, '_ibl_%sCamera' % video_type)
cam = {**cam0,**cam1}
# for each tracked point there's x,y in [px] in the frame and a likelihood that indicates the network's confidence
cam.keys()
```
There is also <code>'times'</code> in this dictionary, the time stamps for each frame, which we'll use to sync the video with other events in the experiment. Let's remove it briefly so that we have only DLC points, and set coordinates to NaN when the likelihood is below 0.9.
```
Times = cam['times']
del cam['times']
points = np.unique(['_'.join(x.split('_')[:-1]) for x in cam.keys()])
cam['times'] = Times
# A helper function to find closest time stamps
def find_nearest(array, value):
array = np.asarray(array)
idx = (np.abs(array - value)).argmin()
return idx
```
Let's pick say the 5th trial and find all DLC traces for it.
```
frame_start = find_nearest(cam['times'], trials['intervals'][4][0])
frame_stop = find_nearest(cam['times'], trials['intervals'][4][1])
XYs = {}
for point in points:
x = np.ma.masked_where(
cam[point + '_likelihood'] < 0.9, cam[point + '_x'])
x = x.filled(np.nan)
y = np.ma.masked_where(
cam[point + '_likelihood'] < 0.9, cam[point + '_y'])
y = y.filled(np.nan)
XYs[point] = np.array(
[x[frame_start:frame_stop], y[frame_start:frame_stop]])
import matplotlib.pyplot as plt
plt.plot(cam['times'][frame_start:frame_stop],XYs['paw_r'][0])
plt.xlabel('time [sec]')
plt.ylabel('x location of right paw [px]')
```
| github_jupyter |
# piston example with explicit Euler scheme
```
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.animation as anim
import numpy as np
import sys
sys.path.insert(0, './code')
import ideal_gas
```
### physical parameters
```
# length of cylinder
l = 0.1
# radius of cylinder
r = 0.05
# thickness of wall
w = 0.006
# derived geometrical data
r2 = 2 * r # diameter of cylinder
w2 = w / 2 # halved thickness of wall
l2 = l - w2
A = r**2 * np.pi # cross-sectional area
def get_v_1(q):
"""first volume"""
return A * (q - w2)
def get_v_2(q):
"""second volume"""
return A * (l2 - q)
# density of aluminium
m_Al = 2700.0
m_Cu = 8960.0
# mass of piston
m = m_Cu * A * w
# thermal conductivity of aluminium
κ_Al = 237.0
κ_Cu = 401.0
# thermal conduction coefficient
α = κ_Cu * A / w
m_inv = 1 / m
```
### initial conditions
determine $n_1$, $n_2$, $s_1$, $s_2$
```
# wanted conditions
v_1 = v_2 = get_v_1(l/2)
θ_1 = 273.15 + 25.0
π_1 = 1.5 * 1e5
θ_2 = 273.15 + 20.0
π_2 = 1.0 * 1e5
from scipy.optimize import fsolve
n_1 = fsolve(lambda n : ideal_gas.S_π(ideal_gas.U2(θ_1, n), v_1, n) - π_1, x0=2e22)[0]
s_1 = ideal_gas.S(ideal_gas.U2(θ_1, n_1), v_1, n_1)
# check temperature
ideal_gas.U_θ(s_1, v_1, n_1) - 273.15
# check pressure
ideal_gas.U_π(s_1, v_1, n_1) * 1e-5
n_2 = fsolve(lambda n : ideal_gas.S_π(ideal_gas.U2(θ_2, n), v_2, n) - π_2, x0=2e22)[0]
s_2 = ideal_gas.S(ideal_gas.U2(θ_2, n_2), v_2, n_2)
# check temperature
ideal_gas.U_θ(s_2, v_2, n_2) - 273.15
# check pressure
ideal_gas.U_π(s_2, v_2, n_2) * 1e-5
x_0 = l/2, 0, s_1, s_2
```
### simulation
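For reference, the `rhs` function defined below encodes the following system, which is advanced with explicit Euler steps $x_{n+1} = x_n + \Delta t\, f(x_n)$ (a summary read off from the code):

$$\dot q = \frac{p}{m},\qquad \dot p = A\,(\pi_1-\pi_2),\qquad \dot s_1 = \alpha\,\frac{\theta_2-\theta_1}{\theta_1},\qquad \dot s_2 = \alpha\,\frac{\theta_1-\theta_2}{\theta_2},$$

where $\pi_i$ and $\theta_i$ are the pressure and temperature of gas volume $i$ obtained from the `ideal_gas` relations.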
```
def set_state(data, i, x):
q, p, s_1, s_2 = x
data[i, 0] = q
data[i, 1] = p
data[i, 2] = v = m_inv * p
data[i, 3] = v_1 = get_v_1(q)
data[i, 4] = π_1 = ideal_gas.U_π(s_1, v_1, n_1)
data[i, 5] = s_1
data[i, 6] = θ_1 = ideal_gas.U_θ(s_1, v_1, n_1)
data[i, 7] = v_2 = get_v_2(q)
data[i, 8] = π_2 = ideal_gas.U_π(s_2, v_2, n_2)
data[i, 9] = s_2
data[i, 10] = θ_2 = ideal_gas.U_θ(s_2, v_2, n_2)
data[i, 11] = E_kin = 0.5 * m_inv * p**2
data[i, 12] = u_1 = ideal_gas.U(s_1, v_1, n_1)
data[i, 13] = u_2 = ideal_gas.U(s_2, v_2, n_2)
data[i, 14] = E = E_kin + u_1 + u_2
data[i, 15] = S = s_1 + s_2
def get_state(data, i):
return data[i, (0, 1, 5, 9)]
def rhs(x):
"""right hand side of the explicit system
of differential equations
"""
q, p, s_1, s_2 = x
v_1 = get_v_1(q)
v_2 = get_v_2(q)
π_1 = ideal_gas.U_π(s_1, v_1, n_1)
π_2 = ideal_gas.U_π(s_2, v_2, n_2)
θ_1 = ideal_gas.U_θ(s_1, v_1, n_1)
θ_2 = ideal_gas.U_θ(s_2, v_2, n_2)
return np.array((m_inv*p, A*(π_1-π_2), α*(θ_2-θ_1)/θ_1, α*(θ_1-θ_2)/θ_2))
t_f = 1.0
dt = 1e-4
steps = int(t_f // dt)
print(f'steps={steps}')
t = np.linspace(0, t_f, num=steps)
dt = t[1] - t[0]
data = np.empty((steps, 16), dtype=float)
set_state(data, 0, x_0)
x_old = get_state(data, 0)
for i in range(1, steps):
x_new = x_old + dt * rhs(x_old)
set_state(data, i, x_new)
x_old = x_new
θ_min = np.min(data[:, (6,10)])
θ_max = np.max(data[:, (6,10)])
# plot transient
fig, ax = plt.subplots(dpi=200)
ax.set_title("piston position q")
ax.plot(t, data[:, 0]);
fig, ax = plt.subplots(dpi=200)
ax.set_title("total entropy S")
ax.plot(t, data[:, 15]);
fig, ax = plt.subplots(dpi=200)
ax.set_title("total energy E")
ax.plot(t, data[:, 14]);
```
The total energy is not well conserved by the explicit Euler scheme.
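A quick way to quantify this, using the `data` array filled in the simulation above (columns 14 and 15 hold the total energy and total entropy):
```
# relative drift of the total energy E and growth of the total entropy S over the run
E_drift = (data[-1, 14] - data[0, 14]) / data[0, 14]
S_growth = (data[-1, 15] - data[0, 15]) / data[0, 15]
print(f'relative energy drift over {t_f} s: {E_drift:.2e}')
print(f'relative entropy growth over {t_f} s: {S_growth:.2e}')
```
A smaller time step reduces the energy drift at the cost of more steps.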
| github_jupyter |
```
#Using our synthetic data library for today's exercise
#pip install ydata
#Loading the census dataset from kaggle
import logging
import os
import requests
import pandas as pd
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder
#import ydata.synthetic.regular as synthetic
#Dataset URL from kaggle
data_url = 'https://www.kaggle.com/uciml/adult-census-income/downloads/adult.csv'
# The local path where the data set is saved.
local_filename = "adult.csv"
# Kaggle Username and Password
kaggle_info = {'UserName': "myusername", 'Password': "mypassword"}
# Attempts to download the CSV file. Gets rejected because we are not logged in.
r = requests.get(data_url)
# Login to Kaggle and retrieve the data.
r = requests.post(r.url, data = kaggle_info)
# Writes the data to a local file one chunk at a time.
f = open(local_filename, 'wb')
for chunk in r.iter_content(chunk_size = 512 * 1024): # Reads 512KB at a time into memory
if chunk: # filter out keep-alive new chunks
f.write(chunk)
f.close()
adult_census = pd.read_csv('adult.csv')
#For the purpose of this exercise we will filter information regarding only black and white individuals.
adult_census = adult_census[(adult_census['race']=='White') | (adult_census['race']=='Black')]
income = adult_census['income']
adult_census = adult_census.drop('education.num', axis=1)
adult_census = adult_census.drop('income', axis=1)
train_adult, test_adult, income_train, income_test = train_test_split(adult_census, income, test_size=0.33, random_state=42)
train_adult['income'] = income_train
train_adult.head(10)
sns.set(style="dark", rc={'figure.figsize':(11.7,8.27)})
sns.countplot(x="race",
palette="Paired", edgecolor=".6",
data=train_adult)
#Let's tackle the bias present in the dataset.
#For that purpose we will need to filter the records belonging only to the black individuals.
def filter_fn(row):
if row['race'] == 'Black':
return True
else:
return False
filt = train_adult.apply(filter_fn, axis=1)
train_adult_black = train_adult[filt]
```
```
print("Number of records belonging to black individuals: {}".format(train_adult_black.shape[0]))
sns.set(style="dark", rc={'figure.figsize':(11.7,8.27)})
sns.countplot(x="sex",
palette="Paired", edgecolor=".6",
data=train_adult_black)
#Regarding sex, we have an equal representation of women and men for the black population of the dataset
#Using the YData synthetic data lib to generate 3000 new individuals for the black population
# The three lines below require the ydata library (import commented out above); the fitted
# dataframe should be train_adult_black. We instead load pre-generated samples from CSV below.
# synth_model = synthetic.SynthTabular()
# synth_model.fit(train_adult_black)
# synth_data = synth_model.sample(n_samples=3000)
synth_data = pd.read_csv('synth_data.csv', index_col=[0])
synth_data = synth_data.drop('education.num', axis=1)
synth_data = pd.concat([synth_data[synth_data['income']=='>50K'],synth_data[synth_data['income']=='<=50K'][:1000]])
synth_data.describe()
#Now combining both the datasets
test_adult['income'] = income_test
adult_combined = synth_data.append(test_adult).sample(frac=1)
#Let's check again how are we regarding the balancing of our classes for the race variable
sns.set(style="dark", rc={'figure.figsize':(11.7,8.27)})
sns.countplot(x="race",
palette="Paired", edgecolor=".6",
data=adult_combined)
#Auxiliary function to encode the categorical variables
import numpy as np
from sklearn.preprocessing import OneHotEncoder, LabelEncoder
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import f1_score, accuracy_score, average_precision_score
def numerical_encoding(df, cat_cols=[], ord_cols=[]):
try:
assert isinstance(df, pd.DataFrame)
except AssertionError as e:
logging.error('The df input object must a Pandas dataframe. This action will not be executed.')
return
ord_cols_val = None
cat_cols_val = None
dummies = None
cont_cols = list(set(df.columns) - set(cat_cols+ord_cols))
cont_vals = df[cont_cols].values
if len(ord_cols) > 0:
ord_cols_val = df[ord_cols].values
label_encoder = LabelEncoder()
ord_encoded = label_encoder.fit_transform(ord_cols_val)
if len(cat_cols) > 0:
cat_cols_val = df[cat_cols].values
hot_encoder = OneHotEncoder()
cat_encoded = hot_encoder.fit_transform(cat_cols_val).toarray()
dummies = []
for i, cat in enumerate(hot_encoder.categories_):
for j in cat:
dummies.append(cat_cols[i]+'_'+str(j))
if ord_cols_val is not None and cat_cols_val is not None:
encoded = np.hstack([cont_vals, ord_encoded, cat_encoded])
columns = cont_cols+ord_cols+dummies
elif cat_cols_val is not None:
encoded = np.hstack([cont_vals, cat_encoded])
columns = cont_cols+ord_cols+dummies
else:
encoded = cont_vals
columns = cont_cols
return pd.DataFrame(encoded, columns=columns), dummies
#validation functions
def score_estimators(estimators, x_test, y_test):
#f1_score average='micro'
scores = {type(clf).__name__: f1_score(y_test, clf.predict(x_test), average='micro') for clf in estimators}
return scores
def fit_estimators(estimators, data_train, y_train):
estimators_fit = []
for i, estimator in enumerate(estimators):
estimators_fit.append(estimator.fit(data_train, y_train))
return estimators_fit
def estimator_eval(data, y, cat_cols=[]):
def order_cols(df):
cols = sorted(df.columns.tolist())
return df[cols]
data,_ = numerical_encoding(data, cat_cols=cat_cols)
y, uniques = pd.factorize(y)
data = order_cols(data)
x_train, x_test, y_train, y_test = train_test_split(data, y, test_size=0.33, random_state=42)
# Prepare train and test datasets
estimators = [
LogisticRegression(multi_class='auto', solver='lbfgs', max_iter=500, random_state=42),
RandomForestClassifier(n_estimators=10, random_state=42),
DecisionTreeClassifier(random_state=42),
SVC(gamma='auto'),
KNeighborsClassifier(n_neighbors=5)
]
estimators_names = [type(clf).__name__ for clf in estimators]
for estimator in estimators:
assert hasattr(estimator, 'fit')
assert hasattr(estimator, 'score')
estimators = fit_estimators(estimators, x_train, y_train)
scores = score_estimators(estimators, x_test, y_test)
return scores
real_scores = estimator_eval(data=test_adult.drop('income', axis=1),
y=test_adult['income'],
cat_cols=['workclass', 'education', 'marital.status', 'occupation', 'relationship','race', 'sex', 'native.country'])
synth_scores = estimator_eval(data=adult_combined.drop('income', axis=1),
y=adult_combined['income'],
cat_cols=['workclass', 'education', 'marital.status', 'occupation', 'relationship','race', 'sex', 'native.country'])
dict_results = {'original': real_scores, 'synthetic': synth_scores}
results = pd.DataFrame(dict_results).reset_index()
print("Mean average accuracy improvement: {}".format((results['synthetic'] - results['original']).mean()))
results_graph = results.melt('index', var_name='data_source', value_name='accuracy')
pd.DataFrame(dict_results).transpose()
#Final results comparison
sns.barplot(x="index", y="accuracy", hue="data_source", data=results_graph,
palette="Paired", edgecolor=".6")
```
| github_jupyter |
# LAB 5b: Deploy and predict with Keras model on Cloud AI Platform.
**Learning Objectives**
1. Setup up the environment
1. Deploy trained Keras model to Cloud AI Platform
1. Online predict from model on Cloud AI Platform
1. Batch predict from model on Cloud AI Platform
## Introduction
In this notebook, we'll deploy our Keras model to Cloud AI Platform and create predictions.
We will set up the environment, deploy a trained Keras model to Cloud AI Platform, online predict from deployed model on Cloud AI Platform, and batch predict from deployed model on Cloud AI Platform.
Each learning objective will correspond to a __#TODO__ in this student lab notebook -- try to complete this notebook first and then review the [solution notebook](../solutions/5b_deploy_keras_ai_platform_babyweight.ipynb).
## Set up environment variables and load necessary libraries
Import necessary libraries.
```
import os
```
### Lab Task #1: Set environment variables.
Set environment variables so that we can use them throughout the entire lab. We will be using our project name for our bucket, so you only need to change your project and region.
```
%%bash
PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
# Change these to try this notebook out
PROJECT = "cloud-training-demos" # TODO: Replace with your PROJECT
BUCKET = PROJECT # defaults to PROJECT
REGION = "us-central1" # TODO: Replace with your REGION
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["TFVERSION"] = "2.1"
%%bash
gcloud config set compute/region $REGION
gcloud config set ai_platform/region global
```
## Check our trained model files
Let's check the directory structure of the outputs of our trained model in the folder we exported the model to in our last [lab](../solutions/10_train_keras_ai_platform_babyweight.ipynb). We'll want to deploy the saved_model.pb within the timestamped directory as well as the variable values in the variables folder. Therefore, we need the path of the timestamped directory so that everything within it can be found by Cloud AI Platform's model deployment service.
```
%%bash
gsutil ls gs://${BUCKET}/babyweight/trained_model
%%bash
MODEL_LOCATION=$(gsutil ls -ld -- gs://${BUCKET}/babyweight/trained_model/2* \
| tail -1)
gsutil ls ${MODEL_LOCATION}
```
## Lab Task #2: Deploy trained model.
Deploying the trained model to act as a REST web service is a simple gcloud call. Complete the __#TODO__ by providing the location of the saved_model.pb file to the Cloud AI Platform model deployment service. The deployment will take a few minutes.
```
%%bash
MODEL_NAME="babyweight"
MODEL_VERSION="ml_on_gcp"
MODEL_LOCATION=# TODO: Add GCS path to saved_model.pb file.
echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION"
# gcloud ai-platform versions delete ${MODEL_VERSION} --model ${MODEL_NAME}
# gcloud ai-platform models delete ${MODEL_NAME}
gcloud ai-platform models create ${MODEL_NAME} --regions ${REGION}
gcloud ai-platform versions create ${MODEL_VERSION} \
--model=${MODEL_NAME} \
--origin=${MODEL_LOCATION} \
--runtime-version=2.1 \
--python-version=3.7
```
## Lab Task #3: Use model to make online prediction.
Complete __#TODO__s for both the Python and gcloud Shell API methods of calling our deployed model on Cloud AI Platform for online prediction.
### Python API
We can use the Python API to send a JSON request to the endpoint of the service to make it predict a baby's weight. The order of the responses matches the order of the instances.
```
from oauth2client.client import GoogleCredentials
import requests
import json
MODEL_NAME = # TODO: Add model name
MODEL_VERSION = # TODO: Add model version
token = GoogleCredentials.get_application_default().get_access_token().access_token
api = "https://ml.googleapis.com/v1/projects/{}/models/{}/versions/{}:predict" \
.format(PROJECT, MODEL_NAME, MODEL_VERSION)
headers = {"Authorization": "Bearer " + token }
data = {
"instances": [
{
"is_male": "True",
"mother_age": 26.0,
"plurality": "Single(1)",
"gestation_weeks": 39
},
{
"is_male": "False",
"mother_age": 29.0,
"plurality": "Single(1)",
"gestation_weeks": 38
},
{
"is_male": "True",
"mother_age": 26.0,
"plurality": "Triplets(3)",
"gestation_weeks": 39
},
# TODO: Create another instance
]
}
response = requests.post(api, json=data, headers=headers)
print(response.content)
```
The predictions for the four instances were: 5.33, 6.09, 2.50, and 5.86 pounds respectively when I ran it (your results might be different).
### gcloud shell API
Alternatively, we could use the gcloud shell API. Create a newline-delimited JSON file with one instance per line and submit it using gcloud.
```
%%writefile inputs.json
{"is_male": "True", "mother_age": 26.0, "plurality": "Single(1)", "gestation_weeks": 39}
{"is_male": "False", "mother_age": 26.0, "plurality": "Single(1)", "gestation_weeks": 39}
```
Now call `gcloud ai-platform predict` using the JSON we just created and point to our deployed `model` and `version`.
```
%%bash
gcloud ai-platform predict \
--model=babyweight \
--json-instances=inputs.json \
--version=# TODO: Add model version
```
## Lab Task #4: Use model to make batch prediction.
Batch prediction is commonly used when you have thousands to millions of predictions. It will create an actual Cloud AI Platform job for prediction. Complete __#TODO__s so we can call our deployed model on Cloud AI Platform for batch prediction.
```
%%bash
INPUT=gs://${BUCKET}/babyweight/batchpred/inputs.json
OUTPUT=gs://${BUCKET}/babyweight/batchpred/outputs
gsutil cp inputs.json $INPUT
gsutil -m rm -rf $OUTPUT
gcloud ai-platform jobs submit prediction babypred_$(date -u +%y%m%d_%H%M%S) \
--data-format=TEXT \
--region ${REGION} \
--input-paths=$INPUT \
--output-path=$OUTPUT \
--model=babyweight \
--version=# TODO: Add model version
```
## Lab Summary:
In this lab, we set up the environment, deployed a trained Keras model to Cloud AI Platform, online predicted from deployed model on Cloud AI Platform, and batch predicted from deployed model on Cloud AI Platform.
Copyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
| github_jupyter |
## XYZ Pro Features
This notebook demonstrates some of the pro features for XYZ Hub API.
XYZ paid features can be found here: [xyz pro features](https://www.here.xyz/xyz_pro/).
XYZ plans can be found here: [xyz plans](https://developer.here.com/pricing).
### Virtual Space
A virtual space is described by a definition which references other existing spaces (the upstream spaces).
Queries to a virtual space will return the features of its upstream spaces combined.
Below are the different predefined operations for combining the features of the upstream spaces.
- [group](#group_cell)
- [merge](#merge_cell)
- [override](#override_cell)
- [custom](#custom_cell)
```
# Make necessary imports.
import os
import json
import warnings
from xyzspaces.datasets import get_chicago_parks_data, get_countries_data
from xyzspaces.exceptions import ApiError
import xyzspaces
```
<div class="alert alert-block alert-warning">
<b>Warning:</b> Before running the cells below, please make sure you have an XYZ token to interact with xyzspaces.
Please see README.md in the notebooks folder for more info on XYZ_TOKEN.
</div>
```
os.environ["XYZ_TOKEN"] = "MY-XYZ-TOKEN" # Replace your token here.
xyz = xyzspaces.XYZ()
# create two spaces which will act as upstream spaces for virtual space created later.
title1 = "Testing xyzspaces"
description1 = "Temporary space containing countries data."
space1 = xyz.spaces.new(title=title1, description=description1)
# Add some data to it space1
gj_countries = get_countries_data()
space1.add_features(features=gj_countries)
space_id1 = space1.info["id"]
title2 = "Testing xyzspaces"
description2 = "Temporary space containing Chicago parks data."
space2 = xyz.spaces.new(title=title2, description=description2)
# Add some data to space2
with open("./data/chicago_parks.geo.json", encoding="utf-8-sig") as json_file:
gj_chicago = json.load(json_file)
space2.add_features(features=gj_chicago)
space_id2 = space2.info["id"]
```
<a id='group_cell'></a>
#### Group
Group means to combine the content of the specified spaces. All objects of each space will be part of the response when the virtual space is queried by the user. The information about which object came from which space can be found in the XYZ-namespace in the properties of each feature. When writing back these objects to the virtual space they'll be written back to the upstream space from which they were actually coming.
```
# Create a new virtual space by grouping two spaces created above.
title = "Virtual Space for coutries and Chicago parks data"
description = "Test group functionality of virtual space"
upstream_spaces = [space_id1, space_id2]
kwargs = {"virtualspace": dict(group=upstream_spaces)}
vspace = xyz.spaces.virtual(title=title, description=description, **kwargs)
print(json.dumps(vspace.info, indent=2))
# Reading a particular feature from space1 via virtual space.
vfeature1 = vspace.get_feature(feature_id="FRA")
feature1 = space1.get_feature(feature_id="FRA")
assert vfeature1 == feature1
# Reading a particular feature from space2 via virtual space.
vfeature2 = vspace.get_feature(feature_id="LP")
feature2 = space2.get_feature(feature_id="LP")
assert vfeature2 == feature2
# Deleting a feature from virtual space deletes corresponding feature from upstream space.
vspace.delete_feature(feature_id="FRA")
try:
space1.get_feature("FRA")
except ApiError as err:
print(err)
# Delete temporary spaces created.
vspace.delete()
space1.delete()
space2.delete()
```
<a id='merge_cell'></a>
#### Merge
Merge means that objects with the same ID will be merged together. If there are duplicate feature-IDs in the various data of the upstream spaces, the duplicates will be merged to build a single feature. The result will be a response that is guaranteed to have no features with duplicate IDs. The merge will happen in the order of the space-references in the specified array. That means objects coming from the second space will overwrite potentially existing property values of objects coming from the first space. The information about which object came from which space(s) can be found in the XYZ-namespace in the properties of each feature. When writing back these objects to the virtual space they'll be written back to the upstream space from which they were actually coming, or the last one in the list if none was specified. When deleting features from the virtual space a new pseudo-deleted feature is written to the last space in the list. Trying to read the feature with that ID from the virtual space is not possible afterward.
```
# create two spaces with duplicate data
title1 = "Testing xyzspaces"
description1 = "Temporary space containing Chicago parks data."
space1 = xyz.spaces.new(title=title1, description=description1)
with open("./data/chicago_parks.geo.json", encoding="utf-8-sig") as json_file:
gj_chicago = json.load(json_file)
# Add some data to it space1
space1.add_features(features=gj_chicago)
space_id1 = space1.info["id"]
title2 = "Testing xyzspaces duplicate"
description2 = "Temporary space containing Chicago parks data duplicate"
space2 = xyz.spaces.new(title=title2, description=description2)
# Add some data to it space2
space2.add_features(features=gj_chicago)
space_id2 = space2.info["id"]
# update a particular feature of second space so that post merge virtual space will have this feature merged
lp = space2.get_feature("LP")
space2.update_feature(feature_id="LP", data=lp, add_tags=["foo", "bar"])
# Create a new virtual space by merging two spaces created above.
title = "Virtual Space for coutries and Chicago parks data"
description = "Test merge functionality of virtual space"
upstream_spaces = [space_id1, space_id2]
kwargs = {"virtualspace": dict(merge=upstream_spaces)}
vspace = xyz.spaces.virtual(title=title, description=description, **kwargs)
print(vspace.info)
vfeature1 = vspace.get_feature(feature_id="LP")
assert vfeature1["properties"]["@ns:com:here:xyz"]["tags"] == ["foo", "bar"]
bp = space2.get_feature("BP")
space2.update_feature(feature_id="BP", data=lp, add_tags=["foo1", "bar1"])
vfeature2 = vspace.get_feature(feature_id="BP")
assert vfeature2["properties"]["@ns:com:here:xyz"]["tags"] == ["foo1", "bar1"]
space1.delete()
space2.delete()
vspace.delete()
```
<a id='override_cell'></a>
#### Override
Override means that objects with the same ID will be overridden completely. If there are duplicate feature-IDs in the various data of the upstream spaces, the duplicates will be overridden to result in a single feature. The result will be a response that is guaranteed to have no features with duplicate IDs. The override will happen in the order of the space-references in the specified array. That means objects coming from the second space will override potentially existing features coming from the first space. The information about which object came from which space can be found in the XYZ-namespace in the properties of each feature. When writing back these objects to the virtual space they'll be written back to the upstream space from which they were actually coming. When deleting features from the virtual space the same rules as for merge apply.
```
# create two spaces with duplicate data
title1 = "Testing xyzspaces"
description1 = "Temporary space containing Chicago parks data."
space1 = xyz.spaces.new(title=title1, description=description1)
with open("./data/chicago_parks.geo.json", encoding="utf-8-sig") as json_file:
gj_chicago = json.load(json_file)
# Add some data to it space1
space1.add_features(features=gj_chicago)
space_id1 = space1.info["id"]
title2 = "Testing xyzspaces duplicate"
description2 = "Temporary space containing Chicago parks data duplicate"
space2 = xyz.spaces.new(title=title2, description=description2)
# Add some data to it space2
space2.add_features(features=gj_chicago)
space_id2 = space2.info["id"]
# Create a new virtual space by override operation.
title = "Virtual Space for coutries and Chicago parks data"
description = "Test merge functionality of virtual space"
upstream_spaces = [space_id1, space_id2]
kwargs = {"virtualspace": dict(override=upstream_spaces)}
vspace = xyz.spaces.virtual(title=title, description=description, **kwargs)
print(vspace.info)
bp = space2.get_feature("BP")
space2.update_feature(feature_id="BP", data=bp, add_tags=["foo1", "bar1"])
vfeature2 = vspace.get_feature(feature_id="BP")
assert vfeature2["properties"]["@ns:com:here:xyz"]["tags"] == ["foo1", "bar1"]
space1.delete()
space2.delete()
vspace.delete()
```
### Applying clustering in space
```
# create two spaces which will act as upstream spaces for virtual space created later.
title1 = "Testing xyzspaces"
description1 = "Temporary space containing countries data."
space1 = xyz.spaces.new(title=title1, description=description1)
# Add some data to it space1
gj_countries = get_countries_data()
space1.add_features(features=gj_countries)
space_id1 = space1.info["id"]
# Generate clustering for the space
space1.cluster(clustering="hexbin")
# Delete created space
space1.delete()
```
### Rule based Tagging
Rule-based tagging makes it possible to tag multiple features in a space automatically, based on rules expressed as JSON-path expressions. Users can update a space with a map of rules where the key is the tag to be applied to all features matching the JSON-path expression given as the value.
If multiple rules match, multiple tags will be applied to the corresponding sets of matched features. A single feature can even be matched by multiple rules, in which case multiple tags will be added to it.
```
# Create a new space
title = "Testing xyzspaces"
description = "Temporary space containing Chicago parks data."
space = xyz.spaces.new(title=title, description=description)
# Add data to the space.
with open("./data/chicago_parks.geo.json", encoding="utf-8-sig") as json_file:
gj_chicago = json.load(json_file)
_ = space.add_features(features=gj_chicago)
# update space to add tagging rules to the above mentioned space.
tagging_rules = {
"large": "$.features[?(@.properties.area>=500)]",
"small": "$.features[?(@.properties.area<500)]",
}
_ = space.update(tagging_rules=tagging_rules)
# verify that features are tagged correctly based on rules.
large_parks = space.search(tags=["large"])
for park in large_parks:
assert park["id"] in ["LP", "BP", "JP"]
small_parks = space.search(tags=["small"])
for park in small_parks:
assert park["id"] in ["MP", "GP", "HP", "DP", "CP", "COP"]
# Delete created space
space.delete()
```
### Activity Log
The Activity log will enable tracking of changes in your space.
To activate it, just create a space with the listener added and enable_uuid set to True.
More information on the activity log can be found [here](https://www.here.xyz/api/devguide/activitylogguide/).
```
title = "Activity-Log Test"
description = "Activity-Log Test"
listeners = {
"id": "activity-log",
"params": {"states": 5, "storageMode": "DIFF_ONLY", "writeInvalidatedAt": "true"},
"eventTypes": ["ModifySpaceEvent.request"],
}
space = xyz.spaces.new(
title=title,
description=description,
enable_uuid=True,
listeners=listeners,
)
from time import sleep
# As the activity log is an async operation, add a short sleep before reading the space info
sleep(5)
print(json.dumps(space.info, indent=2))
space.delete()
```
| github_jupyter |
# torchserve.ipynb
This notebook contains code for the portions of the benchmark in [the benchmark notebook](./benchmark.ipynb) that use [TorchServe](https://github.com/pytorch/serve).
```
# Imports go here
import json
import os
import requests
import scipy.special
import transformers
# Fix silly warning messages about parallel tokenizers
os.environ['TOKENIZERS_PARALLELISM'] = 'False'
# Constants go here
INTENT_MODEL_NAME = 'mrm8488/t5-base-finetuned-e2m-intent'
SENTIMENT_MODEL_NAME = 'cardiffnlp/twitter-roberta-base-sentiment'
QA_MODEL_NAME = 'deepset/roberta-base-squad2'
GENERATE_MODEL_NAME = 'gpt2'
INTENT_INPUT = {
'context':
("I came here to eat chips and beat you up, "
"and I'm all out of chips.")
}
SENTIMENT_INPUT = {
'context': "We're not happy unless you're not happy."
}
QA_INPUT = {
'question': 'What is 1 + 1?',
'context':
"""Addition (usually signified by the plus symbol +) is one of the four basic operations of
arithmetic, the other three being subtraction, multiplication and division. The addition of two
whole numbers results in the total amount or sum of those values combined. The example in the
adjacent image shows a combination of three apples and two apples, making a total of five apples.
This observation is equivalent to the mathematical expression "3 + 2 = 5" (that is, "3 plus 2
is equal to 5").
"""
}
GENERATE_INPUT = {
'prompt_text': 'All your base are'
}
```
## Model Packaging
TorchServe requires models to be packaged up as model archive files. Documentation for this process (such as it is) is [here](https://github.com/pytorch/serve/blob/master/README.md#serve-a-model) and [here](https://github.com/pytorch/serve/blob/master/model-archiver/README.md).
### Intent Model
The intent model requires the caller to call the pre- and post-processing code manually. Only the model and tokenizer are provided on the model zoo.
```
# First we need to dump the model into a local directory.
intent_model = transformers.AutoModelForSeq2SeqLM.from_pretrained(
INTENT_MODEL_NAME)
intent_tokenizer = transformers.AutoTokenizer.from_pretrained('t5-base')
intent_model.save_pretrained('torchserve/intent')
intent_tokenizer.save_pretrained('torchserve/intent')
```
Next we wrapped the model in a handler class, located at `./torchserve/handler_intent.py`, which
needs to be in its own separate Python file in order for the `torch-model-archiver`
utility to work.
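The handler file itself is not shown in this notebook, but a minimal TorchServe handler for a `transformers` seq2seq model might look roughly like the sketch below. This is only an illustrative sketch based on TorchServe's `BaseHandler` API, not the exact contents of `handler_intent.py`; the class name and the assumption of a JSON request body with a `context` field are ours.
```
# Illustrative sketch only -- not the actual contents of handler_intent.py.
import json
import torch
import transformers
from ts.torch_handler.base_handler import BaseHandler
class IntentHandler(BaseHandler):  # hypothetical class name
    def initialize(self, context):
        # TorchServe unpacks the .mar archive into model_dir at load time.
        model_dir = context.system_properties.get("model_dir")
        self.tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)
        self.model = transformers.AutoModelForSeq2SeqLM.from_pretrained(model_dir)
        self.model.eval()
        self.initialized = True
    def preprocess(self, data):
        # Assume each request carries a JSON body like {"context": "..."}.
        texts = []
        for row in data:
            body = row.get("body") or row.get("data")
            if isinstance(body, (bytes, bytearray)):
                body = json.loads(body)
            texts.append(body["context"])
        return self.tokenizer(texts, padding=True, return_tensors="pt")
    def inference(self, inputs):
        with torch.no_grad():
            return self.model.generate(**inputs)
    def postprocess(self, outputs):
        # One decoded string per request in the batch.
        return self.tokenizer.batch_decode(outputs, skip_special_tokens=True)
```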
The following command turns this Python file, plus the data files created by the
previous cell, into a model archive (`.mar`) file at `torchserve/model_store/intent.mar`.
```
%%time
!mkdir -p torchserve/model_store
!torch-model-archiver --model-name intent --version 1.0 \
--serialized-file torchserve/intent/pytorch_model.bin \
--handler torchserve/handler_intent.py \
--extra-files "torchserve/intent/config.json,torchserve/intent/special_tokens_map.json,torchserve/intent/tokenizer_config.json,torchserve/intent/tokenizer.json" \
--export-path torchserve/model_store \
--force
```
### Sentiment Model
The sentiment model operates similarly to the intent model.
```
sentiment_tokenizer = transformers.AutoTokenizer.from_pretrained(
SENTIMENT_MODEL_NAME)
sentiment_model = (
transformers.AutoModelForSequenceClassification
.from_pretrained(SENTIMENT_MODEL_NAME))
sentiment_model.save_pretrained('torchserve/sentiment')
sentiment_tokenizer.save_pretrained('torchserve/sentiment')
contexts = ['hello', 'world']
input_batch = sentiment_tokenizer(contexts, padding=True,
return_tensors='pt')
inference_output = sentiment_model(**input_batch)
scores = inference_output.logits.detach().numpy()
scores = scipy.special.softmax(scores, axis=1).tolist()
scores = [{k: v for k, v in zip(['positive', 'neutral', 'negative'], row)}
for row in scores]
# return scores
scores
```
As with the intent model, we created a handler class (located at `torchserve/handler_sentiment.py`), then
pass that class and the serialized model from two cells ago
through the `torch-model-archiver` utility.
```
%%time
!torch-model-archiver --model-name sentiment --version 1.0 \
--serialized-file torchserve/sentiment/pytorch_model.bin \
--handler torchserve/handler_sentiment.py \
--extra-files "torchserve/sentiment/config.json,torchserve/sentiment/special_tokens_map.json,torchserve/sentiment/tokenizer_config.json,torchserve/sentiment/tokenizer.json" \
--export-path torchserve/model_store \
--force
```
### Question Answering Model
The QA model uses a `transformers` pipeline. We squeeze this model into the TorchServe APIs by telling the pipeline to serialize all of its parts to a single directory, then passing the parts that aren't `pytorch_model.bin` in as extra files. At runtime, our custom handler uses the model loading code from `transformers` on the reconstituted model directory.
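Concretely, the handler's `initialize()` method can rebuild the pipeline from the unpacked archive directory. The snippet below is a sketch of that idea (hypothetical class name, not the exact code in `handler_qa.py`):
```
# Illustrative sketch of reloading the pipeline inside a TorchServe handler.
import transformers
from ts.torch_handler.base_handler import BaseHandler
class QAHandler(BaseHandler):  # hypothetical class name
    def initialize(self, context):
        # model_dir contains config.json, the tokenizer files and
        # pytorch_model.bin reconstituted from the .mar archive.
        model_dir = context.system_properties.get("model_dir")
        self.pipeline = transformers.pipeline("question-answering", model=model_dir)
        self.initialized = True
```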
```
qa_pipeline = transformers.pipeline('question-answering', model=QA_MODEL_NAME)
qa_pipeline.save_pretrained('torchserve/qa')
```
As with the previous models, we wrote a class (located at `torchserve/handler_qa.py`), then
pass that wrapper class and the serialized model through the `torch-model-archiver` utility.
```
%%time
!torch-model-archiver --model-name qa --version 1.0 \
--serialized-file torchserve/qa/pytorch_model.bin \
--handler torchserve/handler_qa.py \
--extra-files "torchserve/qa/config.json,torchserve/qa/merges.txt,torchserve/qa/special_tokens_map.json,torchserve/qa/tokenizer_config.json,torchserve/qa/tokenizer.json,torchserve/qa/vocab.json" \
--export-path torchserve/model_store \
--force
data = [QA_INPUT, QA_INPUT]
# Preprocessing
samples = [qa_pipeline.create_sample(**r) for r in data]
generators = [qa_pipeline.preprocess(s) for s in samples]
# Inference
inference_outputs = ((qa_pipeline.forward(example) for example in batch) for batch in generators)
post_results = [qa_pipeline.postprocess(o) for o in inference_outputs]
post_results
```
### Natural Language Generation Model
The text generation model is roughly similar to the QA model, albeit with important differences in how the three stages of the pipeline operate. At least model loading is the same.
```
generate_pipeline = transformers.pipeline(
'text-generation', model=GENERATE_MODEL_NAME)
generate_pipeline.save_pretrained('torchserve/generate')
data = [GENERATE_INPUT, GENERATE_INPUT]
pad_token_id = generate_pipeline.tokenizer.eos_token_id
json_records = data
# preprocess() takes a single input at a time, but we need to do
# a batch at a time.
input_batch = [generate_pipeline.preprocess(**r) for r in json_records]
# forward() takes a single input at a time, but we need to run a
# batch at a time.
inference_output = [
generate_pipeline.forward(r, pad_token_id=pad_token_id)
for r in input_batch]
# postprocess() takes a single generation result at a time, but we
# need to run a batch at a time.
generate_result = [generate_pipeline.postprocess(i)
for i in inference_output]
generate_result
```
Once again, we wrote a class (located at `torchserve/handler_generate.py`), then
pass that wrapper class and the serialized model through the `torch-model-archiver` utility.
```
%%time
!torch-model-archiver --model-name generate --version 1.0 \
--serialized-file torchserve/generate/pytorch_model.bin \
--handler torchserve/handler_generate.py \
--extra-files "torchserve/generate/config.json,torchserve/generate/merges.txt,torchserve/generate/special_tokens_map.json,torchserve/generate/tokenizer_config.json,torchserve/generate/tokenizer.json,torchserve/generate/vocab.json" \
--export-path torchserve/model_store \
--force
```
## Testing
Now we can fire up TorchServe and test our models.
For some reason, starting TorchServe needs to be done in a proper terminal window. Running the command from this notebook has no effect. The commands to run (from the root of the repository) are:
```
> conda activate ./env
> cd notebooks/benchmark/torchserve
> torchserve --start --ncs --model-store model_store --ts-config torchserve.properties
```
Then pick up a cup of coffee and a book and wait a while. The startup process is like cold-starting a gas turbine and takes about 10 minutes.
Once the server has started, we can test our deployed models by making POST requests.
```
# Probe the management API to verify that TorchServe is running.
requests.get('http://127.0.0.1:8081/models').json()
port = 8080
intent_result = requests.put(
f'http://127.0.0.1:{port}/predictions/intent_en',
json.dumps(INTENT_INPUT)).json()
print(f'Intent result: {intent_result}')
sentiment_result = requests.put(
f'http://127.0.0.1:{port}/predictions/sentiment_en',
json.dumps(SENTIMENT_INPUT)).json()
print(f'Sentiment result: {sentiment_result}')
qa_result = requests.put(
f'http://127.0.0.1:{port}/predictions/qa_en',
json.dumps(QA_INPUT)).json()
print(f'Question answering result: {qa_result}')
generate_result = requests.put(
f'http://127.0.0.1:{port}/predictions/generate_en',
json.dumps(GENERATE_INPUT)).json()
print(f'Natural language generation result: {generate_result}')
```
## Cleanup
TorchServe consumes many resources even when it isn't doing anything. When you're done running the baseline portion of the benchmark, be sure to shut down the server by running:
```
> torchserve --stop
```
| github_jupyter |
# Exercise 6-3
## LSTM
The following two cells will create an LSTM cell with one neuron.
We scale the output of the LSTM linearly and add a bias.
Then the output is passed through a sigmoid activation.
The goal is to predict a time series where every $n^{th}$ ($5^{th}$ in the current example) element is 1 and all others are 0.
a) Please read and understand the source code below.
b) Consult the output of the predictions. What do you observe? How does the LSTM manage to predict the next element in the sequence?
```
import tensorflow as tf
import numpy as np
from matplotlib import pyplot as plt
tf.reset_default_graph()
tf.set_random_seed(12314)
epochs=50
zero_steps = 5
learning_rate = 0.01
lstm_neurons = 1
out_dim = 1
num_features = 1
batch_size = zero_steps
window_size = zero_steps*2
time_steps = 5
x = tf.placeholder(tf.float32, [None, window_size, num_features], 'x')
y = tf.placeholder(tf.float32, [None, out_dim], 'y')
lstm = tf.nn.rnn_cell.LSTMCell(lstm_neurons)
state = lstm.zero_state(batch_size, dtype=tf.float32)
regression_w = tf.Variable(tf.random_normal([lstm_neurons]))
regression_b = tf.Variable(tf.random_normal([out_dim]))
outputs, state = tf.contrib.rnn.static_rnn(lstm, tf.unstack(x, window_size, 1), state)
output = outputs[-1]
predicted = tf.nn.sigmoid(output * regression_w + regression_b)
cost = tf.reduce_mean(tf.losses.mean_squared_error(y, predicted))
optimizer = tf.train.RMSPropOptimizer(learning_rate=learning_rate).minimize(cost)
forget_gate = output.op.inputs[1].op.inputs[0].op.inputs[0].op.inputs[0]
input_gate = output.op.inputs[1].op.inputs[0].op.inputs[1].op.inputs[0]
cell_candidates = output.op.inputs[1].op.inputs[0].op.inputs[1].op.inputs[1]
output_gate_sig = output.op.inputs[0]
output_gate_tanh = output.op.inputs[1]
X = [
[[ (shift-n) % zero_steps == 0 ] for n in range(window_size)
] for shift in range(batch_size)
]
Y = [[ shift % zero_steps == 0 ] for shift in range(batch_size) ]
with tf.Session() as sess:
sess.run(tf.initializers.global_variables())
loss = 1
epoch = 0
while loss >= 1e-5:
epoch += 1
_, loss = sess.run([optimizer, cost], {x:X, y:Y})
if epoch % (epochs//10) == 0:
print("loss %.5f" % (loss), end='\t\t\r')
print()
outs, stat, pred, fg, inpg, cell_cands, outg_sig, outg_tanh = sess.run([outputs, state, predicted, forget_gate, input_gate, cell_candidates, output_gate_sig, output_gate_tanh], {x:X, y:Y})
outs = np.asarray(outs)
for batch in reversed(range(batch_size)):
print("input:")
print(np.asarray(X)[batch].astype(int).reshape(-1))
print("forget\t\t%.4f\ninput gate\t%.4f\ncell cands\t%.4f\nout gate sig\t%.4f\nout gate tanh\t%.4f\nhidden state\t%.4f\ncell state\t%.4f\npred\t\t%.4f\n\n" % (
fg[batch,0],
inpg[batch,0],
cell_cands[batch,0],
outg_sig[batch,0],
outg_tanh[batch,0],
stat.h[batch,0],
stat.c[batch,0],
pred[batch,0]))
```
LSTM gates:

(image source: https://www.stratio.com/wp-content/uploads/2017/10/6-1.jpg)
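For reference, the standard LSTM update equations (which `tf.nn.rnn_cell.LSTMCell` implements, up to implementation details such as bias initialization and optional peepholes) are:
$$
\begin{aligned}
f_t &= \sigma(W_f [h_{t-1}, x_t] + b_f) \\
i_t &= \sigma(W_i [h_{t-1}, x_t] + b_i) \\
\tilde{c}_t &= \tanh(W_c [h_{t-1}, x_t] + b_c) \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t \\
o_t &= \sigma(W_o [h_{t-1}, x_t] + b_o) \\
h_t &= o_t \odot \tanh(c_t)
\end{aligned}
$$
Here $f_t$, $i_t$, and $o_t$ are the forget, input, and output gates, $\tilde{c}_t$ are the cell candidates, and $c_t$, $h_t$ are the cell and hidden states.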
### Answers
* When the current element is 1, then the forget-gate tells "forget" (value is close to 0) $\Rightarrow$ Reset cell state
* The cell state (long term memory) decreases until it reaches a certain point. Then the hidden state is activated and thus the prediction is close to 1.
* The sigmoid output gate ($o_t$) is always close to 1 $\Rightarrow$ the hidden state depends directly on the cell state (no short term memory is used).
* The input gate ($i_t$) is always close to 1, thus the cell candidates ($\tilde{c}_t$) will always be accepted.
* The cell candidates ($\tilde{c}_t$) are mainly dependent on $x_t$: they are close to 1 when $x_t$ is 1 (resetting the counter) and negative when $x_t$ is 0 (decreasing the counter).
Note that with other initial values (a different seed) training may converge to a different local minimum (the counter could increase, $h_t$ could be negative and be scaled negatively, ...).
| github_jupyter |
<a href="https://colab.research.google.com/github/reihaneh-torkzadehmahani/MyDPGAN/blob/master/AdvancedDPCGAN.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## differential_privacy.analysis.rdp_accountant
```
# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""RDP analysis of the Sampled Gaussian Mechanism.
Functionality for computing Renyi differential privacy (RDP) of an additive
Sampled Gaussian Mechanism (SGM). Its public interface consists of two methods:
compute_rdp(q, noise_multiplier, T, orders) computes RDP for SGM iterated
T times.
get_privacy_spent(orders, rdp, target_eps, target_delta) computes delta
(or eps) given RDP at multiple orders and
a target value for eps (or delta).
Example use:
Suppose that we have run an SGM applied to a function with l2-sensitivity 1.
Its parameters are given as a list of tuples (q1, sigma1, T1), ...,
(qk, sigma_k, Tk), and we wish to compute eps for a given delta.
The example code would be:
max_order = 32
orders = range(2, max_order + 1)
rdp = np.zeros_like(orders, dtype=float)
for q, sigma, T in parameters:
rdp += rdp_accountant.compute_rdp(q, sigma, T, orders)
eps, _, opt_order = rdp_accountant.get_privacy_spent(rdp, target_delta=delta)
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import math
import sys
import numpy as np
from scipy import special
import six
########################
# LOG-SPACE ARITHMETIC #
########################
def _log_add(logx, logy):
"""Add two numbers in the log space."""
a, b = min(logx, logy), max(logx, logy)
if a == -np.inf: # adding 0
return b
# Use exp(a) + exp(b) = (exp(a - b) + 1) * exp(b)
return math.log1p(math.exp(a - b)) + b # log1p(x) = log(x + 1)
def _log_sub(logx, logy):
"""Subtract two numbers in the log space. Answer must be non-negative."""
if logx < logy:
raise ValueError("The result of subtraction must be non-negative.")
if logy == -np.inf: # subtracting 0
return logx
if logx == logy:
return -np.inf # 0 is represented as -np.inf in the log space.
try:
# Use exp(x) - exp(y) = (exp(x - y) - 1) * exp(y).
return math.log(
math.expm1(logx - logy)) + logy # expm1(x) = exp(x) - 1
except OverflowError:
return logx
def _log_print(logx):
"""Pretty print."""
if logx < math.log(sys.float_info.max):
return "{}".format(math.exp(logx))
else:
return "exp({})".format(logx)
def _compute_log_a_int(q, sigma, alpha):
"""Compute log(A_alpha) for integer alpha. 0 < q < 1."""
assert isinstance(alpha, six.integer_types)
# Initialize with 0 in the log space.
log_a = -np.inf
for i in range(alpha + 1):
log_coef_i = (math.log(special.binom(alpha, i)) + i * math.log(q) +
(alpha - i) * math.log(1 - q))
s = log_coef_i + (i * i - i) / (2 * (sigma**2))
log_a = _log_add(log_a, s)
return float(log_a)
def _compute_log_a_frac(q, sigma, alpha):
"""Compute log(A_alpha) for fractional alpha. 0 < q < 1."""
# The two parts of A_alpha, integrals over (-inf,z0] and [z0, +inf), are
# initialized to 0 in the log space:
log_a0, log_a1 = -np.inf, -np.inf
i = 0
z0 = sigma**2 * math.log(1 / q - 1) + .5
while True: # do ... until loop
coef = special.binom(alpha, i)
log_coef = math.log(abs(coef))
j = alpha - i
log_t0 = log_coef + i * math.log(q) + j * math.log(1 - q)
log_t1 = log_coef + j * math.log(q) + i * math.log(1 - q)
log_e0 = math.log(.5) + _log_erfc((i - z0) / (math.sqrt(2) * sigma))
log_e1 = math.log(.5) + _log_erfc((z0 - j) / (math.sqrt(2) * sigma))
log_s0 = log_t0 + (i * i - i) / (2 * (sigma**2)) + log_e0
log_s1 = log_t1 + (j * j - j) / (2 * (sigma**2)) + log_e1
if coef > 0:
log_a0 = _log_add(log_a0, log_s0)
log_a1 = _log_add(log_a1, log_s1)
else:
log_a0 = _log_sub(log_a0, log_s0)
log_a1 = _log_sub(log_a1, log_s1)
i += 1
if max(log_s0, log_s1) < -30:
break
return _log_add(log_a0, log_a1)
def _compute_log_a(q, sigma, alpha):
"""Compute log(A_alpha) for any positive finite alpha."""
if float(alpha).is_integer():
return _compute_log_a_int(q, sigma, int(alpha))
else:
return _compute_log_a_frac(q, sigma, alpha)
def _log_erfc(x):
"""Compute log(erfc(x)) with high accuracy for large x."""
try:
return math.log(2) + special.log_ndtr(-x * 2**.5)
except NameError:
# If log_ndtr is not available, approximate as follows:
r = special.erfc(x)
if r == 0.0:
# Using the Laurent series at infinity for the tail of the erfc function:
# erfc(x) ~ exp(-x^2-.5/x^2+.625/x^4)/(x*pi^.5)
# To verify in Mathematica:
# Series[Log[Erfc[x]] + Log[x] + Log[Pi]/2 + x^2, {x, Infinity, 6}]
return (-math.log(math.pi) / 2 - math.log(x) - x**2 - .5 * x**-2 +
.625 * x**-4 - 37. / 24. * x**-6 + 353. / 64. * x**-8)
else:
return math.log(r)
def _compute_delta(orders, rdp, eps):
"""Compute delta given a list of RDP values and target epsilon.
Args:
orders: An array (or a scalar) of orders.
rdp: A list (or a scalar) of RDP guarantees.
eps: The target epsilon.
Returns:
Pair of (delta, optimal_order).
Raises:
ValueError: If input is malformed.
"""
orders_vec = np.atleast_1d(orders)
rdp_vec = np.atleast_1d(rdp)
if len(orders_vec) != len(rdp_vec):
raise ValueError("Input lists must have the same length.")
deltas = np.exp((rdp_vec - eps) * (orders_vec - 1))
idx_opt = np.argmin(deltas)
return min(deltas[idx_opt], 1.), orders_vec[idx_opt]
def _compute_eps(orders, rdp, delta):
"""Compute epsilon given a list of RDP values and target delta.
Args:
orders: An array (or a scalar) of orders.
rdp: A list (or a scalar) of RDP guarantees.
delta: The target delta.
Returns:
Pair of (eps, optimal_order).
Raises:
ValueError: If input is malformed.
"""
orders_vec = np.atleast_1d(orders)
rdp_vec = np.atleast_1d(rdp)
if len(orders_vec) != len(rdp_vec):
raise ValueError("Input lists must have the same length.")
eps = rdp_vec - math.log(delta) / (orders_vec - 1)
idx_opt = np.nanargmin(eps) # Ignore NaNs
return eps[idx_opt], orders_vec[idx_opt]
def _compute_rdp(q, sigma, alpha):
"""Compute RDP of the Sampled Gaussian mechanism at order alpha.
Args:
q: The sampling rate.
sigma: The std of the additive Gaussian noise.
alpha: The order at which RDP is computed.
Returns:
RDP at alpha, can be np.inf.
"""
if q == 0:
return 0
if q == 1.:
return alpha / (2 * sigma**2)
if np.isinf(alpha):
return np.inf
return _compute_log_a(q, sigma, alpha) / (alpha - 1)
def compute_rdp(q, noise_multiplier, steps, orders):
"""Compute RDP of the Sampled Gaussian Mechanism.
Args:
q: The sampling rate.
noise_multiplier: The ratio of the standard deviation of the Gaussian noise
to the l2-sensitivity of the function to which it is added.
steps: The number of steps.
orders: An array (or a scalar) of RDP orders.
Returns:
The RDPs at all orders, can be np.inf.
"""
if np.isscalar(orders):
rdp = _compute_rdp(q, noise_multiplier, orders)
else:
rdp = np.array(
[_compute_rdp(q, noise_multiplier, order) for order in orders])
return rdp * steps
def get_privacy_spent(orders, rdp, target_eps=None, target_delta=None):
"""Compute delta (or eps) for given eps (or delta) from RDP values.
Args:
orders: An array (or a scalar) of RDP orders.
rdp: An array of RDP values. Must be of the same length as the orders list.
target_eps: If not None, the epsilon for which we compute the corresponding
delta.
target_delta: If not None, the delta for which we compute the corresponding
epsilon. Exactly one of target_eps and target_delta must be None.
Returns:
eps, delta, opt_order.
Raises:
ValueError: If target_eps and target_delta are messed up.
"""
if target_eps is None and target_delta is None:
raise ValueError(
"Exactly one out of eps and delta must be None. (Both are).")
if target_eps is not None and target_delta is not None:
raise ValueError(
"Exactly one out of eps and delta must be None. (None is).")
if target_eps is not None:
delta, opt_order = _compute_delta(orders, rdp, target_eps)
return target_eps, delta, opt_order
else:
eps, opt_order = _compute_eps(orders, rdp, target_delta)
return eps, target_delta, opt_order
```
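As a quick sanity check of the accountant above, its two public functions can be combined as follows; the sampling rate, noise multiplier, step count, and target delta below are illustrative values only, not the settings used later in this notebook.
```
# Illustrative privacy accounting example with assumed training parameters.
orders = [1 + x / 10.0 for x in range(1, 100)] + list(range(11, 64))
sampling_rate = 600 / 60000.0    # batch size / dataset size (assumed values)
noise_multiplier = 1.1           # sigma of the Gaussian noise (assumed value)
steps = 10000                    # number of training steps (assumed value)
rdp = compute_rdp(q=sampling_rate,
                  noise_multiplier=noise_multiplier,
                  steps=steps,
                  orders=orders)
eps, delta, opt_order = get_privacy_spent(orders, rdp, target_delta=1e-5)
print("DP guarantee: eps = %.2f at delta = %g (optimal order %.1f)" %
      (eps, delta, opt_order))
```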
## dp query
```
# Copyright 2018, The TensorFlow Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""An interface for differentially private query mechanisms.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import abc
class DPQuery(object):
"""Interface for differentially private query mechanisms."""
__metaclass__ = abc.ABCMeta
@abc.abstractmethod
def initial_global_state(self):
"""Returns the initial global state for the DPQuery."""
pass
@abc.abstractmethod
def derive_sample_params(self, global_state):
"""Given the global state, derives parameters to use for the next sample.
Args:
global_state: The current global state.
Returns:
Parameters to use to process records in the next sample.
"""
pass
@abc.abstractmethod
def initial_sample_state(self, global_state, tensors):
"""Returns an initial state to use for the next sample.
Args:
global_state: The current global state.
tensors: A structure of tensors used as a template to create the initial
sample state.
Returns: An initial sample state.
"""
pass
@abc.abstractmethod
def accumulate_record(self, params, sample_state, record):
"""Accumulates a single record into the sample state.
Args:
params: The parameters for the sample.
sample_state: The current sample state.
record: The record to accumulate.
Returns:
The updated sample state.
"""
pass
@abc.abstractmethod
def get_noised_result(self, sample_state, global_state):
"""Gets query result after all records of sample have been accumulated.
Args:
sample_state: The sample state after all records have been accumulated.
global_state: The global state.
Returns:
A tuple (result, new_global_state) where "result" is the result of the
query and "new_global_state" is the updated global state.
"""
pass
```
## gaussian query
```
# Copyright 2018, The TensorFlow Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Implements DPQuery interface for Gaussian average queries.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import collections
import tensorflow as tf
nest = tf.contrib.framework.nest
class GaussianSumQuery(DPQuery):
"""Implements DPQuery interface for Gaussian sum queries.
Accumulates clipped vectors, then adds Gaussian noise to the sum.
"""
# pylint: disable=invalid-name
_GlobalState = collections.namedtuple(
'_GlobalState', ['l2_norm_clip', 'stddev'])
def __init__(self, l2_norm_clip, stddev):
"""Initializes the GaussianSumQuery.
Args:
l2_norm_clip: The clipping norm to apply to the global norm of each
record.
stddev: The stddev of the noise added to the sum.
"""
self._l2_norm_clip = l2_norm_clip
self._stddev = stddev
def initial_global_state(self):
"""Returns the initial global state for the GaussianSumQuery."""
return self._GlobalState(float(self._l2_norm_clip), float(self._stddev))
def derive_sample_params(self, global_state):
"""Given the global state, derives parameters to use for the next sample.
Args:
global_state: The current global state.
Returns:
Parameters to use to process records in the next sample.
"""
return global_state.l2_norm_clip
def initial_sample_state(self, global_state, tensors):
"""Returns an initial state to use for the next sample.
Args:
global_state: The current global state.
tensors: A structure of tensors used as a template to create the initial
sample state.
Returns: An initial sample state.
"""
del global_state # unused.
return nest.map_structure(tf.zeros_like, tensors)
def accumulate_record(self, params, sample_state, record):
"""Accumulates a single record into the sample state.
Args:
params: The parameters for the sample.
sample_state: The current sample state.
record: The record to accumulate.
Returns:
The updated sample state.
"""
l2_norm_clip = params
record_as_list = nest.flatten(record)
clipped_as_list, _ = tf.clip_by_global_norm(record_as_list, l2_norm_clip)
clipped = nest.pack_sequence_as(record, clipped_as_list)
return nest.map_structure(tf.add, sample_state, clipped)
def get_noised_result(self, sample_state, global_state, add_noise=True):
"""Gets noised sum after all records of sample have been accumulated.
Args:
sample_state: The sample state after all records have been accumulated.
global_state: The global state.
Returns:
A tuple (estimate, new_global_state) where "estimate" is the estimated
sum of the records and "new_global_state" is the updated global state.
"""
        def add_noise_to(v):
            # The inner function is named differently from the add_noise flag
            # so that it does not shadow it.
            if add_noise:
                return v + tf.random_normal(tf.shape(v), stddev=global_state.stddev)
            else:
                return v
        return nest.map_structure(add_noise_to, sample_state), global_state
class GaussianAverageQuery(DPQuery):
"""Implements DPQuery interface for Gaussian average queries.
Accumulates clipped vectors, adds Gaussian noise, and normalizes.
Note that we use "fixed-denominator" estimation: the denominator should be
specified as the expected number of records per sample. Accumulating the
  denominator separately would also be possible but would produce a higher
variance estimator.
"""
# pylint: disable=invalid-name
_GlobalState = collections.namedtuple(
'_GlobalState', ['sum_state', 'denominator'])
def __init__(self, l2_norm_clip, sum_stddev, denominator):
"""Initializes the GaussianAverageQuery.
Args:
l2_norm_clip: The clipping norm to apply to the global norm of each
record.
sum_stddev: The stddev of the noise added to the sum (before
normalization).
denominator: The normalization constant (applied after noise is added to
the sum).
"""
self._numerator = GaussianSumQuery(l2_norm_clip, sum_stddev)
self._denominator = denominator
def initial_global_state(self):
"""Returns the initial global state for the GaussianAverageQuery."""
sum_global_state = self._numerator.initial_global_state()
return self._GlobalState(sum_global_state, float(self._denominator))
def derive_sample_params(self, global_state):
"""Given the global state, derives parameters to use for the next sample.
Args:
global_state: The current global state.
Returns:
Parameters to use to process records in the next sample.
"""
return self._numerator.derive_sample_params(global_state.sum_state)
def initial_sample_state(self, global_state, tensors):
"""Returns an initial state to use for the next sample.
Args:
global_state: The current global state.
tensors: A structure of tensors used as a template to create the initial
sample state.
Returns: An initial sample state.
"""
# GaussianAverageQuery has no state beyond the sum state.
return self._numerator.initial_sample_state(global_state.sum_state, tensors)
def accumulate_record(self, params, sample_state, record):
"""Accumulates a single record into the sample state.
Args:
params: The parameters for the sample.
sample_state: The current sample state.
record: The record to accumulate.
Returns:
The updated sample state.
"""
return self._numerator.accumulate_record(params, sample_state, record)
def get_noised_result(self, sample_state, global_state, add_noise=True):
"""Gets noised average after all records of sample have been accumulated.
Args:
sample_state: The sample state after all records have been accumulated.
global_state: The global state.
Returns:
A tuple (estimate, new_global_state) where "estimate" is the estimated
average of the records and "new_global_state" is the updated global state.
"""
noised_sum, new_sum_global_state = self._numerator.get_noised_result(
sample_state, global_state.sum_state, add_noise)
new_global_state = self._GlobalState(
new_sum_global_state, global_state.denominator)
def normalize(v):
return tf.truediv(v, global_state.denominator)
return nest.map_structure(normalize, noised_sum), new_global_state
```
## our_dp_optimizer
```
# Copyright 2018, The TensorFlow Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Differentially private optimizers for TensorFlow."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
def make_optimizer_class(cls):
"""Constructs a DP optimizer class from an existing one."""
if (tf.train.Optimizer.compute_gradients.__code__ is
not cls.compute_gradients.__code__):
tf.logging.warning(
'WARNING: Calling make_optimizer_class() on class %s that overrides '
'method compute_gradients(). Check to ensure that '
'make_optimizer_class() does not interfere with overridden version.',
cls.__name__)
class DPOptimizerClass(cls):
"""Differentially private subclass of given class cls."""
def __init__(
self,
l2_norm_clip,
noise_multiplier,
dp_average_query,
num_microbatches,
unroll_microbatches=False,
*args, # pylint: disable=keyword-arg-before-vararg
**kwargs):
super(DPOptimizerClass, self).__init__(*args, **kwargs)
self._dp_average_query = dp_average_query
self._num_microbatches = num_microbatches
self._global_state = self._dp_average_query.initial_global_state()
# TODO(b/122613513): Set unroll_microbatches=True to avoid this bug.
# Beware: When num_microbatches is large (>100), enabling this parameter
# may cause an OOM error.
self._unroll_microbatches = unroll_microbatches
def dp_compute_gradients(self,
loss,
var_list,
gate_gradients=tf.train.Optimizer.GATE_OP,
aggregation_method=None,
colocate_gradients_with_ops=False,
grad_loss=None,
add_noise=True):
# Note: it would be closer to the correct i.i.d. sampling of records if
# we sampled each microbatch from the appropriate binomial distribution,
# although that still wouldn't be quite correct because it would be
# sampling from the dataset without replacement.
microbatches_losses = tf.reshape(loss,
[self._num_microbatches, -1])
sample_params = (self._dp_average_query.derive_sample_params(
self._global_state))
def process_microbatch(i, sample_state):
"""Process one microbatch (record) with privacy helper."""
grads, _ = zip(*super(cls, self).compute_gradients(
tf.gather(microbatches_losses, [i]), var_list,
gate_gradients, aggregation_method,
colocate_gradients_with_ops, grad_loss))
                # Convert the gradients tuple to a list so that None
                # gradients can be replaced with zero tensors.
                grads1 = list(grads)
                for inx in range(0, len(grads)):
                    if grads[inx] is None:
                        grads1[inx] = tf.zeros_like(var_list[inx])
                grads_list = grads1
sample_state = self._dp_average_query.accumulate_record(
sample_params, sample_state, grads_list)
return sample_state
if var_list is None:
var_list = (tf.trainable_variables() + tf.get_collection(
tf.GraphKeys.TRAINABLE_RESOURCE_VARIABLES))
sample_state = self._dp_average_query.initial_sample_state(
self._global_state, var_list)
if self._unroll_microbatches:
for idx in range(self._num_microbatches):
sample_state = process_microbatch(idx, sample_state)
else:
# Use of while_loop here requires that sample_state be a nested
# structure of tensors. In general, we would prefer to allow it to be
# an arbitrary opaque type.
cond_fn = lambda i, _: tf.less(i, self._num_microbatches)
body_fn = lambda i, state: [
tf.add(i, 1), process_microbatch(i, state)
]
idx = tf.constant(0)
_, sample_state = tf.while_loop(cond_fn, body_fn,
[idx, sample_state])
final_grads, self._global_state = (
self._dp_average_query.get_noised_result(
sample_state, self._global_state, add_noise))
return (final_grads)
def minimize(self,
d_loss_real,
d_loss_fake,
global_step=None,
var_list=None,
gate_gradients=tf.train.Optimizer.GATE_OP,
aggregation_method=None,
colocate_gradients_with_ops=False,
name=None,
grad_loss=None):
"""Minimize using sanitized gradients
Args:
d_loss_real: the loss tensor for real data
d_loss_fake: the loss tensor for fake data
global_step: the optional global step.
var_list: the optional variables.
name: the optional name.
Returns:
the operation that runs one step of DP gradient descent.
"""
# First validate the var_list
if var_list is None:
var_list = tf.trainable_variables()
for var in var_list:
if not isinstance(var, tf.Variable):
raise TypeError("Argument is not a variable.Variable: %s" %
var)
# ------------------ OUR METHOD --------------------------------
r_grads = self.dp_compute_gradients(
d_loss_real,
var_list=var_list,
gate_gradients=gate_gradients,
aggregation_method=aggregation_method,
colocate_gradients_with_ops=colocate_gradients_with_ops,
grad_loss=grad_loss, add_noise = True)
f_grads = self.dp_compute_gradients(
d_loss_fake,
var_list=var_list,
gate_gradients=gate_gradients,
aggregation_method=aggregation_method,
colocate_gradients_with_ops=colocate_gradients_with_ops,
grad_loss=grad_loss,
add_noise=False)
# Compute the overall gradients
s_grads = [(r_grads[idx] + f_grads[idx])
for idx in range(len(r_grads))]
sanitized_grads_and_vars = list(zip(s_grads, var_list))
self._assert_valid_dtypes(
[v for g, v in sanitized_grads_and_vars if g is not None])
# Apply the overall gradients
apply_grads = self.apply_gradients(sanitized_grads_and_vars,
global_step=global_step,
name=name)
return apply_grads
# -----------------------------------------------------------------
return DPOptimizerClass
def make_gaussian_optimizer_class(cls):
"""Constructs a DP optimizer with Gaussian averaging of updates."""
class DPGaussianOptimizerClass(make_optimizer_class(cls)):
"""DP subclass of given class cls using Gaussian averaging."""
def __init__(
self,
l2_norm_clip,
noise_multiplier,
num_microbatches,
unroll_microbatches=False,
*args, # pylint: disable=keyword-arg-before-vararg
**kwargs):
dp_average_query = GaussianAverageQuery(
l2_norm_clip, l2_norm_clip * noise_multiplier,
num_microbatches)
self.l2_norm_clip = l2_norm_clip
self.noise_multiplier = noise_multiplier
super(DPGaussianOptimizerClass,
self).__init__(l2_norm_clip, noise_multiplier,
dp_average_query, num_microbatches,
unroll_microbatches, *args, **kwargs)
return DPGaussianOptimizerClass
DPAdagradOptimizer = make_optimizer_class(tf.train.AdagradOptimizer)
DPAdamOptimizer = make_optimizer_class(tf.train.AdamOptimizer)
DPGradientDescentOptimizer = make_optimizer_class(
tf.train.GradientDescentOptimizer)
DPAdagradGaussianOptimizer = make_gaussian_optimizer_class(
tf.train.AdagradOptimizer)
DPAdamGaussianOptimizer = make_gaussian_optimizer_class(tf.train.AdamOptimizer)
DPGradientDescentGaussianOptimizer = make_gaussian_optimizer_class(
tf.train.GradientDescentOptimizer)
```
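For reference, a minimal usage sketch (not part of the original code) of the Gaussian DP optimizer defined above. The loss tensors and variable list below are hypothetical placeholders; note that `minimize` takes the *per-example* (unreduced) loss vectors so they can be split into microbatches, as done in `build_model` further below.

```
# Hypothetical per-example loss vectors (shape [batch_size]) and discriminator variables
d_opt = DPGradientDescentGaussianOptimizer(
    l2_norm_clip=1.0,         # each microbatch gradient is clipped to this L2 norm
    noise_multiplier=0.6,     # noise stddev = noise_multiplier * l2_norm_clip
    num_microbatches=64,      # typically equal to the batch size
    learning_rate=0.05)
d_train_op = d_opt.minimize(d_loss_real=d_loss_real_vec,
                            d_loss_fake=d_loss_fake_vec,
                            var_list=d_vars)
```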
## gan.ops
```
"""
Most codes from https://github.com/carpedm20/DCGAN-tensorflow
"""
import math
import numpy as np
import tensorflow as tf
if "concat_v2" in dir(tf):
def concat(tensors, axis, *args, **kwargs):
return tf.concat_v2(tensors, axis, *args, **kwargs)
else:
def concat(tensors, axis, *args, **kwargs):
return tf.concat(tensors, axis, *args, **kwargs)
def bn(x, is_training, scope):
return tf.contrib.layers.batch_norm(x,
decay=0.9,
updates_collections=None,
epsilon=1e-5,
scale=True,
is_training=is_training,
scope=scope)
def conv_out_size_same(size, stride):
return int(math.ceil(float(size) / float(stride)))
def conv_cond_concat(x, y):
"""Concatenate conditioning vector on feature map axis."""
x_shapes = x.get_shape()
y_shapes = y.get_shape()
return concat(
[x, y * tf.ones([x_shapes[0], x_shapes[1], x_shapes[2], y_shapes[3]])],
3)
def conv2d(input_,
output_dim,
k_h=5,
k_w=5,
d_h=2,
d_w=2,
stddev=0.02,
name="conv2d"):
with tf.variable_scope(name):
w = tf.get_variable(
'w', [k_h, k_w, input_.get_shape()[-1], output_dim],
initializer=tf.truncated_normal_initializer(stddev=stddev))
conv = tf.nn.conv2d(input_,
w,
strides=[1, d_h, d_w, 1],
padding='SAME')
biases = tf.get_variable('biases', [output_dim],
initializer=tf.constant_initializer(0.0))
conv = tf.reshape(tf.nn.bias_add(conv, biases), conv.get_shape())
return conv
def deconv2d(input_,
output_shape,
k_h=5,
k_w=5,
d_h=2,
d_w=2,
name="deconv2d",
stddev=0.02,
with_w=False):
with tf.variable_scope(name):
# filter : [height, width, output_channels, in_channels]
w = tf.get_variable(
'w', [k_h, k_w, output_shape[-1],
input_.get_shape()[-1]],
initializer=tf.random_normal_initializer(stddev=stddev))
try:
deconv = tf.nn.conv2d_transpose(input_,
w,
output_shape=output_shape,
strides=[1, d_h, d_w, 1])
        # Support for versions of TensorFlow before 0.7.0
except AttributeError:
deconv = tf.nn.deconv2d(input_,
w,
output_shape=output_shape,
strides=[1, d_h, d_w, 1])
biases = tf.get_variable('biases', [output_shape[-1]],
initializer=tf.constant_initializer(0.0))
deconv = tf.reshape(tf.nn.bias_add(deconv, biases), deconv.get_shape())
if with_w:
return deconv, w, biases
else:
return deconv
def lrelu(x, leak=0.2, name="lrelu"):
return tf.maximum(x, leak * x)
def linear(input_,
output_size,
scope=None,
stddev=0.02,
bias_start=0.0,
with_w=False):
shape = input_.get_shape().as_list()
with tf.variable_scope(scope or "Linear"):
matrix = tf.get_variable("Matrix", [shape[1], output_size], tf.float32,
tf.random_normal_initializer(stddev=stddev))
bias = tf.get_variable("bias", [output_size],
initializer=tf.constant_initializer(bias_start))
if with_w:
return tf.matmul(input_, matrix) + bias, matrix, bias
else:
return tf.matmul(input_, matrix) + bias
```
## OUR DP CGAN
```
# -*- coding: utf-8 -*-
from __future__ import division
from keras.datasets import cifar10
from mlxtend.data import loadlocal_mnist
from sklearn.preprocessing import label_binarize
from sklearn.multiclass import OneVsRestClassifier
from sklearn.metrics import roc_curve, auc
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
class OUR_DP_CGAN(object):
model_name = "OUR_DP_CGAN" # name for checkpoint
def __init__(self, sess, epoch, batch_size, z_dim, epsilon, delta, sigma,
clip_value, lr, dataset_name, base_dir, checkpoint_dir,
result_dir, log_dir):
self.sess = sess
self.dataset_name = dataset_name
self.base_dir = base_dir
self.checkpoint_dir = checkpoint_dir
self.result_dir = result_dir
self.log_dir = log_dir
self.epoch = epoch
self.batch_size = batch_size
self.epsilon = epsilon
self.delta = delta
self.noise_multiplier = sigma
self.l2_norm_clip = clip_value
self.lr = lr
if dataset_name == 'mnist' or dataset_name == 'fashion-mnist':
# parameters
self.input_height = 28
self.input_width = 28
self.output_height = 28
self.output_width = 28
self.z_dim = z_dim # dimension of noise-vector
self.y_dim = 10 # dimension of condition-vector (label)
self.c_dim = 1
# train
self.learningRateD = self.lr
self.learningRateG = self.learningRateD * 5
self.beta1 = 0.5
self.beta2 = 0.99
# test
self.sample_num = 64 # number of generated images to be saved
# load mnist
self.data_X, self.data_y = load_mnist(train = True)
# get number of batches for a single epoch
self.num_batches = len(self.data_X) // self.batch_size
elif dataset_name == 'cifar10':
# parameters
self.input_height = 32
self.input_width = 32
self.output_height = 32
self.output_width = 32
self.z_dim = 100 # dimension of noise-vector
self.y_dim = 10 # dimension of condition-vector (label)
self.c_dim = 3 # color dimension
# train
# self.learning_rate = 0.0002 # 1e-3, 1e-4
self.learningRateD = 1e-3
self.learningRateG = 1e-4
self.beta1 = 0.5
self.beta2 = 0.99
# test
self.sample_num = 64 # number of generated images to be saved
# load cifar10
self.data_X, self.data_y = load_cifar10(train=True)
self.num_batches = len(self.data_X) // self.batch_size
else:
raise NotImplementedError
def discriminator(self, x, y, is_training=True, reuse=False):
        # Network Architecture is exactly the same as in infoGAN (https://arxiv.org/abs/1606.03657)
# Architecture : (64)4c2s-(128)4c2s_BL-FC1024_BL-FC1_S
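        # Shorthand used above: (64)4c2s = conv layer, 64 filters, 4x4 kernel, stride 2;
        # _BL = batch norm + leaky ReLU; FC1024 = fully connected layer with 1024 units;
        # _S = sigmoid output.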
with tf.variable_scope("discriminator", reuse=reuse):
# merge image and label
if (self.dataset_name == "mnist"):
y = tf.reshape(y, [self.batch_size, 1, 1, self.y_dim])
x = conv_cond_concat(x, y)
net = lrelu(conv2d(x, 64, 4, 4, 2, 2, name='d_conv1'))
net = lrelu(
bn(conv2d(net, 128, 4, 4, 2, 2, name='d_conv2'),
is_training=is_training,
scope='d_bn2'))
net = tf.reshape(net, [self.batch_size, -1])
net = lrelu(
bn(linear(net, 1024, scope='d_fc3'),
is_training=is_training,
scope='d_bn3'))
out_logit = linear(net, 1, scope='d_fc4')
out = tf.nn.sigmoid(out_logit)
elif (self.dataset_name == "cifar10"):
y = tf.reshape(y, [self.batch_size, 1, 1, self.y_dim])
x = conv_cond_concat(x, y)
lrelu_slope = 0.2
kernel_size = 5
w_init = tf.contrib.layers.xavier_initializer()
net = lrelu(
conv2d(x,
64,
5,
5,
2,
2,
name='d_conv1' + '_' + self.dataset_name))
net = lrelu(
bn(conv2d(net,
128,
5,
5,
2,
2,
name='d_conv2' + '_' + self.dataset_name),
is_training=is_training,
scope='d_bn2'))
net = lrelu(
bn(conv2d(net,
256,
5,
5,
2,
2,
name='d_conv3' + '_' + self.dataset_name),
is_training=is_training,
scope='d_bn3'))
net = lrelu(
bn(conv2d(net,
512,
5,
5,
2,
2,
name='d_conv4' + '_' + self.dataset_name),
is_training=is_training,
scope='d_bn4'))
net = tf.reshape(net, [self.batch_size, -1])
out_logit = linear(net,
1,
scope='d_fc5' + '_' + self.dataset_name)
out = tf.nn.sigmoid(out_logit)
return out, out_logit
def generator(self, z, y, is_training=True, reuse=False):
        # Network Architecture is exactly the same as in infoGAN (https://arxiv.org/abs/1606.03657)
# Architecture : FC1024_BR-FC7x7x128_BR-(64)4dc2s_BR-(1)4dc2s_S
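        # Shorthand used above: FC1024_BR = fully connected layer with 1024 units + batch norm + ReLU;
        # (64)4dc2s = transposed conv (deconv), 64 filters, 4x4 kernel, stride 2;
        # _S = sigmoid output.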
with tf.variable_scope("generator", reuse=reuse):
if (self.dataset_name == "mnist"):
# merge noise and label
z = concat([z, y], 1)
net = tf.nn.relu(
bn(linear(z, 1024, scope='g_fc1'),
is_training=is_training,
scope='g_bn1'))
net = tf.nn.relu(
bn(linear(net, 128 * 7 * 7, scope='g_fc2'),
is_training=is_training,
scope='g_bn2'))
net = tf.reshape(net, [self.batch_size, 7, 7, 128])
net = tf.nn.relu(
bn(deconv2d(net, [self.batch_size, 14, 14, 64],
4,
4,
2,
2,
name='g_dc3'),
is_training=is_training,
scope='g_bn3'))
out = tf.nn.sigmoid(
deconv2d(net, [self.batch_size, 28, 28, 1],
4,
4,
2,
2,
name='g_dc4'))
elif (self.dataset_name == "cifar10"):
h_size = 32
h_size_2 = 16
h_size_4 = 8
h_size_8 = 4
h_size_16 = 2
z = concat([z, y], 1)
net = linear(z,
512 * h_size_16 * h_size_16,
scope='g_fc1' + '_' + self.dataset_name)
net = tf.nn.relu(
bn(tf.reshape(
net, [self.batch_size, h_size_16, h_size_16, 512]),
is_training=is_training,
scope='g_bn1'))
net = tf.nn.relu(
bn(deconv2d(net,
[self.batch_size, h_size_8, h_size_8, 256],
5,
5,
2,
2,
name='g_dc2' + '_' + self.dataset_name),
is_training=is_training,
scope='g_bn2'))
net = tf.nn.relu(
bn(deconv2d(net,
[self.batch_size, h_size_4, h_size_4, 128],
5,
5,
2,
2,
name='g_dc3' + '_' + self.dataset_name),
is_training=is_training,
scope='g_bn3'))
net = tf.nn.relu(
bn(deconv2d(net, [self.batch_size, h_size_2, h_size_2, 64],
5,
5,
2,
2,
name='g_dc4' + '_' + self.dataset_name),
is_training=is_training,
scope='g_bn4'))
out = tf.nn.tanh(
deconv2d(net, [
self.batch_size, self.output_height, self.output_width,
self.c_dim
],
5,
5,
2,
2,
name='g_dc5' + '_' + self.dataset_name))
return out
def build_model(self):
# some parameters
image_dims = [self.input_height, self.input_width, self.c_dim]
bs = self.batch_size
""" Graph Input """
# images
self.inputs = tf.placeholder(tf.float32, [bs] + image_dims,
name='real_images')
# labels
self.y = tf.placeholder(tf.float32, [bs, self.y_dim], name='y')
# noises
self.z = tf.placeholder(tf.float32, [bs, self.z_dim], name='z')
""" Loss Function """
# output of D for real images
D_real, D_real_logits = self.discriminator(self.inputs,
self.y,
is_training=True,
reuse=False)
# output of D for fake images
G = self.generator(self.z, self.y, is_training=True, reuse=False)
D_fake, D_fake_logits = self.discriminator(G,
self.y,
is_training=True,
reuse=True)
# get loss for discriminator
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(
logits=D_real_logits, labels=tf.ones_like(D_real)))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(
logits=D_fake_logits, labels=tf.zeros_like(D_fake)))
self.d_loss_real_vec = tf.nn.sigmoid_cross_entropy_with_logits(
logits=D_real_logits, labels=tf.ones_like(D_real))
self.d_loss_fake_vec = tf.nn.sigmoid_cross_entropy_with_logits(
logits=D_fake_logits, labels=tf.zeros_like(D_fake))
self.d_loss = d_loss_real + d_loss_fake
# get loss for generator
self.g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(
logits=D_fake_logits, labels=tf.ones_like(D_fake)))
""" Training """
# divide trainable variables into a group for D and a group for G
t_vars = tf.trainable_variables()
d_vars = [
var for var in t_vars if var.name.startswith('discriminator')
]
g_vars = [var for var in t_vars if var.name.startswith('generator')]
# optimizers
with tf.control_dependencies(tf.get_collection(
tf.GraphKeys.UPDATE_OPS)):
d_optim_init = DPGradientDescentGaussianOptimizer(
l2_norm_clip=self.l2_norm_clip,
noise_multiplier=self.noise_multiplier,
num_microbatches=self.batch_size,
learning_rate=self.learningRateD)
global_step = tf.train.get_global_step()
self.d_optim = d_optim_init.minimize(
d_loss_real=self.d_loss_real_vec,
d_loss_fake=self.d_loss_fake_vec,
global_step=global_step,
var_list=d_vars)
optimizer = DPGradientDescentGaussianOptimizer(
l2_norm_clip=self.l2_norm_clip,
noise_multiplier=self.noise_multiplier,
num_microbatches=self.batch_size,
learning_rate=self.learningRateD)
self.g_optim = tf.train.GradientDescentOptimizer(self.learningRateG) \
.minimize(self.g_loss, var_list=g_vars)
"""" Testing """
self.fake_images = self.generator(self.z,
self.y,
is_training=False,
reuse=True)
""" Summary """
d_loss_real_sum = tf.summary.scalar("d_loss_real", d_loss_real)
d_loss_fake_sum = tf.summary.scalar("d_loss_fake", d_loss_fake)
d_loss_sum = tf.summary.scalar("d_loss", self.d_loss)
g_loss_sum = tf.summary.scalar("g_loss", self.g_loss)
# final summary operations
self.g_sum = tf.summary.merge([d_loss_fake_sum, g_loss_sum])
self.d_sum = tf.summary.merge([d_loss_real_sum, d_loss_sum])
def train(self):
# initialize all variables
tf.global_variables_initializer().run()
# graph inputs for visualize training results
self.sample_z = np.random.uniform(-1,
1,
size=(self.batch_size, self.z_dim))
self.test_labels = self.data_y[0:self.batch_size]
# saver to save model
self.saver = tf.train.Saver()
# summary writer
self.writer = tf.summary.FileWriter(
self.log_dir + '/' + self.model_name, self.sess.graph)
        # restore checkpoint if it exists
could_load, checkpoint_counter = self.load(self.checkpoint_dir)
if could_load:
start_epoch = (int)(checkpoint_counter / self.num_batches)
start_batch_id = checkpoint_counter - start_epoch * self.num_batches
counter = checkpoint_counter
print(" [*] Load SUCCESS")
else:
start_epoch = 0
start_batch_id = 0
counter = 1
print(" [!] Load failed...")
# loop for epoch
epoch = start_epoch
should_terminate = False
while (epoch < self.epoch and not should_terminate):
# get batch data
for idx in range(start_batch_id, self.num_batches):
batch_images = self.data_X[idx * self.batch_size:(idx + 1) *
self.batch_size]
batch_labels = self.data_y[idx * self.batch_size:(idx + 1) *
self.batch_size]
batch_z = np.random.uniform(
-1, 1, [self.batch_size, self.z_dim]).astype(np.float32)
# update D network
_, summary_str, d_loss = self.sess.run(
[self.d_optim, self.d_sum, self.d_loss],
feed_dict={
self.inputs: batch_images,
self.y: batch_labels,
self.z: batch_z
})
self.writer.add_summary(summary_str, counter)
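                # Re-compute the cumulative privacy cost after every discriminator update
                # and stop training once the target epsilon budget is exceeded.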
eps = self.compute_epsilon((epoch * self.num_batches) + idx)
if (eps > self.epsilon):
should_terminate = True
print("TERMINATE !! Run out of Privacy Budget.....")
epoch = self.epoch
break
# update G network
_, summary_str, g_loss = self.sess.run(
[self.g_optim, self.g_sum, self.g_loss],
feed_dict={
self.inputs: batch_images,
self.y: batch_labels,
self.z: batch_z
})
self.writer.add_summary(summary_str, counter)
# display training status
counter += 1
_ = self.sess.run(self.fake_images,
feed_dict={
self.z: self.sample_z,
self.y: self.test_labels
})
# save training results for every 100 steps
if np.mod(counter, 100) == 0:
print("Iteration : " + str(idx) + " Eps: " + str(eps))
samples = self.sess.run(self.fake_images,
feed_dict={
self.z: self.sample_z,
self.y: self.test_labels
})
tot_num_samples = min(self.sample_num, self.batch_size)
manifold_h = int(np.floor(np.sqrt(tot_num_samples)))
manifold_w = int(np.floor(np.sqrt(tot_num_samples)))
save_images(
samples[:manifold_h * manifold_w, :, :, :],
[manifold_h, manifold_w],
check_folder(self.result_dir + '/' + self.model_dir) +
'/' + self.model_name +
'_train_{:02d}_{:04d}.png'.format(epoch, idx))
epoch = epoch + 1
# After an epoch, start_batch_id is set to zero
# non-zero value is only for the first epoch after loading pre-trained model
start_batch_id = 0
# save model
self.save(self.checkpoint_dir, counter)
# show temporal results
if (self.dataset_name == 'mnist'):
self.visualize_results_MNIST(epoch)
elif (self.dataset_name == 'cifar10'):
self.visualize_results_CIFAR(epoch)
# save model for final step
self.save(self.checkpoint_dir, counter)
def compute_fpr_tpr_roc(Y_test, Y_score):
n_classes = Y_score.shape[1]
false_positive_rate = dict()
true_positive_rate = dict()
roc_auc = dict()
for class_cntr in range(n_classes):
false_positive_rate[class_cntr], true_positive_rate[
class_cntr], _ = roc_curve(Y_test[:, class_cntr],
Y_score[:, class_cntr])
roc_auc[class_cntr] = auc(false_positive_rate[class_cntr],
true_positive_rate[class_cntr])
# Compute micro-average ROC curve and ROC area
false_positive_rate["micro"], true_positive_rate[
"micro"], _ = roc_curve(Y_test.ravel(), Y_score.ravel())
roc_auc["micro"] = auc(false_positive_rate["micro"],
true_positive_rate["micro"])
return false_positive_rate, true_positive_rate, roc_auc
def classify(X_train,
Y_train,
X_test,
classiferName,
random_state_value=0):
if classiferName == "lr":
classifier = OneVsRestClassifier(
LogisticRegression(solver='lbfgs',
multi_class='multinomial',
random_state=random_state_value))
elif classiferName == "mlp":
classifier = OneVsRestClassifier(
MLPClassifier(random_state=random_state_value, alpha=1))
elif classiferName == "rf":
classifier = OneVsRestClassifier(
RandomForestClassifier(n_estimators=100,
random_state=random_state_value))
else:
print("Classifier not in the list!")
exit()
Y_score = classifier.fit(X_train, Y_train).predict_proba(X_test)
return Y_score
batch_size = int(self.batch_size)
if (self.dataset_name == "mnist"):
n_class = np.zeros(10)
n_class[0] = 5923 - batch_size
n_class[1] = 6742
n_class[2] = 5958
n_class[3] = 6131
n_class[4] = 5842
n_class[5] = 5421
n_class[6] = 5918
n_class[7] = 6265
n_class[8] = 5851
n_class[9] = 5949
Z_sample = np.random.uniform(-1, 1, size=(batch_size, self.z_dim))
y = np.zeros(batch_size, dtype=np.int64) + 0
y_one_hot = np.zeros((batch_size, self.y_dim))
y_one_hot[np.arange(batch_size), y] = 1
images = self.sess.run(self.fake_images,
feed_dict={
self.z: Z_sample,
self.y: y_one_hot
})
for classLabel in range(0, 10):
for _ in range(0, int(n_class[classLabel]), batch_size):
Z_sample = np.random.uniform(-1,
1,
size=(batch_size, self.z_dim))
y = np.zeros(batch_size, dtype=np.int64) + classLabel
y_one_hot_init = np.zeros((batch_size, self.y_dim))
y_one_hot_init[np.arange(batch_size), y] = 1
images = np.append(images,
self.sess.run(self.fake_images,
feed_dict={
self.z: Z_sample,
self.y: y_one_hot_init
}),
axis=0)
y_one_hot = np.append(y_one_hot, y_one_hot_init, axis=0)
X_test, Y_test = load_mnist(train = False)
Y_test = [int(y) for y in Y_test]
classes = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
Y_test = label_binarize(Y_test, classes=classes)
if (self.dataset_name == "cifar10"):
n_class = np.zeros(10)
for t in range(1, 10):
n_class[t] = 1000
Z_sample = np.random.uniform(-1, 1, size=(batch_size, self.z_dim))
y = np.zeros(batch_size, dtype=np.int64) + 0
y_one_hot = np.zeros((batch_size, self.y_dim))
y_one_hot[np.arange(batch_size), y] = 1
images = self.sess.run(self.fake_images,
feed_dict={
self.z: Z_sample,
self.y: y_one_hot
})
for classLabel in range(0, 10):
for _ in range(0, int(n_class[classLabel]), batch_size):
Z_sample = np.random.uniform(-1,
1,
size=(batch_size, self.z_dim))
y = np.zeros(batch_size, dtype=np.int64) + classLabel
y_one_hot_init = np.zeros((batch_size, self.y_dim))
y_one_hot_init[np.arange(batch_size), y] = 1
images = np.append(images,
self.sess.run(self.fake_images,
feed_dict={
self.z: Z_sample,
self.y: y_one_hot_init
}),
axis=0)
y_one_hot = np.append(y_one_hot, y_one_hot_init, axis=0)
X_test, Y_test = load_cifar10(train=False)
classes = range(0, 10)
Y_test = label_binarize(Y_test, classes=classes)
print(" Classifying - Logistic Regression...")
        TwoDim_images = images.reshape(np.shape(images)[0], -1)
        X_test = X_test.reshape(np.shape(X_test)[0], -1)
Y_score = classify(TwoDim_images,
y_one_hot,
X_test,
"lr",
random_state_value=30)
false_positive_rate, true_positive_rate, roc_auc = compute_fpr_tpr_roc(
Y_test, Y_score)
classification_results_fname = self.base_dir + "CGAN_AuROC.txt"
classification_results = open(classification_results_fname, "w")
classification_results.write(
"\nepsilon : {:.2f}, sigma: {:.2f}, clipping value: {:.2f}".format(
(self.epsilon), round(self.noise_multiplier, 2),
round(self.l2_norm_clip, 2)))
classification_results.write("\nAuROC - logistic Regression: " +
str(roc_auc["micro"]))
classification_results.write(
"\n--------------------------------------------------------------------\n"
)
print(" Classifying - Random Forest...")
Y_score = classify(TwoDim_images,
y_one_hot,
X_test,
"rf",
random_state_value=30)
print(" Computing ROC - Random Forest ...")
false_positive_rate, true_positive_rate, roc_auc = compute_fpr_tpr_roc(
Y_test, Y_score)
classification_results.write(
"\nepsilon : {:.2f}, sigma: {:.2f}, clipping value: {:.2f}".format(
(self.epsilon), round(self.noise_multiplier, 2),
round(self.l2_norm_clip, 2)))
classification_results.write("\nAuROC - random Forest: " +
str(roc_auc["micro"]))
classification_results.write(
"\n--------------------------------------------------------------------\n"
)
print(" Classifying - multilayer Perceptron ...")
Y_score = classify(TwoDim_images,
y_one_hot,
X_test,
"mlp",
random_state_value=30)
print(" Computing ROC - Multilayer Perceptron ...")
false_positive_rate, true_positive_rate, roc_auc = compute_fpr_tpr_roc(
Y_test, Y_score)
classification_results.write(
"\nepsilon : {:.2f}, sigma: {:.2f}, clipping value: {:.2f}".format(
(self.epsilon), round(self.noise_multiplier, 2),
round(self.l2_norm_clip, 2)))
classification_results.write("\nAuROC - multilayer Perceptron: " +
str(roc_auc["micro"]))
classification_results.write(
"\n--------------------------------------------------------------------\n"
)
# save model for final step
self.save(self.checkpoint_dir, counter)
def compute_epsilon(self, steps):
"""Computes epsilon value for given hyperparameters."""
if self.noise_multiplier == 0.0:
return float('inf')
orders = [1 + x / 10. for x in range(1, 100)] + list(range(12, 64))
sampling_probability = self.batch_size / 60000
rdp = compute_rdp(q=sampling_probability,
noise_multiplier=self.noise_multiplier,
steps=steps,
orders=orders)
# Delta is set to 1e-5 because MNIST has 60000 training points.
return get_privacy_spent(orders, rdp, target_delta=1e-5)[0]
# CIFAR 10
def visualize_results_CIFAR(self, epoch):
tot_num_samples = min(self.sample_num, self.batch_size) # 64, 100
image_frame_dim = int(np.floor(np.sqrt(tot_num_samples))) # 8
""" random condition, random noise """
y = np.random.choice(self.y_dim, self.batch_size)
y_one_hot = np.zeros((self.batch_size, self.y_dim))
y_one_hot[np.arange(self.batch_size), y] = 1
z_sample = np.random.uniform(-1, 1, size=(self.batch_size,
self.z_dim)) # 100, 100
samples = self.sess.run(self.fake_images,
feed_dict={
self.z: z_sample,
self.y: y_one_hot
})
save_matplot_img(
samples[:image_frame_dim * image_frame_dim, :, :, :],
[image_frame_dim, image_frame_dim], self.result_dir + '/' +
self.model_name + '_epoch%03d' % epoch + '_test_all_classes.png')
# MNIST
def visualize_results_MNIST(self, epoch):
tot_num_samples = min(self.sample_num, self.batch_size)
image_frame_dim = int(np.floor(np.sqrt(tot_num_samples)))
""" random condition, random noise """
y = np.random.choice(self.y_dim, self.batch_size)
y_one_hot = np.zeros((self.batch_size, self.y_dim))
y_one_hot[np.arange(self.batch_size), y] = 1
z_sample = np.random.uniform(-1, 1, size=(self.batch_size, self.z_dim))
samples = self.sess.run(self.fake_images,
feed_dict={
self.z: z_sample,
self.y: y_one_hot
})
save_images(
samples[:image_frame_dim * image_frame_dim, :, :, :],
[image_frame_dim, image_frame_dim],
check_folder(self.result_dir + '/' + self.model_dir) + '/' +
self.model_name + '_epoch%03d' % epoch + '_test_all_classes.png')
""" specified condition, random noise """
n_styles = 10 # must be less than or equal to self.batch_size
np.random.seed()
si = np.random.choice(self.batch_size, n_styles)
for l in range(self.y_dim):
y = np.zeros(self.batch_size, dtype=np.int64) + l
y_one_hot = np.zeros((self.batch_size, self.y_dim))
y_one_hot[np.arange(self.batch_size), y] = 1
samples = self.sess.run(self.fake_images,
feed_dict={
self.z: z_sample,
self.y: y_one_hot
})
save_images(
samples[:image_frame_dim * image_frame_dim, :, :, :],
[image_frame_dim, image_frame_dim],
check_folder(self.result_dir + '/' + self.model_dir) + '/' +
self.model_name + '_epoch%03d' % epoch +
'_test_class_%d.png' % l)
samples = samples[si, :, :, :]
if l == 0:
all_samples = samples
else:
all_samples = np.concatenate((all_samples, samples), axis=0)
""" save merged images to check style-consistency """
canvas = np.zeros_like(all_samples)
for s in range(n_styles):
for c in range(self.y_dim):
canvas[s * self.y_dim +
c, :, :, :] = all_samples[c * n_styles + s, :, :, :]
save_images(
canvas, [n_styles, self.y_dim],
check_folder(self.result_dir + '/' + self.model_dir) + '/' +
self.model_name + '_epoch%03d' % epoch +
'_test_all_classes_style_by_style.png')
@property
def model_dir(self):
return "{}_{}_{}_{}".format(self.model_name, self.dataset_name,
self.batch_size, self.z_dim)
def save(self, checkpoint_dir, step):
checkpoint_dir = os.path.join(checkpoint_dir, self.model_dir,
self.model_name)
if not os.path.exists(checkpoint_dir):
os.makedirs(checkpoint_dir)
self.saver.save(self.sess,
os.path.join(checkpoint_dir,
self.model_name + '.model'),
global_step=step)
def load(self, checkpoint_dir):
import re
print(" [*] Reading checkpoints...")
checkpoint_dir = os.path.join(checkpoint_dir, self.model_dir,
self.model_name)
ckpt = tf.train.get_checkpoint_state(checkpoint_dir)
if ckpt and ckpt.model_checkpoint_path:
ckpt_name = os.path.basename(ckpt.model_checkpoint_path)
self.saver.restore(self.sess,
os.path.join(checkpoint_dir, ckpt_name))
counter = int(
                next(re.finditer(r"(\d+)(?!.*\d)", ckpt_name)).group(0))
print(" [*] Success to read {}".format(ckpt_name))
return True, counter
else:
print(" [*] Failed to find a checkpoint")
return False, 0
```
## gan.utils
```
"""
Most codes from https://github.com/carpedm20/DCGAN-tensorflow
"""
from __future__ import division
import scipy.misc
import numpy as np
from six.moves import xrange
import matplotlib.pyplot as plt
import os, gzip
import tensorflow as tf
import tensorflow.contrib.slim as slim
from keras.datasets import cifar10
from keras.datasets import mnist
def one_hot(x, n):
"""
convert index representation to one-hot representation
"""
x = np.array(x)
assert x.ndim == 1
return np.eye(n)[x]
def prepare_input(data=None, labels=None):
image_height = 32
image_width = 32
image_depth = 3
assert (data.shape[1] == image_height * image_width * image_depth)
assert (data.shape[0] == labels.shape[0])
# do mean normalization across all samples
mu = np.mean(data, axis=0)
mu = mu.reshape(1, -1)
sigma = np.std(data, axis=0)
sigma = sigma.reshape(1, -1)
data = data - mu
data = data / sigma
is_nan = np.isnan(data)
is_inf = np.isinf(data)
if np.any(is_nan) or np.any(is_inf):
print('data is not well-formed : is_nan {n}, is_inf: {i}'.format(
n=np.any(is_nan), i=np.any(is_inf)))
# data is transformed from (no_of_samples, 3072) to (no_of_samples , image_height, image_width, image_depth)
    # make sure the type of the data is np.float32
data = data.reshape([-1, image_depth, image_height, image_width])
data = data.transpose([0, 2, 3, 1])
data = data.astype(np.float32)
return data, labels
def read_cifar10(filename): # queue one element
class CIFAR10Record(object):
pass
result = CIFAR10Record()
label_bytes = 1 # 2 for CIFAR-100
result.height = 32
result.width = 32
result.depth = 3
data = np.load(filename, encoding='latin1')
value = np.asarray(data['data']).astype(np.float32)
labels = np.asarray(data['labels']).astype(np.int32)
return prepare_input(value, labels)
def load_cifar10(train):
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
if (train == True):
dataX = x_train.reshape([-1, 32, 32, 3])
dataY = y_train
else:
dataX = x_test.reshape([-1, 32, 32, 3])
dataY = y_test
seed = 547
np.random.seed(seed)
np.random.shuffle(dataX)
np.random.seed(seed)
np.random.shuffle(dataY)
y_vec = np.zeros((len(dataY), 10), dtype=np.float)
for i, label in enumerate(dataY):
y_vec[i, dataY[i]] = 1.0
return dataX / 255., y_vec
def load_mnist(train = True):
def extract_data(filename, num_data, head_size, data_size):
with gzip.open(filename) as bytestream:
bytestream.read(head_size)
buf = bytestream.read(data_size * num_data)
data = np.frombuffer(buf, dtype=np.uint8).astype(np.float)
return data
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape((60000, 28, 28, 1))
y_train = y_train.reshape((60000))
x_test = x_test.reshape((10000, 28, 28, 1))
y_test = y_test.reshape((10000))
y_train = np.asarray(y_train)
y_test = np.asarray(y_test)
if (train == True):
seed = 547
np.random.seed(seed)
np.random.shuffle(x_train)
np.random.seed(seed)
np.random.shuffle(y_train)
y_vec = np.zeros((len(y_train), 10), dtype=np.float)
for i, label in enumerate(y_train):
y_vec[i, y_train[i]] = 1.0
return x_train / 255., y_vec
else:
seed = 547
np.random.seed(seed)
np.random.shuffle(x_test)
np.random.seed(seed)
np.random.shuffle(y_test)
y_vec = np.zeros((len(y_test), 10), dtype=np.float)
for i, label in enumerate(y_test):
y_vec[i, y_test[i]] = 1.0
return x_test / 255., y_vec
def check_folder(log_dir):
if not os.path.exists(log_dir):
os.makedirs(log_dir)
return log_dir
def show_all_variables():
model_vars = tf.trainable_variables()
slim.model_analyzer.analyze_vars(model_vars, print_info=True)
def get_image(image_path,
input_height,
input_width,
resize_height=64,
resize_width=64,
crop=True,
grayscale=False):
image = imread(image_path, grayscale)
return transform(image, input_height, input_width, resize_height,
resize_width, crop)
def save_images(images, size, image_path):
return imsave(inverse_transform(images), size, image_path)
def imread(path, grayscale=False):
if (grayscale):
return scipy.misc.imread(path, flatten=True).astype(np.float)
else:
return scipy.misc.imread(path).astype(np.float)
def merge_images(images, size):
return inverse_transform(images)
def merge(images, size):
h, w = images.shape[1], images.shape[2]
if (images.shape[3] in (3, 4)):
c = images.shape[3]
img = np.zeros((h * size[0], w * size[1], c))
for idx, image in enumerate(images):
i = idx % size[1]
j = idx // size[1]
img[j * h:j * h + h, i * w:i * w + w, :] = image
return img
elif images.shape[3] == 1:
img = np.zeros((h * size[0], w * size[1]))
for idx, image in enumerate(images):
i = idx % size[1]
j = idx // size[1]
img[j * h:j * h + h, i * w:i * w + w] = image[:, :, 0]
return img
else:
raise ValueError('in merge(images,size) images parameter '
'must have dimensions: HxW or HxWx3 or HxWx4')
def imsave(images, size, path):
image = np.squeeze(merge(images, size))
return scipy.misc.imsave(path, image)
def center_crop(x, crop_h, crop_w, resize_h=64, resize_w=64):
if crop_w is None:
crop_w = crop_h
h, w = x.shape[:2]
j = int(round((h - crop_h) / 2.))
i = int(round((w - crop_w) / 2.))
return scipy.misc.imresize(x[j:j + crop_h, i:i + crop_w],
[resize_h, resize_w])
def transform(image,
input_height,
input_width,
resize_height=64,
resize_width=64,
crop=True):
if crop:
cropped_image = center_crop(image, input_height, input_width,
resize_height, resize_width)
else:
cropped_image = scipy.misc.imresize(image,
[resize_height, resize_width])
return np.array(cropped_image) / 127.5 - 1.
def inverse_transform(images):
return (images + 1.) / 2.
""" Drawing Tools """
# borrowed from https://github.com/ykwon0407/variational_autoencoder/blob/master/variational_bayes.ipynb
def save_scattered_image(z,
id,
z_range_x,
z_range_y,
name='scattered_image.jpg'):
N = 10
plt.figure(figsize=(8, 6))
plt.scatter(z[:, 0],
z[:, 1],
c=np.argmax(id, 1),
marker='o',
edgecolor='none',
cmap=discrete_cmap(N, 'jet'))
plt.colorbar(ticks=range(N))
axes = plt.gca()
axes.set_xlim([-z_range_x, z_range_x])
axes.set_ylim([-z_range_y, z_range_y])
plt.grid(True)
plt.savefig(name)
# borrowed from https://gist.github.com/jakevdp/91077b0cae40f8f8244a
def discrete_cmap(N, base_cmap=None):
"""Create an N-bin discrete colormap from the specified input map"""
# Note that if base_cmap is a string or None, you can simply do
# return plt.cm.get_cmap(base_cmap, N)
# The following works for string, None, or a colormap instance:
base = plt.cm.get_cmap(base_cmap)
color_list = base(np.linspace(0, 1, N))
cmap_name = base.name + str(N)
return base.from_list(cmap_name, color_list, N)
def save_matplot_img(images, size, image_path):
    # rescale image data // M*N*3 // RGB float32 : values must be scaled between 0. and 1.
for idx in range(64):
vMin = np.amin(images[idx])
vMax = np.amax(images[idx])
img_arr = images[idx].reshape(32 * 32 * 3, 1) # flatten
for i, v in enumerate(img_arr):
img_arr[i] = (v - vMin) / (vMax - vMin)
img_arr = img_arr.reshape(32, 32, 3) # M*N*3
plt.subplot(8, 8, idx + 1), plt.imshow(img_arr,
interpolation='nearest')
plt.axis("off")
plt.savefig(image_path)
```
## Main
```
import tensorflow as tf
import os
base_dir = "./"
out_dir = base_dir + "mnist_clip1_sigma0.6_lr0.55"
if not os.path.exists(out_dir):
os.mkdir(out_dir)
gpu_options = tf.GPUOptions(visible_device_list="0")
with tf.Session(config=tf.ConfigProto(allow_soft_placement=True,
gpu_options=gpu_options)) as sess:
epoch = 100
cgan = OUR_DP_CGAN(sess,
epoch=epoch,
batch_size=64,
z_dim=100,
epsilon=9.6,
delta=1e-5,
sigma=0.6,
clip_value=1,
lr=0.055,
dataset_name='mnist',
checkpoint_dir=out_dir + "/checkpoint/",
result_dir=out_dir + "/results/",
log_dir=out_dir + "/logs/",
base_dir=base_dir)
cgan.build_model()
print(" [*] Building model finished!")
show_all_variables()
cgan.train()
print(" [*] Training finished!")
```
| github_jupyter |
### **PINN eikonal solver for a portion of the Marmousi model**
```
from google.colab import drive
drive.mount('/content/gdrive')
cd "/content/gdrive/My Drive/Colab Notebooks/Codes/PINN_isotropic_eikonal_R1"
!pip install sciann==0.5.4.0
!pip install tensorflow==2.2.0
#!pip install keras==2.3.1
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
import tensorflow as tf
from sciann import Functional, Variable, SciModel, PDE
from sciann.utils import *
import scipy.io
import time
import random
from mpl_toolkits.axes_grid1.inset_locator import zoomed_inset_axes
from mpl_toolkits.axes_grid1.inset_locator import mark_inset
tf.config.threading.set_intra_op_parallelism_threads(1)
tf.config.threading.set_inter_op_parallelism_threads(1)
np.random.seed(123)
tf.random.set_seed(123)
# Loading velocity model
filename="./inputs/marm/model/marm_vz.txt"
marm = pd.read_csv(filename, index_col=None, header=None)
velmodel = np.reshape(np.array(marm), (101, 101)).T
# Loading reference solution
filename="./inputs/marm/traveltimes/fmm_or2_marm_s(1,1).txt"
T_data = pd.read_csv(filename, index_col=None, header=None)
T_data = np.reshape(np.array(T_data), (101, 101)).T
#Model specifications
zmin = 0.; zmax = 2.; deltaz = 0.02;
xmin = 0.; xmax = 2.; deltax = 0.02;
# Point-source location
sz = 1.0; sx = 1.0;
# Number of training points
num_tr_pts = 3000
# Creating the grid, calculating reference traveltimes, and preparing the list of grid points for training (X_star)
z = np.arange(zmin,zmax+deltaz,deltaz)
nz = z.size
x = np.arange(xmin,xmax+deltax,deltax)
nx = x.size
Z,X = np.meshgrid(z,x,indexing='ij')
X_star = [Z.reshape(-1,1), X.reshape(-1,1)]
selected_pts = np.random.choice(np.arange(Z.size),num_tr_pts,replace=False)
Zf = Z.reshape(-1,1)[selected_pts]
Zf = np.append(Zf,sz)
Xf = X.reshape(-1,1)[selected_pts]
Xf = np.append(Xf,sx)
X_starf = [Zf.reshape(-1,1), Xf.reshape(-1,1)]
# Plot the velocity model with the source location
plt.style.use('default')
plt.figure(figsize=(4,4))
ax = plt.gca()
im = ax.imshow(velmodel, extent=[xmin,xmax,zmax,zmin], aspect=1, cmap="jet")
ax.plot(sx,sz,'k*',markersize=8)
plt.xlabel('Offset (km)', fontsize=14)
plt.xticks(fontsize=10)
plt.ylabel('Depth (km)', fontsize=14)
plt.yticks(fontsize=10)
ax.xaxis.set_major_locator(plt.MultipleLocator(0.5))
ax.yaxis.set_major_locator(plt.MultipleLocator(0.5))
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="6%", pad=0.15)
cbar = plt.colorbar(im, cax=cax)
cbar.set_label('km/s',size=10)
cbar.ax.tick_params(labelsize=10)
plt.savefig("./figs/marm/velmodel.pdf", format='pdf', bbox_inches="tight")
# Analytical solution for the known traveltime part
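# T0 below is the traveltime for a homogeneous medium with the velocity at the source
# (distance from the source divided by that velocity); px0 and pz0 are its spatial
# derivatives dT0/dx and dT0/dz, used in the factored eikonal loss further down.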
vel = velmodel[int(round(sz/deltaz)),int(round(sx/deltax))] # Velocity at the source location
T0 = np.sqrt((Z-sz)**2 + (X-sx)**2)/vel;
px0 = np.divide(X-sx, T0*vel**2, out=np.zeros_like(T0), where=T0!=0)
pz0 = np.divide(Z-sz, T0*vel**2, out=np.zeros_like(T0), where=T0!=0)
# Find source location id in X_star
TOLX = 1e-6
TOLZ = 1e-6
sids,_ = np.where(np.logical_and(np.abs(X_starf[0]-sz)<TOLZ , np.abs(X_starf[1]-sx)<TOLX))
print(sids)
print(sids.shape)
print(X_starf[0][sids,0])
print(X_starf[1][sids,0])
# Preparing the Sciann model object
K.clear_session()
layers = [20]*10
# Appending source values
velmodelf = velmodel.reshape(-1,1)[selected_pts]; velmodelf = np.append(velmodelf,vel)
px0f = px0.reshape(-1,1)[selected_pts]; px0f = np.append(px0f,0.)
pz0f = pz0.reshape(-1,1)[selected_pts]; pz0f = np.append(pz0f,0.)
T0f = T0.reshape(-1,1)[selected_pts]; T0f = np.append(T0f,0.)
xt = Variable("xt",dtype='float64')
zt = Variable("zt",dtype='float64')
vt = Variable("vt",dtype='float64')
px0t = Variable("px0t",dtype='float64')
pz0t = Variable("pz0t",dtype='float64')
T0t = Variable("T0t",dtype='float64')
tau = Functional("tau", [zt, xt], layers, 'l-atan')
# Loss function based on the factored isotropic eikonal equation
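# Factorization: the total traveltime is T = T0 * tau, and the eikonal equation
# |grad T|^2 = 1/v^2 becomes |T0*grad(tau) + tau*grad(T0)|^2 = 1/v^2,
# which is exactly the residual L defined below (with grad(T0) = (px0, pz0)).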
L = (T0t*diff(tau, xt) + tau*px0t)**2 + (T0t*diff(tau, zt) + tau*pz0t)**2 - 1.0/vt**2
targets = [tau, 20*L, (1-sign(tau*T0t))*abs(tau*T0t)]
target_vals = [(sids, np.ones(sids.shape).reshape(-1,1)), 'zeros', 'zeros']
model = SciModel(
[zt, xt, vt, pz0t, px0t, T0t],
targets,
load_weights_from='models/vofz_model-end.hdf5',
optimizer='scipy-l-BFGS-B'
)
#Model training
start_time = time.time()
hist = model.train(
X_starf + [velmodelf,pz0f,px0f,T0f],
target_vals,
batch_size = X_starf[0].size,
epochs = 12000,
learning_rate = 0.008,
verbose=0
)
elapsed = time.time() - start_time
print('Training time: %.2f seconds' %(elapsed))
# Convergence history plot for verification
fig = plt.figure(figsize=(5,3))
ax = plt.axes()
#ax.semilogy(np.arange(0,300,0.001),hist.history['loss'],LineWidth=2)
ax.semilogy(hist.history['loss'], linewidth=2)
ax.set_xlabel('Epochs (x $10^3$)',fontsize=16)
plt.xticks(fontsize=12)
#ax.xaxis.set_major_locator(plt.MultipleLocator(50))
ax.set_ylabel('Loss',fontsize=16)
plt.yticks(fontsize=12);
plt.grid()
# Predicting traveltime solution from the trained model
L_pred = L.eval(model, X_star + [velmodel,pz0,px0,T0])
tau_pred = tau.eval(model, X_star + [velmodel,pz0,px0,T0])
tau_pred = tau_pred.reshape(Z.shape)
T_pred = tau_pred*T0
print('Time at source: %.4f'%(tau_pred[int(round(sz/deltaz)),int(round(sx/deltax))]))
# Plot the PINN solution error
plt.style.use('default')
plt.figure(figsize=(4,4))
ax = plt.gca()
im = ax.imshow(np.abs(T_pred-T_data), extent=[xmin,xmax,zmax,zmin], aspect=1, cmap="jet")
plt.xlabel('Offset (km)', fontsize=14)
plt.xticks(fontsize=10)
plt.ylabel('Depth (km)', fontsize=14)
plt.yticks(fontsize=10)
ax.xaxis.set_major_locator(plt.MultipleLocator(0.5))
ax.yaxis.set_major_locator(plt.MultipleLocator(0.5))
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="6%", pad=0.15)
cbar = plt.colorbar(im, cax=cax)
cbar.set_label('seconds',size=10)
cbar.ax.tick_params(labelsize=10)
plt.savefig("./figs/marm/pinnerror.pdf", format='pdf', bbox_inches="tight")
# Load fast sweeping traveltims for comparison
T_fsm = np.load('./inputs/marm/traveltimes/Tcomp.npy')
# Plot the first order FMM solution error
plt.style.use('default')
plt.figure(figsize=(4,4))
ax = plt.gca()
im = ax.imshow(np.abs(T_fsm-T_data), extent=[xmin,xmax,zmax,zmin], aspect=1, cmap="jet")
plt.xlabel('Offset (km)', fontsize=14)
plt.xticks(fontsize=10)
plt.ylabel('Depth (km)', fontsize=14)
plt.yticks(fontsize=10)
ax.xaxis.set_major_locator(plt.MultipleLocator(0.5))
ax.yaxis.set_major_locator(plt.MultipleLocator(0.5))
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="6%", pad=0.15)
cbar = plt.colorbar(im, cax=cax)
cbar.set_label('seconds',size=10)
cbar.ax.tick_params(labelsize=10)
plt.savefig("./figs/marm/fmm1error.pdf", format='pdf', bbox_inches="tight")
# Traveltime contour plots
fig = plt.figure(figsize=(5,5))
ax = plt.gca()
im1 = ax.contour(T_data, 6, extent=[xmin,xmax,zmin,zmax], colors='r')
im2 = ax.contour(T_pred, 6, extent=[xmin,xmax,zmin,zmax], colors='k',linestyles = 'dashed')
im3 = ax.contour(T_fsm, 6, extent=[xmin,xmax,zmin,zmax], colors='b',linestyles = 'dotted')
ax.plot(sx,sz,'k*',markersize=8)
plt.xlabel('Offset (km)', fontsize=14)
plt.ylabel('Depth (km)', fontsize=14)
ax.tick_params(axis='both', which='major', labelsize=8)
plt.gca().invert_yaxis()
h1,_ = im1.legend_elements()
h2,_ = im2.legend_elements()
h3,_ = im3.legend_elements()
ax.legend([h1[0], h2[0], h3[0]], ['Reference', 'PINN', 'Fast sweeping'],fontsize=12)
ax.xaxis.set_major_locator(plt.MultipleLocator(0.5))
ax.yaxis.set_major_locator(plt.MultipleLocator(0.5))
plt.xticks(fontsize=10)
plt.yticks(fontsize=10)
#ax.arrow(1.9, 1.7, -0.1, -0.1, head_width=0.05, head_length=0.075, fc='red', ec='red',width=0.02)
plt.savefig("./figs/marm/contours.pdf", format='pdf', bbox_inches="tight")
print(np.linalg.norm(T_pred-T_data)/np.linalg.norm(T_data))
print(np.linalg.norm(T_pred-T_data))
```
| github_jupyter |
# 1A.1 - Guessing a random number (solution)
We start from the snippet introduced in the exercise statement, which lets the user enter a number.
```
import random
nombre = input("Entrez un nombre")
nombre
```
**Q1:** Write a game in which Python randomly picks a number between 0 and 100, and try to find that number within 10 attempts.
```
n = random.randint(0,100)
appreciation = "?"
while True:
var = input("Entrez un nombre")
var = int(var)
if var < n :
appreciation = "trop bas"
print(var, appreciation)
    elif var > n :
appreciation = "trop haut"
print(var, appreciation)
if var == n:
appreciation = "bravo !"
print(var, appreciation)
break
```
**Q2:** Turn this game into a function ``jeu(nVies)``, where ``nVies`` is the maximum number of attempts.
```
import random

def jeu(nVies):
    n = random.randint(0, 100)
    vies = nVies
    appreciation = "?"
    while vies > 0:
        var = input("Entrez un nombre")
        var = int(var)
        if var == n:
            appreciation = "bravo !"
            print(vies, var, appreciation)
            break
        elif var < n:
            appreciation = "trop bas"
            print(vies, var, appreciation)
        else:
            appreciation = "trop haut"
            print(vies, var, appreciation)
        vies -= 1

jeu(10)
```
**Q3:** Adapt the code into a ``joueur`` (player) class with a ``jouer`` (play) method, where a player is defined by a nickname and a number of lives. Have two players play and determine the winner.
```
class joueur:
def __init__(self, vies, pseudo):
self.vies = vies
self.pseudo = pseudo
def jouer(self):
appreciation = "?"
n = random.randint(0,100)
while self.vies > 0:
message = appreciation + " -- " + self.pseudo + " : " + str(self.vies) + " vies restantes. Nombre choisi : "
var = input(message)
var = int(var)
            if var == n:
                appreciation = "bravo !"
                print(self.vies, var, appreciation)
                break
            elif var < n:
                appreciation = "trop bas"
                print(self.vies, var, appreciation)
            else:
                appreciation = "trop haut"
                print(self.vies, var, appreciation)
            self.vies -= 1
# Initialize the two players
j1 = joueur(10, "joueur 1")
j2 = joueur(10, "joueur 2")
# j1 and j2 play
j1.jouer()
j2.jouer()
# Number of lives remaining for each player
print("Nombre de vies restantes à chaque joueur")
print(j1.pseudo + " : " + str(j1.vies) + " restantes")
print(j2.pseudo + " : " + str(j2.vies) + " restantes")
# Result of the game
print("Résultat de la partie")
if j1.vies > j2.vies:
    print(j1.pseudo + " a gagné la partie")
elif j1.vies == j2.vies:
    print("match nul")
else:
    print(j2.pseudo + " a gagné la partie")
```
| github_jupyter |
# PyTorch Basic
```
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms
import matplotlib.pyplot as plt
from IPython.display import clear_output
torch.cuda.is_available()
```
## Device
```
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
```
## Hyper Parameter
```
input_size = 784
hidden_size = 500
num_class = 10
epochs = 5
batch_size = 100
lr = 0.001
```
## Load MNIST Dataset
```
train_dataset = torchvision.datasets.MNIST(root='../data',
train=True,
transform=transforms.ToTensor(),
download=True)
test_dataset = torchvision.datasets.MNIST(root='../data',
train=False,
transform=transforms.ToTensor())
print('train dataset shape : ',train_dataset.data.shape)
print('test dataset shape : ',test_dataset.data.shape)
plt.imshow(train_dataset.data[0])
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
batch_size=batch_size,
shuffle=False)
```
## Simple Model
```
class NeuralNet(nn.Module):
def __init__(self, input_size, hidden_size, num_class):
super(NeuralNet, self).__init__()
self.fc1 = nn.Linear(input_size,hidden_size)
self.relu = nn.ReLU()
self.fc2 = nn.Linear(hidden_size, num_class)
def forward(self, x):
out = self.fc1(x)
out = self.relu(out)
out = self.fc2(out)
return out
model = NeuralNet(input_size,hidden_size,num_class).to(device)
```
## Loss and Optimizer
```
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=lr)
```
## Train
```
total_step = len(train_loader)
for epoch in range(epochs):
for i, (images, labels) in enumerate(train_loader):
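        # Flatten each 28x28 image to a 784-vector, run the forward pass, compute the
        # cross-entropy loss, then backpropagate and take an Adam step.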
images = images.reshape(-1,28*28).to(device)
labels = labels.to(device)
outputs = model(images)
loss = criterion(outputs, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
if (i+1) % 100 == 0:
clear_output()
            print('EPOCH [{}/{}] STEP [{}/{}] Loss {:.4f}'
.format(epoch+1, epochs, i+1, total_step, loss.item()))
```
## Test
```
with torch.no_grad():
correct = 0
total = 0
for images, labels in test_loader:
images = images.reshape(-1, 28*28).to(device)
labels = labels.to(device)
outputs = model(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: {} %'.format(100 * correct / total))
```
## Save
```
torch.save(model.state_dict(), 'model.ckpt')
```
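To reuse the checkpoint later, a minimal sketch (not in the original notebook) for restoring the saved weights into a fresh model:

```
# Rebuild the architecture, then load the saved parameters
model = NeuralNet(input_size, hidden_size, num_class).to(device)
model.load_state_dict(torch.load('model.ckpt', map_location=device))
model.eval()  # switch to evaluation mode before running inference
```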
---
| github_jupyter |
# Colab-pytorch-image-classification
Original repo: [bentrevett/pytorch-image-classification](https://github.com/bentrevett/pytorch-image-classification)
[SqueezeNet code](https://github.com/pytorch/vision/blob/master/torchvision/models/squeezenet.py): [pytorch/vision](https://github.com/pytorch/vision)
My fork: [styler00dollar/Colab-image-classification](https://github.com/styler00dollar/Colab-image-classification)
This colab is a combination of [this Colab](https://colab.research.google.com/github/bentrevett/pytorch-image-classification/blob/master/5_resnet.ipynb) and [my other Colab](https://colab.research.google.com/github/styler00dollar/Colab-image-classification/blob/master/5_(small)_ResNet.ipynb) to do SqueezeNet training.
```
!nvidia-smi
```
# DATASET CREATION
```
#@title Mount Google Drive
from google.colab import drive
drive.mount('/content/drive')
print('Google Drive connected.')
# copy data somehow
!mkdir '/content/classification'
!mkdir '/content/classification/images'
!cp "/content/drive/MyDrive/classification_v2.7z" "/content/classification/images/classification.7z"
%cd /content/classification/images
!7z x "classification.7z"
!rm -rf /content/classification/images/classification.7z
#@title dataset creation
TRAIN_RATIO = 0.90 #@param {type:"number"}
import os
import shutil
from tqdm import tqdm
#data_dir = os.path.join(ROOT, 'CUB_200_2011')
data_dir = '/content/classification' #@param {type:"string"}
images_dir = os.path.join(data_dir, 'images')
train_dir = os.path.join(data_dir, 'train')
test_dir = os.path.join(data_dir, 'test')
if os.path.exists(train_dir):
shutil.rmtree(train_dir)
if os.path.exists(test_dir):
shutil.rmtree(test_dir)
os.makedirs(train_dir)
os.makedirs(test_dir)
classes = os.listdir(images_dir)
for c in classes:
class_dir = os.path.join(images_dir, c)
images = os.listdir(class_dir)
n_train = int(len(images) * TRAIN_RATIO)
train_images = images[:n_train]
test_images = images[n_train:]
os.makedirs(os.path.join(train_dir, c), exist_ok = True)
os.makedirs(os.path.join(test_dir, c), exist_ok = True)
for image in tqdm(train_images):
image_src = os.path.join(class_dir, image)
image_dst = os.path.join(train_dir, c, image)
shutil.copyfile(image_src, image_dst)
for image in tqdm(test_images):
image_src = os.path.join(class_dir, image)
image_dst = os.path.join(test_dir, c, image)
shutil.copyfile(image_src, image_dst)
```
# CALC MEANS & STDS
```
#@title print means and stds
import torch
import torchvision.transforms as transforms
import torchvision.datasets as datasets
from tqdm import tqdm
train_data = datasets.ImageFolder(root = train_dir,
transform = transforms.ToTensor())
means = torch.zeros(3)
stds = torch.zeros(3)
for img, label in tqdm(train_data):
means += torch.mean(img, dim = (1,2))
stds += torch.std(img, dim = (1,2))
means /= len(train_data)
stds /= len(train_data)
print("\n")
print(f'Calculated means: {means}')
print(f'Calculated stds: {stds}')
```
# TRAIN
```
#@title import, seed, transforms, dataloader, functions, plot, model, parameter
%cd /content/
from tqdm import tqdm
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torch.optim.lr_scheduler as lr_scheduler
from torch.optim.lr_scheduler import _LRScheduler
import torch.utils.data as data
import torchvision.transforms as transforms
import torchvision.datasets as datasets
import torchvision.models as models
from sklearn import decomposition
from sklearn import manifold
from sklearn.metrics import confusion_matrix
from sklearn.metrics import ConfusionMatrixDisplay
import matplotlib.pyplot as plt
import numpy as np
import copy
from collections import namedtuple
import os
import random
import shutil
SEED = 1234 #@param {type:"number"}
random.seed(SEED)
np.random.seed(SEED)
torch.manual_seed(SEED)
torch.cuda.manual_seed(SEED)
torch.backends.cudnn.deterministic = True
train_dir = '/content/classification/train' #@param {type:"string"}
test_dir = '/content/classification/test' #@param {type:"string"}
pretrained_size = 256 #@param {type:"number"}
pretrained_means = [0.6838, 0.6086, 0.6063] #@param {type:"raw"}
pretrained_stds= [0.2411, 0.2403, 0.2306] #@param {type:"raw"}
#https://github.com/mit-han-lab/data-efficient-gans/blob/master/DiffAugment_pytorch.py
import torch
import torch.nn.functional as F
def DiffAugment(x, policy='', channels_first=True):
if policy:
if not channels_first:
x = x.permute(0, 3, 1, 2)
for p in policy.split(','):
for f in AUGMENT_FNS[p]:
x = f(x)
if not channels_first:
x = x.permute(0, 2, 3, 1)
x = x.contiguous()
return x
def rand_brightness(x):
x = x + (torch.rand(x.size(0), 1, 1, 1, dtype=x.dtype, device=x.device) - 0.5)
return x
def rand_saturation(x):
x_mean = x.mean(dim=1, keepdim=True)
x = (x - x_mean) * (torch.rand(x.size(0), 1, 1, 1, dtype=x.dtype, device=x.device) * 2) + x_mean
return x
def rand_contrast(x):
x_mean = x.mean(dim=[1, 2, 3], keepdim=True)
x = (x - x_mean) * (torch.rand(x.size(0), 1, 1, 1, dtype=x.dtype, device=x.device) + 0.5) + x_mean
return x
def rand_translation(x, ratio=0.125):
shift_x, shift_y = int(x.size(2) * ratio + 0.5), int(x.size(3) * ratio + 0.5)
translation_x = torch.randint(-shift_x, shift_x + 1, size=[x.size(0), 1, 1], device=x.device)
translation_y = torch.randint(-shift_y, shift_y + 1, size=[x.size(0), 1, 1], device=x.device)
grid_batch, grid_x, grid_y = torch.meshgrid(
torch.arange(x.size(0), dtype=torch.long, device=x.device),
torch.arange(x.size(2), dtype=torch.long, device=x.device),
torch.arange(x.size(3), dtype=torch.long, device=x.device),
)
grid_x = torch.clamp(grid_x + translation_x + 1, 0, x.size(2) + 1)
grid_y = torch.clamp(grid_y + translation_y + 1, 0, x.size(3) + 1)
x_pad = F.pad(x, [1, 1, 1, 1, 0, 0, 0, 0])
x = x_pad.permute(0, 2, 3, 1).contiguous()[grid_batch, grid_x, grid_y].permute(0, 3, 1, 2)
return x
def rand_cutout(x, ratio=0.5):
cutout_size = int(x.size(2) * ratio + 0.5), int(x.size(3) * ratio + 0.5)
offset_x = torch.randint(0, x.size(2) + (1 - cutout_size[0] % 2), size=[x.size(0), 1, 1], device=x.device)
offset_y = torch.randint(0, x.size(3) + (1 - cutout_size[1] % 2), size=[x.size(0), 1, 1], device=x.device)
grid_batch, grid_x, grid_y = torch.meshgrid(
torch.arange(x.size(0), dtype=torch.long, device=x.device),
torch.arange(cutout_size[0], dtype=torch.long, device=x.device),
torch.arange(cutout_size[1], dtype=torch.long, device=x.device),
)
grid_x = torch.clamp(grid_x + offset_x - cutout_size[0] // 2, min=0, max=x.size(2) - 1)
grid_y = torch.clamp(grid_y + offset_y - cutout_size[1] // 2, min=0, max=x.size(3) - 1)
mask = torch.ones(x.size(0), x.size(2), x.size(3), dtype=x.dtype, device=x.device)
mask[grid_batch, grid_x, grid_y] = 0
x = x * mask.unsqueeze(1)
return x
AUGMENT_FNS = {
'color': [rand_brightness, rand_saturation, rand_contrast],
'translation': [rand_translation],
'cutout': [rand_cutout],
}
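# Example call on a hypothetical NCHW image batch:
#   images_aug = DiffAugment(images, policy='color,translation,cutout')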
train_transforms = transforms.Compose([
transforms.Resize(pretrained_size),
transforms.RandomRotation(5),
transforms.RandomHorizontalFlip(0.5),
transforms.RandomCrop(pretrained_size, padding = 10),
transforms.ToTensor(),
transforms.Normalize(mean = pretrained_means,
std = pretrained_stds)
])
test_transforms = transforms.Compose([
transforms.Resize(pretrained_size),
transforms.CenterCrop(pretrained_size),
transforms.ToTensor(),
transforms.Normalize(mean = pretrained_means,
std = pretrained_stds)
])
train_data = datasets.ImageFolder(root = train_dir,
transform = train_transforms)
test_data = datasets.ImageFolder(root = test_dir,
transform = test_transforms)
VALID_RATIO = 0.90 #@param {type:"number"}
n_train_examples = int(len(train_data) * VALID_RATIO)
n_valid_examples = len(train_data) - n_train_examples
train_data, valid_data = data.random_split(train_data,
[n_train_examples, n_valid_examples])
valid_data = copy.deepcopy(valid_data)
valid_data.dataset.transform = test_transforms
print(f'Number of training examples: {len(train_data)}')
print(f'Number of validation examples: {len(valid_data)}')
print(f'Number of testing examples: {len(test_data)}')
BATCH_SIZE = 32 #@param {type:"number"}
train_iterator = data.DataLoader(train_data,
shuffle = True,
batch_size = BATCH_SIZE)
valid_iterator = data.DataLoader(valid_data,
batch_size = BATCH_SIZE)
test_iterator = data.DataLoader(test_data,
batch_size = BATCH_SIZE)
def normalize_image(image):
image_min = image.min()
image_max = image.max()
image.clamp_(min = image_min, max = image_max)
image.add_(-image_min).div_(image_max - image_min + 1e-5)
return image
def plot_images(images, labels, classes, normalize = True):
n_images = len(images)
rows = int(np.sqrt(n_images))
cols = int(np.sqrt(n_images))
fig = plt.figure(figsize = (15, 15))
for i in range(rows*cols):
ax = fig.add_subplot(rows, cols, i+1)
image = images[i]
if normalize:
image = normalize_image(image)
ax.imshow(image.permute(1, 2, 0).cpu().numpy())
label = classes[labels[i]]
ax.set_title(label)
ax.axis('off')
N_IMAGES = 25 #@param {type:"number"}
images, labels = zip(*[(image, label) for image, label in
[train_data[i] for i in range(N_IMAGES)]])
classes = test_data.classes
plot_images(images, labels, classes)
def format_label(label):
label = label.split('.')[-1]
label = label.replace('_', ' ')
label = label.title()
label = label.replace(' ', '')
return label
test_data.classes = [format_label(c) for c in test_data.classes]
classes = test_data.classes
plot_images(images, labels, classes)
#https://github.com/pytorch/vision/blob/master/torchvision/models/squeezenet.py
import torch
import torch.nn as nn
import torch.nn.init as init
#from .utils import load_state_dict_from_url
from typing import Any
#__all__ = ['SqueezeNet', 'squeezenet1_0', 'squeezenet1_1']
model_urls = {
'1_0': 'https://download.pytorch.org/models/squeezenet1_0-a815701f.pth',
'1_1': 'https://download.pytorch.org/models/squeezenet1_1-f364aa15.pth',
}
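# Fire module (SqueezeNet building block): a 1x1 "squeeze" convolution followed by
# parallel 1x1 and 3x3 "expand" convolutions whose outputs are concatenated along
# the channel dimension.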
class Fire(nn.Module):
def __init__(
self,
inplanes: int,
squeeze_planes: int,
expand1x1_planes: int,
expand3x3_planes: int
) -> None:
super(Fire, self).__init__()
self.inplanes = inplanes
self.squeeze = nn.Conv2d(inplanes, squeeze_planes, kernel_size=1)
self.squeeze_activation = nn.ReLU(inplace=True)
self.expand1x1 = nn.Conv2d(squeeze_planes, expand1x1_planes,
kernel_size=1)
self.expand1x1_activation = nn.ReLU(inplace=True)
self.expand3x3 = nn.Conv2d(squeeze_planes, expand3x3_planes,
kernel_size=3, padding=1)
self.expand3x3_activation = nn.ReLU(inplace=True)
def forward(self, x: torch.Tensor) -> torch.Tensor:
x = self.squeeze_activation(self.squeeze(x))
return torch.cat([
self.expand1x1_activation(self.expand1x1(x)),
self.expand3x3_activation(self.expand3x3(x))
], 1)
class SqueezeNet(nn.Module):
def __init__(
self,
version: str = '1_0',
num_classes: int = 1000
) -> None:
super(SqueezeNet, self).__init__()
self.num_classes = num_classes
if version == '1_0':
self.features = nn.Sequential(
nn.Conv2d(3, 96, kernel_size=7, stride=2),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=3, stride=2, ceil_mode=True),
Fire(96, 16, 64, 64),
Fire(128, 16, 64, 64),
Fire(128, 32, 128, 128),
nn.MaxPool2d(kernel_size=3, stride=2, ceil_mode=True),
Fire(256, 32, 128, 128),
Fire(256, 48, 192, 192),
Fire(384, 48, 192, 192),
Fire(384, 64, 256, 256),
nn.MaxPool2d(kernel_size=3, stride=2, ceil_mode=True),
Fire(512, 64, 256, 256),
)
elif version == '1_1':
self.features = nn.Sequential(
nn.Conv2d(3, 64, kernel_size=3, stride=2),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=3, stride=2, ceil_mode=True),
Fire(64, 16, 64, 64),
Fire(128, 16, 64, 64),
nn.MaxPool2d(kernel_size=3, stride=2, ceil_mode=True),
Fire(128, 32, 128, 128),
Fire(256, 32, 128, 128),
nn.MaxPool2d(kernel_size=3, stride=2, ceil_mode=True),
Fire(256, 48, 192, 192),
Fire(384, 48, 192, 192),
Fire(384, 64, 256, 256),
Fire(512, 64, 256, 256),
)
else:
# FIXME: Is this needed? SqueezeNet should only be called from the
# FIXME: squeezenet1_x() functions
# FIXME: This checking is not done for the other models
raise ValueError("Unsupported SqueezeNet version {version}:"
"1_0 or 1_1 expected".format(version=version))
# Final convolution is initialized differently from the rest
final_conv = nn.Conv2d(512, self.num_classes, kernel_size=1)
self.classifier = nn.Sequential(
nn.Dropout(p=0.5),
final_conv,
nn.ReLU(inplace=True),
nn.AdaptiveAvgPool2d((1, 1))
)
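        # Note: SqueezeNet has no fully connected layers - the classifier is a dropout layer,
        # a 1x1 convolution with num_classes output channels, a ReLU and global average pooling,
        # and forward() flattens the result to shape (batch_size, num_classes).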
for m in self.modules():
if isinstance(m, nn.Conv2d):
if m is final_conv:
init.normal_(m.weight, mean=0.0, std=0.01)
else:
init.kaiming_uniform_(m.weight)
if m.bias is not None:
init.constant_(m.bias, 0)
def forward(self, x: torch.Tensor) -> torch.Tensor:
x = self.features(x)
x = self.classifier(x)
return torch.flatten(x, 1)
def _squeezenet(version: str, pretrained: bool, progress: bool, **kwargs: Any) -> SqueezeNet:
model = SqueezeNet(version, **kwargs)
if pretrained:
arch = 'squeezenet' + version
state_dict = load_state_dict_from_url(model_urls[arch],
progress=progress)
model.load_state_dict(state_dict)
return model
def squeezenet1_0(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> SqueezeNet:
r"""SqueezeNet model architecture from the `"SqueezeNet: AlexNet-level
accuracy with 50x fewer parameters and <0.5MB model size"
<https://arxiv.org/abs/1602.07360>`_ paper.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
progress (bool): If True, displays a progress bar of the download to stderr
"""
return _squeezenet('1_0', pretrained, progress, **kwargs)
def squeezenet1_1(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> SqueezeNet:
r"""SqueezeNet 1.1 model from the `official SqueezeNet repo
<https://github.com/DeepScale/SqueezeNet/tree/master/SqueezeNet_v1.1>`_.
SqueezeNet 1.1 has 2.4x less computation and slightly fewer parameters
than SqueezeNet 1.0, without sacrificing accuracy.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
progress (bool): If True, displays a progress bar of the download to stderr
"""
return _squeezenet('1_1', pretrained, progress, **kwargs)
"""
#https://github.com/pytorch/vision/blob/master/torchvision/models/utils.py
try:
from torch.hub import load_state_dict_from_url
except ImportError:
from torch.utils.model_zoo import load_url as load_state_dict_from_url
"""
model_train = '1_1' #@param ["1_0", "1_1"] {type:"string"}
if model_train == '1_0':
model = SqueezeNet(num_classes=len(test_data.classes), version='1_0')
#state_dict = load_state_dict_from_url(model_urls[model_train],
# progress=True)
#model.load_state_dict(state_dict)
elif model_train == '1_1':
model = SqueezeNet(num_classes=len(test_data.classes), version='1_1')
#state_dict = load_state_dict_from_url(model_urls[model_train],
# progress=True)
#model.load_state_dict(state_dict)
def count_parameters(model):
return sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f'The model has {count_parameters(model):,} trainable parameters')
START_LR = 1e-7 #@param {type:"number"}
optimizer = optim.Adam(model.parameters(), lr=START_LR)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
criterion = nn.CrossEntropyLoss()
model = model.to(device)
criterion = criterion.to(device)
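# Learning-rate range test: starting from the very small START_LR, the LRFinder below increases
# the learning rate exponentially towards end_lr over num_iter mini-batches while recording the
# (smoothed) training loss, stops early once the loss diverges, and finally restores the initial
# model weights. The resulting lr-vs-loss curve is used to pick a sensible learning rate.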
class LRFinder:
def __init__(self, model, optimizer, criterion, device):
self.optimizer = optimizer
self.model = model
self.criterion = criterion
self.device = device
torch.save(model.state_dict(), 'init_params.pt')
def range_test(self, iterator, end_lr = 10, num_iter = 100,
smooth_f = 0.05, diverge_th = 5):
lrs = []
losses = []
best_loss = float('inf')
lr_scheduler = ExponentialLR(self.optimizer, end_lr, num_iter)
iterator = IteratorWrapper(iterator)
for iteration in tqdm(range(num_iter)):
loss = self._train_batch(iterator)
#update lr
lr_scheduler.step()
lrs.append(lr_scheduler.get_lr()[0])
if iteration > 0:
loss = smooth_f * loss + (1 - smooth_f) * losses[-1]
if loss < best_loss:
best_loss = loss
losses.append(loss)
if loss > diverge_th * best_loss:
print("Stopping early, the loss has diverged")
break
#reset model to initial parameters
        self.model.load_state_dict(torch.load('init_params.pt'))  # restore the weights saved in __init__
return lrs, losses
def _train_batch(self, iterator):
self.model.train()
self.optimizer.zero_grad()
x, y = iterator.get_batch()
x = x.to(self.device)
y = y.to(self.device)
        y_pred = self.model(x)  # the SqueezeNet defined above returns a single tensor, not a (prediction, features) tuple
loss = self.criterion(y_pred, y)
loss.backward()
self.optimizer.step()
return loss.item()
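# ExponentialLR interpolates exponentially between the optimizer's base learning rate and end_lr:
# at iteration i it returns base_lr * (end_lr / base_lr) ** (i / num_iter).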
class ExponentialLR(_LRScheduler):
def __init__(self, optimizer, end_lr, num_iter, last_epoch=-1):
self.end_lr = end_lr
self.num_iter = num_iter
super(ExponentialLR, self).__init__(optimizer, last_epoch)
def get_lr(self):
curr_iter = self.last_epoch + 1
r = curr_iter / self.num_iter
return [base_lr * (self.end_lr / base_lr) ** r for base_lr in self.base_lrs]
class IteratorWrapper:
def __init__(self, iterator):
self.iterator = iterator
self._iterator = iter(iterator)
def __next__(self):
try:
inputs, labels = next(self._iterator)
except StopIteration:
self._iterator = iter(self.iterator)
inputs, labels, *_ = next(self._iterator)
return inputs, labels
def get_batch(self):
return next(self)
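# Top-k accuracy counts a prediction as correct if the true class is among the k highest-scoring
# classes. Here only top-1 is actually computed - the top-k branch is commented out and acc_k is
# returned as a constant 0, so the "Acc @5" values printed below are placeholders.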
def calculate_topk_accuracy(y_pred, y, k = 5):
with torch.no_grad():
batch_size = y.shape[0]
_, top_pred = y_pred.topk(k=1)
top_pred = top_pred.t()
correct = top_pred.eq(y.view(1, -1).expand_as(top_pred))
correct_1 = correct[:1].view(-1).float().sum(0, keepdim = True)
#correct_k = correct[:k].view(-1).float().sum(0, keepdim = True)
acc_1 = correct_1 / batch_size
#acc_k = correct_k / batch_size
acc_k = 0
return acc_1, acc_k
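# During training, DiffAugment (differentiable augmentation, originally proposed for data-efficient
# GAN training) is optionally applied to each input batch with the policy 'color,translation,cutout'
# before the forward pass; evaluation below always runs on un-augmented images.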
def train(model, iterator, optimizer, criterion, scheduler, device, current_epoch):
epoch_loss = 0
epoch_acc_1 = 0
epoch_acc_5 = 0
model.train()
policy = 'color,translation,cutout' #@param {type:"string"}
diffaug_activate = True #@param ["False", "True"] {type:"raw"}
#https://stackoverflow.com/questions/45465031/printing-text-below-tqdm-progress-bar
with tqdm(iterator, position=1, bar_format='{desc}') as desc:
for (x, y) in tqdm(iterator, position=0):
x = x.to(device)
y = y.to(device)
optimizer.zero_grad()
if diffaug_activate == False:
y_pred = model(x)
else:
y_pred = model(DiffAugment(x, policy=policy))
loss = criterion(y_pred, y)
acc_1, acc_5 = calculate_topk_accuracy(y_pred, y)
loss.backward()
optimizer.step()
scheduler.step()
epoch_loss += loss.item()
epoch_acc_1 += acc_1.item()
#epoch_acc_5 += acc_5.item()
epoch_loss /= len(iterator)
epoch_acc_1 /= len(iterator)
        desc.set_description(f'Epoch: {current_epoch+1} | Train Loss: {epoch_loss:.3f} | Train Acc @1: {epoch_acc_1*100:6.2f}% | ' \
                             f'Train Acc @5: {epoch_acc_5*100:6.2f}%')
return epoch_loss, epoch_acc_1, epoch_acc_5
def evaluate(model, iterator, criterion, device):
epoch_loss = 0
epoch_acc_1 = 0
epoch_acc_5 = 0
model.eval()
with torch.no_grad():
with tqdm(iterator, position=0, bar_format='{desc}', leave=True) as desc:
for (x, y) in iterator:
x = x.to(device)
y = y.to(device)
y_pred = model(x)
loss = criterion(y_pred, y)
acc_1, acc_5 = calculate_topk_accuracy(y_pred, y)
epoch_loss += loss.item()
epoch_acc_1 += acc_1.item()
#epoch_acc_5 += acc_5.item()
epoch_loss /= len(iterator)
epoch_acc_1 /= len(iterator)
#epoch_acc_5 /= len(iterator)
desc.set_description(f'\tValid Loss: {epoch_loss:.3f} | Valid Acc @1: {epoch_acc_1*100:6.2f}% | ' \
f'Valid Acc @5: {epoch_acc_5*100:6.2f}%')
return epoch_loss, epoch_acc_1, epoch_acc_5
def epoch_time(start_time, end_time):
elapsed_time = end_time - start_time
elapsed_mins = int(elapsed_time / 60)
elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
return elapsed_mins, elapsed_secs
#@title lr_finder
END_LR = 10 #@param {type:"number"}
NUM_ITER = 100 #@param {type:"number"}
lr_finder = LRFinder(model, optimizer, criterion, device)
lrs, losses = lr_finder.range_test(train_iterator, END_LR, NUM_ITER)
#@title plot_lr_finder
def plot_lr_finder(lrs, losses, skip_start = 5, skip_end = 5):
if skip_end == 0:
lrs = lrs[skip_start:]
losses = losses[skip_start:]
else:
lrs = lrs[skip_start:-skip_end]
losses = losses[skip_start:-skip_end]
fig = plt.figure(figsize = (16,8))
ax = fig.add_subplot(1,1,1)
ax.plot(lrs, losses)
ax.set_xscale('log')
ax.set_xlabel('Learning rate')
ax.set_ylabel('Loss')
ax.grid(True, 'both', 'x')
plt.show()
plot_lr_finder(lrs, losses, skip_start = 30, skip_end = 30)
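# A learning rate is usually picked from the steep, decreasing part of the curve above, roughly an
# order of magnitude below the point where the loss starts to rise again; FOUND_LR in the next cell
# is presumably chosen from this plot.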
#@title config
FOUND_LR = 2e-4 #@param {type:"number"}
"""
params = [
{'params': model.conv1.parameters(), 'lr': FOUND_LR / 10},
{'params': model.bn1.parameters(), 'lr': FOUND_LR / 10},
{'params': model.layer1.parameters(), 'lr': FOUND_LR / 8},
{'params': model.layer2.parameters(), 'lr': FOUND_LR / 6},
{'params': model.layer3.parameters(), 'lr': FOUND_LR / 4},
{'params': model.layer4.parameters(), 'lr': FOUND_LR / 2},
{'params': model.fc.parameters()}
]
"""
#optimizer = optim.Adam(params, lr = FOUND_LR)
optimizer = optim.Adam(model.parameters(), lr = FOUND_LR)
EPOCHS = 100 #@param {type:"number"}
STEPS_PER_EPOCH = len(train_iterator)
TOTAL_STEPS = EPOCHS * STEPS_PER_EPOCH
MAX_LRS = [p['lr'] for p in optimizer.param_groups]
scheduler = lr_scheduler.OneCycleLR(optimizer,
max_lr = MAX_LRS,
total_steps = TOTAL_STEPS)
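# OneCycleLR ramps the learning rate up towards max_lr and back down over the whole run, so
# total_steps must equal EPOCHS * steps-per-epoch and scheduler.step() is called once per batch
# inside train() rather than once per epoch.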
#@title training without topk
import time
best_valid_loss = float('inf')
best_valid_accuracy = 0
for epoch in range(EPOCHS):
start_time = time.monotonic()
train_loss, train_acc_1, train_acc_5 = train(model, train_iterator, optimizer, criterion, scheduler, device, epoch)
valid_loss, valid_acc_1, valid_acc_5 = evaluate(model, valid_iterator, criterion, device)
if valid_loss < best_valid_loss:
best_valid_loss = valid_loss
torch.save(model.state_dict(), 'best-validation-loss.pt')
if best_valid_accuracy < valid_acc_1:
best_valid_accuracy = valid_acc_1
torch.save(model.state_dict(), 'best-validation-accuracy.pt')
end_time = time.monotonic()
epoch_mins, epoch_secs = epoch_time(start_time, end_time)
```
#####################################################################################################
# TESTING
```
#@title Calc test loss
model.load_state_dict(torch.load('best-validation-accuracy.pt'))
print("best-validation-accuracy.pt")
test_loss, test_acc_1, test_acc_5 = evaluate(model, test_iterator, criterion, device)
print("-----------------------------")
model.load_state_dict(torch.load('best-validation-loss.pt'))
print("best-validation-loss.pt")
test_loss, test_acc_1, test_acc_5 = evaluate(model, test_iterator, criterion, device)
#@title plot_confusion_matrix
def get_predictions(model, iterator):
model.eval()
images = []
labels = []
probs = []
with torch.no_grad():
for (x, y) in iterator:
x = x.to(device)
y_pred = model(x)
y_prob = F.softmax(y_pred, dim = -1)
top_pred = y_prob.argmax(1, keepdim = True)
images.append(x.cpu())
labels.append(y.cpu())
probs.append(y_prob.cpu())
images = torch.cat(images, dim = 0)
labels = torch.cat(labels, dim = 0)
probs = torch.cat(probs, dim = 0)
return images, labels, probs
images, labels, probs = get_predictions(model, test_iterator)
pred_labels = torch.argmax(probs, 1)
def plot_confusion_matrix(labels, pred_labels, classes):
fig = plt.figure(figsize = (50, 50));
ax = fig.add_subplot(1, 1, 1);
cm = confusion_matrix(labels, pred_labels);
cm = ConfusionMatrixDisplay(cm, display_labels = classes);
cm.plot(values_format = 'd', cmap = 'Blues', ax = ax)
fig.delaxes(fig.axes[1]) #delete colorbar
plt.xticks(rotation = 90)
plt.xlabel('Predicted Label', fontsize = 50)
plt.ylabel('True Label', fontsize = 50)
plot_confusion_matrix(labels, pred_labels, classes)
#@title plot
corrects = torch.eq(labels, pred_labels)
incorrect_examples = []
for image, label, prob, correct in zip(images, labels, probs, corrects):
if not correct:
incorrect_examples.append((image, label, prob))
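# sort the misclassified examples by their highest predicted probability, so the most
# confidently wrong ones come first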
incorrect_examples.sort(reverse = True, key = lambda x: torch.max(x[2], dim = 0).values)
def plot_most_incorrect(incorrect, classes, n_images, normalize = True):
rows = int(np.sqrt(n_images))
cols = int(np.sqrt(n_images))
fig = plt.figure(figsize = (25, 20))
for i in range(rows*cols):
ax = fig.add_subplot(rows, cols, i+1)
image, true_label, probs = incorrect[i]
image = image.permute(1, 2, 0)
true_prob = probs[true_label]
incorrect_prob, incorrect_label = torch.max(probs, dim = 0)
true_class = classes[true_label]
incorrect_class = classes[incorrect_label]
if normalize:
image = normalize_image(image)
ax.imshow(image.cpu().numpy())
ax.set_title(f'true label: {true_class} ({true_prob:.3f})\n' \
f'pred label: {incorrect_class} ({incorrect_prob:.3f})')
ax.axis('off')
fig.subplots_adjust(hspace=0.4)
N_IMAGES = 36
plot_most_incorrect(incorrect_examples, classes, N_IMAGES)
#@title plot_representations
def get_representations(model, iterator):
model.eval()
outputs = []
intermediates = []
labels = []
with torch.no_grad():
for (x, y) in iterator:
x = x.to(device)
            y_pred = model(x)  # SqueezeNet returns a single tensor of logits
outputs.append(y_pred.cpu())
labels.append(y)
outputs = torch.cat(outputs, dim = 0)
labels = torch.cat(labels, dim = 0)
return outputs, labels
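# The "representations" gathered here are simply the model's output logits for each training image;
# below they are projected to two dimensions with PCA and t-SNE purely for visualisation.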
outputs, labels = get_representations(model, train_iterator)
def get_pca(data, n_components = 2):
pca = decomposition.PCA()
pca.n_components = n_components
pca_data = pca.fit_transform(data)
return pca_data
def plot_representations(data, labels, classes, n_images = None):
if n_images is not None:
data = data[:n_images]
labels = labels[:n_images]
fig = plt.figure(figsize = (15, 15))
ax = fig.add_subplot(111)
scatter = ax.scatter(data[:, 0], data[:, 1], c = labels, cmap = 'hsv')
#handles, _ = scatter.legend_elements(num = None)
#legend = plt.legend(handles = handles, labels = classes)
output_pca_data = get_pca(outputs)
plot_representations(output_pca_data, labels, classes)
#@title get_tsne
def get_tsne(data, n_components = 2, n_images = None):
if n_images is not None:
data = data[:n_images]
tsne = manifold.TSNE(n_components = n_components, random_state = 0)
tsne_data = tsne.fit_transform(data)
return tsne_data
output_tsne_data = get_tsne(outputs)
plot_representations(output_tsne_data, labels, classes)
#@title plot_filtered_images
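# plot_filtered_images convolves each input image with the first-layer filters (via F.conv2d) and
# shows every filter's response next to the original image.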
def plot_filtered_images(images, filters, n_filters = None, normalize = True):
images = torch.cat([i.unsqueeze(0) for i in images], dim = 0).cpu()
filters = filters.cpu()
if n_filters is not None:
filters = filters[:n_filters]
n_images = images.shape[0]
n_filters = filters.shape[0]
filtered_images = F.conv2d(images, filters)
fig = plt.figure(figsize = (30, 30))
for i in range(n_images):
image = images[i]
if normalize:
image = normalize_image(image)
ax = fig.add_subplot(n_images, n_filters+1, i+1+(i*n_filters))
ax.imshow(image.permute(1,2,0).numpy())
ax.set_title('Original')
ax.axis('off')
for j in range(n_filters):
image = filtered_images[i][j]
if normalize:
image = normalize_image(image)
ax = fig.add_subplot(n_images, n_filters+1, i+1+(i*n_filters)+j+1)
ax.imshow(image.numpy(), cmap = 'bone')
ax.set_title(f'Filter {j+1}')
ax.axis('off');
fig.subplots_adjust(hspace = -0.7)
N_IMAGES = 5
N_FILTERS = 7
images = [image for image, label in [train_data[i] for i in range(N_IMAGES)]]
filters = model.features[0].weight.data  # first conv layer of this SqueezeNet (the model has no .conv1 attribute)
plot_filtered_images(images, filters, N_FILTERS)
#@title plot_filters
#filters = model.conv1.weight.data
def plot_filters(filters, normalize = True):
filters = filters.cpu()
n_filters = filters.shape[0]
rows = int(np.sqrt(n_filters))
cols = int(np.sqrt(n_filters))
fig = plt.figure(figsize = (30, 15))
for i in range(rows*cols):
image = filters[i]
if normalize:
image = normalize_image(image)
ax = fig.add_subplot(rows, cols, i+1)
ax.imshow(image.permute(1, 2, 0))
ax.axis('off')
fig.subplots_adjust(wspace = -0.9)
plot_filters(filters)
```
| github_jupyter |
# From Variables to Classes
## A short Introduction
Python - like any programming language - has many extensions and libraries at its disposal. There are libraries for almost everything.
<center>But what are **libraries**? </center>
Essentially, **libraries** are collections of methods (_small pieces of code where you put something in and get something else out_) which you can use to analyse your data, visualise your data, run models ... do almost anything you like.
As said, methods usually take _something_ as input. That _something_ is usually a **variable**.
In the following, we will work our way from **variables** to **libraries**.
## Variables
Variables are one of the simplest types of objects in a programming language. An [object](https://en.wikipedia.org/wiki/Object_(computer_science)) is a value stored in the memory of your computer, marked by a specific identifier. Variables can have different types, such as [strings, numbers, and booleans](https://www.learnpython.org/en/Variables_and_Types). Unlike in many other programming languages, you do not need to declare the type of a variable, as variables are handled as objects in Python.
```python
x = 4.2 # floating point number
y = 'Hello World!' # string
z = True # boolean
```
```
x = 4.2
print(type(x))
y = 'Hello World!'
print(type(y))
z = True
print(type(z))
```
We can use operations (normal arithmetic operations) on variables to get the results we want. With numbers, you can add, subtract, multiply, and divide - basically taking the values from the memory assigned to the variable names and performing calculations with them.
Let's have a look at operations with numbers and strings. We leave booleans to the side for the moment. We will simply add the variables below.
```python
n1 = 7
n2 = 42
s1 = 'Looking good, '
s2 = 'you are.'
```
```
n1 = 7
n2 = 42
s1 = 'Looking good, '
s2 = 'you are.'
first_sum = n1 + n2
print(first_sum)
first_conc = s1 + s2
print(first_conc)
```
Variables can be more than just a single number. If you think of an Excel spreadsheet, a variable can be the content of a single cell, or multiple cells can be combined into one variable (e.g. one column of an Excel table).
So let's create a list - _a collection of variables_ - from `x`, `n1`, and `n2`. Lists in Python are created using square brackets `[ ]`.
Now, if you want to calculate the sum of this list, it is quite tedious to add up every item of the list manually.
```python
first_list = [x, n1, n2]
# a sum of a list could look like
second_sum = some_list[0] + some_list[1] + ... + some_list[n] # where n is the index of the last item in the list, e.g. 2 for first_list.
```
Written out like this, the second sum is just as much work as before. It would be great if this step of calculating the sum could be reused many times without writing it out - and this is exactly what functions are for. For example, there already exists a built-in `sum` function:
```python
sum(first_list)
```
```
first_list = [x, n1, n2]
second_sum = first_list[0] + first_list[1] + first_list[2]
print('manual sum {}'.format(second_sum))
# This can also be done with a function
print('sum function {}'.format(sum(first_list)))
```
## Functions
The `sum()` we used above is a built-in **function**.
Functions (later we will call them methods) are pieces of code which take an input, perform some kind of operation on it, and (_optionally_) return an output.
In Python, functions are written like:
```python
def func(input):
"""
    Description of the function's content # called the docstring (the def line above is the function header)
"""
some kind of operation on input # called the function body
return output
```
As an example, we write a `sumup` function which sums up a list.
```
def sumup(inp):
"""
input: inp - list/array with floating point or integer numbers
    return: val - scalar value of the summed-up list
"""
val = 0
for i in inp:
val = val + i
return val
# let's compare the implemented standard sum function with the new sumup function
sum1 = sum(first_list)
sum2 = sumup(first_list)
print("The python sum function yields {}, \nand our sumup function yields {}.".format(*(sum1,sum2)))
# summing up the numbers from 1 to 100
import numpy as np
ar_2_sum = np.linspace(1,100,100, dtype='i')
print("the sum of the array is: {}".format(sumup(ar_2_sum)))
```
As we see above, functions are quite practical and save a lot of time. Furthermore, they help structure your code. Some functions are directly available in Python without any libraries or other external software. In the example above, however, you might have noticed that we `import`ed a library called `numpy`.
In such libraries, functions with a similar purpose are bundled into one package, with the advantage that you don't need to import every single function separately.
Imagine you move and have to pack all your belongings. You can think of libraries as packing things with similar purpose in the same box (= library).
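For example, there are several equivalent ways to bring such a box into your code - a small illustrative sketch, using the familiar `numpy` library:
```python
import numpy                    # import the whole library
import numpy as np              # import it under a short alias (the usual convention)
from numpy import linspace      # import only a single function from the library

print(numpy.sum([1, 2, 3]))     # 6 - call the function through the full library name
print(np.sum([1, 2, 3]))        # 6 - same function, via the alias
print(linspace(0, 1, 5))        # [0.   0.25 0.5  0.75 1.  ]
```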
## Functions to Methods as part of classes
When we talk about functions in the context of classes, we usually call them methods. But what are **classes**?
[Classes](https://docs.python.org/3/tutorial/classes.html) are ways to bundle functionality together. Logically, functionality with similar purpose (or different kind of similarity).
One example could be: think of **apples**.
Apples are now a class. You can apply methods to this class, such as `eat()` or `cut()` - or more sophisticated methods, like the various apple recipes collected in a cookbook.
The `eat()` method is straightforward. But the `cut()` method may be more interesting, since there are various ways to cut an apple.
Let's assume there are two apples to be cut differently. In Python, once you have assigned a class to a variable, you have created an **instance** of that class. Methods are then applied to that instance using dot notation.
```python
Golden_Delicious = apple()
Yoya = apple()
Golden_Delicious.cut(4)
Yoya.cut(8)
```
The two apples Golden Delicious and Yoya are _instances_ of the class apple. Real _incarnations_ of the abstract concept _apple_. The Golden Delicious is cut into 4 pieces, while the Yoya is cut into 8 pieces.
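To make this concrete, here is a minimal sketch of what such an `apple` class could look like - the class and its methods are purely illustrative assumptions, not part of any library:
```python
class apple:
    def __init__(self):
        self.pieces = 1          # a whole apple starts as one piece

    def eat(self):
        print("The apple is gone.")

    def cut(self, n_pieces):
        self.pieces = n_pieces   # cutting changes the state of this particular instance
        print(f"Apple cut into {n_pieces} pieces.")

Golden_Delicious = apple()       # two separate instances of the same class
Yoya = apple()
Golden_Delicious.cut(4)          # prints: Apple cut into 4 pieces.
Yoya.cut(8)                      # prints: Apple cut into 8 pieces.
```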
This is similar to more complex libraries, such as `scikit-learn`. In one exercise, you used the command:
```python
from sklearn.cluster import KMeans
```
which simply imports the **class** `KMeans` from the submodule `sklearn.cluster`. `KMeans` comprises several methods for clustering, which you can call just as in the apple example before.
For this, you need to create an _instance_ of the `KMeans` class.
```python
...
kmeans_inst = KMeans(n_clusters=n_clusters) # first we create the instance of the KMeans class called kmeans_inst
kmeans_inst.fit(data) # then we apply a method to the instance kmeans_inst
...
```
An example:
```
# here we just create the data for clustering
from sklearn.datasets import make_blobs  # the old sklearn.datasets.samples_generator module has been removed in newer scikit-learn versions
import matplotlib.pyplot as plt
%matplotlib inline
X, y = make_blobs(n_samples=100, centers=3, cluster_std= 0.5,
random_state=0)
plt.scatter(X[:,0], X[:,1], s=70)
# now we create an instance of the KMeans class
from sklearn.cluster import KMeans
nr_of_clusters = 3 # because we see 3 clusters in the plot above
kmeans_inst = KMeans(n_clusters= nr_of_clusters) # create the instance kmeans_inst
kmeans_inst.fit(X) # apply a method to the instance
y_predict = kmeans_inst.predict(X) # apply another method to the instance and save it in another variable
# lets plot the predicted cluster centers colored in the cluster color
plt.scatter(X[:, 0], X[:, 1], c=y_predict, s=50, cmap='Accent')
centers = kmeans_inst.cluster_centers_ # apply the method to find the new centers of the determined clusters
plt.scatter(centers[:, 0], centers[:, 1], c='red', s=200, alpha=0.6); # plot the cluster centers
```
## Summary
This short presentation is meant to make you familiar with the concepts of variables, functions, methods and classes - all of which are objects!
* Variables are normally declared by the user and link a value stored in the memory of your computer to a variable name. They are usually the input of functions.
* Functions are pieces of code that take an input and perform some operation on it. Optionally, they directly return an output value.
* To facilitate the use of functions, they are sometimes bundled as methods within classes. Classes in turn can build up whole libraries in Python.
* Similar to real book libraries, python libraries contain a collection of _recipes_ which can be applied to your data.
* In terms of apples: You own different kinds of apples. A book about apple dishes (_class_) from the library contains different recipes (_methods_) which can be used for your different apples (_instances of the class_).
## Further links
* [Data Science Handbook](https://jakevdp.github.io/PythonDataScienceHandbook/)
* [Python for Geosciences](https://github.com/koldunovn/python_for_geosciences)
* [Introduction to Python for Geoscientists](http://ggorman.github.io/Introduction-to-programming-for-geoscientists/)
* [Full Video course on Object Oriented Programming](https://www.youtube.com/watch?v=ZDa-Z5JzLYM&list=PL-osiE80TeTsqhIuOqKhwlXsIBIdSeYtc)
| github_jupyter |