Linear Regression: Rounding and Subclassing In this notebook we investigate the influence of <em style="color:blue;">rounding</em> and <em style="color:blue;">subclassing</em> on linear regression. To begin, we import all the libraries we need.
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import sklearn.linear_model as lm
Python/5 Linear Regression/Linear-Regression-Rounding.ipynb
karlstroetmann/Artificial-Intelligence
gpl-2.0
We will work with artificially generated data. The independent variable X is a numpy array of $\texttt{N}=400$ random numbers drawn from a <em style="color:blue;">normal</em> distribution with mean $\mu = 10$ and standard deviation $1$. In order to be able to reproduce our re...
np.random.seed(1)
N = 400
𝜇 = 10
X = np.random.randn(N) + 𝜇
The dependent variable Y is created by adding some noise to the independent variable X. This noise is <em style="color:blue;">normally</em> distributed with mean $0$ and standard deviation $0.5$.
noise = 0.5 * np.random.randn(len(X))
Y = X + noise
We build a linear model for X and Y.
model = lm.LinearRegression()
In order to use scikit-learn, we have to reshape the array X into a matrix with a single column.
X = np.reshape(X, (len(X), 1))
We train the model and compute its score.
M = model.fit(X, Y)
M.score(X, Y)
In order to plot the data together with the linear model, we extract the coefficients.
ϑ0 = M.intercept_
ϑ1 = M.coef_[0]
We plot Y versus X and the linear regression line.
xMax = np.max(X) + 0.2
xMin = np.min(X) - 0.2
%matplotlib inline
plt.figure(figsize=(15, 10))
sns.set(style='darkgrid')
plt.scatter(X, Y, c='b')  # 'b' is blue color
plt.xlabel('X values')
plt.ylabel('true values + noise')
plt.title('Influence of rounding on explained variance')
plt.show(plt.plot([xMin, xMax], [ϑ0 + ϑ1 ...
As we want to study the effect of <em style="color:blue;">rounding</em>, the values of the independent variable X are rounded to the nearest integer. To this end, the values are transformed to another unit, rounded, and then transformed back to the original unit. This way we can investigate how the performance of linea...
X = np.round(X * 0.8) / 0.8
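To see what this transformation does, here is a minimal sketch (with made-up values) showing that rounding after scaling by a factor $f$ quantizes the data to a grid of width $1/f$; with $f = 0.8$ as above, the grid spacing is $1.25$.

```python
import numpy as np

# Rounding after scaling by f quantizes values to multiples of 1/f.
# With f = 0.8, the grid spacing is 1/0.8 = 1.25.
f = 0.8
x = np.array([9.3, 10.0, 10.7, 11.4])   # made-up sample values
quantized = np.round(x * f) / f
print(quantized)  # [ 8.75 10.   11.25 11.25] -- all multiples of 1.25
```

This discards fine-grained variation in X, which is exactly the information loss whose effect on the regression score we measure below.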
We create a new <em style="color:blue;">linear model</em>, fit it to the data and compute its score.
model = lm.LinearRegression()
M = model.fit(X, Y)
M.score(X, Y)
We can see that the performance of the linear model has degraded considerably.
ϑ0 = M.intercept_
ϑ1 = M.coef_[0]
xMax = max(X) + 0.2
xMin = min(X) - 0.2
plt.figure(figsize=(12, 10))
sns.set(style='darkgrid')
plt.scatter(X, Y, c='b')
plt.plot([xMin, xMax], [ϑ0 + ϑ1 * xMin, ϑ0 + ϑ1 * xMax], c='r')
plt.xlabel('rounded X values')
plt.ylabel('true X values + noise')
plt.title('Influence of rounding on...
Next, we investigate the effect of <em style="color:blue;">subclassing</em>. We will only keep those values such that $X > 11$.
X.shape
selectorX = (X > 11)
selectorY = np.reshape(selectorX, (N,))
XS = X[selectorX]
XS = np.reshape(XS, (len(XS), 1))
YS = Y[selectorY]
Again, we fit a linear model.
model = lm.LinearRegression()
M = model.fit(XS, YS)
M.score(XS, YS)
We see that the performance of linear regression has degraded considerably. Let's plot this.
ϑ0 = M.intercept_
ϑ1 = M.coef_[0]
xMax = max(XS) + 0.2
xMin = min(XS) - 0.2
plt.figure(figsize=(12, 10))
sns.set(style='darkgrid')
plt.scatter(XS, YS, c='b')
plt.plot([xMin, xMax], [ϑ0 + ϑ1 * xMin, ϑ0 + ϑ1 * xMax], c='r')
plt.xlabel('rounded X values')
plt.ylabel('true X values + noise')
plt.title('Influence of subclas...
Problem statement
We are interested in solving $$x^* = \arg \min_x f(x)$$ under the constraints that: $f$ is a black box for which no closed form is known (nor its gradients); $f$ is expensive to evaluate; and evaluations of $y = f(x)$ may be noisy. Disclaimer. If you do not have these constraints, then there is certa...
noise_level = 0.1

def f(x, noise_level=noise_level):
    return np.sin(5 * x[0]) * (1 - np.tanh(x[0] ** 2)) + np.random.randn() * noise_level
examples/bayesian-optimization.ipynb
betatim/BlackBox
bsd-3-clause
Note. In skopt, functions $f$ are assumed to take as input a 1D vector $x$ represented as an array-like and to return a scalar $f(x)$.
# Plot f(x) + contours
x = np.linspace(-2, 2, 400).reshape(-1, 1)
fx = [f(x_i, noise_level=0.0) for x_i in x]
plt.plot(x, fx, "r--", label="True (unknown)")
plt.fill(np.concatenate([x, x[::-1]]),
         np.concatenate(([fx_i - 1.9600 * noise_level for fx_i in fx],
                         [fx_i + 1.9600 * noise_leve...
Bayesian optimization based on Gaussian process regression is implemented in skopt.gp_minimize and can be carried out as follows:
from skopt import gp_minimize

res = gp_minimize(f,              # the function to minimize
                  [(-2.0, 2.0)],  # the bounds on each dimension of x
                  acq_func="EI",  # the acquisition function
                  n_calls=15,     # the number of evaluations of f
                  ...
Accordingly, the approximated minimum is found to be:
"x^*=%.4f, f(x^*)=%.4f" % (res.x[0], res.fun)
For further inspection of the results, attributes of the res named tuple provide the following information:
- x [float]: location of the minimum.
- fun [float]: function value at the minimum.
- models: surrogate models used for each iteration.
- x_iters [array]: location of function evaluation for each iteration.
- func_vals...
print(res)
Together these attributes can be used to visually inspect the results of the minimization, such as the convergence trace or the acquisition function at the last iteration:
from skopt.plots import plot_convergence plot_convergence(res);
Let us now visually examine:
1. The approximation of the fitted GP model to the original function.
2. The acquisition values that determine the next point to be queried.
from skopt.acquisition import gaussian_ei

plt.rcParams["figure.figsize"] = (8, 14)

x = np.linspace(-2, 2, 400).reshape(-1, 1)
x_gp = res.space.transform(x.tolist())
fx = np.array([f(x_i, noise_level=0.0) for x_i in x])

# Plot the 5 iterations following the 5 random points
for n_iter in range(5):
    gp = res.models[...
The first column shows the following: the true function, the approximation to the original function by the Gaussian process model, and how sure the GP is about the function. The second column shows the acquisition function values after every surrogate model is fit. It is possible that we do not choose the global minimum b...
plt.rcParams["figure.figsize"] = (6, 4)

# Plot f(x) + contours
x = np.linspace(-2, 2, 400).reshape(-1, 1)
x_gp = res.space.transform(x.tolist())
fx = [f(x_i, noise_level=0.0) for x_i in x]
plt.plot(x, fx, "r--", label="True (unknown)")
plt.fill(np.concatenate([x, x[::-1]]),
         np.concatenate(([fx_i - 1.9600 * n...
Raw data
# Whenever you type "something =" it defines a new variable, "something",
# and sets it equal to whatever follows the equals sign. That could be a number,
# another variable, or in this case an entire table of numbers.

# enter raw data
data = pd.DataFrame.from_items([
    ('time (s)', [0,1,2,3]),
    ('posi...
Motion.ipynb
merryjman/astronomy
gpl-3.0
Plotting the data
# set variables = data['column label']
time = data['time (s)']
pos = data['position (m)']

# Uncomment the next line to make it look like a graph from xkcd.com
# plt.xkcd()
# to make normal-looking plots again execute:
# mpl.rcParams.update(inline_rc)

# this makes a scatterplot of the data
# plt.scatter(x values, y va...
Calculate and plot velocity
# create a new empty column
data['velocity (m/s)'] = ''
data

# np.diff() calculates the difference between a value and the one after it
vel = np.diff(pos) / np.diff(time)

# fill the velocity column with values from the formula
data['velocity (m/s)'] = pd.DataFrame.from_items([('', vel)])

# display the data table
dat...
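The velocity formula above is just pairwise differences; a tiny self-contained demo with made-up position values shows what np.diff produces and why the velocity column is one entry shorter than the position column.

```python
import numpy as np

# Made-up positions at times 0..3 s; np.diff gives consecutive
# differences, so velocity has one fewer entry than position.
time = np.array([0, 1, 2, 3])
pos = np.array([0.0, 2.0, 6.0, 12.0])
vel = np.diff(pos) / np.diff(time)
print(vel)  # [2. 4. 6.]
```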
Modeling with MKS
In this example the MKS equation will be used to predict microstructure at the next time step using $$p[s, 1] = \sum_{r=0}^{S-1} \sum_{l=0}^{L-1} \alpha[l, r, 1]\, m[l, s - r, 0] + \ldots$$ where $p[s, n + 1]$ is the concentration field at location $s$ and at time $n + 1$, $r$ is the convolution dummy var...
import pymks
from pymks.datasets import make_cahn_hilliard

n = 41
n_samples = 400
dt = 1e-2
np.random.seed(99)
X, y = make_cahn_hilliard(n_samples=n_samples, size=(n, n), dt=dt)
notebooks/cahn_hilliard.ipynb
XinyiGong/pymks
mit
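The localization equation above is a sum of circular convolutions, one per local state. A minimal numpy sketch (with toy arrays standing in for the influence coefficients and microstructure) shows the triple sum and verifies it against the FFT form that MKS actually uses:

```python
import numpy as np

# Toy version of the MKS localization sum
#   p[s] = sum_l sum_r alpha[l, r] * m[l, s - r]
# with periodic boundaries. alpha and m are made-up toy arrays,
# not calibrated coefficients.
L, S = 2, 5                   # local states, spatial bins
alpha = np.random.rand(L, S)  # "influence coefficients"
m = np.random.rand(L, S)      # discretized microstructure

p = np.zeros(S)
for l in range(L):
    for s in range(S):
        for r in range(S):
            p[s] += alpha[l, r] * m[l, (s - r) % S]

# The same sum as circular convolutions computed via FFT
p_fft = np.sum(np.fft.ifft(np.fft.fft(alpha, axis=1) *
                           np.fft.fft(m, axis=1), axis=1).real, axis=0)
print(np.allclose(p, p_fft))  # True
```

The FFT form is why calibration and prediction in MKS scale well with the spatial grid size.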
The function make_cahn_hilliard generates n_samples random microstructures, X, and the associated updated microstructures, y, after one time step. The following cell plots one of these microstructures along with its update.
from pymks.tools import draw_concentrations
draw_concentrations((X[0], y[0]), labels=('Input Concentration', 'Output Concentration'))
Calibrate Influence Coefficients
As mentioned above, the microstructures (concentration fields) do not have discrete phases. This leaves the number of local states in local state space as a free hyperparameter. In previous work it has been shown that as you increase the number of local states, the accuracy of MKS mo...
import sklearn
from sklearn.cross_validation import train_test_split

split_shape = (X.shape[0],) + (np.product(X.shape[1:]),)
X_train, X_test, y_train, y_test = train_test_split(X.reshape(split_shape),
                                                    y.reshape(split_shape),
                                                    test_size=0.5, random_state=3)
We are now going to calibrate the influence coefficients while varying the number of local states from 2 up to 20. Each of these models will then predict the evolution of the concentration fields. Mean squared error will be used to compare the results with the testing dataset to evaluate how the MKS model's performance...
from pymks import MKSLocalizationModel
from pymks.bases import PrimitiveBasis
Next we will calibrate the influence coefficients while varying the number of local states and compute the mean squared error. The following demonstrates how to use Scikit-learn's GridSearchCV to optimize n_states as a hyperparameter. Of course, the best fit is always with a larger value of n_states. Increasing this pa...
from sklearn.grid_search import GridSearchCV

parameters_to_tune = {'n_states': np.arange(2, 11)}
prim_basis = PrimitiveBasis(2, [-1, 1])
model = MKSLocalizationModel(prim_basis)
gs = GridSearchCV(model, parameters_to_tune, cv=5, fit_params={'size': (n, n)})
gs.fit(X_train, y_train)
print(gs.best_estimator_)
print(gs...
As expected, the accuracy of the MKS model monotonically increases as we increase n_states, but accuracy doesn't improve significantly once n_states reaches single digits. In order to save on computation costs, let's calibrate the influence coefficients with n_states equal to 6, but realize that if we need sl...
model = MKSLocalizationModel(basis=PrimitiveBasis(6, [-1, 1]))
model.fit(X, y)
Here are the first 4 influence coefficients.
from pymks.tools import draw_coeff
draw_coeff(model.coeff[..., :4])
Predict Microstructure Evolution With the calibrated influence coefficients, we are ready to predict the evolution of a concentration field. In order to do this, we need to have the Cahn-Hilliard simulation and the MKS model start with the same initial concentration phi0 and evolve in time. In order to do the Cahn-Hill...
from pymks.datasets.cahn_hilliard_simulation import CahnHilliardSimulation

np.random.seed(191)
phi0 = np.random.normal(0, 1e-9, (1, n, n))
ch_sim = CahnHilliardSimulation(dt=dt)
phi_sim = phi0.copy()
phi_pred = phi0.copy()
In order to move forward in time, we need to feed the concentration back into the Cahn-Hilliard simulation and the MKS model.
time_steps = 10

for ii in range(time_steps):
    ch_sim.run(phi_sim)
    phi_sim = ch_sim.response
    phi_pred = model.predict(phi_pred)
Let's take a look at the concentration fields.
from pymks.tools import draw_concentrations
draw_concentrations((phi_sim[0], phi_pred[0]), labels=('Simulation', 'MKS'))
The MKS model was able to capture the microstructure evolution with 6 local states.
Resizing the Coefficients to use on Larger Systems
Now let's try to predict a larger simulation by resizing the coefficients and providing a larger initial concentration field.
m = 3 * n
model.resize_coeff((m, m))
phi0 = np.random.normal(0, 1e-9, (1, m, m))
phi_sim = phi0.copy()
phi_pred = phi0.copy()
Once again we are going to march forward in time by feeding the concentration fields back into the Cahn-Hilliard simulation and the MKS model.
for ii in range(1000):
    ch_sim.run(phi_sim)
    phi_sim = ch_sim.response
    phi_pred = model.predict(phi_pred)
Let's take a look at the results.
from pymks.tools import draw_concentrations_compare
draw_concentrations_compare((phi_sim[0], phi_pred[0]), labels=('Simulation', 'MKS'))
Notebook Author: @SauravMaheshkar
Introduction
import trax
trax/examples/trax_data_Explained.ipynb
google/trax
apache-2.0
Serial Fn
In Trax, we use combinators to build input pipelines, much like building deep learning models. The Serial combinator applies layers serially using function composition and uses stack semantics to manage data. Trax has the following definition for a Serial combinator.
def Serial(*fns):
    def composed_fns(gen...
data_pipeline = trax.data.Serial(
    trax.data.TFDS('imdb_reviews', keys=('text', 'label'), train=True),
    trax.data.Tokenize(vocab_dir='gs://trax-ml/vocabs/', vocab_file='en_8k.subword', keys=[0]),
    trax.data.Log(only_shapes=False)
)
example = data_pipeline()
print(next(example))

data_pipeline = trax.data.Ser...
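The key idea behind Serial is ordinary function composition over generators. Here is a minimal standalone sketch of a Serial-style combinator (a simplified illustration, not the actual trax.data implementation; the pipeline stages are made up):

```python
# Minimal sketch of a Serial-style combinator: each fn maps a
# generator to a new generator, and Serial composes them in order.
def Serial(*fns):
    def composed(gen=None):
        for fn in fns:
            gen = fn(gen)
        return gen
    return composed

# Toy pipeline: produce numbers, square them, keep the even ones.
source = lambda _: iter(range(6))
square = lambda g: (x * x for x in g)
evens = lambda g: (x for x in g if x % 2 == 0)

pipeline = Serial(source, square, evens)
print(list(pipeline()))  # [0, 4, 16]
```

Because every stage is a generator transformer, composed pipelines stay lazy: nothing is computed until the final generator is consumed.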
Shuffling our datasets
Trax offers two generator functions to add shuffle functionality in our input pipelines. The shuffle function shuffles a given stream. The Shuffle function returns a shuffle function instead.
shuffle
```
def shuffle(samples, queue_size):
    if queue_size < 1:
        raise ValueError(f'Arg queue_siz...
sentence = ['Sed ut perspiciatis unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam, eaque ipsa quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt explicabo. Nemo enim ipsam voluptatem quia voluptas sit aspernatur aut odit aut fugit, sed quia conseq...
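The buffer-based idea behind shuffle can be sketched in a few lines: hold up to queue_size items and emit a random one as each new item arrives. This is a simplified stand-in, not the exact trax.data.shuffle implementation:

```python
import random

# Sketch of a buffer-based shuffle: keep up to queue_size items in a
# buffer and yield a randomly chosen one as each new item arrives.
def shuffle(samples, queue_size):
    if queue_size < 1:
        raise ValueError('queue_size must be at least 1')
    buffer = []
    for sample in samples:
        buffer.append(sample)
        if len(buffer) >= queue_size:
            i = random.randrange(len(buffer))
            yield buffer.pop(i)
    random.shuffle(buffer)   # flush the remainder in random order
    yield from buffer

out = list(shuffle(range(10), queue_size=4))
print(sorted(out))  # same elements, shuffled order
```

A larger queue_size gives a better shuffle at the cost of memory; queue_size=1 degenerates to no shuffling at all.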
Shuffle
```
def Shuffle(queue_size=1024):
    return lambda g: shuffle(g, queue_size)
```
This function returns the aforementioned shuffle function and is mostly used in input pipelines.
Batch Generators
batch
This function creates batches for the input generator function.
```
def batch(generator, batch_size):
    if batc...
import numpy as np

tensors = np.array([(1., 2.), ((3., 4.), (5., 6.))])
padded_tensors = trax.data.inputs.pad_to_max_dims(tensors=tensors, boundary=3)
padded_tensors
Creating Buckets
For training Recurrent Neural Networks with a large vocabulary, a method called bucketing is usually applied. The usual technique of padding ensures that all occurrences within a mini-batch are of the same length. But this reduces the inter-batch variability and intuitively puts similar sentences i...
data_pipeline = trax.data.Serial(
    trax.data.TFDS('imdb_reviews', keys=('text', 'label'), train=True),
    trax.data.Tokenize(vocab_dir='gs://trax-ml/vocabs/', vocab_file='en_8k.subword', keys=[0]),
    trax.data.BucketByLength(boundaries=[32, 128, 512, 2048],
                             batch_sizes=[512, 128, 32,...
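The core of bucketing is simple: route each sequence to the smallest boundary that fits it and emit a batch once that bucket is full. A toy sketch (illustrative only; trax.data.BucketByLength additionally pads and handles nested structures):

```python
# Route each sequence to the smallest length boundary that fits it;
# emit a batch when that bucket reaches its batch size.
def bucket_by_length(generator, boundaries, batch_sizes):
    buckets = {b: [] for b in boundaries}
    for seq in generator:
        for b, bs in zip(boundaries, batch_sizes):
            if len(seq) <= b:
                buckets[b].append(seq)
                if len(buckets[b]) == bs:
                    yield buckets[b]
                    buckets[b] = []
                break  # sequences longer than every boundary are dropped

seqs = [[1], [2, 3], [4], [5, 6, 7], [8]]
batches = list(bucket_by_length(iter(seqs), boundaries=[2, 4],
                                batch_sizes=[2, 2]))
print(batches)  # [[[1], [2, 3]], [[4], [8]]]
```

Pairing shorter boundaries with larger batch sizes, as in the trax cell above, keeps the number of tokens per batch roughly constant.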
Filter by Length
```
def FilterByLength(max_length, length_keys=None, length_axis=0):
    length_keys = length_keys or [0, 1]
    length_fn = lambda x: _length_fn(x, length_axis, length_keys)
    def filtered(gen):
        for example in gen:
            if length_fn(example) <= max_length:
                yield example
    return filtered
```
Th...
Filtered = trax.data.Serial(
    trax.data.TFDS('imdb_reviews', keys=('text', 'label'), train=True),
    trax.data.Tokenize(vocab_dir='gs://trax-ml/vocabs/', vocab_file='en_8k.subword', keys=[0]),
    trax.data.BucketByLength(boundaries=[32, 128, 512, 2048],
                             batch_sizes=[512, 128, 32, 8...
Adding Loss Weights
add_loss_weights
```
def add_loss_weights(generator, id_to_mask=None):
    for example in generator:
        if len(example) > 3 or len(example) < 2:
            assert id_to_mask is None, 'Cannot automatically mask this stream.'
            yield example
        else:
            if len(example) == 2:
                weights = np.on...
data_pipeline = trax.data.Serial(
    trax.data.TFDS('imdb_reviews', keys=('text', 'label'), train=True),
    trax.data.Tokenize(vocab_dir='gs://trax-ml/vocabs/', vocab_file='en_8k.subword', keys=[0]),
    trax.data.Shuffle(),
    trax.data.FilterByLength(max_length=2048, length_keys=[0]),
    trax.data.BucketByLength(...
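The essence of loss-weight masking can be shown in a compact, self-contained sketch (simplified from the trax definition quoted above; the sample arrays are made up): append a weight array to each (input, target) pair and zero the weights wherever the target equals id_to_mask, e.g. a padding id.

```python
import numpy as np

# Simplified add_loss_weights: append a weight array to each
# (input, target) pair, zeroing positions where the target equals
# id_to_mask (e.g. a padding id).
def add_loss_weights(generator, id_to_mask=None):
    for example in generator:
        if len(example) == 2:
            inputs, targets = example
            if id_to_mask is None:
                weights = np.ones_like(targets, dtype=np.float32)
            else:
                weights = (np.asarray(targets) != id_to_mask).astype(np.float32)
            yield (inputs, targets, weights)
        else:
            yield example  # already weighted (or an unsupported shape)

stream = iter([(np.array([1, 2]), np.array([3, 0, 5]))])
x, y, w = next(add_loss_weights(stream, id_to_mask=0))
print(w)  # [1. 0. 1.]
```

Multiplying the per-token loss by these weights makes padding positions contribute nothing to the gradient.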
Load airports of each country
L = json.loads(file('../json/L.json', 'r').read())
M = json.loads(file('../json/M.json', 'r').read())
N = json.loads(file('../json/N.json', 'r').read())

import requests

AP = {}
for c in M:
    if c not in AP: AP[c] = {}
    for i in range(len(L[c])):
        AP[c][N[c][i]] = L[c][i]
code/airport_dest_parser2.ipynb
csaladenes/aviation
mit
Record schedules for 2 weeks, then augment counts with weekly flight numbers. Seasonal and seasonal charter will count as once per week for 3 months, so 12/52 per week. TGM is handled separately, since its history is in the past.
Parse Departures
baseurl = 'https://www.airportia.com/'
import requests, urllib2

def urlgetter(url):
    s = requests.Session()
    cookiesopen = s.get(url)
    cookies = str(s.cookies)
    fcookies = [[k[:k.find('=')], k[k.find('=') + 1:k.find(' for ')]]
                for k in cookies[cookies.find('Cookie '):].split('Cookie ')[1:]]
    # push token open...
good dates
SD = {}
SC = json.loads(file('../json/SC2.json', 'r').read())
for h in range(2, 5):  # len(AP.keys())
    c = AP.keys()[h]
    # country not parsed yet
    if c in SC:
        if c not in SD:
            SD[c] = []
        print h, c
        airportialinks = AP[c]
        sch = {}
        # all airports of country, wher...
Save
cnc_path = '../../universal/countries/'
cnc = pd.read_excel(cnc_path + 'cnc.xlsx').set_index('Name')
MDF = pd.DataFrame()
for c in SD:
    sch = SD[c]
    mdf = pd.DataFrame()
    for i in sch:
        for d in sch[i]:
            df = sch[i][d].drop(sch[i][d].columns[3:], axis=1).drop(sch[i][d].columns[0], axis=1)
            df['F...
The estimation game Root mean squared error is one of several ways to summarize the average error of an estimation process.
def RMSE(estimates, actual):
    """Computes the root mean squared error of a sequence of estimates.

    estimates: sequence of numbers
    actual: actual value

    returns: float RMSE
    """
    e2 = [(estimate - actual)**2 for estimate in estimates]
    mse = np.mean(e2)
    return np.sqrt(mse)
solutions/chap08soln.ipynb
AllenDowney/ThinkStats2
gpl-3.0
The following function simulates experiments where we try to estimate the mean of a population based on a sample with size n=7. We run iters=1000 experiments and collect the mean and median of each sample.
import random

def Estimate1(n=7, iters=1000):
    """Evaluates RMSE of sample mean and median as estimators.

    n: sample size
    iters: number of iterations
    """
    mu = 0
    sigma = 1
    means = []
    medians = []
    for _ in range(iters):
        xs = [random.gauss(mu, sigma) for _ in range(n)]
        ...
Using $\bar{x}$ to estimate the mean works a little better than using the median; in the long run, it minimizes RMSE. But using the median is more robust in the presence of outliers or large errors. Estimating variance The obvious way to estimate the variance of a population is to compute the variance of the sample, $...
def MeanError(estimates, actual):
    """Computes the mean error of a sequence of estimates.

    estimates: sequence of numbers
    actual: actual value

    returns: float mean error
    """
    errors = [estimate - actual for estimate in estimates]
    return np.mean(errors)
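A quick numeric check makes the bias concrete: np.var divides by $n$ (the estimator $S^2$), while np.var with ddof=1 divides by $n-1$ ($S_{n-1}^2$). Averaging both over many samples from a population with variance 1 shows $S^2$ landing near $(n-1)/n$ rather than 1:

```python
import numpy as np

# Average the biased and unbiased variance estimators over many
# samples of size n from a standard normal (true variance 1).
rng = np.random.default_rng(0)
n, iters = 7, 100000
biased, unbiased = [], []
for _ in range(iters):
    xs = rng.normal(0, 1, n)
    biased.append(np.var(xs))          # divides by n
    unbiased.append(np.var(xs, ddof=1)) # divides by n - 1
print(np.mean(biased))    # noticeably below 1, near (n-1)/n ~ 0.857
print(np.mean(unbiased))  # close to 1
```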
The following function simulates experiments where we try to estimate the variance of a population based on a sample with size n=7. We run iters=1000 experiments and compute two estimates for each sample, $S^2$ and $S_{n-1}^2$.
def Estimate2(n=7, iters=1000):
    mu = 0
    sigma = 1
    estimates1 = []
    estimates2 = []
    for _ in range(iters):
        xs = [random.gauss(mu, sigma) for i in range(n)]
        biased = np.var(xs)
        unbiased = np.var(xs, ddof=1)
        estimates1.append(biased)
        estimates2.append(unbiased)
    ...
The mean error for $S^2$ is non-zero, which suggests that it is biased. The mean error for $S_{n-1}^2$ is close to zero, and gets even smaller if we increase iters. The sampling distribution The following function simulates experiments where we estimate the mean of a population using $\bar{x}$, and returns a list of e...
def SimulateSample(mu=90, sigma=7.5, n=9, iters=1000):
    xbars = []
    for j in range(iters):
        xs = np.random.normal(mu, sigma, n)
        xbar = np.mean(xs)
        xbars.append(xbar)
    return xbars

xbars = SimulateSample()
Here's the "sampling distribution of the mean" which shows how much we should expect $\bar{x}$ to vary from one experiment to the next.
cdf = thinkstats2.Cdf(xbars)
thinkplot.Cdf(cdf)
thinkplot.Config(xlabel='Sample mean', ylabel='CDF')
The mean of the sample means is close to the actual value of $\mu$.
np.mean(xbars)
An interval that contains 90% of the values in the sampling distribution is called a 90% confidence interval.
ci = cdf.Percentile(5), cdf.Percentile(95)
ci
And the RMSE of the sample means is called the standard error.
stderr = RMSE(xbars, 90)
stderr
Confidence intervals and standard errors quantify the variability in the estimate due to random sampling. Estimating rates The following function simulates experiments where we try to estimate the mean of an exponential distribution using the mean and median of a sample.
def Estimate3(n=7, iters=1000):
    lam = 2
    means = []
    medians = []
    for _ in range(iters):
        xs = np.random.exponential(1.0/lam, n)
        L = 1 / np.mean(xs)
        Lm = np.log(2) / thinkstats2.Median(xs)
        means.append(L)
        medians.append(Lm)
    print('rmse L', RMSE(means, lam))
    ...
The RMSE is smaller for the sample mean than for the sample median. But neither estimator is unbiased. Exercises Exercise: Suppose you draw a sample with size n=10 from an exponential distribution with λ=2. Simulate this experiment 1000 times and plot the sampling distribution of the estimate L. Compute the standard er...
# Solution

def SimulateSample(lam=2, n=10, iters=1000):
    """Sampling distribution of L as an estimator of exponential parameter.

    lam: parameter of an exponential distribution
    n: sample size
    iters: number of iterations
    """
    def VertLine(x, y=1):
        thinkplot.Plot([x, x], [0, y], color='0.8',...
Exercise: In games like hockey and soccer, the time between goals is roughly exponential. So you could estimate a team’s goal-scoring rate by observing the number of goals they score in a game. This estimation process is a little different from sampling the time between goals, so let’s see how it works. Write a functio...
def SimulateGame(lam):
    """Simulates a game and returns the estimated goal-scoring rate.

    lam: actual goal scoring rate in goals per game
    """
    goals = 0
    t = 0
    while True:
        time_between_goals = random.expovariate(lam)
        t += time_between_goals
        if t > 1:
            break
        ...
Exercise: In this chapter we used $\bar{x}$ and median to estimate µ, and found that $\bar{x}$ yields lower MSE. Also, we used $S^2$ and $S_{n-1}^2$ to estimate $\sigma^2$, and found that $S^2$ is biased and $S_{n-1}^2$ unbiased. Run similar experiments to see if $\bar{x}$ and median are biased estimates of µ. Also check wheth...
# Solution

def Estimate4(n=7, iters=100000):
    """Mean error for xbar and median as estimators of population mean.

    n: sample size
    iters: number of iterations
    """
    mu = 0
    sigma = 1
    means = []
    medians = []
    for _ in range(iters):
        xs = [random.gauss(mu, sigma) for i in range(n)]
        ...
0. load clean data
df = pd.read_csv('./data/I-SPY_1_clean_data.csv')
df.head(2)
2-Inferential_Stats.ipynb
JCardenasRdz/Insights-into-the-I-SPY-clinical-trial
mit
1. Inferential statistics: Categorical vs Categorical (Chi-2 test)
1.1 Effect of categorical predictors on Pathological complete response (PCR)
# example of contingency table
inferential_statistics.contingency_table('PCR', 'ER+', df)

# Perform chi-2 test on all categorical variables
predictors = ['White', 'ER+', 'PR+', 'HR+', 'Right_Breast']
outcome = 'PCR'
inferential_statistics.categorical_data(outcome, predictors, df)
<h3><center> 1.1.2 Conclusion: Only `ER+`, `PR+`, and `HR+` have an effect on `PCR`</center></h3>
1.2 Effect of categorical predictors on Survival (Alive)
predictors = ['White', 'ER+', 'PR+', 'HR+', 'Right_Breast', 'PCR']
outcome = 'Alive'
inferential_statistics.categorical_data(outcome, predictors, df)
<h3><center> 1.2.2 Conclusion: Only `ER+` and `HR+` have an effect on `Alive`</center></h3>
2. Inferential statistics: Continuous vs Categorical (ANOVA)
2.1 Effect of Age on PCR
predictor = ['age']
outcome = 'PCR'
anova_table, OLS = inferential_statistics.linear_models(df, outcome, predictor);
sns.boxplot(x=outcome, y=predictor[0], data=df, palette="Set3");
2.2 Effect of Age on Survival
predictor = ['age']
outcome = 'Alive'
anova_table, OLS = inferential_statistics.linear_models(df, outcome, predictor);
sns.boxplot(x=outcome, y=predictor[0], data=df, palette="Set3");
2.3 Explore interactions between age, survival, and PCR
# create a boxplot to visualize this interaction
ax = sns.boxplot(x='PCR', y='age', hue='Alive', data=df, palette="Set3");
ax.set_title('Interactions between age, survival, and PCR');

# create dataframe only for patients with PCR = Yes
df_by_PCR = df.loc[df.PCR=='No',:]
df_by_PCR.head()

# Anova age vs Alive
predicto...
Conclusion. Age has an important effect on Alive for patients with PCR = Yes. To quantify this effect, a logistic regression is needed.
2.4 Effect of MRI measurements on PCR
ANOVA
R = inferential_statistics.anova_MRI('PCR', df);
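The conclusion above notes that a logistic regression would be needed to quantify the age effect on survival. As a hedged sketch of that follow-up, here is a logistic fit on synthetic stand-in data (the ages and outcomes below are simulated, not I-SPY trial values, and the variable names are illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in data: simulate a mild negative effect of age
# on the probability of survival.
rng = np.random.default_rng(1)
age = rng.uniform(30, 70, (200, 1))
p = 1 / (1 + np.exp(-(3.0 - 0.05 * age[:, 0])))
alive = rng.binomial(1, p)

clf = LogisticRegression().fit(age, alive)
print(clf.coef_[0][0])          # log-odds change per year of age
print(np.exp(clf.coef_[0][0]))  # odds ratio per year of age
```

On real trial data one would fit the same model with age (and possibly PCR as an interaction term) as predictors and report the odds ratio with a confidence interval.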
Estimate the effect size
mri_features = ['MRI_LD_Baseline', 'MRI_LD_1_3dAC', 'MRI_LD_Int_Reg', 'MRI_LD_PreSurg']
outcome = 'PCR'

# Effect Size
inferential_statistics.effect_size(df, mri_features, outcome)
2.5 Effect of MRI measurements on Survival
ANOVA
outcome = 'Alive'
R = inferential_statistics.anova_MRI(outcome, df);

mri_features = ['MRI_LD_Baseline', 'MRI_LD_1_3dAC', 'MRI_LD_Int_Reg', 'MRI_LD_PreSurg']

# Effect Size
inferential_statistics.effect_size(df, mri_features, outcome)
stratify analysis by PCR
# predictors and outcomes
predictors = ['MRI_LD_Baseline', 'MRI_LD_1_3dAC', 'MRI_LD_Int_Reg', 'MRI_LD_PreSurg']

# split data and run anova
PCR_outcomes = ['No', 'Yes']
for out in PCR_outcomes:
    df_by_PCR = df.loc[df.PCR == out, :]
    print('Outcome = Alive' + ' | ' + 'PCR = ' + out)
    # Anova
    anova_table, OLS ...
Conclusion The largest tumor dimension measured at baseline (MRI_LD_Baseline) is not statistically different between patients who achieved pathological complete response (PCR) and those who did not, while all other MRI measurements are statistically different between PCR = Yes and PCR = No. All MRI measurements of th...
## 3. Inferential statistics: Continuous vs Categorical (ANOVA)
There's a peak around the value 1.0, which represents quiet.
plt.hist(x_rms_instruments_notes[x_rms_instruments_notes <= 1].flatten(), 200);
plt.hist(x_rms_instruments_notes[x_rms_instruments_notes > 1].flatten(), 200);
instrument-classification/analyze_instrument_ranges.ipynb
bzamecnik/ml
mit
The range of instruments split into quiet (black) and sounding (white) regions. We can limit the pitches to the sounding ones.
plt.imshow(x_rms_instruments_notes > 1, interpolation='none', cmap='gray')
plt.grid(True)
plt.suptitle('MIDI instruments range - RMS power')
plt.xlabel('MIDI note')
plt.ylabel('MIDI instrument')
plt.savefig('data/working/instrument_ranges_binary.png');
Model Inputs First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks. Exercise: Finish the model_inpu...
def model_inputs(real_dim, z_dim): inputs_real = tf.placeholder(tf.float32, shape = (None, real_dim), name="inputs_real") inputs_z = tf.placeholder(tf.float32, shape = (None, z_dim), name ="inputs_z") return inputs_real, inputs_z
gans/gan_mnist/Intro_to_GANs_Exercises.ipynb
swirlingsand/deep-learning-foundations
mit
Generator network Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero ...
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01): ''' Build the generator network. Arguments --------- z : Input tensor for the generator out_dim : Shape of the generator output n_units : Number of units in hidden layer reuse : Reuse the variables...
gans/gan_mnist/Intro_to_GANs_Exercises.ipynb
swirlingsand/deep-learning-foundations
mit
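The leaky ReLU described above can be written directly with `np.maximum`; a minimal NumPy sketch, using the same default `alpha=0.01` as the notebook's generator:

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    """Leaky ReLU: identity for positive inputs, small slope alpha for negatives."""
    return np.maximum(alpha * x, x)

out = leaky_relu(np.array([-2.0, 0.0, 3.0]))
```

Because the negative side has slope `alpha` rather than zero, gradients can still flow backwards through inactive units, which is the property the text relies on for training the generator.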
Discriminator The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer. Exercise: Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code in a...
def discriminator(x, n_units=128, reuse=False, alpha=0.01): ''' Build the discriminator network. Arguments --------- x : Input tensor for the discriminator n_units: Number of units in hidden layer reuse : Reuse the variables with tf.variable_scope alpha : leak pa...
gans/gan_mnist/Intro_to_GANs_Exercises.ipynb
swirlingsand/deep-learning-foundations
mit
Hyperparameters
# Size of input image to discriminator input_size = 784 # 28x28 MNIST images flattened # Size of latent vector to generator z_size = 784 # Sizes of hidden layers in generator and discriminator g_hidden_size = 256 d_hidden_size = 256 # Leak factor for leaky ReLU alpha = 0.01 # Label smoothing smooth = 0.1
gans/gan_mnist/Intro_to_GANs_Exercises.ipynb
swirlingsand/deep-learning-foundations
mit
Build network Now we're building the network from the functions defined above. First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z. Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes. Th...
tf.reset_default_graph() # Create our input placeholders input_real, input_z = model_inputs(input_size, z_size) # Generator network here g_model = generator(input_z, input_size) # g_model is the generator output # Discriminator network here d_model_real, d_logits_real = discriminator(input_real) d_model_fake, d_logits...
gans/gan_mnist/Intro_to_GANs_Exercises.ipynb
swirlingsand/deep-learning-foundations
mit
Discriminator and Generator Losses Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will be sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with...
# Calculate losses # Ones-like for real labels for the discriminator real_labels = tf.ones_like(d_logits_real) * (1 - smooth) d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits( logits = d_logits_real, labels=real_labels)) # Zeros-like for fake labels for the Discrim...
gans/gan_mnist/Intro_to_GANs_Exercises.ipynb
swirlingsand/deep-learning-foundations
mit
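The sigmoid cross-entropy with label smoothing used for `d_loss_real` can be checked numerically outside TensorFlow. The following NumPy sketch reproduces the same math, using `smooth = 0.1` from the hyperparameters above and some arbitrary example logits:

```python
import numpy as np

def sigmoid_cross_entropy(logits, labels):
    """Numerically stable sigmoid cross-entropy, same formula as
    tf.nn.sigmoid_cross_entropy_with_logits."""
    return np.maximum(logits, 0) - logits * labels + np.log1p(np.exp(-np.abs(logits)))

smooth = 0.1
logits_real = np.array([2.0, -1.0, 0.5])                  # hypothetical discriminator logits
labels_real = np.ones_like(logits_real) * (1 - smooth)    # smoothed "real" labels
d_loss_real = sigmoid_cross_entropy(logits_real, labels_real).mean()
```

Smoothing the real labels from 1.0 to 0.9 keeps the discriminator from becoming over-confident, which the text notes helps the GAN train better.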
Optimizers We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph. For the generator...
# Optimizers learning_rate = 0.002 # Get the trainable_variables, split into G and D parts t_vars = tf.trainable_variables() g_vars = [var for var in t_vars if var.name.startswith('Generator')] d_vars = [var for var in t_vars if var.name.startswith('Discriminator')] d_train_opt = tf.train.AdamOptimizer(learning_rate)...
gans/gan_mnist/Intro_to_GANs_Exercises.ipynb
swirlingsand/deep-learning-foundations
mit
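Splitting the trainable variables between the two networks, as done above with `var.name.startswith(...)`, is just string matching on the variable-scope prefix. A plain-Python sketch with hypothetical variable names of the kind TensorFlow would report:

```python
# Hypothetical variable names as tf.trainable_variables() would report them
t_vars = ['Generator/dense/kernel:0', 'Generator/dense/bias:0',
          'Discriminator/dense/kernel:0', 'Discriminator/dense/bias:0']

# Partition by scope prefix so each optimizer only touches its own network
g_vars = [v for v in t_vars if v.startswith('Generator')]
d_vars = [v for v in t_vars if v.startswith('Discriminator')]
```

Passing `var_list=g_vars` (or `d_vars`) to each optimizer's `minimize` call is what keeps a discriminator step from also moving the generator's weights, and vice versa.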
Training
batch_size = 100 epochs = 80 samples = [] losses = [] saver = tf.train.Saver(var_list = g_vars) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for e in range(epochs): for ii in range(mnist.train.num_examples//batch_size): batch = mnist.train.next_batch(batch_size) ...
gans/gan_mnist/Intro_to_GANs_Exercises.ipynb
swirlingsand/deep-learning-foundations
mit
Results with 128 hidden units Epoch 72/100... Discriminator Loss: 1.2292... Generator Loss: 1.0937 Difference Loss: 0.1355... Epoch 73/100... Discriminator Loss: 1.1977... Generator Loss: 1.0838 Difference Loss: 0.1139... Epoch 74/100... Discriminator Loss: 1.0160... Generator Loss: 1.4791 Difference Loss: -0.4632... E...
%matplotlib inline import matplotlib.pyplot as plt # With 128 hidden fig, ax = plt.subplots() losses = np.array(losses) plt.plot(losses.T[0], label='Discriminator') plt.plot(losses.T[1], label='Generator') plt.title("Training Losses") plt.legend() # With 256 hidden fig, ax = plt.subplots() losses = np.array(losses) ...
gans/gan_mnist/Intro_to_GANs_Exercises.ipynb
swirlingsand/deep-learning-foundations
mit
Generator samples from training Here we can view samples of images from the generator. First we'll look at images taken while training.
def view_samples(epoch, samples): fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True) for ax, img in zip(axes.flatten(), samples[epoch]): ax.xaxis.set_visible(False) ax.yaxis.set_visible(False) im = ax.imshow(img.reshape((28,28)), cmap='Greys_r') ...
gans/gan_mnist/Intro_to_GANs_Exercises.ipynb
swirlingsand/deep-learning-foundations
mit
These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.
# with 128 _ = view_samples(-1, samples) # with 256 _ = view_samples(-1, samples)
gans/gan_mnist/Intro_to_GANs_Exercises.ipynb
swirlingsand/deep-learning-foundations
mit
Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
# with 256 rows, cols = 10, 6 fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True) for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes): for img, ax in zip(sample[::int(len(sample)/cols)], ax_row): ax.imshow(img.reshape((28,28)), cmap='Greys_r') a...
gans/gan_mnist/Intro_to_GANs_Exercises.ipynb
swirlingsand/deep-learning-foundations
mit
Hyperparameters The following are hyperparameters that will have an impact on the learning algorithm.
# Architecture N_HIDDEN = [800,800] NON_LINEARITY = lasagne.nonlinearities.rectify # Dropout parameters #DROP_INPUT = 0.2 #DROP_HIDDEN = 0.5 DROP_INPUT = None DROP_HIDDEN = None # Number of epochs to train the net NUM_EPOCHS = 50 # Optimization learning rate LEARNING_RATE = 0.01 # Batch Size BATCH_SIZE = 128 # Opt...
notebooks/classification/lasagne_nn.ipynb
gmarceaucaron/ecole-apprentissage-profond
bsd-3-clause
An optimizer can be seen as a function that takes a gradient, obtained by backpropagation, and returns an update to be applied to the current parameters. Other optimizers can be found in: optimizer reference. In order to be able to change the learning rate dynamically, we must use a shared variable that will be accessi...
import os def load_mnist(): """ A dataloader for MNIST """ from urllib.request import urlretrieve def download(filename, source='http://yann.lecun.com/exdb/mnist/'): print("Downloading %s" % filename) urlretrieve(source + filename, filename) # We then define functions f...
notebooks/classification/lasagne_nn.ipynb
gmarceaucaron/ecole-apprentissage-profond
bsd-3-clause
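The idea above (an optimizer maps a gradient to a parameter update, with a learning rate that can be changed between steps) can be sketched without Theano or Lasagne. Here mutable Python state plays the role of the Theano shared variable:

```python
import numpy as np

class SGD:
    """Minimal SGD sketch: the learning rate is stored as mutable state so it
    can be changed between updates, as a Theano shared variable would allow."""
    def __init__(self, lr):
        self.lr = lr

    def update(self, param, grad):
        # Return the new parameter value: step against the gradient
        return param - self.lr * grad

opt = SGD(lr=0.1)
p = np.array([1.0, -2.0])
g = np.array([0.5, 0.5])
p = opt.update(p, g)   # step with lr = 0.1
opt.lr = 0.01          # decay the learning rate dynamically
p = opt.update(p, g)   # next step uses the new rate
```

In the Lasagne version, updating the shared learning-rate variable between calls to the compiled training function has exactly this effect.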
The following auxiliary function creates a minibatch in a 3D tensor (batch_size, img_width, img_height).
def iterate_minibatches(inputs, targets, batchsize, shuffle=False): """ Return a minibatch of images with the associated targets Keyword arguments: :type inputs: numpy.ndarray :param inputs: the dataset of images :type targets: numpy.ndarray :param targets: the targets associated to the dat...
notebooks/classification/lasagne_nn.ipynb
gmarceaucaron/ecole-apprentissage-profond
bsd-3-clause
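A minimal runnable version of such a minibatch iterator is sketched below; the exact shapes and the drop-last-incomplete-batch behaviour are assumptions, but the indexing pattern is the standard one:

```python
import numpy as np

def iterate_minibatches(inputs, targets, batchsize, shuffle=False):
    """Yield (inputs, targets) minibatches; a trailing incomplete batch is dropped."""
    assert len(inputs) == len(targets)
    indices = np.arange(len(inputs))
    if shuffle:
        np.random.shuffle(indices)
    for start in range(0, len(inputs) - batchsize + 1, batchsize):
        batch = indices[start:start + batchsize]
        yield inputs[batch], targets[batch]

X = np.arange(10).reshape(10, 1)
y = np.arange(10)
batches = list(iterate_minibatches(X, y, batchsize=4))
```

Shuffling the index array rather than the data itself keeps inputs and targets aligned while still randomizing batch composition each epoch.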
Model definition The next two functions are general functions for creating multi-layer perceptron (mlp) and convolutional neural networks (cnn).
def create_mlp( input_shape, input_var=None, nonlinearity = lasagne.nonlinearities.rectify, n_hidden=[800], drop_input=.2, drop_hidden=.5): """ A generic function for creating a multi-layer perceptron. If n_hidden is given as a list, then depth is ignored. :type input_shape...
notebooks/classification/lasagne_nn.ipynb
gmarceaucaron/ecole-apprentissage-profond
bsd-3-clause
Optimization In the following, we want to maximize the probability to output the right digit given the image. To do this, we retrieve the output of our model, which is a softmax (probability distribution) over the 10 digits, and we compare it to the actual target. Finally, since we are using minibatches of size BATCH_S...
# Create a loss expression for training prediction = lasagne.layers.get_output(network) loss = lasagne.objectives.categorical_crossentropy(prediction, target_var).mean() params = lasagne.layers.get_all_params(network, trainable=True) updates = my_optimizer(loss, params) # Compile a function performing a training step ...
notebooks/classification/lasagne_nn.ipynb
gmarceaucaron/ecole-apprentissage-profond
bsd-3-clause
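The categorical cross-entropy over a softmax output, averaged over the minibatch as described above, reduces to a few NumPy operations. A sketch with hypothetical logits and targets:

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax with the usual max-shift for numerical stability."""
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def categorical_crossentropy(probs, targets):
    """Mean negative log-probability assigned to the correct class."""
    return -np.log(probs[np.arange(len(targets)), targets]).mean()

logits = np.array([[2.0, 0.5, 0.1],
                   [0.2, 1.5, 0.3]])
targets = np.array([0, 1])
loss = categorical_crossentropy(softmax(logits), targets)
```

Minimizing this loss is exactly "maximizing the probability of the right digit": a uniform softmax over 10 digits gives a loss of log(10), and a confident correct prediction drives it toward zero.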
Training loop The following training loop is minimal and often insufficient for real-world purposes. The idea here is to show the minimal requirements for training a neural network. Also, we plot the evolution of the train and validation losses.
#%matplotlib notebook plt.rcParams['figure.figsize'] = (4,4) # Make the figures a bit bigger import time def train( train_fn, X_train, y_train, valid_fn, X_valid, y_valid, num_epochs=50, batchsize=100): ################### # code for plotting ################### fig...
notebooks/classification/lasagne_nn.ipynb
gmarceaucaron/ecole-apprentissage-profond
bsd-3-clause
The following training loop contains features that are interesting to consider: - early-stopping - logging and filenames - checkpointing - adaptive step-size (optional) The first three are the most important ones.
import time import pickle def train( train_fn, X_train, y_train, valid_fn, X_valid, y_valid, num_epochs=100, batchsize=64): print("Starting training...") train_loss_array = [] valid_loss_array = [] # early-stopping parameters n_iter = 0 n_train_batches =...
notebooks/classification/lasagne_nn.ipynb
gmarceaucaron/ecole-apprentissage-profond
bsd-3-clause
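Early stopping, the first of the features listed above, only needs a best-loss record and a patience counter. A minimal sketch (the patience value and loss sequence are illustrative):

```python
def early_stopping(valid_losses, patience=3):
    """Return the epoch (index) at which training would stop: after the
    validation loss has failed to improve for `patience` consecutive epochs."""
    best = float('inf')
    bad_epochs = 0
    for epoch, loss in enumerate(valid_losses):
        if loss < best:
            best = loss       # new best: reset the patience counter
            bad_epochs = 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                return epoch  # patience exhausted: stop here
    return len(valid_losses) - 1

stop = early_stopping([1.0, 0.8, 0.7, 0.75, 0.76, 0.77, 0.5], patience=3)
```

In a real loop this is combined with checkpointing: the parameters saved at the best-loss epoch are the ones restored after stopping, which is why the two features appear together in the list above.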
Now we'll start by invoking the GPIO class, which will identify our board and initialize the pins. We will use two pins as inputs for scrolling through the slideshow. We default to the spidev device at <code>/dev/spidev0.0</code> for the MinnowBoard. Additionally, the Data/Command and Reset pins are defined for the TFT LCD d...
myGPIO = GPIO.get_platform_gpio() myGPIO.setup(12,GPIO.IN) myGPIO.setup(16,GPIO.IN) lcd = ADA_LCD() lcd.clear() SPI_PORT = 0 SPI_DEVICE = 0 SPEED = 16000000 DC = 10 RST = 14
Slideshow.ipynb
MinnowBoard/fishbowl-notebooks
mit
The following code collects all the images in the specified directory and places them into a list, filtering out any non-image files. It will fail if no images are found.
imageList = [] rawList = os.listdir("/notebooks") for i in range(0,len(rawList)): if (rawList[i].lower().endswith(('.png', '.jpg', '.jpeg', '.gif'))==True): imageList.append("/notebooks" + "/" + rawList[i]) if len(imageList)==0: print "No images found!" exit(1) count = 0 print imageList
Slideshow.ipynb
MinnowBoard/fishbowl-notebooks
mit
Now we'll initialize the TFT LCD display and clear it.
disp = TFT.ILI9341(DC, rst=RST, spi=SPI.SpiDev(SPI_PORT,SPI_DEVICE,SPEED)) disp.begin()
Slideshow.ipynb
MinnowBoard/fishbowl-notebooks
mit
This long infinite loop will work like so: <b>Clear the char LCD, write name of new image</b> <b>Wait for a button press</b> <b>Try to open an image</b> <b>Display the image on the TFT LCD</b> --If we fail to open the file, print an error message to the LCD display-- ----If we failed, open up the next file in the list....
while True: lcd.clear() time.sleep(0.25) message = " Image " + str(count+1) + " of " + str(len(imageList)) + "\n" + imageList[count][len(sys.argv[1]):] lcd.message(message) lcd.scroll() try: image = Image.open(imageList[count]) except(IOError): lcd.clear() time.s...
Slideshow.ipynb
MinnowBoard/fishbowl-notebooks
mit
Initial concepts An object is a container of data (attributes) and code (methods) A class is a template for creating objects Reuse is provided by: reusing the same class to create many objects "inheriting" data and code from other classes
# Defining a Car class class Car(object): pass car = Car()
Spring2019/06a_Objects/Building Software With Objects.ipynb
UWSEDS/LectureNotes
bsd-2-clause
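The three reuse mechanisms listed above (many objects from one class, data plus code bundled together, and inheritance) can be shown in one short sketch; the class names extend the lecture's `Car` example:

```python
class Car:
    def __init__(self, make):
        self.make = make           # attribute: data stored on the object

    def describe(self):            # method: code attached to the class
        return "a " + self.make

class ElectricCar(Car):            # inheritance: reuse Car's data and code
    def describe(self):
        return super().describe() + " (electric)"

# One class can create many objects, and subclasses slot in transparently
cars = [Car("Ford"), ElectricCar("Tesla")]
```

Calling `describe()` on each element dispatches to the right version automatically, which is the payoff of inheriting rather than copying code.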
Attributes
from IPython.display import Image Image(filename='ClassAttributes.png')
Spring2019/06a_Objects/Building Software With Objects.ipynb
UWSEDS/LectureNotes
bsd-2-clause