Dataset columns: markdown (string, 0–37k characters), code (string, 1–33.3k characters), path (string, 8–215 characters), repo_name (string, 6–77 characters), license (string, 15 classes). Each record below gives the markdown cell, the code cell, the notebook path, the repository, and the license.
Next, demonstrate how to generate spectra of stars... MWS_MAIN
from desitarget.mock.mockmaker import MWS_MAINMaker
%time demo_mockmaker(MWS_MAINMaker, seed=seed, loc='left')
doc/nb/connecting-spectra-to-mocks.ipynb
desihub/desitarget
bsd-3-clause
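The demo_mockmaker helper used above is defined earlier in this notebook and is not shown in this excerpt. Judging from the explicit SKY example further down, it presumably wraps the read / make_spectra / select_targets sequence for a given Maker class; the sketch below is a hypothetical stand-in, not the notebook's actual implementation.

def demo_mockmaker(Maker, seed=None, loc='left'):
    # Hypothetical reconstruction mirroring the explicit SKY workflow shown later.
    # healpixel and nside are assumed to be defined earlier in the notebook.
    maker = Maker(seed=seed)
    data = maker.read(healpixels=healpixel, nside=nside)
    flux, wave, targets, truth, objtruth = maker.make_spectra(data)
    maker.select_targets(targets, truth)
    # the real helper presumably also plots example spectra; `loc` looks like a legend-placement option
    return flux, wave, targets, truth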
MWS_NEARBY
from desitarget.mock.mockmaker import MWS_NEARBYMaker
%time demo_mockmaker(MWS_NEARBYMaker, seed=seed, loc='left')
doc/nb/connecting-spectra-to-mocks.ipynb
desihub/desitarget
bsd-3-clause
White dwarfs (WDs)
from desitarget.mock.mockmaker import WDMaker
%time demo_mockmaker(WDMaker, seed=seed, loc='right')
doc/nb/connecting-spectra-to-mocks.ipynb
desihub/desitarget
bsd-3-clause
Finally, demonstrate how to generate (empty) SKY spectra.
from desitarget.mock.mockmaker import SKYMaker
SKY = SKYMaker(seed=seed)
skydata = SKY.read(healpixels=healpixel, nside=nside)
skyflux, skywave, skytargets, skytruth, objtruth = SKY.make_spectra(skydata)
SKY.select_targets(skytargets, skytruth)
doc/nb/connecting-spectra-to-mocks.ipynb
desihub/desitarget
bsd-3-clause
Create count vectorizer
vect = CountVectorizer(max_features=1000)
X_dtm = vect.fit_transform(dataTraining['plot'])
X_dtm.shape
print(vect.get_feature_names()[:50])
exercises/P2-MovieGenrePrediction.ipynb
albahnsen/PracticalMachineLearningClass
mit
Create y
dataTraining['genres'] = dataTraining['genres'].map(lambda x: eval(x))
le = MultiLabelBinarizer()
y_genres = le.fit_transform(dataTraining['genres'])
y_genres
X_train, X_test, y_train_genres, y_test_genres = train_test_split(X_dtm, y_genres, test_size=0.33, random_state=42)
exercises/P2-MovieGenrePrediction.ipynb
albahnsen/PracticalMachineLearningClass
mit
Train multi-class multi-label model
clf = OneVsRestClassifier(RandomForestClassifier(n_jobs=-1, n_estimators=100, max_depth=10, random_state=42))
clf.fit(X_train, y_train_genres)
y_pred_genres = clf.predict_proba(X_test)
roc_auc_score(y_test_genres, y_pred_genres, average='macro')
exercises/P2-MovieGenrePrediction.ipynb
albahnsen/PracticalMachineLearningClass
mit
Predict the testing dataset
X_test_dtm = vect.transform(dataTesting['plot'])
cols = ['p_Action', 'p_Adventure', 'p_Animation', 'p_Biography', 'p_Comedy', 'p_Crime', 'p_Documentary', 'p_Drama', 'p_Family', 'p_Fantasy', 'p_Film-Noir', 'p_History', 'p_Horror', 'p_Music', 'p_Musical', 'p_Mystery', 'p_News', 'p_Romance', 'p_Sci-Fi', 'p_Short', 'p_Sport', 'p_Thriller', 'p_War', 'p_Western']
y_pred_test_genres = clf.predict_proba(X_test_dtm)
res = pd.DataFrame(y_pred_test_genres, index=dataTesting.index, columns=cols)
res.head()
res.to_csv('pred_genres_text_RF.csv', index_label='ID')
exercises/P2-MovieGenrePrediction.ipynb
albahnsen/PracticalMachineLearningClass
mit
Challenge 1 - Fibonacci Sequence and the Golden Ratio The Fibonacci sequence is defined by $f_{n+1} =f_n + f_{n-1}$ and the initial values $f_0 = 0$ and $f_1 = 1$. The first few elements of the sequence are: $0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377 ...$ Using what you just learned about functions, define a function fib, which calculates the $n$-th element in the Fibonacci sequence. It should have one input variable, $n$ and its return value should be the $n$-th element in the Fibonacci sequence.
# Answer
def fib(n):
    """Return nth element of the Fibonacci sequence."""
    # Create the base case
    n0 = 0
    n1 = 1
    # Loop n times. Just ignore the variable i.
    for i in range(n):
        n_new = n0 + n1
        n0 = n1
        n1 = n_new
    return n0
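A quick, illustrative check of the first few values (not part of the original notebook):

print([fib(i) for i in range(8)])  # expected: [0, 1, 1, 2, 3, 5, 8, 13]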
notebooks/Lectures2017/Lecture2/Lecture_2_Inst_copy.ipynb
astroumd/GradMap
gpl-3.0
The ratio of successive elements in the Fibonacci sequence converges to $$\phi = (1 + \sqrt{5})/2 ≈ 1.61803\dots$$ which is the famous golden ratio. Your task is to approximate $\phi$. Define a function phi_approx that calculates the approximate value of $\phi$ obtained by the ratio of the $n$-th and $(n−1)$-st elements, $$f_n /f_{n-1} \approx \phi$$ phi_approx should have one variable, $n$. Its return value should be the $n$-th order approximation of $\phi$.
#Answer:
phi_approx_output_format = \
"""Approximation order: {:d}
fib_n: {:g}
fib_(n-1): {:g}
phi: {:.25f}"""

def phi_approx(n, show_output=True):
    """Return the nth-order Fibonacci approximation to the golden ratio."""
    fib_n = fib(n)
    fib_nm1 = fib(n - 1)
    phi = fib_n/fib_nm1
    if show_output:
        print(phi_approx_output_format.format(n, fib_n, fib_nm1, phi))
    return phi
notebooks/Lectures2017/Lecture2/Lecture_2_Inst_copy.ipynb
astroumd/GradMap
gpl-3.0
Finally, simply using the ":" gives you all the elements in the array. Challenge 2 - Projectile Motion In this challenge problem, you will be building what is known as a NUMERICAL INTEGRATOR in order to predict the projectile's trajectory through a gravitational field (i.e. what happens when you throw a ball through the air). Let's say that you have a projectile (let's say a ball) in a world with 2 spatial dimensions (dimensions x and y). This world has a constant acceleration due to gravity (call it simply g) that points in the -y direction and has a surface at y = 0. Can we calculate the motion of the projectile in the x-y plane after the projectile is given some initial velocity vector v? In particular, can we predict where the ball will land? With loops, yes we can! Let's first define all of the relevant variables so far. Let g = -9.8 (units of m/s², so an Earth-like world), the initial velocity vector being an array v = [3.,3.], and an initial position vector (call it r) in the x-y plane of r = [0.,1.]. For ease, let's use numpy arrays for the vectors.
#Your code here
#Answers
import numpy as np  # numpy is imported earlier in the full notebook

g = -9.8
v = np.array([3., 3.])
r = np.array([0., 1.])  # initial position as specified above
notebooks/Lectures2017/Lecture2/Lecture_2_Inst_copy.ipynb
astroumd/GradMap
gpl-3.0
Now we have the functions that calculate the changes in the position and velocity vectors. We're almost there! Now, we will need a while-loop in order to step the projectile through its trajectory. What would the condition be? Well, we know that the projectile stops when it hits the ground. So, one way we can do this is to have the condition being (r[1] >= 0), since the ground is defined at y = 0. So, having your intV and intR functions, along with a while-loop and a dt = 0.1 (known as the "step-size"), can you use Python to predict where the projectile will land?
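The intV and intR helpers referred to here were written in an earlier part of the exercise and are not shown in this excerpt. A minimal sketch of what such Euler-step helpers might look like (an assumption, not the notebook's actual code):

def intV(v, g, dt):
    # advance the velocity vector by one step: only the y-component feels gravity
    return v + np.array([0., g*dt])

def intR(r, v, dt):
    # advance the position vector by one step using the current velocity
    return r + v*dt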
#Your code here.
#Answer
dt = 0.01
while (r[1] >= 0.):
    v = intV(v,g,dt)
    r = intR(r,v,dt)
print(r)
notebooks/Lectures2017/Lecture2/Lecture_2_Inst_copy.ipynb
astroumd/GradMap
gpl-3.0
Here, you have 2 dimensions with the array timeseriesData, and as such must specify the row first and then the column. So,
- array_name[n,:] is the n-th row, and all columns within that row.
- array_name[:,n] is the n-th column, and all rows within that particular column.

Now then, let's see what the data looks like using the plot() function that you learned last time. Do you remember how to do it? Why don't you try! Plot t as your x-axis and signal as your y-axis. Don't forget to show your plot.
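To make the row/column slicing above concrete, here is a tiny self-contained illustration (the 2-by-3 array is made up for the example; it is not the notebook's timeseriesData):

import numpy as np

a = np.array([[1, 2, 3],
              [4, 5, 6]])
print(a[0, :])   # row 0 -> [1 2 3]
print(a[:, 1])   # column 1 -> [2 5]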
#Your code here
#Answer
plt.plot(t,signal)
plt.show()
notebooks/Lectures2017/Lecture2/Lecture_2_Inst_copy.ipynb
astroumd/GradMap
gpl-3.0
Copying our model to a new folder
try:
    shutil.copytree('C:/Users/Miguel/workspace/Thesis/Geomodeller/Basic_case/3_horizontal_layers', 'Temp_test/')
except:
    print "The folder is already created"
notebooks_GeoPyMC/PyMC for Geology Tutorial/PyMC geomod-2.ipynb
Leguark/pygeomod
mit
Simplest case: three horizontal layers, with depth unknown. Loading a pre-made Geomodeller model. You have to be very careful with the path: all the slashes must lean to the RIGHT (forward slashes).
hor_lay = 'Temp_test/horizontal_layers.xml' #C:\Users\Miguel\workspace\Thesis\Thesis\Temp3
print hor_lay

reload(geogrid)
G1 = geogrid.GeoGrid()

# Using G1, we can read the dimensions of our Murci geomodel
G1.get_dimensions_from_geomodeller_xml_project(hor_lay)
#G1.set_dimensions(dim=(0,23000,0,16000,-8000,1000))
nx = 400
ny = 2
nz = 400
G1.define_regular_grid(nx,ny,nz)
G1.update_from_geomodeller_project(hor_lay)
notebooks_GeoPyMC/PyMC for Geology Tutorial/PyMC geomod-2.ipynb
Leguark/pygeomod
mit
Setting Bayes Model
Image("Nice Notebooks\THL_no_thickness.png") alpha = pm.Normal("alpha", -350, 0.05) alpha alpha = pm.Normal("alpha", -350, 0.05)# value= -250) #Thickness of the layers thickness_layer1 = pm.Normal("thickness_layer1", -150, 0.005) # a lot of uncertainty so the constrains are necessary thickness_layer2 = pm.Normal("thickness_layer2", -150, 0.005) @pm.deterministic def beta(alpha = alpha, thickness_layer1 = thickness_layer1): return alpha + thickness_layer1 @pm.deterministic def gamma(beta = beta, thickness_layer2 = thickness_layer2): return beta + thickness_layer2 @pm.deterministic def section(alpha = alpha, beta = beta, gamma = gamma): # Create the array we will use to modify the xml samples = [alpha,beta, gamma,alpha,beta, gamma] # Load the xml to be modify hor_lay = 'Temp_test\horizontal_layers.xml' #Create the instance to modify the xml # Loading stuff reload(gxml) gmod_obj = gxml.GeomodellerClass() gmod_obj.load_geomodeller_file(hor_lay) # Create a dictionary so we can acces the section through the name section_dict = gmod_obj.create_sections_dict() # ## Get the points of all formation for a given section: Dictionary contact_points = gmod_obj.get_formation_point_data(section_dict['Section1']) #Perform the position Change for i, point in enumerate(contact_points): gmod_obj.change_formation_point_pos(point, y_coord = [samples[i],samples[i]]) # Check the new position of points #points_changed = gmod_obj.get_point_coordinates(contact_points) #print "Points coordinates", points_changed # Write the new xml gmod_obj.write_xml("Temp_test/new.xml") # Read the new xml hor_lay_new = 'Temp_test/new.xml' G1 = geogrid.GeoGrid() # Getting dimensions and definning grid G1.get_dimensions_from_geomodeller_xml_project(hor_lay_new) # Resolution! nx = 2 ny = 2 nz = 400 G1.define_regular_grid(nx,ny,nz) # Updating project G1.update_from_geomodeller_project(hor_lay_new) # Printing new model # G1.plot_section('y',cell_pos=1,colorbar = True, cmap='RdBu', figsize=(6,6),interpolation= 'nearest' ,ve = 1, geomod_coord= True) return G1 #MODEL!! model = pm.Model([alpha, beta, gamma, section, thickness_layer1, thickness_layer2]) M = pm.MCMC(model) M.sample(iter=100)
notebooks_GeoPyMC/PyMC for Geology Tutorial/PyMC geomod-2.ipynb
Leguark/pygeomod
mit
Extracting Posterior Traces to Arrays
n_samples = 20
alpha_samples, alpha_samples_all = M.trace('alpha')[-n_samples:], M.trace("alpha")[:]
beta_samples, beta_samples_all = M.trace('beta')[-n_samples:], M.trace("beta")[:]
gamma_samples, gamma_samples_all = M.trace('gamma')[-n_samples:], M.trace('gamma')[:]
section_samples, section_samples_all = M.trace('section')[-n_samples:], M.trace('section')[:]
#print section_samples
notebooks_GeoPyMC/PyMC for Geology Tutorial/PyMC geomod-2.ipynb
Leguark/pygeomod
mit
Plotting the results
fig, ax = plt.subplots(1, 2, figsize=(15, 5))

ax[0].hist(alpha_samples_all, histtype='stepfilled', bins=30, alpha=1, label="Upper most layer", normed=True)
ax[0].hist(beta_samples_all, histtype='stepfilled', bins=30, alpha=1, label="Middle layer", normed=True, color = "g")
ax[0].hist(gamma_samples_all, histtype='stepfilled', bins=30, alpha=1, label="Bottom most layer", normed=True, color = "r")
ax[0].invert_xaxis()
ax[0].legend()
ax[0].set_title(r"""Posterior distributions of the layers""")
ax[0].set_xlabel("Depth(m)")

ax[1].set_title("Representation")
for i in section_samples:
    i.plot_section('y', cell_pos=1, colorbar = True, ax = ax[1], alpha = 0.3, figsize=(6,6), interpolation= 'nearest', ve = 1, geomod_coord= True, contour = True)

plot(M)
Image("Nice Notebooks\THL_no_thickness.png")
plot(M)
notebooks_GeoPyMC/PyMC for Geology Tutorial/PyMC geomod-2.ipynb
Leguark/pygeomod
mit
Estimation

A recurring statistical problem is finding estimates of the relevant parameters that correspond to the distribution that best represents our data. In parametric inference, we specify a priori a suitable distribution, then choose the parameters that best fit the data, e.g. $\mu$ and $\sigma^2$ in the case of the normal distribution.
x = array([ 1.00201077, 1.58251956, 0.94515919, 6.48778002, 1.47764604,
            5.18847071, 4.21988095, 2.85971522, 3.40044437, 3.74907745,
            1.18065796, 3.74748775, 3.27328568, 3.19374927, 8.0726155 ,
            0.90326139, 2.34460034, 2.14199217, 3.27446744, 3.58872357,
            1.20611533, 2.16594393, 5.56610242, 4.66479977, 2.3573932 ])
_ = hist(x, bins=8)
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
### Fitting data to probability distributions

We start with the problem of finding values for the parameters that provide the best fit between the model and the data, called point estimates. First, we need to define what we mean by ‘best fit’. There are two commonly used criteria:

* **Method of moments** chooses the parameters so that the sample moments (typically the sample mean and variance) match the theoretical moments of our chosen distribution.
* **Maximum likelihood** chooses the parameters to maximize the likelihood, which measures how likely it is to observe our given sample.

### Discrete Random Variables

$$X = \{0,1\}$$

$$Y = \{\ldots,-2,-1,0,1,2,\ldots\}$$

**Probability Mass Function**: For discrete $X$,

$$Pr(X=x) = f(x|\theta)$$

***e.g. Poisson distribution***

The Poisson distribution models unbounded counts:

<div style="font-size: 150%;">
$$Pr(X=x)=\frac{e^{-\lambda}\lambda^x}{x!}$$
</div>

* $X=\{0,1,2,\ldots\}$
* $\lambda > 0$

$$E(X) = \text{Var}(X) = \lambda$$

### Continuous Random Variables

$$X \in [0,1]$$

$$Y \in (-\infty, \infty)$$

**Probability Density Function**: For continuous $X$,

$$Pr(x \le X \le x + dx) = f(x|\theta)dx \, \text{ as } \, dx \rightarrow 0$$

![Continuous variable](http://upload.wikimedia.org/wikipedia/commons/e/ec/Exponential_pdf.svg)

***e.g. normal distribution***

<div style="font-size: 150%;">
$$f(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left[-\frac{(x-\mu)^2}{2\sigma^2}\right]$$
</div>

* $X \in \mathbf{R}$
* $\mu \in \mathbf{R}$
* $\sigma>0$

$$\begin{align}E(X) &= \mu \cr \text{Var}(X) &= \sigma^2 \end{align}$$

### Example: Nashville Precipitation

The dataset `nashville_precip.txt` contains [NOAA precipitation data for Nashville measured since 1871](http://bit.ly/nasvhville_precip_data). The gamma distribution is often a good fit to aggregated rainfall data, and will be our candidate distribution in this case.
precip = pd.read_table("data/nashville_precip.txt", index_col=0, na_values='NA', delim_whitespace=True)
precip.head()
_ = precip.hist(sharex=True, sharey=True, grid=False)
tight_layout()
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
The first step is recognizing what sort of distribution to fit our data to. A couple of observations:

* The data are skewed, with a longer tail to the right than to the left
* The data are positive-valued, since they are measuring rainfall
* The data are continuous

There are a few possible choices, but one suitable alternative is the gamma distribution:

<div style="font-size: 150%;">
$$x \sim \text{Gamma}(\alpha, \beta) = \frac{\beta^{\alpha}x^{\alpha-1}e^{-\beta x}}{\Gamma(\alpha)}$$
</div>

The method of moments simply assigns the empirical mean and variance to their theoretical counterparts, so that we can solve for the parameters. So, for the gamma distribution, the mean and variance are:

<div style="font-size: 150%;">
$$ \hat{\mu} = \bar{X} = \alpha \beta $$
$$ \hat{\sigma}^2 = S^2 = \alpha \beta^2 $$
</div>

So, if we solve for these parameters, we can use a gamma distribution to describe our data:

<div style="font-size: 150%;">
$$ \alpha = \frac{\bar{X}^2}{S^2}, \, \beta = \frac{S^2}{\bar{X}} $$
</div>

Let's deal with the missing value in the October data. Given what we are trying to do, it is most sensible to fill in the missing value with the average of the available values.
precip.fillna(value={'Oct': precip.Oct.mean()}, inplace=True)
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
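The next two cells refer to alpha_mom and beta_mom, which were computed in a cell that does not appear in this excerpt. Based on the method-of-moments formulas above, a minimal sketch of that step might be (the names are chosen to match the later cells):

precip_mean = precip.mean()
precip_var = precip.var()
# Method-of-moments estimates: alpha = mean^2 / variance, beta = variance / mean
alpha_mom = precip_mean ** 2 / precip_var
beta_mom = precip_var / precip_mean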
We can use the gamma.pdf function in scipy.stats.distributions to plot the distributions implied by the calculated alphas and betas. For example, here is January:
from scipy.stats.distributions import gamma

hist(precip.Jan, normed=True, bins=20)
plot(linspace(0, 10), gamma.pdf(linspace(0, 10), alpha_mom[0], beta_mom[0]))
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
Looping over all months, we can create a grid of plots for the distribution of rainfall, using the gamma distribution:
axs = precip.hist(normed=True, figsize=(12, 8), sharex=True, sharey=True, bins=15, grid=False)

for ax in axs.ravel():
    # Get month
    m = ax.get_title()
    # Plot fitted distribution
    x = linspace(*ax.get_xlim())
    ax.plot(x, gamma.pdf(x, alpha_mom[m], beta_mom[m]))
    # Annotate with parameter estimates
    label = 'alpha = {0:.2f}\nbeta = {1:.2f}'.format(alpha_mom[m], beta_mom[m])
    ax.annotate(label, xy=(10, 0.2))

tight_layout()
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
Maximum Likelihood Maximum likelihood (ML) fitting is usually more work than the method of moments, but it is preferred as the resulting estimator is known to have good theoretical properties. There is a ton of theory regarding ML. We will restrict ourselves to the mechanics here. Say we have some data $y = y_1,y_2,\ldots,y_n$ that is distributed according to some distribution: <div style="font-size: 120%;"> $$Pr(Y_i=y_i | \theta)$$ </div> Here, for example, is a Poisson distribution that describes the distribution of some discrete variables, typically counts:
y = np.random.poisson(5, size=100)
plt.hist(y, bins=12, normed=True)
xlabel('y'); ylabel('Pr(y)')
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
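The next two cells call a helper named poisson_like, which is defined in a cell not included in this excerpt. Given how it is used (plotted as a likelihood over $\lambda$ and as $Pr(X|\lambda)$ over $x$), it presumably returns the Poisson probability mass; a minimal sketch under that assumption:

from scipy.special import factorial

def poisson_like(x, lam):
    # Poisson probability mass, read as a likelihood in lam when x is held fixed
    return np.exp(-lam) * lam**x / factorial(x)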
We can plot the likelihood function for any value of the parameter(s):
lambdas = np.linspace(0,15)
x = 5
plt.plot(lambdas, [poisson_like(x, l) for l in lambdas])
xlabel('$\lambda$')
ylabel('L($\lambda$|x={0})'.format(x))
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
How is the likelihood function different than the probability distribution function (PDF)? The likelihood is a function of the parameter(s) given the data, whereas the PDF returns the probability of data given a particular parameter value. Here is the PDF of the Poisson for $\lambda=5$.
lam = 5
xvals = arange(15)
plt.bar(xvals, [poisson_like(x, lam) for x in xvals])
xlabel('x')
ylabel('Pr(X|$\lambda$=5)')
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
Here is a graphical example of how Newton-Raphson converges on a solution, using an arbitrary function:
# some function
func = lambda x: 3./(1 + 400*np.exp(-2*x)) - 1
xvals = np.linspace(0, 6)
plot(xvals, func(xvals))
text(5.3, 2.1, '$f(x)$', fontsize=16)

# zero line
plot([0,6], [0,0], 'k-')

# value at step n
plot([4,4], [0,func(4)], 'k:')
plt.text(4, -.2, '$x_n$', fontsize=16)

# tangent line
tanline = lambda x: -0.858 + 0.626*x
plot(xvals, tanline(xvals), 'r--')

# point at step n+1
xprime = 0.858/0.626
plot([xprime, xprime], [tanline(xprime), func(xprime)], 'k:')
plt.text(xprime+.1, -.2, '$x_{n+1}$', fontsize=16)
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
where log_mean and mean_log are $\log{\bar{x}}$ and $\overline{\log(x)}$, respectively. psi and polygamma are complex functions of the Gamma function that result when you take first and second derivatives of that function.
# Calculate statistics
log_mean = precip.mean().apply(log)
mean_log = precip.apply(log).mean()
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
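The alpha_mle value used in the next cell comes from a Newton-Raphson solve that is not shown in this excerpt. A sketch of that step, assuming the December column (which the next cell also uses) and the gamma score equation implied by the text, namely $\log\alpha - \psi(\alpha) = \log\bar{x} - \overline{\log x}$:

from scipy.special import psi, polygamma

# first and second derivatives of the profiled gamma log-likelihood with respect to alpha
dlgamma = lambda a, log_mean, mean_log: np.log(a) - psi(a) - log_mean + mean_log
dl2gamma = lambda a, *args: 1./a - polygamma(1, a)

alpha_mle = alpha_mom[-1]   # start from the method-of-moments estimate for December
for _ in range(10):         # a few Newton-Raphson updates are enough here
    alpha_mle = alpha_mle - dlgamma(alpha_mle, log_mean[-1], mean_log[-1]) / dl2gamma(alpha_mle)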
And now plug this back into the solution for beta: <div style="font-size: 120%;"> $$ \beta = \frac{\alpha}{\bar{X}} $$
beta_mle = alpha_mle/precip.mean()[-1]
beta_mle
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
We can compare the fit of the estimates derived from MLE to those from the method of moments:
dec = precip.Dec
dec.hist(normed=True, bins=10, grid=False)
x = linspace(0, dec.max())
plot(x, gamma.pdf(x, alpha_mom[-1], beta_mom[-1]), 'm-')
plot(x, gamma.pdf(x, alpha_mle, beta_mle), 'r--')
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
This fit is not directly comparable to our estimates, however, because SciPy's gamma.fit method fits an odd 3-parameter version of the gamma distribution. Example: truncated distribution Suppose that we observe $Y$ truncated below at $a$ (where $a$ is known). If $X$ is the distribution of our observation, then: $$ P(X \le x) = P(Y \le x|Y \gt a) = \frac{P(a \lt Y \le x)}{P(Y \gt a)}$$ (so, $Y$ is the original variable and $X$ is the truncated variable) Then X has the density: $$f_X(x) = \frac{f_Y (x)}{1−F_Y (a)} \, \text{for} \, x \gt a$$ Suppose $Y \sim N(\mu, \sigma^2)$ and $x_1,\ldots,x_n$ are independent observations of $X$. We can use maximum likelihood to find $\mu$ and $\sigma$. First, we can simulate a truncated distribution using a while statement to eliminate samples that are outside the support of the truncated distribution.
x = np.random.normal(size=10000)
a = -1
x_small = x < a
while x_small.sum():
    x[x_small] = np.random.normal(size=x_small.sum())
    x_small = x < a
_ = hist(x, bins=100)
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
We can construct a log likelihood for this function using the conditional form: $$f_X(x) = \frac{f_Y (x)}{1−F_Y (a)} \, \text{for} \, x \gt a$$
from scipy.stats.distributions import norm

trunc_norm = lambda theta, a, x: -(np.log(norm.pdf(x, theta[0], theta[1])) - np.log(1 - norm.cdf(a, theta[0], theta[1]))).sum()
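Having defined the negative log-likelihood, one would typically minimize it numerically. A short usage sketch, assuming fmin from scipy.optimize (which is used elsewhere in this notebook) and the truncated sample x and cutoff a simulated above; the starting values are arbitrary:

from scipy.optimize import fmin

# initial guess for (mu, sigma); fmin minimizes the negative log-likelihood defined above
mu_hat, sigma_hat = fmin(trunc_norm, np.array([1., 2.]), args=(a, x))
mu_hat, sigma_hat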
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
In general, simulating data is a terrific way of testing your model before using it with real data.

Kernel density estimates

In some instances, we may not be interested in the parameters of a particular distribution of data, but just a smoothed representation of the data at hand. In this case, we can estimate the distribution non-parametrically (i.e. making no assumptions about the form of the underlying distribution) using kernel density estimation.
# Some random data
y = np.random.random(15) * 10
y

x = np.linspace(0, 10, 100)

# Smoothing parameter
s = 0.4

# Calculate the kernels
kernels = np.transpose([norm.pdf(x, yi, s) for yi in y])

plot(x, kernels, 'k:')
plot(x, kernels.sum(1))
plot(y, np.zeros(len(y)), 'ro', ms=10)
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
Exercise: Cervical dystonia analysis Recall the cervical dystonia database, which is a clinical trial of botulinum toxin type B (BotB) for patients with cervical dystonia from nine U.S. sites. The response variable is measurements on the Toronto Western Spasmodic Torticollis Rating Scale (TWSTRS), measuring severity, pain, and disability of cervical dystonia (high scores mean more impairment). One way to check the efficacy of the treatment is to compare the distribution of TWSTRS for control and treatment patients at the end of the study. Use the method of moments or MLE to calculate the mean and variance of TWSTRS at week 16 for one of the treatments and the control group. Assume that the distribution of the twstrs variable is normal: $$f(x \mid \mu, \sigma^2) = \sqrt{\frac{1}{2\pi\sigma^2}} \exp\left\{ -\frac{1}{2} \frac{(x-\mu)^2}{\sigma^2} \right\}$$
cdystonia = pd.read_csv("data/cdystonia.csv")
cdystonia[cdystonia.obs==6].hist(column='twstrs', by=cdystonia.treat, bins=8)
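One possible sketch of the exercise: for a normal model, the maximum-likelihood estimates are simply the sample mean and the 1/n sample variance. The group labels below are assumptions about how the treatment variable is coded in this file, and week 16 is taken to correspond to obs == 6, as in the cell above.

wk16 = cdystonia[cdystonia.obs == 6]
for group in ['10000U', 'Placebo']:          # assumed labels for one treatment arm and the control arm
    scores = wk16[wk16.treat == group].twstrs
    mu_hat = scores.mean()                   # MLE of the mean
    sigma2_hat = scores.var(ddof=0)          # MLE of the variance uses the 1/n denominator
    print group, mu_hat, sigma2_hat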
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
Regression models

A general, primary goal of many statistical data analysis tasks is to relate the influence of one variable on another. For example, we may wish to know how different medical interventions influence the incidence or duration of disease, or perhaps how a baseball player's performance varies as a function of age.
x = np.array([2.2, 4.3, 5.1, 5.8, 6.4, 8.0])
y = np.array([0.4, 10.1, 14.0, 10.9, 15.4, 18.5])
plot(x,y,'ro')
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
We can build a model to characterize the relationship between $X$ and $Y$, recognizing that additional factors other than $X$ (the ones we have measured or are interested in) may influence the response variable $Y$. <div style="font-size: 150%;"> $y_i = f(x_i) + \epsilon_i$ </div> where $f$ is some function, for example a linear function: <div style="font-size: 150%;"> $y_i = \beta_0 + \beta_1 x_i + \epsilon_i$ </div> and $\epsilon_i$ accounts for the difference between the observed response $y_i$ and its prediction from the model $\hat{y_i} = \beta_0 + \beta_1 x_i$. This is sometimes referred to as process uncertainty. We would like to select $\beta_0, \beta_1$ so that the difference between the predictions and the observations is zero, but this is not usually possible. Instead, we choose a reasonable criterion: the smallest sum of the squared differences between $\hat{y}$ and $y$. <div style="font-size: 120%;"> $$R^2 = \sum_i (y_i - [\beta_0 + \beta_1 x_i])^2 = \sum_i \epsilon_i^2 $$ </div> Squaring serves two purposes: (1) to prevent positive and negative values from cancelling each other out and (2) to strongly penalize large deviations. Whether the latter is a good thing or not depends on the goals of the analysis. In other words, we will select the parameters that minimize the squared error of the model.
ss = lambda theta, x, y: np.sum((y - theta[0] - theta[1]*x) ** 2)
ss([0,1],x,y)

b0,b1 = fmin(ss, [0,1], args=(x,y))
b0,b1

plot(x, y, 'ro')
plot([0,10], [b0, b0+b1*10])

plot(x, y, 'ro')
plot([0,10], [b0, b0+b1*10])
for xi, yi in zip(x,y):
    plot([xi]*2, [yi, b0+b1*xi], 'k:')
xlim(2, 9); ylim(0, 20)
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
Minimizing the sum of squares is not the only criterion we can use; it is just a very popular (and successful) one. For example, we can try to minimize the sum of absolute differences:
sabs = lambda theta, x, y: np.sum(np.abs(y - theta[0] - theta[1]*x))
b0,b1 = fmin(sabs, [0,1], args=(x,y))
print b0,b1

plot(x, y, 'ro')
plot([0,10], [b0, b0+b1*10])
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
We are not restricted to a straight-line regression model; we can represent a curved relationship between our variables by introducing polynomial terms. For example, a quadratic model: <div style="font-size: 150%;"> $y_i = \beta_0 + \beta_1 x_i + \beta_2 x_i^2 + \epsilon_i$ </div>
ss2 = lambda theta, x, y: np.sum((y - theta[0] - theta[1]*x - theta[2]*(x**2)) ** 2)
b0,b1,b2 = fmin(ss2, [1,1,-1], args=(x,y))
print b0,b1,b2

plot(x, y, 'ro')
xvals = np.linspace(0, 10, 100)
plot(xvals, b0 + b1*xvals + b2*(xvals**2))
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
Although a polynomial model characterizes a nonlinear relationship, it is a linear problem in terms of estimation. That is, the regression model $f(y | x)$ is linear in the parameters. For some data, it may be reasonable to consider polynomials of order > 2. For example, consider the relationship between the number of home runs a baseball player hits and the number of runs batted in (RBI) they accumulate; clearly, the relationship is positive, but we may not expect a linear relationship.
ss3 = lambda theta, x, y: np.sum((y - theta[0] - theta[1]*x - theta[2]*(x**2) - theta[3]*(x**3)) ** 2)

bb = pd.read_csv("data/baseball.csv", index_col=0)
plot(bb.hr, bb.rbi, 'r.')
b0,b1,b2,b3 = fmin(ss3, [0,1,-1,0], args=(bb.hr, bb.rbi))
xvals = arange(40)
plot(xvals, b0 + b1*xvals + b2*(xvals**2) + b3*(xvals**3))
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
Of course, we need not fit least squares models by hand. The statsmodels package implements least squares models that allow for model fitting in a single line:
import statsmodels.api as sm

straight_line = sm.OLS(y, sm.add_constant(x)).fit()
straight_line.summary()

from statsmodels.formula.api import ols as OLS

data = pd.DataFrame(dict(x=x, y=y))
cubic_fit = OLS('y ~ x + I(x**2)', data).fit()
cubic_fit.summary()
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
Exercise: Polynomial function

Write a function that specifies a polynomial of arbitrary degree.

Model Selection

How do we choose among competing models for a given dataset? More parameters are not necessarily better, from the standpoint of model fit. For example, fitting a 9-th order polynomial to the sample data from the above example certainly results in an overfit.
def calc_poly(params, data):
    x = np.c_[[data**i for i in range(len(params))]]
    return np.dot(params, x)

ssp = lambda theta, x, y: np.sum((y - calc_poly(theta, x)) ** 2)
betas = fmin(ssp, np.zeros(10), args=(x,y), maxiter=1e6)

plot(x, y, 'ro')
xvals = np.linspace(0, max(x), 100)
plot(xvals, calc_poly(betas, xvals))
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
One approach is to use an information-theoretic criterion to select the most appropriate model. For example Akaike's Information Criterion (AIC) balances the fit of the model (in terms of the likelihood) with the number of parameters required to achieve that fit. We can easily calculate AIC as: $$AIC = n \log(\hat{\sigma}^2) + 2p$$ where $p$ is the number of parameters in the model and $\hat{\sigma}^2 = RSS/(n-p-1)$. Notice that as the number of parameters increase, the residual sum of squares goes down, but the second term (a penalty) increases. To apply AIC to model selection, we choose the model that has the lowest AIC value.
n = len(x)
aic = lambda rss, p, n: n * np.log(rss/(n-p-1)) + 2*p

RSS1 = ss(fmin(ss, [0,1], args=(x,y)), x, y)
RSS2 = ss2(fmin(ss2, [1,1,-1], args=(x,y)), x, y)

print aic(RSS1, 2, n), aic(RSS2, 3, n)
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
Hence, we would select the 2-parameter (linear) model.

Logistic Regression

Fitting a line to the relationship between two variables using the least squares approach is sensible when the variable we are trying to predict is continuous, but what about when the data are dichotomous?

* male/female
* pass/fail
* died/survived

Let's consider the problem of predicting survival in the Titanic disaster, based on our available information. For example, let's say that we want to predict survival as a function of the fare paid for the journey.
titanic = pd.read_excel("data/titanic.xls", "titanic")
titanic.name

jitter = np.random.normal(scale=0.02, size=len(titanic))
plt.scatter(np.log(titanic.fare), titanic.survived + jitter, alpha=0.3)
yticks([0,1])
ylabel("survived")
xlabel("log(fare)")
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
I have added random jitter on the y-axis to help visualize the density of the points, and have plotted fare on the log scale. Clearly, fitting a line through this data makes little sense, for several reasons. First, for most values of the predictor variable, the line would predict values that are not zero or one. Second, it would seem odd to choose least squares (or similar) as a criterion for selecting the best line.
x = np.log(titanic.fare[titanic.fare>0])
y = titanic.survived[titanic.fare>0]
betas_titanic = fmin(ss, [1,1], args=(x,y))

jitter = np.random.normal(scale=0.02, size=len(titanic))
plt.scatter(np.log(titanic.fare), titanic.survived + jitter, alpha=0.3)
yticks([0,1])
ylabel("survived")
xlabel("log(fare)")
plt.plot([0,7], [betas_titanic[0], betas_titanic[0] + betas_titanic[1]*7.])
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
If we look at this data, we can see that for most values of fare, there are some individuals that survived and some that did not. However, notice that the cloud of points is denser on the "survived" (y=1) side for larger values of fare than on the "died" (y=0) side.

Stochastic model

Rather than model the binary outcome explicitly, it makes sense instead to model the probability of death or survival in a stochastic model. Probabilities are measured on a continuous [0,1] scale, which may be more amenable for prediction using a regression line. We need to consider a different probability model for this exercise however; let's consider the Bernoulli distribution as a generative model for our data:

<div style="font-size: 120%;">
$$f(y|p) = p^{y} (1-p)^{1-y}$$
</div>

where $y = \{0,1\}$ and $p \in [0,1]$.

So, this model predicts whether $y$ is zero or one as a function of the probability $p$. Notice that when $y=1$, the $1-p$ term disappears, and when $y=0$, the $p$ term disappears.

So, the model we want to fit should look something like this:

<div style="font-size: 120%;">
$$p_i = \beta_0 + \beta_1 x_i + \epsilon_i$$
</div>

However, since $p$ is constrained to be between zero and one, it is easy to see where a linear (or polynomial) model might predict values outside of this range. We can modify this model slightly by using a **link function** to transform the probability to have an unbounded range on a new scale. Specifically, we can use a **logit transformation** as our link function:

<div style="font-size: 120%;">
$$\text{logit}(p) = \log\left[\frac{p}{1-p}\right] = x$$
</div>

Here's a plot of $p/(1-p)$
logit = lambda p: np.log(p/(1.-p))
unit_interval = np.linspace(0,1)
plt.plot(unit_interval/(1-unit_interval), unit_interval)
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
The inverse of the logit transformation is: <div style="font-size: 150%;"> $$p = \frac{1}{1 + \exp(-x)}$$ So, now our model is: <div style="font-size: 120%;"> $$\text{logit}(p_i) = \beta_0 + \beta_1 x_i + \epsilon_i$$ We can fit this model using maximum likelihood. Our likelihood, again based on the Bernoulli model is: <div style="font-size: 120%;"> $$L(y|p) = \prod_{i=1}^n p_i^{y_i} (1-p_i)^{1-y_i}$$ which, on the log scale is: <div style="font-size: 120%;"> $$l(y|p) = \sum_{i=1}^n y_i \log(p_i) + (1-y_i)\log(1-p_i)$$ We can easily implement this in Python, keeping in mind that `fmin` minimizes, rather than maximizes functions:
invlogit = lambda x: 1. / (1 + np.exp(-x))

def logistic_like(theta, x, y):
    p = invlogit(theta[0] + theta[1] * x)
    # Return negative of log-likelihood
    return -np.sum(y * np.log(p) + (1-y) * np.log(1 - p))
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
... and fit the model.
b0,b1 = fmin(logistic_like, [0.5,0], args=(x,y))
b0, b1

jitter = np.random.normal(scale=0.01, size=len(x))
plot(x, y+jitter, 'r.', alpha=0.3)
yticks([0,.25,.5,.75,1])
xvals = np.linspace(0, 600)
plot(xvals, invlogit(b0+b1*xvals))
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
As with our least squares model, we can easily fit logistic regression models in statsmodels, in this case using the GLM (generalized linear model) class with a binomial error distribution specified.
logistic = sm.GLM(y, sm.add_constant(x), family=sm.families.Binomial()).fit()
logistic.summary()
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
Exercise: multivariate logistic regression

Which other variables might be relevant for predicting the probability of surviving the Titanic? Generalize the model likelihood to include 2 or 3 other covariates from the dataset.

Bootstrapping

Parametric inference can be non-robust:

* inaccurate if parametric assumptions are violated
* if we rely on asymptotic results, we may not achieve an acceptable level of accuracy

Parametric inference can be difficult:

* derivation of sampling distribution may not be possible

An alternative is to estimate the sampling distribution of a statistic empirically without making assumptions about the form of the population. We have seen this already with the kernel density estimate.

Non-parametric Bootstrap

The bootstrap is a resampling method discovered by Brad Efron that allows one to approximate the true sampling distribution of a dataset, and thereby obtain estimates of the mean and variance of the distribution.

Bootstrap sample:

<div style="font-size: 120%;">
$$S_1^* = \{x_{11}^*, x_{12}^*, \ldots, x_{1n}^*\}$$
</div>

$S_i^*$ is a sample of size $n$, with replacement.

In Python, we have already seen the NumPy function permutation that can be used in conjunction with Pandas' take method to generate a random sample of some data without replacement:
np.random.permutation(titanic.name)[:5]
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
Similarly, we can use the random.randint method to generate a sample with replacement, which we can use when bootstrapping.
random_ind = np.random.randint(0, len(titanic), 5)
titanic.name[random_ind]
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
We regard S as an "estimate" of population P

population : sample :: sample : bootstrap sample

The idea is to generate replicate bootstrap samples:

<div style="font-size: 120%;">
$$S^* = \{S_1^*, S_2^*, \ldots, S_R^*\}$$
</div>

Compute statistic $t$ (estimate) for each bootstrap sample:

<div style="font-size: 120%;">
$$T_i^* = t(S_i^*)$$
</div>
n = 10
R = 1000

# Original sample (n=10)
x = np.random.normal(size=n)

# 1000 bootstrap samples of size 10
s = [x[np.random.randint(0,n,n)].mean() for i in range(R)]
_ = hist(s, bins=30)
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
Since we have estimated the expectation of the bootstrapped statistics, we can estimate the bias of T: $$\hat{B}^* = \bar{T}^* - T$$
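The next cell subtracts the original-sample statistic from boot_mean, the expectation of the bootstrapped statistics, which was computed in a cell not shown here. A minimal stand-in, assuming the bootstrap means s from the cell above:

boot_mean = np.mean(s)   # expectation of the bootstrapped sample means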
boot_mean - np.mean(x)
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
kialio/gsfcpyboot
mit
Explanation of code above

The code above contains three list comprehensions for very compactly simulating the sampling distribution of the mean:

1. Create a list of sample sizes to simulate (ssizes)
2. For each sample size (sz), generate 100 random samples, and store those samples in a matrix of size sz $\times$ 100 (i.e. each column is a sample)
3. For each matrix created in step 2, calculate column means (= sample means)
4. For each set of sample means in 3, calculate the standard deviation (= standard error)
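That simulation cell itself is not part of this excerpt, but the plotting cell below relies on the names it created (ssizes, means, se, plus the population parameters). A minimal sketch consistent with the description above; mu and sigma are the true population parameters defined earlier in the notebook, and the particular sample sizes are illustrative:

ssizes = [25, 50, 100, 200, 400]                                           # sample sizes to simulate (illustrative)
samples = [np.random.normal(mu, sigma, size=(sz, 100)) for sz in ssizes]   # each column is one sample of size sz
means = [s.mean(axis=0) for s in samples]                                  # column means = 100 sample means per size
se = [m.std(ddof=1) for m in means]                                        # std dev of the sample means = standard error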
# make a pair of plots
ssmin, ssmax = min(ssizes), max(ssizes)
theoryss = np.linspace(ssmin, ssmax, 250)

fig, (ax1, ax2) = plt.subplots(1,2)  # 1 x 2 grid of plots
fig.set_size_inches(12,4)

# plot histograms of sampling distributions
for (ss,mean) in zip(ssizes, means):
    ax1.hist(mean, normed=True, histtype='stepfilled', alpha=0.75, label="n = %d" % ss)
ax1.set_xlabel("X")
ax1.set_ylabel("Density")
ax1.legend()
ax1.set_title("Sampling Distributions of Mean\nFor Different Sample Sizes")

# plot simulation SE of mean vs theory SE of mean
ax2.plot(ssizes, se, 'ko', label='simulation')
ax2.plot(theoryss, sigma/np.sqrt(theoryss), color='red', label="theory")
ax2.set_xlim(0, ssmax*1.1)
ax2.set_ylim(0, max(se)*1.1)
ax2.set_xlabel("sample size ($n$)")
ax2.set_ylabel("SE of mean")
ax2.legend()
ax2.set_title("Standard Error of Mean\nTheoretical Expectation vs. Simulation")

pass
inclass-2016-02-22-Confidence-Intervals.ipynb
Bio204-class/bio204-notebooks
cc0-1.0
Sample Estimate of the Standard Error of the Mean

In real life, we don't have access to the sampling distribution of the mean or the true population parameter $\sigma$ from which we can calculate the standard error of the mean. However, we can still use our unbiased sample estimator of the standard deviation, $s$, to estimate the standard error of the mean.

$$ {SE}_{\overline{x}} = \frac{s}{\sqrt{n}} $$

Conditions for sampling distribution to be nearly normal

For the sampling distribution of the mean to be nearly normal with ${SE}_\overline{x}$ accurate, the following conditions should hold:

* Sample observations are independent
* Sample size is large ($n \geq 30$ is a good rule of thumb)
* Population distribution is not strongly skewed

Confidence Intervals for the Mean

We know that given a random sample from a population of interest, the mean of $X$ in our random sample is unlikely to be the true population mean of $X$. However, our simulations have taught us a number of things:

1. As sample size increases, the sample estimate of the mean is more likely to be close to the true mean
2. As sample size increases, the standard deviation of the sampling distribution of the mean (= standard error of the mean) decreases

We can use this knowledge to calculate plausible ranges of values for the mean. We call such ranges confidence intervals for the mean (the idea of confidence intervals can apply to other statistics as well). We're going to express our confidence intervals in terms of multiples of the standard error.

Let's start by using simulation to explore how often our confidence intervals capture the true mean when we base our confidence intervals on different multiples, $z$, of the SE.

$$ {CI}_\overline{x} = \overline{x} \pm (z \times {SE}_\overline{x}) $$

For the purposes of this simulation, let's consider samples of size 50, drawn from the same population of interest as before (popn above). We're going to generate a large number of such samples, and for each sample we will calculate the CI of the mean using the formula above. We will then ask, "for what fraction of the samples did our CI overlap the true population mean"? This will give us a sense of how well different confidence intervals do in providing a plausible range for the true mean.
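The simulation below uses popn, mu, and sigma, which were set up earlier in the notebook (the "population of interest as before"). A minimal stand-in so the cell can run in isolation; the particular parameter values are arbitrary assumptions:

from scipy import stats

mu, sigma = 10.0, 4.0                   # assumed true population parameters (illustrative)
popn = stats.norm(loc=mu, scale=sigma)  # frozen population distribution, used below via popn.rvs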
N = 1000
samples50 = popn.rvs(size=(50, N))           # N samples of size 50
means50 = np.mean(samples50, axis=0)         # sample means
std50 = np.std(samples50, axis=0, ddof=1)    # sample std devs
se50 = std50/np.sqrt(50)                     # sample standard errors

frac_overlap_mu = []
zs = np.arange(1,3,step=0.05)
for z in zs:
    lowCI = means50 - z*se50
    highCI = means50 + z*se50
    overlap_mu = np.logical_and(lowCI <= mu, highCI >= mu)
    frac = np.count_nonzero(overlap_mu)/N
    frac_overlap_mu.append(frac)
frac_overlap_mu = np.array(frac_overlap_mu)

plt.plot(zs, frac_overlap_mu * 100, 'k-', label="simulation")
plt.ylim(60, 104)
plt.xlim(1, 3)
plt.xlabel("z in CI = sample mean ± z × SE")
plt.ylabel(u"% of CIs that include popn mean")

# plot theoretical expectation
stdnorm = stats.norm(loc=0, scale=1)
plt.plot(zs, (1 - (2* stdnorm.sf(zs)))*100, 'r-', alpha=0.5, label="theory")
plt.legend(loc='lower right')
pass
inclass-2016-02-22-Confidence-Intervals.ipynb
Bio204-class/bio204-notebooks
cc0-1.0
Interpreting our simulation

How should we interpret the results above? We found that as we increased the scaling of our confidence intervals (larger $z$), the true mean was within sample confidence intervals a greater proportion of the time. For example, when $z = 1$ we found that the true mean was within our CIs roughly 67% of the time, while at $z = 2$ the true mean was within our confidence intervals approximately 95% of the time.

We call $\overline{x} \pm 2 \times {SE}_\overline{x}$ the approximate 95% confidence interval of the mean (see below for exact values of z). Given such a CI calculated from a random sample we can say we are "95% confident" that we have captured the true mean within the bounds of the CI (subject to the caveats about the sampling distribution above). By this we mean if we took many samples and built a confidence interval from each sample using the equation above, then about 95% of those intervals would contain the actual mean, μ. Note that this is exactly what we did in our simulation!
ndraw = 100
x = means50[:ndraw]
y = range(0,ndraw)

plt.errorbar(x, y, xerr=1.96*se50[:ndraw], fmt='o')
plt.vlines(mu, 0, ndraw, linestyle='dashed', color='#D55E00', linewidth=3, zorder=5)
plt.ylim(-1,101)
plt.yticks([])
plt.title("95% CI: mean ± 1.96×SE\nfor 100 samples of size 50")
fig = plt.gcf()
fig.set_size_inches(4,8)
inclass-2016-02-22-Confidence-Intervals.ipynb
Bio204-class/bio204-notebooks
cc0-1.0
Generating a table of CIs and corresponding margins of error The table below gives the percent CI and the corresponding margin of error ($z \times {SE}$) for that confidence interval.
perc = np.array([.80, .90, .95, .99, .997])
zval = stdnorm.ppf(1 - (1 - perc)/2)  # account for the two tails of the sampling distn

print("% CI \tz × SE")
print("-----\t------")
for (i,j) in zip(perc, zval):
    print("{:5.1f}\t{:6.2f}".format(i*100, j))

# see the string docs (https://docs.python.org/3.4/library/string.html)
# for information on how formatting works
inclass-2016-02-22-Confidence-Intervals.ipynb
Bio204-class/bio204-notebooks
cc0-1.0
Define Data Location For remote data the interaction will use ssh to securely interact with the data<br/> This uses the reverse connection capability in paraview so that the paraview server can be submitted to a job scheduler<br/> Note: The default paraview server connection will use port 11111
remote_data = True
remote_server_auto = True

case_name = 'caratung-ar-6p0-pitch-8p0'
data_dir='/gpfs/thirdparty/zenotech/home/dstandingford/VALIDATION/CARATUNG'
data_host='dstandingford@vis03'
paraview_cmd='mpiexec /gpfs/cfms/apps/zCFD/bin/pvserver'

if not remote_server_auto:
    paraview_cmd=None

if not remote_data:
    data_host='localhost'
    paraview_cmd=None
ipynb/CARADONNA_TUNG/Cara Tung Rotor.ipynb
zenotech/zPost
bsd-3-clause
Validation and regression
# Validation for Caradonna Tung Rotor (Mach at Tip - 0.877) from NASA TM 81232, page 34
validate = True
regression = True

# Make movie option currently not working - TODO
make_movie = False

if (validate):
    valid = True
    validation_tol = 0.0100
    valid_lower_cl_0p50 = 0.2298-validation_tol
    valid_upper_cl_0p50 = 0.2298+validation_tol
    valid_lower_cl_0p68 = 0.2842-validation_tol
    valid_upper_cl_0p68 = 0.2842+validation_tol
    valid_lower_cl_0p80 = 0.2736-validation_tol
    valid_upper_cl_0p80 = 0.2736+validation_tol
    valid_lower_cl_0p89 = 0.2989-validation_tol
    valid_upper_cl_0p89 = 0.2989+validation_tol
    valid_lower_cl_0p96 = 0.3175-validation_tol
    valid_upper_cl_0p96 = 0.3175+validation_tol
    print 'VALIDATING CARADONNA TUNG CASE'
if (regression):
    print 'REGRESSION CARADONNA TUNG CASE'
ipynb/CARADONNA_TUNG/Cara Tung Rotor.ipynb
zenotech/zPost
bsd-3-clause
Initialise Environment
%pylab inline
from paraview.simple import *
paraview.simple._DisableFirstRenderCameraReset()
import pylab as pl
ipynb/CARADONNA_TUNG/Cara Tung Rotor.ipynb
zenotech/zPost
bsd-3-clause
Get control dictionary
from zutil.post import get_case_parameters,print_html_parameters
parameters=get_case_parameters(case_name,data_host=data_host,data_dir=data_dir)
# print parameters
ipynb/CARADONNA_TUNG/Cara Tung Rotor.ipynb
zenotech/zPost
bsd-3-clause
Define test conditions
from IPython.display import HTML
HTML(print_html_parameters(parameters))

aspect_ratio = 6.0
Pitch = 8.0

from zutil.post import for_each
from zutil import rotate_vector
from zutil.post import get_csv_data

def plot_cp_profile(ax,file_root,span_loc,ax2):
    wall = PVDReader( FileName=file_root+'_wall.pvd' )
    wall.UpdatePipeline()
    point_data = CellDatatoPointData(Input=wall)
    point_data.PassCellData = 0
    point_data.UpdatePipeline()
    merged = MergeBlocks(Input=point_data)
    merged.UpdatePipeline()
    wall_slice = Slice(Input=merged, SliceType="Plane" )
    wall_slice.SliceType.Normal = [0.0,1.0,0.0]
    wall_slice.SliceType.Origin = [0, span_loc*aspect_ratio, 0]
    wall_slice.UpdatePipeline()
    sorted_line = PlotOnSortedLines(Input=wall_slice)
    sorted_line.UpdatePipeline()
    slice_client = servermanager.Fetch(sorted_line)
    for_each(slice_client,func=plot_array,axis=ax,span_loc=span_loc,axis2=ax2)

def plot_array(data_array,pts_array,**kwargs):
    ax = kwargs['axis']
    span_loc = kwargs['span_loc']
    ax2 = kwargs['axis2']
    data = []
    pos = []
    pos_y = []
    count = 0
    cp_array = data_array.GetPointData()['cp']
    for p in pts_array.GetPoints()[:,0]:
        cp = float(cp_array[count])
        # transform to local Cp
        cp = cp/(span_loc)**2
        data.append(cp)
        pt_x = pts_array.GetPoints()[count,0]
        pt_z = pts_array.GetPoints()[count,2]
        # rotate by -8 deg
        pt_rot = rotate_vector([pt_x,0.0,pt_z],-8.0,0.0)
        pt = pt_rot[0] + 0.25
        pos.append(pt)
        pos_y.append(pt_rot[2])
        count+=1
    ax.plot(pos, data , color='g',linestyle='-',marker='None',label='zCFD')
    ax2.plot(pos, pos_y , color='grey',linestyle='-',marker='None',label='profile')

def plot_experiment(ax, filename):
    header = True
    remote = False
    # Note - this returns a pandas dataframe object
    df = get_csv_data(filename,True,False)
    x = []
    y = []
    for ind in range(0,len(df.index)-1):
        x.append(df[list(df.columns.values)[0]][ind])
        y.append(-df[list(df.columns.values)[1]][ind])
    ax.scatter(x, y, color='grey', label='Experiment')
ipynb/CARADONNA_TUNG/Cara Tung Rotor.ipynb
zenotech/zPost
bsd-3-clause
Cp Profile
from zutil.post import get_case_root, cp_profile_wall_from_file_span
from zutil.post import ProgressBar
from collections import OrderedDict

factor = 0.0
pbar = ProgressBar()

plot_list = OrderedDict([(0.50,{'exp_data_file': 'data/cp-0p50.txt', 'cp_axis':[0.0,1.0,1.2,-1.0]}),
                         (0.68,{'exp_data_file': 'data/cp-0p68.txt', 'cp_axis':[0.0,1.0,1.2,-1.5]}),
                         (0.80,{'exp_data_file': 'data/cp-0p80.txt', 'cp_axis':[0.0,1.0,1.2,-1.5]}),
                         (0.89,{'exp_data_file': 'data/cp-0p89.txt', 'cp_axis':[0.0,1.0,1.2,-1.5]}),
                         (0.96,{'exp_data_file': 'data/cp-0p96.txt', 'cp_axis':[0.0,1.0,1.2,-1.5]})])

fig = pl.figure(figsize=(25, 30),dpi=100, facecolor='w', edgecolor='k')
fig.suptitle('Caradonna Tung Hover Rotor (' + r'$\mathbf{M_{TIP}}$' + ' = 0.877)',
             fontsize=28, fontweight='normal', color = '#5D5858')

pnum=1
cl = {}
for plot in plot_list:
    pbar+=5
    span_loc = plot + factor
    ax = fig.add_subplot(3,2,pnum)
    ax.set_title('$\mathbf{C_P}$' + ' at ' + '$\mathbf{r/R}$' + ' = ' + str(span_loc) + '\n',
                 fontsize=24, fontweight='normal', color = '#E48B25')
    ax.grid(True)
    ax.set_xlabel('$\mathbf{x/c}$', fontsize=24, fontweight='bold', color = '#5D5858')
    ax.set_ylabel('$\mathbf{C_p}$', fontsize=24, fontweight='bold', color = '#5D5858')
    ax.axis(plot_list[plot]['cp_axis'])
    ax2 = ax.twinx()
    ax2.set_ylabel('$\mathbf{z/c}$', fontsize=24, fontweight='bold', color = '#5D5858')
    ax2.axis([0,1,-0.5,0.5])
    plot_cp_profile(ax,get_case_root(case_name,num_procs),span_loc,ax2)
    normal = [0.0, 1.0, 0.0]
    origin = [0.0, span_loc*aspect_ratio, 0.0]
    # Check this - alpha passed via kwargs to post.py
    # THESE NUMBERS ARE COMPLETELY WRONG - CHECK
    forces = cp_profile_wall_from_file_span(get_case_root(case_name,num_procs), normal, origin, alpha=Pitch)
    cd = forces['friction force'][0] + forces['pressure force'][0]
    cs = forces['friction force'][1] + forces['pressure force'][1]
    cl[plot] = forces['friction force'][2] + forces['pressure force'][2]
    print cd, cs, cl[plot]
    plot_experiment(ax,plot_list[plot]['exp_data_file'])
    ax.legend(loc='upper right', shadow=True)
    legend = ax.legend(loc='best', scatterpoints=1, numpoints=1, shadow=False, fontsize=16)
    legend.get_frame().set_facecolor('white')
    ax.tick_params(axis='x', pad=16)
    for tick in ax.xaxis.get_major_ticks():
        tick.label.set_fontsize(18)
        tick.label.set_fontweight('normal')
        tick.label.set_color('#E48B25')
    for tick in ax.yaxis.get_major_ticks():
        tick.label.set_fontsize(18)
        tick.label.set_fontweight('normal')
        tick.label.set_color('#E48B25')
    for tick in ax2.yaxis.get_major_ticks():
        tick.label2.set_fontsize(18)
        tick.label2.set_fontweight('normal')
        tick.label2.set_color('#E48B25')
    pnum=pnum+1

fig.subplots_adjust(hspace=0.3)
fig.subplots_adjust(wspace=0.4)
fig.savefig("images/Caradonna_Tung_CP_profile.png")
pbar.complete()
show()

from IPython.display import FileLink, display
display(FileLink('images/Caradonna_Tung_CP_profile.png'))
ipynb/CARADONNA_TUNG/Cara Tung Rotor.ipynb
zenotech/zPost
bsd-3-clause
Convergence
from zutil.post import residual_plot, get_case_report
residual_plot(get_case_report(case_name))
show()

if make_movie:
    from zutil.post import get_case_root
    from zutil.post import ProgressBar
    pb = ProgressBar()
    vtu = PVDReader( FileName=[get_case_root(case_name,num_procs)+'.pvd'] )
    vtu.UpdatePipeline()
    pb += 20
    merged = CleantoGrid(Input=vtu)
    merged.UpdatePipeline()
    pb += 20
    point_data = CellDatatoPointData(Input=merged)
    point_data.PassCellData = 0
    point_data.PieceInvariant = 1
    point_data.UpdatePipeline()
    pb.complete()

if make_movie:
    # from paraview.vtk.dataset_adapter import DataSet
    from vtk.numpy_interface.dataset_adapter import DataSet
    stream = StreamTracer(Input=point_data)
    stream.SeedType = "Point Source"
    stream.SeedType.Center = [49673.0, 58826.0, 1120.0]
    stream.SeedType.Radius = 1
    stream.SeedType.NumberOfPoints = 1
    stream.Vectors = ['POINTS', 'V']
    stream.MaximumStreamlineLength = 135800.00000000035
    # IntegrationDirection can be FORWARD, BACKWARD, or BOTH
    stream.IntegrationDirection = 'BACKWARD'
    stream.UpdatePipeline()
    stream_client = servermanager.Fetch(stream)
    upstream_data = DataSet(stream_client)
    stream.IntegrationDirection = 'FORWARD'
    stream.UpdatePipeline()
    stream_client = servermanager.Fetch(stream)
    downstream_data = DataSet(stream_client)

if make_movie:
    def vtk_show(renderer, w=100, h=100):
        """
        Takes vtkRenderer instance and returns an IPython Image with the rendering.
        """
        from vtk import vtkRenderWindow,vtkWindowToImageFilter,vtkPNGWriter
        renderWindow = vtkRenderWindow()
        renderWindow.SetOffScreenRendering(1)
        renderWindow.AddRenderer(renderer)
        renderWindow.SetSize(w, h)
        renderWindow.Render()
        windowToImageFilter = vtkWindowToImageFilter()
        windowToImageFilter.SetInput(renderWindow)
        windowToImageFilter.Update()
        writer = vtkPNGWriter()
        writer.SetWriteToMemory(1)
        writer.SetInputConnection(windowToImageFilter.GetOutputPort())
        writer.Write()
        data = str(buffer(writer.GetResult()))
        from IPython.display import Image
        return Image(data)

if make_movie:
    #print stream_data.GetPoint(0)
    from zutil.post import ProgressBar
    pb = ProgressBar()
    wall = PVDReader( FileName=[get_case_root(case_name,num_procs)+'_wall.pvd'] )
    wall.UpdatePipeline()
    merged = CleantoGrid(Input=wall)
    merged.UpdatePipeline()
    point_data = CellDatatoPointData(Input=merged)
    point_data.PassCellData = 0
    point_data.PieceInvariant = 1
    point_data.UpdatePipeline()
    total_pts = 100 # stream_data.GetNumberOfPoints()
    scene = GetAnimationScene()
    scene.EndTime = total_pts
    scene.PlayMode = 'Snap To TimeSteps'
    scene.AnimationTime = 0
    a1_yplus_PVLookupTable = GetLookupTableForArray( "yplus", 1,
        RGBPoints=[96.69050598144531, 0.23, 0.299, 0.754, 24391.206581115723, 0.865, 0.865, 0.865, 48685.72265625, 0.706, 0.016, 0.15],
        VectorMode='Magnitude', NanColor=[0.25, 0.0, 0.0], ColorSpace='Diverging', ScalarRangeInitialized=1.0 )
    a1_yplus_PiecewiseFunction = CreatePiecewiseFunction( Points=[96.69050598144531, 0.0, 0.5, 0.0, 48685.72265625, 1.0, 0.5, 0.0] )
    drepr = Show()  # GetDisplayProperties( Contour1 )
    drepr.EdgeColor = [0.0, 0.0, 0.5000076295109483]
    drepr.SelectionPointFieldDataArrayName = 'yplus'
    #DataRepresentation4.SelectionCellFieldDataArrayName = 'eddy'
    drepr.ColorArrayName = ('POINT_DATA', 'yplus')
    drepr.LookupTable = a1_yplus_PVLookupTable
    drepr.ScaleFactor = 0.08385616838932038
    drepr.Interpolation = 'Flat'
    drepr.ScalarOpacityFunction = a1_yplus_PiecewiseFunction
    view = GetRenderView()
    if not view:
        # When using the ParaView UI, the View will be present, not otherwise.
        view = CreateRenderView()
    scene.ViewModules = [view]
    view.CameraViewUp = [0.0, 0.0, 1.0]
    view.CameraPosition = list(upstream_data.GetPoint(0))
    view.CameraFocalPoint = list(upstream_data.GetPoint(1))
    view.CameraParallelScale = 0.499418869125992
    view.CenterOfRotation = [49673.0, 58826.0, 1120.0]
    view.CenterAxesVisibility = 0
    view.ViewSize = [3840,2160]
    view.LightSwitch=0
    view.UseLight = 1
    #RenderView2.SetOffScreenRendering(1)
    #Render()
    pb+=20
    camera = view.GetActiveCamera()
    key_frames = []
    for p in range(total_pts):
        pt = stream_data.GetPoint(p)
        #print pt
        frame = CameraKeyFrame()
        frame.Position = list(pt)
        frame.ViewUp = [0.0, 0.0, 1.0]
        frame.FocalPoint = camera.GetFocalPoint()
        frame.KeyTime = p/total_pts
        key_frames.append(frame)
    pb+=20
    cue = GetCameraTrack()
    cue.Mode = 'Interpolate Camera'
    cue.AnimatedProxy = view
    cue.KeyFrames = key_frames
    TimeAnimationCue4 = GetTimeTrack()
    scene.Cues = [cue]
    for t in range(total_pts-1):
        print 'Generating: ' + str(t)
        pt = stream_data.GetPoint(t)
        view.CameraPosition = list(pt)
        view.CameraFocalPoint = list(stream_data.GetPoint(t+1))
        #vtk_show(view.GetRenderer())
        Render()
        #scene.AnimationTime = t
        WriteImage('movies/caradonna_'+str(t)+'.png')
    pb.complete()
ipynb/CARADONNA_TUNG/Cara Tung Rotor.ipynb
zenotech/zPost
bsd-3-clause
Check validation and regression
if (validate):
    def validate_data(name, value, valid_lower, valid_upper):
        if ((value < valid_lower) or (value > valid_upper)):
            print 'INVALID: ' + name + ' %.4f '%valid_lower + '%.4f '%value + ' %.4f'%valid_upper
            return False
        else:
            return True
    valid = validate_data('C_L[0.50]', cl[0.50], valid_lower_cl_0p50, valid_upper_cl_0p50) and valid
    valid = validate_data('C_L[0.68]', cl[0.68], valid_lower_cl_0p68, valid_upper_cl_0p68) and valid
    valid = validate_data('C_L[0.80]', cl[0.80], valid_lower_cl_0p80, valid_upper_cl_0p80) and valid
    valid = validate_data('C_L[0.89]', cl[0.89], valid_lower_cl_0p89, valid_upper_cl_0p89) and valid
    valid = validate_data('C_L[0.96]', cl[0.96], valid_lower_cl_0p96, valid_upper_cl_0p96) and valid
    if (valid):
        print 'VALIDATION = PASS :-)'
    else:
        print 'VALIDATION = FAIL :-('

if (regression):
    import pandas as pd
    pd.options.display.float_format = '{:,.6f}'.format
    print 'REGRESSION DATA'
    regress = {'version' : ['v0.0', 'v0.1' , 'CURRENT'],
               'C_L[0.50]' : [2.217000, 2.217000, cl[0.50]],
               'C_L[0.68]' : [0.497464, 0.498132, cl[0.68]],
               'C_L[0.80]' : [0.024460, 0.024495, cl[0.80]],
               'C_L[0.89]' : [0.014094, 0.014099, cl[0.89]],
               'C_L[0.96]' : [0.010366, 0.010396, cl[0.96]]}
    regression_table = pd.DataFrame(regress, columns=['version','C_L[0.50]','C_L[0.68]','C_L[0.80]','C_L[0.89]','C_L[0.96]'])
    print regression_table
ipynb/CARADONNA_TUNG/Cara Tung Rotor.ipynb
zenotech/zPost
bsd-3-clause
Cleaning up
if remote_data: #print 'Disconnecting from remote paraview server connection' Disconnect()
ipynb/CARADONNA_TUNG/Cara Tung Rotor.ipynb
zenotech/zPost
bsd-3-clause
Isolate X and y:
X = training_data.drop(['Formation', 'Well Name', 'Depth','Facies'], axis=1).values y = training_data['Facies'].values
LiamLearn/K-fold_CV_F1_score__MATT.ipynb
esa-as/2016-ml-contest
apache-2.0
We want the well names to use as groups in the k-fold analysis, so we'll get those too:
wells = training_data["Well Name"].values
LiamLearn/K-fold_CV_F1_score__MATT.ipynb
esa-as/2016-ml-contest
apache-2.0
Now we train as normal, but LeaveOneGroupOut gives us the appropriate indices from X and y to test against one well at a time:
from sklearn.svm import SVC from sklearn.model_selection import LeaveOneGroupOut logo = LeaveOneGroupOut() for train, test in logo.split(X, y, groups=wells): well_name = wells[test[0]] score = SVC().fit(X[train], y[train]).score(X[test], y[test]) print("{:>20s} {:.3f}".format(well_name, score))
LiamLearn/K-fold_CV_F1_score__MATT.ipynb
esa-as/2016-ml-contest
apache-2.0
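Note that score() above reports mean accuracy. Since this notebook is about an F1 score, a per-well F1 can be computed in the same loop with scikit-learn's f1_score; this is a sketch rather than the contest's official metric, and the 'weighted' averaging choice is an assumption.
from sklearn.metrics import f1_score
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut

logo = LeaveOneGroupOut()

for train, test in logo.split(X, y, groups=wells):
    well_name = wells[test[0]]
    # Fit on all other wells and predict the held-out well
    y_pred = SVC().fit(X[train], y[train]).predict(X[test])
    # Support-weighted F1 over the facies classes (averaging choice is an assumption)
    f1 = f1_score(y[test], y_pred, average='weighted')
    print("{:>20s} {:.3f}".format(well_name, f1))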
Experimental conditions The experimental conditions are as follows. Artificial-data parameters: number of samples (n_samples): [100]; total noise scale: $c=0.5, 1.0$; confounder scale: $c/\sqrt{Q}$; data observation-noise distribution (data_noise_type): ['laplace', 'uniform']; number of confounders (n_confs or $Q$): [10]; observation-noise scale of the data: fixed at 3; distribution of the regression coefficient: uniform(-1.5, 1.5). Estimation hyperparameters: confounder correlation coefficients (L_cov_21s): [[-.9, -.7, -.5, -.3, 0, .3, .5, .7, .9]]; model observation-noise distribution (model_noise_type): ['gg']
conds = [ { 'totalnoise': totalnoise, 'L_cov_21s': L_cov_21s, 'n_samples': n_samples, 'n_confs': n_confs, 'data_noise_type': data_noise_type, 'model_noise_type': model_noise_type, 'b21_dist': b21_dist } for totalnoise in [0.5, 1.0] for L_cov_21s in [[-.9, -.7, -.5, -.3, 0, .3, .5, .7, .9]] for n_samples in [100] for n_confs in [10] # [1, 3, 5, 10] for data_noise_type in ['laplace', 'uniform'] for model_noise_type in ['gg'] for b21_dist in ['uniform(-1.5, 1.5)'] ] print('{} conditions'.format(len(conds)))
doc/notebook/expr/20160915/20160902-eval-bml.ipynb
taku-y/bmlingam
mit
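A brief note on the confounder scale above (my reading of the setup, not stated explicitly in the original): dividing by $\sqrt{Q}$ keeps the total confounding contribution at scale $c$ independent of the number of confounders, since for roughly independent unit-variance confounders $f_q$,
$$\mathrm{Var}\left(\sum_{q=1}^{Q} \frac{c}{\sqrt{Q}} f_q\right) = \sum_{q=1}^{Q} \frac{c^2}{Q}\,\mathrm{Var}(f_q) = c^2 .$$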
Generating the artificial data We define a function that generates artificial data according to the experimental conditions.
def gen_artificial_data_given_cond(ix_trial, cond):
    # Set the artificial-data generation parameters based on the experimental condition
    n_confs = cond['n_confs']
    gen_data_params = deepcopy(gen_data_params_default)
    gen_data_params.n_samples = cond['n_samples']
    gen_data_params.conf_dist = [['all'] for _ in range(n_confs)]
    gen_data_params.e1_dist = [cond['data_noise_type']]
    gen_data_params.e2_dist = [cond['data_noise_type']]
    gen_data_params.b21_dist = cond['b21_dist']
    noise_scale = cond['totalnoise'] / np.sqrt(n_confs)
    gen_data_params.f1_coef = [noise_scale for _ in range(n_confs)]
    gen_data_params.f2_coef = [noise_scale for _ in range(n_confs)]

    # Generate the artificial data
    gen_data_params.seed = ix_trial
    data = gen_artificial_data(gen_data_params)

    return data

# Baseline parameters for artificial-data generation
gen_data_params_default = GenDataParams(
    n_samples=100,
    b21_dist='r2intervals',
    mu1_dist='randn',
    mu2_dist='randn',
    f1_scale=1.0,
    f2_scale=1.0,
    f1_coef=['r2intervals', 'r2intervals', 'r2intervals'],
    f2_coef=['r2intervals', 'r2intervals', 'r2intervals'],
    conf_dist=[['all'], ['all'], ['all']],
    e1_std=3.0,
    e2_std=3.0,
    e1_dist=['laplace'],
    e2_dist=['laplace'],
    seed=0
)
doc/notebook/expr/20160915/20160902-eval-bml.ipynb
taku-y/bmlingam
mit
An example run:
data = gen_artificial_data_given_cond(0, conds[0]) xs = data['xs'] plt.figure(figsize=(3, 3)) plt.scatter(xs[:, 0], xs[:, 1]) data = gen_artificial_data_given_cond(0, { 'totalnoise': 3 * np.sqrt(1), 'n_samples': 10000, 'n_confs': 1, 'data_noise_type': 'laplace', 'b21_dist': 'uniform(-1.5, 1.5)' } ) xs = data['xs'] plt.figure(figsize=(3, 3)) plt.scatter(xs[:, 0], xs[:, 1])
doc/notebook/expr/20160915/20160902-eval-bml.ipynb
taku-y/bmlingam
mit
Definition of a trial A trial is the process of running causal inference and evaluating its accuracy on one generated artificial dataset. For each experimental condition, 100 trials are executed.
n_trials_per_cond = 100
doc/notebook/expr/20160915/20160902-eval-bml.ipynb
taku-y/bmlingam
mit
Baseline values of the experimental-condition parameters
# Causal-inference parameters
infer_params = InferParams(
    seed=0,
    standardize=True,
    subtract_mu_reg=False,
    fix_mu_zero=True,
    prior_var_mu='auto',
    prior_scale='uniform',
    max_c=1.0,
    n_mc_samples=10000,
    dist_noise='laplace',
    df_indvdl=8.0,
    prior_indvdls=['t'],
    cs=[0.4, 0.6, 0.8],
    scale_coeff=2. / 3.,
    L_cov_21s=[-0.8, -0.6, -0.4, 0.4, 0.6, 0.8],
    betas_indvdl=None, # [0.25, 0.5, 0.75, 1.],
    betas_noise=[0.25, 0.5, 1.0, 3.0],
    causalities=[[1, 2], [2, 1]],
    sampling_mode='cache_mp4'
)

# Regression-coefficient (MCMC) estimation parameters
mcmc_params = MCMCParams(
    n_burn=1000,
    n_mcmc_samples=1000
)
doc/notebook/expr/20160915/20160902-eval-bml.ipynb
taku-y/bmlingam
mit
Program Generating trial identifiers An identifier for each trial is generated from the following information: trial index (ix_trial), number of samples (n_samples), number of confounders (n_confs), type of the artificial-data observation noise (data_noise_type), type of the model observation noise (model_noise_type), confounder correlation coefficients (L_cov_21s), total noise scale (totalnoise), and regression-coefficient distribution (b21_dist). The trial identifier is used when storing estimation results in the data frame.
def make_id(ix_trial, n_samples, n_confs, data_noise_type, model_noise_type,
            L_cov_21s, totalnoise, b21_dist):
    L_cov_21s_ = ' '.join([str(v) for v in L_cov_21s])
    return hashlib.md5(
        str((L_cov_21s_, ix_trial, n_samples, n_confs, data_noise_type,
             model_noise_type, totalnoise, b21_dist.replace(' ', ''))).encode('utf-8')
    ).hexdigest()

# Test
print(make_id(55, 100, 12, 'all', 'gg', [1, 2, 3], 0.3, 'uniform(-1.5, 1.5)'))
print(make_id(55, 100, 12, 'all', 'gg', [1, 2, 3], 0.3, 'uniform(-1.5, 1.5)')) # Whitespace is ignored
doc/notebook/expr/20160915/20160902-eval-bml.ipynb
taku-y/bmlingam
mit
Appending a trial result to the data frame A trial result is appended to the data frame. If the argument df is None, a new data frame is created.
def add_result_to_df(df, result):
    if df is None:
        return pd.DataFrame({k: [v] for k, v in result.items()})
    else:
        return df.append(result, ignore_index=True)

# Test
result1 = {'col1': 10, 'col2': 20}
result2 = {'col1': 30, 'col2': -10}

df1 = add_result_to_df(None, result1)
print('--- df1 ---')
print(df1)

df2 = add_result_to_df(df1, result2)
print('--- df2 ---')
print(df2)
doc/notebook/expr/20160915/20160902-eval-bml.ipynb
taku-y/bmlingam
mit
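A side note on add_result_to_df: DataFrame.append was deprecated in pandas 1.4 and removed in pandas 2.0, so on a current pandas the same behaviour can be obtained with pd.concat. The following is a sketch of an equivalent helper, not part of the original experiment code.
import pandas as pd

def add_result_to_df_concat(df, result):
    # Wrap the single result dict in a one-row DataFrame
    row = pd.DataFrame({k: [v] for k, v in result.items()})
    if df is None:
        return row
    # pd.concat replaces the removed DataFrame.append
    return pd.concat([df, row], ignore_index=True)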
Checking for a trial identifier in the data frame This is used to avoid recomputing results that have already been computed.
def df_exist_result_id(df, result_id):
    if df is not None:
        return result_id in np.array(df['result_id'])
    else:
        # No data frame yet, so no stored results
        return False
doc/notebook/expr/20160915/20160902-eval-bml.ipynb
taku-y/bmlingam
mit
Getting the data frame We define functions that save and load the data frame. If the file does not exist, None is returned.
def load_df(df_file): if os.path.exists(df_file): return load_pklz(df_file) else: return None def save_df(df_file, df): save_pklz(df_file, df)
doc/notebook/expr/20160915/20160902-eval-bml.ipynb
taku-y/bmlingam
mit
Running a trial Given a trial index and an experimental condition, this runs the trial and returns the estimation results.
def _estimate_hparams(xs, infer_params):
    assert(type(infer_params) == InferParams)

    sampling_mode = infer_params.sampling_mode
    hparamss = define_hparam_searchspace(infer_params)
    results = find_best_model(xs, hparamss, sampling_mode)
    hparams_best = results[0]
    bf = results[2] - results[5] # Bayes factor

    return hparams_best, bf

def run_trial(ix_trial, cond):
    # Generate the artificial data
    data = gen_artificial_data_given_cond(ix_trial, cond)
    b_true = data['b']
    causality_true = data['causality_true']

    # Causal inference
    t = time.time()
    infer_params.L_cov_21s = cond['L_cov_21s']
    infer_params.dist_noise = cond['model_noise_type']
    hparams, bf = _estimate_hparams(data['xs'], infer_params)
    causality_est = hparams['causality']
    time_causal_inference = time.time() - t

    # Regression-coefficient estimation
    t = time.time()
    trace = do_mcmc_bmlingam(data['xs'], hparams, mcmc_params)
    b_post = np.mean(trace['b'])
    time_posterior_inference = time.time() - t

    return {
        'causality_true': causality_true,
        'regcoef_true': b_true,
        'n_samples': cond['n_samples'],
        'n_confs': cond['n_confs'],
        'data_noise_type': cond['data_noise_type'],
        'model_noise_type': cond['model_noise_type'],
        'causality_est': causality_est,
        'correct_rate': (1.0 if causality_est == causality_true else 0.0),
        'error_reg_coef': np.abs(b_post - b_true),
        'regcoef_est': b_post,
        'log_bf': 2 * bf, # 2 * log(p(M) / p(M_rev)), so this value is always positive.
        'time_causal_inference': time_causal_inference,
        'time_posterior_inference': time_posterior_inference,
        'L_cov_21s': str(cond['L_cov_21s']),
        'n_mc_samples': infer_params.n_mc_samples,
        'confs_absmean': np.mean(np.abs(data['confs'].ravel())),
        'totalnoise': cond['totalnoise']
    }
doc/notebook/expr/20160915/20160902-eval-bml.ipynb
taku-y/bmlingam
mit
Main program
def run_expr(conds):
    # Data frame file name
    data_dir = '.'
    df_file = data_dir + '/20160902-eval-bml-results.pklz'

    # If the file exists, resume from where the previous run left off.
    df = load_df(df_file)

    # Loop over experimental conditions
    n_skip = 0
    for cond in conds:
        print(cond)

        # Loop over trials
        for ix_trial in range(n_trials_per_cond):
            # Identifier
            result_id = make_id(ix_trial, **cond)

            # Check whether the result has already been stored in the data frame.
            if df_exist_result_id(df, result_id):
                n_skip += 1
            else:
                # result is a dict containing the trial results.
                # The trial index ix_trial is used as the random seed.
                result = run_trial(ix_trial, cond)
                result.update({'result_id': result_id})
                df = add_result_to_df(df, result)
                save_df(df_file, df)

    print('Number of skipped trials = {}'.format(n_skip))

    return df
doc/notebook/expr/20160915/20160902-eval-bml.ipynb
taku-y/bmlingam
mit
Running the main program
df = run_expr(conds)
doc/notebook/expr/20160915/20160902-eval-bml.ipynb
taku-y/bmlingam
mit
Checking the results
import pandas as pd

# Data frame file name
data_dir = '.'
df_file = data_dir + '/20160902-eval-bml-results.pklz'
df = load_pklz(df_file)

sg = df.groupby(['model_noise_type', 'data_noise_type', 'n_confs', 'totalnoise'])
sg1 = sg['correct_rate'].mean()
sg2 = sg['correct_rate'].count()
sg3 = sg['time_causal_inference'].mean()

pd.concat(
    {
        'correct_rate': sg1,
        'count': sg2,
        'time': sg3,
    },
    axis=1
)
doc/notebook/expr/20160915/20160902-eval-bml.ipynb
taku-y/bmlingam
mit
Magnitude of the regression coefficient and the Bayes factor We plot $2\log(BF)$ on the horizontal axis and $|b_{21}|$ (or $|b_{12}|$) on the vertical axis. When $2\log(BF)$ is 10 or more, the true regression coefficient is large in absolute value and we can say there is a causal effect, but below that the regression coefficient can be either large or small, so judging the presence or absence of a causal effect from the Bayes factor alone appears difficult. A comparison between models with and without a causal effect is probably also needed.
data = np.array(df[['regcoef_true', 'log_bf', 'totalnoise', 'correct_rate']]) ixs1 = np.where(data[:, 3] == 1.0)[0] ixs2 = np.where(data[:, 3] == 0.0)[0] plt.scatter(data[ixs1, 1], np.abs(data[ixs1, 0]), marker='o', s=20, c='r', label='Success') plt.scatter(data[ixs2, 1], np.abs(data[ixs2, 0]), marker='*', s=70, c='b', label='Failure') plt.ylabel('|b|') plt.xlabel('2 * log(bayes_factor)') plt.legend(fontsize=15, loc=4, shadow=True, frameon=True, framealpha=1.0)
doc/notebook/expr/20160915/20160902-eval-bml.ipynb
taku-y/bmlingam
mit
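The threshold of 10 mentioned above corresponds to the commonly cited Kass and Raftery (1995) scale for $2\ln(BF)$. The small helper below is added here only for reference (it is not part of the original analysis, and it assumes the stored log_bf column is on that scale).
def bf_evidence(two_log_bf):
    """Kass & Raftery (1995) categories for 2*ln(Bayes factor)."""
    if two_log_bf < 2:
        return 'not worth more than a bare mention'
    elif two_log_bf < 6:
        return 'positive'
    elif two_log_bf < 10:
        return 'strong'
    else:
        return 'very strong'

# Tag each trial with an evidence category and summarize
print(df['log_bf'].apply(bf_evidence).value_counts())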
Prediction accuracy of the regression coefficient For the experiments in which the artificial-data regression coefficients were drawn from U(-1.5, 1.5), we plot the true regression coefficient on the horizontal axis and the posterior mean on the vertical axis. When the true value is small, the posterior mean is also small regardless of whether the causal-direction prediction is correct (red) or incorrect (blue). On the other hand, when the prediction is correct the coefficient is recovered close to its true value, whereas when it is incorrect the coefficient tends to be underestimated.
data = np.array(df[['regcoef_true', 'regcoef_est', 'correct_rate']]) ixs1 = np.where(data[:, 2] == 1)[0] ixs2 = np.where(data[:, 2] == 0)[0] assert(len(ixs1) + len(ixs2) == len(data)) plt.figure(figsize=(5, 5)) plt.scatter(data[ixs1, 0], data[ixs1, 1], c='r', label='Correct') plt.scatter(data[ixs2, 0], data[ixs2, 1], c='b', label='Incorrect') plt.plot([-3, 3], [-3, 3]) plt.xlim(-3, 3) plt.ylim(-3, 3) plt.gca().set_aspect('equal') plt.xlabel('Reg coef (true)') plt.ylabel('Reg coef (posterior mean)')
doc/notebook/expr/20160915/20160902-eval-bml.ipynb
taku-y/bmlingam
mit
Export to EPS
data = np.array(df[['regcoef_true', 'regcoef_est', 'correct_rate']]) ixs1 = np.where(data[:, 2] == 1)[0] ixs2 = np.where(data[:, 2] == 0)[0] assert(len(ixs1) + len(ixs2) == len(data)) plt.figure(figsize=(5, 5)) plt.scatter(data[ixs1, 0], data[ixs1, 1], c='r', label='Correct') plt.plot([-3, 3], [-3, 3]) plt.xlim(-3, 3) plt.ylim(-3, 3) plt.gca().set_aspect('equal') plt.xlabel('Reg coef (true)') plt.ylabel('Reg coef (posterior mean)') plt.title('Correct inference') plt.savefig('20160905-eval-bml-correct.eps') plt.figure(figsize=(5, 5)) plt.scatter(data[ixs2, 0], data[ixs2, 1], c='b', label='Incorrect') plt.plot([-3, 3], [-3, 3]) plt.xlim(-3, 3) plt.ylim(-3, 3) plt.gca().set_aspect('equal') plt.xlabel('Reg coef (true)') plt.ylabel('Reg coef (posterior mean)') plt.title('Incorrect inference') plt.savefig('20160905-eval-bml-incorrect.eps')
doc/notebook/expr/20160915/20160902-eval-bml.ipynb
taku-y/bmlingam
mit
Naive implementation of closest_pair
def naive_closest_pair(points): best, p, q = np.inf, None, None n = len(points) for i in range(n): for j in range(i + 1, n): d = euclid_distance(points[i], points[j]) if d < best: best, p, q = d, points[i], points[j] return best, p, q
Day 11 - Closest pair of points.ipynb
AlexandruValeanu/365-days-of-algorithms
gpl-3.0
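The naive version above is $O(n^2)$, while the divide-and-conquer closest_pair used in the runs below is $O(n \log n)$. A rough timing comparison makes the gap visible; this is a sketch assuming closest_pair, naive_closest_pair, and euclid_distance from earlier cells are in scope.
import time

N = 2000
pts = list(zip(np.random.rand(N), np.random.rand(N)))

t0 = time.perf_counter()
d_naive, *_ = naive_closest_pair(pts)
t1 = time.perf_counter()
d_fast, *_ = closest_pair(pts)
t2 = time.perf_counter()

# Both algorithms must agree on the minimum distance
assert abs(d_naive - d_fast) < 1e-12
print("naive: {:.3f}s, divide-and-conquer: {:.3f}s".format(t1 - t0, t2 - t1))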
Draw points (with closest-pair marked as red)
def draw_points(points, p, q): xs, ys = zip(*points) plt.figure(figsize=(10,10)) plt.scatter(xs, ys) plt.scatter([p[0], q[0]], [p[1], q[1]], s=100, c='red') plt.plot([p[0], q[0]], [p[1], q[1]], 'k', c='red') plt.show()
Day 11 - Closest pair of points.ipynb
AlexandruValeanu/365-days-of-algorithms
gpl-3.0
Run(s)
points = [(26, 77), (12, 37), (14, 18), (19, 96), (71, 95), (91, 9), (98, 43), (66, 77), (2, 75), (94, 91)] xs, ys = zip(*points) d, p, q = closest_pair(points) assert d == naive_closest_pair(points)[0] print("The closest pair of points is ({0}, {1}) at distance {2}".format(p, q, d)) draw_points(points, p, q) N = 10 x = np.random.rand(N) * 100 y = np.random.rand(N) * 100 points = list(zip(x, y)) d, p, q = closest_pair(points) assert d == naive_closest_pair(points)[0] print("The closest pair of points is ({0}, {1}) at distance {2}".format(p, q, d)) draw_points(points, p, q) N = 20 x = np.random.randint(100, size=N) y = np.random.randint(100, size=N) points = list(zip(x, y)) d, p, q = closest_pair(points) assert d == naive_closest_pair(points)[0] print("The closest pair of points is ({0}, {1}) at distance {2}".format(p, q, d)) draw_points(points, p, q) N = 20 x = np.random.rand(N) y = np.random.rand(N) points = list(zip(x, y)) d, p, q = closest_pair(points) assert d == naive_closest_pair(points)[0] print("The closest pair of points is ({0}, {1}) at distance {2}".format(p, q, d)) draw_points(points, p, q)
Day 11 - Closest pair of points.ipynb
AlexandruValeanu/365-days-of-algorithms
gpl-3.0
To parse the data file, just read 3 lines at a time to build individual vectors. Each module's data has a known number of measurements (82), so the list of vectors can be split into groups and assembled into Module objects.
from collections import namedtuple from statistics import mean,stdev Vector = namedtuple('Vector', ['x', 'y', 'z', 'label']) def parse_vectors(lines): vecs = [] lines_iter = iter(lines) label = "" def tokenize(l): nonlocal label l = l.split('#')[-1] toks = [t for t in l.split('\t') if t] if len(toks) > 1: label = toks[1].strip() return toks[0] while lines_iter: try: x = float(tokenize(next(lines_iter))) y = float(tokenize(next(lines_iter))) z = float(tokenize(next(lines_iter))) vecs.append(Vector(x,y,z,label)) except IndexError: pass except StopIteration: break return vecs vecs = parse_vectors(lines) class Module: n = 82 def __init__(self, vecs): self.hdi_bond_pads = vecs[0:32] # 32 measurements self.address_pads = vecs[32:36] # 4 measurements self.tbm_on_hdi = vecs[36:44] # 8 measurements self.tbm_on_tbm = vecs[44:48] # 4 measurements self.hdi_hv_pad = vecs[48] self.roc_bond_pads = vecs[49:81] # 32 measurements self.roc_hv_pad = vecs[81] def parse_modules(vectors): n = Module.n num_modules = len(vectors)//n return [Module(vectors[i*n:(i+1)*n]) for i in range(num_modules)] modules = parse_modules(vecs)
legacy/Potting Data Analysis.ipynb
cfangmeier/UNL-Gantry-Encapsulation-Monitoring
mit
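For illustration, here is a hypothetical three-line fragment fed through parse_vectors. The exact file layout is an assumption inferred from the parser: a value after an optional '#', with a tab-separated label on the first line of each measurement.
sample_lines = [
    "# 12.345\tHDI bond pad 1\n",  # x value with a tab-separated label after the '#'
    "67.890\n",                     # y value
    "0.123\n",                      # z value
]
print(parse_vectors(sample_lines))
# [Vector(x=12.345, y=67.89, z=0.123, label='HDI bond pad 1')]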
Now that the potting data has been successfully loaded into an appropriate data structure, some plots can be done. First, let's look at the locations of the potting positions on the TBM, on both the TBM and HDI sides.
tbm_horiz = [] tbm_verti = [] hdi_horiz = [] hdi_verti = [] plt.figure() for module in modules: tbm_horiz.append(module.tbm_on_tbm[1][0]-module.tbm_on_tbm[0][0]) tbm_horiz.append(module.tbm_on_tbm[2][0]-module.tbm_on_tbm[3][0]) tbm_verti.append(module.tbm_on_tbm[3][1]-module.tbm_on_tbm[0][1]) tbm_verti.append(module.tbm_on_tbm[2][1]-module.tbm_on_tbm[1][1]) hdi_horiz.append(module.tbm_on_hdi[1][0]-module.tbm_on_hdi[0][0]) hdi_horiz.append(module.tbm_on_hdi[4][0]-module.tbm_on_hdi[5][0]) hdi_verti.append(module.tbm_on_hdi[3][1]-module.tbm_on_hdi[2][1]) hdi_verti.append(module.tbm_on_hdi[6][1]-module.tbm_on_hdi[7][1]) xs = [] ys = [] offset_x, offset_y, *_ = module.hdi_bond_pads[0] for i,point in enumerate(module.tbm_on_hdi): xs.append(point[0]-offset_x) ys.append(point[1]-offset_y) for i,point in enumerate(module.tbm_on_tbm): xs.append(point[0]-offset_x) ys.append(point[1]-offset_y) plt.plot(xs,ys,'.') plt.xlabel("X(mm)") plt.ylabel("Y(mm)") print("Mean TBM_TBM X-Trace Length",mean(tbm_horiz),"+-",stdev(tbm_horiz),"mm") print("Mean TBM_TBM Y-Trace Length",mean(tbm_verti),"+-",stdev(tbm_verti),"mm") print("Mean TBM_HDI X-Trace Length",mean(hdi_horiz),"+-",stdev(hdi_horiz),"mm") print("Mean TBM_HDI Y-Trace Length",mean(hdi_verti),"+-",stdev(hdi_verti),"mm")
legacy/Potting Data Analysis.ipynb
cfangmeier/UNL-Gantry-Encapsulation-Monitoring
mit
So now we know the average and standard deviation of the trace lengths on the TBM. Good. Now let's examine how flat the modules are overall by looking at the points for the HDI and BBM bond pads in the YZ plane.
fig, axes = plt.subplots(nrows=2, ncols=2, figsize=(12, 5)) for i, module in enumerate(modules): ys = [] zs = [] _, offset_y, offset_z, *_ = module.hdi_bond_pads[0] for bond_pad in module.hdi_bond_pads[:16]: ys.append(bond_pad[1]-offset_y) zs.append(bond_pad[2]-offset_z) axes[0][0].plot(ys,zs,'.', label=str(i)) ys.clear() zs.clear() _, offset_y, offset_z, *_ = module.hdi_bond_pads[16] for bond_pad in module.hdi_bond_pads[16:]: ys.append(bond_pad[1]-offset_y) zs.append(bond_pad[2]-offset_z) axes[0][1].plot(ys,zs,'.', label=str(i)) ys.clear() zs.clear() _, offset_y, offset_z, *_ = module.roc_bond_pads[0] for bond_pad in module.roc_bond_pads[:16]: ys.append(bond_pad[1]-offset_y) zs.append(bond_pad[2]-offset_z) axes[1][0].plot(ys,zs,'.', label=str(i)) ys.clear() zs.clear() _, offset_y, offset_z, *_ = module.roc_bond_pads[16] for bond_pad in module.roc_bond_pads[16:]: ys.append(bond_pad[1]-offset_y) zs.append(bond_pad[2]-offset_z) axes[1][1].plot(ys,zs,'.', label=str(i)) axes[0][0].set_ylabel('Z(mm)') axes[1][0].set_ylabel('Z(mm)') axes[1][0].set_xlabel('Y(mm)') axes[1][1].set_xlabel('Y(mm)') axes[0][0].set_ylim((-.20,.20)) axes[0][0].set_title("HDI Pads left side") axes[0][1].set_ylim((-.20,.20)) axes[0][1].set_title("HDI Pads right side") axes[1][0].set_ylim((-.20,.20)) axes[1][0].set_title("BBM Pads left side") axes[1][1].set_ylim((-.20,.20)) axes[1][1].set_title("BBM Pads right side")
legacy/Potting Data Analysis.ipynb
cfangmeier/UNL-Gantry-Encapsulation-Monitoring
mit
HDI/BBM Offset Data There is also data available for the center/rotation of the HDI/BBM. We can try to measure the offsets. The raw data files are missing some rows, which would throw off the parser by introducing a misalignment of the HDI/BBM pairing, so I added the missing rows by hand.
from IPython.display import Markdown, display_markdown with open("./orientationData.txt") as f: vecs = parse_vectors(f.readlines()) pairs = [] NULL = set([0]) for i in range(len(vecs)//16): for j in range(8): hdi = vecs[i*16+j] bbm = vecs[i*16+j+8] pair = (hdi,bbm) if set(hdi[:3]) != NULL and set(bbm[:3]) != NULL: pairs.append(pair) deltas = [] angles = [] ss = ["| | time stamp | delta ($\mu$m) | rotation (degrees)|", "|--:|------------|---------------:|------------------:|"] for i,pair in enumerate(pairs): dx = pair[0].x - pair[1].x dy = pair[0].y - pair[1].y dt = abs(pair[0].z - pair[1].z) delta = np.sqrt(dx**2 + dy**2) fmt = "|{}|{}|{:03f}|{:03f}|" ss.append(fmt.format(i, pair[0].label[:-14], delta*1000, dt)) deltas.append(delta) angles.append(abs(dt)) display_markdown(Markdown('\n'.join(ss))) fig, axes = plt.subplots(ncols=2) axes[0].hist(deltas, bins=50) axes[0].set_xlabel("offset(mm)") axes[1].hist(angles, bins=50) axes[1].set_xlabel("offset(deg)") plt.tight_layout() plt.show() fig, axs = plt.subplots(2,2, sharex=True) times = [] dxs = [] dys = [] drs = [] dthetas = [] for i,pair in enumerate(pairs): dt = datetime.strptime(pair[0].label[:-14], "%d/%m/%Y-%H:%M:%S") times.append(dt) dx = (pair[0].x - pair[1].x)*1000 dy = (pair[0].y - pair[1].y)*1000 dxs.append(dx) dys.append(dy) drs.append(np.sqrt(dx**2 + dy**2)) dthetas.append(abs(pair[0].z - pair[1].z)) labels = ["$\Delta$x ($\mu$m)", "$\Delta$y ($\mu$m)", "$\Delta$r ($\mu$m)", "$\Delta \\theta$ (deg)"] axs = chain.from_iterable(axs) datas = [dxs, dys, drs, dthetas] for label, ax, data in zip(labels, axs, datas): months = mdates.MonthLocator() # every month monthFmt = mdates.DateFormatter('%b') ax.xaxis.set_major_locator(months) ax.xaxis.set_major_formatter(monthFmt) ax.plot_date(times, data) ax.set_ylabel(label) ax.set_yscale('log') fig.tight_layout() plt.show()
legacy/Potting Data Analysis.ipynb
cfangmeier/UNL-Gantry-Encapsulation-Monitoring
mit
Load the data Load the catalogues
panstarrs = Table.read("panstarrs_u1.fits") wise = Table.read("wise_u1.fits")
PanSTARRS_WISE_reddening.ipynb
nudomarinero/mltier1
gpl-3.0
Coordinates We build coordinate objects, as we will use the coordinates to retrieve the extinction at their positions.
coords_panstarrs = SkyCoord(panstarrs['raMean'], panstarrs['decMean'], unit=(u.deg, u.deg), frame='icrs') coords_wise = SkyCoord(wise['raWise'], wise['decWise'], unit=(u.deg, u.deg), frame='icrs')
PanSTARRS_WISE_reddening.ipynb
nudomarinero/mltier1
gpl-3.0
Reddening Get the extinction for the positions of the sources in the catalogues.
ext_panstarrs = get_eb_v(coords_panstarrs.ra.deg, coords_panstarrs.dec.deg) ext_wise = get_eb_v(coords_wise.ra.deg, coords_wise.dec.deg)
PanSTARRS_WISE_reddening.ipynb
nudomarinero/mltier1
gpl-3.0
Apply the correction to each position
i_correction = ext_panstarrs * FILTER_EXT["i"] w1_correction = ext_wise * FILTER_EXT["W1"] hist(i_correction, bins=100); hist(w1_correction, bins=100); panstarrs.rename_column("i", 'iUncor') wise.rename_column("W1mag", 'W1magUncor') panstarrs["i"] = panstarrs["iUncor"] - i_correction wise["W1mag"] = wise["W1magUncor"] - w1_correction
PanSTARRS_WISE_reddening.ipynb
nudomarinero/mltier1
gpl-3.0
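For reference, the correction applied above is the standard Galactic extinction correction; this restates what the code does rather than adding anything new: $m_{\mathrm{corr}} = m_{\mathrm{obs}} - A_\lambda$ with $A_\lambda = k_\lambda\, E(B-V)$, where $k_\lambda$ is the filter coefficient from FILTER_EXT ($i$ for Pan-STARRS and $W1$ for WISE) and $E(B-V)$ comes from get_eb_v.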
Save the corrected catalogues PanSTARRS
columns_save = ['objID', 'raMean', 'decMean', 'raMeanErr', 'decMeanErr', 'i', 'iErr'] panstarrs[columns_save].write('panstarrs_u2.fits', format="fits") panstarrs["ext"] = ext_panstarrs panstarrs[['objID', "ext"]].write('panstarrs_extinction.fits', format="fits") # Free memory del panstarrs
PanSTARRS_WISE_reddening.ipynb
nudomarinero/mltier1
gpl-3.0
WISE
columns_save = ['AllWISE', 'raWise', 'decWise', 'raWiseErr', 'decWiseErr', 'W1mag', 'W1magErr'] wise[columns_save].write('wise_u2.fits', format="fits") wise["ext"] = ext_wise wise[['AllWISE', "ext"]].write('wise_extinction.fits', format="fits")
PanSTARRS_WISE_reddening.ipynb
nudomarinero/mltier1
gpl-3.0
Let's make sure we install the necessary version of tensorflow. After doing the pip install above, click Restart the kernel on the notebook so that the Python environment picks up the new packages.
import os PROJECT = "qwiklabs-gcp-bdc77450c97b4bf6" # REPLACE WITH YOUR PROJECT NAME REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1 import tensorflow as tf print("TensorFlow version: ",tf.version.VERSION) # Do not change these os.environ["PROJECT"] = PROJECT os.environ["REGION"] = REGION os.environ["BUCKET"] = PROJECT # DEFAULT BUCKET WILL BE PROJECT ID if PROJECT == "your-gcp-project-here": print("Don't forget to update your PROJECT name! Currently:", PROJECT)
quests/serverlessml/02_bqml/solution/first_model.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
5.1 Numerical Differentiation Two-point forward-difference formula $f'(x) = \frac{f(x+h) - f(x)}{h} - \frac{h}{2}f''(c)$ where $c$ is between $x$ and $x+h$ Example Use the two-point forward-difference formula with $h = 0.1$ to approximate the derivative of $f(x) = 1/x$ at $x = 2$
# Parameters x = 2 h = 0.1 # Symbolic computation sym_x = sym.Symbol('x') sym_deri_x1 = sym.diff(1 / sym_x, sym_x) sym_deri_x1_num = sym_deri_x1.subs(sym_x, x).evalf() # Approximation f = lambda x : 1 / x deri_x1 = (f(x + h) - f(x)) / h # Comparison print('approximate = %f, real value = %f, backward error = %f' %(deri_x1, sym_deri_x1_num, abs(deri_x1 - sym_deri_x1_num)) )
5_Numerical_Differentiation_And_Integration.ipynb
Jim00000/Numerical-Analysis
unlicense
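As a quick sanity check on the backward error printed above (worked by hand, not from the original notebook): for $f(x) = 1/x$ we have $f''(x) = 2/x^3$, so the truncation term is $\left|\frac{h}{2} f''(c)\right| = 0.05 \cdot \frac{2}{c^3} \approx 0.0125$ for $c \approx 2$, consistent with the observed error of about $0.0119$ since $c$ lies in $(2, 2.1)$.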
Three-point centered-difference formula $f'(x) = \frac{f(x+h) - f(x-h)}{2h} - \frac{h^2}{6}f'''(c)$ where $x-h < c < x+h$ Example Use the three-point centered-difference formula with $h = 0.1$ to approximate the derivative of $f(x) = 1 / x$ at $x = 2$
# Parameters x = 2 h = 0.1 f = lambda x : 1 / x # Symbolic computation sym_x = sym.Symbol('x') sym_deri_x1 = sym.diff(1 / sym_x, sym_x) sym_deri_x1_num = sym_deri_x1.subs(sym_x, x).evalf() # Approximation deri_x1 = (f(x + h) - f(x - h)) / (2 * h) # Comparison print('approximate = %f, real value = %f, backward error = %f' %(deri_x1, sym_deri_x1_num, abs(deri_x1 - sym_deri_x1_num)) )
5_Numerical_Differentiation_And_Integration.ipynb
Jim00000/Numerical-Analysis
unlicense
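A similar check for the centered formula: $f'''(x) = -6/x^4$, so $\left|\frac{h^2}{6} f'''(c)\right| = \frac{0.01}{c^4} \approx 6.25\times 10^{-4}$ for $c \approx 2$, matching the observed error of roughly $6.3\times 10^{-4}$ and illustrating the expected $O(h^2)$ improvement over the forward-difference formula.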