$$f:M_{m\times n} \to M_{m\times n}$$ $$ \forall \quad i, j: \quad i < m, j < n \qquad a_{ij} = \left\{ \begin{array}{ll} \text{True} & \quad \text{if} \quad a_{ij} > 30 \\ \text{False} & \quad \text{otherwise} \end{array} \right. $$
print(A > 30)
disciplines/SME0819 - Matrices for Applied Statistics/0x00_Fundamentals/Matrices - Fundamentals.ipynb
jhonatancasale/graduation-pool
apache-2.0
Using a Boolean vector as an indexer
print(A[A > 30])
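A minimal, self-contained sketch of both ideas — the elementwise comparison and the Boolean indexing — using a small hypothetical matrix standing in for the notebook's A:

```python
import numpy as np

# Hypothetical matrix standing in for the notebook's A
A = np.array([[10, 40], [25, 55]])

mask = A > 30   # elementwise comparison -> Boolean matrix of the same shape
print(mask)

print(A[mask])  # Boolean indexing keeps only the entries where mask is True
```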
disciplines/SME0819 - Matrices for Applied Statistics/0x00_Fundamentals/Matrices - Fundamentals.ipynb
jhonatancasale/graduation-pool
apache-2.0
$$A_{m\times n} * B_{m\times n} \mapsto C_{m\times n}$$ $$c_{ij} = a_{ij} * b_{ij}$$ $$\forall \quad i, j: \quad i < m, j < n$$
print("{} * {} -> {}".format(A, B, A * B))
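As a sketch of the elementwise rule $c_{ij} = a_{ij} * b_{ij}$, with hypothetical example matrices (not the notebook's A and B):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])   # hypothetical example matrices
B = np.array([[5, 6], [7, 8]])

C = A * B                        # elementwise: c_ij = a_ij * b_ij
print(C)
```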
disciplines/SME0819 - Matrices for Applied Statistics/0x00_Fundamentals/Matrices - Fundamentals.ipynb
jhonatancasale/graduation-pool
apache-2.0
print("{}.{} -> {}".format(A, B, A.dot(B)))
print("{}.{} -> {}".format(A, B, np.dot(A, B)))
print(np.ones(10) * 12)
M = np.linspace(-1, 1, 16).reshape(4, 4)
print(M)
print("sum(M) -> {}".format(M.sum()))
print("max(M) -> {} | min(M) -> {}".format(M.max(), M.min()))
N = np.arange(16).reshape(4, 4)
print(N)
prin...
disciplines/SME0819 - Matrices for Applied Statistics/0x00_Fundamentals/Matrices - Fundamentals.ipynb
jhonatancasale/graduation-pool
apache-2.0
$$Ax = y$$
A = np.linspace(1, 4, 4).reshape(2, 2)
print(A)
y = np.array([5., 7.])
x = np.linalg.solve(A, y)
print(x)
print(np.dot(A, x.T))
x = np.arange(0, 10, 2)
y = np.arange(5)
print(np.vstack([x, y]))
print(np.hstack([x, y]))
print(np.hsplit(x, [2]))
print(np.hsplit(x, [2, 4]))
print(np.vsplit(np.eye(3), range(1, 3))...
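A self-contained sketch of the solve-and-check pattern for $Ax = y$ (the 2×2 system here is hypothetical):

```python
import numpy as np

# Hypothetical 2x2 system
A = np.array([[1., 2.], [3., 4.]])
y = np.array([5., 7.])

x = np.linalg.solve(A, y)   # solve A @ x = y
residual = A @ x - y        # should be ~0 up to floating-point error
print(x, residual)
```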
disciplines/SME0819 - Matrices for Applied Statistics/0x00_Fundamentals/Matrices - Fundamentals.ipynb
jhonatancasale/graduation-pool
apache-2.0
Now for the heavy lifting: we first have to come up with the weights,
- calculate the month lengths for each monthly data record
- calculate weights using groupby('time.season')

Finally, we just need to multiply our weights by the Dataset and sum along the time dimension.
# Make a DataArray with the number of days in each month, size = len(time) month_length = xray.DataArray(get_dpm(ds.time.to_index(), calendar='noleap'), coords=[ds.time], name='month_length') # Calculate the weights by grouping by 'time.season'. # Conversion to float type ('astype(float)'...
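The weighting logic can be sketched without xarray: the weights are month lengths normalized over the season, and the seasonal mean is the weighted sum. The values and month lengths below are made up for illustration:

```python
import numpy as np

# Hypothetical: three monthly values in one season (e.g. DJF, noleap calendar)
values = np.array([2.0, 4.0, 6.0])
month_lengths = np.array([31, 31, 28])        # days in each month

weights = month_lengths / month_lengths.sum() # weights sum to 1 within the season
seasonal_mean = (values * weights).sum()      # weighted average over time
print(seasonal_mean)
```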
examples/xray_seasonal_means.ipynb
NicWayand/xray
apache-2.0
https://cschoel.github.io/nolds/nolds.html#detrended-fluctuation-analysis
# -*- coding: UTF-8 -*- from __future__ import division import numpy as np import pandas as pd import sys import math from sklearn.preprocessing import LabelEncoder, OneHotEncoder import re import os import csv from helpers.outliers import MyOutliers from skroutz_mobile import SkroutzMobile from sklearn.ensemble import...
02_preprocessing/exploration04-price_history_dfa.ipynb
pligor/predicting-future-product-prices
agpl-3.0
Conclusion: the estimated alpha serves as the Hurst parameter (alpha < 1: stationary process similar to fractional Gaussian noise with H = alpha; alpha > 1: non-stationary process similar to fractional Brownian motion with H = alpha - 1). So most price histories are identified, as we would expect, as non-stationary processes.
# References
02_preprocessing/exploration04-price_history_dfa.ipynb
pligor/predicting-future-product-prices
agpl-3.0
https://cschoel.github.io/nolds/nolds.html#detrended-fluctuation-analysis https://scholar.google.co.uk/scholar?q=Detrended+fluctuation+analysis%3A+A+scale-free+view+on+neuronal+oscillations&btnG=&hl=en&as_sdt=0%2C5 MLA format: Hardstone, Richard, et al. "Detrended fluctuation analysis: a scale-free view on neuronal osc...
seq = seqs[0].values plt.plot(seq) detrendeds = [ss.detrend(seq) for seq in seqs] len(detrendeds) plt.plot(detrendeds[0]) detrendeds[0] alldetr = [] for detrended in detrendeds: alldetr += list(detrended) len(alldetr) fig = plt.figure( figsize=(14, 6) ) sns.distplot(alldetr, axlabel="Price Deviation from zero...
02_preprocessing/exploration04-price_history_dfa.ipynb
pligor/predicting-future-product-prices
agpl-3.0
Performing basic math functions
x = 2
print('2*x =', 2*x)       #multiplication
print('x^3 =', x**3)      #exponents
print('e^x =', np.exp(x)) #e^x
print('e^x =', np.e**x)   #e^x, alternate form
print('Pi =', np.pi)      #pi
Resources/Python_Tutorial.ipynb
wmfschneider/CHE30324
gpl-3.0
Integration This will integrate a function that you provide. There are a number of other methods for numerical integration that can be found online. For our example we will use: $I = \int_{0}^{1} (ax^2 + b)\, dx$
from scipy.integrate import quad # First define a function that you want to integrate def integrand(x,a,b): return a*x**2 + b # Set your constants a = 2 b = 1 I = quad(integrand, 0, 1, args=(a,b)) print(I) # I has two values, the first value is the estimation of the integration, the second value is the upper bou...
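Since the integrand has the antiderivative $a x^3/3 + b x$, the result can be cross-checked against the analytic value $a/3 + b$. A quick sketch using a plain midpoint rule (no scipy needed; this is a cross-check, not a replacement for quad):

```python
# Cross-check I = integral from 0 to 1 of (a x^2 + b) dx with a midpoint rule.
def integrand(x, a, b):
    return a * x**2 + b

def midpoint(f, lo, hi, n, *args):
    h = (hi - lo) / n
    return h * sum(f(lo + (i + 0.5) * h, *args) for i in range(n))

a, b = 2, 1
approx = midpoint(integrand, 0.0, 1.0, 10_000, a, b)
exact = a / 3 + b          # analytic antiderivative: a x^3/3 + b x
print(approx, exact)
```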
Resources/Python_Tutorial.ipynb
wmfschneider/CHE30324
gpl-3.0
Arrays
y = np.array([1,2,3,4,5]) #create an array of values
print('y =\t', y)    #'\t' creates an indent to nicely align answers
print('y[0] =\t', y[0]) #Python starts counting at 0
print('y[2] =\t', y[2]) #y[2] gives the third element in y (I don...
Resources/Python_Tutorial.ipynb
wmfschneider/CHE30324
gpl-3.0
for loops
#This first loop iterates over the elements in an array
array = np.array([0,1,2,3,4])
print('First Loop')
for x in array:
    print(x*2)

#This second loop iterates for x in the range [0,4]; again we have to say '5' because of the way Python counts
print('Second Loop')
for x in range(5):
    print(x*2)
Resources/Python_Tutorial.ipynb
wmfschneider/CHE30324
gpl-3.0
Summation with for loops $\sum_{n=1}^{4} 2^{-n}$
answer = 0   #Each iteration will be added to this, so we start it at zero
storage = [] #This will be used to store values after each iteration
for n in range(1,5):
    storage.append(2**(-n)) #The append command adds elements to an array
    answer += 2**(-n) #+= is ...
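The loop above sums a finite geometric series, so it can be checked against the closed form $1 - 2^{-N}$. A small sketch:

```python
# sum_{n=1}^{4} 2^{-n} accumulated in a loop, compared with the
# geometric-series closed form 1 - 2^{-N}.
answer = 0.0
for n in range(1, 5):
    answer += 2 ** (-n)

closed_form = 1 - 2 ** (-4)
print(answer, closed_form)
```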
Resources/Python_Tutorial.ipynb
wmfschneider/CHE30324
gpl-3.0
while loops
#This while loop accomplishes the same thing as the two for loops above
x = 0
while x < 5:
    print(x*2)
    x += 1
Resources/Python_Tutorial.ipynb
wmfschneider/CHE30324
gpl-3.0
if statements
#Order of your if statements matters.
array = np.array([2,4,6,7,11])
for x in array:
    if x < 5:
        print('Not a winner')
    elif x < 10:
        print(2*x)
    else:
        break
Resources/Python_Tutorial.ipynb
wmfschneider/CHE30324
gpl-3.0
Linear Algebra
#Create a matrix
a = np.array([[1,2,3],[4,5,6],[7,8,9]])
print('a =\n', a)

#get eigenvalues and eigenvectors of a
w, v = linalg.eig(a)
print('eigenvalues =\t', w)
print('eigenvectors =\n', v)

#Matrix multiplication
b = np.array([1,0,0])
print('a*b =\t', a@b.T) #'@' does matrix multiplication, '.T' transposes a matri...
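The eigenpairs returned by eig can be verified directly from the definition $A v_i = \lambda_i v_i$ (the columns of v are the eigenvectors). A sketch:

```python
import numpy as np
from numpy import linalg

# For each eigenpair (w[i], v[:, i]), check that a @ v[:, i] == w[i] * v[:, i].
a = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
w, v = linalg.eig(a)

for i in range(len(w)):
    assert np.allclose(a @ v[:, i], w[i] * v[:, i])
print("all eigenpairs satisfy A v = lambda v")
```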
Resources/Python_Tutorial.ipynb
wmfschneider/CHE30324
gpl-3.0
Creating a function
#'def' starts the function. Variables inside the parentheses are inputs to your function.
#Return is what your function will output.
#In this example I have created a function that provides the first input raised to the power of the second input.
def x2y(x,y):
    return x**y

x2y(4,2)
Resources/Python_Tutorial.ipynb
wmfschneider/CHE30324
gpl-3.0
Symbolic Math
#This lets us create functions with variables
#First define a variable
x = sy.Symbol('x')
#Next create a function
function = x**4
der = function.diff(x,1)
print('first derivative =\t', der)
der2 = function.diff(x,2)
print('second derivative =\t', der2)
#You can substitute back in for symbols now
print('1st derivative...
Resources/Python_Tutorial.ipynb
wmfschneider/CHE30324
gpl-3.0
Plotting
#Standard plot
x = np.linspace(0,10)
y = np.sin(x)
z = np.cos(x)
plt.plot(x,y,x,z)
plt.xlabel('Radians')
plt.ylabel('Value')
plt.title('Standard Plot')
plt.legend(['Sin','Cos'])
plt.show()

#Scatter Plot
x = np.linspace(0,10,11)
y = np.sin(x)
z = np.cos(x)
plt.scatter(x,y)
plt.scatter(x,z)
plt.xlabel('Radians')
...
Resources/Python_Tutorial.ipynb
wmfschneider/CHE30324
gpl-3.0
Math package has useful tools as well
print('5! =\t', math.factorial(5))
print('|-3| =\t', math.fabs(-3))
Resources/Python_Tutorial.ipynb
wmfschneider/CHE30324
gpl-3.0
Something to help with your homework. Assume you have a chemical reaction defined by: A + 2B -> C. For every mole of A consumed, 2 moles of B are consumed, and 1 mole of C is produced. If we have the following molar flow rates: FA0 = 1.5 moles/s = initial flow of A; FB0 = 2.5 moles/s = initial flow of B; FA = flow rate...
# Set up a vector to store values of advancement adv = np.arange(0,20,.01) # Inital Flow Rates fa0 = 1.5 #moles/s fb0 = 2.5 fc0 = 0 # Calculate flow rate as a function of advancement fa = fa0-1*adv fb = fb0-2*adv fc = fc0+adv # Find the maximum value of advancement, value at which one of the reactants hits 0 moles/s...
Resources/Python_Tutorial.ipynb
wmfschneider/CHE30324
gpl-3.0
Putting it to use: Homework For Fun! Use 'for' loops to create Taylor expansions of $\ln(1+x)$ centered at 0, with orders 1, 2, 3, and 4. Plot these Taylor expansions along with the original function on one plot. Label your plots. As a reminder, the formula for a Taylor expansion is: $f(a) + \sum_{n=1}^{\infty}\fr...
#Insert code here.
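One possible approach (a sketch, not the graded solution) uses the series $\ln(1+x) = \sum_{n\ge 1} (-1)^{n+1} x^n / n$ truncated at a given order; the plotting calls are left out here:

```python
import numpy as np

def taylor_ln1p(x, order):
    """Taylor polynomial of ln(1+x) about 0, up to the given order."""
    total = np.zeros_like(x, dtype=float)
    for n in range(1, order + 1):
        total += (-1) ** (n + 1) * x ** n / n
    return total

x = np.linspace(-0.5, 0.5, 5)
approx = taylor_ln1p(x, 4)
print(np.abs(approx - np.log1p(x)))   # truncation error of the order-4 expansion
```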
Resources/Python_Tutorial.ipynb
wmfschneider/CHE30324
gpl-3.0
However, this may not always be the case; if for statistical reasons it is important to average the same number of epochs from different conditions, you can use :meth:~mne.Epochs.equalize_event_counts prior to averaging. Another approach to pooling across conditions is to create separate :class:~mne.Evoked objects for ...
left_right_aud = mne.combine_evoked([left_aud, right_aud], weights='nave') assert left_right_aud.nave == left_aud.nave + right_aud.nave
0.20/_downloads/5514ea6c90dde531f8026904a417527e/plot_10_evoked_overview.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Keeping track of nave is important for inverse imaging, because it is used to scale the noise covariance estimate (which in turn affects the magnitude of estimated source activity). See minimum_norm_estimates for more information (especially the whitening_and_scaling section). For this reason, combining :class:~mne.Evo...
for ix, trial in enumerate(epochs[:3].iter_evoked()):
    channel, latency, value = trial.get_peak(ch_type='eeg', return_amplitude=True)
    latency = int(round(latency * 1e3))  # convert to milliseconds
    value = int(round(value * 1e6))      # convert to µV
    print('Tri...
0.20/_downloads/5514ea6c90dde531f8026904a417527e/plot_10_evoked_overview.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Q. What will the maximum (last element) of wave be? How to check?
print(waves.max())
waves

# Now, convert to frequency
# (note conversion from mm to m):
freqs = c / (waves / 1e3)
freqs

# Make a table & print (zip pairs up wave and freq
# into a list of tuples):
table = [[wave, freq] for wave, freq in zip(waves, freqs)]
for row in table:
    print(row)

print(np.array(table))
...
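The mm-to-m conversion and the $f = c/\lambda$ step can be sketched on their own; the wavelengths below are hypothetical, and c is the approximate speed of light in m/s:

```python
import numpy as np

c = 3.0e8                              # speed of light in m/s (approximate)
waves_mm = np.array([0.5, 1.0, 2.0])   # hypothetical wavelengths in mm

freqs = c / (waves_mm / 1e3)           # convert mm -> m, then f = c / lambda
print(freqs)                           # 1 mm corresponds to 3e11 Hz
```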
lecture_14_ndarraysI.ipynb
CUBoulder-ASTR2600/lectures
isc
Q. How could we regroup elements to match the previous incarnation? (row major)
table.transpose()

# let's just work with the transpose
table = table.T
lecture_14_ndarraysI.ipynb
CUBoulder-ASTR2600/lectures
isc
Q. What should this yield?
table.shape
lecture_14_ndarraysI.ipynb
CUBoulder-ASTR2600/lectures
isc
Q. What should this be?
table[20][0]
table[20,0]
lecture_14_ndarraysI.ipynb
CUBoulder-ASTR2600/lectures
isc
This is not possible for lists:
l = list(table)
print(l[20][0])
l[20,0]  # TypeError: lists cannot be indexed with a tuple

table.shape
for index1 in range(table.shape[0]):  # Q. What is table.shape[0]?
    for index2 in range(table.shape[1]):
        print('table[{}, {}] = {:g}'.format(index1, index2, table[index1, index2]))
# Q...
lecture_14_ndarraysI.ipynb
CUBoulder-ASTR2600/lectures
isc
When you just loop over the elements of an array, you get rows:
table.shape[0]
for row in table:
    # don't be fooled, it's not my naming of the looper that does that!
    print(row)

for idontknowwhat in table:
    print(idontknowwhat)
lecture_14_ndarraysI.ipynb
CUBoulder-ASTR2600/lectures
isc
This could also be done with one loop using numpy's ndenumerate. ndenumerate will enumerate the rows and columns of the array:
for index_tuple, value in np.ndenumerate(table): print('index {} has value {:.2e}'.format(index_tuple, value))
lecture_14_ndarraysI.ipynb
CUBoulder-ASTR2600/lectures
isc
Q. Reminder: what is the shape of table?
print(table.shape) print(type(table.shape))
lecture_14_ndarraysI.ipynb
CUBoulder-ASTR2600/lectures
isc
Q. So what is table.shape[0]?
table.shape[0]
lecture_14_ndarraysI.ipynb
CUBoulder-ASTR2600/lectures
isc
Q. And table.shape[1]?
table.shape[1]
lecture_14_ndarraysI.ipynb
CUBoulder-ASTR2600/lectures
isc
Arrays can be sliced analogously to lists. But as we already saw, numpy offers more indexing possibilities on top.
table[0]
lecture_14_ndarraysI.ipynb
CUBoulder-ASTR2600/lectures
isc
Q: How to get the first column instead?
table[:, 0]

# Note that this is different.
# Q. What is this?
table[:][0]

# This will print the second column:
table[:, 1]

# To get the first five rows of the table:
print(table[:5, :])
print()

# Same as:
print(table[:5])
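The difference between table[:, 0] and table[:][0] is worth pinning down: the first is a column, the second is just the first row. A sketch with a small hypothetical array:

```python
import numpy as np

arr = np.arange(6).reshape(3, 2)   # [[0 1] [2 3] [4 5]]

col = arr[:, 0]    # first COLUMN -> [0 2 4]
row = arr[:][0]    # arr[:] is a view of the whole array, so this is arr[0] -> [0 1]
print(col, row)
```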
lecture_14_ndarraysI.ipynb
CUBoulder-ASTR2600/lectures
isc
Numpy also has a multi-dimensional lazy indexing trick up its sleeve:
ndarray = np.zeros(2,3,4)  # will fail. Why? Hint: look at the error message
ndarray = np.zeros((2,3,4))
ndarray = np.arange(2*3*4).reshape((2,3,4))
ndarray
ndarray[:, :, 0]
ndarray[..., 0]
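A sketch of the ellipsis trick: `...` expands to as many `:` as needed, so nd[..., 0] and nd[:, :, 0] select the same slice:

```python
import numpy as np

nd = np.arange(2 * 3 * 4).reshape((2, 3, 4))

# '...' stands in for all the leading ':' axes, so these two are identical:
a = nd[:, :, 0]
b = nd[..., 0]
print(np.array_equal(a, b))   # True
```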
lecture_14_ndarraysI.ipynb
CUBoulder-ASTR2600/lectures
isc
Array Computing For an array $A$ of any rank, $f(A)$ means applying the function $f$ to each element of $A$. Matrix Objects
xArray1 = np.array([1, 2, 3], float)
xArray1
xArray1.T
xMatrix = np.matrix(xArray1)
print(type(xMatrix))
xMatrix
xMatrix.shape
xMatrix2 = xMatrix.transpose()
xMatrix2
# Or
xMatrix.T
lecture_14_ndarraysI.ipynb
CUBoulder-ASTR2600/lectures
isc
Q. What is the identity matrix?
iMatrix = np.eye(3)  # or np.identity
iMatrix

# And
iMatrix2 = np.mat(iMatrix)  # 'mat' short for 'matrix'
iMatrix2

# Array multiplication.
# Reminder of xMatrix?
xMatrix

# Multiplication of any matrix by the identity matrix
# yields that matrix:
xMatrix * iMatrix

# Reminder of xMatrix2:
xMatrix2
xMatrix2 = iMatri...
lecture_14_ndarraysI.ipynb
CUBoulder-ASTR2600/lectures
isc
Plotting with parameters Write a plot_sine1(a, b) function that plots $\sin(ax+b)$ over the interval $[0,4\pi]$. Customize your visualization to make it effective and beautiful. Customize the box, grid, spines and ticks to match the requirements of this data. Use enough points along the x-axis to get a smooth plot. For ...
# YOUR CODE HERE
def plot_sine1(a, b):
    t = np.linspace(0, 4*np.pi, 400)
    plt.plot(t, np.sin(a*t + b))
    plt.xlim(0, 4*np.pi)
    plt.ylim(-1.0, 1.0)
    plt.xticks([0, np.pi, 2*np.pi, 3*np.pi, 4*np.pi],
               ['0', 'π', '2π', '3π', '4π'])

plot_sine1(5, 3.4)
assignments/assignment05/InteractEx02.ipynb
joshnsolomon/phys202-2015-work
mit
In matplotlib, the line style and color can be set with a third argument to plot. Examples of this argument: dashed red: r-- blue circles: bo dotted black: k. Write a plot_sine2(a, b, style) function that has a third style argument that allows you to set the line style of the plot. The style should default to a blue ...
def plot_sine2(a, b, style='b-'):  # style defaults to a blue line, as required
    t = np.linspace(0, 4*np.pi, 400)
    plt.plot(t, np.sin(a*t + b), style)
    plt.xlim(0, 4*np.pi)
    plt.ylim(-1.0, 1.0)
    plt.xticks([0, np.pi, 2*np.pi, 3*np.pi, 4*np.pi],
               ['0', 'π', '2π', '3π', '4π'])

plot_sine2(4.0, -1.0, 'r--')
assignments/assignment05/InteractEx02.ipynb
joshnsolomon/phys202-2015-work
mit
Use interact to create a UI for plot_sine2. Use a slider for a and b as above. Use a drop-down menu for selecting the line style between a dotted blue line, black circles and red triangles.
interact(plot_sine2, a=(0.0,5.0,.1), b=(-5.0,5.0,.1),
         style={'Dotted Blue': 'b:', 'Black Circles': 'ko', 'Red Triangles': 'r^'});

assert True  # leave this for grading the plot_sine2 exercise
assignments/assignment05/InteractEx02.ipynb
joshnsolomon/phys202-2015-work
mit
Here we will look at a molecular dynamics simulation of the barstar. As we will analyse Protein Block sequences, we first need to assign these sequences for each frame of the trajectory.
# Assign PB sequences for all frames of a trajectory trajectory = os.path.join(pbx.DEMO_DATA_PATH, 'barstar_md_traj.xtc') topology = os.path.join(pbx.DEMO_DATA_PATH, 'barstar_md_traj.gro') sequences = [] for chain_name, chain in pbx.chains_from_trajectory(trajectory, topology): dihedrals = chain.get_phi_psi_angles(...
doc/source/notebooks/Deformability.ipynb
jbarnoud/PBxplore
mit
Block occurrences per position The basic information we need to analyse protein deformability is the count of occurrences of each PB at each position throughout the trajectory. This occurrence matrix can be calculated with the :func:pbxplore.analysis.count_matrix function.
count_matrix = pbx.analysis.count_matrix(sequences)
doc/source/notebooks/Deformability.ipynb
jbarnoud/PBxplore
mit
PBxplore provides the :func:pbxplore.analysis.plot_map function to ease the visualization of the occurrence matrix.
pbx.analysis.plot_map('map.png', count_matrix) !rm map.png
doc/source/notebooks/Deformability.ipynb
jbarnoud/PBxplore
mit
The :func:pbxplore.analysis.plot_map helper has residue_min and residue_max optional arguments to display only part of the matrix. These two arguments can be passed to all PBxplore functions that produce a figure.
pbx.analysis.plot_map('map.png', count_matrix, residue_min=60, residue_max=70) !rm map.png
doc/source/notebooks/Deformability.ipynb
jbarnoud/PBxplore
mit
Note that the matrix in the figure produced by :func:pbxplore.analysis.plot_map is normalized so that the sum of each column is 1. The matrix can be normalized explicitly with :func:pbxplore.analysis.compute_freq_matrix.
freq_matrix = pbx.analysis.compute_freq_matrix(count_matrix)
im = plt.imshow(freq_matrix, interpolation='none', aspect='auto')
plt.colorbar(im)
plt.xlabel('Position')
plt.ylabel('Block')
doc/source/notebooks/Deformability.ipynb
jbarnoud/PBxplore
mit
Protein Block entropy The $N_{eq}$ is a measure of variability based on the count matrix calculated above. It can be computed with the :func:pbxplore.analysis.compute_neq function.
neq_by_position = pbx.analysis.compute_neq(count_matrix)
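$N_{eq}$ is the exponential of the Shannon entropy of the PB frequencies at a position, so a position sampling k blocks uniformly gives $N_{eq} = k$ and a position locked on one block gives $N_{eq} = 1$. A sketch of that computation with plain numpy, independent of PBxplore (the helper name neq below is hypothetical):

```python
import numpy as np

def neq(freqs):
    """N_eq = exp(H), with H the Shannon entropy of the PB frequencies."""
    freqs = np.asarray(freqs, dtype=float)
    nz = freqs[freqs > 0]                 # treat 0 * log(0) as 0
    return np.exp(-np.sum(nz * np.log(nz)))

# Uniform over 4 blocks -> N_eq = 4; fully determined -> N_eq = 1.
print(neq([0.25, 0.25, 0.25, 0.25]), neq([1.0, 0.0, 0.0]))
```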
doc/source/notebooks/Deformability.ipynb
jbarnoud/PBxplore
mit
neq_by_position is a 1D numpy array with the $N_{eq}$ for each residue.
plt.plot(neq_by_position)
plt.xlabel('Position')
plt.ylabel('$N_{eq}$')
doc/source/notebooks/Deformability.ipynb
jbarnoud/PBxplore
mit
The :func:pbxplore.analysis.plot_neq helper eases the plotting of the $N_{eq}$.
pbx.analysis.plot_neq('neq.png', neq_by_position) !rm neq.png
doc/source/notebooks/Deformability.ipynb
jbarnoud/PBxplore
mit
The residue_min and residue_max arguments are available.
pbx.analysis.plot_neq('neq.png', neq_by_position, residue_min=60, residue_max=70) !rm neq.png
doc/source/notebooks/Deformability.ipynb
jbarnoud/PBxplore
mit
Display PB variability as a logo
pbx.analysis.generate_weblogo('logo.png', count_matrix)
display(Image('logo.png'))
!rm logo.png

pbx.analysis.generate_weblogo('logo.png', count_matrix, residue_min=60, residue_max=70)
display(Image('logo.png'))
!rm logo.png
doc/source/notebooks/Deformability.ipynb
jbarnoud/PBxplore
mit
The default graph now has three nodes: two constant() ops and one matmul() op. To actually run the matrix multiplication and obtain its result, you must launch the graph in a session. Launching the graph in a session The graph can only be launched once the construction phase is complete. The first step is to create a Session object; with no construction arguments, the session constructor launches the default graph. For the complete session API, read the Session class documentation.
# Launch the default graph.
sess = tf.Session()

# Call the session's 'run()' method to execute the matmul op, passing 'product'
# as the argument. As mentioned above, 'product' represents the output of the
# matmul op, and passing it tells the method that we want that output back.
#
# The whole execution is automatic: the session supplies every input the ops
# need, and ops are usually executed in parallel.
#
# The call 'run(product)' triggers the execution of the three ops in the
# graph: the two constant ops and the matmul op.
#
# The return value 'result' is a numpy `ndarray` object.
result = se...
dev/openlibs/tensorflow/basic_usage.ipynb
karst87/ml
mit
A Session object must be closed after use to release its resources. Besides calling close explicitly, you can use a "with" block to close the session automatically.
with tf.Session() as sess:
    result = sess.run(product)
    print(result)
dev/openlibs/tensorflow/basic_usage.ipynb
karst87/ml
mit
In its implementation, TensorFlow turns the graph definition into distributed operations so as to make full use of the available compute resources (such as CPUs or GPUs). In general you don't need to specify CPU or GPU explicitly; TensorFlow detects them automatically and, if a GPU is found, uses the first one for as many operations as possible. If the machine has more than one GPU, the GPUs beyond the first do not participate in computation by default. To let TensorFlow use them, you must assign ops to them explicitly. A with...Device statement assigns a specific CPU or GPU to the operations:
with tf.Session() as sess:
    # with tf.device('/gpu:0'):
    with tf.device('/cpu:0'):
        matrix1 = tf.constant([[3, 3]])
        matrix2 = tf.constant([[2], [2]])
        product = tf.matmul(matrix1, matrix2)
        result = sess.run(product)
        print(result)
dev/openlibs/tensorflow/basic_usage.ipynb
karst87/ml
mit
Devices are identified by strings. The currently supported devices are: "/cpu:0": the machine's CPU; "/gpu:0": the machine's first GPU, if it has one; "/gpu:1": the machine's second GPU, and so on. Read the Using GPUs section for more about TensorFlow's GPU support. Interactive usage The Python examples in the documentation launch a graph with a Session and call Session.run() to execute operations. For ease of use in interactive Python environments such as IPython, you can use InteractiveSession in place of the Session class, along with Tensor.eval() and Operation....
# Enter an interactive TensorFlow session.
import tensorflow as tf
sess = tf.InteractiveSession()

x = tf.Variable([1, 2])
a = tf.constant([3, 3])

# Initialize 'x' with the run() method of its initializer op.
x.initializer.run()

# Add a sub op that subtracts 'a' from 'x'. Run the sub op and print the result.
sub = tf.sub(x, a)
print(sub.eval())  # ==> [-2 -1]
dev/openlibs/tensorflow/basic_usage.ipynb
karst87/ml
mit
Loading pickle files The convert_data.py script converts the ubyte-format input files into numpy arrays. These arrays are then saved as pickle files so they can be quickly loaded later on. The shapes of the numpy arrays for images and labels are: Images: (N, rows, cols); Labels: (N, 1)
# Set up the file directory and names DIR = '../input/' X_TRAIN = DIR + 'train-images-idx3-ubyte.pkl' Y_TRAIN = DIR + 'train-labels-idx1-ubyte.pkl' X_TEST = DIR + 't10k-images-idx3-ubyte.pkl' Y_TEST = DIR + 't10k-labels-idx1-ubyte.pkl' print('Loading pickle files') X_train = pickle.load( open( X_TRAIN, "rb" ) ) y_trai...
notebooks/data_exploration.ipynb
timgasser/keras-mnist
mit
Sample training images with labels Let's show a few of the training images with the corresponding labels, so we can sanity check that the labels match the numbers, and the images themselves look like actual digits.
# Check a few training values at random as a sanity check def show_label_images(X, y): '''Shows random images in a grid''' num = 9 images = np.random.randint(0, X.shape[0], num) print('Showing training image indexes {}'.format(images)) fig, axes = plt.subplots(3,3, figsize=(6,6)) for ...
notebooks/data_exploration.ipynb
timgasser/keras-mnist
mit
Sample test images with labels Now we can check the test images and labels by picking a few random ones, and making sure the images look reasonable and they match their labels.
# Now do the same for the test dataset
show_label_images(X_test, y_test)

# Training label distribution
y_train_df = pd.DataFrame(y_train, columns=['class'])
y_train_df.plot.hist(legend=False)
hist_df = pd.DataFrame(y_train_df['class'].value_counts(normalize=True))
hist_df.index.name = 'class'
hist_df.columns = [...
notebooks/data_exploration.ipynb
timgasser/keras-mnist
mit
The class distribution is pretty evenly split between the classes. 1 is the most frequent class, with 11.24% of instances, and at the other end 5 is the least frequent class, with 9.04% of instances.
# Test label distribution
y_test_df = pd.DataFrame(y_test, columns=['class'])
y_test_df.plot.hist(legend=False, bins=10)
test_counts = y_test_df['class'].value_counts(normalize=True)
hist_df['test'] = test_counts
notebooks/data_exploration.ipynb
timgasser/keras-mnist
mit
The distribution looks very similar between training and test datasets.
hist_df['diff'] = np.abs(hist_df['train'] - hist_df['test']) hist_df.sort_values('diff', ascending=False)['diff'].plot.bar()
notebooks/data_exploration.ipynb
timgasser/keras-mnist
mit
The largest difference is 0.0040% in the number 2 class.
# Final quick check of datatypes
assert X_train.dtype == np.uint8
assert y_train.dtype == np.uint8
assert X_test.dtype == np.uint8
assert y_test.dtype == np.uint8
notebooks/data_exploration.ipynb
timgasser/keras-mnist
mit
Split a dataset into a training and a test folder In the code blocks below we load a real and a synthetic dataset to highlight the HRT at the bottom of the script. Option 1: South African Heart Dataset
link_data = "https://web.stanford.edu/~hastie/ElemStatLearn/datasets/SAheart.data" dat_sah = pd.read_csv(link_data) # Extract the binary response and then drop y_sah = dat_sah['chd'] dat_sah.drop(columns=['row.names','chd'],inplace=True) # one-hot encode famhist dat_sah['famhist'] = pd.get_dummies(dat_sah['famhist'])['...
_rmd/extra_hrt/hrt_python_copy.ipynb
erikdrysdale/erikdrysdale.github.io
mit
Note that the column types of each data need to be defined in the cn_type variable.
cn_type_sah = np.where(dat_sah.columns=='famhist','binomial','gaussian')

# Do a train/test split
np.random.seed(1234)
idx = np.arange(len(y_sah))
np.random.shuffle(idx)
idx_test = np.where((idx % 5) == 0)[0]
idx_train = np.where((idx % 5) != 0)[0]
X_train_sah = X_sah[idx_train]
X_test_sah = X_sah[idx_test]
y_train_sah...
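The modulo-5 split can be sketched in isolation: after shuffling, indices whose value is divisible by 5 go to the test set, yielding an (approximately) 80/20 partition. The sample size n below is hypothetical:

```python
import numpy as np

np.random.seed(1234)
n = 100                       # hypothetical sample size
idx = np.arange(n)
np.random.shuffle(idx)

# Every shuffled value divisible by 5 marks a test row (~1 in 5 rows).
idx_test = np.where((idx % 5) == 0)[0]
idx_train = np.where((idx % 5) != 0)[0]
print(len(idx_train), len(idx_test))
```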
_rmd/extra_hrt/hrt_python_copy.ipynb
erikdrysdale/erikdrysdale.github.io
mit
Option 2: Non-linear decision boundary dataset
# ---- Random circle data ---- # np.random.seed(1234) n_circ = 1000 X_circ = np.random.randn(n_circ,5) X_circ = X_circ + np.random.randn(n_circ,1) y_circ = np.where(np.apply_along_axis(arr=X_circ[:,0:2],axis=1,func1d= lambda x: np.sqrt(np.sum(x**2)) ) > 1.2,1,0) cn_type_circ = np.repeat('gaussian',X_circ.shape[1]) id...
_rmd/extra_hrt/hrt_python_copy.ipynb
erikdrysdale/erikdrysdale.github.io
mit
Function support The code block below provides a wrapper to implement the HRT algorithm for a binary outcome using a single training and test split. See my previous post for generalizations of this method for cross-validation. The function also requires a cn_type argument to specify whether the column is continuous or ...
# ---- FUNCTION SUPPORT FOR SCRIPT ---- # def hrt_bin_fun(X_train,y_train,X_test,y_test,cn_type): # ---- INTERNAL FUNCTION SUPPORT ---- # # Sigmoid function def sigmoid(x): return( 1/(1+np.exp(-x)) ) # Sigmoid weightin def sigmoid_w(x): return( sigmoid(x)*(1-sigmoid(x)) ) ...
_rmd/extra_hrt/hrt_python_copy.ipynb
erikdrysdale/erikdrysdale.github.io
mit
Get the p-values for the different datasets Now that the hrt_bin_fun has been defined, we can perform inference on the columns of the two datasets created above.
pval_circ = hrt_bin_fun(X_train=X_train_circ,y_train=y_train_circ,X_test=X_test_circ,y_test=y_test_circ,cn_type=cn_type_circ) pval_sah = hrt_bin_fun(X_train=X_train_sah,y_train=y_train_sah,X_test=X_test_sah,y_test=y_test_sah,cn_type=cn_type_sah)
_rmd/extra_hrt/hrt_python_copy.ipynb
erikdrysdale/erikdrysdale.github.io
mit
The results below show that sbp, tobacco, ldl, adiposity, and age are statistically significant features for the South African Heart Dataset. As expected, the first two variables, var1 and var2, from the non-linear decision boundary dataset are important, as these are the two variables which define the decision boun...
pd.concat([pd.DataFrame({'vars':dat_sah.columns, 'pval':pval_sah, 'dataset':'SAH'}), pd.DataFrame({'vars':['var'+str(x) for x in np.arange(5)+1],'pval':pval_circ,'dataset':'NLP'})])
_rmd/extra_hrt/hrt_python_copy.ipynb
erikdrysdale/erikdrysdale.github.io
mit
Make sure that your environment path is set to match the correct version of pandeia
print(os.environ['pandeia_refdata'])
import pandeia.engine
print(pandeia.engine.__version__)
notebooks/JWST_Running_Pandexo.ipynb
natashabatalha/PandExo
gpl-3.0
Load blank exo dictionary To start, load in a blank exoplanet dictionary with empty keys. You will fill these out for yourself in the next step.
exo_dict = jdi.load_exo_dict()
print(exo_dict.keys())
#print(exo_dict['star']['w_unit'])
notebooks/JWST_Running_Pandexo.ipynb
natashabatalha/PandExo
gpl-3.0
Edit exoplanet observation inputs Editing each key is tedious, but do it carefully or it could result in nonsense runs.
exo_dict['observation']['sat_level'] = 80 #saturation level in percent of full well exo_dict['observation']['sat_unit'] = '%' exo_dict['observation']['noccultations'] = 1 #number of transits exo_dict['observation']['R'] = None #fixed binning. I usually suggest ZERO binning.. you can always bin later ...
notebooks/JWST_Running_Pandexo.ipynb
natashabatalha/PandExo
gpl-3.0
Edit exoplanet host star inputs Note... If you select phoenix you do not have to provide a starpath, w_unit or f_unit, but you do have to provide a temp, metal and logg. If you select user you do not need to provide a temp, metal and logg, but you do need to provide units and starpath. Option 1) Grab stellar model fro...
#OPTION 1 get start from database exo_dict['star']['type'] = 'phoenix' #phoenix or user (if you have your own) exo_dict['star']['mag'] = 8.0 #magnitude of the system exo_dict['star']['ref_wave'] = 1.25 #For J mag = 1.25, H = 1.6, K =2.22.. etc (all in micron) exo_dict['star']['temp'] = 5500...
notebooks/JWST_Running_Pandexo.ipynb
natashabatalha/PandExo
gpl-3.0
Option 2) Input as dictionary or filename
#Let's create a little fake stellar input import scipy.constants as sc wl = np.linspace(0.8, 5, 3000) nu = sc.c/(wl*1e-6) # frequency in sec^-1 teff = 5500.0 planck_5500K = nu**3 / (np.exp(sc.h*nu/sc.k/teff) - 1) #can either be dictionary input starflux = {'f':planck_5500K, 'w':wl} #or can be as a stellar file #star...
notebooks/JWST_Running_Pandexo.ipynb
natashabatalha/PandExo
gpl-3.0
Edit exoplanet inputs using one of three options 1) user specified 2) constant value 3) select from grid 1) Edit exoplanet planet inputs if using your own model
exo_dict['planet']['type'] ='user' #tells pandexo you are uploading your own spectrum exo_dict['planet']['exopath'] = 'wasp12b.txt' #or as a dictionary #exo_dict['planet']['exopath'] = {'f':spectrum, 'w':wavelength} exo_dict['planet']['w_unit'] = 'cm' #other options include ...
notebooks/JWST_Running_Pandexo.ipynb
natashabatalha/PandExo
gpl-3.0
2) Users can also add in a constant temperature or a constant transit depth
exo_dict['planet']['type'] = 'constant' #tells pandexo you want a fixed transit depth exo_dict['planet']['transit_duration'] = 2.0*60.0*60.0 #transit duration exo_dict['planet']['td_unit'] = 's' exo_dict['planet']['radius'] = 1 exo_dict['planet']['r_unit'] = 'R_jup' #Any unit of distance...
notebooks/JWST_Running_Pandexo.ipynb
natashabatalha/PandExo
gpl-3.0
3) Select from grid NOTE: Currently only the fortney grid for hot Jupiters from Fortney+2010 is supported. Holler though, if you want another grid supported
exo_dict['planet']['type'] = 'grid' #tells pandexo you want to pull from the grid exo_dict['planet']['temp'] = 1000 #grid: 500, 750, 1000, 1250, 1500, 1750, 2000, 2250, 2500 exo_dict['planet']['chem'] = 'noTiO' #options: 'noTiO' and 'eqchem', noTiO is chemical eq. without TiO...
notebooks/JWST_Running_Pandexo.ipynb
natashabatalha/PandExo
gpl-3.0
Load in instrument dictionary (OPTIONAL) Step 2 is optional because PandExo has the functionality to automatically load in instrument dictionaries. Skip this if you plan on observing with one of the following and want to use the subarray with the smallest frame time and the readout mode with 1 frame/1 group (standard):...
jdi.print_instruments()
inst_dict = jdi.load_mode_dict('NIRSpec G140H')

#loading in instrument dictionaries allows you to personalize some of
#the fields that are predefined in the templates. The templates have
#the subarrays with the lowest frame times and the readmodes with 1 frame per group.
#if that is not wha...
notebooks/JWST_Running_Pandexo.ipynb
natashabatalha/PandExo
gpl-3.0
Running PandExo You have four options for running PandExo. All of them are accessed through the function jdi.run_pandexo. See examples below. jdi.run_pandexo(exo, inst, param_space=0, param_range=0, save_file=True, output_path=os.getcwd(), output_file='', verbose=True) Option 1- Run sin...
jdi.print_instruments()

result = jdi.run_pandexo(exo_dict, ['NIRCam F322W2'], verbose=True)
notebooks/JWST_Running_Pandexo.ipynb
natashabatalha/PandExo
gpl-3.0
Note, you can turn off print statements with verbose=False Option 2- Run single instrument mode (with user dict), single planet This is the same thing as option 1, but instead of feeding it a list of keys, you can feed it an instrument dictionary (this is for users who want to simulate something NOT predefined within ...
inst_dict = jdi.load_mode_dict('NIRSpec G140H')

#personalize subarray
inst_dict["configuration"]["detector"]["subarray"] = 'sub2048'

result = jdi.run_pandexo(exo_dict, inst_dict)
notebooks/JWST_Running_Pandexo.ipynb
natashabatalha/PandExo
gpl-3.0
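Overriding a nested field such as the subarray is plain dictionary manipulation. Here is a minimal sketch; the `set_nested` helper and the starting `'sub512'` value are hypothetical, and only the `'sub2048'` override comes from the cell above.

```python
# Plain-Python sketch of the nested-dictionary override used above.
def set_nested(d, keys, value):
    """Walk nested dicts along `keys` and set the final key to `value`."""
    for k in keys[:-1]:
        d = d.setdefault(k, {})
    d[keys[-1]] = value

# Hypothetical starting configuration.
cfg = {'configuration': {'detector': {'subarray': 'sub512'}}}
set_nested(cfg, ['configuration', 'detector', 'subarray'], 'sub2048')
print(cfg['configuration']['detector']['subarray'])  # sub2048
```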
Option 3- Run several modes, single planet Use several modes from the print_instruments() options.
#choose select
result = jdi.run_pandexo(exo_dict,
                         ['NIRSpec G140M', 'NIRSpec G235M', 'NIRSpec G395M'],
                         output_file='three_nirspec_modes.p', verbose=True)

#run all
#result = jdi.run_pandexo(exo_dict, ['RUN ALL'], save_file=False)
notebooks/JWST_Running_Pandexo.ipynb
natashabatalha/PandExo
gpl-3.0
Autoencoders An autoencoder is a type of neural network used to learn an efficient representation, or encoding, for a set of data. The advantages of using these learned encodings are similar to those of word embeddings; they reduce the dimension of the feature space and can capture similarities between different inputs...
# Set random seeds for reproducible results.
import numpy as np
import tensorflow as tf

np.random.seed(42)
tf.random.set_seed(42)

# Load dataset using keras data loader.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
content/05_deep_learning/03_autoencoders/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
Each image in the dataset is 28 x 28 pixels. Let's flatten each to a 1-dimensional vector of length 784.
image_size = x_train.shape[1]
original_dim = image_size * image_size

# Flatten each image into a 1-d vector.
x_train = np.reshape(x_train, [-1, original_dim])
x_test = np.reshape(x_test, [-1, original_dim])

# Rescale pixel values to a 0-1 range.
x_train = x_train.astype('float32') / 255
x_test = x_test.astype('float32...
content/05_deep_learning/03_autoencoders/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
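The flatten-and-rescale step can be checked in isolation. This sketch runs on random dummy data rather than MNIST: each 28x28 uint8 image becomes a length-784 float vector with values in [0, 1].

```python
import numpy as np

# Dummy stand-in for MNIST: five 28x28 uint8 images.
image_size = 28
original_dim = image_size * image_size  # 784

dummy = np.random.randint(0, 256, size=(5, image_size, image_size), dtype=np.uint8)

# Same flatten + rescale as the cell above.
flat = np.reshape(dummy, [-1, original_dim]).astype('float32') / 255

print(flat.shape)  # (5, 784)
```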
Autoencoder Structure <a title="Chervinskii [CC BY-SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0)], via Wikimedia Commons" href="https://commons.wikimedia.org/wiki/File:Autoencoder_structure.png"><img width="512" alt="Autoencoder structure" src="https://upload.wikimedia.org/wikipedia/commons/2/28/Autoencoder_s...
from tensorflow.keras import Input
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Model
content/05_deep_learning/03_autoencoders/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
Encoder
latent_dim = 36

# input layer (needed for the Model API).
input_layer = Input(shape=(original_dim,), name='encoder_input')

# Notice that with all layers except for the first,
# we need to specify which layer is used as input.
latent_layer = Dense(latent_dim, activation='relu', name='latent_layer'...
content/05_deep_learning/03_autoencoders/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
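As a sanity check on the encoder's summary output, the parameter count of a single Dense layer from 784 inputs to 36 units can be computed by hand: one weight per input-unit pair, plus one bias per unit.

```python
# Dense layer parameters = (inputs x units) weights + (units) biases.
original_dim, latent_dim = 784, 36
params = original_dim * latent_dim + latent_dim
print(params)  # 28260
```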
Decoder
latent_inputs = Input(shape=(latent_dim,), name='decoder_input')
output_layer = Dense(original_dim, name='decoder_output')(latent_inputs)

decoder = Model(latent_inputs, output_layer, name='decoder')
decoder.summary()
content/05_deep_learning/03_autoencoders/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
Training The full autoencoder passes the inputs to the encoder, then the latent representations from the encoder to the decoder. We'll use the Adam optimizer and Mean Squared Error loss.
autoencoder = Model(
    input_layer,
    decoder(encoder(input_layer)),
    name="autoencoder"
)
autoencoder.compile(optimizer='adam', loss='mse')
autoencoder.summary()
content/05_deep_learning/03_autoencoders/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
We will train for 50 epochs, using EarlyStopping to stop training early if validation loss improves by less than 0.0001 for 10 consecutive epochs. Using a batch size of 2048, this should take 1-2 minutes to train.
early_stopping = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss',
    # minimum change in loss that qualifies as "improvement"
    # higher values of min_delta lead to earlier stopping
    min_delta=0.0001,
    # threshold for number of epochs with no improvement
    patience=10,
    verbose=1
)

autoencoder.f...
content/05_deep_learning/03_autoencoders/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
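The stopping rule EarlyStopping applies can be sketched in plain Python (simplified — the real callback also supports baselines, modes, and weight restoration): training stops once the monitored loss has failed to improve by at least `min_delta` for `patience` consecutive epochs.

```python
# Simplified sketch of the min_delta / patience stopping rule.
def stop_epoch(losses, min_delta=0.0001, patience=10):
    """Return the epoch index at which training would stop, or None."""
    best = float('inf')
    wait = 0
    for epoch, loss in enumerate(losses):
        if best - loss > min_delta:
            best = loss   # real improvement: reset the counter
            wait = 0
        else:
            wait += 1
            if wait >= patience:
                return epoch
    return None  # patience never exhausted

# Loss improves for 3 epochs, then plateaus: stop 10 epochs into the plateau.
losses = [0.5, 0.4, 0.39] + [0.39] * 12
print(stop_epoch(losses))  # 12
```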
Visualize Predictions
decoded_imgs = autoencoder.predict(x_test)

import matplotlib.pyplot as plt

def visualize_imgs(nrows, axis_names, images, sizes, n=10):
    '''
    Plots images in a grid layout.

    nrows: number of rows of images to display
    axis_names: list of names for each row
    images: list of arrays of images
    sizes: list of image...
content/05_deep_learning/03_autoencoders/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
This shows 10 original images with their corresponding reconstructed images directly below. Clearly, our autoencoder captured the basic digit structure of each image, though the reconstructed images are less sharp. Application: Image Compression Autoencoders have been used extensively in image compression and processin...
# Compress original images.
encoded_imgs = encoder.predict(x_test)

# Reconstruct original images.
decoded_imgs = decoder.predict(encoded_imgs)

visualize_imgs(
    3,
    ['Original Images', '36-dimensional Latent Representation', 'Reconstructions'],
    [x_test, encoded_imgs, decoded_imgs],
    [image_size, 6, image_s...
content/05_deep_learning/03_autoencoders/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
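Since each 784-dimensional image is mapped to a 36-dimensional code, the latent representation is roughly 22 times smaller than the original:

```python
# Compression factor of the latent representation.
original_dim, latent_dim = 784, 36
ratio = original_dim / latent_dim
print(round(ratio, 1))  # 21.8
```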
Now we can visualize the latent representation of each image that the autoencoder learned. Since this reduces the 784-dimensional original image to a 36-dimensional image, it essentially performs an image compression. Application: Image Denoising Autoencoders can also "denoise" images, such as poorly scanned pictures, ...
from imgaug import augmenters

# Reshape images to 3-dimensional for augmenter. Since the images were
# originally 2-dimensional, the third dimension is just 1.
x_train = x_train.reshape(-1, image_size, image_size, 1)
x_test = x_test.reshape(-1, image_size, image_size, 1)

# p is the probability of changing a pixel t...
content/05_deep_learning/03_autoencoders/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
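For readers without imgaug installed, salt-and-pepper noise can be sketched with NumPy alone. This is a hedged stand-in for the augmenter used above, not its exact implementation: each pixel is independently replaced with 0 or 1 with probability `p`.

```python
import numpy as np

# NumPy-only sketch of salt-and-pepper noise (stand-in for imgaug).
def salt_and_pepper(images, p=0.5, seed=0):
    """Replace each pixel with 0 or 1 with probability p."""
    rng = np.random.default_rng(seed)
    noisy = images.copy()
    mask = rng.random(images.shape) < p          # which pixels to corrupt
    noise = rng.integers(0, 2, size=images.shape).astype(images.dtype)
    noisy[mask] = noise[mask]
    return noisy

imgs = np.zeros((2, 4, 4), dtype='float32')
noisy = salt_and_pepper(imgs, p=0.5)
print(noisy.shape)  # (2, 4, 4)
```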
For comparison, here is what 5 images look like before we add noise:
f, ax = plt.subplots(figsize=(20, 2), nrows=1, ncols=5)
for i in range(5, 10):
    ax[i-5].imshow(x_train[i].reshape(image_size, image_size))
plt.show()
content/05_deep_learning/03_autoencoders/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
After we add noise, the images look like this:
f, ax = plt.subplots(figsize=(20, 2), nrows=1, ncols=5)
for i in range(5, 10):
    ax[i-5].imshow(x_train_noise[i].reshape(image_size, image_size))
plt.show()
content/05_deep_learning/03_autoencoders/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
As you can see, the images are quite noisy and difficult to denoise, even for the human eye. Luckily, autoencoders are much better at this task. We'll follow a similar architecture to before, but this time we'll train the model using the noisy images as input and the original, un-noisy images as output. Encoder We will...
from tensorflow.keras.layers import Conv2D, MaxPool2D, UpSampling2D

filter_1 = 64
filter_2 = 32
filter_3 = 16
kernel_size = (3, 3)
pool_size = (2, 2)
latent_dim = 4

input_layer = Input(shape=(image_size, image_size, 1))

# First convolutional layer
encoder_conv1 = Conv2D(filter_1, kernel_size, ...
content/05_deep_learning/03_autoencoders/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
Decoder The decoder will work in reverse, using 3 Conv2D layers with increasing filter counts and an UpSampling2D layer after each. This reverses the encoder's downsampling, reconstructing a denoised image at the original resolution.
latent_inputs = Input(shape=(latent_dim, latent_dim, filter_3))

# First convolutional layer
decoder_conv1 = Conv2D(filter_3, kernel_size, activation='relu',
                       padding='same')(latent_inputs)
decoder_up1 = UpSampling2D(pool_size)(decoder_conv1)

# Second convolutional layer
decoder_conv2 = Conv2D(fil...
content/05_deep_learning/03_autoencoders/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
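With its default nearest-neighbor interpolation, UpSampling2D simply repeats each row and column of its input. A NumPy equivalent for a single channel, useful for reasoning about the shapes in the decoder:

```python
import numpy as np

# NumPy equivalent of UpSampling2D((2, 2)) with nearest-neighbor
# interpolation, for a single-channel 2-d array.
def upsample2x(x):
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

x = np.array([[1, 2],
              [3, 4]])
print(upsample2x(x).shape)  # (4, 4)
```

Applying this doubling three times explains how the decoder grows the small latent feature maps back toward the input resolution.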
Training We will again use early stopping and the same model parameters.
denoise_autoencoder = Model(
    input_layer,
    decoder_denoise(encoder_denoise(input_layer))
)
denoise_autoencoder.compile(optimizer='adam', loss='mse')
denoise_autoencoder.summary()
content/05_deep_learning/03_autoencoders/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
We will only train for 10 epochs this time since the model is more complex and takes longer to train. This should take around a minute.
denoise_autoencoder.fit(
    # Input
    x_train_noise,
    # Output
    x_train,
    epochs=10,
    batch_size=2048,
    validation_data=(x_test_noise, x_test),
    callbacks=[early_stopping]
)
content/05_deep_learning/03_autoencoders/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
Visualize Denoised Images Let's visualize the first 10 denoised images.
denoised_imgs = denoise_autoencoder.predict(x_test_noise[:10])

visualize_imgs(
    3,
    ['Noisy Images', 'Denoised Images', 'Original Images'],
    [x_test_noise, denoised_imgs, x_test],
    [image_size, image_size, image_size]
)
content/05_deep_learning/03_autoencoders/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
As we can see, the autoencoder is mostly successful in recovering the original image, though a few denoised images are still blurry or unclear. More training or a different model architecture may help. Resources Introduction to Autoencoders Building Autoencoders in Keras PCA vs. Autoencoders Variational Autoencoders A...
! chmod 600 kaggle.json && (ls ~/.kaggle 2>/dev/null || mkdir ~/.kaggle) && cp kaggle.json ~/.kaggle/ && echo 'Done'
! kaggle datasets download joshmcadams/mighty-mouse-wolf-wolf
! unzip mighty-mouse-wolf-wolf.zip
! ls
content/05_deep_learning/03_autoencoders/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
We'll use the smaller videos (80x60) in this exercise to fit within Colab's memory limits and to make our model run faster. mighty_mouse_80x60_watermarked.mp4 contains the feature data. This is the watermarked video file. mighty_mouse_80x60.mp4 contains the target data. This is the video file before...
# Your answer goes here
content/05_deep_learning/03_autoencoders/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0