IL/XL and XY, multi-line header, multiple attributes Load everything (default) X and Y are loaded as cdp_x and cdp_y, to be consistent with the seisnc standard in segysak.
ds = gio.read_odt('../data/OdT/3d_horizon/Segment_XY-and-ILXL_Multi-line-header.dat') ds import matplotlib.pyplot as plt plt.scatter(ds.coords['cdp_x'], ds.coords['cdp_y'], s=5)
docs/userguide/Read_OpendTect_horizons.ipynb
agile-geoscience/gio
apache-2.0
Load only inline, crossline, TWT There is only one attribute here: Z, which is the two-way time of the horizon. Note that when loading data from OpendTect, you always get an xarray.Dataset, even if there's only a single attribute. This is because the format supports multiple grids and we didn't want you to have to gues...
fname = '../data/OdT/3d_horizon/Segment_XY-and-ILXL_Multi-line-header.dat' names = ['Inline', 'Crossline', 'Z'] # Must match OdT DAT file. ds = gio.read_odt(fname, names=names) ds
docs/userguide/Read_OpendTect_horizons.ipynb
agile-geoscience/gio
apache-2.0
XY only If you have a file with no IL/XL, gio can try to load data using only X and Y: If there's a header, you can load any number of attributes. If there's no header, you can only load one attribute (e.g. TWT) automagically... OR, if there's no header, you can provide names to tell gio what everything is. gio must create...
fname = '../data/OdT/3d_horizon/Segment_XY_Single-line-header.dat' ds = gio.read_odt(fname, origin=(376, 812), step=(2, 2)) ds ds['twt'].plot()
docs/userguide/Read_OpendTect_horizons.ipynb
agile-geoscience/gio
apache-2.0
No header, more than one attribute: raises an error
fname = '../data/OdT/3d_horizon/Segment_XY_No-header.dat' ds = gio.read_odt(fname) ds # Raises an error: fname = '../data/OdT/3d_horizon/Segment_XY_No-header.dat' ds = gio.read_odt(fname, names=['X', 'Y', 'TWT']) ds ds['twt'].plot()
docs/userguide/Read_OpendTect_horizons.ipynb
agile-geoscience/gio
apache-2.0
Sparse data Sometimes a surface only exists at a few points, e.g. a 3D seismic interpretation grid. In general, loading data like this is completely safe if you have inline and xline locations. If you only have (x, y) locations, gio will attempt to load it, but you should inspect the result carefully.
fname = '../data/OdT/3d_horizon/Nimitz_Salmon_XY-and-ILXL_Single-line-header.dat' ds = gio.read_odt(fname) ds ds['twt'].plot.imshow()
docs/userguide/Read_OpendTect_horizons.ipynb
agile-geoscience/gio
apache-2.0
Multiple horizons in one file You can export multiple horizons from OpendTect. These will be loaded as one xarray.Dataset as different Data variables. (The actual attribute you exported from OdT is always called Z; this information is not retained in the xarray.)
fname = '../../gio-dev/data/OdT/3d_horizon/multi_horizon/Multi_header_H2_and_H4_X_Y_iL_xL_Z_in_sec.dat' ds = gio.read_odt(fname) ds ds['F3_Demo_2_FS6'].plot() ds['F3_Demo_4_Truncation'].plot()
docs/userguide/Read_OpendTect_horizons.ipynb
agile-geoscience/gio
apache-2.0
Multi-horizon, no header Unfortunately, OdT exports (x, y) in the first two columns, meaning you can't assume that columns 3 and 4 are inline, crossline. So if there's no header, and XY as well as inline/xline, you have to give the column names:
import gio fname = '../data/OdT/3d_horizon/Test_Multi_XY-and-ILXL_Z-only.dat' ds = gio.read_odt(fname, names=['Horizon', 'X', 'Y', 'Inline', 'Crossline', 'Z']) ds
docs/userguide/Read_OpendTect_horizons.ipynb
agile-geoscience/gio
apache-2.0
Undefined values These are exported as '1e30' by default. You can override this (not add to it, which is the default pandas behaviour) by passing one or more na_values.
fname = '../data/OdT/3d_horizon/Segment_XY_No-header_NULLs.dat' ds = gio.read_odt(fname, names=['X', 'Y', 'TWT']) ds ds['twt'].plot()
docs/userguide/Read_OpendTect_horizons.ipynb
agile-geoscience/gio
apache-2.0
In many cases, it is easier to use the subplots function, which creates a new Figure along with an array of Axes objects that can be indexed in a rational manner:
f, ax = plt.subplots(2, 2,figsize=(15,5)) for i in range(2): for j in range(2): plt.sca(ax[i,j]) plt.plot(np.random.rand(20)) plt.xlabel('x') plt.ylabel('y') plt.tight_layout()
Matplotlib/Matplotlib.ipynb
JAmarel/Phys202
mit
Go to lxmls/deep learning/mlp.py:class NumpyMLP:def grads() and complete the code of the NumpyMLP class with the Backpropagation recursion that we just saw.
def grads(self, x, y): """ Computes the gradients of the network with respect to cross entropy error cost """ # Run forward and store activations for each layer activations = self.forward(x, all_outputs=True) # For each layer in reverse store the gradients for ...
.ipynb_checkpoints/Lxmls_Day5-checkpoint.ipynb
jnobre/lxmls-toolkit-2017
mit
5.5.2 Symbolic Forward Pass Exercise 5.3 Complete the method forward() inside of the lxmls/deep learning/mlp.py:class TheanoMLP. Note that this is called only once at the initialization of the class. To debug your implementation put a breakpoint at the init function call. Hint: Note that this is very similar to NumpyML...
def _forward(self, x, all_outputs=False): """ Symbolic forward pass all_outputs = True return symbolic input and intermediate activations """ # This will store activations at each layer and the input. This is # needed to compute backpropagation if all_outputs: activations = [x] ...
.ipynb_checkpoints/Lxmls_Day5-checkpoint.ipynb
jnobre/lxmls-toolkit-2017
mit
5.5.4 Symbolic mini-batch update Exercise 5.5 Define the updates list. This is a list where each element is a tuple of a parameter and the update rule to be applied to that parameter. In this case we are defining the SGD update rule, but take into account that using more complex update rules like e.g. momentum or adam imp...
W2, b2 = mlp_a.params[2:4] # Second layer symbolic variables _W2 = theano.shared(value=W2, name='W2', borrow=True) _b2 = theano.shared(value=b2, name='b2', borrow=True, broadcastable=(False, True)) _z2 = T.dot(_W2, _tilde_z1) + _b2 _tilde_z2 = T.nnet.softmax(_z2.T).T # Ground truth _y = T.ivector(...
.ipynb_checkpoints/Lxmls_Day5-checkpoint.ipynb
jnobre/lxmls-toolkit-2017
mit
Exercise 5.6
import time # Understanding the mini-batch function and givens/updates parameters # Numpy geometry = [train_x.shape[0], 20, 2] actvfunc = ['sigmoid', 'softmax'] mlp_a = dl.NumpyMLP(geometry, actvfunc) # init_t = time.clock() sgd.SGD_train(mlp_a, n_iter, bsize=bsize, lrate=lrate, train_set=(train_x, train_y)) print "\...
.ipynb_checkpoints/Lxmls_Day5-checkpoint.ipynb
jnobre/lxmls-toolkit-2017
mit
Sequential container Define the forward and backward pass procedures.
class Sequential(Module): """ This class implements a container, which processes `input` data sequentially. `input` is processed by each module (layer) in self.modules consecutively. The resulting array is called `output`. """ def __init__ (self): super(Se...
hw5/hw1_Modules.ipynb
Boialex/MIPT-ML
gpl-3.0
Layers input: batch_size x n_feats1 output: batch_size x n_feats2
class Linear(Module): """ A module which applies a linear transformation A common name is fully-connected layer, InnerProductLayer in caffe. The module should work with 2D input of shape (n_samples, n_feature). """ def __init__(self, n_in, n_out): super(Linear, self).__init__() ...
hw5/hw1_Modules.ipynb
Boialex/MIPT-ML
gpl-3.0
This one is probably the hardest, but like the others it takes only 5 lines of code in total. - input: batch_size x n_feats - output: batch_size x n_feats
class SoftMax(Module): def __init__(self): super(SoftMax, self).__init__() def updateOutput(self, input): # start with normalization for numerical stability self.output = np.subtract(input, input.max(axis=1, keepdims=True)) # Your code goes here. ##################...
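A minimal sketch of the missing forward computation, written as a standalone NumPy function rather than inside the Module class (the real solution would assign the result to self.output inside updateOutput):

```python
import numpy as np

def softmax_forward(input):
    # Subtract the row-wise max for numerical stability (as in the stub),
    # then exponentiate and normalize each row so it sums to 1.
    shifted = input - input.max(axis=1, keepdims=True)
    exps = np.exp(shifted)
    return exps / exps.sum(axis=1, keepdims=True)

probs = softmax_forward(np.array([[1.0, 2.0, 3.0]]))
```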
hw5/hw1_Modules.ipynb
Boialex/MIPT-ML
gpl-3.0
Implement dropout. The idea and implementation are really simple: just multiply the input by a $Bernoulli(p)$ mask. This is a very cool regularizer. In fact, when you see your net is overfitting, try adding more dropout. While training (self.training == True) it should sample a mask on each iteration (for every batch). W...
class Dropout(Module): def __init__(self, p=0.5): super(Dropout, self).__init__() self.p = p self.mask = None def updateOutput(self, input): # Your code goes here. ################################################ if self.training: self.mask =...
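As a sketch of what updateOutput might compute, here is a standalone NumPy version. The test-time convention (scaling by 1 - p, versus inverted dropout that scales while training) is an assumption, since the exercise text above is truncated:

```python
import numpy as np

def dropout_forward(input, p=0.5, training=True, rng=None):
    # While training: multiply by a Bernoulli mask, keeping each entry
    # with probability 1 - p. At test time, one common convention is to
    # scale the input by (1 - p) instead of masking; this choice is an
    # assumption here.
    rng = np.random.default_rng() if rng is None else rng
    if training:
        mask = (rng.random(input.shape) >= p).astype(input.dtype)
        return input * mask
    return input * (1.0 - p)

test_out = dropout_forward(np.array([[1.0, 2.0]]), p=0.5, training=False)
train_out = dropout_forward(np.ones((4, 4)), p=0.5, training=True,
                            rng=np.random.default_rng(0))
```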
hw5/hw1_Modules.ipynb
Boialex/MIPT-ML
gpl-3.0
Implement Leaky Rectified Linear Unit. Experiment with the slope.
class LeakyReLU(Module): def __init__(self, slope = 0.03): super(LeakyReLU, self).__init__() self.slope = slope def updateOutput(self, input): # Your code goes here. ################################################ self.output = input.copy() self.out...
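A sketch of the forward pass as a standalone function, following the copy-then-scale pattern visible in the stub above:

```python
import numpy as np

def leaky_relu_forward(input, slope=0.03):
    # Copy the input, then scale only the negative entries by the slope.
    output = input.copy()
    output[input < 0] *= slope
    return output

out = leaky_relu_forward(np.array([-1.0, 0.5, 2.0]))
```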
hw5/hw1_Modules.ipynb
Boialex/MIPT-ML
gpl-3.0
The MSECriterion, which is basic L2 norm usually used for regression, is implemented here for you.
class MSECriterion(Criterion): def __init__(self): super(MSECriterion, self).__init__() def updateOutput(self, input, target): self.output = np.sum(np.power(np.subtract(input, target), 2)) / input.shape[0] return self.output def updateGradInput(self, input, target):...
hw5/hw1_Modules.ipynb
Boialex/MIPT-ML
gpl-3.0
Your task is to implement the ClassNLLCriterion. It should implement multiclass log loss. Although there is a sum over y (the target) in that formula, remember that targets are one-hot encoded. This fact simplifies the computations a lot. Note that criterions are the only places where you divide by the batch size.
class ClassNLLCriterion(Criterion): def __init__(self): a = super(ClassNLLCriterion, self) super(ClassNLLCriterion, self).__init__() def updateOutput(self, input, target): # Use this trick to avoid numerical errors eps = 1e-15 input_clamp = np.clip(inp...
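A standalone sketch of the loss computation, assuming one-hot targets as stated above and reusing the eps-clipping trick from the stub:

```python
import numpy as np

def class_nll_forward(input, target, eps=1e-15):
    # Clip the predicted probabilities for numerical stability, then
    # average the negative log-probability of the true class. The one-hot
    # target makes the inner sum pick out a single entry per row.
    input_clamp = np.clip(input, eps, 1 - eps)
    return -np.sum(target * np.log(input_clamp)) / input.shape[0]

loss = class_nll_forward(np.array([[0.5, 0.5]]), np.array([[1.0, 0.0]]))
```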
hw5/hw1_Modules.ipynb
Boialex/MIPT-ML
gpl-3.0
So building and training Neural Networks in Python is simple! But it is also powerful! Neural Style Transfer: github.com/titu1994/Neural-Style-Transfer <font size=20> <table border="0"><tr> <td><img src="img/golden_gate.jpg" width=250></td> <td>+</td> <td><img src="img/starry_night.jpg" width=250></td> <td>=</td> <td><...
from __future__ import absolute_import, print_function, division from ipywidgets import interact, interactive, widgets import numpy as np np.random.seed(1337) # for reproducibility
PyconZA-2016.ipynb
snth/ctdeep
mit
Let's load some data
from keras.datasets import mnist (images_train, labels_train), (images_test, labels_test) = mnist.load_data() print("Data shapes:") print('images', images_train.shape) print('labels', labels_train.shape)
PyconZA-2016.ipynb
snth/ctdeep
mit
and then visualise it
%matplotlib inline import matplotlib import matplotlib.pyplot as plt def plot_mnist_digit(image, figsize=None): """ Plot a single MNIST image.""" fig = plt.figure() ax = fig.add_subplot(1, 1, 1) if figsize: ax.set_figsize(*figsize) ax.matshow(image, cmap = matplotlib.cm.binary) plt.xtic...
PyconZA-2016.ipynb
snth/ctdeep
mit
Data Preprocessing Transform "images" to "features" ... Most machine learning algorithms expect a flat array of numbers
def to_features(X): return X.reshape(-1, 784).astype("float32") / 255.0 def to_images(X): return (X*255.0).astype('uint8').reshape(-1, 28, 28) print('data shape:', images_train.shape, images_train.dtype) print('features shape', to_features(images_train).shape, to_features(images_train).dtype)
PyconZA-2016.ipynb
snth/ctdeep
mit
Split the data into a "training" and "test" set ...
#(images_train, labels_train), (images_test, labels_test) = mnist.load_data() X_train = to_features(images_train) X_test = to_features(images_test) print(X_train.shape, 'training samples') print(X_test.shape, 'test samples')
PyconZA-2016.ipynb
snth/ctdeep
mit
Transform the labels to a "one-hot" encoding ...
# The labels need to be transformed into class indicators from keras.utils import np_utils y_train = np_utils.to_categorical(labels_train, nb_classes=10) y_test = np_utils.to_categorical(labels_test, nb_classes=10) print('labels_train:', labels_train.shape, labels_train.dtype) print('y_train:', y_train.shape, y_train.dt...
PyconZA-2016.ipynb
snth/ctdeep
mit
For example, let's inspect the first 2 labels:
print('labels_train[:2]:\n', labels_train[:2][:, np.newaxis]) print('y_train[:2]\n', y_train[:2])
PyconZA-2016.ipynb
snth/ctdeep
mit
Once the model is trained, we can evaluate its performance on the test data.
mlp.evaluate(X_test, y_test) #plot_10_by_10_images(images_test, figsize=(8,8)) def draw_mlp_prediction(j): plot_mnist_digit(to_images(X_test)[j]) prediction = mlp.predict_classes(X_test[j:j+1], verbose=False)[0] print('predict:', prediction, '\tactual:', labels_test[j]) interact(draw_mlp_prediction, j=(0...
PyconZA-2016.ipynb
snth/ctdeep
mit
Deep Learning Why do we want Deep Neural Networks? Universal Approximation Theorem The theorem thus states that simple neural networks can represent a wide variety of interesting functions when given appropriate parameters; however, it does not touch upon the algorithmic learnability of those parameters. Power of com...
from keras.models import Sequential nb_layers = 2 mlp2 = Sequential() # add hidden layers for i in range(nb_layers): mlp2.add(Dense(output_dim=nb_hidden//nb_layers, input_dim=nb_input if i==0 else nb_hidden//nb_layers, init='uniform')) mlp2.add(Activation('sigmoid')) # add output layer mlp2.add(Dense(output_dim...
PyconZA-2016.ipynb
snth/ctdeep
mit
Did you notice anything about the accuracy? Let's train it some more.
mlp2.fit(X_train, y_train, batch_size=batch_size, nb_epoch=nb_epoch, verbose=1) mlp2.evaluate(X_test, y_test)
PyconZA-2016.ipynb
snth/ctdeep
mit
Autoencoders Hinton 2006 (local)
from IPython.display import HTML HTML('<iframe src="pdf/Hinton2006-science.pdf" width=800 height=400></iframe>') from keras.models import Sequential from keras.layers.core import Dense, Activation, Dropout print('nb_input =', nb_input) print('nb_hidden =', nb_hidden) ae = Sequential() # encoder ae.add(Dense(nb_hidden...
PyconZA-2016.ipynb
snth/ctdeep
mit
A better Autoencoder
from keras.models import Sequential from keras.layers.core import Dense, Activation, Dropout def make_autoencoder(nb_input=nb_input, nb_hidden=nb_hidden, activation='sigmoid', init='uniform'): ae = Sequential() # encoder ae.add(Dense(nb_hidden, input_dim=nb_input, init=init)) ae.ad...
PyconZA-2016.ipynb
snth/ctdeep
mit
Stacked Autoencoder
from keras.models import Sequential from keras.layers.core import Dense, Activation, Dropout class StackedAutoencoder(object): def __init__(self, layers, mode='autoencoder', activation='sigmoid', init='uniform', final_activation='softmax', dropout=0.2, optimizer='SGD', metric...
PyconZA-2016.ipynb
snth/ctdeep
mit
Visualising the Filters
def visualise_filter(model, layer_index, filter_index): from keras import backend as K # build a loss function that maximizes the activation # of the nth filter on the layer considered layer_output = model.layers[layer_index].get_output() loss = K.mean(layer_output[:, filter_index]) # compute ...
PyconZA-2016.ipynb
snth/ctdeep
mit
But, did you know that they are both referring to the same 7 object? In other words, variables in Python are always references or pointers to data so the variables are not technically holding the value. Pointers are like phone numbers that "point at" phones but pointers themselves are not the phone itself. We can unco...
x = y = 7 print(id(x)) print(id(y))
notes/aliasing.ipynb
parrt/msan501
mit
Wow! They are the same. That number represents the memory location where Python has stored the shared 7 object. Of course, as programmers we don't think of these atomic elements as referring to the same object; just keep in mind that they do. We are more likely to view them as copies of the same number, as lolviz show...
from lolviz import * callviz(varnames=['x','y'])
notes/aliasing.ipynb
parrt/msan501
mit
Let's verify that the same thing happens for strings:
name = 'parrt' userid = name # userid now points at the same memory as name print(id(name)) print(id(userid))
notes/aliasing.ipynb
parrt/msan501
mit
Ok, great, so we are in fact sharing the same memory address to hold the string 'parrt' and both of the variable names point at that same shared space. We call this aliasing, in the language implementation business. Things only get freaky when we start changing shared data. This can't happen with integers and strings b...
you = [1,3,5] me = [1,3,5] print(id(you)) print(id(me)) callviz(varnames=['you','me'])
notes/aliasing.ipynb
parrt/msan501
mit
Those lists have the same value but live a different memory addresses. They are not aliased; they are not shared. Consequently, changing one does not change the other:
you = [1,3,5] me = [1,3,5] print(you, me) you[0] = 99 print(you, me)
notes/aliasing.ipynb
parrt/msan501
mit
On the other hand, let's see what happens if we make you and me share the same copy of the list (point at the same memory location):
you = [1,3,5] me = you print(id(you)) print(id(me)) print(you, me) callviz(varnames=['you','me'])
notes/aliasing.ipynb
parrt/msan501
mit
Now, changing one appears to change the other, but in fact both simply refer to the same location in memory:
you[0] = 99 print(you, me) callviz(varnames=['you','me'])
notes/aliasing.ipynb
parrt/msan501
mit
Don't confuse changing the pointer to the list with changing the list elements:
you = [1,3,5] me = you callviz(varnames=['you','me']) me = [9,7,5] # doesn't affect `you` at all print(you) print(me) callviz(varnames=['you','me'])
notes/aliasing.ipynb
parrt/msan501
mit
This aliasing of data happens a great deal when we pass lists or other data structures to functions. Passing list Quantity to a function whose argument is called data means that the two are aliased. We'll look at this in more detail in the "Visibility of symbols" section of Organizing your code with functions. Shallow ...
X = [[1,2],[3,4]] Y = X.copy() # shallow copy callviz(varnames=['X','Y']) X[0][1] = 99 callviz(varnames=['X','Y']) print(Y)
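By contrast, copy.deepcopy also copies the nested lists, so the inner lists are no longer aliased and mutating one structure leaves the other untouched:

```python
import copy

X = [[1, 2], [3, 4]]
Y = copy.deepcopy(X)   # unlike X.copy(), this also copies the inner lists
X[0][1] = 99           # mutating X no longer shows up in Y
```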
notes/aliasing.ipynb
parrt/msan501
mit
Feedforward Network As before, there is an input x and an output y, but the prediction and computation in between look a bit different. <img src="https://upload.wikimedia.org/wikipedia/en/5/54/Feed_forward_neural_net.gif" /> The model Again consider a four-dimensional input and 3 output classes. Our input $x=\begin{pmatrix} x_0 \\ x_1 \\ x_2 \\ x_3 \end{pmatrix}$ is a vector, which we treat as a column vector. Layer 0 The weight matrix: $ W^{(0)} = \begin{pmatrix} W^{(0)}_0 \ ...
# Reference answer %run solutions/ff_oneline.py
Week11/DIY_AI/FeedForward-Forward Propagation.ipynb
tjwei/HackNTU_Data_2017
mit
Task: compute the final predicted probabilities $q$. Setup: the input is 4-dimensional, the output is 3-dimensional, and the hidden layer is 6-dimensional. * Set some weights $A,b,C,d$ (fill them in by hand, or use np.random.randint(-2,3, size=...)) * Set the input $x$ (fill it in by hand, or use np.random.randint(-2,3, size=...)) * Define the relu and sigmoid functions yourself (Hint: np.maximum) * Compute the hidden layer $z$ * Define softmax yourself * Compute the final $q$
# Compute here np.random.seed(1234) # Reference answer: set the weights %run -i solutions/ff_init_variables.py display(A) display(b) display(C) display(d) display(x) # Reference answer: define relu and sigmoid, and compute z %run -i solutions/ff_compute_z.py display(z_relu) display(z_sigmoid) # Reference answer: define softmax and compute q %run -i solutions/ff_compute_q.py display(q_relu) displa...
Week11/DIY_AI/FeedForward-Forward Propagation.ipynb
tjwei/HackNTU_Data_2017
mit
Exercise Design a network where: * the input is the binary encoding of 0 to 15 * the output classifies the number into three classes by its remainder mod 3
# Hint: the following produces the binary vector for the number i i = 13 x = Vector(i%2, (i>>1)%2, (i>>2)%2, (i>>3)%2) x # Compute here # Reference solution %run -i solutions/ff_mod3.py
Week11/DIY_AI/FeedForward-Forward Propagation.ipynb
tjwei/HackNTU_Data_2017
mit
Exercise Design a network that decides whether one player has completed a line in tic-tac-toe (checking a single player is enough): * the input is a 9-dimensional vector, where 0 means an empty square and 1 means an occupied square * the output may be two-dimensional (softmax) or one-dimensional (sigmoid), representing True or False Examples with a line: ``` X X__ XXX XXX XX_ _XX X _XX X ``` Examples without a line: ``` XX_ X__ _XX X XX_ X_X __X XX _X ```
# Compute here # Reference answer %run -i solutions/ff_tic_tac_toe.py # Test your answer def my_result(x): # return 0 means no, 1 means yes return (C@relu(A@x+b)+d).argmax() # or sigmoid based # return (C@relu(A@x+b)+d) > 0 def truth(x): x = x.reshape(3,3) return (x.all(axis=0).any() or x.all(axis=1).any() ...
Week11/DIY_AI/FeedForward-Forward Propagation.ipynb
tjwei/HackNTU_Data_2017
mit
Applications in Quantum Information Phase and Frequency Learning
>>> from qinfer import * >>> model = SimplePrecessionModel() >>> prior = UniformDistribution([0, 1]) >>> n_particles = 2000 >>> n_experiments = 100 >>> updater = SMCUpdater(model, n_particles, prior) >>> heuristic = ExpSparseHeuristic(updater) >>> true_params = prior.sample() >>> for idx_experiment in range(n_experimen...
qinfer-1.0-paper.ipynb
QInfer/qinfer-examples
agpl-3.0
State and Process Tomography
import matplotlib os.path.join(matplotlib.get_configdir(), 'stylelib') >>> from qinfer import * >>> from qinfer.tomography import * >>> basis = pauli_basis(1) # Single-qubit Pauli basis. >>> model = TomographyModel(basis) >>> prior = GinibreReditDistribution(basis) >>> updater = SMCUpdater(model, 8000, prior) >>> heu...
qinfer-1.0-paper.ipynb
QInfer/qinfer-examples
agpl-3.0
Randomized Benchmarking
>>> from qinfer import * >>> import numpy as np >>> p, A, B = 0.95, 0.5, 0.5 >>> ms = np.linspace(1, 800, 201).astype(int) >>> signal = A * p ** ms + B >>> n_shots = 25 >>> counts = np.random.binomial(p=signal, n=n_shots) >>> data = np.column_stack([counts, ms, n_shots * np.ones_like(counts)]) >>> mean, cov = simple_es...
qinfer-1.0-paper.ipynb
QInfer/qinfer-examples
agpl-3.0
Additional Functionality Derived Models
>>> from qinfer import * >>> import numpy as np >>> model = BinomialModel(SimplePrecessionModel()) >>> n_meas = 25 >>> prior = UniformDistribution([0, 1]) >>> updater = SMCUpdater(model, 2000, prior) >>> true_params = prior.sample() >>> for t in np.linspace(0.1,20,20): ... experiment = np.array([(t, n_meas)], dtype...
qinfer-1.0-paper.ipynb
QInfer/qinfer-examples
agpl-3.0
Time-Dependent Models
>>> from qinfer import * >>> import numpy as np >>> prior = UniformDistribution([0, 1]) >>> true_params = np.array([[0.5]]) >>> n_particles = 2000 >>> model = RandomWalkModel( ... BinomialModel(SimplePrecessionModel()), NormalDistribution(0, 0.01**2)) >>> updater = SMCUpdater(model, n_particles, prior) >>> t = np.p...
qinfer-1.0-paper.ipynb
QInfer/qinfer-examples
agpl-3.0
Performance and Robustness Testing
performance = perf_test_multiple( # Use 100 trials to estimate expectation over data. 100, # Use a simple precession model both to generate, # data, and to perform estimation. SimplePrecessionModel(), # Use 2,000 particles and a uniform prior. 2000, UniformDistribution([0, 1]), # Take 5...
qinfer-1.0-paper.ipynb
QInfer/qinfer-examples
agpl-3.0
Parallelization Here, we demonstrate parallelization with ipyparallel and the DirectViewParallelizedModel model. First, create a model which is not designed to be useful, but rather to be expensive to evaluate a single likelihood.
class ExpensiveModel(FiniteOutcomeModel): """ The likelihood of this model randomly generates a dim-by-dim conjugate-symmetric matrix for every expparam and modelparam, exponentiates it, and returns the overlap with the |0> state. """ def __init__(self, dim=36): super(ExpensiveMod...
qinfer-1.0-paper.ipynb
QInfer/qinfer-examples
agpl-3.0
Now, we can use Jupyter's %timeit magic to see how long it takes, for example, to compute the likelihood 5x1000x10=50000 times.
emodel = ExpensiveModel(dim=16) %timeit -q -o -n1 -r1 emodel.likelihood(np.array([0,1,0,0,1]), np.zeros((1000,1)), np.zeros((10,1)))
qinfer-1.0-paper.ipynb
QInfer/qinfer-examples
agpl-3.0
Next, we initialize the Client which communicates with the parallel processing engines. In the accompanying paper, this code was run on a single machine with dual "Intel(R) Xeon(R) CPU X5675 @ 3.07GHz" processors, for a total of 12 physical cores, and therefore, 24 engines were online.
# Do not demand that ipyparallel be installed, or ipengines be running; # instead, fail silently. run_parallel = True try: from ipyparallel import Client import dill rc = Client() # set profile here if desired dview = rc[:] dview.execute('from qinfer import *') dview.execute('from scipy.linalg i...
qinfer-1.0-paper.ipynb
QInfer/qinfer-examples
agpl-3.0
Finally, we run the parallel tests, looping over different numbers of engines used.
if run_parallel: par_n_particles = 5000 par_test_outcomes = np.array([0,1,0,0,1]) par_test_modelparams = np.zeros((par_n_particles, 1)) # only the shape matters par_test_expparams = np.zeros((10, 1)) # only the shape matters def compute_L(model): model.likelihood(par_test_outcomes,...
qinfer-1.0-paper.ipynb
QInfer/qinfer-examples
agpl-3.0
And plot the results.
if run_parallel: fig = plt.figure() plt.plot(np.concatenate([[1], n_engines]), np.concatenate([[serial_time], par_time])/serial_time,'-o') ax = plt.gca() ax.set_xscale('log', basex=2) ax.set_yscale('log', basey=2) plt.xlim([0.8, np.max(n_engines)+2]) plt.ylim([2**-4,1.2]) plt.xlabel('# E...
qinfer-1.0-paper.ipynb
QInfer/qinfer-examples
agpl-3.0
Appendices Custom Models
from qinfer import FiniteOutcomeModel import numpy as np class MultiCosModel(FiniteOutcomeModel): @property def n_modelparams(self): return 2 @property def is_n_outcomes_constant(self): return True def n_outcomes(self, expparams): return 2 def are_models_valid(self, ...
qinfer-1.0-paper.ipynb
QInfer/qinfer-examples
agpl-3.0
12. Cleanup BigQuery artifacts This notebook helps to clean up interim tables generated while executing notebooks from 01 to 09. Import required modules
# Add custom utils module to Python environment. import os import sys sys.path.append(os.path.abspath(os.pardir)) from google.cloud import bigquery from utils import helpers
packages/propensity/12.cleanup.ipynb
google/compass
apache-2.0
Set parameters
# Get GCP configurations. configs = helpers.get_configs('config.yaml') dest_configs = configs.destination # GCP project ID where queries and other computation will be run. PROJECT_ID = dest_configs.project_id # BigQuery dataset name to store query results (if needed). DATASET_NAME = dest_configs.dataset_name
packages/propensity/12.cleanup.ipynb
google/compass
apache-2.0
List all tables in the BigQuery Dataset
# Initialize BigQuery Client. bq_client = bigquery.Client() all_tables = [] for table in bq_client.list_tables(DATASET_NAME): all_tables.append(table.table_id) print(all_tables)
packages/propensity/12.cleanup.ipynb
google/compass
apache-2.0
Remove list of tables Select table names from the printed out list in above cell.
# Define specific tables to remove from the dataset. tables_to_delete = ['table1', 'table2'] # Or uncomment below to remove all tables in the dataset. # tables_to_delete = all_tables # Remove tables from BigQuery dataset. for table_id in tables_to_delete: bq_client.delete_table(f'{PROJECT_ID}.{DATASET_NAME}.{table_...
packages/propensity/12.cleanup.ipynb
google/compass
apache-2.0
Normally, I would just write %run Stack.ipynb here. As this does not work in Deepnote, I have included the implementation of the class Stack here.
class Stack: def __init__(self): self.mStackElements = [] def push(self, e): self.mStackElements.append(e) def pop(self): assert len(self.mStackElements) > 0, "popping empty stack" self.mStackElements = self.mStackElements[:-1] def top(self): assert len(self.mS...
Python/Chapter-06/Calculator-Frame.ipynb
karlstroetmann/Algorithms
gpl-2.0
The function $\texttt{toFloat}(s)$ tries to convert the string $s$ to a floating point number. If this works out, this number is returned. Otherwise, the string $s$ is returned unchanged.
def toFloat(s): try: return float(s) except ValueError: return s toFloat('0.123') toFloat('+')
Python/Chapter-06/Calculator-Frame.ipynb
karlstroetmann/Algorithms
gpl-2.0
The module re provides support for <a href='https://en.wikipedia.org/wiki/Regular_expression'>regular expressions</a>. These are needed for <em style="color:blue;">tokenizing</em> a string. The function $\texttt{tokenize}(s)$ takes a string and splits this string into a list of tokens. Whitespace is discarded.
def tokenize(s): regExp = r''' (?:0|[1-9][0-9]*)[.][0-9]+ | # floating point number (must precede integer) 0|[1-9][0-9]* | # integer \*\* | # power operator [-+*/()] | # arithmetic operators and parentheses [ \t]...
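Since the regular expression above is truncated, here is a self-contained sketch of the same tokenizing approach. Note the float alternative must come before the integer alternative, otherwise '2.5' tokenizes as '2' with '.5' left over; the name token class at the end (for sin, cos, x, ...) is an assumption:

```python
import re

def tokenize_sketch(s):
    # re.VERBOSE lets the pattern span lines and carry comments; findall
    # skips the whitespace between tokens automatically.
    regExp = r'''
        (?:0|[1-9][0-9]*)[.][0-9]+ |  # floating point number
        0|[1-9][0-9]*              |  # integer
        \*\*                       |  # power operator
        [-+*/()]                   |  # arithmetic operators and parentheses
        [a-zA-Z]+                     # names such as sin, cos, x (assumed)
    '''
    return re.findall(regExp, s, flags=re.VERBOSE)

tokens = tokenize_sketch('1 + 2.5 * x')
```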
Python/Chapter-06/Calculator-Frame.ipynb
karlstroetmann/Algorithms
gpl-2.0
The function $\texttt{findZero}(f, a, b, n)$ takes a function $f$ and two numbers $a$ and $b$ such that $a < b$, $f(a) \leq 0$, and $0 \leq f(b)$. It uses the bisection method to find a number $x \in [a, b]$ such that $f(x) \approx 0$.
def findZero(f, a, b, n): assert a < b , f'{a} has to be less than {b}' assert f(a) * f(b) <= 0, f'f({a}) * f({b}) > 0' if f(a) <= 0 <= f(b): for k in range(n): c = 0.5 * (a + b) # print(f'f({c}) = {f(c)}, {b-a}') if f(c) < 0: a = c e...
Python/Chapter-06/Calculator-Frame.ipynb
karlstroetmann/Algorithms
gpl-2.0
The function $\texttt{precedence}(o)$ calculates the precedence of the operator $o$.
def precedence(op): "your code here" assert False, f'unknown operator in precedence: {op}'
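One way to complete this stub is a lookup table; the operator set (+, -, *, /, **) is assumed from the tokenizer earlier in this notebook, and the exact numeric levels are a sketch, not the official solution:

```python
def precedence(op):
    # Higher number = binds more tightly; + and - share a level, as do
    # * and /, while ** binds most tightly.
    table = {'+': 1, '-': 1, '*': 2, '/': 2, '**': 3}
    if op in table:
        return table[op]
    assert False, f'unknown operator in precedence: {op}'
```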
Python/Chapter-06/Calculator-Frame.ipynb
karlstroetmann/Algorithms
gpl-2.0
The function $\texttt{isConstOperator}(o)$ returns True if $o$ is a constant like e or pi. The variable x is also considered a constant operator.
def isConstOperator(op): "your code here"
Python/Chapter-06/Calculator-Frame.ipynb
karlstroetmann/Algorithms
gpl-2.0
The function $\texttt{isLeftAssociative}(o)$ returns True if $o$ is left associative.
def isLeftAssociative(op): "your code here" assert False, f'unkown operator in isLeftAssociative: {op}'
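A possible completion, assuming the conventional associativity of the arithmetic operators:

```python
def isLeftAssociative(op):
    # The four binary arithmetic operators are left associative; the power
    # operator ** associates to the right (2 ** 3 ** 2 == 2 ** 9 == 512).
    if op in ('+', '-', '*', '/'):
        return True
    if op == '**':
        return False
    assert False, f'unknown operator in isLeftAssociative: {op}'
```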
Python/Chapter-06/Calculator-Frame.ipynb
karlstroetmann/Algorithms
gpl-2.0
The function $\texttt{evalBefore}(o_1, o_2)$ receives two strings representing arithmetical operators. It returns True if the operator $o_1$ should be evaluated before the operator $o_2$ in an arithmetical expression of the form $a \;\texttt{o}_1\; b \;\texttt{o}_2\; c$. In order to determine whether $o_1$ should be e...
def evalBefore(stackOp, nextOp): "your code here" assert False, f'incomplete case distinction in evalBefore({stackOp}, {nextOp})'
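A sketch of the usual shunting-yard decision rule: evaluate the operator already on the stack first if it binds more tightly, or equally tightly and is left associative. The precedence table is restated inline so the sketch is self-contained; it mirrors the assumptions made for precedence above:

```python
def evalBefore(stackOp, nextOp):
    # Compare precedences first; on a tie, left-associative operators are
    # evaluated left-to-right, so the stacked one goes first. ** is the
    # only right-associative operator in this operator set (an assumption).
    table = {'+': 1, '-': 1, '*': 2, '/': 2, '**': 3}
    if table[stackOp] != table[nextOp]:
        return table[stackOp] > table[nextOp]
    return stackOp != '**'
```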
Python/Chapter-06/Calculator-Frame.ipynb
karlstroetmann/Algorithms
gpl-2.0
The class Calculator maintains four member variables: - the token stack mTokens, - the operator stack mOperators, - the argument stack mArguments, - the floating point number mValue, which is the current value of x. The constructor takes a list of tokens TL and initializes the token stack with these tokens.
class Calculator: def __init__(self, TL, x): self.mTokens = createStack(TL) self.mOperators = Stack() self.mArguments = Stack() self.mValue = x
Python/Chapter-06/Calculator-Frame.ipynb
karlstroetmann/Algorithms
gpl-2.0
The method __str__ is used to convert an object of class Calculator to a string.
def toString(self): return '\n'.join(['_'*50, 'TokenStack: ' + str(self.mTokens), 'Arguments: ' + str(self.mArguments), 'Operators: ' + str(self.mOperators), '_'*50]) Calculator.__str__ = toString del toString
Python/Chapter-06/Calculator-Frame.ipynb
karlstroetmann/Algorithms
gpl-2.0
The function $\texttt{evaluate}(\texttt{self})$ evaluates the expression that is given by the tokens on the mTokenStack. There are two phases: 1. The first phase is the <em style="color:blue">reading phase</em>. In this phase the tokens are removed from the token stack mTokens. 2. The second phase is the <em style="...
def evaluate(self): "your code here" return self.mArguments.top() Calculator.evaluate = evaluate del evaluate
Python/Chapter-06/Calculator-Frame.ipynb
karlstroetmann/Algorithms
gpl-2.0
The method $\texttt{popAndEvaluate}(\texttt{self})$ removes an operator from the operator stack and removes the corresponding arguments from the arguments stack. It evaluates the operator and pushes the result on the argument stack.
def popAndEvaluate(self): "your code here" Calculator.popAndEvaluate = popAndEvaluate del popAndEvaluate
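A sketch covering only the binary operators (the unary functions like sin and cos would need extra cases). MiniStack is a hypothetical stand-in for the Stack class shown earlier in this notebook, where pop discards the top element without returning it:

```python
class MiniStack:
    # Stand-in for the notebook's Stack: top() reads, pop() only discards.
    def __init__(self):
        self.items = []
    def push(self, e):
        self.items.append(e)
    def pop(self):
        assert self.items, 'popping empty stack'
        self.items = self.items[:-1]
    def top(self):
        assert self.items, 'top of empty stack'
        return self.items[-1]

BINOPS = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
          '*': lambda a, b: a * b, '/': lambda a, b: a / b,
          '**': lambda a, b: a ** b}

def pop_and_evaluate(operators, arguments):
    # Pop the operator, then the right argument, then the left argument
    # (the left one was pushed first), apply, and push the result back.
    op = operators.top(); operators.pop()
    rhs = arguments.top(); arguments.pop()
    lhs = arguments.top(); arguments.pop()
    arguments.push(BINOPS[op](lhs, rhs))

ops, args = MiniStack(), MiniStack()
args.push(7); args.push(3); ops.push('-')
pop_and_evaluate(ops, args)
```

Popping the right-hand argument first matters for the non-commutative operators: here 7 - 3 must yield 4, not -4.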
Python/Chapter-06/Calculator-Frame.ipynb
karlstroetmann/Algorithms
gpl-2.0
The function testEvaluateExpr takes three arguments: - s is a string that can be interpreted as an arithmetic expression. This string might contain the variable $x$. In this arithmetic expression, unary function symbols need not be followed by parentheses. - t is a string that contains an arithmetic expression....
def testEvaluateExpr(s, t, x):
    TL = tokenize(s)
    C  = Calculator(TL, x)
    r1 = C.evaluate()
    r2 = eval(t, { 'math': math }, { 'x': x })
    assert r1 == r2, f'{r1} != {r2}'

testEvaluateExpr('sin cos x', 'math.sin(math.cos(x))', 0)
testEvaluateExpr('sin x**2', 'math.sin(math.pi)**2', math.pi)
testEvaluateE...
Python/Chapter-06/Calculator-Frame.ipynb
karlstroetmann/Algorithms
gpl-2.0
The function computeZero takes three arguments:
* s is a string that can be interpreted as a function $f$ of the variable x. For example, s could be equal to 'x * x - 2.0'.
* left and right are floating point numbers. It is required that the function $f$ changes sign in the interval $[\texttt{left}, \texttt{right}]$...
def computeZero(s, left, right):
    TL = tokenize(s)
    def f(x):
        c = Calculator(TL, x)
        return c.evaluate()
    return findZero(f, left, right, 54)
Python/Chapter-06/Calculator-Frame.ipynb
karlstroetmann/Algorithms
gpl-2.0
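The call findZero(f, left, right, 54) suggests an iterative interval method. Assuming simple bisection, a sketch could look like this (the name find_zero and its details are assumptions, not the notebook's actual findZero):

```python
import math

# Hedged sketch of what findZero might do, assuming bisection: f must
# change sign on [left, right]; each of the n iterations halves the
# interval, so n = 54 pushes the width below double precision.
def find_zero(f, left, right, n):
    assert f(left) * f(right) <= 0, 'f must change sign on [left, right]'
    for _ in range(n):
        middle = 0.5 * (left + right)
        if f(left) * f(middle) <= 0:
            right = middle
        else:
            left = middle
    return 0.5 * (left + right)

print(find_zero(lambda x: x * x - 2.0, 0.0, 2.0, 54))  # close to math.sqrt(2)
```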
The cell below should output the number 0.7390851332151607.
computeZero('log exp x - cos(sqrt(x**2))', 0, 1)
Python/Chapter-06/Calculator-Frame.ipynb
karlstroetmann/Algorithms
gpl-2.0
Find the maximum derivative of dispersion_difference_function over a range. This can be used as a bound when choosing $\epsilon$ to guarantee that no solutions are missed.
def find_max_der(expression, symbol, input_range):
    expr_der = sp.diff(expression, symbol)
    expr_der_func = ufuncify([symbol], expr_der)
    return max(abs(expr_der_func(input_range)))

## Apply the triangle inequality over a range of nus
nus = np.asarray([6. + i*5e-2 for i in range(1+int(1e3))])
max_derivative = sum...
Dispersion_relation_chi_2_voxels_approach.ipynb
tabakg/potapov_interpolation
gpl-3.0
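The idea behind the bound is the mean value theorem: |f(a) - f(b)| <= max|f'| * |a - b|. A sketch with sin standing in for the dispersion difference function (an assumption for illustration only):

```python
import numpy as np

# Sketch of the bound |f(a) - f(b)| <= max|f'| * |a - b|, with sin as
# a stand-in for the dispersion difference function: its derivative
# cos is bounded by 1, so neighbouring samples can differ by at most
# the grid spacing.
xs = np.linspace(6.0, 20.0, 1001)
f = np.sin(xs)
spacing = xs[1] - xs[0]
max_der = 1.0                          # sup of |cos| on the interval
bound = max_der * spacing
actual = np.max(np.abs(np.diff(f)))
print(actual <= bound)  # True
```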
Methods for systematic search over ranges

Definitions:
- base -- the number base to use, i.e. the factor by which the grid resolution increases at each step.
- starting_i -- index of the starting step. 0 means we use a grid of size base by base.
- max_i -- final index.
- eps -- desired resolution at step max_i.

Description

To look for sol...
eps = 0.00002
starting_i = 0
max_i = 4
base = 10
min_value = 6.
max_value = 20.

@timeit
def setup_ranges(max_i, base):
    ranges = {}
    for i in range(max_i+1):
        ranges[i] = np.linspace(min_value, max_value, 1+pow(base, i+1))
    return ranges
Dispersion_relation_chi_2_voxels_approach.ipynb
tabakg/potapov_interpolation
gpl-3.0
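The coarse-to-fine search can be sketched in one dimension (a hypothetical, simplified helper, not the notebook's initial_voxels/add_high_res_voxels): keep the grid cells whose midpoint satisfies |f| <= tol, where tol shrinks by base at each refinement so only cells near a zero of f survive.

```python
import numpy as np

# 1-D sketch of the coarse-to-fine voxel search (hypothetical helper,
# not the notebook's code): at step i keep the grid cells whose
# midpoint satisfies |f| <= tol, where tol shrinks by `base` at each
# refinement so only cells near a zero of f survive.
def solution_cells(f, lo, hi, base, steps, eps):
    cells = [(lo, hi)]
    for i in range(steps):
        tol = eps * base ** (steps - 1 - i)
        refined = []
        for a, b in cells:
            w = (b - a) / base
            for x in np.linspace(a, b, base + 1)[:-1]:
                if abs(f(x + w / 2)) <= tol:
                    refined.append((x, x + w))
        cells = refined
    return cells

cells = solution_cells(lambda x: x - 0.5, 0.0, 1.0, 10, 3, 1e-3)
print(cells)  # two cells of width 1e-3 bracketing the zero at 0.5
```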
Note: How to obtain the index of $\nu_3$.
i = 2
1 + pow(base, i+1)
np.linspace(min_value, max_value, 1+pow(base, i+1))
spacing = (max_value - min_value) / pow(base, i+1)
spacing
num_indices_from_zero = min_value / spacing
num_indices_from_zero
ranges[i]
sample_index = solution_containing_voxels[2].keys()[1000]
sample_index
ranges[2][(sum(sample_index)+int(num_in...
Dispersion_relation_chi_2_voxels_approach.ipynb
tabakg/potapov_interpolation
gpl-3.0
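The same arithmetic in the other direction, recovering a nu value from an index (1-D sketch with the grid parameters above):

```python
# 1-D sketch of mapping a voxel index back to a nu value using the
# grid parameters above: index k corresponds to min_value + k * spacing.
base, i = 10, 2
min_value, max_value = 6.0, 20.0
spacing = (max_value - min_value) / pow(base, i + 1)
k = 500
nu = min_value + k * spacing
print(nu)  # 13.0
```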
Main methods used
@timeit
def initial_voxels(max_i, base, starting_i, eps):
    solution_containing_voxels = {}
    eps_current = eps * pow(base, max_i-starting_i)
    solution_containing_voxels[starting_i] = {}
    for i1, om1 in enumerate(ranges[starting_i]):
        for i2, om2 in enumerate(ranges[starting_i]):
            err = k_of_...
Dispersion_relation_chi_2_voxels_approach.ipynb
tabakg/potapov_interpolation
gpl-3.0
Different bases comparison Base 10
eps = 0.006
starting_i = 0
max_i = 2
base = 10  ## maximum grid length: 1+pow(base,max_i+1)

ranges = setup_ranges(max_i, base)
solution_containing_voxels = initial_voxels(max_i, base, starting_i, eps)
add_high_res_voxels(max_i, base, starting_i, eps, solution_containing_voxels)

## Number of solutions found for each resolutio...
Dispersion_relation_chi_2_voxels_approach.ipynb
tabakg/potapov_interpolation
gpl-3.0
Base 2
eps = 0.006
starting_i = 0
max_i = 9
base = 2  ## maximum grid length: 1+pow(base,max_i+1)

ranges = setup_ranges(max_i, base)
solution_containing_voxels = initial_voxels(max_i, base, starting_i, eps)
add_high_res_voxels(max_i, base, starting_i, eps, solution_containing_voxels)

## Number of solutions found for each resolution...
Dispersion_relation_chi_2_voxels_approach.ipynb
tabakg/potapov_interpolation
gpl-3.0
Discussion

The number of solution voxels increases by a factor of base at each step. This happens because the function being optimized is close to linear near the solutions and because we decrease eps_current by a factor of base at each step. As a result, the total number of voxels increases by a factor of base**2 at ea...
eps = 2e-4
starting_i = 0
max_i = 1
base = 10
relative_scalings = [4, 4, 10]

phi1_min = 30.
phi1_max = 34.
ranges1 = {}
for i in range(0, 2):
    ranges1[i] = np.linspace(phi1_min, phi1_max, relative_scalings[0]*pow(base, i+1)+1)

phi2_min = -13
phi2_max = -9
ranges2 = {}
for i in range(0, 2):
    ranges2[i] = np.linspace(p...
Dispersion_relation_chi_2_voxels_approach.ipynb
tabakg/potapov_interpolation
gpl-3.0
Finding solutions in the voxels

In general the nus to be considered do not lie on a grid. For this reason it is necessary to find which nus lie in each voxel.

Setup for the experiment
eps = 0.006
starting_i = 0
max_i = 2
base = 10

ranges = setup_ranges(max_i, base)
solution_containing_voxels = initial_voxels(max_i, base, starting_i, eps)
add_high_res_voxels(max_i, base, starting_i, eps, solution_containing_voxels)

i = 2        ## where to draw points from.
scale = 0.1  ## scale on random noise to add to points...
Dispersion_relation_chi_2_voxels_approach.ipynb
tabakg/potapov_interpolation
gpl-3.0
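Assigning off-grid nus to voxels amounts to rounding to the nearest grid index. A 1-D sketch of the arithmetic (hypothetical values, not the notebook's data):

```python
import numpy as np

# Sketch of assigning off-grid nu values to voxel indices by rounding
# to the nearest grid point (hypothetical values, 1-D for clarity).
min_value, max_value, n_points = 6.0, 20.0, 1401
spacing = (max_value - min_value) / (n_points - 1)   # 0.01
nus = np.array([6.3, 7.05, 19.99])
indices = np.round((nus - min_value) / spacing).astype(int)
print(indices)
```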
Development for method to be used in the package
def setup_ranges(max_i, base, min_value=6., max_value=11.):
    ranges = {}
    for i in range(max_i+1):
        ranges[i] = np.linspace(min_value, max_value, 1+pow(base, i+1))
    return ranges

@timeit
def initial_voxels(ranges, k_of_nu1_nu2, max_i, base, starting_i, eps):
    solution_containing_voxels = {}
    eps_current ...
Dispersion_relation_chi_2_voxels_approach.ipynb
tabakg/potapov_interpolation
gpl-3.0
Let's imagine we measure 2 quantities, $x_1$ and $x_2$ for some objects, and we know the classes that these objects belong to, e.g., "star", 0, or "galaxy", 1 (maybe we classified these objects by hand, or knew through some other means). We now observe ($x_1$, $x_2$) for some new object and want to know whether it belo...
a = np.random.multivariate_normal([1., 0.5], [[4., 0.], [0., 0.25]], size=512)
b = np.random.multivariate_normal([10., 8.], [[1., 0.], [0., 25]], size=1024)

X = np.vstack((a, b))
...
day1/notebooks/demo-KNN.ipynb
AstroHackWeek/AstroHackWeek2017
mit
We now observe a new point, and would like to know which class it belongs to:
np.random.seed(42)
new_pt = np.random.uniform(-10, 20, size=2)

plt.figure(figsize=(6,6))
plt.scatter(X[:,0], X[:,1], c=y, cmap='RdBu', marker='.', alpha=0.5, linewidth=0)
plt.scatter(new_pt[0], new_pt[1], marker='+', color='g', s=100, linewidth=3)
plt.xlim(-10, 20)
plt.ylim(-10, 20)
plt.xlabel('$x_1$')
plt.ylabel('...
day1/notebooks/demo-KNN.ipynb
AstroHackWeek/AstroHackWeek2017
mit
KNN works by predicting the class of a new point based on the classes of the K training data points closest to the new point. The two things that can be customized about this method are K, the number of points to use, and the distance metric used to compute the distances between the new point and the training data. If ...
K = 16

def distance(pts1, pts2):
    pts1 = np.atleast_2d(pts1)
    pts2 = np.atleast_2d(pts2)
    return np.sqrt((pts1[:,0]-pts2[:,0])**2 + (pts1[:,1]-pts2[:,1])**2)

# compute the distance between all training data points and the new point
dists = distance(X, new_pt)

# get the classes (from the training data) of t...
day1/notebooks/demo-KNN.ipynb
AstroHackWeek/AstroHackWeek2017
mit
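The remaining classification step is a majority vote over the labels of the K nearest points; a sketch using np.bincount (the neighbor labels here are made up for illustration):

```python
import numpy as np

# Majority vote over the K nearest neighbours: count label occurrences
# and take the most frequent class. The labels here are made up.
labels_of_nearest = np.array([1, 1, 0, 1])
counts = np.bincount(labels_of_nearest)   # [1, 3]: one 0, three 1s
predicted = int(np.argmax(counts))
print(predicted)  # 1
```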
All of the closest points are from class 1, so we would classify the new point as class=1. If there is a mixture of possible classes, take the class with the most neighbors. If it's a tie, choose a class at random. That's it! Let's see how to use the KNN classifier in scikit-learn:
from sklearn.neighbors import KNeighborsClassifier

clf = KNeighborsClassifier(n_neighbors=16)
clf.fit(X, y)
clf.predict(new_pt.reshape(1, -1))  # input has to be 2D
day1/notebooks/demo-KNN.ipynb
AstroHackWeek/AstroHackWeek2017
mit
Let's visualize the decision boundary of this classifier by evaluating the predicted class for a grid of trial data:
grid_1d = np.linspace(-10, 20, 256)
grid_x1, grid_x2 = np.meshgrid(grid_1d, grid_1d)
grid = np.stack((grid_x1.ravel(), grid_x2.ravel()), axis=1)

y_grid = clf.predict(grid)

plt.figure(figsize=(6,6))
plt.pcolormesh(grid_x1, grid_x2, y_grid.reshape(grid_x1.shape), cmap='Set3', alpha=1.)
plt.scatter(X[:...
day1/notebooks/demo-KNN.ipynb
AstroHackWeek/AstroHackWeek2017
mit
KNN is very simple, but is very fast and is therefore useful in problems with large or wide datasets. Let's now look at a more complicated example where the training data classes overlap significantly:
a = np.random.multivariate_normal([6., 0.5], [[8., 0.], [0., 0.25]], size=512)
b = np.random.multivariate_normal([10., 4.], [[2., 0.], [0., 8]], size=1024)

X2 = np.vstack((a, b))
...
day1/notebooks/demo-KNN.ipynb
AstroHackWeek/AstroHackWeek2017
mit
What does the decision boundary look like in this case, as a function of the number of neighbors, K:
for K in [4, 16, 64, 256]:
    clf2 = KNeighborsClassifier(n_neighbors=K)
    clf2.fit(X2, y2)

    y_grid2 = clf2.predict(grid)

    plt.figure(figsize=(6,6))
    plt.pcolormesh(grid_x1, grid_x2, y_grid2.reshape(grid_x1.shape), cmap='Set3', alpha=1.)
    plt.scatter(X2[:,0], X2[:,1], marker='...
day1/notebooks/demo-KNN.ipynb
AstroHackWeek/AstroHackWeek2017
mit
3. Enter DV360 Monthly Budget Mover Recipe Parameters

- No changes can be made in DV360 from the start to the end of this process.
- Make sure there is budget information for the current and previous month's IOs in DV360.
- Make sure the provided spend report has spend data for every IO in the previous month.
- Spend report...
FIELDS = {
  'recipe_timezone':'America/Los_Angeles',  # Timezone for report dates.
  'recipe_name':'',  # Table to write to.
  'auth_write':'service',  # Credentials used for writing data.
  'auth_read':'user',  # Credentials used for reading data.
  'partner_id':'',  # The sdf file types.
  'budget_categories':'{}',
  ...
colabs/monthly_budget_mover.ipynb
google/starthinker
apache-2.0
4. Execute DV360 Monthly Budget Mover

This does NOT need to be modified unless you are changing the recipe. Just click play.
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields

TASKS = [
  { 'dataset':{
      'description':'Create a dataset where data will be combined and transformed for upload.',
      'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'ser...
colabs/monthly_budget_mover.ipynb
google/starthinker
apache-2.0
Specify the cell size along the rows (delr) and along the columns (delc) and the top and bottom of the aquifer for the DIS package.
#--dis data
delr, delc = 50.0, 50.0
botm = np.array([-10., -30., -50.])
original_libraries/flopy-master/examples/Notebooks/swiex4.ipynb
mjasher/gac
gpl-2.0
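A quick sanity check of the geometry these values imply, assuming the aquifer top sits at 0.0 (an assumption for this sketch, not stated in the cell above): the three layer thicknesses follow from differencing the elevations.

```python
import numpy as np

# Layer thicknesses implied by botm, assuming the aquifer top is at
# 0.0 (an assumption for this sketch).
top = 0.0
botm = np.array([-10., -30., -50.])
thickness = -np.diff(np.concatenate(([top], botm)))
print(thickness)  # [10. 20. 20.]
```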
Define the IBOUND array and starting heads for the BAS package. The corners of the model are defined to be inactive.
#--bas data
#--ibound - active except for the corners
ibound = np.ones((nlay, nrow, ncol), dtype=int)
ibound[:, 0, 0] = 0
ibound[:, 0, -1] = 0
ibound[:, -1, 0] = 0
ibound[:, -1, -1] = 0
#--initial head data
ihead = np.zeros((nlay, nrow, ncol), dtype=float)
original_libraries/flopy-master/examples/Notebooks/swiex4.ipynb
mjasher/gac
gpl-2.0
Define the layers to be confined and define the horizontal and vertical hydraulic conductivity of the aquifer for the LPF package.
#--lpf data
laytyp = 0
hk = 10.
vka = 0.2
original_libraries/flopy-master/examples/Notebooks/swiex4.ipynb
mjasher/gac
gpl-2.0
Define the boundary condition data for the model
#--boundary condition data
#--ghb data
colcell, rowcell = np.meshgrid(np.arange(0, ncol), np.arange(0, nrow))
index = np.zeros((nrow, ncol), dtype=int)
index[:, :10] = 1
index[:, -10:] = 1
index[:10, :] = 1
index[-10:, :] = 1
nghb = np.sum(index)
lrchc = np.zeros((nghb, 5))
lrchc[:, 0] = 0
lrchc[:, 1] = rowcell[inde...
original_libraries/flopy-master/examples/Notebooks/swiex4.ipynb
mjasher/gac
gpl-2.0
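The frame-shaped mask above flags a 10-cell-wide band along every edge. A standalone sketch with a hypothetical 50 by 50 grid (the notebook's actual nrow/ncol are defined elsewhere) shows how many GHB cells such a mask produces:

```python
import numpy as np

# Standalone sketch of the GHB mask: flag a 10-cell-wide frame around
# the edges of a hypothetical 50 x 50 grid and count the flagged cells.
nrow, ncol = 50, 50
index = np.zeros((nrow, ncol), dtype=int)
index[:, :10] = 1
index[:, -10:] = 1
index[:10, :] = 1
index[-10:, :] = 1
nghb = int(np.sum(index))
print(nghb)  # 2500 cells total minus the 30 x 30 interior = 1600
```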