What makes composite plates special is the fact that they are typically not isotropic. This is handled by the 6x6 ABD matrix, which defines the composite's properties axially, in bending, and the coupling between the two.
# composite properties
A11,A12,A16,A22,A26,A66 = symbols('A11,A12,A16,A22,A26,A66')
B11,B12,B16,B22,B26,B66 = symbols('B11,B12,B16,B22,B26,B66')
D11,D12,D16,D22,D26,D66 = symbols('D11,D12,D16,D22,D26,D66')
## constants of integration when solving differential equation
C1,C2,C3,C4,C5,C6 = symbo...
tutorials/Composite_Plate_Mechanics_with_Python_Theory.ipynb
nagordon/mechpy
mit
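As a hedged sketch (assuming SymPy is available; the helper `sym3` is my own name, not from the notebook), the symmetric A, B, and D blocks above can be assembled into the full 6x6 ABD matrix like so:

```python
from sympy import symbols, Matrix

# Symbolic entries of the three symmetric 3x3 blocks
A11, A12, A16, A22, A26, A66 = symbols('A11 A12 A16 A22 A26 A66')
B11, B12, B16, B22, B26, B66 = symbols('B11 B12 B16 B22 B26 B66')
D11, D12, D16, D22, D26, D66 = symbols('D11 D12 D16 D22 D26 D66')

def sym3(m11, m12, m16, m22, m26, m66):
    """Build a symmetric 3x3 block from its upper-triangle entries."""
    return Matrix([[m11, m12, m16],
                   [m12, m22, m26],
                   [m16, m26, m66]])

A = sym3(A11, A12, A16, A22, A26, A66)  # extensional stiffness
B = sym3(B11, B12, B16, B22, B26, B66)  # bending-extension coupling
D = sym3(D11, D12, D16, D22, D26, D66)  # bending stiffness

# Full ABD matrix: [[A, B], [B, D]], symmetric by construction
ABD = Matrix([[A, B], [B, D]])
print(ABD.shape)  # (6, 6)
```

Because each block is symmetric, the assembled 6x6 matrix equals its own transpose, which is a quick sanity check on the layup math.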
Let's compute our 6 displacement conditions, which is where our PDEs show up.
Nxf = A11*diff(u0,x) + A12*diff(v0,y) + A16*(diff(u0,y) + diff(v0,x)) - B11*diff(w0,x,2) - B12*diff(w0,y,2) - 2*B16*diff(w0,x,y)
Eq(Nx, Nxf)
Nyf = A12*diff(u0,x) + A22*diff(v0,y) + A26*(diff(u0,y) + diff(v0,x)) - B12*diff(w0,x,2) - B22*diff(w0,y,2) - 2*B26*diff(w0,x,y)
Eq(Ny,Nyf)
Nxyf = A16*diff(u0,x) + A26*diff(v0,y...
tutorials/Composite_Plate_Mechanics_with_Python_Theory.ipynb
nagordon/mechpy
mit
Now, combine our 6 displacement conditions with our 3 equilibrium equations to get three governing equations.
eq1 = diff(Nxf,x) + diff(Nxyf,y)
eq1
eq2 = diff(Nxyf,x) + diff(Nyf,y)
eq2
eq3 = diff(Mxf,x,2) + 2*diff(Mxyf,x,y) + diff(Myf,y,2) + q
eq3
tutorials/Composite_Plate_Mechanics_with_Python_Theory.ipynb
nagordon/mechpy
mit
Yikes, I do not want to solve that (at least right now). If we make the assumption that the displacements vary only in the x direction, then we can simplify things A LOT! These simplifications are valid for a cross-ply unsymmetric laminated plate, Hyer pg 616. This is applied by setting some of our material ...
u0 = Function('u0')(x)
v0 = Function('v0')(x)
w0 = Function('w0')(x)
Nxf = A11*diff(u0,x) + A12*diff(v0,y) - B11*diff(w0,x,2)
Eq(Nx, Nxf)
Nyf = A12*diff(u0,x) + A22*diff(v0,y) - B22*diff(w0,y,2)
Eq(Ny,Nyf)
Nxyf = A66*(diff(u0,y) + diff(v0,x))
Eq(Nxy,Nxyf)
Mxf = B11*diff(u0,x) - D11*diff(w0,x,2) - D12*diff(w0,y,2)
E...
tutorials/Composite_Plate_Mechanics_with_Python_Theory.ipynb
nagordon/mechpy
mit
Now we are getting somewhere. Finally, we can solve the differential equations.
dsolve(diff(Nx(x)))
dsolve(diff(Mx(x),x,2)+q)
tutorials/Composite_Plate_Mechanics_with_Python_Theory.ipynb
nagordon/mechpy
mit
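A minimal, self-contained sketch (assuming SymPy; the generic functions `N(x)` and `M(x)` stand in for the notebook's resultants) of what those two `dsolve` calls produce for a constant load `q`:

```python
from sympy import symbols, Function, dsolve, diff, Eq

x, q = symbols('x q')
N = Function('N')(x)
M = Function('M')(x)

# dN/dx = 0  ->  N(x) is a constant of integration
sol_N = dsolve(Eq(diff(N, x), 0))
print(sol_N)

# d2M/dx2 + q = 0  ->  M(x) is quadratic in x with two constants
sol_M = dsolve(Eq(diff(M, x, 2) + q, 0))
print(sol_M)
```

Differentiating the solutions back recovers the original equations, which is the check that the constants of integration (C1, C2, ...) were introduced correctly.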
Now solve for u0 and w0 with some pixie dust
eq4 = (Nxf-C1)
eq4
eq5 = Mxf - ( -q*x**2 + C2*x + C3 )
eq5
eq6 = Eq(solve(eq4,diff(u0,x))[0] , solve(eq5, diff(u0,x))[0])
eq6
w0f = dsolve(eq6, w0)
w0f
eq7 = Eq(solve(eq6, diff(w0,x,2))[0] , solve(eq4,diff(w0,x,2))[0])
eq7
u0f = dsolve(eq7)
u0f
tutorials/Composite_Plate_Mechanics_with_Python_Theory.ipynb
nagordon/mechpy
mit
Step 0 - hyperparams. vocab_size is all the potential words you could have (for the classification/translation case), and here it and the max sequence length are the SAME thing. Decoder RNN hidden units are usually the same size as the encoder RNN hidden units in translation, but in our case there does not really seem to be such a relationship, but we ...
num_units = 400 # state size
input_len = 60
target_len = 30
batch_size = 64
with_EOS = False

total_size = 57994
train_size = 46400
test_size = 11584
04_time_series_prediction/24_price_history_seq2seq-full_dataset_testing.ipynb
pligor/predicting-future-product-prices
agpl-3.0
Generate the data once
data_folder = '../../../../Dropbox/data'
ph_data_path = '../data/price_history'
npz_full = ph_data_path + '/price_history_dp_60to30_57994.npz'
npz_train = ph_data_path + '/price_history_dp_60to30_57994_46400_train.npz'
npz_test = ph_data_path + '/price_history_dp_60to30_57994_11584_test.npz'
04_time_series_prediction/24_price_history_seq2seq-full_dataset_testing.ipynb
pligor/predicting-future-product-prices
agpl-3.0
Step 1 - collect data
# dp = PriceHistorySeq2SeqDataProvider(npz_path=npz_train, batch_size=batch_size, with_EOS=with_EOS)
# dp.inputs.shape, dp.targets.shape
# aa, bb = dp.next()
# aa.shape, bb.shape
04_time_series_prediction/24_price_history_seq2seq-full_dataset_testing.ipynb
pligor/predicting-future-product-prices
agpl-3.0
Step 2 - Build model
model = PriceHistorySeq2SeqDynDecIns(rng=random_state, dtype=dtype, config=config, with_EOS=with_EOS)
# graph = model.getGraph(batch_size=batch_size,
#                        num_units=num_units,
#                        input_len=input_len,
#                        target_len=target_len)
#show_graph(graph)
04_time_series_prediction/24_price_history_seq2seq-full_dataset_testing.ipynb
pligor/predicting-future-product-prices
agpl-3.0
Step 3 - train the network
best_params = [500, tf.nn.tanh, 0.0001, 0.62488034788862112, 0.001]
num_units, activation, lamda2, keep_prob_input, learning_rate = best_params
batch_size

def experiment():
    return model.run(npz_path=npz_train,
                     npz_test=npz_test,
                     epochs=100,
                     batch_size=batch_size,
                     ...
04_time_series_prediction/24_price_history_seq2seq-full_dataset_testing.ipynb
pligor/predicting-future-product-prices
agpl-3.0
One epoch takes approximately 268 secs. If we want to let it run for ~8 hours: 8 * 3600 / 268 ≈ 107 epochs. So let it run for 100 epochs and see how it behaves.
dyn_stats.plotStats()
plt.show()

data_len = len(targets)

mses = np.empty(data_len)
for ii, (pred, target) in enumerate(zip(preds_dict.values(), targets.values())):
    mses[ii] = mean_squared_error(pred, target)
np.mean(mses)

huber_losses = np.empty(data_len)
for ii, (pred, target) in enumerate(zip(preds_dict.value...
04_time_series_prediction/24_price_history_seq2seq-full_dataset_testing.ipynb
pligor/predicting-future-product-prices
agpl-3.0
Load review dataset For this assignment, we will use a subset of the Amazon product review dataset. The subset was chosen to contain similar numbers of positive and negative reviews, as the original dataset consisted primarily of positive reviews.
products = graphlab.SFrame('amazon_baby_subset.gl/')
machine_learning/3_classification/assigment/week2/module-3-linear-classifier-learning-assignment-blank.ipynb
tuanavu/coursera-university-of-washington
mit
One column of this dataset is 'sentiment', corresponding to the class label with +1 indicating a review with positive sentiment and -1 indicating one with negative sentiment.
products['sentiment']
machine_learning/3_classification/assigment/week2/module-3-linear-classifier-learning-assignment-blank.ipynb
tuanavu/coursera-university-of-washington
mit
Let us quickly explore more of this dataset. The 'name' column indicates the name of the product. Here we list the first 10 products in the dataset. We then count the number of positive and negative reviews.
products.head(10)['name']
print '# of positive reviews =', len(products[products['sentiment']==1])
print '# of negative reviews =', len(products[products['sentiment']==-1])
machine_learning/3_classification/assigment/week2/module-3-linear-classifier-learning-assignment-blank.ipynb
tuanavu/coursera-university-of-washington
mit
Note: For this assignment, we eliminated class imbalance by choosing a subset of the data with a similar number of positive and negative reviews. Apply text cleaning on the review data In this section, we will perform some simple feature cleaning using SFrames. The last assignment used all words in building bag-of-wo...
import json
with open('important_words.json', 'r') as f: # Reads the list of most frequent words
    important_words = json.load(f)
important_words = [str(s) for s in important_words]
print important_words
machine_learning/3_classification/assigment/week2/module-3-linear-classifier-learning-assignment-blank.ipynb
tuanavu/coursera-university-of-washington
mit
Now, we will perform 2 simple data transformations:
1. Remove punctuation using Python's built-in string functionality.
2. Compute word counts (only for important_words).

We start with Step 1, which can be done as follows:
def remove_punctuation(text):
    import string
    return text.translate(None, string.punctuation)

products['review_clean'] = products['review'].apply(remove_punctuation)
machine_learning/3_classification/assigment/week2/module-3-linear-classifier-learning-assignment-blank.ipynb
tuanavu/coursera-university-of-washington
mit
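Note that `text.translate(None, string.punctuation)` is Python 2 specific. A hedged sketch of the equivalent in Python 3 (same behavior, standard library only; `_PUNCT_TABLE` is my own name):

```python
import string

# Python 3 version of the notebook's remove_punctuation helper:
# str.maketrans('', '', chars) builds a table that deletes those chars
_PUNCT_TABLE = str.maketrans('', '', string.punctuation)

def remove_punctuation(text):
    """Strip all ASCII punctuation characters from text."""
    return text.translate(_PUNCT_TABLE)

print(remove_punctuation("Great product!!! Would buy again, 10/10."))
# -> Great product Would buy again 1010
```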
Now we proceed with Step 2. For each word in important_words, we compute a count for the number of times the word occurs in the review. We will store this count in a separate column (one for each word). The result of this feature processing is a single column for each word in important_words which keeps a count of the ...
for word in important_words:
    products[word] = products['review_clean'].apply(lambda s : s.split().count(word))
machine_learning/3_classification/assigment/week2/module-3-linear-classifier-learning-assignment-blank.ipynb
tuanavu/coursera-university-of-washington
mit
The SFrame products now contains one column for each of the 193 important_words. As an example, the column perfect contains a count of the number of times the word perfect occurs in each of the reviews.
products['perfect']
machine_learning/3_classification/assigment/week2/module-3-linear-classifier-learning-assignment-blank.ipynb
tuanavu/coursera-university-of-washington
mit
Now, write some code to compute the number of product reviews that contain the word perfect.

Hint:
* First create a column called contains_perfect which is set to 1 if the count of the word perfect (stored in column perfect) is >= 1.
* Sum the number of 1s in the column contains_perfect.

Quiz Question. How many review...
import numpy as np
machine_learning/3_classification/assigment/week2/module-3-linear-classifier-learning-assignment-blank.ipynb
tuanavu/coursera-university-of-washington
mit
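Outside of SFrame, the same contains_perfect computation can be sketched with plain Python (the toy counts below are made up for illustration, not from the real dataset):

```python
# Hypothetical counts of the word 'perfect' in five reviews
perfect_counts = [0, 2, 1, 0, 3]

# 1 if the review contains 'perfect' at least once, else 0
contains_perfect = [1 if c >= 1 else 0 for c in perfect_counts]

# Number of reviews containing the word is just the sum of the 1s
num_with_perfect = sum(contains_perfect)
print(num_with_perfect)  # -> 3
```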
We now provide you with a function that extracts columns from an SFrame and converts them into a NumPy array. Two arrays are returned: one representing features and another representing class labels. Note that the feature matrix includes an additional column 'intercept' to take account of the intercept term.
def get_numpy_data(data_sframe, features, label):
    data_sframe['intercept'] = 1
    features = ['intercept'] + features
    features_sframe = data_sframe[features]
    feature_matrix = features_sframe.to_numpy()
    label_sarray = data_sframe[label]
    label_array = label_sarray.to_numpy()
    return(feature_matrix...
machine_learning/3_classification/assigment/week2/module-3-linear-classifier-learning-assignment-blank.ipynb
tuanavu/coursera-university-of-washington
mit
Let us convert the data into NumPy arrays.
# Warning: This may take a few minutes...
feature_matrix, sentiment = get_numpy_data(products, important_words, 'sentiment')
machine_learning/3_classification/assigment/week2/module-3-linear-classifier-learning-assignment-blank.ipynb
tuanavu/coursera-university-of-washington
mit
Are you running this notebook on an Amazon EC2 t2.micro instance? (If you are using your own machine, please skip this section.) It has been reported that t2.micro instances do not provide sufficient power to complete the conversion in an acceptable amount of time. In the interest of time, please refrain from running get_nump...
feature_matrix.shape
machine_learning/3_classification/assigment/week2/module-3-linear-classifier-learning-assignment-blank.ipynb
tuanavu/coursera-university-of-washington
mit
Quiz Question: How many features are there in the feature_matrix? Quiz Question: Assuming that the intercept is present, how does the number of features in feature_matrix relate to the number of features in the logistic regression model? Now, let us see what the sentiment column looks like:
sentiment
machine_learning/3_classification/assigment/week2/module-3-linear-classifier-learning-assignment-blank.ipynb
tuanavu/coursera-university-of-washington
mit
Estimating conditional probability with link function Recall from lecture that the link function is given by: $$ P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))}, $$ where the feature vector $h(\mathbf{x}_i)$ represents the word counts of important_words in the review $\mathbf{...
'''
produces probabilistic estimate for P(y_i = +1 | x_i, w).
estimate ranges between 0 and 1.
'''
def predict_probability(feature_matrix, coefficients):
    # Take dot product of feature_matrix and coefficients
    # YOUR CODE HERE
    ...
    # Compute P(y_i = +1 | x_i, w) using the link function
    # YOUR COD...
machine_learning/3_classification/assigment/week2/module-3-linear-classifier-learning-assignment-blank.ipynb
tuanavu/coursera-university-of-washington
mit
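A hedged sketch of one way to fill in predict_probability (not the official assignment solution, just a direct transcription of the link function with NumPy; the tiny check values are made up):

```python
import numpy as np

def predict_probability(feature_matrix, coefficients):
    # score_i = w^T h(x_i) for every row i, via one matrix-vector product
    scores = np.dot(feature_matrix, coefficients)
    # link function: P(y_i = +1 | x_i, w) = 1 / (1 + exp(-score_i))
    return 1. / (1. + np.exp(-scores))

# Tiny check against hand-computed scores [4, -1]
fm = np.array([[1., 2., 3.], [1., -1., -1.]])
w = np.array([1., 3., -1.])
print(predict_probability(fm, w))
```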
Aside. How the link function works with matrix algebra Since the word counts are stored as columns in feature_matrix, each $i$-th row of the matrix corresponds to the feature vector $h(\mathbf{x}_i)$: $$ [\text{feature\_matrix}] = \left[ \begin{array}{c} h(\mathbf{x}_1)^T \\ h(\mathbf{x}_2)^T \\ \vdots \\ h(\mathbf{x}_N)^T \end{array} \right] ...
dummy_feature_matrix = np.array([[1.,2.,3.], [1.,-1.,-1]])
dummy_coefficients = np.array([1., 3., -1.])

correct_scores = np.array( [ 1.*1. + 2.*3. + 3.*(-1.), 1.*1. + (-1.)*3. + (-1.)*(-1.) ] )
correct_predictions = np.array( [ 1./(1+np.exp(-correct_scores[0])), 1./(1+np.exp(-correct_scores[1])) ] )

pri...
machine_learning/3_classification/assigment/week2/module-3-linear-classifier-learning-assignment-blank.ipynb
tuanavu/coursera-university-of-washington
mit
Compute derivative of log likelihood with respect to a single coefficient Recall from lecture: $$ \frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right) $$ We will now write a function that computes the derivative of log likelihood wi...
def feature_derivative(errors, feature):
    # Compute the dot product of errors and feature
    derivative = ...
    # Return the derivative
    return derivative
machine_learning/3_classification/assigment/week2/module-3-linear-classifier-learning-assignment-blank.ipynb
tuanavu/coursera-university-of-washington
mit
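One plausible completion of feature_derivative (a sketch, not the graded solution): per the comment and the formula above, the partial derivative with respect to $w_j$ is just the dot product of the errors with the $j$-th feature column:

```python
import numpy as np

def feature_derivative(errors, feature):
    # errors_i = 1[y_i = +1] - P(y_i = +1 | x_i, w); feature is column h_j
    # d(log-likelihood)/dw_j = sum_i h_j(x_i) * errors_i
    return np.dot(errors, feature)

# Made-up values: 0.5*1 + (-0.2)*0 + 0.1*2 = 0.7
errors = np.array([0.5, -0.2, 0.1])
feature = np.array([1., 0., 2.])
print(feature_derivative(errors, feature))
```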
In the main lecture, our focus was on the likelihood. In the advanced optional video, however, we introduced a transformation of this likelihood---called the log likelihood---that simplifies the derivation of the gradient and is more numerically stable. Due to its numerical stability, we will use the log likelihood i...
def compute_log_likelihood(feature_matrix, sentiment, coefficients):
    indicator = (sentiment==+1)
    scores = np.dot(feature_matrix, coefficients)
    logexp = np.log(1. + np.exp(-scores))

    # Simple check to prevent overflow
    mask = np.isinf(logexp)
    logexp[mask] = -scores[mask]

    lp = np.sum((...
machine_learning/3_classification/assigment/week2/module-3-linear-classifier-learning-assignment-blank.ipynb
tuanavu/coursera-university-of-washington
mit
Checkpoint Just to make sure we are on the same page, run the following code block and check that the outputs match.
dummy_feature_matrix = np.array([[1.,2.,3.], [1.,-1.,-1]])
dummy_coefficients = np.array([1., 3., -1.])
dummy_sentiment = np.array([-1, 1])

correct_indicators = np.array( [ -1==+1, 1==+1 ] )
correct_scores = np.array( [ 1.*1. + 2.*3. + 3.*(-1.), 1.*1. + (...
machine_learning/3_classification/assigment/week2/module-3-linear-classifier-learning-assignment-blank.ipynb
tuanavu/coursera-university-of-washington
mit
Taking gradient steps Now we are ready to implement our own logistic regression. All we have to do is to write a gradient ascent function that takes gradient steps towards the optimum. Complete the following function to solve the logistic regression model using gradient ascent:
from math import sqrt

def logistic_regression(feature_matrix, sentiment, initial_coefficients, step_size, max_iter):
    coefficients = np.array(initial_coefficients) # make sure it's a numpy array
    for itr in xrange(max_iter):
        # Predict P(y_i = +1|x_i,w) using your predict_probability() function
        #...
machine_learning/3_classification/assigment/week2/module-3-linear-classifier-learning-assignment-blank.ipynb
tuanavu/coursera-university-of-washington
mit
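The whole gradient-ascent loop can be sketched end-to-end on a toy problem (assumed to mirror the notebook's structure; Python 3 `range` replaces `xrange`, and the toy data is made up):

```python
import numpy as np

def predict_probability(feature_matrix, coefficients):
    scores = np.dot(feature_matrix, coefficients)
    return 1. / (1. + np.exp(-scores))

def logistic_regression(feature_matrix, sentiment, initial_coefficients,
                        step_size, max_iter):
    coefficients = np.array(initial_coefficients, dtype=float)
    for itr in range(max_iter):
        # Predict P(y_i = +1 | x_i, w) with the current coefficients
        predictions = predict_probability(feature_matrix, coefficients)
        # errors_i = 1[y_i = +1] - P(y_i = +1 | x_i, w)
        errors = (sentiment == +1) - predictions
        # One ascent step for every coefficient at once:
        # w_j += step_size * sum_i h_j(x_i) * errors_i
        coefficients += step_size * np.dot(feature_matrix.T, errors)
    return coefficients

# Toy data: column 0 is the intercept, column 1 separates the classes
X = np.array([[1., 2.], [1., -2.], [1., 3.], [1., -3.]])
y = np.array([1, -1, 1, -1])
w = logistic_regression(X, y, np.zeros(2), step_size=0.1, max_iter=200)
print(w)
```

Note the update is vectorized with `feature_matrix.T @ errors` rather than a per-feature loop; both compute the same gradient.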
Now, let us run the logistic regression solver.
coefficients = logistic_regression(feature_matrix, sentiment,
                                   initial_coefficients=np.zeros(194),
                                   step_size=1e-7, max_iter=301)
machine_learning/3_classification/assigment/week2/module-3-linear-classifier-learning-assignment-blank.ipynb
tuanavu/coursera-university-of-washington
mit
Quiz question: As each iteration of gradient ascent passes, does the log likelihood increase or decrease? Predicting sentiments Recall from lecture that class predictions for a data point $\mathbf{x}$ can be computed from the coefficients $\mathbf{w}$ using the following formula: $$ \hat{y}_i = \left\{ \begin{array}{ll...
# Compute the scores as a dot product between feature_matrix and coefficients.
scores = np.dot(feature_matrix, coefficients)
machine_learning/3_classification/assigment/week2/module-3-linear-classifier-learning-assignment-blank.ipynb
tuanavu/coursera-university-of-washington
mit
Now, complete the following code block for Step 2 to compute the class predictions using the scores obtained above: Quiz question: How many reviews were predicted to have positive sentiment? Measuring accuracy We will now measure the classification accuracy of the model. Recall from the lecture that the classificatio...
num_mistakes = ... # YOUR CODE HERE
accuracy = ... # YOUR CODE HERE
print "-----------------------------------------------------"
print '# Reviews correctly classified =', len(products) - num_mistakes
print '# Reviews incorrectly classified =', num_mistakes
print '# Reviews total =', len(products)
pr...
machine_learning/3_classification/assigment/week2/module-3-linear-classifier-learning-assignment-blank.ipynb
tuanavu/coursera-university-of-washington
mit
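A sketch of Step 2 plus the accuracy bookkeeping (a plausible completion, not the graded answer; the scores and labels below are made up): threshold the scores at 0 to get ±1 predictions, then count mistakes:

```python
import numpy as np

# Hypothetical scores and true labels for illustration
scores = np.array([2.5, -0.3, 0.0, 1.1, -4.2])
sentiment = np.array([1, 1, -1, 1, -1])

# y_hat_i = +1 if score_i > 0, else -1
class_predictions = np.where(scores > 0., 1, -1)

num_mistakes = np.sum(class_predictions != sentiment)
accuracy = 1. - num_mistakes / float(len(sentiment))
print(num_mistakes, accuracy)  # -> 1 0.8
```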
Quiz question: What is the accuracy of the model on predictions made above? (round to 2 digits of accuracy) Which words contribute most to positive & negative sentiments? Recall that in Module 2 assignment, we were able to compute the "most positive words". These are words that correspond most strongly with positive re...
coefficients = list(coefficients[1:]) # exclude intercept
word_coefficient_tuples = [(word, coefficient) for word, coefficient in zip(important_words, coefficients)]
word_coefficient_tuples = sorted(word_coefficient_tuples, key=lambda x:x[1], reverse=True)
machine_learning/3_classification/assigment/week2/module-3-linear-classifier-learning-assignment-blank.ipynb
tuanavu/coursera-university-of-washington
mit
Visualizing data
mnist = input_data.read_data_sets(FLAGS.data_dir, one_hot=True)
batch_xs, batch_ys = mnist.train.next_batch(batch_size)

import matplotlib.pyplot as plt
%matplotlib inline

plt.imshow(batch_xs[0].reshape(28, 28))
batch_ys[0]
plt.imshow(batch_xs[10].reshape(28, 28))
batch_ys[10]
plt.imshow(batch_xs[60].reshape(28, 28...
MNIST_for_beginners_noNN_noCONV_0.12.0-rc1.ipynb
gtesei/DeepExperiments
apache-2.0
The current state of the art in classifying these digits can be found here: http://rodrigob.github.io/are_we_there_yet/build/classification_datasets_results.html#4d4e495354 Model
def main(_):
    # Import data
    mnist = input_data.read_data_sets(FLAGS.data_dir, one_hot=True)
    # Create the model
    ...
MNIST_for_beginners_noNN_noCONV_0.12.0-rc1.ipynb
gtesei/DeepExperiments
apache-2.0
TensorBoard: Visualizing Learning
from tensorflow.contrib.tensorboard.plugins import projector

def variable_summaries(var):
    """Attach a lot of summaries to a Tensor (for TensorBoard visualization)."""
    with tf.name_scope('summaries'):
        mean = tf.reduce_mean(var)
        tf.summary.scalar(var.name+'_mean', mean) #tf.scalar_summary(var.name+'_mean...
MNIST_for_beginners_noNN_noCONV_0.12.0-rc1.ipynb
gtesei/DeepExperiments
apache-2.0
Load Data For this notebook, we'll be using a sample set of timeseries data of BART ridership on the 5 most commonly traveled stations in San Francisco. This subsample of data was selected and processed from Pyro's examples http://docs.pyro.ai/en/stable/_modules/pyro/contrib/examples/bart.html
import os
import urllib.request

smoke_test = ('CI' in os.environ)
if not smoke_test and not os.path.isfile('../BART_sample.pt'):
    print('Downloading \'BART\' sample dataset...')
    urllib.request.urlretrieve('https://drive.google.com/uc?export=download&id=1A6LqCHPA5lHa5S3lMH8mLMNEgeku8lRG', '../BART_sample.pt')
...
examples/01_Exact_GPs/Spectral_Delta_GP_Regression.ipynb
jrg365/gpytorch
mit
Define a Model The only thing of note here is the use of the kernel. For this example, we'll learn a kernel with 2048 deltas in the mixture, and initialize by sampling directly from the empirical spectrum of the data.
class SpectralDeltaGP(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, num_deltas, noise_init=None):
        likelihood = gpytorch.likelihoods.GaussianLikelihood(noise_constraint=gpytorch.constraints.GreaterThan(1e-11))
        likelihood.register_prior("noise_prior", gpytorch.priors.HorseshoePrior(0....
examples/01_Exact_GPs/Spectral_Delta_GP_Regression.ipynb
jrg365/gpytorch
mit
Train
model.train()
mll = gpytorch.mlls.ExactMarginalLogLikelihood(model.likelihood, model)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer=optimizer, milestones=[40])
num_iters = 1000 if not smoke_test else 4

with gpytorch.settings.max_cholesky_size(0):
    ...
examples/01_Exact_GPs/Spectral_Delta_GP_Regression.ipynb
jrg365/gpytorch
mit
Plot Results
from matplotlib import pyplot as plt
%matplotlib inline

_task = 3
plt.subplots(figsize=(15, 15), sharex=True, sharey=True)
for _task in range(2):
    ax = plt.subplot(3, 1, _task + 1)
    with torch.no_grad():
        # Initialize plot
        # f, ax = plt.subplots(1, 1, figsize=(16, 12))
        # Get upper and ...
examples/01_Exact_GPs/Spectral_Delta_GP_Regression.ipynb
jrg365/gpytorch
mit
Generate Poisson process data and exponential intervals. For each interval, choose $n$ events from a Poisson distribution; then, for each event, draw its location within the interval from a uniform distribution.
np.random.seed(8675309)
nT = 400
cts = np.random.poisson(20, size=nT)

edata = []
for i in range(nT):
    edata.extend(i + np.sort(np.random.uniform(low=0, high=1, size=cts[i])))
edata = np.asarray(edata)
edata.shape

plt.plot(edata, np.arange(len(edata)))
plt.xlabel('Time of event')
plt.ylabel('Event number')
plt.title...
Counting/Poisson and exponential.ipynb
balarsen/pymc_learning
bsd-3-clause
This is consistent with a Poisson of parameter 20! But there seems to be an under-prediction going on; wonder why? Go through Posterior Predictive Checks (http://docs.pymc.io/notebooks/posterior_predictive.html) and see if we are reproducing the mean and variance.
ppc = mc.sample_ppc(trace, samples=500, model=model, size=100)

ax = plt.subplot()
sns.distplot([n.mean() for n in ppc['Poisson']], kde=False, ax=ax)
ax.axvline(cts.mean())
ax.set(title='Posterior predictive of the mean (Poisson)', xlabel='mean(x)', ylabel='Frequency');

ax = plt.subplot()
sns.distplot([n.var() for n ...
Counting/Poisson and exponential.ipynb
balarsen/pymc_learning
bsd-3-clause
We are reproducing well. Given the data we generated, which will be treated as truth, what would we measure with various dead times, and does the correction match what we think it should be? The correction should look like $n_1 = \frac{R_1}{1-R_1 \tau}$ where $n_1$ is the real rate, $R_1$ is the observed rate, and $\tau$ is the dead time...
deadtime1 = 0.005 # small dead time
deadtime2 = 0.1   # large dead time

edata_td1 = []
edata_td1.append(edata[0])
edata_td2 = []
edata_td2.append(edata[0])

for ii, v in enumerate(edata[1:], 1): # stop one shy to not run over the end, start enumerate at 1
    if v - edata_td1[-1] >= deadtime1:
        edata_td1.appen...
Counting/Poisson and exponential.ipynb
balarsen/pymc_learning
bsd-3-clause
And plot the rates per unit time
plt.figure(figsize=(8,6))
h1, b1 = np.histogram(edata, np.arange(1000))
plt.plot(tb.bin_edges_to_center(b1), h1, label='Real data', c='k')
h2, b2 = np.histogram(edata_td1, np.arange(1000))
plt.plot(tb.bin_edges_to_center(b2), h2, label='Small dead time', c='r')
h3, b3 = np.histogram(edata_td2, np.arange(1000))
plt.pl...
Counting/Poisson and exponential.ipynb
balarsen/pymc_learning
bsd-3-clause
Can we use $n_1 = \frac{R_1}{1-R_1 \tau}$ to derive the relation and spread in the distribution of R? Algebra rearranges this to: $R_1=\frac{n_1}{1+n_1\tau}$. Use the small dead time.
# assume R1 is Poisson
with mc.Model() as model:
    tau = deadtime1
    obsRate = mc.Uniform('obsRate', 0, 1000, shape=1)
    obsData = mc.Poisson('obsData', obsRate, observed=h2[:400], shape=1)
    realRate = mc.Deterministic('realRate', obsData/(1-obsData*tau))
    start = mc.find_MAP()
    trace = mc.sample(10000,...
Counting/Poisson and exponential.ipynb
balarsen/pymc_learning
bsd-3-clause
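Before trusting the sampler, a quick numeric sanity check (plain Python, values chosen to match the small dead time above; the true rate of 20 is the simulation's Poisson parameter) shows the two expressions are algebraic inverses of each other:

```python
tau = 0.005   # dead time, matching deadtime1 above
n1 = 20.0     # assumed true rate

# observed rate from the true rate: R1 = n1 / (1 + n1*tau)
R1 = n1 / (1. + n1 * tau)

# recovered true rate from the observed rate: n1 = R1 / (1 - R1*tau)
n1_recovered = R1 / (1. - R1 * tau)

print(R1, n1_recovered)  # R1 is below 20; the recovery round-trips to ~20
```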
Use the large dead time
# assume R1 is Poisson
with mc.Model() as model:
    tau = deadtime2
    obsRate = mc.Uniform('obsRate', 0, 1000)
    obsData = mc.Poisson('obsData', obsRate, observed=h3[:400])
    realRate = mc.Deterministic('realRate', obsData/(1-obsData*tau))
    start = mc.find_MAP()
    trace = mc.sample(10000, start=start, njob...
Counting/Poisson and exponential.ipynb
balarsen/pymc_learning
bsd-3-clause
But this is totally broken!!! Output data files for each
real = pd.Series(edata)
td1 = pd.Series(edata_td1)
td2 = pd.Series(edata_td2)
real.to_csv('no_deadtime_times.csv')
td1.to_csv('small_deadtime_times.csv')
td2.to_csv('large_deadtime_times.csv')

real = pd.Series(h1[h1>0])
td1 = pd.Series(h2[h2>0])
td2 = pd.Series(h3[h3>0])
real.to_csv('no_deadtime_rates.csv')
td1.to...
Counting/Poisson and exponential.ipynb
balarsen/pymc_learning
bsd-3-clause
Work on the random thoughts
with mc.Model() as model:
    BoundedExp = mc.Bound(mc.Exponential, lower=deadtime2, upper=None)
    # we observe the following time between counts
    lam = mc.Uniform('lam', 0, 1000)
    time_between = BoundedExp('tb_ob', lam, observed=np.diff(edata_td2))
    start = mc.find_MAP()
    trace = mc.sample(10000, nj...
Counting/Poisson and exponential.ipynb
balarsen/pymc_learning
bsd-3-clause
Synthetic Features and Outliers Learning Objectives: * Create a synthetic feature that is the ratio of two other features * Use this new feature as an input to a linear regression model * Improve the effectiveness of the model by identifying and clipping (removing) outliers out of the input data Let's revisit our...
from __future__ import print_function

import math

from IPython import display
from matplotlib import cm
from matplotlib import gridspec
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import sklearn.metrics as metrics
import tensorflow as tf
from tensorflow.python.data import Dataset

tf.loggin...
ml_notebooks/synthetic_features_and_outliers.ipynb
bt3gl/Machine-Learning-Resources
gpl-2.0
Next, we'll set up our input function, and define the function for model training:
def my_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None):
    """Trains a linear regression model of one feature.

    Args:
      features: pandas DataFrame of features
      targets: pandas DataFrame of targets
      batch_size: Size of batches to be passed to the model
      shuffle: True or...
ml_notebooks/synthetic_features_and_outliers.ipynb
bt3gl/Machine-Learning-Resources
gpl-2.0
Task 1: Try a Synthetic Feature Both the total_rooms and population features count totals for a given city block. But what if one city block were more densely populated than another? We can explore how block density relates to median house value by creating a synthetic feature that's a ratio of total_rooms and populati...
#
# YOUR CODE HERE
#
california_housing_dataframe["rooms_per_person"] =

calibration_data = train_model(
    learning_rate=0.00005,
    steps=500,
    batch_size=5,
    input_feature="rooms_per_person"
)
ml_notebooks/synthetic_features_and_outliers.ipynb
bt3gl/Machine-Learning-Resources
gpl-2.0
Solution Click below for a solution.
california_housing_dataframe["rooms_per_person"] = (
    california_housing_dataframe["total_rooms"] / california_housing_dataframe["population"])

calibration_data = train_model(
    learning_rate=0.05,
    steps=500,
    batch_size=5,
    input_feature="rooms_per_person")
ml_notebooks/synthetic_features_and_outliers.ipynb
bt3gl/Machine-Learning-Resources
gpl-2.0
Task 2: Identify Outliers We can visualize the performance of our model by creating a scatter plot of predictions vs. target values. Ideally, these would lie on a perfectly correlated diagonal line. Use Pyplot's scatter() to create a scatter plot of predictions vs. targets, using the rooms-per-person model you trained...
# YOUR CODE HERE
ml_notebooks/synthetic_features_and_outliers.ipynb
bt3gl/Machine-Learning-Resources
gpl-2.0
Solution Click below for the solution.
plt.figure(figsize=(15, 6))
plt.subplot(1, 2, 1)
plt.scatter(calibration_data["predictions"], calibration_data["targets"])
ml_notebooks/synthetic_features_and_outliers.ipynb
bt3gl/Machine-Learning-Resources
gpl-2.0
The calibration data shows most scatter points aligned to a line. The line is almost vertical, but we'll come back to that later. Right now let's focus on the ones that deviate from the line. We notice that they are relatively few in number. If we plot a histogram of rooms_per_person, we find that we have a few outlier...
plt.subplot(1, 2, 2)
_ = california_housing_dataframe["rooms_per_person"].hist()
ml_notebooks/synthetic_features_and_outliers.ipynb
bt3gl/Machine-Learning-Resources
gpl-2.0
Task 3: Clip Outliers See if you can further improve the model fit by setting the outlier values of rooms_per_person to some reasonable minimum or maximum. For reference, here's a quick example of how to apply a function to a Pandas Series: clipped_feature = my_dataframe["my_feature_name"].apply(lambda x: max(x, 0)) T...
# YOUR CODE HERE
ml_notebooks/synthetic_features_and_outliers.ipynb
bt3gl/Machine-Learning-Resources
gpl-2.0
Solution Click below for the solution. The histogram we created in Task 2 shows that the majority of values are less than 5. Let's clip rooms_per_person to 5, and plot a histogram to double-check the results.
california_housing_dataframe["rooms_per_person"] = (
    california_housing_dataframe["rooms_per_person"]).apply(lambda x: min(x, 5))

_ = california_housing_dataframe["rooms_per_person"].hist()
ml_notebooks/synthetic_features_and_outliers.ipynb
bt3gl/Machine-Learning-Resources
gpl-2.0
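The same clipping can be sketched with plain Python to see exactly what the `apply(lambda x: min(x, 5))` call does (the toy values below are made up):

```python
# Hypothetical rooms_per_person values, including two outliers
rooms_per_person = [1.2, 2.5, 0.8, 55.0, 3.1, 18.4]

# Clip every value to a maximum of 5, mirroring apply(lambda x: min(x, 5))
clipped = [min(x, 5) for x in rooms_per_person]
print(clipped)  # -> [1.2, 2.5, 0.8, 5, 3.1, 5]
```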
To verify that clipping worked, let's train again and print the calibration data once more:
calibration_data = train_model(
    learning_rate=0.05,
    steps=500,
    batch_size=5,
    input_feature="rooms_per_person")

_ = plt.scatter(calibration_data["predictions"], calibration_data["targets"])
ml_notebooks/synthetic_features_and_outliers.ipynb
bt3gl/Machine-Learning-Resources
gpl-2.0
Feedforward Neural Network
# import feedforward neural net
from mlnn import neural_net
.ipynb_checkpoints/mlnn-checkpoint.ipynb
ishank26/nn_from_scratch
gpl-3.0
Let's build a 4-layer neural network. Our network has one input layer, two hidden layers, and one output layer. Our model can be represented as a directed acyclic graph wherein each node in a layer is conn...
# Visualize tanh and its derivative
x = np.linspace(-np.pi, np.pi, 120)
plt.figure(figsize=(8, 3))
plt.subplot(1, 2, 1)
plt.plot(x, np.tanh(x))
plt.title("tanh(x)")
plt.xlim(-3, 3)
plt.subplot(1, 2, 2)
plt.plot(x, 1 - np.square(np.tanh(x)))
plt.xlim(-3, 3)
plt.title("tanh'(x)")
plt.show()
.ipynb_checkpoints/mlnn-checkpoint.ipynb
ishank26/nn_from_scratch
gpl-3.0
It can be seen from the above figure that as we increase our input, the activation starts to saturate, which can in turn kill gradients. This can be mitigated using rectified activation functions. Another problem that we encounter in training deep neural networks during backpropagation is vanishing gradient and gradie...
# Training the neural network
my_nn = neural_net([2, 4, 2]) # [2,4,2] = [input nodes, hidden nodes, output nodes]
my_nn.train(X, y, 0.001, 0.0001) # weights regularization lambda=0.001, epsilon=0.0001

### visualize predictions
my_nn.visualize_preds(X, y)
.ipynb_checkpoints/mlnn-checkpoint.ipynb
ishank26/nn_from_scratch
gpl-3.0
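The saturation claim is easy to check numerically: tanh'(x) = 1 - tanh(x)^2 collapses toward zero as |x| grows (standard library only, a quick sketch):

```python
import math

def tanh_grad(x):
    """Derivative of tanh: 1 - tanh(x)^2."""
    return 1.0 - math.tanh(x) ** 2

for x in [0.0, 1.0, 3.0, 6.0]:
    print(x, tanh_grad(x))
# the gradient is 1.0 at x=0 but essentially 0 by x=6:
# saturated units pass back almost no gradient and stop learning
```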
Animate Training:
X_, y_ = sklearn.datasets.make_circles(n_samples=400, noise=0.18, factor=0.005, random_state=1) plt.figure(figsize=(7, 5)) plt.scatter(X_[:, 0], X_[:, 1], s=15, c=y_, cmap=plt.cm.Spectral) plt.show() ''' Uncomment the code below to see classification process for above data. To stop training early reduce no. of ite...
.ipynb_checkpoints/mlnn-checkpoint.ipynb
ishank26/nn_from_scratch
gpl-3.0
We can segment the income data into 50 buckets, and plot it as a histogram:
%matplotlib inline # %config InlineBackend.figure_format='retina' # import seaborn as sns # sns.set_context("paper") # sns.set_style("white") # sns.set() import matplotlib.pyplot as plt plt.hist(incomes, 50) plt.show()
handson-data-science-python/DataScience-Python3/.ipynb_checkpoints/MeanMedianMode-checkpoint.ipynb
vadim-ivlev/STUDY
mit
Now compute the median - since we have a nice, even distribution it too should be close to 27,000:
np.median(incomes)
handson-data-science-python/DataScience-Python3/.ipynb_checkpoints/MeanMedianMode-checkpoint.ipynb
vadim-ivlev/STUDY
mit
Now we'll add Donald Trump into the mix. Darn income inequality!
incomes = np.append(incomes, [1000000000])
handson-data-science-python/DataScience-Python3/.ipynb_checkpoints/MeanMedianMode-checkpoint.ipynb
vadim-ivlev/STUDY
mit
The median won't change much, but the mean does:
np.median(incomes) np.mean(incomes)
handson-data-science-python/DataScience-Python3/.ipynb_checkpoints/MeanMedianMode-checkpoint.ipynb
vadim-ivlev/STUDY
mit
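The robustness argument above can be sketched end to end with synthetic incomes and one extreme outlier (values here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
incomes = rng.normal(27000, 15000, 10000)

before = (np.mean(incomes), np.median(incomes))
incomes = np.append(incomes, [1_000_000_000])  # one billionaire
after = (np.mean(incomes), np.median(incomes))

# The mean jumps by roughly 1e9 / 10001 ~ 100,000; the median barely moves.
print(before, after)
```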
Mode Next, let's generate some fake age data for 500 people:
ages = np.random.randint(18, high=90, size=500) ages from scipy import stats stats.mode(ages)
handson-data-science-python/DataScience-Python3/.ipynb_checkpoints/MeanMedianMode-checkpoint.ipynb
vadim-ivlev/STUDY
mit
Data extraction and clean up My first data source is the World Bank. We will access World Bank data using wbdata, a simple Python interface for finding and requesting information from the World Bank's various databases, either as a dictionary containing full metadata or as a pandas DataFrame. Currently, wbdata ...
wb.search('gdp.*capita.*const') # we use this function to search for GDP related indicators wb.search('employment') # we use this function to search for employment related indicators wb.search('unemployment') # we use this function to search for unemployment related indicators #I have identified the relevant variabl...
UG_F16/RodriguezBallve-Spain's_Labor_Market.ipynb
NYUDataBootcamp/Projects
mit
Plotting the data
# with a clean and orthodox Dataframe, I can start to do some graphics import matplotlib.pyplot as plt %matplotlib inline # we invert the x axis. Never managed to make 'Year' the X axis, lost a lot of hair in the process :( plt.gca().invert_xaxis() # Came up with this solution # and add the indicators plt.plot(esplbr....
UG_F16/RodriguezBallve-Spain's_Labor_Market.ipynb
NYUDataBootcamp/Projects
mit
Observations Spain has recently lived through a depression without precedent, yet unemployment rates above 20% are nothing new: there is a large structural component in addition to the demand-deficient factor. Youth unemployment is particularly bad, which is the norm elsewhere too, but the spread is accentuated in S...
# let's take a look at unemployment by education level import matplotlib.pyplot as plt %matplotlib inline # we invert the x axis plt.gca().invert_xaxis() #we add the variables plt.plot(esplbr.index, esplbr['UnempW/PrimEd.']) plt.plot(esplbr.index, esplbr['UnempW/SecEd']) plt.plot(esplbr.index, esplbr['UnempW/TertEd'])...
UG_F16/RodriguezBallve-Spain's_Labor_Market.ipynb
NYUDataBootcamp/Projects
mit
Observations Those unemployed with only primary education completed and ni-nis started to rise hand in hand ten years ago, when the crisis hit. This suggests overlap between the two groups. The elephant in the room is a massive construction bubble that made Spain's variant of the crisis particularly brutal. For decades, a...
# Don't forget the DMV paperwork import quandl # Quandl package quandl.ApiConfig.api_key = '3w_GYBRfX3ZxG7my_vhs' # register for a key and unlimited number of requests # Playing it safe import sys # system module import pandas as pd # data package import matplotlib.py...
UG_F16/RodriguezBallve-Spain's_Labor_Market.ipynb
NYUDataBootcamp/Projects
mit
Data extraction and clean up We're going to be comparing Spain's NAIRU to that of Denmark. Don't tell Sanders, but Denmark is well known for having one of the most 'flexible' labor markets in Europe.
# We extract the indicators and print the dataframe NAIRU = quandl.get((['OECD/EO91_INTERNET_ESP_NAIRU_A','OECD/EO91_INTERNET_DNK_NAIRU_A']), #We call for both start_date = "1990-12-31", end_date = "2013-12-31") # And limit the time horizon NAIRU # What do we have here? type(NAIRU) NAIRU.columns ...
UG_F16/RodriguezBallve-Spain's_Labor_Market.ipynb
NYUDataBootcamp/Projects
mit
VIB + DoSE <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/probability/blob/main/tensorflow_probability/python/experimental/nn/examples/vib_dose.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Goog...
import functools import sys import time import numpy as np import matplotlib.pyplot as plt import tensorflow.compat.v2 as tf tf.enable_v2_behavior() import tensorflow_datasets as tfds import tensorflow_probability as tfp # Globally Enable XLA. # tf.config.optimizer.set_jit(True) try: physical_devices = tf.config...
tensorflow_probability/python/experimental/nn/examples/vib_dose.ipynb
tensorflow/probability
apache-2.0
2 Load Dataset
[train_dataset, eval_dataset], datasets_info = tfds.load( name='mnist', split=['train', 'test'], with_info=True, shuffle_files=True) def _preprocess(sample): return (tf.cast(sample['image'], tf.float32) * 2 / 255. - 1., tf.cast(sample['label'], tf.int32)) train_size = datasets_info.splits[...
tensorflow_probability/python/experimental/nn/examples/vib_dose.ipynb
tensorflow/probability
apache-2.0
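The `_preprocess` step above rescales pixel values from [0, 255] to [-1, 1]; a small numpy check of that mapping (outside of TensorFlow, just to verify the arithmetic):

```python
import numpy as np

def preprocess(pixels):
    # Same rescaling as the _preprocess function above: [0, 255] -> [-1, 1].
    return pixels.astype(np.float32) * 2 / 255.0 - 1.0

pixels = np.array([0.0, 127.5, 255.0])
print(preprocess(pixels))  # -> [-1.  0.  1.]
```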
3 Define Model
input_shape = datasets_info.features['image'].shape encoded_size = 16 base_depth = 32 prior = tfd.MultivariateNormalDiag( loc=tf.zeros(encoded_size), scale_diag=tf.ones(encoded_size)) Conv = functools.partial( tfn.Convolution, init_bias_fn=tf.zeros_initializer(), init_kernel_fn=tf.initializers.he_...
tensorflow_probability/python/experimental/nn/examples/vib_dose.ipynb
tensorflow/probability
apache-2.0
4 Loss / Eval
def compute_loss(x, y, beta=1.): q = encoder(x) z = q.sample() p = decoder(z) kl = tf.reduce_mean(q.log_prob(z) - prior.log_prob(z), axis=-1) # Note: we could use exact KL divergence, eg: # kl = tf.reduce_mean(tfd.kl_divergence(q, prior)) # however we generally find that using the Monte Carlo approximat...
tensorflow_probability/python/experimental/nn/examples/vib_dose.ipynb
tensorflow/probability
apache-2.0
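The comment above notes that the Monte Carlo KL estimate tracks the exact KL closely; for two univariate Gaussians this is easy to verify in plain numpy (a sketch, not the TFP code — the values of `m` and `s` are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
m, s = 0.5, 0.8  # hypothetical posterior N(m, s^2) vs prior N(0, 1)

# Closed-form KL(N(m, s^2) || N(0, 1)).
exact_kl = np.log(1.0 / s) + (s**2 + m**2) / 2.0 - 0.5

# Monte Carlo estimate: E_q[log q(z) - log p(z)] with z ~ q.
z = rng.normal(m, s, size=200_000)
log_q = -0.5 * np.log(2 * np.pi * s**2) - (z - m) ** 2 / (2 * s**2)
log_p = -0.5 * np.log(2 * np.pi) - z**2 / 2
mc_kl = np.mean(log_q - log_p)

print(exact_kl, mc_kl)  # the two agree to a few decimal places
```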
5 Train
DEBUG_MODE = False tf.config.experimental_run_functions_eagerly(DEBUG_MODE) num_train_epochs = 25. # @param { isTemplate: true} num_evals = 200 # @param { isTemplate: true} dur_sec = dur_num = 0 num_train_steps = int(num_train_epochs * train_size // batch_size) for i in range(num_train_steps): start = time....
tensorflow_probability/python/experimental/nn/examples/vib_dose.ipynb
tensorflow/probability
apache-2.0
6 Evaluate Classification Accuracy
def evaluate_accuracy(dataset, encoder, decoder): """Evaluate the accuracy of your model on a dataset. """ this_it = iter(dataset) num_correct = 0 num_total = 0 attempts = 0 for xin, xout in this_it: xin, xout = next(this_it) e = encoder(xin) z = e.sample(10000) # 10K samples should have low ...
tensorflow_probability/python/experimental/nn/examples/vib_dose.ipynb
tensorflow/probability
apache-2.0
The accuracy of one training run with this particular model and training setup was 99.15%, which is within half a percent of the state of the art, and comparable to the MNIST accuracy reported in Alemi et al. (2016). OOD detection using DoSE From the previous section, we have trained a variational classifier. How...
def get_statistics(encoder, decoder, prior): """Setup a function to evaluate statistics given model components. Args: encoder: Callable neural network which takes in an image and returns a tfp.distributions.Distribution object. decoder: Callable neural network which takes in a vector and r...
tensorflow_probability/python/experimental/nn/examples/vib_dose.ipynb
tensorflow/probability
apache-2.0
2 Define DoSE helper classes and functions
def get_DoSE_KDE(T, dataset): """Get a distribution and decision rule for OOD detection using DoSE. Given a tensor of statistics tx, compute a Kernel Density Estimate (KDE) of the statistics. This uses a quantiles trick to cut down the number of samples used in the KDE (to lower the cost of evaluating a tria...
tensorflow_probability/python/experimental/nn/examples/vib_dose.ipynb
tensorflow/probability
apache-2.0
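A rough sketch of the quantile trick described in the docstring, using `scipy.stats.gaussian_kde` on synthetic statistics (the variable names, number of quantiles, and threshold choice are illustrative, not the notebook's exact implementation):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
stats = rng.normal(size=10_000)  # stand-in for per-example statistics T(x)

# Quantile trick: summarize the sample by 100 quantiles so the KDE is
# cheap to evaluate (100 kernel centers instead of 10,000).
quantiles = np.quantile(stats, np.linspace(0.005, 0.995, 100))
kde = gaussian_kde(quantiles)

# Decision rule sketch: flag inputs whose statistic has low density
# relative to the 5th percentile of in-distribution densities.
threshold = np.quantile(kde.logpdf(stats), 0.05)
is_ood = kde.logpdf(np.array([8.0])) < threshold  # a far-tail value
print(is_ood)
```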
3 Setup OOD dataset
# For evaluating statistics on the training set, we need to perform a # pass through the dataset. train_one_pass = tfds.load('mnist')['train'] train_one_pass = tfn.util.tune_dataset(train_one_pass, batch_size=1000, repeat_count=None, ...
tensorflow_probability/python/experimental/nn/examples/vib_dose.ipynb
tensorflow/probability
apache-2.0
4 Administer DoSE
DoSE_admin = DoSE_administrator(T, train_one_pass, hybrid_data)
tensorflow_probability/python/experimental/nn/examples/vib_dose.ipynb
tensorflow/probability
apache-2.0
5 Evaluate OOD performance
fp, tp = DoSE_admin.roc_curve(10000) precision, recall = DoSE_admin.precision_recall_curve(10000) plt.figure(figsize=[10,5]) plt.subplot(121) plt.plot(fp, tp, 'b-') plt.xlim(0, 1.) plt.ylim(0., 1.) plt.xlabel('FPR', fontsize=12) plt.ylabel('TPR', fontsize=12) plt.title("AUROC: %.4f"%np.trapz(tp, fp), fontsize=12) plt.s...
tensorflow_probability/python/experimental/nn/examples/vib_dose.ipynb
tensorflow/probability
apache-2.0
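The AUROC in the plot title above is computed with `np.trapz` over the ROC curve; a self-contained sketch on synthetic scores (the score distributions are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical detector scores: higher means "more OOD".
in_scores = rng.normal(0.0, 1.0, 1000)
ood_scores = rng.normal(2.0, 1.0, 1000)

scores = np.concatenate([in_scores, ood_scores])
labels = np.concatenate([np.zeros(1000), np.ones(1000)])

# Sweep thresholds from high to low so FPR increases along the curve.
thresholds = np.sort(scores)[::-1]
tp = [(scores[labels == 1] >= t).mean() for t in thresholds]
fp = [(scores[labels == 0] >= t).mean() for t in thresholds]

auroc = np.trapz(tp, fp)  # area under the ROC curve
print(auroc)
```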
Source reconstruction using an LCMV beamformer This tutorial gives an overview of the beamformer method and shows how to reconstruct source activity using an LCMV beamformer.
# Authors: Britta Westner <britta.wstnr@gmail.com> # Eric Larson <larson.eric.d@gmail.com> # # License: BSD-3-Clause import matplotlib.pyplot as plt import mne from mne.datasets import sample, fetch_fsaverage from mne.beamformer import make_lcmv, apply_lcmv
dev/_downloads/c6baf7c1a2f53fda44e93271b91f45b8/50_beamformer_lcmv.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Introduction to beamformers A beamformer is a spatial filter that reconstructs source activity by scanning through a grid of pre-defined source points and estimating activity at each of those source points independently. A set of weights is constructed for each defined source location which defines the contribution of ...
data_path = sample.data_path() subjects_dir = data_path / 'subjects' meg_path = data_path / 'MEG' / 'sample' raw_fname = meg_path / 'sample_audvis_filt-0-40_raw.fif' # Read the raw data raw = mne.io.read_raw_fif(raw_fname) raw.info['bads'] = ['MEG 2443'] # bad MEG channel # Set up the epoching event_id = 1 # those ...
dev/_downloads/c6baf7c1a2f53fda44e93271b91f45b8/50_beamformer_lcmv.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Computing the covariance matrices Spatial filters use the data covariance to estimate the filter weights. The data covariance matrix will be inverted during the spatial filter computation, so it is valuable to plot the covariance matrix and its eigenvalues to gauge whether matrix inversion will be possible. Also, beca...
data_cov = mne.compute_covariance(epochs, tmin=0.01, tmax=0.25, method='empirical') noise_cov = mne.compute_covariance(epochs, tmin=tmin, tmax=0, method='empirical') data_cov.plot(epochs.info) del epochs
dev/_downloads/c6baf7c1a2f53fda44e93271b91f45b8/50_beamformer_lcmv.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
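Why regularization helps before inversion can be sketched in plain numpy: adding a scaled identity to the diagonal (roughly what `reg=0.05` does) restores full rank to a rank-deficient covariance. This is an illustration under simplified assumptions, not MNE's exact computation:

```python
import numpy as np

rng = np.random.default_rng(0)
# Rank-deficient "sensor" covariance: 50 channels driven by only 30 sources.
A = rng.normal(size=(50, 30))
cov = A @ A.T / 30  # rank 30 < 50, so not invertible

# Tikhonov-style regularization: add 5% of the mean diagonal power.
reg = 0.05
cov_reg = cov + reg * np.trace(cov) / cov.shape[0] * np.eye(cov.shape[0])

print(np.linalg.matrix_rank(cov), np.linalg.matrix_rank(cov_reg))
```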
When looking at the covariance matrix plots, we can see that our data is slightly rank-deficient as the rank is not equal to the number of channels. Thus, we will have to regularize the covariance matrix before inverting it in the beamformer calculation. This can be achieved by setting the parameter reg=0.05 when calcu...
# Read forward model fwd_fname = meg_path / 'sample_audvis-meg-vol-7-fwd.fif' forward = mne.read_forward_solution(fwd_fname)
dev/_downloads/c6baf7c1a2f53fda44e93271b91f45b8/50_beamformer_lcmv.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Handling depth bias The forward model solution is inherently biased toward superficial sources. When analyzing single conditions it is best to mitigate the depth bias somehow. There are several ways to do this: :func:mne.beamformer.make_lcmv has a depth parameter that normalizes the forward model prior to computing ...
filters = make_lcmv(evoked.info, forward, data_cov, reg=0.05, noise_cov=noise_cov, pick_ori='max-power', weight_norm='unit-noise-gain', rank=None) # You can save the filter for later use with: # filters.save('filters-lcmv.h5')
dev/_downloads/c6baf7c1a2f53fda44e93271b91f45b8/50_beamformer_lcmv.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
It is also possible to compute a vector beamformer, which gives back three estimates per voxel, corresponding to the three direction components of the source. This can be achieved by setting pick_ori='vector' and will yield a :class:volume vector source estimate <mne.VolVectorSourceEstimate>. So we will compute a...
filters_vec = make_lcmv(evoked.info, forward, data_cov, reg=0.05, noise_cov=noise_cov, pick_ori='vector', weight_norm='unit-noise-gain', rank=None) # save a bit of memory src = forward['src'] del forward
dev/_downloads/c6baf7c1a2f53fda44e93271b91f45b8/50_beamformer_lcmv.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Apply the spatial filter The spatial filter can be applied to different data types: raw, epochs, evoked data or the data covariance matrix to gain a static image of power. The function to apply the spatial filter to :class:~mne.Evoked data is :func:~mne.beamformer.apply_lcmv which is what we will use here. The other fu...
stc = apply_lcmv(evoked, filters) stc_vec = apply_lcmv(evoked, filters_vec) del filters, filters_vec
dev/_downloads/c6baf7c1a2f53fda44e93271b91f45b8/50_beamformer_lcmv.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Visualize the reconstructed source activity We can visualize the source estimate in different ways, e.g. as a volume rendering, an overlay onto the MRI, or as an overlay onto a glass brain. The plots for the scalar beamformer show brain activity in the right temporal lobe around 100 ms post stimulus. This is expected g...
lims = [0.3, 0.45, 0.6] kwargs = dict(src=src, subject='sample', subjects_dir=subjects_dir, initial_time=0.087, verbose=True)
dev/_downloads/c6baf7c1a2f53fda44e93271b91f45b8/50_beamformer_lcmv.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
On MRI slices (orthoview; 2D)
stc.plot(mode='stat_map', clim=dict(kind='value', pos_lims=lims), **kwargs)
dev/_downloads/c6baf7c1a2f53fda44e93271b91f45b8/50_beamformer_lcmv.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
On MNI glass brain (orthoview; 2D)
stc.plot(mode='glass_brain', clim=dict(kind='value', lims=lims), **kwargs)
dev/_downloads/c6baf7c1a2f53fda44e93271b91f45b8/50_beamformer_lcmv.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Volumetric rendering (3D) with vectors These plots can also be shown using a volumetric rendering via :meth:~mne.VolVectorSourceEstimate.plot_3d. Let's try visualizing the vector beamformer case. Here we get three source time courses out per voxel (one for each component of the dipole moment: x, y, and z), which appear...
brain = stc_vec.plot_3d( clim=dict(kind='value', lims=lims), hemi='both', size=(600, 600), views=['sagittal'], # Could do this for a 3-panel figure: # view_layout='horizontal', views=['coronal', 'sagittal', 'axial'], brain_kwargs=dict(silhouette=True), **kwargs)
dev/_downloads/c6baf7c1a2f53fda44e93271b91f45b8/50_beamformer_lcmv.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Visualize the activity of the maximum voxel with all three components We can also visualize all three components in the peak voxel. For this, we will first find the peak voxel and then plot the time courses of this voxel.
peak_vox, _ = stc_vec.get_peak(tmin=0.08, tmax=0.1, vert_as_index=True) ori_labels = ['x', 'y', 'z'] fig, ax = plt.subplots(1) for ori, label in zip(stc_vec.data[peak_vox, :, :], ori_labels): ax.plot(stc_vec.times, ori, label='%s component' % label) ax.legend(loc='lower right') ax.set(title='Activity per orientati...
dev/_downloads/c6baf7c1a2f53fda44e93271b91f45b8/50_beamformer_lcmv.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Morph the output to fsaverage We can also use volumetric morphing to get the data to fsaverage space. This is for example necessary when comparing activity across subjects. Here, we will use the scalar beamformer example. We pass a :class:mne.SourceMorph as the src argument to mne.VolSourceEstimate.plot. To save some c...
fetch_fsaverage(subjects_dir) # ensure fsaverage src exists fname_fs_src = subjects_dir / 'fsaverage' / 'bem' / 'fsaverage-vol-5-src.fif' src_fs = mne.read_source_spaces(fname_fs_src) morph = mne.compute_source_morph( src, subject_from='sample', src_to=src_fs, subjects_dir=subjects_dir, niter_sdr=[5, 5, 2], n...
dev/_downloads/c6baf7c1a2f53fda44e93271b91f45b8/50_beamformer_lcmv.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
If you don't see four lines of output above, you might be rendering this on GitHub. If you want to see the output, same as the Python output below, cut and paste the GitHub URL to nbviewer.jupyter.org, which will do a more thorough rendering job. Now let's do the same thing in Python. Yes, Python has its own collecti...
class Queue: def __init__(self): self._storage = {} self._start = -1 # replicating 0 index used for arrays self._end = -1 # replicating 0 index used for arrays def size(self): return self._end - self._start def enqueue(self, val): self._end += 1 ...
Comparing JavaScript with Python.ipynb
4dsolutions/Python5
mit
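For comparison, the same FIFO behaviour can be had more idiomatically with `collections.deque`, which gives O(1) appends and pops from both ends; a sketch (not the dict-based class above):

```python
from collections import deque

class Queue:
    """FIFO queue backed by collections.deque (an alternative sketch,
    not the dict-based version above)."""
    def __init__(self):
        self._storage = deque()

    def size(self):
        return len(self._storage)

    def enqueue(self, val):
        self._storage.append(val)

    def dequeue(self):
        return self._storage.popleft()

q = Queue()
q.enqueue("a")
q.enqueue("b")
print(q.dequeue(), q.size())  # -> a 1
```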
Another example of features JavaScript is acquiring with ES6 (Sixth Edition we might call it), are rest and default parameters. A "rest" parameter has nothing to do with RESTful, and everything to do with "the rest" as in "whatever is left over." For example, in the function below, we pass in more ingredients than som...
%%javascript var sendTo = function(s){ element.append(s + "<br />"); } //Function to send everyone their Surface Studio! let sendSurface = recepient => { sendTo(recepient); } function recipe(ingredient0, ingre1, ing2, ...more){ sendSurface(ingredient0 + " is one ingredient."); sendSurface(more[1] + " i...
Comparing JavaScript with Python.ipynb
4dsolutions/Python5
mit
In Python we have both sequence and dictionary parameters, which we could say are both rest parameters, one for scooping up positionals, the other for gathering the named. Here's how that looks:
def recipe(ingr0, *more, ingr1, meat="turkey", **others): print(more) print(others) recipe("avocado", "tomato", "potato", ingr1="squash", dessert="peanuts", meat = "shrimp")
Comparing JavaScript with Python.ipynb
4dsolutions/Python5
mit
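One subtlety in the signature above: any parameter that follows `*more`, such as `ingr1`, is keyword-only, so omitting the name raises a `TypeError`. A small sketch (simplified signature for illustration):

```python
def recipe(ingr0, *more, ingr1, **others):
    # *more scoops up extra positionals; **others gathers extra named args.
    return more, others

# ingr1 must be passed by name:
print(recipe("avocado", "tomato", ingr1="squash", salt=True))
# -> (('tomato',), {'salt': True})

try:
    recipe("avocado", "tomato", "squash")  # no keyword for ingr1
except TypeError as e:
    print("TypeError:", e)
```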