Operations are then also performed based on index:
ser1 + ser2
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/03-General Pandas/02-Series.ipynb
arcyfelix/Courses
apache-2.0
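Because pandas aligns on the index, labels present in only one Series come out as NaN. A minimal sketch (these ser1/ser2 values are made up for illustration):

```python
import pandas as pd

# Two Series with partially overlapping indices
ser1 = pd.Series([1, 2, 3], index=['a', 'b', 'c'])
ser2 = pd.Series([10, 20, 30], index=['b', 'c', 'd'])

result = ser1 + ser2
# Values are added label-by-label; labels found in only one
# Series ('a' and 'd') become NaN in the result.
```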
Load the data file shared/bladder_cancer_genes_tcga.txt into a pandas.DataFrame, convert it to a numpy.ndarray matrix, and print the matrix dimensions
gene_matrix_for_network_df = pandas.read_csv("shared/bladder_cancer_genes_tcga.txt", sep="\t")
gene_matrix_for_network = gene_matrix_for_network_df.to_numpy()  # as_matrix() is deprecated/removed in modern pandas
print(gene_matrix_for_network.shape)
class21_reveal_python3.ipynb
ramseylab/networkscompbio
apache-2.0
Filter the matrix to include only rows for which the column-wise median is > 14; matrix should now be 13 x 414.
genes_keep = numpy.where(numpy.median(gene_matrix_for_network, axis=1) > 14)
matrix_filt = gene_matrix_for_network[genes_keep, ][0]
matrix_filt.shape
N = matrix_filt.shape[0]
M = matrix_filt.shape[1]
class21_reveal_python3.ipynb
ramseylab/networkscompbio
apache-2.0
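The same median filter can be checked on a tiny synthetic matrix (the threshold of 14 is kept from the exercise; the data here is made up):

```python
import numpy as np

toy = np.array([[15.0, 16.0, 17.0],
               [ 1.0,  2.0,  3.0],
               [14.5, 20.0, 13.9]])

# Keep rows whose median across columns exceeds 14
keep = np.where(np.median(toy, axis=1) > 14)
toy_filt = toy[keep]
# Rows 0 and 2 survive (medians 16.0 and 14.5); row 1 is dropped.
```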
Binarize the gene expression matrix using the mean value as a breakpoint, turning it into an N x M matrix of booleans (True/False). Call it gene_matrix_binarized.
gene_matrix_binarized = numpy.tile(numpy.mean(matrix_filt, axis=1), (M, 1)).transpose() < matrix_filt
print(gene_matrix_binarized.shape)
class21_reveal_python3.ipynb
ramseylab/networkscompbio
apache-2.0
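The tile/transpose trick above is equivalent to broadcasting against the row means; a sketch on made-up data:

```python
import numpy as np

mat = np.array([[1.0, 2.0, 9.0],
                [4.0, 4.0, 4.0]])

# True wherever a value exceeds its row's mean (keepdims makes the
# (2, 1) array of row means broadcast across the columns)
binarized = mat > mat.mean(axis=1, keepdims=True)
```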
Test your matrix by printing the first four columns of the first four rows:
gene_matrix_binarized[0:4,0:4]
class21_reveal_python3.ipynb
ramseylab/networkscompbio
apache-2.0
The core part of the REVEAL algorithm is a function that can compute the joint entropy of a collection of binary (TRUE/FALSE) vectors X1, X2, ..., Xn (where length(X1) = length(Xi) = M). Write a function entropy_multiple_vecs that takes as its input an n x M matrix (where n is the number of variables, i.e., genes, and M i...
def entropy_multiple_vecs(binary_vecs):
    ## use shape to get the numbers of rows and columns as [n,M]
    [n, M] = binary_vecs.shape

    # make a "M x n" dataframe from the transpose of the matrix binary_vecs
    binary_df = pandas.DataFrame(binary_vecs.transpose())

    # use the groupby method to obtain a...
class21_reveal_python3.ipynb
ramseylab/networkscompbio
apache-2.0
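The truncated cell above can be sketched end-to-end as follows (a base-2 entropy is assumed; the groupby counts how often each joint TRUE/FALSE pattern occurs across the M columns):

```python
import numpy as np
import pandas as pd

def entropy_multiple_vecs(binary_vecs):
    n, M = binary_vecs.shape
    # One row per column of the input: each row is a joint (X1..Xn) outcome
    binary_df = pd.DataFrame(binary_vecs.transpose())
    # Count occurrences of each distinct joint pattern
    counts = binary_df.groupby(list(range(n))).size().to_numpy()
    probs = counts / M
    return -np.sum(probs * np.log2(probs))

# Sanity check: two independent fair "coins" have joint entropy 2 bits
X = np.array([[False, False, True, True],
              [False, True, False, True]])
```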
This test case should produce the value 3.938:
print(entropy_multiple_vecs(gene_matrix_binarized[0:4,]))
class21_reveal_python3.ipynb
ramseylab/networkscompbio
apache-2.0
Example implementation of the REVEAL algorithm: We'll go through stage 3
ratio_thresh = 0.1
genes_to_fit = list(range(0,N))
stage = 0
regulators = [None]*N
entropies_for_stages = [None]*N
max_stage = 4
entropies_for_stages[0] = numpy.zeros(N)
for i in range(0,N):
    single_row_matrix = gene_matrix_binarized[i,:,None].transpose()
    entropies_for_stages[0][i] = entropy_multiple_vecs(sing...
class21_reveal_python3.ipynb
ramseylab/networkscompbio
apache-2.0
Let's start by examining the current state of the dataset. source_sentences contains the entire input sequence file as text delimited by newline symbols.
source_sentences[:50].split('\n')
seq2seq/sequence_to_sequence_implementation.ipynb
elenduuche/deep-learning
mit
target_sentences contains the entire output sequence file as text delimited by newline symbols. Each line corresponds to the matching line in source_sentences and contains the characters of that line in sorted order.
target_sentences[:50].split('\n')
seq2seq/sequence_to_sequence_implementation.ipynb
elenduuche/deep-learning
mit
Preprocess

To do anything useful with it, we'll need to turn the characters into a list of integers:
def extract_character_vocab(data): special_words = ['<pad>', '<unk>', '<s>', '<\s>'] set_words = set([character for line in data.split('\n') for character in line]) int_to_vocab = {word_i: word for word_i, word in enumerate(special_words + list(set_words))} vocab_to_int = {word: word_i for word_i, wor...
seq2seq/sequence_to_sequence_implementation.ipynb
elenduuche/deep-learning
mit
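A self-contained version of the truncated extract_character_vocab above (sorted() is added here so the assigned ids are reproducible across runs, which iterating a raw set() does not guarantee):

```python
def extract_character_vocab(data):
    # '<\s>' is written with an escaped backslash to match the original token
    special_words = ['<pad>', '<unk>', '<s>', '<\\s>']
    set_words = set(ch for line in data.split('\n') for ch in line)
    int_to_vocab = {i: w for i, w in enumerate(special_words + sorted(set_words))}
    vocab_to_int = {w: i for i, w in int_to_vocab.items()}
    return int_to_vocab, vocab_to_int

int_to_vocab, vocab_to_int = extract_character_vocab('ba\nab')
# The four special tokens take ids 0-3; characters follow in sorted order.
```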
The last step in the preprocessing stage is to determine the longest sequence length in the dataset we'll be using, then pad all the sequences to that length.
def pad_id_sequences(source_ids, source_letter_to_int, target_ids, target_letter_to_int, sequence_length): new_source_ids = [sentence + [source_letter_to_int['<pad>']] * (sequence_length - len(sentence)) \ for sentence in source_ids] new_target_ids = [sentence + [target_letter_to_int['<pad...
seq2seq/sequence_to_sequence_implementation.ipynb
elenduuche/deep-learning
mit
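A simplified, self-contained sketch of the padding step (the function name is taken from the truncated cell above; the signature is reduced to the essentials):

```python
def pad_id_sequences(sequences, pad_id, sequence_length):
    # Append pad_id until every sequence reaches sequence_length
    return [seq + [pad_id] * (sequence_length - len(seq)) for seq in sequences]

padded = pad_id_sequences([[4, 5], [6]], 0, 4)
# -> [[4, 5, 0, 0], [6, 0, 0, 0]]
```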
This is the final shape we need them to be in. We can now proceed to building the model.

Model

Check the Version of TensorFlow
This will check to make sure you have the correct version of TensorFlow
from distutils.version import LooseVersion
import tensorflow as tf

# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
seq2seq/sequence_to_sequence_implementation.ipynb
elenduuche/deep-learning
mit
Hyperparameters
# Number of Epochs
epochs = 60
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 50
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 13
decoding_embedding_size = 13
# Learning Rate
learning_rate = 0.001
seq2seq/sequence_to_sequence_implementation.ipynb
elenduuche/deep-learning
mit
Input
input_data = tf.placeholder(tf.int32, [batch_size, sequence_length])
targets = tf.placeholder(tf.int32, [batch_size, sequence_length])
lr = tf.placeholder(tf.float32)
seq2seq/sequence_to_sequence_implementation.ipynb
elenduuche/deep-learning
mit
Sequence to Sequence The decoder is probably the most complex part of this model. We need to declare a decoder for the training phase, and a decoder for the inference/prediction phase. These two decoders will share their parameters (so that all the weights and biases that are set during the training phase can be used w...
#print(source_letter_to_int)
source_vocab_size = len(source_letter_to_int)
print("Length of letter to int is {}".format(source_vocab_size))
print("encoding embedding size is {}".format(encoding_embedding_size))

# Encoder embedding
enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, encodi...
seq2seq/sequence_to_sequence_implementation.ipynb
elenduuche/deep-learning
mit
Process Decoding Input
import numpy as np

# Process the input we'll feed to the decoder
ending = tf.strided_slice(targets, [0, 0], [batch_size, -1], [1, 1])
dec_input = tf.concat([tf.fill([batch_size, 1], target_letter_to_int['<s>']), ending], 1)

#Demonstration/Example
demonstration_outputs = np.reshape(range(batch_size * sequence_length),...
seq2seq/sequence_to_sequence_implementation.ipynb
elenduuche/deep-learning
mit
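What strided_slice + concat does to the targets can be seen in plain numpy: drop the last token of each row, then prepend the '<s>' id (2 is a hypothetical id here):

```python
import numpy as np

targets = np.array([[4, 5, 6],
                    [7, 8, 9]])
go_id = 2  # hypothetical id for '<s>'

ending = targets[:, :-1]  # like strided_slice: drop the last column
dec_input = np.concatenate(
    [np.full((targets.shape[0], 1), go_id), ending], axis=1)
# dec_input is [[2, 4, 5], [2, 7, 8]]
```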
Decoding Embed the decoding input Build the decoding RNNs Build the output layer in the decoding scope, so the weight and bias can be shared between the training and inference decoders.
target_vocab_size = len(target_letter_to_int)
#print(target_vocab_size, " : ", decoding_embedding_size)

# Decoder Embedding
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
#print(dec_input, target_vocab_siz...
seq2seq/sequence_to_sequence_implementation.ipynb
elenduuche/deep-learning
mit
Decoder During Training Build the training decoder using tf.contrib.seq2seq.simple_decoder_fn_train and tf.contrib.seq2seq.dynamic_rnn_decoder. Apply the output layer to the output of the training decoder
with tf.variable_scope("decoding") as decoding_scope:
    # Training Decoder
    train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(enc_state)
    train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(
        dec_cell, train_decoder_fn, dec_embed_input, sequence_length, scope=decoding_scope)
    # ...
seq2seq/sequence_to_sequence_implementation.ipynb
elenduuche/deep-learning
mit
Decoder During Inference Reuse the weights and biases from the training decoder using tf.variable_scope("decoding", reuse=True) Build the inference decoder using tf.contrib.seq2seq.simple_decoder_fn_inference and tf.contrib.seq2seq.dynamic_rnn_decoder. The output function is applied to the output in this step
with tf.variable_scope("decoding", reuse=True) as decoding_scope:
    # Inference Decoder
    infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(
        output_fn, enc_state, dec_embeddings, target_letter_to_int['<s>'],
        target_letter_to_int['<\s>'], sequence_length - 1, target_vocab_size)
    i...
seq2seq/sequence_to_sequence_implementation.ipynb
elenduuche/deep-learning
mit
Optimization Our loss function is tf.contrib.seq2seq.sequence_loss, provided by the TensorFlow seq2seq module. It calculates a weighted cross-entropy loss for the output logits.
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
    train_logits,
    targets,
    tf.ones([batch_size, sequence_length]))

# Optimizer
optimizer = tf.train.AdamOptimizer(lr)

# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, ...
seq2seq/sequence_to_sequence_implementation.ipynb
elenduuche/deep-learning
mit
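The weighted cross-entropy that sequence_loss computes can be sketched in plain numpy (a simplified stand-in for the TF op, not its actual implementation):

```python
import numpy as np

def sequence_loss(logits, targets, weights):
    # logits: [batch, time, vocab]; targets: [batch, time]; weights: [batch, time]
    shifted = logits - logits.max(axis=-1, keepdims=True)  # numerically stable softmax
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=-1, keepdims=True)
    b, t = targets.shape
    # Negative log-likelihood of each target token
    nll = -np.log(probs[np.arange(b)[:, None], np.arange(t), targets])
    return (nll * weights).sum() / weights.sum()

# With all-zero logits every token is equally likely, so the loss is log(vocab)
loss = sequence_loss(np.zeros((2, 3, 4)),
                     np.zeros((2, 3), dtype=int),
                     np.ones((2, 3)))
```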
Train We're now ready to train our model. If you run into OOM (out of memory) issues during training, try to decrease the batch_size.
import numpy as np

train_source = source_ids[batch_size:]
train_target = target_ids[batch_size:]
valid_source = source_ids[:batch_size]
valid_target = target_ids[:batch_size]

sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
    for batch_i, (source_batch, target_batch) in enumerate(
        ...
seq2seq/sequence_to_sequence_implementation.ipynb
elenduuche/deep-learning
mit
Prediction
input_sentence = 'hello'
input_sentence = [source_letter_to_int.get(word, source_letter_to_int['<unk>']) for word in input_sentence.lower()]
input_sentence = input_sentence + [0] * (sequence_length - len(input_sentence))

batch_shell = np.zeros((batch_size, sequence_length))
batch_shell[0] = input_sentence
chatbot_log...
seq2seq/sequence_to_sequence_implementation.ipynb
elenduuche/deep-learning
mit
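The input preprocessing in the prediction cell, shown with a tiny hypothetical vocabulary: unknown characters fall back to '<unk>' and the tail is padded with id 0:

```python
# Hypothetical vocabulary for illustration only
source_letter_to_int = {'<pad>': 0, '<unk>': 1, 'h': 5, 'e': 6, 'l': 7, 'o': 8}
sequence_length = 7

input_sentence = 'Hello!'
ids = [source_letter_to_int.get(ch, source_letter_to_int['<unk>'])
       for ch in input_sentence.lower()]
ids = ids + [source_letter_to_int['<pad>']] * (sequence_length - len(ids))
# 'hello' maps to known ids, '!' to <unk>, and one <pad> fills the tail
```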
Let's load up our trajectory. This is the trajectory that we generated in the "Running a simulation in OpenMM and analyzing the results with mdtraj" example.
traj = md.load('ala2.h5')
traj
examples/principal-components.ipynb
swails/mdtraj
lgpl-2.1
Create a two-component PCA model and project our data down into this reduced-dimensional space. Since we are using just the cartesian coordinates as input to PCA, it's important to start with some kind of alignment.
pca1 = PCA(n_components=2)
traj.superpose(traj, 0)
reduced_cartesian = pca1.fit_transform(traj.xyz.reshape(traj.n_frames, traj.n_atoms * 3))
print(reduced_cartesian.shape)
examples/principal-components.ipynb
swails/mdtraj
lgpl-2.1
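The same two-component reduction can be sketched without sklearn, via an SVD of the centered coordinate matrix (random data stands in for the trajectory here):

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames, n_atoms = 100, 5
xyz = rng.normal(size=(n_frames, n_atoms, 3))

# Flatten each frame to one row, center, and project onto the top
# two right-singular vectors (the principal components)
X = xyz.reshape(n_frames, n_atoms * 3)
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
reduced_cartesian = Xc @ Vt[:2].T
```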
Now we can plot the data on this projection.
plt.figure()
plt.scatter(reduced_cartesian[:, 0], reduced_cartesian[:, 1], marker='x', c=traj.time)
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.title('Cartesian coordinate PCA: alanine dipeptide')
cbar = plt.colorbar()
cbar.set_label('Time [ps]')
examples/principal-components.ipynb
swails/mdtraj
lgpl-2.1
Let's try cross-checking our result by using a different feature space that isn't sensitive to alignment: instead, we "featurize" our trajectory by computing the pairwise distance between every pair of atoms in each frame, and use that as our high-dimensional input space for PCA.
pca2 = PCA(n_components=2)

from itertools import combinations
# this python function gives you all unique pairs of elements from a list
atom_pairs = list(combinations(range(traj.n_atoms), 2))
pairwise_distances = md.geometry.compute_distances(traj, atom_pairs)
print(pairwise_distances.shape)
reduced_distances = pca2....
examples/principal-components.ipynb
swails/mdtraj
lgpl-2.1
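The pairwise-distance featurization itself is a few lines of numpy (synthetic coordinates stand in for the trajectory; 4 atoms give C(4,2) = 6 pairs):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
coords = rng.normal(size=(10, 4, 3))  # frames x atoms x xyz

atom_pairs = list(combinations(range(4), 2))
i = [a for a, _ in atom_pairs]
j = [b for _, b in atom_pairs]
# Euclidean distance for every pair, in every frame:
# one row per frame, one column per atom pair
pairwise_distances = np.linalg.norm(coords[:, i] - coords[:, j], axis=-1)
```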
Time to build the network Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes. The network has two layers, a hid...
class NeuralNetwork(object): def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate): # Set number of nodes in input, hidden and output layers. self.input_nodes = input_nodes self.hidden_nodes = hidden_nodes self.output_nodes = output_nodes # Initialize we...
Project 1/Project-1.ipynb
ajaybhat/DLND
apache-2.0
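A hedged sketch of the forward pass the exercise asks for, assuming (as the test weights in the unit-test cell suggest) a sigmoid hidden layer and a linear output layer for regression:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(features, weights_input_to_hidden, weights_hidden_to_output):
    # features: [batch, n_inputs]; weight matrices are [out, in]
    hidden = sigmoid(features @ weights_input_to_hidden.T)
    # Regression target: no activation on the final layer
    return hidden @ weights_hidden_to_output.T

out = forward(np.array([[0.5, -0.2, 0.1]]),
              np.array([[0.1, 0.4, -0.3], [-0.2, 0.5, 0.2]]),
              np.array([[0.3, -0.1]]))
```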
Training the network Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training se...
import sys

### Set the hyperparameters here ###
epochs = 2000
learning_rate = 0.008
hidden_nodes = 10
output_nodes = 1

N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)

losses = {'train':[], 'validation':[]}
for e in range(epochs):
    # Go through a random batch o...
Project 1/Project-1.ipynb
ajaybhat/DLND
apache-2.0
Check out your predictions Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
fig, ax = plt.subplots(figsize=(8,4))

mean, std = scaled_features['cnt']
predictions = network.run(test_features)*std + mean
ax.plot(predictions[0], 'r', label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, 'g', label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()

dates = pd.to_datetime(rides....
Project 1/Project-1.ipynb
ajaybhat/DLND
apache-2.0
Thinking about your results
Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?
Note: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter.
Your answer below
The model pr...
import unittest

inputs = [0.5, -0.2, 0.1]
targets = [0.4]
test_w_i_h = np.array([[0.1, 0.4, -0.3],
                       [-0.2, 0.5, 0.2]])
test_w_h_o = np.array([[0.3, -0.1]])

class TestMethods(unittest.TestCase):
    ##########
    # Unit tests for data loading
    ##########
    def test_data_path(self...
Project 1/Project-1.ipynb
ajaybhat/DLND
apache-2.0
Summary Report
import itertools
import json
import os
import re
import pickle
import platform
import time
from collections import defaultdict as dd
from functools import partial
from os.path import abspath, dirname, exists, join
from string import Template

import numpy as np
import pandas as pd
import seaborn as sns
import scipy.st...
rsmtool/notebooks/summary/header.ipynb
EducationalTestingService/rsmtool
apache-2.0
<style type="text/css"> div.prompt.output_prompt { color: white; } span.highlight_color { color: red; } span.highlight_bold { font-weight: bold; } @media print { @page { size: landscape; margin: 0cm 0cm 0cm 0cm; } * { margin: 0px; padding: 0px; }...
# NOTE: you will need to set the following manually
# if you are using this notebook interactively.
summary_id = environ_config.get('SUMMARY_ID')
description = environ_config.get('DESCRIPTION')
jsons = environ_config.get('JSONS')
output_dir = environ_config.get('OUTPUT_DIR')
use_thumbnails = environ_config.get('USE_THU...
rsmtool/notebooks/summary/header.ipynb
EducationalTestingService/rsmtool
apache-2.0
The role of dipole orientations in distributed source localization When performing source localization in a distributed manner (MNE/dSPM/sLORETA/eLORETA), the source space is defined as a grid of dipoles that spans a large portion of the cortex. These dipoles have both a position and an orientation. In this tutorial, w...
import mne
import numpy as np
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator, apply_inverse

data_path = sample.data_path()
evokeds = mne.read_evokeds(data_path + '/MEG/sample/sample_audvis-ave.fif')
left_auditory = evokeds[0].apply_baseline()
fwd = mne.read_forward_solution(
    dat...
0.19/_downloads/a1ab4842a5aa341564b4fa0a6bf60065/plot_dipole_orientations.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
The source space Let's start by examining the source space as constructed by the :func:mne.setup_source_space function. Dipoles are placed along fixed intervals on the cortex, determined by the spacing parameter. The source space does not define the orientation for these dipoles.
lh = fwd['src'][0]  # Visualize the left hemisphere
verts = lh['rr']  # The vertices of the source space
tris = lh['tris']  # Groups of three vertices that form triangles
dip_pos = lh['rr'][lh['vertno']]  # The position of the dipoles
dip_ori = lh['nn'][lh['vertno']]
dip_len = len(dip_pos)
dip_times = [0]
white = (1.0,...
0.19/_downloads/a1ab4842a5aa341564b4fa0a6bf60065/plot_dipole_orientations.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Fixed dipole orientations While the source space defines the position of the dipoles, the inverse operator defines their possible orientations. One of the options is to assign a fixed orientation. Since the neural currents from which MEG and EEG signals originate flow mostly perpendicular to the cortex [1]_, res...
fig = mne.viz.create_3d_figure(size=(600, 400))

# Plot the cortex
fig = mne.viz.plot_alignment(subject=subject, subjects_dir=subjects_dir,
                             trans=trans, surfaces='white',
                             coord_frame='head', fig=fig)

# Show the dipoles as arrows pointing along the surface norma...
0.19/_downloads/a1ab4842a5aa341564b4fa0a6bf60065/plot_dipole_orientations.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Restricting the dipole orientations in this manner leads to the following source estimate for the sample data:
# Compute the source estimate for the 'left - auditory' condition in the sample
# dataset.
inv = make_inverse_operator(left_auditory.info, fwd, noise_cov, fixed=True)
stc = apply_inverse(left_auditory, inv, pick_ori=None)

# Visualize it at the moment of peak activity.
_, time_max = stc.get_peak(hemi='lh')
brain_fixed ...
0.19/_downloads/a1ab4842a5aa341564b4fa0a6bf60065/plot_dipole_orientations.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
The direction of the estimated current is now restricted to two directions: inward and outward. In the plot, blue areas indicate current flowing inwards and red areas indicate current flowing outwards. Given the curvature of the cortex, groups of dipoles tend to point in the same direction: the direction of the electro...
fig = mne.viz.create_3d_figure(size=(600, 400))

# Plot the cortex
fig = mne.viz.plot_alignment(subject=subject, subjects_dir=subjects_dir,
                             trans=trans, surfaces='white',
                             coord_frame='head', fig=fig)

# Show the three dipoles defined at each location in the sour...
0.19/_downloads/a1ab4842a5aa341564b4fa0a6bf60065/plot_dipole_orientations.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
When computing the source estimate, the activity at each of the three dipoles is collapsed into the XYZ components of a single vector, which leads to the following source estimate for the sample data:
# Make an inverse operator with loose dipole orientations
inv = make_inverse_operator(left_auditory.info, fwd, noise_cov, fixed=False,
                            loose=1.0)

# Compute the source estimate, indicate that we want a vector solution
stc = apply_inverse(left_auditory, inv, pick_ori='vector')

# Visualize it...
0.19/_downloads/a1ab4842a5aa341564b4fa0a6bf60065/plot_dipole_orientations.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Limiting orientations, but not fixing them Often, the best results will be obtained by allowing the dipoles to have somewhat free orientation, but not stray too far from an orientation that is perpendicular to the cortex. The loose parameter of the :func:mne.minimum_norm.make_inverse_operator allows you to specify a val...
# Set loose to 0.2, the default value
inv = make_inverse_operator(left_auditory.info, fwd, noise_cov, fixed=False,
                            loose=0.2)
stc = apply_inverse(left_auditory, inv, pick_ori='vector')

# Visualize it at the moment of peak activity.
_, time_max = stc.magnitude().get_peak(hemi='lh')
brain_loo...
0.19/_downloads/a1ab4842a5aa341564b4fa0a6bf60065/plot_dipole_orientations.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Discarding dipole orientation information Often, further analysis of the data does not need information about the orientation of the dipoles, but rather their magnitudes. The pick_ori parameter of the :func:mne.minimum_norm.apply_inverse function allows you to specify whether to return the full vector solution ('vector...
# Only retain vector magnitudes
stc = apply_inverse(left_auditory, inv, pick_ori=None)

# Visualize it at the moment of peak activity.
_, time_max = stc.get_peak(hemi='lh')
brain = stc.plot(surface='white', subjects_dir=subjects_dir,
                 initial_time=time_max, time_unit='s', size=(600, 400))
0.19/_downloads/a1ab4842a5aa341564b4fa0a6bf60065/plot_dipole_orientations.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Use the np.random module to generate a normal distribution of 1,000 data points in two dimensions (e.g. x, y) - choose whatever mean and sigma^2 you like. Generate another 1,000 data points with a normal distribution in two dimensions that are well separated from the first set. You now have two "clusters". Concatena...
model = svm.OneClassSVM()
notebooks/anomaly_detection/sample_anomaly_detection.ipynb
cavestruz/MLPipeline
mit
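Generating the two well-separated clusters described above (the means and sigma are chosen arbitrarily, as the exercise allows):

```python
import numpy as np

rng = np.random.default_rng(42)
# Cluster 1: 1,000 2-D points around the origin
cluster1 = rng.normal(loc=(0.0, 0.0), scale=1.0, size=(1000, 2))
# Cluster 2: 1,000 2-D points far from the first set
cluster2 = rng.normal(loc=(20.0, 20.0), scale=1.0, size=(1000, 2))

# Concatenate into a single training set of two "clusters"
X_train = np.concatenate([cluster1, cluster2])
```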
Fit the model to the training data. Use the trained model to predict whether the X_test_normal data points are in the same distribution. Calculate the fraction of "false" predictions. Use the trained model to predict whether X_test_uniform is in the same distribution. Calculate the fraction of "false" predictions. Use the...
from sklearn.covariance import EllipticEnvelope
notebooks/anomaly_detection/sample_anomaly_detection.ipynb
cavestruz/MLPipeline
mit
Model and parameters An electron-only device is simulated, without a contact barrier. Note that more trap levels can be included by modifying the traps= argument below. Each trap level should have a unique name.
L = 200e-9  # device thickness, m
model = oedes.models.std.electrononly(L, traps=['trap'])
params = {
    'T': 300,  # K
    'electrode0.workfunction': 0,  # eV
    'electrode1.workfunction': 0,  # eV
    'electron.energy': 0,  # eV
    'electron.mu': 1e-9,  # m2/(Vs)
    'electron.N0': 2.4e26,  # 1/m^3
    'electron.trap.en...
examples/scl/scl-trapping.ipynb
mzszym/oedes
agpl-3.0
Sweep parameters For simplicity, the case of absent traps is modeled by putting the trap level 1 eV above the transport level. This makes the trap states effectively unoccupied.
trapenergy_sweep = oedes.sweep('electron.trap.energy', np.asarray([-0.45, -0.33, -0.21, 1.]))
voltage_sweep = oedes.sweep('electrode0.voltage', np.logspace(-3, np.log10(20.), 100))
examples/scl/scl-trapping.ipynb
mzszym/oedes
agpl-3.0
Result
c = oedes.context(model)
for tdepth, ct in c.sweep(params, trapenergy_sweep):
    for _ in ct.sweep(ct.params, voltage_sweep):
        pass
    v, j = ct.teval(voltage_sweep.parameter_name, 'J')
    oedes.testing.store(j, rtol=1e-3)  # for automatic testing
    if tdepth < 0:
        label = 'no traps'
    else:
        lab...
examples/scl/scl-trapping.ipynb
mzszym/oedes
agpl-3.0
Step 1 I started with the "Franchises" list on Boxofficemojo.com. Within each franchise page, I scraped each movie's information and entered it into a Python dictionary. If a movie is already in the dictionary, its entry will be overwritten, except with a different Franchise name. But note below that the url for "Franchises" l...
url = 'http://www.boxofficemojo.com/franchises/?view=Franchise&sort=nummovies&order=ASC&p=.htm'
response = requests.get(url)
page = response.text
soup = BeautifulSoup(page, "lxml")

tables = soup.find_all("table")
rows = [row for row in tables[3].find_all('tr')]
rows = rows[1:]

# Initialize empty dictionary of movies
mo...
Projects/Project2/Project2_Prashant.ipynb
ptpro3/ptpro3.github.io
mit
Step 2 Clean up data.
# Remove movies that were re-issues, special editions, or separate 3D or IMAX versions.
df['Ignore'] = df['Title'].apply(lambda x: 're-issue' in x.lower() or 're-release' in x.lower() or 'special edition' in x.lower() or '3d)' in x.lower() or 'imax' in x.lower())
df = df[(df.Ignore == False)]
del df['Ignore']
df.shap...
Projects/Project2/Project2_Prashant.ipynb
ptpro3/ptpro3.github.io
mit
The films need to be grouped by franchise so that franchise-related data can be included as features for each observation:
- The Average Adjusted Gross of all previous films in the franchise
- The Adjusted Gross of the very first film in the franchise
- The Release Date of the previous film in the franchise
- The Relea...
df = df.sort_values(['Franchise', 'Release'])
df['CumGross'] = df.groupby(['Franchise'])['AdjGross'].apply(lambda x: x.cumsum())
df['SeriesNum'] = df.groupby(['Franchise'])['Release'].apply(lambda x: x.rank())
df['PrevAvgGross'] = (df['CumGross'] - df['AdjGross'])/(df['SeriesNum'] - 1)
Projects/Project2/Project2_Prashant.ipynb
ptpro3/ptpro3.github.io
mit
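The running-average construction can be checked on a toy franchise (hypothetical numbers): cumulative sum minus the current film's gross, divided by the count of previous films:

```python
import pandas as pd

df = pd.DataFrame({'Franchise': ['A', 'A', 'A'],
                   'AdjGross':  [100.0, 200.0, 300.0]})

df['CumGross'] = df.groupby('Franchise')['AdjGross'].cumsum()
df['SeriesNum'] = df.groupby('Franchise').cumcount() + 1
df['PrevAvgGross'] = (df['CumGross'] - df['AdjGross']) / (df['SeriesNum'] - 1)
# Film 3's PrevAvgGross is (100 + 200) / 2 = 150; film 1's is NaN (no prior films)
```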
Number of Theaters in which the film showed -- Where this number was unavailable, I replaced '-' with 0; the 0 will later be replaced with the mean number of theaters for the other films in the same franchise. I chose the average as a reasonable estimate.
df.Theaters = df.Theaters.replace('-','0')
df['Theaters'] = df['Theaters'].apply(lambda x: int(x.replace(',','')))
df['PrevRelease'] = df['Release'].shift()

# Create a second dataframe with franchise group-related information.
df_group = pd.DataFrame(df.groupby(['Franchise'])['Title'].apply(lambda x: x.count()))
df_g...
Projects/Project2/Project2_Prashant.ipynb
ptpro3/ptpro3.github.io
mit
For the regression model, I decided to keep data for films released through 2016, but drop the 3 films released this year; because of their recent release date, their gross earnings will not yet be representative.
films17 = df.loc[[530, 712, 676]]

# Grabbing columns for regression model and dropping 2017 films
dfreg = df[['AdjGross', 'Theaters', 'SeriesNum', 'PrevAvgGross', 'FirstGross', 'DaysSinceFirstFilm', 'DaysSincePrevFilm']]
dfreg = dfreg.drop([530, 712, 676])
dfreg.shape
Projects/Project2/Project2_Prashant.ipynb
ptpro3/ptpro3.github.io
mit
Step 3 Apply Linear Regression.
dfreg.corr()
sns.pairplot(dfreg);
sns.regplot((dfreg.PrevAvgGross), (dfreg.AdjGross));
sns.regplot(np.log(dfreg.Theaters), np.log(dfreg.AdjGross));
Projects/Project2/Project2_Prashant.ipynb
ptpro3/ptpro3.github.io
mit
In the pairplot we can see that 'AdjGross' may have some correlation with the variables, particularly 'Theaters' and 'PrevAvgGross'. However, it looks like a polynomial model, a natural-log transformation, or some other transformation will be required before fitting a linear model.
y, X = patsy.dmatrices('AdjGross ~ Theaters + SeriesNum + PrevAvgGross + FirstGross + DaysSinceFirstFilm + DaysSincePrevFilm', data=dfreg, return_type="dataframe")
Projects/Project2/Project2_Prashant.ipynb
ptpro3/ptpro3.github.io
mit
First try: Initial linear regression model with statsmodels
model = sm.OLS(y, X)
fit = model.fit()
fit.summary()
fit.resid.plot(style='o');
Projects/Project2/Project2_Prashant.ipynb
ptpro3/ptpro3.github.io
mit
Try Polynomial Regression
polyX = PolynomialFeatures(2).fit_transform(X)
polymodel = sm.OLS(y, polyX)
polyfit = polymodel.fit()
polyfit.rsquared
polyfit.resid.plot(style='o');
polyfit.rsquared_adj
Projects/Project2/Project2_Prashant.ipynb
ptpro3/ptpro3.github.io
mit
Heteroskedasticity The polynomial regression improved the adjusted R-squared and the residual plot, but there are still issues with other statistics, including skew. It's worth running the Breusch-Pagan test:
hetnames = ['Lagrange multiplier statistic', 'p-val', 'f-val', 'f p-val']
hettest = sm.stats.diagnostic.het_breushpagan(fit.resid, fit.model.exog)
list(zip(hetnames, hettest))  # wrap in list() so the pairs display under Python 3

hetnames = ['Lagrange multiplier statistic', 'p-val', 'f-val', 'f p-val']
hettest = sm.stats.diagnostic.het_breushpagan(polyfit.resid, fit.model.exog)...
Projects/Project2/Project2_Prashant.ipynb
ptpro3/ptpro3.github.io
mit
Apply Box-Cox Transformation As seen above, the p-values were very low, suggesting the data does indeed tend towards heteroskedasticity. To improve matters we can apply a Box-Cox transformation.
dfPolyX = pd.DataFrame(polyX)
bcPolyX = pd.DataFrame()
for i in range(dfPolyX.shape[1]):
    bcPolyX[i] = scipy.stats.boxcox(dfPolyX[i])[0]

# Transformed data with Box-Cox:
bcPolyX.head()

# Introduce log(y) for target variable:
y = y.reset_index(drop=True)
logy = np.log(y)
Projects/Project2/Project2_Prashant.ipynb
ptpro3/ptpro3.github.io
mit
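For reference, the transform scipy applies is simple to state (scipy.stats.boxcox picks lambda per column by maximum likelihood; here a fixed lambda is used for illustration):

```python
import numpy as np

def boxcox(x, lmbda):
    # Box-Cox: log(x) at lambda = 0, else (x^lambda - 1) / lambda; requires x > 0
    x = np.asarray(x, dtype=float)
    if lmbda == 0:
        return np.log(x)
    return (x ** lmbda - 1.0) / lmbda

vals = boxcox([1.0, np.e, np.e ** 2], 0)  # log case: [0, 1, 2]
```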
Try Polynomial Regression again with Log Y and Box-Cox transformed X
logPolyModel = sm.OLS(logy, bcPolyX)
logPolyFit = logPolyModel.fit()
logPolyFit.rsquared_adj
Projects/Project2/Project2_Prashant.ipynb
ptpro3/ptpro3.github.io
mit
Apply Regularization using Elastic Net to optimize this model.
X_scaled = preprocessing.scale(bcPolyX)
en_cv = linear_model.ElasticNetCV(cv=10, normalize=False)
en_cv.fit(X_scaled, logy)
en_cv.coef_

logy_en = en_cv.predict(X_scaled)
mse = metrics.mean_squared_error(logy, logy_en)
# The mean square error for this model
mse
plt.scatter([x for x in range(540)], (pd.DataFrame(logy_...
Projects/Project2/Project2_Prashant.ipynb
ptpro3/ptpro3.github.io
mit
Step 4 As seen above, Polynomial Regression with Elastic Net produces a model with several nonzero coefficients for the given features. I decided to try testing this model on the three new sequels for 2017.
films17
df17 = films17[['AdjGross', 'Theaters', 'SeriesNum', 'PrevAvgGross', 'FirstGross', 'DaysSinceFirstFilm', 'DaysSincePrevFilm']]
y17, X17 = patsy.dmatrices('AdjGross ~ Theaters + SeriesNum + PrevAvgGross + FirstGross + DaysSinceFirstFilm + DaysSincePrevFilm', data=df17, return_type="dataframe")
polyX17 = PolynomialFea...
Projects/Project2/Project2_Prashant.ipynb
ptpro3/ptpro3.github.io
mit
Multiplexers \begin{definition}\label{def:MUX} A Multiplexer, typically referred to as a MUX, is a digital (or analog) switching unit that picks one input channel to be streamed to an output via a control input. For single output MUXs with $2^n$ inputs, there are then $n$ input selection signals that make up the control...
x0, x1, s, y = symbols('x0, x1, s, y')
y21Eq = Eq(y, (~s&x0) | (s&x1))
y21Eq
TruthTabelGenrator(y21Eq)[[x1, x0, s, y]]
y21EqN = lambdify([x0, x1, s], y21Eq.rhs, dummify=False)

SystmaticVals = np.array(list(itertools.product([0,1], repeat=3)))
print(SystmaticVals)
y21EqN(SystmaticVals[:, 1], SystmaticVals[:, 2], SystmaticVal...
myHDL_DigLogicFundamentals/myHDL_Combinational/Multiplexers(MUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
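The 2:1 MUX equation y = (~s & x0) | (s & x1) behaves as a plain Python function, which gives a quick check independent of sympy/myHDL:

```python
def mux2_1(x0, x1, s):
    # Select x0 when s is 0, x1 when s is 1
    return bool((not s and x0) or (s and x1))

# Full truth table over the three inputs
truth_table = [(x0, x1, s, mux2_1(x0, x1, s))
               for x0 in (0, 1) for x1 in (0, 1) for s in (0, 1)]
```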
myHDL Module
@block
def MUX2_1_Combo(x0, x1, s, y):
    """
    2:1 Multiplexer written in full combo
    Input:
        x0(bool): input channel 0
        x1(bool): input channel 1
        s(bool): channel selection input
    Output:
        y(bool): output
    """
    @always_comb
    def logic():
        y.next = (not s and x...
myHDL_DigLogicFundamentals/myHDL_Combinational/Multiplexers(MUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
myHDL Testing
# generate systematic and random test values
# stimulus inputs X1 and X2
TestLen=10
SystmaticVals=list(itertools.product([0,1], repeat=3))

x0TVs=np.array([i[1] for i in SystmaticVals]).astype(int)
np.random.seed(15)
x0TVs=np.append(x0TVs, np.random.randint(0,2, TestLen)).astype(int)

x1TVs=np.array([i[2] for i in Systma...
myHDL_DigLogicFundamentals/myHDL_Combinational/Multiplexers(MUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
Verilog Conversion
DUT.convert()
VerilogTextReader('MUX2_1_Combo');
myHDL_DigLogicFundamentals/myHDL_Combinational/Multiplexers(MUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
\begin{figure} \centerline{\includegraphics[width=10cm]{MUX2_1_Combo_RTL.png}} \caption{\label{fig:M21CRTL} MUX2_1_Combo RTL schematic; Xilinx Vivado 2017.4} \end{figure} \begin{figure} \centerline{\includegraphics[width=10cm]{MUX2_1_Combo_SYN.png}} \caption{\label{fig:M21CSYN} MUX2_1_Combo Synthesized Schematic; Xilin...
# create BitVectors
x0TVs=intbv(int(''.join(x0TVs.astype(str)), 2))[TestLen:]
x1TVs=intbv(int(''.join(x1TVs.astype(str)), 2))[TestLen:]
sTVs=intbv(int(''.join(sTVs.astype(str)), 2))[TestLen:]
x0TVs, bin(x0TVs), x1TVs, bin(x1TVs), sTVs, bin(sTVs)

@block
def MUX2_1_Combo_TBV():
    """
    myHDL -> Verilog testbench for...
PYNQ-Z1 Deployment Board Circuit \begin{figure} \centerline{\includegraphics[width=5cm]{MUX21PYNQZ1Circ.png}} \caption{\label{fig:M21Circ} 2:1 MUX PYNQ-Z1 (Non SoC) conceptualized circuit} \end{figure} Board Constraint
ConstraintXDCTextReader('MUX2_1');
Video of Deployment MUX2_1_Combo myHDL PYNQ-Z1 (YouTube) 4 Channel Input : 1 Channel Output multiplexer in Gate Level Logic Sympy Expression
x0, x1, x2, x3, s0, s1, y=symbols('x0, x1, x2, x3, s0, s1, y') y41Eq=Eq(y, (~s0&~s1&x0) | (s0&~s1&x1)| (~s0&s1&x2)|(s0&s1&x3)) y41Eq TruthTabelGenrator(y41Eq)[[x3, x2, x1, x0, s1, s0, y]] y41EqN=lambdify([x0, x1, x2, x3, s0, s1], y41Eq.rhs, dummify=False) SystmaticVals=np.array(list(itertools.product([0,1], repeat=6)...
myHDL Module
@block def MUX4_1_Combo(x0, x1, x2, x3, s0, s1, y): """ 4:1 Multiplexer written in full combo Input: x0(bool): input channel 0 x1(bool): input channel 1 x2(bool): input channel 2 x3(bool): input channel 3 s1(bool): channel selection input bit 1 s0(bool): chann...
myHDL Testing
#generate systematic and random test values TestLen=5 SystmaticVals=list(itertools.product([0,1], repeat=6)) s0TVs=np.array([i[0] for i in SystmaticVals]).astype(int) np.random.seed(15) s0TVs=np.append(s0TVs, np.random.randint(0,2, TestLen)).astype(int) s1TVs=np.array([i[1] for i in SystmaticVals]).astype(int) #the r...
Verilog Conversion
DUT.convert() VerilogTextReader('MUX4_1_Combo');
\begin{figure} \centerline{\includegraphics[width=10cm]{MUX4_1_Combo_RTL.png}} \caption{\label{fig:M41CRTL} MUX4_1_Combo RTL schematic; Xilinx Vivado 2017.4} \end{figure} \begin{figure} \centerline{\includegraphics[width=10cm]{MUX4_1_Combo_SYN.png}} \caption{\label{fig:M41CSYN} MUX4_1_Combo Synthesized Schematic; Xilin...
#create BitVectors for MUX4_1_Combo_TBV x0TVs=intbv(int(''.join(x0TVs.astype(str)), 2))[TestLen:] x1TVs=intbv(int(''.join(x1TVs.astype(str)), 2))[TestLen:] x2TVs=intbv(int(''.join(x2TVs.astype(str)), 2))[TestLen:] x3TVs=intbv(int(''.join(x3TVs.astype(str)), 2))[TestLen:] s0TVs=intbv(int(''.join(s0TVs.astype(str)), 2)...
PYNQ-Z1 Deployment Board Circuit \begin{figure} \centerline{\includegraphics[width=5cm]{MUX41PYNQZ1Circ.png}} \caption{\label{fig:M41Circ} 4:1 MUX PYNQ-Z1 (Non SoC) conceptualized circuit} \end{figure} Board Constraint
ConstraintXDCTextReader('MUX4_1');
Video of Deployment MUX4_1_Combo myHDL PYNQ-Z1 (YouTube) Shannon's Expansion Formula & Stacking of MUXs Claude Shannon, of Nyquist–Shannon sampling theorem fame, showed that any boolean expression $F(x_0, x_1, \ldots, x_n)$ can be decomposed, in a manner akin to factoring polynomials, via $$F(x_0, x_1, \ldots, x_n) = x_0 \cdot F(1, x_1, \ldots, x_n) + \overline{x_0} \cdot F(0, x_1, \ldots, x_n)$$
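Shannon's expansion is easy to sanity-check symbolically. The short sympy sketch below (variable names are illustrative, not part of the notebook's myHDL flow) expands the 2:1 MUX expression about its select line and confirms that the expansion is logically equivalent to the original expression:

```python
from sympy import symbols, satisfiable
from sympy.logic.boolalg import And, Or, Not, Equivalent

x0, x1, s = symbols('x0 x1 s')
# The 2:1 MUX expression used earlier in the notebook
F = Or(And(Not(s), x0), And(s, x1))

# Shannon expansion about s: F = ~s & F|_{s=0}  |  s & F|_{s=1}
F_s0 = F.subs(s, False)  # cofactor with s=0, simplifies to x0
F_s1 = F.subs(s, True)   # cofactor with s=1, simplifies to x1
expansion = Or(And(Not(s), F_s0), And(s, F_s1))

# If the two forms were not equivalent, this formula would be satisfiable
assert satisfiable(Not(Equivalent(F, expansion))) is False
```

The same cofactor-and-recombine step is exactly what the 2:1 MUX stacking in the next cell implements in hardware.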
@block def MUX4_1_MS(x0, x1, x2, x3, s0, s1, y): """ 4:1 Multiplexer via 2:1 MUX stacking Input: x0(bool): input channel 0 x1(bool): input channel 1 x2(bool): input channel 2 x3(bool): input channel 3 s1(bool): channel selection input bit 1 s0(bool): channel s...
myHDL Testing
#generate systematic and random test values TestLen=5 SystmaticVals=list(itertools.product([0,1], repeat=6)) s0TVs=np.array([i[0] for i in SystmaticVals]).astype(int) np.random.seed(15) s0TVs=np.append(s0TVs, np.random.randint(0,2, TestLen)).astype(int) s1TVs=np.array([i[1] for i in SystmaticVals]).astype(int) #the r...
Verilog Conversion
DUT.convert() VerilogTextReader('MUX4_1_MS');
\begin{figure} \centerline{\includegraphics[width=10cm]{MUX4_1_MS_RTL.png}} \caption{\label{fig:M41MSRTL} MUX4_1_MS RTL schematic; Xilinx Vivado 2017.4} \end{figure} \begin{figure} \centerline{\includegraphics[width=10cm]{MUX4_1_MS_SYN.png}} \caption{\label{fig:M41MSSYN} MUX4_1_MS Synthesized Schematic; Xilinx Vivado 2...
#create BitVectors x0TVs=intbv(int(''.join(x0TVs.astype(str)), 2))[TestLen:] x1TVs=intbv(int(''.join(x1TVs.astype(str)), 2))[TestLen:] x2TVs=intbv(int(''.join(x2TVs.astype(str)), 2))[TestLen:] x3TVs=intbv(int(''.join(x3TVs.astype(str)), 2))[TestLen:] s0TVs=intbv(int(''.join(s0TVs.astype(str)), 2))[TestLen:] s1TVs=in...
PYNQ-Z1 Deployment Board Circuit See Board Circuit for "4 Channel Input : 1 Channel Output multiplexer in Gate Level Logic" Board Constraint uses same 'MUX4_1.xdc' as "4 Channel Input : 1 Channel Output multiplexer in Gate Level Logic" Video of Deployment MUX4_1_MS myHDL PYNQ-Z1 (YouTube) Introduction to HDL Behavioral...
@block def MUX2_1_B(x0, x1, s, y): """ 2:1 Multiplexer written via behavioral if Input: x0(bool): input channel 0 x1(bool): input channel 1 s(bool): channel selection input Output: y(bool): output """ @always_comb def logic(): if s: y....
myHDL Testing
#generate systematic and random test values TestLen=10 SystmaticVals=list(itertools.product([0,1], repeat=3)) x0TVs=np.array([i[1] for i in SystmaticVals]).astype(int) np.random.seed(15) x0TVs=np.append(x0TVs, np.random.randint(0,2, TestLen)).astype(int) x1TVs=np.array([i[2] for i in SystmaticVals]).astype(int) #the ...
Verilog Conversion
DUT.convert() VerilogTextReader('MUX2_1_B');
\begin{figure} \centerline{\includegraphics[width=10cm]{MUX2_1_B_RTL.png}} \caption{\label{fig:M21BRTL} MUX2_1_B RTL schematic; Xilinx Vivado 2017.4} \end{figure} \begin{figure} \centerline{\includegraphics[width=10cm]{MUX2_1_B_SYN.png}} \caption{\label{fig:M21BSYN} MUX2_1_B Synthesized Schematic; Xilinx Vivado 2017.4}...
#create BitVectors x0TVs=intbv(int(''.join(x0TVs.astype(str)), 2))[TestLen:] x1TVs=intbv(int(''.join(x1TVs.astype(str)), 2))[TestLen:] sTVs=intbv(int(''.join(sTVs.astype(str)), 2))[TestLen:] x0TVs, bin(x0TVs), x1TVs, bin(x1TVs), sTVs, bin(sTVs) @block def MUX2_1_B_TBV(): """ myHDL -> Verilog testbench for mo...
PYNQ-Z1 Deployment Board Circuit See Board Circuit for "2 Channel Input:1 Channel Output multiplexer in Gate Level Logic" Board Constraint uses the same MUX2_1.xdc as "2 Channel Input:1 Channel Output multiplexer in Gate Level Logic" Video of Deployment MUX2_1_B myHDL PYNQ-Z1 (YouTube) 4:1 MUX via Behavioral if-elif-el...
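Before the myHDL version, the if-elif-else selection logic can be sketched as an ordinary Python function (a plain software model with illustrative names, not synthesizable HDL), which makes it easy to check the routing against the 4:1 truth table:

```python
def mux4_model(x0, x1, x2, x3, s0, s1):
    # Mirror the behavioral if-elif-else: test each fully decoded
    # select combination (s1, s0) and route the matching channel
    if s0 and s1:
        return x3
    elif not s0 and s1:
        return x2
    elif s0 and not s1:
        return x1
    else:
        return x0

# Each select combination routes exactly one channel to the output
assert mux4_model(1, 0, 0, 0, s0=0, s1=0) == 1
assert mux4_model(0, 1, 0, 0, s0=1, s1=0) == 1
assert mux4_model(0, 0, 1, 0, s0=0, s1=1) == 1
assert mux4_model(0, 0, 0, 1, s0=1, s1=1) == 1
```

The myHDL block below carries the same branch structure over into an @always_comb generator.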
@block def MUX4_1_B(x0, x1, x2, x3, s0, s1, y): """ 4:1 Multiplexer written in if-elif-else Behavioral Input: x0(bool): input channel 0 x1(bool): input channel 1 x2(bool): input channel 2 x3(bool): input channel 3 s1(bool): channel selection input bit 1 s0(boo...
myHDL Testing
#generate systematic and random test values TestLen=5 SystmaticVals=list(itertools.product([0,1], repeat=6)) s0TVs=np.array([i[0] for i in SystmaticVals]).astype(int) np.random.seed(15) s0TVs=np.append(s0TVs, np.random.randint(0,2, TestLen)).astype(int) s1TVs=np.array([i[1] for i in SystmaticVals]).astype(int) #the r...
Verilog Conversion
DUT.convert() VerilogTextReader('MUX4_1_B');
\begin{figure} \centerline{\includegraphics[width=10cm]{MUX4_1_B_RTL.png}} \caption{\label{fig:M41BRTL} MUX4_1_B RTL schematic; Xilinx Vivado 2017.4} \end{figure} \begin{figure} \centerline{\includegraphics[width=10cm]{MUX4_1_B_SYN.png}} \caption{\label{fig:M41BSYN} MUX4_1_B Synthesized Schematic; Xilinx Vivado 2017.4}...
#create BitVectors x0TVs=intbv(int(''.join(x0TVs.astype(str)), 2))[TestLen:] x1TVs=intbv(int(''.join(x1TVs.astype(str)), 2))[TestLen:] x2TVs=intbv(int(''.join(x2TVs.astype(str)), 2))[TestLen:] x3TVs=intbv(int(''.join(x3TVs.astype(str)), 2))[TestLen:] s0TVs=intbv(int(''.join(s0TVs.astype(str)), 2))[TestLen:] s1TVs=in...
PYNQ-Z1 Deployment Board Circuit See Board Circuit for "4 Channel Input : 1 Channel Output multiplexer in Gate Level Logic" Board Constraint uses same 'MUX4_1.xdc' as "4 Channel Input : 1 Channel Output multiplexer in Gate Level Logic" Video of Deployment MUX4_1_B myHDL PYNQ-Z1 (YouTube) Multiplexer 4:1 Behavioral via ...
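With the inputs packed into bit vectors, channel selection amounts to indexing: the 2-bit select value S picks bit S of the 4-bit input X. One compact way to express that same truth table in plain Python (an illustrative software model, not the myHDL block below) is a shift-and-mask:

```python
def mux4_bv_model(X, S):
    # y = X[S]: shift the selected bit down to position 0 and mask it off
    return (X >> S) & 1

# One-hot inputs make the routing obvious
assert mux4_bv_model(0b0001, 0) == 1
assert mux4_bv_model(0b0100, 2) == 1
assert mux4_bv_model(0b0100, 1) == 0
```

The myHDL version that follows spells the same selection out as an if-elif-else over S, which converts cleanly to a Verilog case-like structure.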
@block def MUX4_1_BV(X, S, y): """ 4:1 Multiplexer written in behavioral "if-elif-else" (case) with BitVector inputs Input: X(4bitBV):input bit vector; min=0, max=15 S(2bitBV):selection bit vector; min=0, max=3 Output: y(bool): output """ @always_comb def logic()...
myHDL Testing
XTVs=np.array([1,2,4,8]) XTVs=np.append(XTVs, np.random.choice([1,2,4,8], 6)).astype(int) TestLen=len(XTVs) np.random.seed(12) STVs=np.arange(0,4) STVs=np.append(STVs, np.random.randint(0,4, 5)) TestLen, XTVs, STVs Peeker.clear() X=Signal(intbv(0)[4:]); Peeker(X, 'X') S=Signal(intbv(0)[2:]); Peeker(S, 'S') y=Signal(b...
Verilog Conversion
DUT.convert() VerilogTextReader('MUX4_1_BV');
\begin{figure} \centerline{\includegraphics[width=10cm]{MUX4_1_BV_RTL.png}} \caption{\label{fig:M41BVRTL} MUX4_1_BV RTL schematic; Xilinx Vivado 2017.4} \end{figure} \begin{figure} \centerline{\includegraphics[width=10cm]{MUX4_1_BV_SYN.png}} \caption{\label{fig:M41BVSYN} MUX4_1_BV Synthesized Schematic; Xilinx Vivado 2...
ConstraintXDCTextReader('MUX4_1_BV');
Using MPI To distribute emcee3 across nodes on a cluster, you'll need to use MPI. This can be done with the MPIPool from schwimmbad. To use this, you'll need to install the dependency mpi4py. Otherwise, the code is almost the same as the multiprocessing example above – the main change is the definition of the pool: The...
# Connect to the cluster. from ipyparallel import Client rc = Client() dv = rc.direct_view() # Run the imports on the cluster too. with dv.sync_imports(): import emcee3 import numpy # Define the model. def log_prob(x): return -0.5 * numpy.sum(x ** 2) # Distribute the model to the nodes of the cluster. dv...
docs/user/parallel.ipynb
dfm/emcee3
mit
1. Conceptual Questions (8 Points) Answer these in Markdown [1 point] In problem 4 from HW 3 we discussed probabilities of having HIV and results of a test being positive. What was the sample space for this problem? [4 points] One of the notations in the answer key is a random variable $H$ which indicated if a person...
#2.1 nile = pydataset.data('Nile').as_matrix() plt.plot(nile[:,0], nile[:,1], '-o') plt.xlabel('Year') plt.ylabel('Nile Flow Rate') plt.show() #2.2 print('{:.3}'.format(np.corrcoef(nile[:,0], nile[:,1])[0,1])) #2.3 ok to distplot or plt.hist sns.distplot(nile[:,1]) plt.axvline(np.mean(nile[:,1]), color='C2', label='M...
unit_7/hw_2018/Homework_7_Key.ipynb
whitead/numerical_stats
gpl-3.0
2. Insect Spray (10 Points) Answer in Python [2 points] Load the 'InsectSpray' dataset, convert to a numpy array and print the number of rows and columns. Recall that numpy arrays can only hold one type of data (e.g., string, float, int). What is the data type of the loaded dataset? [2 points] Using np.unique, prin...
#1.1 insect = pydataset.data('InsectSprays').as_matrix() print(insect.shape, 'string or object is acceptable') #1.2 print(np.unique(insect[:,1])) #1.3 labels = np.unique(insect[:,1]) ldata = [] #slice out each set of rows that matches label #and add to list for l in labels: ldata.append(insect[insect[:,1] == l, ...
3. NY Air Quality (6 Points) Load the 'airquality' dataset and convert into to a numpy array. Make a scatter plot of wind (column 2, mph) and ozone concentration (column 0, ppb). Using the plt.text command, display the correlation coefficient in the plot. This data as nan, which means "not a number". You can select non...
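The nan handling mentioned above works through boolean masking: np.isnan returns a mask marking the missing entries, and ~mask selects the valid ones from both columns so the pairs stay aligned. A small standalone sketch (the values here are illustrative, not actual airquality rows):

```python
import numpy as np

ozone = np.array([41.0, np.nan, 12.0, np.nan, 23.0])
wind = np.array([7.4, 8.0, 12.6, 11.5, 8.6])

nans = np.isnan(ozone)      # True where ozone is missing
valid_ozone = ozone[~nans]  # keep only the measured values
valid_wind = wind[~nans]    # keep the matching wind rows

assert valid_ozone.shape == (3,)
assert not np.isnan(valid_ozone).any()
```

Masking both arrays with the same ~nans index is what keeps np.corrcoef from returning nan in the solution below.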
nyair = pydataset.data('airquality').as_matrix() plt.plot(nyair[:,2], nyair[:,0], 'o') plt.xlabel('Wind [mph]') plt.ylabel('Ozone [ppb]') nans = np.isnan(nyair[:,0]) r = np.corrcoef(nyair[~nans,2], nyair[~nans,0])[0,1] plt.text(10, 130, 'Correlation Coefficient = {:.2}'.format(r)) plt.show()
Read the RIRE data and generate a larger point set as a reference
fixed_image = sitk.ReadImage(fdata("training_001_ct.mha"), sitk.sitkFloat32) moving_image = sitk.ReadImage(fdata("training_001_mr_T1.mha"), sitk.sitkFloat32) fixed_fiducial_points, moving_fiducial_points = ru.load_RIRE_ground_truth(fdata("ct_T1.standard")) # Estimate the reference_transform defined by the RIRE fiduc...
62_Registration_Tuning.ipynb
thewtex/SimpleITK-Notebooks
apache-2.0
Initial Alignment We use the CenteredTransformInitializer. Should we use the GEOMETRY based version or the MOMENTS based one?
initial_transform = sitk.CenteredTransformInitializer(sitk.Cast(fixed_image,moving_image.GetPixelIDValue()), moving_image, sitk.Euler3DTransform(), sitk.Ce...
Registration Possible choices for simple rigid multi-modality registration framework (<b>300</b> component combinations, in addition to parameter settings for each of the components): <ul> <li>Similarity metric, 2 options (Mattes MI, JointHistogram MI): <ul> <li>Number of histogram bins.</li> <li>Sampling strategy,...
#%%timeit -r1 -n1 # to time this cell uncomment the line above #the arguments to the timeit magic specify that this cell should only be run once. running it multiple #times to get performance statistics is also possible, but takes time. if you want to analyze the accuracy #results from multiple runs you will have to ...
In some cases visual comparison of the registration errors using the same scale is not informative, as seen above [all points are grey/black]. We therefore set the color scale to the min–max error range found in the current data, not the range from the previous stage.
final_errors_mean, final_errors_std, _, final_errors_max,_ = ru.registration_errors(final_transform_single_scale, fixed_points, moving_points, display_errors=True)
Now using the built in multi-resolution framework Perform registration using the same settings as above, but take advantage of the multi-resolution framework which provides a significant speedup with minimal effort (3 lines of code). It should be noted that when using this framework the similarity metric value will not...
%%timeit -r1 -n1 #the arguments to the timeit magic specify that this cell should only be run once. running it multiple #times to get performance statistics is also possible, but takes time. if you want to analyze the accuracy #results from multiple runs you will have to modify the code to save them instead of just p...
Sufficient accuracy <u>inside</u> the ROI Up to this point our accuracy evaluation has ignored the content of the image and is likely overly conservative. We have been looking at the registration errors inside the volume, but not necessarily in the smaller ROI. To see the difference you will have to <b>comment out the ...
# Threshold the original fixed, CT, image at 0HU (water), resulting in a binary labeled [0,1] image. roi = fixed_image> 0 # Our ROI consists of all voxels with a value of 1, now get the bounding box surrounding the head. label_shape_analysis = sitk.LabelShapeStatisticsImageFilter() label_shape_analysis.SetBackgroundVa...
Line Chart Selectors Fast Interval Selector
## First we define a Figure dt_x_fast = DateScale() lin_y = LinearScale() x_ax = Axis(label="Index", scale=dt_x_fast) x_ay = Axis(label=(symbol + " Price"), scale=lin_y, orientation="vertical") lc = Lines( x=dates_actual, y=prices, scales={"x": dt_x_fast, "y": lin_y}, colors=["orange"] ) lc_2 = Lines( x=dates_...
examples/Interactions/Interaction Layer.ipynb
bloomberg/bqplot
apache-2.0
Index Selector
db_index = HTML(value="[]") ## Now we try a selector made to select all the y-values associated with a single x-value index_sel = IndexSelector(scale=dt_x_fast, marks=[lc, lc_2]) ## Now, we define a function that will be called when the selectors are interacted with def index_change_callback(change): db_index.val...