For the example, we will use a linear regression model.
np.random.seed(26)
xdata = np.linspace(0, 50, 100)
b0, b1, sigma = -2, 1, 3
ydata = np.random.normal(loc=b1 * xdata + b0, scale=sigma)
plt.plot(xdata, ydata);
doc/source/user_guide/pymc3_refitting_xr_lik.ipynb
arviz-devs/arviz
apache-2.0
Now we will write the PyMC3 model, keeping in mind the following two points:

1. Data must be modifiable (both x and y).
2. The model must be recompiled in order to be refitted with the modified data.

We, therefore, have to create a function that recompiles the model when it's called. Luckily for us, compilation in PyMC...
def compile_linreg_model(xdata, ydata):
    with pm.Model() as model:
        x = pm.Data("x", xdata)
        b0 = pm.Normal("b0", 0, 10)
        b1 = pm.Normal("b1", 0, 10)
        sigma_e = pm.HalfNormal("sigma_e", 10)
        y = pm.Normal("y", b0 + b1 * x, sigma_e, observed=ydata)
    return model

sample_kwargs = ...
We have defined a dictionary sample_kwargs that will be passed to the SamplingWrapper in order to make sure that all refits use the same sampler parameters. We follow the same pattern with {func}az.from_pymc3 <arviz.from_pymc3>. Note, however, how coords are not set. This is done to prevent errors due to coordi...
dims = {"y": ["time"], "x": ["time"]}
idata_kwargs = {
    "dims": dims,
    "log_likelihood": False,
}
idata = az.from_pymc3(trace, model=linreg_model, **idata_kwargs)
idata
We are now missing the log_likelihood group due to setting log_likelihood=False in idata_kwargs. We are doing this to ease the job of the sampling wrapper. Instead of going out of our way to get PyMC3 to calculate the pointwise log likelihood values for each refit and for the excluded observation at every refit, we wil...
def calculate_log_lik(x, y, b0, b1, sigma_e):
    mu = b0 + b1 * x
    return stats.norm(mu, sigma_e).logpdf(y)
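Under the hood, stats.norm(mu, sigma_e).logpdf(y) is just the Gaussian log-density. As a sanity check, here is a stdlib-only version of the same formula (a minimal sketch, not a replacement for scipy):

```python
import math

def normal_logpdf(y, mu, sigma):
    """Log-density of a Normal(mu, sigma) evaluated at y."""
    return -0.5 * math.log(2 * math.pi * sigma**2) - (y - mu)**2 / (2 * sigma**2)

# At y == mu the density is 1 / (sigma * sqrt(2*pi)),
# so the log-density for a standard normal is -0.5 * log(2*pi) ≈ -0.9189
print(normal_logpdf(0.0, 0.0, 1.0))
```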
This function should work for any shape of the input arrays as long as their shapes are compatible and can broadcast. There is no need to loop over each draw in order to calculate the pointwise log likelihood using scalars. Therefore, we can use {func}xr.apply_ufunc <xarray.apply_ufunc> to handle the broadcasting ...
log_lik = xr.apply_ufunc(
    calculate_log_lik,
    idata.constant_data["x"],
    idata.observed_data["y"],
    idata.posterior["b0"],
    idata.posterior["b1"],
    idata.posterior["sigma_e"],
)
idata.add_groups(log_likelihood=log_lik)
The first argument is the function, followed by as many positional arguments as the function needs, five in our case. As this case does not involve many different dimensions or combinations of them, we do not need to pass any extra kwargs to xr.apply_ufunc. We are now passing the arguments to calculate_log_lik in...
calculate_log_lik(
    idata.constant_data["x"].values,
    idata.observed_data["y"].values,
    idata.posterior["b0"].values,
    idata.posterior["b1"].values,
    idata.posterior["sigma_e"].values
)
If you are still curious about the magic of xarray and xr.apply_ufunc, you can also try to modify the dims used to generate the InferenceData a couple of cells earlier: dims = {"y": ["time"], "x": ["time"]}. What happens to the result if you use a different name for the dimension of x?
idata
We will create a subclass of az.SamplingWrapper.
class PyMC3LinRegWrapper(az.SamplingWrapper):
    def sample(self, modified_observed_data):
        with self.model(*modified_observed_data) as linreg_model:
            idata = pm.sample(
                **self.sample_kwargs,
                return_inferencedata=True,
                idata_kwargs=self.idata_...
We initialize our sampling wrapper. Let's stop and analyze each of the arguments. We'd generally use model to pass a model object of some kind, already compiled and re-executable; however, as we saw before, we need to recompile the model every time, so we pass the model-generating function instead. Close enough. ...
pymc3_wrapper = PyMC3LinRegWrapper(
    model=compile_linreg_model,
    log_lik_fun=calculate_log_lik,
    posterior_vars=("b0", "b1", "sigma_e"),
    idata_orig=idata,
    sample_kwargs=sample_kwargs,
    idata_kwargs=idata_kwargs,
)
Model Inputs

First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.

Exercise: Finish the model_inpu...
def model_inputs(real_dim, z_dim):
    inputs_real = tf.placeholder(tf.float32, (None, real_dim), 'input_real')
    inputs_z = tf.placeholder(tf.float32, (None, z_dim), 'input_z')
    return inputs_real, inputs_z
15_gan_mnist/Intro_to_GANs_Exercises.ipynb
adrianstaniec/deep-learning
mit
Generator network

Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero ...
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
    ''' Build the generator network.

        Arguments
        ---------
        z : Input tensor for the generator
        out_dim : Shape of the generator output
        n_units : Number of units in hidden layer
        reuse : Reuse the variables...
Discriminator

The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.

Exercise: Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code in a...
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
    ''' Build the discriminator network.

        Arguments
        ---------
        x : Input tensor for the discriminator
        n_units: Number of units in hidden layer
        reuse : Reuse the variables with tf.variable_scope
        alpha : leak pa...
Build network

Now we're building the network from the functions defined above. First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z. Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes. Th...
tf.reset_default_graph()

# Create our input placeholders
input_real, input_z = model_inputs(input_size, z_size)

# Generator network here
g_model, g_logits = generator(input_z, input_size, g_hidden_size, False, alpha)
# g_model is the generator output

# Discriminator network here
d_model_real, d_logits_real = discrimin...
Discriminator and Generator Losses

For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will be sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mea...
# Calculate losses
d_loss_real = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(
        labels=tf.ones_like(d_logits_real) * (1 - smooth),
        logits=d_logits_real))
d_loss_fake = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(
        labels=tf.zeros_like(d_logits_fake),
        logits=d_...
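The arithmetic behind sigmoid cross-entropy and the (1 - smooth) label smoothing can be sketched in plain Python (a toy illustration; TensorFlow's actual implementation is a numerically stable equivalent of the same formula):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_cross_entropy(label, logit):
    # -(z*log(sigmoid(x)) + (1-z)*log(1 - sigmoid(x)))
    p = sigmoid(logit)
    return -(label * math.log(p) + (1 - label) * math.log(1 - p))

smooth = 0.1
logit = 2.0  # a made-up discriminator logit for a real image
# with smoothing, the target is 1 * (1 - smooth) = 0.9 instead of a hard 1.0
loss_smoothed = sigmoid_cross_entropy(1.0 * (1 - smooth), logit)
loss_hard = sigmoid_cross_entropy(1.0, logit)
print(loss_smoothed, loss_hard)
```

With a confident positive logit, the smoothed target yields a slightly larger loss than the hard target, which is exactly the regularizing nudge label smoothing is meant to provide.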
Optimizers

We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph. For the gener...
# Optimizers
learning_rate = 0.002

# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = [x for x in t_vars if 'generator' in x.name]
d_vars = [x for x in t_vars if x.name.startswith('discriminator')]

d_train_opt = tf.train.AdamOptimizer...
Generator samples from training

Here we can view samples of images from the generator. First we'll look at images taken while training.
def view_samples(epoch, samples):
    print(len(samples[0][0]), len(samples[0][1]))
    fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
    for ax, img in zip(axes.flatten(), samples[epoch][0]):
        ax.xaxis.set_visible(False)
        ax.yaxis.set_visible(False)
        im = ax.im...
Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
rows, cols = 10, 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
print(np.array(samples).shape)

for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
    for img, ax in zip(sample[0][::int(len(sample[0])/cols)], ax_row):
        ax.imshow(img.reshape((28,28)),...
Drop Row Based On A Conditional
%%sql

-- Delete all rows
DELETE FROM criminals

-- if the age is less than 18
WHERE age < 18
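The same conditional delete can be reproduced end to end with Python's stdlib sqlite3 on an in-memory database (the rows below are made up to mirror the query):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE criminals (name TEXT, age INTEGER)")
conn.executemany("INSERT INTO criminals VALUES (?, ?)",
                 [("Ada", 17), ("Bob", 25), ("Cal", 16), ("Dee", 40)])

# Delete all rows where the age is less than 18
conn.execute("DELETE FROM criminals WHERE age < 18")

remaining = conn.execute("SELECT name FROM criminals ORDER BY name").fetchall()
print(remaining)  # only the two adults are left
```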
sql/drop_rows.ipynb
tpin3694/tpin3694.github.io
mit
Setup
path = "data/dogscats/"
# path = "data/dogscats/sample/"
model_path = path + 'models/'
if not os.path.exists(model_path):
    os.mkdir(model_path)

batch_size = 32
# batch_size = 1

batches = get_batches(path+'train', shuffle=False, batch_size=batch_size)
val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_...
nbs/dogscats-ensemble.ipynb
roebius/deeplearning_keras2
apache-2.0
Dense model
def get_conv_model(model):
    layers = model.layers
    last_conv_idx = [index for index, layer in enumerate(layers)
                     if type(layer) is Convolution2D][-1]
    conv_layers = layers[:last_conv_idx+1]
    conv_model = Sequential(conv_layers)
    fc_layers = layers[last_conv_idx+1:]
    return con...
Load IMDB Dataset
# load the dataset but only keep the top n words, zero the rest
# docs at: https://www.tensorflow.org/api_docs/python/tf/keras/datasets/imdb/load_data
top_words = 5000
start_char = 1
oov_char = 2
index_from = 3
(X_train, y_train), (X_test, y_test) = imdb.load_data(num_words=top_words, start_...
rnn-lstm-text-classification/LSTM Text Classification.ipynb
david-hagar/NLP-Analytics
mit
Pad sequences so they are all the same length (required by keras/tensorflow).
# truncate and pad input sequences
max_review_length = 500
X_train = sequence.pad_sequences(X_train, maxlen=max_review_length)
X_test = sequence.pad_sequences(X_test, maxlen=max_review_length)
print(X_train.shape)
print(y_train.shape)
print(len(X_train[0]))
print(len(X_train[1]))
print(X_test.shape)
print(y_test.sha...
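What pad_sequences does to a single sequence can be sketched in plain Python: by default Keras both pads and truncates from the front ('pre'). A minimal stand-in:

```python
def pad_sequence(seq, maxlen, value=0):
    # truncate from the front (Keras default truncating='pre')
    seq = seq[-maxlen:]
    # left-pad with `value` (Keras default padding='pre')
    return [value] * (maxlen - len(seq)) + list(seq)

print(pad_sequence([1, 2, 3], 5))           # [0, 0, 1, 2, 3]
print(pad_sequence([1, 2, 3, 4, 5, 6], 5))  # [2, 3, 4, 5, 6]
```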
Setup Vocabulary Dictionary

The index value loaded differs from the dictionary value by index_from, so that special tokens for padding, start of sentence, and out of vocabulary can be prepended to the start of the vocabulary.
word_index = imdb.get_word_index()

inv_word_index = np.empty(len(word_index)+index_from+3, dtype=np.object)
for k, v in word_index.items():
    inv_word_index[v+index_from] = k
inv_word_index[0] = '<pad>'
inv_word_index[1] = '<start>'
inv_word_index[2] = '<oov>'

word_index['ai']
inv_word_index[16942+index_from]
inv_wo...
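The index_from offset logic can be illustrated with a tiny made-up word index (a toy stand-in for imdb.get_word_index(), not the real IMDB vocabulary):

```python
# toy stand-in for imdb.get_word_index(): word -> 1-based index
word_index = {"the": 1, "movie": 2, "great": 3}
index_from = 3  # leaves room for the special tokens below

# shift every index up by index_from, then install the special tokens
inv_word_index = {v + index_from: k for k, v in word_index.items()}
inv_word_index[0] = '<pad>'
inv_word_index[1] = '<start>'
inv_word_index[2] = '<oov>'

encoded = [1, 4, 5, 6]  # i.e. <start> the movie great
print([inv_word_index[i] for i in encoded])
```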
Convert Encoded Sentences to Readable Text
def toText(wordIDs):
    s = ''
    for i in range(len(wordIDs)):
        if wordIDs[i] != 0:
            w = str(inv_word_index[wordIDs[i]])
            s += w + ' '
    return s

for i in range(5):
    print()
    print(str(i) + ') sentiment = ' + ('negative' if y_train[i]==0 else 'positive'))
    print(toText(X_train...
Build the model

Sequential guide, compile() and fit()
Embedding: the embedding layer works like an efficient one-hot encoding for the word index, followed by a dense layer of size embedding_vector_length.
LSTM (middle of page)
Dense
Dropout (1/3 down the page)
"model.compile(...) sets up the "adam" optimizer, similar ...
backend.clear_session()

embedding_vector_length = 5
rnn_vector_length = 150
#activation = 'relu'
activation = 'sigmoid'

model = Sequential()
model.add(Embedding(top_words, embedding_vector_length, input_length=max_review_length))
model.add(Dropout(0.2))
#model.add(LSTM(rnn_vector_length, activation=activation))
...
Setup Tensorboard

Make sure the /data/kaggle-tensorboard path exists or can be created. Start tensorboard from the command line:

tensorboard --logdir=/data/kaggle-tensorboard

then open http://localhost:6006/
log_dir = '/data/kaggle-tensorboard'
shutil.rmtree(log_dir, ignore_errors=True)
os.makedirs(log_dir)
tbCallBack = TensorBoard(log_dir=log_dir, histogram_freq=0, write_graph=True, write_images=True)
full_history = []
Train the Model

Each epoch takes about 3 min. You can reduce the epochs to 3 for a faster build and still get good accuracy. Overfitting starts to happen at epoch 7 to 9.

Note: You can run this cell multiple times to add more epochs to the model training without starting over.
history = model.fit(X_train, y_train, validation_data=(X_test, y_test),
                    epochs=8, batch_size=64, callbacks=[tbCallBack])
full_history += history.history['loss']
Accuracy on the Test Set
scores = model.evaluate(X_test, y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))
print('embedding_vector_length = ' + str(embedding_vector_length))
print('rnn_vector_length = ' + str(rnn_vector_length))
# Hyper Parameter Tuning Notes

| Accuracy % | Type | max val acc epoch | embedding_vector_length | RNN state size | Dropout |
| :--- | :--- | :--- | :--- | :--- | :--- |
| 88.46 * | GRU | 6 | 5 | 150 | 0.2 (after Em... |
history.history

# todo: add graph of all 4 values with history
plt.plot(history.history['loss'])
plt.yscale('log')
plt.show()

plt.plot(full_history)
plt.yscale('log')
plt.show()
Evaluate on Custom Text
import re

words_only = r'[^\s!,.?\-":;0-9]+'
re.findall(words_only, "Some text to, tokenize. something's.Something-else?".lower())

def encode(reviewText):
    words = re.findall(words_only, reviewText.lower())
    reviewIDs = [start_char]
    for word in words:
        index = word_index.get(word, oov_char - index_fro...
Features View
for row in X_user_pad:
    print()
    print(toText(row))
Results
user_scores = model.predict(X_user_pad)
is_positive = user_scores >= 0.5  # I'm an optimist

for i in range(len(user_reviews)):
    print('\n%.2f %s:' % (user_scores[i][0],
          'positive' if is_positive[i] else 'negative') + ' ' + user_reviews[i])
Note: Restart your kernel to use updated packages. Kindly ignore the deprecation warnings and incompatibility errors related to google-cloud-storage.
# Installing the latest version of the package
import tensorflow as tf
print("TensorFlow version: ", tf.version.VERSION)

%%bash
# Exporting the project
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
courses/machine_learning/deepdive2/feature_engineering/solutions/1_bqml_basic_feat_eng.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Note, the query takes several minutes to complete. After the first iteration is complete, your model (baseline_model) appears in the navigation panel of the BigQuery web UI. Because the query uses a CREATE MODEL statement to create a model, you do not see query results. You can observe the model as it's being trained b...
%%bigquery
# Here, ML.TRAINING_INFO is used to see per-iteration training metrics
SELECT *, SQRT(loss) AS rmse
FROM ML.TRAINING_INFO(MODEL feat_eng.baseline_model)

%%bigquery
# Eval statistics on the held out data.
# Here, ML.EVALUATE function is used to evaluate model metrics
SELECT *
FROM ML.EVALUATE(MODEL feat_...
NOTE: Because you performed a linear regression, the results include the following columns:

- mean_absolute_error
- mean_squared_error
- mean_squared_log_error
- median_absolute_error
- r2_score
- explained_variance

Resource for an explanation of the Regression Metrics. Mean squared error (MSE) - Measures the difference between ...
%%bigquery
#TODO 1
# Here, ML.EVALUATE function is used to evaluate model metrics
SELECT SQRT(mean_squared_error) AS rmse
FROM ML.EVALUATE(MODEL feat_eng.baseline_model)
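The query simply applies RMSE = sqrt(MSE). The same step in plain Python, with a made-up error vector:

```python
import math

errors = [1.0, -2.0, 0.5, 3.0]  # predicted - actual, made-up values
mse = sum(e**2 for e in errors) / len(errors)
rmse = math.sqrt(mse)
print(mse, rmse)  # RMSE is back in the units of the label (dollars of fare)
```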
Model 1: EXTRACT dayofweek from the pickup_datetime feature

As you recall, dayofweek is an enum representing the 7 days of the week. This factory allows the enum to be obtained from the int value. The int value follows the ISO-8601 standard, from 1 (Monday) to 7 (Sunday). If you were to extract the dayofweek from...
%%bigquery
#TODO 2
CREATE OR REPLACE MODEL feat_eng.model_1
OPTIONS (model_type='linear_reg', input_label_cols=['fare_amount']) AS
SELECT
  fare_amount,
  passengers,
  pickup_datetime,
  EXTRACT(DAYOFWEEK FROM pickup_datetime) AS dayofweek,
  pickuplon,
  pickuplat,
  dropofflon,
  dropofflat
FROM feat_...
Once the training is done, visit the BigQuery Cloud Console and look at the model that has been trained. Then, come back to this notebook. Next, two distinct SQL statements show the TRAINING and EVALUATION metrics of model_1.
%%bigquery
# Here, ML.TRAINING_INFO function is used to see information about the training iterations of a model.
SELECT *, SQRT(loss) AS rmse
FROM ML.TRAINING_INFO(MODEL feat_eng.model_1)

%%bigquery
# Here, ML.EVALUATE function is used to evaluate model metrics
SELECT *
FROM ML.EVALUATE(MODEL feat_eng.mod...
Here we run a SQL query that takes the SQRT() of the mean squared error as the loss metric for evaluating model_1.
%%bigquery
# Here, ML.EVALUATE function is used to evaluate model metrics
SELECT SQRT(mean_squared_error) AS rmse
FROM ML.EVALUATE(MODEL feat_eng.model_1)
Model 2: EXTRACT hourofday from the pickup_datetime feature

As you recall, pickup_datetime is stored as a TIMESTAMP, where the Timestamp format is retrieved in the standard output format – year-month-day hour:minute:second (e.g. 2016-01-01 23:59:59). Hourofday returns the integer number representing the hour number o...
%%bigquery
#TODO 3a
CREATE OR REPLACE MODEL feat_eng.model_2
OPTIONS (model_type='linear_reg', input_label_cols=['fare_amount']) AS
SELECT
  fare_amount,
  passengers,
  #pickup_datetime,
  EXTRACT(DAYOFWEEK FROM pickup_datetime) AS dayofweek,
  EXTRACT(HOUR FROM pickup_datetime) AS hourofday,
  pick...
Model 3: Feature cross dayofweek and hourofday using CONCAT

First, let’s allow the model to learn traffic patterns by creating a new feature that combines the time of day and day of week (this is called a feature cross).

Note: BQML by default assumes that numbers are numeric features, and strings are categorical feat...
%%bigquery
#TODO 3b
CREATE OR REPLACE MODEL feat_eng.model_3
OPTIONS (model_type='linear_reg', input_label_cols=['fare_amount']) AS
SELECT
  fare_amount,
  passengers,
  #pickup_datetime,
  #EXTRACT(DAYOFWEEK FROM pickup_datetime) AS dayofweek,
  #EXTRACT(HOUR FROM pickup_datetime) AS hourofday,
  CONCAT(CAST(EX...
2. Check that your problem is solvable

We don't have to check everything, but rather than spend hours trying to debug our program, it helps to spend a few moments making sure the data we have to work with makes sense. Certainly this doesn't have to be exhaustive, but it saves headaches later. Some common things to...
# Do we have lanes defined for factories or warehouses that don't exist?
all_locations = set(lanes.origin) | set(lanes.destination)

for f in factories.Factory:
    if f not in all_locations:
        print('missing ', f)

for w in warehouses.Warehouse:
    if w not in all_locations:
        print('missing ', w)
...
03.5-network-flows/factory_routing_problem.ipynb
cochoa0x1/integer-programming-with-python
mit
3. Model the data with a graph

Our data has a very obvious graph structure to it. We have factories and warehouses (nodes), and we have lanes that connect them (edges). In many cases the extra effort of explicitly making a graph allows us to have very natural looking constraint and objective formulations. This is absol...
import networkx as nx

G = nx.DiGraph()

# add all the nodes
for i, row in factories.iterrows():
    G.add_node(row.Factory, supply=row.Supply, node_type='factory')

for i, row in warehouses.iterrows():
    G.add_node(row.Warehouse, demand=row.Demand, node_type='warehouse')

# add the lanes (edges)
for i, row in lane...
4. Define the actual Linear Program

So far everything we have done hasn't concerned itself with solving a linear program. We have one primary question to answer here: What quantity of plumbus-es should we ship from each factory to each warehouse to minimize the total shipping cost? Taking this apart, we are looking for...
from pulp import *
The variables are the amounts to put on each edge. LpVariable.dicts allows us to access the variables using dictionary access syntax, i.e., the quantity from Garfield to BurgerQueen is `qty[('Garfield','BurgerQueen')]`; the actual variable name created under the hood is `qty_('Garfield',_'BurgerQueen')`.
qty = LpVariable.dicts("qty", G.edges(), lowBound=0)
okay cool, so what about our objective? Revisiting the question: What quantity of plumbus-es should we ship from each factory to each warehouse to minimize the total shipping cost? We are seeking to minimize the shipping cost. So we need to calculate our shipping cost as a function of our variables (the lanes), and it ...
# the total cost of this routing is the cost per unit * the qty sent on each lane
def objective():
    shipping_cost = lpSum([qty[(org,dest)]*data['cost']
                           for (org,dest,data) in G.edges(data=True)])
    return shipping_cost
We have a few constraints to define:

The demand at each warehouse must be satisfied. In graph syntax this means the sum of all inbound edges must match the demand we have on file:

$$\sum_{(o,d) \,\in\, \text{in\_edges}(d)} qty_{o,d} = \text{Demand}(d)$$

We must not use more supply than each factory has, i.e., the sum of the outbound edg...
def constraints():
    constraints = []
    for x, data in G.nodes(data=True):
        # demand must be met
        if data['node_type'] == 'warehouse':
            inbound_qty = lpSum([qty[(org,x)] for org, _ in G.in_edges(x)])
            c = inbound_qty == data['demand']
            constraints.append(c)
        ...
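As a quick sanity check before solving, total factory supply must cover total warehouse demand, or no assignment of the qty variables can satisfy these constraints. A stdlib sketch with hypothetical numbers (the real values live in the factories and warehouses tables):

```python
# hypothetical supplies and demands, standing in for the real tables
supply = {"Garfield": 100, "Heathcliff": 80}
demand = {"BurgerQueen": 70, "TacoCastle": 60}

total_supply = sum(supply.values())
total_demand = sum(demand.values())
print(total_supply, total_demand)
assert total_supply >= total_demand, "infeasible: not enough supply to meet demand"
```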
Finally ready to create the problem, add the objective, and add the constraints
# setup the problem
prob = LpProblem('warehouse_routing', LpMinimize)

# add the objective
prob += objective()

# add all the constraints
for c in constraints():
    prob += c
Now we can finally answer: What quantity of plumbus-es should we ship from each factory to each warehouse?
# you can also use the value() function instead of .varValue
for org, dest in G.edges():
    v = value(qty[(org,dest)])
    if v > 0:
        print(org, dest, v)
and, How much will our shipping cost be?
value(prob.objective)
It is a good idea to verify explicitly that all the constraints were met. Sometimes it is easy to forget a necessary constraint.
# lets verify all the conditions
# first lets stuff our result into a dataframe for export
result = []
for org, dest in G.edges():
    v = value(qty[(org,dest)])
    result.append({'origin': org, 'destination': dest, 'qty': v})
result_df = pd.DataFrame(result)

lanes['key'] = lanes.origin + lanes.destination
result_df['key'] = result...
The Kernel

The kernel decays quickly around $x=0$, which is why cubic splines suffer from very little "ringing" -- moving one point doesn't significantly affect the curve at points far away.
# Plot the kernel
DECAY = math.sqrt(3) - 2
vs = [3*(DECAY**x) for x in range(1,7)]
ys = [0]*len(vs) + [1] + [0]*len(vs)
vs = [-v for v in vs[::-1]] + [0.0] + vs
xs = numpy.linspace(0, len(ys)-1, 1000)
plt.figure(0, figsize=(12.0,4.0)); plt.grid(True); plt.ylim([-0.2,1.1]); plt.xticks(range(-5,6))
plt.plot([x-6.0 for x i...
NaturalCubicSplines.ipynb
mtimmerm/IPythonNotebooks
apache-2.0
Printing had a weird implementation in Python 2 that was rectified, and now printing is a function like every other. This means that you must use brackets when you print something. Then in Python 2, there were two ways of doing integer division: 2 / 3 and 2 // 3 both gave zero. In Python 3, the former triggers a type u...
l = []
for i in range(10):
    l.append(i)
print(l)
Tutorials/Python_Introduction.ipynb
jdnz/qml-rg
gpl-3.0
This is more Pythonesque:
l = [i**2 for i in range(10)]
print(l)
What you have inside the square brackets is a comprehension; written without the brackets, it is a generator expression. Sometimes you do not need the list, only its values. In such cases, it suffices to use the generator expression. The following two lines of code achieve the same thing:
print(sum([i for i in range(10)]))
print(sum(i for i in range(10)))
Which one is more efficient? Why? You can also use conditionals in the generator expressions. For instance, this is a cheap way to get even numbers:
[i for i in range(10) if i % 2 == 0]
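On the efficiency question above: a list comprehension materializes every element up front, while a generator expression produces them lazily. sys.getsizeof makes the difference visible (exact byte counts are CPython implementation details):

```python
import sys

squares_list = [i**2 for i in range(100000)]  # all elements stored at once
squares_gen = (i**2 for i in range(100000))   # computed one at a time

print(sys.getsizeof(squares_list))  # hundreds of kilobytes
print(sys.getsizeof(squares_gen))   # a small, constant-size object

print(sum(squares_gen) == sum(squares_list))  # same values either way
```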
Exercise 3. List all odd square numbers below 1000.

7.2 PEP8

And on the seventh day, God created PEP8. Python Enhancement Proposal (PEP) is a series of ideas and good practices for writing nice Python code and evolving the language. PEP8 is the set of policies that tells you what makes Python syntax pretty (meaning it ...
for _ in range(10): print("Vomit")
Good:
for _ in range(10):
    print("OMG, the code generating this is so prettily indented")
The code is more readable if it is a bit leafy. For this reason, leave a space after every comma just as you would do in natural languages:
print([1,2,3,4])     # Ugly crap
print([1, 2, 3, 4])  # My god, this is so much easier to read!
Spyder has tools for helping you keep to PEP8, but it is not so straightforward in Jupyter unfortunately.

Exercise 4. Clean up this horrific mess:
for i in range(2,5): print(i)
for j in range( -10,0, 1): print(j )
7.3 Tuples, swap

Tuples are like lists, but with a fixed number of entries. Technically, this is a tuple:
t = (2, 3, 4)
print(t)
print(type(t))
You would, however, seldom use it in this form, because you would just use a list. They come in handy in certain scenarios, like enumerating a list:
very_interesting_list = [i**2-1 for i in range(10) if i % 2 != 0]
for i, e in enumerate(very_interesting_list):
    print(i, e)
Here enumerate returns you a tuple with the running index and the matching entry of the list. You can also zip several lists and create a stream of tuples:
another_interesting_list = [i**2+1 for i in range(10) if i % 2 == 0]
for i, j in zip(very_interesting_list, another_interesting_list):
    print(i, j)
You can use tuple-like assignment to initialize multiple variables:
a, b, c = 1, 2, 3
print(a, b, c)
This syntax in turn gives you the most elegant way of swapping the values of two variables:
a, b = b, a
print(a, b)
7.4 Indexing

You saw that you can use in, zip, and enumerate to iterate over lists. You can also use slicing on one-dimensional lists:
l = [i for i in range(10)]
print(l)
print(l[2:5])
print(l[2:])
print(l[:-1])
l[-2]
Note that the upper index is not inclusive (the same as in range). The index -1 refers to the last item, -2 to the second last, and so on. Python lists are zero-indexed. Unfortunately, you cannot do convenient double indexing on multidimensional lists. For this, you need numpy.
import numpy as np

a = np.array([[(i+1)*(j+1) for j in range(5)] for i in range(3)])
print(a)
print(a[:, 0])
print(a[0, :])
Exercise 5. Get the bottom-right 2x2 submatrix of a.

8. Types

Python will hide the pain of working with types: you don't have to declare the type of any variable. But this does not mean they don't have a type. The type gets assigned automatically via an internal type inference mechanism. To demonstrate this, we import...
import sympy as sp
import numpy as np
from sympy.interactive import printing
printing.init_printing(use_latex='mathjax')

print(np.sqrt(2))
sp.sqrt(2)
The types tell you why these two look different:
print(type(np.sqrt(2)))
print(type(sp.sqrt(2)))
The symbolic representation is, in principle, infinite precision, whereas the numerical representation uses 64 bits. As we said above, you can do some things with numpy arrays that you cannot do with lists. Their types can be checked:
a = [0. for _ in range(5)]
b = np.zeros(5)
print(a)
print(b)
print(type(a))
print(type(b))
Tutorials/Python_Introduction.ipynb
jdnz/qml-rg
gpl-3.0
There are many differences between numpy arrays and lists. The most important ones are that lists can expand, but arrays cannot, and lists can contain any object, whereas numpy arrays can only contain things of the same type. Type conversion is (usually) easy:
print(type(list(b)))
print(type(np.array(a)))
Tutorials/Python_Introduction.ipynb
jdnz/qml-rg
gpl-3.0
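A quick sketch of these two differences, using small throwaway values:

```python
import numpy as np

l = [1, 2, 3]
l.append("four")        # a list can grow and can hold mixed types
print(l)

b = np.array([1, 2, 3])
b[0] = 2.7              # an integer array silently truncates the float
print(b)                # the first entry is now 2, not 2.7
```

The second example is a common source of subtle bugs: numpy keeps the dtype of the array fixed and converts whatever you assign into it.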
This is where the trouble begins:
from sympy import sqrt
from numpy import sqrt
sqrt(2)
Tutorials/Python_Introduction.ipynb
jdnz/qml-rg
gpl-3.0
Because of this, never import everything from a package: from numpy import * is forbidden. Exercise 6. What would you do to keep everything at infinite precision to ensure the correctness of a computational proof? This does not seem to be working:
b = np.zeros(3)
b[0] = sp.pi
b[1] = sqrt(2)
b[2] = 1/3
print(b)
Tutorials/Python_Introduction.ipynb
jdnz/qml-rg
gpl-3.0
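The reason is that a numpy array coerces everything you store in it to 64-bit floats. A plain Python list keeps the exact sympy objects instead; a minimal sketch:

```python
import sympy as sp

# a plain list does not coerce its entries, so the
# symbolic values survive at infinite precision
exact = [sp.pi, sp.sqrt(2), sp.Rational(1, 3)]
print(exact)
print(3 * exact[2])   # exactly 1, not 0.9999...
```

This is one way to approach Exercise 6: keep values symbolic for as long as possible and only convert to floats for the final display.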
9. Read the fine documentation (and write it)

Python packages and individual functions typically come with documentation. Documentation is often hosted on ReadTheDocs. For individual functions, you can get the matching documentation as you type. Just press Shift+Tab on a function:
sp.sqrt
Tutorials/Python_Introduction.ipynb
jdnz/qml-rg
gpl-3.0
In Spyder, Ctrl+I will bring up the documentation of the function. This documentation is called a docstring; it is extremely easy to write, and you should write one yourself for every function you define. It takes epsilon effort. Here is an example:
def multiply(a, b):
    """Multiply two numbers together.

    :param a: The first number to be multiplied.
    :type a: float.
    :param b: The second number to be multiplied.
    :type b: float.
    :returns: the multiplication of the two numbers.
    """
    return a*b
Tutorials/Python_Introduction.ipynb
jdnz/qml-rg
gpl-3.0
Now you can press Shift+Tab to see the above documentation:
multiply
Tutorials/Python_Introduction.ipynb
jdnz/qml-rg
gpl-3.0
To give our Python function a test run, we will now do some imports and generate the input data for the initial conditions of our metal sheet with a few very hot points. We'll also make two plots, one after a thousand time steps, and a second plot after another two thousand time steps. Do note that the plots are using ...
import numpy

#setup initial conditions
def get_initial_conditions(nx, ny):
    field = numpy.ones((ny, nx)).astype(numpy.float32)
    # note the (row, column) order: rows index y, columns index x
    field[numpy.random.randint(0, ny, size=10), numpy.random.randint(0, nx, size=10)] = 1e3
    return field

field = get_initial_conditions(nx, ny)
tutorial/diffusion_use_optparam.ipynb
benvanwerkhoven/kernel_tuner
apache-2.0
We can now use this initial condition to solve the diffusion problem and plot the results.
from matplotlib import pyplot
%matplotlib inline

#run the diffuse function a 1000 times and another 2000 times and make plots
fig, (ax1, ax2) = pyplot.subplots(1, 2)

cpu = numpy.copy(field)
for i in range(1000):
    cpu = diffuse(cpu)
ax1.imshow(cpu)
for i in range(2000):
    cpu = diffuse(cpu)
ax2.imshow(cpu)
tutorial/diffusion_use_optparam.ipynb
benvanwerkhoven/kernel_tuner
apache-2.0
Now let's take a quick look at the execution time of our diffuse function. Before we do, we also copy the current state of the metal sheet to be able to restart the computation from this state.
#run another 1000 steps of the diffuse function and measure the time
from time import time

start = time()
cpu = numpy.copy(field)
for i in range(1000):
    cpu = diffuse(cpu)
end = time()
print("1000 steps of diffuse on a %d x %d grid took" %(nx,ny), (end-start)*1000.0, "ms")
pyplot.imshow(cpu)
tutorial/diffusion_use_optparam.ipynb
benvanwerkhoven/kernel_tuner
apache-2.0
The above CUDA kernel parallelizes the work such that every grid point will be processed by a different CUDA thread. Therefore, the kernel is executed by a 2D grid of threads, which are grouped together into 2D thread blocks. The specific thread block dimensions we choose are not important for the result of the computa...
from pycuda import driver, compiler, gpuarray, tools
import pycuda.autoinit
from time import time

#allocate GPU memory
u_old = gpuarray.to_gpu(field)
u_new = gpuarray.to_gpu(field)

#setup thread block dimensions and compile the kernel
threads = (16,16,1)
grid = (int(nx/16), int(ny/16), 1)
block_size_string = "#define...
tutorial/diffusion_use_optparam.ipynb
benvanwerkhoven/kernel_tuner
apache-2.0
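Note that `int(nx/16)` only covers the whole domain when nx and ny are exact multiples of 16. A more robust way to derive the grid dimensions, sketched here with a hypothetical helper, is to round up so that a partial block still covers the edge of the domain:

```python
import math

# hypothetical helper, not part of the tutorial code:
# one thread per grid point, rounding up for partial blocks
def grid_dims(nx, ny, block_x, block_y):
    return (math.ceil(nx / block_x), math.ceil(ny / block_y), 1)

print(grid_dims(1024, 1024, 16, 16))  # (64, 64, 1)
print(grid_dims(1000, 1000, 16, 16))  # (63, 63, 1); int(1000/16) would give 62
```

This is the same rounding-up idiom the notebook uses later when it computes `grid` with `numpy.ceil`.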
The above code is a bit of boilerplate we need to compile a kernel using PyCuda. We've also, for the moment, fixed the thread block dimensions at 16 by 16. These dimensions serve as our initial guess for what a well-performing pair of thread block dimensions could look like. Now that we've set up everything, let's see h...
#call the GPU kernel a 1000 times and measure performance
t0 = time()
for i in range(500):
    diffuse_kernel(u_new, u_old, block=threads, grid=grid)
    diffuse_kernel(u_old, u_new, block=threads, grid=grid)
driver.Context.synchronize()
print("1000 steps of diffuse on a %d x %d grid took" %(nx,ny), (time()-t0)*1000, "m...
tutorial/diffusion_use_optparam.ipynb
benvanwerkhoven/kernel_tuner
apache-2.0
Note that the Kernel Tuner prints a lot of useful information. To ensure you'll be able to tell what was measured in this run, the Kernel Tuner always prints the GPU or OpenCL device name that is being used, as well as the name of the kernel. After that, every line contains the combination of parameters and the time that...
kernel_string_shared = """
#define nx %d
#define ny %d
#define dt 0.225f
__global__ void diffuse_kernel(float *u_new, float *u) {
    int tx = threadIdx.x;
    int ty = threadIdx.y;
    int bx = blockIdx.x * block_size_x;
    int by = blockIdx.y * block_size_y;
    __shared__ float sh_u[block_size_y+2][block_size_x+...
tutorial/diffusion_use_optparam.ipynb
benvanwerkhoven/kernel_tuner
apache-2.0
We can now tune this new kernel using the Kernel Tuner.
result = tune_kernel("diffuse_kernel", kernel_string_shared, problem_size, args, tune_params)
tutorial/diffusion_use_optparam.ipynb
benvanwerkhoven/kernel_tuner
apache-2.0
Tiling GPU Code

One very useful code optimization is called tiling, sometimes also called thread-block-merge. You can look at it this way: currently we have many thread blocks that together work on the entire domain. If we were to use only half of the number of thread blocks, every thread block would need to double ...
kernel_string_tiled = """
#define nx %d
#define ny %d
#define dt 0.225f
__global__ void diffuse_kernel(float *u_new, float *u) {
    int tx = threadIdx.x;
    int ty = threadIdx.y;
    int bx = blockIdx.x * block_size_x * tile_size_x;
    int by = blockIdx.y * block_size_y * tile_size_y;
    __shared__ float sh_u[bl...
tutorial/diffusion_use_optparam.ipynb
benvanwerkhoven/kernel_tuner
apache-2.0
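The effect of tiling on the launch configuration can be sketched in plain Python: with a tile size of 2 in a dimension, each block covers twice as many grid points, so only half as many blocks are launched in that dimension.

```python
import math

# sketch: each thread block now covers block_size * tile_size
# grid points per dimension, so fewer blocks are needed
def blocks_needed(n, block_size, tile_size):
    return math.ceil(n / (block_size * tile_size))

print(blocks_needed(1024, 16, 1))  # 64 blocks
print(blocks_needed(1024, 16, 2))  # 32 blocks: half as many, double the work each
```

This is exactly what the `grid_div_x` and `grid_div_y` lists tell the Kernel Tuner to do when computing the grid dimensions.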
We can tune our tiled kernel by adding the two new tunable parameters to our dictionary tune_params. We also need to somehow tell the Kernel Tuner to use fewer thread blocks to launch kernels with tile_size_x or tile_size_y larger than one. For this purpose the Kernel Tuner's tune_kernel function supports two optional ...
tune_params["tile_size_x"] = [1,2,4]  #add tile_size_x to the tune_params
tune_params["tile_size_y"] = [1,2,4]  #add tile_size_y to the tune_params

grid_div_x = ["block_size_x", "tile_size_x"]  #tile_size_x impacts grid dimensions
grid_div_y = ["block_size_y", "tile_size_y"]  #tile_size_y impact...
tutorial/diffusion_use_optparam.ipynb
benvanwerkhoven/kernel_tuner
apache-2.0
We can see that the number of kernel configurations tried by the Kernel Tuner is growing rather quickly. Also, the best performing configuration is quite a bit faster than the best kernel before we started optimizing. On our GTX Titan X, the execution time went from 0.72 ms to 0.53 ms, a performance improvement of 26%! No...
import pycuda.autoinit
from collections import OrderedDict

# define the optimal parameters
size = [nx,ny,1]
threads = [128,4,1]

# create a dict of fixed parameters
fixed_params = OrderedDict()
fixed_params['block_size_x'] = threads[0]
fixed_params['block_size_y'] = threads[1]

# select the kernel to use
kernel_string = kernel_string_shared

# replace t...
tutorial/diffusion_use_optparam.ipynb
benvanwerkhoven/kernel_tuner
apache-2.0
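The quoted improvement follows directly from the two timings:

```python
t_before, t_after = 0.72, 0.53   # ms, the timings reported above
improvement = (t_before - t_after) / t_before * 100
print("%.0f%% faster" % improvement)  # 26% faster
```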
We also need to determine the size of the grid:
# for regular and shared kernel
grid = [int(numpy.ceil(n/t)) for t,n in zip(threads,size)]
tutorial/diffusion_use_optparam.ipynb
benvanwerkhoven/kernel_tuner
apache-2.0
We can then transfer the initial condition data to the two GPU arrays, compile the code, and get the function we want to use.
#allocate GPU memory
u_old = gpuarray.to_gpu(field)
u_new = gpuarray.to_gpu(field)

# compile the kernel
mod = compiler.SourceModule(kernel_string)
diffuse_kernel = mod.get_function("diffuse_kernel")
tutorial/diffusion_use_optparam.ipynb
benvanwerkhoven/kernel_tuner
apache-2.0
We now just have to run the simulation using the kernel with these optimized parameters:
#call the GPU kernel a 1000 times and measure performance
t0 = time()
for i in range(500):
    diffuse_kernel(u_new, u_old, block=tuple(threads), grid=tuple(grid))
    diffuse_kernel(u_old, u_new, block=tuple(threads), grid=tuple(grid))
driver.Context.synchronize()
print("1000 steps of diffuse on a %d x %d grid took" %...
tutorial/diffusion_use_optparam.ipynb
benvanwerkhoven/kernel_tuner
apache-2.0
C run

If you wish to incorporate the optimized parameters in the kernel and use it in a C run, you can use an #ifndef guard at the beginning of the kernel, as demonstrated in the pseudocode below.
kernel_string = """
#ifndef block_size_x
#define block_size_x <insert optimal value>
#endif
#ifndef block_size_y
#define block_size_y <insert optimal value>
#endif
#define nx %d
#define ny %d
#define dt 0.225f
__global__ void diffuse_kernel(float *u_new, float *u) {
    ......
}
""" % (nx, ny)
tutorial/diffusion_use_optparam.ipynb
benvanwerkhoven/kernel_tuner
apache-2.0
Calculate the average time step
print(np.mean(np.diff(t)))
hfr_start_stop.ipynb
rsignell-usgs/notebook
mit
So we have time steps of about 1 hour. Now calculate the unique time steps:
print(np.unique(np.diff(t)).data)
hfr_start_stop.ipynb
rsignell-usgs/notebook
mit
So there are gaps of 2, 3, 6, 9, 10, 14, and 19 hours in the otherwise hourly data.
nc['time'][:]
hfr_start_stop.ipynb
rsignell-usgs/notebook
mit
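To locate where such gaps occur, one can look for differences larger than the nominal one-hour step; a small sketch with synthetic times (in hours), not the actual dataset:

```python
import numpy as np

# synthetic hourly time axis with two 3-hour gaps, for illustration only
t = np.array([0, 1, 2, 5, 6, 7, 10])
dt = np.diff(t)
gaps = np.where(dt > 1)[0]
print(gaps)       # indices where a gap starts: [2 5]
print(dt[gaps])   # size of each gap in hours: [3 3]
```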
A bit of statistics
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
Dia3/4_Estadistica_Basica.ipynb
beangoben/HistoriaDatos_Higgs
gpl-2.0
We make two lists: the first will contain the ages of the science club students, and the second the number of people of each age.
Edades = np.array([15, 16, 17, 18, 19, 20, 21, 22, 23, 24])
Frecuencia = np.array([10, 22, 39, 32, 26, 10, 7, 5, 8, 1])
print(sum(Frecuencia))
plt.bar(Edades, Frecuencia)
plt.show()
Dia3/4_Estadistica_Basica.ipynb
beangoben/HistoriaDatos_Higgs
gpl-2.0
Uniform distribution: the Mexican lottery (Lotería mexicana)
x1 = np.random.rand(50)
plt.hist(x1)
plt.show()
Dia3/4_Estadistica_Basica.ipynb
beangoben/HistoriaDatos_Higgs
gpl-2.0
Poisson distribution: the number of Facebook friend requests in a week
s = np.random.poisson(5, 20)
plt.hist(s)
plt.show()
Dia3/4_Estadistica_Basica.ipynb
beangoben/HistoriaDatos_Higgs
gpl-2.0
Normal distribution: the distribution of grades on an exam
x = np.random.randn(50)
plt.hist(x)
plt.show()

x = np.random.randn(100)
plt.hist(x)
plt.show()

x = np.random.randn(200)
plt.hist(x)
plt.show()
Dia3/4_Estadistica_Basica.ipynb
beangoben/HistoriaDatos_Higgs
gpl-2.0
One way to automate this is:
tams = [1,2,3,4,5,6,7]
for tam in tams:
    numeros = np.random.randn(10**tam)
    plt.hist(numeros, bins=20)
    plt.title('%d' %tam)
    plt.show()

numeros = np.random.normal(loc=2.0, scale=2.0, size=1000)
plt.hist(numeros)
plt.show()
Dia3/4_Estadistica_Basica.ipynb
beangoben/HistoriaDatos_Higgs
gpl-2.0
Probability in a normal distribution

$1 \sigma$ = 68.27%
$2 \sigma$ = 95.45%
$3 \sigma$ = 99.73%
$4 \sigma$ = 99.994%
$5 \sigma$ = 99.99994%

Activities

Plot the following:
Create 3 distributions varying the mean
Create 3 distributions varying the std
Create 2 distributions with some overlap
Gaussian bell curves in...
x = np.random.normal(loc=2.0, scale=2.0, size=100)
y = np.random.normal(loc=2.0, scale=2.0, size=100)
plt.scatter(x, y)
plt.show()
Dia3/4_Estadistica_Basica.ipynb
beangoben/HistoriaDatos_Higgs
gpl-2.0
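The sigma percentages above can be reproduced numerically (assuming scipy is available in the environment):

```python
from scipy.stats import norm

# two-sided probability of falling within k standard deviations of the mean
for k in range(1, 6):
    p = norm.cdf(k) - norm.cdf(-k)
    print("%d sigma: %.5f%%" % (k, p * 100))
```

For k = 1 this prints roughly 68.269%, matching the first entry in the list.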
Load CSV data <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/tutorials/load_data/csv"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google...
try:
    # %tensorflow_version only exists in Colab.
    %tensorflow_version 2.x
except Exception:
    pass

from __future__ import absolute_import, division, print_function, unicode_literals

import functools
import numpy as np
import tensorflow as tf

TRAIN_DATA_URL = "https://storage.googleapis.com/tf-datasets/titanic/tr...
site/pt-br/tutorials/load_data/csv.ipynb
tensorflow/docs-l10n
apache-2.0