Discriminator The discriminator takes as input ($x^*$) either the 784-dimensional output of the generator or a real MNIST image, reshapes the input to a 28 x 28 image, and outputs the estimated probability that the input image is a real MNIST image. The network is modeled using strided convolutions with Leaky ReLU activation e...
def convolutional_discriminator(x): with default_options(init=C.normal(scale=0.02)): dfc_dim = 1024 df_dim = 64 print('Discriminator convolution input shape', x.shape) x = C.reshape(x, (1, img_h, img_w)) h0 = Convolution2D(dkernel, 1, strides=dstride)(x) h0 = bn_wi...
simpleGan/CNTK_206B_DCGAN.ipynb
olgaliak/cntk-cyclegan
mit
We use a minibatch size of 128 and a fixed learning rate of 0.0002 for training. In the fast mode (isFast = True) we verify only functional correctness with 5000 iterations. Note: in the slow mode the results look a lot better, but training can take on the order of 10 minutes or more depending on your hardware. In general, the mor...
# training config
minibatch_size = 128
num_minibatches = 5000 if isFast else 10000
lr = 0.0002
momentum = 0.5 # equivalent to beta1
Build the graph The rest of the computational graph is mostly responsible for coordinating the training algorithms and parameter updates, which is particularly tricky with GANs for a couple of reasons. GANs are sensitive to the choice of learner and its parameters. Many of the parameters chosen here are based on many ha...
def build_graph(noise_shape, image_shape, generator, discriminator): input_dynamic_axes = [C.Axis.default_batch_axis()] Z = C.input(noise_shape, dynamic_axes=input_dynamic_axes) X_real = C.input(image_shape, dynamic_axes=input_dynamic_axes) X_real_scaled = X_real / 255.0 # Create the model function...
With the value functions defined, we proceed to iteratively train the GAN model. Training the model can take a significant amount of time depending on the hardware, especially if the isFast flag is turned off.
def train(reader_train, generator, discriminator): X_real, X_fake, Z, G_trainer, D_trainer = \ build_graph(g_input_dim, d_input_dim, generator, discriminator) # print out loss for each model for upto 25 times print_frequency_mbsize = num_minibatches // 25 print("First row is Generator loss,...
This gives us a nice way to move from our preference $x_i$ to a probability of switching styles. Here $\beta$ is inversely related to noise. For large $\beta$, the noise is small and we basically map $x > 0$ to a 100% probability of switching, and $x<0$ to a 0% probability of switching. As $\beta$ gets smaller, the pro...
class HipsterStep(object): """Class to implement hipster evolution Parameters ---------- initial_style : length-N array values > 0 indicate one style, while values <= 0 indicate the other. is_hipster : length-N array True or False, indicating whether each person is a hipster ...
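One natural realization of the preference-to-probability map described above is a logistic function; the sketch below is illustrative (the exact form used inside HipsterStep may differ):

```python
import numpy as np

def switch_probability(x, beta):
    # Logistic map from preference x to probability of switching styles.
    # Large beta: near-deterministic (x > 0 -> ~1, x < 0 -> ~0);
    # small beta: probabilities flatten toward 0.5 (more noise).
    return 1.0 / (1.0 + np.exp(-beta * x))

print(switch_probability(0.0, 1.0))  # 0.5 regardless of beta
```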
doc/Examples/HipsterDynamics.ipynb
vascotenner/holoviews
bsd-3-clause
Now we'll create a function which will return an instance of the HipsterStep class with the appropriate settings:
def get_sim(Npeople=500, hipster_frac=0.8, initial_state_frac=0.5, delay=20, log10_beta=0.5, rseed=42): rng = np.random.RandomState(rseed) initial_state = (rng.rand(1, Npeople) > initial_state_frac) is_hipster = (rng.rand(Npeople) > hipster_frac) influence_matrix = abs(rng.randn(Npeople, Npeople)) ...
Exploring this data Now that we've defined the simulation, we can start exploring this data. I'll quickly demonstrate how to advance simulation time and get the results. First we initialize the model with a certain fraction of hipsters:
sim = get_sim(hipster_frac=0.8)
To run the simulation for a number of steps we execute sim.step(Nsteps), giving us a matrix of identities for each individual at each timestep.
result = sim.step(200)
result
Now we can simply go right ahead and visualize this data using an Image Element type, defining the dimensions and bounds of the space.
%%opts Image [width=600]
hv.Image(result.T, bounds=(0, 0, 100, 500), kdims=['Time', 'individual'], vdims=['State'])
Now that you know how to run the simulation and access the data, have a go at exploring the effects of different parameters on the population dynamics, or apply some custom analyses to this data. Here are two quick examples of what you can do:
%%opts Curve [width=350] Image [width=350] hipster_frac = hv.HoloMap(kdims=['Hipster Fraction']) for i in np.linspace(0.1, 1, 10): sim = get_sim(hipster_frac=i) hipster_frac[i] = hv.Image(sim.step(200).T, (0, 0, 500, 500), group='Population Dynamics', kdims=['Time', 'individual'],...
1. Create data loaders Use DataLoader to create a <tt>train_loader</tt> and a <tt>test_loader</tt>. Batch sizes should be 10 for both.
# CODE HERE
# DON'T WRITE HERE
torch/PYTORCH_NOTEBOOKS/03-CNN-Convolutional-Neural-Networks/05-CNN-Exercises.ipynb
rishuatgithub/MLPy
apache-2.0
2. Examine a batch of images Use DataLoader, <tt>make_grid</tt> and matplotlib to display the first batch of 10 images.<br> OPTIONAL: display the labels as well
# CODE HERE
# DON'T WRITE HERE # IMAGES ONLY
# DON'T WRITE HERE # IMAGES AND LABELS
Downsampling <h3>3. If a 28x28 image is passed through a Convolutional layer using a 5x5 filter, a step size of 1, and no padding, what is the resulting matrix size?</h3> <div style='border:1px black solid; padding:5px'> <br><br> </div>
################################################## ###### ONLY RUN THIS TO CHECK YOUR ANSWER! ###### ################################################ # Run the code below to check your answer: conv = nn.Conv2d(1, 1, 5, 1) for x,labels in train_loader: print('Orig size:',x.shape) break x = conv(x) print('Down s...
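The same answer can be worked out by hand with the standard convolution output-size formula; a small illustrative helper (not part of the exercise):

```python
def conv_output_size(n, kernel, stride=1, padding=0):
    # floor((n + 2p - k) / s) + 1
    return (n + 2 * padding - kernel) // stride + 1

print(conv_output_size(28, 5))  # 24: a 28x28 image becomes 24x24
```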
4. If the sample from question 3 is then passed through a 2x2 MaxPooling layer, what is the resulting matrix size? <div style='border:1px black solid; padding:5px'> <br><br> </div>
##################################################
###### ONLY RUN THIS TO CHECK YOUR ANSWER! ######
################################################
# Run the code below to check your answer:
x = F.max_pool2d(x, 2, 2)
print('Down size:',x.shape)
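Pooling follows the same arithmetic, with the stride defaulting to the kernel size as in F.max_pool2d(x, 2, 2); again a small illustrative helper:

```python
def pool_output_size(n, kernel, stride=None):
    # stride defaults to the kernel size, the usual max-pooling convention
    stride = stride or kernel
    return (n - kernel) // stride + 1

print(pool_output_size(24, 2))  # 12: the 24x24 map from question 3 becomes 12x12
```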
CNN definition 5. Define a convolutional neural network Define a CNN model that can be trained on the Fashion-MNIST dataset. The model should contain two convolutional layers, two pooling layers, and two fully connected layers. You can use any number of neurons per layer so long as the model takes in a 28x28 image and ...
# CODE HERE
class ConvolutionalNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        pass
    def forward(self, X):
        pass
        return

torch.manual_seed(101)
model = ConvolutionalNetwork()
Trainable parameters 6. What is the total number of trainable parameters (weights & biases) in the model above? Answers will vary depending on your model definition. <div style='border:1px black solid; padding:5px'> <br><br> </div>
# CODE HERE
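If you prefer a by-hand check, per-layer counts follow directly from the layer shapes. The layer sizes below are hypothetical examples, not the exercise solution:

```python
def conv2d_params(in_ch, out_ch, kernel):
    # weights: out_ch * in_ch * k * k, plus one bias per output channel
    return out_ch * in_ch * kernel * kernel + out_ch

def linear_params(n_in, n_out):
    # weights: n_out * n_in, plus one bias per output unit
    return n_out * n_in + n_out

print(conv2d_params(1, 6, 3))   # 60
print(linear_params(120, 10))   # 1210
```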
7. Define loss function & optimizer Define a loss function called "criterion" and an optimizer called "optimizer".<br> You can use any functions you want, although we used Cross Entropy Loss and Adam (learning rate of 0.001) respectively.
# CODE HERE
# DON'T WRITE HERE
8. Train the model Don't worry about tracking loss values, displaying results, or validating the test set. Just train the model through 5 epochs. We'll evaluate the trained model in the next step.<br> OPTIONAL: print something after each epoch to indicate training progress.
# CODE HERE
9. Evaluate the model Set <tt>model.eval()</tt> and determine the percentage correct out of 10,000 total test images.
# CODE HERE
The dataset contains information (21 features, including the price) related to 21613 houses. Our target variable (i.e., what we want to predict when a new house gets on sale) is the price. Baseline: the simplest model Now let's compute the loss in the case of the simplest model: a fixed price equal to the average of hi...
# Let's compute the mean of the House Prices in King County
y = sales['price'] # extract the price column
avg_price = y.mean() # this is our baseline
print ("average price: ${:.0f} ".format(avg_price))
ExamplePrice = y[0]
ExamplePrice
01-Regression/overfit.ipynb
Mashimo/datascience
apache-2.0
The predictions are very easy to calculate, just the baseline value:
def get_baseline_predictions():
    # Simplest version: return the baseline as predicted values
    predicted_values = avg_price
    return predicted_values
Example:
my_house_size = 2500
estimated_price = get_baseline_predictions()
print ("The estimated price for a house with {} squared feet is {:.0f}".format(my_house_size, estimated_price))
The estimated price for the example house will still be around 540K, while the real value is around 222K. Quite an error! Measures of loss There are several ways of implementing the loss; I use the squared error here. $L = [y - f(X)]^2$
import numpy as np def get_loss(yhat, target): """ Arguments: yhat -- vector of size m (predicted labels) target -- vector of size m (true labels) Returns: loss -- the value of the L2 loss function """ # compute the residuals (since we are squaring it doesn't matter # whi...
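A minimal NumPy version of the same idea, for reference (squared residuals summed over all examples):

```python
import numpy as np

def l2_loss(yhat, target):
    # sum of squared residuals (RSS); the sign of each residual doesn't matter
    residuals = target - yhat
    return float(np.dot(residuals, residuals))

print(l2_loss(np.array([1.0, 2.0]), np.array([1.0, 4.0])))  # 4.0
```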
To better see the value of the cost function we also use the RMSE, the Root Mean Squared Error: essentially the square root of the average of the losses.
baselineCost = get_loss(get_baseline_predictions(), y)
print ("Training Error for baseline RSS: {:.0f}".format(baselineCost))
print ("Average Training Error for baseline RMSE: {:.0f}".format(np.sqrt(baselineCost/m)))
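The conversion used above is simply the RSS divided by the number of examples m, then square-rooted; as a tiny sketch:

```python
import math

def rmse_from_rss(rss, m):
    # root of the mean squared error
    return math.sqrt(rss / m)

print(rmse_from_rss(400.0, 4))  # 10.0
```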
As you can see, the error is quite high, especially relative to the average selling price. Now we can look at how the training error behaves as model complexity increases. Learning a better but still simple model Using a constant value, the average, is easy but does not make much sense. Let's create a linear model with ...
from sklearn import linear_model

simple_model = linear_model.LinearRegression()
simple_features = sales[['sqft_living']] # input X: the house size
simple_model.fit(simple_features, y)
Now that we have fit the model we can extract the regression weights (coefficients) as follows:
simple_model_intercept = simple_model.intercept_
print (simple_model_intercept)
simple_model_weights = simple_model.coef_
print (simple_model_weights)
This means that our simple model to predict a house price y is (approximately): $y = -43581 + 281x$, where x is the size in square feet. It is no longer a horizontal line but a sloped one. Making Predictions Recall that once a model is built we can use the .predict() function to find the predicted v...
training_predictions = simple_model.predict(simple_features)
print (training_predictions[0])
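With the rounded coefficients quoted earlier ($y = -43581 + 281x$), the same prediction can be sketched by hand (rounded values, so this only approximates what .predict() returns):

```python
intercept, slope = -43581, 281  # rounded coefficients from the fitted model

def predict_price(sqft):
    return intercept + slope * sqft

print(predict_price(2500))  # 658919
```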
We are getting closer to the real value for the example house (recall, it's around 222K). Compute the Training Error Now that we can make predictions given the model, let's again compute the RSS and the RMSE.
# First get the predictions using the features subset
predictions = simple_model.predict(sales[['sqft_living']])
simpleCost = get_loss(predictions, y)
print ("Training Error for simple model RSS: {:.0f}".format(simpleCost))
print ("Average Training Error for simple model RMSE: {:.0f}".format(np.sqrt(simpleCost/m)))
The simple model greatly reduced the training error. Learning a multiple regression model We can add more features to the model, for example the number of bedrooms and bathrooms.
more_features = sales[['sqft_living', 'bedrooms', 'bathrooms']] # input X
We can learn a multiple regression model predicting 'price' based on the above features on the data with the following code:
better_model = linear_model.LinearRegression()
better_model.fit(more_features, y)
Now that we have fitted the model we can extract the regression weights (coefficients) as follows:
betterModel_intercept = better_model.intercept_
print (betterModel_intercept)
betterModel_weights = better_model.coef_
print (betterModel_weights)
The better model is therefore: $y = 74847 + 309x_1 - 57861x_2 + 7933x_3$. Note that the equation now has three variables: the size, the bedrooms, and the bathrooms. Making Predictions Again we can use the .predict() function to find the predicted values for the data we pass. For the model above:
better_predictions = better_model.predict(more_features)
print (better_predictions[0])
Again, a little bit closer to the real value (222K) Compute the Training Error Now that we can make predictions given the model, let's write a function to compute the RSS of the model.
predictions = better_model.predict(more_features)
betterCost = get_loss(predictions, y)
print ("Training Error for better model RSS: {:.0f}".format(betterCost))
print ("Average Training Error for better model RMSE: {:.0f}".format(np.sqrt(betterCost/m)))
Only a slight improvement this time. Create some new features Although we often think of multiple regression as including multiple different features (e.g. # of bedrooms, squarefeet, and # of bathrooms), we can also consider transformations of existing features, e.g. the log of the squarefeet, or even "interaction" fea...
from math import log
Next we create the following new features as columns:

* bedrooms_squared = bedrooms * bedrooms
* bed_bath_rooms = bedrooms * bathrooms
* log_sqft_living = log(sqft_living)
* lat_plus_long = lat + long
* more polynomial features: bedrooms ^ 4, bathrooms ^ 7, size ^ 3
sales['bedrooms_squared'] = sales['bedrooms'].apply(lambda x: x**2) sales['bed_bath_rooms'] = sales['bedrooms'] * sales.bathrooms sales['log_sqft_living'] = sales['sqft_living'].apply(lambda x: log(x)) sales['lat_plus_long'] = sales['lat'] + sales.long sales['bedrooms_4'] = sales['bedrooms'].apply(lambda x: x**4) ...
Squaring bedrooms will increase the separation between not many bedrooms (e.g. 1) and lots of bedrooms (e.g. 4) since 1^2 = 1 but 4^2 = 16. Consequently this feature will mostly affect houses with many bedrooms. bedrooms times bathrooms gives what's called an "interaction" feature. It is large when both of them are lar...
model_1_features = ['sqft_living', 'bedrooms', 'bathrooms', 'lat', 'long', 'sqft_lot', 'floors'] model_2_features = model_1_features + ['log_sqft_living', 'bedrooms_squared', 'bed_bath_rooms'] model_3_features = model_2_features + ['lat_plus_long'] model_4_features = model_3_features + ['bedrooms_4', 'bathrooms_7'] mod...
Now that we have the features, we learn the weights for the five different models predicting target = 'price', and look at the value of the weights/coefficients:
model_1 = linear_model.LinearRegression() model_1.fit(sales[model_1_features], y) model_2 = linear_model.LinearRegression() model_2.fit(sales[model_2_features], y) model_3 = linear_model.LinearRegression() model_3.fit(sales[model_3_features], y) model_4 = linear_model.LinearRegression() model_4.fit(sales[model_4_fea...
Interesting: in the previous model the weight coefficient for the lot size was positive, but now in model_2 it is negative. This is an effect of adding the log-transformed size feature. Comparing multiple models Now that you've learned five models and extracted the model weights, we want to evaluate which model is best. We...
# Compute the RSS for each of the models: print (get_loss(model_1.predict(sales[model_1_features]), y)) print (get_loss(model_2.predict(sales[model_2_features]), y)) print (get_loss(model_3.predict(sales[model_3_features]), y)) print (get_loss(model_4.predict(sales[model_4_features]), y)) print (get_loss(model_5.predic...
model_5 has the lowest RSS on the training data. The most complex model. The test error Training error decreases quite significantly with model complexity. This is quite intuitive, because the model was fit on the training points and then as we increase the model complexity, we are better able to fit the training data ...
from sklearn.model_selection import train_test_split

train_data,test_data = train_test_split(sales, test_size=0.3, random_state=999)
train_data.head()
train_data.shape
# test_data = pd.read_csv('kc_house_test_data.csv', dtype=dtype_dict)
test_data.head()
test_data.shape
In this case the testing set is 30% of the original data (and the training set is therefore the remaining 70%).
train_y = train_data.price # extract the price column
test_y = test_data.price
Retrain the models on training data only:
model_1.fit(train_data[model_1_features], train_y) model_2.fit(train_data[model_2_features], train_y) model_3.fit(train_data[model_3_features], train_y) model_4.fit(train_data[model_4_features], train_y) model_5.fit(train_data[model_5_features], train_y) # Compute the RSS on TRAINING data for each of the models pr...
Now compute the RSS on TEST data for each of the models.
# Compute the RSS on TESTING data for each of the three models and record the values: print (get_loss(model_1.predict(test_data[model_1_features]), test_y)) print (get_loss(model_2.predict(test_data[model_2_features]), test_y)) print (get_loss(model_3.predict(test_data[model_3_features]), test_y)) print (get_loss(model...
Training with k-Fold Cross-Validation This recipe repeatedly trains a logistic regression classifier over different subsets (folds) of sample data. It attempts to match the percentage of each class in every fold to its percentage in the overall dataset (stratification). It evaluates each model against a test set and co...
# <help:scikit_cross_validation> import warnings warnings.filterwarnings('ignore') #notebook outputs warnings, let's ignore them import pandas import sklearn import sklearn.datasets import sklearn.metrics as metrics from sklearn.linear_model import LogisticRegression from sklearn.cross_validation import StratifiedKFol...
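The stratification idea (each fold mirroring the overall class mix) can be sketched in pure Python; this is illustrative only, as the recipe itself relies on scikit-learn's StratifiedKFold:

```python
from collections import defaultdict

def stratified_folds(labels, k):
    # Group sample indices by class, then deal each class round-robin
    # into k folds so every fold keeps roughly the overall class mix.
    by_class = defaultdict(list)
    for i, y in enumerate(labels):
        by_class[y].append(i)
    folds = [[] for _ in range(k)]
    for indices in by_class.values():
        for j, i in enumerate(indices):
            folds[j % k].append(i)
    return folds

labels = [0] * 6 + [1] * 3
for fold in stratified_folds(labels, 3):
    print(sorted(labels[i] for i in fold))  # each fold: [0, 0, 1]
```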
scikit-learn/sklearn_cookbook.ipynb
knowledgeanyhow/notebooks
mit
Principal Component Analysis Plots This recipe performs a PCA and plots the data against the first two principal components in a scatter plot. It then prints the eigenvalues and eigenvectors of the covariance matrix and finally prints the percentage of total variance explained by each component. This recipe defaults t...
# <help:scikit_pca> import warnings warnings.filterwarnings('ignore') #notebook outputs warnings, let's ignore them from __future__ import division import math import pandas as pd import numpy as np import matplotlib.pyplot as plt import sklearn.datasets import sklearn.metrics as metrics from sklearn.decomposition impo...
K-Means Clustering Plots This recipe performs a K-means clustering k=1..n times. It prints and plots the within-cluster sum of squares error for each k (i.e., inertia) as an indicator of what value of k might be appropriate for the given dataset. This recipe defaults to using the Iris data set. To use your own d...
# <help:scikit_k_means_cluster> import warnings warnings.filterwarnings('ignore') #notebook outputs warnings, let's ignore them from time import time import numpy as np import matplotlib.pyplot as plt import sklearn.datasets from sklearn.cluster import KMeans # load datasets and assign data and features dataset = skle...
SVM Classifier Hyperparameter Tuning with Grid Search This recipe performs a grid search for the best settings for a support vector machine, predicting the class of each flower in the dataset. It splits the dataset into training and test instances once. This recipe defaults to using the Iris data set. To use your own ...
#<help_scikit_grid_search> import numpy as np import matplotlib.pyplot as plt import sklearn.datasets import sklearn.metrics as metrics from sklearn.svm import SVC from sklearn.grid_search import GridSearchCV from sklearn.metrics import classification_report from sklearn.cross_validation import train_test_split from sk...
Plot ROC Curves This recipe plots the receiver operating characteristic (ROC) curve for an SVM classifier trained over the given dataset. This recipe defaults to using the Iris data set which has three classes. The recipe uses a one-vs-the-rest strategy to create the binary classifications appropriate for ROC plotting. ...
# <help:scikit_roc> import warnings warnings.filterwarnings('ignore') #notebook outputs warnings, let's ignore them import numpy as np import matplotlib.pyplot as plt import sklearn.datasets import sklearn.metrics as metrics from sklearn.svm import SVC from sklearn.multiclass import OneVsRestClassifier from sklearn.cro...
Build a Transformation and Classification Pipeline This recipe builds a transformation and training pipeline for a model that can classify a snippet of text as belonging to one of 20 USENET newsgroups. It then prints the precision, recall, and F1-score for predictions over a held-out test set as well as the confusion ma...
# <help:scikit_pipeline> import pandas import sklearn.metrics as metrics from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer from sklearn.feature_extraction.text import HashingVectorizer from sklearn.linear_model import Perceptron from sklearn.naive_bayes import MultinomialNB from sklearn.line...
Defining Geometry At this point, we have three materials defined, exported to XML, and ready to be used in our model. To finish our model, we need to define the geometric arrangement of materials. OpenMC represents physical volumes using constructive solid geometry (CSG), also known as combinatorial geometry. The objec...
sph = openmc.Sphere(R=1.0)
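Conceptually, a surface such as this sphere splits space into two half-spaces, and region membership reduces to a sign test on the surface equation; a plain-Python illustration of the idea (not the OpenMC API):

```python
def inside_sphere(point, r=1.0):
    # negative half-space of x^2 + y^2 + z^2 - r^2 = 0
    x, y, z = point
    return x * x + y * y + z * z < r * r

print(inside_sphere((0.5, 0.0, 0.0)))  # True
print(inside_sphere((2.0, 0.0, 0.0)))  # False
```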
examples/jupyter/pincell.ipynb
wbinventor/openmc
mit
Pin cell geometry We now have enough knowledge to create our pin-cell. We need three surfaces to define the fuel and clad:

* The outer surface of the fuel -- a cylinder parallel to the z axis
* The inner surface of the clad -- same as above
* The outer surface of the clad -- same as above

These three surfaces will all be i...
fuel_or = openmc.ZCylinder(R=0.39)
clad_ir = openmc.ZCylinder(R=0.40)
clad_or = openmc.ZCylinder(R=0.46)
OpenMC also includes a factory function that generates a rectangular prism that could have made our lives easier.
box = openmc.get_rectangular_prism(width=pitch, height=pitch, boundary_type='reflective')
type(box)
Geometry plotting We saw before that we could call the Universe.plot() method to show a universe while we were creating our geometry. There is also a built-in plotter in the Fortran codebase that is much faster than the Python plotter and has more options. The interface looks somewhat similar to the Universe.plot() met...
p = openmc.Plot()
p.filename = 'pinplot'
p.width = (pitch, pitch)
p.pixels = (200, 200)
p.color_by = 'material'
p.colors = {uo2: 'yellow', water: 'blue'}
That was a little bit cumbersome. Thankfully, OpenMC provides us with a function that does all that "boilerplate" work.
openmc.plot_inline(p)
Linear regression
# build a linear regression model from sklearn.linear_model import LinearRegression linreg = LinearRegression() linreg.fit(X_train, y_train) # examine the coefficients print(linreg.coef_) # make predictions y_pred = linreg.predict(X_test) # calculate RMSE from sklearn import metrics import numpy as np print(np.sqrt(...
notebooks/07-regularization.ipynb
albahnsen/PracticalMachineLearningClass
mit
Ridge regression Ridge documentation alpha: must be positive, increase for more regularization normalize: scales the features (without using StandardScaler)
# alpha=0 is equivalent to linear regression from sklearn.linear_model import Ridge ridgereg = Ridge(alpha=0, normalize=True) ridgereg.fit(X_train, y_train) y_pred = ridgereg.predict(X_test) print(np.sqrt(metrics.mean_squared_error(y_test, y_pred))) # try alpha=0.1 ridgereg = Ridge(alpha=0.1, normalize=True) ridgereg....
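For intuition, what Ridge fits has a closed form; a NumPy sketch that ignores the intercept and the normalize preprocessing for simplicity:

```python
import numpy as np

def ridge_fit(X, y, alpha):
    # closed-form solution: w = (X^T X + alpha * I)^{-1} X^T y
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ y)

X = np.array([[1.0], [2.0], [3.0]])
y = np.array([2.0, 4.0, 6.0])
print(ridge_fit(X, y, alpha=0.0))  # [2.] -- alpha=0 recovers plain least squares
```

Increasing alpha shrinks the coefficient below the least-squares value, which is exactly the regularization effect being demonstrated.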
RidgeCV: ridge regression with built-in cross-validation of the alpha parameter alphas: array of alpha values to try
# create an array of alpha values alpha_range = 10.**np.arange(-2, 3) alpha_range # select the best alpha with RidgeCV from sklearn.linear_model import RidgeCV ridgeregcv = RidgeCV(alphas=alpha_range, normalize=True, scoring='neg_mean_squared_error') ridgeregcv.fit(X_train, y_train) ridgeregcv.alpha_ # predict method...
Lasso regression Lasso documentation alpha: must be positive, increase for more regularization normalize: scales the features (without using StandardScaler)
# try alpha=0.001 and examine coefficients from sklearn.linear_model import Lasso lassoreg = Lasso(alpha=0.001, normalize=True) lassoreg.fit(X_train, y_train) print(lassoreg.coef_) # try alpha=0.01 and examine coefficients lassoreg = Lasso(alpha=0.01, normalize=True) lassoreg.fit(X_train, y_train) print(lassoreg.coef_...
LassoCV: lasso regression with built-in cross-validation of the alpha parameter n_alphas: number of alpha values (automatically chosen) to try
# select the best alpha with LassoCV import warnings warnings.filterwarnings('ignore') from sklearn.linear_model import LassoCV lassoregcv = LassoCV(n_alphas=100, normalize=True, random_state=1,cv=5) lassoregcv.fit(X_train, y_train) lassoregcv.alpha_ # examine the coefficients print(lassoregcv.coef_) # predict method...
Part 5: Regularized classification in scikit-learn Wine dataset from the UCI Machine Learning Repository: data, data dictionary Goal: Predict the origin of wine using chemical analysis Load and prepare the wine dataset
# read in the dataset url = 'https://github.com/albahnsen/PracticalMachineLearningClass/raw/master/datasets/wine.data' wine = pd.read_csv(url, header=None) wine.head() # examine the response variable wine[0].value_counts() # define X and y X = wine.drop(0, axis=1) y = wine[0] # split into training and testing sets f...
Logistic regression (unregularized)
# build a logistic regression model from sklearn.linear_model import LogisticRegression logreg = LogisticRegression(C=1e9,solver='liblinear',multi_class='auto') logreg.fit(X_train, y_train) # examine the coefficients print(logreg.coef_) # generate predicted probabilities y_pred_prob = logreg.predict_proba(X_test) pri...
Logistic regression (regularized) LogisticRegression documentation C: must be positive, decrease for more regularization penalty: l1 (lasso) or l2 (ridge)
# standardize X_train and X_test from sklearn.preprocessing import StandardScaler scaler = StandardScaler() X_train = X_train.astype(float) X_test = X_test.astype(float) scaler.fit(X_train) X_train_scaled = scaler.transform(X_train) X_test_scaled = scaler.transform(X_test) # try C=0.1 with L1 penalty logreg = Logistic...
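The fit/transform split above matters: the scaling parameters come from the training data only, so no information leaks from the test set into preprocessing. A minimal NumPy equivalent of what StandardScaler does:

```python
import numpy as np

def standardize(train, test):
    # Fit mean and std on the training set only, then apply to both sets
    mu, sigma = train.mean(axis=0), train.std(axis=0)
    return (train - mu) / sigma, (test - mu) / sigma

train = np.array([[1.0], [2.0], [3.0]])
test = np.array([[4.0]])
train_scaled, test_scaled = standardize(train, test)
print(train_scaled.ravel())  # centered to mean 0, unit variance
```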
<br> Import required modules
import json
import time
import numpy as np
import pandas as pd
from geopy.geocoders import GoogleV3
from geopy.exc import GeocoderQueryError, GeocoderQuotaExceeded
2018/data_wrangling/Create Boulders Final.ipynb
mplaine/www.laatukiikut.fi
mit
<br> Load the datafile spb2018_-_cleaned.csv, which contains the form responses to the Suomen Parhaat Boulderit 2018 survey.
# Load cleaned dataset
spb2018_df = pd.read_csv("data/survey_-_cleaned.csv")

# Drop duplicates (exclude the Timestamp column from comparisons)
spb2018_df = spb2018_df.drop_duplicates(subset=spb2018_df.columns.values.tolist()[1:])
spb2018_df.head()
<br> Load the datafile boulders_-_prefilled.csv, which contains manually added details of each voted boulder.
boulder_details_df = pd.read_csv("data/boulders_-_prefilled.csv", index_col="Name")
boulder_details_df.head()
<br> Add column VotedBy
""" # Simpler but slower (appr. four times) implementation # 533 ms ± 95.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) def add_column_votedby(column_name="VotedBy"): # Gender mappings from Finnish to English gender_dict = { "Mies": "Male", "Nainen": "Female" } # Iterate over b...
<br> Add column Votes.
def add_column_votes(column_name="Votes"): boulder_name_columns = [spb2018_df["Boulderin nimi"], spb2018_df["Boulderin nimi.1"], spb2018_df["Boulderin nimi.2"]] all_voted_boulders_s = pd.concat(boulder_name_columns, ignore_index=True).dropna() boulder_votes_s = all_voted_boulders_s.value_counts() boulde...
<br> Add columns Latitude and Longitude.
def add_columns_latitude_and_longitude(column_names=["Latitude", "Longitude"]):
    boulder_details_df[[column_names[0], column_names[1]]] = boulder_details_df["Coordinates"].str.split(",", expand=True).astype(float)

add_columns_latitude_and_longitude()
boulder_details_df.head()
2018/data_wrangling/Create Boulders Final.ipynb
mplaine/www.laatukiikut.fi
mit
<br> Add column GradeNumeric.
def add_column_gradenumeric(column_name="GradeNumeric"):
    # Grade mappings from Font to numeric
    grade_dict = {
        "?": 0,
        "1": 1,
        "2": 2,
        "3": 3,
        "4": 4,
        "4+": 5,
        "5": 6,
        "5+": 7,
        "6A": 8,
        "6A+": 9,
        "6B"...
2018/data_wrangling/Create Boulders Final.ipynb
mplaine/www.laatukiikut.fi
mit
<br> Add column Adjectives
def add_column_adjectives(column_name="Adjectives"):
    def set_adjectives(row):
        boulder_name = row.name
        adjectives1_s = spb2018_df.loc[(spb2018_df["Boulderin nimi"] == boulder_name), "Kuvaile boulderia kolmella (3) adjektiivilla"]
        adjectives2_s = spb2018_df.loc[(spb2018_df["Boulderin nimi.1"] ...
2018/data_wrangling/Create Boulders Final.ipynb
mplaine/www.laatukiikut.fi
mit
<br> Add column MainHoldTypes
def add_column_main_hold_types(column_name="MainHoldTypes"):
    def set_main_hold_types(row):
        boulder_name = row.name
        main_hold_types1_s = spb2018_df.loc[(spb2018_df["Boulderin nimi"] == boulder_name), "Boulderin pääotetyypit"]
        main_hold_types2_s = spb2018_df.loc[(spb2018_df["Boulderin nimi.1"]...
2018/data_wrangling/Create Boulders Final.ipynb
mplaine/www.laatukiikut.fi
mit
<br> Add column MainProfiles
def add_column_main_profiles(column_name="MainProfiles"):
    def set_main_profiles(row):
        boulder_name = row.name
        main_profiles1_s = spb2018_df.loc[(spb2018_df["Boulderin nimi"] == boulder_name), "Boulderin pääprofiilit"]
        main_profiles2_s = spb2018_df.loc[(spb2018_df["Boulderin nimi.1"] == bould...
2018/data_wrangling/Create Boulders Final.ipynb
mplaine/www.laatukiikut.fi
mit
<br> Add column MainSkillsNeeded
def add_column_main_skills_needed(column_name="MainSkillsNeeded"):
    def set_main_skills_needed(row):
        boulder_name = row.name
        main_skills_needed1_s = spb2018_df.loc[(spb2018_df["Boulderin nimi"] == boulder_name), "Boulderin kiipeämiseen vaadittavat pääkyvyt"]
        main_skills_needed2_s = spb2018_df...
2018/data_wrangling/Create Boulders Final.ipynb
mplaine/www.laatukiikut.fi
mit
<br> Add column Comments
def add_column_comments(column_name="Comments"):
    def set_comments(row):
        boulder_name = row.name
        comments1_s = spb2018_df.loc[(spb2018_df["Boulderin nimi"] == boulder_name), "Kuvaile boulderia omin sanoin (vapaaehtoinen)"]
        comments2_s = spb2018_df.loc[(spb2018_df["Boulderin nimi.1"] == boulde...
2018/data_wrangling/Create Boulders Final.ipynb
mplaine/www.laatukiikut.fi
mit
<br> Add columns AreaLevel1, AreaLevel2, and AreaLevel3
def add_columns_arealevel1_arealevel2_and_arealevel3(column_names=["AreaLevel1", "AreaLevel2", "AreaLevel3"]):
    boulder_details_df.drop(columns=[column_names[0], column_names[1], column_names[2]], inplace=True, errors="ignore")
    geolocator = GoogleV3(api_key=GOOGLE_MAPS_JAVASCRIPT_API_KEY)

    def extract_admini...
2018/data_wrangling/Create Boulders Final.ipynb
mplaine/www.laatukiikut.fi
mit
<br> Create boulders final file boulders_-_final.csv.
def create_boulders_final():
    boulder_details_reset_df = boulder_details_df.reset_index()
    boulder_details_reset_df = boulder_details_reset_df[["Votes", "VotedBy", "Name", "Grade", "GradeNumeric", "InFinland", "AreaLevel1", "AreaLevel2", "AreaLevel3", "Crag", "ApproximateCoordinates", "Coordinates", "Latitude", "...
2018/data_wrangling/Create Boulders Final.ipynb
mplaine/www.laatukiikut.fi
mit
Exercise #1: What are the most predictive features? Determine the correlation of each feature with the label. You may find the corr function useful.

Train Gradient Boosting model

Training steps to build an ensemble of $K$ estimators:
1. At $k=0$, build the base model $\hat{y}_{0}$: $\hat{y}_{0} = base\_predicted$
2. Comput...
class BaseModel(object):
    """Initial model that predicts mean of train set."""

    def __init__(self, y_train):
        self.train_mean = None  # TODO

    def predict(self, x):
        """Return train mean for every prediction."""
        return None  # TODO


def compute_residuals(label, pred):
    """Compute difference of la...
courses/machine_learning/deepdive/supplemental_gradient_boosting/labs/a_boosting_from_scratch.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
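One possible completion of the TODOs above, assuming the intended base prediction is simply the train-set mean (this is a sketch of one valid answer, not the official lab solution):

```python
import numpy as np

class BaseModel(object):
    """Initial model that predicts the mean of the train set."""

    def __init__(self, y_train):
        # Store the mean of the training labels.
        self.train_mean = np.mean(y_train)

    def predict(self, x):
        """Return the train mean for every prediction."""
        return np.full(len(x), self.train_mean)

def compute_residuals(label, pred):
    """Compute the difference of label and prediction."""
    return label - pred

y = np.array([1.0, 2.0, 3.0])
base = BaseModel(y)
print(base.predict(np.zeros(2)))          # every prediction is the mean
print(compute_residuals(y, base.predict(y)))
```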
Train Boosting model Returning to boosting, let's use our very first base model as our initial prediction. We'll then perform subsequent boosting iterations to improve upon this model. create_weak_learner
def create_weak_learner(**tree_params): """Initialize a Decision Tree model.""" model = DecisionTreeRegressor(**tree_params) return model
courses/machine_learning/deepdive/supplemental_gradient_boosting/labs/a_boosting_from_scratch.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Make initial prediction. Exercise #3: Update the prediction on the training set (train_pred) and on the testing set (test_pred) using the weak learner that predicts the residuals.
base_model = BaseModel(y_train)

# Training parameters.
tree_params = {
    'max_depth': 1,
    'criterion': 'mse',
    'random_state': 123
}
N_ESTIMATORS = 50
BOOSTING_LR = 0.1

# Initial prediction, residuals.
train_pred = base_model.predict(x_train)
test_pred = base_model.predict(x_test)
train_residuals = c...
courses/machine_learning/deepdive/supplemental_gradient_boosting/labs/a_boosting_from_scratch.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
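The update described in Exercise #3 expands into the standard gradient-boosting loop. A minimal self-contained sketch on synthetic data follows — the data and constants are illustrative, mirroring the names used above:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(123)
x = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(x[:, 0]) + 0.1 * rng.randn(200)

BOOSTING_LR = 0.1
N_ESTIMATORS = 50

# Base prediction: the train mean, as in BaseModel.
pred = np.full(len(y), y.mean())
for _ in range(N_ESTIMATORS):
    # Fit a weak learner to the current residuals...
    residuals = y - pred
    weak = DecisionTreeRegressor(max_depth=1, random_state=123).fit(x, residuals)
    # ...then add its (shrunken) prediction to the running estimate.
    pred = pred + BOOSTING_LR * weak.predict(x)

rmse = np.sqrt(np.mean((y - pred) ** 2))
print(rmse)
```

The loop should drive the training RMSE well below that of the base (mean-only) model.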
Interpret results Can you improve the model results?
plt.figure()
plt.plot(train_rmse, label='train error')
plt.plot(test_rmse, label='test error')
plt.ylabel('rmse', size=20)
plt.xlabel('Boosting Iterations', size=20)
plt.legend()
courses/machine_learning/deepdive/supplemental_gradient_boosting/labs/a_boosting_from_scratch.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
We need to pick some first guess parameters. Because we're lazy we'll just start by setting them all to 1:
log_a = 0.0
log_b = 0.0
log_c = 0.0
log_P = 0.0

kernel = CustomTerm(log_a, log_b, log_c, log_P)
gp = celerite.GP(kernel, mean=0.0)

yerr = 0.000001*np.ones(time.shape)
gp.compute(time, yerr)
print("Initial log-likelihood: {0}".format(gp.log_likelihood(value)))

t = np.arange(np.min(time), np.max(time), 0.1)

# calcula...
TIARA/Tutorial/KeplerLightCurveCelerite.ipynb
as595/AllOfYourBases
gpl-3.0
The key parameter here is the period, which is the fourth number along. We expect this to be about 3.9 and... we're getting 4.24, so not a million miles off. From the paper: This star has a published rotation period of 3.88 ± 0.58 days, measured using traditional periodogram and autocorrelation function approaches appl...
# pass the parameters to the celerite kernel:
gp.set_parameter_vector(results.x)

t = np.arange(np.min(time), np.max(time), 0.1)

# calculate expectation and variance at each point:
mu, cov = gp.predict(value, t)
std = np.sqrt(np.diag(cov))

ax = pl.subplot(111)
pl.plot(t, mu)
ax.fill_between(t, mu-std, mu+std, facecolor='ligh...
TIARA/Tutorial/KeplerLightCurveCelerite.ipynb
as595/AllOfYourBases
gpl-3.0
First we need to define a log(likelihood). We'll use the log(likelihood) implemented in the celerite library, which implements: $$ \ln L = -\frac{1}{2}(y - \mu)^{\rm T} C^{-1}(y - \mu) - \frac{1}{2}\ln |C\,| - \frac{N}{2}\ln 2\pi $$ (see Eq. 5 in https://arxiv.org/pdf/1706.05459.pdf).
# set the loglikelihood:
def lnlike(p, x, y):
    lnB = np.log(p[0])
    lnC = p[1]
    lnL = np.log(p[2])
    lnP = np.log(p[3])
    p0 = np.array([lnB, lnC, lnL, lnP])

    # update kernel parameters:
    gp.set_parameter_vector(p0)

    # calculate the likelihood:
    ll = gp.log_likelihood(y)
    ...
TIARA/Tutorial/KeplerLightCurveCelerite.ipynb
as595/AllOfYourBases
gpl-3.0
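The expression above can be sanity-checked with a direct dense implementation of the multivariate Gaussian log-likelihood. The helper below (`gaussian_loglike` is a hypothetical name, not part of celerite or george) evaluates it on toy data with an identity covariance, where it reduces to a sum of scalar normal log-densities:

```python
import numpy as np

def gaussian_loglike(y, mu, C):
    """Dense evaluation of
    ln L = -0.5*(y-mu)^T C^{-1} (y-mu) - 0.5*ln|C| - (N/2)*ln(2*pi)."""
    r = y - mu
    N = len(y)
    _, logdet = np.linalg.slogdet(C)
    # Solve C x = r rather than inverting C explicitly.
    return (-0.5 * r @ np.linalg.solve(C, r)
            - 0.5 * logdet
            - 0.5 * N * np.log(2 * np.pi))

# Toy check: unit-variance, zero-mean, independent data.
y = np.array([0.5, -1.0])
ll = gaussian_loglike(y, np.zeros(2), np.eye(2))
print(ll)
```

In practice celerite evaluates the same quantity, but in O(N) using the structure of its kernels rather than a dense solve.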
We also need to specify our parameter priors. Here we'll just use uniform logarithmic priors. The ranges are the same as specified in Table 3 of https://arxiv.org/pdf/1703.09710.pdf. <img src="table3.png">
# set the logprior
def lnprior(p):
    # These ranges are taken from Table 4
    # of https://arxiv.org/pdf/1703.09710.pdf
    lnB = np.log(p[0])
    lnC = p[1]
    lnL = np.log(p[2])
    lnP = np.log(p[3])

    # really crappy prior:
    if (-10<lnB<0.) and (-5.<lnC<5.) and (-5.<lnL<1.5) and (-3.<lnP<5....
TIARA/Tutorial/KeplerLightCurveCelerite.ipynb
as595/AllOfYourBases
gpl-3.0
The paper then says: initialize 32 walkers by sampling from an isotropic Gaussian with a standard deviation of $10^{−5}$ centered on the MAP parameters. So, let's do that:
# put all the data into a single array:
data = (x_train, y_train)

# set your initial guess parameters
# as the output from the scipy optimiser
# remember celerite keeps these in ln() form!
# C looks like it's going to be a very small
# value - so we will sample from ln(C):
# A, lnC, L, P
p = gp.get_parameter_vector()
...
TIARA/Tutorial/KeplerLightCurveCelerite.ipynb
as595/AllOfYourBases
gpl-3.0
The paper says: We run 500 steps of burn-in, followed by 5000 steps of MCMC using emcee. First let's run the burn-in:
# run a few samples as a burn-in:
print("Running burn-in")
p0, lnp, _ = sampler.run_mcmc(p0, 500)
sampler.reset()
TIARA/Tutorial/KeplerLightCurveCelerite.ipynb
as595/AllOfYourBases
gpl-3.0
Now let's run the production MCMC:
# take the highest likelihood point from the burn-in as a
# starting point and now begin your production run:
print("Running production")
p = p0[np.argmax(lnp)]
p0 = [p + 1e-5 * np.random.randn(ndim) for i in range(nwalkers)]
p0, _, _ = sampler.run_mcmc(p0, 5000)
print("Finished")

import acor

# calculate the converg...
TIARA/Tutorial/KeplerLightCurveCelerite.ipynb
as595/AllOfYourBases
gpl-3.0
Activate the Economics and Reliability themes
names = theme_menu.get_available(new_core, new_project)
message = html_list(names)
HTML(message)

theme_menu.activate(new_core, new_project, "Economics")

# Here we are expecting Hydrodynamics
assert _get_connector(new_project, "modules").get_current_interface_name(new_core, new_project) == "Hydrodynamics"

from aneris...
notebooks/DTOcean Floating Wave Scenario Analysis.ipynb
DTOcean/dtocean-core
gpl-3.0
Objective

Build a model to make predictions on blighted buildings, based on real data from data.detroitmi.gov as provided by Coursera. Building demolition is very important for the city to turn around and revive its economy. However, it is no easy task. Accurate predictions can provide guidance on potential blighted buildi...
# The resulted buildings: Image("./data/buildings_distribution.png")
Final_Report.ipynb
cyang019/blight_fight
mit
Features

Three kinds of incident counts (311 calls, blight violations, and crimes) and normalized coordinates were used in the end. I also tried to generate more features by differentiating each kind of crime or each kind of violation in this notebook. However, these differentiated features led to smaller AUC score...
Image('./data/train_process.png')
Final_Report.ipynb
cyang019/blight_fight
mit
This model resulted in an AUC score of 0.858 on test data. Feature importances are shown below:
Image('./data/feature_f_scores.png')
Final_Report.ipynb
cyang019/blight_fight
mit
Locations were the most important features in this model. Although I tried using more features generated by differentiating different kinds of crimes or violations, the AUC scores did not improve. Feature importance can also be viewed using a tree representation:
Image('./data/bst_tree.png')
Final_Report.ipynb
cyang019/blight_fight
mit
Since overfitting was observed during training, I also tried to reduce variance by including more non-blighted buildings, sampling multiple times with replacement (bagging). A final AUC score of 0.8625 was achieved. The resulting ROC curve on test data is shown below:
Image('./data/ROC_Curve_combined.png')
Final_Report.ipynb
cyang019/blight_fight
mit
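The bagging step described above can be sketched as fitting on bootstrap resamples and averaging predicted probabilities. The classifier and synthetic data below are illustrative stand-ins for the actual model and building features:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)

# Synthetic stand-in for the building features and blight labels.
X = rng.randn(200, 3)
y = (X[:, 0] + 0.5 * rng.randn(200) > 0).astype(int)

# Bagging: fit on bootstrap resamples (sampling with replacement),
# then average the predicted probabilities across the ensemble.
probs = []
for _ in range(10):
    idx = rng.choice(len(X), size=len(X), replace=True)
    clf = LogisticRegression().fit(X[idx], y[idx])
    probs.append(clf.predict_proba(X)[:, 1])
avg_prob = np.mean(probs, axis=0)
pred = (avg_prob > 0.5).astype(int)
print(pred[:5])
```

Averaging over resamples smooths out the variance of any single fit, which is exactly the effect sought here.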
Challenge: You have a couple of airports and want to bring them into a numerical representation to enable processing with neural networks. How do you do that?
# https://en.wikipedia.org/wiki/List_of_busiest_airports_by_passenger_traffic
airports = {
    'HAM': ["germany europe regional", 18],
    'TXL': ["germany europe regional", 21],
    'FRA': ["germany europe hub", 70],
    'MUC': ["germany europe hub", 46],
    'CPH': ["denmark capital scandinavia europe hub", 29],
    'ARN': ["sweden c...
notebooks/2019_tf/embeddings-viz.ipynb
DJCordhose/ai
mit
Encode Texts in multi-hot frequency
tokenizer = tf.keras.preprocessing.text.Tokenizer()
tokenizer.fit_on_texts(airport_descriptions)
description_matrix = tokenizer.texts_to_matrix(airport_descriptions, mode='freq')

airport_count, word_count = description_matrix.shape
dictionary_size = word_count
airport_count, word_count

x = airport_numbers
Y = descripti...
notebooks/2019_tf/embeddings-viz.ipynb
DJCordhose/ai
mit
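Under the hood, `texts_to_matrix(..., mode='freq')` fills each row with word frequencies over the shared vocabulary, so every row sums to one. A minimal NumPy re-creation follows (the vocabulary ordering here is illustrative and differs from Keras's internal index layout):

```python
import numpy as np

texts = ["germany europe regional", "germany europe hub"]

# Build a shared vocabulary over all texts.
vocab = sorted({w for t in texts for w in t.split()})
index = {w: i for i, w in enumerate(vocab)}

# mode='freq': each row holds word counts normalized by text length.
matrix = np.zeros((len(texts), len(vocab)))
for row, t in enumerate(texts):
    words = t.split()
    for w in words:
        matrix[row, index[w]] += 1.0 / len(words)

print(vocab)
print(matrix)
```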
2d embeddings
%%time
import matplotlib.pyplot as plt

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Flatten, GlobalAveragePooling1D, Dense, LSTM, GRU, SimpleRNN, Bidirectional, Embedding
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.initializers import gloro...
notebooks/2019_tf/embeddings-viz.ipynb
DJCordhose/ai
mit
1d embeddings
seed = 3

input_dim = len(airports)
embedding_dim = 1

model = Sequential()
model.add(Embedding(name='embedding',
                    input_dim=input_dim,
                    output_dim=embedding_dim,
                    input_length=1,
                    embeddings_initializer=glorot_normal(seed=seed)))
model.add...
notebooks/2019_tf/embeddings-viz.ipynb
DJCordhose/ai
mit
What country are most billionaires from? For the top ones, how many billionaires per billion people?
recent = df[df['year'] == 2014]
# `recent` is a variable; a variable can be assigned to different things.
# Here it is assigned to a DataFrame.
recent.head()
recent.columns.values
.ipynb_checkpoints/homework7_billionaire_shengyingzhao-checkpoint.ipynb
sz2472/foundations-homework
mit
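The billionaires-per-billion-people figure asked for above is just the per-country counts divided by population. A sketch with made-up counts and hypothetical population figures follows — a real analysis would merge in an actual population dataset:

```python
import pandas as pd

# Made-up counts by country code (value_counts-style output).
counts = pd.Series({"USA": 499, "DEU": 85, "CHN": 152})

# Hypothetical populations in billions; real values would come
# from a population dataset.
population_bn = pd.Series({"USA": 0.319, "DEU": 0.081, "CHN": 1.36})

# Billionaires per billion people, aligned on country code.
per_billion = (counts / population_bn).round(1)
print(per_billion.sort_values(ascending=False))
```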
where are all the billionaires from?
recent['countrycode'].value_counts()
# value_counts counts how many times each country appears

recent.sort_values(by='networthusbillion', ascending=False).head(10)
# sort_values reorders the data based on the `by` column
.ipynb_checkpoints/homework7_billionaire_shengyingzhao-checkpoint.ipynb
sz2472/foundations-homework
mit
What's the average wealth of a billionaire? Male? Female?
recent['networthusbillion'].describe()
# the average wealth of a billionaire is $3.9 billion

recent.groupby('gender')['networthusbillion'].describe()
# groupby groups everything by gender, then shows net worth statistics
# female mean is 3.920556 billion
# male mean is 3.902716 billion
.ipynb_checkpoints/homework7_billionaire_shengyingzhao-checkpoint.ipynb
sz2472/foundations-homework
mit
Who is the poorest billionaire? Who are the top 10 poorest billionaires?
recent.sort_values(by='rank',ascending=False).head(10)
.ipynb_checkpoints/homework7_billionaire_shengyingzhao-checkpoint.ipynb
sz2472/foundations-homework
mit