Load the data Now that you have implemented a two-layer network that passes gradient checks and works on toy data, it's time to load up our favorite CIFAR-10 data so we can use it to train a classifier on a real dataset.
from cs231n.data_utils import load_CIFAR10 def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000): """ Load the CIFAR-10 dataset from disk and perform preprocessing to prepare it for the two-layer neural net classifier. These are the same steps as we used for the SVM, but condense...
assignment1/two_layer_net.ipynb
zlpure/CS231n
mit
Train a network To train our network we will use SGD with momentum. In addition, we will adjust the learning rate with an exponential learning rate schedule as optimization proceeds; after each epoch, we will reduce the learning rate by multiplying it by a decay rate.
input_size = 32 * 32 * 3 hidden_size = 100 num_classes = 10 net = TwoLayerNet(input_size, hidden_size, num_classes) # Train the network stats = net.train(X_train, y_train, X_val, y_val, num_iters=10000, batch_size=200, learning_rate=1e-4, learning_rate_decay=0.95, reg=0.4, verbose=T...
Debug the training With the default parameters we provided above, you should get a validation accuracy of about 0.29 on the validation set. This isn't very good. One strategy for getting insight into what's wrong is to plot the loss function and the accuracies on the training and validation sets during optimization. An...
# Plot the loss function and train / validation accuracies plt.subplot(2, 1, 1) plt.plot(stats['loss_history']) plt.title('Loss history') plt.xlabel('Iteration') plt.ylabel('Loss') plt.subplot(2, 1, 2) plt.plot(stats['train_acc_history'], label='train') plt.plot(stats['val_acc_history'], label='val') plt.title('Classi...
Tune your hyperparameters What's wrong? Looking at the visualizations above, we see that the loss is decreasing more or less linearly, which seems to suggest that the learning rate may be too low. Moreover, there is no gap between the training and validation accuracy, suggesting that the model we used has low capacity...
best_net = None # store the best model into this ################################################################################# # TODO: Tune hyperparameters using the validation set. Store your best trained # # model in best_net. # # ...
Run on the test set When you are done experimenting, you should evaluate your final trained network on the test set; you should get above 48%. We will give you extra bonus points for every 1% of accuracy above 52%.
test_acc = (best_net.predict(X_test) == y_test).mean()
print('Test accuracy: ', test_acc)
It's most common to pass a list into the sorted() function, but in fact it can take any iterable collection as input. The older list.sort() method is an alternative detailed below. The sorted() function is easier to use than sort(), so I recommend using sorted(). The sorted() function can be customize...
strs = ['aa', 'BB', 'zz', 'CC']
print(sorted(strs))
print(sorted(strs, reverse=True))
4 - Sorting.ipynb
sastels/Onboarding
mit
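A quick sketch (not from the original notebook) illustrating the point above: sorted() accepts any iterable, not only lists, and always returns a new list.

```python
# sorted() works on tuples, sets, dicts, and generators alike
print(sorted((3, 1, 2)))                 # tuple -> [1, 2, 3]
print(sorted({'b', 'a', 'c'}))           # set -> ['a', 'b', 'c']
print(sorted({'b': 1, 'a': 2}))          # dict iterates over its keys -> ['a', 'b']
print(sorted(x * x for x in [3, 1, 2]))  # generator expression -> [1, 4, 9]
```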
Custom Sorting With key For more complex custom sorting, sorted() takes an optional "key=" specifying a "key" function that transforms each element before comparison. The key function takes in 1 value and returns 1 value, and the returned "proxy" value is used for the comparisons within the sort. For example with a lis...
strs = ['ccc', 'aaaa', 'd', 'bb']
print(sorted(strs, key=len))
As another example, specifying "str.lower" as the key function is a way to force the sorting to treat uppercase and lowercase the same:
strs = ['aa', 'BB', 'zz', 'CC']
print(sorted(strs, key=str.lower))
You can also pass in your own MyFn as the key function. Say we have a list of strings we want to sort by the last letter of the string.
strs = ['xc', 'zb', 'yd', 'wa']
A little function that takes a string, and returns its last letter. This will be the key function (takes in 1 value, returns 1 value).
def MyFn(s):
    return s[-1]
Now pass key=MyFn to sorted() to sort by the last letter.
print(sorted(strs, key=MyFn))
To use key= custom sorting, remember that you provide a function that takes one value and returns the proxy value to guide the sorting. There is also an optional argument "cmp=cmpFn" to sorted() that specifies a traditional two-argument comparison function that takes two values from the list and returns negative/0/posi...
alist = [1, 5, 9, 2, 5]
alist.sort()
alist
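A note on the "cmp=cmpFn" argument mentioned above: it existed only in Python 2 and was removed in Python 3. A hedged sketch of the modern equivalent, wrapping a traditional two-argument comparison function with functools.cmp_to_key (the comparison function here is made up for illustration):

```python
from functools import cmp_to_key

def compare_len_then_alpha(a, b):
    """Traditional cmp-style function: returns negative / 0 / positive."""
    if len(a) != len(b):
        return len(a) - len(b)        # shorter strings first
    return (a > b) - (a < b)          # Python 3 spelling of cmp(a, b)

strs = ['ccc', 'aaaa', 'd', 'bb']
print(sorted(strs, key=cmp_to_key(compare_len_then_alpha)))  # ['d', 'bb', 'ccc', 'aaaa']
```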
Incorrect (returns None):
blist = alist.sort()
blist
The above is a very common misunderstanding with sort() -- it does not return the sorted list. The sort() method must be called on a list; it does not work on any enumerable collection (but the sorted() function above works on anything). The sort() method predates the sorted() function, so you will likely see it in old...
tuple = (1, 2, 'hi')
print(len(tuple))
print(tuple[2])
Tuples are immutable, i.e. they cannot be changed.
tuple[2] = 'bye'
If you want to change a tuple variable, you must reassign it to a new tuple:
tuple = (1, 2, 'bye')
tuple
To create a size-1 tuple, the lone element must be followed by a comma.
tuple = ('hi',)
tuple
It's a funny case in the syntax, but the comma is necessary to distinguish the tuple from the ordinary case of putting an expression in parentheses. In some cases you can omit the parentheses and Python will see from the commas that you intend a tuple. Assigning a tuple to an identically sized tuple of variable names a...
(err_string, err_code) = ('uh oh', 666)
print(err_code, ':', err_string)
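Building on the parenthesis-free form just described, a small sketch of tuple packing and unpacking without parentheses, including the idiomatic variable swap:

```python
# Parentheses are optional: the commas make the tuple
pair = 'uh oh', 666           # packing
err_string, err_code = pair   # unpacking

# Idiomatic swap via simultaneous pack/unpack
a, b = 1, 2
a, b = b, a
print(a, b)  # 2 1
```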
List Comprehensions A list comprehension is a compact way to write an expression that expands to a whole list. Suppose we have a list nums [1, 2, 3, 4]; here is the list comprehension to compute a list of their squares [1, 4, 9, 16]:
nums = [1, 2, 3, 4]
squares = [ n * n for n in nums ]
squares
The syntax is [ expr for var in list ] -- the for var in list looks like a regular for-loop, but without the colon (:). The expr to its left is evaluated once for each element to give the values for the new list. Here is an example with strings, where each string is changed to upper case with '!!!' appended:
strs = ['hello', 'and', 'goodbye']
shouting = [ s.upper() + '!!!' for s in strs ]
shouting
You can add an if test to the right of the for-loop to narrow the result. The if test is evaluated for each element, including only the elements where the test is true.
## Select values <= 2
nums = [2, 8, 1, 6]
small = [ n for n in nums if n <= 2 ]
small

## Select fruits containing 'a', change to upper case
fruits = ['apple', 'cherry', 'bannana', 'lemon']
afruits = [ s.upper() for s in fruits if 'a' in s ]
afruits
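For illustration (a sketch, not from the original notebook), a comprehension with an if test is equivalent to this expanded loop form:

```python
nums = [2, 8, 1, 6]

# [ n for n in nums if n <= 2 ] expands to:
small = []
for n in nums:
    if n <= 2:
        small.append(n)

print(small)  # [2, 1]
assert small == [n for n in nums if n <= 2]
```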
This shows us where the competition data is stored, so that we can load the files into the notebook. We'll do that next. Load the data The second code cell in your notebook now appears below the three lines of output with the file locations. Type the two lines of code below into your second code cell. Then, once you...
train_data = pd.read_csv("../input/titanic/train.csv")
train_data.head()
notebooks/machine_learning/raw/tut_titanic.ipynb
Kaggle/learntools
apache-2.0
Your code should return the output above, which corresponds to the first five rows of the table in train.csv. It's very important that you see this output in your notebook before proceeding with the tutorial! If your code does not produce this output, double-check that your code is identical to the two lines above. ...
test_data = pd.read_csv("../input/titanic/test.csv")
test_data.head()
As before, make sure that you see the output above in your notebook before continuing. Once all of the code runs successfully, all of the data (in train.csv and test.csv) is loaded in the notebook. (The code above shows only the first 5 rows of each table, but all of the data is there -- all 891 rows of train.csv an...
women = train_data.loc[train_data.Sex == 'female']["Survived"]
rate_women = sum(women)/len(women)
print("% of women who survived:", rate_women)
Before moving on, make sure that your code returns the output above. The code above calculates the percentage of female passengers (in train.csv) who survived. Then, run the code below in another code cell:
men = train_data.loc[train_data.Sex == 'male']["Survived"]
rate_men = sum(men)/len(men)
print("% of men who survived:", rate_men)
The code above calculates the percentage of male passengers (in train.csv) who survived. From this you can see that almost 75% of the women on board survived, whereas only 19% of the men lived to tell about it. Since gender seems to be such a strong indicator of survival, the submission file in gender_submission.csv is...
from sklearn.ensemble import RandomForestClassifier y = train_data["Survived"] features = ["Pclass", "Sex", "SibSp", "Parch"] X = pd.get_dummies(train_data[features]) X_test = pd.get_dummies(test_data[features]) model = RandomForestClassifier(n_estimators=100, max_depth=5, random_state=1) model.fit(X, y) predictions...
These may look like very small tasks, but they embody a very important concept. By drawing the separating line, we have learned a model that can generalize to new data. If you add an unclassified point to this plot, the algorithm can now predict whether it should be a red point or a blue point. If you want to see the source code that generated this figure, you can open the code in the fig_code folder, or you can load it with the %load command. In the next simple example we look at a regression algorithm: fitting a best straight line to a set of data.
from fig_code import plot_linear_regression
plot_linear_regression()
notebooks/02.1-Machine-Learning-Intro.ipynb
palandatarxcom/sklearn_tutorial_cn
bsd-3-clause
This, too, is an example of building a model from data, so the model can be used to generalize to new data. The model is learned from the training data and can be used to predict the outcome for test data: given a point's x coordinate, the model lets us predict the corresponding y coordinate. Again, this may look like a simple example, but it is a basic operation of machine learning algorithms. Data representation in Scikit-learn. Machine learning builds models from data, so we will start with how to represent data in a way the computer can understand. Along the way, we will use matplotlib examples to show how to display the data graphically. In Scikit-learn, the data for most machine learning algorithms is stored in two-dimensional arrays or matrices. These may be numpy arrays, and in some cases scipy.sparse matrices. The size of the arrays...
from IPython.core.display import Image, display

display(Image(filename='images/iris_setosa.jpg'))
print("Iris Setosa\n")
display(Image(filename='images/iris_versicolor.jpg'))
print("Iris Versicolor\n")
display(Image(filename='images/iris_virginica.jpg'))
print("Iris Virginica")
Question: if we want to design an algorithm to tell iris species apart, what might the data be? Remember: we need a 2D array of size [n_samples x n_features]. What does the number of samples refer to? What does the number of features refer to? Remember that the number of features must be fixed for every sample, and that feature i must be a numeric value for every sample. Loading the Iris data with scikit-learn. Scikit-learn has a very straightforward representation of the Iris data, as follows. Features of the Iris dataset: sepal length (cm), sepal width (cm), petal length (cm), petal width (cm). Target classes to predict: Iris Setosa, Iris Versicolour, Iris Virginica. scikit-le...
from sklearn.datasets import load_iris

iris = load_iris()
iris.keys()
n_samples, n_features = iris.data.shape
print((n_samples, n_features))
print(iris.data[0])
print(iris.data.shape)
print(iris.target.shape)
print(iris.target)
print(iris.target_names)
The data is four-dimensional, but we can display two of its dimensions at a time with a simple scatter plot:
import numpy as np
import matplotlib.pyplot as plt

x_index = 0
y_index = 1

# This code uses the iris target names to label the colorbar
formatter = plt.FuncFormatter(lambda i, *args: iris.target_names[int(i)])

plt.scatter(iris.data[:, x_index], iris.data[:, y_index], c=iris.target, cmap=plt.cm.get_cmap('RdYlBu', 3))
plt.colorbar(tic...
Quick exercise: change x_index and y_index in the script above and find the combination that best separates the three classes. This exercise is a preview of the dimensionality-reduction algorithms we will see later. Other data. Datasets come in three kinds: Packaged data: small datasets already bundled with the scikit-learn installation, loaded with sklearn.datasets.load_*. Downloadable data: larger datasets available for download; scikit-learn already includes the plumbing for fetching them, found under sklearn.datasets.fetch_*. Generated data: datasets generated from existing models given a random seed, found under sklearn.datasets.make_*...
from sklearn import datasets

# Type datasets.fetch_<TAB> or datasets.load_<TAB> in IPython to see all possibilities
# datasets.fetch_
# datasets.load_
3.8.1 Sorting out the metadata The main decision to make when building a supermatrix is what metadata will be used to indicate that sequences of several genes belong to the same OTU in the tree. Obvious candidates would be the species name (stored as 'source_organism' if we read a GenBank file), or sample ID, voucher s...
from IPython.display import Image
Image('images/fix_otus.png', width=400)
notebooks/Tutorials/Basic/3.8 Building a supermatrix.ipynb
szitenberg/ReproPhyloVagrant
mit
Our Project has to be updated with the recent changes to the spreadsheet:
pj.correct_metadata_from_file('data/Tetillida_otus_corrected.csv')
Such fixes can also be done programmatically (see section 3.4) 3.8.2 Designing the supermatrix Supermatrices are configured with objects of the class Concatenation. In a Concatenation object we can indicate the following: The name of the concatenation The loci it includes (here we pass locus objects rather than just L...
concat = Concatenation('large_concat', # Any unique string pj.loci, # This is a list of Locus objects 'source_otu', # The values of this qualifier ...
If we print this Concatenation object we get this message:
print(concat)
3.8.3 Building the supermatrix Building the supermatrix has two steps. First we need to mount the Concatenation object onto the Project, where it will be stored in the list pj.concatenations. Second, we need to construct the MultipleSeqAlignment object, which will be stored in the pj.trimmed_alignments dictionary, under...
pj.add_concatenation(concat)
pj.make_concatenation_alignments()
pickle_pj(pj, 'outputs/my_project.pkpj')
Now that this supermatrix is stored as a trimmed alignment in the pj.trimmed_alignments dictionary, we can write it to a file or fetch the MultipleSeqAlignment object, as shown in section 3.7. 3.8.4 Quick reference
# Design a supermatrix
concat = Concatenation('concat_name',
                       loci_list,
                       'otu_qualifier',
                       **kwargs)

# Add it to a project
pj.add_concatenation(concat)

# Build supermatrices based on the Concatenation
# objects in pj.concatenations
pj.make_concatenation_alignments()
1. The dataset We describe next the regression task that we will use in the session. The dataset is an adaptation of the <a href=http://www.dcc.fc.up.pt/~ltorgo/Regression/DataSets.html> STOCK dataset</a>, taken originally from the <a href=http://lib.stat.cmu.edu/> StatLib Repository</a>. The goal of this problem is to...
# SELECT dataset # Available options are 'stock', 'concrete' or 'advertising' ds_name = 'stock' # Let us start by loading the data into the workspace, and visualizing the dimensions of all matrices if ds_name == 'stock': # STOCK DATASET data = scipy.io.loadmat('datasets/stock.mat') X_tr = data['xTrain'] ...
R2.kNN_Regression/regression_knn_student.ipynb
ML4DS/ML4all
mit
1.1. Scatter plots We can get a first rough idea of the regression task by drawing the scatter plot of each of the one-dimensional variables against the target data.
pylab.subplots_adjust(hspace=0.2)
for idx in range(X_tr.shape[1]):
    ax1 = plt.subplot(3, 3, idx+1)
    ax1.plot(X_tr[:, idx], S_tr, '.')
    ax1.get_xaxis().set_ticks([])
    ax1.get_yaxis().set_ticks([])
plt.show()
2. Baseline estimation. Using the average of the training set labels A first very simple method to build the regression model is to use the average of all the target values in the training set as the output of the model, discarding the value of the observation input vector. This approach can be considered as a baseline...
# Mean of all target values in the training set
s_hat = np.mean(S_tr)
print(s_hat)
for any input ${\bf x}$. Exercise 1 Compute the mean square error over training and test sets, for the baseline estimation method.
# We start by defining a function that calculates the average square error def square_error(s, s_est): # Squeeze is used to make sure that s and s_est have the appropriate dimensions. y = np.mean(np.power((s - s_est), 2)) # y = np.mean(np.power((np.squeeze(s) - np.squeeze(s_est)), 2)) return y # Mean s...
Note that in the previous piece of code, function 'square_error' can be used when the second argument is a number instead of a vector with the same length as the first argument. The value will be subtracted from each of the components of the vector provided as the first argument.
if sys.version_info.major == 2:
    Test.assertTrue(np.isclose(MSE_tr, square_error(S_tr, s_hat)), 'Incorrect value for MSE_tr')
    Test.assertTrue(np.isclose(MSE_tst, square_error(S_tst, s_hat)), 'Incorrect value for MSE_tst')
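The scalar-second-argument behaviour described above is plain NumPy broadcasting. A minimal sketch (with a locally defined square_error, since the notebook's own definition is truncated here):

```python
import numpy as np

def square_error(s, s_est):
    # Mean of squared differences; s_est may be a vector or a broadcast scalar
    return np.mean(np.power(s - s_est, 2))

s = np.array([1.0, 2.0, 3.0])
print(square_error(s, s))    # 0.0
print(square_error(s, 2.0))  # scalar is broadcast: mean of [1, 0, 1] = 0.666...
```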
3. Unidimensional regression with the $k$-nn method The principles of the $k$-nn method are the following: For each point where a prediction is to be made, find the $k$ closest neighbors to that point (in the training set) Obtain the estimation averaging the labels corresponding to the selected neighbors The number o...
# We implement unidimensional regression using the k-nn method # In other words, the estimations are to be made using only one variable at a time from scipy import spatial var = 0 # pick a variable (e.g., any value from 0 to 8 for the STOCK dataset) k = 1 # Number of neighbors n_points = 1000 # Number of points in t...
3.1. Evolution of the error with the number of neighbors ($k$) We see that a small $k$ results in a regression curve that exhibits many large oscillations. The curve captures any noise that may be present in the training data, and <i>overfits</i> the training set. On the other hand, picking too large a $k$ (e...
var = 0 k_max = 60 k_max = np.minimum(k_max, X_tr.shape[0]) # k_max cannot be larger than the number of samples #Be careful with the use of range, e.g., range(3) = [0,1,2] and range(1,3) = [1,2] MSEk_tr = [square_error(S_tr, knn_regression(X_tr[:,var], S_tr, X_tr[:,var],k)) for k in range(1, k_max+1)] MS...
As we can see, the error initially decreases, achieving a minimum (in the test set) for some finite value of $k$ ($k\approx 10$ for the STOCK dataset). Increasing the value of $k$ beyond that value results in poorer performance. Exercise 2 Analyze the training MSE for $k=1$. Why is it smaller than for any other $k$? Unde...
k_max = 20 var_performance = [] k_values = [] for var in range(X_tr.shape[1]): MSE_tr = [square_error(S_tr, knn_regression(X_tr[:,var], S_tr, X_tr[:, var], k)) for k in range(1, k_max+1)] MSE_tst = [square_error(S_tst, knn_regression(X_tr[:,var], S_tr, X_tst[:, var], k)) fo...
4. Multidimensional regression with the $k$-nn method In the previous subsection, we have studied the performance of the $k$-nn method when using only one variable. Doing so was convenient, because it allowed us to plot the regression curves in a 2-D plot, and to get some insight about the consequences of modifying the...
k_max = 20 MSE_tr = [square_error(S_tr, knn_regression(X_tr, S_tr, X_tr, k)) for k in range(1, k_max+1)] MSE_tst = [square_error(S_tst, knn_regression(X_tr, S_tr, X_tst, k)) for k in range(1, k_max+1)] plt.plot(np.arange(k_max)+1, MSE_tr,'bo',label='Training square error') plt.plot(np.arange(k_max)+1, MSE_tst,'ro',la...
In this case, we can check that the average test square error is much lower than the error that was achieved when using only one variable, and also far better than the baseline method. It is also interesting to note that in this particular case the best performance is achieved for a small value of $k$, with the error i...
### This fragment of code runs k-nn with M-fold cross validation # Parameters: M = 5 # Number of folds for M-cv k_max = 40 # Maximum value of the k-nn hyperparameter to explore # First we compute the train error curve, that will be useful for comparative visualization. MSE_tr = [square_error(S_tr, knn_regressi...
Exercise 4 Modify the previous code to use only one of the variables in the input dataset - Following a cross-validation approach, select the best value of $k$ for the $k$-nn based on variable 0 only. - Compute the test error for the selected value of $k$. 6. Scikit-learn implementation In practice, most well-known...
# Author: Alexandre Gramfort <alexandre.gramfort@inria.fr> # Fabian Pedregosa <fabian.pedregosa@inria.fr> # # License: BSD 3 clause (C) INRIA ############################################################################### # Generate sample data import numpy as np import matplotlib.pyplot as plt from sklearn im...
GINI Water Vapor Imagery Use MetPy's support for GINI files to read in a water vapor satellite image and plot the data using CartoPy.
import cartopy.feature as cfeature import matplotlib.pyplot as plt import xarray as xr from metpy.cbook import get_test_data from metpy.io import GiniFile from metpy.plots import add_metpy_logo, add_timestamp, colortables # Open the GINI file from the test data f = GiniFile(get_test_data('WEST-CONUS_4km_WV_20151208_2...
v1.1/_downloads/5f6dfc4b913dc349eba9f04f6161b5f1/GINI_Water_Vapor.ipynb
metpy/MetPy
bsd-3-clause
Get a Dataset view of the data (essentially a NetCDF-like interface to the underlying data). Pull out the data and (x, y) coordinates. We use metpy.parse_cf to handle parsing some netCDF Climate and Forecasting (CF) metadata to simplify working with projections.
ds = xr.open_dataset(f)
x = ds.variables['x'][:]
y = ds.variables['y'][:]
dat = ds.metpy.parse_cf('WV')
Plot the image. We use MetPy's xarray/cartopy integration to automatically handle parsing the projection information.
fig = plt.figure(figsize=(10, 12)) add_metpy_logo(fig, 125, 145) ax = fig.add_subplot(1, 1, 1, projection=dat.metpy.cartopy_crs) wv_norm, wv_cmap = colortables.get_with_range('WVCIMSS', 100, 260) wv_cmap.set_under('k') im = ax.imshow(dat[:], cmap=wv_cmap, norm=wv_norm, extent=(x.min(), x.max(), y.min(), ...
Create Fake Index Data
names = ['foo','bar','rf'] dates = pd.date_range(start='2015-01-01',end='2018-12-31', freq=pd.tseries.offsets.BDay()) n = len(dates) rdf = pd.DataFrame( np.zeros((n, len(names))), index = dates, columns = names ) np.random.seed(1) rdf['foo'] = np.random.normal(loc = 0.1/252,scale=0.2/np.sqrt(252),size=n) r...
examples/PTE.ipynb
pmorissette/bt
mit
Build and run Target Strategy I will first run a strategy that rebalances everyday. Then I will use those weights as target to rebalance to whenever the PTE is too high.
selectTheseAlgo = bt.algos.SelectThese(['foo','bar']) # algo to set the weights to 1/vol contributions from each asset # with data over the last 3 months excluding yesterday weighInvVolAlgo = bt.algos.WeighInvVol( lookback=pd.DateOffset(months=3), lag=pd.DateOffset(days=1) ) # algo to rebalance the current w...
Now use the PTE rebalance algo to trigger a rebalance whenever predicted tracking error is greater than 1%.
# algo to fire whenever predicted tracking error is greater than 1% wdf = res_target.get_security_weights() PTE_rebalance_Algo = bt.algos.PTE_Rebalance( 0.01, wdf, lookback=pd.DateOffset(months=3), lag=pd.DateOffset(days=1), covar_method='standard', annualization_factor=252 ) selectTheseAlgo =...
If we plot the total risk contribution of each asset class and divide by the total volatility, we can see that both strategies contribute roughly similar amounts of volatility from both of the securities.
weights_target = res_target.get_security_weights() rolling_cov_target = pdf.loc[:,weights_target.columns].pct_change().rolling(window=3*20).cov()*252 weights_PTE = res_PTE.get_security_weights().loc[:,weights_target.columns] rolling_cov_PTE = pdf.loc[:,weights_target.columns].pct_change().rolling(window=3*20).cov()*25...
Looking at the Target strategy's and the PTE strategy's total risk, they are very similar.
fig, ax = plt.subplots(nrows=1,ncols=1) trc_target.sum(axis=1).plot(ax=ax,label='Target') trc_PTE.sum(axis=1).plot(ax=ax,label='PTE') ax.legend() ax.set_title('Total Risk') ax.plot() transactions = res_PTE.get_transactions() transactions = (transactions['quantity'] * transactions['price']).reset_index() bar_mask = tr...
data
categories = ['alt.atheism', 'soc.religion.christian'] newsgroups_train = fetch_20newsgroups(subset='train', shuffle=True, categories=categories) print(f'number of training samples: {len(newsgroups_train.data)}') example_sample_data = ...
Keras_CNN_newsgroups_text_classification.ipynb
wdbm/Psychedelic_Machine_Learning_in_the_Cenozoic_Era
gpl-3.0
data preparation
labels = newsgroups_train.target texts = newsgroups_train.data max_sequence_length = 1000 max_words = 20000 tokenizer = Tokenizer(num_words=max_words) tokenizer.fit_on_texts(texts) sequences = tokenizer.texts_to_sequences(texts) word_index = tokenizer.word_index #print(sequences[0...
model: convolutional neural network
embedding_layer = Embedding(len(word_index) + 1, word_vector_dimensionality, weights=[embedding_matrix], input_length=max_sequence_length, trainable=False) inputs = Input(shape=(max_sequence_length,), dtype=...
model: convolutional neural network with multiple towers of varying kernel sizes
embedding_layer = Embedding(len(word_index) + 1, word_vector_dimensionality, weights=[embedding_matrix], input_length=max_sequence_length, trainable=False) inputs = Input(shape=(max_sequence_length,), dtype=...
Here the class statement did not create anything; it is just the blueprint for creating "Person" objects. To create an object we need to instantiate the "Person" class.
P1 = Person()
print(type(P1))
Classes+and+Objects.ipynb
vravishankar/Jupyter-Books
mit
Now we have created a "Person" object and assigned it to "P1". We can create any number of objects, but note that there will be only one "Person" class.
# Doc string for class
class Person:
    '''Simple Person Class'''
    pass

print(Person.__doc__)
Attributes & Methods Classes contain attributes (also called fields, members etc.) and methods (a.k.a. functions). Attributes define the characteristics of the object, and methods perform actions on the object. For example, the class definition below has firstname and lastname attributes, and fullname is a method.
class Person: '''Simple Person Class Attributes: firstname: String representing first name of the person lastname: String representing last name of the person ''' def __init__(self,firstname,lastname): '''Initialiser method for Person''' self.firstname = fir...
Inside the class body, we define two functions – these are our object's methods. The first is called __init__, which is a special method. When we call the class object, a new instance of the class is created, and the __init__ method on this new object is immediately executed with all the parameters that we passed to the cl...
class Person: '''Simple Person Class Attributes: firstname: String representing first name of the person lastname: String representing last name of the person ''' TITLES = ['Mr','Mrs','Master'] def __init__(self,title,firstname,lastname): '''Initialiser met...
Please note that when we set an attribute on an instance which has the same name as a class attribute, we are overriding the class attribute with an instance attribute, which will take precedence over it. Class Decorators Class Methods Just like we can define class attributes, which are shared between all instances of...
class ClassGrades:
    def __init__(self, grades):
        self.grades = grades

    @classmethod
    def from_csv(cls, grade_csv_str):
        grades = grade_csv_str.split(', ')
        return cls(grades)

class_grades = ClassGrades.from_csv('92, -15, 99, 101, 77, 65, 100')
print(class_grades.grades)
Static Methods A static method doesn’t have the calling object passed into it as the first parameter. This means that it doesn’t have access to the rest of the class or instance at all. We can call them from an instance or a class object, but they are most commonly called from class objects, like class methods. If we a...
class ClassGrades: def __init__(self, grades): self.grades = grades @classmethod def from_csv(cls, grade_csv_str): grades = grade_csv_str.split(', ') cls.validate(grades) return cls(grades) @staticmethod def validate(grades): for g in grades: i...
The difference between a static method and a class method is: Static method knows nothing about the class and just deals with the parameters. Class method works with the class since its parameter is always the class itself. Property Sometimes we use a method to generate a property of an object dynamically, calculatin...
class Person: '''Simple Person Class Attributes: firstname: String representing first name of the person lastname: String representing last name of the person ''' def __init__(self,firstname,lastname): '''Initialiser method for Person''' self.firstname = fir...
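Since the cell above is truncated, here is a minimal sketch (under the same Person example) of using the @property decorator to compute an attribute dynamically, as the text describes:

```python
class Person:
    def __init__(self, firstname, lastname):
        self.firstname = firstname
        self.lastname = lastname

    @property
    def fullname(self):
        # Computed on every access, but used like a plain attribute (no parentheses)
        return "%s %s" % (self.firstname, self.lastname)

p = Person('Jane', 'Smith')
print(p.fullname)  # Jane Smith
```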
There are also decorators which we can use to define a setter and a deleter for our attribute (a deleter will delete the attribute from our object). The getter, setter and deleter methods must all have the same name.
class Person: '''Simple Person Class Attributes: firstname: String representing first name of the person lastname: String representing last name of the person ''' def __init__(self,firstname,lastname): '''Initialiser method for Person''' self.firstname = fir...
Inspecting an Object
class Person: def __init__(self, name, surname): self.name = name self.surname = surname def fullname(self): return "%s %s" % (self.name, self.surname) jane = Person("Jane", "Smith") print(dir(jane))
Built In Class Attributes
class Employee: 'Common base class for all employees' empCount = 0 def __init__(self, name, salary): self.name = name self.salary = salary Employee.empCount += 1 def displayCount(self): print("Total Employee", Employee.empCount) def displayEmployee(self): print("Name : ...
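Beyond user-defined attributes like `empCount`, every class exposes a handful of built-in attributes that can be inspected directly. A short sketch, reusing a stripped-down `Employee` class:

```python
class Employee:
    'Common base class for all employees'
    empCount = 0

print(Employee.__doc__)     # the class docstring
print(Employee.__name__)    # 'Employee'
print(Employee.__module__)  # typically '__main__' when run as a script
print(Employee.__bases__)   # (<class 'object'>,)
print(Employee.__dict__['empCount'])  # 0
```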
Overriding Magic Methods
import datetime class Person: def __init__(self, name, surname, birthdate, address, telephone, email): self.name = name self.surname = surname self.birthdate = birthdate self.address = address self.telephone = telephone self.email = email def __str__(self): ...
Create Class Using Key Value Arguments
class Student: def __init__(self, **kwargs): for k, v in kwargs.items(): setattr(self, k, v) def __str__(self): attrs = ["{}={}".format(k, v) for (k, v) in self.__dict__.items()] return str(attrs) #classname = self.__class__.__name__ #return "{}: {}".format...
Class Inheritance Inheritance is a way of arranging objects in a hierarchy from the most general to the most specific. An object which inherits from another object is considered to be a subtype of that object. We also often say that a class is a subclass or child class of a class from which it inherits, or that the oth...
# Simple Example of Inheritance class Person: pass # Parent class must be defined inside the paranthesis class Employee(Person): pass e1 = Employee() print(dir(e1)) class Person: def __init__(self,firstname,lastname): self.firstname = firstname self.lastname = lastname ...
Abstract Classes and Interfaces Abstract classes are not intended to be instantiated because all the method definitions are empty – all the insides of the methods must be implemented in a subclass. They serve as a template for suitable objects by defining a list of methods that these objects must implement.
# Abstract Classes class shape2D: def area(self): raise NotImplementedError() class shape3D: def volume(self): raise NotImplementedError() sh1 = shape2D() sh1.area() class shape2D: def area(self): raise NotImplementedError() class shape3D: def volume(...
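The `NotImplementedError` approach above only fails when the missing method is actually called. Python's standard-library `abc` module enforces the template earlier, at instantiation time – a minimal sketch (the `Square` subclass is invented for illustration):

```python
from abc import ABC, abstractmethod

class Shape2D(ABC):
    @abstractmethod
    def area(self):
        ...

class Square(Shape2D):
    def __init__(self, side):
        self.side = side

    def area(self):
        return self.side ** 2

# Shape2D() would raise TypeError: can't instantiate an abstract class
sq = Square(3)
print(sq.area())  # 9
```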
Multiple Inheritance
class Person: pass class Company: pass class Employee(Person,Company): pass print(Employee.mro())
Diamond Problem Multiple inheritance isn’t too difficult to understand if a class inherits from multiple classes which have completely different properties, but things get complicated if two parent classes implement the same method or attribute. If classes B and C inherit from A and class D inherits from B and C, and b...
class X: pass class Y: pass class Z: pass class A(X,Y): pass class B(Y,Z): pass class M(B,A,Z): pass # Output: # [<class '__main__.M'>, <class '__main__.B'>, # <class '__main__.A'>, <class '__main__.X'>, # <class '__main__.Y'>, <class '__main__.Z'>, # <class 'object'>] print(M.mro())
Method Resolution Order (MRO) In the multiple inheritance scenario, any specified attribute is searched first in the current class. If not found, the search continues into the parent classes in a depth-first, left-to-right fashion, without searching the same class twice. So, in the above example of MultiDerived class the search orde...
class Person: def __init__(self): print('Person') class Company: def __init__(self): print('Company') class Employee(Person,Company): def __init__(self): super(Employee,self).__init__() print('Employee') e1=Employee()
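With multiple inheritance, having *every* class call `super().__init__()` walks the whole MRO, so each parent initialiser runs exactly once. A sketch that rewrites the three classes above in that cooperative style:

```python
class Person:
    def __init__(self, **kwargs):
        super().__init__(**kwargs)  # continue along the MRO
        print('Person')

class Company:
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        print('Company')

class Employee(Person, Company):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        print('Employee')

e = Employee()
# MRO: Employee -> Person -> Company -> object
# prints Company, Person, Employee — each __init__ runs exactly once
```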
Mixins If we use multiple inheritance, it is often a good idea for us to design our classes in a way which avoids the kind of ambiguity described above. One way of doing this is to split up optional functionality into mix-ins. A Mix-in is a class which is not intended to stand on its own – it exists to add extra functi...
class Person: def __init__(self, name, surname, number): self.name = name self.surname = surname self.number = number class LearnerMixin: def __init__(self): self.classes = [] def enrol(self, course): self.classes.append(course) class TeacherMixin: def __init...
Now Tutor inherits from one “main” class, Person, and two mix-ins which are not related to Person. Each mix-in is responsible for providing a specific piece of optional functionality. Our mix-ins still have __init__ methods, because each one has to initialise a list of courses (we saw in the previous chapter that we can’...
class Student: def __init__(self, name, student_number): self.name = name self.student_number = student_number self.classes = [] def enrol(self, course_running): self.classes.append(course_running) course_running.add_student(self) class Department: def __init__(sel...
A student can be enrolled in several courses (CourseRunning objects), and a course (CourseRunning) can have multiple students enrolled in it in a particular year, so this is a many-to-many relationship. A student knows about all his or her courses, and a course has a record of all enrolled students, so this is a bidire...
class Person: pass class Employee(Person): pass class Tutor(Employee): pass emp = Employee() print(isinstance(emp, Tutor)) # False print(isinstance(emp, Person)) # True print(isinstance(emp, Employee)) # True print(issubclass(Tutor, Person)) # True
Import convention You can import explicitly from statsmodels.formula.api
from statsmodels.formula.api import ols
examples/notebooks/formulas.ipynb
jseabold/statsmodels
bsd-3-clause
Alternatively, you can just use the formula namespace of the main statsmodels.api.
sm.formula.ols
Or you can use the following convention
import statsmodels.formula.api as smf
These names are just a convenient way to get access to each model's from_formula classmethod. See, for instance
sm.OLS.from_formula
All of the lower case models accept formula and data arguments, whereas upper case ones take endog and exog design matrices. formula accepts a string which describes the model in terms of a patsy formula. data takes a pandas data frame or any other data structure that defines a __getitem__ for variable names like a str...
dta = sm.datasets.get_rdataset("Guerry", "HistData", cache=True) df = dta.data[['Lottery', 'Literacy', 'Wealth', 'Region']].dropna() df.head()
Fit the model:
mod = ols(formula='Lottery ~ Literacy + Wealth + Region', data=df) res = mod.fit() print(res.summary())
Categorical variables Looking at the summary printed above, notice that patsy determined that elements of Region were text strings, so it treated Region as a categorical variable. patsy's default is also to include an intercept, so we automatically dropped one of the Region categories. If Region had been an integer var...
res = ols(formula='Lottery ~ Literacy + Wealth + C(Region)', data=df).fit() print(res.params)
Patsy's more advanced features for categorical variables are discussed in: Patsy: Contrast Coding Systems for categorical variables Operators We have already seen that "~" separates the left-hand side of the model from the right-hand side, and that "+" adds new columns to the design matrix. Removing variables The "-" ...
res = ols(formula='Lottery ~ Literacy + Wealth + C(Region) - 1', data=df).fit() print(res.params)
Multiplicative interactions ":" adds a new column to the design matrix with the interaction of the other two columns. "*" will also include the individual columns that were multiplied together:
res1 = ols(formula='Lottery ~ Literacy : Wealth - 1', data=df).fit() res2 = ols(formula='Lottery ~ Literacy * Wealth - 1', data=df).fit() print(res1.params, '\n') print(res2.params)
Many other things are possible with operators. Please consult the patsy docs to learn more. Functions You can apply vectorized functions to the variables in your model:
res = smf.ols(formula='Lottery ~ np.log(Literacy)', data=df).fit() print(res.params)
Define a custom function:
def log_plus_1(x): return np.log(x) + 1. res = smf.ols(formula='Lottery ~ log_plus_1(Literacy)', data=df).fit() print(res.params)
Any function that is in the calling namespace is available to the formula. Using formulas with models that do not (yet) support them Even if a given statsmodels function does not support formulas, you can still use patsy's formula language to produce design matrices. Those matrices can then be fed to the fitting funct...
import patsy f = 'Lottery ~ Literacy * Wealth' y,X = patsy.dmatrices(f, df, return_type='matrix') print(y[:5]) print(X[:5])
To generate pandas data frames:
f = 'Lottery ~ Literacy * Wealth' y,X = patsy.dmatrices(f, df, return_type='dataframe') print(y[:5]) print(X[:5]) print(sm.OLS(y, X).fit().summary())
Maps with Natural Earth backgrounds I got the background image from Natural Earth; it is the 10 m, Cross Blended Hypso with Relief, Water, Drains, and Ocean Bottom. I changed the colour curves slightly in Gimp, to make the image darker. Adjustment for Natural Earth:
from IPython.display import Image Image(filename='./data/TravelMap/HYP_HR_SR_OB_DR/Adjustment.jpg')
MX_BarrancasDelCobre.ipynb
prisae/blog-notebooks
cc0-1.0
Profile from the viewpoint down to Urique. Not used in the blog; added later.
import numpy as np import matplotlib.pyplot as plt fig_p,ax = plt.subplots(figsize=(tm.cm2in([10.8, 5]))) # Switch off axis and ticks ax.spines['right'].set_visible(False) ax.spines['top'].set_visible(False) ax.spines['left'].set_visible(False) ax.spines['bottom'].set_visible(False) ax.xaxis.set_ticks_position('bottom...
Find MEG reference channel artifacts Use ICA decompositions of MEG reference channels to remove intermittent noise. Many MEG systems have an array of reference channels which are used to detect external magnetic noise. However, standard techniques that use reference channels to remove noise from standard channels often...
# Authors: Jeff Hanna <jeff.hanna@gmail.com> # # License: BSD (3-clause) import mne from mne import io from mne.datasets import refmeg_noise from mne.preprocessing import ICA import numpy as np print(__doc__) data_path = refmeg_noise.data_path()
0.22/_downloads/f781cba191074d5f4243e5933c1e870d/plot_find_ref_artifacts.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Read raw data, cropping to 5 minutes to save memory
raw_fname = data_path + '/sample_reference_MEG_noise-raw.fif' raw = io.read_raw_fif(raw_fname).crop(300, 600).load_data()
Note that even though standard noise removal has already been applied to these data, much of the noise in the reference channels (bottom of the plot) can still be seen in the standard channels.
select_picks = np.concatenate( (mne.pick_types(raw.info, meg=True)[-32:], mne.pick_types(raw.info, meg=False, ref_meg=True))) plot_kwargs = dict( duration=100, order=select_picks, n_channels=len(select_picks), scalings={"mag": 8e-13, "ref_meg": 2e-11}) raw.plot(**plot_kwargs)
The PSD of these data show the noise as clear peaks.
raw.plot_psd(fmax=30)
Run the "together" algorithm.
raw_tog = raw.copy() ica_kwargs = dict( method='picard', fit_params=dict(tol=1e-4), # use a high tol here for speed ) all_picks = mne.pick_types(raw_tog.info, meg=True, ref_meg=True) ica_tog = ICA(n_components=60, allow_ref_meg=True, **ica_kwargs) ica_tog.fit(raw_tog, picks=all_picks) # low threshold (2.0) her...