Transforming and Creating Columns

df.assign(bmi=df['weight'] / (df['height']/100)**2)
df['bmi'] = df['weight'] / (df['height']/100)**2
df
df['something'] = [2,2,None,None,3]
df
Sorting Data Frames

Sort on indexes

df.sort_index(axis=1)
df.sort_index(axis=0, ascending=False)
Sort on values

df.sort_values(by=['something', 'bmi'], ascending=[True, False])
Summarizing

Apply an aggregation function

df.select_dtypes(include=np.number)
df.select_dtypes(include=np.number).agg(np.sum)
df.agg(['count', np.sum, np.mean])
Split-Apply-Combine

We often want to perform subgroup analysis (conditioning by some discrete or categorical variable). This is done with `groupby` followed by an aggregate function. Conceptually, we split the data frame into separate groups, apply the aggregate function to each group separately, then combine the aggregated results back into a single data frame.

df['treatment'] = list('ababa')
df
grouped = df.groupby('treatment')
grouped.get_group('a')
grouped.mean()
Using `agg` with `groupby`

grouped.agg('mean')
grouped.agg(['mean', 'std'])
grouped.agg({'weight': ['mean', 'std'], 'height': ['min', 'max'], 'bmi': lambda x: (x**2).sum()})
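As an aside not in the original notebook: newer pandas versions also support "named aggregation", which gives the output columns explicit names. A minimal sketch reusing the `grouped` object from above:

grouped.agg(mean_weight=('weight', 'mean'), max_height=('height', 'max'))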
Using `transform` with `groupby`

g_mean = grouped[['weight', 'height']].transform(np.mean)
g_mean
g_std = grouped[['weight', 'height']].transform(np.std)
g_std
(df[['weight', 'height']] - g_mean)/g_std
Combining Data Frames

df
df1 = df.iloc[3:].copy()
df1.drop('something', axis=1, inplace=True)
df1
Adding rows

Note that `pandas` aligns by column indexes automatically. (`DataFrame.append` is deprecated in recent pandas; `pd.concat` below is the modern equivalent.)

df.append(df1, sort=False)
pd.concat([df, df1], sort=False)
Adding columns

df.pid
from collections import OrderedDict

df2 = pd.DataFrame(OrderedDict(pid=[649, 533, 400, 600], age=[23,34,45,56]))
df2.pid
df.pid = df.pid.astype('int')
pd.merge(df, df2, on='pid', how='inner')
pd.merge(df, df2, on='pid', how='left')
pd.merge(df, df2, on='pid', how='right')
pd.merge(df, df2, on='pid', how='outer')
Merging on the index

df1 = pd.DataFrame(dict(x=[1,2,3]), index=list('abc'))
df2 = pd.DataFrame(dict(y=[4,5,6]), index=list('abc'))
df3 = pd.DataFrame(dict(z=[7,8,9]), index=list('abc'))
df1
df2
df3
df1.join([df2, df3])
Fixing common DataFrame issues

Multiple variables in a column

df = pd.DataFrame(dict(pid_treat = ['A-1', 'B-2', 'C-1', 'D-2']))
df
df.pid_treat.str.split('-')
df.pid_treat.str.split('-').apply(pd.Series, index=['pid', 'treat'])
Multiple values in a cell

df = pd.DataFrame(dict(pid=['a', 'b', 'c'], vals = [(1,2,3), (4,5,6), (7,8,9)]))
df
df[['t1', 't2', 't3']] = df.vals.apply(pd.Series)
df
df.drop('vals', axis=1, inplace=True)
pd.melt(df, id_vars='pid', value_name='vals').drop('variable', axis=1)
Reshaping Data Frames

Sometimes we need to make rows into columns or vice versa.

Converting multiple columns into a single column

This is often useful if you need to condition on some variable.

url = 'https://raw.githubusercontent.com/uiuc-cse/data-fa14/gh-pages/data/iris.csv'
iris = pd.read_csv(url)
iris.head()
iris.shape
df_iris = pd.melt(iris, id_vars='species')
df_iris.sample(10)
Chaining commands

Sometimes you see this functional style of method chaining, which avoids the need for temporary intermediate variables.

(
iris.
sample(frac=0.2).
filter(regex='s.*').
assign(both=iris.sepal_length + iris.sepal_width).
groupby('species').agg(['mean', 'sum']).
pipe(lambda x: np.around(x, 1))
)
Moving between R and Python in Jupyter

%load_ext rpy2.ipython
import warnings
warnings.simplefilter('ignore', FutureWarning)
iris = %R iris
iris.head()
iris_py = iris.copy()
iris_py.Species = iris_py.Species.str.upper()
%%R -i iris_py -o iris_r
iris_r <- iris_py[1:3,]
iris_r
SLU13: Bias-Variance trade-off & Model Selection -- Examples

1. Model evaluation
   a. [Train-test split](traintest)
   b. [Train-val-test split](val)
   c. [Cross validation](crossval)
2. [Learning curves](learningcurves)

1. Model evaluation

import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import learning_curve
%matplotlib inline
# Create the DataFrame with the data
df = pd.read_csv("data/beer.csv")
# Create a DataFrame with the features (X) and labels (y)
X = df.drop(["IsIPA"], axis=1)
y = df["IsIPA"]
print("Number of entries: ", X.shape[0]) | Number of entries: 1000
[Return to top](top)

Create a training and a test set

from sklearn.model_selection import train_test_split
# Using 20 % of the data as test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
print("Number of training entries: ", X_train.shape[0])
print("Number of test entries: ", X_test.shape[0]) | Number of training entries: 800
Number of test entries: 200
| MIT | S01 - Bootcamp and Binary Classification/SLU13 - Bias-Variance tradeoff & Model Selection /Examples notebook.ipynb | FarhadManiCodes/batch5-students |
[Return to top](top)

Create a training, test and validation set

# Using 20 % as test set and 20 % as validation set
X_train, X_temp, y_train, y_temp = train_test_split(X, y, test_size=0.4)
X_val, X_test, y_val, y_test = train_test_split(X_temp, y_temp, test_size=0.50)
print("Number of training entries: ", X_train.shape[0])
print("Number of validation entries: ", X_val.shape[0])
print("Number of test entries: ", X_test.shape[0]) | Number of training entries: 600
Number of validation entries: 200
Number of test entries: 200
| MIT | S01 - Bootcamp and Binary Classification/SLU13 - Bias-Variance tradeoff & Model Selection /Examples notebook.ipynb | FarhadManiCodes/batch5-students |
[Return to top](top)

Use cross-validation (using a given classifier)

from sklearn.model_selection import cross_val_score
knn = KNeighborsClassifier(n_neighbors=5)
# Use cv to specify the number of folds
scores = cross_val_score(knn, X, y, cv=5)
print(f"Mean of scores: {scores.mean():.3f}")
print(f"Variance of scores: {scores.var():.3f}") | Mean of scores: 0.916
Variance of scores: 0.000
| MIT | S01 - Bootcamp and Binary Classification/SLU13 - Bias-Variance tradeoff & Model Selection /Examples notebook.ipynb | FarhadManiCodes/batch5-students |
[Return to top](top)

2. Learning Curves

Here is a function taken from the sklearn page on learning curves:

def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None,
n_jobs=1, train_sizes=np.linspace(.1, 1.0, 5)):
"""
Generate a simple plot of the test and training learning curve.
Parameters
----------
estimator : object type that implements the "fit" and "predict" methods
An object of that type which is cloned for each validation.
title : string
Title for the chart.
X : array-like, shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and
n_features is the number of features.
y : array-like, shape (n_samples) or (n_samples, n_features), optional
Target relative to X for classification or regression;
None for unsupervised learning.
ylim : tuple, shape (ymin, ymax), optional
Defines minimum and maximum yvalues plotted.
cv : int, cross-validation generator or an iterable, optional
Determines the cross-validation splitting strategy.
Possible inputs for cv are:
- None, to use the default 3-fold cross-validation,
- integer, to specify the number of folds.
- An object to be used as a cross-validation generator.
- An iterable yielding train/test splits.
For integer/None inputs, if ``y`` is binary or multiclass,
:class:`StratifiedKFold` is used. If the estimator is not a classifier
or if ``y`` is neither binary nor multiclass, :class:`KFold` is used.
Refer :ref:`User Guide <cross_validation>` for the various
cross-validators that can be used here.
n_jobs : integer, optional
Number of jobs to run in parallel (default 1).
"""
plt.figure()
plt.title(title)
if ylim is not None:
plt.ylim(*ylim)
plt.xlabel("Training examples")
plt.ylabel("Score")
train_sizes, train_scores, test_scores = learning_curve(
estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
plt.grid()
plt.fill_between(train_sizes, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.1,
color="r")
plt.fill_between(train_sizes, test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std, alpha=0.1, color="g")
plt.plot(train_sizes, train_scores_mean, 'o-', color="r",
label="Training score")
plt.plot(train_sizes, test_scores_mean, 'o-', color="g",
label="Test Set score")
plt.legend(loc="best")
return plt
# and this is how we used it
X = df.select_dtypes(exclude='object').fillna(-1).drop('IsIPA', axis=1)
y = df.IsIPA
clf = DecisionTreeClassifier(random_state=1, max_depth=5)
plot_learning_curve(X=X, y=y, estimator=clf, title='DecisionTreeClassifier');
And remember the internals of what this function is actually doing by knowing how to use the output of the scikit-learn [learning_curve](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.learning_curve.html) function.

# here's where the magic happens! The learning curve function is going
# to take your classifier and your training data and subset the data
train_sizes, train_scores, test_scores = learning_curve(clf, X, y)
# 5 different training set sizes have been selected
# with the smallest being 59 and the largest being 594
# the remaining is used for testing
print('train set sizes', train_sizes)
print('test set sizes', X.shape[0] - train_sizes)
# each row corresponds to a training set size
# each column corresponds to a cross validation fold
# the first row is the highest because it corresponds
# to the smallest training set which means that it's very
# easy for the classifier to overfit and have perfect
# test set predictions while as the test set grows it
# becomes a bit more difficult for this to happen.
train_scores
# The test set scores where again, each row corresponds
# to a train / test set size and each column is a differet
# run with the same train / test sizes
test_scores
# Let's average the scores across each fold so that we can plot them
train_scores_mean = np.mean(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
# this one isn't quite as cool as the other because it doesn't show the variance
# but the fundamentals are still here and it's a much simpler one to understand
learning_curve_df = pd.DataFrame({
'Training score': train_scores_mean,
'Test Set score': test_scores_mean
}, index=train_sizes)
plt.figure()
plt.ylabel("Score")
plt.xlabel("Training examples")
plt.title('Learning Curve')
plt.plot(learning_curve_df);
plt.legend(learning_curve_df.columns, loc="best");
Phi_K advanced tutorial

This notebook guides you through the more advanced functionality of the phik package. This notebook will not cover all the underlying theory, but will just attempt to give an overview of all the options that are available. For a theoretical description the user is referred to our paper.

The package offers functionality on three related topics:

1. Phik correlation matrix
2. Significance matrix
3. Outlier significance matrix

%%capture
# install phik (if not installed yet)
import sys
!"{sys.executable}" -m pip install phik
# import standard packages
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import itertools
import phik
from phik import resources
from phik.binning import bin_data
from phik.decorators import *
from phik.report import plot_correlation_matrix
%matplotlib inline
# if one changes something in the phik-package one can automatically reload the package or module
%load_ext autoreload
%autoreload 2
Load data

A simulated dataset is part of the phik-package. The dataset concerns car insurance data. Load the dataset here:

data = pd.read_csv( resources.fixture('fake_insurance_data.csv.gz') )
data.head()
Specify bin types

The phik-package offers a way to calculate correlations between variables of mixed types. Variable types can be inferred automatically, although we recommend that variable types be specified by the user. Because interval-type variables need to be binned in order to calculate phik and the significance, a list of interval variables is created.

data_types = {'severity': 'interval',
'driver_age':'interval',
'satisfaction':'ordinal',
'mileage':'interval',
'car_size':'ordinal',
'car_use':'ordinal',
'car_color':'categorical',
'area':'categorical'}
interval_cols = [col for col, v in data_types.items() if v=='interval' and col in data.columns]
interval_cols
# interval_cols is used below
Phik correlation matrix

Now let's start calculating the phik correlation between pairs of variables. Note that the original dataset is used as input; the binning of interval variables is done automatically.

phik_overview = data.phik_matrix(interval_cols=interval_cols)
phik_overview
Specify binning per interval variable

Binning can be set per interval variable individually. One can set the number of bins, or specify a list of bin edges. Note that the measured phik correlation is dependent on the chosen binning. The default binning is uniform between the min and max values of the interval variable.

bins = {'mileage':5, 'driver_age':[18,25,35,45,55,65,125]}
phik_overview = data.phik_matrix(interval_cols=interval_cols, bins=bins)
phik_overview
Do not apply noise correction

For low-statistics samples, a correlation larger than zero is often measured when no correlation is actually present in the true underlying distribution. This is not only the case for phik, but also for the pearson correlation and Cramer's phi (see figure 4 in XX). In the phik calculation a noise correction is applied by default, to take into account erroneous correlation values as a result of low statistics. To switch off this noise cancellation (not recommended), do:

phik_overview = data.phik_matrix(interval_cols=interval_cols, noise_correction=False)
phik_overview
Using a different expectation histogram

By default phik compares the 2d distribution of two (binned) variables with the distribution that assumes no dependency between them. One can also change the expected distribution, though. Phi_K is calculated in the same way, but using the other expectation distribution.

from phik.binning import auto_bin_data
from phik.phik import phik_observed_vs_expected_from_rebinned_df, phik_from_hist2d
from phik.statistics import get_dependent_frequency_estimates
# get observed 2d histogram of two variables
cols = ["mileage", "car_size"]
icols = ["mileage"]
observed = data[cols].hist2d(interval_cols=icols).values
# default phik evaluation from observed distribution
phik_value = phik_from_hist2d(observed)
print (phik_value)
# phik evaluation from an observed and expected distribution
expected = get_dependent_frequency_estimates(observed)
phik_value = phik_from_hist2d(observed=observed, expected=expected)
print (phik_value)
# one can also compare two datasets against each other, and get a full phik matrix that way.
# this needs binned datasets though.
# (the user needs to make sure the binnings of both datasets are identical.)
data_binned, _ = auto_bin_data(data, interval_cols=interval_cols)
# here we are comparing data_binned against itself
phik_matrix = phik_observed_vs_expected_from_rebinned_df(data_binned, data_binned)
# all off-diagonal entries are zero, meaning all the 2d distributions of both datasets are identical.
# (by construction the diagonal is one.)
phik_matrix
Statistical significance of the correlation

When assessing correlations it is good practice to evaluate both the correlation and the significance of the correlation: a large correlation may be statistically insignificant, and vice versa a small correlation may be very significant. For instance, scipy.stats.pearsonr returns both the pearson correlation and the p-value. Similarly, the phik package offers functionality to calculate a significance matrix. Significance is defined as:

$$Z = \Phi^{-1}(1-p)\ ;\quad \Phi(z)=\frac{1}{\sqrt{2\pi}} \int_{-\infty}^{z} e^{-t^{2}/2}\,{\rm d}t $$

Several corrections to the 'standard' p-value calculation are taken into account, making the method more robust for low-statistics and sparse-data cases. The user is referred to our paper for more details. Due to the corrections, the significance calculation can take a few seconds.

significance_overview = data.significance_matrix(interval_cols=interval_cols)
significance_overview | _____no_output_____ | Apache-2.0 | phik/notebooks/phik_tutorial_advanced.ipynb | ionicsolutions/PhiK |
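As a quick numerical illustration of the definition above (a sketch using scipy, not part of the phik API):

from scipy.stats import norm
p = 0.0013
Z = norm.isf(p)  # Phi^{-1}(1 - p), the one-sided significance
print(Z)         # ~3.01, i.e. roughly a 3-sigma effect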
Specify binning per interval variable

Binning can be set per interval variable individually. One can set the number of bins, or specify a list of bin edges. Note that the measured phik correlation is dependent on the chosen binning.

bins = {'mileage':5, 'driver_age':[18,25,35,45,55,65,125]}
significance_overview = data.significance_matrix(interval_cols=interval_cols, bins=bins)
significance_overview
Specify significance method

The recommended method to calculate the significance of the correlation is a hybrid approach, which uses the G-test statistic. The number of degrees of freedom and an analytical, empirical description of the $\chi^2$ distribution are used, based on Monte Carlo simulations. This method works well for both high- and low-statistics samples.

Other approaches to calculate the significance are implemented:

- asymptotic: fast, but over-estimates the number of degrees of freedom for low-statistics samples, leading to erroneous values of the significance
- MC: many simulated samples are needed to accurately measure significances larger than 3, making this method computationally expensive.

significance_overview = data.significance_matrix(interval_cols=interval_cols, significance_method='asymptotic')
significance_overview
Simulation method

The chi2 of a contingency table is measured using a comparison of the expected frequencies with the true frequencies in the contingency table. The expected frequencies can be simulated in a variety of ways. The following methods are implemented:

- multinominal: only the total number of records is fixed. (default)
- row_product_multinominal: the row totals are fixed in the sampling.
- col_product_multinominal: the column totals are fixed in the sampling.
- hypergeometric: both the row and column totals are fixed in the sampling. (Note that this type of sampling is only available when row and column totals are integers, which is usually the case.)

# --- Warning, can be slow
# turned off here by default for unit testing purposes
#significance_overview = data.significance_matrix(interval_cols=interval_cols, simulation_method='hypergeometric')
#significance_overview
Expected frequencies

from phik.simulation import sim_2d_data_patefield, sim_2d_product_multinominal, sim_2d_data
inputdata = data[['driver_age', 'area']].hist2d(interval_cols=['driver_age'])
inputdata
Multinominal

simdata = sim_2d_data(inputdata.values)
print('data total:', inputdata.sum().sum())
print('sim total:', simdata.sum().sum())
print('data row totals:', inputdata.sum(axis=0).values)
print('sim row totals:', simdata.sum(axis=0))
print('data column totals:', inputdata.sum(axis=1).values)
print('sim column totals:', simdata.sum(axis=1))

data total: 2000.0
sim total: 2000
data row totals: [ 65. 462. 724. 639. 110.]
sim row totals: [ 75 468 748 586 123]
data column totals: [388. 379. 388. 339. 281. 144. 56. 21. 2. 2.]
sim column totals: [378 380 375 335 281 164 59 25 1 2]
Product multinominal

simdata = sim_2d_product_multinominal(inputdata.values, axis=0)
print('data total:', inputdata.sum().sum())
print('sim total:', simdata.sum().sum())
print('data row totals:', inputdata.sum(axis=0).astype(int).values)
print('sim row totals:', simdata.sum(axis=0).astype(int))
print('data column totals:', inputdata.sum(axis=1).astype(int).values)
print('sim column totals:', simdata.sum(axis=1).astype(int))

data total: 2000.0
sim total: 2000
data row totals: [ 65 462 724 639 110]
sim row totals: [ 65 462 724 639 110]
data column totals: [388 379 388 339 281 144 56 21 2 2]
sim column totals: [399 353 415 349 272 139 45 22 4 2]
Hypergeometric ("patefield")

# patefield simulation needs compiled c++ code.
# only run this if the python binding to the (compiled) patefiled simulation function is found.
try:
from phik.simcore import _sim_2d_data_patefield
CPP_SUPPORT = True
except ImportError:
CPP_SUPPORT = False
if CPP_SUPPORT:
simdata = sim_2d_data_patefield(inputdata.values)
print('data total:', inputdata.sum().sum())
print('sim total:', simdata.sum().sum())
print('data row totals:', inputdata.sum(axis=0).astype(int).values)
print('sim row totals:', simdata.sum(axis=0))
print('data column totals:', inputdata.sum(axis=1).astype(int).values)
    print('sim column totals:', simdata.sum(axis=1))

data total: 2000.0
sim total: 2000
data row totals: [ 65 462 724 639 110]
sim row totals: [ 65 462 724 639 110]
data column totals: [388 379 388 339 281 144 56 21 2 2]
sim column totals: [388 379 388 339 281 144 56 21 2 2]
Outlier significance

The normal pearson correlation between two interval variables is easy to interpret. However, the phik correlation between two variables of mixed type is not always easy to interpret, especially when it concerns categorical variables. Therefore, functionality is provided to detect "outliers": excesses and deficits over the expected frequencies in the contingency table of two variables.

Example 1: mileage versus car_size

For the variable pair mileage - car_size we measured:

$$\phi_k = 0.77 \, ,\quad\quad \mathrm{significance} = 46.3$$

Let's use the outlier significance functionality to gain a better understanding of this significant correlation between mileage and car size.

c0 = 'mileage'
c1 = 'car_size'
tmp_interval_cols = ['mileage']
outlier_signifs, binning_dict = data[[c0,c1]].outlier_significance_matrix(interval_cols=tmp_interval_cols,
retbins=True)
outlier_signifs
Specify binning per interval variable

Binning can be set per interval variable individually. One can set the number of bins, or specify a list of bin edges. Note: in case a bin is created without any records, this bin will be automatically dropped in the phik and (outlier) significance calculations. However, in the outlier significance calculation this will currently lead to an error, as the number of provided bin edges no longer matches the number of bins.

bins = [0,1E2, 1E3, 1E4, 1E5, 1E6]
outlier_signifs, binning_dict = data[[c0,c1]].outlier_significance_matrix(interval_cols=tmp_interval_cols,
bins=bins, retbins=True)
outlier_signifs
Specify binning per interval variable -- dealing with underflow and overflow

When specifying custom bins, a situation can occur where the minimal (maximum) value in the data is smaller (larger) than the minimum (maximum) bin edge. Data points outside the specified range will be collected in the underflow (UF) and overflow (OF) bins. One can choose how to deal with these under/overflow bins by setting the drop_underflow and drop_overflow variables.

Note that the drop_underflow and drop_overflow options are also available for the calculation of the phik matrix and the significance matrix.

bins = [1E2, 1E3, 1E4, 1E5]
outlier_signifs, binning_dict = data[[c0,c1]].outlier_significance_matrix(interval_cols=tmp_interval_cols,
bins=bins, retbins=True,
drop_underflow=False,
drop_overflow=False)
outlier_signifs
Dealing with NaN's in the data

Let's add some missing values to our data:

data.loc[np.random.choice(range(len(data)), size=10), 'car_size'] = np.nan
data.loc[np.random.choice(range(len(data)), size=10), 'mileage'] = np.nan | _____no_output_____ | Apache-2.0 | phik/notebooks/phik_tutorial_advanced.ipynb | ionicsolutions/PhiK |
Sometimes there can be information in the missing values, in which case you might want to consider the NaN values as a separate category. This can be achieved by setting the dropna argument to False.

bins = [1E2, 1E3, 1E4, 1E5]
outlier_signifs, binning_dict = data[[c0,c1]].outlier_significance_matrix(interval_cols=tmp_interval_cols,
bins=bins, retbins=True,
drop_underflow=False,
drop_overflow=False,
dropna=False)
outlier_signifs
Here UF and OF are the underflow and overflow bins of the binned mileage variable, respectively. To just ignore records with missing values, set dropna to True (default).

bins = [1E2, 1E3, 1E4, 1E5]
outlier_signifs, binning_dict = data[[c0,c1]].outlier_significance_matrix(interval_cols=tmp_interval_cols,
bins=bins, retbins=True,
drop_underflow=False,
drop_overflow=False,
dropna=True)
outlier_signifs
Support Vector Classification visualized

To get started, please click on the cell with the code below and hit `Shift + Enter`. This may take a while.

Support Vector Classification (SVC) is a variation of the Support Vector Machine (SVM). SVC is a way of determining a boundary between different labels. It utilizes a kernel method, which helps us to make better decisions on non-linear datasets.

In this demo, we will be able to play with 3 parameters, namely `Sample Size`, `C` (penalty parameter of the cost function), and `gamma` (kernel coefficient).

%matplotlib notebook
import matplotlib.pyplot as plt
import numpy as np
from ipywidgets import *
from IPython.display import display
from sklearn.svm import SVC
plt.style.use('ggplot')
def plot_data(data, labels, sep):
data_x = data[:, 0]
data_y = data[:, 1]
sep_x = sep[:, 0]
sep_y = sep[:, 1]
# plot data
fig = plt.figure(figsize=(4, 4))
pos_inds = np.argwhere(labels == 1)
pos_inds = [s[0] for s in pos_inds]
neg_inds = np.argwhere(labels == -1)
neg_inds = [s[0] for s in neg_inds]
plt.scatter(data_x[pos_inds], data_y[pos_inds], color='b', linewidth=1, marker='o', edgecolor='k', s=50)
plt.scatter(data_x[neg_inds], data_y[neg_inds], color='r', linewidth=1, marker='o', edgecolor='k', s=50)
# plot target
plt.plot(sep_x, sep_y, '--k', linewidth=3)
# clean up plot
plt.yticks([], [])
plt.xlim([-2.1, 2.1])
plt.ylim([-2.1, 2.1])
plt.axis('off')
return plt
def update_plot_data(plt, data, labels, sep):
plt.cla()
plt.clf()
data_x = data[:, 0]
data_y = data[:, 1]
sep_x = sep[:, 0]
sep_y = sep[:, 1]
# plot data
#plt.draw(figsize=(4, 4))
pos_inds = np.argwhere(labels == 1)
pos_inds = [s[0] for s in pos_inds]
neg_inds = np.argwhere(labels == -1)
neg_inds = [s[0] for s in neg_inds]
plt.scatter(data_x[pos_inds], data_y[pos_inds], color='b', linewidth=1, marker='o', edgecolor='k', s=50)
plt.scatter(data_x[neg_inds], data_y[neg_inds], color='r', linewidth=1, marker='o', edgecolor='k', s=50)
# plot target
plt.plot(sep_x, sep_y, '--k', linewidth=3)
# clean up plot
plt.yticks([], [])
plt.xlim([-2.1, 2.1])
plt.ylim([-2.1, 2.1])
plt.axis('off')
# plot approximation
def plot_approx(clf):
# plot classification boundary and color regions appropriately
r = np.linspace(-2.1, 2.1, 500)
s, t = np.meshgrid(r, r)
s = np.reshape(s, (np.size(s), 1))
t = np.reshape(t, (np.size(t), 1))
h = np.concatenate((s, t), 1)
# use classifier to make predictions
z = clf.predict(h)
# reshape predictions for plotting
s.shape = (np.size(r), np.size(r))
t.shape = (np.size(r), np.size(r))
z.shape = (np.size(r), np.size(r))
# show the filled in predicted-regions of the plane
plt.contourf(s, t, z, colors=['r', 'b'], alpha=0.2, levels=range(-1, 2))
# show the classification boundary if it exists
if len(np.unique(z)) > 1:
plt.contour(s, t, z, colors='k', linewidths=3)
def update_plot_approx(plt, clf):
# plot classification boundary and color regions appropriately
r = np.linspace(-2.1, 2.1, 500)
s, t = np.meshgrid(r, r)
s = np.reshape(s, (np.size(s), 1))
t = np.reshape(t, (np.size(t), 1))
h = np.concatenate((s, t), 1)
# use classifier to make predictions
z = clf.predict(h)
# reshape predictions for plotting
s.shape = (np.size(r), np.size(r))
t.shape = (np.size(r), np.size(r))
z.shape = (np.size(r), np.size(r))
plt.cla()
plt.clf()
# show the filled in predicted-regions of the plane
plt.contourf(s, t, z, colors=['r', 'b'], alpha=0.2, levels=range(-1, 2))
# show the classification boundary if it exists
if len(np.unique(z)) > 1:
plt.contour(s, t, z, colors='k', linewidths=3)
def make_circle_classification_dataset(num_pts):
'''
This function generates a random circle dataset with two classes.
You can run this a couple times to get a distribution you like visually.
You can also adjust the num_pts parameter to change the total number of points in the dataset.
'''
# generate points
num_misclass = 5 # total number of misclassified points
s = np.random.rand(num_pts)
data_x = np.cos(2 * np.pi * s)
data_y = np.sin(2 * np.pi * s)
radi = 2 * np.random.rand(num_pts)
data_x = data_x * radi
data_y = data_y * radi
data_x.shape = (len(data_x), 1)
data_y.shape = (len(data_y), 1)
data = np.concatenate((data_x, data_y), axis=1)
# make separator
s = np.linspace(0, 1, 100)
x_f = np.cos(2 * np.pi * s)
y_f = np.sin(2 * np.pi * s)
x_f.shape = (len(x_f), 1)
y_f.shape = (len(y_f), 1)
sep = np.concatenate((x_f, y_f), axis=1)
# make labels and flip a few to show some misclassifications
labels = radi.copy()
ind1 = np.argwhere(labels > 1)
ind1 = [v[0] for v in ind1]
ind2 = np.argwhere(labels <= 1)
ind2 = [v[0] for v in ind2]
labels[ind1] = -1
labels[ind2] = +1
flip = np.random.permutation(num_pts)
flip = flip[:num_misclass]
for i in flip:
labels[i] = (-1) * labels[i]
# return datapoints and labels for study
return data, labels, sep
sample_size = widgets.IntSlider(
value=50,
min=50,
max=1000,
step=1,
description='Sample size: ',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='.1f',
slider_color='white'
)
split_ratio = widgets.FloatSlider(
value=0.2,
min=0,
max=1.0,
step=0.1,
description='Train/Test Split Ratio (0-1): ',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='.1f',
slider_color='white'
)
c = widgets.FloatSlider(
value=0.1,
min=0.1,
max=10.0,
step=0.1,
description='C: ',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='.1f',
slider_color='white'
)
gamma = widgets.FloatSlider(
value=0.1,
min=0.1,
max=10.0,
step=0.1,
description='Gamma: ',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='.1f',
slider_color='white'
)
display(sample_size)
#init plot
data, labels, true_sep = make_circle_classification_dataset(num_pts=sample_size.value)
# preparing the plot
clf = SVC(C=c.value, kernel='rbf', gamma=gamma.value)
# fit classifier
clf.fit(data, labels)
# plot results
fit_plot = plot_data(data, labels, true_sep)
plot_approx(clf)
def on_train_info_change(change):
clf = SVC(C=c.value, kernel='rbf', gamma=gamma.value)
# fit classifier
clf.fit(data, labels)
# plot results
update_plot_data(fit_plot, data, labels, true_sep)
plot_approx(clf)
def on_value_change_sample(change):
global data
global labels
global true_sep
data, labels, true_sep = make_circle_classification_dataset(num_pts=sample_size.value)
update_plot_data(fit_plot, data, labels, true_sep)
clf = SVC(C=c.value,kernel='rbf',gamma=gamma.value)
# fit classifier
clf.fit(data, labels)
# plot results
update_plot_data(fit_plot, data, labels, true_sep)
plot_approx(clf)
sample_size.observe(on_value_change_sample, names='value')
display(c)
display(gamma)
c.observe(on_train_info_change, names='value')
gamma.observe(on_train_info_change, names='value')
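Stripped of the widget plumbing, the core fit/predict step in the demo is only a few lines. A self-contained sketch on synthetic ring data (using scikit-learn's make_circles as a stand-in for the dataset generator above):

from sklearn.datasets import make_circles
from sklearn.svm import SVC

X_demo, y_demo = make_circles(n_samples=200, noise=0.1, factor=0.4, random_state=0)
clf_demo = SVC(C=1.0, kernel='rbf', gamma=2.0)  # the sliders above control C and gamma
clf_demo.fit(X_demo, y_demo)
print(clf_demo.score(X_demo, y_demo))  # training accuracy of the fitted boundary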
import pandas as pd
import numpy as np
values_1 = np.random.randint(10, size=10)
values_2 = np.random.randint(10, size = 10)
print(values_1)
print(values_2)
years = np.arange(2010, 2020)
print(years)
groups = ['A','A','B','A','B','B','C','A','C','C']
len(groups)
df = pd.DataFrame({'group':groups, 'year':years,'value_1': values_1, 'value_2':values_2})
df
df.query('value_1<value_2')
new_col = np.random.randn(10)
df.insert(2, 'new_col',new_col)
df
df['cumsum_2'] = df[['value_2','group']].groupby('group').cumsum()
df
Sample

Sample1 = df.sample(n=3)
Sample1
Sample2 = df.sample(frac=0.5)
Sample2
df['new_col'].where(df['new_col']>0, 0)       # Series.where keeps values where the condition holds, returns a Series
np.where(df['new_col'] > 0, df['new_col'], 0)  # np.where returns a plain NumPy array
isin

years = ['2010','2014','2015']
df[df.year.isin(years)]
df.loc[:2, ['group','year'] ]
df.loc[[1,3,5],['year','value_1']]
df['value_1']
df.value_1.pct_change()
df.value_1.sort_values()
df.value_1.sort_values().pct_change()
df['rank_1'] = df['value_1'].rank( )
df
df.select_dtypes(exclude='int64')
df.replace({'A':'A_1','B':'B_1'})
def color_negative_values(val):
color = 'red' if val < 0 else 'black'
return 'color : %s' %color
df[['new_col']].style.applymap(color_negative_values)
df3 = pd.DataFrame({'A': np.random.randn(10), 'B': np.random.randn(10)})
df3
df3.style.applymap(color_negative_values)
Description

This notebook runs some pre-analyses using DBSCAN to explore the best set of parameters (`min_samples` and `eps`) to cluster the `pca` data version.

Environment variables

from IPython.display import display
import conf
N_JOBS = conf.GENERAL["N_JOBS"]
display(N_JOBS)
%env MKL_NUM_THREADS=$N_JOBS
%env OPEN_BLAS_NUM_THREADS=$N_JOBS
%env NUMEXPR_NUM_THREADS=$N_JOBS
%env OMP_NUM_THREADS=$N_JOBS

env: MKL_NUM_THREADS=2
env: OPEN_BLAS_NUM_THREADS=2
env: NUMEXPR_NUM_THREADS=2
env: OMP_NUM_THREADS=2
Modules loading

%load_ext autoreload
%autoreload 2
from pathlib import Path
import numpy as np
import pandas as pd
from sklearn.neighbors import NearestNeighbors
from sklearn.metrics import pairwise_distances
from sklearn.cluster import DBSCAN
from sklearn.metrics import (
silhouette_score,
calinski_harabasz_score,
davies_bouldin_score,
)
import matplotlib.pyplot as plt
import seaborn as sns
from utils import generate_result_set_name
from clustering.ensembles.utils import generate_ensemble
Global settings

np.random.seed(0)
CLUSTERING_ATTRIBUTES_TO_SAVE = ["n_clusters"]
Data version: pca

INPUT_SUBSET = "pca"
INPUT_STEM = "z_score_std-projection-smultixcan-efo_partial-mashr-zscores"
DR_OPTIONS = {
"n_components": 50,
"svd_solver": "full",
"random_state": 0,
}
input_filepath = Path(
conf.RESULTS["DATA_TRANSFORMATIONS_DIR"],
INPUT_SUBSET,
generate_result_set_name(
DR_OPTIONS, prefix=f"{INPUT_SUBSET}-{INPUT_STEM}-", suffix=".pkl"
),
).resolve()
display(input_filepath)
assert input_filepath.exists(), "Input file does not exist"
input_filepath_stem = input_filepath.stem
display(input_filepath_stem)
data = pd.read_pickle(input_filepath)
data.shape
data.head()
Tests different k values (k-NN)

# `k_values` is the full range of k for kNN, whereas `k_values_to_explore` is a
# subset that will be explored in this notebook. If the analysis works, then
# `k_values` and `eps_range_per_k` below are copied to the notebook that will
# produce the final DBSCAN runs (`../002_[...]-dbscan-....ipynb`)
k_values = np.arange(2, 125 + 1, 1)
k_values_to_explore = (2, 5, 10, 15, 20, 30, 40, 50, 75, 100, 125)
results = {}
for k in k_values_to_explore:
nbrs = NearestNeighbors(n_neighbors=k, n_jobs=N_JOBS).fit(data)
distances, indices = nbrs.kneighbors(data)
results[k] = (distances, indices)
eps_range_per_k = {
k: (10, 20)
if k < 5
else (11, 25)
if k < 10
else (12, 30)
if k < 15
else (13, 35)
if k < 20
else (14, 40)
for k in k_values
}
eps_range_per_k_to_explore = {k: eps_range_per_k[k] for k in k_values_to_explore}
for k, (distances, indices) in results.items():
d = distances[:, 1:].mean(axis=1)
d = np.sort(d)
fig, ax = plt.subplots()
plt.plot(d)
r = eps_range_per_k_to_explore[k]
plt.hlines(r[0], 0, data.shape[0], color="red")
plt.hlines(r[1], 0, data.shape[0], color="red")
plt.xlim((3000, data.shape[0]))
plt.title(f"k={k}")
display(fig)
    plt.close(fig)
Extended test

Generate clusterers

CLUSTERING_OPTIONS = {}
# K_RANGE is the min_samples parameter in DBSCAN (sklearn)
CLUSTERING_OPTIONS["K_RANGE"] = k_values_to_explore
CLUSTERING_OPTIONS["EPS_RANGE_PER_K"] = eps_range_per_k_to_explore
CLUSTERING_OPTIONS["EPS_STEP"] = 33
CLUSTERING_OPTIONS["METRIC"] = "euclidean"
display(CLUSTERING_OPTIONS)
CLUSTERERS = {}
idx = 0
for k in CLUSTERING_OPTIONS["K_RANGE"]:
eps_range = CLUSTERING_OPTIONS["EPS_RANGE_PER_K"][k]
eps_values = np.linspace(eps_range[0], eps_range[1], CLUSTERING_OPTIONS["EPS_STEP"])
for eps in eps_values:
clus = DBSCAN(min_samples=k, eps=eps, metric="precomputed", n_jobs=N_JOBS)
method_name = type(clus).__name__
CLUSTERERS[f"{method_name} #{idx}"] = clus
idx = idx + 1
display(len(CLUSTERERS))
_iter = iter(CLUSTERERS.items())
display(next(_iter))
display(next(_iter))
clustering_method_name = method_name
display(clustering_method_name) | _____no_output_____ | BSD-2-Clause-Patent | nbs/12_cluster_analysis/pre_analysis/06_02-dbscan-pca.ipynb | greenelab/phenoplier |
Generate ensemble

data_dist = pairwise_distances(data, metric=CLUSTERING_OPTIONS["METRIC"])
data_dist.shape
pd.Series(data_dist.flatten()).describe().apply(str)
ensemble = generate_ensemble(
data_dist,
CLUSTERERS,
attributes=CLUSTERING_ATTRIBUTES_TO_SAVE,
)
ensemble.shape
ensemble.head()
_tmp = ensemble["n_clusters"].value_counts()
display(_tmp)
assert _tmp.index[0] == 3
assert _tmp.loc[3] == 22
ensemble_stats = ensemble["n_clusters"].describe()
display(ensemble_stats)
# number of noisy points
_tmp = ensemble.copy()
_tmp = _tmp.assign(n_noisy=ensemble["partition"].apply(lambda x: np.isnan(x).sum()))
_tmp_stats = _tmp["n_noisy"].describe()
display(_tmp_stats)
assert _tmp_stats["min"] > 5
assert _tmp_stats["max"] < 600
assert 90 < _tmp_stats["mean"] < 95
Testing

assert ensemble_stats["min"] > 1
assert not ensemble["n_clusters"].isna().any()
# all partitions have the right size
assert np.all(
[part["partition"].shape[0] == data.shape[0] for idx, part in ensemble.iterrows()]
)
Add clustering quality measures

def _remove_nans(data, part):
not_nan_idx = ~np.isnan(part)
return data.iloc[not_nan_idx], part[not_nan_idx]
def _apply_func(func, data, part):
no_nan_data, no_nan_part = _remove_nans(data, part)
return func(no_nan_data, no_nan_part)
ensemble = ensemble.assign(
si_score=ensemble["partition"].apply(
lambda x: _apply_func(silhouette_score, data, x)
),
ch_score=ensemble["partition"].apply(
lambda x: _apply_func(calinski_harabasz_score, data, x)
),
db_score=ensemble["partition"].apply(
lambda x: _apply_func(davies_bouldin_score, data, x)
),
)
ensemble.shape
ensemble.head()
Cluster quality

with pd.option_context("display.max_rows", None, "display.max_columns", None):
_df = ensemble.groupby(["n_clusters"]).mean()
display(_df)
with sns.plotting_context("talk", font_scale=0.75), sns.axes_style(
"whitegrid", {"grid.linestyle": "--"}
):
fig = plt.figure(figsize=(14, 6))
ax = sns.pointplot(data=ensemble, x="n_clusters", y="si_score")
ax.set_ylabel("Silhouette index\n(higher is better)")
ax.set_xlabel("Number of clusters ($k$)")
ax.set_xticklabels(ax.get_xticklabels(), rotation=45)
plt.grid(True)
plt.tight_layout()
with sns.plotting_context("talk", font_scale=0.75), sns.axes_style(
"whitegrid", {"grid.linestyle": "--"}
):
fig = plt.figure(figsize=(14, 6))
ax = sns.pointplot(data=ensemble, x="n_clusters", y="ch_score")
ax.set_ylabel("Calinski-Harabasz index\n(higher is better)")
ax.set_xlabel("Number of clusters ($k$)")
ax.set_xticklabels(ax.get_xticklabels(), rotation=45)
plt.grid(True)
plt.tight_layout()
with sns.plotting_context("talk", font_scale=0.75), sns.axes_style(
"whitegrid", {"grid.linestyle": "--"}
):
fig = plt.figure(figsize=(14, 6))
ax = sns.pointplot(data=ensemble, x="n_clusters", y="db_score")
ax.set_ylabel("Davies-Bouldin index\n(lower is better)")
ax.set_xlabel("Number of clusters ($k$)")
ax.set_xticklabels(ax.get_xticklabels(), rotation=45)
plt.grid(True)
    plt.tight_layout()
Generic start notebook, course on web scraping

*By Olav ten Bosch, Dick Windmeijer and Marijn Detiger*

Documentation: [Requests.py](http://docs.python-requests.org), [Beautifulsoup.py](https://www.crummy.com/software/BeautifulSoup/bs4/doc/)

# Imports:
import requests
from bs4 import BeautifulSoup
import time # for sleeping between multiple requests
#Issue a request:
#r1 = requests.get('http://testing-ground.scraping.pro')
#print(r1.status_code, r1.headers['content-type'], r1.encoding, r1.text)
#Issue a request with dedicated user-agent string:
#headers = {'user-agent': 'myOwnScraper'}
#r = requests.get('http://testing-ground.scraping.pro', headers=headers)
# Request with parameters:
#pars = {'products': 2, 'years': 2}
#r2 = requests.get('http://testing-ground.scraping.pro/table?', params=pars)
#print(r2.url)
# Soup:
#soup = BeautifulSoup(r2.text, 'lxml')
#print(soup.title.text)
#soup.find_all("div", class_="product")
# One second idle time between requests:
#time.sleep(1)
Multi-Investment Optimization

In the following, we show how PyPSA can deal with multi-investment optimization, also known as multi-horizon optimization. Here, the total set of snapshots is divided into investment periods. For the model, this translates into multi-indexed snapshots with the first level being the investment period and the second level the corresponding time steps. In each investment period new assets may be added to the system. On the other hand, assets may only operate as long as allowed by their lifetime.

In contrast to the ordinary optimisation, the following concepts have to be taken into account (a small sketch of the build_year/lifetime logic follows the imports below).

1. `investment_periods` - `pypsa.Network` attribute. This is the set of periods which specify when new assets may be built. In the current implementation, these have to be the same as the first-level values in the `snapshots` attribute.
2. `investment_period_weightings` - `pypsa.Network` attribute. These specify the weighting of each period in the objective function.
3. `build_year` - general component attribute. A single asset may only be built when the build year is smaller or equal to the current investment period. For example, assets with a build year `2029` are considered in the investment period `2030`, but not in the period `2025`.
4. `lifetime` - general component attribute. An asset is only considered in an investment period if it is present at the beginning of that investment period. For example, an asset with build year `2029` and lifetime `30` is considered in the investment period `2055` but not in the period `2060`.

In the following, we set up a three-node network with generators, lines and storages and run an optimisation covering the time span from 2020 to 2050, where each decade is one investment period.

import pypsa
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt | _____no_output_____ | MIT | examples/notebooks/multi-investment-optimisation.ipynb | p-glaum/PyPSA |
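A tiny illustration (ours, not part of PyPSA) of the activity window implied by `build_year` and `lifetime`, matching the examples in the list above:

# an asset is active in a period if build_year <= period < build_year + lifetime
build_year, lifetime = 2029, 30
is_active = lambda period: build_year <= period < build_year + lifetime
print([(p, is_active(p)) for p in (2025, 2030, 2055, 2060)])
# -> [(2025, False), (2030, True), (2055, True), (2060, False)]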
We set up the network with investment periods and snapshots.

n = pypsa.Network()
years = [2020, 2030, 2040, 2050]
freq = "24"
snapshots = pd.DatetimeIndex([])
for year in years:
period = pd.date_range(
start="{}-01-01 00:00".format(year),
freq="{}H".format(freq),
periods=8760 / float(freq),
)
snapshots = snapshots.append(period)
# convert to multiindex and assign to network
n.snapshots = pd.MultiIndex.from_arrays([snapshots.year, snapshots])
n.investment_periods = years
n.snapshot_weightings
n.investment_periods
Set the years and objective weighting per investment period. For the objective weighting, we consider a discount rate defined by

$$ D(t) = \dfrac{1}{(1+r)^t} $$

where $r$ is the discount rate. For each period we sum up all discount rates of the corresponding years, which gives us the effective objective weighting. For example, the first period (10 years at $r = 0.01$) gets weighting $\sum_{t=0}^{9} 1.01^{-t} \approx 9.57$.

n.investment_period_weightings["years"] = list(np.diff(years)) + [10]
r = 0.01
T = 0
for period, nyears in n.investment_period_weightings.years.items():
discounts = [(1 / (1 + r) ** t) for t in range(T, T + nyears)]
n.investment_period_weightings.at[period, "objective"] = sum(discounts)
T += nyears
n.investment_period_weightings
Add the components

for i in range(3):
n.add("Bus", "bus {}".format(i))
# add three lines in a ring
n.add(
"Line",
"line 0->1",
bus0="bus 0",
bus1="bus 1",
)
n.add(
"Line",
"line 1->2",
bus0="bus 1",
bus1="bus 2",
capital_cost=10,
build_year=2030,
)
n.add(
"Line",
"line 2->0",
bus0="bus 2",
bus1="bus 0",
)
n.lines["x"] = 0.0001
n.lines["s_nom_extendable"] = True
n.lines
# add some generators
p_nom_max = pd.Series(
(np.random.uniform() for sn in range(len(n.snapshots))),
index=n.snapshots,
name="generator ext 2020",
)
# renewable (can operate 2020, 2030)
n.add(
"Generator",
"generator ext 0 2020",
bus="bus 0",
p_nom=50,
build_year=2020,
lifetime=20,
marginal_cost=2,
capital_cost=1,
p_max_pu=p_nom_max,
carrier="solar",
p_nom_extendable=True,
)
# can operate 2040, 2050
n.add(
"Generator",
"generator ext 0 2040",
bus="bus 0",
p_nom=50,
build_year=2040,
lifetime=11,
marginal_cost=25,
capital_cost=10,
carrier="OCGT",
p_nom_extendable=True,
)
# can operate in 2040
n.add(
"Generator",
"generator fix 1 2040",
bus="bus 1",
p_nom=50,
build_year=2040,
lifetime=10,
carrier="CCGT",
marginal_cost=20,
capital_cost=1,
)
n.generators
n.add(
"StorageUnit",
"storageunit non-cyclic 2030",
bus="bus 2",
p_nom=0,
capital_cost=2,
build_year=2030,
lifetime=21,
cyclic_state_of_charge=False,
p_nom_extendable=False,
)
n.add(
"StorageUnit",
"storageunit periodic 2020",
bus="bus 2",
p_nom=0,
capital_cost=1,
build_year=2020,
lifetime=21,
cyclic_state_of_charge=True,
cyclic_state_of_charge_per_period=True,
p_nom_extendable=True,
)
n.storage_units
Add the load

load_var = pd.Series(
100 * np.random.rand(len(n.snapshots)), index=n.snapshots, name="load"
)
n.add("Load", "load 2", bus="bus 2", p_set=load_var)
load_fix = pd.Series(75, index=n.snapshots, name="load")
n.add("Load", "load 1", bus="bus 1", p_set=load_fix) | _____no_output_____ | MIT | examples/notebooks/multi-investment-optimisation.ipynb | p-glaum/PyPSA |
Run the optimization

n.loads_t.p_set
n.lopf(pyomo=False, multi_investment_periods=True)
c = "Generator"
df = pd.concat(
{
period: n.get_active_assets(c, period) * n.df(c).p_nom_opt
for period in n.investment_periods
},
axis=1,
)
df.T.plot.bar(
stacked=True,
edgecolor="white",
width=1,
ylabel="Capacity",
xlabel="Investment Period",
rot=0,
figsize=(10, 5),
)
plt.tight_layout()
df = n.generators_t.p.groupby(level=0).sum().T  # sum(level=0) is removed in recent pandas
df.T.plot.bar(
stacked=True,
edgecolor="white",
width=1,
ylabel="Generation",
xlabel="Investment Period",
rot=0,
figsize=(10, 5),
)
Intro

**This is Lesson 3 in the [Deep Learning](https://www.kaggle.com/education/machine-learning) track**

At the end of this lesson, you will be able to write TensorFlow and Keras code to use one of the best models in computer vision.

Lesson

from IPython.display import YouTubeVideo
YouTubeVideo('sDG5tPtsbSA', width=800, height=450)
Sample Code

Choose Images to Work With

from os.path import join
image_dir = '../input/dog-breed-identification/train/'
img_paths = [join(image_dir, filename) for filename in
['0246f44bb123ce3f91c939861eb97fb7.jpg',
'84728e78632c0910a69d33f82e62638c.jpg',
'8825e914555803f4c67b26593c9d5aff.jpg',
             '91a5e8db15bccfb6cfa2df5e8b95ec03.jpg']]
Function to Read and Prep Images for Modeling

import numpy as np
from tensorflow.python.keras.applications.resnet50 import preprocess_input
from tensorflow.python.keras.preprocessing.image import load_img, img_to_array
image_size = 224
def read_and_prep_images(img_paths, img_height=image_size, img_width=image_size):
imgs = [load_img(img_path, target_size=(img_height, img_width)) for img_path in img_paths]
img_array = np.array([img_to_array(img) for img in imgs])
output = preprocess_input(img_array)
    return(output)
Create Model with Pre-Trained Weights File. Make Predictions

from tensorflow.python.keras.applications import ResNet50
my_model = ResNet50(weights='../input/resnet50/resnet50_weights_tf_dim_ordering_tf_kernels.h5')
test_data = read_and_prep_images(img_paths)
preds = my_model.predict(test_data)
Visualize Predictions

from learntools.deep_learning.decode_predictions import decode_predictions
from IPython.display import Image, display
most_likely_labels = decode_predictions(preds, top=3, class_list_path='../input/resnet50/imagenet_class_index.json')
for i, img_path in enumerate(img_paths):
display(Image(img_path))
    print(most_likely_labels[i])
Rhetorical relations classification used in tree building: ESIM

Prepare data and model-related scripts.
Evaluate models.
Make and evaluate ensembles for ESIM and BiMPM models / ESIM and feature-based models.

Output:
- ``models/relation_predictor_esim/*``

%load_ext autoreload
%autoreload 2
import os
import glob
import pandas as pd
import numpy as np
import pickle
from utils.file_reading import read_edus, read_gold, read_negative, read_annotation
Make a directory

MODEL_PATH = 'models/label_predictor_esim'
! mkdir $MODEL_PATH
TRAIN_FILE_PATH = os.path.join(MODEL_PATH, 'nlabel_cf_train.tsv')
DEV_FILE_PATH = os.path.join(MODEL_PATH, 'nlabel_cf_dev.tsv')
TEST_FILE_PATH = os.path.join(MODEL_PATH, 'nlabel_cf_test.tsv')

mkdir: cannot create directory ‘models/label_predictor_esim’: File exists
Prepare train/test sets

IN_PATH = 'data_labeling'
train_samples = pd.read_pickle(os.path.join(IN_PATH, 'train_samples.pkl'))
dev_samples = pd.read_pickle(os.path.join(IN_PATH, 'dev_samples.pkl'))
test_samples = pd.read_pickle(os.path.join(IN_PATH, 'test_samples.pkl'))
counts = train_samples['relation'].value_counts(normalize=False).values
NUMBER_CLASSES = len(counts)
print("number of classes:", NUMBER_CLASSES)
print("class weights:")
np.round(counts.min() / counts, decimals=6)
counts = train_samples['relation'].value_counts()
counts
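These inverse-frequency weights (printed above) are what the `class_weights` argument of the `CustomESIM` model defined below is meant to receive. A sketch of extracting them as a plain list (the exact plumbing into the AllenNLP training config is an assumption):

class_weights = list(np.round(counts.min() / counts, decimals=6))
class_weights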
import razdel
def tokenize(text):
result = ' '.join([tok.text for tok in razdel.tokenize(text)])
return result
train_samples['snippet_x'] = train_samples.snippet_x.map(tokenize)
train_samples['snippet_y'] = train_samples.snippet_y.map(tokenize)
dev_samples['snippet_x'] = dev_samples.snippet_x.map(tokenize)
dev_samples['snippet_y'] = dev_samples.snippet_y.map(tokenize)
test_samples['snippet_x'] = test_samples.snippet_x.map(tokenize)
test_samples['snippet_y'] = test_samples.snippet_y.map(tokenize)
train_samples = train_samples.reset_index()
train_samples[['relation', 'snippet_x', 'snippet_y', 'index']].to_csv(TRAIN_FILE_PATH, sep='\t', header=False, index=False)
dev_samples = dev_samples.reset_index()
dev_samples[['relation', 'snippet_x', 'snippet_y', 'index']].to_csv(DEV_FILE_PATH, sep='\t', header=False, index=False)
test_samples = test_samples.reset_index()
test_samples[['relation', 'snippet_x', 'snippet_y', 'index']].to_csv(TEST_FILE_PATH, sep='\t', header=False, index=False) | _____no_output_____ | MIT | src/maintenance/3.5_relations_classification_esim.ipynb | tchewik/isanlp_rst |
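The column order written here (label, then the two text snippets, then the index) is assumed to be what the ``quora_paraphrase`` dataset reader in the config below expects; a quick way to eyeball one row of the result: | ! head -1 models/label_predictor_esim/nlabel_cf_train.tsv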
Modify model (Add F1, concatenated encoding) | %%writefile models/bimpm_custom_package/model/esim.py
from typing import Dict, List, Any, Optional
import numpy
import torch
from allennlp.common.checks import check_dimensions_match
from allennlp.data import TextFieldTensors, Vocabulary
from allennlp.models.model import Model
from allennlp.modules import FeedForward, InputVariationalDropout
from allennlp.modules.matrix_attention.matrix_attention import MatrixAttention
from allennlp.modules import Seq2SeqEncoder, TextFieldEmbedder
from allennlp.nn import InitializerApplicator
from allennlp.nn.util import (
get_text_field_mask,
masked_softmax,
weighted_sum,
masked_max,
)
from allennlp.training.metrics import CategoricalAccuracy, F1Measure
@Model.register("custom_esim")
class CustomESIM(Model):
"""
This `Model` implements the ESIM sequence model described in [Enhanced LSTM for Natural Language Inference]
(https://api.semanticscholar.org/CorpusID:34032948) by Chen et al., 2017.
    Registered as a `Model` with name "custom_esim".
# Parameters
vocab : `Vocabulary`
text_field_embedder : `TextFieldEmbedder`
Used to embed the `premise` and `hypothesis` `TextFields` we get as input to the
model.
encoder : `Seq2SeqEncoder`
Used to encode the premise and hypothesis.
matrix_attention : `MatrixAttention`
This is the attention function used when computing the similarity matrix between encoded
words in the premise and words in the hypothesis.
projection_feedforward : `FeedForward`
The feedforward network used to project down the encoded and enhanced premise and hypothesis.
inference_encoder : `Seq2SeqEncoder`
Used to encode the projected premise and hypothesis for prediction.
output_feedforward : `FeedForward`
Used to prepare the concatenated premise and hypothesis for prediction.
output_logit : `FeedForward`
This feedforward network computes the output logits.
dropout : `float`, optional (default=`0.5`)
Dropout percentage to use.
initializer : `InitializerApplicator`, optional (default=`InitializerApplicator()`)
Used to initialize the model parameters.
"""
def __init__(
self,
vocab: Vocabulary,
text_field_embedder: TextFieldEmbedder,
encoder: Seq2SeqEncoder,
matrix_attention: MatrixAttention,
projection_feedforward: FeedForward,
inference_encoder: Seq2SeqEncoder,
output_feedforward: FeedForward,
output_logit: FeedForward,
encode_together: bool = False,
dropout: float = 0.5,
class_weights: list = [],
initializer: InitializerApplicator = InitializerApplicator(),
**kwargs,
) -> None:
super().__init__(vocab, **kwargs)
self._text_field_embedder = text_field_embedder
self._encoder = encoder
self._matrix_attention = matrix_attention
self._projection_feedforward = projection_feedforward
self._inference_encoder = inference_encoder
if dropout:
self.dropout = torch.nn.Dropout(dropout)
self.rnn_input_dropout = InputVariationalDropout(dropout)
else:
self.dropout = None
self.rnn_input_dropout = None
self._output_feedforward = output_feedforward
self._output_logit = output_logit
self.encode_together = encode_together
self._num_labels = vocab.get_vocab_size(namespace="labels")
check_dimensions_match(
text_field_embedder.get_output_dim(),
encoder.get_input_dim(),
"text field embedding dim",
"encoder input dim",
)
check_dimensions_match(
encoder.get_output_dim() * 4,
projection_feedforward.get_input_dim(),
"encoder output dim",
"projection feedforward input",
)
check_dimensions_match(
projection_feedforward.get_output_dim(),
inference_encoder.get_input_dim(),
"proj feedforward output dim",
"inference lstm input dim",
)
self.metrics = {"accuracy": CategoricalAccuracy()}
if class_weights:
self.class_weights = class_weights
else:
self.class_weights = [1.] * self._num_labels  # uniform default: one weight per label
for _class in range(len(self.class_weights)):
self.metrics.update({
f"f1_rel{_class}": F1Measure(_class),
})
self._loss = torch.nn.CrossEntropyLoss(weight=torch.FloatTensor(self.class_weights))
initializer(self)
def forward( # type: ignore
self,
premise: TextFieldTensors,
hypothesis: TextFieldTensors,
label: torch.IntTensor = None,
metadata: List[Dict[str, Any]] = None,
) -> Dict[str, torch.Tensor]:
"""
# Parameters
premise : `TextFieldTensors`
From a `TextField`
hypothesis : `TextFieldTensors`
From a `TextField`
label : `torch.IntTensor`, optional (default = `None`)
From a `LabelField`
metadata : `List[Dict[str, Any]]`, optional (default = `None`)
Metadata containing the original tokenization of the premise and
hypothesis with 'premise_tokens' and 'hypothesis_tokens' keys respectively.
# Returns
An output dictionary consisting of:
label_logits : `torch.FloatTensor`
A tensor of shape `(batch_size, num_labels)` representing unnormalised log
probabilities of the entailment label.
label_probs : `torch.FloatTensor`
A tensor of shape `(batch_size, num_labels)` representing probabilities of the
entailment label.
loss : `torch.FloatTensor`, optional
A scalar loss to be optimised.
"""
embedded_premise = self._text_field_embedder(premise)
embedded_hypothesis = self._text_field_embedder(hypothesis)
premise_mask = get_text_field_mask(premise)
hypothesis_mask = get_text_field_mask(hypothesis)
# apply dropout for LSTM
if self.rnn_input_dropout:
embedded_premise = self.rnn_input_dropout(embedded_premise)
embedded_hypothesis = self.rnn_input_dropout(embedded_hypothesis)
# encode premise and hypothesis
encoded_premise = self._encoder(embedded_premise, premise_mask)
encoded_hypothesis = self._encoder(embedded_hypothesis, hypothesis_mask)
# Shape: (batch_size, premise_length, hypothesis_length)
similarity_matrix = self._matrix_attention(encoded_premise, encoded_hypothesis)
# Shape: (batch_size, premise_length, hypothesis_length)
p2h_attention = masked_softmax(similarity_matrix, hypothesis_mask)
# Shape: (batch_size, premise_length, embedding_dim)
attended_hypothesis = weighted_sum(encoded_hypothesis, p2h_attention)
# Shape: (batch_size, hypothesis_length, premise_length)
h2p_attention = masked_softmax(similarity_matrix.transpose(1, 2).contiguous(), premise_mask)
# Shape: (batch_size, hypothesis_length, embedding_dim)
attended_premise = weighted_sum(encoded_premise, h2p_attention)
# the "enhancement" layer
premise_enhanced = torch.cat(
[
encoded_premise,
attended_hypothesis,
encoded_premise - attended_hypothesis,
encoded_premise * attended_hypothesis,
],
dim=-1,
)
hypothesis_enhanced = torch.cat(
[
encoded_hypothesis,
attended_premise,
encoded_hypothesis - attended_premise,
encoded_hypothesis * attended_premise,
],
dim=-1,
)
# The projection layer down to the model dimension. Dropout is not applied before
# projection.
projected_enhanced_premise = self._projection_feedforward(premise_enhanced)
projected_enhanced_hypothesis = self._projection_feedforward(hypothesis_enhanced)
# Run the inference layer
if self.rnn_input_dropout:
projected_enhanced_premise = self.rnn_input_dropout(projected_enhanced_premise)
projected_enhanced_hypothesis = self.rnn_input_dropout(projected_enhanced_hypothesis)
v_ai = self._inference_encoder(projected_enhanced_premise, premise_mask)
v_bi = self._inference_encoder(projected_enhanced_hypothesis, hypothesis_mask)
# The pooling layer -- max and avg pooling.
# (batch_size, model_dim)
v_a_max = masked_max(v_ai, premise_mask.unsqueeze(-1), dim=1)
v_b_max = masked_max(v_bi, hypothesis_mask.unsqueeze(-1), dim=1)
v_a_avg = torch.sum(v_ai * premise_mask.unsqueeze(-1), dim=1) / torch.sum(
premise_mask, 1, keepdim=True
)
v_b_avg = torch.sum(v_bi * hypothesis_mask.unsqueeze(-1), dim=1) / torch.sum(
hypothesis_mask, 1, keepdim=True
)
# Now concat
# (batch_size, model_dim * 2 * 4)
v_all = torch.cat([v_a_avg, v_a_max, v_b_avg, v_b_max], dim=1)
# the final MLP -- apply dropout to input, and MLP applies to output & hidden
if self.dropout:
v_all = self.dropout(v_all)
output_hidden = self._output_feedforward(v_all)
label_logits = self._output_logit(output_hidden)
label_probs = torch.nn.functional.softmax(label_logits, dim=-1)
output_dict = {"label_logits": label_logits, "label_probs": label_probs}
if label is not None:
loss = self._loss(label_logits, label.long().view(-1))
output_dict["loss"] = loss
for metric in self.metrics.values():
metric(label_logits, label.long().view(-1))
return output_dict
def get_metrics(self, reset: bool = False) -> Dict[str, float]:
metrics = {"accuracy": self.metrics["accuracy"].get_metric(reset=reset)}
for _class in range(len(self.class_weights)):
metrics.update({
f"f1_rel{_class}": self.metrics[f"f1_rel{_class}"].get_metric(reset=reset)['f1'],
})
metrics["f1_macro"] = numpy.mean([metrics[f"f1_rel{_class}"] for _class in range(len(self.class_weights))])
return metrics
default_predictor = "textual_entailment"
! cp models/bimpm_custom_package/model/esim.py ../../../maintenance_rst/models/customization_package/model/esim.py | _____no_output_____ | MIT | src/maintenance/3.5_relations_classification_esim.ipynb | tchewik/isanlp_rst |
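The ``@Model.register("custom_esim")`` decorator above is what lets the config refer to this class through ``"type": "custom_esim"``, provided the package is importable and passed via ``--include-package`` at train time. A minimal sanity check, assuming the package layout from the ``cp`` commands above: | from allennlp.models import Model
import bimpm_custom_package.model.esim  # importing the module triggers registration
assert "custom_esim" in Model.list_available()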
2. Generate config files: ELMo | %%writefile $MODEL_PATH/config_elmo.json
local NUM_EPOCHS = 200;
local LR = 1e-3;
local LSTM_ENCODER_HIDDEN = 25;
{
"dataset_reader": {
"type": "quora_paraphrase",
"tokenizer": {
"type": "just_spaces"
},
"token_indexers": {
"token_characters": {
"type": "characters",
"min_padding_length": 30,
},
"elmo": {
"type": "elmo_characters"
}
}
},
"train_data_path": "label_predictor_esim/nlabel_cf_train.tsv",
"validation_data_path": "label_predictor_esim/nlabel_cf_dev.tsv",
"test_data_path": "label_predictor_esim/nlabel_cf_test.tsv",
"model": {
"type": "custom_esim",
"dropout": 0.5,
"class_weights": [
0.027483, 0.032003, 0.080478, 0.102642, 0.121394, 0.135027,
0.136856, 0.170897, 0.172355, 0.181655, 0.193858, 0.211297,
0.231651, 0.260982, 0.334437, 0.378277, 0.392996, 0.567416,
0.782946, 0.855932, 0.971154, 1.0],
"encode_together": false,
"text_field_embedder": {
"token_embedders": {
"elmo": {
"type": "elmo_token_embedder",
"options_file": "rsv_elmo/options.json",
"weight_file": "rsv_elmo/model.hdf5",
"do_layer_norm": false,
"dropout": 0.1
},
"token_characters": {
"type": "character_encoding",
"dropout": 0.1,
"embedding": {
"embedding_dim": 20,
"padding_index": 0,
"vocab_namespace": "token_characters"
},
"encoder": {
"type": "lstm",
"input_size": $.model.text_field_embedder.token_embedders.token_characters.embedding.embedding_dim,
"hidden_size": LSTM_ENCODER_HIDDEN,
"num_layers": 1,
"bidirectional": true,
"dropout": 0.4
},
},
}
},
"encoder": {
"type": "lstm",
"input_size": 1024+LSTM_ENCODER_HIDDEN+LSTM_ENCODER_HIDDEN,
"hidden_size": 300,
"num_layers": 1,
"bidirectional": true
},
"matrix_attention": {"type": "dot_product"},
"projection_feedforward": {
"input_dim": 2400,
"hidden_dims": 300,
"num_layers": 1,
"activations": "relu"
},
"inference_encoder": {
"type": "lstm",
"input_size": 300,
"hidden_size": 300,
"num_layers": 1,
"bidirectional": true
},
"output_feedforward": {
"input_dim": 2400,
"num_layers": 1,
"hidden_dims": 300,
"activations": "relu",
"dropout": 0.5
},
"output_logit": {
"input_dim": 300,
"num_layers": 1,
"hidden_dims": 22,
"activations": "linear"
},
"initializer": {
"regexes": [
[".*linear_layers.*weight", {"type": "xavier_normal"}],
[".*linear_layers.*bias", {"type": "constant", "val": 0}],
[".*weight_ih.*", {"type": "xavier_normal"}],
[".*weight_hh.*", {"type": "orthogonal"}],
[".*bias.*", {"type": "constant", "val": 0}],
[".*matcher.*match_weights.*", {"type": "kaiming_normal"}]
]
}
},
"data_loader": {
"batch_sampler": {
"type": "bucket",
"batch_size": 20,
"padding_noise": 0.0,
"sorting_keys": ["premise"],
},
},
"trainer": {
"num_epochs": NUM_EPOCHS,
"cuda_device": 1,
"grad_clipping": 5.0,
"validation_metric": "+f1_macro",
"shuffle": true,
"optimizer": {
"type": "adam",
"lr": LR
},
"learning_rate_scheduler": {
"type": "reduce_on_plateau",
"factor": 0.5,
"mode": "max",
"patience": 0
}
}
}
! cp -r $MODEL_PATH ../../../maintenance_rst/models/label_predictor_esim
! cp -r $MODEL_PATH/config_elmo.json ../../../maintenance_rst/models/label_predictor_esim/ | _____no_output_____ | MIT | src/maintenance/3.5_relations_classification_esim.ipynb | tchewik/isanlp_rst |
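A note on the dimensions in this config: the encoder is a bidirectional LSTM with hidden size 300, and the ESIM enhancement layer concatenates four encoder-sized tensors, which is where 2400 comes from; the same arithmetic applies to the pooled output. A tiny check: | encoder_out = 300 * 2         # bidirectional LSTM output
enhanced = encoder_out * 4     # [a; b; a - b; a * b] concatenation
assert enhanced == 2400        # matches projection_feedforward.input_dim
pooled = (300 * 2) * 4         # max + avg pooling over both sentences
assert pooled == 2400          # matches output_feedforward.input_dim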
3. Scripts for training/prediction. Option 1: Directly from the config. Train a model | %%writefile models/train_label_predictor_esim.sh
# usage:
# $ cd models
# $ sh train_label_predictor_esim.sh {bert|elmo} result_30
export METHOD=${1}
export RESULT_DIR=${2}
export DEV_FILE_PATH="nlabel_cf_dev.tsv"
export TEST_FILE_PATH="nlabel_cf_test.tsv"
rm -r label_predictor_esim/${RESULT_DIR}/
allennlp train -s label_predictor_esim/${RESULT_DIR}/ label_predictor_esim/config_${METHOD}.json \
--include-package bimpm_custom_package
allennlp predict --use-dataset-reader --silent \
--output-file label_predictor_esim/${RESULT_DIR}/predictions_dev.json label_predictor_esim/${RESULT_DIR}/model.tar.gz label_predictor_esim/${DEV_FILE_PATH} \
--include-package bimpm_custom_package \
--predictor textual-entailment
allennlp predict --use-dataset-reader --silent \
--output-file label_predictor_esim/${RESULT_DIR}/predictions_test.json label_predictor_esim/${RESULT_DIR}/model.tar.gz label_predictor_esim/${TEST_FILE_PATH} \
--include-package bimpm_custom_package \
--predictor textual-entailment
! cp models/train_label_predictor_esim.sh ../../../maintenance_rst/models/ | _____no_output_____ | MIT | src/maintenance/3.5_relations_classification_esim.ipynb | tchewik/isanlp_rst |
Predict on dev&test | %%writefile models/eval_label_predictor_esim.sh
# usage:
# $ cd models
# $ sh eval_label_predictor_esim.sh {bert|elmo} result_30
export METHOD=${1}
export RESULT_DIR=${2}
export DEV_FILE_PATH="nlabel_cf_dev.tsv"
export TEST_FILE_PATH="nlabel_cf_test.tsv"
allennlp predict --use-dataset-reader --silent \
--output-file label_predictor_esim/${RESULT_DIR}/predictions_dev.json label_predictor_esim/${RESULT_DIR}/model.tar.gz label_predictor_esim/${DEV_FILE_PATH} \
--include-package bimpm_custom_package \
--predictor textual-entailment
allennlp predict --use-dataset-reader --silent \
--output-file label_predictor_esim/${RESULT_DIR}/predictions_test.json label_predictor_esim/${RESULT_DIR}/model.tar.gz label_predictor_esim/${TEST_FILE_PATH} \
--include-package bimpm_custom_package \
--predictor textual-entailment
! cp models/eval_label_predictor_esim.sh ../../../maintenance_rst/models/ | _____no_output_____ | MIT | src/maintenance/3.5_relations_classification_esim.ipynb | tchewik/isanlp_rst |
(optional) predict on train | %%writefile models/eval_label_predictor_train.sh
# usage:
# $ cd models
# $ sh eval_label_predictor_train.sh {bert|elmo} result_30
export METHOD=${1}
export RESULT_DIR=${2}
export TEST_FILE_PATH="nlabel_cf_train.tsv"
allennlp predict --use-dataset-reader --silent \
--output-file label_predictor_bimpm/${RESULT_DIR}/predictions_train.json label_predictor_bimpm/${RESULT_DIR}/model.tar.gz label_predictor_bimpm/${TEST_FILE_PATH} \
--include-package customization_package \
--predictor textual-entailment | _____no_output_____ | MIT | src/maintenance/3.5_relations_classification_esim.ipynb | tchewik/isanlp_rst |
Option 2. Using wandb for parameter adjustment | %%writefile ../../../maintenance_rst/models/wandb_label_predictor_esim.yaml
name: label_predictor_esim
program: wandb_allennlp # this is a wrapper console script around allennlp commands. It is part of wandb-allennlp
method: bayes
## Do not forget to use the command keyword to specify the following command structure
command:
- ${program} #omit the interpreter as we use allennlp train command directly
- "--subcommand=train"
- "--include-package=customization_package" # add all packages containing your registered classes here
- "--config_file=label_predictor_esim/config_elmo.json"
- ${args}
metric:
name: best_f1_macro
goal: maximize
parameters:
model.encode_together:
values: ["true", ]
iterator.batch_size:
values: [8,]
trainer.optimizer.lr:
values: [0.001,]
model.dropout:
values: [0.5]
| _____no_output_____ | MIT | src/maintenance/3.5_relations_classification_esim.ipynb | tchewik/isanlp_rst |
4. Run training ``wandb sweep wandb_label_predictor_esim.yaml`` (returns %sweepname1) ``wandb sweep wandb_label_predictor2.yaml`` (returns %sweepname2) ``wandb agent --count 1 %sweepname1 && wandb agent --count 1 %sweepname2`` Move the best model into label_predictor_esim | ! ls -laht models/wandb
! cp -r models/wandb/run-20201218_123424-kcphaqhi/training_dumps models/label_predictor_esim/esim_elmo | _____no_output_____ | MIT | src/maintenance/3.5_relations_classification_esim.ipynb | tchewik/isanlp_rst |
**Or** load from wandb by %sweepname | import wandb
api = wandb.Api()
run = api.run("tchewik/tmp/7hum4oom")
for file in run.files():
file.download(replace=True)
! cp -r training_dumps models/label_predictor_esim/toasty-sweep-1 | _____no_output_____ | MIT | src/maintenance/3.5_relations_classification_esim.ipynb | tchewik/isanlp_rst |
And run evaluation from shell: ``sh eval_label_predictor_esim.sh {elmo|elmo_fasttext} toasty-sweep-1`` 5. Evaluate classifier | def load_predictions(path):
result = []
vocab = []
with open(path, 'r') as file:
for line in file.readlines():
line = json.loads(line)
if line.get("label"):
result.append(line.get("label"))
elif line.get("label_probs"):
if not vocab:
vocab = open(path[:path.rfind('/')] + '/vocabulary/labels.txt', 'r').readlines()
vocab = [label.strip() for label in vocab]
result.append(vocab[np.argmax(line.get("label_probs"))])
print('length of result:', len(result))
return result
RESULT_DIR = 'esim_elmo'
! mkdir models/label_predictor_esim/$RESULT_DIR
! cp -r ../../../maintenance_rst/models/label_predictor_esim/$RESULT_DIR/*.json models/label_predictor_esim/$RESULT_DIR/ | _____no_output_____ | MIT | src/maintenance/3.5_relations_classification_esim.ipynb | tchewik/isanlp_rst |
On dev set | import pandas as pd
import json
true = pd.read_csv(DEV_FILE_PATH, sep='\t', header=None)[0].values.tolist()
pred = load_predictions(f'{MODEL_PATH}/{RESULT_DIR}/predictions_dev.json')
from sklearn.metrics import classification_report
print(classification_report(true[:len(pred)], pred, digits=4))
test_metrics = classification_report(true[:len(pred)], pred, digits=4, output_dict=True)
test_f1 = np.array(
[test_metrics[label].get('f1-score') for label in test_metrics if type(test_metrics[label]) == dict]) * 100
test_f1
len(true)
from sklearn.metrics import f1_score, precision_score, recall_score
print('f1: %.2f'%(f1_score(true[:len(pred)], pred, average='macro')*100))
print('pr: %.2f'%(precision_score(true[:len(pred)], pred, average='macro')*100))
print('re: %.2f'%(recall_score(true[:len(pred)], pred, average='macro')*100))
from utils.plot_confusion_matrix import plot_confusion_matrix
from sklearn.metrics import confusion_matrix
labels = list(set(true))
labels.sort()
plot_confusion_matrix(confusion_matrix(true[:len(pred)], pred, labels), target_names=labels, normalize=True)
top_classes = [
'attribution_NS',
'attribution_SN',
'purpose_NS',
'purpose_SN',
'condition_SN',
'contrast_NN',
'condition_NS',
'joint_NN',
'concession_NS',
'same-unit_NN',
'elaboration_NS',
'cause-effect_NS',
]
class_mapper = {weird_class: 'other' + weird_class[-3:] for weird_class in labels if not weird_class in top_classes}
import numpy as np
true = [class_mapper.get(value) if class_mapper.get(value) else value for value in true]
pred = [class_mapper.get(value) if class_mapper.get(value) else value for value in pred]
pred_mapper = {
'other_NN': 'joint_NN',
'other_NS': 'joint_NN',
'other_SN': 'joint_NN'
}
pred = [pred_mapper.get(value) if pred_mapper.get(value) else value for value in pred]
_to_stay = (np.array(true) != 'other_NN') & (np.array(true) != 'other_SN') & (np.array(true) != 'other_NS')
_true = np.array(true)[_to_stay]
_pred = np.array(pred)[_to_stay[:len(pred)]]
labels = list(set(_true))
from sklearn.metrics import f1_score, precision_score, recall_score
print('f1: %.2f'%(f1_score(_true[:len(_pred)], _pred, average='macro')*100))
print('pr: %.2f'%(precision_score(_true[:len(_pred)], _pred, average='macro')*100))
print('re: %.2f'%(recall_score(_true[:len(_pred)], _pred, average='macro')*100))
labels.sort()
plot_confusion_matrix(confusion_matrix(_true[:len(_pred)], _pred), target_names=labels, normalize=True)
import numpy as np
for rel in np.unique(_true):
print(rel) | _____no_output_____ | MIT | src/maintenance/3.5_relations_classification_esim.ipynb | tchewik/isanlp_rst |
On train set (optional) | import pandas as pd
import json
true = pd.read_csv(TRAIN_FILE_PATH, sep='\t', header=None)[0].values.tolist()
pred = load_predictions(f'{MODEL_PATH}/{RESULT_DIR}/predictions_train.json')
print(classification_report(true[:len(pred)], pred, digits=4))
file = TRAIN_FILE_PATH
true_train = pd.read_csv(file, sep='\t', header=None, names=['relation', 'snippet_x', 'snippet_y', 'index'])
true_train['predicted_relation'] = pred
print(true_train[true_train.relation != true_train.predicted_relation].shape)
true_train[true_train.relation != true_train.predicted_relation].to_csv('mispredicted_relations.csv', sep='\t') | _____no_output_____ | MIT | src/maintenance/3.5_relations_classification_esim.ipynb | tchewik/isanlp_rst |
On test set | import pandas as pd
import json
true = pd.read_csv(TEST_FILE_PATH, sep='\t', header=None)[0].values.tolist()
pred = load_predictions(f'{MODEL_PATH}/{RESULT_DIR}/predictions_test.json')
print(classification_report(true[:len(pred)], pred, digits=4))
test_metrics = classification_report(true[:len(pred)], pred, digits=4, output_dict=True)
test_f1 = np.array(
[test_metrics[label].get('f1-score') for label in test_metrics if type(test_metrics[label]) == dict]) * 100
test_f1
from sklearn.metrics import f1_score, precision_score, recall_score
print('f1: %.2f'%(f1_score(true[:len(pred)], pred, average='macro')*100))
print('pr: %.2f'%(precision_score(true[:len(pred)], pred, average='macro')*100))
print('re: %.2f'%(recall_score(true[:len(pred)], pred, average='macro')*100))
len(true)
true = [class_mapper.get(value) if class_mapper.get(value) else value for value in true]
pred = [class_mapper.get(value) if class_mapper.get(value) else value for value in pred]
pred = [pred_mapper.get(value) if pred_mapper.get(value) else value for value in pred]
_to_stay = (np.array(true) != 'other_NN') & (np.array(true) != 'other_SN') & (np.array(true) != 'other_NS')
_true = np.array(true)[_to_stay]
_pred = np.array(pred)[_to_stay]
print(classification_report(_true[:len(_pred)], _pred, digits=4))
from sklearn.metrics import f1_score, precision_score, recall_score
print('f1: %.2f'%(f1_score(_true[:len(_pred)], _pred, average='macro')*100))
print('pr: %.2f'%(precision_score(_true[:len(_pred)], _pred, average='macro')*100))
print('re: %.2f'%(recall_score(_true[:len(_pred)], _pred, average='macro')*100)) | _____no_output_____ | MIT | src/maintenance/3.5_relations_classification_esim.ipynb | tchewik/isanlp_rst |
Ensemble: (Logreg+Catboost) + ESIM | ! ls models/label_predictor_esim
import json
model_vocab = open(MODEL_PATH + '/' + RESULT_DIR + '/vocabulary/labels.txt', 'r').readlines()
model_vocab = [label.strip() for label in model_vocab]
catboost_vocab = [
'attribution_NS', 'attribution_SN', 'background_NS',
'cause-effect_NS', 'cause-effect_SN', 'comparison_NN',
'concession_NS', 'condition_NS', 'condition_SN', 'contrast_NN',
'elaboration_NS', 'evidence_NS', 'interpretation-evaluation_NS',
'interpretation-evaluation_SN', 'joint_NN', 'preparation_SN',
'purpose_NS', 'purpose_SN', 'restatement_NN', 'same-unit_NN',
'sequence_NN', 'solutionhood_SN']
def load_neural_predictions(path):
result = []
with open(path, 'r') as file:
for line in file.readlines():
line = json.loads(line)
if line.get('probs'):
probs = line.get('probs')
elif line.get('label_probs'):
probs = line.get('label_probs')
probs = {model_vocab[i]: probs[i] for i in range(len(model_vocab))}
result.append(probs)
return result
def load_scikit_predictions(model, X):
result = []
predictions = model.predict_proba(X)
for prediction in predictions:
probs = {catboost_vocab[j]: prediction[j] for j in range(len(catboost_vocab))}
result.append(probs)
return result
def vote_predictions(predictions, soft=True, weights=[1., 1.]):
for i in range(1, len(predictions)):
assert len(predictions[i-1]) == len(predictions[i])
if weights == [1., 1.]:
weights = [1.,] * len(predictions)
result = []
for i in range(len(predictions[0])):
sample_result = {}
for key in predictions[0][i].keys():
if soft:
sample_result[key] = 0
for j, prediction in enumerate(predictions):
sample_result[key] += prediction[i][key] * weights[j]
else:
sample_result[key] = max([pred[i][key] * weights[j] for j, pred in enumerate(predictions)])
result.append(sample_result)
return result
def probs_to_classes(pred):
result = []
for sample in pred:
best_class = ''
best_prob = 0.
for key in sample.keys():
if sample[key] > best_prob:
best_prob = sample[key]
best_class = key
result.append(best_class)
return result
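A quick sanity check of these helpers on made-up inputs (all probabilities below are hypothetical): soft voting sums the weighted per-class probabilities across models, while hard voting keeps the per-class maximum. | p1 = [{'joint_NN': 0.6, 'contrast_NN': 0.4}]  # toy output of model 1
p2 = [{'joint_NN': 0.2, 'contrast_NN': 0.8}]  # toy output of model 2
print(vote_predictions([p1, p2], soft=True))   # [{'joint_NN': 0.8, 'contrast_NN': 1.2}]
print(vote_predictions([p1, p2], soft=False))  # [{'joint_NN': 0.6, 'contrast_NN': 0.8}]
print(probs_to_classes(vote_predictions([p1, p2])))  # ['contrast_NN']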
! pip install catboost
import pickle
fs_catboost_plus_logreg = pickle.load(open('models/relation_predictor_baseline/model.pkl', 'rb'))
lab_encoder = pickle.load(open('models/relation_predictor_baseline/label_encoder.pkl', 'rb'))
scaler = pickle.load(open('models/relation_predictor_baseline/scaler.pkl', 'rb'))
drop_columns = pickle.load(open('models/relation_predictor_baseline/drop_columns.pkl', 'rb')) | /opt/.pyenv/versions/3.7.4/lib/python3.7/site-packages/sklearn/base.py:318: UserWarning: Trying to unpickle estimator Pipeline from version 0.22.2.post1 when using version 0.22.1. This might lead to breaking code or invalid results. Use at your own risk.
UserWarning)
/opt/.pyenv/versions/3.7.4/lib/python3.7/site-packages/sklearn/base.py:318: UserWarning: Trying to unpickle estimator LabelEncoder from version 0.22.2.post1 when using version 0.22.1. This might lead to breaking code or invalid results. Use at your own risk.
UserWarning)
/opt/.pyenv/versions/3.7.4/lib/python3.7/site-packages/sklearn/base.py:318: UserWarning: Trying to unpickle estimator VotingClassifier from version 0.22.2.post1 when using version 0.22.1. This might lead to breaking code or invalid results. Use at your own risk.
UserWarning)
/opt/.pyenv/versions/3.7.4/lib/python3.7/site-packages/sklearn/base.py:318: UserWarning: Trying to unpickle estimator StandardScaler from version 0.22.2.post1 when using version 0.22.1. This might lead to breaking code or invalid results. Use at your own risk.
UserWarning)
| MIT | src/maintenance/3.5_relations_classification_esim.ipynb | tchewik/isanlp_rst |
On dev set | from sklearn import metrics
TARGET = 'relation'
y_dev, X_dev = dev_samples['relation'].to_frame(), dev_samples.drop('relation', axis=1).drop(
columns=drop_columns + ['category_id', 'index'])
X_scaled_np = scaler.transform(X_dev)
X_dev = pd.DataFrame(X_scaled_np, index=X_dev.index)
catboost_predictions = load_scikit_predictions(fs_catboost_plus_logreg, X_dev)
neural_predictions = load_neural_predictions(f'{MODEL_PATH}/{RESULT_DIR}/predictions_dev.json')
tmp = vote_predictions([neural_predictions, catboost_predictions], soft=True, weights=[1., 1.])
ensemble_pred = probs_to_classes(tmp)
print('weighted f1: ', metrics.f1_score(y_dev.values, ensemble_pred, average='weighted'))
print('macro f1: ', metrics.f1_score(y_dev.values, ensemble_pred, average='macro'))
print('accuracy: ', metrics.accuracy_score(y_dev.values, ensemble_pred))
print()
print(metrics.classification_report(y_dev, ensemble_pred, digits=4)) | weighted f1: 0.5413872373769657
macro f1: 0.5354738926873194
accuracy: 0.5389321468298109
precision recall f1-score support
attribution_NS 0.8409 0.9024 0.8706 82
attribution_SN 0.8424 0.8564 0.8493 181
background_NS 0.2167 0.1461 0.1745 89
cause-effect_NS 0.6015 0.5128 0.5536 156
cause-effect_SN 0.5096 0.4598 0.4834 174
comparison_NN 0.1449 0.1923 0.1653 52
concession_NS 0.9000 0.5625 0.6923 32
condition_NS 0.6438 0.6620 0.6528 71
condition_SN 0.6716 0.8333 0.7438 108
contrast_NN 0.7229 0.6231 0.6693 268
elaboration_NS 0.3932 0.5776 0.4679 644
evidence_NS 0.1698 0.1698 0.1698 53
interpretation-evaluation_NS 0.3281 0.3717 0.3485 226
interpretation-evaluation_SN 0.5294 0.3214 0.4000 28
joint_NN 0.6974 0.5053 0.5860 748
preparation_SN 0.4153 0.2634 0.3224 186
purpose_NS 0.8506 0.7957 0.8222 93
purpose_SN 0.6667 0.8889 0.7619 18
restatement_NN 0.4167 0.3571 0.3846 14
same-unit_NN 0.7064 0.6063 0.6525 127
sequence_NN 0.4585 0.5250 0.4895 200
solutionhood_SN 0.4815 0.5652 0.5200 46
accuracy 0.5389 3596
macro avg 0.5549 0.5317 0.5355 3596
weighted avg 0.5623 0.5389 0.5414 3596
| MIT | src/maintenance/3.5_relations_classification_esim.ipynb | tchewik/isanlp_rst |
On test set | _test_samples = test_samples[:]
test_samples = _test_samples[:]
mask = test_samples.filename.str.contains('news')
test_samples = test_samples[test_samples['filename'].str.contains('news')]
mask.shape
test_samples.shape
def mask_predictions(predictions, mask):
result = []
mask = mask.values
for i, prediction in enumerate(predictions):
if mask[i]:
result.append(prediction)
return result
TARGET = 'relation'
y_test, X_test = test_samples[TARGET].to_frame(), test_samples.drop(TARGET, axis=1).drop(
columns=drop_columns + ['category_id', 'index'])
X_scaled_np = scaler.transform(X_test)
X_test = pd.DataFrame(X_scaled_np, index=X_test.index)
catboost_predictions = load_scikit_predictions(fs_catboost_plus_logreg, X_test)
neural_predictions = load_neural_predictions(f'{MODEL_PATH}/{RESULT_DIR}/predictions_test.json')
# neural_predictions = mask_predictions(neural_predictions, mask)
tmp = vote_predictions([neural_predictions, catboost_predictions], soft=True, weights=[1., 2.])
ensemble_pred = probs_to_classes(tmp)
print('weighted f1: ', metrics.f1_score(y_test.values, ensemble_pred, average='weighted'))
print('macro f1: ', metrics.f1_score(y_test.values, ensemble_pred, average='macro'))
print('accuracy: ', metrics.accuracy_score(y_test.values, ensemble_pred))
print()
print(metrics.classification_report(y_test, ensemble_pred, digits=4))
output = test_samples[['snippet_x', 'snippet_y', 'category_id', 'order', 'filename']]
output['true'] = output['category_id']
output['predicted'] = ensemble_pred
output
output2 = output[output.true != output.predicted.map(lambda row: row.split('_')[0])]
output2.shape
output2
del output2['category_id']
output2.to_csv('mispredictions.csv')
test_metrics = metrics.classification_report(y_test, ensemble_pred, digits=4, output_dict=True)
test_f1 = np.array(
[test_metrics[label].get('f1-score') for label in test_metrics if type(test_metrics[label]) == dict]) * 100
test_f1 | _____no_output_____ | MIT | src/maintenance/3.5_relations_classification_esim.ipynb | tchewik/isanlp_rst |
Ensemble: BiMPM + ESIM. On dev set | !ls models/label_predictor_bimpm/
from sklearn import metrics
TARGET = 'relation'
y_dev, X_dev = dev_samples['relation'].to_frame(), dev_samples.drop('relation', axis=1).drop(
columns=drop_columns + ['category_id', 'index'])
X_scaled_np = scaler.transform(X_dev)
X_dev = pd.DataFrame(X_scaled_np, index=X_dev.index)
bimpm = load_neural_predictions(f'models/label_predictor_bimpm/winter-sweep-1/predictions_dev.json')
esim = load_neural_predictions(f'{MODEL_PATH}/{RESULT_DIR}/predictions_dev.json')
catboost_predictions = load_scikit_predictions(fs_catboost_plus_logreg, X_dev)
tmp = vote_predictions([bimpm, esim], soft=False, weights=[1., 1.])
tmp = vote_predictions([tmp, catboost_predictions], soft=True, weights=[1., 1.])
ensemble_pred = probs_to_classes(tmp)
print('weighted f1: ', metrics.f1_score(y_dev.values, ensemble_pred, average='weighted'))
print('macro f1: ', metrics.f1_score(y_dev.values, ensemble_pred, average='macro'))
print('accuracy: ', metrics.accuracy_score(y_dev.values, ensemble_pred))
print()
print(metrics.classification_report(y_dev, ensemble_pred, digits=4)) | _____no_output_____ | MIT | src/maintenance/3.5_relations_classification_esim.ipynb | tchewik/isanlp_rst |
On test set | TARGET = 'relation'
y_test, X_test = test_samples[TARGET].to_frame(), test_samples.drop(TARGET, axis=1).drop(
columns=drop_columns + ['category_id', 'index'])
X_scaled_np = scaler.transform(X_test)
X_test = pd.DataFrame(X_scaled_np, index=X_test.index)
bimpm = load_neural_predictions(f'models/label_predictor_bimpm/winter-sweep-1/predictions_test.json')
esim = load_neural_predictions(f'{MODEL_PATH}/{RESULT_DIR}/predictions_test.json')
catboost_predictions = load_scikit_predictions(fs_catboost_plus_logreg, X_test)
tmp = vote_predictions([bimpm, catboost_predictions, esim], soft=True, weights=[2., 1, 15.])
ensemble_pred = probs_to_classes(tmp)
print('weighted f1: ', metrics.f1_score(y_test.values, ensemble_pred, average='weighted'))
print('macro f1: ', metrics.f1_score(y_test.values, ensemble_pred, average='macro'))
print('accuracy: ', metrics.accuracy_score(y_test.values, ensemble_pred))
print()
print(metrics.classification_report(y_test, ensemble_pred, digits=4)) | _____no_output_____ | MIT | src/maintenance/3.5_relations_classification_esim.ipynb | tchewik/isanlp_rst |
Quantitative Value Strategy. "Value investing" means investing in the stocks that are cheapest relative to common measures of business value (like earnings or assets). For this project, we're going to build an investing strategy that selects the 50 stocks with the best value metrics. From there, we will calculate recommended trades for an equal-weight portfolio of these 50 stocks. Library Imports. The first thing we need to do is import the open-source software libraries that we'll be using in this tutorial. | import numpy as np
import pandas as pd
import xlsxwriter
import requests
from scipy import stats
import math | _____no_output_____ | MIT | 003_quantitative_value_strategy.ipynb | gyalpodongo/algorithmic_trading_python |
Importing Our List of Stocks & API Token. As before, we'll need to import our list of stocks and our API token before proceeding. Make sure the .csv file is still in your working directory and import it with the following command: | stocks = pd.read_csv('sp_500_stocks.csv')
from secrets import IEX_CLOUD_API_TOKEN | _____no_output_____ | MIT | 003_quantitative_value_strategy.ipynb | gyalpodongo/algorithmic_trading_python |
Making Our First API Call. It's now time to make the first version of our value screener! We'll start by building a simple value screener that ranks securities based on a single metric (the price-to-earnings ratio). | symbol = 'aapl'
api_url = f"https://sandbox.iexapis.com/stable/stock/{symbol}/quote?token={IEX_CLOUD_API_TOKEN}"
data = requests.get(api_url).json() | _____no_output_____ | MIT | 003_quantitative_value_strategy.ipynb | gyalpodongo/algorithmic_trading_python |
Parsing Our API Call. This API call has the metric we need - the price-to-earnings ratio. Here is an example of how to parse the metric from our API call: | price = data['latestPrice']
pe_ratio = data['peRatio']
pe_ratio | _____no_output_____ | MIT | 003_quantitative_value_strategy.ipynb | gyalpodongo/algorithmic_trading_python |
Executing A Batch API Call & Building Our DataFrame. Just like in our first project, it's now time to execute several batch API calls and add the information we need to our DataFrame. We'll start by running the following code cell, which contains some code we already built last time that we can re-use for this project. More specifically, it contains a function called chunks that we can use to divide our list of securities into groups of 100. | # Function sourced from
# https://stackoverflow.com/questions/312443/how-do-you-split-a-list-into-evenly-sized-chunks
def chunks(lst, n):
"""Yield successive n-sized chunks from lst."""
for i in range(0, len(lst), n):
yield lst[i:i + n]
symbol_groups = list(chunks(stocks['Ticker'], 100))
symbol_strings = []
for i in range(0, len(symbol_groups)):
symbol_strings.append(','.join(symbol_groups[i]))
# print(symbol_strings[i])
my_columns = ['Ticker', 'Price', 'Price-to-Earnings Ratio', 'Number of Shares to Buy'] | _____no_output_____ | MIT | 003_quantitative_value_strategy.ipynb | gyalpodongo/algorithmic_trading_python |
Now we need to create a blank DataFrame and add our data to the data frame one-by-one. | df = pd.DataFrame(columns = my_columns)
for batch in symbol_strings:
batch_api_call_url = f"https://sandbox.iexapis.com/stable/stock/market/batch?symbols={batch}&types=quote&token={IEX_CLOUD_API_TOKEN}"
data = requests.get(batch_api_call_url).json()
for symbol in batch.split(','):
df = df.append(
pd.Series(
[
symbol,
data[symbol]['quote']['latestPrice'],
data[symbol]['quote']['peRatio'],
'N/A'
],
index=my_columns
),
ignore_index=True
)
df.dropna(inplace=True)
df
| _____no_output_____ | MIT | 003_quantitative_value_strategy.ipynb | gyalpodongo/algorithmic_trading_python |
Removing Glamour Stocks. The opposite of a "value stock" is a "glamour stock". Since the goal of this strategy is to identify the 50 best value stocks from our universe, our next step is to remove glamour stocks from the DataFrame. We'll sort the DataFrame by the stocks' price-to-earnings ratio (ascending, so the cheapest stocks come first) and drop all stocks outside the cheapest 50. | df.sort_values('Price-to-Earnings Ratio', ascending=True, inplace=True)
df = df[df['Price-to-Earnings Ratio'] > 0]
df = df[:50]
df.reset_index(inplace=True, drop=True)
df | /var/folders/q_/gmxdkf893w3bm9wxvh6635t80000gp/T/ipykernel_89390/1321168316.py:1: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
df.sort_values('Price-to-Earnings Ratio', ascending=True, inplace=True)
| MIT | 003_quantitative_value_strategy.ipynb | gyalpodongo/algorithmic_trading_python |
Calculating the Number of Shares to Buy. We now need to calculate the number of shares we need to buy. To do this, we will use the `portfolio_input` function that we created in our momentum project. I have included this function below. | def portfolio_input():
global portfolio_size
portfolio_size = input("Enter the value of your portfolio:")
try:
portfolio_size = float(portfolio_size)
except ValueError:
print("That's not a number! \n Try again:")
        portfolio_size = float(input("Enter the value of your portfolio:"))
Use the `portfolio_input` function to accept a `portfolio_size` variable from the user of this script. | portfolio_input() | _____no_output_____ | MIT | 003_quantitative_value_strategy.ipynb | gyalpodongo/algorithmic_trading_python |
You can now use the global `portfolio_size` variable to calculate the number of shares that our strategy should purchase. | position_size = portfolio_size/len(df.index)
for row in df.index:
df.loc[row, 'Number of Shares to Buy'] = math.floor(position_size/df.loc[row, 'Price'])
df | _____no_output_____ | MIT | 003_quantitative_value_strategy.ipynb | gyalpodongo/algorithmic_trading_python |
Building a Better (and More Realistic) Value Strategy. Every valuation metric has certain flaws. For example, the price-to-earnings ratio doesn't work well with stocks with negative earnings. Similarly, stocks that buy back their own shares are difficult to value using the price-to-book ratio. Investors typically use a `composite` basket of valuation metrics to build robust quantitative value strategies. In this section, we will filter for stocks with the lowest percentiles on the following metrics: * Price-to-earnings ratio * Price-to-book ratio * Price-to-sales ratio * Enterprise Value divided by Earnings Before Interest, Taxes, Depreciation, and Amortization (EV/EBITDA) * Enterprise Value divided by Gross Profit (EV/GP). Some of these metrics aren't provided directly by the IEX Cloud API, and must be computed after pulling raw data. We'll start by calculating each data point from scratch. | symbol = 'AAPL'
batch_api_call_url = f"https://sandbox.iexapis.com/stable/stock/market/batch?symbols={symbol}&types=quote,advanced-stats&token={IEX_CLOUD_API_TOKEN}"
data = requests.get(batch_api_call_url).json()
# * Price-to-earnings ratio
pe_ratio = data[symbol]['quote']['peRatio']
# * Price-to-book ratio
pb_ratio = data[symbol]['advanced-stats']['priceToBook']
# * Price-to-sales ratio
ps_ratio = data[symbol]['advanced-stats']['priceToSales']
enterprise_value = data[symbol]['advanced-stats']['enterpriseValue']
ebitda = data[symbol]['advanced-stats']['EBITDA']
gross_profit = data[symbol]['advanced-stats']['grossProfit']
# * Enterprise Value divided by Earnings Before Interest, Taxes, Depreciation, and Amortization (EV/EBITDA)
ev_to_ebitda = enterprise_value/ebitda
# * Enterprise Value divided by Gross Profit (EV/GP)
ev_to_gross_profit = enterprise_value/gross_profit | _____no_output_____ | MIT | 003_quantitative_value_strategy.ipynb | gyalpodongo/algorithmic_trading_python |
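The cell above collects the raw data points; the usual next step, sketched here under assumptions (``rv_df`` and its columns are invented for illustration, not part of the code above), is to convert each metric into a percentile with ``scipy.stats.percentileofscore`` and average the percentiles into a single composite score, where a lower score means a cheaper stock: | from scipy.stats import percentileofscore
import pandas as pd

# Hypothetical toy data; in practice the metrics come from the batch API call above.
rv_df = pd.DataFrame({
    'Ticker': ['AAPL', 'MSFT', 'XOM'],
    'PE Ratio': [28.0, 33.0, 9.0],
    'PB Ratio': [35.0, 12.0, 1.5],
})
for metric in ['PE Ratio', 'PB Ratio']:
    rv_df[f'{metric} Percentile'] = rv_df[metric].apply(
        lambda x: percentileofscore(rv_df[metric], x) / 100)
percentile_cols = [c for c in rv_df.columns if c.endswith('Percentile')]
rv_df['RV Score'] = rv_df[percentile_cols].mean(axis=1)  # lower = cheaper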