| column | type | length range |
|---|---|---|
| markdown | string | 0 – 1.02M |
| code | string | 0 – 832k |
| output | string | 0 – 1.02M |
| license | string | 3 – 36 |
| path | string | 6 – 265 |
| repo_name | string | 6 – 127 |
We have 134 emoji faces, including a few terminator robots. We'll again be using the [sklearn](https://scikit-learn.org/) library to create our model. The interface is usually the same, and for gaussian anomaly detection, sklearn again expects a NumPy matrix where the rows are our images and the columns are the pixels. So we can apply the same transformations as in notebook 3.2:
import numpy as np

arrays = [np.asarray(im) for im in images]
# 64 * 64 = 4096
vectors = [arr.reshape((4096,)) for arr in arrays]
data = np.stack(vectors)
_____no_output_____
CC-BY-4.0
data_analysis/3.3_anomaly_detection/anomaly_detection.ipynb
camille-vanhoffelen/modern-ML-engineer
3. Training

Next, we will create an [`EllipticEnvelope`](https://scikit-learn.org/stable/modules/generated/sklearn.covariance.EllipticEnvelope.html) object. This will fit a multivariate gaussian distribution to our data. It then allows us to pick a threshold to define an _ellipsoid_ decision boundary, and detect outliers. Remember that we are using a _learning_ algorithm, which must therefore be _trained_ before it can be used. This is why we'll use the `.fit()` method first, before calling `.predict()`:
from sklearn.covariance import EllipticEnvelope

cov = EllipticEnvelope(random_state=0).fit(data)
_____no_output_____
CC-BY-4.0
data_analysis/3.3_anomaly_detection/anomaly_detection.ipynb
camille-vanhoffelen/modern-ML-engineer
😰 What's happening? Why is it stuck? Have the killer robots already taken over?

No need to panic, this kind of hiccup is very common when dealing with machine learning algorithms. We can kill the process (before it fries our laptop fan) by clicking the `stop` button ⬛️ in the notebook toolbar.

Most learning algorithms are based around an _optimisation_ procedure. This step is often iterative and stochastic, i.e. it tries its statistical best to maximise the learning in incremental steps. This process isn't foolproof:
* it can dramatically stop because of out-of-memory errors, or overflow errors 💥
* it can get stuck, e.g. when the optimisation is too slow 🐌
* it can fail silently, and return wrong results 💩

ℹ️ We will encounter many of these failures throughout our ML experiments, so knowing how to overcome them is part of the data scientist skillset.

Let's go back to our killer robot detection: the model fitting got _stuck_, which suggests that something about our data was too much to handle. We find the following "notes" in the [official documentation](https://scikit-learn.org/stable/modules/generated/sklearn.covariance.EllipticEnvelope.html#sklearn.covariance.EllipticEnvelope):

> Outlier detection from covariance estimation may break or not perform well in high-dimensional settings.

We recall that our images are $64 \times 64$ pixels, so $4096$ dimensions... that's a lot. It seems a good candidate to explain why our multivariate gaussian distribution failed to fit our dataset. If only there was a way to reduce the dimensions of our data... 😏

Let's apply PCA to reduce the number of dimensions of our dataset. Our emoji faces dataset is smaller than the full emoji dataset, so 40 dimensions should suffice to explain its variance:
from sklearn.decomposition import PCA

pca = PCA(n_components=40)
pca.fit(data)
components = pca.transform(data)
components.shape
_____no_output_____
CC-BY-4.0
data_analysis/3.3_anomaly_detection/anomaly_detection.ipynb
camille-vanhoffelen/modern-ML-engineer
💪 Visualise the eigenvector images of our PCA model. You can use the code from lecture 3.2!

🧠 Can you explain what those eigenvector images represent? Why are they different from those of the full emoji dataset?

Fantastic, we've managed to reduce the number of dimensions by 99%! Hopefully that should be enough to make our gaussian distribution fitting happy. Let's try again with the _principal components_ instead of the original data:
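A minimal sketch for the exercise above, assuming the fitted `pca` object from the previous cell and matplotlib imported as `plt`; each row of `pca.components_` is a 4096-dimensional vector that can be reshaped back into a 64 × 64 "eigenvector image":

```python
import matplotlib.pyplot as plt

# 40 components -> a 4 x 10 grid of eigenvector images
fig, axs = plt.subplots(dpi=150, nrows=4, ncols=10)
for component, ax in zip(pca.components_, axs.flatten()):
    ax.imshow(component.reshape((64, 64)), cmap='gray')
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
```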
cov = EllipticEnvelope(random_state=0).fit(components)
_____no_output_____
CC-BY-4.0
data_analysis/3.3_anomaly_detection/anomaly_detection.ipynb
camille-vanhoffelen/modern-ML-engineer
😅 that was fast!

4. Prediction

We can now use our fitted gaussian distribution to detect the outliers in our `data`. For this, we use the `.predict()` method:
y = cov.predict(components)
y
_____no_output_____
CC-BY-4.0
data_analysis/3.3_anomaly_detection/anomaly_detection.ipynb
camille-vanhoffelen/modern-ML-engineer
`y` is our vector of predictions, where $1$ is a normal data point, and $-1$ is an anomaly. We can therefore iterate through our original `arrays` to find outliers:
outliers = []
for i in range(0, len(arrays)):
    if y[i] == -1:
        outliers.append(arrays[i])

len(outliers)

import matplotlib.pyplot as plt

fig, axs = plt.subplots(dpi=150, nrows=2, ncols=7)
for outlier, ax in zip(outliers, axs.flatten()):
    ax.imshow(outlier, cmap='gray', vmin=0, vmax=255)
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
_____no_output_____
CC-BY-4.0
data_analysis/3.3_anomaly_detection/anomaly_detection.ipynb
camille-vanhoffelen/modern-ML-engineer
THERE'S OUR TERMINATORS! 🤖 We can count 5 of them in total. Notice how some real emoji faces were also detected as outliers. This is perhaps a sign that we should change our _threshold_, to make the ellipsoid decision boundary smaller. In fact, we didn't even specify a threshold before: we just used the default value of `contamination=0.1` in the [`EllipticEnvelope`](https://scikit-learn.org/stable/modules/generated/sklearn.covariance.EllipticEnvelope.html) class. This represents our estimate of the proportion of data points which are outliers. Since it looks like we detected double the amount of actual anomalies, let's try again with `contamination=0.05`:
cov = EllipticEnvelope(random_state=0, contamination=0.05).fit(components)
y = cov.predict(components)

outliers = []
for i in range(0, len(arrays)):
    if y[i] == -1:
        outliers.append(arrays[i])

fig, axs = plt.subplots(dpi=150, nrows=1, ncols=7)
for outlier, ax in zip(outliers, axs.flatten()):
    ax.imshow(outlier, cmap='gray', vmin=0, vmax=255)
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
_____no_output_____
CC-BY-4.0
data_analysis/3.3_anomaly_detection/anomaly_detection.ipynb
camille-vanhoffelen/modern-ML-engineer
Better! `contamination=0.05` was a better choice of threshold, and we assessed this through _manual inspection_. This means we went through the results and used our human judgement to change the value of this _hyperparameter_.

ℹ️ Notice how our outlier detection is not _perfect_. Some emojis were also erroneously detected as anomalous killer robots. This can seem like a problem, or a sign that our model was malfunctioning. But, quite the contrary, _imperfection_ is a core aspect of all _learning_ algorithms. Instead of seeing the glass half-empty and looking at the outlier detector's mistakes, we should reflect on the task itself. It would have been almost impossible to detect those killer robot images using rule-based algorithms, and our model's _accuracy_ was good _enough_ to save the emojis from Skynet. As data scientists, our goal is to make models which are accurate _enough_ to be useful, not to aim for perfect scores. We will revisit these topics later in the course when discussing Machine Learning Engineering 🛠

5. Analysis

We have detected the robot intruders and saved the emojis from a jealous AI from the future, all is good! We still want to better understand how anomaly detection defeated Skynet. For this, we would like to leverage our shiny new data visualization skills. Representing our dataset in space would allow us to identify its structures and hopefully understand how our gaussian distribution model identified terminators as "abnormal".

Our data is high dimensional, so we can use our trusted PCA once again to project it down to 2 dimensions. We understand that this will lose a lot of the variance of our data, but the results were still somewhat interpretable with the full emoji dataset, so let's go!
# Dimensionality reduction to 2 components
pca_model = PCA(n_components=2)
pca_model.fit(data)  # fit the model
T = pca_model.transform(data)  # project the data onto the 2 principal components

plt.scatter(T[:, 0], T[:, 1],
            # use the predictions as color
            c=y,
            marker='o',
            alpha=0.4)
plt.title('Anomaly detection of the emoji faces dataset with PCA dimensionality reduction');
_____no_output_____
CC-BY-4.0
data_analysis/3.3_anomaly_detection/anomaly_detection.ipynb
camille-vanhoffelen/modern-ML-engineer
We can notice that most of the outliers are clearly _separable_ from the bulk of the dataset, even with only 2 principal components. One outlier, however, is very much within the main cluster. This could be explained by the dimensionality reduction, i.e. that this point is separated from the cluster in other dimensions, or by the fact that our threshold might be too permissive.

We can check this by displaying the images directly on the scatter plot:
from matplotlib import offsetbox


def plot_components(data, model, images=None, ax=None, thumb_frac=0.05, cmap='gray'):
    ax = ax or plt.gca()
    proj = model.fit_transform(data)
    ax.plot(proj[:, 0], proj[:, 1], '.k')
    if images is not None:
        min_dist_2 = (thumb_frac * max(proj.max(0) - proj.min(0))) ** 2
        shown_images = np.array([2 * proj.max(0)])
        for i in range(data.shape[0]):
            dist = np.sum((proj[i] - shown_images) ** 2, 1)
            if np.min(dist) < min_dist_2:
                # don't show points that are too close
                continue
            shown_images = np.vstack([shown_images, proj[i]])
            imagebox = offsetbox.AnnotationBbox(
                offsetbox.OffsetImage(images[i], cmap=cmap), proj[i])
            ax.add_artist(imagebox)


small_images = [im[::2, ::2] for im in arrays]
fig, ax = plt.subplots(figsize=(10, 10))
plot_components(data, model=PCA(n_components=2), images=small_images, thumb_frac=0.02)
plt.title('Anomaly detection of the emoji faces dataset with PCA dimensionality reduction');
_____no_output_____
CC-BY-4.0
data_analysis/3.3_anomaly_detection/anomaly_detection.ipynb
camille-vanhoffelen/modern-ML-engineer
Import Necessary Libraries
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn import preprocessing
from sklearn.ensemble import RandomForestClassifier
from sklearn import svm
from sklearn.metrics import precision_score, recall_score

# display images
from IPython.display import Image

# data visualization
import seaborn as sns
%matplotlib inline
from matplotlib import pyplot as plt
from matplotlib import style

# Algorithms
from sklearn import linear_model
from sklearn.linear_model import LogisticRegression
from sklearn.linear_model import Perceptron
from sklearn.linear_model import SGDClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC, LinearSVC
from sklearn.naive_bayes import GaussianNB
_____no_output_____
MIT
Titanic.ipynb
hashmat3525/Titanic
Titanic Titanic was a British passenger liner that sank in the North Atlantic Ocean in the early morning hours of 15 April 1912, after it collided with an iceberg during its maiden voyage from Southampton to New York City. There were an estimated 2,224 passengers and crew aboard the ship, and more than 1,500 died, making it one of the deadliest commercial peacetime maritime disasters in modern history. The RMS Titanic was the largest ship afloat at the time it entered service and was the second of three Olympic-class ocean liners operated by the White Star Line. The Titanic was built by the Harland and Wolff shipyard in Belfast. Thomas Andrews, her architect, died in the disaster.
# Image of Titanic ship
Image(filename='C:/Users/Nemgeree Armanonah/Documents/GitHub/Titanic/images/ship.jpeg')
_____no_output_____
MIT
Titanic.ipynb
hashmat3525/Titanic
Getting the Data
# reading train.csv
data = pd.read_csv('./titanic datasets/train.csv')
data
_____no_output_____
MIT
Titanic.ipynb
hashmat3525/Titanic
Exploring Data
data.info()
<class 'pandas.core.frame.DataFrame'> RangeIndex: 891 entries, 0 to 890 Data columns (total 12 columns): PassengerId 891 non-null int64 Survived 891 non-null int64 Pclass 891 non-null int64 Name 891 non-null object Sex 891 non-null object Age 714 non-null float64 SibSp 891 non-null int64 Parch 891 non-null int64 Ticket 891 non-null object Fare 891 non-null float64 Cabin 204 non-null object Embarked 889 non-null object dtypes: float64(2), int64(5), object(5) memory usage: 83.7+ KB
MIT
Titanic.ipynb
hashmat3525/Titanic
Describe Statistics

The `describe` method is used to view some basic statistical details of columns like PassengerId, Survived, Age, etc.
data.describe()
_____no_output_____
MIT
Titanic.ipynb
hashmat3525/Titanic
View All Features
data.columns.values
_____no_output_____
MIT
Titanic.ipynb
hashmat3525/Titanic
What features could contribute to a high survival rate? To us it would make sense if everything except 'PassengerId', 'Ticket' and 'Name' were correlated with a high survival rate.
# defining variables
survived = 'survived'
not_survived = 'not survived'

# data to be plotted
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(10, 4))
women = data[data['Sex'] == 'female']
men = data[data['Sex'] == 'male']

# plot the data
ax = sns.distplot(women[women['Survived'] == 1].Age.dropna(), bins=18, label=survived, ax=axes[0], kde=False)
ax = sns.distplot(women[women['Survived'] == 0].Age.dropna(), bins=40, label=not_survived, ax=axes[0], kde=False)
ax.legend()
ax.set_title('Female')

ax = sns.distplot(men[men['Survived'] == 1].Age.dropna(), bins=18, label=survived, ax=axes[1], kde=False)
ax = sns.distplot(men[men['Survived'] == 0].Age.dropna(), bins=40, label=not_survived, ax=axes[1], kde=False)
ax.legend()
_ = ax.set_title('Male')

# count the null values
null_values = data.isnull().sum()
null_values

plt.plot(null_values)
plt.grid()
plt.show()
_____no_output_____
MIT
Titanic.ipynb
hashmat3525/Titanic
Data Processing
def handle_non_numerical_data(df):
    columns = df.columns.values
    for column in columns:
        text_digit_vals = {}

        def convert_to_int(val):
            return text_digit_vals[val]

        # print(column, df[column].dtype)
        if df[column].dtype != np.int64 and df[column].dtype != np.float64:
            column_contents = df[column].values.tolist()
            # finding just the uniques
            unique_elements = set(column_contents)
            # great, found them.
            x = 0
            for unique in unique_elements:
                if unique not in text_digit_vals:
                    text_digit_vals[unique] = x
                    x += 1
            df[column] = list(map(convert_to_int, df[column]))
    return df


y_target = data['Survived']
# Y_target.reshape(len(Y_target), 1)
x_train = data[['Pclass', 'Age', 'Sex', 'SibSp', 'Parch', 'Fare', 'Embarked', 'Ticket']]
x_train = handle_non_numerical_data(x_train)
x_train.head()

fare = pd.DataFrame(x_train['Fare'])
# Normalizing
min_max_scaler = preprocessing.MinMaxScaler()
newfare = min_max_scaler.fit_transform(fare)
x_train['Fare'] = newfare
x_train

null_values = x_train.isnull().sum()
null_values

plt.plot(null_values)
plt.show()

# Fill the NaN values with the median values in the datasets
x_train['Age'] = x_train['Age'].fillna(x_train['Age'].median())
print("Number of NULL values", x_train['Age'].isnull().sum())
x_train.head()

x_train['Sex'] = x_train['Sex'].replace('male', 0)
x_train['Sex'] = x_train['Sex'].replace('female', 1)
# print(type(x_train))

corr = x_train.corr()
corr.style.background_gradient()


def plot_corr(df, size=10):
    corr = df.corr()
    fig, ax = plt.subplots(figsize=(size, size))
    ax.matshow(corr)
    plt.xticks(range(len(corr.columns)), corr.columns)
    plt.yticks(range(len(corr.columns)), corr.columns)


# plot_corr(x_train)
x_train.corr()
corr.style.background_gradient()

# Dividing the data into train and test data set
X_train, X_test, Y_train, Y_test = train_test_split(x_train, y_target, test_size=0.4, random_state=40)

clf = RandomForestClassifier()
clf.fit(X_train, Y_train)

print(clf.predict(X_test))
print("Accuracy: ", clf.score(X_test, Y_test))

# Testing the model.
test_data = pd.read_csv('./titanic datasets/test.csv')
test_data.head(3)
# test_data.isnull().sum()

# Preprocessing on the test data
test_data = test_data[['Pclass', 'Age', 'Sex', 'SibSp', 'Parch', 'Fare', 'Ticket', 'Embarked']]
test_data = handle_non_numerical_data(test_data)

fare = pd.DataFrame(test_data['Fare'])
min_max_scaler = preprocessing.MinMaxScaler()
newfare = min_max_scaler.fit_transform(fare)
test_data['Fare'] = newfare
test_data['Fare'] = test_data['Fare'].fillna(test_data['Fare'].median())
test_data['Age'] = test_data['Age'].fillna(test_data['Age'].median())
test_data['Sex'] = test_data['Sex'].replace('male', 0)
test_data['Sex'] = test_data['Sex'].replace('female', 1)
print(test_data.head())
print(clf.predict(test_data))

from sklearn.model_selection import cross_val_predict

predictions = cross_val_predict(clf, X_train, Y_train, cv=3)
print("Precision:", precision_score(Y_train, predictions))
print("Recall:", recall_score(Y_train, predictions))

from sklearn.metrics import precision_recall_curve

# getting the probabilities of our predictions
y_scores = clf.predict_proba(X_train)
y_scores = y_scores[:, 1]

precision, recall, threshold = precision_recall_curve(Y_train, y_scores)


def plot_precision_and_recall(precision, recall, threshold):
    plt.plot(threshold, precision[:-1], "r-", label="precision", linewidth=5)
    plt.plot(threshold, recall[:-1], "b", label="recall", linewidth=5)
    plt.xlabel("threshold", fontsize=19)
    plt.legend(loc="upper right", fontsize=19)
    plt.ylim([0, 1])


plt.figure(figsize=(14, 7))
plot_precision_and_recall(precision, recall, threshold)
plt.axis([0.3, 0.8, 0.8, 1])
plt.show()


def plot_precision_vs_recall(precision, recall):
    plt.plot(recall, precision, "g--", linewidth=2.5)
    plt.ylabel("recall", fontsize=19)
    plt.xlabel("precision", fontsize=19)
    plt.axis([0, 1.5, 0, 1.5])


plt.figure(figsize=(14, 7))
plot_precision_vs_recall(precision, recall)
plt.show()

from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix

predictions = cross_val_predict(clf, X_train, Y_train, cv=3)
confusion_matrix(Y_train, predictions)
_____no_output_____
MIT
Titanic.ipynb
hashmat3525/Titanic
Gaussian Transformation with Scikit-learn

Scikit-learn has recently released transformers to do Gaussian mappings, as they call the variable transformations. The PowerTransformer allows us to do Box-Cox and Yeo-Johnson transformations. With the FunctionTransformer, we can specify any function we want.

The transformers per se do not allow us to select columns, but we can do so using a third transformer, the ColumnTransformer.

Another thing to keep in mind is that Scikit-learn transformers return NumPy arrays, and not dataframes, so we need to be mindful of the order of the columns so as not to mess up our features.

Important

Box-Cox and Yeo-Johnson transformations need to learn their parameters from the data. Therefore, as always, before attempting any transformation it is important to divide the dataset into train and test sets. In this demo, I will not do so for simplicity, but when using this transformation in your pipelines, please make sure you do so.

In this demo

We will see how to implement variable transformations using Scikit-learn and the House Prices dataset.
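A minimal, self-contained sketch of the ColumnTransformer approach mentioned above, using a tiny made-up dataframe (the column names here are illustrative, not from the House Prices dataset); it applies a log FunctionTransformer to selected columns only and passes the rest through unchanged:

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import FunctionTransformer

# tiny made-up dataframe: two positive numeric columns and one to leave untouched
toy = pd.DataFrame({
    'area': [1200.0, 850.0, 2000.0],
    'price': [250000.0, 180000.0, 420000.0],
    'rooms': [3, 2, 5],
})

# log-transform only the selected columns, pass the remainder through
ct = ColumnTransformer(
    transformers=[('log', FunctionTransformer(np.log), ['area', 'price'])],
    remainder='passthrough',
)

# the result is a NumPy array, so we rebuild a dataframe, minding column order
transformed = ct.fit_transform(toy)
toy_t = pd.DataFrame(transformed, columns=['area', 'price', 'rooms'])
print(toy_t)
```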
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stats

from sklearn.preprocessing import FunctionTransformer, PowerTransformer

# load the data
data = pd.read_csv('../houseprice.csv')
data.head()
_____no_output_____
BSD-3-Clause
07.02-Gaussian-transformation-sklearn.ipynb
sri-spirited/feature-engineering-for-ml
Let's select the numerical and positive variables in the dataset for this demonstration, as most of the transformations require the variables to be positive.
cols = []

for col in data.columns:
    if data[col].dtypes != 'O' and col != 'Id':  # if the variable is numerical
        if np.sum(np.where(data[col] <= 0, 1, 0)) == 0:  # if the variable is positive
            cols.append(col)  # append variable to the list

cols

# let's explore the distribution of the numerical variables
data[cols].hist(figsize=(20, 20))
plt.show()
_____no_output_____
BSD-3-Clause
07.02-Gaussian-transformation-sklearn.ipynb
sri-spirited/feature-engineering-for-ml
Plots to assess normality

To visualise the distribution of the variables, we plot a histogram and a Q-Q plot. In the Q-Q plots, if the variable is normally distributed, the values of the variable should fall along a 45-degree line when plotted against the theoretical quantiles. We discussed this extensively in Section 3 of this course.
# plot the histograms to have a quick look at the variable distributions
# histogram and Q-Q plots

def diagnostic_plots(df, variable):
    # function to plot a histogram and a Q-Q plot
    # side by side, for a certain variable
    plt.figure(figsize=(15, 6))

    plt.subplot(1, 2, 1)
    df[variable].hist(bins=30)

    plt.subplot(1, 2, 2)
    stats.probplot(df[variable], dist="norm", plot=plt)

    plt.show()
_____no_output_____
BSD-3-Clause
07.02-Gaussian-transformation-sklearn.ipynb
sri-spirited/feature-engineering-for-ml
Logarithmic transformation
# create a log transformer
transformer = FunctionTransformer(np.log, validate=True)

# transform all the numerical and positive variables
data_t = transformer.transform(data[cols].fillna(1))

# Scikit-learn returns NumPy arrays, so capture in dataframe
# note that Scikit-learn will return an array with
# only the columns indicated in cols
data_t = pd.DataFrame(data_t, columns=cols)

# original distribution
diagnostic_plots(data, 'GrLivArea')

# transformed distribution
diagnostic_plots(data_t, 'GrLivArea')

# original distribution
diagnostic_plots(data, 'MSSubClass')

# transformed distribution
diagnostic_plots(data_t, 'MSSubClass')
_____no_output_____
BSD-3-Clause
07.02-Gaussian-transformation-sklearn.ipynb
sri-spirited/feature-engineering-for-ml
Reciprocal transformation
# create the transformer
transformer = FunctionTransformer(lambda x: 1 / x, validate=True)

# also
# transformer = FunctionTransformer(np.reciprocal, validate=True)

# transform the positive variables
data_t = transformer.transform(data[cols].fillna(1))

# re-capture in a dataframe
data_t = pd.DataFrame(data_t, columns=cols)

# transformed variable
diagnostic_plots(data_t, 'GrLivArea')

# transformed variable
diagnostic_plots(data_t, 'MSSubClass')
_____no_output_____
BSD-3-Clause
07.02-Gaussian-transformation-sklearn.ipynb
sri-spirited/feature-engineering-for-ml
Square root transformation
transformer = FunctionTransformer(lambda x: x ** (1 / 2), validate=True)

# also
# transformer = FunctionTransformer(np.sqrt, validate=True)

data_t = transformer.transform(data[cols].fillna(1))
data_t = pd.DataFrame(data_t, columns=cols)

diagnostic_plots(data_t, 'GrLivArea')
diagnostic_plots(data_t, 'MSSubClass')
_____no_output_____
BSD-3-Clause
07.02-Gaussian-transformation-sklearn.ipynb
sri-spirited/feature-engineering-for-ml
Exponential
transformer = FunctionTransformer(lambda x: x ** (1 / 1.2), validate=True)

data_t = transformer.transform(data[cols].fillna(1))
data_t = pd.DataFrame(data_t, columns=cols)

diagnostic_plots(data_t, 'GrLivArea')
diagnostic_plots(data_t, 'MSSubClass')
_____no_output_____
BSD-3-Clause
07.02-Gaussian-transformation-sklearn.ipynb
sri-spirited/feature-engineering-for-ml
Box-Cox transformation
# create the transformer
transformer = PowerTransformer(method='box-cox', standardize=False)

# find the optimal lambda using the train set
transformer.fit(data[cols].fillna(1))

# transform the data
data_t = transformer.transform(data[cols].fillna(1))

# capture data in a dataframe
data_t = pd.DataFrame(data_t, columns=cols)

diagnostic_plots(data_t, 'GrLivArea')
diagnostic_plots(data_t, 'MSSubClass')
_____no_output_____
BSD-3-Clause
07.02-Gaussian-transformation-sklearn.ipynb
sri-spirited/feature-engineering-for-ml
Yeo-Johnson

Yeo-Johnson is an adaptation of Box-Cox that can also be used on variables with negative values. So let's expand the list of variables for the demo to include those that contain zero and negative values as well.
cols = [
    'MSSubClass', 'LotFrontage', 'LotArea', 'OverallQual', 'OverallCond',
    'MasVnrArea', 'BsmtFinSF1', 'BsmtFinSF2', 'BsmtUnfSF', 'TotalBsmtSF',
    '1stFlrSF', '2ndFlrSF', 'LowQualFinSF', 'GrLivArea', 'BsmtFullBath',
    'BsmtHalfBath', 'FullBath', 'HalfBath', 'BedroomAbvGr', 'KitchenAbvGr',
    'TotRmsAbvGrd', 'Fireplaces', 'GarageYrBlt', 'GarageCars', 'GarageArea',
    'WoodDeckSF', 'OpenPorchSF', 'EnclosedPorch', '3SsnPorch', 'ScreenPorch',
    'PoolArea', 'MiscVal', 'SalePrice'
]

# call the transformer
transformer = PowerTransformer(method='yeo-johnson', standardize=False)

# learn the lambda from the train set
transformer.fit(data[cols].fillna(1))

# transform the data
data_t = transformer.transform(data[cols].fillna(1))

# capture data in a dataframe
data_t = pd.DataFrame(data_t, columns=cols)

diagnostic_plots(data_t, 'GrLivArea')
diagnostic_plots(data_t, 'MSSubClass')
_____no_output_____
BSD-3-Clause
07.02-Gaussian-transformation-sklearn.ipynb
sri-spirited/feature-engineering-for-ml
Money Channel
import json

import pandas as pd
from tqdm import tqdm_notebook

# target_stocks is assumed to be defined earlier as the list of tickers of interest

with open('../data/moneychanel.json') as json_data:
    data = json.load(json_data)

len(data)
data[13]

entry = []
for i, row in tqdm_notebook(enumerate(data)):
    if len(row['Stock Include']) > 1:
        continue
    for stock in row['Stock Include']:
        if stock in target_stocks:
            entry.append([
                row['Date'],
                stock,
                row['Content']
            ])

df = pd.DataFrame.from_records(entry)
df[0] = pd.to_datetime(df[0], format='%Y-%m-%d')
df.columns = ['Date', 'Ticker', 'Text']
df.head()

df.to_csv('../data/moneychanel.csv', index=False)
_____no_output_____
Apache-2.0
EDA/news_stat.ipynb
pcrete/stock_prediction_using_contextual_information
Pantip
with open('../data/pantip.json') as json_data:
    data = json.load(json_data)

len(data)
data[3]
data[3]['date']
data[3]['stock']

text = data[3]['head'] + ' ' + data[3]['content']
text

for x in data[3]['comments']:
    text += x['message']
text

entry = []
for i, row in tqdm_notebook(enumerate(data)):
    if len(row['stock']) > 1:
        continue
    for stock in row['stock']:
        if stock in target_stocks:
            text = row['head'] + ' ' + row['content']
            for comment in row['comments']:
                text += comment['message']
            entry.append([
                row['date'],
                stock,
                text
            ])

df = pd.DataFrame.from_records(entry)
df[0] = pd.to_datetime(df[0], format='%Y-%m-%d')
df.columns = ['Date', 'Ticker', 'Text']
df.head()

df.to_csv('../data/pantip.csv', index=False)
_____no_output_____
Apache-2.0
EDA/news_stat.ipynb
pcrete/stock_prediction_using_contextual_information
Twitter
with open('../data/twitter.json') as json_data:
    data = json.load(json_data)

len(data)
data[0]

entry = []
for i, row in tqdm_notebook(enumerate(data)):
    if len(row['Stock Include']) > 1:
        continue
    for stock in row['Stock Include']:
        if stock in target_stocks:
            entry.append([
                row['date'],
                stock,
                row['text']
            ])

df = pd.DataFrame.from_records(entry)
df[0] = pd.to_datetime(df[0], format='%Y-%m-%d')
df.columns = ['Date', 'Ticker', 'Text']
df.head()

df.to_csv('../data/twitter.csv', index=False)
_____no_output_____
Apache-2.0
EDA/news_stat.ipynb
pcrete/stock_prediction_using_contextual_information
In-Class Coding Lab: Iterations

The goals of this lab are to help you to understand:

- How loops work.
- The difference between definite and indefinite loops, and when to use each.
- How to build an indefinite loop with complex exit conditions.
- How to create a program from a complex idea.

Understanding Iterations

Iterations permit us to repeat code until a Boolean expression is `False`. Iterations or **loops** allow us to write succinct, compact code. Here's an example, which counts to 3 before [Blitzing the Quarterback in backyard American Football](https://www.quora.com/What-is-the-significance-of-counting-one-Mississippi-two-Mississippi-and-so-on):
i = 1
while i <= 3:
    print(i, "Mississippi...")
    i = i + 1
print("Blitz!")
1 Mississippi... 2 Mississippi... 3 Mississippi... Blitz!
MIT
content/lessons/05/Class-Coding-Lab/CCL-Iterations.ipynb
MahopacHS/spring2019-rizzenM
Breaking it down...

The `while` statement on line 2 starts the loop. The code indented beneath it (lines 3-4) will repeat, in a linear fashion, until the Boolean expression on line 2, `i <= 3`, is `False`, at which time the program continues with line 5.

Some Terminology

We call `i <= 3` the loop's **exit condition**. The variable `i` inside the exit condition is the only thing that we can change to make the exit condition `False`, therefore it is the **loop control variable**. On line 4 we change the loop control variable by adding one to it; this is called an **increment**.

Furthermore, we know how many times this loop will execute before it actually runs: 3. Even if we allowed the user to enter a number, and looped that many times, we would still know (see the sketch below). We call this a **definite loop**. Whenever we iterate over a fixed number of values, regardless of whether those values are determined at run-time or not, we're using a definite loop.

If the loop control variable never forces the exit condition to be `False`, we have an **infinite loop**. As the name implies, an infinite loop never ends and typically causes our computer to crash or lock up.
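Before we look at the infinite loop example below, here's a quick sketch of the "user enters the number" case mentioned above; it is still a definite loop because the count is fixed before the loop starts:

```python
# still a definite loop: we know the count before the loop begins,
# even though the user supplies it at run-time
times = int(input("How many Mississippis? "))
i = 1
while i <= times:
    print(i, "Mississippi...")
    i = i + 1
print("Blitz!")
```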
## WARNING!!! INFINITE LOOP AHEAD
## IF YOU RUN THIS CODE YOU WILL NEED TO KILL YOUR BROWSER AND SHUT DOWN JUPYTER NOTEBOOK

i = 1
while i <= 3:
    print(i, "Mississippi...")
    # i = i + 1
print("Blitz!")
_____no_output_____
MIT
content/lessons/05/Class-Coding-Lab/CCL-Iterations.ipynb
MahopacHS/spring2019-rizzenM
For loops

To prevent an infinite loop when the loop is definite, we use the `for` statement. Here's the same program using `for`:
for i in range(1, 4):
    print(i, "Mississippi...")
print("Blitz!")
1 Mississippi... 2 Mississippi... 3 Mississippi... Blitz!
MIT
content/lessons/05/Class-Coding-Lab/CCL-Iterations.ipynb
MahopacHS/spring2019-rizzenM
One confusing aspect of this loop is `range(1,4)`: why does this loop from 1 to 3? Why not 1 to 4? It has to do with the fact that computers start counting at zero, and the stop value of `range` is exclusive. The easier way to understand it is that if you subtract the two numbers you get the number of times it will loop. So for example, 4 - 1 == 3.

Now Try It

In the space below, re-write the above program to count from 10 to 15. Note: How many times will that loop?
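A quick illustrative check of that behaviour (not part of the exercise below):

```python
# the stop value is exclusive, so range(1, 4) yields three values
print(list(range(1, 4)))  # [1, 2, 3]
# subtracting the two numbers gives the number of iterations: 9 - 5 == 4
print(list(range(5, 9)))  # [5, 6, 7, 8]
```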
# TODO Write code here
10 Mississippi... 11 Mississippi... 12 Mississippi... 13 Mississippi... 14 Mississippi... 15 Mississippi... Blitz!
MIT
content/lessons/05/Class-Coding-Lab/CCL-Iterations.ipynb
MahopacHS/spring2019-rizzenM
Indefinite loops

With **indefinite loops** we do not know how many times the program will execute. This is typically based on user action, and therefore our loop is subject to the whims of whoever interacts with it. Most applications like spreadsheets, photo editors, and games use indefinite loops. They'll run on your computer, seemingly forever, until you choose to quit the application.

The classic indefinite loop pattern involves getting input from the user inside the loop. We then inspect the input and based on that input we might exit the loop. Here's an example:
name = "" while name != 'mike': name = input("Say my name! : ") print("Nope, my name is not %s! " %(name))
Say my name! : rizzen Nope, my name is not rizzen! Say my name! : mike Nope, my name is not mike!
MIT
content/lessons/05/Class-Coding-Lab/CCL-Iterations.ipynb
MahopacHS/spring2019-rizzenM
The classic problem with indefinite loops is that it's really difficult to get the application's logic to line up with the exit condition. For example, we need to set `name = ""` on line 1 so that line 2 starts out as `True`. Also we have this wonky logic where when we say `'mike'` it still prints `Nope, my name is not mike!` before exiting.

Break statement

The solution to this problem is to use the break statement. **break** tells Python to exit the loop immediately. We then re-structure all of our indefinite loops to look like this:

```
while True:
    if exit-condition:
        break
```

Here's our program re-written with the break statement. This is the recommended way to write indefinite loops in this course.
while True:
    name = input("Say my name!: ")
    if name == 'mike':
        break
    print("Nope, my name is not %s!" % (name))
Say my name!: bill Nope, my name is not bill! Say my name!: dave Nope, my name is not dave! Say my name!: mike
MIT
content/lessons/05/Class-Coding-Lab/CCL-Iterations.ipynb
MahopacHS/spring2019-rizzenM
Multiple exit conditions

This indefinite loop pattern makes it easy to add additional exit conditions. For example, here's the program again, but it now stops when you say my name or type in 3 wrong names. Make sure to run this program a couple of times. First enter mike to exit the program, next enter the wrong name 3 times.
times = 0
while True:
    name = input("Say my name!: ")
    times = times + 1
    if name == 'mike':
        print("You got it!")
        break
    if times == 3:
        print("Game over. Too many tries!")
        break
    print("Nope, my name is not %s!" % (name))
Say my name!: mike You got it!
MIT
content/lessons/05/Class-Coding-Lab/CCL-Iterations.ipynb
MahopacHS/spring2019-rizzenM
Number sums

Let's conclude the lab with you writing your own program which uses an indefinite loop. We'll provide the to-do list, you write the code. This program should ask for floating point numbers as input and stop looping when **the total of the numbers entered is over 100**, or **more than 5 numbers have been entered**. Those are your two exit conditions. After the loop stops, print out the total of the numbers entered and the count of numbers entered.
## TO-DO List
#1 count = 0
#2 total = 0
#3 loop Indefinitely
#4   input a number
#5   increment count
#6   add number to total
#7   if count equals 5 stop looping
#8   if total greater than 100 stop looping
#9 print total and count

# Write Code here:
count = 0
total = 0
while True:
    number = float(input("enter a number:"))
    count = count + 1
    total = total + number
    if count == 5:
        break
    if total > 100:
        break
print("total:", total, "count:", count)
_____no_output_____
MIT
content/lessons/05/Class-Coding-Lab/CCL-Iterations.ipynb
MahopacHS/spring2019-rizzenM
data_processing.numeric

> Numeric related data processing

- toc: True
#export
import pandas as pd


# export
def moving_average(data_frame: pd.DataFrame = None, window: int = 7,
                   group_col: str = None, value_col: str = None,
                   shift=0) -> pd.DataFrame:
    df = data_frame.copy()
    ma_col = '{value_col}_ma{window}'.format(value_col=value_col, window=window)
    if group_col is None:
        df[ma_col] = df[value_col].rolling(window=window).mean().shift(periods=shift)
    else:
        df[ma_col] = df.groupby(group_col)[value_col].apply(
            lambda x: x.rolling(window=window).mean().shift(periods=shift))
    return df[ma_col]
_____no_output_____
Apache-2.0
notebooks/data_processing_numeric.ipynb
hirogen317/chamomile
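A hypothetical usage sketch for the `moving_average` helper above, with a small made-up dataframe (the `store` and `sales` column names are illustrative, not from the library):

```python
import pandas as pd

# small made-up daily sales table with two groups
df = pd.DataFrame({
    'store': ['A'] * 10 + ['B'] * 10,
    'sales': list(range(10)) + list(range(100, 110)),
})

# 3-day moving average computed within each store group;
# the returned Series is named 'sales_ma3'
ma = moving_average(data_frame=df, window=3, group_col='store', value_col='sales')
print(ma.head(6))
```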
Chapter 8: Neural Networks

Using the news article category classification task from Chapter 6 as the subject, implement a category classification model with a neural network. In this chapter, make use of machine learning platforms such as PyTorch, TensorFlow, or Chainer.

70. Features from summed word vectors

***

We want to convert the training, validation, and test data built in Problem 50 into matrices and vectors. For example, for the training data, we want to create a matrix $X$ in which the feature vectors $\boldsymbol{x}_i$ of all instances $x_i$ are stacked, and a matrix (vector) $Y$ of the gold labels.

$$X = \begin{pmatrix} \boldsymbol{x}_1 \\ \boldsymbol{x}_2 \\ \dots \\ \boldsymbol{x}_n \\ \end{pmatrix} \in \mathbb{R}^{n \times d},\quad Y = \begin{pmatrix} y_1 \\ y_2 \\ \dots \\ y_n \\ \end{pmatrix} \in \mathbb{N}^{n}$$

Here, $n$ is the number of instances in the training data, and $\boldsymbol x_i \in \mathbb{R}^d$ and $y_i \in \mathbb N$ are the feature vector and gold label of the $i \in \{1, \dots, n\}$-th instance, respectively. Note that this is a four-category classification over "business", "science and technology", "entertainment", and "health". If $\mathbb N_{<4}$ denotes the natural numbers less than $4$ (including $0$), the gold label $y_i$ of any instance can be represented as $y_i \in \mathbb N_{<4}$. In what follows, the number of label types is denoted by $L$ ($L=4$ for this classification task).

The feature vector $\boldsymbol x_i$ of the $i$-th instance is computed as

$$\boldsymbol x_i = \frac{1}{T_i} \sum_{t=1}^{T_i} \mathrm{emb}(w_{i,t})$$

where the $i$-th instance consists of the word sequence $(w_{i,1}, w_{i,2}, \dots, w_{i,T_i})$ of its article headline ($T_i$ words), and $\mathrm{emb}(w) \in \mathbb{R}^d$ is the word vector (of dimensionality $d$) corresponding to word $w$. That is, $\boldsymbol x_i$ represents the headline of the $i$-th instance as the average of the vectors of the words contained in the headline. For the word vectors, use the ones downloaded in Problem 60. Since we use $300$-dimensional word vectors, $d=300$.

The label $y_i$ of the $i$-th instance is defined as

$$y_i = \begin{cases}0 & (\text{if article } \boldsymbol x_i \text{ belongs to the "business" category}) \\ 1 & (\text{if article } \boldsymbol x_i \text{ belongs to the "science and technology" category}) \\ 2 & (\text{if article } \boldsymbol x_i \text{ belongs to the "entertainment" category}) \\ 3 & (\text{if article } \boldsymbol x_i \text{ belongs to the "health" category}) \\ \end{cases}$$

Any one-to-one mapping between category names and label numbers other than the above is also acceptable.

Based on the specification above, create the following matrices and vectors and save them to files:

+ Training data feature matrix: $X_{\rm train} \in \mathbb{R}^{N_t \times d}$
+ Training data label vector: $Y_{\rm train} \in \mathbb{N}^{N_t}$
+ Validation data feature matrix: $X_{\rm valid} \in \mathbb{R}^{N_v \times d}$
+ Validation data label vector: $Y_{\rm valid} \in \mathbb{N}^{N_v}$
+ Test data feature matrix: $X_{\rm test} \in \mathbb{R}^{N_e \times d}$
+ Test data label vector: $Y_{\rm test} \in \mathbb{N}^{N_e}$

Here, $N_t, N_v, N_e$ are the numbers of instances in the training, validation, and test data, respectively.
!wget https://archive.ics.uci.edu/ml/machine-learning-databases/00359/NewsAggregatorDataset.zip
!unzip NewsAggregatorDataset.zip
!wc -l ./newsCorpora.csv
!head -10 ./newsCorpora.csv

# replace double quotes with single quotes to avoid errors when reading
!sed -e 's/"/'\''/g' ./newsCorpora.csv > ./newsCorpora_re.csv

import pandas as pd
from sklearn.model_selection import train_test_split

# load the data
df = pd.read_csv('./newsCorpora_re.csv', header=None, sep='\t',
                 names=['ID', 'TITLE', 'URL', 'PUBLISHER', 'CATEGORY', 'STORY', 'HOSTNAME', 'TIMESTAMP'])

# extract the data
df = df.loc[df['PUBLISHER'].isin(['Reuters', 'Huffington Post', 'Businessweek', 'Contactmusic.com', 'Daily Mail']), ['TITLE', 'CATEGORY']]

# split the data
train, valid_test = train_test_split(df, test_size=0.2, shuffle=True, random_state=123, stratify=df['CATEGORY'])
valid, test = train_test_split(valid_test, test_size=0.5, shuffle=True, random_state=123, stratify=valid_test['CATEGORY'])

# check the number of instances
print('[Training data]')
print(train['CATEGORY'].value_counts())
print('[Validation data]')
print(valid['CATEGORY'].value_counts())
print('[Test data]')
print(test['CATEGORY'].value_counts())

train.to_csv('drive/My Drive/nlp100/data/train.tsv', index=False, sep='\t', header=False)
valid.to_csv('drive/My Drive/nlp100/data/valid.tsv', index=False, sep='\t', header=False)
test.to_csv('drive/My Drive/nlp100/data/test.tsv', index=False, sep='\t', header=False)

import gdown
from gensim.models import KeyedVectors

# download the pre-trained word vectors
url = "https://drive.google.com/uc?id=0B7XkCwpI5KDYNlNUTTlSS21pQmM"
output = 'GoogleNews-vectors-negative300.bin.gz'
gdown.download(url, output, quiet=True)

# load the downloaded file
model = KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin.gz', binary=True)

import string
import torch


def transform_w2v(text):
    table = str.maketrans(string.punctuation, ' ' * len(string.punctuation))
    words = text.translate(table).split()  # replace punctuation with spaces, then split on spaces into a list
    vec = [model[word] for word in words if word in model]  # vectorise word by word

    return torch.tensor(sum(vec) / len(vec))  # convert the average vector to a Tensor and return it


# create the feature vectors
X_train = torch.stack([transform_w2v(text) for text in train['TITLE']])
X_valid = torch.stack([transform_w2v(text) for text in valid['TITLE']])
X_test = torch.stack([transform_w2v(text) for text in test['TITLE']])

print(X_train.size())
print(X_train)

# create the label vectors
category_dict = {'b': 0, 't': 1, 'e': 2, 'm': 3}
y_train = torch.LongTensor(train['CATEGORY'].map(lambda x: category_dict[x]).values)
y_valid = torch.LongTensor(valid['CATEGORY'].map(lambda x: category_dict[x]).values)
y_test = torch.LongTensor(test['CATEGORY'].map(lambda x: category_dict[x]).values)

print(y_train.size())
print(y_train)

# save
torch.save(X_train, 'X_train.pt')
torch.save(X_valid, 'X_valid.pt')
torch.save(X_test, 'X_test.pt')
torch.save(y_train, 'y_train.pt')
torch.save(y_valid, 'y_valid.pt')
torch.save(y_test, 'y_test.pt')
_____no_output_____
MIT
chapter08.ipynb
fKVzGecnXYhM/nlp100
71. Prediction with a single-layer neural network

***

Load the matrices saved in Problem 70 and perform the following computations on the training data:

$$\hat{y}_1=softmax(x_1W),\\\hat{Y}=softmax(X_{[1:4]}W)$$

where $softmax$ is the softmax function and $X_{[1:4]}∈\mathbb{R}^{4×d}$ is the matrix obtained by vertically stacking the feature vectors $x_1$, $x_2$, $x_3$, $x_4$:

$$X_{[1:4]}=\begin{pmatrix}x_1\\x_2\\x_3\\x_4\end{pmatrix}$$

The matrix $W \in \mathbb{R}^{d \times L}$ is the weight matrix of the single-layer neural network; here it may be initialised with random values (it will be learned from Problem 73 onwards). Note that $\hat{\boldsymbol y_1} \in \mathbb{R}^L$ is the vector of probabilities of belonging to each category when instance $x_1$ is classified with the unlearned matrix $W$. Similarly, $\hat{Y} \in \mathbb{R}^{n \times L}$ expresses, as a matrix, the probabilities of belonging to each category for the training instances $x_1, x_2, x_3, x_4$.
from torch import nn

torch.manual_seed(0)


class SLPNet(nn.Module):
    def __init__(self, input_size, output_size):
        super().__init__()
        self.fc = nn.Linear(input_size, output_size, bias=False)  # Linear(input dim, output dim)
        nn.init.normal_(self.fc.weight, 0.0, 1.0)  # initialise the weights with normal random numbers

    def forward(self, x):
        x = self.fc(x)
        return x


model = SLPNet(300, 4)
y_hat_1 = torch.softmax(model.forward(X_train[:1]), dim=-1)
print(y_hat_1)

Y_hat = torch.softmax(model.forward(X_train[:4]), dim=-1)
print(Y_hat)
tensor([[0.4273, 0.0958, 0.2492, 0.2277], [0.2445, 0.2431, 0.0197, 0.4927], [0.7853, 0.1132, 0.0291, 0.0724], [0.5279, 0.2319, 0.0873, 0.1529]], grad_fn=<SoftmaxBackward>)
MIT
chapter08.ipynb
fKVzGecnXYhM/nlp100
72. Computing loss and gradients

***

Compute the cross-entropy loss and the gradients with respect to the matrix $W$ for the training instance $x_1$ and for the instance set $x_1$, $x_2$, $x_3$, $x_4$. The loss for an instance $x_i$ is computed as

$$l_i=−\log[\text{the probability that instance } x_i \text{ is classified as } y_i]$$

The cross-entropy loss for an instance set is the average of the losses of the instances in that set.
criterion = nn.CrossEntropyLoss()

l_1 = criterion(model.forward(X_train[:1]), y_train[:1])  # the input is the value before softmax
model.zero_grad()  # initialise the gradients to zero
l_1.backward()  # compute the gradients
print(f'Loss: {l_1:.4f}')
print(f'Gradient:\n{model.fc.weight.grad}')

l = criterion(model.forward(X_train[:4]), y_train[:4])
model.zero_grad()
l.backward()
print(f'Loss: {l:.4f}')
print(f'Gradient:\n{model.fc.weight.grad}')
Loss: 1.8321
Gradient:
tensor([[-0.0063, 0.0042, -0.0139, ..., -0.0272, 0.0201, 0.0263],
        [-0.0047, -0.0025, 0.0195, ..., 0.0196, 0.0160, 0.0009],
        [ 0.0184, -0.0110, -0.0148, ..., 0.0070, -0.0055, -0.0001],
        [-0.0074, 0.0092, 0.0092, ..., 0.0006, -0.0306, -0.0272]])
MIT
chapter08.ipynb
fKVzGecnXYhM/nlp100
73. Learning with stochastic gradient descent

***

Learn the matrix $W$ using stochastic gradient descent (SGD). Training may be terminated by any reasonable criterion (for example, "stop after 100 epochs").
from torch.utils.data import Dataset


class CreateDataset(Dataset):
    def __init__(self, X, y):  # specify the components of the dataset
        self.X = X
        self.y = y

    def __len__(self):  # specify the value returned by len(dataset)
        return len(self.y)

    def __getitem__(self, idx):  # specify the value returned by dataset[idx]
        if isinstance(idx, torch.Tensor):
            idx = idx.tolist()
        return [self.X[idx], self.y[idx]]


from torch.utils.data import DataLoader

dataset_train = CreateDataset(X_train, y_train)
dataset_valid = CreateDataset(X_valid, y_valid)
dataset_test = CreateDataset(X_test, y_test)

dataloader_train = DataLoader(dataset_train, batch_size=1, shuffle=True)
dataloader_valid = DataLoader(dataset_valid, batch_size=len(dataset_valid), shuffle=False)
dataloader_test = DataLoader(dataset_test, batch_size=len(dataset_test), shuffle=False)

print(len(dataset_train))
print(next(iter(dataloader_train)))

# define the model
model = SLPNet(300, 4)

# define the loss function
criterion = nn.CrossEntropyLoss()

# define the optimizer
optimizer = torch.optim.SGD(model.parameters(), lr=1e-1)

# training
num_epochs = 10
for epoch in range(num_epochs):
    # set the model to training mode
    model.train()
    loss_train = 0.0
    for i, (inputs, labels) in enumerate(dataloader_train):
        # initialise the gradients to zero
        optimizer.zero_grad()

        # forward pass + backpropagation + weight update
        outputs = model.forward(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # record the loss
        loss_train += loss.item()

    # compute the average loss per batch
    loss_train = loss_train / i

    # compute the loss on the validation data
    model.eval()
    with torch.no_grad():
        inputs, labels = next(iter(dataloader_valid))
        outputs = model.forward(inputs)
        loss_valid = criterion(outputs, labels)

    # print the log
    print(f'epoch: {epoch + 1}, loss_train: {loss_train:.4f}, loss_valid: {loss_valid:.4f}')
epoch: 1, loss_train: 0.4686, loss_valid: 0.3738 epoch: 2, loss_train: 0.3159, loss_valid: 0.3349 epoch: 3, loss_train: 0.2846, loss_valid: 0.3248 epoch: 4, loss_train: 0.2689, loss_valid: 0.3194 epoch: 5, loss_train: 0.2580, loss_valid: 0.3094 epoch: 6, loss_train: 0.2503, loss_valid: 0.3089 epoch: 7, loss_train: 0.2437, loss_valid: 0.3068 epoch: 8, loss_train: 0.2401, loss_valid: 0.3083 epoch: 9, loss_train: 0.2358, loss_valid: 0.3077 epoch: 10, loss_train: 0.2338, loss_valid: 0.3052
MIT
chapter08.ipynb
fKVzGecnXYhM/nlp100
74. Measuring accuracy

***

Using the matrix obtained in Problem 73, classify the instances of the training data and the test data, and compute the accuracy on each.
def calculate_accuracy(model, X, y):
    model.eval()
    with torch.no_grad():
        outputs = model(X)
        pred = torch.argmax(outputs, dim=-1)

    return (pred == y).sum().item() / len(y)


# check the accuracy
acc_train = calculate_accuracy(model, X_train, y_train)
acc_test = calculate_accuracy(model, X_test, y_test)
print(f'Accuracy (training data): {acc_train:.3f}')
print(f'Accuracy (test data): {acc_test:.3f}')
Accuracy (training data): 0.925
Accuracy (test data): 0.902
MIT
chapter08.ipynb
fKVzGecnXYhM/nlp100
75. Plotting loss and accuracy

***

Modify the code of Problem 75 so that, each time the parameter updates of an epoch are completed, the loss and accuracy on the training data and the loss and accuracy on the validation data are plotted on graphs, so that the progress of training can be monitored.
def calculate_loss_and_accuracy(model, criterion, loader):
    model.eval()
    loss = 0.0
    total = 0
    correct = 0
    with torch.no_grad():
        for inputs, labels in loader:
            outputs = model(inputs)
            loss += criterion(outputs, labels).item()
            pred = torch.argmax(outputs, dim=-1)
            total += len(inputs)
            correct += (pred == labels).sum().item()

    return loss / len(loader), correct / total


# define the model
model = SLPNet(300, 4)

# define the loss function
criterion = nn.CrossEntropyLoss()

# define the optimizer
optimizer = torch.optim.SGD(model.parameters(), lr=1e-1)

# training
num_epochs = 30
log_train = []
log_valid = []
for epoch in range(num_epochs):
    # set the model to training mode
    model.train()
    for i, (inputs, labels) in enumerate(dataloader_train):
        # initialise the gradients to zero
        optimizer.zero_grad()

        # forward pass + backpropagation + weight update
        outputs = model.forward(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

    # compute loss and accuracy
    loss_train, acc_train = calculate_loss_and_accuracy(model, criterion, dataloader_train)
    loss_valid, acc_valid = calculate_loss_and_accuracy(model, criterion, dataloader_valid)
    log_train.append([loss_train, acc_train])
    log_valid.append([loss_valid, acc_valid])

    # print the log
    print(f'epoch: {epoch + 1}, loss_train: {loss_train:.4f}, accuracy_train: {acc_train:.4f}, loss_valid: {loss_valid:.4f}, accuracy_valid: {acc_valid:.4f}')

import numpy as np
from matplotlib import pyplot as plt

# visualisation
fig, ax = plt.subplots(1, 2, figsize=(15, 5))
ax[0].plot(np.array(log_train).T[0], label='train')
ax[0].plot(np.array(log_valid).T[0], label='valid')
ax[0].set_xlabel('epoch')
ax[0].set_ylabel('loss')
ax[0].legend()
ax[1].plot(np.array(log_train).T[1], label='train')
ax[1].plot(np.array(log_valid).T[1], label='valid')
ax[1].set_xlabel('epoch')
ax[1].set_ylabel('accuracy')
ax[1].legend()
plt.show()
_____no_output_____
MIT
chapter08.ipynb
fKVzGecnXYhM/nlp100
76. Checkpoints

***

Modify the code of Problem 75 so that, each time the parameter updates of an epoch are completed, a checkpoint (the values of the parameters being learned, such as the weight matrix, and the internal state of the optimisation algorithm) is written to a file.
# define the model
model = SLPNet(300, 4)

# define the loss function
criterion = nn.CrossEntropyLoss()

# define the optimizer
optimizer = torch.optim.SGD(model.parameters(), lr=1e-1)

# training
num_epochs = 10
log_train = []
log_valid = []
for epoch in range(num_epochs):
    # set the model to training mode
    model.train()
    for inputs, labels in dataloader_train:
        # initialise the gradients to zero
        optimizer.zero_grad()

        # forward pass + backpropagation + weight update
        outputs = model.forward(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

    # compute loss and accuracy
    loss_train, acc_train = calculate_loss_and_accuracy(model, criterion, dataloader_train)
    loss_valid, acc_valid = calculate_loss_and_accuracy(model, criterion, dataloader_valid)
    log_train.append([loss_train, acc_train])
    log_valid.append([loss_valid, acc_valid])

    # save a checkpoint
    torch.save({'epoch': epoch,
                'model_state_dict': model.state_dict(),
                'optimizer_state_dict': optimizer.state_dict()},
               f'checkpoint{epoch + 1}.pt')

    # print the log
    print(f'epoch: {epoch + 1}, loss_train: {loss_train:.4f}, accuracy_train: {acc_train:.4f}, loss_valid: {loss_valid:.4f}, accuracy_valid: {acc_valid:.4f}')
epoch: 1, loss_train: 0.3281, accuracy_train: 0.8886, loss_valid: 0.3622, accuracy_valid: 0.8698 epoch: 2, loss_train: 0.2928, accuracy_train: 0.9040, loss_valid: 0.3351, accuracy_valid: 0.8832 epoch: 3, loss_train: 0.2638, accuracy_train: 0.9125, loss_valid: 0.3138, accuracy_valid: 0.8870 epoch: 4, loss_train: 0.2571, accuracy_train: 0.9131, loss_valid: 0.3097, accuracy_valid: 0.8892 epoch: 5, loss_train: 0.2450, accuracy_train: 0.9185, loss_valid: 0.3049, accuracy_valid: 0.8915 epoch: 6, loss_train: 0.2428, accuracy_train: 0.9194, loss_valid: 0.3054, accuracy_valid: 0.8952 epoch: 7, loss_train: 0.2400, accuracy_train: 0.9220, loss_valid: 0.3083, accuracy_valid: 0.8960 epoch: 8, loss_train: 0.2306, accuracy_train: 0.9232, loss_valid: 0.3035, accuracy_valid: 0.8967 epoch: 9, loss_train: 0.2293, accuracy_train: 0.9243, loss_valid: 0.3058, accuracy_valid: 0.8930 epoch: 10, loss_train: 0.2270, accuracy_train: 0.9254, loss_valid: 0.3054, accuracy_valid: 0.8952
MIT
chapter08.ipynb
fKVzGecnXYhM/nlp100
77. Mini-batching

***

Modify the code of Problem 76 so that the loss and gradients are computed over every $B$ instances and the values of the matrix $W$ are updated accordingly (mini-batching). Compare the time required for one epoch of training while varying $B$ as $1, 2, 4, 8, \dots$.
import time


def train_model(dataset_train, dataset_valid, batch_size, model, criterion, optimizer, num_epochs):
    # create the dataloaders
    dataloader_train = DataLoader(dataset_train, batch_size=batch_size, shuffle=True)
    dataloader_valid = DataLoader(dataset_valid, batch_size=len(dataset_valid), shuffle=False)

    # training
    log_train = []
    log_valid = []
    for epoch in range(num_epochs):
        # record the start time
        s_time = time.time()

        # set the model to training mode
        model.train()
        for inputs, labels in dataloader_train:
            # initialise the gradients to zero
            optimizer.zero_grad()

            # forward pass + backpropagation + weight update
            outputs = model.forward(inputs)
            loss = criterion(outputs, labels)
            loss.backward()
            optimizer.step()

        # compute loss and accuracy
        loss_train, acc_train = calculate_loss_and_accuracy(model, criterion, dataloader_train)
        loss_valid, acc_valid = calculate_loss_and_accuracy(model, criterion, dataloader_valid)
        log_train.append([loss_train, acc_train])
        log_valid.append([loss_valid, acc_valid])

        # save a checkpoint
        torch.save({'epoch': epoch,
                    'model_state_dict': model.state_dict(),
                    'optimizer_state_dict': optimizer.state_dict()},
                   f'checkpoint{epoch + 1}.pt')

        # record the end time
        e_time = time.time()

        # print the log
        print(f'epoch: {epoch + 1}, loss_train: {loss_train:.4f}, accuracy_train: {acc_train:.4f}, loss_valid: {loss_valid:.4f}, accuracy_valid: {acc_valid:.4f}, {(e_time - s_time):.4f}sec')

    return {'train': log_train, 'valid': log_valid}


# create the datasets
dataset_train = CreateDataset(X_train, y_train)
dataset_valid = CreateDataset(X_valid, y_valid)

# define the model
model = SLPNet(300, 4)

# define the loss function
criterion = nn.CrossEntropyLoss()

# define the optimizer
optimizer = torch.optim.SGD(model.parameters(), lr=1e-1)

# train the model with different batch sizes
for batch_size in [2 ** i for i in range(11)]:
    print(f'Batch size: {batch_size}')
    log = train_model(dataset_train, dataset_valid, batch_size, model, criterion, optimizer, 1)
Batch size: 1
epoch: 1, loss_train: 0.3310, accuracy_train: 0.8858, loss_valid: 0.3579, accuracy_valid: 0.8795, 4.1217sec
Batch size: 2
epoch: 1, loss_train: 0.2985, accuracy_train: 0.8967, loss_valid: 0.3289, accuracy_valid: 0.8907, 2.3251sec
Batch size: 4
epoch: 1, loss_train: 0.2895, accuracy_train: 0.9000, loss_valid: 0.3226, accuracy_valid: 0.8900, 1.2911sec
Batch size: 8
epoch: 1, loss_train: 0.2870, accuracy_train: 0.9003, loss_valid: 0.3213, accuracy_valid: 0.8870, 0.7291sec
Batch size: 16
epoch: 1, loss_train: 0.2843, accuracy_train: 0.9027, loss_valid: 0.3189, accuracy_valid: 0.8915, 0.4637sec
Batch size: 32
epoch: 1, loss_train: 0.2833, accuracy_train: 0.9029, loss_valid: 0.3182, accuracy_valid: 0.8937, 0.3330sec
Batch size: 64
epoch: 1, loss_train: 0.2829, accuracy_train: 0.9028, loss_valid: 0.3180, accuracy_valid: 0.8930, 0.2453sec
Batch size: 128
epoch: 1, loss_train: 0.2822, accuracy_train: 0.9029, loss_valid: 0.3179, accuracy_valid: 0.8930, 0.2005sec
Batch size: 256
epoch: 1, loss_train: 0.2837, accuracy_train: 0.9028, loss_valid: 0.3178, accuracy_valid: 0.8930, 0.1747sec
Batch size: 512
epoch: 1, loss_train: 0.2823, accuracy_train: 0.9028, loss_valid: 0.3178, accuracy_valid: 0.8930, 0.1724sec
Batch size: 1024
epoch: 1, loss_train: 0.2869, accuracy_train: 0.9028, loss_valid: 0.3178, accuracy_valid: 0.8930, 0.1432sec
MIT
chapter08.ipynb
fKVzGecnXYhM/nlp100
78. Training on a GPU

***

Modify the code of Problem 77 so that training runs on a GPU.
def calculate_loss_and_accuracy(model, criterion, loader, device):
    model.eval()
    loss = 0.0
    total = 0
    correct = 0
    with torch.no_grad():
        for inputs, labels in loader:
            inputs = inputs.to(device)
            labels = labels.to(device)
            outputs = model(inputs)
            loss += criterion(outputs, labels).item()
            pred = torch.argmax(outputs, dim=-1)
            total += len(inputs)
            correct += (pred == labels).sum().item()

    return loss / len(loader), correct / total


def train_model(dataset_train, dataset_valid, batch_size, model, criterion, optimizer, num_epochs, device=None):
    # send the model to the GPU
    model.to(device)

    # create the dataloaders
    dataloader_train = DataLoader(dataset_train, batch_size=batch_size, shuffle=True)
    dataloader_valid = DataLoader(dataset_valid, batch_size=len(dataset_valid), shuffle=False)

    # training
    log_train = []
    log_valid = []
    for epoch in range(num_epochs):
        # record the start time
        s_time = time.time()

        # set the model to training mode
        model.train()
        for inputs, labels in dataloader_train:
            # initialise the gradients to zero
            optimizer.zero_grad()

            # forward pass + backpropagation + weight update
            inputs = inputs.to(device)
            labels = labels.to(device)
            outputs = model.forward(inputs)
            loss = criterion(outputs, labels)
            loss.backward()
            optimizer.step()

        # compute loss and accuracy
        loss_train, acc_train = calculate_loss_and_accuracy(model, criterion, dataloader_train, device)
        loss_valid, acc_valid = calculate_loss_and_accuracy(model, criterion, dataloader_valid, device)
        log_train.append([loss_train, acc_train])
        log_valid.append([loss_valid, acc_valid])

        # save a checkpoint
        torch.save({'epoch': epoch,
                    'model_state_dict': model.state_dict(),
                    'optimizer_state_dict': optimizer.state_dict()},
                   f'checkpoint{epoch + 1}.pt')

        # record the end time
        e_time = time.time()

        # print the log
        print(f'epoch: {epoch + 1}, loss_train: {loss_train:.4f}, accuracy_train: {acc_train:.4f}, loss_valid: {loss_valid:.4f}, accuracy_valid: {acc_valid:.4f}, {(e_time - s_time):.4f}sec')

    return {'train': log_train, 'valid': log_valid}


# create the datasets
dataset_train = CreateDataset(X_train, y_train)
dataset_valid = CreateDataset(X_valid, y_valid)

# define the model
model = SLPNet(300, 4)

# define the loss function
criterion = nn.CrossEntropyLoss()

# define the optimizer
optimizer = torch.optim.SGD(model.parameters(), lr=1e-1)

# specify the device
device = torch.device('cuda')

for batch_size in [2 ** i for i in range(11)]:
    print(f'Batch size: {batch_size}')
    log = train_model(dataset_train, dataset_valid, batch_size, model, criterion, optimizer, 1, device=device)
Batch size: 1
epoch: 1, loss_train: 0.3322, accuracy_train: 0.8842, loss_valid: 0.3676, accuracy_valid: 0.8780, 10.2910sec
Batch size: 2
epoch: 1, loss_train: 0.3038, accuracy_train: 0.8983, loss_valid: 0.3469, accuracy_valid: 0.8840, 5.0635sec
Batch size: 4
epoch: 1, loss_train: 0.2929, accuracy_train: 0.9013, loss_valid: 0.3390, accuracy_valid: 0.8832, 2.5709sec
Batch size: 8
epoch: 1, loss_train: 0.2885, accuracy_train: 0.9024, loss_valid: 0.3352, accuracy_valid: 0.8877, 1.3670sec
Batch size: 16
epoch: 1, loss_train: 0.2865, accuracy_train: 0.9038, loss_valid: 0.3334, accuracy_valid: 0.8855, 0.7702sec
Batch size: 32
epoch: 1, loss_train: 0.2857, accuracy_train: 0.9039, loss_valid: 0.3329, accuracy_valid: 0.8855, 0.4686sec
Batch size: 64
epoch: 1, loss_train: 0.2851, accuracy_train: 0.9041, loss_valid: 0.3327, accuracy_valid: 0.8855, 0.3011sec
Batch size: 128
epoch: 1, loss_train: 0.2845, accuracy_train: 0.9041, loss_valid: 0.3325, accuracy_valid: 0.8855, 0.2226sec
Batch size: 256
epoch: 1, loss_train: 0.2850, accuracy_train: 0.9041, loss_valid: 0.3325, accuracy_valid: 0.8855, 0.1862sec
Batch size: 512
epoch: 1, loss_train: 0.2849, accuracy_train: 0.9041, loss_valid: 0.3324, accuracy_valid: 0.8855, 0.1551sec
Batch size: 1024
epoch: 1, loss_train: 0.2847, accuracy_train: 0.9041, loss_valid: 0.3324, accuracy_valid: 0.8855, 0.1477sec
MIT
chapter08.ipynb
fKVzGecnXYhM/nlp100
79. Multi-layer neural network

***

Modify the code of Problem 78 and build a high-performance category classifier while changing the shape of the neural network, for example by introducing bias terms and adding more layers.
from torch.nn import functional as F


class MLPNet(nn.Module):
    def __init__(self, input_size, mid_size, output_size, mid_layers):
        super().__init__()
        self.mid_layers = mid_layers
        self.fc = nn.Linear(input_size, mid_size)
        self.fc_mid = nn.Linear(mid_size, mid_size)
        self.fc_out = nn.Linear(mid_size, output_size)
        self.bn = nn.BatchNorm1d(mid_size)

    def forward(self, x):
        x = F.relu(self.fc(x))
        for _ in range(self.mid_layers):
            x = F.relu(self.bn(self.fc_mid(x)))
        x = F.relu(self.fc_out(x))
        return x


from torch import optim


def calculate_loss_and_accuracy(model, criterion, loader, device):
    model.eval()
    loss = 0.0
    total = 0
    correct = 0
    with torch.no_grad():
        for inputs, labels in loader:
            inputs = inputs.to(device)
            labels = labels.to(device)
            outputs = model(inputs)
            loss += criterion(outputs, labels).item()
            pred = torch.argmax(outputs, dim=-1)
            total += len(inputs)
            correct += (pred == labels).sum().item()

    return loss / len(loader), correct / total


def train_model(dataset_train, dataset_valid, batch_size, model, criterion, optimizer, num_epochs, device=None):
    # send the model to the GPU
    model.to(device)

    # create the dataloaders
    dataloader_train = DataLoader(dataset_train, batch_size=batch_size, shuffle=True)
    dataloader_valid = DataLoader(dataset_valid, batch_size=len(dataset_valid), shuffle=False)

    # set up the scheduler
    scheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, num_epochs, eta_min=1e-5, last_epoch=-1)

    # training
    log_train = []
    log_valid = []
    for epoch in range(num_epochs):
        # record the start time
        s_time = time.time()

        # set the model to training mode
        model.train()
        for inputs, labels in dataloader_train:
            # initialise the gradients to zero
            optimizer.zero_grad()

            # forward pass + backpropagation + weight update
            inputs = inputs.to(device)
            labels = labels.to(device)
            outputs = model.forward(inputs)
            loss = criterion(outputs, labels)
            loss.backward()
            optimizer.step()

        # compute loss and accuracy
        loss_train, acc_train = calculate_loss_and_accuracy(model, criterion, dataloader_train, device)
        loss_valid, acc_valid = calculate_loss_and_accuracy(model, criterion, dataloader_valid, device)
        log_train.append([loss_train, acc_train])
        log_valid.append([loss_valid, acc_valid])

        # save a checkpoint
        torch.save({'epoch': epoch,
                    'model_state_dict': model.state_dict(),
                    'optimizer_state_dict': optimizer.state_dict()},
                   f'checkpoint{epoch + 1}.pt')

        # record the end time
        e_time = time.time()

        # print the log
        print(f'epoch: {epoch + 1}, loss_train: {loss_train:.4f}, accuracy_train: {acc_train:.4f}, loss_valid: {loss_valid:.4f}, accuracy_valid: {acc_valid:.4f}, {(e_time - s_time):.4f}sec')

        # stop training if the validation loss has not decreased for 3 consecutive epochs
        if epoch > 2 and log_valid[epoch - 3][0] <= log_valid[epoch - 2][0] <= log_valid[epoch - 1][0] <= log_valid[epoch][0]:
            break

        # advance the scheduler by one step
        scheduler.step()

    return {'train': log_train, 'valid': log_valid}


# create the datasets
dataset_train = CreateDataset(X_train, y_train)
dataset_valid = CreateDataset(X_valid, y_valid)

# define the model
model = MLPNet(300, 200, 4, 1)

# define the loss function
criterion = nn.CrossEntropyLoss()

# define the optimizer
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

# specify the device
device = torch.device('cuda')

log = train_model(dataset_train, dataset_valid, 64, model, criterion, optimizer, 1000, device)

# visualisation
fig, ax = plt.subplots(1, 2, figsize=(15, 5))
ax[0].plot(np.array(log['train']).T[0], label='train')
ax[0].plot(np.array(log['valid']).T[0], label='valid')
ax[0].set_xlabel('epoch')
ax[0].set_ylabel('loss')
ax[0].legend()
ax[1].plot(np.array(log['train']).T[1], label='train')
ax[1].plot(np.array(log['valid']).T[1], label='valid')
ax[1].set_xlabel('epoch')
ax[1].set_ylabel('accuracy')
ax[1].legend()
plt.show()


def calculate_accuracy(model, X, y, device):
    model.eval()
    with torch.no_grad():
        inputs = X.to(device)
        outputs = model(inputs)
        pred = torch.argmax(outputs, dim=-1).cpu()

    return (pred == y).sum().item() / len(y)


# check the accuracy
acc_train = calculate_accuracy(model, X_train, y_train, device)
acc_test = calculate_accuracy(model, X_test, y_test, device)
print(f'Accuracy (training data): {acc_train:.3f}')
print(f'Accuracy (test data): {acc_test:.3f}')
Accuracy (train data): 0.960 Accuracy (test data): 0.913
MIT
chapter08.ipynb
fKVzGecnXYhM/nlp100
Analyse a series Under construction
import os import pandas as pd from IPython.display import Image as DImage from IPython.core.display import display, HTML import series_details # Plotly helps us make pretty charts import plotly.offline as py import plotly.graph_objs as go # Make sure data directory exists os.makedirs('../../data/RecordSearch/images', exist_ok=True) # This lets Plotly draw charts in cells py.init_notebook_mode()
_____no_output_____
MIT
recordsearch/2-Analyse-a-series.ipynb
GLAM-Workbench/glam-workbench-presentations
This notebook is for analysing a series that you've already harvested. If you haven't harvested any data yet, then you need to go back to the ['Harvesting a series' notebook](Harvesting series.ipynb).
# What series do you want to analyse? # Insert the series id between the quotes. series = 'J2483' # Load the CSV data for the specified series into a dataframe. Parse the dates as dates! df = pd.read_csv('../data/RecordSearch/{}.csv'.format(series.replace('/', '-')), parse_dates=['start_date', 'end_date'])
_____no_output_____
MIT
recordsearch/2-Analyse-a-series.ipynb
GLAM-Workbench/glam-workbench-presentations
Remember that you can download harvested data from the workbench [data directory](../data/RecordSearch). Get some summary data We're going to create a simple summary of some of the main characteristics of the series, as reflected in the harvested files.
# We're going to assemble some summary data about the series in a 'summary' dictionary # Let's create the dictionary and add the series identifier summary = {'series': series} # The 'shape' property returns the number of rows and columns. So 'shape[0]' gives us the number of items harvested. summary['total_items'] = df.shape[0] print(summary['total_items']) # Get the frequency of the different access status categories summary['access_counts'] = df['access_status'].value_counts().to_dict() print(summary['access_counts']) # Get the number of files that have been digitised summary['digitised_files'] = len(df.loc[df['digitised_status'] == True]) print(summary['digitised_files']) # Get the number of individual pages that have been digitised summary['digitised_pages'] = df['digitised_pages'].sum() print(summary['digitised_pages']) # Get the earliest start date start = df['start_date'].min() try: summary['date_from'] = start.year except AttributeError: summary['date_from'] = None print(summary['date_from']) # Get the latest end date end = df['end_date'].max() try: summary['date_to'] = end.year except AttributeError: summary['date_to'] = None print(summary['date_to']) # Let's display all the summary data print('SERIES: {}'.format(summary['series'])) print('Number of items: {:,}'.format(summary['total_items'])) print('Access status:') for status, total in summary['access_counts'].items(): print(' {}: {:,}'.format(status, total)) print('Contents dates: {} to {}'.format(summary['date_from'], summary['date_to'])) print('Digitised files: {:,}'.format(summary['digitised_files'])) print('Digitised pages: {:,}'.format(summary['digitised_pages']))
_____no_output_____
MIT
recordsearch/2-Analyse-a-series.ipynb
GLAM-Workbench/glam-workbench-presentations
Note that a slightly enhanced version of the code above is available in the `series_details` module that you can import into any notebook. So to create a summary of a series you can just:
# Import the module import series_details # Call display_series() providing the series name and the dataframe series_details.display_summary(series, df)
_____no_output_____
MIT
recordsearch/2-Analyse-a-series.ipynb
GLAM-Workbench/glam-workbench-presentations
Plot the contents dates Plotting the dates is a bit tricky. Each file can have both a start date and an end date. So if we want to plot the years covered by a file, we need to include all the years between the start and end dates. Also, dates can be recorded at different levels of granularity, from specific days to just years. And sometimes there are no end dates recorded at all – what does this mean? The code in the cell below does a few things:* It fills any empty end dates with the start date from the same item. This probably means some content years will be missed, but it's the only date we can be certain of.* It loops through all the rows in the dataframe, then for each row it extracts the years between the start and end date. Currently this looks to see if 1 January is covered by the date range, so if there's an exact start date after 1 January I don't think it will be captured. I need to investigate this further.* It combines all of the years into one big series and then totals up the frequency of each year. I'm sure this is not perfect, but it seems to produce useful results.
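Before the original cell below, here is a hedged alternative sketch for the year extraction: instead of checking which 1 Januarys fall inside the range, it simply takes every calendar year from start_date.year through end_date.year, so an item whose exact start date falls after 1 January still contributes its starting year. It assumes the df loaded earlier, with parsed start_date and end_date columns.

import pandas as pd

# Hedged alternative to the date_range approach in the next cell: take every
# calendar year from start_date.year through end_date.year (inclusive).
# Assumes df has already been loaded with parsed 'start_date'/'end_date'.
def year_span(row):
    if pd.isnull(row.start_date):
        return []
    end = row.end_date if pd.notnull(row.end_date) else row.start_date
    return list(range(row.start_date.year, end.year + 1))

all_years = [year for row in df.itertuples(index=False) for year in year_span(row)]
year_counts_alt = pd.Series(all_years).value_counts().sort_index()
year_counts_alt.head()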
# Fill any blank end dates with start dates df['end_date'] = df[['end_date']].apply(lambda x: x.fillna(value=df['start_date'])) # This is a bit tricky. # For each item we want to find the years that it has content from -- ie start_year <= year <= end_year. # Then we want to put all the years from all the items together and look at their frequency years = [] for row in df.itertuples(index=False): try: years_in_range = pd.date_range(start=row.start_date, end=row.end_date, freq='AS').year.to_series() except ValueError: # No start date pass else: years.append(years_in_range) year_counts = pd.concat(years).value_counts() # Put the resulting series in a dataframe so it looks pretty. year_totals = pd.DataFrame(year_counts) # Sort results by year year_totals.sort_index(inplace=True) # Display the results year_totals.style.format({0: '{:,}'}) # Let's graph the frequency of content years plotly_data = [go.Bar( x=year_totals.index.values, # The years are the index y=year_totals[0] )] # Add some labels layout = go.Layout( title='Content dates', xaxis=dict( title='Year' ), yaxis=dict( title='Number of items' ) ) # Create a chart fig = go.Figure(data=plotly_data, layout=layout) py.iplot(fig, filename='series-dates-bar')
_____no_output_____
MIT
recordsearch/2-Analyse-a-series.ipynb
GLAM-Workbench/glam-workbench-presentations
Note that a slightly enhanced version of the code above is available in the series_details module that you can import into any notebook. So to plot the content dates of a series you can just:
# Import the module import series_details # Call plot_series() providing the series name and the dataframe fig = series_details.plot_dates(df) py.iplot(fig)
_____no_output_____
MIT
recordsearch/2-Analyse-a-series.ipynb
GLAM-Workbench/glam-workbench-presentations
Filter by words in file titles
# Find titles containing a particular phrase -- in this case 'wife'
# This creates a new dataframe
# Try changing this to filter for other words
search_term = 'wife'
df_filtered = df.loc[df['title'].str.contains(search_term, case=False)].copy()
df_filtered

# We can plot this filtered dataframe just like the series
fig = series_details.plot_dates(df_filtered)
py.iplot(fig)

# Save the new dataframe as a csv
df_filtered.to_csv('../data/RecordSearch/{}-{}.csv'.format(series.replace('/', '-'), search_term))

# Find titles containing one of two words -- ie an OR statement
# Try changing this to filter for other words
df_filtered = df.loc[df['title'].str.contains('chinese', case=False) | df['title'].str.contains(r'\bah\b', case=False)].copy()
df_filtered
_____no_output_____
MIT
recordsearch/2-Analyse-a-series.ipynb
GLAM-Workbench/glam-workbench-presentations
Filter by date range
start_year = '1920' end_year = '1930' df_filtered = df[(df['start_date'] >= start_year) & (df['end_date'] <= end_year)] df_filtered
_____no_output_____
MIT
recordsearch/2-Analyse-a-series.ipynb
GLAM-Workbench/glam-workbench-presentations
N-gram frequencies in file titles
# Import TextBlob for text analysis
from textblob import TextBlob
import nltk

stopwords = nltk.corpus.stopwords.words('english')

# Combine all of the file titles into a single string
title_text = df['title'].str.lower().str.cat(sep=' ')

blob = TextBlob(title_text)
words = [[word, count] for word, count in blob.lower().word_counts.items() if word not in stopwords]
word_counts = pd.DataFrame(words).rename({0: 'word', 1: 'count'}, axis=1).sort_values(by='count', ascending=False)
word_counts[:25].style.format({'count': '{:,}'}).bar(subset=['count'], color='#d65f5f').set_properties(subset=['count'], **{'width': '300px'})

def get_ngram_counts(text, size):
    blob = TextBlob(text)
    # Extract n-grams as WordLists, then convert to a list of strings
    ngrams = [' '.join(ngram).lower() for ngram in blob.lower().ngrams(size)]
    # Convert to dataframe then count values and rename columns
    ngram_counts = pd.DataFrame(ngrams)[0].value_counts().rename_axis('ngram').reset_index(name='count')
    return ngram_counts

def display_top_ngrams(text, size):
    ngram_counts = get_ngram_counts(text, size)
    # Display top 25 results as a bar chart
    display(ngram_counts[:25].style.format({'count': '{:,}'}).bar(subset=['count'], color='#d65f5f').set_properties(subset=['count'], **{'width': '300px'}))

display_top_ngrams(title_text, 2)
display_top_ngrams(title_text, 4)
_____no_output_____
MIT
recordsearch/2-Analyse-a-series.ipynb
GLAM-Workbench/glam-workbench-presentations
Pyopenssl [Official documentation](https://www.pyopenssl.org/) Generating a private and public key with openssl [Reference](https://blog.csdn.net/huanhuanq1209/article/details/80899017)> openssl > genrsa -out private.pem 1024 > rsa -in private.pem -pubout -out public.pem Signing example
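If you prefer not to shell out to the openssl command line, the key pair can also be generated programmatically. A minimal sketch with pyOpenSSL's crypto API follows; it writes the same private.pem / public.pem files the signing code below expects (regenerating the keys will of course produce different signatures from the ones recorded further down).

# Minimal sketch: generate an RSA key pair with pyOpenSSL instead of the CLI,
# writing the private.pem / public.pem files used by the signing code below.
from OpenSSL import crypto

key = crypto.PKey()
key.generate_key(crypto.TYPE_RSA, 2048)  # 1024 matches the CLI example above, but 2048+ is safer

with open('private.pem', 'wb') as f:
    f.write(crypto.dump_privatekey(crypto.FILETYPE_PEM, key))

with open('public.pem', 'wb') as f:
    f.write(crypto.dump_publickey(crypto.FILETYPE_PEM, key))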
import OpenSSL from OpenSSL._util import lib as _lib FILETYPE_PEM = _lib.SSL_FILETYPE_PEM import base64 def makeSign(message): order = sorted(message) sign_str = "" for key in order: sign_str = sign_str + "&{0}={1}".format(key,message[key]) sign_str = sign_str[1:] print("待签名的串 == 》 %s"%sign_str) with open('private.pem','rb') as f: pkey = OpenSSL.crypto.load_privatekey(FILETYPE_PEM, buffer=f.read()) sign = OpenSSL.crypto.sign(pkey, bytes(sign_str,encoding="utf-8"), "sha256") print("签名结果 ==》 %s"%sign) sign = base64.b64encode(sign) print("签名结果(base64编码) ==》 %s"%sign) return sign,sign_str sign,sign_str = makeSign({ "method":"any", "name":"Baird", "sex":"male", "mobile":"18300010001" })
待签名的串 == 》 method=any&mobile=18300010001&name=Baird&sex=male 签名结果 ==》 b'\x84\xb9\xae\xc3{\xfb"\xb5\x9fA\x02\x9bZ\x16g\xd5\x90`\x1e\xc6\x87\xef\xb1\xef\xb3\x8a\xb7\xbc\xc3\x0e\xab45T\xfaK\x02\xc25\x82\xbag\xb9\x94\t\x8c\xc8\x0f\xe9\x81\xd7U\x80\xd6\xf9\x871q>V\xdfn\x0b\x8e\xac\x8a\xab#B\xab\xf3\xc6\xfaM\xc4\x95$\xa7\xef*J\xd1~\x803\x14G\x80\x8d\x16\xbd4\xa5w\xf5\x03E\xb1\xffb\x99\x97#U3\x17\xd0\x98n\x89\xe9\xe5\x7f\x9f\x97\xde\x04\xc6\xa3p\xc3\x0f{\x01XZ\xc6\xcd\x84\x8be\xb3\xdd\x1cI\x87$\r\xfb\xe4\x85\x18\xc6\xbc\xfb\xed\xc3tl\xfe\xab{\x87\xd4|p0\x95\xd2!\x94\x80\x00\x8e0\xfdy\xb3\x1e({+\xb6\xd9D3\xd3W\xc0\xbe3\x05\xc6Y\x13\x84",\xef0\xdf\xdb\x15\x8b\xb1g\xe8\xc9\xa1\xbfQ\xd9\x12#\x92 S\xbe\xcbK\xc2\x17\xc5\xb9\x08\xbbp\xec\xedk\xf6\x82\xd4 \xa8\x91\ry\xc6A\xedK\\\x03\xafx\xaf7\xf8z%\xbcV1\xedu\xea$\xfeq\x84fDtUz' 签名结果(base64编码) ==》 b'hLmuw3v7IrWfQQKbWhZn1ZBgHsaH77Hvs4q3vMMOqzQ1VPpLAsI1grpnuZQJjMgP6YHXVYDW+YcxcT5W324LjqyKqyNCq/PG+k3ElSSn7ypK0X6AMxRHgI0WvTSld/UDRbH/YpmXI1UzF9CYbonp5X+fl94ExqNwww97AVhaxs2Ei2Wz3RxJhyQN++SFGMa8++3DdGz+q3uH1HxwMJXSIZSAAI4w/XmzHih7K7bZRDPTV8C+MwXGWROEIizvMN/bFYuxZ+jJob9R2RIjkiBTvstLwhfFuQi7cOzta/aC1CCokQ15xkHtS1wDr3ivN/h6JbxWMe116iT+cYRmRHRVeg=='
Apache-2.0
python/modules/jupyter/Pyopenssl.ipynb
HHW-zhou/snippets
Signature verification example
def makeVerify(sign, sign_str):
    sign = base64.b64decode(sign)
    with open("public.pem", "rb") as f:
        pubkey = OpenSSL.crypto.load_publickey(FILETYPE_PEM, buffer=f.read())
    x509 = OpenSSL.crypto.X509()
    x509.set_pubkey(pubkey)
    # verify() returns None when the signature is valid, otherwise it raises an error
    try:
        OpenSSL.crypto.verify(x509, sign, bytes(sign_str, encoding="utf-8"), 'sha256')
    except Exception as e:
        return e
    return True

result = makeVerify(sign, sign_str)
result2 = makeVerify(sign, "hello world")
result, result2
_____no_output_____
Apache-2.0
python/modules/jupyter/Pyopenssl.ipynb
HHW-zhou/snippets
VacationPy---- Note* Keep an eye on your API usage. Use https://developers.google.com/maps/reporting/gmp-reporting as reference for how to monitor your usage and billing.* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
# Dependencies and Setup import matplotlib.pyplot as plt import pandas as pd import numpy as np import requests import gmaps import os # Import API key from api_keys import g_key
_____no_output_____
ADSL
starter_code/.ipynb_checkpoints/VacationPy-checkpoint.ipynb
jackaloppy/python-api-challenge
Store Part I results into DataFrame* Load the csv exported in Part I to a DataFrame
cities_df = pd.read_csv('../output_data/cities.csv') cities_df.dropna(inplace = True) cities_df.head()
_____no_output_____
ADSL
starter_code/.ipynb_checkpoints/VacationPy-checkpoint.ipynb
jackaloppy/python-api-challenge
Humidity Heatmap* Configure gmaps.* Use the Lat and Lng as locations and Humidity as the weight.* Add Heatmap layer to map.
gmaps.configure(api_key=g_key) locations = cities_df[["Lat", "Lng"]] humidity = cities_df["Humidity"] fig = gmaps.figure() heat_layer = gmaps.heatmap_layer(locations, weights=humidity, dissipating=False, max_intensity=150, point_radius=3) fig.add_layer(heat_layer) fig
_____no_output_____
ADSL
starter_code/.ipynb_checkpoints/VacationPy-checkpoint.ipynb
jackaloppy/python-api-challenge
Create new DataFrame fitting weather criteria* Narrow down the cities to fit weather conditions.* Drop any rows with null values.
ideal_df = cities_df[cities_df["Max Temp"].lt(80) & cities_df["Max Temp"].gt(70) & cities_df["Wind Speed"].lt(10) & cities_df["Cloudiness"].eq(0) & cities_df["Humidity"].lt(80) & cities_df["Humidity"].gt(30)] ideal_df
_____no_output_____
ADSL
starter_code/.ipynb_checkpoints/VacationPy-checkpoint.ipynb
jackaloppy/python-api-challenge
Hotel Map* Store into a variable named `hotel_df`.* Add a "Hotel Name" column to the DataFrame.* Set parameters to search for hotels within 5000 meters.* Hit the Google Places API for each city's coordinates.* Store the first Hotel result into the DataFrame.* Plot markers on top of the heatmap.
hotel_df = ideal_df[["City", "Lat", "Lng", "Country"]].reset_index(drop=True) hotel_df["Hotel Name"] = "" params = { "radius": 5000, "types": "lodging", "keyword": "hotel", "key": g_key } for index, row in hotel_df.iterrows(): lat = row["Lat"] lng = row["Lng"] params["location"] = f"{lat},{lng}" base_url = "https://maps.googleapis.com/maps/api/place/nearbysearch/json" name_address = requests.get(base_url, params=params).json() try: hotel_df.loc[index, "Hotel Name"] = name_address["results"][0]["name"] except (KeyError, IndexError): hotel_df.loc[index, "Hotel Name"] = "NA" print("Couldn't find a hotel here at " + row["City"] + ", " + row["Country"]) # NOTE: Do not change any of the code in this cell # Using the template add the hotel marks to the heatmap info_box_template = """ <dl> <dt>Name</dt><dd>{Hotel Name}</dd> <dt>City</dt><dd>{City}</dd> <dt>Country</dt><dd>{Country}</dd> </dl> """ # Store the DataFrame Row # NOTE: be sure to update with your DataFrame name hotel_info = [info_box_template.format(**row) for index, row in hotel_df.iterrows()] hover_info = [f"{row['City']}, {row['Country']}" for index,row in hotel_df.iterrows()] locations = hotel_df[["Lat", "Lng"]] markers = gmaps.marker_layer( locations, hover_text=hover_info, info_box_content=hotel_info) # Add marker layer ontop of heat map fig.add_layer(markers) fig.add_layer(heat_layer) # Display figure fig
_____no_output_____
ADSL
starter_code/.ipynb_checkpoints/VacationPy-checkpoint.ipynb
jackaloppy/python-api-challenge
SLU07 - Regression with Linear Regression: Example notebook 1 - Writing linear models In this section you have a few examples on how to implement simple and multiple linear models. Let's start by implementing the following:$$y = 1.25 + 5x$$
def first_linear_model(x): """ Implements y = 1.25 + 5*x Args: x : float - input of model Returns: y : float - output of linear model """ y = 1.25 + 5 * x return y first_linear_model(1)
_____no_output_____
MIT
S01 - Bootcamp and Binary Classification/SLU07 - Regression with Linear Regression/Example notebook.ipynb
claury/sidecar-academy-batch2
You should be thinking that this is too easy. So let's generalize it a bit. We'll write the code for the next equation:$$ y = a + bx $$
def second_linear_model(x, a, b): """ Implements y = a + b * x Args: x : float - input of model a : float - intercept of model b : float - coefficient of model Returns: y : float - output of linear model """ y = a + b * x return y second_linear_model(1, 1.25, 5)
_____no_output_____
MIT
S01 - Bootcamp and Binary Classification/SLU07 - Regression with Linear Regression/Example notebook.ipynb
claury/sidecar-academy-batch2
Still very simple, right? Now what if we want to have a linear model with multiple variables, such as this one:$$ y = a + bx_1 + cx_2 + dx_3 $$You can follow the same logic and just write the following:
def first_multiple_linear_model(x_1, x_2, x_3, a, b, c, d): """ Implements y = a + b * x_1 + c * x_2 + d * x_3 Args: x_1 : float - first input of model x_2 : float - second input of model x_3 : float - third input of model a : float - intercept of model b : float - first coefficient of model c : float - second coefficient of model d : float - third coefficient of model Returns: y : float - output of linear model """ y = a + b * x_1 + c * x_2 + d * x_3 return y first_multiple_linear_model(1.0, 1.0, 1.0, .5, .2, .1, .4)
_____no_output_____
MIT
S01 - Bootcamp and Binary Classification/SLU07 - Regression with Linear Regression/Example notebook.ipynb
claury/sidecar-academy-batch2
However, you should already be seeing the problem. The bigger our model gets, the more variables we need to consider, so this is clearly not efficient. Now let's write the generic form for a linear model:$$ y = w_0 + \sum_{i=1}^{N} w_i x_i$$And we will implement the inputs and outputs of the model as vectors:
def second_multiple_linear_model(x, w): """ Implements y = w_0 + sum(x_i*w_i) (where i=1...N) Args: x : vector of input features with size N-1 w : vector of model weights with size N Returns: y : float - output of linear model """ w_0 = w[0] y = w_0 for i in range(1, len(x)+1): y += x[i-1]*w[i] return y second_multiple_linear_model([1.0, 1.0, 1.0], [.5, .2, .1, .4])
_____no_output_____
MIT
S01 - Bootcamp and Binary Classification/SLU07 - Regression with Linear Regression/Example notebook.ipynb
claury/sidecar-academy-batch2
You could go even one step further and use numpy to vectorize these computations. You can represent both vectors as numpy arrays and just do the same calculation:
import numpy as np

def vectorized_multiple_linear_model(x, w):
    """
    Implements y = w_0 + sum(x_i*w_i) (where i=1...N)

    Args:
        x : numpy array with shape (N-1, ) of inputs
        w : numpy array with shape (N, ) of model weights

    Returns:
        y : float - output of linear model
    """
    y = w[0] + np.sum(x * w[1:])
    return y

vectorized_multiple_linear_model(np.array([1.0, 1.0, 1.0]), np.array([.5, .2, .1, .4]))
_____no_output_____
MIT
S01 - Bootcamp and Binary Classification/SLU07 - Regression with Linear Regression/Example notebook.ipynb
claury/sidecar-academy-batch2
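The vectorized model above still evaluates one sample at a time. As a minimal sketch (with made-up numbers), stacking the samples as rows of a matrix X lets a single matrix-vector product evaluate the model for every sample at once:

import numpy as np

# Minimal sketch: evaluate the linear model for many samples at once.
# X has shape (num_samples, num_features); w stacks the intercept w[0]
# with one weight per feature, so the predictions are w[0] + X @ w[1:].
def batch_linear_model(X, w):
    return w[0] + X @ w[1:]

X = np.array([[1.0, 1.0, 1.0],
              [2.0, 0.5, 1.5]])       # 2 samples, 3 features (made-up values)
w = np.array([.5, .2, .1, .4])        # intercept + 3 coefficients

batch_linear_model(X, w)              # one prediction per sample -> shape (2,)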
Read more about numpy arrays and their manipulation at the end of this example notebook. This will be necessary, as you will be asked to implement these types of models in a way that they can compute several samples with many features at once, as sketched above. 2 - Using sklearn's LinearRegression The following cells show you how to use the LinearRegression solver of the scikit-learn library. We'll start by creating some fake data to use in these examples:
import numpy as np import matplotlib.pyplot as plt np.random.seed(42) X = np.arange(-10, 10) + np.random.rand(20) y = 1.12 + .75 * X + 2. * np.random.rand(20) plt.xlim((-10, 10)) plt.ylim((-20, 20)) plt.plot(X, y, 'b.')
_____no_output_____
MIT
S01 - Bootcamp and Binary Classification/SLU07 - Regression with Linear Regression/Example notebook.ipynb
claury/sidecar-academy-batch2
2.1 Training the model We will now use the base data created above and show you how to fit the scikit-learn LinearRegression model to the data:
from sklearn.linear_model import LinearRegression # Since our numpy array has only 1 dimension, we need reshape # it to become a column vector - which corresponds to 1 feature # and N samples X = X.reshape(-1, 1) lr = LinearRegression() lr.fit(X, y)
_____no_output_____
MIT
S01 - Bootcamp and Binary Classification/SLU07 - Regression with Linear Regression/Example notebook.ipynb
claury/sidecar-academy-batch2
2.2 Coefficients and Intercept You can get both the coefficients and the intercept from this model:
print('Coefficients: {}'.format(lr.coef_)) print('Intercept: {}'.format(lr.intercept_))
Coefficients: [0.76238153] Intercept: 2.030181639054948
MIT
S01 - Bootcamp and Binary Classification/SLU07 - Regression with Linear Regression/Example notebook.ipynb
claury/sidecar-academy-batch2
2.3 Making predictions We can then make predictions with our model and see how they compare with the actual samples:
y_pred = lr.predict(X) plt.xlim((-10, 10)) plt.ylim((-20, 20)) plt.plot(X, y, 'b.') plt.plot(X, y_pred, 'r-')
_____no_output_____
MIT
S01 - Bootcamp and Binary Classification/SLU07 - Regression with Linear Regression/Example notebook.ipynb
claury/sidecar-academy-batch2
2.4 Evaluating the model We can also extract the $R^2$ score of this model:
print('R² score: %f' % lr.score(X, y))
R² score: 0.983519
MIT
S01 - Bootcamp and Binary Classification/SLU07 - Regression with Linear Regression/Example notebook.ipynb
claury/sidecar-academy-batch2
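The score reported above comes from LinearRegression.score, which returns the coefficient of determination. As a supplementary check, here is a small sketch that recomputes the same quantity from its definition, $R^2 = 1 - SS_{res}/SS_{tot}$, reusing the X, y and fitted lr from the cells above:

import numpy as np

# Small sketch: recompute the coefficient of determination by hand and
# compare it with lr.score(X, y). Assumes X, y and the fitted lr from above.
y_pred = lr.predict(X)
ss_res = np.sum((y - y_pred) ** 2)    # residual sum of squares
ss_tot = np.sum((y - y.mean()) ** 2)  # total sum of squares
print('Manual R²: %f' % (1 - ss_res / ss_tot))
print('lr.score : %f' % lr.score(X, y))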
Bonus examples: Numpy utilities With linear models, we normally have data that can be represented by either vectors or matrices. Even though you don't need advanced algebra knowledge to implement and understand the models presented, it is useful to understand the basics, since most of the computational part is typically implemented from these concepts. In this section we present the basic functions that you should know and will use the most to implement the basic models:
import numpy as np import pandas as pd
_____no_output_____
MIT
S01 - Bootcamp and Binary Classification/SLU07 - Regression with Linear Regression/Example notebook.ipynb
claury/sidecar-academy-batch2
a) Pandas to numpy and back Pandas stores our data in dataframes and series, which are very useful for visualization and even for some specific data operations we want to perform. However, for many algorithms that combine numeric data, the standard way to implement them is with numpy. Start by seeing how to convert from pandas to numpy and back:
df = pd.read_csv('data/polynomial.csv') df.head()
_____no_output_____
MIT
S01 - Bootcamp and Binary Classification/SLU07 - Regression with Linear Regression/Example notebook.ipynb
claury/sidecar-academy-batch2
a.1) Pandas to numpy Let's transform our first column into a numpy vector. There are two ways of doing this, either by using the `.values` attribute:
np_array = df['x'].values print(np_array[:10])
[-0.97468167 1.04349486 1.67141609 -0.05145155 1.98901715 1.69483221 2.3605217 3.69166478 1.80589394 1.55395614]
MIT
S01 - Bootcamp and Binary Classification/SLU07 - Regression with Linear Regression/Example notebook.ipynb
claury/sidecar-academy-batch2
Or by calling the method `.to_numpy()` :
np_array = df['x'].to_numpy() print(np_array[:10])
[-0.97468167 1.04349486 1.67141609 -0.05145155 1.98901715 1.69483221 2.3605217 3.69166478 1.80589394 1.55395614]
MIT
S01 - Bootcamp and Binary Classification/SLU07 - Regression with Linear Regression/Example notebook.ipynb
claury/sidecar-academy-batch2
You can also apply this to the full table:
np_array = df.values print(np_array[:5, :]) np_array = df.to_numpy() print(np_array[:5, :])
[[-9.74681670e-01 9.50004358e-01 -9.25951835e-01 -1.13819408e+00] [ 1.04349486e+00 1.08888152e+00 1.13624227e+00 1.11665074e+00] [ 1.67141609e+00 2.79363175e+00 4.66932106e+00 1.59111751e+00] [-5.14515491e-02 2.64726191e-03 -1.36205726e-04 1.00102006e+00] [ 1.98901715e+00 3.95618924e+00 7.86892827e+00 -9.73300421e-01]]
MIT
S01 - Bootcamp and Binary Classification/SLU07 - Regression with Linear Regression/Example notebook.ipynb
claury/sidecar-academy-batch2
a.2) Numpy to pandas Let's start by defining an array and converting it to a pandas series:
np_array = np.array([4., .1, 1., .23, 3.]) pd_series = pd.Series(np_array) print(pd_series)
0 4.00 1 0.10 2 1.00 3 0.23 4 3.00 dtype: float64
MIT
S01 - Bootcamp and Binary Classification/SLU07 - Regression with Linear Regression/Example notebook.ipynb
claury/sidecar-academy-batch2
We can also create several series and concatenate them to create a dataframe:
np_array = np.array([4., .1, 1., .23, 3.]) pd_series_1 = pd.Series(np_array, name='A') pd_series_2 = pd.Series(2 * np_array, name='B') pd_dataframe = pd.concat((pd_series_1, pd_series_2), axis=1) pd_dataframe.head()
_____no_output_____
MIT
S01 - Bootcamp and Binary Classification/SLU07 - Regression with Linear Regression/Example notebook.ipynb
claury/sidecar-academy-batch2
We can also directly convert to a dataframe:
np_array = np.array([[1, 2, 3], [4, 5, 6]]) pd_dataframe = pd.DataFrame(np_array) pd_dataframe.head()
_____no_output_____
MIT
S01 - Bootcamp and Binary Classification/SLU07 - Regression with Linear Regression/Example notebook.ipynb
claury/sidecar-academy-batch2
However, we might want more detailed names and specific indices. Some ways of achieving this follow:
data = np.array([['','Col1','Col2'], ['Row1',1,2], ['Row2',3,4]]) pd_dataframe = pd.DataFrame(data=data[1:,1:], index=data[1:,0], columns=data[0,1:]) pd_dataframe.head() pd_dataframe = pd.DataFrame(np.array([[4,5,6,7], [1,2,3,4]]), index=range(0, 2), columns=['A', 'B', 'C', 'D']) pd_dataframe.head() my_dict = {'A': np.array(['1', '3']), 'B': np.array(['1', '2']), 'C': np.array(['2', '4'])} pd_dataframe = pd.DataFrame(my_dict) pd_dataframe.head()
_____no_output_____
MIT
S01 - Bootcamp and Binary Classification/SLU07 - Regression with Linear Regression/Example notebook.ipynb
claury/sidecar-academy-batch2
b) Vector and Matrix initialization and shaping When working with vectors and matrices, we need to be aware of the dimensions of these objects, and how they affect the operations we can perform on them. Numpy allows you to access these dimensions through the shape of the object:
v1 = np.array([ .1, 1., 2.]) print('1-d Array: {}'.format(v1)) print('Shape: {}'.format(v1.shape)) v2 = np.array([[ .1, 1., 2.]]) print('\n') print('2-d Row Array: {}'.format(v2)) print('Shape: {}'.format(v2.shape)) v3 = np.array([[ .1], [1.], [2.]]) print('\n') print('2-d Column Array:\n {}'.format(v3)) print('Shape: {}'.format(v3.shape)) m1 = np.array([[ .1, 3., 4., 1.], [1., .3, .1, .5], [2.,.7, 3.8, .1]]) print('\n') print('2-d matrix:\n {}'.format(m1)) print('Shape: {}'.format(m1.shape))
1-d Array: [0.1 1. 2. ] Shape: (3,) 2-d Row Array: [[0.1 1. 2. ]] Shape: (1, 3) 2-d Column Array: [[0.1] [1. ] [2. ]] Shape: (3, 1) 2-d matrix: [[0.1 3. 4. 1. ] [1. 0.3 0.1 0.5] [2. 0.7 3.8 0.1]] Shape: (3, 4)
MIT
S01 - Bootcamp and Binary Classification/SLU07 - Regression with Linear Regression/Example notebook.ipynb
claury/sidecar-academy-batch2
Another important functionality provided is the possibility of reshaping these objects. For example, we can turn a 1-d array into a row vector:
v1 = np.array([ .1, 1., 2.]) v1_reshaped = v1.reshape((1, -1)) print('Old 1-d Array reshaped to row: {}'.format(v1_reshaped)) print('Shape: {}'.format(v1_reshaped.shape))
Old 1-d Array reshaped to row: [[0.1 1. 2. ]] Shape: (1, 3)
MIT
S01 - Bootcamp and Binary Classification/SLU07 - Regression with Linear Regression/Example notebook.ipynb
claury/sidecar-academy-batch2
Or we can reshape it into a column vector:
v1 = np.array([ .1, 1., 2.])
v1_reshaped = v1.reshape((-1, 1))
print('Old 1-d Array reshaped to column: \n{}'.format(v1_reshaped))
print('Shape: {}'.format(v1_reshaped.shape))
Old 1-d Array reshaped to column: [[0.1] [1. ] [2. ]] Shape: (3, 1)
MIT
S01 - Bootcamp and Binary Classification/SLU07 - Regression with Linear Regression/Example notebook.ipynb
claury/sidecar-academy-batch2
We can also create vectors of 1s, 0s, or random numbers with specific shapes from the start. See how to create each in the cells that follow:
custom_shape = (3, ) v1_ones = np.ones(custom_shape) print('1-D Vector of ones: \n{}'.format(v1_ones)) print('Shape: {}'.format(v1_ones.shape)) custom_shape = (5, 1) v1_zeros = np.zeros(custom_shape) print('2-D vector of zeros: \n{}'.format(v1_zeros)) print('Shape: {}'.format(v1_zeros.shape)) custom_shape = (5, 3) v1_rand = np.random.rand(custom_shape[0], custom_shape[1]) print('2-D Matrix of random numbers: \n{}'.format(v1_rand)) print('Shape: {}'.format(v1_rand.shape))
2-D Matrix of random numbers: [[0.12203823 0.49517691 0.03438852] [0.9093204 0.25877998 0.66252228] [0.31171108 0.52006802 0.54671028] [0.18485446 0.96958463 0.77513282] [0.93949894 0.89482735 0.59789998]] Shape: (5, 3)
MIT
S01 - Bootcamp and Binary Classification/SLU07 - Regression with Linear Regression/Example notebook.ipynb
claury/sidecar-academy-batch2
c) Vector and Matrix Concatenation In this section, you will learn how to concatenate 2 vectors, a matrix and a vector, or 2 matrices. c.1) Vector - Vector Let's start by defining 2 vectors:
v1 = np.array([ .1, 1., 2.]) v2 = np.array([5.1, .3, .41, 3. ]) print('1st array: {}'.format(v1)) print('Shape: {}'.format(v1.shape)) print('2nd array: {}'.format(v2)) print('Shape: {}'.format(v2.shape))
1st array: [0.1 1. 2. ] Shape: (3,) 2nd array: [5.1 0.3 0.41 3. ] Shape: (4,)
MIT
S01 - Bootcamp and Binary Classification/SLU07 - Regression with Linear Regression/Example notebook.ipynb
claury/sidecar-academy-batch2
Since vectors only have one dimension with a given size (notice the shape with only one element) we can only concatenate in this dimension, leading to a longer vector:
vconcat = np.concatenate((v1, v2)) print('Concatenated vector: {}'.format(vconcat)) print('Shape: {}'.format(vconcat.shape))
Concatenated vector: [0.1 1. 2. 5.1 0.3 0.41 3. ] Shape: (7,)
MIT
S01 - Bootcamp and Binary Classification/SLU07 - Regression with Linear Regression/Example notebook.ipynb
claury/sidecar-academy-batch2
Concatenating vectors is very easy, and since we can only concatenate them in their one dimension, the sizes do not have to match. Now let's move on to a more complex case. c.2) Matrix - row vectorWhen concatenating matrices and vectors we have to take into account their dimensions.
v1 = np.array([ .1, 1., 2., 3.]) m1 = np.array([[5.1, .3, .41, 3. ], [5.1, .3, .41, 3. ]]) print('Array: {}'.format(v1)) print('Shape: {}'.format(v1.shape)) print('Matrix: \n{}'.format(m1)) print('Shape: {}'.format(m1.shape))
Array: [0.1 1. 2. 3. ] Shape: (4,) Matrix: [[5.1 0.3 0.41 3. ] [5.1 0.3 0.41 3. ]] Shape: (2, 4)
MIT
S01 - Bootcamp and Binary Classification/SLU07 - Regression with Linear Regression/Example notebook.ipynb
claury/sidecar-academy-batch2
The first thing you need to know is that whatever numpy objects you are trying to concatenate need to have the same number of dimensions. Run the code below to verify that you cannot directly concatenate the vector and matrix:
try: vconcat = np.concatenate((v1, m1)) except Exception as e: print('Concatenation raised the following error: {}'.format(e))
Concatenation raised the following error: all the input arrays must have same number of dimensions, but the array at index 0 has 1 dimension(s) and the array at index 1 has 2 dimension(s)
MIT
S01 - Bootcamp and Binary Classification/SLU07 - Regression with Linear Regression/Example notebook.ipynb
claury/sidecar-academy-batch2
So how can we do matrix-vector concatenation? It is actually quite simple. We'll use the reshape functionality you've seen before to add a dimension to the vector.
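As a quick aside, reshape is not the only way to add a dimension: np.newaxis and np.expand_dims do the same job. A small self-contained sketch follows; the next cell sticks with reshape.

import numpy as np

# Small sketch: adding a dimension with np.newaxis / np.expand_dims
v1 = np.array([.1, 1., 2., 3.])
print(v1[np.newaxis, :].shape)            # (1, 4) -- a 1-row matrix
print(np.expand_dims(v1, axis=1).shape)   # (4, 1) -- a 1-column matrix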
v1_reshaped = v1.reshape((1, v1.shape[0])) m1 = np.array([[5.1, .3, .41, 3. ], [5.1, .3, .41, 3. ]]) print('Array: {}'.format(v1_reshaped)) print('Shape: {}'.format(v1_reshaped.shape)) print('Matrix: \n{}'.format(m1)) print('Shape: {}'.format(m1.shape))
Array: [[0.1 1. 2. 3. ]] Shape: (1, 4) Matrix: [[5.1 0.3 0.41 3. ] [5.1 0.3 0.41 3. ]] Shape: (2, 4)
MIT
S01 - Bootcamp and Binary Classification/SLU07 - Regression with Linear Regression/Example notebook.ipynb
claury/sidecar-academy-batch2
We've reshaped our vector into a 1-row matrix. Now we can try to perform the same concatenation:
vconcat = np.concatenate((v1_reshaped, m1)) print('Concatenated vector: {}'.format(vconcat)) print('Shape: {}'.format(vconcat.shape))
Concatenated vector: [[0.1 1. 2. 3. ] [5.1 0.3 0.41 3. ] [5.1 0.3 0.41 3. ]] Shape: (3, 4)
MIT
S01 - Bootcamp and Binary Classification/SLU07 - Regression with Linear Regression/Example notebook.ipynb
claury/sidecar-academy-batch2
c.3) Matrix - column vector We can also do this procedure with a column vector:
v1 = np.array([ .1, 1.]) v1_reshaped = v1.reshape((v1.shape[0], 1)) m1 = np.array([[5.1, .3, .41, 3. ], [5.1, .3, .41, 3. ]]) print('Array: \n{}'.format(v1_reshaped)) print('Shape: {}'.format(v1_reshaped.shape)) print('Matrix: \n{}'.format(m1)) print('Shape: {}'.format(m1.shape)) vconcat = np.concatenate((v1_reshaped, m1), axis=1) print('Concatenated vector: {}'.format(vconcat)) print('Shape: {}'.format(vconcat.shape))
Array: [[0.1] [1. ]] Shape: (2, 1) Matrix: [[5.1 0.3 0.41 3. ] [5.1 0.3 0.41 3. ]] Shape: (2, 4) Concatenated vector: [[0.1 5.1 0.3 0.41 3. ] [1. 5.1 0.3 0.41 3. ]] Shape: (2, 5)
MIT
S01 - Bootcamp and Binary Classification/SLU07 - Regression with Linear Regression/Example notebook.ipynb
claury/sidecar-academy-batch2
There's yet another restriction when concatenating vectors and matrices: all dimensions other than the one we are concatenating along have to share the same size. See what would happen if we tried to concatenate a smaller vector with the same matrix:
v2 = np.array([ .1, 1.]) v2_reshaped = v2.reshape((1, v2.shape[0])) # Row vector as matrix try: vconcat = np.concatenate((v2, m1)) except Exception as e: print('Concatenation raised the following error: {}'.format(e))
Concatenation raised the following error: all the input arrays must have same number of dimensions, but the array at index 0 has 1 dimension(s) and the array at index 1 has 2 dimension(s)
MIT
S01 - Bootcamp and Binary Classification/SLU07 - Regression with Linear Regression/Example notebook.ipynb
claury/sidecar-academy-batch2
c.4) Matrix - Matrix This is just an extension of the previous case, since what we did before was transforming the vector into a matrix where the size of one of the dimensions is 1. So all the same restrictions apply: the arrays must have compatible dimensions. Run the following examples to see this:
m1 = np.array([[5.1, .3, .41, 3. ], [5.1, .3, .41, 3. ]]) m2 = np.array([[1., 2., 0., 3. ], [.1, .13, 1., 3. ], [.1, 2., .5, .3 ]]) m3 = np.array([[1., 0. ], [0., 1. ]]) print('Matrix 1: \n{}'.format(m1)) print('Shape: {}'.format(m1.shape)) print('Matrix 2: \n{}'.format(m2)) print('Shape: {}'.format(m2.shape)) print('Matrix 3: \n{}'.format(m3)) print('Shape: {}'.format(m3.shape))
Matrix 1: [[5.1 0.3 0.41 3. ] [5.1 0.3 0.41 3. ]] Shape: (2, 4) Matrix 2: [[1. 2. 0. 3. ] [0.1 0.13 1. 3. ] [0.1 2. 0.5 0.3 ]] Shape: (3, 4) Matrix 3: [[1. 0.] [0. 1.]] Shape: (2, 2)
MIT
S01 - Bootcamp and Binary Classification/SLU07 - Regression with Linear Regression/Example notebook.ipynb
claury/sidecar-academy-batch2
Concatenate m1 and m2 at row level (stack the two matrices):
mconcat = np.concatenate((m1, m2)) print('Concatenated matrix:\n {}'.format(mconcat)) print('Shape: {}'.format(mconcat.shape))
Concatenated matrix: [[5.1 0.3 0.41 3. ] [5.1 0.3 0.41 3. ] [1. 2. 0. 3. ] [0.1 0.13 1. 3. ] [0.1 2. 0.5 0.3 ]] Shape: (5, 4)
MIT
S01 - Bootcamp and Binary Classification/SLU07 - Regression with Linear Regression/Example notebook.ipynb
claury/sidecar-academy-batch2
Concatenate m1 and m2 at column level (joining the two matrices side by side) should produce an error:
try: vconcat = np.concatenate((m1, m2), axis=1) except Exception as e: print('Concatenation raised the following error: {}'.format(e))
Concatenation raised the following error: all the input array dimensions for the concatenation axis must match exactly, but along dimension 0, the array at index 0 has size 2 and the array at index 1 has size 3
MIT
S01 - Bootcamp and Binary Classification/SLU07 - Regression with Linear Regression/Example notebook.ipynb
claury/sidecar-academy-batch2
Concatenate m1 and m3 at column level (joining the two matrices side by side):
mconcat = np.concatenate((m1, m3), axis=1) print('Concatenated matrix:\n {}'.format(mconcat)) print('Shape: {}'.format(mconcat.shape))
Concatenated matrix: [[5.1 0.3 0.41 3. 1. 0. ] [5.1 0.3 0.41 3. 0. 1. ]] Shape: (2, 6)
MIT
S01 - Bootcamp and Binary Classification/SLU07 - Regression with Linear Regression/Example notebook.ipynb
claury/sidecar-academy-batch2
Concatenate m1 and m3 at row level (stack the two matrices) should produce an error:
try: vconcat = np.concatenate((m1, m3)) except Exception as e: print('Concatenation raised the following error: {}'.format(e))
Concatenation raised the following error: all the input array dimensions for the concatenation axis must match exactly, but along dimension 1, the array at index 0 has size 4 and the array at index 1 has size 2
MIT
S01 - Bootcamp and Binary Classification/SLU07 - Regression with Linear Regression/Example notebook.ipynb
claury/sidecar-academy-batch2