Download the data and print the sizes
import torch

train_data = torch.load('../data/fashion-mnist/train_data.pt')
print(train_data.size())

train_label = torch.load('../data/fashion-mnist/train_label.pt')
print(train_label.size())

test_data = torch.load('../data/fashion-mnist/test_data.pt')
print(test_data.size())
torch.Size([10000, 28, 28])
MIT
codes/labs_lecture07/lab01_mlp/.ipynb_checkpoints/mlp_exercise-checkpoint.ipynb
wesleyjtann/Deep-learning-course-CE7454-2018
Make a ONE layer net class. The network outputs are the scores! No softmax needed! You have only one line to write in the forward function.
class one_layer_net(nn.Module):

    def __init__(self, input_size, output_size):
        super(one_layer_net, self).__init__()
        self.linear_layer = nn.Linear(input_size, output_size, bias=False)  # complete here

    def forward(self, x):
        scores = self.linear_layer(x)  # complete here
        return scores
_____no_output_____
MIT
codes/labs_lecture07/lab01_mlp/.ipynb_checkpoints/mlp_exercise-checkpoint.ipynb
wesleyjtann/Deep-learning-course-CE7454-2018
Build the net
net = one_layer_net(784, 10)  # complete here
print(net)
one_layer_net( (linear_layer): Linear(in_features=784, out_features=10, bias=False) )
MIT
codes/labs_lecture07/lab01_mlp/.ipynb_checkpoints/mlp_exercise-checkpoint.ipynb
wesleyjtann/Deep-learning-course-CE7454-2018
Choose the criterion and the optimizer: use the CHEAT SHEET to see the correct syntax. Remember that the optimizer needs to have access to the parameters of the network (net.parameters()). Set the batch size and learning rate to: batch size = 50, learning rate = 0.01.
# make the criterion
criterion = nn.CrossEntropyLoss()  # complete here

# make the SGD optimizer
optimizer = torch.optim.SGD(net.parameters(), lr=0.01)  # complete here

# set up the batch size
bs = 50
_____no_output_____
MIT
codes/labs_lecture07/lab01_mlp/.ipynb_checkpoints/mlp_exercise-checkpoint.ipynb
wesleyjtann/Deep-learning-course-CE7454-2018
Complete the training loop
for iter in range(1, 5000):

    # Set dL/dU, dL/dV, dL/dW to be filled with zeros
    optimizer.zero_grad()

    # create a minibatch
    indices = torch.LongTensor(bs).random_(0, 60000)
    minibatch_data = train_data[indices]
    minibatch_label = train_label[indices]

    # reshape the minibatch
    inputs = minibatch_data.view(bs, 784)

    # tell PyTorch to start tracking all operations that will be done on "inputs"
    inputs.requires_grad_()

    # forward the minibatch through the net
    scores = net(inputs)

    # compute the average of the losses of the data points in the minibatch
    loss = criterion(scores, minibatch_label)

    # backward pass to compute dL/dU, dL/dV and dL/dW
    loss.backward()

    # do one step of stochastic gradient descent: U = U - lr*dL/dU, V = V - lr*dL/dV, ...
    optimizer.step()
_____no_output_____
MIT
codes/labs_lecture07/lab01_mlp/.ipynb_checkpoints/mlp_exercise-checkpoint.ipynb
wesleyjtann/Deep-learning-course-CE7454-2018
Choose an image at random from the test set and see how good or bad the predictions are.
# choose a picture at random
idx = randint(0, 10000 - 1)
im = test_data[idx]

# display the picture
utils.show(im)

# feed it to the net and display the confidence scores
scores = net(im.view(1, 784))
probs = F.softmax(scores, dim=1)
utils.show_prob_fashion_mnist(probs)
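Beyond eyeballing single images, a quick sanity check is the overall error rate on the test set. The snippet below is a minimal sketch (not part of the original lab) and assumes a test_label tensor was saved alongside test_data; the path mirrors train_label.pt and is my assumption.

# hedged sketch: accuracy on the whole test set
test_label = torch.load('../data/fashion-mnist/test_label.pt')  # assumed path
with torch.no_grad():
    scores = net(test_data.view(-1, 784))    # forward the full test set
    predicted = scores.argmax(dim=1)         # predicted class per image
    accuracy = (predicted == test_label).float().mean().item()
print('test accuracy: {:.1f}%'.format(100 * accuracy))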
_____no_output_____
MIT
codes/labs_lecture07/lab01_mlp/.ipynb_checkpoints/mlp_exercise-checkpoint.ipynb
wesleyjtann/Deep-learning-course-CE7454-2018
NVTabular demo on Rossmann data - TensorFlow

Overview: NVTabular is a feature engineering and preprocessing library for tabular data designed to quickly and easily manipulate terabyte-scale datasets used to train deep-learning-based recommender systems. It provides a high-level abstraction to simplify code and accelerates computation on the GPU using the RAPIDS cuDF library.

Learning objectives: In the previous notebooks ([rossmann-store-sales-preproc.ipynb](https://github.com/NVIDIA/NVTabular/blob/main/examples/rossmann/rossmann-store-sales-preproc.ipynb) and [rossmann-store-sales-feature-engineering.ipynb](https://github.com/NVIDIA/NVTabular/blob/main/examples/rossmann/rossmann-store-sales-feature-engineering.ipynb)), we downloaded, preprocessed and created features for the dataset. Now, we are ready to train our deep learning model on the dataset. In this notebook, we use **TensorFlow** with the NVTabular data loader for TensorFlow to accelerate the training pipeline.
import os
import math
import json
import glob

import nvtabular as nvt
_____no_output_____
Apache-2.0
docs/source/examples/rossmann/tensorflow.ipynb
lgardenhire/NVTabular
Loading NVTabular workflow

This time, we only need to define our data directories. We can load the data schema from the NVTabular workflow.
DATA_DIR = os.environ.get("OUTPUT_DATA_DIR", "./data")
INPUT_DATA_DIR = os.environ.get("INPUT_DATA_DIR", "./data")

PREPROCESS_DIR = os.path.join(INPUT_DATA_DIR, 'ross_pre/')
PREPROCESS_DIR_TRAIN = os.path.join(PREPROCESS_DIR, 'train')
PREPROCESS_DIR_VALID = os.path.join(PREPROCESS_DIR, 'valid')
_____no_output_____
Apache-2.0
docs/source/examples/rossmann/tensorflow.ipynb
lgardenhire/NVTabular
What files are available to train on in our directories?
!ls $PREPROCESS_DIR
!ls $PREPROCESS_DIR_TRAIN
!ls $PREPROCESS_DIR_VALID
_metadata part.0.parquet
Apache-2.0
docs/source/examples/rossmann/tensorflow.ipynb
lgardenhire/NVTabular
We load the data schema and statistics from `stats.json`. We created the file in the previous notebook, `rossmann-store-sales-feature-engineering`.
stats = json.load(open(PREPROCESS_DIR + "/stats.json", "r"))

CATEGORICAL_COLUMNS = stats['CATEGORICAL_COLUMNS']
CONTINUOUS_COLUMNS = stats['CONTINUOUS_COLUMNS']
LABEL_COLUMNS = stats['LABEL_COLUMNS']
COLUMNS = CATEGORICAL_COLUMNS + CONTINUOUS_COLUMNS + LABEL_COLUMNS
_____no_output_____
Apache-2.0
docs/source/examples/rossmann/tensorflow.ipynb
lgardenhire/NVTabular
The embedding table shows the cardinality of each categorical variable along with its associated embedding size. Each entry is of the form `(cardinality, embedding_size)`.
EMBEDDING_TABLE_SHAPES = stats['EMBEDDING_TABLE_SHAPES']
EMBEDDING_TABLE_SHAPES
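For context, embedding sizes are usually derived from each variable's cardinality. The sketch below shows one common rule of thumb in the fast.ai style (whose TabularModel this notebook later mimics); it is illustrative only and not necessarily the heuristic NVTabular used when it produced stats.json.

# hedged sketch: a common embedding-size heuristic, shown only for comparison
def rule_of_thumb_embedding_size(cardinality):
    return min(600, round(1.6 * cardinality ** 0.56))

for name, (cardinality, embedding_size) in EMBEDDING_TABLE_SHAPES.items():
    print(name, cardinality, embedding_size,
          '(rule of thumb:', rule_of_thumb_embedding_size(cardinality), ')')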
_____no_output_____
Apache-2.0
docs/source/examples/rossmann/tensorflow.ipynb
lgardenhire/NVTabular
Training a Network

Now that our data is preprocessed and saved out, we can leverage `dataset`s to read through the preprocessed parquet files in an online fashion to train neural networks.

We'll start by setting some universal hyperparameters for our model and optimizer. These settings will be the same across all of the frameworks that we explore in the different notebooks. 12% RMSPE is achievable using the Novograd optimizer, but we know of no Novograd implementation for TensorFlow that supports sparse gradients, so we are not including that solution below. If you're interested in contributing to NVTabular, feel free to take this challenge on and submit a pull request if successful.
EMBEDDING_DROPOUT_RATE = 0.04
DROPOUT_RATES = [0.001, 0.01]
HIDDEN_DIMS = [1000, 500]
BATCH_SIZE = 65536
LEARNING_RATE = 0.001
EPOCHS = 25

# TODO: Calculate on the fly rather than recalling from previous analysis.
MAX_SALES_IN_TRAINING_SET = 38722.0
MAX_LOG_SALES_PREDICTION = 1.2 * math.log(MAX_SALES_IN_TRAINING_SET + 1.0)

TRAIN_PATHS = sorted(glob.glob(os.path.join(PREPROCESS_DIR_TRAIN, '*.parquet')))
VALID_PATHS = sorted(glob.glob(os.path.join(PREPROCESS_DIR_VALID, '*.parquet')))
_____no_output_____
Apache-2.0
docs/source/examples/rossmann/tensorflow.ipynb
lgardenhire/NVTabular
TensorFlow: Preparing Datasets

`KerasSequenceLoader` wraps a lightweight iterator around a `dataset` object to handle chunking, shuffling, and application of any workflows (which can be applied online as a preprocessing step). For column names, you can use either a list of string names or a list of TensorFlow `feature_columns` that will be used to feed the network.
import tensorflow as tf

# we can control how much memory to give tensorflow with this environment variable
# IMPORTANT: make sure you do this before you initialize TF's runtime, otherwise
# it's too late and TF will have claimed all free GPU memory
os.environ['TF_MEMORY_ALLOCATION'] = "8192"  # explicit MB
os.environ['TF_MEMORY_ALLOCATION'] = "0.5"   # fraction of free memory (overrides the explicit MB setting above)

from nvtabular.loader.tensorflow import KerasSequenceLoader, KerasSequenceValidater


# cheap wrapper to keep things some semblance of neat
def make_categorical_embedding_column(name, dictionary_size, embedding_dim):
    return tf.feature_column.embedding_column(
        tf.feature_column.categorical_column_with_identity(name, dictionary_size), embedding_dim
    )


# instantiate our columns
categorical_columns = [
    make_categorical_embedding_column(name, *EMBEDDING_TABLE_SHAPES[name])
    for name in CATEGORICAL_COLUMNS
]
continuous_columns = [
    tf.feature_column.numeric_column(name, (1,)) for name in CONTINUOUS_COLUMNS
]

# feed them to our datasets
train_dataset = KerasSequenceLoader(
    TRAIN_PATHS,                  # you could also use a glob pattern
    feature_columns=categorical_columns + continuous_columns,
    batch_size=BATCH_SIZE,
    label_names=LABEL_COLUMNS,
    shuffle=True,
    buffer_size=0.06              # amount of data, as a fraction of GPU memory, to load at once
)

valid_dataset = KerasSequenceLoader(
    VALID_PATHS,                  # you could also use a glob pattern
    feature_columns=categorical_columns + continuous_columns,
    batch_size=BATCH_SIZE * 4,
    label_names=LABEL_COLUMNS,
    shuffle=False,
    buffer_size=0.06              # amount of data, as a fraction of GPU memory, to load at once
)
_____no_output_____
Apache-2.0
docs/source/examples/rossmann/tensorflow.ipynb
lgardenhire/NVTabular
TensorFlow: Defining a Model

Using Keras, we can define the layers of our model and their parameters explicitly. Here, for the sake of consistency, we'll mimic fast.ai's [TabularModel](https://docs.fast.ai/tabular.learner.html).
# DenseFeatures layer needs a dictionary of {feature_name: input}
categorical_inputs = {}
for column_name in CATEGORICAL_COLUMNS:
    categorical_inputs[column_name] = tf.keras.Input(name=column_name, shape=(1,), dtype=tf.int64)
categorical_embedding_layer = tf.keras.layers.DenseFeatures(categorical_columns)
categorical_x = categorical_embedding_layer(categorical_inputs)
categorical_x = tf.keras.layers.Dropout(EMBEDDING_DROPOUT_RATE)(categorical_x)

# Just concatenating continuous, so can use a list
continuous_inputs = []
for column_name in CONTINUOUS_COLUMNS:
    continuous_inputs.append(tf.keras.Input(name=column_name, shape=(1,), dtype=tf.float32))
continuous_embedding_layer = tf.keras.layers.Concatenate(axis=1)
continuous_x = continuous_embedding_layer(continuous_inputs)
continuous_x = tf.keras.layers.BatchNormalization(epsilon=1e-5, momentum=0.1)(continuous_x)

# concatenate and build MLP
x = tf.keras.layers.Concatenate(axis=1)([categorical_x, continuous_x])
for dim, dropout_rate in zip(HIDDEN_DIMS, DROPOUT_RATES):
    x = tf.keras.layers.Dense(dim, activation='relu')(x)
    x = tf.keras.layers.BatchNormalization(epsilon=1e-5, momentum=0.1)(x)
    x = tf.keras.layers.Dropout(dropout_rate)(x)
x = tf.keras.layers.Dense(1, activation='linear')(x)

# TODO: Initialize model weights to fix saturation issues.
# For now, we'll just scale the output of our model directly before
# hitting the sigmoid.
x = 0.1 * x
x = MAX_LOG_SALES_PREDICTION * tf.keras.activations.sigmoid(x)

# combine all our inputs into a single list
# (note that you can still use .fit, .predict, etc. on a dict
# that maps input tensor names to input values)
inputs = list(categorical_inputs.values()) + continuous_inputs
tf_model = tf.keras.Model(inputs=inputs, outputs=x)
_____no_output_____
Apache-2.0
docs/source/examples/rossmann/tensorflow.ipynb
lgardenhire/NVTabular
TensorFlow: Training
def rmspe_tf(y_true, y_pred):
    # map back into "true" space by undoing transform
    y_true = tf.exp(y_true) - 1
    y_pred = tf.exp(y_pred) - 1

    percent_error = (y_true - y_pred) / y_true
    return tf.sqrt(tf.reduce_mean(percent_error ** 2))

%%time
from time import time

optimizer = tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE)
tf_model.compile(optimizer, 'mse', metrics=[rmspe_tf])

validation_callback = KerasSequenceValidater(valid_dataset)
start = time()

history = tf_model.fit(
    train_dataset,
    callbacks=[validation_callback],
    epochs=EPOCHS,
)

t_final = time() - start
total_rows = train_dataset.num_rows_processed + valid_dataset.num_rows_processed
print(f"run_time: {t_final} - rows: {total_rows} - epochs: {EPOCHS} - dl_thru: {(EPOCHS * total_rows) / t_final}")
Epoch 1/25 13/13 [==============================] - 7s 168ms/step - loss: 6.3708 - rmspe_tf: 0.8916 Epoch 2/25 13/13 [==============================] - 2s 166ms/step - loss: 5.3491 - rmspe_tf: 0.8906 Epoch 3/25 13/13 [==============================] - 2s 168ms/step - loss: 4.7029 - rmspe_tf: 0.8801 Epoch 4/25 13/13 [==============================] - 2s 164ms/step - loss: 3.9542 - rmspe_tf: 0.8585 Epoch 5/25 13/13 [==============================] - 2s 168ms/step - loss: 3.0444 - rmspe_tf: 0.8195 Epoch 6/25 13/13 [==============================] - 2s 165ms/step - loss: 2.0530 - rmspe_tf: 0.7533 Epoch 7/25 13/13 [==============================] - 2s 167ms/step - loss: 1.1581 - rmspe_tf: 0.6474 Epoch 8/25 13/13 [==============================] - 2s 166ms/step - loss: 0.5232 - rmspe_tf: 0.5006 Epoch 9/25 13/13 [==============================] - 2s 164ms/step - loss: 0.1878 - rmspe_tf: 0.3450 Epoch 10/25 13/13 [==============================] - 2s 164ms/step - loss: 0.0650 - rmspe_tf: 0.2355 Epoch 11/25 13/13 [==============================] - 2s 166ms/step - loss: 0.0372 - rmspe_tf: 0.2073 Epoch 12/25 13/13 [==============================] - 2s 166ms/step - loss: 0.0329 - rmspe_tf: 0.2094 Epoch 13/25 13/13 [==============================] - 2s 165ms/step - loss: 0.0317 - rmspe_tf: 0.2090 Epoch 14/25 13/13 [==============================] - 2s 168ms/step - loss: 0.0301 - rmspe_tf: 0.2035 Epoch 15/25 13/13 [==============================] - 2s 170ms/step - loss: 0.0292 - rmspe_tf: 0.1987 Epoch 16/25 13/13 [==============================] - 2s 168ms/step - loss: 0.0283 - rmspe_tf: 0.1952 Epoch 17/25 13/13 [==============================] - 2s 166ms/step - loss: 0.0276 - rmspe_tf: 0.1905 Epoch 18/25 13/13 [==============================] - 2s 165ms/step - loss: 0.0274 - rmspe_tf: 0.1877 Epoch 19/25 13/13 [==============================] - 2s 165ms/step - loss: 0.0270 - rmspe_tf: 0.1956 Epoch 20/25 13/13 [==============================] - 2s 165ms/step - loss: 0.0248 - rmspe_tf: 0.1789 Epoch 21/25 13/13 [==============================] - 2s 166ms/step - loss: 0.0244 - rmspe_tf: 0.1792 Epoch 22/25 13/13 [==============================] - 2s 167ms/step - loss: 0.0239 - rmspe_tf: 0.1785 Epoch 23/25 13/13 [==============================] - 2s 167ms/step - loss: 0.0234 - rmspe_tf: 0.1775 Epoch 24/25 13/13 [==============================] - 2s 166ms/step - loss: 0.0233 - rmspe_tf: 0.1747 Epoch 25/25 13/13 [==============================] - 2s 168ms/step - loss: 0.0228 - rmspe_tf: 0.1725 CPU times: user 2min 52s, sys: 13 s, total: 3min 5s Wall time: 1min 8s
Apache-2.0
docs/source/examples/rossmann/tensorflow.ipynb
lgardenhire/NVTabular
Main points
* Solution should be reasonably simple because the contest is only 24 hours long
* Metric is based on the prediction of clicked pictures one week ahead, so clicks are the most important information
* More recent information is more important
* Only pictures that were shown to a user could be clicked, so picture popularity is important
* Metric is MAPK@100 (see the sketch after the constants below)
* Link: https://contest.yandex.ru/contest/12899/problems (Russian)

Plan
* Build a classic recommender system based on user click history
* Only use recent days of historical data
* Take into consideration projected picture popularity

Magic constants

ALS recommender system:
# Factors for ALS
factors_count = 100

# Last days of click history used
trail_days = 14

# number of best candidates generated by ALS
output_candidates_count = 2000

# Last days of history with more weight
last_days = 1

# Coefficient for additional weight
last_days_weight = 4
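The contest metric, MAPK@100, is mean average precision at 100. Below is a minimal reference implementation of MAP@K as it is commonly defined; it is an illustration only and not the contest's official scorer.

# hedged sketch: MAP@K as commonly defined (not the official contest scorer)
def apk(actual, predicted, k=100):
    predicted = predicted[:k]
    hits, score = 0, 0.0
    for i, p in enumerate(predicted):
        if p in actual and p not in predicted[:i]:
            hits += 1
            score += hits / (i + 1.0)
    return score / min(len(actual), k) if actual else 0.0

def mapk(actual_lists, predicted_lists, k=100):
    return sum(apk(a, p, k) for a, p in zip(actual_lists, predicted_lists)) / len(actual_lists)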
_____no_output_____
MIT
Recommending System for Pictures - 4th place @ Yandex ML Competition.ipynb
dremovd/pictures-recommendation-yandex-ml-2019
Popular pictures prediction model:
import lightgbm
lightgbm.__version__

popularity_model = lightgbm.LGBMRegressor(seed=0)
heuristic_alpha = 0.2

import datetime
import tqdm
import pandas as pd
from scipy.sparse import coo_matrix

import implicit
implicit.__version__

test_users = pd.read_csv('Blitz/test_users.csv')
data = pd.read_csv('Blitz/train_clicks.csv', parse_dates=['day'])
_____no_output_____
MIT
Recommending System for Pictures - 4th place @ Yandex ML Competition.ipynb
dremovd/pictures-recommendation-yandex-ml-2019
Split off the last 7 days to compute clicks analogous to the test set.
train, target_week = (
    data[data.day <= datetime.datetime(2019, 3, 17)].copy(),
    data[data.day > datetime.datetime(2019, 3, 17)],
)
train.day.nunique(), target_week.day.nunique()

last_date = train.day.max()
train.loc[:, 'delta_days'] = 1 + (last_date - train.day).apply(lambda d: d.days)

last_date = data.day.max()
data.loc[:, 'delta_days'] = 1 + (last_date - data.day).apply(lambda d: d.days)


def picture_features(data):
    """Generating clicks count for every picture in last days"""
    days = range(1, 3)

    features = []
    names = []
    for delta_days in days:
        features.append(
            data[(data.delta_days == delta_days)].groupby(['picture_id'])['user_id'].count()
        )
        names.append('%s_%d' % ('click', delta_days))

    features = pd.concat(features, axis=1).fillna(0)
    features.columns = names
    features = features.reindex(data.picture_id.unique())
    return features.fillna(0)


X = picture_features(train)
X.mean(axis=0)


def clicks_count(data, index):
    return data.groupby('picture_id')['user_id'].count().reindex(index).fillna(0)


y = clicks_count(target_week, X.index)
y.shape, y.mean()
_____no_output_____
MIT
Recommending System for Pictures - 4th place @ Yandex ML Competition.ipynb
dremovd/pictures-recommendation-yandex-ml-2019
Train a model predicting popular pictures next week
popularity_model.fit(X, y)

X_test = picture_features(data)
X_test.mean(axis=0)

X_test['p'] = popularity_model.predict(X_test)
X_test.loc[X_test['p'] < 0, 'p'] = 0
X_test['p'].mean()
_____no_output_____
MIT
Recommending System for Pictures - 4th place @ Yandex ML Competition.ipynb
dremovd/pictures-recommendation-yandex-ml-2019
Generate a dict with the predicted clicks for every picture.
# This prediction would be used to correct the recommender score
picture = dict(X_test['p'])
_____no_output_____
MIT
Recommending System for Pictures - 4th place @ Yandex ML Competition.ipynb
dremovd/pictures-recommendation-yandex-ml-2019
Recommender part

Generate predictions using the ALS approach.
import os
os.environ['OPENBLAS_NUM_THREADS'] = "1"


def als_baseline(
    train, test_users,
    factors_n, last_days, trail_days, output_candidates_count, last_days_weight
):
    train = train[train.delta_days <= trail_days].drop_duplicates([
        'user_id', 'picture_id'
    ])
    users = train.user_id
    items = train.picture_id
    weights = 1 + last_days_weight * (train.delta_days <= last_days)
    user_item = coo_matrix((weights, (users, items)))

    model = implicit.als.AlternatingLeastSquares(factors=factors_n, iterations=factors_n)
    model.fit(user_item.T.tocsr())

    user_item_csr = user_item.tocsr()

    rows = []
    for user_id in tqdm.tqdm_notebook(test_users.user_id.values):
        items = [(picture_id, score) for picture_id, score
                 in model.recommend(user_id, user_item_csr, N=output_candidates_count)]
        rows.append(items)

    test_users['predictions_full'] = [
        p for p, user_id in zip(
            rows,
            test_users.user_id.values
        )
    ]
    test_users['predictions'] = [
        [x[0] for x in p] for p, user_id in zip(
            rows,
            test_users.user_id.values
        )
    ]
    return test_users


test_users = als_baseline(
    data, test_users,
    factors_count, last_days, trail_days, output_candidates_count, last_days_weight)
100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 100.0/100 [11:00<00:00, 6.78s/it]
MIT
Recommending System for Pictures - 4th place @ Yandex ML Competition.ipynb
dremovd/pictures-recommendation-yandex-ml-2019
Calculate historical clicks so they can be excluded from the results. Such clicks are excluded from the test set according to the task statement.
clicked = data.groupby('user_id').agg({'picture_id': set})


def substract_clicked(p, c):
    filtered = [picture for picture in p if picture not in c][:100]
    return filtered
_____no_output_____
MIT
Recommending System for Pictures - 4th place @ Yandex ML Competition.ipynb
dremovd/pictures-recommendation-yandex-ml-2019
Heuristic approach to reweight the ALS score according to the picture's predicted popularity.

The recommender returns (picture, score) pairs sorted in decreasing order for every user. For every user we replace the picture score $score_p$ with $score_p \cdot (1 + popularity_{p})^{0.2}$, where $popularity_{p}$ is the popularity predicted for this picture for the next week. This slightly moves popular pictures towards the top of the list for every user.
import math

rows = test_users['predictions_full']


def correct_with_popularity(items, picture, alpha):
    return sorted([
        (score * (1 + picture.get(picture_id, 0)) ** alpha,
         picture_id, score, picture.get(picture_id, 0))
        for picture_id, score in items], reverse=True
    )


corrected_rows = [
    [x[1] for x in correct_with_popularity(items, picture, heuristic_alpha)]
    for items in rows
]
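To see the effect of the exponent, here is a small worked example with made-up numbers (purely hypothetical, not taken from the competition data):

# hypothetical scores/popularities to illustrate the reweighting
demo_items = [(101, 0.50), (202, 0.48)]   # (picture_id, ALS score)
demo_popularity = {101: 0, 202: 30}       # predicted clicks next week
# picture 101: 0.50 * (1 + 0)**0.2  = 0.50
# picture 202: 0.48 * (1 + 30)**0.2 ≈ 0.48 * 1.99 ≈ 0.95 -> moves ahead of 101
print(correct_with_popularity(demo_items, demo_popularity, 0.2))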
_____no_output_____
MIT
Recommending System for Pictures - 4th place @ Yandex ML Competition.ipynb
dremovd/pictures-recommendation-yandex-ml-2019
Submission formatting
test_users['predictions'] = [
    ' '.join(map(str,
        substract_clicked(p, {} if user_id not in clicked.index else clicked.loc[user_id][0])
    ))
    for p, user_id in zip(
        corrected_rows,
        test_users.user_id.values
    )
]

test_users[['user_id', 'predictions']].to_csv('submit.csv', index=False)
_____no_output_____
MIT
Recommending System for Pictures - 4th place @ Yandex ML Competition.ipynb
dremovd/pictures-recommendation-yandex-ml-2019
Load predictor
%matplotlib inline
import os
import matplotlib
import numpy as np
import matplotlib.pyplot as plt

matplotlib.use("Agg")

os.getcwd()
os.chdir('/home/del/research/span_ae')

import span_ae
from allennlp.models.archival import load_archive
from allennlp.service.predictors import Predictor

archive = load_archive("models/baseline/model.tar.gz")
predictor = Predictor.from_archive(archive, 'span_ae')
_____no_output_____
MIT
notebooks/predict.ipynb
maxdel/span_ae
Func
def predict_plot(sentence):
    # predict
    result = predictor.predict_json(sentence)
    attention_matrix = result['attention_matrix']
    predicted_tokens = result['predicted_tokens']
    survived_span_ids = result['top_spans']

    input_sentence = ['BOS'] + sentence['src'].split() + ['EOS']
    predicted_tokens = predicted_tokens + ['EOS']

    survived_spans = []
    for span_id in survived_span_ids:
        ind_from = span_id[0]
        ind_to = span_id[1] + 1
        survived_spans.append(" ".join(input_sentence[ind_from:ind_to]))

    attention_matrix_local = attention_matrix[0:len(predicted_tokens)]
    att_matrix_np = np.array([np.array(xi) for xi in attention_matrix_local])

    # print
    print('ORIGINAL :', " ".join(input_sentence))
    # print('TOP SPANs:', " \n ".join(survived_spans))
    print('PREDICTED:', " ".join(predicted_tokens))
    # print('span scores:', result['top_spans_scores'])
    print('\nAttention matrix:')

    # plot
    plt.figure(figsize=(9, 9), dpi=80, facecolor='w', edgecolor='k')
    plt.imshow(att_matrix_np.transpose(), interpolation="nearest", cmap="Greys")
    plt.xlabel("target")
    plt.ylabel("source")
    plt.gca().set_xticks([i for i in range(0, len(predicted_tokens))])
    plt.gca().set_yticks([i for i in range(0, len(survived_spans))])
    plt.gca().set_xticklabels(predicted_tokens, rotation='vertical')
    plt.gca().set_yticklabels(survived_spans)
    plt.tight_layout()
_____no_output_____
MIT
notebooks/predict.ipynb
maxdel/span_ae
Inference
# change it
sentence = "to school"
# do not change it
predict_plot({'src': sentence})

# change it
sentence = "school"
# do not change it
predict_plot({'src': sentence})

# change it
sentence = "it is spring already , but there are a lot of snow out there"
# do not change it
predict_plot({'src': sentence})

# change it
sentence = "let us discard our entire human knowledge"
# do not change it
predict_plot({'src': sentence})
ORIGINAL : BOS let us discard our entire human knowledge EOS
PREDICTED: let us discard our entire development knowledge EOS

Attention matrix:
MIT
notebooks/predict.ipynb
maxdel/span_ae
This challenge implements an instantiation of OTR based on the AES block cipher, following a modified version 1.0 of the scheme. OTR, which stands for Offset Two-Round, is a blockcipher mode of operation that realizes authenticated encryption with associated data (see [[1]](1)). The AES-OTR algorithm is a candidate in the CAESAR competition; it successfully entered the third round of screening by virtue of its unique advantages, and you can see the full algorithm and structure of AES-OTR in the design document (see [[2]](2)). However, the first version is vulnerable to forgery attacks under known-plaintext conditions when the associated data and public message number are reused, and many attacks can be applied here to forge an expected ciphertext with a valid tag (see [[3]](3)). For example, in this challenge we can build the following three plaintexts:
M_0 = [b'Uid=16112\xffUserNa', b'me=AdministratoR', b'\xffT=111111111111\xff', b'Cmd=Give_Me_FlaG', b'\xff???????????????']
M_1 = [b'Uid=16111\xffUserNa', b'me=Administrator', b'r\xffT=11111111111\xff', b'Cmd=Give_Me_FlaG', b'\xff???????????????']
M_2 = [b'Uid=16112\xffUserNa', b'me=AdministratoR', b'\xffT=111111111111\xff', b'Cmd=Give_Me_Flag', b'g\xff??????????????']
_____no_output_____
Apache-2.0
Crypto/imposter/imposter_writeup.ipynb
NeSE-Team/XNUCA2020Qualifier
Here `'111111111111'` can represent any value, since the server won't check whether the message and its corresponding hash value match; we just need to make sure they have the right length. If you look closely, you will find that none of the three plaintexts contains illegal fields, so we can use the encryption oracle provided by the server to easily obtain their corresponding ciphertexts. Next, notice that these plaintexts satisfy:
from Crypto.Util.strxor import strxor

M_0 = [b'Uid=16112\xffUserNa', b'me=AdministratoR', b'\xffT=111111111111\xff', b'Cmd=Give_Me_FlaG', b'\xff???????????????']
M_1 = [b'Uid=16111\xffUserNa', b'me=Administrator', b'r\xffT=11111111111\xff', b'Cmd=Give_Me_FlaG', b'\xff???????????????']
M_2 = [b'Uid=16112\xffUserNa', b'me=AdministratoR', b'\xffT=111111111111\xff', b'Cmd=Give_Me_Flag', b'g\xff??????????????']

strxor(M_0[1], M_0[3]) == strxor(M_1[1], M_2[3])
_____no_output_____
Apache-2.0
Crypto/imposter/imposter_writeup.ipynb
NeSE-Team/XNUCA2020Qualifier
So according to the forgery attacks described in [[3]](3), suppose their corresponding ciphertexts are `C_0`, `C_1` and `C_2`, then we can forge a valid ciphertext and tag using:
from Toy_AE import Toy_AE


def unpack(r):
    data = r.split(b"\xff")
    uid, uname, token, cmd, appendix = int(data[0][4:]), data[1][9:], data[2][2:], data[3][4:], data[4]
    return (uid, uname, token, cmd, appendix)


ae = Toy_AE()

M_0 = [b'Uid=16112\xffUserNa', b'me=AdministratoR', b'\xffT=111111111111\xff', b'Cmd=Give_Me_FlaG', b'\xff???????????????']
M_1 = [b'Uid=16111\xffUserNa', b'me=Administrator', b'r\xffT=11111111111\xff', b'Cmd=Give_Me_FlaG', b'\xff???????????????']
M_2 = [b'Uid=16112\xffUserNa', b'me=AdministratoR', b'\xffT=111111111111\xff', b'Cmd=Give_Me_Flag', b'g\xff??????????????']

C_0, T_0 = ae.encrypt(b''.join(M_0))
C_1, T_1 = ae.encrypt(b''.join(M_1))
C_2, T_2 = ae.encrypt(b''.join(M_2))

C_forge = C_1[:32] + C_2[32:64] + C_0[64:]
T_forge = T_0

_, uname, _, cmd, _ = unpack(ae.decrypt(C_forge, T_forge))
uname == b"Administrator" and cmd == b"Give_Me_Flag"
_____no_output_____
Apache-2.0
Crypto/imposter/imposter_writeup.ipynb
NeSE-Team/XNUCA2020Qualifier
Here is my final exploit script:
import string
from pwn import *
from hashlib import sha256
from Crypto.Util.strxor import strxor
from Crypto.Util.number import long_to_bytes, bytes_to_long


def bypass_POW(io):
    chall = io.recvline()
    post = chall[14:30]
    tar = chall[38:-2]
    io.recvuntil(':')
    found = iters.bruteforce(
        lambda x: sha256((x + post.decode()).encode()).hexdigest() == tar.decode(),
        string.ascii_letters + string.digits, 4)
    io.sendline(found.encode())


C = []
T = []

io = remote("123.57.4.93", 45216)
bypass_POW(io)

io.sendlineafter(b"Your option:", '1')
io.sendlineafter(b"Set up your user id:", '16108')
io.sendlineafter(b"Your username:", 'AdministratoR')
io.sendlineafter(b"Your command:", 'Give_Me_FlaG')
io.sendlineafter(b"Any Appendix?", "???????????????")
_ = io.recvuntil(b"Your ticket:")
C.append(long_to_bytes(int(io.recvline().strip(), 16)))
_ = io.recvuntil(b"With my Auth:")
T.append(long_to_bytes(int(io.recvline().strip(), 16)))

io.sendlineafter(b"Your option:", '1')
io.sendlineafter(b"Set up your user id:", '16107')
io.sendlineafter(b"Your username:", 'Administratorr')
io.sendlineafter(b"Your command:", 'Give_Me_FlaG')
io.sendlineafter(b"Any Appendix?", "???????????????")
_ = io.recvuntil(b"Your ticket:")
C.append(long_to_bytes(int(io.recvline().strip(), 16)))
_ = io.recvuntil(b"With my Auth:")
T.append(long_to_bytes(int(io.recvline().strip(), 16)))

io.sendlineafter(b"Your option:", '1')
io.sendlineafter(b"Set up your user id:", '16108')
io.sendlineafter(b"Your username:", 'AdministratoR')
io.sendlineafter(b"Your command:", 'Give_Me_Flagg')
io.sendlineafter(b"Any Appendix?", "??????????????")
_ = io.recvuntil(b"Your ticket:")
C.append(long_to_bytes(int(io.recvline().strip(), 16)))
_ = io.recvuntil(b"With my Auth:")
T.append(long_to_bytes(int(io.recvline().strip(), 16)))

ct = (C[1][:32] + C[2][32:64] + C[0][64:]).hex()
te = T[0].hex()

io.sendlineafter(b"Your option:", '2')
io.sendlineafter(b"Ticket:", ct)
io.sendlineafter(b"Auth:", te)

flag = io.recvline().strip().decode()
print(flag)
_____no_output_____
Apache-2.0
Crypto/imposter/imposter_writeup.ipynb
NeSE-Team/XNUCA2020Qualifier
Testing different SA methods 4/5: TextBlob
import csv
import re
import random
from textblob import TextBlob

# Ugly hackery, but necessary: stackoverflow.com/questions/4383571/importing-files-from-different-folder
import sys
sys.path.append('../../../')
from src.streaming import spark_functions

preprocess = spark_functions.preprocessor()
tokenize = spark_functions.tokenizer()

with open('./../../../data/interim/sanders_hydrated.csv') as csv_file:
    iterator = csv.reader(csv_file, delimiter=',')
    # Load the parts we need and preprocess as well as tokenize the text
    tweets = [(text, sentiment) for (topic, sentiment, id, text) in iterator
              if sentiment == 'positive' or sentiment == 'negative']

# Shuffle for good measure
random.shuffle(tweets)

import matplotlib.pyplot as plt

results = {
    "positive": {"color": "green", "x": [], "y": []},
    "neutral": {"color": "orange", "x": [], "y": []},
    "negative": {"color": "red", "x": [], "y": []}
}

# Create plot
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)

for (tweet, sentiment) in tweets[:200]:
    analysis = TextBlob(preprocess(tweet)).sentiment
    results[sentiment]["x"].append(analysis.polarity)
    results[sentiment]["y"].append(analysis.subjectivity)

for key in results:
    ax.scatter(results[key]["x"], results[key]["y"], alpha=0.8,
               c=results[key]["color"], edgecolors='none', label=key)

plt.xlabel('polarity')
plt.ylabel('subjectivity')
plt.legend(loc=2)
plt.savefig('textblob.pdf', format='pdf')
plt.show()

# This is fucking hopeless
labeled_correctly = 0
for (tweet, sentiment) in tweets:
    analysis = TextBlob(preprocess(tweet)).sentiment
    if (analysis.polarity < 0 and sentiment == 'negative') \
            or (analysis.polarity >= 0 and sentiment == 'positive'):
        labeled_correctly += 1

print("Labeled correctly: %d/%d = %.2d percent"
      % (labeled_correctly, len(tweets), labeled_correctly / len(tweets) * 100))
Labeled correctly: 580/946 = 61 percent
MIT
notebooks/sentiment_analysis/textblob.ipynb
ClaasM/streamed-sentiment-topic-intent
Easy string manipulation
x = 'a string'
y = "a string"

if x == y:
    print("they are the same")

fox = "tHe qUICk bROWn fOx."
_____no_output_____
MIT
Some_strings_and_regex_operation_in_Python.ipynb
LanguegeEngineering/demo-igor-skorzybot
To convert the entire string into upper-case or lower-case, you can use the ``upper()`` or ``lower()`` methods respectively:
fox.upper()
fox.lower()
_____no_output_____
MIT
Some_strings_and_regex_operation_in_Python.ipynb
LanguegeEngineering/demo-igor-skorzybot
A common formatting need is to capitalize just the first letter of each word, or perhaps the first letter of each sentence. This can be done with the ``title()`` and ``capitalize()`` methods:
fox.title()
fox.capitalize()
_____no_output_____
MIT
Some_strings_and_regex_operation_in_Python.ipynb
LanguegeEngineering/demo-igor-skorzybot
The cases can be swapped using the ``swapcase()`` method:
fox.swapcase()

line = '         this is the content         '
line.strip()
_____no_output_____
MIT
Some_strings_and_regex_operation_in_Python.ipynb
LanguegeEngineering/demo-igor-skorzybot
To remove just space to the right or left, use ``rstrip()`` or ``lstrip()`` respectively:
line.rstrip()
line.lstrip()
_____no_output_____
MIT
Some_strings_and_regex_operation_in_Python.ipynb
LanguegeEngineering/demo-igor-skorzybot
To remove characters other than spaces, you can pass the desired character to the ``strip()`` method:
num = "000000000000435" num.strip('0') line = 'the quick brown fox jumped over a lazy dog' line.find('fox') line.index('fox') line[16:21]
_____no_output_____
MIT
Some_strings_and_regex_operation_in_Python.ipynb
LanguegeEngineering/demo-igor-skorzybot
The only difference between ``find()`` and ``index()`` is their behavior when the search string is not found; ``find()`` returns ``-1``, while ``index()`` raises a ``ValueError``:
line.find('bear')
line.index('bear')

line.partition('fox')
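As a small illustration (not part of the original notebook), the difference matters when you want a fallback instead of an exception:

# find() lets you branch on -1; index() forces you to catch ValueError
if line.find('bear') == -1:
    print("no bear here")

try:
    line.index('bear')
except ValueError:
    print("index() raised ValueError instead of returning -1")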
_____no_output_____
MIT
Some_strings_and_regex_operation_in_Python.ipynb
LanguegeEngineering/demo-igor-skorzybot
The ``rpartition()`` method is similar, but searches from the right of the string. The ``split()`` method is perhaps more useful; it finds *all* instances of the split-point and returns the substrings in between. The default is to split on any whitespace, returning a list of the individual words in a string:
line_list = line.split()
print(line_list)
print(line_list[1])
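For completeness, a quick illustration of ``partition()`` versus ``rpartition()`` on a string with repeated split-points (the example values are my own, not from the notebook):

path = 'a/b/c'
print(path.partition('/'))    # ('a', '/', 'b/c')  - splits at the first '/'
print(path.rpartition('/'))   # ('a/b', '/', 'c')  - splits at the last '/'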
quick
MIT
Some_strings_and_regex_operation_in_Python.ipynb
LanguegeEngineering/demo-igor-skorzybot
A related method is ``splitlines()``, which splits on newline characters. Let's do this with a Haiku, popularly attributed to the 17th-century poet Matsuo Bashō:
haiku = """matsushima-ya aah matsushima-ya matsushima-ya""" haiku.splitlines()
_____no_output_____
MIT
Some_strings_and_regex_operation_in_Python.ipynb
LanguegeEngineering/demo-igor-skorzybot
Note that if you would like to undo a ``split()``, you can use the ``join()`` method, which returns a string built from a splitpoint and an iterable:
'--'.join(['1', '2', '3'])
_____no_output_____
MIT
Some_strings_and_regex_operation_in_Python.ipynb
LanguegeEngineering/demo-igor-skorzybot
A common pattern is to use the special character ``"\n"`` (newline) to join together lines that have been previously split, and recover the input:
print("\n".join(['matsushima-ya', 'aah matsushima-ya', 'matsushima-ya'])) pi = 3.14159 str(pi) print ("The value of pi is " + pi)
_____no_output_____
MIT
Some_strings_and_regex_operation_in_Python.ipynb
LanguegeEngineering/demo-igor-skorzybot
Pi is a float, so it must be converted to a string first.
print( "The value of pi is " + str(pi))
The value of pi is 3.14159
MIT
Some_strings_and_regex_operation_in_Python.ipynb
LanguegeEngineering/demo-igor-skorzybot
A more flexible way to do this is to use *format strings*, which are strings with special markers (noted by curly braces) into which string-formatted values will be inserted. Here is a basic example:
"The value of pi is {}".format(pi)
_____no_output_____
MIT
Some_strings_and_regex_operation_in_Python.ipynb
LanguegeEngineering/demo-igor-skorzybot
Easy regex manipulation!
import re

line = 'the quick brown fox jumped over a lazy dog'
_____no_output_____
MIT
Some_strings_and_regex_operation_in_Python.ipynb
LanguegeEngineering/demo-igor-skorzybot
With this, we can see that the ``regex.search()`` method operates a lot like ``str.index()`` or ``str.find()``:
line.index('fox')

regex = re.compile('fox')
match = regex.search(line)
match.start()
_____no_output_____
MIT
Some_strings_and_regex_operation_in_Python.ipynb
LanguegeEngineering/demo-igor-skorzybot
Similarly, the ``regex.sub()`` method operates much like ``str.replace()``:
line.replace('fox', 'BEAR')
regex.sub('BEAR', line)
_____no_output_____
MIT
Some_strings_and_regex_operation_in_Python.ipynb
LanguegeEngineering/demo-igor-skorzybot
The following is a table of the repetition markers available for use in regular expressions:

| Character | Description | Example |
|-----------|-------------|---------|
| ``?`` | Match zero or one repetitions of preceding | ``"ab?"`` matches ``"a"`` or ``"ab"`` |
| ``*`` | Match zero or more repetitions of preceding | ``"ab*"`` matches ``"a"``, ``"ab"``, ``"abb"``, ``"abbb"``... |
| ``+`` | Match one or more repetitions of preceding | ``"ab+"`` matches ``"ab"``, ``"abb"``, ``"abbb"``... but not ``"a"`` |
| ``.`` | Any character | ``.*`` matches everything |
| ``{n}`` | Match ``n`` repetitions of preceding | ``"ab{2}"`` matches ``"abb"`` |
| ``{m,n}`` | Match between ``m`` and ``n`` repetitions of preceding | ``"ab{2,3}"`` matches ``"abb"`` or ``"abbb"`` |
bool(re.search(r'ab', "Boabab")) bool(re.search(r'.*ma.*', "Ala ma kota")) bool(re.search(r'.*(psa|kota).*', "Ala ma kota")) bool(re.search(r'.*(psa|kota).*', "Ala ma psa")) bool(re.search(r'.*(psa|kota).*', "Ala ma chomika")) zdanie = "Ala ma kota." wzor = r'.*' #pasuje do kaΕΌdego zdania zamiennik = r"Ala ma psa." re.sub(wzor, zamiennik, zdanie) wzor = r'(.*)kota.' zamiennik = r"\1 psa." re.sub(wzor, zamiennik, zdanie) wzor = r'(.*)ma(.*)' zamiennik = r"\1 posiada \2" re.sub(wzor, zamiennik, zdanie)
_____no_output_____
MIT
Some_strings_and_regex_operation_in_Python.ipynb
LanguegeEngineering/demo-igor-skorzybot
Data Exploration

Now that we have extracted our data, let's clean it up and take a look at what we have to work with.
df = pd.DataFrame.from_records(rows) df = df.set_index('Id', drop=False) df['Title'] = df['Title'].fillna('').astype('str') df['Tags'] = df['Tags'].fillna('').astype('str') df['Body'] = df['Body'].fillna('').astype('str') df['Id'] = df['Id'].astype('int') df['PostTypeId'] = df['PostTypeId'].astype('int') df['ViewCount'] = df['ViewCount'].astype('float') df.head() list(df[df['ViewCount'] > 250000]['Title']) from keras.preprocessing.text import Tokenizer from keras.preprocessing.sequence import pad_sequences tokenizer = Tokenizer(num_words=VOCAB_SIZE) tokenizer.fit_on_texts(df['Body'] + df['Title']) # Compute TF/IDF Values total_count = sum(tokenizer.word_counts.values()) idf = { k: np.log(total_count/v) for (k,v) in tokenizer.word_counts.items() } # Download pre-trained word2vec embeddings import gensim glove_100d = utils.get_file( fname='glove.6B.100d.txt', origin='https://storage.googleapis.com/deep-learning-cookbook/glove.6B.100d.txt', ) w2v_100d = glove_100d + '.w2v' from gensim.scripts.glove2word2vec import glove2word2vec glove2word2vec(glove_100d, w2v_100d) w2v_model = gensim.models.KeyedVectors.load_word2vec_format(w2v_100d) w2v_weights = np.zeros((VOCAB_SIZE, w2v_model.syn0.shape[1])) idf_weights = np.zeros((VOCAB_SIZE, 1)) for k, v in tokenizer.word_index.items(): if v >= VOCAB_SIZE: continue if k in w2v_model: w2v_weights[v] = w2v_model[k] idf_weights[v] = idf[k] del w2v_model df['title_tokens'] = tokenizer.texts_to_sequences(df['Title']) df['body_tokens'] = tokenizer.texts_to_sequences(df['Body']) import random # We can create a data generator that will randomly title and body tokens for questions. We'll use random text # from other questions as a negative example when necessary. def data_generator(batch_size, negative_samples=1): questions = df[df['PostTypeId'] == 1] all_q_ids = list(questions.index) batch_x_a = [] batch_x_b = [] batch_y = [] def _add(x_a, x_b, y): batch_x_a.append(x_a[:MAX_DOC_LEN]) batch_x_b.append(x_b[:MAX_DOC_LEN]) batch_y.append(y) while True: questions = questions.sample(frac=1.0) for i, q in questions.iterrows(): _add(q['title_tokens'], q['body_tokens'], 1) negative_q = random.sample(all_q_ids, negative_samples) for nq_id in negative_q: _add(q['title_tokens'], df.at[nq_id, 'body_tokens'], 0) if len(batch_y) >= batch_size: yield ({ 'title': pad_sequences(batch_x_a, maxlen=None), 'body': pad_sequences(batch_x_b, maxlen=None), }, np.asarray(batch_y)) batch_x_a = [] batch_x_b = [] batch_y = [] # dg = data_generator(1, 2) # next(dg) # next(dg)
_____no_output_____
Apache-2.0
.ipynb_checkpoints/05.1 Generating Text in the Style of an Example Text-checkpoint.ipynb
WillKoehrsen/deep_learning_cookbook
Embedding Lookups

Let's define a helper class for looking up our embedding results. We'll use it to verify our models.
questions = df[df['PostTypeId'] == 1]['Title'].reset_index(drop=True) question_tokens = pad_sequences(tokenizer.texts_to_sequences(questions)) class EmbeddingWrapper(object): def __init__(self, model): self._r = questions self._i = {i:s for (i, s) in enumerate(questions)} self._w = model.predict({'title': question_tokens}, verbose=1, batch_size=1024) self._model = model self._norm = np.sqrt(np.sum(self._w * self._w + 1e-5, axis=1)) def nearest(self, sentence, n=10): x = tokenizer.texts_to_sequences([sentence]) if len(x[0]) < MIN_DOC_LEN: x[0] += [0] * (MIN_DOC_LEN - len(x)) e = self._model.predict(np.asarray(x))[0] norm_e = np.sqrt(np.dot(e, e)) dist = np.dot(self._w, e) / (norm_e * self._norm) top_idx = np.argsort(dist)[-n:] return pd.DataFrame.from_records([ {'question': self._r[i], 'dist': float(dist[i])} for i in top_idx ]) # Our first model will just sum up the embeddings of each token. # The similarity between documents will be the dot product of the final embedding. import tensorflow as tf def sum_model(embedding_size, vocab_size, embedding_weights=None, idf_weights=None): title = layers.Input(shape=(None,), dtype='int32', name='title') body = layers.Input(shape=(None,), dtype='int32', name='body') def make_embedding(name): if embedding_weights is not None: embedding = layers.Embedding(mask_zero=True, input_dim=vocab_size, output_dim=w2v_weights.shape[1], weights=[w2v_weights], trainable=False, name='%s/embedding' % name) else: embedding = layers.Embedding(mask_zero=True, input_dim=vocab_size, output_dim=embedding_size, name='%s/embedding' % name) if idf_weights is not None: idf = layers.Embedding(mask_zero=True, input_dim=vocab_size, output_dim=1, weights=[idf_weights], trainable=False, name='%s/idf' % name) else: idf = layers.Embedding(mask_zero=True, input_dim=vocab_size, output_dim=1, name='%s/idf' % name) return embedding, idf embedding_a, idf_a = make_embedding('a') embedding_b, idf_b = embedding_a, idf_a # embedding_b, idf_b = make_embedding('b') mask = layers.Masking(mask_value=0) def _combine_and_sum(args): [embedding, idf] = args return K.sum(embedding * K.abs(idf), axis=1) sum_layer = layers.Lambda(_combine_and_sum, name='combine_and_sum') sum_a = sum_layer([mask(embedding_a(title)), idf_a(title)]) sum_b = sum_layer([mask(embedding_b(body)), idf_b(body)]) sim = layers.dot([sum_a, sum_b], axes=1, normalize=True) sim_model = models.Model( inputs=[title, body], outputs=[sim], ) sim_model.compile(loss='binary_crossentropy', optimizer='nadam', metrics=['accuracy']) sim_model.summary() embedding_model = models.Model( inputs=[title], outputs=[sum_a] ) return sim_model, embedding_model # Try using our model with pretrained weights from word2vec sum_model_precomputed, sum_embedding_precomputed = sum_model( embedding_size=EMBEDDING_SIZE, vocab_size=VOCAB_SIZE, embedding_weights=w2v_weights, idf_weights=idf_weights ) x, y = next(data_generator(batch_size=4096)) sum_model_precomputed.evaluate(x, y) SAMPLE_QUESTIONS = [ 'Roundtrip ticket versus one way', 'Shinkansen from Kyoto to Hiroshima', 'Bus tour of Germany', ] def evaluate_sample(lookup): pd.set_option('display.max_colwidth', 100) results = [] for q in SAMPLE_QUESTIONS: print(q) q_res = lookup.nearest(q, n=4) q_res['result'] = q_res['question'] q_res['question'] = q results.append(q_res) return pd.concat(results) lookup = EmbeddingWrapper(model=sum_embedding_precomputed) evaluate_sample(lookup)
_____no_output_____
Apache-2.0
.ipynb_checkpoints/05.1 Generating Text in the Style of an Example Text-checkpoint.ipynb
WillKoehrsen/deep_learning_cookbook
Training our own network

The results are okay but not great... instead of using the word2vec embeddings, what happens if we train our network end-to-end?
sum_model_trained, sum_embedding_trained = sum_model( embedding_size=EMBEDDING_SIZE, vocab_size=VOCAB_SIZE, embedding_weights=None, idf_weights=None ) sum_model_trained.fit_generator( data_generator(batch_size=128), epochs=10, steps_per_epoch=1000 ) lookup = EmbeddingWrapper(model=sum_embedding_trained) evaluate_sample(lookup)
_____no_output_____
Apache-2.0
.ipynb_checkpoints/05.1 Generating Text in the Style of an Example Text-checkpoint.ipynb
WillKoehrsen/deep_learning_cookbook
CNN Model

Using a sum-of-embeddings model works well. What happens if we try to make a simple CNN model?
def cnn_model(embedding_size, vocab_size): title = layers.Input(shape=(None,), dtype='int32', name='title') body = layers.Input(shape=(None,), dtype='int32', name='body') embedding = layers.Embedding( mask_zero=False, input_dim=vocab_size, output_dim=embedding_size, ) def _combine_sum(v): return K.sum(v, axis=1) cnn_1 = layers.Convolution1D(256, 3) cnn_2 = layers.Convolution1D(256, 3) cnn_3 = layers.Convolution1D(256, 3) global_pool = layers.GlobalMaxPooling1D() local_pool = layers.MaxPooling1D(strides=2, pool_size=3) def forward(input): embed = embedding(input) return global_pool( cnn_2(local_pool(cnn_1(embed)))) sum_a = forward(title) sum_b = forward(body) sim = layers.dot([sum_a, sum_b], axes=1, normalize=False) sim_model = models.Model( inputs=[title, body], outputs=[sim], ) sim_model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy']) embedding_model = models.Model( inputs=[title], outputs=[sum_a] ) return sim_model, embedding_model cnn, cnn_embedding = cnn_model(embedding_size=25, vocab_size=VOCAB_SIZE) cnn.summary() cnn.fit_generator( data_generator(batch_size=128), epochs=10, steps_per_epoch=1000, ) lookup = EmbeddingWrapper(model=cnn_embedding) evaluate_sample(lookup)
_____no_output_____
Apache-2.0
.ipynb_checkpoints/05.1 Generating Text in the Style of an Example Text-checkpoint.ipynb
WillKoehrsen/deep_learning_cookbook
LSTM Model

We can also make an LSTM model. Warning: this will be very slow to train and evaluate unless you have a relatively fast GPU to run it on!
def lstm_model(embedding_size, vocab_size): title = layers.Input(shape=(None,), dtype='int32', name='title') body = layers.Input(shape=(None,), dtype='int32', name='body') embedding = layers.Embedding( mask_zero=True, input_dim=vocab_size, output_dim=embedding_size, # weights=[w2v_weights], # trainable=False ) lstm_1 = layers.LSTM(units=512, return_sequences=True) lstm_2 = layers.LSTM(units=512, return_sequences=False) sum_a = lstm_2(lstm_1(embedding(title))) sum_b = lstm_2(lstm_1(embedding(body))) sim = layers.dot([sum_a, sum_b], axes=1, normalize=True) # sim = layers.Activation(activation='sigmoid')(sim) sim_model = models.Model( inputs=[title, body], outputs=[sim], ) sim_model.compile(loss='binary_crossentropy', optimizer='rmsprop') embedding_model = models.Model( inputs=[title], outputs=[sum_a] ) return sim_model, embedding_model lstm, lstm_embedding = lstm_model(embedding_size=EMBEDDING_SIZE, vocab_size=VOCAB_SIZE) lstm.summary() lstm.fit_generator( data_generator(batch_size=128), epochs=10, steps_per_epoch=100, ) lookup = EmbeddingWrapper(model=lstm_embedding) evaluate_sample(lookup)
_____no_output_____
Apache-2.0
.ipynb_checkpoints/05.1 Generating Text in the Style of an Example Text-checkpoint.ipynb
WillKoehrsen/deep_learning_cookbook
def gen_downsample_noise(filters, size, apply_batchnorm=True):
    initializer = tf.random_normal_initializer(mean, std_dev)

    result = tf.keras.Sequential()
    result.add(
        tf.keras.layers.Conv2DTranspose(filters, size, strides=2,
                                        padding='same',
                                        kernel_initializer=initializer,
                                        use_bias=False))
    if apply_batchnorm:
        result.add(tf.keras.layers.BatchNormalization())
    result.add(tf.keras.layers.ELU())

    return result
def gen_upsample(filters, size,apply_batchnorm = False): initializer = tf.random_normal_initializer(mean, std_dev) result = tf.keras.Sequential() result.add( tf.keras.layers.Conv2DTranspose(filters, size, strides=2, padding='same', kernel_initializer=initializer, use_bias=False)) if apply_batchnorm: result.add(tf.keras.layers.BatchNormalization()) result.add(tf.keras.layers.ELU()) return result def EncoderNN(): down_stack_parent = [ gen_downsample_parent(32,4,apply_batchnorm=True, apply_dropout=False), gen_downsample_parent(64,4,apply_batchnorm=True, apply_dropout=False) ] # down_stack_noise =[ # # z = 4x4x64 # gen_downsample_noise(64,4,apply_batchnorm=True), #8x8x64 # gen_downsample_noise(32,4,apply_batchnorm=True) #16x16x32 # ] final_conv =[ gen_upsample(32,4 ,apply_batchnorm = True) ] initializer = tf.random_normal_initializer(mean, sd_random_normal_init) last = tf.keras.layers.Conv2DTranspose(OUTPUT_CHANNELS, 4, strides=2, padding='same', kernel_initializer=initializer, activation='tanh') concat = tf.keras.layers.Concatenate() father = tf.keras.layers.Input(shape=(img_size,img_size,3)) mother = tf.keras.layers.Input(shape=(img_size,img_size,3)) x1 = father for down in down_stack_parent: x1 = down(x1) # print(x1.shape) x2 = mother for down in down_stack_parent: x2 = down(x2) # print(x2.shape) final = concat([x1,x2]) # print(final.shape) final = final_conv[0](final) final = last(final) # print(final.shape) return tf.keras.Model(inputs=[father, mother], outputs=final) encoder_optimizer = tf.keras.optimizers.Adam(learning_rate = lr, beta_1=b1) def tensor_to_array(tensor1): return tensor1.numpy() def train_encoder(father_batch, mother_batch, target_batch, b_size): with tf.GradientTape() as enc_tape: gen_outputs = encoder([father_batch, mother_batch], training=True) diff = tf.abs(target_batch - gen_outputs) flatten_diff = tf.reshape(diff, (b_size, img_size*img_size*3)) encoder_loss_batch = tf.reduce_mean(flatten_diff, axis=1) encoder_loss = tf.reduce_mean(encoder_loss_batch) print("ENCODER_LOSS: ",tensor_to_array(encoder_loss)) #calculate gradients encoder_gradients = enc_tape.gradient(encoder_loss,encoder.trainable_variables) #apply gradients on optimizer encoder_optimizer.apply_gradients(zip(encoder_gradients,encoder.trainable_variables)) def fit_encoder(train_ds, epochs, test_ds, batch): losses=np.array([]) for epoch in range(epochs): print("______________________________EPOCH %d_______________________________"%(epoch+1)) start = time.time() for i in range(len(train_ds)//batch): batch_data = np.asarray(generate_batch(train_ds[i*batch:(i+1)*batch])) batch_data = batch_data / 255 * 2 -1 print("Generated batch", batch_data.shape) X_Father_train = tf.convert_to_tensor(batch_data[:,0],dtype =tf.float32) X_Mother_train = tf.convert_to_tensor(batch_data[:,1],dtype =tf.float32) Y_train = tf.convert_to_tensor(batch_data[:,3],dtype =tf.float32) train_encoder(X_Father_train, X_Mother_train, Y_train,batch) print("Trained for batch %d/%d"%(i+1,(len(train_ds)//batch))) print("______________________________TRAINING COMPLETED_______________________________") train_dataset = all_families[:-100] test_dataset = all_families[-100:] encoder = EncoderNN() with tf.device('/gpu:0'): fit_encoder(train_dataset, EPOCHS, test_dataset,batch) f_no = 1106 family_data = generate_batch([all_families[f_no]]) inp = [family_data[0][0],family_data[0][1]] inp = tf.cast(inp, tf.float32) father_inp = inp[0][tf.newaxis,...] mother_inp = inp[1][tf.newaxis,...] 
with tf.device('/cpu:0'):
    gen_output = encoder([father_inp, mother_inp], training=True)

temp = gen_output.numpy()
plt.imshow(np.squeeze(temp))
# print(temp)
print(np.amin(temp))
print(np.amax(temp))

target = family_data[0][3]
plt.imshow(target)
_____no_output_____
MIT
DCGAN_V2.ipynb
SAKARA96/Offspring-Face-Generator
def disc_downsample_parent_target(filters, size, apply_batchnorm=True): initializer = tf.random_normal_initializer(mean, std_dev) result = tf.keras.Sequential() result.add( tf.keras.layers.Conv2D(filters, size, strides=2, padding='same', kernel_initializer=initializer, use_bias=False)) if apply_batchnorm: result.add(tf.keras.layers.BatchNormalization()) result.add(tf.keras.layers.LeakyReLU(alpha = 0.2)) return result def disc_loss(filters, size,apply_batchnorm = False): initializer = tf.random_normal_initializer(mean, std_dev) result = tf.keras.Sequential() result.add( tf.keras.layers.Conv2D(filters, size, strides=2, padding='same', kernel_initializer=initializer, use_bias=False)) if apply_batchnorm: result.add(tf.keras.layers.BatchNormalization()) result.add(tf.keras.layers.LeakyReLU(alpha = 0.2)) return result def Discriminator(): father = tf.keras.layers.Input(shape=(img_size,img_size,3)) mother = tf.keras.layers.Input(shape=(img_size,img_size,3)) target = tf.keras.layers.Input(shape=(img_size,img_size,3)) down_stack_parent_target = [ disc_downsample_parent_target(32,4,apply_batchnorm=False), #32x32x32 disc_downsample_parent_target(64,4,apply_batchnorm=True) #16x16x64 ] down_stack_combined =[ disc_loss(192,4,apply_batchnorm=True), disc_loss(256,4,apply_batchnorm=False) ] initializer = tf.random_normal_initializer(mean, sd_random_normal_init) last = tf.keras.layers.Conv2D(1, 4, strides=1,padding='same', kernel_initializer=initializer) # linear layer concat = tf.keras.layers.Concatenate() x1 = father for down in down_stack_parent_target: x1 = down(x1) x2 = mother for down in down_stack_parent_target: x2 = down(x2) x3 = target for down in down_stack_parent_target: x3 = down(x3) combined = concat([x1,x2,x3]) # combined is Batchx16x16x192 x4 = combined for down in down_stack_combined: x4 = down(x4) # print(x4.shape) output = last(x4) #4X4 print(output.shape) return tf.keras.Model(inputs=[father,mother,target], outputs=output) discriminator = Discriminator() # family_data = generate_image(all_families[126]) # p1 = tf.cast(family_data[0], tf.float32) # p2 = tf.cast(family_data[1], tf.float32) # c = tf.cast(family_data[2], tf.float32) # discriminator = Discriminator() # with tf.device('/cpu:0'): # disc_out = discriminator(inputs = [p1,p2,c], training=True) LAMBDA = 1 def tensor_to_array(tensor1): return tensor1.numpy() def discriminator_loss(disc_real_output, disc_generated_output,b_size): real_loss_diff = tf.abs(tf.ones_like(disc_real_output) - disc_real_output) real_flatten_diff = tf.reshape(real_loss_diff, (b_size, 4*4*1)) real_loss_batch = tf.reduce_mean(real_flatten_diff, axis=1) real_loss = tf.reduce_mean(real_loss_batch) gen_loss_diff = tf.abs(tf.zeros_like(disc_generated_output) - disc_generated_output) gen_flatten_diff = tf.reshape(gen_loss_diff, (b_size, 4*4*1)) gen_loss_batch = tf.reduce_mean(gen_flatten_diff, axis=1) gen_loss = tf.reduce_mean(gen_loss_batch) total_disc_loss = real_loss + gen_loss return total_disc_loss def generator_loss(disc_generated_output, gen_output, target,b_size): gen_loss_diff = tf.abs(tf.ones_like(disc_generated_output) - disc_generated_output) gen_flatten_diff = tf.reshape(gen_loss_diff, (b_size, 4*4*1)) gen_loss_batch = tf.reduce_mean(gen_flatten_diff, axis=1) gen_loss = tf.reduce_mean(gen_loss_batch) l1_loss_diff = tf.abs(target - gen_output) l1_flatten_diff = tf.reshape(l1_loss_diff, (b_size, img_size*img_size*3)) l1_loss_batch = tf.reduce_mean(l1_flatten_diff, axis=1) l1_loss = tf.reduce_mean(l1_loss_batch) total_gen_loss = gen_loss + LAMBDA * l1_loss 
# print("Reconstruction loss: {}, GAN loss: {}".format(l1_loss, gen_loss)) return total_gen_loss generator_optimizer = tf.keras.optimizers.Adam(lr, beta_1=b1 ,beta_2 = b2) discriminator_optimizer = tf.keras.optimizers.Adam(lr, beta_1=b1, beta_2 = b2) def train_step(father_batch, mother_batch, target_batch,b_size): with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape: gen_outputs = encoder([father_batch, mother_batch], training=True) # print("Generated outputs",gen_outputs.shape) disc_real_output = discriminator([father_batch, mother_batch, target_batch], training=True) # print("disc_real_output ", disc_real_output.shape) disc_generated_output = discriminator([father_batch, mother_batch, gen_outputs], training=True) # print("disc_generated_output ", disc_generated_output.shape) gen_loss = generator_loss(disc_generated_output, gen_outputs, target_batch,b_size) disc_loss = discriminator_loss(disc_real_output, disc_generated_output,b_size) print("GEN_LOSS",tensor_to_array(gen_loss)) print("DISC_LOSS",tensor_to_array(disc_loss)) generator_gradients = gen_tape.gradient(gen_loss,encoder.trainable_variables) discriminator_gradients = disc_tape.gradient(disc_loss,discriminator.trainable_variables) generator_optimizer.apply_gradients(zip(generator_gradients,encoder.trainable_variables)) discriminator_optimizer.apply_gradients(zip(discriminator_gradients,discriminator.trainable_variables)) def fit(train_ds, epochs, test_ds,batch): for epoch in range(epochs): print("______________________________EPOCH %d_______________________________"%(epoch)) start = time.time() for i in range(len(train_ds)//batch): batch_data = np.asarray(generate_batch(train_ds[i*batch:(i+1)*batch])) batch_data = batch_data / 255 * 2 -1 print("Generated batch", batch_data.shape) X_father_train = tf.convert_to_tensor(batch_data[:,0],dtype =tf.float32) X_mother_train = tf.convert_to_tensor(batch_data[:,1],dtype =tf.float32) # print("Xtrain",X_train.shape) # print("Batch converted to tensor") Y_train = tf.convert_to_tensor(batch_data[:,3],dtype =tf.float32) train_step(X_father_train, X_mother_train, Y_train, batch) print("Trained for batch %d/%d"%(i+1,(len(train_ds)//batch))) # family_no = 400 # family_data = generate_image(all_families[family_no][0], all_families[family_no][1], all_families[family_no][2]) # inp = [family_data[0],family_data[1]] # inp = tf.cast(inp, tf.float32) # father_inp = inp[0][tf.newaxis,...] # mother_inp = inp[1][tf.newaxis,...] # gen_output = encoder([father_inp, mother_inp], training=True) # print(tf.reduce_min(gen_output)) # print(tf.reduce_max(gen_output)) # plt.figure() # plt.imshow(gen_output[0,...]) # plt.show() print("______________________________TRAINING COMPLETED_______________________________") checkpoint.save(file_prefix = checkpoint_prefix) concat = tf.keras.layers.Concatenate() train_dataset = all_families[:-10] test_dataset = all_families[-10:] encoder = EncoderNN() discriminator = Discriminator() img_size = 64 mean = 0. 
std_dev = 0.02
lr = 0.0005
b1 = 0.9
b2 = 0.999
sd_random_normal_init = 0.02
EPOCHS = 5
batch = 25

checkpoint_dir = './checkpoint'
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
checkpoint = tf.train.Checkpoint(generator_optimizer=generator_optimizer,
                                 discriminator_optimizer=discriminator_optimizer,
                                 generator=encoder,
                                 discriminator=discriminator)

with tf.device('/gpu:0'):
    fit(train_dataset, EPOCHS, test_dataset, batch)

family_no = 1011
family_data = generate_image(all_families[family_no][0], all_families[family_no][1], all_families[family_no][2])
inp = [family_data[0], family_data[1]]
inp = tf.cast(inp, tf.float32)
father_inp = inp[0][tf.newaxis, ...]
mother_inp = inp[1][tf.newaxis, ...]
with tf.device('/gpu:0'):
    gen_output = encoder([father_inp, mother_inp], training=True)
temp = gen_output.numpy()
plt.imshow(np.squeeze(temp))
print(np.amin(temp))
print(np.amax(temp))

family_no = 1011
family_data = generate_image(all_families[family_no][0], all_families[family_no][1], all_families[family_no][2])
inp = [family_data[0], family_data[1]]
inp = tf.cast(inp, tf.float32)
father_inp = inp[0][tf.newaxis, ...]
mother_inp = inp[1][tf.newaxis, ...]
with tf.device('/gpu:0'):
    gen_output = encoder([father_inp, mother_inp], training=True)
temp = gen_output.numpy()
plt.imshow(np.squeeze(temp))
print(np.amin(temp))
print(np.amax(temp))
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
MIT
DCGAN_V2.ipynb
SAKARA96/Offspring-Face-Generator
AWS Marketplace Product Usage Demonstration - Algorithms Using Algorithm ARN with Amazon SageMaker APIsThis sample notebook demonstrates two new functionalities added to Amazon SageMaker:1. Using an Algorithm ARN to run training jobs and use that result for inference2. Using an AWS Marketplace product ARN - we will use [Scikit Decision Trees](https://aws.amazon.com/marketplace/pp/prodview-ha4f3kqugba3u?qid=1543169069960&sr=0-1&ref_=srh_res_product_title) Overall flow diagram CompatibilityThis notebook is compatible only with [Scikit Decision Trees](https://aws.amazon.com/marketplace/pp/prodview-ha4f3kqugba3u?qid=1543169069960&sr=0-1&ref_=srh_res_product_title) sample algorithm published to AWS Marketplace. ***Pre-Requisite:*** Please subscribe to this free product before proceeding with this notebook Set up the environment
import sagemaker as sage from sagemaker import get_execution_role role = get_execution_role() # S3 prefixes common_prefix = "DEMO-scikit-byo-iris" training_input_prefix = common_prefix + "/training-input-data" batch_inference_input_prefix = common_prefix + "/batch-inference-input-data"
_____no_output_____
Apache-2.0
aws_marketplace/using_algorithms/amazon_demo_product/Using_Algorithm_Arn_From_AWS_Marketplace.ipynb
Amirosimani/amazon-sagemaker-examples
Create the sessionThe session remembers our connection parameters to Amazon SageMaker. We'll use it to perform all of our Amazon SageMaker operations.
sagemaker_session = sage.Session()
_____no_output_____
Apache-2.0
aws_marketplace/using_algorithms/amazon_demo_product/Using_Algorithm_Arn_From_AWS_Marketplace.ipynb
Amirosimani/amazon-sagemaker-examples
Upload the data for trainingWhen training large models with huge amounts of data, you'll typically use big data tools, like Amazon Athena, AWS Glue, or Amazon EMR, to create your data in S3. For the purposes of this example, we're using the classic [Iris dataset](https://en.wikipedia.org/wiki/Iris_flower_data_set), which we have included. We can use the tools provided by the Amazon SageMaker Python SDK to upload the data to a default bucket.
TRAINING_WORKDIR = "data/training" training_input = sagemaker_session.upload_data(TRAINING_WORKDIR, key_prefix=training_input_prefix) print("Training Data Location " + training_input)
_____no_output_____
Apache-2.0
aws_marketplace/using_algorithms/amazon_demo_product/Using_Algorithm_Arn_From_AWS_Marketplace.ipynb
Amirosimani/amazon-sagemaker-examples
Creating Training Job using Algorithm ARNPlease put in the algorithm arn you want to use below. This can either be an AWS Marketplace algorithm you subscribed to or one of the algorithms you created in your own account.The algorithm arn listed below belongs to the [Scikit Decision Trees](https://aws.amazon.com/marketplace/pp/prodview-ha4f3kqugba3u?qid=1543169069960&sr=0-1&ref_=srh_res_product_title) product.
from src.scikit_product_arns import ScikitArnProvider algorithm_arn = ScikitArnProvider.get_algorithm_arn(sagemaker_session.boto_region_name) import json import time from sagemaker.algorithm import AlgorithmEstimator algo = AlgorithmEstimator( algorithm_arn=algorithm_arn, role=role, train_instance_count=1, train_instance_type="ml.c4.xlarge", base_job_name="scikit-from-aws-marketplace", )
_____no_output_____
Apache-2.0
aws_marketplace/using_algorithms/amazon_demo_product/Using_Algorithm_Arn_From_AWS_Marketplace.ipynb
Amirosimani/amazon-sagemaker-examples
Run Training Job
print( "Now run the training job using algorithm arn %s in region %s" % (algorithm_arn, sagemaker_session.boto_region_name) ) algo.fit({"training": training_input})
_____no_output_____
Apache-2.0
aws_marketplace/using_algorithms/amazon_demo_product/Using_Algorithm_Arn_From_AWS_Marketplace.ipynb
Amirosimani/amazon-sagemaker-examples
Automated Model Tuning (optional)Since this algorithm supports tunable hyperparameters with a tuning objective metric, we can run a Hyperparameter Tuning Job to obtain the best training job hyperparameters and its corresponding model artifacts.
from sagemaker.tuner import HyperparameterTuner, IntegerParameter ## This demo algorithm supports max_leaf_nodes as the only tunable hyperparameter. hyperparameter_ranges = {"max_leaf_nodes": IntegerParameter(1, 100000)} tuner = HyperparameterTuner( estimator=algo, base_tuning_job_name="some-name", objective_metric_name="validation:accuracy", hyperparameter_ranges=hyperparameter_ranges, max_jobs=2, max_parallel_jobs=2, ) tuner.fit({"training": training_input}, include_cls_metadata=False) tuner.wait()
_____no_output_____
Apache-2.0
aws_marketplace/using_algorithms/amazon_demo_product/Using_Algorithm_Arn_From_AWS_Marketplace.ipynb
Amirosimani/amazon-sagemaker-examples
Batch Transform JobNow let's use the model built to run a batch inference job and verify it works. Batch Transform Input PreparationThe snippet below is removing the "label" column (column indexed at 0) and retaining the rest to be batch transform's input. ***NOTE:*** This is the same training data, which is a no-no from a ML science perspective. But the aim of this notebook is to demonstrate how things work end-to-end.
import pandas as pd ## Remove first column that contains the label shape = pd.read_csv(TRAINING_WORKDIR + "/iris.csv", header=None).drop([0], axis=1) TRANSFORM_WORKDIR = "data/transform" shape.to_csv(TRANSFORM_WORKDIR + "/batchtransform_test.csv", index=False, header=False) transform_input = ( sagemaker_session.upload_data(TRANSFORM_WORKDIR, key_prefix=batch_inference_input_prefix) + "/batchtransform_test.csv" ) print("Transform input uploaded to " + transform_input) transformer = algo.transformer(1, "ml.m4.xlarge") transformer.transform(transform_input, content_type="text/csv") transformer.wait() print("Batch Transform output saved to " + transformer.output_path)
_____no_output_____
Apache-2.0
aws_marketplace/using_algorithms/amazon_demo_product/Using_Algorithm_Arn_From_AWS_Marketplace.ipynb
Amirosimani/amazon-sagemaker-examples
Inspect the Batch Transform Output in S3
from urllib.parse import urlparse parsed_url = urlparse(transformer.output_path) bucket_name = parsed_url.netloc file_key = "{}/{}.out".format(parsed_url.path[1:], "batchtransform_test.csv") s3_client = sagemaker_session.boto_session.client("s3") response = s3_client.get_object(Bucket=sagemaker_session.default_bucket(), Key=file_key) response_bytes = response["Body"].read().decode("utf-8") print(response_bytes)
_____no_output_____
Apache-2.0
aws_marketplace/using_algorithms/amazon_demo_product/Using_Algorithm_Arn_From_AWS_Marketplace.ipynb
Amirosimani/amazon-sagemaker-examples
Live Inference EndpointFinally, we demonstrate the creation of an endpoint for live inference using this AWS Marketplace algorithm generated model
from sagemaker.predictor import csv_serializer predictor = algo.deploy(1, "ml.m4.xlarge", serializer=csv_serializer)
_____no_output_____
Apache-2.0
aws_marketplace/using_algorithms/amazon_demo_product/Using_Algorithm_Arn_From_AWS_Marketplace.ipynb
Amirosimani/amazon-sagemaker-examples
Choose some data and use it for a predictionIn order to do some predictions, we'll extract some of the data we used for training and do predictions against it. This is, of course, bad statistical practice, but a good way to see how the mechanism works.
shape = pd.read_csv(TRAINING_WORKDIR + "/iris.csv", header=None) import itertools a = [50 * i for i in range(3)] b = [40 + i for i in range(10)] indices = [i + j for i, j in itertools.product(a, b)] test_data = shape.iloc[indices[:-1]] test_X = test_data.iloc[:, 1:] test_y = test_data.iloc[:, 0]
_____no_output_____
Apache-2.0
aws_marketplace/using_algorithms/amazon_demo_product/Using_Algorithm_Arn_From_AWS_Marketplace.ipynb
Amirosimani/amazon-sagemaker-examples
Prediction is as easy as calling predict with the predictor we got back from deploy and the data we want to do predictions with. The serializers take care of doing the data conversions for us.
print(predictor.predict(test_X.values).decode("utf-8"))
_____no_output_____
Apache-2.0
aws_marketplace/using_algorithms/amazon_demo_product/Using_Algorithm_Arn_From_AWS_Marketplace.ipynb
Amirosimani/amazon-sagemaker-examples
Cleanup the endpoint
algo.delete_endpoint()
_____no_output_____
Apache-2.0
aws_marketplace/using_algorithms/amazon_demo_product/Using_Algorithm_Arn_From_AWS_Marketplace.ipynb
Amirosimani/amazon-sagemaker-examples
Detrending, Stylized Facts and the Business CycleIn an influential article, Harvey and Jaeger (1993) described the use of unobserved components models (also known as "structural time series models") to derive stylized facts of the business cycle.Their paper begins: "Establishing the 'stylized facts' associated with a set of time series is widely considered a crucial step in macroeconomic research ... For such facts to be useful they should (1) be consistent with the stochastic properties of the data and (2) present meaningful information." In particular, they make the argument that these goals are often better met using the unobserved components approach rather than the popular Hodrick-Prescott filter or Box-Jenkins ARIMA modeling techniques.Statsmodels has the ability to perform all three types of analysis, and below we follow the steps of their paper, using a slightly updated dataset.
%matplotlib inline import numpy as np import pandas as pd import statsmodels.api as sm import matplotlib.pyplot as plt from IPython.display import display, Latex
_____no_output_____
BSD-3-Clause
examples/notebooks/statespace_structural_harvey_jaeger.ipynb
yarikoptic/statsmodels
Unobserved ComponentsThe unobserved components model available in Statsmodels can be written as:$$y_t = \underbrace{\mu_{t}}_{\text{trend}} + \underbrace{\gamma_{t}}_{\text{seasonal}} + \underbrace{c_{t}}_{\text{cycle}} + \sum_{j=1}^k \underbrace{\beta_j x_{jt}}_{\text{explanatory}} + \underbrace{\varepsilon_t}_{\text{irregular}}$$see Durbin and Koopman 2012, Chapter 3 for notation and additional details. Notice that different specifications for the different individual components can support a wide range of models. The specific models considered in the paper and below are specializations of this general equation. TrendThe trend component is a dynamic extension of a regression model that includes an intercept and linear time-trend.$$\begin{align}\underbrace{\mu_{t+1}}_{\text{level}} & = \mu_t + \nu_t + \eta_{t+1} \qquad & \eta_{t+1} \sim N(0, \sigma_\eta^2) \\\\\underbrace{\nu_{t+1}}_{\text{trend}} & = \nu_t + \zeta_{t+1} & \zeta_{t+1} \sim N(0, \sigma_\zeta^2) \\\end{align}$$where the level is a generalization of the intercept term that can dynamically vary across time, and the trend is a generalization of the time-trend such that the slope can dynamically vary across time.For both elements (level and trend), we can consider models in which:- The element is included vs excluded (if the trend is included, there must also be a level included).- The element is deterministic vs stochastic (i.e. whether or not the variance on the error term is confined to be zero or not)The only additional parameters to be estimated via MLE are the variances of any included stochastic components.This leads to the following specifications:| | Level | Trend | Stochastic Level | Stochastic Trend ||----------------------------------------------------------------------|-------|-------|------------------|------------------|| Constant | βœ“ | | | || Local Level (random walk) | βœ“ | | βœ“ | || Deterministic trend | βœ“ | βœ“ | | || Local level with deterministic trend (random walk with drift) | βœ“ | βœ“ | βœ“ | || Local linear trend | βœ“ | βœ“ | βœ“ | βœ“ || Smooth trend (integrated random walk) | βœ“ | βœ“ | | βœ“ | SeasonalThe seasonal component is written as:$$\gamma_t = - \sum_{j=1}^{s-1} \gamma_{t+1-j} + \omega_t \qquad \omega_t \sim N(0, \sigma_\omega^2)$$The periodicity (number of seasons) is `s`, and the defining character is that (without the error term), the seasonal components sum to zero across one complete cycle. The inclusion of an error term allows the seasonal effects to vary over time.The variants of this model are:- The periodicity `s`- Whether or not to make the seasonal effects stochastic.If the seasonal effect is stochastic, then there is one additional parameter to estimate via MLE (the variance of the error term). CycleThe cyclical component is intended to capture cyclical effects at time frames much longer than captured by the seasonal component. For example, in economics the cyclical term is often intended to capture the business cycle, and is then expected to have a period between "1.5 and 12 years" (see Durbin and Koopman).The cycle is written as:$$\begin{align}c_{t+1} & = c_t \cos \lambda_c + c_t^* \sin \lambda_c + \tilde \omega_t \qquad & \tilde \omega_t \sim N(0, \sigma_{\tilde \omega}^2) \\\\c_{t+1}^* & = -c_t \sin \lambda_c + c_t^* \cos \lambda_c + \tilde \omega_t^* & \tilde \omega_t^* \sim N(0, \sigma_{\tilde \omega}^2)\end{align}$$The parameter $\lambda_c$ (the frequency of the cycle) is an additional parameter to be estimated by MLE. 
If the cyclical effect is stochastic, then there is one additional parameter to estimate (the variance of the error term - note that both of the error terms here share the same variance, but are assumed to have independent draws). Irregular The irregular component is assumed to be a white noise error term. Its variance is a parameter to be estimated by MLE; i.e.$$\varepsilon_t \sim N(0, \sigma_\varepsilon^2)$$In some cases, we may want to generalize the irregular component to allow for autoregressive effects:$$\varepsilon_t = \rho(L) \varepsilon_{t-1} + \epsilon_t, \qquad \epsilon_t \sim N(0, \sigma_\epsilon^2)$$In this case, the autoregressive parameters would also be estimated via MLE. Regression effectsWe may want to allow for explanatory variables by including additional terms$$\sum_{j=1}^k \beta_j x_{jt}$$or for intervention effects by including$$\begin{align}\delta w_t \qquad \text{where} \qquad w_t & = 0, \qquad t < \tau, \\\\& = 1, \qquad t \ge \tau\end{align}$$These additional parameters could be estimated via MLE or by including them as components of the state space formulation. DataFollowing Harvey and Jaeger, we will consider the following time series:- US real GNP, "output", ([GNPC96](https://research.stlouisfed.org/fred2/series/GNPC96))- US GNP implicit price deflator, "prices", ([GNPDEF](https://research.stlouisfed.org/fred2/series/GNPDEF))- US monetary base, "money", ([AMBSL](https://research.stlouisfed.org/fred2/series/AMBSL))The time frame in the original paper varied across series, but was broadly 1954-1989. Below we use data from the period 1948-2008 for all series. Although the unobserved components approach allows isolating a seasonal component within the model, the series considered in the paper, and here, are already seasonally adjusted.All data series considered here are taken from [Federal Reserve Economic Data (FRED)](https://research.stlouisfed.org/fred2/). Conveniently, the Python library [Pandas](http://pandas.pydata.org/) has the ability to download data from FRED directly.
# Datasets
# Note: pandas.io.data has been removed from pandas; DataReader now lives in
# the separate pandas_datareader package.
from pandas_datareader.data import DataReader

# Get the raw data
start = '1948-01'
end = '2008-01'
us_gnp = DataReader('GNPC96', 'fred', start=start, end=end)
us_gnp_deflator = DataReader('GNPDEF', 'fred', start=start, end=end)
us_monetary_base = DataReader('AMBSL', 'fred', start=start, end=end).resample('QS').mean()
recessions = DataReader('USRECQ', 'fred', start=start, end=end).resample('QS').last().values[:,0]

# Construct the dataframe
dta = pd.concat(map(np.log, (us_gnp, us_gnp_deflator, us_monetary_base)), axis=1)
dta.columns = ['US GNP','US Prices','US monetary base']
dates = dta.index._mpl_repr()
_____no_output_____
BSD-3-Clause
examples/notebooks/statespace_structural_harvey_jaeger.ipynb
yarikoptic/statsmodels
To get a sense of these three variables over the timeframe, we can plot them:
# Plot the data ax = dta.plot(figsize=(13,3)) ylim = ax.get_ylim() ax.xaxis.grid() ax.fill_between(dates, ylim[0]+1e-5, ylim[1]-1e-5, recessions, facecolor='k', alpha=0.1);
_____no_output_____
BSD-3-Clause
examples/notebooks/statespace_structural_harvey_jaeger.ipynb
yarikoptic/statsmodels
ModelSince the data is already seasonally adjusted and there are no obvious explanatory variables, the generic model considered is:$$y_t = \underbrace{\mu_{t}}_{\text{trend}} + \underbrace{c_{t}}_{\text{cycle}} + \underbrace{\varepsilon_t}_{\text{irregular}}$$The irregular will be assumed to be white noise, and the cycle will be stochastic and damped. The final modeling choice is the specification to use for the trend component. Harvey and Jaeger consider two models:1. Local linear trend (the "unrestricted" model)2. Smooth trend (the "restricted" model, since we are forcing $\sigma_\eta = 0$)Below, we construct `kwargs` dictionaries for each of these model types. Notice that there are two ways to specify the models. One way is to specify components directly, as in the table above. The other way is to use string names which map to various specifications.
# Model specifications

# Unrestricted model, using string specification
unrestricted_model = {
    'level': 'local linear trend', 'cycle': True, 'damped_cycle': True, 'stochastic_cycle': True
}

# Unrestricted model, setting components directly
# This is an equivalent, but less convenient, way to specify a
# local linear trend model with a stochastic damped cycle:
# unrestricted_model = {
#     'irregular': True, 'level': True, 'stochastic_level': True, 'trend': True, 'stochastic_trend': True,
#     'cycle': True, 'damped_cycle': True, 'stochastic_cycle': True
# }

# The restricted model forces a smooth trend
restricted_model = {
    'level': 'smooth trend', 'cycle': True, 'damped_cycle': True, 'stochastic_cycle': True
}

# Restricted model, setting components directly
# This is an equivalent, but less convenient, way to specify a
# smooth trend model with a stochastic damped cycle. Notice
# that the difference from the local linear trend model is that
# `stochastic_level=False` here.
# restricted_model = {
#     'irregular': True, 'level': True, 'stochastic_level': False, 'trend': True, 'stochastic_trend': True,
#     'cycle': True, 'damped_cycle': True, 'stochastic_cycle': True
# }
_____no_output_____
BSD-3-Clause
examples/notebooks/statespace_structural_harvey_jaeger.ipynb
yarikoptic/statsmodels
We now fit the following models:1. Output, unrestricted model2. Prices, unrestricted model3. Prices, restricted model4. Money, unrestricted model5. Money, restricted model
# Output output_mod = sm.tsa.UnobservedComponents(dta['US GNP'], **unrestricted_model) output_res = output_mod.fit(method='powell', disp=False) # Prices prices_mod = sm.tsa.UnobservedComponents(dta['US Prices'], **unrestricted_model) prices_res = prices_mod.fit(method='powell', disp=False) prices_restricted_mod = sm.tsa.UnobservedComponents(dta['US Prices'], **restricted_model) prices_restricted_res = prices_restricted_mod.fit(method='powell', disp=False) # Money money_mod = sm.tsa.UnobservedComponents(dta['US monetary base'], **unrestricted_model) money_res = money_mod.fit(method='powell', disp=False) money_restricted_mod = sm.tsa.UnobservedComponents(dta['US monetary base'], **restricted_model) money_restricted_res = money_restricted_mod.fit(method='powell', disp=False)
_____no_output_____
BSD-3-Clause
examples/notebooks/statespace_structural_harvey_jaeger.ipynb
yarikoptic/statsmodels
Once we have fit these models, there are a variety of ways to display the information. Looking at the model of US GNP, we can summarize the fit of the model using the `summary` method on the fit object.
print(output_res.summary())
_____no_output_____
BSD-3-Clause
examples/notebooks/statespace_structural_harvey_jaeger.ipynb
yarikoptic/statsmodels
For unobserved components models, and in particular when exploring stylized facts in line with point (2) from the introduction, it is often more instructive to plot the estimated unobserved components (e.g. the level, trend, and cycle) themselves to see if they provide a meaningful description of the data.The `plot_components` method of the fit object can be used to show plots and confidence intervals of each of the estimated states, as well as a plot of the observed data versus the one-step-ahead predictions of the model to assess fit.
fig = output_res.plot_components(legend_loc='lower right', figsize=(15, 9));
_____no_output_____
BSD-3-Clause
examples/notebooks/statespace_structural_harvey_jaeger.ipynb
yarikoptic/statsmodels
Finally, Harvey and Jaeger summarize the models in another way to highlight the relative importances of the trend and cyclical components; below we replicate their Table I. The values we find are broadly consistent with, but different in the particulars from, the values from their table.
# Create Table I table_i = np.zeros((5,6)) start = dta.index[0] end = dta.index[-1] time_range = '%d:%d-%d:%d' % (start.year, start.quarter, end.year, end.quarter) models = [ ('US GNP', time_range, 'None'), ('US Prices', time_range, 'None'), ('US Prices', time_range, r'$\sigma_\eta^2 = 0$'), ('US monetary base', time_range, 'None'), ('US monetary base', time_range, r'$\sigma_\eta^2 = 0$'), ] index = pd.MultiIndex.from_tuples(models, names=['Series', 'Time range', 'Restrictions']) parameter_symbols = [ r'$\sigma_\zeta^2$', r'$\sigma_\eta^2$', r'$\sigma_\kappa^2$', r'$\rho$', r'$2 \pi / \lambda_c$', r'$\sigma_\varepsilon^2$', ] i = 0 for res in (output_res, prices_res, prices_restricted_res, money_res, money_restricted_res): if res.model.stochastic_level: (sigma_irregular, sigma_level, sigma_trend, sigma_cycle, frequency_cycle, damping_cycle) = res.params else: (sigma_irregular, sigma_level, sigma_cycle, frequency_cycle, damping_cycle) = res.params sigma_trend = np.nan period_cycle = 2 * np.pi / frequency_cycle table_i[i, :] = [ sigma_level*1e7, sigma_trend*1e7, sigma_cycle*1e7, damping_cycle, period_cycle, sigma_irregular*1e7 ] i += 1 pd.set_option('float_format', lambda x: '%.4g' % np.round(x, 2) if not np.isnan(x) else '-') table_i = pd.DataFrame(table_i, index=index, columns=parameter_symbols) table_i
_____no_output_____
BSD-3-Clause
examples/notebooks/statespace_structural_harvey_jaeger.ipynb
yarikoptic/statsmodels
Logistic Regression with a Neural Network mindsetWelcome to your first (required) programming assignment! You will build a logistic regression classifier to recognize cats. This assignment will step you through how to do this with a Neural Network mindset, and so will also hone your intuitions about deep learning.**Instructions:**- Do not use loops (for/while) in your code, unless the instructions explicitly ask you to do so.**You will learn to:**- Build the general architecture of a learning algorithm, including: - Initializing parameters - Calculating the cost function and its gradient - Using an optimization algorithm (gradient descent) - Gather all three functions above into a main model function, in the right order. 1 - Packages First, let's run the cell below to import all the packages that you will need during this assignment. - [numpy](www.numpy.org) is the fundamental package for scientific computing with Python.- [h5py](http://www.h5py.org) is a common package to interact with a dataset that is stored on an H5 file.- [matplotlib](http://matplotlib.org) is a famous library to plot graphs in Python.- [PIL](http://www.pythonware.com/products/pil/) and [scipy](https://www.scipy.org/) are used here to test your model with your own picture at the end.
import numpy as np import matplotlib.pyplot as plt import h5py import scipy from PIL import Image from scipy import ndimage from lr_utils import load_dataset %matplotlib inline
_____no_output_____
MIT
01.Neural-networks-Deep-learning/Week2/Logistic Regression as a Neural Network/Logistic Regression with a Neural Network mindset v3.ipynb
navicester/deeplearning.ai-Assignments
2 - Overview of the Problem set **Problem Statement**: You are given a dataset ("data.h5") containing: - a training set of m_train images labeled as cat (y=1) or non-cat (y=0) - a test set of m_test images labeled as cat or non-cat - each image is of shape (num_px, num_px, 3) where 3 is for the 3 channels (RGB). Thus, each image is square (height = num_px) and (width = num_px).You will build a simple image-recognition algorithm that can correctly classify pictures as cat or non-cat.Let's get more familiar with the dataset. Load the data by running the following code.
# Loading the data (cat/non-cat) train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_dataset()
_____no_output_____
MIT
01.Neural-networks-Deep-learning/Week2/Logistic Regression as a Neural Network/Logistic Regression with a Neural Network mindset v3.ipynb
navicester/deeplearning.ai-Assignments
We added "_orig" at the end of image datasets (train and test) because we are going to preprocess them. After preprocessing, we will end up with train_set_x and test_set_x (the labels train_set_y and test_set_y don't need any preprocessing).Each line of your train_set_x_orig and test_set_x_orig is an array representing an image. You can visualize an example by running the following code. Feel free also to change the `index` value and re-run to see other images.
# Example of a picture index = 25 plt.imshow(train_set_x_orig[index]) print ("y = " + str(train_set_y[:, index]) + ", it's a '" + classes[np.squeeze(train_set_y[:, index])].decode("utf-8") + "' picture.")
_____no_output_____
MIT
01.Neural-networks-Deep-learning/Week2/Logistic Regression as a Neural Network/Logistic Regression with a Neural Network mindset v3.ipynb
navicester/deeplearning.ai-Assignments
Many software bugs in deep learning come from having matrix/vector dimensions that don't fit. If you can keep your matrix/vector dimensions straight you will go a long way toward eliminating many bugs. **Exercise:** Find the values for: - m_train (number of training examples) - m_test (number of test examples) - num_px (= height = width of a training image)Remember that `train_set_x_orig` is a numpy-array of shape (m_train, num_px, num_px, 3). For instance, you can access `m_train` by writing `train_set_x_orig.shape[0]`.
### START CODE HERE ### (β‰ˆ 3 lines of code) m_train = None m_test = None num_px = None ### END CODE HERE ### print ("Number of training examples: m_train = " + str(m_train)) print ("Number of testing examples: m_test = " + str(m_test)) print ("Height/Width of each image: num_px = " + str(num_px)) print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)") print ("train_set_x shape: " + str(train_set_x_orig.shape)) print ("train_set_y shape: " + str(train_set_y.shape)) print ("test_set_x shape: " + str(test_set_x_orig.shape)) print ("test_set_y shape: " + str(test_set_y.shape))
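The blanks above are left for the learner to fill in. As an illustration only (a sketch, not necessarily the assignment's official solution), one way to complete them, using the shape information given in the exercise text, is:
```python
# Possible completion (illustrative): read the sizes off the array shapes
m_train = train_set_x_orig.shape[0]   # number of training examples
m_test = test_set_x_orig.shape[0]     # number of test examples
num_px = train_set_x_orig.shape[1]    # height (= width) of each image
```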
_____no_output_____
MIT
01.Neural-networks-Deep-learning/Week2/Logistic Regression as a Neural Network/Logistic Regression with a Neural Network mindset v3.ipynb
navicester/deeplearning.ai-Assignments
**Expected Output for m_train, m_test and num_px**: **m_train** 209 **m_test** 50 **num_px** 64 For convenience, you should now reshape images of shape (num_px, num_px, 3) in a numpy-array of shape (num_px $*$ num_px $*$ 3, 1). After this, our training (and test) dataset is a numpy-array where each column represents a flattened image. There should be m_train (respectively m_test) columns.**Exercise:** Reshape the training and test data sets so that images of size (num_px, num_px, 3) are flattened into single vectors of shape (num\_px $*$ num\_px $*$ 3, 1).A trick when you want to flatten a matrix X of shape (a,b,c,d) to a matrix X_flatten of shape (b$*$c$*$d, a) is to use:
```python
X_flatten = X.reshape(X.shape[0], -1).T      # X.T is the transpose of X
```
# Reshape the training and test examples ### START CODE HERE ### (β‰ˆ 2 lines of code) train_set_x_flatten = None test_set_x_flatten = None ### END CODE HERE ### print ("train_set_x_flatten shape: " + str(train_set_x_flatten.shape)) print ("train_set_y shape: " + str(train_set_y.shape)) print ("test_set_x_flatten shape: " + str(test_set_x_flatten.shape)) print ("test_set_y shape: " + str(test_set_y.shape)) print ("sanity check after reshaping: " + str(train_set_x_flatten[0:5,0]))
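As a hedged illustration (one possible completion, not necessarily the graded solution), the reshape trick described just above can be applied directly to the two arrays:
```python
# Possible completion (illustrative), using the reshape trick above:
# each (num_px, num_px, 3) image becomes one column of length num_px*num_px*3
train_set_x_flatten = train_set_x_orig.reshape(train_set_x_orig.shape[0], -1).T
test_set_x_flatten = test_set_x_orig.reshape(test_set_x_orig.shape[0], -1).T
```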
_____no_output_____
MIT
01.Neural-networks-Deep-learning/Week2/Logistic Regression as a Neural Network/Logistic Regression with a Neural Network mindset v3.ipynb
navicester/deeplearning.ai-Assignments
**Expected Output**: **train_set_x_flatten shape** (12288, 209) **train_set_y shape** (1, 209) **test_set_x_flatten shape** (12288, 50) **test_set_y shape** (1, 50) **sanity check after reshaping** [17 31 56 22 33] To represent color images, the red, green and blue channels (RGB) must be specified for each pixel, and so the pixel value is actually a vector of three numbers ranging from 0 to 255.One common preprocessing step in machine learning is to center and standardize your dataset, meaning that you subtract the mean of the whole numpy array from each example, and then divide each example by the standard deviation of the whole numpy array. But for picture datasets, it is simpler and more convenient and works almost as well to just divide every row of the dataset by 255 (the maximum value of a pixel channel). Let's standardize our dataset.
train_set_x = train_set_x_flatten/255. test_set_x = test_set_x_flatten/255.
_____no_output_____
MIT
01.Neural-networks-Deep-learning/Week2/Logistic Regression as a Neural Network/Logistic Regression with a Neural Network mindset v3.ipynb
navicester/deeplearning.ai-Assignments
**What you need to remember:**Common steps for pre-processing a new dataset are:- Figure out the dimensions and shapes of the problem (m_train, m_test, num_px, ...)- Reshape the datasets such that each example is now a vector of size (num_px \* num_px \* 3, 1)- "Standardize" the data 3 - General Architecture of the learning algorithm It's time to design a simple algorithm to distinguish cat images from non-cat images.You will build a Logistic Regression, using a Neural Network mindset. The following Figure explains why **Logistic Regression is actually a very simple Neural Network!****Mathematical expression of the algorithm**:For one example $x^{(i)}$:$$z^{(i)} = w^T x^{(i)} + b \tag{1}$$$$\hat{y}^{(i)} = a^{(i)} = sigmoid(z^{(i)})\tag{2}$$ $$ \mathcal{L}(a^{(i)}, y^{(i)}) = - y^{(i)} \log(a^{(i)}) - (1-y^{(i)} ) \log(1-a^{(i)})\tag{3}$$The cost is then computed by summing over all training examples:$$ J = \frac{1}{m} \sum_{i=1}^m \mathcal{L}(a^{(i)}, y^{(i)})\tag{6}$$**Key steps**:In this exercise, you will carry out the following steps: - Initialize the parameters of the model - Learn the parameters for the model by minimizing the cost - Use the learned parameters to make predictions (on the test set) - Analyse the results and conclude 4 - Building the parts of our algorithm The main steps for building a Neural Network are:1. Define the model structure (such as number of input features) 2. Initialize the model's parameters3. Loop: - Calculate current loss (forward propagation) - Calculate current gradient (backward propagation) - Update parameters (gradient descent)You often build 1-3 separately and integrate them into one function we call `model()`. 4.1 - Helper functions**Exercise**: Using your code from "Python Basics", implement `sigmoid()`. As you've seen in the figure above, you need to compute $sigmoid( w^T x + b) = \frac{1}{1 + e^{-(w^T x + b)}}$ to make predictions. Use np.exp().
# GRADED FUNCTION: sigmoid def sigmoid(z): """ Compute the sigmoid of z Arguments: z -- A scalar or numpy array of any size. Return: s -- sigmoid(z) """ ### START CODE HERE ### (β‰ˆ 1 line of code) s = None ### END CODE HERE ### return s print ("sigmoid([0, 2]) = " + str(sigmoid(np.array([0,2]))))
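The body of `sigmoid` is left blank for the learner. As an illustrative sketch (not necessarily the official solution), the formula given above can be implemented elementwise with `np.exp()`:
```python
# Possible completion (illustrative): elementwise sigmoid via np.exp
s = 1 / (1 + np.exp(-z))
```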
_____no_output_____
MIT
01.Neural-networks-Deep-learning/Week2/Logistic Regression as a Neural Network/Logistic Regression with a Neural Network mindset v3.ipynb
navicester/deeplearning.ai-Assignments
**Expected Output**: **sigmoid([0, 2])** [ 0.5 0.88079708] 4.2 - Initializing parameters**Exercise:** Implement parameter initialization in the cell below. You have to initialize w as a vector of zeros. If you don't know what numpy function to use, look up np.zeros() in the Numpy library's documentation.
# GRADED FUNCTION: initialize_with_zeros def initialize_with_zeros(dim): """ This function creates a vector of zeros of shape (dim, 1) for w and initializes b to 0. Argument: dim -- size of the w vector we want (or number of parameters in this case) Returns: w -- initialized vector of shape (dim, 1) b -- initialized scalar (corresponds to the bias) """ ### START CODE HERE ### (β‰ˆ 1 line of code) w = None b = None ### END CODE HERE ### assert(w.shape == (dim, 1)) assert(isinstance(b, float) or isinstance(b, int)) return w, b dim = 2 w, b = initialize_with_zeros(dim) print ("w = " + str(w)) print ("b = " + str(b))
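One possible way to fill in the blanks, shown purely as an illustration (the asserts in the template require a (dim, 1) array and a numeric scalar):
```python
# Possible completion (illustrative):
w = np.zeros((dim, 1))   # (dim, 1) column vector of zeros
b = 0.0                  # scalar bias
```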
_____no_output_____
MIT
01.Neural-networks-Deep-learning/Week2/Logistic Regression as a Neural Network/Logistic Regression with a Neural Network mindset v3.ipynb
navicester/deeplearning.ai-Assignments
**Expected Output**: ** w ** [[ 0.] [ 0.]] ** b ** 0 For image inputs, w will be of shape (num_px $\times$ num_px $\times$ 3, 1). 4.3 - Forward and Backward propagationNow that your parameters are initialized, you can do the "forward" and "backward" propagation steps for learning the parameters.**Exercise:** Implement a function `propagate()` that computes the cost function and its gradient.**Hints**:Forward Propagation:- You get X- You compute $A = \sigma(w^T X + b) = (a^{(0)}, a^{(1)}, ..., a^{(m-1)}, a^{(m)})$- You calculate the cost function: $J = -\frac{1}{m}\sum_{i=1}^{m}y^{(i)}\log(a^{(i)})+(1-y^{(i)})\log(1-a^{(i)})$Here are the two formulas you will be using: $$ \frac{\partial J}{\partial w} = \frac{1}{m}X(A-Y)^T\tag{7}$$$$ \frac{\partial J}{\partial b} = \frac{1}{m} \sum_{i=1}^m (a^{(i)}-y^{(i)})\tag{8}$$
# GRADED FUNCTION: propagate def propagate(w, b, X, Y): """ Implement the cost function and its gradient for the propagation explained above Arguments: w -- weights, a numpy array of size (num_px * num_px * 3, 1) b -- bias, a scalar X -- data of size (num_px * num_px * 3, number of examples) Y -- true "label" vector (containing 0 if non-cat, 1 if cat) of size (1, number of examples) Return: cost -- negative log-likelihood cost for logistic regression dw -- gradient of the loss with respect to w, thus same shape as w db -- gradient of the loss with respect to b, thus same shape as b Tips: - Write your code step by step for the propagation. np.log(), np.dot() """ m = X.shape[1] # FORWARD PROPAGATION (FROM X TO COST) ### START CODE HERE ### (β‰ˆ 2 lines of code) A = None # compute activation cost = None # compute cost ### END CODE HERE ### # BACKWARD PROPAGATION (TO FIND GRAD) ### START CODE HERE ### (β‰ˆ 2 lines of code) dw = None db = None ### END CODE HERE ### assert(dw.shape == w.shape) assert(db.dtype == float) cost = np.squeeze(cost) assert(cost.shape == ()) grads = {"dw": dw, "db": db} return grads, cost w, b, X, Y = np.array([[1],[2]]), 2, np.array([[1,2],[3,4]]), np.array([[1,0]]) grads, cost = propagate(w, b, X, Y) print ("dw = " + str(grads["dw"])) print ("db = " + str(grads["db"])) print ("cost = " + str(cost))
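As a hedged sketch of how the blanks could be completed (one possibility, following the cost and gradient formulas quoted in the exercise text, and assuming the `sigmoid` helper defined earlier):
```python
# Possible completion (illustrative):
A = sigmoid(np.dot(w.T, X) + b)                                      # activation, shape (1, m)
cost = -(1 / m) * np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A))    # cross-entropy cost
dw = (1 / m) * np.dot(X, (A - Y).T)                                  # gradient w.r.t. w, equation (7)
db = (1 / m) * np.sum(A - Y)                                         # gradient w.r.t. b, equation (8)
```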
_____no_output_____
MIT
01.Neural-networks-Deep-learning/Week2/Logistic Regression as a Neural Network/Logistic Regression with a Neural Network mindset v3.ipynb
navicester/deeplearning.ai-Assignments
**Expected Output**: ** dw ** [[ 0.99993216] [ 1.99980262]] ** db ** 0.499935230625 ** cost ** 6.000064773192205 d) Optimization- You have initialized your parameters.- You are also able to compute a cost function and its gradient.- Now, you want to update the parameters using gradient descent.**Exercise:** Write down the optimization function. The goal is to learn $w$ and $b$ by minimizing the cost function $J$. For a parameter $\theta$, the update rule is $ \theta = \theta - \alpha \text{ } d\theta$, where $\alpha$ is the learning rate.
# GRADED FUNCTION: optimize def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost = False): """ This function optimizes w and b by running a gradient descent algorithm Arguments: w -- weights, a numpy array of size (num_px * num_px * 3, 1) b -- bias, a scalar X -- data of shape (num_px * num_px * 3, number of examples) Y -- true "label" vector (containing 0 if non-cat, 1 if cat), of shape (1, number of examples) num_iterations -- number of iterations of the optimization loop learning_rate -- learning rate of the gradient descent update rule print_cost -- True to print the loss every 100 steps Returns: params -- dictionary containing the weights w and bias b grads -- dictionary containing the gradients of the weights and bias with respect to the cost function costs -- list of all the costs computed during the optimization, this will be used to plot the learning curve. Tips: You basically need to write down two steps and iterate through them: 1) Calculate the cost and the gradient for the current parameters. Use propagate(). 2) Update the parameters using gradient descent rule for w and b. """ costs = [] for i in range(num_iterations): # Cost and gradient calculation (β‰ˆ 1-4 lines of code) ### START CODE HERE ### grads, cost = None ### END CODE HERE ### # Retrieve derivatives from grads dw = grads["dw"] db = grads["db"] # update rule (β‰ˆ 2 lines of code) ### START CODE HERE ### w = None b = None ### END CODE HERE ### # Record the costs if i % 100 == 0: costs.append(cost) # Print the cost every 100 training examples if print_cost and i % 100 == 0: print ("Cost after iteration %i: %f" %(i, cost)) params = {"w": w, "b": b} grads = {"dw": dw, "db": db} return params, grads, costs params, grads, costs = optimize(w, b, X, Y, num_iterations= 100, learning_rate = 0.009, print_cost = False) print ("w = " + str(params["w"])) print ("b = " + str(params["b"])) print ("dw = " + str(grads["dw"])) print ("db = " + str(grads["db"]))
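A possible completion of the two blanks inside the loop, offered only as an illustration and assuming the `propagate` function defined in the previous cell:
```python
# Possible completion (illustrative):
grads, cost = propagate(w, b, X, Y)   # cost and gradients at the current (w, b)
# (dw and db are then pulled out of `grads` by the template code)
w = w - learning_rate * dw            # gradient descent step for w
b = b - learning_rate * db            # gradient descent step for b
```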
_____no_output_____
MIT
01.Neural-networks-Deep-learning/Week2/Logistic Regression as a Neural Network/Logistic Regression with a Neural Network mindset v3.ipynb
navicester/deeplearning.ai-Assignments
**Expected Output**: **w** [[ 0.1124579 ] [ 0.23106775]] **b** 1.55930492484 **dw** [[ 0.90158428] [ 1.76250842]] **db** 0.430462071679 **Exercise:** The previous function will output the learned w and b. We are able to use w and b to predict the labels for a dataset X. Implement the `predict()` function. There are two steps to computing predictions:1. Calculate $\hat{Y} = A = \sigma(w^T X + b)$2. Convert the entries of a into 0 (if activation <= 0.5) or 1 (if activation > 0.5), and store the predictions in a vector `Y_prediction`. If you wish, you can use an `if`/`else` statement in a `for` loop (though there is also a way to vectorize this).
# GRADED FUNCTION: predict def predict(w, b, X): ''' Predict whether the label is 0 or 1 using learned logistic regression parameters (w, b) Arguments: w -- weights, a numpy array of size (num_px * num_px * 3, 1) b -- bias, a scalar X -- data of size (num_px * num_px * 3, number of examples) Returns: Y_prediction -- a numpy array (vector) containing all predictions (0/1) for the examples in X ''' m = X.shape[1] Y_prediction = np.zeros((1,m)) w = w.reshape(X.shape[0], 1) # Compute vector "A" predicting the probabilities of a cat being present in the picture ### START CODE HERE ### (β‰ˆ 1 line of code) A = None ### END CODE HERE ### for i in range(A.shape[1]): # Convert probabilities A[0,i] to actual predictions p[0,i] ### START CODE HERE ### (β‰ˆ 4 lines of code) pass ### END CODE HERE ### assert(Y_prediction.shape == (1, m)) return Y_prediction print ("predictions = " + str(predict(w, b, X)))
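One possible way to complete the blanks, shown as an illustrative sketch only (it follows the two steps listed above and assumes the `sigmoid` helper from earlier):
```python
# Possible completion (illustrative):
A = sigmoid(np.dot(w.T, X) + b)                   # probability of "cat" for each example, shape (1, m)
for i in range(A.shape[1]):
    Y_prediction[0, i] = 1 if A[0, i] > 0.5 else 0   # threshold at 0.5
```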
_____no_output_____
MIT
01.Neural-networks-Deep-learning/Week2/Logistic Regression as a Neural Network/Logistic Regression with a Neural Network mindset v3.ipynb
navicester/deeplearning.ai-Assignments
**Expected Output**: **predictions** [[ 1. 1.]] **What to remember:**You've implemented several functions that:- Initialize (w,b)- Optimize the loss iteratively to learn parameters (w,b): - computing the cost and its gradient - updating the parameters using gradient descent- Use the learned (w,b) to predict the labels for a given set of examples 5 - Merge all functions into a model You will now see how the overall model is structured by putting together all the building blocks (functions implemented in the previous parts) together, in the right order.**Exercise:** Implement the model function. Use the following notation: - Y_prediction for your predictions on the test set - Y_prediction_train for your predictions on the train set - w, costs, grads for the outputs of optimize()
# GRADED FUNCTION: model def model(X_train, Y_train, X_test, Y_test, num_iterations = 2000, learning_rate = 0.5, print_cost = False): """ Builds the logistic regression model by calling the function you've implemented previously Arguments: X_train -- training set represented by a numpy array of shape (num_px * num_px * 3, m_train) Y_train -- training labels represented by a numpy array (vector) of shape (1, m_train) X_test -- test set represented by a numpy array of shape (num_px * num_px * 3, m_test) Y_test -- test labels represented by a numpy array (vector) of shape (1, m_test) num_iterations -- hyperparameter representing the number of iterations to optimize the parameters learning_rate -- hyperparameter representing the learning rate used in the update rule of optimize() print_cost -- Set to true to print the cost every 100 iterations Returns: d -- dictionary containing information about the model. """ ### START CODE HERE ### # initialize parameters with zeros (β‰ˆ 1 line of code) w, b = None # Gradient descent (β‰ˆ 1 line of code) parameters, grads, costs = None # Retrieve parameters w and b from dictionary "parameters" w = parameters["w"] b = parameters["b"] # Predict test/train set examples (β‰ˆ 2 lines of code) Y_prediction_test = None Y_prediction_train = None ### END CODE HERE ### # Print train/test Errors print("train accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_train - Y_train)) * 100)) print("test accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_test - Y_test)) * 100)) d = {"costs": costs, "Y_prediction_test": Y_prediction_test, "Y_prediction_train" : Y_prediction_train, "w" : w, "b" : b, "learning_rate" : learning_rate, "num_iterations": num_iterations} return d
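As a hedged illustration of how the blanks could be wired together (one possibility, reusing the helper functions defined in the earlier cells, not necessarily the official solution):
```python
# Possible completion (illustrative):
w, b = initialize_with_zeros(X_train.shape[0])
parameters, grads, costs = optimize(w, b, X_train, Y_train, num_iterations, learning_rate, print_cost)
# (w and b are then read back from `parameters` by the template code)
Y_prediction_test = predict(w, b, X_test)
Y_prediction_train = predict(w, b, X_train)
```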
_____no_output_____
MIT
01.Neural-networks-Deep-learning/Week2/Logistic Regression as a Neural Network/Logistic Regression with a Neural Network mindset v3.ipynb
navicester/deeplearning.ai-Assignments
Run the following cell to train your model.
d = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 2000, learning_rate = 0.005, print_cost = True)
_____no_output_____
MIT
01.Neural-networks-Deep-learning/Week2/Logistic Regression as a Neural Network/Logistic Regression with a Neural Network mindset v3.ipynb
navicester/deeplearning.ai-Assignments
**Expected Output**: **Train Accuracy** 99.04306220095694 % **Test Accuracy** 70.0 % **Comment**: Training accuracy is close to 100%. This is a good sanity check: your model is working and has high enough capacity to fit the training data. Test accuracy is 70%. It is actually not bad for this simple model, given the small dataset we used and that logistic regression is a linear classifier. But no worries, you'll build an even better classifier next week!Also, you see that the model is clearly overfitting the training data. Later in this specialization you will learn how to reduce overfitting, for example by using regularization. Using the code below (and changing the `index` variable) you can look at predictions on pictures of the test set.
# Example of a picture that was wrongly classified. index = 1 plt.imshow(test_set_x[:,index].reshape((num_px, num_px, 3))) print ("y = " + str(test_set_y[0,index]) + ", you predicted that it is a \"" + classes[d["Y_prediction_test"][0,index]].decode("utf-8") + "\" picture.")
_____no_output_____
MIT
01.Neural-networks-Deep-learning/Week2/Logistic Regression as a Neural Network/Logistic Regression with a Neural Network mindset v3.ipynb
navicester/deeplearning.ai-Assignments
Let's also plot the cost function and the gradients.
# Plot learning curve (with costs) costs = np.squeeze(d['costs']) plt.plot(costs) plt.ylabel('cost') plt.xlabel('iterations (per hundreds)') plt.title("Learning rate =" + str(d["learning_rate"])) plt.show()
_____no_output_____
MIT
01.Neural-networks-Deep-learning/Week2/Logistic Regression as a Neural Network/Logistic Regression with a Neural Network mindset v3.ipynb
navicester/deeplearning.ai-Assignments
**Interpretation**:You can see the cost decreasing. It shows that the parameters are being learned. However, you see that you could train the model even more on the training set. Try to increase the number of iterations in the cell above and rerun the cells. You might see that the training set accuracy goes up, but the test set accuracy goes down. This is called overfitting. 6 - Further analysis (optional/ungraded exercise) Congratulations on building your first image classification model. Let's analyze it further, and examine possible choices for the learning rate $\alpha$. Choice of learning rate **Reminder**:In order for Gradient Descent to work you must choose the learning rate wisely. The learning rate $\alpha$ determines how rapidly we update the parameters. If the learning rate is too large we may "overshoot" the optimal value. Similarly, if it is too small we will need too many iterations to converge to the best values. That's why it is crucial to use a well-tuned learning rate.Let's compare the learning curve of our model with several choices of learning rates. Run the cell below. This should take about 1 minute. Feel free also to try different values than the three we have initialized the `learning_rates` variable to contain, and see what happens.
learning_rates = [0.01, 0.001, 0.0001] models = {} for i in learning_rates: print ("learning rate is: " + str(i)) models[str(i)] = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 1500, learning_rate = i, print_cost = False) print ('\n' + "-------------------------------------------------------" + '\n') for i in learning_rates: plt.plot(np.squeeze(models[str(i)]["costs"]), label= str(models[str(i)]["learning_rate"])) plt.ylabel('cost') plt.xlabel('iterations') legend = plt.legend(loc='upper center', shadow=True) frame = legend.get_frame() frame.set_facecolor('0.90') plt.show()
_____no_output_____
MIT
01.Neural-networks-Deep-learning/Week2/Logistic Regression as a Neural Network/Logistic Regression with a Neural Network mindset v3.ipynb
navicester/deeplearning.ai-Assignments
**Interpretation**: - Different learning rates give different costs and thus different predictions results.- If the learning rate is too large (0.01), the cost may oscillate up and down. It may even diverge (though in this example, using 0.01 still eventually ends up at a good value for the cost). - A lower cost doesn't mean a better model. You have to check if there is possibly overfitting. It happens when the training accuracy is a lot higher than the test accuracy.- In deep learning, we usually recommend that you: - Choose the learning rate that better minimizes the cost function. - If your model overfits, use other techniques to reduce overfitting. (We'll talk about this in later videos.) 7 - Test with your own image (optional/ungraded exercise) Congratulations on finishing this assignment. You can use your own image and see the output of your model. To do that: 1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub. 2. Add your image to this Jupyter Notebook's directory, in the "images" folder 3. Change your image's name in the following code 4. Run the code and check if the algorithm is right (1 = cat, 0 = non-cat)!
## START CODE HERE ## (PUT YOUR IMAGE NAME) my_image = "my_image.jpg" # change this to the name of your image file ## END CODE HERE ## # We preprocess the image to fit your algorithm. fname = "images/" + my_image image = np.array(ndimage.imread(fname, flatten=False)) my_image = scipy.misc.imresize(image, size=(num_px,num_px)).reshape((1, num_px*num_px*3)).T my_predicted_image = predict(d["w"], d["b"], my_image) plt.imshow(image) print("y = " + str(np.squeeze(my_predicted_image)) + ", your algorithm predicts a \"" + classes[int(np.squeeze(my_predicted_image)),].decode("utf-8") + "\" picture.")
_____no_output_____
MIT
01.Neural-networks-Deep-learning/Week2/Logistic Regression as a Neural Network/Logistic Regression with a Neural Network mindset v3.ipynb
navicester/deeplearning.ai-Assignments
SLU10 - Metrics for regression: Example NotebookIn this notebook [some regression validation metrics offered by scikit-learn](http://scikit-learn.org/stable/modules/model_evaluation.htmlcommon-cases-predefined-values) are presented.
import numpy as np import pandas as pd from sklearn.datasets import load_boston from sklearn.linear_model import LinearRegression # some scikit-learn regression validation metrics from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score np.random.seed(60)
_____no_output_____
MIT
S01 - Bootcamp and Binary Classification/SLU10 - Metrics for Regression/Example Notebook.ipynb
LDSSA/batch4-students
Load DataLoad the Boston house-prices dataset, fit a Linear Regression, and make predictions on the dataset (used to create the model).
data = load_boston() x = pd.DataFrame(data['data'], columns=data['feature_names']) y = pd.Series(data['target']) lr = LinearRegression() lr.fit(x, y) y_hat = lr.predict(x)
_____no_output_____
MIT
S01 - Bootcamp and Binary Classification/SLU10 - Metrics for Regression/Example Notebook.ipynb
LDSSA/batch4-students
Metrics with scikit-learnBelow follows a list of metrics made available by scikit-learn and their usage: Mean Squared Error$$MSE = \frac{1}{N} \sum_{n=1}^N (y_n - \hat{y}_n)^2$$
mean_squared_error(y, y_hat)
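Since the formula above is just the average of the squared residuals, a quick NumPy cross-check (a sketch using only the `y` and `y_hat` already defined in this notebook) should reproduce the scikit-learn value:
```python
# Manual MSE as a sanity check; should match mean_squared_error(y, y_hat)
mse_manual = np.mean((y - y_hat) ** 2)
print(mse_manual)
```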
_____no_output_____
MIT
S01 - Bootcamp and Binary Classification/SLU10 - Metrics for Regression/Example Notebook.ipynb
LDSSA/batch4-students
Root Mean Squared Error$$RMSE = \sqrt{MSE}$$
np.sqrt(mean_squared_error(y, y_hat))
_____no_output_____
MIT
S01 - Bootcamp and Binary Classification/SLU10 - Metrics for Regression/Example Notebook.ipynb
LDSSA/batch4-students
Mean Absolute Error$$MAE = \frac{1}{N} \sum_{n=1}^N \left| y_n - \hat{y}_n \right|$$
mean_absolute_error(y, y_hat)
_____no_output_____
MIT
S01 - Bootcamp and Binary Classification/SLU10 - Metrics for Regression/Example Notebook.ipynb
LDSSA/batch4-students
RΒ² score$$\bar{y} = \frac{1}{N} \sum_{n=1}^N y_n$$$$RΒ² = 1 - \frac{MSE(y, \hat{y})}{MSE(y, \bar{y})} = 1 - \frac{\frac{1}{N} \sum_{n=1}^N (y_n - \hat{y}_n)^2}{\frac{1}{N} \sum_{n=1}^N (y_n - \bar{y})^2}= 1 - \frac{\sum_{n=1}^N (y_n - \hat{y}_n)^2}{\sum_{n=1}^N (y_n - \bar{y})^2}$$
r2_score(y, y_hat)
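As a small illustrative check (using only the `y` and `y_hat` from earlier cells), the same value can be computed directly from the definition above:
```python
# Manual R^2 as a sanity check; should match r2_score(y, y_hat)
ss_res = np.sum((y - y_hat) ** 2)        # sum of squared residuals
ss_tot = np.sum((y - y.mean()) ** 2)     # total sum of squares around the mean
print(1 - ss_res / ss_tot)
```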
_____no_output_____
MIT
S01 - Bootcamp and Binary Classification/SLU10 - Metrics for Regression/Example Notebook.ipynb
LDSSA/batch4-students
Planar data classification with one hidden layerWelcome to your week 3 programming assignment! It's time to build your first neural network, which will have one hidden layer. Now, you'll notice a big difference between this model and the one you implemented previously using logistic regression.By the end of this assignment, you'll be able to:- Implement a 2-class classification neural network with a single hidden layer- Use units with a non-linear activation function, such as tanh- Compute the cross entropy loss- Implement forward and backward propagation Table of Contents- [1 - Packages](1)- [2 - Load the Dataset](2) - [Exercise 1](ex-1)- [3 - Simple Logistic Regression](3)- [4 - Neural Network model](4) - [4.1 - Defining the neural network structure](4-1) - [Exercise 2 - layer_sizes](ex-2) - [4.2 - Initialize the model's parameters](4-2) - [Exercise 3 - initialize_parameters](ex-3) - [4.3 - The Loop](4-3) - [Exercise 4 - forward_propagation](ex-4) - [4.4 - Compute the Cost](4-4) - [Exercise 5 - compute_cost](ex-5) - [4.5 - Implement Backpropagation](4-5) - [Exercise 6 - backward_propagation](ex-6) - [4.6 - Update Parameters](4-6) - [Exercise 7 - update_parameters](ex-7) - [4.7 - Integration](4-7) - [Exercise 8 - nn_model](ex-8)- [5 - Test the Model](5) - [5.1 - Predict](5-1) - [Exercise 9 - predict](ex-9) - [5.2 - Test the Model on the Planar Dataset](5-2)- [6 - Tuning hidden layer size (optional/ungraded exercise)](6)- [7- Performance on other datasets](7) 1 - PackagesFirst import all the packages that you will need during this assignment.- [numpy](https://www.numpy.org/) is the fundamental package for scientific computing with Python.- [sklearn](http://scikit-learn.org/stable/) provides simple and efficient tools for data mining and data analysis. - [matplotlib](http://matplotlib.org) is a library for plotting graphs in Python.- testCases provides some test examples to assess the correctness of your functions- planar_utils provide various useful functions used in this assignment
# Package imports import numpy as np import copy import matplotlib.pyplot as plt from testCases_v2 import * from public_tests import * import sklearn import sklearn.datasets import sklearn.linear_model from planar_utils import plot_decision_boundary, sigmoid, load_planar_dataset, load_extra_datasets %matplotlib inline np.random.seed(2) # set a seed so that the results are consistent %load_ext autoreload %autoreload 2
_____no_output_____
MIT
Neural Networks and Deep Learning/Week3/Planar_data_classification_with_one_hidden_layer.ipynb
sounok1234/Deeplearning_Projects