# Visualizing Logistic Regression
```
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('data/', one_hot=True)
trainimg = mnist.train.images
trainlabel = mnist.train.labels
testimg = mnist.test.images
testlabel = mnist.test.labels
```
# Define the graph
```
# Parameters of Logistic Regression
learning_rate = 0.01
training_epochs = 20
batch_size = 100
display_step = 5
# Create Graph for Logistic Regression
x = tf.placeholder("float", [None, 784], name="INPUT_x")
y = tf.placeholder("float", [None, 10], name="OUTPUT_y")
W = tf.Variable(tf.zeros([784, 10]), name="WEIGHT_W")
b = tf.Variable(tf.zeros([10]), name="BIAS_b")
# Activation, Cost, and Optimizing functions
pred = tf.nn.softmax(tf.matmul(x, W) + b) # Softmax
cost = tf.reduce_mean(-tf.reduce_sum(y*tf.log(pred), reduction_indices=1))
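# Note: applying tf.log to a softmax output can underflow; in practice
# tf.nn.softmax_cross_entropy_with_logits is the numerically stabler choice.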
optm = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
corr = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))  # 1 where prediction matches label
accr = tf.reduce_mean(tf.cast(corr, "float"))  # fraction of correct predictions
init = tf.global_variables_initializer()  # initialize_all_variables() is deprecated
```
# Launch the graph
```
sess = tf.Session()
sess.run(init)
```
# Summary writer
```
summary_path = '/tmp/tf_logs/logistic_regression_mnist'
summary_writer = tf.summary.FileWriter(summary_path, graph=sess.graph)
print ("Summary writer ready")
```
# Run
```
print ("Summary writer ready")
for epoch in range(training_epochs):
    sum_cost = 0.
    num_batch = int(mnist.train.num_examples/batch_size)
    # Loop over all batches
    for i in range(num_batch):
        randidx = np.random.randint(trainimg.shape[0], size=batch_size)
        batch_xs = trainimg[randidx, :]
        batch_ys = trainlabel[randidx, :]
        # Fit training using batch data
        feeds = {x: batch_xs, y: batch_ys}
        sess.run(optm, feed_dict=feeds)
        # Compute average loss
        sum_cost += sess.run(cost, feed_dict=feeds)
    avg_cost = sum_cost / num_batch
    # Display logs per epoch step
    if epoch % display_step == 0:
        train_acc = sess.run(accr, feed_dict={x: batch_xs, y: batch_ys})
        print("Epoch: %03d/%03d cost: %.9f train_acc: %.3f"
              % (epoch, training_epochs, avg_cost, train_acc))
print("Optimization Finished!")
# Test model
test_acc = sess.run(accr, feed_dict={x: testimg, y: testlabel})
print("Test Accuracy: %.3f" % test_acc)
```
### Run the command line
```
tensorboard --logdir=/tmp/tf_logs/logistic_regression_mnist
```
### Open http://localhost:6006/ in your web browser
<img src="images/tsboard/logistic_regression_mnist.png">
# Final Project Submission
* Student name: `Reno Vieira Neto`
* Student pace: `self paced`
* Scheduled project review date/time: `Fri Oct 15, 2021 3pm – 3:45pm (PDT)`
* Instructor name: `James Irving`
* Blog post URL: https://renoneto.github.io/using_streamlit
#### This project originated the [following app](https://movie-recommender-reno.herokuapp.com/). I'd recommend playing with the app and then coming back here to understand how the model behind it works.
# Table of Contents <a class="anchor" id="toc"></a>
- **[Business Case and Goals](#bc)**
- **[The Dataset](#td)**
- **[Dataset Exploration and Cleaning](#dec)**
- **[No. of Movies by Genre](#mg)**
- **[No. of Ratings per Year](#ry)**
- **[No. of Users rating movies per Year](#urm)**
- **[Recommender System](#rs)**
- **[Create Popularity Model](#pop)**
- **[Collaborative-Based Filtering](#colab)**
- **[Hyperparameter Tuning](#grid)**
- **[Try different models](#dif)**
- **[Model Evaluation](#eval)**
- **[Create function to take user input and give recommendations (+ hint of content-based attribute)](#func)**
- **[Conclusion](#conclusion)**
- **[Export files to create app](#lit)**
- **[Improvements](#improvements)**
# Business Case and Goals <a class="anchor" id="bc"></a>
In this project, I'm creating a movie recommender using the [MovieLens dataset](https://grouplens.org/datasets/movielens/) to build a model that provides top 5 movie recommendations to a user, based on their ratings of other movies. I'm going to be addressing the cold start problem as well by being able to deal with users with no movie ratings.
# The Dataset <a class="anchor" id="td"></a>
The MovieLens dataset is a "classic" recommendation system dataset used in numerous academic papers and machine learning proofs-of-concept.
[You can find more about it here](https://grouplens.org/datasets/movielens/)
# Dataset Exploration and Cleaning <a class="anchor" id="dec"></a>
## Import necessary packages
```
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import re
import time
from surprise import Reader, Dataset, dump
from surprise.model_selection import cross_validate, GridSearchCV
from surprise.prediction_algorithms import KNNBasic, KNNBaseline, SVD, SVDpp
from surprise.accuracy import rmse
from sklearn.model_selection import train_test_split
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
# Import datasets
df_movies = pd.read_csv('./app/data/movies.csv')
df_ratings = pd.read_csv('./app/data/ratings.csv')
# Show first rows
display(df_movies.head())
display(df_ratings.head())
```
#### Notes
- Breakdown genres into different columns (one-hot encoding)
- `title` seems to have the release year of the movie. It might be interesting to have title and year in different columns.
```
# Check for nulls and data types
display(df_movies.info())
display(df_ratings.info())
```
#### Notes
- No nulls
- Might need to convert timestamps to `datetime`
- There are 9742 movies in the dataset
- 100836 ratings
### `df_movies`
First, I'm going to start exploring the movies dataset to understand what I'm dealing with.
```
# Create column with array of genres and calculate the Number of Genres per movie
df_movies['genres_array'] = df_movies['genres'].str.split('|')
# Flattened genres
stacked_genres = df_movies['genres_array'].apply(pd.Series).stack(level=0).reset_index()
stacked_genres.columns = ['index', 'level_1', 'genre']
# Combine original dataframe with flattened genres using the index
df_movies_new = pd.merge(df_movies, stacked_genres, how='left', left_index=True, right_on=['index'])
df_movies_new = df_movies_new[['movieId', 'title', 'genre']]
# One-hot Encoding of Genre column
one_hot = pd.get_dummies(df_movies_new['genre'])
# Get list of genres (it's going to be useful soon)
list_of_genres = list(one_hot.columns)
# Combine the new dataframe with the one-hot encoded dataframe
df_movies_new = pd.merge(df_movies_new, one_hot, left_index=True, right_index=True)
df_movies_new = df_movies_new.drop('genre', axis=1)
# Use groupby to have one row per movie
df_movies_new = df_movies_new.groupby(['movieId', 'title']).sum()[list_of_genres].reset_index()
# Split year and title
df_movies_new['release_year'] = df_movies_new.apply(lambda x: x['title'].strip()[-5:][:-1], axis=1)
df_movies_new['release_year'] = df_movies_new.apply(lambda x:
                                                    x['release_year']
                                                    if len(re.findall("[0-9]{4}", x['release_year'])) == 1
                                                    else np.nan, axis=1)
# Note: `x != np.nan` is always True, so use pd.notnull to test for a real year
df_movies_new['title'] = df_movies_new.apply(lambda x:
                                             x['title'][:-6].strip()
                                             if pd.notnull(x['release_year'])
                                             else x['title'], axis=1)
```
### No. of Movies by Genre <a class="anchor" id="mg"></a>
**[Go back to Table of Contents](#toc)**
```
# Create empty dictionary to store the no of movies by genre
no_of_movies_by_genre = {}
for genre in list_of_genres:
no_of_movies = df_movies_new[genre].sum()
no_of_movies_by_genre[genre] = no_of_movies
# Transform that into a dataframe
to_plot = pd.DataFrame.from_dict(no_of_movies_by_genre, orient='index').reset_index()
to_plot.columns = ['genre', 'no_of_movies']
to_plot = to_plot.sort_values('no_of_movies', ascending=False).reset_index(drop=True)
# Plot
plt.figure(figsize=(10,8))
sns.barplot(x="no_of_movies", y="genre", data=to_plot)
plt.title('No of Movies by Genre', size=14)
plt.xlabel('No. of Movies', size=13)
plt.ylabel(None)
plt.show()
```
#### Note
- We are dealing with an unbalanced dataset in terms of genres: there are far more Drama and Comedy movies than movies of other genres. The consequence for the model is that certain genres will have a smaller set of options to choose from.
### `df_ratings`
### No. of Ratings per Year <a class="anchor" id="ry"></a>
I wonder how many ratings were created per year.
**[Go back to Table of Contents](#toc)**
```
# Convert timestamp column to datetime
df_ratings['datetime'] = pd.to_datetime(df_ratings['timestamp'], unit='s')
df_ratings['year'] = df_ratings['datetime'].dt.year
# Create plot with No. of ratings per year
to_plot = df_ratings.groupby('year').count()['rating'].reset_index()
plt.figure(figsize=(17,5))
sns.barplot(x='year', y='rating', data=to_plot, color='blue', alpha=0.5)
plt.title('No of Ratings per Year')
plt.show()
```
**Note**
- I don't see any trends. It's great to see that the last 4 years of the dataset had almost the same number of ratings.
### No. of Users rating movies per Year <a class="anchor" id="urm"></a>
**[Go back to Table of Contents](#toc)**
```
# Create Plot with No. of Unique Users giving ratings
to_plot = df_ratings.groupby('year').nunique()['userId'].reset_index()
plt.figure(figsize=(17,5))
sns.barplot(x='year', y='userId', data=to_plot, color='blue', alpha=0.5)
plt.title('No. of Users rating movies per Year')
plt.show()
```
**Note**
- Not many users rate movies: around 40 per year.
# Recommender System <a class="anchor" id="rs"></a>
## Create Popularity Model <a class="anchor" id="pop"></a>
The first model is going to be very simple: a popularity model. Basically, I'm going to rank movies by popularity. However, I need a way to scale the ratings, because a movie with 100 ratings averaging 4.5 and another with only 2 ratings averaging 4.75 are completely different. I'd argue that the first movie actually deserves a higher rating score than the second one, since more users have rated it with a high score.
To address that problem I'm using the IMDB's Weighted Rating Method I found [online](https://math.stackexchange.com/questions/169032/understanding-the-imdb-weighted-rating-function-for-usage-on-my-own-website) that does a good job at weighting the ratings.
#### Calculation
![](https://image.ibb.co/jYWZp9/wr.png)
where,
* v is the number of votes for the movie;
* m is the minimum votes required to be listed in the chart;
* R is the average rating of the movie; And
* C is the mean vote across the whole report
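As a quick sanity check, here is a small, hypothetical calculation for the two movies from the example above (the threshold `m = 47` matches the choice made later in this notebook; the global mean `C = 3.26` is an assumed value):
```
# Hypothetical weighted-rating comparison (m and C are assumed values)
m, C = 47, 3.26

def wr(v, R):
    return (v / (v + m)) * R + (m / (m + v)) * C

print(round(wr(100, 4.50), 2))  # ~4.10: many ratings keep the score close to R
print(round(wr(2, 4.75), 2))    # ~3.32: few ratings pull the score toward C
```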
#### C: Calculate mean vote across the whole dataset
```
# Calculate Mean and Count the No. of Ratings to a given movie
mean_ratings_df = df_ratings.groupby('movieId').agg(avg_rating=('rating', 'mean'),
                                                    count_rating=('rating', 'count')).reset_index()
# Calculate the Overall Average Rating
mean_ratings_df['overall_avg_rating'] = mean_ratings_df['avg_rating'].mean()
mean_ratings_df.head()
```
#### m: Define the minimum number of ratings required to be listed
To define the minimum number of votes I'm going to look at the distribution of No. of Ratings by Movies.
```
# Plot
plt.figure(figsize=(15,5))
sns.boxplot(x=mean_ratings_df['count_rating'])
plt.title('Boxplot of No. of Ratings given to movies')
plt.show()
```
Not super helpful. I'm going to print different quantiles instead.
```
# Calculate different quantiles
n_of_users = df_ratings['userId'].nunique()
n_of_movies = len(mean_ratings_df)
quantiles_list = []
for n in range(10, 100, 5):
    q = mean_ratings_df['count_rating'].quantile(n/100)
    n_of_selected_movies = len(mean_ratings_df[mean_ratings_df['count_rating'] >= q])
    quantiles_list.append([n, q, n_of_selected_movies])
pd.DataFrame(quantiles_list, columns=['quantile', 'quantile_value', 'number_of_movies'])
```
Before deciding the Minimum No. of Ratings, I'm going to look at the number of movies users have rated.
```
df_ratings.groupby('userId').count()['movieId'].describe()
```
The Median number of movies a user has rated is 70 movies and the 75th quantile is 168 movies.
Therefore, I'm comfortable moving forward with a Minimum Number of Ratings (or `m`) of 47, since that leaves 491 movies, which is more than most users have rated.
> **Disclaimer**: I also tried minimums of 17 and 27 ratings; however, the model produced weird recommendations. So I'm picking 47 after iteratively trying 17 and 27.
#### m = 47
#### Create function to apply to the dataset
```
def weighted_rating(df):
    """
    Calculates the IMDB's Weighted Rating using the following formula:
        (v / (v+m) * R) + (m / (m+v) * C)
    where:
    - v is the number of votes for the movie;
    - m is the minimum votes required to be listed in the chart;
    - R is the average rating of the movie; And
    - C is the mean vote across the whole report
    """
    v = df['count_rating']
    m = df['minimum_no_of_ratings']
    R = df['avg_rating']
    C = df['overall_avg_rating']
    return (v / (v+m) * R) + (m / (m+v) * C)
# Create Copy
popularity_df = mean_ratings_df.copy()
# Calculate the 95th quantile and the weighted rating
popularity_df['minimum_no_of_ratings'] = popularity_df['count_rating'].quantile(0.95)
popularity_df['weighted_rating'] = popularity_df.apply(weighted_rating, axis=1)
```
I'm going to look at the top 10 movies with the highest ratings.
```
# Grab the top 10 ids
top_ten_ids = popularity_df.sort_values('weighted_rating', ascending=False)['movieId'][:10].values
# Print them
for idx, movie_id in enumerate(top_ten_ids):
    print((idx + 1), df_movies[df_movies['movieId'] == movie_id]['title'].item())
```
Not too bad, I agree with these being the top 10. _However, that's very personal._
**[Go back to Table of Contents](#toc)**
## Collaborative-Based Filtering <a class="anchor" id="colab"></a>
Collaborative Filtering is based on the idea that users similar to me can be used to predict how much I will like a particular product or service that those users have used or experienced but I have not.
The strategy is to use different models and compare their performances. The metric to optimize for is RMSE. However, most likely, the best model will be the Singular Value Decomposition (SVD) or SVD++ based on what I have seen in different places. Nonetheless, I think it's worth trying different models rather than simply trying only these two models.
Moreover, I'm also considering the fit time, otherwise, I might end up with a model that would not be _deployable_.
```
# Create a new dataframe to train the model.
df_ratings_clean = df_ratings[['userId', 'movieId', 'rating']]
```
#### Reduce dataset to decrease runtime
The dataset is too big, and training the models on all of it would take too long (_I've learned that the hard way_). Therefore, I'm sampling roughly 50% of it and running GridSearchCV on only half of that sample to identify the best hyperparameters for the SVD model. Once I identify the best hyperparameters, I'll train the model using the whole dataset.
```
# Randomly pick 50,000 datapoints from the dataset
sample_df = df_ratings_clean.sample(n=50000, random_state=111)
# Split the sample data in two so I can test the best hyperparameters later on
train_df, test_df = train_test_split(sample_df, train_size=.50, random_state=111)
# Create reader and dataset objects
reader = Reader()
traindata = Dataset.load_from_df(train_df, reader)
testdata = Dataset.load_from_df(test_df, reader)
```
### GridSearchCV - Hyperparameter Tuning of SVD <a class="anchor" id="grid"></a>
**[Go back to Table of Contents](#toc)**
```
# Perform a gridsearch with SVD
param_grid = {'n_factors': [10, 15, 20],
              'n_epochs': [10, 20],
              'lr_all': [0.008, 0.012],
              'reg_all': [0.06, 0.1],
              'random_state': [111]}
gs_model = GridSearchCV(SVD, param_grid=param_grid, n_jobs=-1, joblib_verbose=False)
%time gs_model.fit(traindata)
print('The best parameters are:')
gs_model.best_params['rmse']
```
### GridSearchCV Metrics Analysis
Let's analyze the metrics of each run and pick the best parameters given the RMSE and Fit Time. Sometimes simply choosing the best parameters is not the best option since the only goal of the Grid is to minimize RMSE. We should also consider the Fit Time if we are planning on having this model as a service running online.
```
# Convert results from the GridSearchCV to dataframes
df_params = pd.DataFrame(gs_model.cv_results['params'])
df_rmse = pd.DataFrame(gs_model.cv_results['mean_test_rmse'], columns=['mean_test_rmse'])
df_time = pd.DataFrame(gs_model.cv_results['mean_fit_time'], columns=['mean_fit_time'])
df_results = pd.concat([df_params, df_rmse, df_time], axis=1)
```
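Since the grid's only objective is RMSE, one hedged way to surface the trade-off (a hypothetical helper, not part of the original analysis) is to rank runs by RMSE with a small fit-time penalty:
```
# Hypothetical trade-off score: lower is better; 0.01 is an arbitrary penalty weight
df_results['tradeoff_score'] = df_results['mean_test_rmse'] + 0.01 * df_results['mean_fit_time']
df_results.sort_values('tradeoff_score').head()
```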
Create a function to plot metrics so we can see the impact of hyperparameters on RMSE and Fit Time.
```
def compare_metrics_chart(df, column_a, column_b):
    """
    Function to plot the comparison of two metrics in a GridSearchCV run.
    Args:
        df(pd.DataFrame): Pandas Dataframe with GridSearchCV metrics.
        column_a(str): First metric
        column_b(str): Second metric
    """
    # Create Figure
    fig = plt.figure(figsize=(10,5))
    # Create first axis
    ax = fig.add_subplot(111)
    # Plot Column A
    sns.lineplot(data=df[column_a], color="g", ax=ax)
    # Set Y Label
    ax.set_ylabel(column_a, color='g', size=10)
    # Create axis 2
    ax2 = plt.twinx()
    # Plot Column B
    sns.lineplot(data=df[column_b], color="b", ax=ax2)
    # Set Y Label
    ax2.set_ylabel(column_b, color='b', size=10)
    # Change the format of the title
    column_a_title = column_a.replace('_', ' ').title()
    column_b_title = column_b.replace('_', ' ').title()
    plt.title(column_a_title + ' vs. ' + column_b_title)
    plt.show();
```
#### Number of Factors
```
compare_metrics_chart(df_results, 'n_factors', 'mean_test_rmse')
compare_metrics_chart(df_results, 'n_factors', 'mean_fit_time')
```
The lowest RMSE is reached regardless of the number of factors. It's arguable that more factors should decrease RMSE, since that's the expectation; however, it comes at a cost: fit time increases. Since the data shows we can achieve a low RMSE with only `10` factors, I'm going to choose that.
#### Number of Epochs
```
compare_metrics_chart(df_results, 'n_epochs', 'mean_test_rmse')
compare_metrics_chart(df_results, 'n_epochs', 'mean_fit_time')
```
Increasing the number of epochs reduces RMSE, but fit time increases by 50%-80%, which is a bigger change than the RMSE improvement. Even so, I'll go with `20` epochs.
#### Regularization Term
```
compare_metrics_chart(df_results, 'reg_all', 'mean_test_rmse')
compare_metrics_chart(df_results, 'reg_all', 'mean_fit_time')
```
A low regularization term achieves better results with no impact on fit time.
#### Learning Rate
```
compare_metrics_chart(df_results, 'lr_all', 'mean_test_rmse')
compare_metrics_chart(df_results, 'lr_all', 'mean_fit_time')
```
A high learning rate has a positive impact on RMSE with no impact on fit time.
#### Final hyperparameters:
- `n_factors`: 15
- `n_epochs`: 20
- `lr_all`: 0.012
- `reg_all`: 0.06
**[Go back to Table of Contents](#toc)**
### Try different models <a class="anchor" id="dif"></a>
#### Create a function to easily test different models
```
def full_model_training_evaluation(model, model_name, traindata, testdata):
    """
    Train and test different models and collect fit time and train/test RMSE.
    Args:
        model(surprise.prediction_algorithms): Model instance from the surprise package.
        model_name(str): Model name created by the user. A way to identify the model.
        traindata(surprise.dataset.DatasetAutoFolds): Train dataset
        testdata(surprise.dataset.DatasetAutoFolds): Test dataset
    Returns:
        results(dict): A dictionary with the model name, fit time and RMSEs (train/test).
    """
    # Store results in a dictionary
    results = {}
    results['model_name'] = model_name
    print('Training', model_name, 'model')
    # Fit on train data
    start_time = time.time()
    model.fit(traindata.build_full_trainset())
    end_time = time.time()
    total_time = round(end_time - start_time, 2)
    results['fit_time_in_seconds'] = total_time
    # Get RMSE on train data
    predictions_train = model.test(traindata.build_full_trainset().build_testset())
    rmse_train = rmse(predictions_train, verbose=False).round(2)
    results['rmse_train'] = rmse_train
    # Get RMSE on test data
    predictions_test = model.test(testdata.build_full_trainset().build_testset())
    rmse_test = rmse(predictions_test, verbose=False).round(2)
    results['rmse_test'] = rmse_test
    return results
```
Instantiate different models
```
# Create SVD model with the best hyperparameters
svd = SVD(n_factors=15, n_epochs=20, lr_all=0.012, reg_all=0.06, random_state=111)
# SVD++: Use the same hyperparameters
svd_pp = SVDpp(n_factors=15, n_epochs=20, lr_all=0.012, reg_all=0.06, random_state=111)
# Different instances of KNN Basic models with different similarity options
knn_basic_pearson_baseline = KNNBasic(sim_options={'name': 'pearson_baseline', 'user_based': True}, verbose=False)
knn_basic_pearson = KNNBasic(sim_options={'name': 'pearson', 'user_based': True}, verbose=False)
knn_basic_cosine = KNNBasic(sim_options={'name': 'cosine', 'user_based': True}, verbose=False)
# Different instances of KNN Baseline models with different similarity options
knn_base_pearson_baseline = KNNBaseline(sim_options={'name': 'pearson_baseline', 'user_based': True}, verbose=False)
knn_base_pearson = KNNBaseline(sim_options={'name': 'pearson', 'user_based': True}, verbose=False)
knn_base_cosine = KNNBaseline(sim_options={'name': 'cosine', 'user_based': True}, verbose=False)
# Put all models in a dictionary
models = {'SVD': svd,
          'SVD++': svd_pp,
          'KNNBasic Cosine': knn_basic_cosine,
          'KNNBasic Pearson': knn_basic_pearson,
          'KNNBasic Pearson Baseline': knn_basic_pearson_baseline,
          'KNNBaseline Cosine': knn_base_cosine,
          'KNNBaseline Pearson': knn_base_pearson,
          'KNNBaseline Pearson Baseline': knn_base_pearson_baseline}
# Loop through different models and evaluate them
model_results = []
for model_name, model_instance in models.items():
    results = full_model_training_evaluation(model_instance, model_name, traindata, testdata)
    model_results.append(results)
```
**[Go back to Table of Contents](#toc)**
### Model Evaluation <a class="anchor" id="eval"></a>
```
pd.DataFrame(model_results)
```
#### Notes:
- **Fit Time**: `SVD++` is by far the worst model. All KNN models have roughly the same fit time, which is 4 times faster than `SVD`. However, they are all very fast relative to the `SVD++` model.
- **RMSE Train**: The KNN models using `pearson_baseline` are overfitting the train set. When comparing the two Singular Value Decomposition models, `SVD++` performs better than `SVD`.
- **RMSE Test**: Both Singular Value Decomposition models had the same performance numbers and performed better than all KNN models.
### Conclusion
I'll move forward with the `SVD` model given the fit time and RMSE scores.
**[Go back to Table of Contents](#toc)**
## Create function to take user input and give recommendations (+ hint of content-based attribute) <a class="anchor" id="func"></a>
Finally, I'm going to create a function that takes a genre and ratings from a user who has no ratings in the dataset. In the process, I'm going to focus my recommendations based on the chosen genre (content-based part of the recommendation).
```
# Create list of genres
list_of_genres = stacked_genres['genre'].sort_values().unique()[1:]
# Combine mean ratings and movies details
ratings_movies_df = pd.merge(mean_ratings_df, df_movies, on='movieId')
```
#### Filter the dataset by removing movies with not enough ratings
```
def filtered_dataset(genre):
    """
    Function to filter the dataset given the genre and remove outliers.
    Args:
        genre(str): The genre the user has chosen for recommendations.
    Returns:
        genre_df(pd.DataFrame): Filtered Dataframe with only the chosen genre.
    """
    # Keep only the selected genre
    genre_df = ratings_movies_df[ratings_movies_df['genres'].str.contains(genre)]
    # Calculate the 95th quantile and the weighted rating
    minimum_no_of_ratings = genre_df['count_rating'].quantile(0.95)
    genre_df['minimum_no_of_ratings'] = minimum_no_of_ratings
    genre_df['weighted_rating'] = genre_df.apply(weighted_rating, axis=1)
    # Remove movies with not enough ratings
    genre_df = genre_df[genre_df['count_rating'] >= minimum_no_of_ratings]
    # Sort by weighted rating so the highest-rated movies are on top
    genre_df = genre_df.sort_values('weighted_rating', ascending=False)
    genre_df = genre_df.reset_index(drop=True)
    # Keep only the relevant columns
    genre_df = genre_df[['movieId', 'title',
                         'genres', 'count_rating',
                         'minimum_no_of_ratings', 'weighted_rating']]
    return genre_df
```
#### First, create a function to let the user rate five movies
```
def rate_movie(n_of_movies=5, default_user_id=9999999):
    """
    Function to request a new user to review some movies.
    Args:
        n_of_movies(int): Number of ratings the new user will have to give.
        default_user_id(int): Random user id given to the user to be able to reference it later.
    Returns:
        new_ratings_df(pd.DataFrame): Pandas Dataframe with the new ratings
        favorite_genre(str): The user's favorite genre
        df_movies_popularity(pd.DataFrame): Filtered popularity Dataframe for the chosen genre
    """
    # Print a list of the available genres
    print('List of Available Genres: ', ", ".join(list_of_genres))
    # Gather input from user on which genre will be analyzed
    favorite_genre = input('Choose one genre from the following (case-sensitive): ')
    # Filter the dataset
    df_movies_popularity = filtered_dataset(favorite_genre)
    # Keep only movies that contain the chosen genre
    favorite_genre_movies = df_movies_popularity[df_movies_popularity['genres'].str.contains(favorite_genre)]
    # Shuffle the 20 highest-rated movies and keep n_of_movies of them
    favorite_genre_movies = favorite_genre_movies.iloc[:20].sample(frac=1, random_state=111)
    favorite_genre_movies = favorite_genre_movies.iloc[:n_of_movies]
    print('')
    # Created to store ratings from user
    ratings_list = []
    # Loop through dataframe with movies to be rated
    for row in favorite_genre_movies.iterrows():
        # Extract Title and ID
        movie_title = row[1]['title']
        movie_id = row[1]['movieId']
        print('Movie to rate: ', movie_title)
        # Gather rating from user
        rating = input('How do you rate this movie on a scale of 1-5, press n if you have not seen it:\n')
        # Deal with users not typing a number and create a new variable with the integer
        try:
            rating_int = int(rating)
        except ValueError:
            rating_int = -1  # invalid marker so the loop below keeps asking
        # While the rating is not valid, keep asking the user
        while (rating != 'n') and not (1 <= rating_int <= 5):
            rating = input('Please rate the movie between 1-5 or n if you have not seen it: \n')
            try:
                rating_int = int(rating)  # re-parse, otherwise the loop never exits
            except ValueError:
                rating_int = -1
        # If the rating is different from 'n' then add it to the list
        if rating != 'n':
            ratings_list.append({'userId': default_user_id,
                                 'movieId': movie_id,
                                 'rating': rating_int})
        print('')
    # Convert to DataFrame
    new_ratings_df = pd.DataFrame(ratings_list)
    return new_ratings_df, favorite_genre, df_movies_popularity
```
#### Create a function to give the recommendations
```
def give_n_recommendations(model, default_user_id=9999999, n_recommendations=5):
    """
    Function to request a new user to review movies and give recommendations based on that.
    Args:
        model(surprise.prediction_algorithms): Model instance from the surprise package.
        default_user_id(int): Random user id given to the user to be able to reference it later.
        n_recommendations(int): Number of recommendations that will be given to the user.
    """
    # Extract ratings from the user
    new_ratings_df, favorite_genre, df_movies_popularity = rate_movie(default_user_id=default_user_id)
    watched_movies_id = new_ratings_df['movieId']
    # Add the new ratings to the original ratings DataFrame
    updated_df = pd.concat([new_ratings_df, df_ratings_clean])
    new_data = Dataset.load_from_df(updated_df, reader)
    new_dataset = new_data.build_full_trainset()
    # Fit on the new dataset
    model.fit(new_dataset)
    # Make predictions for the user
    results = []
    for movie_id in df_movies_popularity['movieId'].unique():
        predicted_score = model.predict(default_user_id, movie_id)[3]
        results.append((movie_id, predicted_score))
    # Order the predictions from highest to lowest rated
    ranked_movies = pd.DataFrame(results, columns=['movieId', 'predicted_score'])
    ranked_movies = ranked_movies[~ranked_movies['movieId'].isin(watched_movies_id)]
    ranked_movies = ranked_movies.sort_values('predicted_score', ascending=False).reset_index(drop=True)
    ranked_movies = pd.merge(ranked_movies, df_movies, on='movieId')
    # ranked_movies = ranked_movies[ranked_movies['genres'].str.contains(favorite_genre)]
    print('The recommendations are the following:')
    if len(ranked_movies) < n_recommendations:
        n_recommendations = len(ranked_movies)
    for row in range(n_recommendations):
        movie_id = ranked_movies.iloc[row]['movieId']
        recommended_title = df_movies[df_movies['movieId'] == movie_id]['title'].item()
        print(f'No. {row+1} is {recommended_title}')
#### Let's test it out!
I'm going to try different genres to see how the model behaves.
#### `Action`
```
give_n_recommendations(svd)
```
#### `Documentary`
```
give_n_recommendations(svd)
```
#### `Crime`
```
give_n_recommendations(svd)
```
#### `Romance`
```
give_n_recommendations(svd)
```
# Conclusion <a class="anchor" id="conclusion"></a>
I'm happy with the results. However, I think the function is a bit limited. I'd like to have the recommender in an app. To do that, I'm going to use Streamlit.
**[Go back to Table of Contents](#toc)**
# Export files to create app <a class="anchor" id="lit"></a>
I'm going to export some files so I can use them in Streamlit.
```
# Export files to use in Streamlit
ratings_movies_df.to_csv('./app/data/movies_by_rating.csv', index=False)
df_ratings_clean.to_csv('./app/data/user_movie_ratings.csv', index=False)
dump.dump('./app/data/svd.pkl', algo=svd)
```
# [Check out the App!](https://movie-recommender-reno.herokuapp.com/)
# Improvements <a class="anchor" id="improvements"></a>
- Use Normalized Discounted Cumulative Gain (NDCG) to evaluate models.
- Develop a Content-Based layer using `tags` and `genres` or even `title`/`year`.
- Sometimes I rate Star Wars with 1 star and the recommender still outputs more Star Wars movies.
**[Go back to Table of Contents](#toc)**
```
#all_slow
#export
from fastai.basics import *
#hide
from nbdev.showdoc import *
#default_exp callback.tensorboard
```
# Tensorboard
> Integration with [tensorboard](https://www.tensorflow.org/tensorboard)
First things first, you need to install tensorboard with
```
pip install tensorboard
```
Then launch tensorboard with
```
tensorboard --logdir=runs
```
in your terminal. You can change the logdir as long as it matches the `log_dir` you pass to `TensorBoardCallback` (default is `runs` in the working directory).
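For example, a minimal sketch of logging to a custom directory (assuming `dls` is already built, as in the test section below; `my_runs` is an arbitrary name):
```
learn = cnn_learner(dls, resnet18, metrics=accuracy)
learn.fit_one_cycle(1, cbs=TensorBoardCallback(log_dir='my_runs'))
# then launch: tensorboard --logdir=my_runs
```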
## Tensorboard Embedding Projector support
> Tensorboard Embedding Projector is currently only supported for image classification
### Export Embeddings during Training
Tensorboard [Embedding Projector](https://www.tensorflow.org/tensorboard/tensorboard_projector_plugin) is supported in `TensorBoardCallback` (set parameter `projector=True`) during training. The validation set embeddings will be written after each epoch.
```
cbs = [TensorBoardCallback(projector=True)]
learn = cnn_learner(dls, resnet18, metrics=accuracy, cbs=cbs)
```
### Export Embeddings for a custom dataset
To write the embeddings for a custom dataset (e.g. after loading a learner), use `TensorBoardProjectorCallback`. Add the callback manually to the learner.
```
learn = load_learner('path/to/export.pkl')
learn.add_cb(TensorBoardProjectorCallback())
dl = learn.dls.test_dl(files, with_labels=True)
_ = learn.get_preds(dl=dl)
```
If using a custom model (not a fastai resnet), pass the layer where the embeddings should be extracted as a callback parameter.
```
layer = learn.model[1][1]
learn.add_cb(TensorBoardProjectorCallback(layer=layer))
```
```
#export
import tensorboard
from torch.utils.tensorboard import SummaryWriter
from fastai.callback.fp16 import ModelToHalf
from fastai.callback.hook import hook_output

#export
class TensorBoardBaseCallback(Callback):
    def __init__(self):
        self.run_projector = False

    def after_pred(self):
        if self.run_projector: self.feat = _add_projector_features(self.learn, self.h, self.feat)

    def after_validate(self):
        if not self.run_projector: return
        self.run_projector = False
        self._remove()
        _write_projector_embedding(self.learn, self.writer, self.feat)

    def after_fit(self):
        if self.run: self.writer.close()

    def _setup_projector(self):
        self.run_projector = True
        self.h = hook_output(self.learn.model[1][1] if not self.layer else self.layer)
        self.feat = {}

    def _setup_writer(self):
        self.writer = SummaryWriter(log_dir=self.log_dir)

    def _remove(self):
        if getattr(self, 'h', None): self.h.remove()

    def __del__(self): self._remove()

#export
class TensorBoardCallback(TensorBoardBaseCallback):
    "Saves model topology, losses & metrics"
    def __init__(self, log_dir=None, trace_model=True, log_preds=True, n_preds=9, projector=False, layer=None):
        super().__init__()
        store_attr()

    def before_fit(self):
        self.run = not hasattr(self.learn, 'lr_finder') and not hasattr(self, "gather_preds") and rank_distrib()==0
        if not self.run: return
        self._setup_writer()
        if self.trace_model:
            if hasattr(self.learn, 'mixed_precision'):
                raise Exception("Can't trace model in mixed precision, pass `trace_model=False` or don't use FP16.")
            b = self.dls.one_batch()
            self.learn._split(b)
            self.writer.add_graph(self.model, *self.xb)

    def after_batch(self):
        self.writer.add_scalar('train_loss', self.smooth_loss, self.train_iter)
        for i,h in enumerate(self.opt.hypers):
            for k,v in h.items(): self.writer.add_scalar(f'{k}_{i}', v, self.train_iter)

    def after_epoch(self):
        for n,v in zip(self.recorder.metric_names[2:-1], self.recorder.log[2:-1]):
            self.writer.add_scalar(n, v, self.train_iter)
        if self.log_preds:
            b = self.dls.valid.one_batch()
            self.learn.one_batch(0, b)
            preds = getattr(self.loss_func, 'activation', noop)(self.pred)
            out = getattr(self.loss_func, 'decodes', noop)(preds)
            x,y,its,outs = self.dls.valid.show_results(b, out, show=False, max_n=self.n_preds)
            tensorboard_log(x, y, its, outs, self.writer, self.train_iter)

    def before_validate(self):
        if self.projector: self._setup_projector()

#export
class TensorBoardProjectorCallback(TensorBoardBaseCallback):
    "Saves Embeddings for Tensorboard Projector"
    def __init__(self, log_dir=None, layer=None):
        super().__init__()
        store_attr()

    def before_fit(self):
        self.run = not hasattr(self.learn, 'lr_finder') and hasattr(self, "gather_preds") and rank_distrib()==0
        if not self.run: return
        self._setup_writer()

    def before_validate(self):
        self._setup_projector()

#export
def _write_projector_embedding(learn, writer, feat):
    lbls = [learn.dl.vocab[l] for l in feat['lbl']] if getattr(learn.dl, 'vocab', None) else None
    writer.add_embedding(feat['vec'], metadata=lbls, label_img=feat['img'], global_step=learn.train_iter)

#export
def _add_projector_features(learn, hook, feat):
    img = normalize_for_projector(learn.x)
    first_epoch = True if learn.iter == 0 else False
    feat['vec'] = hook.stored if first_epoch else torch.cat((feat['vec'], hook.stored),0)
    feat['img'] = img if first_epoch else torch.cat((feat['img'], img),0)
    if getattr(learn.dl, 'vocab', None):
        feat['lbl'] = learn.y if first_epoch else torch.cat((feat['lbl'], learn.y),0)
    return feat

#export
@typedispatch
def normalize_for_projector(x:TensorImage):
    # normalize tensor to be between 0-1
    img = x.clone()
    sz = img.shape
    img = img.view(x.size(0), -1)
    img -= img.min(1, keepdim=True)[0]
    img /= img.max(1, keepdim=True)[0]
    img = img.view(*sz)
    return img

#export
from fastai.vision.data import *

#export
@typedispatch
def tensorboard_log(x:TensorImage, y: TensorCategory, samples, outs, writer, step):
    fig,axs = get_grid(len(samples), add_vert=1, return_fig=True)
    for i in range(2):
        axs = [b.show(ctx=c) for b,c in zip(samples.itemgot(i),axs)]
    axs = [r.show(ctx=c, color='green' if b==r else 'red')
           for b,r,c in zip(samples.itemgot(1),outs.itemgot(0),axs)]
    writer.add_figure('Sample results', fig, step)

#export
from fastai.vision.core import TensorPoint,TensorBBox

#export
@typedispatch
def tensorboard_log(x:TensorImage, y: (TensorImageBase, TensorPoint, TensorBBox), samples, outs, writer, step):
    fig,axs = get_grid(len(samples), add_vert=1, return_fig=True, double=True)
    for i in range(2):
        axs[::2] = [b.show(ctx=c) for b,c in zip(samples.itemgot(i),axs[::2])]
    for x in [samples,outs]:
        axs[1::2] = [b.show(ctx=c) for b,c in zip(x.itemgot(0),axs[1::2])]
    writer.add_figure('Sample results', fig, step)
```
## Test
```
from fastai.vision.all import Resize, RandomSubsetSplitter, aug_transforms, cnn_learner, resnet18
```
## TensorBoardCallback
```
path = untar_data(URLs.PETS)
db = DataBlock(blocks=(ImageBlock, CategoryBlock),
get_items=get_image_files,
item_tfms=Resize(128),
splitter=RandomSubsetSplitter(train_sz=0.1, valid_sz=0.01),
batch_tfms=aug_transforms(size=64),
get_y=using_attr(RegexLabeller(r'(.+)_\d+.*$'), 'name'))
dls = db.dataloaders(path/'images')
learn = cnn_learner(dls, resnet18, metrics=accuracy)
learn.unfreeze()
learn.fit_one_cycle(3, cbs=TensorBoardCallback(Path.home()/'tmp'/'runs', trace_model=True))
```
## Projector
### Projector in TensorBoardCallback
```
path = untar_data(URLs.PETS)
db = DataBlock(blocks=(ImageBlock, CategoryBlock),
get_items=get_image_files,
item_tfms=Resize(128),
splitter=RandomSubsetSplitter(train_sz=0.05, valid_sz=0.01),
batch_tfms=aug_transforms(size=64),
get_y=using_attr(RegexLabeller(r'(.+)_\d+.*$'), 'name'))
dls = db.dataloaders(path/'images')
cbs = [TensorBoardCallback(log_dir=Path.home()/'tmp'/'runs', projector=True)]
learn = cnn_learner(dls, resnet18, metrics=accuracy, cbs=cbs)
learn.unfreeze()
learn.fit_one_cycle(3)
```
### TensorBoardProjectorCallback
```
path = untar_data(URLs.PETS)
db = DataBlock(blocks=(ImageBlock, CategoryBlock),
get_items=get_image_files,
item_tfms=Resize(128),
splitter=RandomSubsetSplitter(train_sz=0.1, valid_sz=0.01),
batch_tfms=aug_transforms(size=64),
get_y=using_attr(RegexLabeller(r'(.+)_\d+.*$'), 'name'))
dls = db.dataloaders(path/'images')
files = get_image_files(path/'images')
files = files[:256]
learn = cnn_learner(dls, resnet18, metrics=accuracy)
learn.add_cb(TensorBoardProjectorCallback(log_dir=Path.home()/'tmp'/'runs'))
dl = learn.dls.test_dl(files, with_labels=True)
_ = learn.get_preds(dl=dl)
```
### Validate results in tensorboard
Run the following command in the command line to check if the projector embeddings have been written correctly:
```
tensorboard --logdir=~/tmp/runs
```
Open http://localhost:6006 in your browser (TensorBoard Projector doesn't work correctly in Safari!)
## Export -
```
#hide
from nbdev.export import *
notebook2script()
```
<a href="https://colab.research.google.com/github/Victoooooor/SimpleJobs/blob/main/movenet.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
#@title
!pip install -q imageio
!pip install -q opencv-python
!pip install -q git+https://github.com/tensorflow/docs
#@title
import tensorflow as tf
import tensorflow_hub as hub
from tensorflow_docs.vis import embed
import numpy as np
import cv2
import os
# Import matplotlib libraries
from matplotlib import pyplot as plt
from matplotlib.collections import LineCollection
import matplotlib.patches as patches
import imageio
from IPython.display import HTML, display
from google.colab import files
import sys
import time
import shutil
from google.colab.patches import cv2_imshow
import copy
from base64 import b64encode
#@title
KEYPOINT_DICT = {
'nose': 0,
'left_eye': 1,
'right_eye': 2,
'left_ear': 3,
'right_ear': 4,
'left_shoulder': 5,
'right_shoulder': 6,
'left_elbow': 7,
'right_elbow': 8,
'left_wrist': 9,
'right_wrist': 10,
'left_hip': 11,
'right_hip': 12,
'left_knee': 13,
'right_knee': 14,
'left_ankle': 15,
'right_ankle': 16
}
# Maps bones to a matplotlib color name.
KEYPOINT_EDGE_INDS_TO_COLOR = {
(0, 1): 'm',
(0, 2): 'c',
(1, 3): 'm',
(2, 4): 'c',
(0, 5): 'm',
(0, 6): 'c',
(5, 7): 'm',
(7, 9): 'm',
(6, 8): 'c',
(8, 10): 'c',
(5, 6): 'y',
(5, 11): 'm',
(6, 12): 'c',
(11, 12): 'y',
(11, 13): 'm',
(13, 15): 'm',
(12, 14): 'c',
(14, 16): 'c'
}
def _keypoints_and_edges_for_display(keypoints_with_scores,
height,
width,
keypoint_threshold=0.11):
"""Returns high confidence keypoints and edges for visualization.
Args:
keypoints_with_scores: A numpy array with shape [1, 1, 17, 3] representing
the keypoint coordinates and scores returned from the MoveNet model.
height: height of the image in pixels.
width: width of the image in pixels.
keypoint_threshold: minimum confidence score for a keypoint to be
visualized.
Returns:
A (keypoints_xy, edges_xy, edge_colors) containing:
* the coordinates of all keypoints of all detected entities;
* the coordinates of all skeleton edges of all detected entities;
* the colors in which the edges should be plotted.
"""
keypoints_all = []
keypoint_edges_all = []
edge_colors = []
num_instances, _, _, _ = keypoints_with_scores.shape
for idx in range(num_instances):
kpts_x = keypoints_with_scores[0, idx, :, 1]
kpts_y = keypoints_with_scores[0, idx, :, 0]
kpts_scores = keypoints_with_scores[0, idx, :, 2]
kpts_absolute_xy = np.stack(
[width * np.array(kpts_x), height * np.array(kpts_y)], axis=-1)
kpts_above_thresh_absolute = kpts_absolute_xy[
kpts_scores > keypoint_threshold, :]
keypoints_all.append(kpts_above_thresh_absolute)
for edge_pair, color in KEYPOINT_EDGE_INDS_TO_COLOR.items():
if (kpts_scores[edge_pair[0]] > keypoint_threshold and
kpts_scores[edge_pair[1]] > keypoint_threshold):
x_start = kpts_absolute_xy[edge_pair[0], 0]
y_start = kpts_absolute_xy[edge_pair[0], 1]
x_end = kpts_absolute_xy[edge_pair[1], 0]
y_end = kpts_absolute_xy[edge_pair[1], 1]
line_seg = np.array([[x_start, y_start], [x_end, y_end]])
keypoint_edges_all.append(line_seg)
edge_colors.append(color)
if keypoints_all:
keypoints_xy = np.concatenate(keypoints_all, axis=0)
else:
keypoints_xy = np.zeros((0, 17, 2))
if keypoint_edges_all:
edges_xy = np.stack(keypoint_edges_all, axis=0)
else:
edges_xy = np.zeros((0, 2, 2))
return keypoints_xy, edges_xy, edge_colors
def draw_prediction_on_image(
image, keypoints_with_scores, crop_region=None, close_figure=False,
output_image_height=None):
"""Draws the keypoint predictions on image.
Args:
image: A numpy array with shape [height, width, channel] representing the
pixel values of the input image.
keypoints_with_scores: A numpy array with shape [1, 1, 17, 3] representing
the keypoint coordinates and scores returned from the MoveNet model.
crop_region: A dictionary that defines the coordinates of the bounding box
of the crop region in normalized coordinates (see the init_crop_region
function below for more detail). If provided, this function will also
draw the bounding box on the image.
output_image_height: An integer indicating the height of the output image.
Note that the image aspect ratio will be the same as the input image.
Returns:
A numpy array with shape [out_height, out_width, channel] representing the
image overlaid with keypoint predictions.
"""
height, width, channel = image.shape
aspect_ratio = float(width) / height
fig, ax = plt.subplots(figsize=(12 * aspect_ratio, 12))
# To remove the huge white borders
fig.tight_layout(pad=0)
ax.margins(0)
ax.set_yticklabels([])
ax.set_xticklabels([])
plt.axis('off')
im = ax.imshow(image)
line_segments = LineCollection([], linewidths=(4), linestyle='solid')
ax.add_collection(line_segments)
# Turn off tick labels
scat = ax.scatter([], [], s=60, color='#FF1493', zorder=3)
(keypoint_locs, keypoint_edges,
edge_colors) = _keypoints_and_edges_for_display(
keypoints_with_scores, height, width)
line_segments.set_segments(keypoint_edges)
line_segments.set_color(edge_colors)
if keypoint_edges.shape[0]:
line_segments.set_segments(keypoint_edges)
line_segments.set_color(edge_colors)
if keypoint_locs.shape[0]:
scat.set_offsets(keypoint_locs)
if crop_region is not None:
xmin = max(crop_region['x_min'] * width, 0.0)
ymin = max(crop_region['y_min'] * height, 0.0)
rec_width = min(crop_region['x_max'], 0.99) * width - xmin
rec_height = min(crop_region['y_max'], 0.99) * height - ymin
rect = patches.Rectangle(
(xmin,ymin),rec_width,rec_height,
linewidth=1,edgecolor='b',facecolor='none')
ax.add_patch(rect)
fig.canvas.draw()
image_from_plot = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)
image_from_plot = image_from_plot.reshape(
fig.canvas.get_width_height()[::-1] + (3,))
plt.close(fig)
if output_image_height is not None:
output_image_width = int(output_image_height / height * width)
image_from_plot = cv2.resize(
image_from_plot, dsize=(output_image_width, output_image_height),
interpolation=cv2.INTER_CUBIC)
return image_from_plot
def to_gif(images, fps):
"""Converts image sequence (4D numpy array) to gif."""
imageio.mimsave('./animation.gif', images, fps=fps)
return embed.embed_file('./animation.gif')
def progress(value, max=100):
return HTML("""
<progress
value='{value}'
max='{max}',
style='width: 100%'
>
{value}
</progress>
""".format(value=value, max=max))
def show_video(video_path, video_width=600):
  # read the file as bytes and embed it in the notebook as a base64 data URL
  with open(video_path, "rb") as f:
    video_file = f.read()
  video_url = f"data:video/mp4;base64,{b64encode(video_file).decode()}"
  return HTML(f"""<video width={video_width} controls><source src="{video_url}"></video>""")
# Load the input image.
def get_pose(image, thresh = 0.2):
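  """Run MoveNet multipose on a single frame.

  Returns the frame with all detected skeletons drawn on it, plus a list of
  [17, 3] keypoint arrays (one per person whose score exceeds `thresh`).
  """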
detection_threshold = thresh
image = tf.expand_dims(image, axis=0)
image_origin = copy.copy(image)
image = tf.cast(tf.image.resize_with_pad(
image, 256, 256), dtype=tf.int32)
_, image_height, image_width, channel = image_origin.shape
# print(image_height, image_width)
if channel != 3:
sys.exit('Image isn\'t in RGB format.')
output = movenet(image)
people = output['output_0'].numpy()[:, :, :51].reshape((6, 17, 3))
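  # resize_with_pad letterboxes the frame into a 256x256 square, so the
  # normalized keypoint coordinates are relative to that padded square;
  # rescale the longer axis around the center (0.5) to map them back onto
  # the original aspect ratio.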
if image_width > image_height:
# print('scaling')
dif = people - 0.5
people[:,:,0] = 0.5 + image_width/image_height * dif[:,:,0]
elif image_width < image_height:
# print('scaling')
dif = people - 0.5
people[:,:,1] = 0.5 + image_height/image_width * dif[:,:,1]
# Save landmarks if all landmarks were detected
ppl = []
for i in range(6):
# print(output['output_0'][0, i, -1])
if output['output_0'][0, i, -1] > detection_threshold:
ppl.append(people[i])
should_keep_image = len(ppl) > 0
if not should_keep_image:
    print('No pose was confidently detected.')
#draw all
merged_img = np.squeeze(image_origin.numpy(), axis=0)
for pp in ppl:
merged_img = draw_prediction_on_image(
merged_img, np.array([[pp]]), output_image_height=image_height)
return merged_img, ppl
def get_vid(filename, fhandle, desti = 'processed.mp4', interval = 5):
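  """Overlay detected poses on the video `filename`, writing the annotated
  result to `desti` and appending flattened keypoints to the open file
  handle `fhandle` every `interval` frames."""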
video_file = desti
video = cv2.VideoCapture(filename)
if not video.isOpened():
sys.exit('video does not exist')
fps = int(video.get(cv2.CAP_PROP_FPS))
frame_num = int(video.get(cv2.CAP_PROP_FRAME_COUNT))
frame_width = int(video.get(cv2.CAP_PROP_FRAME_WIDTH))
frame_height = int(video.get(cv2.CAP_PROP_FRAME_HEIGHT))
fourcc = cv2.VideoWriter_fourcc(*'mp4v')
video_writer = cv2.VideoWriter(video_file,fourcc,fps,(frame_width,frame_height))
print("Frames per second using video.get(cv2.CAP_PROP_FPS) : {0}".format(fps))
frame_counter = 0
  while True:
    ret, frame = video.read()
    if not ret:
      break
    tfframe = tf.convert_to_tensor(frame)
    new_frame, data = get_pose(tfframe)
    video_writer.write(new_frame)
    if frame_counter % interval == 0 and data:
      # drop the confidence column and swap (y, x) -> (x, y) before saving;
      # frames with no confident detections are skipped
      data = np.delete(data, 2, 2)
      data[:, :, [0, 1]] = data[:, :, [1, 0]]
      np.savetxt(fhandle, data.flatten(), fmt='%.18e', newline=',')
      fhandle.write(b"\n")
    frame_counter += 1
video.release()
video_writer.release()
cv2.destroyAllWindows()
return video_file
#@title
model = hub.load("https://tfhub.dev/google/movenet/multipose/lightning/1")
movenet = model.signatures['serving_default']
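# MoveNet multipose detects up to 6 people per frame, each described by
# 17 keypoints given as (y, x, score) in normalized coordinates.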
#params
interval = 5  # save keypoints to the CSV every 5 frames
uploaded = files.upload()
filename = next(iter(uploaded))
#@title
text_name = 'pose.csv'
try:
  os.remove(text_name)  # start from a fresh CSV if one already exists
except OSError:
  pass
with open(text_name, "ab") as csv_file:
  # the `with` block closes the file automatically once get_vid returns
  gen = get_vid(filename, csv_file, interval=interval)
audiofile = '_sound.mp3'
withsound = 'output.mp4'
!ffmpeg -i {filename} -f mp3 -ab 192000 -vn {audiofile}
!ffmpeg -i {gen} -i {audiofile} -map 0:0 -map 1:0 -c:v copy -c:a copy {withsound}
!zip -r file.zip {text_name} {withsound}
files.download('file.zip')
# clean up intermediate files; ignore any that are already gone
try:
  os.remove(text_name)
  os.remove(filename)
  os.remove(audiofile)
  os.remove(gen)
  os.remove(withsound)
except OSError:
  pass
```
# Getting started with Captum Insights: a simple model on CIFAR10 dataset
Demonstrates how to use Captum Insights embedded in a notebook to debug a CIFAR model and test samples. This is a slight modification of the CIFAR_TorchVision_Interpret notebook.
More details about the model can be found here: https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html#sphx-glr-beginner-blitz-cifar10-tutorial-py
**Note:** Before running this tutorial, please install the torchvision and IPython packages.
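For example, in a notebook cell (along with Captum itself, assuming a standard pip environment):
```
!pip install captum torchvision ipython
```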
```
import os
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms
from captum.insights import AttributionVisualizer, Batch
from captum.insights.features import ImageFeature
```
Define helper functions that return the classification classes and the pretrained model.
```
def get_classes():
classes = [
"Plane",
"Car",
"Bird",
"Cat",
"Deer",
"Dog",
"Frog",
"Horse",
"Ship",
"Truck",
]
return classes
def get_pretrained_model():
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool1 = nn.MaxPool2d(2, 2)
self.pool2 = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
self.relu1 = nn.ReLU()
self.relu2 = nn.ReLU()
self.relu3 = nn.ReLU()
self.relu4 = nn.ReLU()
def forward(self, x):
x = self.pool1(self.relu1(self.conv1(x)))
x = self.pool2(self.relu2(self.conv2(x)))
x = x.view(-1, 16 * 5 * 5)
x = self.relu3(self.fc1(x))
x = self.relu4(self.fc2(x))
x = self.fc3(x)
return x
net = Net()
net.load_state_dict(torch.load("models/cifar_torchvision.pt"))
return net
def baseline_func(input):
return input * 0
def formatted_data_iter():
dataset = torchvision.datasets.CIFAR10(
root="data/test", train=False, download=True, transform=transforms.ToTensor()
)
dataloader = iter(
torch.utils.data.DataLoader(dataset, batch_size=4, shuffle=False, num_workers=2)
)
while True:
images, labels = next(dataloader)
yield Batch(inputs=images, labels=labels)
```
Run the visualizer and render it inside the notebook for interactive debugging.
```
normalize = transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
model = get_pretrained_model()
visualizer = AttributionVisualizer(
models=[model],
score_func=lambda o: torch.nn.functional.softmax(o, 1),
classes=get_classes(),
features=[
ImageFeature(
"Photo",
baseline_transforms=[baseline_func],
input_transforms=[normalize],
)
],
dataset=formatted_data_iter(),
)
visualizer.render()
# show a screenshot if using notebook non-interactively
from IPython.display import Image
Image(filename='img/captum_insights.png')
```
# Loading Image Data
So far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.
We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:
<img src='assets/dog_cat.png'>
We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems.
```
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
import torch
from torchvision import datasets, transforms
import helper
```
The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.html#imagefolder)). In general you'll use `ImageFolder` like so:
```python
dataset = datasets.ImageFolder('path/to/data', transform=transform)
```
where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:
```
root/dog/xxx.png
root/dog/xxy.png
root/dog/xxz.png
root/cat/123.png
root/cat/nsdf3.png
root/cat/asd932_.png
```
where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set.
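Once the dataset is loaded, you can verify that `ImageFolder` picked up the layout correctly (assuming the `cat`/`dog` structure above):
```python
dataset = datasets.ImageFolder('path/to/data', transform=transform)
print(dataset.classes)        # ['cat', 'dog'], taken from the directory names
print(dataset.class_to_idx)   # {'cat': 0, 'dog': 1}
```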
### Transforms
When you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:
```python
transform = transforms.Compose([transforms.Resize(255),
transforms.CenterCrop(224),
transforms.ToTensor()])
```
There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html).
### Data Loaders
With the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.html#torch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.
```python
dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)
```
Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.
```python
# Looping through it, get a batch on each loop
for images, labels in dataloader:
pass
# Get one batch
images, labels = next(iter(dataloader))
```
>**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader.
```
data_dir = '/Users/mohamedabdelbary/Documents/Dev/deep-learning-v2-pytorch/dogs-vs-cats/'
transform = transforms.Compose([
transforms.Resize(255),
transforms.CenterCrop(224),
transforms.ToTensor()
])
dataset = datasets.ImageFolder(data_dir + 'train', transform=transform)
dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)
# Run this to test your data loader
images, labels = next(iter(dataloader))
helper.imshow(images[0], normalize=False)
```
If you loaded the data correctly, you should see something like this (your image will be different):
<img src='assets/cat_cropped.png' width=244>
## Data Augmentation
A common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.
To randomly rotate, scale and crop, then flip your images you would define your transforms like this:
```python
train_transforms = transforms.Compose([transforms.RandomRotation(30),
transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.5, 0.5, 0.5],
[0.5, 0.5, 0.5])])
```
You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so
```input[channel] = (input[channel] - mean[channel]) / std[channel]```
Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.
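To make the arithmetic concrete, here is a minimal check showing that means and standard deviations of 0.5 map the 0-1 pixel range onto -1 to 1:
```python
import torch
from torchvision import transforms

normalize = transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
image = torch.rand(3, 224, 224)            # fake image with values in [0, 1)
normalized = normalize(image)
print(normalized.min(), normalized.max())  # close to -1.0 and 1.0
```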
You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.
>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now.
```
data_dir = '/Users/mohamedabdelbary/Documents/Dev/deep-learning-v2-pytorch/dogs-vs-cats'
# TODO: Define transforms for the training data and testing data
train_transforms = transforms.Compose([
    # augmentations operate on PIL images, so they come before ToTensor();
    # normalization is left off for now, as the exercise suggests
    transforms.RandomRotation(30),
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor()
])
test_transforms = transforms.Compose([
transforms.Resize(255),
transforms.CenterCrop(224),
transforms.ToTensor()
])
# Pass transforms in here, then run the next cell to see how the transforms look
train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms)
test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms)
trainloader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True)
testloader = torch.utils.data.DataLoader(test_data, batch_size=32)
# change this to the trainloader or testloader
data_iter = iter(testloader)
images, labels = next(data_iter)
fig, axes = plt.subplots(figsize=(10,4), ncols=4)
for ii in range(4):
ax = axes[ii]
helper.imshow(images[ii], ax=ax, normalize=False)
```
Your transformed images should look something like this.
<center>Training examples:</center>
<img src='assets/train_examples.png' width=500px>
<center>Testing examples:</center>
<img src='assets/test_examples.png' width=500px>
At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and are at a higher resolution (so far you've seen 28x28 images which are tiny).
In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem.
```
# Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset
```
```
# Visualization of the KO+ChIP Gold Standard from:
# Miraldi et al. (2018) "Leveraging chromatin accessibility for transcriptional regulatory network inference in Th17 Cells"
# TO START: In the menu above, choose "Cell" --> "Run All", and network + heatmap will load
# Change "canvas" to "SVG" (drop-down menu in cell below) to enable drag interactions with nodes & labels
# More info about jp_gene_viz and user interface instructions are available on Github:
# https://github.com/simonsfoundation/jp_gene_viz/blob/master/doc/dNetwork%20widget%20overview.ipynb
# Info specific to the "Multi-network" view:
# https://github.com/simonsfoundation/jp_gene_viz/blob/master/doc/Combined%20widgets.ipynb
# directory containing gene expression data and network folder
directory = "."
# folder containing networks
netPath = 'Networks'
# name of gene expression file
expressionFile = 'Th0_Th17_48hTh.txt'
# sample condition for initial gene node color
sampleConditionOfInt = 'Th17(48h)'
# The starting conditions for the networks are given as a list of tuples. Tuple entries are:
# 0. network file name (column format) (as found in directory)
# 1. column of the expression matrix that you want the nodes to be colored by
# 2. network title, to which we'll add the gene and peak cutoffs
# 3. cutoff for edge strength; note that TRN edge strengths are quantiles for 15 TFs/gene,
#    so to see the top 10 TFs/gene, increase the cutoff to .33, etc.
networkInits = [
('ChIP_A17_KOall_ATh_bias50_maxComb_sp.tsv',sampleConditionOfInt,' Final ChIP/ATAC(Th17)+KO+ATAC(Th) TRN',.93),
('ATAC_Th17_bias50_maxComb_sp.tsv',sampleConditionOfInt,'Final ATAC-only TRN', .93),
("KO75_KOrk_1norm_sp.tsv",sampleConditionOfInt,'KO G.S. (25 TFs)',0),
("KC1p5_sp.tsv",sampleConditionOfInt,'KO-ChIP G.S. (9 TFs)',0)]
tfFocus = 1 # If 1, automatically applies the "TF only" function, so we can focus on TFs
# If 0, all genes shown
# Uncomment to run without install (in binder, for example)
import sys
if ".." not in sys.path:
sys.path.append("..")
from jp_gene_viz import dNetwork
dNetwork.load_javascript_support()
from jp_gene_viz import multiple_network
from jp_gene_viz import LExpression
LExpression.load_javascript_support()
networkList = list() # this list will contain heatmap-linked network objects
for networkInit in networkInits:
networkFile = networkInit[0]
curr = LExpression.LinkedExpressionNetwork()
    print(directory + '/' + netPath + '/' + networkFile)
curr.load_network(directory + '/' + netPath + '/' + networkFile)
networkList.append(curr)
# visualize the networks -- HARD CODED for 4 networks:
M = multiple_network.MultipleNetworks(
[[networkList[0], networkList[1]],
[networkList[2], networkList[3]]])
M.svg_width = 500
M.show()
# Set network preferences
for count, curr in enumerate(networkList):
    networkInit = networkInits[count]
# get title information + curr column for shading of figures
currCol = networkInit[1]
titleInf = networkInit[2]
threshhold = networkInit[3]
# set threshold
curr.network.threshhold_slider.value = threshhold
curr.network.apply_click(None)
curr.network.restore_click(None)
if tfFocus:
# focus on TF core
curr.network.tf_only_click(None)
curr.network.layout_click(None)
# layout network
curr.network.connected_only_click()
curr.network.layout_dropdown.value = 'fruchterman_reingold'
curr.network.layout_click()
# set title
curr.network.title_html.value = titleInf
# add labels
curr.network.labels_button.value=True
curr.network.draw_click(None)
# Load heatmap
curr.load_heatmap(directory + '/' + expressionFile)
# color nodes according to a sample column in the gene expression matrix
curr.gene_click(None)
curr.expression.transform_dropdown.value = 'Z score'
curr.expression.apply_transform()
curr.expression.col = currCol
curr.condition_click(None)
```
# Bagging
This notebook introduces a very natural strategy for building ensembles of
machine learning models, known as "bagging".
"Bagging" stands for Bootstrap AGGregatING. It uses bootstrap resampling
(random sampling with replacement) to learn several models on random
variations of the training set. At predict time, the predictions of each
learner are aggregated to give the final predictions.
First, we will generate a simple synthetic dataset to get insights regarding
bootstrapping.
```
import pandas as pd
import numpy as np
# create a random number generator that will be used to set the randomness
rng = np.random.RandomState(1)
def generate_data(n_samples=30):
"""Generate synthetic dataset. Returns `data_train`, `data_test`,
`target_train`."""
x_min, x_max = -3, 3
x = rng.uniform(x_min, x_max, size=n_samples)
noise = 4.0 * rng.randn(n_samples)
y = x ** 3 - 0.5 * (x + 1) ** 2 + noise
y /= y.std()
data_train = pd.DataFrame(x, columns=["Feature"])
data_test = pd.DataFrame(
np.linspace(x_max, x_min, num=300), columns=["Feature"])
target_train = pd.Series(y, name="Target")
return data_train, data_test, target_train
import matplotlib.pyplot as plt
import seaborn as sns
data_train, data_test, target_train = generate_data(n_samples=30)
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
_ = plt.title("Synthetic regression dataset")
```
The relationship between our feature and the target to predict is non-linear.
However, a decision tree is capable of approximating such a non-linear
dependency:
```
from sklearn.tree import DecisionTreeRegressor
tree = DecisionTreeRegressor(max_depth=3, random_state=0)
tree.fit(data_train, target_train)
y_pred = tree.predict(data_test)
```
Remember that the term "test" here refers to data that was not used for
training; computing an evaluation metric on such a synthetic test set
would be meaningless.
```
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
plt.plot(data_test["Feature"], y_pred, label="Fitted tree")
plt.legend()
_ = plt.title("Predictions by a single decision tree")
```
Let's see how we can use bootstrapping to learn several trees.
## Bootstrap resampling
A bootstrap sample is obtained by resampling the original dataset with
replacement; the drawn sample has the same size as the original dataset.
Thus, the bootstrap sample will contain some data points several times while
some of the original data points will not be present.
We will create a function that given `data` and `target` will return a
resampled variation `data_bootstrap` and `target_bootstrap`.
```
def bootstrap_sample(data, target):
    # Indices corresponding to a sampling with replacement of the same sample
    # size as the original data
bootstrap_indices = rng.choice(
np.arange(target.shape[0]), size=target.shape[0], replace=True,
)
# In pandas, we need to use `.iloc` to extract rows using an integer
# position index:
data_bootstrap = data.iloc[bootstrap_indices]
target_bootstrap = target.iloc[bootstrap_indices]
return data_bootstrap, target_bootstrap
```
We will generate 3 bootstrap samples and qualitatively check the difference
with the original dataset.
```
n_bootstraps = 3
for bootstrap_idx in range(n_bootstraps):
# draw a bootstrap from the original data
    data_bootstrap, target_bootstrap = bootstrap_sample(
        data_train, target_train,
    )
    plt.figure()
    plt.scatter(data_bootstrap["Feature"], target_bootstrap,
color="tab:blue", facecolors="none",
alpha=0.5, label="Resampled data", s=180, linewidth=5)
plt.scatter(data_train["Feature"], target_train,
color="black", s=60,
alpha=1, label="Original data")
plt.title(f"Resampled data #{bootstrap_idx}")
plt.legend()
```
Observe that the 3 variations all share common points with the original
dataset. Some of the points are randomly resampled several times and appear
as darker blue circles.
The 3 generated bootstrap samples are all different from the original dataset
and from each other. To confirm this intuition, we can check the number of
unique samples in the bootstrap samples.
```
data_train_huge, data_test_huge, target_train_huge = generate_data(
n_samples=100_000)
data_bootstrap_sample, target_bootstrap_sample = bootstrap_sample(
data_train_huge, target_train_huge)
ratio_unique_sample = (np.unique(data_bootstrap_sample).size /
data_bootstrap_sample.size)
print(
    f"Percentage of the original data points present in the bootstrap "
    f"sample: {ratio_unique_sample * 100:.1f}%"
)
```
On average, ~63.2% of the data points of the original dataset will be
present in a given bootstrap sample, while the remaining ~36.8% of the
bootstrap sample's entries are repeated points.
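This ~63.2% figure is no accident: the probability that a given point is
drawn at least once in `n` draws with replacement is `1 - (1 - 1/n) ** n`,
which converges to `1 - 1/e ≈ 0.632` as `n` grows. A quick check:
```
n = 100_000
print(1 - (1 - 1 / n) ** n)  # ~0.6321, i.e. 1 - 1/e
```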
We are able to generate many datasets, all slightly different.
Now, we can fit a decision tree for each of these datasets and they will all
be slightly different as well.
```
bag_of_trees = []
for bootstrap_idx in range(n_bootstraps):
tree = DecisionTreeRegressor(max_depth=3, random_state=0)
data_bootstrap_sample, target_bootstrap_sample = bootstrap_sample(
data_train, target_train)
tree.fit(data_bootstrap_sample, target_bootstrap_sample)
bag_of_trees.append(tree)
```
Now that we have created a bag of different trees, we can use each of the trees to
predict the samples within the range of data. They shall give slightly
different predictions.
```
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
for tree_idx, tree in enumerate(bag_of_trees):
tree_predictions = tree.predict(data_test)
plt.plot(data_test["Feature"], tree_predictions, linestyle="--", alpha=0.8,
label=f"Tree #{tree_idx} predictions")
plt.legend()
_ = plt.title("Predictions of trees trained on different bootstraps")
```
## Aggregating
Once our trees are fitted, we can get predictions from each of
them. In regression, the most straightforward way to combine those
predictions is just to average them: for a given test data point, we feed the
input feature values to each of the `n` trained models in the ensemble and as
a result compute `n` predicted values for the target variable. The final
prediction of the ensemble for the test data point is the average of those
`n` values.
We can plot the averaged predictions from the previous example.
```
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
bag_predictions = []
for tree_idx, tree in enumerate(bag_of_trees):
tree_predictions = tree.predict(data_test)
plt.plot(data_test["Feature"], tree_predictions, linestyle="--", alpha=0.8,
label=f"Tree #{tree_idx} predictions")
bag_predictions.append(tree_predictions)
bag_predictions = np.mean(bag_predictions, axis=0)
plt.plot(data_test["Feature"], bag_predictions, label="Averaged predictions",
linestyle="-")
plt.legend(bbox_to_anchor=(1.05, 0.8), loc="upper left")
_ = plt.title("Predictions of bagged trees")
```
The unbroken red line shows the averaged predictions, which would be the
final predictions given by our 'bag' of decision tree regressors. Note that
the predictions of the ensemble are more stable because of the averaging
operation. As a result, the bag of trees as a whole is less likely to overfit
than the individual trees.
## Bagging in scikit-learn
Scikit-learn implements the bagging procedure as a "meta-estimator", that is
an estimator that wraps another estimator: it takes a base model that is
cloned several times and trained independently on each bootstrap sample.
The following code snippet shows how to build a bagging ensemble of decision
trees. We set `n_estimators=100` instead of 3 in our manual implementation
above to get a stronger smoothing effect.
```
from sklearn.ensemble import BaggingRegressor
bagged_trees = BaggingRegressor(
base_estimator=DecisionTreeRegressor(max_depth=3),
n_estimators=100,
)
_ = bagged_trees.fit(data_train, target_train)
```
Let us visualize the predictions of the ensemble on the same interval of data:
```
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
bagged_trees_predictions = bagged_trees.predict(data_test)
plt.plot(data_test["Feature"], bagged_trees_predictions)
_ = plt.title("Predictions from a bagging classifier")
```
Because we use 100 trees in the ensemble, the average prediction is indeed
slightly smoother but very similar to our previous average plot.
It is possible to access the internal models of the ensemble stored as a
Python list in the `bagged_trees.estimators_` attribute after fitting.
Let us compare the base model predictions with their average:
```
for tree_idx, tree in enumerate(bagged_trees.estimators_):
label = "Predictions of individual trees" if tree_idx == 0 else None
# we convert `data_test` into a NumPy array to avoid a warning raised in scikit-learn
tree_predictions = tree.predict(data_test.to_numpy())
plt.plot(data_test["Feature"], tree_predictions, linestyle="--", alpha=0.1,
color="tab:blue", label=label)
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
bagged_trees_predictions = bagged_trees.predict(data_test)
plt.plot(data_test["Feature"], bagged_trees_predictions,
color="tab:orange", label="Predictions of ensemble")
_ = plt.legend()
```
We used a low value of the opacity parameter `alpha` to better appreciate the
overlap in the prediction functions of the individual trees.
This visualization gives some insights on the uncertainty in the predictions
in different areas of the feature space.
## Bagging complex pipelines
While we used a decision tree as a base model, nothing prevents us from
using any other type of model.
As we know that the original data generating function is a noisy polynomial
transformation of the input variable, let us try to fit a bagged polynomial
regression pipeline on this dataset:
```
from sklearn.linear_model import Ridge
from sklearn.preprocessing import PolynomialFeatures
from sklearn.preprocessing import MinMaxScaler
from sklearn.pipeline import make_pipeline
polynomial_regressor = make_pipeline(
MinMaxScaler(),
PolynomialFeatures(degree=4),
Ridge(alpha=1e-10),
)
```
This pipeline first scales the data to the 0-1 range with `MinMaxScaler`.
Then it extracts degree-4 polynomial features. The resulting features will
all stay in the 0-1 range by construction: if `x` lies in the 0-1 range then
`x ** n` also lies in the 0-1 range for any value of `n`.
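As a quick sanity check of that claim, we can inspect the range of the
transformed features on a grid of inputs (a minimal sketch reusing the same
two preprocessing steps):
```
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler, PolynomialFeatures

features = make_pipeline(MinMaxScaler(), PolynomialFeatures(degree=4))
transformed = features.fit_transform(np.linspace(-3, 3, 50).reshape(-1, 1))
print(transformed.min(), transformed.max())  # 0.0 and 1.0
```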
Then the pipeline feeds the resulting non-linear features to a regularized
linear regression model for the final prediction of the target variable.
Note that we intentionally use a small value for the regularization parameter
`alpha` as we expect the bagging ensemble to work well with slightly overfit
base models.
The ensemble itself is simply built by passing the resulting pipeline as the
`base_estimator` parameter of the `BaggingRegressor` class:
```
bagging = BaggingRegressor(
base_estimator=polynomial_regressor,
n_estimators=100,
random_state=0,
)
_ = bagging.fit(data_train, target_train)
for i, regressor in enumerate(bagging.estimators_):
# we convert `data_test` into a NumPy array to avoid a warning raised in scikit-learn
regressor_predictions = regressor.predict(data_test.to_numpy())
base_model_line = plt.plot(
data_test["Feature"], regressor_predictions, linestyle="--", alpha=0.2,
label="Predictions of base models" if i == 0 else None,
color="tab:blue"
)
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
bagging_predictions = bagging.predict(data_test)
plt.plot(data_test["Feature"], bagging_predictions,
color="tab:orange", label="Predictions of ensemble")
plt.ylim(target_train.min(), target_train.max())
plt.legend()
_ = plt.title("Bagged polynomial regression")
```
The predictions of this bagged polynomial regression model look
qualitatively better than the bagged trees. This is somewhat expected since
the base model better reflects our knowledge of the true data generating
process.
Again the different shades induced by the overlapping blue lines let us
appreciate the uncertainty in the prediction of the bagged ensemble.
To conclude this notebook, we note that the bootstrapping procedure is a
generic tool of statistics and is not limited to build ensemble of machine
learning models. The interested reader can learn more on the [Wikipedia
article on
bootstrapping](https://en.wikipedia.org/wiki/Bootstrapping_(statistics)).
### Dependencies for the interactive plots (apart from rdkit, oechem, and other qc* packages)
```
!conda install -c conda-forge plotly -y
!conda install -c plotly jupyter-dash -y
!conda install -c plotly plotly-orca -y
```
```
#imports
import numpy as np
from scipy import stats
import fragmenter
from openeye import oechem
TD_datasets = [
'Fragment Stability Benchmark',
# 'Fragmenter paper',
# 'OpenFF DANCE 1 eMolecules t142 v1.0',
'OpenFF Fragmenter Validation 1.0',
'OpenFF Full TorsionDrive Benchmark 1',
'OpenFF Gen 2 Torsion Set 1 Roche 2',
'OpenFF Gen 2 Torsion Set 2 Coverage 2',
'OpenFF Gen 2 Torsion Set 3 Pfizer Discrepancy 2',
'OpenFF Gen 2 Torsion Set 4 eMolecules Discrepancy 2',
'OpenFF Gen 2 Torsion Set 5 Bayer 2',
'OpenFF Gen 2 Torsion Set 6 Supplemental 2',
'OpenFF Group1 Torsions 2',
'OpenFF Group1 Torsions 3',
'OpenFF Primary Benchmark 1 Torsion Set',
'OpenFF Primary Benchmark 2 Torsion Set',
'OpenFF Primary TorsionDrive Benchmark 1',
'OpenFF Rowley Biaryl v1.0',
'OpenFF Substituted Phenyl Set 1',
'OpenFF-benchmark-ligand-fragments-v1.0',
'Pfizer Discrepancy Torsion Dataset 1',
'SMIRNOFF Coverage Torsion Set 1',
# 'SiliconTX Torsion Benchmark Set 1',
'TorsionDrive Paper'
]
def oeb2oemol(oebfile):
"""
Takes in oebfile and generates oemolList
Parameters
----------
oebfile : String
Title of an oeb file
Returns
-------
mollist : List of objects
List of OEMols in the .oeb file
"""
ifs = oechem.oemolistream(oebfile)
mollist = []
for mol in ifs.GetOEGraphMols():
mollist.append(oechem.OEGraphMol(mol))
return mollist
def compute_r_ci(wbos, max_energies):
    # Return the squared correlation coefficient (r**2) of the linear fit
    # between Wiberg bond orders and torsion barriers.
    return (stats.linregress(wbos, max_energies)[2])**2
def plot_interactive(fileList, t_id):
"""
    Takes in a list of oeb files and plots WBO vs. torsion barrier, combining
    all the datasets and fitting a regression line for the given torsion id
    in the combined dataset.
    Note: the plot is interactive (and returns chemical structures) only for
    the last usage.
Parameters
----------
fileList: list of strings
        each string is an oeb file name
Eg. ['rowley.oeb'] or ['rowley.oeb', 'phenyl.oeb']
t_id: str
torsion id, eg., 't43'
"""
import plotly.express as px
from jupyter_dash import JupyterDash
import dash_core_components as dcc
import dash_html_components as html
import pandas as pd
import plotly.graph_objects as go
from dash.dependencies import Input, Output
from rdkit import Chem
from rdkit.Chem.Draw import MolsToGridImage
import base64
from io import BytesIO
from plotly.validators.scatter.marker import SymbolValidator
import ntpath
df = pd.DataFrame(columns = ['tid', 'tb', 'wbo', 'cmiles', 'TDindices', 'filename'])
fig = go.Figure({'layout' : go.Layout(height=900, width=1000,
xaxis={'title': 'Wiberg Bond Order'},
yaxis={'title': 'Torsion barrier (kJ/mol)'},
#paper_bgcolor='white',
plot_bgcolor='rgba(0,0,0,0)',
margin={'l': 40, 'b': 200, 't': 40, 'r': 10},
legend={'orientation': 'h', 'y': -0.2},
legend_font=dict(family='Arial', color='black', size=15),
hovermode=False,
dragmode='select')})
fig.update_xaxes(title_font=dict(size=26, family='Arial', color='black'),
ticks="outside", tickwidth=2, tickcolor='black', ticklen=10,
tickfont=dict(family='Arial', color='black', size=20),
showgrid=False, gridwidth=1, gridcolor='black',
mirror=True, linewidth=2, linecolor='black', showline=True)
fig.update_yaxes(title_font=dict(size=26, family='Arial', color='black'),
ticks="outside", tickwidth=2, tickcolor='black', ticklen=10,
tickfont=dict(family='Arial', color='black', size=20),
showgrid=False, gridwidth=1, gridcolor='black',
mirror=True, linewidth=2, linecolor='black', showline=True)
colors = fragmenter.chemi._KELLYS_COLORS
colors = colors * 2
raw_symbols = SymbolValidator().values
symbols = []
for i in range(0,len(raw_symbols),8):
symbols.append(raw_symbols[i])
count = 0
fname = []
for fileName in fileList:
molList = []
fname = fileName
molList = oeb2oemol(fname)
for m in molList:
tid = m.GetData("IDMatch")
fname = ntpath.basename(fileName)
df = df.append({'tid': tid,
'tb': m.GetData("TB"),
'wbo' : m.GetData("WBO"),
'cmiles' : m.GetData("cmiles"),
'TDindices' : m.GetData("TDindices"),
'filename' : fname},
ignore_index = True)
x = df[(df.filename == fname) & (df.tid == t_id)].wbo
y = df.loc[x.index].tb
fig.add_scatter(x=x,
y=y,
mode="markers",
name=fname,
marker_color=colors[count],
marker_symbol=count,
marker_size=13)
count += 1
x = df[df.tid == t_id].wbo
y = df.loc[x.index].tb
slope, intercept, r_value, p_value, std_err = stats.linregress(x, y)
print("tid: ", t_id, "r_value: ", r_value,
"slope: ", slope, "intercept: ", intercept)
fig.add_traces(go.Scatter(
x=np.unique(x),
y=np.poly1d([slope, intercept])(np.unique(x)),
showlegend=False, mode ='lines'))
slope_text = 'slope: '+str('%.2f' % slope)
r_value = 'r_val: '+str('%.2f' % r_value)
fig_text = slope_text + ', '+ r_value
fig.add_annotation(text=fig_text,
font = {'family': "Arial", 'size': 22, 'color': 'black'},
xref="paper", yref="paper", x=1, y=1,
showarrow=False)
graph_component = dcc.Graph(id="graph_id", figure=fig)
image_component = html.Img(id="structure-image")
external_stylesheets = ['https://codepen.io/chriddyp/pen/bWLwgP.css']
app = JupyterDash(__name__)
app.layout = html.Div([
html.Div([graph_component]),
html.Div([image_component])])
@app.callback(
Output('structure-image', 'src'),
[Input('graph_id', 'selectedData')])
def display_selected_data(selectedData):
max_structs = 40
structs_per_row = 1
empty_plot = "data:image/gif;base64,R0lGODlhAQABAAAAACwAAAAAAQABAAA="
if selectedData:
if len(selectedData['points']) == 0:
return empty_plot
print("# of points selected = ", len(selectedData['points']))
xval = [x['x'] for x in selectedData['points']]
yval = [x['y'] for x in selectedData['points']]
match_df = df[df['tb'].isin(yval) & df['tid'].isin([t_id])]
smiles_list = list(match_df.cmiles)
            name_list = []  # legends for the structure grid, built below
hl_atoms = []
for i in range(len(smiles_list)):
print(smiles_list[i])
indices_tup = match_df.iloc[i].TDindices
indices_list = [x + 1 for x in list(indices_tup)]
hl_atoms.append(indices_list)
tid = match_df.iloc[i].tid
tor_bar = match_df.iloc[i].tb
wbo_tor = match_df.iloc[i].wbo
cmiles_str = match_df.iloc[i].cmiles
tmp = [str(tid), ':', 'TDindices [', str(indices_tup[0]+1),
str(indices_tup[1]+1), str(indices_tup[2]+1),
str(indices_tup[3]+1), ']',
'wbo:', str('%.2f'%(wbo_tor)),
'TB:', str('%.2f'%(tor_bar)), '(kJ/mol)']
name_list.append(' '.join(tmp))
mol_list = [Chem.MolFromSmiles(x) for x in smiles_list]
print(len(mol_list))
img = MolsToGridImage(mol_list[0:max_structs],
subImgSize=(500, 500),
molsPerRow=structs_per_row,
legends=name_list)
# ,
# highlightAtomLists=hl_atoms)
buffered = BytesIO()
img.save(buffered, format="PNG", legendFontSize=60)
encoded_image = base64.b64encode(buffered.getvalue())
src_str = 'data:image/png;base64,{}'.format(encoded_image.decode())
else:
return empty_plot
return src_str
if __name__ == '__main__':
app.run_server(mode='inline', port=8061, debug=True)
return fig
```
`rowley_t43 = plot_interactive(['./FF_1.2.1/OpenFF Rowley Biaryl v1.0.oeb'], t_id='t43')`
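For reference, the `compute_r_ci` helper defined above can also be called
directly on plain arrays: it returns the squared correlation coefficient of
the linear WBO/torsion-barrier fit. A minimal sketch with made-up placeholder
values (not taken from any of the datasets above):
```
import numpy as np

# Hypothetical WBO values and torsion barriers (kJ/mol), for illustration only.
wbos = np.array([0.95, 1.00, 1.05, 1.10, 1.15])
max_energies = np.array([10.2, 12.1, 14.3, 15.8, 18.0])
print(compute_r_ci(wbos, max_energies))  # r squared of the linear fit
```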
```
folder_name = './FF_1.3.0-tig-8/'
TD_datasets = [
'Fragment Stability Benchmark',
# 'Fragmenter paper',
# 'OpenFF DANCE 1 eMolecules t142 v1.0',
'OpenFF Fragmenter Validation 1.0',
'OpenFF Full TorsionDrive Benchmark 1',
'OpenFF Gen 2 Torsion Set 1 Roche 2',
'OpenFF Gen 2 Torsion Set 2 Coverage 2',
'OpenFF Gen 2 Torsion Set 3 Pfizer Discrepancy 2',
'OpenFF Gen 2 Torsion Set 4 eMolecules Discrepancy 2',
'OpenFF Gen 2 Torsion Set 5 Bayer 2',
'OpenFF Gen 2 Torsion Set 6 Supplemental 2',
'OpenFF Group1 Torsions 2',
'OpenFF Group1 Torsions 3',
'OpenFF Primary Benchmark 1 Torsion Set',
'OpenFF Primary Benchmark 2 Torsion Set',
'OpenFF Primary TorsionDrive Benchmark 1',
'OpenFF Rowley Biaryl v1.0',
'OpenFF Substituted Phenyl Set 1',
'OpenFF-benchmark-ligand-fragments-v1.0',
'Pfizer Discrepancy Torsion Dataset 1',
'SMIRNOFF Coverage Torsion Set 1',
# 'SiliconTX Torsion Benchmark Set 1',
'TorsionDrive Paper'
]
TD_working_oeb = [folder_name+x+'.oeb' for x in TD_datasets]
# all_t43 = plot_interactive(TD_working_oeb, t_id='t43')
tig_ids = ['TIG2']
for iid in tig_ids:
tmp = plot_interactive(TD_working_oeb, t_id=iid)
# tmp.write_image(folder_name+"fig_"+str(iid)+".pdf")
```
# Noisy Convolutional Neural Network Example
Build a noisy convolutional neural network with TensorFlow v2.
- Author: Gagandeep Singh
- Project: https://github.com/czgdp1807/noisy_weights
Experimental Details
- Datasets: The MNIST database of handwritten digits has been used for training and testing.
Observations
- It has been observed that the accuracy of the model isn't affected when testing it on MNIST digits.
- The uncertainty expressed by the model is low, which is expected since the train and test distributions are the same.
References
- [1] https://github.com/aymericdamien/TensorFlow-Examples/
```
from __future__ import absolute_import, division, print_function
import tensorflow as tf
from tensorflow.keras import Model, layers
import numpy as np
# MNIST dataset parameters.
num_classes = 10 # total classes (0-9 digits).
# Training parameters.
learning_rate = 0.001
training_steps = 200
batch_size = 128
display_step = 10
# Network parameters.
conv1_filters = 32 # number of filters for 1st conv layer.
conv2_filters = 64 # number of filters for 2nd conv layer.
fc1_units = 1024 # number of neurons for 1st fully-connected layer.
# Prepare MNIST data.
from tensorflow.keras.datasets import mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
# Convert to float32.
x_train, x_test = np.array(x_train, np.float32), np.array(x_test, np.float32)
# Normalize images value from [0, 255] to [0, 1].
x_train, x_test = x_train / 255., x_test / 255.
# Use tf.data API to shuffle and batch data.
train_data = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_data = train_data.repeat().shuffle(5000).batch(batch_size).prefetch(1)
# Create TF Model.
class ConvNet(Model):
# Set layers.
def __init__(self):
super(ConvNet, self).__init__()
# Convolution Layer with 32 filters and a kernel size of 5.
self.conv1 = layers.Conv2D(32, kernel_size=5, activation=tf.nn.relu)
# Max Pooling (down-sampling) with kernel size of 2 and strides of 2.
self.maxpool1 = layers.MaxPool2D(2, strides=2)
# Convolution Layer with 64 filters and a kernel size of 3.
self.conv2 = layers.Conv2D(64, kernel_size=3, activation=tf.nn.relu)
# Max Pooling (down-sampling) with kernel size of 2 and strides of 2.
self.maxpool2 = layers.MaxPool2D(2, strides=2)
# Flatten the data to a 1-D vector for the fully connected layer.
self.flatten = layers.Flatten()
# Fully connected layer.
self.fc1 = layers.Dense(1024)
# Apply Dropout (if is_training is False, dropout is not applied).
self.dropout = layers.Dropout(rate=0.5)
# Output layer, class prediction.
self.out = layers.Dense(num_classes)
# Set forward pass.
def call(self, x, is_training=False):
        def add_noise(_layer):
            # Perturb every weight tensor of the layer with small Gaussian
            # noise. The noise is written back with `set_weights`, so repeated
            # inference calls keep accumulating noise on top of the previously
            # perturbed weights.
            noisy_weights = []
            for weight in _layer.get_weights():
                noisy_weights.append(weight + tf.random.normal(weight.shape, 0., 0.001))
            _layer.set_weights(noisy_weights)
if not is_training:
add_noise(self.conv1)
add_noise(self.conv2)
add_noise(self.fc1)
add_noise(self.out)
x = tf.reshape(x, [-1, 28, 28, 1])
x = self.conv1(x)
x = self.maxpool1(x)
x = self.conv2(x)
x = self.maxpool2(x)
x = self.flatten(x)
x = self.fc1(x)
x = self.dropout(x, training=is_training)
x = self.out(x)
if not is_training:
# tf cross entropy expect logits without softmax, so only
# apply softmax when not training.
x = tf.nn.softmax(x)
return x
# Build neural network model.
conv_net = ConvNet()
# Cross-Entropy Loss.
# Note that this will apply 'softmax' to the logits.
def cross_entropy_loss(x, y):
# Convert labels to int 64 for tf cross-entropy function.
y = tf.cast(y, tf.int64)
# Apply softmax to logits and compute cross-entropy.
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=x)
# Average loss across the batch.
return tf.reduce_mean(loss)
# Accuracy metric.
def accuracy(y_pred, y_true):
# Predicted class is the index of highest score in prediction vector (i.e. argmax).
correct_prediction = tf.equal(tf.argmax(y_pred, 1), tf.cast(y_true, tf.int64))
return tf.reduce_mean(tf.cast(correct_prediction, tf.float32), axis=-1)
# Stochastic gradient descent optimizer.
optimizer = tf.optimizers.Adam(learning_rate)
# Optimization process.
def run_optimization(x, y):
# Wrap computation inside a GradientTape for automatic differentiation.
with tf.GradientTape() as g:
# Forward pass.
pred = conv_net(x, is_training=True)
# Compute loss.
loss = cross_entropy_loss(pred, y)
# Variables to update, i.e. trainable variables.
trainable_variables = conv_net.trainable_variables
# Compute gradients.
gradients = g.gradient(loss, trainable_variables)
# Update W and b following gradients.
optimizer.apply_gradients(zip(gradients, trainable_variables))
# Run training for the given number of steps.
for step, (batch_x, batch_y) in enumerate(train_data.take(training_steps), 1):
# Run the optimization to update W and b values.
run_optimization(batch_x, batch_y)
    if step % display_step == 0:
        # Note: this call runs with is_training=False, so it perturbs the
        # weights with noise and already applies softmax; the printed loss
        # (which applies softmax internally again) is indicative only.
        pred = conv_net(batch_x)
loss = cross_entropy_loss(pred, batch_y)
acc = accuracy(pred, batch_y)
print("step: %i, loss: %f, accuracy: %f" % (step, loss, acc))
# Test model on validation set.
pred = conv_net(x_test)
print("Test Accuracy: %f" % accuracy(pred, y_test))
# Visualize predictions.
import matplotlib.pyplot as plt
def compute_entropy(preds):
    # Per-sample predictive entropy, -sum(p * log(p)), up to a constant factor
    # (reduce_mean divides by the number of classes instead of summing).
    uncertainties = []
    for i in range(preds.shape[0]):
        uncertainties.append(-tf.reduce_mean(tf.math.multiply(preds[i], tf.math.log(preds[i]))))
    return tf.convert_to_tensor(uncertainties)
n_images = 5
test_images = x_test[:n_images]
n_samples = 10
predictions = []
for i in range(n_samples):
predictions.append(conv_net(test_images))
predictions = tf.convert_to_tensor(predictions)
predictions = tf.reduce_mean(predictions, 0)
uncertainty = compute_entropy(predictions)
print(uncertainty)
# Display image and model prediction.
for i in range(n_images):
plt.imshow(np.reshape(test_images[i], [28, 28]), cmap='gray')
plt.show()
print("Model prediction: %i" % np.argmax(predictions.numpy()[i]))
```
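Since the observations above attribute the low entropy to the train and test
distributions being the same, a natural sanity check is to feed the network
inputs far from the training distribution and see whether the predictive
entropy rises. The following is a minimal sketch under that assumption, using
uniform random noise as hypothetical out-of-distribution images (it assumes
all the cells above have been run):
```
# Uniform random "images" act as out-of-distribution inputs.
ood_images = np.random.uniform(0., 1., size=(n_images, 28, 28)).astype(np.float32)
ood_predictions = []
for i in range(n_samples):
    ood_predictions.append(conv_net(ood_images))
ood_predictions = tf.reduce_mean(tf.convert_to_tensor(ood_predictions), 0)
# Higher entropy than on the MNIST test images would indicate that the
# noisy-weight model expresses more uncertainty on unfamiliar inputs.
print(compute_entropy(ood_predictions))
```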
This is a "Neural Network" toy example which implements the basic logical gates.
Here we don't use any method to train the NN model; we just guess the correct weights.
It is meant to show, in principle, how a neural network works.
```
import math
def sigmoid(x):
return 1./(1+ math.exp(-x))
def neuron(inputs, weights):
return sigmoid(sum([x*y for x,y in zip(inputs,weights)]))
def almost_equal(x,y,epsilon=0.001):
return abs(x-y) < epsilon
```
### We "implement" NN that computes OR operation:
| x1| x2| OR|
|---|---|---|
0 | 0 | 0
0 | 1 | 1
1 | 0 | 1
1 | 1 | 1
### Input:
* x0 = 1 (bias term)
* x1,x2 in [0,1]
### Weights:
We "guess" e.g. w0 = -5, w1= 10 and w2= 10 weights.
```
def NN_OR(x1,x2):
weights =[-10, 20, 20]
inputs = [1, x1, x2]
    return neuron(inputs, weights)
print(NN_OR(1,0))
print(NN_OR(0,0))
assert almost_equal(NN_OR(0,0),0)
assert almost_equal(NN_OR(0,1),1)
assert almost_equal(NN_OR(1,0),1)
assert almost_equal(NN_OR(1,1),1)
```
### Analogically we "implement" NN that computes AND operation:
| x1| x2| AND|
|---|---|---|
0 | 0 | 0
0 | 1 | 0
1 | 0 | 0
1 | 1 | 1
### Input:
* x0 = 1 (bias term)
* x1,x2 in [0,1]
### Weights:
We "guess" e.g. w0 = -30, w1= 20 and w2 = 20 weights.
```
def NN_AND(x1,x2):
weights =[-30, 20, 20]
inputs = [1, x1, x2]
    return neuron(inputs, weights)
print(NN_AND(1,0))
print(NN_AND(1,1))
assert almost_equal(NN_AND(0,0),0)
assert almost_equal(NN_AND(0,1),0)
assert almost_equal(NN_AND(1,0),0)
assert almost_equal(NN_AND(1,1),1)
```
### Analogically we "implement" NN that computes NOT operation:
| x | NOT|
|---|--- |
| 0 | 1
| 1 | 0
### Input:
* x0 = 1 (bias term)
* x in [0,1]
### Weights:
We "guess w0=20 and w1 =-30
```
def NN_NOT(x):
weights =[20, -30]
inputs = [1, x]
    return neuron(inputs, weights)
print(NN_NOT(1))
print(NN_NOT(0))
assert almost_equal(NN_NOT(1),0)
assert almost_equal(NN_NOT(0),1)
```
### XOR operation
| x1| x2| XOR|
|---|---|---|
0 | 0 | 0
0 | 1 | 1
1 | 0 | 1
1 | 1 | 0
It is known that XOR cannot be expressed with a single layer, since XOR is not linearly separable.
However, XOR is equivalent to (x1 OR x2) AND NOT(x1 AND x2), so it can be composed from the gates above.
### Input:
* x0 = 1 (bias term)
* x1,x2 in [0,1]
We will use a combination of the already existing gates:
```
def NN_XOR(x1,x2):
first = NN_OR(x1,x2)
second = NN_AND(x1,x2)
return NN_AND(first, NN_NOT(second))
print(NN_XOR(1,0))
print(NN_XOR(0,0))
print(NN_XOR(1,1))
assert almost_equal(NN_XOR(0,0),0)
assert almost_equal(NN_XOR(0,1),1)
assert almost_equal(NN_XOR(1,0),1)
assert almost_equal(NN_XOR(1,1),0)
```
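As a closing illustration of the same idea, a single neuron can also represent NAND, a gate from which every other gate (including XOR) can in principle be composed. Below is a sketch in the same spirit, with hand-picked weights obtained by negating the AND weights:
```
def NN_NAND(x1, x2):
    # NOT(x1 AND x2): negate the weights of the AND gate.
    weights = [30, -20, -20]
    inputs = [1, x1, x2]
    return neuron(inputs, weights)

assert almost_equal(NN_NAND(0, 0), 1)
assert almost_equal(NN_NAND(0, 1), 1)
assert almost_equal(NN_NAND(1, 0), 1)
assert almost_equal(NN_NAND(1, 1), 0)
```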