# Section 1.2: Dimension reduction and principal component analysis (PCA)
One of the iron laws of data science is known as the "curse of dimensionality": as the number of considered features (dimensions) of a feature space increases, the number of data configurations can grow exponentially, and thus the number of observations (data points) needed to account for these configurations must also increase. Because this fact of life has huge ramifications for the time, computational effort, and memory required, it is often desirable to reduce the number of dimensions we have to work with.
One way to accomplish this is by reducing the number of features considered in an analysis. After all, not all features are created equal, and some yield more insight for a given analysis than others. While this type of feature engineering is necessary in any data-science project, we can only take it so far; up to a point, considering more features can often increase the accuracy of a classifier. (For example, consider how many features could increase the accuracy of classifying images as cats or dogs.)
## PCA in theory
Another way to reduce the number of dimensions that we have to work with is by projecting our feature space into a lower dimensional space. The reason why we can do this is that in most real-world problems, data points are not spread uniformly across all dimensions. Some features might be near constant, while others are highly correlated, which means that those data points lie close to a lower-dimensional subspace.
In the image below, the data points are not spread across the entire plane, but are nicely clumped, roughly in an oval. Because the cluster (or, indeed, any cluster) is roughly elliptical, it can be mathematically described by two values: its major (long) axis and its minor (short) axis. These axes form the *principal components* of the cluster.
<img align="center" style="padding-right:10px;" src="Images/PCA.png">
In fact, we can construct a whole new feature space around this cluster, defined by two *eigenvectors* (the vectors that define the linear transformation to this new feature space), $c_{1}$ and $c_{2}$. Better still, we don't have to consider all of the dimensions of this new space. Intuitively, we can see that most of the points lie on or close to the line that runs through $c_{1}$. So, if we project the cluster down from two dimensions to that single dimension, we capture most of the information about this dataset while simplifying our analysis. This ability to extract most of the information from a dataset by considering only a fraction of its definitive eigenvectors forms the heart of principal component analysis (PCA).
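As a small aside (not part of the course code), the same idea can be sketched directly with NumPy on a synthetic two-dimensional cluster; the cluster, the variable names, and the eigen-decomposition route below are all illustrative assumptions rather than the approach we will use later (we will rely on scikit-learn's `PCA` instead).
```
import numpy as np

# Illustrative synthetic cluster: correlated 2-D points, roughly an ellipse.
rng = np.random.default_rng(0)
points = rng.multivariate_normal(mean=[0, 0], cov=[[3, 2], [2, 2]], size=500)

# Center the data, then take the eigenvectors of its covariance matrix.
centered = points - points.mean(axis=0)
eigenvalues, eigenvectors = np.linalg.eigh(np.cov(centered, rowvar=False))

# The eigenvector with the largest eigenvalue is c1, the major axis.
c1 = eigenvectors[:, np.argmax(eigenvalues)]

# Projecting onto c1 reduces the cluster from two dimensions to one
# while keeping most of the spread (information) in the data.
projected = centered @ c1
print(projected.shape)  # (500,) -- one coordinate per point
```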
## Import modules and dataset
You will need to clean and prepare the data in order to conduct PCA on it, so pandas will be essential. You will also need NumPy, a bit of scikit-learn, and pyplot.
```
import pandas as pd
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
import matplotlib.pyplot as plt
%matplotlib inline
```
The dataset we’ll use here is the same one drawn from the [U.S. Department of Agriculture National Nutrient Database for Standard Reference](https://www.ars.usda.gov/northeast-area/beltsville-md-bhnrc/beltsville-human-nutrition-research-center/nutrient-data-laboratory/docs/usda-national-nutrient-database-for-standard-reference/) that you prepared in Section 1.1. Remember to set the encoding to `latin_1` (for those darn µg).
```
df = pd.read_csv('Data/USDA-nndb-combined.csv', encoding='latin_1')
```
We can check the number of columns and rows using the `info()` method for the `DataFrame`.
```
df.info()
```
> **Exercise**
>
> Can you think of a more concise way to check the number of rows and columns in a `DataFrame`? (***Hint:*** Use one of the [attributes](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html) of the `DataFrame`.)
## Handle `null` values
Because this is a real-world dataset, it is a safe bet that it has `null` values in it. We could first check to see if this is true. However, later on in this section, we will have to transform our data using a function that cannot use `NaN` values, so we might as well drop rows containing those values.
> **Exercise**
>
> Drop rows from the `DataFrame` that contain `NaN` values. (If you need help remembering which method to use, see [this page](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.dropna.html).)
> **Exercise solution**
>
> The correct code to use is `df = df.dropna()`.
Now let’s see how many rows we have left.
```
df.shape
```
Dropping those rows eliminated 76 percent of our data (8989 entries to 2190). An imperfect state of affairs, but we still have enough for our purposes in this section.
> **Key takeaway:** Another solution to removing `null` values is to impute values for them, but this can be tricky. Should we handle missing values as equal to 0? What about a fatty food with `NaN` for `Lipid_Tot_(g)`? We could try taking the averages of values surrounding a `NaN`, but what about foods that are right next to rows containing foods from radically different food groups? It is possible to make justifiable imputations for missing values, but it can be important to involve subject-matter experts (SMEs) in that process.
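For illustration only, here is a hedged sketch of what a group-aware imputation might look like instead of dropping rows. It starts from a fresh copy of the raw file because our working `DataFrame` has already had its `NaN` rows dropped, and whether a food-group mean is a defensible fill value is exactly the kind of question to put to an SME; we will stick with the dropped-rows `DataFrame` for the rest of this section.
```
# Sketch: impute missing fat values with the mean of each food group, which at
# least respects how widely fat content varies from one food group to another.
imputed_df = pd.read_csv('Data/USDA-nndb-combined.csv', encoding='latin_1')
imputed_df['Lipid_Tot_(g)'] = (imputed_df.groupby('FoodGroup')['Lipid_Tot_(g)']
                                         .transform(lambda s: s.fillna(s.mean())))
print(imputed_df['Lipid_Tot_(g)'].isna().sum())  # any fat values still missing?
```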
## Split off descriptive columns
Our descriptive columns (such as `FoodGroup` and `Shrt_Desc`) pose challenges for us when it comes time to perform PCA because they are categorical rather than numerical features, so we will split our `DataFrame` into one containing the descriptive information and one containing the nutritional information.
```
desc_df = df.iloc[:, [0, 1, 2]+[i for i in range(50,54)]]
desc_df.set_index('NDB_No', inplace=True)
desc_df.head()
```
> **Question**
>
> Why was it necessary to structure the `iloc` method call the way we did in the code cell above? What did it accomplish? Why was it necessary to set the `desc_df` index to `NDB_No`?
```
nutr_df = df.iloc[:, :-5]
nutr_df.head()
```
> **Question**
>
> What did the `iloc` syntax do in the code cell above?
```
nutr_df = nutr_df.drop(['FoodGroup', 'Shrt_Desc'], axis=1)
```
> **Exercise**
>
> Now set the index of `nutr_df` to use `NDB_No`.
> **Exercise solution**
>
> The correct code for students to use here is `nutr_df.set_index('NDB_No', inplace=True)`.
Now let’s take a look at `nutr_df`.
```
nutr_df.head()
```
## Check for correlation among features
One thing that can skew our classification results is correlation among our features. Recall that the whole reason that PCA works is that it exploits the correlation among data points to project our feature space into a lower-dimensional space. However, if some of our features are highly correlated to begin with, these relationships might create spurious clusters of data in our PCA.
The code to check for correlations in our data isn't long, but it takes too long (up to 10 to 20 minutes) to run for a course like this. Instead, the table below shows the output from that code (a sketch of the code itself follows the table):
| | column | row | corr |
|--:|------------------:|------------------:|-----:|
| 0 | Folate\_Tot\_(µg) | Folate\_DFE\_(µg) | 0.98 |
| 1 | Folic\_Acid\_(µg) | Folate\_DFE\_(µg) | 0.95 |
| 2 | Folate\_DFE\_(µg) | Folate\_Tot\_(µg) | 0.98 |
| 3 | Vit\_A\_RAE | Retinol\_(µg) | 0.99 |
| 4 | Retinol\_(µg) | Vit\_A\_RAE | 0.99 |
| 5 | Vit\_D\_µg | Vit\_D\_IU | 1 |
| 6 | Vit\_D\_IU | Vit\_D\_µg | 1 |
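For reference, below is a sketch of the kind of code that could reproduce this table; it scans the correlation matrix of the numeric columns for strongly correlated off-diagonal pairs. The 0.9 cutoff is an illustrative assumption, and the original (slower) course code may have been written differently.
```
# Sketch: list pairs of distinct columns whose absolute correlation exceeds 0.9.
corr_matrix = nutr_df.corr()
high_corr = [(col, row, round(corr_matrix.loc[row, col], 2))
             for col in corr_matrix.columns
             for row in corr_matrix.index
             if col != row and abs(corr_matrix.loc[row, col]) > 0.9]
pd.DataFrame(high_corr, columns=['column', 'row', 'corr'])
```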
As it turns out, dropping `Folate_DFE_(µg)`, `Vit_A_RAE`, and `Vit_D_IU` will eliminate the correlations enumerated in the table above.
```
nutr_df.drop(['Folate_DFE_(µg)', 'Vit_A_RAE', 'Vit_D_IU'],
inplace=True, axis=1)
nutr_df.head()
```
## Normalize and center the data
Our numeric data comes in a variety of mass units (grams, milligrams, and micrograms) and one energy unit (kilocalories). In order to make an apples-to-apples comparison (pun intended) of the nutritional data, we need to first *normalize* the data and make it more normally distributed (that is, make the distribution of the data look more like a familiar bell curve).
To help see why we need to normalize the data, let's look at a histogram of all of the columns.
```
ax = nutr_df.hist(bins=50, xlabelsize=-1, ylabelsize=-1, figsize=(11,11))
```
Not a bell curve in sight. Worse, a lot of the data is clumped at or around 0. We will use the Box-Cox Transformation on the data, but it requires strictly positive input, so we will add 1 to every value in each column.
```
nutr_df = nutr_df + 1
```
Now for the transformation. The [Box-Cox Transformation](https://www.statisticshowto.datasciencecentral.com/box-cox-transformation/) performs the transformation $y(\lambda) = \dfrac{y^{\lambda}-1}{\lambda}$ for $\lambda \neq 0$ and $y(\lambda) = \log y$ for $\lambda = 0$ for all values $y$ in a given column. SciPy has a particularly useful `boxcox()` function that can automatically calculate the $\lambda$ for each column that best normalizes the data in that column. (However, it does not support `NaN` values; scikit-learn has a comparable `boxcox()` function that is `NaN`-safe, but it is not available in the version of scikit-learn that comes with Azure Notebooks.)
```
from scipy.stats import boxcox
nutr_df_TF = pd.DataFrame(index=nutr_df.index)
for col in nutr_df.columns.values:
    nutr_df_TF['{}_TF'.format(col)] = boxcox(nutr_df.loc[:, col])[0]
```
Let's now take a look at the `DataFrame` containing the transformed data.
```
ax = nutr_df_TF.hist(bins=50, xlabelsize=-1, ylabelsize=-1, figsize=(11,11))
```
Few of these columns look properly normal, but the data is now in good enough shape to center it.
Our data units were incompatible to begin with, and the transformations have not improved that. But we can address that by centering the data around 0; that is, we will again transform the data, this time so that every column has a mean of 0 and a standard deviation of 1. Scikit-learn has a convenient function for this.
```
nutr_df_TF = StandardScaler().fit_transform(nutr_df_TF)
```
You can satisfy yourself that the data is now centered by calling the `mean()` method on the transformed data.
```
print("mean: ", np.round(nutr_df_TF.mean(), 2))
```
> **Exercise**
>
> Find the standard deviation for the `nutr_df_TF`. (If you need a hint as to which method to use, see [this page](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.std.html).)
> **Exercise solution**
>
> The correct code to use here is `print("s.d.: ", np.round(nutr_df_TF.std(), 2))`.
## PCA in practice
It is finally time to perform the PCA on our data. (As stated before, even with pretty clean data, a lot of effort has to go into preparing the data for analysis.)
```
fit = PCA()
pca = fit.fit_transform(nutr_df_TF)
```
So, now that we have performed the PCA on our data, what do we actually have? Remember that PCA is foremost about finding the eigenvectors for our data. We then want to select some subset of those vectors to form the lower-dimensional subspace in which to analyze our data.
Not all of the eigenvectors are created equal. Just a few of them will account for the majority of the variance in the data. (Put another way, a subspace composed of just a few of the eigenvectors will retain the majority of the information from our data.) We want to focus on those vectors.
To help us get a sense of how many vectors we should use, consider this scree graph of the variance for the PCA components, which plots the variance explained by the components from greatest to least.
```
plt.plot(fit.explained_variance_ratio_)
```
This is where data science can become an art. As a rule of thumb, we want to look for the "elbow" in the graph, which is the point at which the first few components have captured the majority of the variance in the data (after that point, we are only adding complexity to the analysis for increasingly diminishing returns). In this particular case, that appears to be at about five components.
We can take the cumulative sum of the first five components to see how much variance they capture in total.
```
print(fit.explained_variance_ratio_[:5].sum())
```
So our five components capture about 70 percent of the variance. We can see what fewer or additional components would yield by looking at the cumulative variance for all of the components.
```
print(fit.explained_variance_ratio_.cumsum())
```
We can also examine this visually.
```
plt.plot(np.cumsum(fit.explained_variance_ratio_))
plt.title("Cumulative Explained Variance Graph")
```
Ultimately, it is a matter of judgment as to how many components to use, but five vectors (and 70 percent of the variance) will suffice for our purposes in this section.
To aid further analysis, let's now put those five components into a DataFrame.
```
pca_df = pd.DataFrame(pca[:, :5], index=df.index)
pca_df.head()
```
Each column represents one of the five principal components (eigenvectors), and each row gives the coordinates of one observation in this new five-dimensional space.
We will want to add the FoodGroup column back in to aid with our interpretation of the data later on. Let's also rename the component-columns $c_{1}$ through $c_{5}$ so that we know what we are looking at.
```
pca_df = pca_df.join(desc_df)
pca_df.drop(['Shrt_Desc', 'GmWt_Desc1', 'GmWt_2', 'GmWt_Desc2', 'Refuse_Pct'],
axis=1, inplace=True)
pca_df.rename(columns={0:'c1', 1:'c2', 2:'c3', 3:'c4', 4:'c5'},
inplace=True)
pca_df.head()
```
Don't worry that the FoodGroup column has all `NaN` values: it is not a vector, so it has no vector coordinates.
One last thing we should demonstrate is that each of the components is mutually perpendicular (or orthogonal in math-speak). One way of expressing that condition is that each component-vector should perfectly correspond with itself and not correlate at all (positively or negatively) with any other vector.
```
np.round(pca_df.corr(), 5)
```
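As a complementary check (an aside, not part of the original flow), the component vectors that scikit-learn returns are orthonormal, so their pairwise dot products should form an identity matrix:
```
# Rows of fit.components_ are unit-length eigenvectors; orthogonality means the
# off-diagonal dot products are 0 and each vector's dot product with itself is 1.
print(np.round(fit.components_[:5] @ fit.components_[:5].T, 5))
```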
## Interpreting the results
What do our vectors mean? Put another way, what kinds of foods populate the different clusters we have discovered among the data?
To see these results, we will create pandas Series for each of the components, index them by feature, and then sort them in decreasing order (so that a higher number represents a feature that is positively correlated with that vector and a negative number represents a feature that is negatively correlated with it).
```
vects = fit.components_[:5]
c1 = pd.Series(vects[0], index=nutr_df.columns)
c1.sort_values(ascending=False)
```
Our first cluster is defined by foods that are high in protein and minerals like selenium and zinc while also being low in sugars and vitamin C. Even to a non-specialist, these sound like foods such as meat, poultry, or legumes.
> **Key takeaway:** Particularly when it comes to interpretation, subject-matter expertise can prove essential to producing high-quality analysis. For this reason, you should also try to include SMEs in your data-science projects.
```
c2 = pd.Series(vects[1], index=nutr_df.columns)
c2.sort_values(ascending=False)
```
Our second group is foods that are high in fiber and folic acid and low in cholesterol.
> **Exercise**
>
> Find the sorted output for $c_{3}$, $c_{4}$, and $c_{5}$.
>
> ***Hint:*** Remember that Python uses zero-indexing.
Even without subject-matter expertise, is it possible to get a more accurate sense of the kinds of foods defined by each component? Yes! This is the reason we merged the `FoodGroup` column back into `pca_df`. We will sort that `DataFrame` by the components and count the values from `FoodGroup` for the top items.
```
pca_df.sort_values(by='c1')['FoodGroup'][:500].value_counts()
```
We can do the same thing for $c_{2}$.
```
pca_df.sort_values(by='c2')['FoodGroup'][:500].value_counts()
```
> **Exercise**
>
> Repeat this process for $c_{3}$, $c_{4}$, and $c_{5}$.
> **A parting note:** `Baby Foods` and some other categories might seem to dominate several of the components. This is a product of all of the rows we had to drop because they had `NaN` values. If we look at all of the value counts for `FoodGroup`, we will see that they are not evenly distributed, with some categories far more represented than others.
```
df['FoodGroup'].value_counts()
```
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/geemap/tree/master/examples/notebooks/geemap_and_ipyleaflet.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/geemap/blob/master/examples/notebooks/geemap_and_ipyleaflet.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/geemap/blob/master/examples/notebooks/geemap_and_ipyleaflet.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API and geemap
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.
The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.
**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
```
# Installs geemap package
import subprocess

try:
    import geemap
except ImportError:
    print('geemap package not installed. Installing ...')
    subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])

# Checks whether this notebook is running on Google Colab
try:
    import google.colab
    import geemap.eefolium as emap
except:
    import geemap as emap

# Authenticates and initializes Earth Engine
import ee

try:
    ee.Initialize()
except Exception as e:
    ee.Authenticate()
    ee.Initialize()
```
## Create an interactive map
```
import geemap
Map = geemap.Map(center=(40, -100), zoom=4)
Map.add_minimap(position='bottomright')
Map
```
## Add tile layers
For example, you can add a Google Maps tile layer:
```
url = 'https://mt1.google.com/vt/lyrs=m&x={x}&y={y}&z={z}'
Map.add_tile_layer(url, name='Google Map', attribution='Google')
```
Add Google Terrain tile layer:
```
url = 'https://mt1.google.com/vt/lyrs=p&x={x}&y={y}&z={z}'
Map.add_tile_layer(url, name='Google Terrain', attribution='Google')
```
## Add WMS layers
More WMS layers can be found at <https://viewer.nationalmap.gov/services/>.
For example, you can add NAIP imagery.
```
url = 'https://services.nationalmap.gov/arcgis/services/USGSNAIPImagery/ImageServer/WMSServer?'
Map.add_wms_layer(url=url, layers='0', name='NAIP Imagery', format='image/png')
```
Add USGS 3DEP Elevation Dataset
```
url = 'https://elevation.nationalmap.gov/arcgis/services/3DEPElevation/ImageServer/WMSServer?'
Map.add_wms_layer(url=url, layers='3DEPElevation:None', name='3DEP Elevation', format='image/png')
```
## Capture user inputs
```
import geemap
from ipywidgets import Label
from ipyleaflet import Marker
Map = geemap.Map(center=(40, -100), zoom=4)
label = Label()
display(label)
coordinates = []
def handle_interaction(**kwargs):
    latlon = kwargs.get('coordinates')
    if kwargs.get('type') == 'mousemove':
        label.value = str(latlon)
    elif kwargs.get('type') == 'click':
        coordinates.append(latlon)
        Map.add_layer(Marker(location=latlon))
Map.on_interaction(handle_interaction)
Map
print(coordinates)
```
## A simpler way for capturing user inputs
```
import geemap
Map = geemap.Map(center=(40, -100), zoom=4)
cluster = Map.listening(event='click', add_marker=True)
Map
# Get the last mouse clicked coordinates
Map.last_click
# Get all the mouse clicked coordinates
Map.all_clicks
```
## SplitMap control
```
import geemap
from ipyleaflet import *
Map = geemap.Map(center=(47.50, -101), zoom=7)
right_layer = WMSLayer(
url = 'https://ndgishub.nd.gov/arcgis/services/Imagery/AerialImage_ND_2017_CIR/ImageServer/WMSServer?',
layers = 'AerialImage_ND_2017_CIR',
name = 'AerialImage_ND_2017_CIR',
format = 'image/png'
)
left_layer = WMSLayer(
url = 'https://ndgishub.nd.gov/arcgis/services/Imagery/AerialImage_ND_2018_CIR/ImageServer/WMSServer?',
layers = 'AerialImage_ND_2018_CIR',
name = 'AerialImage_ND_2018_CIR',
format = 'image/png'
)
control = SplitMapControl(left_layer=left_layer, right_layer=right_layer)
Map.add_control(control)
Map.add_control(LayersControl(position='topright'))
Map.add_control(FullScreenControl())
Map
import geemap
Map = geemap.Map()
Map.split_map(left_layer='HYBRID', right_layer='ESRI')
Map
```
# An Introduction to SageMaker LDA
***Finding topics in synthetic document data using Spectral LDA algorithms.***
---
1. [Introduction](#Introduction)
1. [Setup](#Setup)
1. [Training](#Training)
1. [Inference](#Inference)
1. [Epilogue](#Epilogue)
# Introduction
***
Amazon SageMaker LDA is an unsupervised learning algorithm that attempts to describe a set of observations as a mixture of distinct categories. Latent Dirichlet Allocation (LDA) is most commonly used to discover a user-specified number of topics shared by documents within a text corpus. Here each observation is a document, the features are the presence (or occurrence count) of each word, and the categories are the topics. Since the method is unsupervised, the topics are not specified up front, and are not guaranteed to align with how a human may naturally categorize documents. The topics are learned as a probability distribution over the words that occur in each document. Each document, in turn, is described as a mixture of topics.
In this notebook we will use the Amazon SageMaker LDA algorithm to train an LDA model on some example synthetic data. We will then use this model to classify (perform inference on) the data. The main goals of this notebook are to,
* learn how to obtain and store data for use in Amazon SageMaker,
* create an AWS SageMaker training job on a data set to produce an LDA model,
* use the LDA model to perform inference with an Amazon SageMaker endpoint.
The following are ***not*** goals of this notebook:
* understand the LDA model,
* understand how the Amazon SageMaker LDA algorithm works,
* interpret the meaning of the inference output
If you would like to know more about these things, take a minute to run this notebook and then check out the SageMaker LDA Documentation and the **LDA-Science.ipynb** notebook.
```
!conda install -y scipy
%matplotlib inline
import os, re
import boto3
import matplotlib.pyplot as plt
import numpy as np
np.set_printoptions(precision=3, suppress=True)
# some helpful utility functions are defined in the Python module
# "generate_example_data" located in the same directory as this
# notebook
from generate_example_data import generate_griffiths_data, plot_lda, match_estimated_topics
# accessing the SageMaker Python SDK
import sagemaker
from sagemaker.amazon.common import RecordSerializer
from sagemaker.serializers import CSVSerializer
from sagemaker.deserializers import JSONDeserializer
```
# Setup
***
*This notebook was created and tested on an ml.m4.xlarge notebook instance.*
Before we do anything at all, we need data! We also need to setup our AWS credentials so that AWS SageMaker can store and access data. In this section we will do four things:
1. [Setup AWS Credentials](#SetupAWSCredentials)
1. [Obtain Example Dataset](#ObtainExampleDataset)
1. [Inspect Example Data](#InspectExampleData)
1. [Store Data on S3](#StoreDataonS3)
## Setup AWS Credentials
We first need to specify some AWS credentials; specifically data locations and access roles. This is the only cell of this notebook that you will need to edit. In particular, we need the following data:
* `bucket` - An S3 bucket accessible by this account.
* Used to store input training data and model data output.
* Should be within the same region as this notebook instance, training, and hosting.
* `prefix` - The location in the bucket where this notebook's input and and output data will be stored. (The default value is sufficient.)
* `role` - The IAM Role ARN used to give training and hosting access to your data.
* See documentation on how to create these.
* The script below will try to determine an appropriate Role ARN.
```
from sagemaker import get_execution_role
session = sagemaker.Session()
role = get_execution_role()
bucket = session.default_bucket()
prefix = 'sagemaker/DEMO-lda-introduction'
print('Training input/output will be stored in {}/{}'.format(bucket, prefix))
print('\nIAM Role: {}'.format(role))
```
## Obtain Example Data
We generate some example synthetic document data. For the purposes of this notebook we will omit the details of this process. All we need to know is that each piece of data, commonly called a *"document"*, is a vector of integers representing *"word counts"* within the document. In this particular example there are a total of 25 words in the *"vocabulary"*.
$$
\underbrace{w}_{\text{document}} = \overbrace{\big[ w_1, w_2, \ldots, w_V \big] }^{\text{word counts}},
\quad
V = \text{vocabulary size}
$$
These data are based on that used by Griffiths and Steyvers in their paper [Finding Scientific Topics](http://psiexp.ss.uci.edu/research/papers/sciencetopics.pdf). For more information, see the **LDA-Science.ipynb** notebook.
```
print('Generating example data...')
num_documents = 6000
num_topics = 5
known_alpha, known_beta, documents, topic_mixtures = generate_griffiths_data(
num_documents=num_documents, num_topics=num_topics)
vocabulary_size = len(documents[0])
# separate the generated data into training and tests subsets
num_documents_training = int(0.9*num_documents)
num_documents_test = num_documents - num_documents_training
documents_training = documents[:num_documents_training]
documents_test = documents[num_documents_training:]
topic_mixtures_training = topic_mixtures[:num_documents_training]
topic_mixtures_test = topic_mixtures[num_documents_training:]
print('documents_training.shape = {}'.format(documents_training.shape))
print('documents_test.shape = {}'.format(documents_test.shape))
```
## Inspect Example Data
*What does the example data actually look like?* Below we print an example document as well as its corresponding known *topic-mixture*. A topic-mixture serves as the "label" in the LDA model. It describes the ratio of topics from which the words in the document are found.
For example, if the topic mixture of an input document $\mathbf{w}$ is,
$$\theta = \left[ 0.3, 0.2, 0, 0.5, 0 \right]$$
then $\mathbf{w}$ is 30% generated from the first topic, 20% from the second topic, and 50% from the fourth topic. For more information see **How LDA Works** in the SageMaker documentation as well as the **LDA-Science.ipynb** notebook.
Below, we compute the topic mixtures for the first few training documents. As we can see, each document is a vector of word counts from the 25-word vocabulary and its topic-mixture is a probability distribution across the five topics used to generate the sample dataset.
```
print('First training document =\n{}'.format(documents[0]))
print('\nVocabulary size = {}'.format(vocabulary_size))
print('Known topic mixture of first document =\n{}'.format(topic_mixtures_training[0]))
print('\nNumber of topics = {}'.format(num_topics))
print('Sum of elements = {}'.format(topic_mixtures_training[0].sum()))
```
Later, when we perform inference on the training data set we will compare the inferred topic mixture to this known one.
---
Human beings are visual creatures, so it might be helpful to come up with a visual representation of these documents. In the below plots, each pixel of a document represents a word. The greyscale intensity is a measure of how frequently that word occurs. Below we plot the first few documents of the training set reshaped into 5x5 pixel grids.
```
%matplotlib inline
fig = plot_lda(documents_training, nrows=3, ncols=4, cmap='gray_r', with_colorbar=True)
fig.suptitle('Example Document Word Counts')
fig.set_dpi(160)
```
## Store Data on S3
A SageMaker training job needs access to training data stored in an S3 bucket. Although training can accept data of various formats, we convert the documents to MXNet RecordIO Protobuf format before uploading to the S3 bucket defined at the beginning of this notebook. We do so by making use of the SageMaker Python SDK utility `RecordSerializer`.
```
# convert documents_training to Protobuf RecordIO format
recordio_protobuf_serializer = RecordSerializer()
fbuffer = recordio_protobuf_serializer.serialize(documents_training)
# upload to S3 in bucket/prefix/train
fname = 'lda.data'
s3_object = os.path.join(prefix, 'train', fname)
boto3.Session().resource('s3').Bucket(bucket).Object(s3_object).upload_fileobj(fbuffer)
s3_train_data = 's3://{}/{}'.format(bucket, s3_object)
print('Uploaded data to S3: {}'.format(s3_train_data))
```
# Training
***
Once the data is preprocessed and available in a recommended format, the next step is to train our model on the data. There are a number of parameters required by SageMaker LDA for configuring the model and defining the computational environment in which training will take place.
First, we specify a Docker container containing the SageMaker LDA algorithm. For your convenience, a region-specific container is automatically chosen for you to minimize cross-region data communication. Information about the locations of each SageMaker algorithm is available in the documentation.
```
from sagemaker.amazon.amazon_estimator import get_image_uri
# select the algorithm container based on this notebook's current location
region_name = boto3.Session().region_name
container = get_image_uri(region_name, 'lda')
print('Using SageMaker LDA container: {} ({})'.format(container, region_name))
```
Particular to a SageMaker LDA training job are the following hyperparameters:
* **`num_topics`** - The number of topics or categories in the LDA model.
* Usually, this is not known a priori.
* In this example, however, we know that the data is generated by five topics.
* **`feature_dim`** - The size of the *"vocabulary"*, in LDA parlance.
* In this example, this is equal to 25.
* **`mini_batch_size`** - The number of input training documents.
* **`alpha0`** - *(optional)* a measurement of how "mixed" are the topic-mixtures.
* When `alpha0` is small the data tends to be represented by one or few topics.
* When `alpha0` is large the data tends to be an even combination of several or many topics.
* The default value is `alpha0 = 1.0`.
In addition to these LDA model hyperparameters, we provide additional parameters defining things like the EC2 instance type on which training will run, the S3 bucket containing the data, and the AWS access role. Note that,
* Recommended instance type: `ml.c4`
* Current limitations:
* SageMaker LDA *training* can only run on a single instance.
* SageMaker LDA does not take advantage of GPU hardware.
* (The Amazon AI Algorithms team is working hard to provide these capabilities in a future release!)
```
# specify general training job information
lda = sagemaker.estimator.Estimator(
container,
role,
output_path='s3://{}/{}/output'.format(bucket, prefix),
train_instance_count=1,
train_instance_type='ml.c4.2xlarge',
sagemaker_session=session,
)
# set algorithm-specific hyperparameters
lda.set_hyperparameters(
num_topics=num_topics,
feature_dim=vocabulary_size,
mini_batch_size=num_documents_training,
alpha0=1.0,
)
# run the training job on input data stored in S3
lda.fit({'train': s3_train_data})
```
If you see the message
> `===== Job Complete =====`
at the bottom of the output logs, then that means training successfully completed and the output LDA model was stored in the specified output path. You can also view information about and the status of a training job using the AWS SageMaker console. Just click on the "Jobs" tab and select the training job matching the training job name, below:
```
print('Training job name: {}'.format(lda.latest_training_job.job_name))
```
# Inference
***
A trained model does nothing on its own. We now want to use the model we computed to perform inference on data. For this example, that means predicting the topic mixture representing a given document.
We create an inference endpoint using the SageMaker Python SDK `deploy()` function from the job we defined above. We specify the instance type where inference is computed as well as an initial number of instances to spin up.
```
lda_inference = lda.deploy(
initial_instance_count=1,
instance_type='ml.m4.xlarge', # LDA inference may work better at scale on ml.c4 instances
)
```
Congratulations! You now have a functioning SageMaker LDA inference endpoint. You can confirm the endpoint configuration and status by navigating to the "Endpoints" tab in the AWS SageMaker console and selecting the endpoint matching the endpoint name, below:
```
print('Endpoint name: {}'.format(lda_inference.endpoint_name))
```
With this realtime endpoint at our fingertips we can finally perform inference on our training and test data.
We can pass a variety of data formats to our inference endpoint. In this example we will demonstrate passing CSV-formatted data. Other available formats are JSON-formatted, JSON-sparse-formatted, and RecordIO Protobuf. We make use of the SageMaker Python SDK utilities `CSVSerializer` and `JSONDeserializer` when configuring the inference endpoint.
```
lda_inference.serializer = CSVSerializer()
lda_inference.deserializer = JSONDeserializer()
```
We pass some test documents to the inference endpoint. Note that the serializer and deserializer will automatically take care of the datatype conversion from NumPy NDArrays.
```
results = lda_inference.predict(documents_test[:12])
print(results)
```
It may be hard to see, but the output format of the SageMaker LDA inference endpoint is a Python dictionary with the following format.
```
{
    'predictions': [
        {'topic_mixture': [ ... ]},
        {'topic_mixture': [ ... ]},
        {'topic_mixture': [ ... ]},
        ...
    ]
}
```
We extract the topic mixtures, themselves, corresponding to each of the input documents.
```
computed_topic_mixtures = np.array([prediction['topic_mixture'] for prediction in results['predictions']])
print(computed_topic_mixtures)
```
If you decide to compare these results to the known topic mixtures generated in the [Obtain Example Data](#ObtainExampleData) Section keep in mind that SageMaker LDA discovers topics in no particular order. That is, the approximate topic mixtures computed above may be permutations of the known topic mixtures corresponding to the same documents.
```
print(topic_mixtures_test[0]) # known test topic mixture
print(computed_topic_mixtures[0]) # computed topic mixture (topics permuted)
```
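As an aside (not part of the original notebook flow), one hedged way to line the two up for comparison is to search for the permutation of estimated topics that best correlates with the known ones, for example with the Hungarian algorithm from SciPy. The variable names come from earlier cells, and alignment-by-correlation is an assumption about what "matching" should mean here.
```
from scipy.optimize import linear_sum_assignment

# Correlate each known topic with each computed topic across the documents we
# ran inference on, then pick the permutation that maximizes total correlation.
known = np.array(topic_mixtures_test[:len(computed_topic_mixtures)])
cross_corr = np.corrcoef(known.T, computed_topic_mixtures.T)[:num_topics, num_topics:]
_, col_ind = linear_sum_assignment(-cross_corr)

# Reorder the computed topic columns so that column i lines up with known topic i.
aligned_topic_mixtures = computed_topic_mixtures[:, col_ind]
print(col_ind)  # the inferred topic permutation
```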
## Stop / Close the Endpoint
Finally, we should delete the endpoint before we close the notebook.
To do so execute the cell below. Alternately, you can navigate to the "Endpoints" tab in the SageMaker console, select the endpoint with the name stored in the variable `endpoint_name`, and select "Delete" from the "Actions" dropdown menu.
```
sagemaker.Session().delete_endpoint(lda_inference.endpoint_name)
```
# Epilogue
---
In this notebook we,
* generated some example LDA documents and their corresponding topic-mixtures,
* trained a SageMaker LDA model on a training set of documents,
* created an inference endpoint,
* used the endpoint to infer the topic mixtures of a test input.
There are several things to keep in mind when applying SageMaker LDA to real-world data such as a corpus of text documents. Note that input documents to the algorithm, both in training and inference, need to be vectors of integers representing word counts. Each index corresponds to a word in the corpus vocabulary. Therefore, one will need to "tokenize" their corpus vocabulary.
$$
\text{"cat"} \mapsto 0, \; \text{"dog"} \mapsto 1 \; \text{"bird"} \mapsto 2, \ldots
$$
Each text document then needs to be converted to a "bag-of-words" format document.
$$
w = \text{"cat bird bird bird cat"} \quad \longmapsto \quad w = [2, 0, 3, 0, \ldots, 0]
$$
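A minimal sketch of that tokenize-and-count conversion is shown below; the three-word vocabulary and the helper function are illustrative assumptions, not preprocessing that SageMaker LDA performs for you.
```
from collections import Counter

vocabulary = {"cat": 0, "dog": 1, "bird": 2}  # illustrative token-to-index map

def to_bag_of_words(text, vocabulary):
    """Convert a whitespace-tokenized document into a word-count vector."""
    counts = Counter(text.split())
    return [counts.get(word, 0) for word in sorted(vocabulary, key=vocabulary.get)]

print(to_bag_of_words("cat bird bird bird cat", vocabulary))  # [2, 0, 3]
```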
Also note that many real-world applications have large vocabulary sizes. It may be necessary to represent the input documents in sparse format. Finally, the use of stemming and lemmatization in data preprocessing provides several benefits. Doing so can improve training and inference compute time since it reduces the effective vocabulary size. More importantly, though, it can improve the quality of learned topic-word probability matrices and inferred topic mixtures. For example, the words *"parliament"*, *"parliaments"*, *"parliamentary"*, *"parliament's"*, and *"parliamentarians"* are all essentially the same word, *"parliament"*, but with different conjugations. For the purposes of detecting topics, such as a *"politics"* or *"government"* topic, the inclusion of all five does not add much additional value as they all essentially describe the same feature.
# Word2Vec
**Learning Objectives**
1. Compile all steps into one function
2. Prepare training data for Word2Vec
3. Model and Training
4. Embedding lookup and analysis
## Introduction
Word2Vec is not a singular algorithm, rather, it is a family of model architectures and optimizations that can be used to learn word embeddings from large datasets. Embeddings learned through Word2Vec have proven to be successful on a variety of downstream natural language processing tasks.
Note: This notebook is based on [Efficient Estimation of Word Representations in Vector Space](https://arxiv.org/pdf/1301.3781.pdf) and
[Distributed
Representations of Words and Phrases and their Compositionality](https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf). It is not an exact implementation of the papers. Rather, it is intended to illustrate the key ideas.
These papers proposed two methods for learning representations of words:
* **Continuous Bag-of-Words Model** which predicts the middle word based on surrounding context words. The context consists of a few words before and after the current (middle) word. This architecture is called a bag-of-words model as the order of words in the context is not important.
* **Continuous Skip-gram Model** which predicts words within a certain range before and after the current word in the same sentence. A worked example of this is given below.
You'll use the skip-gram approach in this notebook. First, you'll explore skip-grams and other concepts using a single sentence for illustration. Next, you'll train your own Word2Vec model on a small dataset. This notebook also contains code to export the trained embeddings and visualize them in the [TensorFlow Embedding Projector](http://projector.tensorflow.org/).
Each learning objective will correspond to a __#TODO__ in the [student lab notebook](../labs/word2vec.ipynb) -- try to complete that notebook first before reviewing this solution notebook.
## Skip-gram and Negative Sampling
While a bag-of-words model predicts a word given the neighboring context, a skip-gram model predicts the context (or neighbors) of a word, given the word itself. The model is trained on skip-grams, which are n-grams that allow tokens to be skipped (see the diagram below for an example). The context of a word can be represented through a set of skip-gram pairs of `(target_word, context_word)` where `context_word` appears in the neighboring context of `target_word`.
Consider the following sentence of 8 words.
> The wide road shimmered in the hot sun.
The context words for each of the 8 words of this sentence are defined by a window size. The window size determines the span of words on either side of a `target_word` that can be considered a `context word`. Take a look at this table of skip-grams for target words based on different window sizes.
Note: For this tutorial, a window size of *n* implies n words on each side with a total window span of 2*n+1 words across a word.

The training objective of the skip-gram model is to maximize the probability of predicting context words given the target word. For a sequence of words *w<sub>1</sub>, w<sub>2</sub>, ... w<sub>T</sub>*, the objective can be written as the average log probability

where `c` is the size of the training context. The basic skip-gram formulation defines this probability using the softmax function.

where *v* and *v<sup>'<sup>* are target and context vector representations of words and *W* is vocabulary size.
Computing the denominator of this formulation involves performing a full softmax over the entire vocabulary words which is often large (10<sup>5</sup>-10<sup>7</sup>) terms.
The [Noise Contrastive Estimation](https://www.tensorflow.org/api_docs/python/tf/nn/nce_loss) loss function is an efficient approximation for a full softmax. With an objective to learn word embeddings instead of modelling the word distribution, NCE loss can be [simplified](https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf) to use negative sampling.
The simplified negative sampling objective for a target word is to distinguish the context word from *num_ns* negative samples drawn from noise distribution *P<sub>n</sub>(w)* of words. More precisely, an efficient approximation of full softmax over the vocabulary is, for a skip-gram pair, to pose the loss for a target word as a classification problem between the context word and *num_ns* negative samples.
A negative sample is defined as a `(target_word, context_word)` pair such that the `context_word` does not appear in the `window_size` neighborhood of the `target_word`. For the example sentence, these are a few potential negative samples (when `window_size` is 2).
```
(hot, shimmered)
(wide, hot)
(wide, sun)
```
In the next section, you'll generate skip-grams and negative samples for a single sentence. You'll also learn about subsampling techniques and train a classification model for positive and negative training examples later in the tutorial.
## Setup
```
# Use the chown command to change the ownership of repository to user.
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
!pip install -q tqdm
# You can use any Python source file as a module by executing an import statement in some other Python source file.
# The import statement combines two operations; it searches for the named module, then it binds the
# results of that search to a name in the local scope.
import io
import itertools
import numpy as np
import os
import re
import string
import tensorflow as tf
import tqdm
from tensorflow.keras import Model, Sequential
from tensorflow.keras.layers import Activation, Dense, Dot, Embedding, Flatten, GlobalAveragePooling1D, Reshape
from tensorflow.keras.layers.experimental.preprocessing import TextVectorization
```
Please check your tensorflow version using the cell below.
```
# Show the currently installed version of TensorFlow
print("TensorFlow version: ",tf.version.VERSION)
SEED = 42
AUTOTUNE = tf.data.experimental.AUTOTUNE
```
### Vectorize an example sentence
Consider the following sentence:
`The wide road shimmered in the hot sun.`
Tokenize the sentence:
```
sentence = "The wide road shimmered in the hot sun"
tokens = list(sentence.lower().split())
print(len(tokens))
```
Create a vocabulary to save mappings from tokens to integer indices.
```
vocab, index = {}, 1 # start indexing from 1
vocab['<pad>'] = 0 # add a padding token
for token in tokens:
    if token not in vocab:
        vocab[token] = index
        index += 1
vocab_size = len(vocab)
print(vocab)
```
Create an inverse vocabulary to save mappings from integer indices to tokens.
```
inverse_vocab = {index: token for token, index in vocab.items()}
print(inverse_vocab)
```
Vectorize your sentence.
```
example_sequence = [vocab[word] for word in tokens]
print(example_sequence)
```
### Generate skip-grams from one sentence
The `tf.keras.preprocessing.sequence` module provides useful functions that simplify data preparation for Word2Vec. You can use the `tf.keras.preprocessing.sequence.skipgrams` to generate skip-gram pairs from the `example_sequence` with a given `window_size` from tokens in the range `[0, vocab_size)`.
Note: `negative_samples` is set to `0` here as batching negative samples generated by this function requires a bit of code. You will use another function to perform negative sampling in the next section.
```
window_size = 2
positive_skip_grams, _ = tf.keras.preprocessing.sequence.skipgrams(
example_sequence,
vocabulary_size=vocab_size,
window_size=window_size,
negative_samples=0)
print(len(positive_skip_grams))
```
Take a look at few positive skip-grams.
```
for target, context in positive_skip_grams[:5]:
    print(f"({target}, {context}): ({inverse_vocab[target]}, {inverse_vocab[context]})")
```
### Negative sampling for one skip-gram
The `skipgrams` function returns all positive skip-gram pairs by sliding over a given window span. To produce additional skip-gram pairs that would serve as negative samples for training, you need to sample random words from the vocabulary. Use the `tf.random.log_uniform_candidate_sampler` function to sample `num_ns` number of negative samples for a given target word in a window. You can call the function on one skip-gram's target word and pass the context word as true class to exclude it from being sampled.
Key point: *num_ns* (number of negative samples per positive context word) between [5, 20] is [shown to work](https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf) best for smaller datasets, while *num_ns* between [2,5] suffices for larger datasets.
```
# Get target and context words for one positive skip-gram.
target_word, context_word = positive_skip_grams[0]
# Set the number of negative samples per positive context.
num_ns = 4
context_class = tf.reshape(tf.constant(context_word, dtype="int64"), (1, 1))
negative_sampling_candidates, _, _ = tf.random.log_uniform_candidate_sampler(
true_classes=context_class, # class that should be sampled as 'positive'
num_true=1, # each positive skip-gram has 1 positive context class
num_sampled=num_ns, # number of negative context words to sample
unique=True, # all the negative samples should be unique
range_max=vocab_size, # pick index of the samples from [0, vocab_size]
seed=SEED, # seed for reproducibility
name="negative_sampling" # name of this operation
)
print(negative_sampling_candidates)
print([inverse_vocab[index.numpy()] for index in negative_sampling_candidates])
```
### Construct one training example
For a given positive `(target_word, context_word)` skip-gram, you now also have `num_ns` negative sampled context words that do not appear in the window size neighborhood of `target_word`. Batch the `1` positive `context_word` and `num_ns` negative context words into one tensor. This produces a set of positive skip-grams (labelled as `1`) and negative samples (labelled as `0`) for each target word.
```
# Add a dimension so you can use concatenation (on the next step).
negative_sampling_candidates = tf.expand_dims(negative_sampling_candidates, 1)
# Concat positive context word with negative sampled words.
context = tf.concat([context_class, negative_sampling_candidates], 0)
# Label first context word as 1 (positive) followed by num_ns 0s (negative).
label = tf.constant([1] + [0]*num_ns, dtype="int64")
# Reshape target to shape (1,) and context and label to (num_ns+1,).
target = tf.squeeze(target_word)
context = tf.squeeze(context)
label = tf.squeeze(label)
```
Take a look at the context and the corresponding labels for the target word from the skip-gram example above.
```
print(f"target_index : {target}")
print(f"target_word : {inverse_vocab[target_word]}")
print(f"context_indices : {context}")
print(f"context_words : {[inverse_vocab[c.numpy()] for c in context]}")
print(f"label : {label}")
```
A tuple of `(target, context, label)` tensors constitutes one training example for training your skip-gram negative sampling Word2Vec model. Notice that the target is of shape `(1,)` while the context and label are of shape `(1+num_ns,)`
```
print(f"target :", target)
print(f"context :", context )
print(f"label :", label )
```
### Summary
This picture summarizes the procedure of generating training example from a sentence.

## Lab Task 1: Compile all steps into one function
### Skip-gram Sampling table
A large dataset means a larger vocabulary with a higher number of frequent words such as stopwords. Training examples obtained from sampling commonly occurring words (such as `the`, `is`, `on`) don't add much useful information for the model to learn from. [Mikolov et al.](https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf) suggest subsampling of frequent words as a helpful practice to improve embedding quality.
The `tf.keras.preprocessing.sequence.skipgrams` function accepts a sampling table argument to encode probabilities of sampling any token. You can use the `tf.keras.preprocessing.sequence.make_sampling_table` to generate a word-frequency rank based probabilistic sampling table and pass it to `skipgrams` function. Take a look at the sampling probabilities for a `vocab_size` of 10.
```
sampling_table = tf.keras.preprocessing.sequence.make_sampling_table(size=10)
print(sampling_table)
```
`sampling_table[i]` denotes the probability of sampling the i-th most common word in a dataset. The function assumes a [Zipf's distribution](https://en.wikipedia.org/wiki/Zipf%27s_law) of the word frequencies for sampling.
Key point: The `tf.random.log_uniform_candidate_sampler` already assumes that the vocabulary frequency follows a log-uniform (Zipf's) distribution. Using these distribution weighted sampling also helps approximate the Noise Contrastive Estimation (NCE) loss with simpler loss functions for training a negative sampling objective.
### Generate training data
Compile all the steps described above into a function that can be called on a list of vectorized sentences obtained from any text dataset. Notice that the sampling table is built before sampling skip-gram word pairs. You will use this function in the later sections.
```
# Generates skip-gram pairs with negative sampling for a list of sequences
# (int-encoded sentences) based on window size, number of negative samples
# and vocabulary size.
def generate_training_data(sequences, window_size, num_ns, vocab_size, seed):
    # Elements of each training example are appended to these lists.
    targets, contexts, labels = [], [], []

    # Build the sampling table for vocab_size tokens.
    # TODO 1a
    sampling_table = tf.keras.preprocessing.sequence.make_sampling_table(vocab_size)

    # Iterate over all sequences (sentences) in dataset.
    for sequence in tqdm.tqdm(sequences):

        # Generate positive skip-gram pairs for a sequence (sentence).
        positive_skip_grams, _ = tf.keras.preprocessing.sequence.skipgrams(
            sequence,
            vocabulary_size=vocab_size,
            sampling_table=sampling_table,
            window_size=window_size,
            negative_samples=0)

        # Iterate over each positive skip-gram pair to produce training examples
        # with positive context word and negative samples.
        # TODO 1b
        for target_word, context_word in positive_skip_grams:
            context_class = tf.expand_dims(
                tf.constant([context_word], dtype="int64"), 1)
            negative_sampling_candidates, _, _ = tf.random.log_uniform_candidate_sampler(
                true_classes=context_class,
                num_true=1,
                num_sampled=num_ns,
                unique=True,
                range_max=vocab_size,
                seed=SEED,
                name="negative_sampling")

            # Build context and label vectors (for one target word)
            negative_sampling_candidates = tf.expand_dims(
                negative_sampling_candidates, 1)
            context = tf.concat([context_class, negative_sampling_candidates], 0)
            label = tf.constant([1] + [0]*num_ns, dtype="int64")

            # Append each element from the training example to global lists.
            targets.append(target_word)
            contexts.append(context)
            labels.append(label)

    return targets, contexts, labels
```
## Lab Task 2: Prepare training data for Word2Vec
With an understanding of how to work with one sentence for a skip-gram negative sampling based Word2Vec model, you can proceed to generate training examples from a larger list of sentences!
### Download text corpus
You will use a text file of Shakespeare's writing for this tutorial. Change the following line to run this code on your own data.
```
path_to_file = tf.keras.utils.get_file('shakespeare.txt', 'https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt')
```
Read text from the file and take a look at the first few lines.
```
with open(path_to_file) as f:
    lines = f.read().splitlines()

for line in lines[:20]:
    print(line)
```
Use the non-empty lines to construct a `tf.data.TextLineDataset` object for the next steps.
```
# TODO 2a
text_ds = tf.data.TextLineDataset(path_to_file).filter(lambda x: tf.cast(tf.strings.length(x), bool))
```
### Vectorize sentences from the corpus
You can use the `TextVectorization` layer to vectorize sentences from the corpus. Learn more about using this layer in this [Text Classification](https://www.tensorflow.org/tutorials/keras/text_classification) tutorial. Notice from the first few sentences above that the text needs to be in one case and punctuation needs to be removed. To do this, define a `custom_standardization` function that can be used in the TextVectorization layer.
```
# We create a custom standardization function to lowercase the text and
# remove punctuation.
def custom_standardization(input_data):
    lowercase = tf.strings.lower(input_data)
    return tf.strings.regex_replace(lowercase,
                                    '[%s]' % re.escape(string.punctuation), '')
# Define the vocabulary size and number of words in a sequence.
vocab_size = 4096
sequence_length = 10
# Use the text vectorization layer to normalize, split, and map strings to
# integers. Set output_sequence_length to pad all samples to the same length.
vectorize_layer = TextVectorization(
standardize=custom_standardization,
max_tokens=vocab_size,
output_mode='int',
output_sequence_length=sequence_length)
```
Call `adapt` on the text dataset to create vocabulary.
```
vectorize_layer.adapt(text_ds.batch(1024))
```
Once the state of the layer has been adapted to represent the text corpus, the vocabulary can be accessed with `get_vocabulary()`. This function returns a list of all vocabulary tokens sorted (descending) by their frequency.
```
# Save the created vocabulary for reference.
inverse_vocab = vectorize_layer.get_vocabulary()
print(inverse_vocab[:20])
```
The vectorize_layer can now be used to generate vectors for each element in the `text_ds`.
```
def vectorize_text(text):
    text = tf.expand_dims(text, -1)
    return tf.squeeze(vectorize_layer(text))
# Vectorize the data in text_ds.
text_vector_ds = text_ds.batch(1024).prefetch(AUTOTUNE).map(vectorize_layer).unbatch()
```
### Obtain sequences from the dataset
You now have a `tf.data.Dataset` of integer encoded sentences. To prepare the dataset for training a Word2Vec model, flatten the dataset into a list of sentence vector sequences. This step is required as you would iterate over each sentence in the dataset to produce positive and negative examples.
Note: Since the `generate_training_data()` defined earlier uses non-TF python/numpy functions, you could also use a `tf.py_function` or `tf.numpy_function` with `tf.data.Dataset.map()`.
```
sequences = list(text_vector_ds.as_numpy_iterator())
print(len(sequences))
```
Take a look at few examples from `sequences`.
```
for seq in sequences[:5]:
    print(f"{seq} => {[inverse_vocab[i] for i in seq]}")
```
### Generate training examples from sequences
`sequences` is now a list of int encoded sentences. Just call the `generate_training_data()` function defined earlier to generate training examples for the Word2Vec model. To recap, the function iterates over each word from each sequence to collect positive and negative context words. The lengths of the targets, contexts, and labels should be the same, representing the total number of training examples.
```
targets, contexts, labels = generate_training_data(
sequences=sequences,
window_size=2,
num_ns=4,
vocab_size=vocab_size,
seed=SEED)
print(len(targets), len(contexts), len(labels))
```
### Configure the dataset for performance
To perform efficient batching for the potentially large number of training examples, use the `tf.data.Dataset` API. After this step, you would have a `tf.data.Dataset` object of `(target_word, context_word), (label)` elements to train your Word2Vec model!
```
BATCH_SIZE = 1024
BUFFER_SIZE = 10000
dataset = tf.data.Dataset.from_tensor_slices(((targets, contexts), labels))
dataset = dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE, drop_remainder=True)
print(dataset)
```
Add `cache()` and `prefetch()` to improve performance.
```
dataset = dataset.cache().prefetch(buffer_size=AUTOTUNE)
print(dataset)
```
## Lab Task 3: Model and Training
The Word2Vec model can be implemented as a classifier to distinguish between true context words from skip-grams and false context words obtained through negative sampling. You can perform a dot product between the embeddings of target and context words to obtain predictions for labels and compute loss against true labels in the dataset.
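Before defining the model, it can help to see the shapes involved. The sketch below is only illustrative: the batch, `num_ns`, and `embedding_dim` values are made up, and it uses `tf.einsum` in place of the `Dot` layer used in the model below, but the idea is the same — a dot product along the embedding axis yields one logit per candidate context word:
```python
import tensorflow as tf

batch, num_ns, embedding_dim = 2, 4, 8  # illustrative values only

# One target embedding per example, and (1 positive + num_ns negative) context embeddings.
target_emb = tf.random.normal([batch, 1, embedding_dim])
context_emb = tf.random.normal([batch, num_ns + 1, embedding_dim])

# Dot product along the embedding axis: one score per candidate context word.
logits = tf.einsum('bte,bce->bc', target_emb, context_emb)
print(logits.shape)  # (2, 5)
```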
### Subclassed Word2Vec Model
Use the [Keras Subclassing API](https://www.tensorflow.org/guide/keras/custom_layers_and_models) to define your Word2Vec model with the following layers:
* `target_embedding`: A `tf.keras.layers.Embedding` layer which looks up the embedding of a word when it appears as a target word. The number of parameters in this layer is `(vocab_size * embedding_dim)`.
* `context_embedding`: Another `tf.keras.layers.Embedding` layer which looks up the embedding of a word when it appears as a context word. The number of parameters in this layer is the same as in `target_embedding`, i.e. `(vocab_size * embedding_dim)`.
* `dots`: A `tf.keras.layers.Dot` layer that computes the dot product of target and context embeddings from a training pair.
* `flatten`: A `tf.keras.layers.Flatten` layer to flatten the results of `dots` layer into logits.
With the subclassed model, you can define the `call()` function that accepts `(target, context)` pairs, which are then passed into their corresponding embedding layers. Reshape the `context_embedding` to perform a dot product with `target_embedding` and return the flattened result.
Key point: The `target_embedding` and `context_embedding` layers can be shared as well. You could also use a concatenation of both embeddings as the final Word2Vec embedding.
```
class Word2Vec(Model):
    def __init__(self, vocab_size, embedding_dim):
        super(Word2Vec, self).__init__()
        self.target_embedding = Embedding(vocab_size,
                                          embedding_dim,
                                          input_length=1,
                                          name="w2v_embedding")
        self.context_embedding = Embedding(vocab_size,
                                           embedding_dim,
                                           input_length=num_ns+1)
        self.dots = Dot(axes=(3, 2))
        self.flatten = Flatten()

    def call(self, pair):
        target, context = pair
        we = self.target_embedding(target)
        ce = self.context_embedding(context)
        dots = self.dots([ce, we])
        return self.flatten(dots)
```
### Define loss function and compile model
For simplicity, you can use `tf.keras.losses.CategoricalCrossentropy` as an alternative to the negative sampling loss. If you would like to write your own custom loss function, you can also do so as follows:
``` python
def custom_loss(x_logit, y_true):
    return tf.nn.sigmoid_cross_entropy_with_logits(logits=x_logit, labels=y_true)
```
It's time to build your model! Instantiate your Word2Vec class with an embedding dimension of 128 (you could experiment with different values). Compile the model with the `tf.keras.optimizers.Adam` optimizer.
```
# TODO 3a
embedding_dim = 128
word2vec = Word2Vec(vocab_size, embedding_dim)
word2vec.compile(optimizer='adam',
loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
```
Also define a callback to log training statistics for tensorboard.
```
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir="logs")
```
Train the model with `dataset` prepared above for some number of epochs.
```
word2vec.fit(dataset, epochs=20, callbacks=[tensorboard_callback])
```
TensorBoard now shows the Word2Vec model's accuracy and loss.
```
!tensorboard --bind_all --port=8081 --load_fast=false --logdir logs
```
Run the following command in **Cloud Shell:**
<code>gcloud beta compute ssh --zone <instance-zone> <notebook-instance-name> --project <project-id> -- -L 8081:localhost:8081</code>
Make sure to replace `<instance-zone>`, `<notebook-instance-name>`, and `<project-id>`.
In Cloud Shell, click *Web Preview* > *Change Port* and insert port number *8081*. Click *Change and Preview* to open the TensorBoard.

**To quit the TensorBoard, click Kernel > Interrupt kernel**.
## Lab Task 4: Embedding lookup and analysis
Obtain the weights from the model using `get_layer()` and `get_weights()`. The `get_vocabulary()` function provides the vocabulary to build a metadata file with one token per line.
```
# TODO 4a
weights = word2vec.get_layer('w2v_embedding').get_weights()[0]
vocab = vectorize_layer.get_vocabulary()
```
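Recall the key point from the model definition: the target and context embeddings can also be combined into a single table. As a hedged sketch only (it reuses `weights` from the cell above and reads the `context_embedding` attribute of the subclassed model; `combined_avg` and `combined_cat` are hypothetical names, and neither is needed for the rest of this lab):
```python
import numpy as np

# Target-word table (extracted above) and the context-word table from the model.
target_weights = weights  # shape: (vocab_size, embedding_dim)
context_weights = word2vec.context_embedding.get_weights()[0]

# Two options: average the tables, or concatenate them along the embedding axis.
combined_avg = (target_weights + context_weights) / 2.0
combined_cat = np.concatenate([target_weights, context_weights], axis=1)
print(combined_avg.shape, combined_cat.shape)  # (4096, 128) (4096, 256)
```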
Create and save the vectors and metadata file.
```
out_v = io.open('vectors.tsv', 'w', encoding='utf-8')
out_m = io.open('metadata.tsv', 'w', encoding='utf-8')
for index, word in enumerate(vocab):
    if index == 0:
        continue  # skip 0, it's padding.
    vec = weights[index]
    out_v.write('\t'.join([str(x) for x in vec]) + "\n")
    out_m.write(word + "\n")
out_v.close()
out_m.close()
```
Download the `vectors.tsv` and `metadata.tsv` to analyze the obtained embeddings in the [Embedding Projector](https://projector.tensorflow.org/).
```
try:
    from google.colab import files
    files.download('vectors.tsv')
    files.download('metadata.tsv')
except Exception as e:
    pass
```
## Next steps
This tutorial has shown you how to implement a skip-gram Word2Vec model with negative sampling from scratch and visualize the obtained word embeddings.
* To learn more about word vectors and their mathematical representations, refer to these [notes](https://web.stanford.edu/class/cs224n/readings/cs224n-2019-notes01-wordvecs1.pdf).
* To learn more about advanced text processing, read the [Transformer model for language understanding](https://www.tensorflow.org/tutorials/text/transformer) tutorial.
* If you’re interested in pre-trained embedding models, check out [Exploring the TF-Hub CORD-19 Swivel Embeddings](https://www.tensorflow.org/hub/tutorials/cord_19_embeddings_keras) or the [Multilingual Universal Sentence Encoder](https://www.tensorflow.org/hub/tutorials/cross_lingual_similarity_with_tf_hub_multilingual_universal_encoder).
* You may also like to train the model on a new dataset (there are many available in [TensorFlow Datasets](https://www.tensorflow.org/datasets)).
# Numbers and Integer Math
Watch the full [C# 101 video](https://www.youtube.com/watch?v=jEE0pWTq54U&list=PLdo4fOcmZ0oVxKLQCHpiUWun7vlJJvUiN&index=5) for this module.
## Integer Math
You have a few `integers` defined below. An `integer` is a positive or negative whole number.
> Before you run the code, what should c be?
## Addition
```
int a = 18;
int b = 6;
int c = a + b;
Console.WriteLine(c);
```
## Subtraction
```
int c = a - b;
Console.WriteLine(c);
```
## Multiplication
```
int c = a * b;
Console.WriteLine(c);
```
## Division
```
int c = a / b;
Console.WriteLine(c);
```
# Order of operations
C# follows the order of operations when it comes to math. That is, it does multiplication and division first, then addition and subtraction.
> What would the math be if C# didn't follow the order of operations, and instead just did math left to right?
```
int a = 5;
int b = 4;
int c = 2;
int d = a + b * c;
Console.WriteLine(d);
```
## Using parenthesis
You can also force a different order by putting parentheses around whatever you want done first.
> Try it out
```
int d = (a + b) * c;
Console.WriteLine(d);
```
You can make math as long and complicated as you want.
> Can you make this line even more complicated?
```
int d = (a + b) - 6 * c + (12 * 4) / 3 + 12;
Console.WriteLine(d);
```
## Integers: Whole numbers no matter what
Integer math will always produce integers. What that means is that even when math should result in a decimal or fraction, the answer will be truncated to a whole number.
> Check it out. What should the answer truly be?
```
int a = 7;
int b = 4;
int c = 3;
int d = (a + b) / c;
Console.WriteLine(d);
```
# Playground
Play around with what you've learned! Here's some starting ideas:
> Do you have any homework or projects that need math? Try using code in place of a calculator!
>
> How do integers round? Do they always round up? down? to the nearest integer?
>
> How do the Order of Operations work? Play around with parentheses.
```
Console.WriteLine("Playground");
```
# Continue learning
There are plenty more resources out there to learn!
> [⏩ Next Module - Numbers and Integer Precision](http://tinyurl.com/csharp-notebook05)
>
> [⏪ Last Module - Searching Strings](http://tinyurl.com/csharp-notebook03)
>
> [Watch the video](https://www.youtube.com/watch?v=jEE0pWTq54U&list=PLdo4fOcmZ0oVxKLQCHpiUWun7vlJJvUiN&index=5)
>
> [Documentation: Numbers in C#](https://docs.microsoft.com/dotnet/csharp/tour-of-csharp/tutorials/numbers-in-csharp?WT.mc_id=Educationalcsharp-c9-scottha)
>
> [Start at the beginning: What is C#?](https://www.youtube.com/watch?v=BM4CHBmAPh4&list=PLdo4fOcmZ0oVxKLQCHpiUWun7vlJJvUiN&index=1)
# Other resources
Here's some more places to explore:
> [Other 101 Videos](https://dotnet.microsoft.com/learn/videos?WT.mc_id=csharpnotebook-35129-website)
>
> [Microsoft Learn](https://docs.microsoft.com/learn/dotnet/?WT.mc_id=csharpnotebook-35129-website)
>
> [C# Documentation](https://docs.microsoft.com/dotnet/csharp/?WT.mc_id=csharpnotebook-35129-website)
## **Nigerian Music scraped from Spotify - an analysis**
Clustering is a type of [Unsupervised Learning](https://wikipedia.org/wiki/Unsupervised_learning) that presumes that a dataset is unlabelled or that its inputs are not matched with predefined outputs. It uses various algorithms to sort through unlabeled data and provide groupings according to patterns it discerns in the data.
[**Pre-lecture quiz**](https://white-water-09ec41f0f.azurestaticapps.net/quiz/27/)
### **Introduction**
[Clustering](https://link.springer.com/referenceworkentry/10.1007%2F978-0-387-30164-8_124) is very useful for data exploration. Let's see if it can help discover trends and patterns in the way Nigerian audiences consume music.
> ✅ Take a minute to think about the uses of clustering. In real life, clustering happens whenever you have a pile of laundry and need to sort out your family members' clothes 🧦👕👖🩲. In data science, clustering happens when trying to analyze a user's preferences, or determine the characteristics of any unlabeled dataset. Clustering, in a way, helps make sense of chaos, like a sock drawer.
In a professional setting, clustering can be used to determine things like market segmentation, determining what age groups buy what items, for example. Another use would be anomaly detection, perhaps to detect fraud from a dataset of credit card transactions. Or you might use clustering to determine tumors in a batch of medical scans.
✅ Think a minute about how you might have encountered clustering 'in the wild', in a banking, e-commerce, or business setting.
> 🎓 Interestingly, cluster analysis originated in the fields of Anthropology and Psychology in the 1930s. Can you imagine how it might have been used?
Alternately, you could use it for grouping search results - by shopping links, images, or reviews, for example. Clustering is useful when you have a large dataset that you want to reduce and on which you want to perform more granular analysis, so the technique can be used to learn about data before other models are constructed.
✅ Once your data is organized in clusters, you assign it a cluster Id, and this technique can be useful when preserving a dataset's privacy; you can instead refer to a data point by its cluster id, rather than by more revealing identifiable data. Can you think of other reasons why you'd refer to a cluster Id rather than other elements of the cluster to identify it?
### Getting started with clustering
> 🎓 How we create clusters has a lot to do with how we gather up the data points into groups. Let's unpack some vocabulary:
>
> 🎓 ['Transductive' vs. 'inductive'](https://wikipedia.org/wiki/Transduction_(machine_learning))
>
> Transductive inference is derived from observed training cases that map to specific test cases. Inductive inference is derived from training cases that map to general rules which are only then applied to test cases.
>
> An example: Imagine you have a dataset that is only partially labelled. Some things are 'records', some 'cds', and some are blank. Your job is to provide labels for the blanks. If you choose an inductive approach, you'd train a model looking for 'records' and 'cds', and apply those labels to your unlabeled data. This approach will have trouble classifying things that are actually 'cassettes'. A transductive approach, on the other hand, handles this unknown data more effectively as it works to group similar items together and then applies a label to a group. In this case, clusters might reflect 'round musical things' and 'square musical things'.
>
> 🎓 ['Non-flat' vs. 'flat' geometry](https://datascience.stackexchange.com/questions/52260/terminology-flat-geometry-in-the-context-of-clustering)
>
> Derived from mathematical terminology, non-flat vs. flat geometry refers to the measure of distances between points by either 'flat' ([Euclidean](https://wikipedia.org/wiki/Euclidean_geometry)) or 'non-flat' (non-Euclidean) geometrical methods.
>
> 'Flat' in this context refers to Euclidean geometry (parts of which are taught as 'plane' geometry), and non-flat refers to non-Euclidean geometry. What does geometry have to do with machine learning? Well, as two fields that are rooted in mathematics, there must be a common way to measure distances between points in clusters, and that can be done in a 'flat' or 'non-flat' way, depending on the nature of the data. [Euclidean distances](https://wikipedia.org/wiki/Euclidean_distance) are measured as the length of a line segment between two points. [Non-Euclidean distances](https://wikipedia.org/wiki/Non-Euclidean_geometry) are measured along a curve. If your data, visualized, seems to not exist on a plane, you might need to use a specialized algorithm to handle it.
<p>
  <img src="../../images/flat-nonflat.png" width="600"/>
  <figcaption>Infographic by Dasani Madipalli</figcaption>
</p>
> 🎓 ['Distances'](https://web.stanford.edu/class/cs345a/slides/12-clustering.pdf)
>
> Clusters are defined by their distance matrix, e.g. the distances between points. This distance can be measured a few ways. Euclidean clusters are defined by the average of the point values, and contain a 'centroid' or center point. Distances are thus measured by the distance to that centroid. Non-Euclidean distances refer to 'clustroids', the point closest to other points. Clustroids in turn can be defined in various ways.
>
> 🎓 ['Constrained'](https://wikipedia.org/wiki/Constrained_clustering)
>
> [Constrained Clustering](https://web.cs.ucdavis.edu/~davidson/Publications/ICDMTutorial.pdf) introduces 'semi-supervised' learning into this unsupervised method. The relationships between points are flagged as 'cannot link' or 'must-link' so some rules are forced on the dataset.
>
> An example: If an algorithm is set free on a batch of unlabelled or semi-labelled data, the clusters it produces may be of poor quality. In the example above, the clusters might group 'round music things' and 'square music things' and 'triangular things' and 'cookies'. If given some constraints, or rules to follow ("the item must be made of plastic", "the item needs to be able to produce music") this can help 'constrain' the algorithm to make better choices.
>
> 🎓 'Density'
>
> Data that is 'noisy' is considered to be 'dense'. The distances between points in each of its clusters may prove, on examination, to be more or less dense, or 'crowded' and thus this data needs to be analyzed with the appropriate clustering method. [This article](https://www.kdnuggets.com/2020/02/understanding-density-based-clustering.html) demonstrates the difference between using K-Means clustering vs. HDBSCAN algorithms to explore a noisy dataset with uneven cluster density.
Deepen your understanding of clustering techniques in this [Learn module](https://docs.microsoft.com/learn/modules/train-evaluate-cluster-models?WT.mc_id=academic-15963-cxa)
### **Clustering algorithms**
There are over 100 clustering algorithms, and their use depends on the nature of the data at hand. Let's discuss some of the major ones:
- **Hierarchical clustering**. If an object is classified by its proximity to a nearby object, rather than to one farther away, clusters are formed based on their members' distance to and from other objects. Hierarchical clustering is characterized by repeatedly combining two clusters.
<p>
  <img src="../../images/hierarchical.png" width="600"/>
  <figcaption>Infographic by Dasani Madipalli</figcaption>
</p>
- **Centroid clustering**. This popular algorithm requires the choice of 'k', or the number of clusters to form, after which the algorithm determines the center point of a cluster and gathers data around that point. [K-means clustering](https://wikipedia.org/wiki/K-means_clustering) is a popular version of centroid clustering which separates a data set into pre-defined K groups. The center is determined by the nearest mean, thus the name. The squared distance from each point to its cluster center is minimized (the objective is written out just after this list).
<p>
  <img src="../../images/centroid.png" width="600"/>
  <figcaption>Infographic by Dasani Madipalli</figcaption>
</p>
- **Distribution-based clustering**. Based in statistical modeling, distribution-based clustering centers on determining the probability that a data point belongs to a cluster, and assigning it accordingly. Gaussian mixture methods belong to this type.
- **Density-based clustering**. Data points are assigned to clusters based on their density, or their grouping around each other. Data points far from the group are considered outliers or noise. DBSCAN, Mean-shift and OPTICS belong to this type of clustering.
- **Grid-based clustering**. For multi-dimensional datasets, a grid is created and the data is divided amongst the grid's cells, thereby creating clusters.
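For the centroid (K-means) case described above, the quantity being minimized is the within-cluster sum of squared distances,

$$\underset{S_1, \dots, S_K}{\arg\min} \; \sum_{k=1}^{K} \sum_{x \in S_k} \lVert x - \mu_k \rVert^2$$

where $S_k$ is the set of points assigned to cluster $k$ and $\mu_k$ is that cluster's centroid (mean).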
The best way to learn about clustering is to try it for yourself, so that's what you'll do in this exercise.
We'll require some packages to work through this module. You can have them installed as: `install.packages(c('tidyverse', 'tidymodels', 'DataExplorer', 'summarytools', 'plotly', 'paletteer', 'corrplot', 'patchwork'))`
Alternatively, the script below checks whether you have the packages required to complete this module and installs them for you in case some are missing.
```
suppressWarnings(if(!require("pacman")) install.packages("pacman"))
pacman::p_load('tidyverse', 'tidymodels', 'DataExplorer', 'summarytools', 'plotly', 'paletteer', 'corrplot', 'patchwork')
```
## Exercise - cluster your data
Clustering as a technique is greatly aided by proper visualization, so let's get started by visualizing our music data. This exercise will help us decide which of the methods of clustering we should most effectively use for the nature of this data.
Let's hit the ground running by importing the data.
```
# Load the core tidyverse and make it available in your current R session
library(tidyverse)
# Import the data into a tibble
df <- read_csv(file = "https://raw.githubusercontent.com/microsoft/ML-For-Beginners/main/5-Clustering/data/nigerian-songs.csv")
# View the first 5 rows of the data set
df %>%
slice_head(n = 5)
```
Sometimes, we may want a little more information about our data. We can have a look at the `data` and `its structure` by using the [*glimpse()*](https://pillar.r-lib.org/reference/glimpse.html) function:
```
# Glimpse into the data set
df %>%
glimpse()
```
Good job!💪
We can observe that `glimpse()` gives you the total number of rows (observations) and columns (variables), followed by the first few entries of each variable in a row after the variable name. In addition, the *data type* of the variable is given immediately after each variable's name inside `< >`.
`DataExplorer::introduce()` can summarize this information neatly:
```
# Describe basic information for our data
df %>%
introduce()
# A visual display of the same
df %>%
plot_intro()
```
Awesome! We have just learnt that our data has no missing values.
While we are at it, we can explore common central tendency statistics (e.g [mean](https://en.wikipedia.org/wiki/Arithmetic_mean) and [median](https://en.wikipedia.org/wiki/Median)) and measures of dispersion (e.g [standard deviation](https://en.wikipedia.org/wiki/Standard_deviation)) using `summarytools::descr()`
```
# Describe common statistics
df %>%
descr(stats = "common")
```
Let's look at the general values of the data. Note that popularity can be `0`, which indicates songs that have no ranking. We'll remove those shortly.
> 🤔 If we are working with clustering, an unsupervised method that does not require labeled data, why are we showing this data with labels? In the data exploration phase, they come in handy, but they are not necessary for the clustering algorithms to work.
### 1. Explore popular genres
Let's go ahead and find out the most popular genres 🎶 by making a count of the instances it appears.
```
# Popular genres
top_genres <- df %>%
count(artist_top_genre, sort = TRUE) %>%
  # Encode to categorical and reorder according to count
mutate(artist_top_genre = factor(artist_top_genre) %>% fct_inorder())
# Print the top genres
top_genres
```
That went well! They say a picture is worth a thousand rows of a data frame (actually nobody ever says that 😅). But you get the gist of it, right?
One way to visualize categorical data (character or factor variables) is using barplots. Let's make a barplot of the top 10 genres:
```
# Change the default gray theme
theme_set(theme_light())
# Visualize popular genres
top_genres %>%
slice(1:10) %>%
ggplot(mapping = aes(x = artist_top_genre, y = n,
fill = artist_top_genre)) +
geom_col(alpha = 0.8) +
paletteer::scale_fill_paletteer_d("rcartocolor::Vivid") +
ggtitle("Top genres") +
theme(plot.title = element_text(hjust = 0.5),
# Rotates the X markers (so we can read them)
axis.text.x = element_text(angle = 90))
```
Now it's way easier to identify that we have `missing` genres 🧐!
> A good visualisation will show you things that you did not expect, or raise new questions about the data - Hadley Wickham and Garrett Grolemund, [R For Data Science](https://r4ds.had.co.nz/introduction.html)
Note, when the top genre is described as `Missing`, that means that Spotify did not classify it, so let's get rid of it.
```
# Visualize popular genres
top_genres %>%
filter(artist_top_genre != "Missing") %>%
slice(1:10) %>%
ggplot(mapping = aes(x = artist_top_genre, y = n,
fill = artist_top_genre)) +
geom_col(alpha = 0.8) +
paletteer::scale_fill_paletteer_d("rcartocolor::Vivid") +
ggtitle("Top genres") +
theme(plot.title = element_text(hjust = 0.5),
# Rotates the X markers (so we can read them)
axis.text.x = element_text(angle = 90))
```
From this brief data exploration, we learn that the top three genres dominate this dataset. Let's concentrate on `afro dancehall`, `afropop`, and `nigerian pop`, and additionally filter the dataset to remove anything with a popularity value of 0 (meaning it was not classified with a popularity in the dataset and can be considered noise for our purposes):
```
nigerian_songs <- df %>%
# Concentrate on top 3 genres
filter(artist_top_genre %in% c("afro dancehall", "afropop","nigerian pop")) %>%
# Remove unclassified observations
filter(popularity != 0)
# Visualize popular genres
nigerian_songs %>%
count(artist_top_genre) %>%
ggplot(mapping = aes(x = artist_top_genre, y = n,
fill = artist_top_genre)) +
geom_col(alpha = 0.8) +
paletteer::scale_fill_paletteer_d("ggsci::category10_d3") +
ggtitle("Top genres") +
theme(plot.title = element_text(hjust = 0.5))
```
Let's see whether there is any apparent linear relationship among the numerical variables in our data set. This relationship is quantified mathematically by the [correlation statistic](https://en.wikipedia.org/wiki/Correlation).
The correlation statistic is a value between -1 and 1 that indicates the strength of a relationship. Values above 0 indicate a *positive* correlation (high values of one variable tend to coincide with high values of the other), while values below 0 indicate a *negative* correlation (high values of one variable tend to coincide with low values of the other).
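For reference, the most common version of this statistic (and the default computed by R's `cor()` function used below) is the Pearson correlation coefficient,

$$r_{xy} = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2}\;\sqrt{\sum_{i=1}^{n}(y_i - \bar{y})^2}}$$

where $\bar{x}$ and $\bar{y}$ are the means of the two variables.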
```
# Narrow down to numeric variables and find correlation
corr_mat <- nigerian_songs %>%
select(where(is.numeric)) %>%
cor()
# Visualize correlation matrix
corrplot(corr_mat, order = 'AOE', col = c('white', 'black'), bg = 'gold2')
```
The data is not strongly correlated except between `energy` and `loudness`, which makes sense, given that loud music is usually pretty energetic. `Popularity` has a correspondence to `release date`, which also makes sense, as more recent songs are probably more popular. Length and energy seem to have a correlation too.
It will be interesting to see what a clustering algorithm can make of this data!
> 🎓 Note that correlation does not imply causation! We have proof of correlation but no proof of causation. An [amusing web site](https://tylervigen.com/spurious-correlations) has some visuals that emphasize this point.
### 2. Explore data distribution
Let's ask some more subtle questions. Are the genres significantly different in the perception of their danceability, based on their popularity? Let's examine our top three genres data distribution for popularity and danceability along a given x and y axis using [density plots](https://www.khanacademy.org/math/ap-statistics/density-curves-normal-distribution-ap/density-curves/v/density-curves).
```
# Perform 2D kernel density estimation
density_estimate_2d <- nigerian_songs %>%
ggplot(mapping = aes(x = popularity, y = danceability, color = artist_top_genre)) +
geom_density_2d(bins = 5, size = 1) +
paletteer::scale_color_paletteer_d("RSkittleBrewer::wildberry") +
xlim(-20, 80) +
ylim(0, 1.2)
# Density plot based on the popularity
density_estimate_pop <- nigerian_songs %>%
ggplot(mapping = aes(x = popularity, fill = artist_top_genre, color = artist_top_genre)) +
geom_density(size = 1, alpha = 0.5) +
paletteer::scale_fill_paletteer_d("RSkittleBrewer::wildberry") +
paletteer::scale_color_paletteer_d("RSkittleBrewer::wildberry") +
theme(legend.position = "none")
# Density plot based on the danceability
density_estimate_dance <- nigerian_songs %>%
ggplot(mapping = aes(x = danceability, fill = artist_top_genre, color = artist_top_genre)) +
geom_density(size = 1, alpha = 0.5) +
paletteer::scale_fill_paletteer_d("RSkittleBrewer::wildberry") +
paletteer::scale_color_paletteer_d("RSkittleBrewer::wildberry")
# Patch everything together
library(patchwork)
density_estimate_2d / (density_estimate_pop + density_estimate_dance)
```
We see that there are concentric circles that line up, regardless of genre. Could it be that Nigerian tastes converge at a certain level of danceability for this genre?
In general, the three genres align in terms of their popularity and danceability. Determining clusters in this loosely-aligned data will be a challenge. Let's see whether a scatter plot can support this.
```
# A scatter plot of popularity and danceability
scatter_plot <- nigerian_songs %>%
ggplot(mapping = aes(x = popularity, y = danceability, color = artist_top_genre, shape = artist_top_genre)) +
geom_point(size = 2, alpha = 0.8) +
paletteer::scale_color_paletteer_d("futurevisions::mars")
# Add a touch of interactivity
ggplotly(scatter_plot)
```
A scatterplot of the same axes shows a similar pattern of convergence.
In general, for clustering, you can use scatterplots to show clusters of data, so mastering this type of visualization is very useful. In the next lesson, we will take this filtered data and use k-means clustering to discover groups in this data that seem to overlap in interesting ways.
## **🚀 Challenge**
In preparation for the next lesson, make a chart about the various clustering algorithms you might discover and use in a production environment. What kinds of problems is the clustering trying to address?
## [**Post-lecture quiz**](https://white-water-09ec41f0f.azurestaticapps.net/quiz/28/)
## **Review & Self Study**
Before you apply clustering algorithms, as we have learned, it's a good idea to understand the nature of your dataset. Read more on this topic [here](https://www.kdnuggets.com/2019/10/right-clustering-algorithm.html)
Deepen your understanding of clustering techniques:
- [Train and Evaluate Clustering Models using Tidymodels and friends](https://rpubs.com/eR_ic/clustering)
- Bradley Boehmke & Brandon Greenwell, [*Hands-On Machine Learning with R*](https://bradleyboehmke.github.io/HOML/)*.*
## **Assignment**
[Research other visualizations for clustering](https://github.com/microsoft/ML-For-Beginners/blob/main/5-Clustering/1-Visualize/assignment.md)
## THANK YOU TO:
[Jen Looper](https://www.twitter.com/jenlooper) for creating the original Python version of this module ♥️
[`Dasani Madipalli`](https://twitter.com/dasani_decoded) for creating the amazing illustrations that make machine learning concepts more interpretable and easier to understand.
Happy Learning,
[Eric](https://twitter.com/ericntay), Gold Microsoft Learn Student Ambassador.
# B - A Closer Look at Word Embeddings
We have very briefly covered how word embeddings (also known as word vectors) are used in the tutorials. In this appendix we'll have a closer look at these embeddings and find some (hopefully) interesting results.
Embeddings transform a one-hot encoded vector (a vector that is 0 in elements except one, which is 1) into a much smaller dimension vector of real numbers. The one-hot encoded vector is also known as a *sparse vector*, whilst the real valued vector is known as a *dense vector*.
The key concept in these word embeddings is that words that appear in similar _contexts_ appear nearby in the vector space, i.e. the Euclidean distance between these two word vectors is small. By context here, we mean the surrounding words. For example in the sentences "I purchased some items at the shop" and "I purchased some items at the store" the words 'shop' and 'store' appear in the same context and thus should be close together in vector space.
You may have also heard about *word2vec*. *word2vec* is an algorithm (actually a bunch of algorithms) that calculates word vectors from a corpus. In this appendix we use *GloVe* vectors, *GloVe* being another algorithm to calculate word vectors. If you want to know how *word2vec* works, check out a two part series [here](http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model/) and [here](http://mccormickml.com/2017/01/11/word2vec-tutorial-part-2-negative-sampling/), and if you want to find out more about *GloVe*, check the website [here](https://nlp.stanford.edu/projects/glove/).
In PyTorch, we use word vectors with the `nn.Embedding` layer, which takes a _**[sentence length, batch size]**_ tensor and transforms it into a _**[sentence length, batch size, embedding dimensions]**_ tensor.
In tutorial 2 onwards, we also used pre-trained word embeddings (specifically the GloVe vectors) provided by TorchText. These embeddings have been trained on a gigantic corpus. We can use these pre-trained vectors within any of our models, with the idea that as they have already learned the context of each word they will give us a better starting point for our word vectors. This usually leads to faster training time and/or improved accuracy.
In this appendix we won't be training any models, instead we'll be looking at the word embeddings and finding a few interesting things about them.
A lot of the code from the first half of this appendix is taken from [here](https://github.com/spro/practical-pytorch/blob/master/glove-word-vectors/glove-word-vectors.ipynb). For more information about word embeddings, go [here](https://monkeylearn.com/blog/word-embeddings-transform-text-numbers/).
## Loading the GloVe vectors
First, we'll load the GloVe vectors. The `name` field specifies what the vectors have been trained on, here the `6B` means a corpus of 6 billion words. The `dim` argument specifies the dimensionality of the word vectors. GloVe vectors are available in 50, 100, 200 and 300 dimensions. There is also a `42B` and `840B` glove vectors, however they are only available at 300 dimensions.
```
import torchtext.vocab
glove = torchtext.vocab.GloVe(name = '6B', dim = 100)
print(f'There are {len(glove.itos)} words in the vocabulary')
```
As shown above, there are 400,000 unique words in the GloVe vocabulary. These are the most common words found in the corpus the vectors were trained on. **In these set of GloVe vectors, every single word is lower-case only.**
`glove.vectors` is the actual tensor containing the values of the embeddings.
```
glove.vectors.shape
```
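As an aside, this tensor is exactly what you would copy into an `nn.Embedding` layer to initialize a model with pre-trained vectors, and it also shows the shape transformation described earlier. A minimal sketch (the layer, `token_indices`, and the example tokens 'hello'/'world' are only for illustration and are not used elsewhere in this appendix):
```python
import torch
import torch.nn as nn

# Build an embedding layer with the same shape as the GloVe tensor and copy
# the pre-trained vectors into its weight matrix.
embedding = nn.Embedding(num_embeddings=glove.vectors.shape[0],
                         embedding_dim=glove.vectors.shape[1])
embedding.weight.data.copy_(glove.vectors)

# A [sentence length, batch size] tensor of token indices becomes a
# [sentence length, batch size, embedding dim] tensor of vectors.
token_indices = torch.LongTensor([[glove.stoi['hello']], [glove.stoi['world']]])  # [2, 1]
print(embedding(token_indices).shape)  # torch.Size([2, 1, 100])
```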
We can see what word is associated with each row by checking the `itos` (int to string) list.
Below implies that row 0 is the vector associated with the word 'the', row 1 for ',' (comma), row 2 for '.' (period), etc.
```
glove.itos[:10]
```
We can also use the `stoi` (string to int) dictionary, in which we input a word and receive the associated integer/index. If you try get the index of a word that is not in the vocabulary, you receive an error.
```
glove.stoi['the']
```
We can get the vector of a word by first getting the integer associated with it and then indexing into the word embedding tensor with that index.
```
glove.vectors[glove.stoi['the']].shape
```
We'll be doing this a lot, so we'll create a function that takes in word embeddings and a word then returns the associated vector. It'll also throw an error if the word doesn't exist in the vocabulary.
```
def get_vector(embeddings, word):
    assert word in embeddings.stoi, f'*{word}* is not in the vocab!'
    return embeddings.vectors[embeddings.stoi[word]]
```
As before, we use a word to get the associated vector.
```
get_vector(glove, 'the').shape
```
## Similar Contexts
Now to start looking at the context of different words.
If we want to find the words similar to a certain input word, we first find the vector of this input word, then we scan through our vocabulary calculating the distance between the vector of each word and our input word vector. We then sort these from closest to furthest away.
The function below returns the closest 10 words to an input word vector:
```
import torch
def closest_words(embeddings, vector, n = 10):
    distances = [(word, torch.dist(vector, get_vector(embeddings, word)).item())
                 for word in embeddings.itos]
    return sorted(distances, key = lambda w: w[1])[:n]
```
Let's try it out with 'korea'. The closest word is the word 'korea' itself (not very interesting), however all of the words are related in some way. Pyongyang is the capital of North Korea, DPRK is the official name of North Korea, etc.
Interestingly, we also get 'Japan' and 'China', which implies that Korea, Japan and China are frequently talked about together in similar contexts. This makes sense as they are geographically situated near each other.
```
word_vector = get_vector(glove, 'korea')
closest_words(glove, word_vector)
```
Looking at another country, India, we also get nearby countries: Thailand, Malaysia and Sri Lanka (as two separate words). Australia is relatively close to India (geographically), but Thailand and Malaysia are closer. So why is Australia closer to India in vector space? This is most probably due to India and Australia appearing in the context of [cricket](https://en.wikipedia.org/wiki/Cricket) matches together.
```
word_vector = get_vector(glove, 'india')
closest_words(glove, word_vector)
```
We'll also create another function that will nicely print out the tuples returned by our `closest_words` function.
```
def print_tuples(tuples):
    for w, d in tuples:
        print(f'({d:02.04f}) {w}')
```
A final word to look at, 'sports'. As we can see, the closest words are most of the sports themselves.
```
word_vector = get_vector(glove, 'sports')
print_tuples(closest_words(glove, word_vector))
```
## Analogies
Another property of word embeddings is that they can be operated on just as any standard vector and give interesting results.
We'll show an example of this first, and then explain it:
```
def analogy(embeddings, word1, word2, word3, n=5):
    # get vectors for each word
    word1_vector = get_vector(embeddings, word1)
    word2_vector = get_vector(embeddings, word2)
    word3_vector = get_vector(embeddings, word3)

    # calculate analogy vector
    analogy_vector = word2_vector - word1_vector + word3_vector

    # find closest words to analogy vector
    candidate_words = closest_words(embeddings, analogy_vector, n+3)

    # filter out words already in analogy
    candidate_words = [(word, dist) for (word, dist) in candidate_words
                       if word not in [word1, word2, word3]][:n]

    print(f'{word1} is to {word2} as {word3} is to...')

    return candidate_words

print_tuples(analogy(glove, 'man', 'king', 'woman'))
```
This is the canonical example which shows off this property of word embeddings. So why does it work? Why does the vector of 'woman' added to the vector of 'king' minus the vector of 'man' give us 'queen'?
If we think about it, the vector calculated from 'king' minus 'man' gives us a "royalty vector". This is the vector associated with traveling from a man to his royal counterpart, a king. If we add this "royalty vector" to 'woman', this should travel to her royal equivalent, which is a queen!
We can do this with other analogies too. For example, this gets an "acting career vector":
```
print_tuples(analogy(glove, 'man', 'actor', 'woman'))
```
For a "baby animal vector":
```
print_tuples(analogy(glove, 'cat', 'kitten', 'dog'))
```
A "capital city vector":
```
print_tuples(analogy(glove, 'france', 'paris', 'england'))
```
A "musician's genre vector":
```
print_tuples(analogy(glove, 'elvis', 'rock', 'eminem'))
```
And an "ingredient vector":
```
print_tuples(analogy(glove, 'beer', 'barley', 'wine'))
```
## Correcting Spelling Mistakes
Another interesting property of word embeddings is that they can actually be used to correct spelling mistakes!
We'll put their findings into code and briefly explain them, but to read more about this, check out the [original thread](http://forums.fast.ai/t/nlp-any-libraries-dictionaries-out-there-for-fixing-common-spelling-errors/16411) and the associated [write-up](https://blog.usejournal.com/a-simple-spell-checker-built-from-word-vectors-9f28452b6f26).
First, we need to load up the much larger vocabulary GloVe vectors, this is due to the spelling mistakes not appearing in the smaller vocabulary.
**Note**: these vectors are very large (~2GB), so watch out if you have a limited internet connection.
```
glove = torchtext.vocab.GloVe(name = '840B', dim = 300)
```
Checking the vocabulary size of these embeddings, we can see we now have over 2 million unique words in our vocabulary!
```
glove.vectors.shape
```
As the vectors were trained with a much larger vocabulary on a larger corpus of text, the words that appear are a little different. Notice how the words 'north', 'south', 'pyongyang' and 'dprk' no longer appear in the most closest words to 'korea'.
```
word_vector = get_vector(glove, 'korea')
print_tuples(closest_words(glove, word_vector))
```
Our first step to correcting spelling mistakes is looking at the vector for a misspelling of the word 'reliable'.
```
word_vector = get_vector(glove, 'relieable')
print_tuples(closest_words(glove, word_vector))
```
Notice how the correct spelling, "reliable", does not appear in the top 10 closest words. Surely the misspellings of a word should appear next to the correct spelling of the word as they appear in the same context, right?
The hypothesis is that misspellings of words are all equally shifted away from their correct spelling. This is because articles of text that contain spelling mistakes are usually written in an informal manner where correct spelling doesn't matter as much (such as tweets/blog posts), thus spelling errors will appear together as they appear in context of informal articles.
Similar to how we created analogies before, we can create a "correct spelling" vector. This time, instead of using a single example to create our vector, we'll use the average of multiple examples. This will hopefully give better accuracy!
We first create a vector for the correct spelling, 'reliable', then calculate the difference between the "reliable vector" and each of the 8 misspellings of 'reliable'. As we are going to concatenate these 8 misspelling tensors together we need to unsqueeze a "batch" dimension to them.
```
reliable_vector = get_vector(glove, 'reliable')

reliable_misspellings = ['relieable', 'relyable', 'realible', 'realiable',
                         'relable', 'relaible', 'reliabe', 'relaiable']

diff_reliable = [(reliable_vector - get_vector(glove, s)).unsqueeze(0)
                 for s in reliable_misspellings]
```
We take the average of these 8 'difference from reliable' vectors to get our "misspelling vector".
```
misspelling_vector = torch.cat(diff_reliable, dim = 0).mean(dim = 0)
```
We can now correct other spelling mistakes using this "misspelling vector" by finding the closest words to the sum of the vector of a misspelled word and the "misspelling vector".
For a misspelling of "because":
```
word_vector = get_vector(glove, 'becuase')
print_tuples(closest_words(glove, word_vector + misspelling_vector))
```
For a misspelling of "definitely":
```
word_vector = get_vector(glove, 'defintiely')
print_tuples(closest_words(glove, word_vector + misspelling_vector))
```
For a misspelling of "consistent":
```
word_vector = get_vector(glove, 'consistant')
print_tuples(closest_words(glove, word_vector + misspelling_vector))
```
For a misspelling of "package":
```
word_vector = get_vector(glove, 'pakage')
print_tuples(closest_words(glove, word_vector + misspelling_vector))
```
For a more in-depth look at this, check out the [write-up](https://blog.usejournal.com/a-simple-spell-checker-built-from-word-vectors-9f28452b6f26).
```
# second notebook for Yelp1 Labs 18 Project
# data cleanup
# imports
# dataframe
import pandas as pd
import json
# NLP
import gensim
from gensim.utils import simple_preprocess
from gensim.parsing.preprocessing import STOPWORDS
from gensim import corpora
# import review.json file from https://www.yelp.com/dataset
with open('/Users/ianforrest/Desktop/coding/repos/yelp/yelp_dataset/review.json') as f:
    review = json.loads("[" +
                        f.read().replace("}\n{", "},\n{") +
                        "]")
# convert review.json files to pandas DataFrame 'df_review'
df_review = pd.DataFrame(review)
# check df_review to make sure it was created correctly
df_review.head()
# check column names of df_review
df_review.columns
# check value counts of 'stars' column
df_review['stars'].value_counts()
# check value counts of useful column
df_review['useful'].value_counts()
# check value counts of funny column
df_review['funny'].value_counts()
# check value counts of cool column
df_review['cool'].value_counts()
# check text of random reviews in dataset as part of initial exploration
df_review.iloc[3244,7]
# check text of random reviews in dataset as part of initial exploration
df_review.iloc[2342553,7]
# check text of random reviews in dataset as part of initial exploration
df_review.iloc[3,7]
# export df_review to .csv
#df_review.to_csv(r'/Users/ianforrest/Desktop/coding/repos/yelp/yelp_dataset/df_review.csv')
# create copy of dataframe to manipulate for model
df = df_review.copy()
df.head()
# add 'total_votes' column to dataframe; total of 'useful', 'funny', 'cool' columns
df['total_votes'] = df['useful'] + df['funny'] + df['cool']
df.head()
# drop unused columns from dataframe
df = df.drop(columns=['user_id', 'business_id', 'review_id', 'useful', 'funny', 'cool'])
df.head()
# convert 'date' column to datetime format
df['date'] = pd.to_datetime(df['date'])
df.dtypes
# check value counts of 'total_votes' column
df['total_votes'].value_counts()
# limit dataframe to reviews with 0 or more total votes
df = df.loc[df['total_votes'] >= 0]
# check value counts of 'total_votes' column
df['total_votes'].value_counts()
# remove date patterns and newline characters from the text column
df['text'] = df['text'].str.replace('(\d{1,2}[/. ](?:\d{1,2}|January|Jan)[/. ]\d{2}(?:\d{2})?)', '')
df['text'] = df['text'].str.replace('\n\n', '')
df['text'] = df['text'].str.replace('\\n', '')
df['text'] = df['text'].str.replace('\n', '')
# check text of random reviews in dataset to make sure HTML code is removed correctly
# backslashes before apostrophes are for display purposes only to indicate apostrophes are not quotation marks
df.iloc[2342553,1]
# initiate STOPWORDS for NLP Processing
STOPWORDS = set(STOPWORDS).union(set(['I', 'We', 'i', 'we', 'it', "it's",
'it', 'the', 'this', 'they', 'They',
'he', 'He', 'she', 'She', '\n', '\n\n']))
# create tokenize function to tokenize review text
def tokenize(text):
    return [token for token in simple_preprocess(text, deacc=True, min_len=4, max_len=40) if token not in STOPWORDS]
# add tokens column to dataframe
df['tokens'] = df['text'].apply(tokenize)
# check to make sure tokens were added to dataframe correctly
df.head()
# export cleaned dataframe with tokenized text to .csv file
df.to_csv(r'/Users/ianforrest/Desktop/coding/repos/yelp/yelp_dataset/df.csv')
df.sort_values(['total_votes'], ascending=False)
df.iloc[1292098,1]
df.dtypes
```
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Deep Convolutional Generative Adversarial Network
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/generative/dcgan">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png" />
View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/generative/dcgan.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/generative/dcgan.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/generative/dcgan.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This tutorial demonstrates how to generate images of handwritten digits using a [Deep Convolutional Generative Adversarial Network](https://arxiv.org/pdf/1511.06434.pdf) (DCGAN). The code is written using the [Keras Sequential API](https://www.tensorflow.org/guide/keras) with a `tf.GradientTape` training loop.
## What are GANs?
[Generative Adversarial Networks](https://arxiv.org/abs/1406.2661) (GANs) are one of the most interesting ideas in computer science today. Two models are trained simultaneously by an adversarial process. A *generator* ("the artist") learns to create images that look real, while a *discriminator* ("the art critic") learns to tell real images apart from fakes.

During training, the *generator* progressively becomes better at creating images that look real, while the *discriminator* becomes better at telling them apart. The process reaches equilibrium when the *discriminator* can no longer distinguish real images from fakes.

This notebook demonstrates this process on the MNIST dataset. The following animation shows a series of images produced by the *generator* as it was trained for 50 epochs. The images begin as random noise, and increasingly resemble handwritten digits over time.

To learn more about GANs, we recommend MIT's [Intro to Deep Learning](http://introtodeeplearning.com/) course.
### Import TensorFlow and other libraries
```
from __future__ import absolute_import, division, print_function, unicode_literals
try:
    # %tensorflow_version only exists in Colab.
    %tensorflow_version 2.x
except Exception:
    pass
import tensorflow as tf
tf.__version__
# To generate GIFs
!pip install imageio
import glob
import imageio
import matplotlib.pyplot as plt
import numpy as np
import os
import PIL
from tensorflow.keras import layers
import time
from IPython import display
```
### Load and prepare the dataset
You will use the MNIST dataset to train the generator and the discriminator. The generator will generate handwritten digits resembling the MNIST data.
```
(train_images, train_labels), (_, _) = tf.keras.datasets.mnist.load_data()
train_images = train_images.reshape(train_images.shape[0], 28, 28, 1).astype('float32')
train_images = (train_images - 127.5) / 127.5 # Normalize the images to [-1, 1]
BUFFER_SIZE = 60000
BATCH_SIZE = 256
# Batch and shuffle the data
train_dataset = tf.data.Dataset.from_tensor_slices(train_images).shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
```
## Create the models
Both the generator and discriminator are defined using the [Keras Sequential API](https://www.tensorflow.org/guide/keras#sequential_model).
### The Generator
The generator uses `tf.keras.layers.Conv2DTranspose` (upsampling) layers to produce an image from a seed (random noise). Start with a `Dense` layer that takes this seed as input, then upsample several times until you reach the desired image size of 28x28x1. Notice the `tf.keras.layers.LeakyReLU` activation for each layer, except the output layer which uses tanh.
```
def make_generator_model():
    model = tf.keras.Sequential()
    model.add(layers.Dense(7*7*256, use_bias=False, input_shape=(100,)))
    model.add(layers.BatchNormalization())
    model.add(layers.LeakyReLU())

    model.add(layers.Reshape((7, 7, 256)))
    assert model.output_shape == (None, 7, 7, 256)  # Note: None is the batch size

    model.add(layers.Conv2DTranspose(128, (5, 5), strides=(1, 1), padding='same', use_bias=False))
    assert model.output_shape == (None, 7, 7, 128)
    model.add(layers.BatchNormalization())
    model.add(layers.LeakyReLU())

    model.add(layers.Conv2DTranspose(64, (5, 5), strides=(2, 2), padding='same', use_bias=False))
    assert model.output_shape == (None, 14, 14, 64)
    model.add(layers.BatchNormalization())
    model.add(layers.LeakyReLU())

    model.add(layers.Conv2DTranspose(1, (5, 5), strides=(2, 2), padding='same', use_bias=False, activation='tanh'))
    assert model.output_shape == (None, 28, 28, 1)

    return model
```
Use the (as yet untrained) generator to create an image.
```
generator = make_generator_model()
noise = tf.random.normal([1, 100])
generated_image = generator(noise, training=False)
plt.imshow(generated_image[0, :, :, 0], cmap='gray')
```
### The Discriminator
The discriminator is a CNN-based image classifier.
```
def make_discriminator_model():
    model = tf.keras.Sequential()
    model.add(layers.Conv2D(64, (5, 5), strides=(2, 2), padding='same',
                            input_shape=[28, 28, 1]))
    model.add(layers.LeakyReLU())
    model.add(layers.Dropout(0.3))

    model.add(layers.Conv2D(128, (5, 5), strides=(2, 2), padding='same'))
    model.add(layers.LeakyReLU())
    model.add(layers.Dropout(0.3))

    model.add(layers.Flatten())
    model.add(layers.Dense(1))

    return model
```
Use the (as yet untrained) discriminator to classify the generated images as real or fake. The model will be trained to output positive values for real images, and negative values for fake images.
```
discriminator = make_discriminator_model()
decision = discriminator(generated_image)
print (decision)
```
## Define the loss and optimizers
Define loss functions and optimizers for both models.
```
# This method returns a helper function to compute cross entropy loss
cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True)
```
### Discriminator loss
This method quantifies how well the discriminator is able to distinguish real images from fakes. It compares the discriminator's predictions on real images to an array of 1s, and the discriminator's predictions on fake (generated) images to an array of 0s.
```
def discriminator_loss(real_output, fake_output):
    real_loss = cross_entropy(tf.ones_like(real_output), real_output)
    fake_loss = cross_entropy(tf.zeros_like(fake_output), fake_output)
    total_loss = real_loss + fake_loss
    return total_loss
```
### Generator loss
The generator's loss quantifies how well it was able to trick the discriminator. Intuitively, if the generator is performing well, the discriminator will classify the fake images as real (or 1). Here, we will compare the discriminators decisions on the generated images to an array of 1s.
```
def generator_loss(fake_output):
    return cross_entropy(tf.ones_like(fake_output), fake_output)
```
The discriminator and the generator optimizers are different since we will train two networks separately.
```
generator_optimizer = tf.keras.optimizers.Adam(1e-4)
discriminator_optimizer = tf.keras.optimizers.Adam(1e-4)
```
### Save checkpoints
This notebook also demonstrates how to save and restore models, which can be helpful in case a long running training task is interrupted.
```
checkpoint_dir = './training_checkpoints'
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
checkpoint = tf.train.Checkpoint(generator_optimizer=generator_optimizer,
discriminator_optimizer=discriminator_optimizer,
generator=generator,
discriminator=discriminator)
```
## Define the training loop
```
EPOCHS = 50
noise_dim = 100
num_examples_to_generate = 16
# We will reuse this seed over time (so it's easier
# to visualize progress in the animated GIF)
seed = tf.random.normal([num_examples_to_generate, noise_dim])
```
The training loop begins with generator receiving a random seed as input. That seed is used to produce an image. The discriminator is then used to classify real images (drawn from the training set) and fakes images (produced by the generator). The loss is calculated for each of these models, and the gradients are used to update the generator and discriminator.
```
# Notice the use of `tf.function`
# This annotation causes the function to be "compiled".
@tf.function
def train_step(images):
    noise = tf.random.normal([BATCH_SIZE, noise_dim])

    with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
        generated_images = generator(noise, training=True)

        real_output = discriminator(images, training=True)
        fake_output = discriminator(generated_images, training=True)

        gen_loss = generator_loss(fake_output)
        disc_loss = discriminator_loss(real_output, fake_output)

    gradients_of_generator = gen_tape.gradient(gen_loss, generator.trainable_variables)
    gradients_of_discriminator = disc_tape.gradient(disc_loss, discriminator.trainable_variables)

    generator_optimizer.apply_gradients(zip(gradients_of_generator, generator.trainable_variables))
    discriminator_optimizer.apply_gradients(zip(gradients_of_discriminator, discriminator.trainable_variables))

def train(dataset, epochs):
    for epoch in range(epochs):
        start = time.time()

        for image_batch in dataset:
            train_step(image_batch)

        # Produce images for the GIF as we go
        display.clear_output(wait=True)
        generate_and_save_images(generator,
                                 epoch + 1,
                                 seed)

        # Save the model every 15 epochs
        if (epoch + 1) % 15 == 0:
            checkpoint.save(file_prefix = checkpoint_prefix)

        print ('Time for epoch {} is {} sec'.format(epoch + 1, time.time()-start))

    # Generate after the final epoch
    display.clear_output(wait=True)
    generate_and_save_images(generator,
                             epochs,
                             seed)
```
**Generate and save images**
```
def generate_and_save_images(model, epoch, test_input):
    # Notice `training` is set to False.
    # This is so all layers run in inference mode (batchnorm).
    predictions = model(test_input, training=False)

    fig = plt.figure(figsize=(4, 4))

    for i in range(predictions.shape[0]):
        plt.subplot(4, 4, i+1)
        plt.imshow(predictions[i, :, :, 0] * 127.5 + 127.5, cmap='gray')
        plt.axis('off')

    plt.savefig('image_at_epoch_{:04d}.png'.format(epoch))
    plt.show()
```
## Train the model
Call the `train()` function defined above to train the generator and discriminator simultaneously. Note that training GANs can be tricky. It's important that the generator and discriminator do not overpower each other (e.g., that they train at a similar rate).
At the beginning of the training, the generated images look like random noise. As training progresses, the generated digits will look increasingly real. After about 50 epochs, they resemble MNIST digits. This may take about one minute / epoch with the default settings on Colab.
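One simple way to keep an eye on that balance is to log the two losses as training proceeds. The sketch below is a hypothetical variant of `train_step` (not part of the original tutorial) that reuses the models, loss functions, and optimizers defined above and simply returns both losses so they can be printed every few batches or epochs; roughly comparable, slowly changing values are a good sign, while one loss collapsing toward zero usually means that network is overpowering the other.
```
# A hypothetical train step that also reports the two losses (assumes the
# generator, discriminator, loss functions, and optimizers defined above).
@tf.function
def train_step_with_metrics(images):
    noise = tf.random.normal([BATCH_SIZE, noise_dim])
    with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
        generated_images = generator(noise, training=True)
        real_output = discriminator(images, training=True)
        fake_output = discriminator(generated_images, training=True)
        gen_loss = generator_loss(fake_output)
        disc_loss = discriminator_loss(real_output, fake_output)
    generator_optimizer.apply_gradients(
        zip(gen_tape.gradient(gen_loss, generator.trainable_variables),
            generator.trainable_variables))
    discriminator_optimizer.apply_gradients(
        zip(disc_tape.gradient(disc_loss, discriminator.trainable_variables),
            discriminator.trainable_variables))
    return gen_loss, disc_loss

# Example usage inside the epoch loop:
# gen_loss, disc_loss = train_step_with_metrics(image_batch)
# print('gen_loss: {:.3f}, disc_loss: {:.3f}'.format(gen_loss, disc_loss))
```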
```
train(train_dataset, EPOCHS)
```
Restore the latest checkpoint.
```
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))
```
## Create a GIF
```
# Display a single image using the epoch number
def display_image(epoch_no):
return PIL.Image.open('image_at_epoch_{:04d}.png'.format(epoch_no))
display_image(EPOCHS)
```
Use `imageio` to create an animated gif using the images saved during training.
```
anim_file = 'dcgan.gif'
with imageio.get_writer(anim_file, mode='I') as writer:
filenames = glob.glob('image*.png')
filenames = sorted(filenames)
last = -1
for i,filename in enumerate(filenames):
frame = 2*(i**0.5)
if round(frame) > round(last):
last = frame
else:
continue
image = imageio.imread(filename)
writer.append_data(image)
image = imageio.imread(filename)
writer.append_data(image)
import IPython
if IPython.version_info > (6,2,0,''):
display.Image(filename=anim_file)
```
If you're working in Colab you can download the animation with the code below:
```
try:
from google.colab import files
except ImportError:
pass
else:
files.download(anim_file)
```
## Next steps
This tutorial has shown the complete code necessary to write and train a GAN. As a next step, you might like to experiment with a different dataset, for example the Large-scale Celeb Faces Attributes (CelebA) dataset [available on Kaggle](https://www.kaggle.com/jessicali9530/celeba-dataset). To learn more about GANs we recommend the [NIPS 2016 Tutorial: Generative Adversarial Networks](https://arxiv.org/abs/1701.00160).
| github_jupyter |
```
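# Evaluation script for a pre-trained SYCLOP DeepQNetwork agent on padded MNIST:
# the saved network's Q-values drive a Boltzmann (softmax) action policy that
# moves the sensor over each image, and the resulting action sequences are
# pickled together with the MNIST labels for later analysis.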
import pickle
from misc import *
import SYCLOP_env as syc
from RL_brain_b import DeepQNetwork
import cv2
import time
from mnist import MNIST
mnist = MNIST('/home/bnapp/datasets/mnist/')
images, labels = mnist.load_training()
# some_mnistSM =[ cv2.resize(1.+np.reshape(uu,[28,28]), dsize=(256, 256)) for uu in images[:2]]#[:4096]]
some_samples_for_setup= prep_mnist_padded_images(2)
# run_dir = 'saved_runs/run_syclop_generic1.py_noname_1576060868_0/' #padded mnist beta 0.1 speed penalty 5
# result_type = 'nwk2.nwk'
# run_dir = 'saved_runs/run_syclop_generic1.py_noname_1576147784_0/' #padded mnist beta 0.1 speed penalty 20
run_dir = 'saved_runs/run_syclop_generic1.py_noname_1576403573_0/' #padded mnist beta 0.1 speed penalty 0
result_type = 'tempX_1.nwk'
hp = HP()
hp.mem_depth=1
hp.logmode=False
batch_size=256
action_space_size=9
# images = some_mnistSM
number_of_images = len(images)
reward = syc.Rewards()
observation_size = 256*4
RL = DeepQNetwork(action_space_size, observation_size*hp.mem_depth,#sensor.frame_size+2,
reward_decay=0.99,
e_greedy=1-1e-9,
e_greedy0=1-1e-9,
replace_target_iter=10,
memory_size=100000,
e_greedy_increment=0.0001,
learning_rate=0.0025,
double_q=False,
dqn_mode=True,
state_table=np.zeros([1,observation_size*hp.mem_depth]),
soft_q_type='boltzmann',
beta=0.1
)
def local_observer(sensor,agent):
if hp.logmode:
normfactor=1.0
else:
normfactor = 1.0/256.0
return normfactor*np.concatenate([relu_up_and_down(sensor.central_dvs_view),
relu_up_and_down(cv2.resize(1.0*sensor.dvs_view, dsize=(16, 16), interpolation=cv2.INTER_AREA))])
observation = np.random.uniform(0,1,size=[hp.mem_depth, observation_size])
scene_bb = [None]*batch_size
sensor_bb =[None]*batch_size
agent_bb = [None]*batch_size
action_bb = [None]*batch_size
action_list_bb = [None]*batch_size
q_list_bb = [None]*batch_size
observation_bb = [None]*batch_size
with open(run_dir+'/hp.pkl','rb') as f:
this_hp = pickle.load(f)
for bb in range(batch_size):
scene_bb[bb] = syc.Scene(frame_list=some_samples_for_setup[0:1])
sensor_bb[bb] = syc.Sensor()
agent_bb[bb] = syc.Agent(max_q = [scene_bb[bb].maxx-sensor_bb[bb].hp.winx,scene_bb[bb].maxy-sensor_bb[bb].hp.winy])
agent_bb[bb].hp.action_space = this_hp.agent.action_space
RL.dqn.load_nwk_param(run_dir+'/'+ result_type)
with open(run_dir+'/hp.pkl','rb') as f:
this_hp = pickle.load(f)
hp.fading_mem = this_hp.fading_mem +0.0 #to avoid assignment by address
size=(28,28)
offset=(0,0)
action_records=[]
q_records=[]
observation_feeder=np.zeros([batch_size,1024])
for image_num,image in enumerate(images):
step = 0
episode = 0
for batch_num in range(len(images)//batch_size):
for bb in range(batch_size):
action_list_bb[bb] = []
# q_list_bb[bb] = []
observation_bb[bb] = np.random.uniform(0,1,size=[hp.mem_depth, observation_size])
observation_bb[bb] = np.random.uniform(0,1,size=[hp.mem_depth, observation_size])
# scene_bb[bb].current_frame = image_num[bb]
#### sizing story:
image_resized=cv2.resize(0.0+np.reshape(images[batch_num*batch_size+bb],[28,28]), dsize=size)
scene_bb[bb].image = build_mnist_padded([image_resized],y_size=size[1],x_size=size[0],offset=offset)
# scene_bb[bb].image = build_mnist_padded([images[batch_num*batch_size+bb]])
agent_bb[bb].reset()
agent_bb[bb].q_ana[1]=128./2.-32
agent_bb[bb].q_ana[0]=128./2-32
agent_bb[bb].q = np.int32(np.floor(agent_bb[bb].q_ana))
sensor_bb[bb].reset()
sensor_bb[bb].update(scene_bb[bb], agent_bb[bb])
sensor_bb[bb].update(scene_bb[bb], agent_bb[bb])
time1=time.time()
for step_prime in range(1000):
deep_time1=time.time()
# action = RL.choose_action(observation.reshape([-1]))
for bb in range(batch_size):
observation_feeder[bb,:]=observation_bb[bb].reshape([1,-1])
oo = RL.dqn.eval_eval(observation_feeder)
boltzmann_measure = np.exp(RL.beta * (oo - np.max(oo, axis=1).reshape([-1, 1])))  # TODO: the row-wise max is subtracted to avoid exponent overflow; move this into a separate function
boltzmann_measure = boltzmann_measure / np.sum(boltzmann_measure, axis=1).reshape([-1,1])
for bb in range(batch_size):
action_bb[bb] = np.random.choice(list(range(RL.n_actions)),1, p=boltzmann_measure[bb,:].reshape([-1]))[0]
# action_bb= [a for a in np.argmax(oo,axis=1)]
deep_time2=time.time()
shallow_time1=time.time()
for bb in range(batch_size):
agent_bb[bb].act(action_bb[bb])
action_list_bb[bb].append(action_bb[bb])
# q_list_bb[bb].append(agent_bb[bb].q_ana)
sensor_bb[bb].update(scene_bb[bb],agent_bb[bb])
observation_bb[bb] *= hp.fading_mem
observation_bb[bb] += local_observer(sensor_bb[bb], agent_bb[bb]) # todo: generalize
shallow_time2=time.time()
# print('deep:',deep_time2-deep_time1,'shallow:',shallow_time2-shallow_time1)
time2=time.time()
print('batch num:',batch_num,'wall time consumed:',time2-time1)
for bb in range(batch_size):
action_records.append(action_list_bb[bb])
# q_records.append(q_list_bb[bb])
len(action_records)
with open('mnist_padded_b0p1_v0_X28_Tx0y0_act_full1.pkl','wb') as f:
pickle.dump([action_records[:30000],labels[:30000]],f)
with open('mnist_padded_b0p1_v0_X28_Tx0y0_act_full2.pkl','wb') as f:
pickle.dump([action_records[30000:],labels[30000:]],f)
np.shape(sensor_bb[0].frame_view)
agent_bb[0].q_ana
```
| github_jupyter |
```
import lifelines
import pymc as pm
from pyBMA.CoxPHFitter import CoxPHFitter
import matplotlib.pyplot as plt
import numpy as np
from numpy import log
from datetime import datetime
import pandas as pd
%matplotlib inline
```
The first step in any data analysis is acquiring and munging the data.
Our starting data set can be found here:
http://jakecoltman.com (in the PyData post)
It is designed to be roughly similar to the output from DCM's path to conversion.
Download the file and transform it into something with the columns:
id, lifetime, age, male, event, search, brand
where lifetime is the total time for which we observed someone without a conversion, and event should be 1 if we see a conversion and 0 if we don't. Note that all values should be converted into ints.
It is useful to note that end_date = datetime.datetime(2016, 5, 3, 20, 36, 8, 92165)
```
running_id = 0
output = [[0]]
with open("E:/output.txt") as file_open:
for row in file_open.read().split("\n"):
cols = row.split(",")
if cols[0] == output[-1][0]:
output[-1].append(cols[1])
output[-1].append(True)
else:
output.append(cols)
output = output[1:]
for row in output:
if len(row) == 6:
row += [datetime(2016, 5, 3, 20, 36, 8, 92165), False]
output = output[1:-1]
def convert_to_days(dt):
day_diff = dt / np.timedelta64(1, 'D')
if day_diff == 0:
return 23.0
else:
return day_diff
df = pd.DataFrame(output, columns=["id", "advert_time", "male","age","search","brand","conversion_time","event"])
df["lifetime"] = pd.to_datetime(df["conversion_time"]) - pd.to_datetime(df["advert_time"])
df["lifetime"] = df["lifetime"].apply(convert_to_days)
df["male"] = df["male"].astype(int)
df["search"] = df["search"].astype(int)
df["brand"] = df["brand"].astype(int)
df["age"] = df["age"].astype(int)
df["event"] = df["event"].astype(int)
df = df.drop('advert_time', 1)
df = df.drop('conversion_time', 1)
df = df.set_index("id")
df = df.dropna(thresh=2)
df.median()
###Parametric Bayes
#Shout out to Cam Davidson-Pilon
## Example fully worked model using toy data
## Adapted from http://blog.yhat.com/posts/estimating-user-lifetimes-with-pymc.html
## Note that we've made some corrections
N = 2500
##Generate some random data
lifetime = pm.rweibull( 2, 5, size = N )
birth = pm.runiform(0, 10, N)
censor = ((birth + lifetime) >= 10)
lifetime_ = lifetime.copy()
lifetime_[censor] = 10 - birth[censor]
alpha = pm.Uniform('alpha', 0, 20)
beta = pm.Uniform('beta', 0, 20)
@pm.observed
def survival(value=lifetime_, alpha = alpha, beta = beta ):
return sum( (1-censor)*(log( alpha/beta) + (alpha-1)*log(value/beta)) - (value/beta)**(alpha))
mcmc = pm.MCMC([alpha, beta, survival ] )
mcmc.sample(50000, 30000)
pm.Matplot.plot(mcmc)
mcmc.trace("alpha")[:]
```
Problems:
1 - Try to fit your data from section 1
2 - Use the results to plot the distribution of the median
Note that the median of a Weibull distribution is:
$$\beta\,(\ln 2)^{1/\alpha}$$
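This follows from setting the Weibull survival function (with scale $\beta$ and shape $\alpha$, as in the likelihood above) equal to one half:
$$S(t) = e^{-(t/\beta)^{\alpha}} = \tfrac{1}{2} \quad\Rightarrow\quad t_{\mathrm{median}} = \beta\,(\ln 2)^{1/\alpha}$$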
```
censor = np.array(df["event"].apply(lambda x: 0 if x else 1).tolist())
alpha = pm.Uniform("alpha", 0,50)
beta = pm.Uniform("beta", 0,50)
@pm.observed
def survival(value=df["lifetime"], alpha = alpha, beta = beta ):
return sum( (1-censor)*(np.log( alpha/beta) + (alpha-1)*np.log(value/beta)) - (value/beta)**(alpha))
mcmc = pm.MCMC([alpha, beta, survival ] )
mcmc.sample(10000)
def weibull_median(alpha, beta):
return beta * ((log(2)) ** ( 1 / alpha))
plt.hist([weibull_median(x[0], x[1]) for x in zip(mcmc.trace("alpha"), mcmc.trace("beta"))])
```
Problems:
4 - Try adjusting the number of samples used for burn-in and thinning
5 - Try adjusting the prior and see how it affects the estimate
```
#### Adjust burn and thin, both parameters of the mcmc sample function
#### Narrow and broaden prior
```
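For problem 4, a minimal sketch of what adjusting those settings looks like with PyMC2 (reusing the `mcmc` object built above):
```
# Draw 50,000 samples, discard the first 30,000 as burn-in, keep every 10th sample
mcmc.sample(iter=50000, burn=30000, thin=10)
pm.Matplot.plot(mcmc)
```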
Problems:
7 - Try testing whether the median is greater than different values
```
#### Hypothesis testing
```
If we want to look at covariates, we need a new approach.
We'll use the Cox proportional hazards model, a very popular regression model.
To fit it in Python we use the lifelines module:
http://lifelines.readthedocs.io/en/latest/
```
### Fit a Cox proportional hazards model
```
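As a minimal sketch (assuming the `df` prepared above, with `lifetime` as the duration column and `event` as the event indicator), the fit might look like this:
```
from lifelines import CoxPHFitter

# Fit a Cox proportional hazards model on the munged dataframe
cph = CoxPHFitter()
cph.fit(df, duration_col="lifetime", event_col="event")
cph.print_summary()  # coefficients, hazard ratios, confidence intervals
```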
Once we've fit the data, we need to do something useful with it. Try to do the following things:
1 - Plot the baseline survival function
2 - Predict the functions for a particular set of features
3 - Plot the survival function for two different sets of features
4 - For your results in part 3, calculate how much more likely a death event is for one than the other for a given period of time
```
#### Plot baseline hazard function
#### Predict
#### Plot survival functions for different covariates
#### Plot some odds
```
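A possible sketch of those steps, assuming the fitted `cph` from above (the two covariate profiles below are made up purely for illustration):
```
# 1 - baseline survival function
cph.baseline_survival_.plot()

# 2 & 3 - survival functions for two hypothetical covariate profiles
profiles = pd.DataFrame({"age": [25, 60], "male": [1, 0], "search": [1, 0], "brand": [0, 1]})
cph.predict_survival_function(profiles).plot()

# 4 - ratio of the two predicted cumulative hazards over time
hazards = cph.predict_cumulative_hazard(profiles)
print((hazards[0] / hazards[1]).tail())
```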
Model selection
Difficult to do with classic tools (here)
Problem:
1 - Calculate the BMA coefficient values
2 - Try running with different priors
```
#### BMA Coefficient values
#### Different priors
```
| github_jupyter |
# Probability Distributions
# Some typical stuff we'll likely use
```
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%config InlineBackend.figure_format = 'retina'
```
# [SciPy](https://scipy.org)
### [scipy.stats](https://docs.scipy.org/doc/scipy-0.14.0/reference/stats.html)
```
import scipy as sp
import scipy.stats as st
```
# Binomial Distribution
### <font color=darkred> **Example**: A couple, who are both carriers for a recessive disease, wish to have 5 children. They want to know the probability that they will have four healthy kids.</font>
In this case the random variable is the number of healthy kids.
```
# number of trials (kids)
n = 5
# probability of success on each trial
# i.e. probability that each child will be healthy = 1 - 0.5 * 0.5 = 0.75
p = 0.75
# a binomial distribution object
dist = st.binom(n, p)
# probability of four healthy kids
dist.pmf(4)
print(f"The probability of having four healthy kids is {dist.pmf(4):.3f}")
```
### <font color=darkred>Probability to have each of 0-5 healthy kids.</font>
```
# all possible # of successes out of n trials
# i.e. all possible outcomes of the random variable
# i.e. all possible number of healthy kids = 0-5
numHealthyKids = np.arange(n+1)
numHealthyKids
# probability of obtaining each possible number of successes
# i.e. probability of having each possible number of healthy children
pmf = dist.pmf(numHealthyKids)
pmf
```
### <font color=darkred>Visualize the probability to have each of 0-5 healthy kids.</font>
```
plt.bar(numHealthyKids, pmf)
plt.xlabel('# healthy children', fontsize=18)
plt.ylabel('probability', fontsize=18);
```
### <font color=darkred>Probability to have at least 4 healthy kids.</font>
```
# sum of probabilities of 4 and 5 healthy kids
pmf[-2:].sum()
# remaining probability after subtracting CDF for 3 kids
1 - dist.cdf(3)
# survival function for 3 kids
dist.sf(3)
```
### <font color=darkred>What is the expected number of healthy kids?</font>
```
print(f"The expected number of healthy kids is {dist.mean()}")
```
### <font color=darkred>How sure are we about the above estimate?</font>
```
print(f"The expected number of healthy kids is {dist.mean()} ± {dist.std():.2f}")
```
# <font color=red> Exercise</font>
Should the couple consider having six children?
1. Plot the *pmf* for the probability of each possible number of healthy children.
2. What's the probability that they will all be healthy?
# Poisson Distribution
### <font color=darkred> **Example**: Assume that the rate of deleterious mutations is ~1.2 per diploid genome. What is the probability that an individual has 8 or more spontaneous deleterious mutations?</font>
In this case the random variable is the number of deleterious mutations within an individuals genome.
```
# the rate of deleterious mutations is 1.2 per diploid genome
rate = 1.2
# poisson distribution describing the predicted number of spontaneous mutations
dist = st.poisson(rate)
# let's look at the probability for 0-10 mutations
numMutations = np.arange(11)
plt.bar(numMutations, dist.pmf(numMutations))
plt.xlabel('# mutations', fontsize=18)
plt.ylabel('probability', fontsize=18);
print(f"Probability of less than 8 mutations = {dist.cdf(7)}")
print(f"Probability of 8 or more mutations = {dist.sf(7)}")
dist.cdf(7) + dist.sf(7)
```
# <font color=red> Exercise</font>
For the above example, what is the probability that an individual has three or fewer mutations?
# Exponential Distribution
### <font color=darkred> **Example**: Assume that a neuron spikes 1.5 times per second on average. Plot the probability density function of interspike intervals from zero to five seconds with a resolution of 0.01 seconds.</font>
In this case the random variable is the interspike interval time.
```
# spike rate per second
rate = 1.5
# exponential distribution describing the neuron's predicted interspike intervals
dist = st.expon(loc=0, scale=1/rate)
# plot interspike intervals from 0-5 seconds at 0.01 sec resolution
intervalsSec = np.linspace(0, 5, 501)
# probability density for each interval
pdf = dist.pdf(intervalsSec)
plt.plot(intervalsSec, pdf)
plt.xlabel('interspike interval (sec)', fontsize=18)
plt.ylabel('pdf', fontsize=18);
```
### <font color=darkred>What is the average interval?</font>
```
print(f"Average interspike interval = {dist.mean():.2f} seconds.")
```
### <font color=darkred>time constant = 1 / rate = mean</font>
```
tau = 1 / rate
tau
```
### <font color=darkred> What is the probability that an interval will be between 1 and 2 seconds?</font>
```
prob1to2 = dist.cdf(2) - dist.cdf(1);
print(f"Probability of an interspike interval being between 1 and 2 seconds is {prob1to2:.2f}")
```
### <font color=darkred> For what time *T* is the probability that an interval is shorter than *T* equal to 25%?</font>
```
timeAtFirst25PercentOfDist = dist.ppf(0.25) # percent point function
print(f"There is a 25% chance that an interval is shorter than {timeAtFirst25PercentOfDist:.2f} seconds.")
```
# <font color=red> Exercise</font>
For the above example, what is the probability that 3 seconds will pass without any spikes?
# Normal Distribution
### <font color=darkred> **Example**: Under basal conditions the resting membrane voltage of a neuron fluctuates around -70 mV with a variance of 10 mV.</font>
In this case the random variable is the neuron's resting membrane voltage.
```
# mean resting membrane voltage (mV)
mu = -70
# standard deviation about the mean
sd = np.sqrt(10)
# normal distribution describing the neuron's predicted resting membrane voltage
dist = st.norm(mu, sd)
# membrane voltages from -85 to -55 mV
mV = np.linspace(-85, -55, 301)
# probability density for each membrane voltage in mV
pdf = dist.pdf(mV)
plt.plot(mV, pdf)
plt.xlabel('membrane voltage (mV)', fontsize=18)
plt.ylabel('pdf', fontsize=18);
```
### <font color=darkred> What range of membrane voltages (centered on the mean) accounts for 95% of the probability?</font>
```
low = dist.ppf(0.025) # first 2.5% of distribution
high = dist.ppf(0.975) # first 97.5% of distribution
print(f"95% of membrane voltages are expected to fall within {low :.1f} and {high :.1f} mV.")
```
# <font color=red> Exercise</font>
In a resting neuron, what's the probability that you would measure a membrane voltage greater than -65 mV?
If you measure -65 mV, is the neuron at rest?
# <font color=red> Exercise</font>
What probability distribution might best describe the number of synapses per millimeter of dendrite?
A) Binomial
B) Poisson
C) Exponential
D) Normal
# <font color=red> Exercise</font>
What probability distribution might best describe the time a protein spends in its active conformation?
A) Binomial
B) Poisson
C) Exponential
D) Normal
# <font color=red> Exercise</font>
What probability distribution might best describe the weights of adult mice in a colony?
A) Binomial
B) Poisson
C) Exponential
D) Normal
# <font color=red> Exercise</font>
What probability distribution might best describe the number of times a subject is able to identify the correct target in a series of trials?
A) Binomial
B) Poisson
C) Exponential
D) Normal
| github_jupyter |
### Dr. Ignaz Semmelweis
```
import pandas as pd
import matplotlib.pyplot as plt
from IPython.display import display
# Read datasets/yearly_deaths_by_clinic.csv into yearly
yearly = pd.read_csv('datasets/yearly_deaths_by_clinic.csv')
# Print out yearly
display(yearly)
```
### The alarming number of deaths
```
# Calculate proportion of deaths per no. births
yearly['proportion_deaths'] = yearly.deaths / yearly.births
# Extract Clinic 1 data into clinic_1 and Clinic 2 data into clinic_2
clinic_1 = yearly[yearly.clinic == 'clinic 1']
clinic_2 = yearly[yearly.clinic == 'clinic 2']
# Print out clinic_1
display(clinic_1)
```
### Death at the clinics
```
# Plot yearly proportion of deaths at the two clinics
ax = clinic_1.plot(x='year', y='proportion_deaths', label='Clinic 1')
clinic_2.plot(x='year', y='proportion_deaths', label='Clinic 2', ax=ax)
plt.ylabel("Proportion deaths")
plt.show()
```
### The handwashing
```
# Read datasets/monthly_deaths.csv into monthly
monthly = pd.read_csv('datasets/monthly_deaths.csv', parse_dates=['date'])
# Calculate proportion of deaths per no. births
monthly["proportion_deaths"] = monthly.deaths/monthly.births
# Print out the first rows in monthly
display(monthly.head())
```
### The effect of handwashing
```
# Date when handwashing was made mandatory
handwashing_start = pd.to_datetime('1847-06-01')
# Split monthly into before and after handwashing_start
before_washing = monthly[monthly.date < handwashing_start]
after_washing = monthly[monthly.date >= handwashing_start]
# Plot monthly proportion of deaths before and after handwashing
ax = before_washing.plot(x='date',
y='proportion_deaths', label='Before Washing')
after_washing.plot(x='date',y='proportion_deaths', label='After Washing', ax=ax)
plt.ylabel("Proportion deaths")
plt.show()
```
### More handwashing, fewer deaths?
```
# Difference in mean monthly proportion of deaths due to handwashing
before_proportion = before_washing.proportion_deaths
after_proportion = after_washing.proportion_deaths
mean_diff = after_proportion.mean() - before_proportion.mean()
print(mean_diff)
```
### Bootstrap analysis
```
# A bootstrap analysis of the reduction of deaths due to handwashing
boot_mean_diff = []
for i in range(3000):
boot_before = before_proportion.sample(replace=True,n=len(before_proportion))
boot_after = after_proportion.sample(replace=True,n=len(after_proportion))
boot_mean_diff.append(boot_after.mean()-boot_before.mean())
# Calculating a 95% confidence interval from boot_mean_diff
confidence_interval = pd.Series(boot_mean_diff).quantile([0.025, 0.975] )
print(confidence_interval)
```
### Conclusion
```
# The data Semmelweis collected points to the conclusion that:
doctors_should_wash_their_hands = True
print(doctors_should_wash_their_hands)
```
| github_jupyter |
```
# Import the necessary libraries
import numpy as np
import pandas as pd
import os
import time
import warnings
import gc
gc.collect()
import os
from six.moves import urllib
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
import datetime
warnings.filterwarnings('ignore')
%matplotlib inline
plt.style.use('seaborn')
from scipy import stats
from scipy.stats import norm, skew
from sklearn.preprocessing import StandardScaler
#Add All the Models Libraries
# preprocessing
from sklearn.preprocessing import LabelEncoder
label_enc = LabelEncoder()
# Scalers
from sklearn.utils import shuffle
from sklearn.pipeline import Pipeline
from sklearn.pipeline import FeatureUnion
# Models
from sklearn.linear_model import Lasso
from sklearn.metrics import mean_squared_log_error,mean_squared_error, r2_score,mean_absolute_error
from sklearn.model_selection import train_test_split #training and testing data split
from sklearn import metrics #accuracy measure
from sklearn.metrics import confusion_matrix #for confusion matrix
from scipy.stats import reciprocal, uniform
from sklearn.model_selection import StratifiedKFold, RepeatedKFold
# Cross-validation
from sklearn.model_selection import KFold #for K-fold cross validation
from sklearn.model_selection import cross_val_score #score evaluation
from sklearn.model_selection import cross_val_predict #prediction
from sklearn.model_selection import cross_validate
# GridSearchCV
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import RandomizedSearchCV
#Common data processors
from sklearn.preprocessing import OneHotEncoder, LabelEncoder
from sklearn import feature_selection
from sklearn import model_selection
from sklearn import metrics
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.utils import check_array
from scipy import sparse
# to make this notebook's output stable across runs
np.random.seed(123)
gc.collect()
# To plot pretty figures
%matplotlib inline
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
#Reduce the memory usage - by Panchajanya Banerjee
def reduce_mem_usage(df, verbose=True):
numerics = ['int16', 'int32', 'int64', 'float16', 'float32', 'float64']
start_mem = df.memory_usage().sum() / 1024**2
for col in df.columns:
col_type = df[col].dtypes
if col_type in numerics:
c_min = df[col].min()
c_max = df[col].max()
if str(col_type)[:3] == 'int':
if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:
df[col] = df[col].astype(np.int8)
elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max:
df[col] = df[col].astype(np.int16)
elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max:
df[col] = df[col].astype(np.int32)
elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max:
df[col] = df[col].astype(np.int64)
else:
if c_min > np.finfo(np.float16).min and c_max < np.finfo(np.float16).max:
df[col] = df[col].astype(np.float16)
elif c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max:
df[col] = df[col].astype(np.float32)
else:
df[col] = df[col].astype(np.float64)
end_mem = df.memory_usage().sum() / 1024**2
if verbose: print('Mem. usage decreased to {:5.2f} Mb ({:.1f}% reduction)'.format(end_mem, 100 * (start_mem - end_mem) / start_mem))
return df
train = reduce_mem_usage(pd.read_csv('train.csv',parse_dates=["first_active_month"]))
test = reduce_mem_usage(pd.read_csv('test.csv', parse_dates=["first_active_month"]))
test.first_active_month = test.first_active_month.fillna(pd.to_datetime('2017-09-01'))
test.isnull().sum()
# Now extract the month, year, day, weekday
train["month"] = train["first_active_month"].dt.month
train["year"] = train["first_active_month"].dt.year
train['week'] = train["first_active_month"].dt.weekofyear
train['dayofweek'] = train['first_active_month'].dt.dayofweek
train['days'] = (datetime.date(2018, 2, 1) - train['first_active_month'].dt.date).dt.days
train['quarter'] = train['first_active_month'].dt.quarter
test["month"] = test["first_active_month"].dt.month
test["year"] = test["first_active_month"].dt.year
test['week'] = test["first_active_month"].dt.weekofyear
test['dayofweek'] = test['first_active_month'].dt.dayofweek
test['days'] = (datetime.date(2018, 2, 1) - test['first_active_month'].dt.date).dt.days
test['quarter'] = test['first_active_month'].dt.quarter
# Taking Reference from Other Kernels
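# Aggregate the historical transactions to one row per card_id:
# min/max/mean/var of amounts, installments and dates, nunique counts,
# plus the modal merchant / category / state ids.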
def aggregate_transaction_hist(trans, prefix):
agg_func = {
'purchase_date' : ['max','min'],
'month_diff' : ['mean', 'min', 'max', 'var'],
'month_diff_lag' : ['mean', 'min', 'max', 'var'],
'weekend' : ['sum', 'mean'],
'authorized_flag': ['sum', 'mean'],
'category_1': ['sum','mean', 'max','min'],
'purchase_amount': ['sum', 'mean', 'max', 'min', 'std'],
'installments': ['sum', 'mean', 'max', 'min', 'std'],
'month_lag': ['max','min','mean','var'],
'card_id' : ['size'],
'month': ['nunique'],
'hour': ['nunique'],
'weekofyear': ['nunique'],
'dayofweek': ['nunique'],
'year': ['nunique'],
'subsector_id': ['nunique'],
'merchant_category_id' : ['nunique', lambda x:stats.mode(x)[0]],
'merchant_id' : ['nunique', lambda x:stats.mode(x)[0]],
'state_id' : ['nunique', lambda x:stats.mode(x)[0]],
}
agg_trans = trans.groupby(['card_id']).agg(agg_func)
agg_trans.columns = [prefix + '_'.join(col).strip() for col in agg_trans.columns.values]
agg_trans.reset_index(inplace=True)
df = (trans.groupby('card_id').size().reset_index(name='{}transactions_count'.format(prefix)))
agg_trans = pd.merge(df, agg_trans, on='card_id', how='left')
return agg_trans
transactions = reduce_mem_usage(pd.read_csv('historical_transactions_clean_outlier.csv'))
transactions = transactions.loc[transactions.purchase_amount < 50,]
transactions['authorized_flag'] = transactions['authorized_flag'].map({'Y': 1, 'N': 0})
transactions['category_1'] = transactions['category_1'].map({'Y': 0, 'N': 1})
#Feature Engineering - Adding new features
transactions['purchase_date'] = pd.to_datetime(transactions['purchase_date'])
transactions['year'] = transactions['purchase_date'].dt.year
transactions['weekofyear'] = transactions['purchase_date'].dt.weekofyear
transactions['month'] = transactions['purchase_date'].dt.month
transactions['dayofweek'] = transactions['purchase_date'].dt.dayofweek
transactions['weekend'] = (transactions.purchase_date.dt.weekday >=5).astype(int)
transactions['hour'] = transactions['purchase_date'].dt.hour
transactions['quarter'] = transactions['purchase_date'].dt.quarter
transactions['month_diff'] = ((pd.to_datetime('01/03/2018') - transactions['purchase_date']).dt.days)//30
transactions['month_diff_lag'] = transactions['month_diff'] + transactions['month_lag']
gc.collect()
def aggregate_bymonth(trans, prefix):
agg_func = {
'purchase_amount': ['sum', 'mean'],
'card_id' : ['size'],
'merchant_category_id' : ['nunique', lambda x:stats.mode(x)[0]],
# 'merchant_id' : ['nunique', lambda x:stats.mode(x)[0]],
}
agg_trans = trans.groupby(['card_id','month','year']).agg(agg_func)
agg_trans.columns = [prefix + '_'.join(col).strip() for col in agg_trans.columns.values]
agg_trans.reset_index(inplace=True)
df = (trans.groupby('card_id').size().reset_index(name='{}transactions_count'.format(prefix)))
agg_trans = pd.merge(df, agg_trans, on='card_id', how='left')
return agg_trans
merge = aggregate_bymonth(transactions, prefix='hist_')
merge = merge.drop(['hist_transactions_count'], axis = 1)
merge['Date'] = pd.to_datetime(merge[['year', 'month']].assign(Day=1))
df1 = merge.groupby(['card_id', 'hist_merchant_category_id_<lambda>']).size().reset_index(name='Count')
df1 = df1.loc[df1.Count > 1]
df1 = df1.groupby(['card_id']).agg({'Count':['sum']})
df1.columns = ['category_repeated_month']
train = pd.merge(train, df1, on='card_id',how='left')
test = pd.merge(test, df1, on='card_id',how='left')
df1
gc.collect()
## Second last month
amerge = merge.sort_values('Date').groupby('card_id',
as_index=False).apply(lambda x: x.iloc[-2])[['card_id','hist_card_id_size','hist_purchase_amount_sum','hist_purchase_amount_mean']]
new_names = [(i,i+'_last2') for i in amerge.iloc[:, 1:].columns.values]
amerge.rename(columns = dict(new_names), inplace=True)
train = pd.merge(train, amerge, on='card_id',how='left')
test = pd.merge(test, amerge, on='card_id',how='left')
gc.collect()
# last month and first month
merge1 = merge.loc[merge.groupby('card_id').Date.idxmax(),:][[ 'card_id','hist_card_id_size',
'hist_purchase_amount_sum','hist_purchase_amount_mean']]
new_names = [(i,i+'_last') for i in merge1.iloc[:, 1:].columns.values]
merge1.rename(columns = dict(new_names), inplace=True)
merge2 = merge.loc[merge.groupby('card_id').Date.idxmin(),:][['card_id','hist_card_id_size',
'hist_purchase_amount_sum','hist_purchase_amount_mean']]
new_names = [(i,i+'_first') for i in merge2.iloc[:, 1:].columns.values]
merge2.rename(columns = dict(new_names), inplace=True)
comb = pd.merge(merge1, merge2, on='card_id',how='left')
train = pd.merge(train, comb, on='card_id',how='left')
test = pd.merge(test, comb, on='card_id',how='left')
gc.collect()
## Same merchant purchase
df = (transactions.groupby(['card_id','merchant_id','purchase_amount']).size().reset_index(name='count_hist'))
df['purchase_amount_hist'] = df.groupby(['card_id','merchant_id'])['purchase_amount'].transform('sum')
df['count_hist'] = df.groupby(['card_id','merchant_id'])['count_hist'].transform('sum')
df = df.drop_duplicates()
df = df.loc[df['count_hist'] >= 2]
agg_func = {
'count_hist' : ['count'],
'purchase_amount_hist':['sum','mean'],
'purchase_amount':['sum','mean'],
}
df = df.groupby(['card_id']).agg(agg_func)
df.columns = [''.join(col).strip() for col in df.columns.values]
new_names = [(i,i+'_merhist') for i in df.iloc[:, 3:].columns.values]
df.rename(columns = dict(new_names), inplace=True)
train = pd.merge(train, df, on='card_id',how='left')
test = pd.merge(test, df, on='card_id',how='left')
# Same category purchase
df = (transactions.groupby(['card_id','merchant_category_id','purchase_amount']).size().reset_index(name='hist_count'))
df['hist_purchase_amount'] = df.groupby(['card_id','merchant_category_id'])['purchase_amount'].transform('sum')
df['hist_count'] = df.groupby(['card_id','merchant_category_id'])['hist_count'].transform('sum')
df = df.drop_duplicates()
df = df.loc[df['hist_count'] >= 2]
df['hist_count_4'] = 0
df.loc[df['hist_count'] >= 4, 'hist_count_4'] = 1
df['hist_mean4'] = 0
df.loc[df['hist_count'] >= 4, 'hist_mean4'] = df['hist_purchase_amount']/df['hist_count']
agg_fun = {
'hist_count' : ['count'],
'hist_count_4' : ['sum'],
'hist_purchase_amount':['sum','mean'],
'hist_mean4' : ['sum','mean'],
'purchase_amount':['sum','mean'],
}
df = df.groupby(['card_id']).agg(agg_fun)
df.columns = [''.join(col).strip() for col in df.columns.values]
new_names = [(i,'hist'+i) for i in df.iloc[:, 6:].columns.values]
df.rename(columns = dict(new_names), inplace=True)
train = pd.merge(train, df, on='card_id',how='left')
test = pd.merge(test, df, on='card_id',how='left')
# agg_func = {'mean': ['mean'],}
# for col in ['category_2','category_3']:
# transactions[col+'_mean'] = transactions['purchase_amount'].groupby(transactions[col]).agg('mean')
# transactions[col+'_max'] = transactions['purchase_amount'].groupby(transactions[col]).agg('max')
# transactions[col+'_min'] = transactions['purchase_amount'].groupby(transactions[col]).agg('min')
# transactions[col+'_var'] = transactions['purchase_amount'].groupby(transactions[col]).agg('var')
# agg_func[col+'_mean'] = ['mean']
# gc.collect()
merchants = reduce_mem_usage(pd.read_csv('merchants_clean.csv'))
merchants = merchants.drop(['Unnamed: 0', 'merchant_group_id', 'merchant_category_id',
'subsector_id', 'numerical_1', 'numerical_2',
'active_months_lag3','active_months_lag6',
'city_id', 'state_id'
], axis = 1)
d = dict(zip(merchants.columns[1:], ['histchant_{}'.format(x) for x in (merchants.columns[1:])]))
d.update({"merchant_id": "hist_merchant_id_<lambda>"})
merchants = merchants.rename(index=str, columns= d)
## convert the month in business to categorical
merchants.histchant_active_months_lag12 = pd.cut(merchants.histchant_active_months_lag12, 4)
merge_trans = aggregate_transaction_hist(transactions, prefix='hist_')
merge_trans = merge_trans.merge(merchants, on = 'hist_merchant_id_<lambda>', how = 'left')
## hist transaction frequency
merge_trans['hist_freq'] = merge_trans.hist_transactions_count/(((merge_trans.hist_purchase_date_max -
merge_trans.hist_purchase_date_min).dt.total_seconds())/86400)
merge_trans['hist_freq_amount'] = merge_trans['hist_freq'] * merge_trans['hist_purchase_amount_mean']
merge_trans['hist_freq_install'] = merge_trans['hist_freq'] * merge_trans['hist_installments_mean']
cols = ['histchant_avg_sales_lag3','histchant_avg_purchases_lag3',
'histchant_avg_sales_lag6','histchant_avg_purchases_lag6',
'histchant_avg_sales_lag12','histchant_avg_purchases_lag12','hist_freq']
for col in cols:
merge_trans[col] = pd.qcut(merge_trans[col], 4)
for col in cols:
merge_trans[col].fillna(merge_trans[col].mode()[0], inplace=True)
label_enc.fit(list(merge_trans[col].values))
merge_trans[col] = label_enc.transform(list(merge_trans[col].values))
for col in ['histchant_category_1','histchant_most_recent_sales_range','histchant_most_recent_purchases_range',
'histchant_active_months_lag12','histchant_category_4','histchant_category_2']:
merge_trans[col].fillna(merge_trans[col].mode()[0], inplace=True)
label_enc.fit(list(merge_trans['hist_merchant_id_<lambda>'].values))
merge_trans['hist_merchant_id_<lambda>'] = label_enc.transform(list(merge_trans['hist_merchant_id_<lambda>'].values))
label_enc.fit(list(merge_trans['histchant_active_months_lag12'].values))
merge_trans['histchant_active_months_lag12'] = label_enc.transform(list(merge_trans['histchant_active_months_lag12'].values))
#del transactions
gc.collect()
train = pd.merge(train, merge_trans, on='card_id',how='left')
test = pd.merge(test, merge_trans, on='card_id',how='left')
#del merge_trans
gc.collect()
#Feature Engineering - Adding new features
train['hist_purchase_date_max'] = pd.to_datetime(train['hist_purchase_date_max'])
train['hist_purchase_date_min'] = pd.to_datetime(train['hist_purchase_date_min'])
train['hist_purchase_date_diff'] = (train['hist_purchase_date_max'] - train['hist_purchase_date_min']).dt.days
train['hist_purchase_date_average'] = train['hist_purchase_date_diff']/train['hist_card_id_size']
train['hist_purchase_date_uptonow'] = (pd.to_datetime('01/03/2018') - train['hist_purchase_date_max']).dt.days
train['hist_purchase_date_uptomin'] = (pd.to_datetime('01/03/2018') - train['hist_purchase_date_min']).dt.days
train['hist_first_buy'] = (train['hist_purchase_date_min'] - train['first_active_month']).dt.days
for feature in ['hist_purchase_date_max','hist_purchase_date_min']:
train[feature] = train[feature].astype(np.int64) * 1e-9
gc.collect()
#Feature Engineering - Adding new features
test['hist_purchase_date_max'] = pd.to_datetime(test['hist_purchase_date_max'])
test['hist_purchase_date_min'] = pd.to_datetime(test['hist_purchase_date_min'])
test['hist_purchase_date_diff'] = (test['hist_purchase_date_max'] - test['hist_purchase_date_min']).dt.days
test['hist_purchase_date_average'] = test['hist_purchase_date_diff']/test['hist_card_id_size']
test['hist_purchase_date_uptonow'] = (pd.to_datetime('01/03/2018') - test['hist_purchase_date_max']).dt.days
test['hist_purchase_date_uptomin'] = (pd.to_datetime('01/03/2018') - test['hist_purchase_date_min']).dt.days
test['hist_first_buy'] = (test['hist_purchase_date_min'] - test['first_active_month']).dt.days
for feature in ['hist_purchase_date_max','hist_purchase_date_min']:
test[feature] = test[feature].astype(np.int64) * 1e-9
gc.collect()
# Taking Reference from Other Kernels
def aggregate_transaction_new(trans, prefix):
agg_func = {
'purchase_date' : ['max','min'],
'month_diff' : ['mean', 'min', 'max'],
'month_diff_lag' : ['mean', 'min', 'max'],
'weekend' : ['sum', 'mean'],
'authorized_flag': ['sum'],
'category_1': ['sum','mean', 'max','min'],
'purchase_amount': ['sum', 'mean', 'max', 'min'],
'installments': ['sum', 'mean', 'max', 'min'],
'month_lag': ['max','min','mean'],
'card_id' : ['size'],
'month': ['nunique'],
'hour': ['nunique'],
'weekofyear': ['nunique'],
'dayofweek': ['nunique'],
'year': ['nunique'],
'subsector_id': ['nunique'],
'merchant_category_id' : ['nunique', lambda x:stats.mode(x)[0]],
'merchant_id' : ['nunique', lambda x:stats.mode(x)[0]],
'state_id' : ['nunique', lambda x:stats.mode(x)[0]],
}
agg_trans = trans.groupby(['card_id']).agg(agg_func)
agg_trans.columns = [prefix + '_'.join(col).strip() for col in agg_trans.columns.values]
agg_trans.reset_index(inplace=True)
df = (trans.groupby('card_id').size().reset_index(name='{}transactions_count'.format(prefix)))
agg_trans = pd.merge(df, agg_trans, on='card_id', how='left')
return agg_trans
# Now extract the data from the new transactions
new_transactions = reduce_mem_usage(pd.read_csv('new_merchant_transactions_clean_outlier.csv'))
new_transactions = new_transactions.loc[new_transactions.purchase_amount < 50,]
new_transactions['authorized_flag'] = new_transactions['authorized_flag'].map({'Y': 1, 'N': 0})
new_transactions['category_1'] = new_transactions['category_1'].map({'Y': 0, 'N': 1})
#Feature Engineering - Adding new features inspired by Chau's first kernel
new_transactions['purchase_date'] = pd.to_datetime(new_transactions['purchase_date'])
new_transactions['year'] = new_transactions['purchase_date'].dt.year
new_transactions['weekofyear'] = new_transactions['purchase_date'].dt.weekofyear
new_transactions['month'] = new_transactions['purchase_date'].dt.month
new_transactions['dayofweek'] = new_transactions['purchase_date'].dt.dayofweek
new_transactions['weekend'] = (new_transactions.purchase_date.dt.weekday >=5).astype(int)
new_transactions['hour'] = new_transactions['purchase_date'].dt.hour
new_transactions['quarter'] = new_transactions['purchase_date'].dt.quarter
new_transactions['is_month_start'] = new_transactions['purchase_date'].dt.is_month_start
new_transactions['month_diff'] = ((pd.to_datetime('01/03/2018') - new_transactions['purchase_date']).dt.days)//30
new_transactions['month_diff_lag'] = new_transactions['month_diff'] + new_transactions['month_lag']
gc.collect()
# new_transactions['Christmas_Day_2017'] = (pd.to_datetime('2017-12-25') -
# new_transactions['purchase_date']).dt.days.apply(lambda x: x if x > 0 and x <= 15 else 0)
# new_transactions['Valentine_Day_2017'] = (pd.to_datetime('2017-06-13') -
# new_transactions['purchase_date']).dt.days.apply(lambda x: x if x > 0 and x <= 7 else 0)
# #Black Friday : 24th November 2017
# new_transactions['Black_Friday_2017'] = (pd.to_datetime('2017-11-27') -
# new_transactions['purchase_date']).dt.days.apply(lambda x: x if x > 0 and x <= 7 else 0)
# aggs = {'mean': ['mean'],}
# for col in ['category_2','category_3']:
# new_transactions[col+'_mean'] = new_transactions['purchase_amount'].groupby(new_transactions[col]).agg('mean')
# new_transactions[col+'_max'] = new_transactions['purchase_amount'].groupby(new_transactions[col]).agg('max')
# new_transactions[col+'_min'] = new_transactions['purchase_amount'].groupby(new_transactions[col]).agg('min')
# new_transactions[col+'_var'] = new_transactions['purchase_amount'].groupby(new_transactions[col]).agg('var')
# aggs[col+'_mean'] = ['mean']
new_merge = aggregate_bymonth(new_transactions, prefix='new_')
new_merge = new_merge.drop(['new_transactions_count'], axis = 1)
new_merge['Date'] = pd.to_datetime(new_merge[['year', 'month']].assign(Day=1))
gc.collect()
merge1 = new_merge.loc[new_merge.groupby('card_id').Date.idxmax(),:][[ 'card_id','new_card_id_size',
'new_purchase_amount_sum','new_purchase_amount_mean']]
new_names = [(i,i+'_last') for i in merge1.iloc[:, 1:].columns.values]
merge1.rename(columns = dict(new_names), inplace=True)
# merge2 = merge.loc[merge.groupby('card_id').Date.idxmin(),:][['card_id','new_card_id_size',
# 'new_purchase_amount_sum','new_purchase_amount_mean']]
# new_names = [(i,i+'_first') for i in merge2.iloc[:, 1:].columns.values]
# merge2.rename(columns = dict(new_names), inplace=True)
# comb = pd.merge(merge1, merge2, on='card_id',how='left')
train = pd.merge(train, merge1, on='card_id',how='left')
test = pd.merge(test, merge1, on='card_id',how='left')
gc.collect()
## Same merchant purchase
df = (new_transactions.groupby(['card_id','merchant_id','purchase_amount']).size().reset_index(name='count_new'))
df['purchase_amount_new'] = df.groupby(['card_id','merchant_id'])['purchase_amount'].transform('sum')
df['count_new'] = df.groupby(['card_id','merchant_id'])['count_new'].transform('sum')
df = df.drop_duplicates()
df = df.loc[df['count_new'] >= 2]
agg_func = {
'count_new' : ['count'],
'purchase_amount_new':['sum','mean'],
'purchase_amount':['sum','mean'],
}
df = df.groupby(['card_id']).agg(agg_func)
df.columns = [''.join(col).strip() for col in df.columns.values]
new_names = [(i,'new'+i) for i in df.iloc[:, 3:].columns.values]
df.rename(columns = dict(new_names), inplace=True)
train = pd.merge(train, df, on='card_id',how='left')
test = pd.merge(test, df, on='card_id',how='left')
# Same category purchase
df = (new_transactions.groupby(['card_id','merchant_category_id']).size().reset_index(name='new_count'))
df['new_count'] = df.groupby(['card_id','merchant_category_id'])['new_count'].transform('sum')
df = df.drop_duplicates()
df = df.loc[df['new_count'] >= 2]
df['new_count_4'] = 0
df.loc[df['new_count'] >= 4, 'new_count_4'] = 1
agg_fun = {
'new_count' : ['count'],
'new_count_4' : ['sum'],
}
df = df.groupby(['card_id']).agg(agg_fun)
df.columns = [''.join(col).strip() for col in df.columns.values]
train = pd.merge(train, df, on='card_id',how='left')
test = pd.merge(test, df, on='card_id',how='left')
merchants = reduce_mem_usage(pd.read_csv('merchants_clean.csv'))
merchants = merchants.drop(['Unnamed: 0', 'merchant_group_id', 'merchant_category_id',
'subsector_id', 'numerical_1', 'numerical_2',
'active_months_lag3','active_months_lag6',
'city_id', 'state_id',
], axis = 1)
d = dict(zip(merchants.columns[1:], ['newchant_{}'.format(x) for x in (merchants.columns[1:])]))
d.update({"merchant_id": "new_merchant_id_<lambda>"})
merchants = merchants.rename(index=str, columns= d)
## convert the month in business to categorical
merchants.newchant_active_months_lag12 = pd.cut(merchants.newchant_active_months_lag12, 4)
merge_new = aggregate_transaction_new(new_transactions, prefix='new_')
merge_new = merge_new.merge(merchants, on = 'new_merchant_id_<lambda>', how = 'left')
## new transaction frequency
merge_new['new_freq'] = merge_new.new_transactions_count/(((merge_new.new_purchase_date_max -
merge_new.new_purchase_date_min).dt.total_seconds())/86400)
merge_new['new_freq_amount'] = merge_new['new_freq'] * merge_new['new_purchase_amount_mean']
merge_new['new_freq_install'] = merge_new['new_freq'] * merge_new['new_installments_mean']
cols = ['newchant_avg_sales_lag3','newchant_avg_purchases_lag3',
'newchant_avg_sales_lag6','newchant_avg_purchases_lag6',
'newchant_avg_sales_lag12','newchant_avg_purchases_lag12','new_freq']
for col in cols:
merge_new[col] = pd.qcut(merge_new[col], 4)
for col in cols:
merge_new[col].fillna(merge_new[col].mode()[0], inplace=True)
label_enc.fit(list(merge_new[col].values))
merge_new[col] = label_enc.transform(list(merge_new[col].values))
for col in ['newchant_category_1','newchant_most_recent_sales_range','newchant_most_recent_purchases_range',
'newchant_active_months_lag12','newchant_category_4','newchant_category_2']:
merge_new[col].fillna(merge_new[col].mode()[0], inplace=True)
label_enc.fit(list(merge_new['new_merchant_id_<lambda>'].values))
merge_new['new_merchant_id_<lambda>'] = label_enc.transform(list(merge_new['new_merchant_id_<lambda>'].values))
label_enc.fit(list(merge_new['newchant_active_months_lag12'].values))
merge_new['newchant_active_months_lag12'] = label_enc.transform(list(merge_new['newchant_active_months_lag12'].values))
#del new_transactions
gc.collect()
train = pd.merge(train, merge_new, on='card_id',how='left')
test = pd.merge(test, merge_new, on='card_id',how='left')
#del merge_new
gc.collect()
train_na = train.isnull().sum()
train_na = train_na.drop(train_na[train_na == 0].index).sort_values(ascending=False)
missing_data = pd.DataFrame({'Missing Value' :train_na})
missing_data.head(5)
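# Cards with no new-merchant transactions produce NaNs in the new_* columns;
# fill them with sentinel values (-1 / -2), zeros for counts, and typical
# month_diff values so the models can handle these cards.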
for col in ['new_freq','new_purchase_amount_min','new_purchase_amount_max','newchant_category_4','new_weekend_mean',
'new_purchase_amount_mean','newchant_active_months_lag12','new_weekend_sum','newchant_avg_purchases_lag12',
'newchant_avg_sales_lag12','newchant_avg_purchases_lag6','newchant_avg_sales_lag6','new_category_1_sum',
'newchant_avg_purchases_lag3','newchant_avg_sales_lag3','new_category_1_mean','new_category_1_max',
'new_category_1_min','newchant_most_recent_purchases_range','newchant_most_recent_sales_range',
'newchant_category_1'] : # -1
train[col] = train[col].fillna(-1)
test[col] = test[col].fillna(-1)
for col in ['new_installments_min','new_installments_max','new_installments_mean','new_installments_sum',
'new_purchase_amount_sum','new_state_id_<lambda>' ]: # -2
train[col] = train[col].fillna(-2)
test[col] = test[col].fillna(-2)
for col in ['newchant_category_2','new_authorized_flag_sum','new_month_lag_min','new_month_lag_max','new_card_id_size',
'new_month_lag_mean','new_weekofyear_nunique','new_year_nunique','new_state_id_nunique',
'new_merchant_id_<lambda>','new_merchant_id_nunique','new_merchant_category_id_nunique',
'new_subsector_id_nunique','new_dayofweek_nunique','new_hour_nunique','new_month_nunique',
'new_transactions_count','new_count_4sum','new_countcount','hist_count_4sum','hist_countcount',
'hist_purchase_amountmean','hist_purchase_amountsum','purchase_amount_newmean','purchase_amount_newsum',
'count_newcount','purchase_amount_histmean','purchase_amount_histsum','count_histcount','hist_mean4mean',
'hist_mean4sum','newpurchase_amountmean','newpurchase_amountsum','purchase_amountmean_merhist',
'purchase_amountsum_merhist','histpurchase_amountmean','histpurchase_amountsum',
'new_merchant_category_id_<lambda>','category_repeated_month','new_purchase_amount_mean_last',
'new_purchase_amount_sum_last','new_card_id_size_last']: # 0
train[col] = train[col].fillna(0)
test[col] = test[col].fillna(0)
train.new_month_diff_mean = train.new_month_diff_mean.fillna(23)
train.new_month_diff_min = train.new_month_diff_min.fillna(23)
train.new_month_diff_max = train.new_month_diff_max.fillna(24)
train.new_month_diff_lag_mean = train.new_month_diff_lag_mean.fillna(24)
train.new_month_diff_lag_min = train.new_month_diff_lag_min.fillna(24)
train.new_month_diff_lag_max = train.new_month_diff_lag_max.fillna(24)
test.new_month_diff_mean = test.new_month_diff_mean.fillna(23)
test.new_month_diff_min = test.new_month_diff_min.fillna(23)
test.new_month_diff_max = test.new_month_diff_max.fillna(24)
test.new_month_diff_lag_mean = test.new_month_diff_lag_mean.fillna(24)
test.new_month_diff_lag_min = test.new_month_diff_lag_min.fillna(24)
test.new_month_diff_lag_max = test.new_month_diff_lag_max.fillna(24)
for col in ['new_purchase_date_min','new_purchase_date_max']:
    train[col] = train[col].fillna(pd.to_datetime('1/9/2017'))
    test[col] = test[col].fillna(pd.to_datetime('1/9/2017'))
#Feature Engineering - Adding new features inspired by Chau's first kernel
train['total_count_merid'] = train['count_newcount'] + train['count_histcount']
train['total_count'] = train['new_countcount'] + train['hist_countcount']
train['new_purchase_date_max'] = pd.to_datetime(train['new_purchase_date_max'])
train['new_purchase_date_min'] = pd.to_datetime(train['new_purchase_date_min'])
train['new_purchase_date_diff'] = (train['new_purchase_date_max'] - train['new_purchase_date_min']).dt.days
train['new_purchase_date_average'] = train['new_purchase_date_diff']/train['new_card_id_size']
train['new_purchase_date_uptonow'] = (pd.to_datetime('01/03/2018') - train['new_purchase_date_max']).dt.days
train['new_purchase_date_uptomin'] = (pd.to_datetime('01/03/2018') - train['new_purchase_date_min']).dt.days
train['new_first_buy'] = (train['new_purchase_date_min'] - train['first_active_month']).dt.days
for feature in ['new_purchase_date_max','new_purchase_date_min']:
train[feature] = train[feature].astype(np.int64) * 1e-9
#Feature Engineering - Adding new features inspired by Chau's first kernel
test['total_count_merid'] = test['count_newcount'] + test['count_histcount']
test['total_count'] = test['new_countcount'] + test['hist_countcount']
test['new_purchase_date_max'] = pd.to_datetime(test['new_purchase_date_max'])
test['new_purchase_date_min'] = pd.to_datetime(test['new_purchase_date_min'])
test['new_purchase_date_diff'] = (test['new_purchase_date_max'] - test['new_purchase_date_min']).dt.days
test['new_purchase_date_average'] = test['new_purchase_date_diff']/test['new_card_id_size']
test['new_purchase_date_uptonow'] = (pd.to_datetime('01/03/2018') - test['new_purchase_date_max']).dt.days
test['new_purchase_date_uptomin'] = (pd.to_datetime('01/03/2018') - test['new_purchase_date_min']).dt.days
test['new_first_buy'] = (test['new_purchase_date_min'] - test['first_active_month']).dt.days
for feature in ['new_purchase_date_max','new_purchase_date_min']:
test[feature] = test[feature].astype(np.int64) * 1e-9
#added new feature - Interactive
train['card_id_total'] = train['new_card_id_size'] + train['hist_card_id_size']
train['purchase_amount_total'] = train['new_purchase_amount_sum'] + train['hist_purchase_amount_sum']
test['card_id_total'] = test['new_card_id_size'] + test['hist_card_id_size']
test['purchase_amount_total'] = test['new_purchase_amount_sum'] + test['hist_purchase_amount_sum']
gc.collect()
cols = ['new_freq_amount',]
for col in cols:
train[col] = train[col].fillna(0)
train[col] = pd.qcut(train[col], 5)
label_enc.fit(list(train[col].values))
train[col] = label_enc.transform(list(train[col].values))
test[col] = test[col].fillna(0)
test[col] = pd.qcut(test[col], 5)
label_enc.fit(list(test[col].values))
test[col] = label_enc.transform(list(test[col].values))
train = train.drop(['new_freq_install'], axis = 1)
test = test.drop(['new_freq_install'], axis = 1)
train.new_purchase_date_average = train.new_purchase_date_average.fillna(-1.0)
test.new_purchase_date_average = test.new_purchase_date_average.fillna(-1.0)
# last month of new over hist
train['amountmean_ratiolastnew'] = train.new_purchase_amount_mean_last/train.hist_purchase_amount_mean
train['amountsum_ratiolastnew'] = train.new_purchase_amount_sum_last/(train.hist_purchase_amount_sum/(train.hist_purchase_date_diff//30))
train['transcount_ratiolastnew'] = train.new_card_id_size_last/(train.hist_transactions_count/(train.hist_purchase_date_diff//30))
test['amountmean_ratiolastnew'] = test.new_purchase_amount_mean_last/test.hist_purchase_amount_mean
test['amountsum_ratiolastnew'] = test.new_purchase_amount_sum_last/(test.hist_purchase_amount_sum/(test.hist_purchase_date_diff//30))
test['transcount_ratiolastnew'] = test.new_card_id_size_last/(test.hist_transactions_count/(test.hist_purchase_date_diff//30))
# last month of hist over hist
train['amountmean_ratiolast'] = train.hist_purchase_amount_mean_last/train.hist_purchase_amount_mean
train['amountsum_ratiolast'] = train.hist_purchase_amount_sum_last/(train.hist_purchase_amount_sum/(train.hist_purchase_date_diff//30))
train['transcount_ratiolast'] = train.hist_card_id_size_last/(train.hist_transactions_count/(train.hist_purchase_date_diff//30))
test['amountmean_ratiolast'] = test.hist_purchase_amount_mean_last/test.hist_purchase_amount_mean
test['amountsum_ratiolast'] = test.hist_purchase_amount_sum_last/(test.hist_purchase_amount_sum/(test.hist_purchase_date_diff//30))
test['transcount_ratiolast'] = test.hist_card_id_size_last/(test.hist_transactions_count/(test.hist_purchase_date_diff//30))
# last 2 month of hist ratio
train['amountmean_lastlast2'] = train.hist_purchase_amount_mean_last/train.hist_purchase_amount_mean_last2
train['amountsum_lastlast2'] = train.hist_purchase_amount_sum_last/train.hist_purchase_amount_sum_last2
train['transcount_lastlast2'] = train.hist_card_id_size_last/train.hist_card_id_size_last2
test['amountmean_lastlast2'] = test.hist_purchase_amount_mean_last/test.hist_purchase_amount_mean_last2
test['amountsum_lastlast2'] = test.hist_purchase_amount_sum_last/test.hist_purchase_amount_sum_last2
test['transcount_lastlast2'] = test.hist_card_id_size_last/test.hist_card_id_size_last2
# train['amountmean_ratiofirst'] = train.hist_purchase_amount_mean_first/train.hist_purchase_amount_mean
# train['amountsum_ratiofirst'] = train.hist_purchase_amount_sum_first/train.hist_purchase_amount_sum
# train['transcount_ratiofirst'] = train.hist_card_id_size_first/(train.hist_transactions_count/(train.hist_purchase_date_diff//30))
# test['amountmean_ratiofirst'] = test.hist_purchase_amount_mean_first/test.hist_purchase_amount_mean
# test['amountsum_ratiofirst'] = test.hist_purchase_amount_sum_first/test.hist_purchase_amount_sum
# test['transcount_ratiofirst'] = test.hist_card_id_size_first/(test.hist_transactions_count/(test.hist_purchase_date_diff//30))
# train['amountmean_lastfirst'] = train.hist_purchase_amount_mean_last/train.hist_purchase_amount_mean_first
# train['amountsum_lastfirst'] = train.hist_purchase_amount_sum_last/train.hist_purchase_amount_sum_first
# train['transcount_lastfirst'] = train.hist_card_id_size_last/train.hist_card_id_size_first
# test['amountmean_lastfirst'] = test.hist_purchase_amount_mean_last/test.hist_purchase_amount_mean_first
# test['amountsum_lastfirst'] = test.hist_purchase_amount_sum_last/test.hist_purchase_amount_sum_first
# test['transcount_lastfirst'] = test.hist_card_id_size_last/test.hist_card_id_size_first
train = train.drop(['hist_purchase_amount_mean_last2','hist_purchase_amount_sum_last2','hist_card_id_size_last2'], axis = 1)
test = test.drop(['hist_purchase_amount_mean_last2','hist_purchase_amount_sum_last2','hist_card_id_size_last2'], axis = 1)
train = train.drop(['hist_card_id_size','new_card_id_size','card_id', 'first_active_month'], axis = 1)
test = test.drop(['hist_card_id_size','new_card_id_size','card_id', 'first_active_month'], axis = 1)
train.shape
# Remove the Outliers if any
train['outliers'] = 0
train.loc[train['target'] < -30, 'outliers'] = 1
train['outliers'].value_counts()
for features in ['feature_1','feature_2','feature_3']:
order_label = train.groupby([features])['outliers'].mean()
train[features] = train[features].map(order_label)
test[features] = test[features].map(order_label)
# Get the X and Y
df_train_columns = [c for c in train.columns if c not in ['target','outliers']]
cat_features = [c for c in df_train_columns if 'feature_' in c]
#df_train_columns
target = train['target']
del train['target']
import lightgbm as lgb
param = {'num_leaves': 31,
'min_data_in_leaf': 30,
'objective':'regression',
'max_depth': -1,
'learning_rate': 0.01,
"min_child_samples": 20,
"boosting": "gbdt",
"feature_fraction": 0.9,
"bagging_freq": 1,
"bagging_fraction": 0.9 ,
"bagging_seed": 11,
"metric": 'rmse',
"lambda_l1": 0.1,
"verbosity": -1,
"nthread": 4,
"random_state": 4590}
folds = StratifiedKFold(n_splits=6, shuffle=True, random_state=4590)
oof = np.zeros(len(train))
predictions = np.zeros(len(test))
feature_importance_df = pd.DataFrame()
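# Out-of-fold CV for model 1: a LightGBM regressor is trained on each StratifiedKFold
# split (stratified on the outlier flag), and test predictions are averaged across folds.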
for fold_, (trn_idx, val_idx) in enumerate(folds.split(train,train['outliers'].values)):
print("fold {}".format(fold_))
trn_data = lgb.Dataset(train.iloc[trn_idx][df_train_columns], label=target.iloc[trn_idx])
val_data = lgb.Dataset(train.iloc[val_idx][df_train_columns], label=target.iloc[val_idx])
num_round = 10000
clf = lgb.train(param, trn_data, num_round, valid_sets = [trn_data, val_data], verbose_eval=-1, early_stopping_rounds = 200)
oof[val_idx] = clf.predict(train.iloc[val_idx][df_train_columns], num_iteration=clf.best_iteration)
fold_importance_df = pd.DataFrame()
fold_importance_df["Feature"] = df_train_columns
fold_importance_df["importance"] = clf.feature_importance()
fold_importance_df["fold"] = fold_ + 1
feature_importance_df = pd.concat([feature_importance_df, fold_importance_df], axis=0)
predictions += clf.predict(test[df_train_columns], num_iteration=clf.best_iteration) / folds.n_splits
np.sqrt(mean_squared_error(oof, target))
cols = (feature_importance_df[["Feature", "importance"]]
.groupby("Feature")
.mean()
.sort_values(by="importance", ascending=False)[:1000].index)
best_features = feature_importance_df.loc[feature_importance_df.Feature.isin(cols)]
plt.figure(figsize=(14,25))
sns.barplot(x="importance",
y="Feature",
data=best_features.sort_values(by="importance",
ascending=False))
plt.title('LightGBM Features (avg over folds)')
plt.tight_layout()
plt.savefig('lgbm_importances.png')
features = [c for c in train.columns if c not in ['card_id', 'first_active_month','target','outliers']]
cat_features = [c for c in features if 'feature_' in c]
param = {'num_leaves': 31,
'min_data_in_leaf': 30,
'objective':'regression',
'max_depth': -1,
'learning_rate': 0.01,
"min_child_samples": 20,
"boosting": "gbdt",
"feature_fraction": 0.9,
"bagging_freq": 1,
"bagging_fraction": 0.9 ,
"bagging_seed": 11,
"metric": 'rmse',
"lambda_l1": 0.1,
"verbosity": -1,
"nthread": 4,
"random_state": 4590}
folds = RepeatedKFold(n_splits=6, n_repeats=2, random_state=4590)
oof_2 = np.zeros(len(train))
predictions_2 = np.zeros(len(test))
feature_importance_df_2 = pd.DataFrame()
for fold_, (trn_idx, val_idx) in enumerate(folds.split(train.values, target.values)):
print("fold {}".format(fold_))
trn_data = lgb.Dataset(train.iloc[trn_idx][features], label=target.iloc[trn_idx], categorical_feature=cat_features)
val_data = lgb.Dataset(train.iloc[val_idx][features], label=target.iloc[val_idx], categorical_feature=cat_features)
num_round = 10000
clf_r = lgb.train(param, trn_data, num_round, valid_sets = [trn_data, val_data], verbose_eval=-1, early_stopping_rounds = 200)
oof_2[val_idx] = clf_r.predict(train.iloc[val_idx][features], num_iteration=clf_r.best_iteration)
fold_importance_df = pd.DataFrame()
fold_importance_df["Feature"] = features
fold_importance_df["importance"] = clf_r.feature_importance()
fold_importance_df["fold"] = fold_ + 1
feature_importance_df_2 = pd.concat([feature_importance_df_2, fold_importance_df], axis=0)
predictions_2 += clf_r.predict(test[features], num_iteration=clf_r.best_iteration) / (6 * 2)  # average over n_splits * n_repeats = 12 folds
print("CV score: {:<8.5f}".format(mean_squared_error(oof_2, target)**0.5))
cols = (feature_importance_df_2[["Feature", "importance"]]
.groupby("Feature")
.mean()
.sort_values(by="importance", ascending=False)[:1000].index)
best_features = feature_importance_df_2.loc[feature_importance_df_2.Feature.isin(cols)]
plt.figure(figsize=(14,25))
sns.barplot(x="importance",
y="Feature",
data=best_features.sort_values(by="importance",
ascending=False))
plt.title('LightGBM Features (avg over folds)')
plt.tight_layout()
plt.savefig('lgbm_importances.png')
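# Stack the out-of-fold predictions of the two LightGBM models (one column per model)
# and blend them with a Bayesian ridge meta-model; predictions_3 is the blended output.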
from sklearn.linear_model import BayesianRidge
train_stack = np.vstack([oof,oof_2]).transpose()
test_stack = np.vstack([predictions, predictions_2]).transpose()
folds_stack = RepeatedKFold(n_splits=6, n_repeats=1, random_state=4590)
oof_stack = np.zeros(train_stack.shape[0])
predictions_3 = np.zeros(test_stack.shape[0])
for fold_, (trn_idx, val_idx) in enumerate(folds_stack.split(train_stack,target)):
print("fold {}".format(fold_))
trn_data, trn_y = train_stack[trn_idx], target.iloc[trn_idx].values
val_data, val_y = train_stack[val_idx], target.iloc[val_idx].values
clf_3 = BayesianRidge()
clf_3.fit(trn_data, trn_y)
oof_stack[val_idx] = clf_3.predict(val_data)
predictions_3 += clf_3.predict(test_stack) / 6
np.sqrt(mean_squared_error(target.values, oof_stack))
sample_submission = pd.read_csv('sample_submission.csv')
sample_submission['target'] = predictions_3
# combine = pd.read_csv('combining_submission.csv')
# sample_submission['target'] = predictions_3*0.7 + combine['target']*0.3
q = sample_submission['target'].quantile(0.002)
# #sample_submission['target'] = sample_submission['target'].apply(lambda x: x if x > q else x*1.04)
# sample_submission.loc[sample_submission.target < -19.3, 'target'] = -33.218750
# for i in [2726,17430,28039,42686]:
# sample_submission['target'][i] = -33.21875
sample_submission.to_csv('submission.csv', index=False)
((sample_submission.target <= -30) & (sample_submission.target > -35)).sum()
sample_submission.iloc[108111]
q
sample_submission.loc[sample_submission.target < -19.5]
sample_submission.head(5)
my = pd.read_csv('submission (1).csv')
my['target'][96354] = -33.218750
my.to_csv('submission96354.csv', index=False)
```
## Classification
```
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier, ExtraTreesClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.base import BaseEstimator, TransformerMixin, RegressorMixin, clone
from sklearn.model_selection import KFold, cross_val_score, train_test_split, GridSearchCV, StratifiedKFold
from sklearn.metrics import mean_squared_error, accuracy_score
from sklearn.preprocessing import LabelEncoder
y_train = train['outliers']
del train['outliers']
train['target'] = target
test['target'] = predictions_3
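# Outlier handling: fit simple classifiers on the outlier flag, compare their training
# accuracy, then use an ExtraTrees model to overwrite the predicted target of card_ids
# classified as outliers with the fixed outlier value.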
models = [RandomForestClassifier(),ExtraTreesClassifier()]
names = ["RF", "Xtree"]
dict_score = {}
for name, model in zip(names, models):
model.fit(train, y_train)
model_train_pred = model.predict(train)
accy = round(accuracy_score(y_train, model_train_pred), 6)
dict_score[name] = accy
import operator
dict_score = sorted(dict_score.items(), key = operator.itemgetter(1), reverse = True)
dict_score
Xtree = ExtraTreesClassifier()
XtreeMd = Xtree.fit(train, y_train)
y_pred = XtreeMd.predict(test)
sample_submission['outliers'] = y_pred
sample_submission.loc[sample_submission['outliers'] == 1, 'target'] = -33.218750
sample_submission = sample_submission.drop(['outliers'], axis = 1)
sample_submission.to_csv('submission.csv', index=False)
sample_submission.loc[sample_submission['target'] == -33.21875][:40]
```
# What's this PyTorch business?
You've written a lot of code in this assignment to provide a whole host of neural network functionality. Dropout, Batch Norm, and 2D convolutions are some of the workhorses of deep learning in computer vision. You've also worked hard to make your code efficient and vectorized.
For the last part of this assignment, though, we're going to leave behind your beautiful codebase and instead migrate to one of two popular deep learning frameworks: in this instance, PyTorch (or TensorFlow, if you choose to use that notebook).
### What is PyTorch?
PyTorch is a system for executing dynamic computational graphs over Tensor objects that behave similarly to numpy ndarrays. It comes with a powerful automatic differentiation engine that removes the need for manual back-propagation.
### Why?
* Our code will now run on GPUs! Much faster training. When using a framework like PyTorch or TensorFlow you can harness the power of the GPU for your own custom neural network architectures without having to write CUDA code directly (which is beyond the scope of this class).
* We want you to be ready to use one of these frameworks for your project so you can experiment more efficiently than if you were writing every feature you want to use by hand.
* We want you to stand on the shoulders of giants! TensorFlow and PyTorch are both excellent frameworks that will make your lives a lot easier, and now that you understand their guts, you are free to use them :)
* We want you to be exposed to the sort of deep learning code you might run into in academia or industry.
### PyTorch versions
This notebook assumes that you are using **PyTorch version 1.0**. In some of the previous versions (e.g. before 0.4), Tensors had to be wrapped in Variable objects to be used in autograd; however, Variables have now been deprecated. In addition, 1.0 also separates a Tensor's datatype from its device, and uses numpy-style factories for constructing Tensors rather than directly invoking Tensor constructors.
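As a minimal sketch of the 1.0-style API described above (the shape here is arbitrary), Tensors are created with numpy-style factories, take `dtype` and `device` as keyword arguments, and need no `Variable` wrapper to participate in autograd:
```
import torch

# 1.0-style Tensor creation: dtype and device are plain keyword arguments,
# and requires_grad=True is all that is needed to enable autograd.
x = torch.zeros(2, 3, dtype=torch.float32, device='cpu', requires_grad=True)
print(x.dtype, x.device, x.requires_grad)
```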
## How will I learn PyTorch?
Justin Johnson has made an excellent [tutorial](https://github.com/jcjohnson/pytorch-examples) for PyTorch.
You can also find the detailed [API doc](http://pytorch.org/docs/stable/index.html) here. If you have other questions that are not addressed by the API docs, the [PyTorch forum](https://discuss.pytorch.org/) is a much better place to ask than StackOverflow.
# Table of Contents
This assignment has 5 parts. You will learn PyTorch on **three different levels of abstraction**, which will help you understand it better and prepare you for the final project.
1. Part I, Preparation: we will use the CIFAR-10 dataset.
2. Part II, Barebones PyTorch: **Abstraction level 1**, we will work directly with the lowest-level PyTorch Tensors.
3. Part III, PyTorch Module API: **Abstraction level 2**, we will use `nn.Module` to define arbitrary neural network architecture.
4. Part IV, PyTorch Sequential API: **Abstraction level 3**, we will use `nn.Sequential` to define a linear feed-forward network very conveniently.
5. Part V, CIFAR-10 open-ended challenge: please implement your own network to get as high accuracy as possible on CIFAR-10. You can experiment with any layer, optimizer, hyperparameters or other advanced features.
Here is a table of comparison:
| API | Flexibility | Convenience |
|---------------|-------------|-------------|
| Barebone | High | Low |
| `nn.Module` | High | Medium |
| `nn.Sequential` | Low | High |
# Part I. Preparation
First, we load the CIFAR-10 dataset. This might take a couple minutes the first time you do it, but the files should stay cached after that.
In previous parts of the assignment we had to write our own code to download the CIFAR-10 dataset, preprocess it, and iterate through it in minibatches; PyTorch provides convenient tools to automate this process for us.
```
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader
from torch.utils.data import sampler
import torchvision.datasets as dset
import torchvision.transforms as T
import numpy as np
NUM_TRAIN = 49000
# The torchvision.transforms package provides tools for preprocessing data
# and for performing data augmentation; here we set up a transform to
# preprocess the data by subtracting the mean RGB value and dividing by the
# standard deviation of each RGB value; we've hardcoded the mean and std.
transform = T.Compose([
T.ToTensor(),
T.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010))
])
# We set up a Dataset object for each split (train / val / test); Datasets load
# training examples one at a time, so we wrap each Dataset in a DataLoader which
# iterates through the Dataset and forms minibatches. We divide the CIFAR-10
# training set into train and val sets by passing a Sampler object to the
# DataLoader telling how it should sample from the underlying Dataset.
cifar10_train = dset.CIFAR10('./cs231n/datasets', train=True, download=True,
transform=transform)
loader_train = DataLoader(cifar10_train, batch_size=64,
sampler=sampler.SubsetRandomSampler(range(NUM_TRAIN)))
cifar10_val = dset.CIFAR10('./cs231n/datasets', train=True, download=True,
transform=transform)
loader_val = DataLoader(cifar10_val, batch_size=64,
sampler=sampler.SubsetRandomSampler(range(NUM_TRAIN, 50000)))
cifar10_test = dset.CIFAR10('./cs231n/datasets', train=False, download=True,
transform=transform)
loader_test = DataLoader(cifar10_test, batch_size=64)
```
You have an option to **use GPU by setting the flag to True below**. It is not necessary to use GPU for this assignment. Note that if your computer does not have CUDA enabled, `torch.cuda.is_available()` will return False and this notebook will fall back to CPU mode.
The global variables `dtype` and `device` will control the data types throughout this assignment.
```
USE_GPU = True
dtype = torch.float32 # we will be using float throughout this tutorial
if USE_GPU and torch.cuda.is_available():
device = torch.device('cuda')
else:
device = torch.device('cpu')
# Constant to control how frequently we print train loss
print_every = 100
print('using device:', device)
```
# Part II. Barebones PyTorch
PyTorch ships with high-level APIs to help us define model architectures conveniently; we will cover those in Parts III and IV of this tutorial. In this section, we will start with the barebones PyTorch elements to understand the autograd engine better. After this exercise, you will come to appreciate the high-level model API more.
We will start with a simple fully-connected ReLU network with two hidden layers and no biases for CIFAR classification.
This implementation computes the forward pass using operations on PyTorch Tensors, and uses PyTorch autograd to compute gradients. It is important that you understand every line, because you will write a harder version after the example.
When we create a PyTorch Tensor with `requires_grad=True`, then operations involving that Tensor will not just compute values; they will also build up a computational graph in the background, allowing us to easily backpropagate through the graph to compute gradients of some Tensors with respect to a downstream loss. Concretely if x is a Tensor with `x.requires_grad == True` then after backpropagation `x.grad` will be another Tensor holding the gradient of x with respect to the scalar loss at the end.
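A minimal sketch of this behavior (the values are arbitrary): after calling `backward()` on a scalar, the gradient of each `requires_grad=True` Tensor is accumulated in its `.grad` attribute:
```
import torch

x = torch.ones(3, requires_grad=True)
loss = (2 * x).sum()     # a scalar built from operations on x
loss.backward()          # backpropagate through the recorded graph
print(x.grad)            # tensor([2., 2., 2.]) -- d(loss)/dx
```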
### PyTorch Tensors: Flatten Function
A PyTorch Tensor is conceptually similar to a numpy array: it is an n-dimensional grid of numbers, and like numpy PyTorch provides many functions to efficiently operate on Tensors. As a simple example, we provide a `flatten` function below which reshapes image data for use in a fully-connected neural network.
Recall that image data is typically stored in a Tensor of shape N x C x H x W, where:
* N is the number of datapoints
* C is the number of channels
* H is the height of the intermediate feature map in pixels
* W is the width of the intermediate feature map in pixels
This is the right way to represent the data when we are doing something like a 2D convolution, that needs spatial understanding of where the intermediate features are relative to each other. When we use fully connected affine layers to process the image, however, we want each datapoint to be represented by a single vector -- it's no longer useful to segregate the different channels, rows, and columns of the data. So, we use a "flatten" operation to collapse the `C x H x W` values per representation into a single long vector. The flatten function below first reads in the N, C, H, and W values from a given batch of data, and then returns a "view" of that data. "View" is analogous to numpy's "reshape" method: it reshapes x's dimensions to be N x ??, where ?? is allowed to be anything (in this case, it will be C x H x W, but we don't need to specify that explicitly).
```
def flatten(x):
N = x.shape[0] # read in N, C, H, W
return x.view(N, -1) # "flatten" the C * H * W values into a single vector per image
def test_flatten():
x = torch.arange(12).view(2, 1, 3, 2)
print('Before flattening: ', x)
print('After flattening: ', flatten(x))
test_flatten()
```
### Barebones PyTorch: Two-Layer Network
Here we define a function `two_layer_fc` which performs the forward pass of a two-layer fully-connected ReLU network on a batch of image data. After defining the forward pass we check that it doesn't crash and that it produces outputs of the right shape by running zeros through the network.
You don't have to write any code here, but it's important that you read and understand the implementation.
```
import torch.nn.functional as F # useful stateless functions
def two_layer_fc(x, params):
"""
A fully-connected neural network; the architecture is:
NN is fully connected -> ReLU -> fully connected layer.
Note that this function only defines the forward pass;
PyTorch will take care of the backward pass for us.
The input to the network will be a minibatch of data, of shape
(N, d1, ..., dM) where d1 * ... * dM = D. The hidden layer will have H units,
and the output layer will produce scores for C classes.
Inputs:
- x: A PyTorch Tensor of shape (N, d1, ..., dM) giving a minibatch of
input data.
- params: A list [w1, w2] of PyTorch Tensors giving weights for the network;
w1 has shape (D, H) and w2 has shape (H, C).
Returns:
- scores: A PyTorch Tensor of shape (N, C) giving classification scores for
the input data x.
"""
# first we flatten the image
x = flatten(x) # shape: [batch_size, C x H x W]
w1, w2 = params
# Forward pass: compute predicted y using operations on Tensors. Since w1 and
# w2 have requires_grad=True, operations involving these Tensors will cause
# PyTorch to build a computational graph, allowing automatic computation of
# gradients. Since we are no longer implementing the backward pass by hand we
# don't need to keep references to intermediate values.
# you can also use `.clamp(min=0)`, equivalent to F.relu()
x = F.relu(x.mm(w1))
x = x.mm(w2)
return x
def two_layer_fc_test():
hidden_layer_size = 42
x = torch.zeros((64, 50), dtype=dtype) # minibatch size 64, feature dimension 50
w1 = torch.zeros((50, hidden_layer_size), dtype=dtype)
w2 = torch.zeros((hidden_layer_size, 10), dtype=dtype)
scores = two_layer_fc(x, [w1, w2])
print(scores.size()) # you should see [64, 10]
two_layer_fc_test()
```
### Barebones PyTorch: Three-Layer ConvNet
Here you will complete the implementation of the function `three_layer_convnet`, which will perform the forward pass of a three-layer convolutional network. Like above, we can immediately test our implementation by passing zeros through the network. The network should have the following architecture:
1. A convolutional layer (with bias) with `channel_1` filters, each with shape `KW1 x KH1`, and zero-padding of two
2. ReLU nonlinearity
3. A convolutional layer (with bias) with `channel_2` filters, each with shape `KW2 x KH2`, and zero-padding of one
4. ReLU nonlinearity
5. Fully-connected layer with bias, producing scores for C classes.
Note that we have **no softmax activation** here after our fully-connected layer: this is because PyTorch's cross entropy loss performs a softmax activation for you, and bundling that step in makes the computation more efficient.
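As a quick sanity check of this point (the scores and labels below are arbitrary), `F.cross_entropy` on raw scores matches an explicit log-softmax followed by the negative log-likelihood loss:
```
import torch
import torch.nn.functional as F

scores = torch.randn(4, 10)            # raw, un-normalized class scores
y = torch.tensor([1, 0, 3, 9])         # arbitrary labels
loss_a = F.cross_entropy(scores, y)
loss_b = F.nll_loss(F.log_softmax(scores, dim=1), y)
print(torch.allclose(loss_a, loss_b))  # True
```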
**HINT**: For convolutions: http://pytorch.org/docs/stable/nn.html#torch.nn.functional.conv2d; pay attention to the shapes of convolutional filters!
```
def three_layer_convnet(x, params):
"""
Performs the forward pass of a three-layer convolutional network with the
architecture defined above.
Inputs:
- x: A PyTorch Tensor of shape (N, 3, H, W) giving a minibatch of images
- params: A list of PyTorch Tensors giving the weights and biases for the
network; should contain the following:
- conv_w1: PyTorch Tensor of shape (channel_1, 3, KH1, KW1) giving weights
for the first convolutional layer
- conv_b1: PyTorch Tensor of shape (channel_1,) giving biases for the first
convolutional layer
- conv_w2: PyTorch Tensor of shape (channel_2, channel_1, KH2, KW2) giving
weights for the second convolutional layer
- conv_b2: PyTorch Tensor of shape (channel_2,) giving biases for the second
convolutional layer
- fc_w: PyTorch Tensor giving weights for the fully-connected layer. Can you
figure out what the shape should be?
- fc_b: PyTorch Tensor giving biases for the fully-connected layer. Can you
figure out what the shape should be?
Returns:
- scores: PyTorch Tensor of shape (N, C) giving classification scores for x
"""
conv_w1, conv_b1, conv_w2, conv_b2, fc_w, fc_b = params
scores = None
################################################################################
# TODO: Implement the forward pass for the three-layer ConvNet. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
scores=F.relu_(F.conv2d(x,conv_w1,conv_b1,padding=2))
scores=F.relu_(F.conv2d(scores,conv_w2,conv_b2,padding=1))
scores=F.linear(flatten(scores),fc_w.T,fc_b)
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
################################################################################
# END OF YOUR CODE #
################################################################################
return scores
```
After defining the forward pass of the ConvNet above, run the following cell to test your implementation.
When you run this function, scores should have shape (64, 10).
```
def three_layer_convnet_test():
x = torch.zeros((64, 3, 32, 32), dtype=dtype) # minibatch size 64, image size [3, 32, 32]
conv_w1 = torch.zeros((6, 3, 5, 5), dtype=dtype) # [out_channel, in_channel, kernel_H, kernel_W]
conv_b1 = torch.zeros((6,)) # out_channel
conv_w2 = torch.zeros((9, 6, 3, 3), dtype=dtype) # [out_channel, in_channel, kernel_H, kernel_W]
conv_b2 = torch.zeros((9,)) # out_channel
# you must calculate the shape of the tensor after two conv layers, before the fully-connected layer
fc_w = torch.zeros((9 * 32 * 32, 10))
fc_b = torch.zeros(10)
scores = three_layer_convnet(x, [conv_w1, conv_b1, conv_w2, conv_b2, fc_w, fc_b])
print(scores.size()) # you should see [64, 10]
three_layer_convnet_test()
```
### Barebones PyTorch: Initialization
Let's write a couple utility methods to initialize the weight matrices for our models.
- `random_weight(shape)` initializes a weight tensor with the Kaiming normalization method.
- `zero_weight(shape)` initializes a weight tensor with all zeros. Useful for instantiating bias parameters.
The `random_weight` function uses the Kaiming normal initialization method, described in:
He et al, *Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification*, ICCV 2015, https://arxiv.org/abs/1502.01852
```
def random_weight(shape):
"""
Create random Tensors for weights; setting requires_grad=True means that we
want to compute gradients for these Tensors during the backward pass.
We use Kaiming normalization: sqrt(2 / fan_in)
"""
if len(shape) == 2: # FC weight
fan_in = shape[0]
else:
fan_in = np.prod(shape[1:]) # conv weight [out_channel, in_channel, kH, kW]
# randn is standard normal distribution generator.
w = torch.randn(shape, device=device, dtype=dtype) * np.sqrt(2. / fan_in)
w.requires_grad = True
return w
def zero_weight(shape):
return torch.zeros(shape, device=device, dtype=dtype, requires_grad=True)
# create a weight of shape [3 x 5]
# you should see the type `torch.cuda.FloatTensor` if you use GPU.
# Otherwise it should be `torch.FloatTensor`
random_weight((3, 5))
```
### Barebones PyTorch: Check Accuracy
When training the model we will use the following function to check the accuracy of our model on the training or validation sets.
When checking accuracy we don't need to compute any gradients; as a result we don't need PyTorch to build a computational graph for us when we compute scores. To prevent a graph from being built we scope our computation under a `torch.no_grad()` context manager.
```
def check_accuracy_part2(loader, model_fn, params):
"""
Check the accuracy of a classification model.
Inputs:
- loader: A DataLoader for the data split we want to check
- model_fn: A function that performs the forward pass of the model,
with the signature scores = model_fn(x, params)
- params: List of PyTorch Tensors giving parameters of the model
Returns: Nothing, but prints the accuracy of the model
"""
split = 'val' if loader.dataset.train else 'test'
print('Checking accuracy on the %s set' % split)
num_correct, num_samples = 0, 0
with torch.no_grad():
for x, y in loader:
x = x.to(device=device, dtype=dtype) # move to device, e.g. GPU
y = y.to(device=device, dtype=torch.int64)
scores = model_fn(x, params)
_, preds = scores.max(1)
num_correct += (preds == y).sum()
num_samples += preds.size(0)
acc = float(num_correct) / num_samples
print('Got %d / %d correct (%.2f%%)' % (num_correct, num_samples, 100 * acc))
```
### BareBones PyTorch: Training Loop
We can now set up a basic training loop to train our network. We will train the model using stochastic gradient descent without momentum. We will use `torch.nn.functional.cross_entropy` to compute the loss; you can [read about it here](http://pytorch.org/docs/stable/nn.html#cross-entropy).
The training loop takes as input the neural network function, a list of initialized parameters (`[w1, w2]` in our example), and learning rate.
```
def train_part2(model_fn, params, learning_rate):
"""
Train a model on CIFAR-10.
Inputs:
- model_fn: A Python function that performs the forward pass of the model.
It should have the signature scores = model_fn(x, params) where x is a
PyTorch Tensor of image data, params is a list of PyTorch Tensors giving
model weights, and scores is a PyTorch Tensor of shape (N, C) giving
scores for the elements in x.
- params: List of PyTorch Tensors giving weights for the model
- learning_rate: Python scalar giving the learning rate to use for SGD
Returns: Nothing
"""
for t, (x, y) in enumerate(loader_train):
# Move the data to the proper device (GPU or CPU)
x = x.to(device=device, dtype=dtype)
y = y.to(device=device, dtype=torch.long)
# Forward pass: compute scores and loss
scores = model_fn(x, params)
loss = F.cross_entropy(scores, y)
# Backward pass: PyTorch figures out which Tensors in the computational
# graph have requires_grad=True and uses backpropagation to compute the
# gradient of the loss with respect to these Tensors, and stores the
# gradients in the .grad attribute of each Tensor.
loss.backward()
# Update parameters. We don't want to backpropagate through the
# parameter updates, so we scope the updates under a torch.no_grad()
# context manager to prevent a computational graph from being built.
with torch.no_grad():
for w in params:
w -= learning_rate * w.grad
# Manually zero the gradients after running the backward pass
w.grad.zero_()
if t % print_every == 0:
print('Iteration %d, loss = %.4f' % (t, loss.item()))
check_accuracy_part2(loader_val, model_fn, params)
print()
```
### BareBones PyTorch: Train a Two-Layer Network
Now we are ready to run the training loop. We need to explicitly allocate tensors for the fully connected weights, `w1` and `w2`.
Each minibatch of CIFAR has 64 examples, so the tensor shape is `[64, 3, 32, 32]`.
After flattening, `x` shape should be `[64, 3 * 32 * 32]`. This will be the size of the first dimension of `w1`.
The second dimension of `w1` is the hidden layer size, which will also be the first dimension of `w2`.
Finally, the output of the network is a 10-dimensional vector of class scores, one per CIFAR-10 class (the softmax that turns these into probabilities is folded into the loss).
You don't need to tune any hyperparameters but you should see accuracies above 40% after training for one epoch.
```
hidden_layer_size = 4000
learning_rate = 1e-2
w1 = random_weight((3 * 32 * 32, hidden_layer_size))
w2 = random_weight((hidden_layer_size, 10))
train_part2(two_layer_fc, [w1, w2], learning_rate)
```
### BareBones PyTorch: Training a ConvNet
In the below you should use the functions defined above to train a three-layer convolutional network on CIFAR. The network should have the following architecture:
1. Convolutional layer (with bias) with 32 5x5 filters, with zero-padding of 2
2. ReLU
3. Convolutional layer (with bias) with 16 3x3 filters, with zero-padding of 1
4. ReLU
5. Fully-connected layer (with bias) to compute scores for 10 classes
You should initialize your weight matrices using the `random_weight` function defined above, and you should initialize your bias vectors using the `zero_weight` function above.
You don't need to tune any hyperparameters, but if everything works correctly you should achieve an accuracy above 42% after one epoch.
```
learning_rate = 3e-3
channel_1 = 32
channel_2 = 16
conv_w1 = None
conv_b1 = None
conv_w2 = None
conv_b2 = None
fc_w = None
fc_b = None
################################################################################
# TODO: Initialize the parameters of a three-layer ConvNet. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
conv_w1=random_weight((channel_1,3,5,5))
conv_b1=zero_weight(channel_1)
conv_w2=random_weight((channel_2,channel_1,3,3))
conv_b2=zero_weight(channel_2)
fc_w=random_weight((16*32*32,10))
fc_b=zero_weight(10)
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
################################################################################
# END OF YOUR CODE #
################################################################################
params = [conv_w1, conv_b1, conv_w2, conv_b2, fc_w, fc_b]
train_part2(three_layer_convnet, params, learning_rate)
```
# Part III. PyTorch Module API
Barebone PyTorch requires that we track all the parameter tensors by hand. This is fine for small networks with a few tensors, but it would be extremely inconvenient and error-prone to track tens or hundreds of tensors in larger networks.
PyTorch provides the `nn.Module` API for you to define arbitrary network architectures, while tracking every learnable parameter for you. In Part II, we implemented SGD ourselves. PyTorch also provides the `torch.optim` package that implements all the common optimizers, such as RMSProp, Adagrad, and Adam. It even supports approximate second-order methods like L-BFGS! You can refer to the [doc](http://pytorch.org/docs/master/optim.html) for the exact specifications of each optimizer.
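As a minimal sketch of the `torch.optim` workflow (the model below is an arbitrary placeholder, not part of the assignment), an optimizer is built from `model.parameters()` and each update is `zero_grad()` / `backward()` / `step()`:
```
import torch
import torch.nn as nn
import torch.optim as optim

model = nn.Linear(10, 2)                         # placeholder model
optimizer = optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(8, 10)
y = torch.randint(0, 2, (8,))
loss = nn.functional.cross_entropy(model(x), y)
optimizer.zero_grad()   # clear gradients from the previous step
loss.backward()         # compute new gradients
optimizer.step()        # apply the Adam update
```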
To use the Module API, follow the steps below:
1. Subclass `nn.Module`. Give your network class an intuitive name like `TwoLayerFC`.
2. In the constructor `__init__()`, define all the layers you need as class attributes. Layer objects like `nn.Linear` and `nn.Conv2d` are themselves `nn.Module` subclasses and contain learnable parameters, so that you don't have to instantiate the raw tensors yourself. `nn.Module` will track these internal parameters for you. Refer to the [doc](http://pytorch.org/docs/master/nn.html) to learn more about the dozens of builtin layers. **Warning**: don't forget to call the `super().__init__()` first!
3. In the `forward()` method, define the *connectivity* of your network. You should use the attributes defined in `__init__` as function calls that take a tensor as input and output the "transformed" tensor. Do *not* create any new layers with learnable parameters in `forward()`! All of them must be declared upfront in `__init__`.
After you define your Module subclass, you can instantiate it as an object and call it just like the NN forward function in part II.
### Module API: Two-Layer Network
Here is a concrete example of a 2-layer fully connected network:
```
class TwoLayerFC(nn.Module):
def __init__(self, input_size, hidden_size, num_classes):
super().__init__()
# assign layer objects to class attributes
self.fc1 = nn.Linear(input_size, hidden_size)
# nn.init package contains convenient initialization methods
# http://pytorch.org/docs/master/nn.html#torch-nn-init
nn.init.kaiming_normal_(self.fc1.weight)
self.fc2 = nn.Linear(hidden_size, num_classes)
nn.init.kaiming_normal_(self.fc2.weight)
def forward(self, x):
# forward always defines connectivity
x = flatten(x)
scores = self.fc2(F.relu(self.fc1(x)))
return scores
def test_TwoLayerFC():
input_size = 50
x = torch.zeros((64, input_size), dtype=dtype) # minibatch size 64, feature dimension 50
model = TwoLayerFC(input_size, 42, 10)
scores = model(x)
print(scores.size()) # you should see [64, 10]
test_TwoLayerFC()
```
### Module API: Three-Layer ConvNet
It's your turn to implement a 3-layer ConvNet followed by a fully connected layer. The network architecture should be the same as in Part II:
1. Convolutional layer with `channel_1` 5x5 filters with zero-padding of 2
2. ReLU
3. Convolutional layer with `channel_2` 3x3 filters with zero-padding of 1
4. ReLU
5. Fully-connected layer to `num_classes` classes
You should initialize the weight matrices of the model using the Kaiming normal initialization method.
**HINT**: http://pytorch.org/docs/stable/nn.html#conv2d
After you implement the three-layer ConvNet, the `test_ThreeLayerConvNet` function will run your implementation; it should print `(64, 10)` for the shape of the output scores.
```
class ThreeLayerConvNet(nn.Module):
def __init__(self, in_channel, channel_1, channel_2, num_classes):
super().__init__()
########################################################################
# TODO: Set up the layers you need for a three-layer ConvNet with the #
# architecture defined above. #
########################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
self.conv1=nn.Conv2d(in_channel,channel_1,5,padding=2)
nn.init.kaiming_normal_(self.conv1.weight)
self.conv2=nn.Conv2d(channel_1,channel_2,3,padding=1)
nn.init.kaiming_normal_(self.conv2.weight)
self.fc=nn.Linear(channel_2*32*32,num_classes)
nn.init.kaiming_normal_(self.fc.weight)
self.relu=nn.ReLU(inplace=True)
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
########################################################################
# END OF YOUR CODE #
########################################################################
def forward(self, x):
scores = None
########################################################################
# TODO: Implement the forward function for a 3-layer ConvNet. you #
# should use the layers you defined in __init__ and specify the #
# connectivity of those layers in forward() #
########################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
scores=self.relu(self.conv1(x))
scores=self.relu(self.conv2(scores))
scores=self.fc(flatten(scores))
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
########################################################################
# END OF YOUR CODE #
########################################################################
return scores
def test_ThreeLayerConvNet():
x = torch.zeros((64, 3, 32, 32), dtype=dtype) # minibatch size 64, image size [3, 32, 32]
model = ThreeLayerConvNet(in_channel=3, channel_1=12, channel_2=8, num_classes=10)
scores = model(x)
print(scores.size()) # you should see [64, 10]
test_ThreeLayerConvNet()
```
### Module API: Check Accuracy
Given the validation or test set, we can check the classification accuracy of a neural network.
This version is slightly different from the one in part II. You don't manually pass in the parameters anymore.
```
def check_accuracy_part34(loader, model):
if loader.dataset.train:
print('Checking accuracy on validation set')
else:
print('Checking accuracy on test set')
num_correct = 0
num_samples = 0
model.eval() # set model to evaluation mode
with torch.no_grad():
for x, y in loader:
x = x.to(device=device, dtype=dtype) # move to device, e.g. GPU
y = y.to(device=device, dtype=torch.long)
scores = model(x)
_, preds = scores.max(1)
num_correct += (preds == y).sum()
num_samples += preds.size(0)
acc = float(num_correct) / num_samples
print('Got %d / %d correct (%.2f)' % (num_correct, num_samples, 100 * acc))
```
### Module API: Training Loop
We also use a slightly different training loop. Rather than updating the values of the weights ourselves, we use an Optimizer object from the `torch.optim` package, which abstracts the notion of an optimization algorithm and provides implementations of most of the algorithms commonly used to optimize neural networks.
```
def train_part34(model, optimizer, epochs=1):
"""
Train a model on CIFAR-10 using the PyTorch Module API.
Inputs:
- model: A PyTorch Module giving the model to train.
- optimizer: An Optimizer object we will use to train the model
- epochs: (Optional) A Python integer giving the number of epochs to train for
Returns: Nothing, but prints model accuracies during training.
"""
model = model.to(device=device) # move the model parameters to CPU/GPU
for e in range(epochs):
for t, (x, y) in enumerate(loader_train):
model.train() # put model to training mode
x = x.to(device=device, dtype=dtype) # move to device, e.g. GPU
y = y.to(device=device, dtype=torch.long)
scores = model(x)
loss = F.cross_entropy(scores, y)
# Zero out all of the gradients for the variables which the optimizer
# will update.
optimizer.zero_grad()
# This is the backwards pass: compute the gradient of the loss with
# respect to each parameter of the model.
loss.backward()
# Actually update the parameters of the model using the gradients
# computed by the backwards pass.
optimizer.step()
if t % print_every == 0:
print('Iteration %d, loss = %.4f' % (t, loss.item()))
check_accuracy_part34(loader_val, model)
print()
```
### Module API: Train a Two-Layer Network
Now we are ready to run the training loop. In contrast to part II, we don't explicitly allocate parameter tensors anymore.
Simply pass the input size, hidden layer size, and number of classes (i.e. output size) to the constructor of `TwoLayerFC`.
You also need to define an optimizer that tracks all the learnable parameters inside `TwoLayerFC`.
You don't need to tune any hyperparameters, but you should see model accuracies above 40% after training for one epoch.
```
hidden_layer_size = 4000
learning_rate = 1e-2
model = TwoLayerFC(3 * 32 * 32, hidden_layer_size, 10)
optimizer = optim.SGD(model.parameters(), lr=learning_rate)
train_part34(model, optimizer)
```
### Module API: Train a Three-Layer ConvNet
You should now use the Module API to train a three-layer ConvNet on CIFAR. This should look very similar to training the two-layer network! You don't need to tune any hyperparameters, but you should achieve above 45% accuracy after training for one epoch.
You should train the model using stochastic gradient descent without momentum.
```
learning_rate = 3e-3
channel_1 = 32
channel_2 = 16
model = None
optimizer = None
################################################################################
# TODO: Instantiate your ThreeLayerConvNet model and a corresponding optimizer #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
model=ThreeLayerConvNet(3,channel_1,channel_2,10)
optimizer=optim.SGD(model.parameters(),lr=learning_rate)
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
################################################################################
# END OF YOUR CODE
################################################################################
train_part34(model, optimizer)
```
# Part IV. PyTorch Sequential API
Part III introduced the PyTorch Module API, which allows you to define arbitrary learnable layers and their connectivity.
For simple models like a stack of feed forward layers, you still need to go through 3 steps: subclass `nn.Module`, assign layers to class attributes in `__init__`, and call each layer one by one in `forward()`. Is there a more convenient way?
Fortunately, PyTorch provides a container Module called `nn.Sequential`, which merges the above steps into one. It is not as flexible as `nn.Module`, because you cannot specify more complex topology than a feed-forward stack, but it's good enough for many use cases.
### Sequential API: Two-Layer Network
Let's see how to rewrite our two-layer fully connected network example with `nn.Sequential`, and train it using the training loop defined above.
Again, you don't need to tune any hyperparameters here, but you should achieve above 40% accuracy after one epoch of training.
```
# We need to wrap `flatten` function in a module in order to stack it
# in nn.Sequential
class Flatten(nn.Module):
def forward(self, x):
return flatten(x)
hidden_layer_size = 4000
learning_rate = 1e-2
model = nn.Sequential(
Flatten(),
nn.Linear(3 * 32 * 32, hidden_layer_size),
nn.ReLU(),
nn.Linear(hidden_layer_size, 10),
)
# you can use Nesterov momentum in optim.SGD
optimizer = optim.SGD(model.parameters(), lr=learning_rate,
momentum=0.9, nesterov=True)
train_part34(model, optimizer)
```
### Sequential API: Three-Layer ConvNet
Here you should use `nn.Sequential` to define and train a three-layer ConvNet with the same architecture we used in Part III:
1. Convolutional layer (with bias) with 32 5x5 filters, with zero-padding of 2
2. ReLU
3. Convolutional layer (with bias) with 16 3x3 filters, with zero-padding of 1
4. ReLU
5. Fully-connected layer (with bias) to compute scores for 10 classes
You should initialize your weight matrices using the `random_weight` function defined above, and you should initialize your bias vectors using the `zero_weight` function above.
You should optimize your model using stochastic gradient descent with Nesterov momentum 0.9.
Again, you don't need to tune any hyperparameters but you should see accuracy above 55% after one epoch of training.
```
channel_1 = 32
channel_2 = 16
learning_rate = 1e-2
model = None
optimizer = None
################################################################################
# TODO: Rewrite the three-layer ConvNet with bias from Part III with the    #
# Sequential API. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
model=nn.Sequential(
nn.Conv2d(3,channel_1,5,padding=2),
nn.ReLU(inplace=True),
nn.Conv2d(channel_1,channel_2,3,padding=1),
nn.ReLU(inplace=True),
Flatten(),
nn.Linear(channel_2*32*32,10)
)
# for i in (0,2,5):
# w_shape=model[i].weight.data.shape
# b_shape=model[i].bias.data.shape
# model[i].weight.data=random_weight(w_shape)
# model[i].bias.data=zero_weight(b_shape)
optimizer=optim.SGD(model.parameters(),nesterov=True,lr=learning_rate, momentum=0.9)
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
################################################################################
# END OF YOUR CODE
################################################################################
train_part34(model, optimizer)
```
# Part V. CIFAR-10 open-ended challenge
In this section, you can experiment with whatever ConvNet architecture you'd like on CIFAR-10.
Now it's your job to experiment with architectures, hyperparameters, loss functions, and optimizers to train a model that achieves **at least 70%** accuracy on the CIFAR-10 **validation** set within 10 epochs. You can use the check_accuracy and train functions from above. You can use either `nn.Module` or `nn.Sequential` API.
Describe what you did at the end of this notebook.
Here are the official API docs for each component. One note: what we call "spatial batch norm" in class is called "BatchNorm2D" in PyTorch.
* Layers in torch.nn package: http://pytorch.org/docs/stable/nn.html
* Activations: http://pytorch.org/docs/stable/nn.html#non-linear-activations
* Loss functions: http://pytorch.org/docs/stable/nn.html#loss-functions
* Optimizers: http://pytorch.org/docs/stable/optim.html
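As a hedged sketch of the batch norm naming note above (it reuses the `Flatten` module defined in Part IV; the channel and layer sizes are arbitrary placeholders, not a recommended architecture):
```
import torch.nn as nn

bn_sketch = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1),
    nn.BatchNorm2d(32),            # "spatial batch norm" after a conv layer
    nn.ReLU(),
    nn.MaxPool2d(2),               # 32x32 -> 16x16
    Flatten(),
    nn.Linear(32 * 16 * 16, 100),
    nn.BatchNorm1d(100),           # vanilla batch norm after an affine layer
    nn.ReLU(),
    nn.Linear(100, 10),
)
```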
### Things you might try:
- **Filter size**: Above we used 5x5; would smaller filters be more efficient?
- **Number of filters**: Above we used 32 filters. Do more or fewer do better?
- **Pooling vs Strided Convolution**: Do you use max pooling or just strided convolutions?
- **Batch normalization**: Try adding spatial batch normalization after convolution layers and vanilla batch normalization after affine layers. Do your networks train faster?
- **Network architecture**: The network above has two layers of trainable parameters. Can you do better with a deep network? Good architectures to try include:
- [conv-relu-pool]xN -> [affine]xM -> [softmax or SVM]
- [conv-relu-conv-relu-pool]xN -> [affine]xM -> [softmax or SVM]
- [batchnorm-relu-conv]xN -> [affine]xM -> [softmax or SVM]
- **Global Average Pooling**: Instead of flattening and then having multiple affine layers, perform convolutions until your image gets small (7x7 or so) and then perform an average pooling operation to get a 1x1 feature map of shape (1, 1, Filter#), which is then reshaped into a (Filter#) vector (see the sketch after this list). This is used in [Google's Inception Network](https://arxiv.org/abs/1512.00567) (see Table 1 for their architecture).
- **Regularization**: Add l2 weight regularization, or perhaps use Dropout.
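A minimal sketch of the global-average-pooling idea from the list above (batch size, channel count, and spatial size are placeholders):
```
import torch
import torch.nn as nn

feature_maps = torch.randn(64, 128, 7, 7)   # (N, C, H, W) output of a conv stack
gap = nn.AdaptiveAvgPool2d((1, 1))          # average each 7x7 map down to 1x1
pooled = gap(feature_maps).view(64, -1)     # (N, C, 1, 1) -> (N, C)
scores = nn.Linear(128, 10)(pooled)         # one affine layer to class scores
print(scores.shape)                         # torch.Size([64, 10])
```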
### Tips for training
For each network architecture that you try, you should tune the learning rate and other hyperparameters. When doing this there are a couple important things to keep in mind:
- If the parameters are working well, you should see improvement within a few hundred iterations
- Remember the coarse-to-fine approach for hyperparameter tuning: start by testing a large range of hyperparameters for just a few training iterations to find the combinations of parameters that are working at all.
- Once you have found some sets of parameters that seem to work, search more finely around these parameters. You may need to train for more epochs.
- You should use the validation set for hyperparameter search, and save your test set for evaluating your architecture on the best parameters as selected by the validation set.
### Going above and beyond
If you are feeling adventurous there are many other features you can implement to try and improve your performance. You are **not required** to implement any of these, but don't miss the fun if you have time!
- Alternative optimizers: you can try Adam, Adagrad, RMSprop, etc.
- Alternative activation functions such as leaky ReLU, parametric ReLU, ELU, or MaxOut.
- Model ensembles
- Data augmentation
- New Architectures
- [ResNets](https://arxiv.org/abs/1512.03385) where the input from the previous layer is added to the output.
- [DenseNets](https://arxiv.org/abs/1608.06993) where inputs into previous layers are concatenated together.
- [This blog has an in-depth overview](https://chatbotslife.com/resnets-highwaynets-and-densenets-oh-my-9bb15918ee32)
### Have fun and happy training!
```
################################################################################
# TODO: #
# Experiment with any architectures, optimizers, and hyperparameters. #
# Achieve AT LEAST 70% accuracy on the *validation set* within 10 epochs. #
# #
# Note that you can use the check_accuracy function to evaluate on either #
# the test set or the validation set, by passing either loader_test or #
# loader_val as the second argument to check_accuracy. You should not touch #
# the test set until you have finished your architecture and hyperparameter #
# tuning, and only run the test set once at the end to report a final value. #
################################################################################
model = None
optimizer = None
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
class AlexNet(nn.Module):
def __init__(self, num_classes=10):
super(AlexNet, self).__init__()
self.relu=nn.ReLU(inplace=True)
self.features = nn.Sequential(
nn.Conv2d(3, 64, kernel_size=3, padding=1),
self.relu,
nn.MaxPool2d(kernel_size=2),
nn.Conv2d(64, 192, kernel_size=3, padding=1),
self.relu,
nn.MaxPool2d(kernel_size=2),
nn.Conv2d(192, 384, kernel_size=3, padding=1),
self.relu,
nn.Conv2d(384, 256, kernel_size=3, padding=1),
self.relu,
# nn.Conv2d(256, 256, kernel_size=3, padding=1),
# nn.ReLU(inplace=True),
# nn.MaxPool2d(kernel_size=2),
)
self.avgpool = nn.AdaptiveAvgPool2d((7, 7))
self.classifier = nn.Sequential(
nn.Dropout(),
nn.Linear(256 * 7 * 7, 4096),
nn.ReLU(inplace=True),
nn.Dropout(),
nn.Linear(4096, 4096),
nn.ReLU(inplace=True),
nn.Linear(4096, num_classes)
)
def forward(self, x):
x = self.features(x)
x = self.avgpool(x)
x = x.view(-1, 7 * 7 * 256)
x = self.classifier(x)
return x
model=AlexNet()
optimizer=optim.Adam(model.parameters())
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
################################################################################
# END OF YOUR CODE
################################################################################
# You should get at least 70% accuracy
train_part34(model, optimizer, epochs=10)
```
## Describe what you did
In the cell below you should write an explanation of what you did, any additional features that you implemented, and/or any graphs that you made in the process of training and evaluating your network.
TODO: Describe what you did
## Test set -- run this only once
Now that we've gotten a result we're happy with, we test our final model on the test set (which you should store in best_model). Think about how this compares to your validation set accuracy.
```
best_model = model
check_accuracy_part34(loader_test, best_model)
```
<a id=top></a>
# Analysis of Engineered Features
## Table of Contents
**Note:** In this notebook, the engineered features are referred to as "covariates".
----
1. [Preparations](#prep)
2. [Analysis of Covariates](#covar_analysis)
1. [Boxplots](#covar_analysis_boxplots)
2. [Forward Mapping (onto Shape Space)](#covar_analysis_fwdmap)
3. [Back Mapping (Tissue Consensus Map)](#covar_analysis_backmap)
4. [Covariate Correlations](#covar_analysis_correlations)
3. [Covariate-Shape Relationships](#covar_fspace)
1. [Covariate-Shape Correlations](#covar_fspace_correlations)
2. [Covariate Relation Graph](#covar_fspace_graph)
<a id=prep></a>
## 1. Preparations
----
```
### Import modules
# External, general
from __future__ import division
import os, sys
import numpy as np
np.random.seed(42)
import matplotlib.pyplot as plt
%matplotlib inline
# External, specific
import pandas as pd
import ipywidgets as widgets
from IPython.display import display, HTML
from scipy.stats import linregress, pearsonr, gaussian_kde
from scipy.spatial import cKDTree
import seaborn as sns
sns.set_style('white')
import networkx as nx
# Internal
import katachi.utilities.loading as ld
import katachi.utilities.plotting as kp
### Load data
# Prep loader
loader = ld.DataLoaderIDR()
loader.find_imports(r"data/experimentA/extracted_measurements/", recurse=True, verbose=True)
# Import embedded feature space
dataset_suffix = "shape_TFOR_pca_measured.tsv"
#dataset_suffix = "shape_CFOR_pca_measured.tsv"
#dataset_suffix = "tagRFPtUtrCH_TFOR_pca_measured.tsv"
#dataset_suffix = "mKate2GM130_TFOR_pca_measured.tsv"
fspace_pca, prim_IDs, fspace_idx = loader.load_dataset(dataset_suffix)
print "Imported feature space of shape:", fspace_pca.shape
# Import TFOR centroid locations
centroids = loader.load_dataset("_other_measurements.tsv", IDs=prim_IDs)[0][:,3:6][:,::-1]
print "Imported TFOR centroids of shape:", centroids.shape
# Import engineered features
covar_df, _, _ = loader.load_dataset("_other_measurements.tsv", IDs=prim_IDs, force_df=True)
del covar_df['Centroids RAW X']; del covar_df['Centroids RAW Y']; del covar_df['Centroids RAW Z']
covar_names = list(covar_df.columns)
print "Imported covariates of shape:", covar_df.shape
### Report
print "\ncovar_df.head()"
display(covar_df.head())
print "\ncovar_df.describe()"
display(covar_df.describe())
### Z-standardize the covariates
covar_df_z = (covar_df - covar_df.mean()) / covar_df.std()
```
<a id=covar_analysis></a>
## 2. Analysis of Covariates
----
### Boxplots <a id=covar_analysis_boxplots></a>
```
### General boxplot of Covariates
# Interactive selection of covariates
wid = widgets.SelectMultiple(
options=covar_names,
value=covar_names,
description='Covars',
)
# Interactive plot
@widgets.interact(selected=wid, standardized=True)
def covariate_boxplot(selected=covar_names,
standardized=True):
# Select data
if standardized:
covar_df_plot = covar_df_z[list(selected)]
else:
covar_df_plot = covar_df[list(selected)]
# Plot
fig = plt.figure(figsize=(12,3))
covar_df_plot.boxplot(grid=False)
plt.tick_params(axis='both', which='major', labelsize=6)
fig.autofmt_xdate()
if standardized: plt.title("Boxplot of Covariates [standardized]")
if not standardized: plt.title("Boxplot of Covariates [raw]")
plt.show()
```
### Forward Mapping (onto Shape Space) <a id=covar_analysis_fwdmap></a>
```
### Interactive mapping of covariates onto PCA-transformed shape space
# Set interactions
@widgets.interact(covariate=covar_names,
prim_ID=prim_IDs,
PCx=(1, fspace_pca.shape[1], 1),
PCy=(1, fspace_pca.shape[1], 1),
standardized=False,
show_all_prims=True)
# Show
def show_PCs(covariate=covar_names[0], prim_ID=prim_IDs[0],
PCx=1, PCy=2, standardized=False, show_all_prims=True):
# Select covariate data
if standardized:
covar_df_plot = covar_df_z[covariate]
else:
covar_df_plot = covar_df[covariate]
# Prep
plt.figure(figsize=(9,7))
# If all should be shown...
if show_all_prims:
# Plot
plt.scatter(fspace_pca[:,PCx-1], fspace_pca[:,PCy-1],
c=covar_df_plot, cmap=plt.cm.plasma,
s=10, edgecolor='', alpha=0.75)
# Cosmetics
cbar = plt.colorbar()
if standardized:
cbar.set_label(covariate+" [standardized]", rotation=270, labelpad=15)
else:
cbar.set_label(covariate+" [raw]", rotation=270, labelpad=15)
plt.xlabel("PC "+str(PCx))
plt.ylabel("PC "+str(PCy))
plt.title("PCA-Transformed Shape Space [All Prims]")
plt.show()
# If individual prims should be shown...
else:
# Plot
plt.scatter(fspace_pca[fspace_idx==prim_IDs.index(prim_ID), PCx-1],
fspace_pca[fspace_idx==prim_IDs.index(prim_ID), PCy-1],
c=covar_df_plot[fspace_idx==prim_IDs.index(prim_ID)],
cmap=plt.cm.plasma, s=10, edgecolor='',
vmin=covar_df_plot.min(), vmax=covar_df_plot.max())
# Cosmetics
cbar = plt.colorbar()
if standardized:
cbar.set_label(covariate+" [standardized]", rotation=270, labelpad=15)
else:
cbar.set_label(covariate+" [raw]", rotation=270, labelpad=15)
plt.xlabel("PC "+str(PCx))
plt.ylabel("PC "+str(PCy))
plt.title("PCA-Transformed Shape Space [prim "+prim_ID+"]")
plt.show()
```
### Back Mapping (Tissue Consensus Map) <a id=covar_analysis_backmap></a>
```
### Interactive mapping of covariates onto centroids in TFOR
# Axis range
xlim = (-175, 15)
ylim = (- 25, 25)
# Set interactions
@widgets.interact(covariate=covar_names,
standardized=['no','z'])
# Plot
def centroid_backmap(covariate=covar_names[0],
standardized='no'):
# Select covariate data
if standardized=='no':
covar_df_plot = covar_df[covariate]
elif standardized=='z':
covar_df_plot = covar_df_z[covariate]
# Init
fig,ax = plt.subplots(1, figsize=(12,5))
# Back-mapping plot
#zord = np.argsort(covar_df_plot)
zord = np.arange(len(covar_df_plot)); np.random.shuffle(zord) # Random is better!
scat = ax.scatter(centroids[zord,2], centroids[zord,1],
color=covar_df_plot[zord], cmap=plt.cm.plasma,
edgecolor='', s=15, alpha=0.75)
# Cosmetics
ax.set_xlim(xlim)
ax.set_ylim(ylim)
ax.invert_yaxis() # To match images
ax.set_xlabel('TFOR x')
ax.set_ylabel('TFOR y')
cbar = plt.colorbar(scat,ax=ax)
if standardized == 'z':
ax.set_title('Centroid Back-Mapping of '+covariate+' [standardized]')
cbar.set_label(covariate+' [standardized]', rotation=270, labelpad=10)
else:
ax.set_title('Centroid Back-Mapping of '+covariate+' [raw]')
cbar.set_label(covariate+' [raw]', rotation=270, labelpad=20)
# Done
plt.tight_layout()
plt.show()
### Contour plot backmapping plot for publication
# Set interactions
@widgets.interact(covariate=covar_names,
standardized=['no','z'])
# Plot
def contour_backmap(covariate=covar_names[0],
standardized='no'):
# Settings
xlim = (-130, 8)
ylim = ( -19, 19)
# Select covariate data
if standardized=='no':
covar_df_plot = covar_df[covariate]
elif standardized=='z':
covar_df_plot = covar_df_z[covariate]
# Tools for smoothing on scatter
from katachi.utilities.pcl_helpers import pcl_gaussian_smooth
from scipy.spatial.distance import pdist, squareform
# Cut off at prim contour outline
kernel_prim = gaussian_kde(centroids[:,1:].T)
f_prim = kernel_prim(centroids[:,1:].T)
f_prim_mask = f_prim > f_prim.min() + (f_prim.max()-f_prim.min())*0.1
plot_values = covar_df_plot[f_prim_mask]
plot_centroids = centroids[f_prim_mask]
# Smoothen
pdists = squareform(pdist(plot_centroids[:,1:]))
plot_values = pcl_gaussian_smooth(pdists, plot_values[:,np.newaxis], sg_percentile=0.5)[:,0]
# Initialize figure
fig, ax = plt.subplots(1, figsize=(8, 3.25))
# Contourf plot
cfset = ax.tricontourf(plot_centroids[:,2], plot_centroids[:,1], plot_values, 20,
cmap='plasma')
# Illustrative centroids from a single prim
plt.scatter(centroids[fspace_idx==prim_IDs.index(prim_IDs[0]), 2],
centroids[fspace_idx==prim_IDs.index(prim_IDs[0]), 1],
c='', alpha=0.5)
# Cosmetics
ax.set_xlabel('TFOR x', fontsize=16)
ax.set_ylabel('TFOR y', fontsize=16)
plt.tick_params(axis='both', which='major', labelsize=13)
plt.xlim(xlim); plt.ylim(ylim)
ax.invert_yaxis() # To match images
# Colorbar
cbar = plt.colorbar(cfset, ax=ax, pad=0.01)
cbar.set_label(covariate, rotation=270, labelpad=10, fontsize=16)
cbar.ax.tick_params(labelsize=13)
# Done
plt.tight_layout()
plt.show()
```
### Covariate Correlations <a id=covar_analysis_correlations></a>
```
### Interactive linear fitting plot
# Set interaction
@widgets.interact(covar_x=covar_names,
covar_y=covar_names)
# Plotting function
def corr_plot_covar(covar_x=covar_names[0],
covar_y=covar_names[1]):
# Prep
plt.figure(figsize=(5,3))
# Scatterplot
plt.scatter(covar_df[covar_x], covar_df[covar_y],
facecolor='darkblue', edgecolor='',
s=5, alpha=0.5)
plt.xlabel(covar_x)
plt.ylabel(covar_y)
# Linear regression and pearson
fitted = linregress(covar_df[covar_x], covar_df[covar_y])
pearson = pearsonr(covar_df[covar_x], covar_df[covar_y])
# Report
print "Linear regression:"
for param,value in zip(["slope","intercept","rvalue","pvalue","stderr"], fitted):
print " {}:\t{:.2e}".format(param,value)
print "Pearson:"
print " r:\t{:.2e}".format(pearson[0])
print " p:\t{:.2e}".format(pearson[1])
# Add fit to plot
xmin,xmax = (covar_df[covar_x].min(), covar_df[covar_x].max())
ymin,ymax = (covar_df[covar_y].min(), covar_df[covar_y].max())
ybot,ytop = (xmin*fitted[0]+fitted[1], xmax*fitted[0]+fitted[1])
plt.plot([xmin,xmax], [ybot,ytop], c='blue', lw=2, alpha=0.5)
# Cosmetics and show
plt.xlim([xmin,xmax])
plt.ylim([ymin,ymax])
plt.show()
### Full pairwise correlation plot
# Create the plot
mclust = sns.clustermap(covar_df_z.corr(method='pearson'),
figsize=(10, 10),
cmap='RdBu')
# Fix the y axis orientation
mclust.ax_heatmap.set_yticklabels(mclust.ax_heatmap.get_yticklabels(),
rotation=0)
# Other cosmetics
mclust.ax_heatmap.set_title("Pairwise Correlations Cluster Plot", y=1.275)
plt.ylabel("Pearson\nCorr. Coef.")
plt.show()
```
<a id=covar_fspace></a>
## 3. Covariate-Shape Relationships
----
### Covariate-Shape Correlations <a id=covar_fspace_correlations></a>
```
### Interactive linear fitting plot
# Set interaction
@widgets.interact(covar_x=covar_names,
PC_y=range(1,fspace_pca.shape[1]+1))
# Plotting function
def corr_plot_covar(covar_x=covar_names[0],
PC_y=1):
# Prep
PC_y = int(PC_y)
plt.figure(figsize=(5,3))
# Scatterplot
plt.scatter(covar_df[covar_x], fspace_pca[:, PC_y-1],
facecolor='darkred', edgecolor='',
s=5, alpha=0.5)
plt.xlabel(covar_x)
plt.ylabel("PC "+str(PC_y))
# Linear regression and pearson
fitted = linregress(covar_df[covar_x], fspace_pca[:, PC_y-1])
pearson = pearsonr(covar_df[covar_x], fspace_pca[:, PC_y-1])
# Report
print "Linear regression:"
for param,value in zip(["slope","intercept","rvalue","pvalue","stderr"], fitted):
print " {}:\t{:.2e}".format(param,value)
print "Pearson:"
print " r:\t{:.2e}".format(pearson[0])
print " p:\t{:.2e}".format(pearson[1])
# Add fit to plot
xmin,xmax = (covar_df[covar_x].min(), covar_df[covar_x].max())
ymin,ymax = (fspace_pca[:, PC_y-1].min(), fspace_pca[:, PC_y-1].max())
ybot,ytop = (xmin*fitted[0]+fitted[1], xmax*fitted[0]+fitted[1])
plt.plot([xmin,xmax], [ybot,ytop], c='red', lw=2, alpha=0.5)
# Cosmetics and show
plt.xlim([xmin,xmax])
plt.ylim([ymin,ymax])
plt.show()
### Selected linear fits
# Settings for TFOR PC 3
if 'TFOR' in dataset_suffix:
covar_x = 'Z Axis Length'
PC_y = 3
x_reduc = 0
lbl_x = 'TFOR PC 3'
lbl_y = 'Z Axis Length\n(Cell Height)'
# Settings for CFOR PC 1
if 'CFOR' in dataset_suffix:
covar_x = 'Sphericity'
PC_y = 1
x_reduc = 2
lbl_x = 'CFOR PC 1'
lbl_y = 'Sphericity'
# Prep
plt.figure(figsize=(6,4))
# Scatterplot
plt.scatter(fspace_pca[:, PC_y-1], covar_df[covar_x],
facecolor='darkblue', edgecolor='',
s=5, alpha=0.25)
plt.xlabel(covar_x)
plt.ylabel("PC "+str(PC_y))
# Linear regression and pearson
fitted = linregress(fspace_pca[:, PC_y-1], covar_df[covar_x])
pearson = pearsonr(fspace_pca[:, PC_y-1], covar_df[covar_x])
# Report
print "Linear regression:"
for param,value in zip(["slope","intercept","rvalue","pvalue","stderr"], fitted):
print " {}:\t{:.2e}".format(param,value)
print "Pearson:"
print " r:\t{:.2e}".format(pearson[0])
print " p:\t{:.2e}".format(pearson[1])
# Add fit to plot
ymin,ymax = (covar_df[covar_x].min(), covar_df[covar_x].max())
xmin,xmax = (fspace_pca[:, PC_y-1].min()-x_reduc, fspace_pca[:, PC_y-1].max())
ybot,ytop = (xmin*fitted[0]+fitted[1], xmax*fitted[0]+fitted[1])
plt.plot([xmin,xmax], [ybot,ytop], c='black', lw=1, alpha=0.5)
# Cosmetics
plt.tick_params(axis='both', which='major', labelsize=16)
plt.xlabel(lbl_x, fontsize=18)
plt.ylabel(lbl_y, fontsize=18)
plt.xlim([xmin,xmax])
plt.ylim([ymin,ymax+0.05])
plt.tight_layout()
# Done
plt.show()
### Full pairwise correlation plot
# Prepare the pairwise correlation
fspace_pca_z = (fspace_pca - fspace_pca.mean(axis=0)) / fspace_pca.std(axis=0)
fspace_pca_z_df = pd.DataFrame(fspace_pca_z[:,:25])
pairwise_corr = covar_df_z.expanding(axis=1).corr(fspace_pca_z_df, pairwise=True).iloc[-1, :, :] # Ouf, pandas...
# Create the plot
mclust = sns.clustermap(pairwise_corr,
figsize=(10, 10),
col_cluster=False,
cmap='RdBu')
# Fix the y axis orientation
mclust.ax_heatmap.set_yticklabels(mclust.ax_heatmap.get_yticklabels(),
rotation=0)
# Other cosmetics
mclust.ax_heatmap.set_title("Pairwise Correlations Cluster Plot", y=1.275)
mclust.ax_heatmap.set_xticklabels(range(1,fspace_pca_z_df.shape[1]+1))
plt.ylabel("Pearson\nCorr. Coef.")
# Done
plt.show()
```
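As an aside (not part of the original analysis), the expanding-window pandas trick above can be cross-checked with a more direct NumPy computation of the same covariate-by-PC Pearson matrix. The sketch below assumes `covar_df_z` and `fspace_pca_z` from the cell above:
```
# Alternative sketch: covariate x PC Pearson correlations via np.corrcoef
import numpy as np
import pandas as pd

n_pcs = 25
n_covars = covar_df_z.shape[1]
stacked = np.corrcoef(covar_df_z.values.T, fspace_pca_z[:, :n_pcs].T)
pairwise_corr_alt = pd.DataFrame(stacked[:n_covars, n_covars:],
                                 index=covar_df_z.columns,
                                 columns=range(1, n_pcs + 1))
```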
### Covariate Relation Graph <a id=covar_fspace_graph></a>
```
# Parameters
num_PCs = 8 # Number of PCs to include
corr_measure = 'pearsonr' # Correlation measure to use
threshold = 0.30 # Threshold to include a correlation as relevant
# Get relevant data
if corr_measure == 'pearsonr':
covar_fspace_dists = pairwise_corr.get_values()[:, :num_PCs] # Retrieved from above!
else:
raise NotImplementedError()
# Generate the plot
kp.covar_pc_bigraph(covar_fspace_dists, threshold, covar_names,
height=0.6, verbose=True, show=False)
# Done
plt.show()
```
----
[back to top](#top)
```
import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy
from fastai.script import *
from fastai.vision import *
from fastai.callbacks import *
from fastai.distributed import *
from fastprogress import fastprogress
from torchvision.models import *
from fastai.vision.models.xresnet import *
from fastai.vision.models.xresnet2 import *
from fastai.vision.models.presnet import *
torch.backends.cudnn.benchmark = True
```
# XResNet baseline
```
#https://github.com/fastai/fastai_docs/blob/master/dev_course/dl2/11_train_imagenette.ipynb
def noop(x): return x
class Flatten(nn.Module):
def forward(self, x): return x.view(x.size(0), -1)
def conv(ni, nf, ks=3, stride=1, bias=False):
return nn.Conv2d(ni, nf, kernel_size=ks, stride=stride, padding=ks//2, bias=bias)
act_fn = nn.ReLU(inplace=True)
def init_cnn(m):
if getattr(m, 'bias', None) is not None: nn.init.constant_(m.bias, 0)
if isinstance(m, (nn.Conv2d,nn.Linear)): nn.init.kaiming_normal_(m.weight)
for l in m.children(): init_cnn(l)
def conv_layer(ni, nf, ks=3, stride=1, zero_bn=False, act=True):
bn = nn.BatchNorm2d(nf)
nn.init.constant_(bn.weight, 0. if zero_bn else 1.)
layers = [conv(ni, nf, ks, stride=stride), bn]
if act: layers.append(act_fn)
return nn.Sequential(*layers)
class ResBlock(nn.Module):
def __init__(self, expansion, ni, nh, stride=1):
super().__init__()
nf,ni = nh*expansion,ni*expansion
layers = [conv_layer(ni, nh, 3, stride=stride),
conv_layer(nh, nf, 3, zero_bn=True, act=False)
] if expansion == 1 else [
conv_layer(ni, nh, 1),
conv_layer(nh, nh, 3, stride=stride),
conv_layer(nh, nf, 1, zero_bn=True, act=False)
]
self.convs = nn.Sequential(*layers)
self.idconv = noop if ni==nf else conv_layer(ni, nf, 1, act=False)
self.pool = noop if stride==1 else nn.AvgPool2d(2, ceil_mode=True)
def forward(self, x): return act_fn(self.convs(x) + self.idconv(self.pool(x)))
class XResNet(nn.Sequential):
@classmethod
def create(cls, expansion, layers, c_in=3, c_out=1000):
nfs = [c_in, (c_in+1)*8, 64, 64]
stem = [conv_layer(nfs[i], nfs[i+1], stride=2 if i==0 else 1)
for i in range(3)]
nfs = [64//expansion,64,128,256,512]
res_layers = [cls._make_layer(expansion, nfs[i], nfs[i+1],
n_blocks=l, stride=1 if i==0 else 2)
for i,l in enumerate(layers)]
res = cls(
*stem,
nn.MaxPool2d(kernel_size=3, stride=2, padding=1),
*res_layers,
nn.AdaptiveAvgPool2d(1), Flatten(),
nn.Linear(nfs[-1]*expansion, c_out),
)
init_cnn(res)
return res
@staticmethod
def _make_layer(expansion, ni, nf, n_blocks, stride):
return nn.Sequential(
*[ResBlock(expansion, ni if i==0 else nf, nf, stride if i==0 else 1)
for i in range(n_blocks)])
def xresnet18 (**kwargs): return XResNet.create(1, [2, 2, 2, 2], **kwargs)
def xresnet34 (**kwargs): return XResNet.create(1, [3, 4, 6, 3], **kwargs)
def xresnet50 (**kwargs): return XResNet.create(4, [3, 4, 6, 3], **kwargs)
def xresnet101(**kwargs): return XResNet.create(4, [3, 4, 23, 3], **kwargs)
def xresnet152(**kwargs): return XResNet.create(4, [3, 8, 36, 3], **kwargs)
```
# XResNet with Self Attention
```
#Unmodified from https://github.com/fastai/fastai/blob/5c51f9eabf76853a89a9bc5741804d2ed4407e49/fastai/layers.py
def conv1d(ni:int, no:int, ks:int=1, stride:int=1, padding:int=0, bias:bool=False):
"Create and initialize a `nn.Conv1d` layer with spectral normalization."
conv = nn.Conv1d(ni, no, ks, stride=stride, padding=padding, bias=bias)
nn.init.kaiming_normal_(conv.weight)
if bias: conv.bias.data.zero_()
return spectral_norm(conv)
# Adapted from SelfAttention layer at https://github.com/fastai/fastai/blob/5c51f9eabf76853a89a9bc5741804d2ed4407e49/fastai/layers.py
# Inspired by https://arxiv.org/pdf/1805.08318.pdf
class SimpleSelfAttention(nn.Module):
def __init__(self, n_in:int, ks=1):#, n_out:int):
super().__init__()
self.conv = conv1d(n_in, n_in, ks, padding=ks//2, bias=False)
self.gamma = nn.Parameter(tensor([0.]))
def forward(self,x):
size = x.size()
x = x.view(*size[:2],-1)
o = torch.bmm(x.permute(0,2,1).contiguous(),self.conv(x))
o = self.gamma * torch.bmm(x,o) + x
return o.view(*size).contiguous()
#unmodified from https://github.com/fastai/fastai/blob/9b9014b8967186dc70c65ca7dcddca1a1232d99d/fastai/vision/models/xresnet.py
def conv(ni, nf, ks=3, stride=1, bias=False):
return nn.Conv2d(ni, nf, kernel_size=ks, stride=stride, padding=ks//2, bias=bias)
def noop(x): return x
def conv_layer(ni, nf, ks=3, stride=1, zero_bn=False, act=True):
bn = nn.BatchNorm2d(nf)
nn.init.constant_(bn.weight, 0. if zero_bn else 1.)
layers = [conv(ni, nf, ks, stride=stride), bn]
if act: layers.append(act_fn)
return nn.Sequential(*layers)
# Modified from https://github.com/fastai/fastai/blob/9b9014b8967186dc70c65ca7dcddca1a1232d99d/fastai/vision/models/xresnet.py
# Added self attention
class ResBlock(nn.Module):
def __init__(self, expansion, ni, nh, stride=1,sa=False):
super().__init__()
nf,ni = nh*expansion,ni*expansion
layers = [conv_layer(ni, nh, 3, stride=stride),
conv_layer(nh, nf, 3, zero_bn=True, act=False)
] if expansion == 1 else [
conv_layer(ni, nh, 1),
conv_layer(nh, nh, 3, stride=stride),
conv_layer(nh, nf, 1, zero_bn=True, act=False)
]
self.sa = SimpleSelfAttention(nf,ks=1) if sa else noop
self.convs = nn.Sequential(*layers)
self.idconv = noop if ni==nf else conv_layer(ni, nf, 1, act=False)
self.pool = noop if stride==1 else nn.AvgPool2d(2, ceil_mode=True)
def forward(self, x):
return act_fn(self.sa(self.convs(x)) + self.idconv(self.pool(x)))
# Modified from https://github.com/fastai/fastai/blob/9b9014b8967186dc70c65ca7dcddca1a1232d99d/fastai/vision/models/xresnet.py
# Added self attention
class XResNet_sa(nn.Sequential):
@classmethod
def create(cls, expansion, layers, c_in=3, c_out=1000):
nfs = [c_in, (c_in+1)*8, 64, 64]
stem = [conv_layer(nfs[i], nfs[i+1], stride=2 if i==0 else 1)
for i in range(3)]
nfs = [64//expansion,64,128,256,512]
res_layers = [cls._make_layer(expansion, nfs[i], nfs[i+1],
n_blocks=l, stride=1 if i==0 else 2, sa = True if i in[len(layers)-4] else False)
for i,l in enumerate(layers)]
res = cls(
*stem,
nn.MaxPool2d(kernel_size=3, stride=2, padding=1),
*res_layers,
nn.AdaptiveAvgPool2d(1), Flatten(),
nn.Linear(nfs[-1]*expansion, c_out),
)
init_cnn(res)
return res
@staticmethod
def _make_layer(expansion, ni, nf, n_blocks, stride, sa = False):
return nn.Sequential(
*[ResBlock(expansion, ni if i==0 else nf, nf, stride if i==0 else 1, sa if i in [n_blocks -1] else False)
for i in range(n_blocks)])
def xresnet50_sa (**kwargs): return XResNet_sa.create(4, [3, 4, 6, 3], **kwargs)
```
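As a quick sanity check (not part of the original experiments), the sketch below verifies that the self-attention block preserves the tensor shape and that the modified network yields one logit per class. It assumes the cells above have been run, so `SimpleSelfAttention`, `xresnet50_sa`, and the fastai star imports they rely on are in scope:
```
import torch

# SimpleSelfAttention should preserve the (batch, channels, H, W) shape
sa = SimpleSelfAttention(16, ks=1)
x = torch.randn(2, 16, 8, 8)
print(sa(x).shape)    # expected: torch.Size([2, 16, 8, 8])

# The full model maps an image batch to one logit per class
m = xresnet50_sa(c_out=10)
print(m(torch.randn(2, 3, 128, 128)).shape)    # expected: torch.Size([2, 10])
```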
# Data loading
```
#https://github.com/fastai/fastai/blob/master/examples/train_imagenette.py
def get_data(size, woof, bs, workers=None):
if size<=128: path = URLs.IMAGEWOOF_160 if woof else URLs.IMAGENETTE_160
elif size<=224: path = URLs.IMAGEWOOF_320 if woof else URLs.IMAGENETTE_320
else : path = URLs.IMAGEWOOF if woof else URLs.IMAGENETTE
path = untar_data(path)
n_gpus = num_distrib() or 1
if workers is None: workers = min(8, num_cpus()//n_gpus)
return (ImageList.from_folder(path).split_by_folder(valid='val')
.label_from_folder().transform(([flip_lr(p=0.5)], []), size=size)
.databunch(bs=bs, num_workers=workers)
.presize(size, scale=(0.35,1))
.normalize(imagenet_stats))
```
# Train
```
opt_func = partial(optim.Adam, betas=(0.9,0.99), eps=1e-6)
```
## Imagewoof
### Image size = 256
```
image_size = 256
data = get_data(image_size,woof =True,bs=64)
```
#### Epochs = 5
```
# we use the same parameters for baseline and new model
epochs = 5
lr = 3e-3
bs = 64
mixup = 0
```
##### Baseline
```
m = xresnet50(c_out=10)
learn = (Learner(data, m, wd=1e-2, opt_func=opt_func,
metrics=[accuracy,top_k_accuracy],
bn_wd=False, true_wd=True,
loss_func = LabelSmoothingCrossEntropy())
)
if mixup: learn = learn.mixup(alpha=mixup)
learn = learn.to_fp16(dynamic=True)
learn.fit_one_cycle(epochs, lr, div_factor=10, pct_start=0.3)
learn.fit_one_cycle(epochs, lr, div_factor=10, pct_start=0.3)
learn.fit_one_cycle(epochs, lr, div_factor=10, pct_start=0.3)
learn.fit_one_cycle(epochs, lr, div_factor=10, pct_start=0.3)
learn.fit_one_cycle(epochs, lr, div_factor=10, pct_start=0.3)
results = [61.8,64.8,57.4,62.4,63,61.8, 57.6,63,62.6, 64.8] #included some from previous notebook iteration
np.mean(results), np.std(results), np.min(results), np.max(results)
```
##### New model
```
m = xresnet50_sa(c_out=10)
learn = None
gc.collect()
learn = (Learner(data, m, wd=1e-2, opt_func=opt_func,
metrics=[accuracy,top_k_accuracy],
bn_wd=False, true_wd=True,
loss_func = LabelSmoothingCrossEntropy())
)
if mixup: learn = learn.mixup(alpha=mixup)
learn = learn.to_fp16(dynamic=True)
learn.fit_one_cycle(5, lr, div_factor=10, pct_start=0.3)
learn.fit_one_cycle(5, lr, div_factor=10, pct_start=0.3)
learn.fit_one_cycle(5, lr, div_factor=10, pct_start=0.3)
learn.fit_one_cycle(5, lr, div_factor=10, pct_start=0.3)
results = [67.4,65.8,70.6,65.8,67.8,69,65.6,66.4, 67.8,70.2]
np.mean(results), np.std(results), np.min(results), np.max(results)
```
# [Module 2.1] Training on a SageMaker Cluster (run without VPC)
This notebook performs the following steps:
- Run training on a SageMaker hosting cluster
- Save the name of the training job
- The saved job name is used in the next notebook for model deployment and inference
---
Get a SageMaker session and retrieve the execution role.
- These two pieces of information are used to connect to the SageMaker hosting cluster.
```
import os
import sagemaker
from sagemaker import get_execution_role
sagemaker_session = sagemaker.Session()
role = get_execution_role()
```
## Uploading Local Data to S3
Upload the local data to S3 so that it can be used as the training input.
```
# dataset_location = sagemaker_session.upload_data(path='data', key_prefix='data/DEMO-cifar10')
# display(dataset_location)
dataset_location = 's3://sagemaker-ap-northeast-2-057716757052/data/DEMO-cifar10'
dataset_location
# efs_dir = '/home/ec2-user/efs/data'
# ! ls {efs_dir} -al
# ! aws s3 cp {dataset_location} {efs_dir} --recursive
from sagemaker.inputs import FileSystemInput
# Specify the EFS file system id.
file_system_id = 'fs-38dc1558' # 'fs-xxxxxxxx'
print(f"EFS file-system-id: {file_system_id}")
# Specify directory path for input data on the file system.
# You need to provide normalized and absolute path below.
train_file_system_directory_path = '/data/train'
eval_file_system_directory_path = '/data/eval'
validation_file_system_directory_path = '/data/validation'
print(f'EFS file-system data input path: {train_file_system_directory_path}')
print(f'EFS file-system data input path: {eval_file_system_directory_path}')
print(f'EFS file-system data input path: {validation_file_system_directory_path}')
# Specify the access mode of the mount of the directory associated with the file system.
# Directory must be mounted 'ro'(read-only).
file_system_access_mode = 'ro'
# Specify your file system type
file_system_type = 'EFS'
train = FileSystemInput(file_system_id=file_system_id,
file_system_type=file_system_type,
directory_path=train_file_system_directory_path,
file_system_access_mode=file_system_access_mode)
eval = FileSystemInput(file_system_id=file_system_id,
file_system_type=file_system_type,
directory_path=eval_file_system_directory_path,
file_system_access_mode=file_system_access_mode)
validation = FileSystemInput(file_system_id=file_system_id,
file_system_type=file_system_type,
directory_path=validation_file_system_directory_path,
file_system_access_mode=file_system_access_mode)
aws_region = 'ap-northeast-2'# aws-region-code e.g. us-east-1
s3_bucket = 'sagemaker-ap-northeast-2-057716757052'# your-s3-bucket-name
prefix = "cifar10/efs" #prefix in your bucket
s3_output_location = f's3://{s3_bucket}/{prefix}/output'
print(f'S3 model output location: {s3_output_location}')
security_group_ids = ['sg-0192524ef63ec6138'] # ['sg-xxxxxxxx']
# subnets = ['subnet-0a84bcfa36d3981e6','subnet-0304abaaefc2b1c34','subnet-0a2204b79f378b178'] # [ 'subnet-xxxxxxx', 'subnet-xxxxxxx', 'subnet-xxxxxxx']
subnets = ['subnet-0a84bcfa36d3981e6'] # [ 'subnet-xxxxxxx', 'subnet-xxxxxxx', 'subnet-xxxxxxx']
from sagemaker.tensorflow import TensorFlow
estimator = TensorFlow(base_job_name='cifar10',
entry_point='cifar10_keras_sm_tf2.py',
source_dir='training_script',
role=role,
framework_version='2.0.0',
py_version='py3',
script_mode=True,
hyperparameters={'epochs' : 1},
train_instance_count=1,
train_instance_type='ml.p3.2xlarge',
output_path=s3_output_location,
subnets=subnets,
security_group_ids=security_group_ids,
                       sagemaker_session=sagemaker_session
)
estimator.fit({'train': train,
'validation': validation,
'eval': eval,
})
# estimator.fit({'train': 'file://data/train',
# 'validation': 'file://data/validation',
# 'eval': 'file://data/eval'})
```
# Choose VPC_Mode: True or False
#### **[Important] Change this to True when running in VPC mode**
```
VPC_Mode = False
from sagemaker.tensorflow import TensorFlow
def retrieve_estimator(VPC_Mode):
if VPC_Mode:
        # In VPC mode, specify the subnets and security groups.
estimator = TensorFlow(base_job_name='cifar10',
entry_point='cifar10_keras_sm_tf2.py',
source_dir='training_script',
role=role,
framework_version='2.0.0',
py_version='py3',
script_mode=True,
hyperparameters={'epochs': 2},
train_instance_count=1,
train_instance_type='ml.p3.8xlarge',
subnets = ['subnet-090c1fad32165b0fa','subnet-0bd7cff3909c55018'],
security_group_ids = ['sg-0f45d634d80aef27e']
)
else:
estimator = TensorFlow(base_job_name='cifar10',
entry_point='cifar10_keras_sm_tf2.py',
source_dir='training_script',
role=role,
framework_version='2.0.0',
py_version='py3',
script_mode=True,
hyperparameters={'epochs': 2},
train_instance_count=1,
train_instance_type='ml.p3.8xlarge')
return estimator
estimator = retrieve_estimator(VPC_Mode)
```
Run the training. This time, specify the S3 data location for each of the channels (`train, validation, eval`).<br>
After training completes, also check the billable seconds. Billable seconds are the time you are actually charged for while training runs.
```
Billable seconds: <time>
```
For reference, training for 5 epochs on an `ml.p2.xlarge` instance takes about 6-7 minutes in total, of which the actual training takes about 3-4 minutes.
```
%%time
estimator.fit({'train':'{}/train'.format(dataset_location),
'validation':'{}/validation'.format(dataset_location),
'eval':'{}/eval'.format(dataset_location)})
```
## Save training_job_name
Save the current training_job_name.
- The training job name gives access to the training details and to the S3 path of the resulting **Model Artifact** file.
```
train_job_name = estimator._current_job_name
%store train_job_name
```
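For reference, a minimal sketch of how the next notebook might restore the stored name and look up the resulting model artifact. This is an illustration rather than part of the original notebook; it assumes the same Jupyter environment (for `%store`) and configured AWS credentials:
```
# Restore the variable saved with %store (IPython storemagic)
%store -r train_job_name

# Look up the training job and the S3 path of the model artifact
import boto3
sm_client = boto3.client('sagemaker')
desc = sm_client.describe_training_job(TrainingJobName=train_job_name)
print(desc['ModelArtifacts']['S3ModelArtifacts'])
```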
<a href="https://colab.research.google.com/github/iotanalytics/IoTTutorial/blob/main/code/preprocessing_and_decomposition/Matrix_Profile.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## Matrix Profile
## Introduction
The matrix profile (MP) is a data structure and associated algorithms that helps solve the dual problem of anomaly detection and motif discovery. It is robust, scalable and largely parameter-free.
MP can be combined with other algorithms to accomplish:
* Motif discovery
* Time series chains
* Anomaly discovery
* Joins
* Semantic segmentation
matrixprofile-ts offers 3 different algorithms to compute Matrix Profile:
* STAMP (Scalable Time Series Anytime Matrix Profile) - Each distance profile is independent of other distance profiles, the order in which they are computed can be random. It is an anytime algorithm.
* STOMP (Scalable Time Series Ordered Matrix Profile) - This algorithm is an exact ordered algorithm. It is significantly faster than STAMP.
* SCRIMP++ (Scalable Column Independent Matrix Profile) - This algorithm combines the anytime component of STAMP with the speed of STOMP.
See: https://towardsdatascience.com/introduction-to-matrix-profiles-5568f3375d90
## Code Example
```
!pip install matrixprofile-ts
import pandas as pd
## example data importing
data = pd.read_csv('https://raw.githubusercontent.com/iotanalytics/IoTTutorial/main/data/SCG_data.csv').drop('Unnamed: 0',1).to_numpy()[0:20,:1000]
import operator
import numpy as np
import matplotlib.pyplot as plt
from datetime import datetime
from matplotlib.colors import ListedColormap
from sklearn import neighbors, datasets
from matrixprofile import *
# Pull a portion of the data
pattern = data[10,:] + max(abs(data[10,:]))
# Compute Matrix Profile
m = 10
mp = matrixProfile.stomp(pattern,m)
#Append np.nan to Matrix profile to enable plotting against raw data
mp_adj = np.append(mp[0],np.zeros(m-1)+np.nan)
#Plot the signal data
fig, (ax1, ax2) = plt.subplots(2,1,sharex=True,figsize=(20,10))
ax1.plot(np.arange(len(pattern)),pattern)
ax1.set_ylabel('Signal', size=22)
#Plot the Matrix Profile
ax2.plot(np.arange(len(mp_adj)),mp_adj, label="Matrix Profile", color='red')
ax2.set_ylabel('Matrix Profile', size=22)
ax2.set_xlabel('Time', size=22);
```
## Discussion
Pros:
* It is exact: For motif discovery, discord discovery, time series joins etc., the Matrix Profile based methods provide no false positives or false dismissals.
* It is simple and parameter-free: in contrast, more general algorithms in this space
typically require building and tuning spatial access methods and/or hash functions.
* It is space efficient: Matrix Profile construction algorithms requires an inconsequential
space overhead, just linear in the time series length with a small constant factor, allowing
massive datasets to be processed in main memory (for most data mining, disk is death).
* It allows anytime algorithms: While exact MP algorithms are extremely scalable, for
extremely large datasets we can compute the Matrix Profile in an anytime fashion, allowing
ultra-fast approximate solutions and real-time data interaction.
* It is incrementally maintainable: Having computed the Matrix Profile for a dataset,
we can incrementally update it very efficiently. In many domains this means we can effectively
maintain exact joins, motifs, discords on streaming data forever.
* It can leverage hardware: Matrix Profile construction is embarrassingly parallelizable,
both on multicore processors, GPUs, distributed systems etc.
* It is free of the curse of dimensionality: that is to say, it has time complexity that is
constant in subsequence length. This is a very unusual and desirable property; virtually all
existing time series algorithms scale poorly as the subsequence length grows.
* It can be constructed in deterministic time: Almost all algorithms for time series
data mining can take radically different times to finish on two (even slightly) different datasets.
In contrast, given only the length of the time series, we can precisely predict in advance how
long it will take to compute the Matrix Profile. (this allows resource planning)
* It can handle missing data: Even in the presence of missing data, we can provide
answers which are guaranteed to have no false negatives.
* Finally, and subjectively: Simplicity and Intuitiveness: Seeing the world through
the MP lens often invites/suggests simple and elegant solutions.
Cons:
* Larger datasets can take a long time to compute. Scalability needs to be addressed.
* Cannot be used with Dynamic time Warping as of now.
* DTW is used for one-to-all matching whereas MP is used for all-to-all matching.
* DTW is used for smaller datasets rather than large.
* Need to adjust window size manually for different datasets.
*How to read the MP* (a short extraction sketch follows this list):
* Where you see relatively low values, you know that the subsequence in the original time
series must have (at least one) relatively similar subsequence elsewhere in the data (such
regions are “motifs” or reoccurring patterns)
* Where you see relatively high values, you know that the subsequence in the original time
series must be unique in its shape (such areas are “discords” or anomalies). In fact, the highest point is exactly the definition of Time
Series Discord, perhaps the best anomaly detector for time series.
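A minimal sketch of this reading, assuming the code example above has been run so that `mp_adj` is available: the lowest point of the profile marks the best-matched subsequence (a motif member) and the highest point marks the most unusual subsequence (the top discord).
```
import numpy as np

motif_idx = int(np.nanargmin(mp_adj))      # start index of the best-matched subsequence
discord_idx = int(np.nanargmax(mp_adj))    # start index of the most unusual subsequence

print("Top motif starts at index", motif_idx, "profile value:", mp_adj[motif_idx])
print("Top discord starts at index", discord_idx, "profile value:", mp_adj[discord_idx])
```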
## References
https://www.cs.ucr.edu/~eamonn/MatrixProfile.html (powerpoints on this site - a lot of examples)
https://towardsdatascience.com/introduction-to-matrix-profiles-5568f3375d90
Python implementation: https://github.com/TDAmeritrade/stumpy
## Python Modules
```
%%writefile weather.py
def prognosis():
print("It will rain today")
import weather
weather.prognosis()
```
## How does Python know from where to import packages/modules from?
```
# Python imports work by searching the directories listed in sys.path.
import sys
sys.path
## "__main__" usage
# A module can discover whether or not it is running in the main scope by checking its own __name__,
# which allows a common idiom for conditionally executing code in a module when it is run as a script or with python -m
# but not when it is imported:
%%writefile hw.py
#!/usr/bin/env python
def hw():
print("Running Main")
def hw2():
print("Hello 2")
if __name__ == "__main__":
# execute only if run as a script
print("Running as script")
hw()
hw2()
import main
import hw
main.main()
hw.hw2()
# Running on all 3 OSes from the command line:
# python main.py
```
## Make main.py self running on Linux (also should work on MacOS):
Add `#!/usr/bin/env python` as the first line of the script, then mark it executable (the file permissions need to change too):

`$ chmod +x main.py`
## Making Standalone .EXEs for Python in Windows
* http://www.py2exe.org/ used to be Python 2 only, now supposedly supports Python 3 as well
* http://www.pyinstaller.org/
Tutorial: https://medium.com/dreamcatcher-its-blog/making-an-stand-alone-executable-from-a-python-script-using-pyinstaller-d1df9170e263
Need to create exe on a similar system as target system!
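A typical invocation looks like the sketch below (assuming PyInstaller is installed and `main.py` is the entry script; this is an illustration, not taken from the tutorial above):
```
pip install pyinstaller
pyinstaller --onefile main.py
# the bundled executable is written to the dist/ folder
```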
```
# Exercise: write a function which returns a list of Fibonacci numbers (1, 1, 2, 3, 5, ...) up to the nth.
# For example, fib(4) would return [1, 1, 2, 3]
```


```
%%writefile fibo.py
# Fibonacci numbers module
def fib(n): # write Fibonacci series up to n
    a, b = 1, 1
while b < n:
print(b, end=' ')
a, b = b, a+b
print()
def fib2(n): # return Fibonacci series up to n
result = []
a, b = 1, 1
while b < n:
result.append(b)
a, b = b, a+b
return result
import fibo
fibo.fib(100)
fibo.fib2(100)
fib=fibo.fib
```
If you intend to use a function often you can assign it to a local name:
```
fib(300)
```
#### There is a variant of the import statement that imports names from a module directly into the importing module’s symbol table.
```
from fibo import fib, fib2 # we overwrote fib=fibo.fib
fib(100)
fib2(200)
```
This does not introduce the module name from which the imports are taken in the local symbol table (so in the example, fibo is not defined).
There is even a variant to import all names that a module defines: **NOT RECOMMENDED**
```
## Do not do this: namespace collisions are possible!
from fibo import *
fib(400)
```
### If the module name is followed by as, then the name following as is bound directly to the imported module.
```
import fibo as fib
dir(fib)
fib.fib(50)
### It can also be used when utilising from with similar effects:
from fibo import fib as fibonacci
fibonacci(200)
```
### Executing modules as scripts
When you run a Python module with
python fibo.py <arguments>
the code in the module will be executed, just as if you imported it, but with the \_\_name\_\_ set to "\_\_main\_\_". That means that by adding this code at the end of your module:
```
%%writefile fibbo.py
# Fibonacci numbers module
def fib(n): # write Fibonacci series up to n
a, b = 0, 1
while b < n:
print(b, end=' ')
a, b = b, a+b
print()
def fib2(n): # return Fibonacci series up to n
result = []
a, b = 0, 1
while b < n:
result.append(b)
a, b = b, a+b
return result
if __name__ == "__main__":
import sys
fib(int(sys.argv[1], 10))
import fibbo as fi
fi.fib(200)
```
#### This is often used either to provide a convenient user interface to a module, or for testing purposes (running the module as a script executes a test suite).
### The Module Search Path
When a module named spam is imported, the interpreter first searches for a built-in module with that name. If not found, it then searches for a file named spam.py in a list of directories given by the variable sys.path. sys.path is initialized from these locations:
* The directory containing the input script (or the current directory when no file is specified).
* PYTHONPATH (a list of directory names, with the same syntax as the shell variable PATH).
* The installation-dependent default. (A short sketch of extending the search path follows this list.)
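A small sketch of inspecting and extending the search path at runtime (the directory name below is just an example):
```
import os
import sys

print(sys.path[:3])                         # first few search locations

extra_dir = os.path.abspath("my_modules")   # hypothetical folder containing .py files
if extra_dir not in sys.path:
    sys.path.append(extra_dir)              # modules in that folder become importable
```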
Packages are a way of structuring Python’s module namespace by using “dotted module names”. For example, the module name A.B designates a submodule named B in a package named A. Just like the use of modules saves the authors of different modules from having to worry about each other’s global variable names, the use of dotted module names saves the authors of multi-module packages like NumPy or Pillow from having to worry about each other’s module names.
```
sound/ Top-level package
__init__.py Initialize the sound package
formats/ Subpackage for file format conversions
__init__.py
wavread.py
wavwrite.py
aiffread.py
aiffwrite.py
auread.py
auwrite.py
...
effects/ Subpackage for sound effects
__init__.py
echo.py
surround.py
reverse.py
...
filters/ Subpackage for filters
__init__.py
equalizer.py
vocoder.py
karaoke.py
...
```
The \_\_init\_\_.py files are required to make Python treat the directories as containing packages; this is done to prevent directories with a common name, such as string, from unintentionally hiding valid modules that occur later on the module search path. In the simplest case, \_\_init\_\_.py can just be an empty file
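For illustration, a few ways to import from the `sound` package laid out above (assuming it is on `sys.path`; `echofilter` is a hypothetical function inside `echo.py`):
```
import sound.effects.echo                    # must then be referenced by its full dotted name
from sound.effects import echo               # makes `echo` available directly
from sound.effects.echo import echofilter    # imports a single name from the submodule
```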
```
%matplotlib inline
```
What is `torch.nn` *really*?
============================
by Jeremy Howard, `fast.ai <https://www.fast.ai>`_. Thanks to Rachel Thomas and Francisco Ingham.
We recommend running this tutorial as a notebook, not a script. To download the notebook (.ipynb) file,
click `here <https://pytorch.org/tutorials/beginner/nn_tutorial.html#sphx-glr-download-beginner-nn-tutorial-py>`_ .
PyTorch provides the elegantly designed modules and classes `torch.nn <https://pytorch.org/docs/stable/nn.html>`_ ,
`torch.optim <https://pytorch.org/docs/stable/optim.html>`_ ,
`Dataset <https://pytorch.org/docs/stable/data.html?highlight=dataset#torch.utils.data.Dataset>`_ ,
and `DataLoader <https://pytorch.org/docs/stable/data.html?highlight=dataloader#torch.utils.data.DataLoader>`_
to help you create and train neural networks.
In order to fully utilize their power and customize
them for your problem, you need to really understand exactly what they're
doing. To develop this understanding, we will first train a basic neural net
on the MNIST data set without using any features from these models; we will
initially only use the most basic PyTorch tensor functionality. Then, we will
incrementally add one feature from ``torch.nn``, ``torch.optim``, ``Dataset``, or
``DataLoader`` at a time, showing exactly what each piece does, and how it
works to make the code either more concise, or more flexible.
**This tutorial assumes you already have PyTorch installed, and are familiar
with the basics of tensor operations.** (If you're familiar with Numpy array
operations, you'll find the PyTorch tensor operations used here nearly identical).
MNIST data setup
----------------
We will use the classic `MNIST <http://deeplearning.net/data/mnist/>`_ dataset,
which consists of black-and-white images of hand-drawn digits (between 0 and 9).
We will use `pathlib <https://docs.python.org/3/library/pathlib.html>`_
for dealing with paths (part of the Python 3 standard library), and will
download the dataset using
`requests <http://docs.python-requests.org/en/master/>`_. We will only
import modules when we use them, so you can see exactly what's being
used at each point.
```
from pathlib import Path
import requests
DATA_PATH = Path("data")
PATH = DATA_PATH / "mnist"
PATH.mkdir(parents=True, exist_ok=True)
URL = "http://deeplearning.net/data/mnist/"
FILENAME = "mnist.pkl.gz"
if not (PATH / FILENAME).exists():
content = requests.get(URL + FILENAME).content
(PATH / FILENAME).open("wb").write(content)
```
This dataset is in numpy array format, and has been stored using pickle,
a python-specific format for serializing data.
```
import pickle
import gzip
with gzip.open((PATH / FILENAME).as_posix(), "rb") as f:
((x_train, y_train), (x_valid, y_valid), _) = pickle.load(f, encoding="latin-1")
```
Each image is 28 x 28, and is being stored as a flattened row of length
784 (=28x28). Let's take a look at one; we need to reshape it to 2d
first.
```
from matplotlib import pyplot
import numpy as np
pyplot.imshow(x_train[0].reshape((28, 28)), cmap="gray")
print(x_train.shape)
```
PyTorch uses ``torch.tensor``, rather than numpy arrays, so we need to
convert our data.
```
import torch
x_train, y_train, x_valid, y_valid = map(
torch.tensor, (x_train, y_train, x_valid, y_valid)
)
n, c = x_train.shape
x_train, x_train.shape, y_train.min(), y_train.max()
print(x_train, y_train)
print(x_train.shape)
print(y_train.min(), y_train.max())
```
Neural net from scratch (no torch.nn)
---------------------------------------------
Let's first create a model using nothing but PyTorch tensor operations. We're assuming
you're already familiar with the basics of neural networks. (If you're not, you can
learn them at `course.fast.ai <https://course.fast.ai>`_).
PyTorch provides methods to create random or zero-filled tensors, which we will
use to create our weights and bias for a simple linear model. These are just regular
tensors, with one very special addition: we tell PyTorch that they require a
gradient. This causes PyTorch to record all of the operations done on the tensor,
so that it can calculate the gradient during back-propagation *automatically*!
For the weights, we set ``requires_grad`` **after** the initialization, since we
don't want that step included in the gradient. (Note that a trailing ``_`` in
PyTorch signifies that the operation is performed in-place.)
<div class="alert alert-info"><h4>Note</h4><p>We are initializing the weights here with
`Xavier initialisation <http://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf>`_
(by multiplying with 1/sqrt(n)).</p></div>
```
import math
weights = torch.randn(784, 10) / math.sqrt(784)
weights.requires_grad_()
bias = torch.zeros(10, requires_grad=True)
```
Thanks to PyTorch's ability to calculate gradients automatically, we can
use any standard Python function (or callable object) as a model! So
let's just write a plain matrix multiplication and broadcasted addition
to create a simple linear model. We also need an activation function, so
we'll write `log_softmax` and use it. Remember: although PyTorch
provides lots of pre-written loss functions, activation functions, and
so forth, you can easily write your own using plain python. PyTorch will
even create fast GPU or vectorized CPU code for your function
automatically.
```
def log_softmax(x):
return x - x.exp().sum(-1).log().unsqueeze(-1)
def model(xb):
return log_softmax(xb @ weights + bias)
```
In the above, the ``@`` stands for the dot product operation. We will call
our function on one batch of data (in this case, 64 images). This is
one *forward pass*. Note that our predictions won't be any better than
random at this stage, since we start with random weights.
```
bs = 64 # batch size
xb = x_train[0:bs] # a mini-batch from x
preds = model(xb) # predictions
preds[0], preds.shape
print(preds[0], preds.shape)
```
As you see, the ``preds`` tensor contains not only the tensor values, but also a
gradient function. We'll use this later to do backprop.
Let's implement negative log-likelihood to use as the loss function
(again, we can just use standard Python):
```
def nll(input, target):
return -input[range(target.shape[0]), target].mean()
loss_func = nll
```
Let's check our loss with our random model, so we can see if we improve
after a backprop pass later.
```
yb = y_train[0:bs]
print(loss_func(preds, yb))
```
Let's also implement a function to calculate the accuracy of our model.
For each prediction, if the index with the largest value matches the
target value, then the prediction was correct.
```
def accuracy(out, yb):
preds = torch.argmax(out, dim=1)
return (preds == yb).float().mean()
```
Let's check the accuracy of our random model, so we can see if our
accuracy improves as our loss improves.
```
print(accuracy(preds, yb))
```
We can now run a training loop. For each iteration, we will:
- select a mini-batch of data (of size ``bs``)
- use the model to make predictions
- calculate the loss
- ``loss.backward()`` updates the gradients of the model, in this case, ``weights``
and ``bias``.
We now use these gradients to update the weights and bias. We do this
within the ``torch.no_grad()`` context manager, because we do not want these
actions to be recorded for our next calculation of the gradient. You can read
more about how PyTorch's Autograd records operations
`here <https://pytorch.org/docs/stable/notes/autograd.html>`_.
We then set the
gradients to zero, so that we are ready for the next loop.
Otherwise, our gradients would record a running tally of all the operations
that had happened (i.e. ``loss.backward()`` *adds* the gradients to whatever is
already stored, rather than replacing them).
.. tip:: You can use the standard python debugger to step through PyTorch
code, allowing you to check the various variable values at each step.
Uncomment ``set_trace()`` below to try it out.
```
from IPython.core.debugger import set_trace
lr = 0.5 # learning rate
epochs = 2 # how many epochs to train for
for epoch in range(epochs):
for i in range((n - 1) // bs + 1):
# set_trace()
start_i = i * bs
end_i = start_i + bs
xb = x_train[start_i:end_i]
yb = y_train[start_i:end_i]
pred = model(xb)
loss = loss_func(pred, yb)
loss.backward()
with torch.no_grad():
weights -= weights.grad * lr
bias -= bias.grad * lr
weights.grad.zero_()
bias.grad.zero_()
```
That's it: we've created and trained a minimal neural network (in this case, a
logistic regression, since we have no hidden layers) entirely from scratch!
Let's check the loss and accuracy and compare those to what we got
earlier. We expect that the loss will have decreased and accuracy to
have increased, and they have.
```
print(loss_func(model(xb), yb), accuracy(model(xb), yb))
```
Using torch.nn.functional
------------------------------
We will now refactor our code, so that it does the same thing as before, only
we'll start taking advantage of PyTorch's ``nn`` classes to make it more concise
and flexible. At each step from here, we should be making our code one or more
of: shorter, more understandable, and/or more flexible.
The first and easiest step is to make our code shorter by replacing our
hand-written activation and loss functions with those from ``torch.nn.functional``
(which is generally imported into the namespace ``F`` by convention). This module
contains all the functions in the ``torch.nn`` library (whereas other parts of the
library contain classes). As well as a wide range of loss and activation
functions, you'll also find here some convenient functions for creating neural
nets, such as pooling functions. (There are also functions for doing convolutions,
linear layers, etc, but as we'll see, these are usually better handled using
other parts of the library.)
If you're using negative log likelihood loss and log softmax activation,
then Pytorch provides a single function ``F.cross_entropy`` that combines
the two. So we can even remove the activation function from our model.
```
import torch.nn.functional as F
loss_func = F.cross_entropy
def model(xb):
return xb @ weights + bias
```
Note that we no longer call ``log_softmax`` in the ``model`` function. Let's
confirm that our loss and accuracy are the same as before:
```
print(loss_func(model(xb), yb), accuracy(model(xb), yb))
```
Refactor using nn.Module
-----------------------------
Next up, we'll use ``nn.Module`` and ``nn.Parameter``, for a clearer and more
concise training loop. We subclass ``nn.Module`` (which itself is a class and
able to keep track of state). In this case, we want to create a class that
holds our weights, bias, and method for the forward step. ``nn.Module`` has a
number of attributes and methods (such as ``.parameters()`` and ``.zero_grad()``)
which we will be using.
<div class="alert alert-info"><h4>Note</h4><p>``nn.Module`` (uppercase M) is a PyTorch specific concept, and is a
class we'll be using a lot. ``nn.Module`` is not to be confused with the Python
concept of a (lowercase ``m``) `module <https://docs.python.org/3/tutorial/modules.html>`_,
which is a file of Python code that can be imported.</p></div>
```
from torch import nn
class Mnist_Logistic(nn.Module):
def __init__(self):
super().__init__()
self.weights = nn.Parameter(torch.randn(784, 10) / math.sqrt(784))
self.bias = nn.Parameter(torch.zeros(10))
def forward(self, xb):
return xb @ self.weights + self.bias
```
Since we're now using an object instead of just using a function, we
first have to instantiate our model:
```
model = Mnist_Logistic()
```
Now we can calculate the loss in the same way as before. Note that
``nn.Module`` objects are used as if they are functions (i.e they are
*callable*), but behind the scenes Pytorch will call our ``forward``
method automatically.
```
print(loss_func(model(xb), yb))
```
Previously for our training loop we had to update the values for each parameter
by name, and manually zero out the grads for each parameter separately, like this:
::
with torch.no_grad():
weights -= weights.grad * lr
bias -= bias.grad * lr
weights.grad.zero_()
bias.grad.zero_()
Now we can take advantage of model.parameters() and model.zero_grad() (which
are both defined by PyTorch for ``nn.Module``) to make those steps more concise
and less prone to the error of forgetting some of our parameters, particularly
if we had a more complicated model:
::
with torch.no_grad():
for p in model.parameters(): p -= p.grad * lr
model.zero_grad()
We'll wrap our little training loop in a ``fit`` function so we can run it
again later.
```
def fit():
for epoch in range(epochs):
for i in range((n - 1) // bs + 1):
start_i = i * bs
end_i = start_i + bs
xb = x_train[start_i:end_i]
yb = y_train[start_i:end_i]
pred = model(xb)
loss = loss_func(pred, yb)
loss.backward()
with torch.no_grad():
for p in model.parameters():
p -= p.grad * lr
model.zero_grad()
fit()
```
Let's double-check that our loss has gone down:
```
print(loss_func(model(xb), yb))
```
Refactor using nn.Linear
-------------------------
We continue to refactor our code. Instead of manually defining and
initializing ``self.weights`` and ``self.bias``, and calculating ``xb @
self.weights + self.bias``, we will instead use the Pytorch class
`nn.Linear <https://pytorch.org/docs/stable/nn.html#linear-layers>`_ for a
linear layer, which does all that for us. Pytorch has many types of
predefined layers that can greatly simplify our code, and often makes it
faster too.
```
class Mnist_Logistic(nn.Module):
def __init__(self):
super().__init__()
self.lin = nn.Linear(784, 10)
def forward(self, xb):
return self.lin(xb)
```
We instantiate our model and calculate the loss in the same way as before:
```
model = Mnist_Logistic()
print(loss_func(model(xb), yb))
```
We are still able to use our same ``fit`` method as before.
```
fit()
print(loss_func(model(xb), yb))
```
Refactor using optim
------------------------------
Pytorch also has a package with various optimization algorithms, ``torch.optim``.
We can use the ``step`` method from our optimizer to take a forward step, instead
of manually updating each parameter.
This will let us replace our previous manually coded optimization step:
::
with torch.no_grad():
for p in model.parameters(): p -= p.grad * lr
model.zero_grad()
and instead use just:
::
opt.step()
opt.zero_grad()
(``optim.zero_grad()`` resets the gradient to 0 and we need to call it before
computing the gradient for the next minibatch.)
```
from torch import optim
```
We'll define a little function to create our model and optimizer so we
can reuse it in the future.
```
def get_model():
model = Mnist_Logistic()
return model, optim.SGD(model.parameters(), lr=lr)
model, opt = get_model()
print(loss_func(model(xb), yb))
for epoch in range(epochs):
for i in range((n - 1) // bs + 1):
start_i = i * bs
end_i = start_i + bs
xb = x_train[start_i:end_i]
yb = y_train[start_i:end_i]
pred = model(xb)
loss = loss_func(pred, yb)
loss.backward()
opt.step()
opt.zero_grad()
print(loss_func(model(xb), yb))
```
Refactor using Dataset
------------------------------
PyTorch has an abstract Dataset class. A Dataset can be anything that has
a ``__len__`` function (called by Python's standard ``len`` function) and
a ``__getitem__`` function as a way of indexing into it.
`This tutorial <https://pytorch.org/tutorials/beginner/data_loading_tutorial.html>`_
walks through a nice example of creating a custom ``FacialLandmarkDataset`` class
as a subclass of ``Dataset``.
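For illustration (not part of the tutorial), a minimal custom ``Dataset`` only needs those two methods:
```
from torch.utils.data import Dataset

class PairDataset(Dataset):
    """Wraps two tensors and serves (x, y) pairs."""
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __len__(self):
        return len(self.x)

    def __getitem__(self, i):
        return self.x[i], self.y[i]
```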
PyTorch's `TensorDataset <https://pytorch.org/docs/stable/_modules/torch/utils/data/dataset.html#TensorDataset>`_
is a Dataset wrapping tensors. By defining a length and way of indexing,
this also gives us a way to iterate, index, and slice along the first
dimension of a tensor. This will make it easier to access both the
independent and dependent variables in the same line as we train.
```
from torch.utils.data import TensorDataset
```
Both ``x_train`` and ``y_train`` can be combined in a single ``TensorDataset``,
which will be easier to iterate over and slice.
```
train_ds = TensorDataset(x_train, y_train)
```
Previously, we had to iterate through minibatches of x and y values separately:
::
xb = x_train[start_i:end_i]
yb = y_train[start_i:end_i]
Now, we can do these two steps together:
::
xb,yb = train_ds[i*bs : i*bs+bs]
```
model, opt = get_model()
for epoch in range(epochs):
for i in range((n - 1) // bs + 1):
xb, yb = train_ds[i * bs: i * bs + bs]
pred = model(xb)
loss = loss_func(pred, yb)
loss.backward()
opt.step()
opt.zero_grad()
print(loss_func(model(xb), yb))
```
Refactor using DataLoader
------------------------------
Pytorch's ``DataLoader`` is responsible for managing batches. You can
create a ``DataLoader`` from any ``Dataset``. ``DataLoader`` makes it easier
to iterate over batches. Rather than having to use ``train_ds[i*bs : i*bs+bs]``,
the DataLoader gives us each minibatch automatically.
```
from torch.utils.data import DataLoader
train_ds = TensorDataset(x_train, y_train)
train_dl = DataLoader(train_ds, batch_size=bs)
```
Previously, our loop iterated over batches (xb, yb) like this:
::
for i in range((n-1)//bs + 1):
xb,yb = train_ds[i*bs : i*bs+bs]
pred = model(xb)
Now, our loop is much cleaner, as (xb, yb) are loaded automatically from the data loader:
::
for xb,yb in train_dl:
pred = model(xb)
```
model, opt = get_model()
for epoch in range(epochs):
for xb, yb in train_dl:
pred = model(xb)
loss = loss_func(pred, yb)
loss.backward()
opt.step()
opt.zero_grad()
print(loss_func(model(xb), yb))
```
Thanks to Pytorch's ``nn.Module``, ``nn.Parameter``, ``Dataset``, and ``DataLoader``,
our training loop is now dramatically smaller and easier to understand. Let's
now try to add the basic features necessary to create effective models in practice.
Add validation
-----------------------
In section 1, we were just trying to get a reasonable training loop set up for
use on our training data. In reality, you **always** should also have
a `validation set <https://www.fast.ai/2017/11/13/validation-sets/>`_, in order
to identify if you are overfitting.
Shuffling the training data is
`important <https://www.quora.com/Does-the-order-of-training-data-matter-when-training-neural-networks>`_
to prevent correlation between batches and overfitting. On the other hand, the
validation loss will be identical whether we shuffle the validation set or not.
Since shuffling takes extra time, it makes no sense to shuffle the validation data.
We'll use a batch size for the validation set that is twice as large as
that for the training set. This is because the validation set does not
need backpropagation and thus takes less memory (it doesn't need to
store the gradients). We take advantage of this to use a larger batch
size and compute the loss more quickly.
```
train_ds = TensorDataset(x_train, y_train)
train_dl = DataLoader(train_ds, batch_size=bs, shuffle=True)
valid_ds = TensorDataset(x_valid, y_valid)
valid_dl = DataLoader(valid_ds, batch_size=bs * 2)
```
We will calculate and print the validation loss at the end of each epoch.
(Note that we always call ``model.train()`` before training, and ``model.eval()``
before inference, because these are used by layers such as ``nn.BatchNorm2d``
and ``nn.Dropout`` to ensure appropriate behaviour for these different phases.)
```
model, opt = get_model()
for epoch in range(epochs):
model.train()
for xb, yb in train_dl:
pred = model(xb)
loss = loss_func(pred, yb)
loss.backward()
opt.step()
opt.zero_grad()
model.eval()
with torch.no_grad():
valid_loss = sum(loss_func(model(xb), yb) for xb, yb in valid_dl)
print(epoch, valid_loss / len(valid_dl))
```
Create fit() and get_data()
----------------------------------
We'll now do a little refactoring of our own. Since we go through a similar
process twice of calculating the loss for both the training set and the
validation set, let's make that into its own function, ``loss_batch``, which
computes the loss for one batch.
We pass an optimizer in for the training set, and use it to perform
backprop. For the validation set, we don't pass an optimizer, so the
method doesn't perform backprop.
```
def loss_batch(model, loss_func, xb, yb, opt=None):
loss = loss_func(model(xb), yb)
if opt is not None:
loss.backward()
opt.step()
opt.zero_grad()
return loss.item(), len(xb)
```
``fit`` runs the necessary operations to train our model and compute the
training and validation losses for each epoch.
```
import numpy as np
def fit(epochs, model, loss_func, opt, train_dl, valid_dl):
for epoch in range(epochs):
model.train()
for xb, yb in train_dl:
loss_batch(model, loss_func, xb, yb, opt)
model.eval()
with torch.no_grad():
losses, nums = zip(
*[loss_batch(model, loss_func, xb, yb) for xb, yb in valid_dl]
)
val_loss = np.sum(np.multiply(losses, nums)) / np.sum(nums)
print(epoch, val_loss)
```
``get_data`` returns dataloaders for the training and validation sets.
```
def get_data(train_ds, valid_ds, bs):
return (
DataLoader(train_ds, batch_size=bs, shuffle=True),
DataLoader(valid_ds, batch_size=bs * 2),
)
```
Now, our whole process of obtaining the data loaders and fitting the
model can be run in 3 lines of code:
```
train_dl, valid_dl = get_data(train_ds, valid_ds, bs)
model, opt = get_model()
fit(epochs, model, loss_func, opt, train_dl, valid_dl)
```
You can use these basic 3 lines of code to train a wide variety of models.
Let's see if we can use them to train a convolutional neural network (CNN)!
Switch to CNN
-------------
We are now going to build our neural network with three convolutional layers.
Because none of the functions in the previous section assume anything about
the model form, we'll be able to use them to train a CNN without any modification.
We will use Pytorch's predefined
`Conv2d <https://pytorch.org/docs/stable/nn.html#torch.nn.Conv2d>`_ class
as our convolutional layer. We define a CNN with 3 convolutional layers.
Each convolution is followed by a ReLU. At the end, we perform an
average pooling. (Note that ``view`` is PyTorch's version of numpy's
``reshape``)
```
class Mnist_CNN(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1)
self.conv2 = nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1)
self.conv3 = nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1)
def forward(self, xb):
xb = xb.view(-1, 1, 28, 28)
xb = F.relu(self.conv1(xb))
xb = F.relu(self.conv2(xb))
xb = F.relu(self.conv3(xb))
xb = F.avg_pool2d(xb, 4)
return xb.view(-1, xb.size(1))
lr = 0.1
```
`Momentum <https://cs231n.github.io/neural-networks-3/#sgd>`_ is a variation on
stochastic gradient descent that takes previous updates into account as well
and generally leads to faster training.
```
model = Mnist_CNN()
opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)
fit(epochs, model, loss_func, opt, train_dl, valid_dl)
```
nn.Sequential
------------------------
``torch.nn`` has another handy class we can use to simplify our code:
`Sequential <https://pytorch.org/docs/stable/nn.html#torch.nn.Sequential>`_ .
A ``Sequential`` object runs each of the modules contained within it, in a
sequential manner. This is a simpler way of writing our neural network.
To take advantage of this, we need to be able to easily define a
**custom layer** from a given function. For instance, PyTorch doesn't
have a `view` layer, and we need to create one for our network. ``Lambda``
will create a layer that we can then use when defining a network with
``Sequential``.
```
class Lambda(nn.Module):
def __init__(self, func):
super().__init__()
self.func = func
def forward(self, x):
return self.func(x)
def preprocess(x):
return x.view(-1, 1, 28, 28)
```
The model created with ``Sequential`` is simply:
```
model = nn.Sequential(
Lambda(preprocess),
nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),
nn.ReLU(),
nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1),
nn.ReLU(),
nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1),
nn.ReLU(),
nn.AvgPool2d(4),
Lambda(lambda x: x.view(x.size(0), -1)),
)
opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)
fit(epochs, model, loss_func, opt, train_dl, valid_dl)
```
Wrapping DataLoader
-----------------------------
Our CNN is fairly concise, but it only works with MNIST, because:
- It assumes the input is a 28\*28 long vector
- It assumes that the final CNN grid size is 4\*4 (since that's the average
pooling kernel size we used)
Let's get rid of these two assumptions, so our model works with any 2d
single channel image. First, we can remove the initial Lambda layer by
moving the data preprocessing into a generator:
```
def preprocess(x, y):
return x.view(-1, 1, 28, 28), y
class WrappedDataLoader:
def __init__(self, dl, func):
self.dl = dl
self.func = func
def __len__(self):
return len(self.dl)
def __iter__(self):
batches = iter(self.dl)
for b in batches:
yield (self.func(*b))
train_dl, valid_dl = get_data(train_ds, valid_ds, bs)
train_dl = WrappedDataLoader(train_dl, preprocess)
valid_dl = WrappedDataLoader(valid_dl, preprocess)
```
Next, we can replace ``nn.AvgPool2d`` with ``nn.AdaptiveAvgPool2d``, which
allows us to define the size of the *output* tensor we want, rather than
the *input* tensor we have. As a result, our model will work with any
size input.
```
model = nn.Sequential(
nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),
nn.ReLU(),
nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1),
nn.ReLU(),
nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1),
nn.ReLU(),
nn.AdaptiveAvgPool2d(1),
Lambda(lambda x: x.view(x.size(0), -1)),
)
opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)
```
Let's try it out:
```
fit(epochs, model, loss_func, opt, train_dl, valid_dl)
```
Using your GPU
---------------
If you're lucky enough to have access to a CUDA-capable GPU (you can
rent one for about $0.50/hour from most cloud providers), you can
use it to speed up your code. First, check that your GPU is working in
PyTorch:
```
print(torch.cuda.is_available())
```
And then create a device object for it:
```
dev = torch.device(
"cuda") if torch.cuda.is_available() else torch.device("cpu")
```
Let's update ``preprocess`` to move batches to the GPU:
```
def preprocess(x, y):
return x.view(-1, 1, 28, 28).to(dev), y.to(dev)
train_dl, valid_dl = get_data(train_ds, valid_ds, bs)
train_dl = WrappedDataLoader(train_dl, preprocess)
valid_dl = WrappedDataLoader(valid_dl, preprocess)
```
Finally, we can move our model to the GPU.
```
model.to(dev)
opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)
```
You should find it runs faster now:
```
fit(epochs, model, loss_func, opt, train_dl, valid_dl)
```
Closing thoughts
-----------------
We now have a general data pipeline and training loop which you can use for
training many types of models using PyTorch. To see how simple training a model
can now be, take a look at the `mnist_sample` sample notebook.
Of course, there are many things you'll want to add, such as data augmentation,
hyperparameter tuning, monitoring training, transfer learning, and so forth.
These features are available in the fastai library, which has been developed
using the same design approach shown in this tutorial, providing a natural
next step for practitioners looking to take their models further.
We promised at the start of this tutorial we'd explain through example each of
``torch.nn``, ``torch.optim``, ``Dataset``, and ``DataLoader``. So let's summarize
what we've seen; a minimal end-to-end sketch follows the list:
- **torch.nn**
 + ``Module``: creates a callable which behaves like a function, but can also
contain state (such as neural net layer weights). It knows what ``Parameter`` (s) it
contains and can zero all their gradients, loop through them for weight updates, etc.
 + ``Parameter``: a wrapper for a tensor that tells a ``Module`` that it has weights
that need updating during backprop. Only tensors with the ``requires_grad`` attribute set are updated.
 + ``functional``: a module (usually imported into the ``F`` namespace by convention)
which contains activation functions, loss functions, etc., as well as non-stateful
versions of layers such as convolutional and linear layers.
- ``torch.optim``: Contains optimizers such as ``SGD``, which update the weights
of ``Parameter`` during the backward step.
- ``Dataset``: An abstract interface of objects with a ``__len__`` and a ``__getitem__``,
including classes provided with PyTorch such as ``TensorDataset``.
- ``DataLoader``: Takes any ``Dataset`` and creates an iterator which returns batches of data.
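To tie these pieces together, here is a minimal end-to-end sketch (using random stand-in data rather than MNIST and a plain ``nn.Linear`` model, so nothing below is specific to this tutorial's dataset) showing how the four pieces cooperate:
```
# Minimal sketch tying torch.nn, torch.optim, Dataset and DataLoader together.
# Random stand-in data, not MNIST.
import torch
from torch import nn, optim
from torch.utils.data import TensorDataset, DataLoader

x = torch.randn(256, 20)                 # 256 samples, 20 features
y = torch.randint(0, 2, (256,))          # binary labels
ds = TensorDataset(x, y)                 # Dataset: provides __len__ and __getitem__
dl = DataLoader(ds, batch_size=32, shuffle=True)

model = nn.Linear(20, 2)                 # a Module holding Parameters
opt = optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
loss_func = nn.functional.cross_entropy  # from torch.nn.functional

for xb, yb in dl:                        # DataLoader yields (input, target) batches
    loss = loss_func(model(xb), yb)
    loss.backward()                      # gradients accumulate on the Parameters
    opt.step()                           # optimizer updates the weights
    opt.zero_grad()
```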
| github_jupyter |
# ART for TensorFlow v2 - Keras API
This notebook demonstrates how to apply ART with TensorFlow v2 using the Keras API. The code follows and extends the examples on www.tensorflow.org.
```
import warnings
warnings.filterwarnings('ignore')
import tensorflow as tf
tf.compat.v1.disable_eager_execution()
import numpy as np
from matplotlib import pyplot as plt
from art.estimators.classification import KerasClassifier
from art.attacks.evasion import FastGradientMethod, CarliniLInfMethod
if tf.__version__[0] != '2':
raise ImportError('This notebook requires TensorFlow v2.')
```
# Load MNIST dataset
```
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
x_test = x_test[0:100]
y_test = y_test[0:100]
```
# TensorFlow with Keras API
Create a model using the Keras API. Here we use the Keras Sequential model and add a sequence of layers. Afterwards, the model is compiled with an optimizer, a loss function, and metrics.
```
model = tf.keras.models.Sequential([
tf.keras.layers.InputLayer(input_shape=(28, 28)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy']);
```
Fit the model on training data.
```
model.fit(x_train, y_train, epochs=3);
```
Evaluate model accuracy on test data.
```
loss_test, accuracy_test = model.evaluate(x_test, y_test)
print('Accuracy on test data: {:4.2f}%'.format(accuracy_test * 100))
```
Create an ART Keras classifier for the TensorFlow Keras model.
```
classifier = KerasClassifier(model=model, clip_values=(0, 1))
```
## Fast Gradient Sign Method attack
Create an ART Fast Gradient Sign Method attack.
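As background (this is the standard FGSM formulation, stated for reference rather than taken from the ART source), the attack perturbs each input one step in the direction of the sign of the loss gradient:

$x_{adv} = x + \varepsilon \cdot \text{sign}\left(\nabla_{x} J(\theta, x, y)\right)$

where $\varepsilon$ corresponds to the `eps` argument passed below.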
```
attack_fgsm = FastGradientMethod(estimator=classifier, eps=0.3)
```
Generate adversarial test data.
```
x_test_adv = attack_fgsm.generate(x_test)
```
Evaluate accuracy on adversarial test data and calculate average perturbation.
```
loss_test, accuracy_test = model.evaluate(x_test_adv, y_test)
perturbation = np.mean(np.abs((x_test_adv - x_test)))
print('Accuracy on adversarial test data: {:4.2f}%'.format(accuracy_test * 100))
print('Average perturbation: {:4.2f}'.format(perturbation))
```
Visualise the first adversarial test sample.
```
plt.matshow(x_test_adv[0])
plt.show()
```
## Carlini&Wagner Infinity-norm attack
Create an ART Carlini&Wagner Infinity-norm attack.
```
attack_cw = CarliniLInfMethod(classifier=classifier, eps=0.3, max_iter=100, learning_rate=0.01)
```
Generate adversarial test data.
```
x_test_adv = attack_cw.generate(x_test)
```
Evaluate accuracy on adversarial test data and calculate average perturbation.
```
loss_test, accuracy_test = model.evaluate(x_test_adv, y_test)
perturbation = np.mean(np.abs((x_test_adv - x_test)))
print('Accuracy on adversarial test data: {:4.2f}%'.format(accuracy_test * 100))
print('Average perturbation: {:4.2f}'.format(perturbation))
```
Visualise the first adversarial test sample.
```
plt.matshow(x_test_adv[0, :, :])
plt.show()
```
| github_jupyter |
# Prophet
Time series forecasting using Prophet
Official documentation: https://facebook.github.io/prophet/docs/quick_start.html
Procedure for forecasting time series data based on an additive model where non-linear trends are fit with yearly, weekly, and daily seasonality, plus holiday effects. It is released by Facebook’s Core Data Science team.
Additive model is a model like:
$Data = seasonal\space effect + trend + residual$
and, multiplicative model:
$Data = seasonal\space effect * trend * residual$
The algorithm provides useful outputs that help visualize the fit, e.g. the overall trend, the weekly and yearly seasonal components, and their upper and lower uncertainty bounds.
### Data
The data on which the algorithms will be trained and tested comes from the Kaggle Hourly Energy Consumption database. It is collected by PJM Interconnection, a company coordinating the continuous buying, selling, and delivery of wholesale electricity through the Energy Market from suppliers to customers in the region of South Carolina, USA. All .csv files contain rows with a timestamp and a value. The name of the value column corresponds to the name of the contractor. The timestamp represents a single hour and the value represents the total energy consumed during that hour.
The data we will be using is hourly power consumption data from PJM. Energy consumption has some unique characteristics. It will be interesting to see how Prophet picks them up.
https://www.kaggle.com/robikscube/hourly-energy-consumption
Pulling the PJM East which has data from 2002-2018 for the entire east region.
```
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from fbprophet import Prophet
from sklearn.metrics import mean_squared_error, mean_absolute_error
plt.style.use('fivethirtyeight') # For plots
dataset_path = './data/hourly-energy-consumption/PJME_hourly.csv'
df = pd.read_csv(dataset_path, index_col=[0], parse_dates=[0])
print("Dataset path:",df.shape)
df.head(10)
# VISUALIZE DATA
# Color pallete for plotting
color_pal = ["#F8766D", "#D39200", "#93AA00",
"#00BA38", "#00C19F", "#00B9E3",
"#619CFF", "#DB72FB"]
df.plot(style='.', figsize=(20,10), color=color_pal[0], title='PJM East Dataset TS')
plt.show()
#Decompose the seasonal data
def create_features(df, label=None):
"""
Creates time series features from datetime index.
"""
df = df.copy()
df['date'] = df.index
df['hour'] = df['date'].dt.hour
df['dayofweek'] = df['date'].dt.dayofweek
df['quarter'] = df['date'].dt.quarter
df['month'] = df['date'].dt.month
df['year'] = df['date'].dt.year
df['dayofyear'] = df['date'].dt.dayofyear
df['dayofmonth'] = df['date'].dt.day
df['weekofyear'] = df['date'].dt.weekofyear
X = df[['hour','dayofweek','quarter','month','year',
'dayofyear','dayofmonth','weekofyear']]
if label:
y = df[label]
return X, y
return X
df.columns
X, y = create_features(df, label='PJME_MW')
features_and_target = pd.concat([X, y], axis=1)
print("Shape",features_and_target.shape)
features_and_target.head(10)
sns.pairplot(features_and_target.dropna(),
hue='hour',
x_vars=['hour','dayofweek',
'year','weekofyear'],
y_vars='PJME_MW',
height=5,
plot_kws={'alpha':0.15, 'linewidth':0}
)
plt.suptitle('Power Use MW by Hour, Day of Week, Year and Week of Year')
plt.show()
```
## Train and Test Split
We use a temporal split, keeping old data and use only new period to do the prediction
```
split_date = '01-Jan-2015'
pjme_train = df.loc[df.index <= split_date].copy()
pjme_test = df.loc[df.index > split_date].copy()
# Plot train and test so you can see where we have split
pjme_test \
.rename(columns={'PJME_MW': 'TEST SET'}) \
.join(pjme_train.rename(columns={'PJME_MW': 'TRAINING SET'}),
how='outer') \
.plot(figsize=(15,5), title='PJM East', style='.')
plt.show()
```
To use Prophet, you need to rename the feature and label columns to `ds` and `y` so the input is passed to the engine in the format it expects.
```
# Format data for the Prophet model using ds and y.
# Note: rename() returns a new DataFrame; pjme_train itself keeps its original
# columns, which is why the same rename is applied inline again when fitting below.
print(pjme_train.columns)
pjme_train.reset_index() \
    .rename(columns={'Datetime': 'ds',
                     'PJME_MW': 'y'}) \
    .head(5)
```
### Create and train the model
```
# Setup and train model and fit
model = Prophet()
model.fit(pjme_train.reset_index() \
.rename(columns={'Datetime':'ds',
'PJME_MW':'y'}))
# Predict on the test set with the trained model
pjme_test_fcst = model.predict(df=pjme_test.reset_index() \
.rename(columns={'Datetime':'ds'}))
pjme_test_fcst.head()
```
### Plot the results and forecast
```
# Plot the forecast
f, ax = plt.subplots(1)
f.set_figheight(5)
f.set_figwidth(15)
fig = model.plot(pjme_test_fcst,
ax=ax)
plt.show()
# Plot the components of the model
fig = model.plot_components(pjme_test_fcst)
```
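The error metrics imported at the top of this notebook were never used; a natural follow-up (a sketch, assuming the `pjme_test` and `pjme_test_fcst` frames built above, with `yhat` being Prophet's standard prediction column) is to score the forecast against the held-out test period:
```
# Score the forecast on the held-out test period (sketch)
mse = mean_squared_error(y_true=pjme_test['PJME_MW'], y_pred=pjme_test_fcst['yhat'])
mae = mean_absolute_error(y_true=pjme_test['PJME_MW'], y_pred=pjme_test_fcst['yhat'])
print('Test MSE: {:0.2f}'.format(mse))
print('Test MAE: {:0.2f}'.format(mae))
```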
| github_jupyter |
```
from PyEIS import *
```
## Frequency range
The first step needed to simulate an electrochemical impedance spectrum is to generate a frequency domain. To do so, use the built-in freq_gen() function, as follows:
```
f_range = freq_gen(f_start=10**10, f_stop=0.1, pts_decade=7)
print(f_range[0][:5])  # First 5 points in the frequency array
print()
print(f_range[1][:5])  # First 5 points in the angular frequency array
```
Note that all included functions are documented; to access these descriptions, place the cursor within the parentheses and press shift+tab. freq_gen() returns both the frequency array, which is logarithmically spaced at the given points/decade between f_start and f_stop, and the angular frequency array. This function is quite useful and will be used throughout this tutorial.
## The Equivalent Circuits
There exist a number of equivalent circuits that can be simulated and fitted; these are provided as functions and can be called at any time. To find them, type "cir_" and hit tab. All functions are outlined in the next cell and can also be viewed in the equivalent circuit overview:
```
cir_RC
cir_RQ
cir_RsRQ
cir_RsRQRQ
cir_Randles
cir_Randles_simplified
cir_C_RC_C
cir_Q_RQ_Q
cir_RCRCZD
cir_RsTLsQ
cir_RsRQTLsQ
cir_RsTLs
cir_RsRQTLs
cir_RsTLQ
cir_RsRQTLQ
cir_RsTL
cir_RsRQTL
cir_RsTL_1Dsolid
cir_RsRQTL_1Dsolid
```
## Simulation of -(RC)-
<img src='https://raw.githubusercontent.com/kbknudsen/PyEIS/master/pyEIS_images/RC_circuit.png' width="300" />
#### Input Parameters:
- w = Angular frequency [1/s]
- R = Resistance [Ohm]
- C = Capacitance [F]
- fs = summit frequency of RC circuit [Hz]
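For reference (standard circuit theory, not a formula taken from the PyEIS documentation), the impedance of a parallel RC element and its summit frequency are:

$Z_{RC}(\omega) = \dfrac{R}{1 + j\omega RC}, \qquad f_{s} = \dfrac{1}{2\pi RC}$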
```
RC_example = EIS_sim(frange=f_range[0], circuit=cir_RC(w=f_range[1], R=70, C=10**-6), legend='on')
```
## Simulation of -Rs-(RQ)-
<img src='https://raw.githubusercontent.com/kbknudsen/PyEIS/master/pyEIS_images/RsRQ_circuit.png' width="500" />
#### Input parameters:
- w = Angular frequency [1/s]
- Rs = Series resistance [Ohm]
- R = Resistance [Ohm]
- Q = Constant phase element [s^n/ohm]
- n = Constant phase element exponent [-]
- fs = summit frequency of RQ circuit [Hz]
```
RsRQ_example = EIS_sim(frange=f_range[0], circuit=cir_RsRQ(w=f_range[1], Rs=70, R=200, n=.8, Q=10**-5), legend='on')
RsRC_example = EIS_sim(frange=f_range[0], circuit=cir_RsRC(w=f_range[1], Rs=80, R=100, C=10**-5), legend='on')
```
## Simulation of -Rs-(RQ)-(RQ)-
<img src='https://raw.githubusercontent.com/kbknudsen/PyEIS/master/pyEIS_images/RsRQRQ_circuit.png' width="500" />
#### Input parameters:
- w = Angular frequency [1/s]
- Rs = Series Resistance [Ohm]
- R = Resistance [Ohm]
- Q = Constant phase element [s^n/ohm]
- n = Constant phase element exponent [-]
- fs = summit frequency of RQ circuit [Hz]
- R2 = Resistance [Ohm]
- Q2 = Constant phase element [s^n/ohm]
- n2 = Constant phase element exponent [-]
- fs2 = summit frequency of RQ circuit [Hz]
```
RsRQRQ_example = EIS_sim(frange=f_range[0], circuit=cir_RsRQRQ(w=f_range[1], Rs=200, R=150, n=.872, Q=10**-4, R2=50, n2=.853, Q2=10**-6), legend='on')
```
## Simulation of -Rs-(Q(RW))- (Randles-circuit)
This circuit is often used for an experimental setup with a macrodisk working electrode with an outer-sphere heterogeneous charge transfer. This classical Warburg element is controlled by semi-infinite linear diffusion, which is given by the geometry of the working electrode. Two Randles functions are available for simulations: cir_Randles_simplified() and cir_Randles(). The former contains the Warburg constant (sigma), which sums up all mass-transport constants (Dox/Dred, Cred/Cox, number of electrons (n_electron), Faraday's constant (F), T, and E0) into a single constant sigma, while the latter contains all of these constants explicitly. Only cir_Randles_simplified() is available for fitting, as either D$_{ox}$ or D$_{red}$ and C$_{red}$ or C$_{ox}$ are needed.
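For context (the textbook expression for a planar electrode under semi-infinite linear diffusion, not taken from the PyEIS documentation), the Warburg constant collects these quantities as:

$\sigma = \dfrac{RT}{n^{2}F^{2}A\sqrt{2}}\left(\dfrac{1}{C_{ox}\sqrt{D_{ox}}} + \dfrac{1}{C_{red}\sqrt{D_{red}}}\right)$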
<img src='https://raw.githubusercontent.com/kbknudsen/PyEIS/master/pyEIS_images/Randles_circuit.png' width="500" />
#### Input parameters:
- Rs = Series resistance [ohm]
- Rct = charge-transfer resistance [ohm]
- Q = Constant phase element used to model the double-layer capacitance [F]
- n = exponent of the CPE [-]
- sigma = Warburg Constant [ohm/s^1/2]
```
Randles = cir_Randles_simplified(w=f_range[1], Rs=100, R=1000, n=1, sigma=300, Q=10**-5)
Randles_example = EIS_sim(frange=f_range[0], circuit=Randles, legend='off')
Randles_example = EIS_sim(frange=f_range[0], circuit=cir_Randles_simplified(w=f_range[1], Rs=100, R=1000, n=1, sigma=300, Q='none', fs=10**3.3), legend='off')
```
In the following, the Randles circuit with the Warburg constant (sigma) defined is simulated where:
- D$_{red}$/D$_{ox}$ = 10$^{-6}$ cm$^2$/s
- C$_{red}$/C$_{ox}$ = 10 mM
- n_electron = 1
- T = 25 $^o$C
This function is a great tool to simulate expected impedance responses prior to starting experiments, as it allows for evaluation of concentrations, diffusion constants, number of electrons, and temperature in order to assess the feasibility of obtaining information on either kinetics, mass transport, or both.
```
Randles_example = EIS_sim(frange=f_range[0], circuit=cir_Randles(w=f_range[1], Rs=100, Rct=1000, Q=10**-7, n=1, T=298.15, D_ox=10**-9, D_red=10**-9, C_ox=10**-5, C_red=10**-5, n_electron=1, E=0, A=1), legend='off')
```
| github_jupyter |
# Scalable GP Classification in 1D (w/ KISS-GP)
This example shows how to use grid interpolation based variational classification with an `ApproximateGP` using a `GridInterpolationVariationalStrategy` module. This classification module is designed for when the inputs of the function you're modeling are one-dimensional.
The use of inducing points allows for scaling up to larger training sets by making the computational complexity linear in the amount of training data instead of cubic.
In this example, we're modeling a function whose labels cycle periodically every 1/8 (think of a square wave with period 1/4).
This notebook doesn't use CUDA; in general we recommend using a GPU if possible, and most of our notebooks use CUDA as well.
Kernel interpolation for scalable structured Gaussian processes (KISS-GP) was introduced in this paper:
http://proceedings.mlr.press/v37/wilson15.pdf
KISS-GP with SVI for classification was introduced in this paper:
https://papers.nips.cc/paper/6426-stochastic-variational-deep-kernel-learning.pdf
```
import math
import torch
import gpytorch
from matplotlib import pyplot as plt
from math import exp
%matplotlib inline
%load_ext autoreload
%autoreload 2
train_x = torch.linspace(0, 1, 26)
train_y = torch.sign(torch.cos(train_x * (2 * math.pi))).add(1).div(2)
from gpytorch.models import ApproximateGP
from gpytorch.variational import CholeskyVariationalDistribution
from gpytorch.variational import GridInterpolationVariationalStrategy
class GPClassificationModel(ApproximateGP):
def __init__(self, grid_size=128, grid_bounds=[(0, 1)]):
variational_distribution = CholeskyVariationalDistribution(grid_size)
variational_strategy = GridInterpolationVariationalStrategy(self, grid_size, grid_bounds, variational_distribution)
super(GPClassificationModel, self).__init__(variational_strategy)
self.mean_module = gpytorch.means.ConstantMean()
self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())
def forward(self,x):
mean_x = self.mean_module(x)
covar_x = self.covar_module(x)
latent_pred = gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
return latent_pred
model = GPClassificationModel()
likelihood = gpytorch.likelihoods.BernoulliLikelihood()
from gpytorch.mlls.variational_elbo import VariationalELBO
# Find optimal model hyperparameters
model.train()
likelihood.train()
# Use the adam optimizer
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
# "Loss" for GPs - the marginal log likelihood
# n_data refers to the number of training datapoints
mll = VariationalELBO(likelihood, model, num_data=train_y.numel())
def train():
num_iter = 100
for i in range(num_iter):
optimizer.zero_grad()
output = model(train_x)
# Calc loss and backprop gradients
loss = -mll(output, train_y)
loss.backward()
print('Iter %d/%d - Loss: %.3f' % (i + 1, num_iter, loss.item()))
optimizer.step()
# Get clock time
%time train()
# Set model and likelihood into eval mode
model.eval()
likelihood.eval()
# Initialize axes
f, ax = plt.subplots(1, 1, figsize=(4, 3))
with torch.no_grad():
test_x = torch.linspace(0, 1, 101)
predictions = likelihood(model(test_x))
ax.plot(train_x.numpy(), train_y.numpy(), 'k*')
pred_labels = predictions.mean.ge(0.5).float()
ax.plot(test_x.data.numpy(), pred_labels.numpy(), 'b')
ax.set_ylim([-1, 2])
ax.legend(['Observed Data', 'Mean', 'Confidence'])
```
| github_jupyter |
# Showing uncertainty
> Uncertainty occurs everywhere in data science, but it's frequently left out of visualizations where it should be included. Here, we review what a confidence interval is and how to visualize them for both single estimates and continuous functions. Additionally, we discuss the bootstrap resampling technique for assessing uncertainty and how to visualize it properly. This is the Summary of lecture "Improving Your Data Visualizations in Python", via datacamp.
- toc: true
- badges: true
- comments: true
- author: Chanseok Kang
- categories: [Python, Datacamp, Visualization]
- image: images/so2_compare.png
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
plt.rcParams['figure.figsize'] = (10, 5)
```
### Point estimate intervals
- When is uncertainty important?
- Estimates from sample
- Average of a subset
- Linear model coefficients
- Why is uncertainty important?
- Helps inform confidence in estimate
- Necessary for decision making
- Acknowledges limitations of data
### Basic confidence intervals
You are a data scientist for a fireworks manufacturer in Des Moines, Iowa. You need to make a case to the city that your company's large fireworks show has not caused any harm to the city's air. To do this, you look at the average levels for pollutants in the week after the fourth of July and how they compare to readings taken after your last show. By showing confidence intervals around the averages, you can make a case that the recent readings were well within the normal range.
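As a reminder of where the 1.96 in the code below comes from (this is the standard normal-approximation interval, not something specific to this dataset), the bounds of a confidence interval for a mean estimate are computed as:

$\bar{x} \pm z_{1-\alpha/2} \cdot \widehat{SE}(\bar{x}), \qquad z_{0.975} \approx 1.96$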
```
average_ests = pd.read_csv('./dataset/average_ests.csv', index_col=0)
average_ests
# Construct CI bounds for averages
average_ests['lower'] = average_ests['mean'] - 1.96 * average_ests['std_err']
average_ests['upper'] = average_ests['mean'] + 1.96 * average_ests['std_err']
# Setup a grid of plots, with non-shared x axes limits
g = sns.FacetGrid(average_ests, row='pollutant', sharex=False, aspect=2);
# Plot CI for average estimate
g.map(plt.hlines, 'y', 'lower', 'upper');
# Plot observed values for comparison and remove axes labels
g.map(plt.scatter, 'seen', 'y', color='orangered').set_ylabels('').set_xlabels('');
```
This simple visualization shows that all the observed values fall well within the confidence intervals for all the pollutants except for $O_3$.
### Annotating confidence intervals
Your data science work with pollution data is legendary, and you are now weighing job offers in both Cincinnati, Ohio and Indianapolis, Indiana. You want to see if the SO2 levels are significantly different in the two cities, and more specifically, which city has lower levels. To test this, you decide to look at the differences in the cities' SO2 values (Indianapolis' - Cincinnati's) over multiple years.
Instead of just displaying a p-value for a significant difference between the cities, you decide to look at the 95% confidence intervals (columns `lower` and `upper`) of the differences. This allows you to see the magnitude of the differences along with any trends over the years.
```
diffs_by_year = pd.read_csv('./dataset/diffs_by_year.csv', index_col=0)
diffs_by_year
# Set start and ends according to intervals
# Make intervals thicker
plt.hlines(y='year', xmin='lower', xmax='upper',
linewidth=5, color='steelblue', alpha=0.7,
data=diffs_by_year);
# Point estimates
plt.plot('mean', 'year', 'k|', data=diffs_by_year);
# Add a 'null' reference line at 0 and color orangered
plt.axvline(x=0, color='orangered', linestyle='--');
# Set descriptive axis labels and title
plt.xlabel('95% CI');
plt.title('Avg SO2 differences between Cincinnati and Indianapolis');
```
By looking at the confidence intervals you can see that the difference flipped from generally positive (more pollution in Cincinnati) in 2013 to negative (more pollution in Indianapolis) in 2014 and 2015. Given that every year's confidence interval contains the null value of zero, no p-value would be significant, and a plot that only showed significance would have entirely hidden this trend.
## Confidence bands
### Making a confidence band
Vandenberg Air Force Base is often used as a location to launch rockets into space. You have a theory that a recent increase in the pace of rocket launches could be harming the air quality in the surrounding region. To explore this, you plotted a 25-day rolling average line of the measurements of atmospheric $NO_2$. To help decide if any pattern observed is random-noise or not, you decide to add a 99% confidence band around your rolling mean. Adding a confidence band to a trend line can help shed light on the stability of the trend seen. This can either increase or decrease the confidence in the discovered trend.
```
vandenberg_NO2 = pd.read_csv('./dataset/vandenberg_NO2.csv', index_col=0)
vandenberg_NO2.head()
# Draw 99% interval bands for average NO2
vandenberg_NO2['lower'] = vandenberg_NO2['mean'] - 2.58 * vandenberg_NO2['std_err']
vandenberg_NO2['upper'] = vandenberg_NO2['mean'] + 2.58 * vandenberg_NO2['std_err']
# Plot mean estimate as a white semi-transparent line
plt.plot('day', 'mean', data=vandenberg_NO2, color='white', alpha=0.4);
# Fill between the upper and lower confidence band values
plt.fill_between(x='day', y1='lower', y2='upper', data=vandenberg_NO2);
```
This plot shows that the middle of the year's $NO_2$ values are not only lower than the beginning and end of the year but also are less noisy. If just the moving average line were plotted, then this potentially interesting observation would be completely missed. (Can you think of what may cause reduced variance at the lower values of the pollutant?)
### Separating a lot of bands
It is relatively simple to plot a bunch of trend lines on top of each other for rapid and precise comparisons. Unfortunately, if you need to add uncertainty bands around those lines, the plot becomes very difficult to read. Figuring out whether a line corresponds to the top of one class' band or the bottom of another's can be hard due to band overlap. Luckily in Seaborn, it's not difficult to break up the overlapping bands into separate faceted plots.
To see this, explore trends in SO2 levels for a few cities in the eastern half of the US. If you plot the trends and their confidence bands on a single plot - it's a mess. To fix, use Seaborn's `FacetGrid()` function to spread out the confidence intervals to multiple panes to ease your inspection.
```
eastern_SO2 = pd.read_csv('./dataset/eastern_SO2.csv', index_col=0)
eastern_SO2.head()
# setup a grid of plots with columns divided by location
g = sns.FacetGrid(eastern_SO2, col='city', col_wrap=2);
# Map interval plots to each cities data with coral colored ribbons
g.map(plt.fill_between, 'day', 'lower', 'upper', color='coral');
# Map overlaid mean plots with white line
g.map(plt.plot, 'day', 'mean', color='white');
```
By separating each band into its own plot you can investigate each city with ease. Here, you see that Des Moines and Houston on average have lower SO2 values for the entire year than the two cities in the Midwest. Cincinnati has a high and variable peak near the beginning of the year but is generally more stable and lower than Indianapolis.
### Cleaning up bands for overlaps
You are working for the city of Denver, Colorado and want to run an ad campaign about how much cleaner Denver's air is than Long Beach, California's air. To investigate this claim, you will compare the SO2 levels of both cities for the year 2014. Since you are solely interested in how the cities compare, you want to keep the bands on the same plot. To make the bands easier to compare, decrease the opacity of the confidence bands and set a clear legend.
```
SO2_compare = pd.read_csv('./dataset/SO2_compare.csv', index_col=0)
SO2_compare.head()
for city, color in [('Denver', '#66c2a5'), ('Long Beach', '#fc8d62')]:
# Filter data to desired city
city_data = SO2_compare[SO2_compare.city == city]
# Set city interval color to desired and lower opacity
plt.fill_between(x='day', y1='lower', y2='upper', data=city_data, color=color, alpha=0.4);
# Draw a faint mean line for reference and give a label for legend
plt.plot('day', 'mean', data=city_data, label=city, color=color, alpha=0.25);
plt.legend();
```
From these two curves you can see that during the first half of the year Long Beach generally has a higher average SO2 value than Denver, in the middle of the year they are very close, and at the end of the year Denver seems to have higher averages. However, by showing the confidence intervals, you can see that almost none of the year shows a statistically meaningful difference in average values between the two cities.
## Beyond 95%
### 90, 95, and 99% intervals
You are a data scientist for an outdoor adventure company in Fairbanks, Alaska. Recently, customers have been having issues with SO2 pollution, leading to costly cancellations. The company has sensors for CO, NO2, and O3 but not SO2 levels.
You've built a model that predicts SO2 values based on the values of pollutants with sensors (loaded as `pollution_model`, a `statsmodels` object). You want to investigate which pollutant's value has the largest effect on your model's SO2 prediction. This will help you know which pollutant's values to pay most attention to when planning outdoor tours. To maximize the amount of information in your report, show multiple levels of uncertainty for the model estimates.
```
from statsmodels.formula.api import ols
pollution = pd.read_csv('./dataset/pollution_wide.csv')
pollution = pollution.query("city == 'Fairbanks' & year == 2014 & month == 11")
pollution_model = ols(formula='SO2 ~ CO + NO2 + O3 + day', data=pollution)
res = pollution_model.fit()
# Add interval percent widths
alphas = [ 0.01, 0.05, 0.1]
widths = [ '99% CI', '95%', '90%']
colors = ['#fee08b','#fc8d59','#d53e4f']
for alpha, color, width in zip(alphas, colors, widths):
# Grab confidence interval
conf_ints = res.conf_int(alpha)
# Pass current interval color and legend label to plot
plt.hlines(y = conf_ints.index, xmin = conf_ints[0], xmax = conf_ints[1],
colors = color, label = width, linewidth = 10)
# Draw point estimates
plt.plot(res.params, res.params.index, 'wo', label = 'Point Estimate')
plt.legend(loc = 'upper right')
```
### 90 and 99% bands
You are looking at a 40-day rolling average of the $NO_2$ pollution levels for the city of Cincinnati in 2013. To provide as detailed a picture of the uncertainty in the trend as possible, you want to look at both the 90 and 99% intervals around this rolling estimate.
To do this, set up your two interval sizes and an orange ordinal color palette. Additionally, to enable precise readings of the bands, make them semi-transparent, so the Seaborn background grids show through.
```
cinci_13_no2 = pd.read_csv('./dataset/cinci_13_no2.csv', index_col=0);
cinci_13_no2.head()
int_widths = ['90%', '99%']
z_scores = [1.67, 2.58]
colors = ['#fc8d59', '#fee08b']
for percent, Z, color in zip(int_widths, z_scores, colors):
# Pass lower and upper confidence bounds and lower opacity
plt.fill_between(
x = cinci_13_no2.day, alpha = 0.4, color = color,
y1 = cinci_13_no2['mean'] - Z * cinci_13_no2['std_err'],
y2 = cinci_13_no2['mean'] + Z * cinci_13_no2['std_err'],
label = percent);
plt.legend();
```
This plot shows us that throughout 2013, the average NO2 values in Cincinnati followed a cyclical pattern with the seasons. However, the uncertainty bands show that for most of the year you can't be sure this pattern is not noise at both a 90 and 99% confidence level.
### Using band thickness instead of coloring
You are a researcher investigating the relationship between pollutant levels at Vandenberg Air Force Base and the elevation a rocket reaches before visual contact is lost. You've built a model to predict this relationship, and since you are working independently, you don't have the money to pay for color figures in your journal article. You need to make your model results plot work in black and white. To do this, you will plot the 90, 95, and 99% intervals of the effect of each pollutant as successively smaller bars.
```
rocket_model = pd.read_csv('./dataset/rocket_model.csv', index_col=0)
rocket_model
# Decrase interval thickness as interval widens
sizes = [ 15, 10, 5]
int_widths = ['90% CI', '95%', '99%']
z_scores = [ 1.67, 1.96, 2.58]
for percent, Z, size in zip(int_widths, z_scores, sizes):
plt.hlines(y = rocket_model.pollutant,
xmin = rocket_model['est'] - Z * rocket_model['std_err'],
xmax = rocket_model['est'] + Z * rocket_model['std_err'],
label = percent,
# Resize lines and color them gray
linewidth = size,
color = 'gray');
# Add point estimate
plt.plot('est', 'pollutant', 'wo', data = rocket_model, label = 'Point Estimate');
plt.legend(loc = 'center left', bbox_to_anchor = (1, 0.5));
```
While less elegant than using color to differentiate interval sizes, this plot still clearly allows the reader to access the effect each pollutant has on rocket visibility. You can see that of all the pollutants, O3 has the largest effect and also the tightest confidence bounds
## Visualizing the bootstrap
### The bootstrap histogram
You are considering a vacation to Cincinnati in May, but you have a severe sensitivity to NO2. You pull a few years of pollution data from Cincinnati in May and look at a bootstrap estimate of the average $NO_2$ levels. You only have one estimate to look at, so the best way to visualize the results of your bootstrap estimates is with a histogram.
While you like the intuition of the bootstrap histogram by itself, your partner who will be going on the vacation with you, likes seeing percent intervals. To accommodate them, you decide to highlight the 95% interval by shading the region.
```
# Perform bootstrapped mean on a vector
def bootstrap(data, n_boots):
return [np.mean(np.random.choice(data,len(data))) for _ in range(n_boots) ]
pollution = pd.read_csv('./dataset/pollution_wide.csv')
cinci_may_NO2 = pollution.query("city == 'Cincinnati' & month == 5").NO2
# Generate bootstrap samples
boot_means = bootstrap(cinci_may_NO2, 1000)
# Get lower and upper 95% interval bounds
lower, upper = np.percentile(boot_means, [2.5, 97.5])
# Plot shaded area for interval
plt.axvspan(lower, upper, color = 'gray', alpha = 0.2);
# Draw histogram of bootstrap samples
sns.distplot(boot_means, bins = 100, kde = False);
```
Your bootstrap histogram looks stable and uniform. You're now confident that the average NO2 levels in Cincinnati during your vacation should be in the range of 16 to 23.
### Bootstrapped regressions
While working for the Long Beach parks and recreation department investigating the relationship between $NO_2$ and $SO_2$ you noticed a cluster of potential outliers that you suspect might be throwing off the correlations.
Investigate the uncertainty of your correlations through bootstrap resampling to see how stable your fits are. For convenience, the bootstrap sampling is complete and is provided as `no2_so2_boot` along with `no2_so2` for the non-resampled data.
```
no2_so2 = pd.read_csv('./dataset/no2_so2.csv', index_col=0)
no2_so2_boot = pd.read_csv('./dataset/no2_so2_boot.csv', index_col=0)
sns.lmplot('NO2', 'SO2', data = no2_so2_boot,
# Tell seaborn to a regression line for each sample
hue = 'sample',
# Make lines blue and transparent
line_kws = {'color': 'steelblue', 'alpha': 0.2},
# Disable built-in confidence intervals
ci = None, legend = False, scatter = False);
# Draw scatter of all points
plt.scatter('NO2', 'SO2', data = no2_so2);
```
The outliers appear to drag down the regression lines as evidenced by the cluster of lines with more severe slopes than average. In a single plot, you have not only gotten a good idea of the variability of your correlation estimate but also the potential effects of outliers.
### Lots of bootstraps with beeswarms
As a current resident of Cincinnati, you're curious to see how the average NO2 values compare to Des Moines, Indianapolis, and Houston: a few other cities you've lived in.
To look at this, you decide to use bootstrap estimation to look at the mean NO2 values for each city. Because the comparisons are of primary interest, you will use a swarm plot to compare the estimates.
```
pollution_may = pollution.query("month == 5")
pollution_may
# Initialize a holder DataFrame for bootstrap results
city_boots = pd.DataFrame()
for city in ['Cincinnati', 'Des Moines', 'Indianapolis', 'Houston']:
# Filter to city
city_NO2 = pollution_may[pollution_may.city == city].NO2
# Bootstrap city data & put in DataFrame
cur_boot = pd.DataFrame({'NO2_avg': bootstrap(city_NO2, 100), 'city': city})
# Append to other city's bootstraps
city_boots = pd.concat([city_boots,cur_boot])
# Beeswarm plot of averages with citys on y axis
sns.swarmplot(y = "city", x = "NO2_avg", data = city_boots, color = 'coral');
```
The beeswarm plots show that Indianapolis and Houston both have the highest average NO2 values, with Cincinnati falling roughly in the middle. Interestingly, you can rather confidently say that Des Moines has the lowest as nearly all its sample estimates fall below those of the other cities.
| github_jupyter |
<h1><center>Introductory Data Analysis Workflow</center></h1>

https://xkcd.com/2054
# An example machine learning notebook
* Original Notebook by [Randal S. Olson](http://www.randalolson.com/)
* Supported by [Jason H. Moore](http://www.epistasis.org/)
* [University of Pennsylvania Institute for Bioinformatics](http://upibi.org/)
* Adapted for LU Py-Sem 2018 by [Valdis Saulespurens](valdis.s.coding@gmail.com)
**You can also [execute the code in this notebook on Binder](https://mybinder.org/v2/gh/ValRCS/RigaComm_DataAnalysis/master) - no local installation required.**
```
# text 17.04.2019
import datetime
print(datetime.datetime.now())
print('hello')
```
## Table of contents
1. [Introduction](#Introduction)
2. [License](#License)
3. [Required libraries](#Required-libraries)
4. [The problem domain](#The-problem-domain)
5. [Step 1: Answering the question](#Step-1:-Answering-the-question)
6. [Step 2: Checking the data](#Step-2:-Checking-the-data)
7. [Step 3: Tidying the data](#Step-3:-Tidying-the-data)
- [Bonus: Testing our data](#Bonus:-Testing-our-data)
8. [Step 4: Exploratory analysis](#Step-4:-Exploratory-analysis)
9. [Step 5: Classification](#Step-5:-Classification)
- [Cross-validation](#Cross-validation)
- [Parameter tuning](#Parameter-tuning)
10. [Step 6: Reproducibility](#Step-6:-Reproducibility)
11. [Conclusions](#Conclusions)
12. [Further reading](#Further-reading)
13. [Acknowledgements](#Acknowledgements)
## Introduction
[[ go back to the top ]](#Table-of-contents)
In the time it took you to read this sentence, terabytes of data have been collectively generated across the world — more data than any of us could ever hope to process, much less make sense of, on the machines we're using to read this notebook.
In response to this massive influx of data, the field of Data Science has come to the forefront in the past decade. Cobbled together by people from a diverse array of fields — statistics, physics, computer science, design, and many more — the field of Data Science represents our collective desire to understand and harness the abundance of data around us to build a better world.
In this notebook, I'm going to go over a basic Python data analysis pipeline from start to finish to show you what a typical data science workflow looks like.
In addition to providing code examples, I also hope to imbue in you a sense of good practices so you can be a more effective — and more collaborative — data scientist.
I will be following along with the data analysis checklist from [The Elements of Data Analytic Style](https://leanpub.com/datastyle), which I strongly recommend reading as a free and quick guidebook to performing outstanding data analysis.
**This notebook is intended to be a public resource. As such, if you see any glaring inaccuracies or if a critical topic is missing, please feel free to point it out or (preferably) submit a pull request to improve the notebook.**
## License
[[ go back to the top ]](#Table-of-contents)
Please see the [repository README file](https://github.com/rhiever/Data-Analysis-and-Machine-Learning-Projects#license) for the licenses and usage terms for the instructional material and code in this notebook. In general, I have licensed this material so that it is as widely usable and shareable as possible.
## Required libraries
[[ go back to the top ]](#Table-of-contents)
If you don't have Python on your computer, you can use the [Anaconda Python distribution](http://continuum.io/downloads) to install most of the Python packages you need. Anaconda provides a simple double-click installer for your convenience.
This notebook uses several Python packages that come standard with the Anaconda Python distribution. The primary libraries that we'll be using are:
* **NumPy**: Provides a fast numerical array structure and helper functions.
* **pandas**: Provides a DataFrame structure to store data in memory and work with it easily and efficiently.
* **scikit-learn**: The essential Machine Learning package in Python.
* **matplotlib**: Basic plotting library in Python; most other Python plotting libraries are built on top of it.
* **Seaborn**: Advanced statistical plotting library.
* **watermark**: A Jupyter Notebook extension for printing timestamps, version numbers, and hardware information.
**Note:** I will not be providing support for people trying to run this notebook outside of the Anaconda Python distribution.
## The problem domain
[[ go back to the top ]](#Table-of-contents)
For the purposes of this exercise, let's pretend we're working for a startup that just got funded to create a smartphone app that automatically identifies species of flowers from pictures taken on the smartphone. We're working with a moderately-sized team of data scientists and will be building part of the data analysis pipeline for this app.
We've been tasked by our company's Head of Data Science to create a demo machine learning model that takes four measurements from the flowers (sepal length, sepal width, petal length, and petal width) and identifies the species based on those measurements alone.
<img src="img/petal_sepal.jpg" />
We've been given a [data set](https://github.com/ValRCS/RCS_Data_Analysis_Python/blob/master/data/iris-data.csv) from our field researchers to develop the demo, which only includes measurements for three types of *Iris* flowers:
### *Iris setosa*
<img src="img/iris_setosa.jpg" />
### *Iris versicolor*
<img src="img/iris_versicolor.jpg" />
### *Iris virginica*
<img src="img/iris_virginica.jpg" />
The four measurements we're using currently come from hand-measurements by the field researchers, but they will be automatically measured by an image processing model in the future.
**Note:** The data set we're working with is the famous [*Iris* data set](https://archive.ics.uci.edu/ml/datasets/Iris) — included with this notebook — which I have modified slightly for demonstration purposes.
## Step 1: Answering the question
[[ go back to the top ]](#Table-of-contents)
The first step to any data analysis project is to define the question or problem we're looking to solve, and to define a measure (or set of measures) for our success at solving that task. The data analysis checklist has us answer a handful of questions to accomplish that, so let's work through those questions.
>Did you specify the type of data analytic question (e.g. exploration, association causality) before touching the data?
We're trying to classify the species (i.e., class) of the flower based on four measurements that we're provided: sepal length, sepal width, petal length, and petal width.
Petal and sepal are both types of flower leaves (both are called "ziedlapiņa" in Latvian).

>Did you define the metric for success before beginning?
Let's do that now. Since we're performing classification, we can use [accuracy](https://en.wikipedia.org/wiki/Accuracy_and_precision) — the fraction of correctly classified flowers — to quantify how well our model is performing. Our company's Head of Data has told us that we should achieve at least 90% accuracy.
>Did you understand the context for the question and the scientific or business application?
We're building part of a data analysis pipeline for a smartphone app that will be able to classify the species of flowers from pictures taken on the smartphone. In the future, this pipeline will be connected to another pipeline that automatically measures from pictures the traits we're using to perform this classification.
>Did you record the experimental design?
Our company's Head of Data has told us that the field researchers are hand-measuring 50 randomly-sampled flowers of each species using a standardized methodology. The field researchers take pictures of each flower they sample from pre-defined angles so the measurements and species can be confirmed by the other field researchers at a later point. At the end of each day, the data is compiled and stored on a private company GitHub repository.
>Did you consider whether the question could be answered with the available data?
The data set we currently have is only for three types of *Iris* flowers. The model built off of this data set will only work for those *Iris* flowers, so we will need more data to create a general flower classifier.
<hr />
Notice that we've spent a fair amount of time working on the problem without writing a line of code or even looking at the data.
**Thinking about and documenting the problem we're working on is an important step to performing effective data analysis that often goes overlooked.** Don't skip it.
## Step 2: Checking the data
[[ go back to the top ]](#Table-of-contents)
The next step is to look at the data we're working with. Even curated data sets from the government can have errors in them, and it's vital that we spot these errors before investing too much time in our analysis.
Generally, we're looking to answer the following questions:
* Is there anything wrong with the data?
* Are there any quirks with the data?
* Do I need to fix or remove any of the data?
Let's start by reading the data into a pandas DataFrame.
```
import pandas as pd
iris_data = pd.read_csv('../data/iris-data.csv')
# Resources for loading data from nonlocal sources
# Pandas Can generally handle most common formats
# https://pandas.pydata.org/pandas-docs/stable/io.html
# SQL https://stackoverflow.com/questions/39149243/how-do-i-connect-to-a-sql-server-database-with-python
# NoSQL MongoDB https://realpython.com/introduction-to-mongodb-and-python/
# Apache Hadoop: https://dzone.com/articles/how-to-get-hadoop-data-into-a-python-model
# Apache Spark: https://www.datacamp.com/community/tutorials/apache-spark-python
# Data Scraping / Crawling libraries : https://elitedatascience.com/python-web-scraping-libraries Big Topic in itself
# Most data resources have some form of Python API / Library
iris_data.head()
```
We're in luck! The data seems to be in a usable format.
The first row in the data file defines the column headers, and the headers are descriptive enough for us to understand what each column represents. The headers even give us the units that the measurements were recorded in, just in case we needed to know at a later point in the project.
Each row following the first row represents an entry for a flower: four measurements and one class, which tells us the species of the flower.
**One of the first things we should look for is missing data.** Thankfully, the field researchers already told us that they put a 'NA' into the spreadsheet when they were missing a measurement.
We can tell pandas to automatically identify missing values if it knows our missing value marker.
```
iris_data.shape
iris_data.info()
iris_data.describe()
iris_data = pd.read_csv('../data/iris-data.csv', na_values=['NA', 'N/A'])
```
Voilà! Now pandas knows to treat rows with 'NA' as missing values.
Next, it's always a good idea to look at the distribution of our data — especially the outliers.
Let's start by printing out some summary statistics about the data set.
```
iris_data.describe()
```
We can see several useful values from this table. For example, we see that five `petal_width_cm` entries are missing.
If you ask me, though, tables like this are rarely useful unless we know that our data should fall in a particular range. It's usually better to visualize the data in some way. Visualization makes outliers and errors immediately stand out, whereas they might go unnoticed in a large table of numbers.
Since we know we're going to be plotting in this section, let's set up the notebook so we can plot inside of it.
```
# This line tells the notebook to show plots inside of the notebook
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sb
```
Next, let's create a **scatterplot matrix**. Scatterplot matrices plot the distribution of each column along the diagonal, and then plot a scatterplot matrix for the combination of each variable. They make for an efficient tool to look for errors in our data.
We can even have the plotting package color each entry by its class to look for trends within the classes.
```
# We have to temporarily drop the rows with 'NA' values
# because the Seaborn plotting function does not know
# what to do with them
sb.pairplot(iris_data.dropna(), hue='class')
```
From the scatterplot matrix, we can already see some issues with the data set:
1. There are five classes when there should only be three, meaning there were some coding errors.
2. There are some clear outliers in the measurements that may be erroneous: one `sepal_width_cm` entry for `Iris-setosa` falls well outside its normal range, and several `sepal_length_cm` entries for `Iris-versicolor` are near-zero for some reason.
3. We had to drop those rows with missing values.
In all of these cases, we need to figure out what to do with the erroneous data. Which takes us to the next step...
## Step 3: Tidying the data
### GIGO principle
[[ go back to the top ]](#Table-of-contents)
Now that we've identified several errors in the data set, we need to fix them before we proceed with the analysis.
Let's walk through the issues one-by-one.
>There are five classes when there should only be three, meaning there were some coding errors.
After talking with the field researchers, it sounds like one of them forgot to add `Iris-` before their `Iris-versicolor` entries. The other extraneous class, `Iris-setossa`, was simply a typo that they forgot to fix.
Let's use the DataFrame to fix these errors.
```
iris_data['class'].unique()
# Copy and Replace
iris_data.loc[iris_data['class'] == 'versicolor', 'class'] = 'Iris-versicolor'
iris_data['class'].unique()
# So we take a row where a specific column('class' here) matches our bad values
# and change them to good values
iris_data.loc[iris_data['class'] == 'Iris-setossa', 'class'] = 'Iris-setosa'
iris_data['class'].unique()
iris_data.tail()
iris_data[98:103]
```
Much better! Now we only have three class types. Imagine how embarrassing it would've been to create a model that used the wrong classes.
>There are some clear outliers in the measurements that may be erroneous: one `sepal_width_cm` entry for `Iris-setosa` falls well outside its normal range, and several `sepal_length_cm` entries for `Iris-versicolor` are near-zero for some reason.
Fixing outliers can be tricky business. It's rarely clear whether the outlier was caused by measurement error, recording the data in improper units, or if the outlier is a real anomaly. For that reason, we should be judicious when working with outliers: if we decide to exclude any data, we need to make sure to document what data we excluded and provide solid reasoning for excluding that data. (i.e., "This data didn't fit my hypothesis" will not stand peer review.)
In the case of the one anomalous entry for `Iris-setosa`, let's say our field researchers know that it's impossible for `Iris-setosa` to have a sepal width below 2.5 cm. Clearly this entry was made in error, and we're better off just scrapping the entry than spending hours finding out what happened.
```
smallpetals = iris_data.loc[(iris_data['sepal_width_cm'] < 2.5) & (iris_data['class'] == 'Iris-setosa')]
smallpetals
iris_data.loc[iris_data['class'] == 'Iris-setosa', 'sepal_width_cm'].hist()
# This line drops any 'Iris-setosa' rows with a separal width less than 2.5 cm
# Let's go over this command in class
iris_data = iris_data.loc[(iris_data['class'] != 'Iris-setosa') | (iris_data['sepal_width_cm'] >= 2.5)]
iris_data.loc[iris_data['class'] == 'Iris-setosa', 'sepal_width_cm'].hist()
```
Excellent! Now all of our `Iris-setosa` rows have a sepal width greater than 2.5.
The next data issue to address is the several near-zero sepal lengths for the `Iris-versicolor` rows. Let's take a look at those rows.
```
iris_data.loc[(iris_data['class'] == 'Iris-versicolor') &
(iris_data['sepal_length_cm'] < 1.0)]
```
How about that? All of these near-zero `sepal_length_cm` entries seem to be off by two orders of magnitude, as if they had been recorded in meters instead of centimeters.
After some brief correspondence with the field researchers, we find that one of them forgot to convert those measurements to centimeters. Let's do that for them.
```
iris_data.loc[iris_data['class'] == 'Iris-versicolor', 'sepal_length_cm'].hist()
iris_data['sepal_length_cm'].hist()
# Here we fix the wrong units
iris_data.loc[(iris_data['class'] == 'Iris-versicolor') &
(iris_data['sepal_length_cm'] < 1.0),
'sepal_length_cm'] *= 100.0
iris_data.loc[iris_data['class'] == 'Iris-versicolor', 'sepal_length_cm'].hist()
;
iris_data['sepal_length_cm'].hist()
```
Phew! Good thing we fixed those outliers. They could've really thrown our analysis off.
>We had to drop those rows with missing values.
Let's take a look at the rows with missing values:
```
iris_data.loc[(iris_data['sepal_length_cm'].isnull()) |
(iris_data['sepal_width_cm'].isnull()) |
(iris_data['petal_length_cm'].isnull()) |
(iris_data['petal_width_cm'].isnull())]
```
It's not ideal that we had to drop those rows, especially considering they're all `Iris-setosa` entries. Since it seems like the missing data is systematic — all of the missing values are in the same column for the same *Iris* type — this error could potentially bias our analysis.
One way to deal with missing data is **mean imputation**: If we know that the values for a measurement fall in a certain range, we can fill in empty values with the average of that measurement.
Let's see if we can do that here.
```
iris_data.loc[iris_data['class'] == 'Iris-setosa', 'petal_width_cm'].hist()
```
Most of the petal widths for `Iris-setosa` fall within the 0.2-0.3 range, so let's fill in these entries with the average measured petal width.
```
iris_data.loc[iris_data['class'] == 'Iris-setosa', 'petal_width_cm'].mean()
average_petal_width = iris_data.loc[iris_data['class'] == 'Iris-setosa', 'petal_width_cm'].mean()
print(average_petal_width)
iris_data.loc[(iris_data['class'] == 'Iris-setosa') &
(iris_data['petal_width_cm'].isnull()),
'petal_width_cm'] = average_petal_width
iris_data.loc[(iris_data['class'] == 'Iris-setosa') &
(iris_data['petal_width_cm'] == average_petal_width)]
iris_data.loc[(iris_data['sepal_length_cm'].isnull()) |
(iris_data['sepal_width_cm'].isnull()) |
(iris_data['petal_length_cm'].isnull()) |
(iris_data['petal_width_cm'].isnull())]
```
Great! Now we've recovered those rows and no longer have missing data in our data set.
**Note:** If you don't feel comfortable imputing your data, you can drop all rows with missing data with the `dropna()` call:
iris_data.dropna(inplace=True)
After all this hard work, we don't want to repeat this process every time we work with the data set. Let's save the tidied data file *as a separate file* and work directly with that data file from now on.
```
iris_data.to_json('../data/iris-clean.json')
iris_data.to_csv('../data/iris-data-clean.csv', index=False)
cleanedframe = iris_data.dropna()
iris_data_clean = pd.read_csv('../data/iris-data-clean.csv')
```
Now, let's take a look at the scatterplot matrix now that we've tidied the data.
```
myplot = sb.pairplot(iris_data_clean, hue='class')
myplot.savefig('irises.png')
import scipy.stats as stats
iris_data = pd.read_csv('../data/iris-data.csv')
iris_data.columns.unique()
stats.entropy(iris_data_clean['sepal_length_cm'])
iris_data.columns[:-1]
# we go through list of column names except last one and get entropy
# for data (without missing values) in each column
for col in iris_data.columns[:-1]:
print("Entropy for: ", col, stats.entropy(iris_data[col].dropna()))
```
Of course, I purposely inserted numerous errors into this data set to demonstrate some of the many possible scenarios you may face while tidying your data.
The general takeaways here should be:
* Make sure your data is encoded properly
* Make sure your data falls within the expected range, and use domain knowledge whenever possible to define that expected range
* Deal with missing data in one way or another: replace it if you can or drop it
* Never tidy your data manually because that is not easily reproducible
* Use code as a record of how you tidied your data
* Plot everything you can about the data at this stage of the analysis so you can *visually* confirm everything looks correct
## Bonus: Testing our data
[[ go back to the top ]](#Table-of-contents)
At SciPy 2015, I was exposed to a great idea: We should test our data. Just as we use unit tests to verify our expectations of code, we can set up unit tests to verify our expectations about a data set.
We can quickly test our data using `assert` statements: We assert that something must be true, and if it is, then nothing happens and the notebook continues running. However, if our assertion is wrong, then the notebook stops running and brings it to our attention. For example,
```Python
assert 1 == 2
```
will raise an `AssertionError` and stop execution of the notebook because the assertion failed.
Let's test a few things that we know about our data set now.
```
# We know that we should only have three classes
assert len(iris_data_clean['class'].unique()) == 3
# We know that sepal lengths for 'Iris-versicolor' should never be below 2.5 cm
assert iris_data_clean.loc[iris_data_clean['class'] == 'Iris-versicolor', 'sepal_length_cm'].min() >= 2.5
# We know that our data set should have no missing measurements
assert len(iris_data_clean.loc[(iris_data_clean['sepal_length_cm'].isnull()) |
(iris_data_clean['sepal_width_cm'].isnull()) |
(iris_data_clean['petal_length_cm'].isnull()) |
(iris_data_clean['petal_width_cm'].isnull())]) == 0
```
And so on. If any of these expectations are violated, then our analysis immediately stops and we have to return to the tidying stage.
### Data cleanup and wrangling: often more than 80% of the time spent in data science
## Step 4: Exploratory analysis
[[ go back to the top ]](#Table-of-contents)
Now after spending entirely too much time tidying our data, we can start analyzing it!
Exploratory analysis is the step where we start delving deeper into the data set beyond the outliers and errors. We'll be looking to answer questions such as:
* How is my data distributed?
* Are there any correlations in my data?
* Are there any confounding factors that explain these correlations?
This is the stage where we plot all the data in as many ways as possible. Create many charts, but don't bother making them pretty — these charts are for internal use.
Let's return to that scatterplot matrix that we used earlier.
```
sb.pairplot(iris_data_clean)
;
```
Our data is normally distributed for the most part, which is great news if we plan on using any modeling methods that assume the data is normally distributed.
There's something strange going on with the petal measurements. Maybe it's something to do with the different `Iris` types. Let's color code the data by the class again to see if that clears things up.
```
sb.pairplot(iris_data_clean, hue='class')
;
```
Sure enough, the strange distribution of the petal measurements exists because of the different species. This is actually great news for our classification task since it means that the petal measurements will make it easy to distinguish between `Iris-setosa` and the other `Iris` types.
Distinguishing `Iris-versicolor` and `Iris-virginica` will prove more difficult given how much their measurements overlap.
There are also correlations between petal length and petal width, as well as sepal length and sepal width. The field biologists assure us that this is to be expected: Longer flower petals also tend to be wider, and the same applies for sepals.
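To put rough numbers on those relationships, we can compute the pairwise correlations directly. This is a minimal sketch using the cleaned data frame from above; the exact coefficients will depend on your copy of the data.

```python
# Pairwise correlations between the four numeric measurements
measurement_columns = ['sepal_length_cm', 'sepal_width_cm',
                       'petal_length_cm', 'petal_width_cm']
correlations = iris_data_clean[measurement_columns].corr()
print(correlations.round(2))
```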
We can also make [**violin plots**](https://en.wikipedia.org/wiki/Violin_plot) of the data to compare the measurement distributions of the classes. Violin plots contain the same information as [box plots](https://en.wikipedia.org/wiki/Box_plot), but also scale the box according to the density of the data.
```
plt.figure(figsize=(10, 10))
for column_index, column in enumerate(iris_data_clean.columns):
if column == 'class':
continue
plt.subplot(2, 2, column_index + 1)
sb.violinplot(x='class', y=column, data=iris_data_clean)
```
Enough flirting with the data. Let's get to modeling.
## Step 5: Classification
[[ go back to the top ]](#Table-of-contents)
Wow, all this work and we *still* haven't modeled the data!
As tiresome as it can be, tidying and exploring our data is a vital component to any data analysis. If we had jumped straight to the modeling step, we would have created a faulty classification model.
Remember: **Bad data leads to bad models.** Always check your data first.
<hr />
Assured that our data is now as clean as we can make it — and armed with some cursory knowledge of the distributions and relationships in our data set — it's time to make the next big step in our analysis: Splitting the data into training and testing sets.
A **training set** is a random subset of the data that we use to train our models.
A **testing set** is a random subset of the data (mutually exclusive from the training set) that we use to validate our models on unforeseen data.
Especially with small data sets like ours, it's easy for models to **overfit** the data: The model will learn the training set so well that it won't generalize to cases it has never seen before. This is why it's important for us to build the model with the training set, but score it with the testing set.
Note that once we split the data into a training and testing set, we should treat the testing set like it no longer exists: We cannot use any information from the testing set to build our model or else we're cheating.
Let's set up our data first.
```
iris_data_clean = pd.read_csv('../data/iris-data-clean.csv')
# We're using all four measurements as inputs
# Note that scikit-learn expects each entry to be a list of values, e.g.,
# [ [val1, val2, val3],
# [val1, val2, val3],
# ... ]
# such that our input data set is represented as a list of lists
# We can extract the data in this format from pandas like this:
all_inputs = iris_data_clean[['sepal_length_cm', 'sepal_width_cm',
'petal_length_cm', 'petal_width_cm']].values
# Similarly, we can extract the class labels
all_labels = iris_data_clean['class'].values
# Make sure that you don't mix up the order of the entries
# all_inputs[5] inputs should correspond to the class in all_labels[5]
# Here's what a subset of our inputs looks like:
all_inputs[:5]
all_labels[:5]
type(all_inputs)
type(all_labels)
```
Now our data is ready to be split.
```
from sklearn.model_selection import train_test_split
all_inputs[:3]
iris_data_clean.head(3)
all_labels[:3]
# Here we split our data into training and testing data
(training_inputs,
testing_inputs,
training_classes,
testing_classes) = train_test_split(all_inputs, all_labels, test_size=0.25, random_state=1)
training_inputs[:5]
testing_inputs[:5]
testing_classes[:5]
training_classes[:5]
```
With our data split, we can start fitting models to our data. Our company's Head of Data is all about decision tree classifiers, so let's start with one of those.
Decision tree classifiers are incredibly simple in theory. In their simplest form, decision tree classifiers ask a series of Yes/No questions about the data — each time getting closer to finding out the class of each entry — until they either classify the data set perfectly or simply can't differentiate a set of entries. Think of it like a game of [Twenty Questions](https://en.wikipedia.org/wiki/Twenty_Questions), except the computer is *much*, *much* better at it.
Here's an example decision tree classifier:
<img src="img/iris_dtc.png" />
Notice how the classifier asks Yes/No questions about the data — whether a certain feature is <= 1.75, for example — so it can differentiate the records. This is the essence of every decision tree.
The nice part about decision tree classifiers is that they are **scale-invariant**, i.e., the scale of the features does not affect their performance, unlike many Machine Learning models. In other words, it doesn't matter if our features range from 0 to 1 or 0 to 1,000; decision tree classifiers will work with them just the same.
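We can sanity-check the scale-invariance claim with a small sketch. Assuming the training/testing split created above, we rescale one feature by a factor of 1,000 and verify that a decision tree with a fixed random seed reaches the same accuracy on the rescaled data:

```python
from sklearn.tree import DecisionTreeClassifier

# Copy the data and blow up the first feature (sepal length) by 1000x
scaled_training_inputs = training_inputs.copy()
scaled_testing_inputs = testing_inputs.copy()
scaled_training_inputs[:, 0] *= 1000
scaled_testing_inputs[:, 0] *= 1000

# Fixing random_state makes the two trees directly comparable
original_tree = DecisionTreeClassifier(random_state=1).fit(training_inputs, training_classes)
scaled_tree = DecisionTreeClassifier(random_state=1).fit(scaled_training_inputs, training_classes)

print(original_tree.score(testing_inputs, testing_classes))
print(scaled_tree.score(scaled_testing_inputs, testing_classes))  # should match the line above
```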
There are several [parameters](http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html) that we can tune for decision tree classifiers, but for now let's use a basic decision tree classifier.
```
from sklearn.tree import DecisionTreeClassifier
# Create the classifier
decision_tree_classifier = DecisionTreeClassifier()
# Train the classifier on the training set
decision_tree_classifier.fit(training_inputs, training_classes)
# Validate the classifier on the testing set using classification accuracy
decision_tree_classifier.score(testing_inputs, testing_classes)
# Quick sanity checks on the size of the testing set and the accuracy fraction
150*0.25
len(testing_inputs)
37/38

# For comparison, try a support vector machine classifier on the same split
from sklearn import svm

svm_classifier = svm.SVC(gamma='scale')
svm_classifier.fit(training_inputs, training_classes)
svm_classifier.score(testing_inputs, testing_classes)
```
Heck yeah! Our model achieves 97% classification accuracy without much effort.
However, there's a catch: Depending on how our training and testing set was sampled, our model can achieve anywhere from 80% to 100% accuracy:
```
import matplotlib.pyplot as plt
# Here we randomly split the data 1,000 times into different training and test sets
model_accuracies = []
for repetition in range(1000):
(training_inputs,
testing_inputs,
training_classes,
testing_classes) = train_test_split(all_inputs, all_labels, test_size=0.25)
decision_tree_classifier = DecisionTreeClassifier()
decision_tree_classifier.fit(training_inputs, training_classes)
classifier_accuracy = decision_tree_classifier.score(testing_inputs, testing_classes)
model_accuracies.append(classifier_accuracy)
plt.hist(model_accuracies)
;
# One test-set misclassification shifts accuracy by roughly 100/38 ≈ 2.6 percentage points
100/38
```
It's obviously a problem that our model performs quite differently depending on the subset of the data it's trained on. This phenomenon is known as **overfitting**: The model is learning to classify the training set so well that it doesn't generalize and perform well on data it hasn't seen before.
### Cross-validation
[[ go back to the top ]](#Table-of-contents)
This problem is the main reason that most data scientists perform ***k*-fold cross-validation** on their models: Split the original data set into *k* subsets, use one of the subsets as the testing set, and use the rest of the subsets as the training set. This process is then repeated *k* times such that each subset is used as the testing set exactly once.
10-fold cross-validation is the most common choice, so let's use that here. Performing 10-fold cross-validation on our data set looks something like this:
(each square is an entry in our data set)
```
# Visualize which rows land in the test fold on each of the 10 iterations
import numpy as np
from sklearn.model_selection import StratifiedKFold
def plot_cv(cv, features, labels):
masks = []
for train, test in cv.split(features, labels):
mask = np.zeros(len(labels), dtype=bool)
mask[test] = 1
masks.append(mask)
plt.figure(figsize=(15, 15))
plt.imshow(masks, interpolation='none', cmap='gray_r')
plt.ylabel('Fold')
plt.xlabel('Row #')
plot_cv(StratifiedKFold(n_splits=10), all_inputs, all_labels)
```
You'll notice that we used **Stratified *k*-fold cross-validation** in the code above. Stratified *k*-fold keeps the class proportions the same across all of the folds, which is vital for maintaining a representative subset of our data set. (e.g., so we don't have 100% `Iris setosa` entries in one of the folds.)
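To confirm that the folds really are stratified, the short sketch below (reusing `all_inputs` and `all_labels` from earlier) counts how many entries of each class land in every test fold; with stratification the counts should be nearly identical from fold to fold.

```python
from collections import Counter
from sklearn.model_selection import StratifiedKFold

# Count how many entries of each class end up in each test fold
stratified_folds = StratifiedKFold(n_splits=10)
for fold_number, (train_index, test_index) in enumerate(
        stratified_folds.split(all_inputs, all_labels), start=1):
    fold_counts = Counter(all_labels[test_index])
    print('Fold {}: {}'.format(fold_number, dict(fold_counts)))
```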
We can perform 10-fold cross-validation on our model with the following code:
```
from sklearn.model_selection import cross_val_score
decision_tree_classifier = DecisionTreeClassifier()
# cross_val_score returns a list of the scores, which we can visualize
# to get a reasonable estimate of our classifier's performance
cv_scores = cross_val_score(decision_tree_classifier, all_inputs, all_labels, cv=10)
plt.hist(cv_scores)
plt.title('Average score: {}'.format(np.mean(cv_scores)))
;
len(all_inputs.T[1])

# Entropy of a single feature column (sepal width)
print("Entropy for: ", stats.entropy(all_inputs.T[1]))

# Compute the entropy of every feature column in the NumPy array
def printEntropy(npdata):
    for i, col in enumerate(npdata.T):
        print("Entropy for column:", i, stats.entropy(col))

printEntropy(all_inputs)
```
Now we have a much more consistent rating of our classifier's general classification accuracy.
### Parameter tuning
[[ go back to the top ]](#Table-of-contents)
Every Machine Learning model comes with a variety of parameters to tune, and these parameters can be vitally important to the performance of our classifier. For example, if we severely limit the depth of our decision tree classifier:
```
decision_tree_classifier = DecisionTreeClassifier(max_depth=1)
cv_scores = cross_val_score(decision_tree_classifier, all_inputs, all_labels, cv=10)
plt.hist(cv_scores)
plt.title('Average score: {}'.format(np.mean(cv_scores)))
;
```
the classification accuracy falls tremendously.
Therefore, we need to find a systematic method to discover the best parameters for our model and data set.
The most common method for model parameter tuning is **Grid Search**. The idea behind Grid Search is simple: explore a range of parameters and find the best-performing parameter combination. Focus your search on the best range of parameters, then repeat this process several times until the best parameters are discovered.
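Conceptually, a grid search is nothing more than a couple of nested loops over candidate parameter values. The minimal sketch below spells that idea out by hand, reusing `all_inputs`, `all_labels`, and 10-fold cross-validation from earlier, before we let scikit-learn's `GridSearchCV` automate the same work:

```python
from itertools import product

from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# A grid search "by hand": try every combination and keep the best cross-validated score
best_score, best_params = 0.0, None
for max_depth, max_features in product([1, 2, 3, 4, 5], [1, 2, 3, 4]):
    classifier = DecisionTreeClassifier(max_depth=max_depth, max_features=max_features)
    score = cross_val_score(classifier, all_inputs, all_labels, cv=10).mean()
    if score > best_score:
        best_score, best_params = score, (max_depth, max_features)

print('Best score: {}'.format(best_score))
print('Best parameters (max_depth, max_features): {}'.format(best_params))
```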
Let's tune our decision tree classifier. We'll stick to only two parameters for now, but it's possible to simultaneously explore dozens of parameters if we want.
```
from sklearn.model_selection import GridSearchCV
decision_tree_classifier = DecisionTreeClassifier()
parameter_grid = {'max_depth': [1, 2, 3, 4, 5],
'max_features': [1, 2, 3, 4]}
cross_validation = StratifiedKFold(n_splits=10)
grid_search = GridSearchCV(decision_tree_classifier,
param_grid=parameter_grid,
cv=cross_validation)
grid_search.fit(all_inputs, all_labels)
print('Best score: {}'.format(grid_search.best_score_))
print('Best parameters: {}'.format(grid_search.best_params_))
```
Now let's visualize the grid search to see how the parameters interact.
```
grid_search.cv_results_['mean_test_score']
grid_visualization = grid_search.cv_results_['mean_test_score']
grid_visualization.shape = (5, 4)
sb.heatmap(grid_visualization, cmap='Reds', annot=True)
plt.xticks(np.arange(4) + 0.5, grid_search.param_grid['max_features'])
plt.yticks(np.arange(5) + 0.5, grid_search.param_grid['max_depth'])
plt.xlabel('max_features')
plt.ylabel('max_depth')
;
```
Now we have a better sense of the parameter space: We know that we need a `max_depth` of at least 2 to allow the decision tree to make more than a single split.
`max_features` doesn't really seem to make a big difference here as long as we have at least 2 of them, which makes sense since our data set has only 4 features and is relatively easy to classify. (Remember, one of our data set's classes was easily separable from the rest based on a single feature.)
Let's go ahead and use a broad grid search to find the best settings for a handful of parameters.
```
decision_tree_classifier = DecisionTreeClassifier()
parameter_grid = {'criterion': ['gini', 'entropy'],
'splitter': ['best', 'random'],
'max_depth': [1, 2, 3, 4, 5],
'max_features': [1, 2, 3, 4]}
cross_validation = StratifiedKFold(n_splits=10)
grid_search = GridSearchCV(decision_tree_classifier,
param_grid=parameter_grid,
cv=cross_validation)
grid_search.fit(all_inputs, all_labels)
print('Best score: {}'.format(grid_search.best_score_))
print('Best parameters: {}'.format(grid_search.best_params_))
```
Now we can take the best classifier from the Grid Search and use that:
```
decision_tree_classifier = grid_search.best_estimator_
decision_tree_classifier
```
We can even visualize the decision tree with [GraphViz](http://www.graphviz.org/) to see how it's making the classifications:
```
import sklearn.tree as tree

# Write the fitted decision tree out to a GraphViz .dot file
with open('iris_dtc.dot', 'w') as out_file:
    out_file = tree.export_graphviz(decision_tree_classifier, out_file=out_file)
```
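The cell above only writes the `.dot` file. Turning it into an image requires the GraphViz `dot` program itself; assuming it is installed and on your `PATH`, one way to render it from Python is:

```python
import subprocess

# Render the exported tree to a PNG (requires the GraphViz 'dot' binary)
subprocess.run(['dot', '-Tpng', 'iris_dtc.dot', '-o', 'iris_dtc.png'], check=True)
```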
<img src="img/iris_dtc.png" />
(This classifier may look familiar from earlier in the notebook.)
Alright! We finally have our demo classifier. Let's create some visuals of its performance so we have something to show our company's Head of Data.
```
dt_scores = cross_val_score(decision_tree_classifier, all_inputs, all_labels, cv=10)
sb.boxplot(dt_scores)
sb.stripplot(dt_scores, jitter=True, color='black')
;
```
Hmmm... that's a little boring by itself though. How about we compare another classifier to see how they perform?
We already know from previous projects that Random Forest classifiers usually work better than individual decision trees. A common problem that decision trees face is that they're prone to overfitting: They grow so complex that they classify the training set near-perfectly, but fail to generalize to data they have not seen before.
**Random Forest classifiers** work around that limitation by creating a whole bunch of decision trees (hence "forest") — each trained on random subsets of training samples (drawn with replacement) and features (drawn without replacement) — and have the decision trees work together to make a more accurate classification.
Let that be a lesson for us: **Even in Machine Learning, we get better results when we work together!**
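As a quick sanity check on that description, a fitted Random Forest really is just a collection of individual decision trees. This small sketch (reusing the training split from earlier) peeks inside one:

```python
from sklearn.ensemble import RandomForestClassifier

# Fit a small forest and inspect its members
forest = RandomForestClassifier(n_estimators=10)
forest.fit(training_inputs, training_classes)

print(len(forest.estimators_))                # 10 individual trees make up the "forest"
print(type(forest.estimators_[0]).__name__)   # each member is a DecisionTreeClassifier
```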
Let's see if a Random Forest classifier works better here.
The great part about scikit-learn is that the training, testing, parameter tuning, etc. process is the same for all models, so we only need to plug in the new classifier.
```
from sklearn.ensemble import RandomForestClassifier
random_forest_classifier = RandomForestClassifier()
parameter_grid = {'n_estimators': [10, 25, 50, 100],
'criterion': ['gini', 'entropy'],
'max_features': [1, 2, 3, 4]}
cross_validation = StratifiedKFold(n_splits=10)
grid_search = GridSearchCV(random_forest_classifier,
param_grid=parameter_grid,
cv=cross_validation)
grid_search.fit(all_inputs, all_labels)
print('Best score: {}'.format(grid_search.best_score_))
print('Best parameters: {}'.format(grid_search.best_params_))
grid_search.best_estimator_
```
Now we can compare their performance:
```
random_forest_classifier = grid_search.best_estimator_
rf_df = pd.DataFrame({'accuracy': cross_val_score(random_forest_classifier, all_inputs, all_labels, cv=10),
'classifier': ['Random Forest'] * 10})
dt_df = pd.DataFrame({'accuracy': cross_val_score(decision_tree_classifier, all_inputs, all_labels, cv=10),
'classifier': ['Decision Tree'] * 10})
both_df = pd.concat([rf_df, dt_df])  # DataFrame.append was removed in newer pandas; concat does the same job
sb.boxplot(x='classifier', y='accuracy', data=both_df)
sb.stripplot(x='classifier', y='accuracy', data=both_df, jitter=True, color='black')
;
```
How about that? They both seem to perform about the same on this data set. This is probably because of the limitations of our data set: We have only 4 features to make the classification, and Random Forest classifiers excel when there are hundreds of possible features to look at. In other words, there wasn't much room for improvement with this data set.
## Step 6: Reproducibility
[[ go back to the top ]](#Table-of-contents)
Ensuring that our work is reproducible is the last and — arguably — most important step in any analysis. **As a rule, we shouldn't place much weight on a discovery that can't be reproduced**. As such, if our analysis isn't reproducible, we might as well not have done it.
Notebooks like this one go a long way toward making our work reproducible. Since we documented every step as we moved along, we have a written record of what we did and why we did it — both in text and code.
Beyond recording what we did, we should also document what software and hardware we used to perform our analysis. This typically goes at the top of our notebooks so our readers know what tools to use.
[Sebastian Raschka](http://sebastianraschka.com/) created a handy [notebook tool](https://github.com/rasbt/watermark) for this:
```
!pip install watermark
%load_ext watermark
pd.show_versions()
%watermark -a 'RCS_April_2019' -nmv --packages numpy,pandas,sklearn,matplotlib,seaborn
```
Finally, let's extract the core of our work from Steps 1-5 and turn it into a single pipeline.
```
%matplotlib inline
import pandas as pd
import seaborn as sb
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, cross_val_score
# We can jump directly to working with the clean data because we saved our cleaned data set
iris_data_clean = pd.read_csv('../data/iris-data-clean.csv')
# Testing our data: Our analysis will stop here if any of these assertions are wrong
# We know that we should only have three classes
assert len(iris_data_clean['class'].unique()) == 3
# We know that sepal lengths for 'Iris-versicolor' should never be below 2.5 cm
assert iris_data_clean.loc[iris_data_clean['class'] == 'Iris-versicolor', 'sepal_length_cm'].min() >= 2.5
# We know that our data set should have no missing measurements
assert len(iris_data_clean.loc[(iris_data_clean['sepal_length_cm'].isnull()) |
(iris_data_clean['sepal_width_cm'].isnull()) |
(iris_data_clean['petal_length_cm'].isnull()) |
(iris_data_clean['petal_width_cm'].isnull())]) == 0
all_inputs = iris_data_clean[['sepal_length_cm', 'sepal_width_cm',
'petal_length_cm', 'petal_width_cm']].values
all_labels = iris_data_clean['class'].values
# This is the classifier that came out of Grid Search
random_forest_classifier = RandomForestClassifier(criterion='gini', max_features=3, n_estimators=50)
# All that's left to do now is plot the cross-validation scores
rf_classifier_scores = cross_val_score(random_forest_classifier, all_inputs, all_labels, cv=10)
sb.boxplot(rf_classifier_scores)
sb.stripplot(rf_classifier_scores, jitter=True, color='black')
# ...and show some of the predictions from the classifier
(training_inputs,
testing_inputs,
training_classes,
testing_classes) = train_test_split(all_inputs, all_labels, test_size=0.25)
random_forest_classifier.fit(training_inputs, training_classes)
for input_features, prediction, actual in zip(testing_inputs[:10],
random_forest_classifier.predict(testing_inputs[:10]),
testing_classes[:10]):
print('{}\t-->\t{}\t(Actual: {})'.format(input_features, prediction, actual))
%matplotlib inline
import pandas as pd
import seaborn as sb
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, cross_val_score
def processData(filename):
# We can jump directly to working with the clean data because we saved our cleaned data set
iris_data_clean = pd.read_csv(filename)
# Testing our data: Our analysis will stop here if any of these assertions are wrong
# We know that we should only have three classes
assert len(iris_data_clean['class'].unique()) == 3
# We know that sepal lengths for 'Iris-versicolor' should never be below 2.5 cm
assert iris_data_clean.loc[iris_data_clean['class'] == 'Iris-versicolor', 'sepal_length_cm'].min() >= 2.5
# We know that our data set should have no missing measurements
assert len(iris_data_clean.loc[(iris_data_clean['sepal_length_cm'].isnull()) |
(iris_data_clean['sepal_width_cm'].isnull()) |
(iris_data_clean['petal_length_cm'].isnull()) |
(iris_data_clean['petal_width_cm'].isnull())]) == 0
all_inputs = iris_data_clean[['sepal_length_cm', 'sepal_width_cm',
'petal_length_cm', 'petal_width_cm']].values
all_labels = iris_data_clean['class'].values
# This is the classifier that came out of Grid Search
random_forest_classifier = RandomForestClassifier(criterion='gini', max_features=3, n_estimators=50)
# All that's left to do now is plot the cross-validation scores
rf_classifier_scores = cross_val_score(random_forest_classifier, all_inputs, all_labels, cv=10)
sb.boxplot(rf_classifier_scores)
sb.stripplot(rf_classifier_scores, jitter=True, color='black')
# ...and show some of the predictions from the classifier
(training_inputs,
testing_inputs,
training_classes,
testing_classes) = train_test_split(all_inputs, all_labels, test_size=0.25)
random_forest_classifier.fit(training_inputs, training_classes)
for input_features, prediction, actual in zip(testing_inputs[:10],
random_forest_classifier.predict(testing_inputs[:10]),
testing_classes[:10]):
print('{}\t-->\t{}\t(Actual: {})'.format(input_features, prediction, actual))
return rf_classifier_scores
myscores = processData('../data/iris-data-clean.csv')
myscores
```
There we have it: We have a complete and reproducible Machine Learning pipeline to demo to our company's Head of Data. We've met the success criteria that we set from the beginning (>90% accuracy), and our pipeline is flexible enough to handle new inputs or flowers when that data set is ready. Not bad for our first week on the job!
## Conclusions
[[ go back to the top ]](#Table-of-contents)
I hope you found this example notebook useful for your own work and learned at least one new trick by reading through it.
If you spot an error or would like to suggest an improvement, you can:
* [Submit an issue](https://github.com/ValRCS/LU-pysem/issues) on GitHub
* Fork the [notebook repository](https://github.com/ValRCS/LU-pysem), make the fix/addition yourself, then send over a pull request
## Further reading
[[ go back to the top ]](#Table-of-contents)
This notebook covers a broad variety of topics but skips over many of the specifics. If you're looking to dive deeper into a particular topic, here's some recommended reading.
**Data Science**: William Chen compiled a [list of free books](http://www.wzchen.com/data-science-books/) for newcomers to Data Science, ranging from the basics of R & Python to Machine Learning to interviews and advice from prominent data scientists.
**Machine Learning**: /r/MachineLearning has a useful [Wiki page](https://www.reddit.com/r/MachineLearning/wiki/index) containing links to online courses, books, data sets, etc. for Machine Learning. There's also a [curated list](https://github.com/josephmisiti/awesome-machine-learning) of Machine Learning frameworks, libraries, and software sorted by language.
**Unit testing**: Dive Into Python 3 has a [great walkthrough](http://www.diveintopython3.net/unit-testing.html) of unit testing in Python, how it works, and how it should be used
**pandas** has [several tutorials](http://pandas.pydata.org/pandas-docs/stable/tutorials.html) covering its myriad features.
**scikit-learn** has a [bunch of tutorials](http://scikit-learn.org/stable/tutorial/index.html) for those looking to learn Machine Learning in Python. Andreas Mueller's [scikit-learn workshop materials](https://github.com/amueller/scipy_2015_sklearn_tutorial) are top-notch and freely available.
**matplotlib** has many [books, videos, and tutorials](http://matplotlib.org/resources/index.html) to teach plotting in Python.
**Seaborn** has a [basic tutorial](http://stanford.edu/~mwaskom/software/seaborn/tutorial.html) covering most of the statistical plotting features.
## Acknowledgements
[[ go back to the top ]](#Table-of-contents)
Many thanks to [Andreas Mueller](http://amueller.github.io/) for some of his [examples](https://github.com/amueller/scipy_2015_sklearn_tutorial) in the Machine Learning section. I drew inspiration from several of his excellent examples.
The photo of a flower with annotations of the petal and sepal was taken by [Eric Guinther](https://commons.wikimedia.org/wiki/File:Petal-sepal.jpg).
The photos of the various *Iris* flower types were taken by [Ken Walker](http://www.signa.org/index.pl?Display+Iris-setosa+2) and [Barry Glick](http://www.signa.org/index.pl?Display+Iris-virginica+3).
## Further questions?
Feel free to contact [Valdis Saulespurens](mailto:valdis.s.coding@gmail.com).
| github_jupyter |
# Computer Vision Nanodegree
## Project: Image Captioning
---
In this notebook, you will train your CNN-RNN model.
You are welcome and encouraged to try out many different architectures and hyperparameters when searching for a good model.
This does have the potential to make the project quite messy! Before submitting your project, make sure that you clean up:
- the code you write in this notebook. The notebook should describe how to train a single CNN-RNN architecture, corresponding to your final choice of hyperparameters. You should structure the notebook so that the reviewer can replicate your results by running the code in this notebook.
- the output of the code cell in **Step 2**. The output should show the output obtained when training the model from scratch.
This notebook **will be graded**.
Feel free to use the links below to navigate the notebook:
- [Step 1](#step1): Training Setup
- [Step 2](#step2): Train your Model
- [Step 3](#step3): (Optional) Validate your Model
<a id='step1'></a>
## Step 1: Training Setup
In this step of the notebook, you will customize the training of your CNN-RNN model by specifying hyperparameters and setting other options that are important to the training procedure. The values you set now will be used when training your model in **Step 2** below.
You should only amend blocks of code that are preceded by a `TODO` statement. **Any code blocks that are not preceded by a `TODO` statement should not be modified**.
### Task #1
Begin by setting the following variables:
- `batch_size` - the batch size of each training batch. It is the number of image-caption pairs used to amend the model weights in each training step.
- `vocab_threshold` - the minimum word count threshold. Note that a larger threshold will result in a smaller vocabulary, whereas a smaller threshold will include rarer words and result in a larger vocabulary.
- `vocab_from_file` - a Boolean that decides whether to load the vocabulary from file.
- `embed_size` - the dimensionality of the image and word embeddings.
- `hidden_size` - the number of features in the hidden state of the RNN decoder.
- `num_epochs` - the number of epochs to train the model. We recommend that you set `num_epochs=3`, but feel free to increase or decrease this number as you wish. [This paper](https://arxiv.org/pdf/1502.03044.pdf) trained a captioning model on a single state-of-the-art GPU for 3 days, but you'll soon see that you can get reasonable results in a matter of a few hours! (_But of course, if you want your model to compete with current research, you will have to train for much longer._)
- `save_every` - determines how often to save the model weights. We recommend that you set `save_every=1`, to save the model weights after each epoch. This way, after the `i`th epoch, the encoder and decoder weights will be saved in the `models/` folder as `encoder-i.pkl` and `decoder-i.pkl`, respectively.
- `print_every` - determines how often to print the batch loss to the Jupyter notebook while training. Note that you **will not** observe a monotonic decrease in the loss function while training - this is perfectly fine and completely expected! You are encouraged to keep this at its default value of `100` to avoid clogging the notebook, but feel free to change it.
- `log_file` - the name of the text file containing - for every step - how the loss and perplexity evolved during training.
If you're not sure where to begin to set some of the values above, you can peruse [this paper](https://arxiv.org/pdf/1502.03044.pdf) and [this paper](https://arxiv.org/pdf/1411.4555.pdf) for useful guidance! **To avoid spending too long on this notebook**, you are encouraged to consult these suggested research papers to obtain a strong initial guess for which hyperparameters are likely to work best. Then, train a single model, and proceed to the next notebook (**3_Inference.ipynb**). If you are unhappy with your performance, you can return to this notebook to tweak the hyperparameters (and/or the architecture in **model.py**) and re-train your model.
### Question 1
**Question:** Describe your CNN-RNN architecture in detail. With this architecture in mind, how did you select the values of the variables in Task 1? If you consulted a research paper detailing a successful implementation of an image captioning model, please provide the reference.
**Answer:** I used a pretrained ResNet152 network to extract features (a deep CNN). In the literature other architectures such as VGG16 are also used, but ResNet152 is claimed to diminish the vanishing-gradient problem. I'm currently using 2 LSTM layers (training already takes a lot of time); in the future I will explore more layers.
vocab_threshold is 6. I tried 9 (meaning fewer elements in the vocabulary), but training seemed to converge faster with 6. Many papers suggest a batch_size of 64 or 128, so I went with 64. embed_size and hidden_size are both 512. I consulted several blogs and well-known papers such as "Show, Attend and Tell" (Xu et al.), although I did not use attention for now.
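Since **model.py** itself is not shown in this notebook, here is a rough, illustrative sketch of the kind of encoder/decoder pair described above: a frozen ResNet152 with a trainable embedding layer feeding a 2-layer LSTM decoder. The class and argument names follow the project's conventions, but the details are assumptions rather than the actual submitted code.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class EncoderCNN(nn.Module):
    def __init__(self, embed_size):
        super(EncoderCNN, self).__init__()
        resnet = models.resnet152(pretrained=True)
        for param in resnet.parameters():
            param.requires_grad_(False)          # freeze the pretrained backbone
        modules = list(resnet.children())[:-1]   # drop the final classification layer
        self.resnet = nn.Sequential(*modules)
        self.embed = nn.Linear(resnet.fc.in_features, embed_size)

    def forward(self, images):
        features = self.resnet(images)
        features = features.view(features.size(0), -1)
        return self.embed(features)

class DecoderRNN(nn.Module):
    def __init__(self, embed_size, hidden_size, vocab_size, num_layers=2):
        super(DecoderRNN, self).__init__()
        self.word_embeddings = nn.Embedding(vocab_size, embed_size)
        self.lstm = nn.LSTM(embed_size, hidden_size, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size, vocab_size)

    def forward(self, features, captions):
        # Drop the <end> token and prepend the image feature as the first "word"
        embeddings = self.word_embeddings(captions[:, :-1])
        inputs = torch.cat((features.unsqueeze(1), embeddings), dim=1)
        outputs, _ = self.lstm(inputs)
        return self.fc(outputs)
```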
### (Optional) Task #2
Note that we have provided a recommended image transform `transform_train` for pre-processing the training images, but you are welcome (and encouraged!) to modify it as you wish. When modifying this transform, keep in mind that:
- the images in the dataset have varying heights and widths, and
- if using a pre-trained model, you must perform the corresponding appropriate normalization.
### Question 2
**Question:** How did you select the transform in `transform_train`? If you left the transform at its provided value, why do you think that it is a good choice for your CNN architecture?
**Answer:** The transform is left at its provided value. Empirically, these parameter values have worked well in my past projects.
### Task #3
Next, you will specify a Python list containing the learnable parameters of the model. For instance, if you decide to make all weights in the decoder trainable, but only want to train the weights in the embedding layer of the encoder, then you should set `params` to something like:
```
params = list(decoder.parameters()) + list(encoder.embed.parameters())
```
### Question 3
**Question:** How did you select the trainable parameters of your architecture? Why do you think this is a good choice?
**Answer:** Since the ResNet was pretrained, I trained only the embedding layer of the encoder and all layers of the decoder. The pretrained ResNet is already well suited for feature extraction, so only the other parts of the architecture need to be trained.
### Task #4
Finally, you will select an [optimizer](http://pytorch.org/docs/master/optim.html#torch.optim.Optimizer).
### Question 4
**Question:** How did you select the optimizer used to train your model?
**Answer:** I used the Adam optimizer, since in similar past projects it gave me better performance than SGD. I have found Adam to perform better than vanilla SGD in almost all cases, which aligns with intuition.
```
import nltk
nltk.download('punkt')
import torch
import torch.nn as nn
from torchvision import transforms
import sys
sys.path.append('/opt/cocoapi/PythonAPI')
from pycocotools.coco import COCO
from data_loader import get_loader
from model import EncoderCNN, DecoderRNN
import math
## TODO #1: Select appropriate values for the Python variables below.
batch_size = 64 # batch size
vocab_threshold = 6 # minimum word count threshold
vocab_from_file = True # if True, load existing vocab file
embed_size = 512 # dimensionality of image and word embeddings
hidden_size = 512 # number of features in hidden state of the RNN decoder
num_epochs = 3 # number of training epochs
save_every = 1 # determines frequency of saving model weights
print_every = 100 # determines window for printing average loss
log_file = 'training_log.txt' # name of file with saved training loss and perplexity
# (Optional) TODO #2: Amend the image transform below.
transform_train = transforms.Compose([
transforms.Resize(256), # smaller edge of image resized to 256
transforms.RandomCrop(224), # get 224x224 crop from random location
transforms.RandomHorizontalFlip(), # horizontally flip image with probability=0.5
transforms.ToTensor(), # convert the PIL Image to a tensor
transforms.Normalize((0.485, 0.456, 0.406), # normalize image for pre-trained model
(0.229, 0.224, 0.225))])
# Build data loader.
data_loader = get_loader(transform=transform_train,
mode='train',
batch_size=batch_size,
vocab_threshold=vocab_threshold,
vocab_from_file=vocab_from_file)
# The size of the vocabulary.
vocab_size = len(data_loader.dataset.vocab)
# Initialize the encoder and decoder.
encoder = EncoderCNN(embed_size)
decoder = DecoderRNN(embed_size, hidden_size, vocab_size)
# Move models to GPU if CUDA is available.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
encoder.to(device)
decoder.to(device)
# Define the loss function.
criterion = nn.CrossEntropyLoss().cuda() if torch.cuda.is_available() else nn.CrossEntropyLoss()
# TODO #3: Specify the learnable parameters of the model.
params = list(decoder.parameters()) + list(encoder.embed.parameters())
# TODO #4: Define the optimizer.
optimizer = torch.optim.Adam(params, lr=0.001, betas=(0.9,0.999), eps=1e-8)
# Set the total number of training steps per epoch.
total_step = math.ceil(len(data_loader.dataset.caption_lengths) / data_loader.batch_sampler.batch_size)
```
<a id='step2'></a>
## Step 2: Train your Model
Once you have executed the code cell in **Step 1**, the training procedure below should run without issue.
It is completely fine to leave the code cell below as-is without modifications to train your model. However, if you would like to modify the code used to train the model below, you must ensure that your changes are easily parsed by your reviewer. In other words, make sure to provide appropriate comments to describe how your code works!
You may find it useful to load saved weights to resume training. In that case, note the names of the files containing the encoder and decoder weights that you'd like to load (`encoder_file` and `decoder_file`). Then you can load the weights by using the lines below:
```python
# Load pre-trained weights before resuming training.
encoder.load_state_dict(torch.load(os.path.join('./models', encoder_file)))
decoder.load_state_dict(torch.load(os.path.join('./models', decoder_file)))
```
While trying out parameters, make sure to take extensive notes and record the settings that you used in your various training runs. In particular, you don't want to encounter a situation where you've trained a model for several hours but can't remember what settings you used :).
### A Note on Tuning Hyperparameters
To figure out how well your model is doing, you can look at how the training loss and perplexity evolve during training - and for the purposes of this project, you are encouraged to amend the hyperparameters based on this information.
However, this will not tell you if your model is overfitting to the training data, and, unfortunately, overfitting is a problem that is commonly encountered when training image captioning models.
For this project, you need not worry about overfitting. **This project does not have strict requirements regarding the performance of your model**, and you just need to demonstrate that your model has learned **_something_** when you generate captions on the test data. For now, we strongly encourage you to train your model for the suggested 3 epochs without worrying about performance; then, you should immediately transition to the next notebook in the sequence (**3_Inference.ipynb**) to see how your model performs on the test data. If your model needs to be changed, you can come back to this notebook, amend hyperparameters (if necessary), and re-train the model.
That said, if you would like to go above and beyond in this project, you can read about some approaches to minimizing overfitting in section 4.3.1 of [this paper](http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7505636). In the next (optional) step of this notebook, we provide some guidance for assessing the performance on the validation dataset.
```
import torch.utils.data as data
import numpy as np
import os
import requests
import time
# Open the training log file.
f = open(log_file, 'w')
old_time = time.time()
response = requests.request("GET",
"http://metadata.google.internal/computeMetadata/v1/instance/attributes/keep_alive_token",
headers={"Metadata-Flavor":"Google"})
for epoch in range(1, num_epochs+1):
for i_step in range(1, total_step+1):
if time.time() - old_time > 60:
old_time = time.time()
requests.request("POST",
"https://nebula.udacity.com/api/v1/remote/keep-alive",
headers={'Authorization': "STAR " + response.text})
# Randomly sample a caption length, and sample indices with that length.
indices = data_loader.dataset.get_train_indices()
# Create and assign a batch sampler to retrieve a batch with the sampled indices.
new_sampler = data.sampler.SubsetRandomSampler(indices=indices)
data_loader.batch_sampler.sampler = new_sampler
# Obtain the batch.
images, captions = next(iter(data_loader))
# Move batch of images and captions to GPU if CUDA is available.
images = images.to(device)
captions = captions.to(device)
# Zero the gradients.
decoder.zero_grad()
encoder.zero_grad()
# Pass the inputs through the CNN-RNN model.
features = encoder(images)
outputs = decoder(features, captions)
# Calculate the batch loss.
loss = criterion(outputs.view(-1, vocab_size), captions.view(-1))
# Backward pass.
loss.backward()
# Update the parameters in the optimizer.
optimizer.step()
# Get training statistics.
stats = 'Epoch [%d/%d], Step [%d/%d], Loss: %.4f, Perplexity: %5.4f' % (epoch, num_epochs, i_step, total_step, loss.item(), np.exp(loss.item()))
# Print training statistics (on same line).
print('\r' + stats, end="")
sys.stdout.flush()
# Print training statistics to file.
f.write(stats + '\n')
f.flush()
# Print training statistics (on different line).
if i_step % print_every == 0:
print('\r' + stats)
# Save the weights.
if epoch % save_every == 0:
torch.save(decoder.state_dict(), os.path.join('./models', 'decoder-%d.pkl' % epoch))
torch.save(encoder.state_dict(), os.path.join('./models', 'encoder-%d.pkl' % epoch))
# Close the training log file.
f.close()
```
<a id='step3'></a>
## Step 3: (Optional) Validate your Model
To assess potential overfitting, one approach is to assess performance on a validation set. If you decide to do this **optional** task, you are required to first complete all of the steps in the next notebook in the sequence (**3_Inference.ipynb**); as part of that notebook, you will write and test code (specifically, the `sample` method in the `DecoderRNN` class) that uses your RNN decoder to generate captions. That code will prove incredibly useful here.
If you decide to validate your model, please do not edit the data loader in **data_loader.py**. Instead, create a new file named **data_loader_val.py** containing the code for obtaining the data loader for the validation data. You can access:
- the validation images at filepath `'/opt/cocoapi/images/train2014/'`, and
- the validation image caption annotation file at filepath `'/opt/cocoapi/annotations/captions_val2014.json'`.
The suggested approach to validating your model involves creating a json file such as [this one](https://github.com/cocodataset/cocoapi/blob/master/results/captions_val2014_fakecap_results.json) containing your model's predicted captions for the validation images. Then, you can write your own script or use one that you [find online](https://github.com/tylin/coco-caption) to calculate the BLEU score of your model. You can read more about the BLEU score, along with other evaluation metrics (such as METEOR and CIDEr) in section 4.1 of [this paper](https://arxiv.org/pdf/1411.4555.pdf). For more information about how to use the annotation file, check out the [website](http://cocodataset.org/#download) for the COCO dataset.
```
# (Optional) TODO: Validate your model.
```
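If you do attempt validation, one rough outline of the workflow is sketched below. It assumes you have written a `data_loader_val.py` that yields image IDs, image tensors, and reference captions one image at a time, plus a `clean_sentence` helper that turns the decoder's word indices into a string; those names are placeholders, and NLTK's `corpus_bleu` is used as a lightweight stand-in for the official coco-caption tooling.

```python
import json
from nltk.translate.bleu_score import corpus_bleu

# Placeholders: val_loader comes from your data_loader_val.py and clean_sentence()
# converts a list of word indices into a caption string.
results = []
references = []
hypotheses = []

for image_id, image, captions in val_loader:        # assumed iteration format
    features = encoder(image.to(device)).unsqueeze(1)
    word_indices = decoder.sample(features)          # list of word indices
    caption = clean_sentence(word_indices)           # assumed helper
    results.append({'image_id': int(image_id), 'caption': caption})

    # Tokenized ground-truth captions for this image, for BLEU
    references.append([c.lower().split() for c in captions])
    hypotheses.append(caption.lower().split())

# Save predictions in the same format as the fake-captions example file
with open('captions_val2014_results.json', 'w') as f:
    json.dump(results, f)

print('BLEU-4: {:.3f}'.format(corpus_bleu(references, hypotheses)))
```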
| github_jupyter |
# Mount google drive to colab
```
from google.colab import drive
drive.mount("/content/drive")
```
# Import libraries
```
import os
import random
import numpy as np
import shutil
import time
from PIL import Image, ImageOps
import cv2
import pandas as pd
import math
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('darkgrid')
import tensorflow as tf
from keras import models
from keras import layers
from keras import optimizers
from keras.callbacks import EarlyStopping
from keras.callbacks import ModelCheckpoint
from keras.callbacks import LearningRateScheduler
from keras.utils import np_utils
from sklearn.metrics import confusion_matrix, classification_report
from sklearn.preprocessing import LabelBinarizer
from sklearn.preprocessing import MinMaxScaler
from keras.preprocessing.image import ImageDataGenerator
from keras import losses
```
# Initialize basic working directories
```
directory = "drive/MyDrive/Datasets/Sign digits/Dataset"
trainDir = "train"
testDir = "test"
os.chdir(directory)
```
# Augmented dataframes
```
augDir = "augmented/"
classNames_train = os.listdir(augDir+'train/')
classNames_test = os.listdir(augDir+'test/')
classes_train = []
data_train = []
paths_train = []
classes_test = []
data_test = []
paths_test = []
classes_val = []
data_val = []
paths_val = []
for className in range(0,10):
temp_train = os.listdir(augDir+'train/'+str(className))
temp_test = os.listdir(augDir+'test/'+str(className))
for dataFile in temp_train:
path_train = augDir+'train/'+str(className)+'/'+dataFile
paths_train.append(path_train)
classes_train .append(str(className))
testSize = [i for i in range(math.floor(len(temp_test)/2),len(temp_test))]
valSize = [i for i in range(0,math.floor(len(temp_test)/2))]
for dataFile in testSize:
path_test = augDir+'test/'+str(className)+'/'+temp_test[dataFile]
paths_test.append(path_test)
classes_test .append(str(className))
for dataFile in valSize:
path_val = augDir+'test/'+str(className)+'/'+temp_test[dataFile]
paths_val.append(path_val)
classes_val .append(str(className))
augTrain_df = pd.DataFrame({'fileNames': paths_train, 'labels': classes_train})
augTest_df = pd.DataFrame({'fileNames': paths_test, 'labels': classes_test})
augVal_df = pd.DataFrame({'fileNames': paths_val, 'labels': classes_val})
augTrain_df.head(10)
augTrain_df['labels'].hist(figsize=(10,5))
augTest_df['labels'].hist(figsize=(10,5))
augVal_df['labels'].hist(figsize=(10,5))
augTrainX=[]
augTrainY=[]
augTestX=[]
augTestY=[]
augValX=[]
augValY=[]
iter = -1
#read images from train set
for path in augTrain_df['fileNames']:
iter = iter + 1
#image = np.array((Image.open(path)))
image = cv2.imread(path)
augTrainX.append(image)
label = augTrain_df['labels'][iter]
augTrainY.append(label)
iter = -1
for path in augTest_df['fileNames']:
iter = iter + 1
#image = np.array((Image.open(path)))
image = cv2.imread(path)
augTestX.append(image)
augTestY.append(augTest_df['labels'][iter])
iter = -1
for path in augVal_df['fileNames']:
iter = iter + 1
#image = np.array((Image.open(path)))
image = cv2.imread(path)
augValX.append(image)
augValY.append(augVal_df['labels'][iter])
augTrainX = np.array(augTrainX)
augTestX = np.array(augTestX)
augValX = np.array(augValX)
augTrainX = augTrainX / 255
augTestX = augTestX / 255
augValX = augValX / 255
# OneHot Encode the Output
augTrainY = np_utils.to_categorical(augTrainY, 10)
augTestY = np_utils.to_categorical(augTestY, 10)
augValY = np_utils.to_categorical(augValY, 10)
train_datagen = ImageDataGenerator(rescale=1./255)
validation_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_dataframe(dataframe=augTrain_df,
x_col="fileNames",
y_col="labels",
batch_size=16,
class_mode="categorical",
color_mode="grayscale",
target_size=(100,100),
shuffle=True)
validation_generator = validation_datagen.flow_from_dataframe(dataframe=augVal_df,
x_col="fileNames",
y_col="labels",
batch_size=16,
class_mode="categorical",
color_mode="grayscale",
target_size=(100,100),
shuffle=True)
test_generator = test_datagen.flow_from_dataframe(dataframe=augTest_df,
x_col="fileNames",
y_col="labels",
batch_size=16,
class_mode="categorical",
color_mode="grayscale",
target_size=(100,100),
shuffle=True)
model_best = models.Sequential()
model_best.add(layers.Conv2D(64, (3,3), input_shape=(100, 100,1), padding='same', activation='relu'))
model_best.add(layers.BatchNormalization(momentum=0.1))
model_best.add(layers.MaxPooling2D(pool_size=(2,2)))
model_best.add(layers.Conv2D(32, (3,3), padding='same', activation='relu'))
model_best.add(layers.BatchNormalization(momentum=0.1))
model_best.add(layers.MaxPooling2D(pool_size=(2,2)))
model_best.add(layers.Conv2D(16, (3,3), padding='same', activation='relu'))
model_best.add(layers.BatchNormalization(momentum=0.1))
model_best.add(layers.MaxPooling2D(pool_size=(2,2)))
model_best.add(layers.Flatten())
model_best.add(layers.Dense(128, activation='relu'))
model_best.add(layers.Dropout(0.2))
model_best.add(layers.Dense(10, activation='softmax'))
model_best.summary()
print("[INFO] Model is training...")
time1 = time.time() # to measure time taken
# Compile the model
model_best.compile(loss='categorical_crossentropy',
optimizer=optimizers.Adam(learning_rate=1e-3),
metrics=['acc'])
history_best = model_best.fit(
train_generator,
steps_per_epoch=train_generator.samples/train_generator.batch_size ,
epochs=20,
validation_data=validation_generator,
validation_steps=validation_generator.samples/validation_generator.batch_size,
)
print('Time taken: {:.1f} seconds'.format(time.time() - time1)) # to measure time taken
print("[INFO] Model is trained.")
score = model_best.evaluate(test_generator)
print('===Testing loss and accuracy===')
print('Test loss: ', score[0])
print('Test accuracy: ', score[1])
import matplotlib.pyplot as plot
plot.plot(history_best.history['acc'])
plot.plot(history_best.history['val_acc'])
plot.title('Model accuracy')
plot.ylabel('Accuracy')
plot.xlabel('Epoch')
plot.legend(['Train', 'Vall'], loc='upper left')
plot.show()
plot.plot(history_best.history['loss'])
plot.plot(history_best.history['val_loss'])
plot.title('Model loss')
plot.ylabel('Loss')
plot.xlabel('Epoch')
plot.legend(['Train', 'Vall'], loc='upper left')
plot.show()
```
| github_jupyter |
<a href="https://colab.research.google.com/github/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_02_4_pandas_functional.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# T81-558: Applications of Deep Neural Networks
**Module 2: Python for Machine Learning**
* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)
* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).
# Module 2 Material
Main video lecture:
* Part 2.1: Introduction to Pandas [[Video]](https://www.youtube.com/watch?v=bN4UuCBdpZc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_02_1_python_pandas.ipynb)
* Part 2.2: Categorical Values [[Video]](https://www.youtube.com/watch?v=4a1odDpG0Ho&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_02_2_pandas_cat.ipynb)
* Part 2.3: Grouping, Sorting, and Shuffling in Python Pandas [[Video]](https://www.youtube.com/watch?v=YS4wm5gD8DM&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_02_3_pandas_grouping.ipynb)
* **Part 2.4: Using Apply and Map in Pandas for Keras** [[Video]](https://www.youtube.com/watch?v=XNCEZ4WaPBY&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_02_4_pandas_functional.ipynb)
* Part 2.5: Feature Engineering in Pandas for Deep Learning in Keras [[Video]](https://www.youtube.com/watch?v=BWPTj4_Mi9E&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_02_5_pandas_features.ipynb)
# Google CoLab Instructions
The following code ensures that Google CoLab is running the correct version of TensorFlow.
```
try:
%tensorflow_version 2.x
COLAB = True
print("Note: using Google CoLab")
except:
print("Note: not using Google CoLab")
COLAB = False
```
# Part 2.4: Apply and Map
If you've ever worked with Big Data or functional programming languages before, you've likely heard of map/reduce. Map and reduce are two functions that apply a task that you create to a data frame. Pandas supports functional programming techniques that allow you to use functions across an entire data frame. In addition to functions that you write, Pandas also provides several standard functions for use with data frames.
### Using Map with Dataframes
The map function allows you to transform a column by mapping certain values in that column to other values. Consider the Auto MPG data set that contains a field **origin_name** that holds a value between one and three that indicates the geographic origin of each car. We can see how to use the map function to transform this numeric origin into the textual name of each origin.
We will begin by loading the Auto MPG data set.
```
import os
import pandas as pd
import numpy as np
df = pd.read_csv(
"https://data.heatonresearch.com/data/t81-558/auto-mpg.csv",
na_values=['NA', '?'])
pd.set_option('display.max_columns', 7)
pd.set_option('display.max_rows', 5)
display(df)
```
The **map** method in Pandas operates on a single column. You provide **map** with a dictionary of values to transform the target column. The dictionary keys are matched against the values in the target column, and each match is replaced by the corresponding dictionary value. The following code shows how the map function can transform the numeric values of 1, 2, and 3 into the string values of North America, Europe and Asia.
```
# Apply the map
df['origin_name'] = df['origin'].map(
{1: 'North America', 2: 'Europe', 3: 'Asia'})
# Shuffle the data, so that we hopefully see
# more regions.
df = df.reindex(np.random.permutation(df.index))
# Display
pd.set_option('display.max_columns', 7)
pd.set_option('display.max_rows', 10)
display(df)
```
### Using Apply with Dataframes
The **apply** function of the data frame can run a function over the entire data frame. You can use either a traditional named function or a lambda function. Python will execute the provided function against each of the rows or columns in the data frame. The **axis** parameter specifies whether the function is run across rows or columns. For axis = 1, rows are used. The following code calculates a series called **efficiency** that is the **displacement** divided by **horsepower**.
```
efficiency = df.apply(lambda x: x['displacement']/x['horsepower'], axis=1)
display(efficiency[0:10])
```
You can now insert this series into the data frame, either as a new column or to replace an existing column. The following code inserts this new series into the data frame.
```
df['efficiency'] = efficiency
```
### Feature Engineering with Apply and Map
In this section, we will see how to calculate a complex feature using map, apply, and grouping. The data set is the following CSV:
* https://www.irs.gov/pub/irs-soi/16zpallagi.csv
This URL contains US Government public data for "SOI Tax Stats - Individual Income Tax Statistics." The entry point to the website is here:
* https://www.irs.gov/statistics/soi-tax-stats-individual-income-tax-statistics-2016-zip-code-data-soi
Documentation describing this data is at the above link.
For this feature, we will attempt to estimate the adjusted gross income (AGI) for each of the zip codes. The data file contains many columns; however, you will only use the following:
* STATE - The state (e.g., MO)
* zipcode - The zipcode (e.g. 63017)
* agi_stub - Six different brackets of annual income (1 through 6)
* N1 - The number of tax returns for each of the agi_stubs
Note, the file will have six rows for each zip code, for each of the agi_stub brackets. You can skip zip codes with 0 or 99999.
We will create an output CSV with these columns; however, only one row per zip code. Calculate a weighted average of the income brackets. For example, the following six rows are present for 63017:
|zipcode |agi_stub | N1 |
|--|--|-- |
|63017 |1 | 4710 |
|63017 |2 | 2780 |
|63017 |3 | 2130 |
|63017 |4 | 2010 |
|63017 |5 | 5240 |
|63017 |6 | 3510 |
We must combine these six rows into one. For privacy reasons, AGI's are broken out into 6 buckets. We need to combine the buckets and estimate the actual AGI of a zipcode. To do this, consider the values for N1:
* 1 = 1 to 25,000
* 2 = 25,000 to 50,000
* 3 = 50,000 to 75,000
* 4 = 75,000 to 100,000
* 5 = 100,000 to 200,000
* 6 = 200,000 or more
The median of each of these ranges is approximately:
* 1 = 12,500
* 2 = 37,500
* 3 = 62,500
* 4 = 87,500
* 5 = 112,500
* 6 = 212,500
Using this you can estimate 63017's average AGI as:
```
>>> totalCount = 4710 + 2780 + 2130 + 2010 + 5240 + 3510
>>> totalAGI = 4710 * 12500 + 2780 * 37500 + 2130 * 62500
+ 2010 * 87500 + 5240 * 112500 + 3510 * 212500
>>> print(totalAGI / totalCount)
88689.89205103042
```
We begin by reading in the government data.
```
import pandas as pd
df=pd.read_csv('https://www.irs.gov/pub/irs-soi/16zpallagi.csv')
```
First, we trim all zip codes that are either 0 or 99999. We also select the three fields that we need.
```
df=df.loc[(df['zipcode']!=0) & (df['zipcode']!=99999),
['STATE','zipcode','agi_stub','N1']]
pd.set_option('display.max_columns', 0)
pd.set_option('display.max_rows', 10)
display(df)
```
We replace each **agi_stub** value with the corresponding median income using the **map** function.
```
medians = {1:12500,2:37500,3:62500,4:87500,5:112500,6:212500}
df['agi_stub']=df.agi_stub.map(medians)
pd.set_option('display.max_columns', 0)
pd.set_option('display.max_rows', 10)
display(df)
```
Next, we group the data frame by zip code.
```
groups = df.groupby(by='zipcode')
```
The program applies a lambda across the groups to calculate the AGI estimate for each zip code.
```
df = pd.DataFrame(groups.apply(
lambda x:sum(x['N1']*x['agi_stub'])/sum(x['N1']))) \
.reset_index()
pd.set_option('display.max_columns', 0)
pd.set_option('display.max_rows', 10)
display(df)
```
We can now rename the new agi_estimate column.
```
df.columns = ['zipcode','agi_estimate']
pd.set_option('display.max_columns', 0)
pd.set_option('display.max_rows', 10)
display(df)
```
Finally, we check to see that our zip code of 63017 got the correct value.
```
df[ df['zipcode']==63017 ]
```
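As an optional sanity check (added here, not part of the original notebook), we could also compare the grouped result against the hand-computed estimate from the worked example above:
```
# The grouped estimate should match the hand-computed value of ~88,689.89
estimate = df.loc[df['zipcode'] == 63017, 'agi_estimate'].iloc[0]
assert abs(estimate - 88689.89205103042) < 1.0
print(estimate)
```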
| github_jupyter |
## Dependencies
```
import os
import cv2
import shutil
import random
import warnings
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from tensorflow import set_random_seed
from sklearn.utils import class_weight
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, cohen_kappa_score
from keras import backend as K
from keras.models import Model
from keras.utils import to_categorical
from keras import optimizers, applications
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import EarlyStopping, ReduceLROnPlateau, Callback, LearningRateScheduler
from keras.layers import Dense, Dropout, GlobalAveragePooling2D, Input
# Set seeds to make the experiment more reproducible.
def seed_everything(seed=0):
random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
set_random_seed(0)
seed = 0
seed_everything(seed)
%matplotlib inline
sns.set(style="whitegrid")
warnings.filterwarnings("ignore")
```
## Load data
```
hold_out_set = pd.read_csv('../input/aptos-data-split/hold-out.csv')
X_train = hold_out_set[hold_out_set['set'] == 'train']
X_val = hold_out_set[hold_out_set['set'] == 'validation']
test = pd.read_csv('../input/aptos2019-blindness-detection/test.csv')
print('Number of train samples: ', X_train.shape[0])
print('Number of validation samples: ', X_val.shape[0])
print('Number of test samples: ', test.shape[0])
# Preprocess data
X_train["id_code"] = X_train["id_code"].apply(lambda x: x + ".png")
X_val["id_code"] = X_val["id_code"].apply(lambda x: x + ".png")
test["id_code"] = test["id_code"].apply(lambda x: x + ".png")
X_train['diagnosis'] = X_train['diagnosis'].astype('str')
X_val['diagnosis'] = X_val['diagnosis'].astype('str')
display(X_train.head())
```
# Model parameters
```
# Model parameters
N_CLASSES = X_train['diagnosis'].nunique()
BATCH_SIZE = 16
EPOCHS = 40
WARMUP_EPOCHS = 5
LEARNING_RATE = 1e-4
WARMUP_LEARNING_RATE = 1e-3
HEIGHT = 320
WIDTH = 320
CHANNELS = 3
ES_PATIENCE = 5
RLROP_PATIENCE = 3
DECAY_DROP = 0.5
def kappa(y_true, y_pred, n_classes=5):
    # Simple, differentiable training metric based on the class distance
    # between predicted and true labels (tracked alongside accuracy).
    y_trues = K.cast(K.argmax(y_true), K.floatx())
    y_preds = K.cast(K.argmax(y_pred), K.floatx())
    n_samples = K.cast(K.shape(y_true)[0], K.floatx())
    distance = K.sum(K.abs(y_trues - y_preds))
    max_distance = n_classes - 1
    kappa_score = 1 - ((distance**2) / (n_samples * (max_distance**2)))
    return kappa_score
def step_decay(epoch):
lrate = 30e-5
if epoch > 3:
lrate = 15e-5
if epoch > 7:
lrate = 7.5e-5
if epoch > 11:
lrate = 3e-5
if epoch > 15:
lrate = 1e-5
return lrate
def focal_loss(y_true, y_pred):
    # Focal loss: down-weights easy, well-classified examples so that training
    # focuses on the harder ones; gamma controls how strong the down-weighting is.
    gamma = 2.0
    epsilon = K.epsilon()
    pt = y_pred * y_true + (1-y_pred) * (1-y_true)  # probability assigned to the true class
    pt = K.clip(pt, epsilon, 1-epsilon)
    CE = -K.log(pt)               # cross-entropy term
    FL = K.pow(1-pt, gamma) * CE  # focal modulation applied to the cross-entropy
    loss = K.sum(FL, axis=1)
    return loss
```
# Pre-procecess images
```
train_base_path = '../input/aptos2019-blindness-detection/train_images/'
test_base_path = '../input/aptos2019-blindness-detection/test_images/'
train_dest_path = 'base_dir/train_images/'
validation_dest_path = 'base_dir/validation_images/'
test_dest_path = 'base_dir/test_images/'
# Making sure directories don't exist
if os.path.exists(train_dest_path):
shutil.rmtree(train_dest_path)
if os.path.exists(validation_dest_path):
shutil.rmtree(validation_dest_path)
if os.path.exists(test_dest_path):
shutil.rmtree(test_dest_path)
# Creating train, validation and test directories
os.makedirs(train_dest_path)
os.makedirs(validation_dest_path)
os.makedirs(test_dest_path)
def crop_image(img, tol=7):
if img.ndim ==2:
mask = img>tol
return img[np.ix_(mask.any(1),mask.any(0))]
elif img.ndim==3:
gray_img = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
mask = gray_img>tol
check_shape = img[:,:,0][np.ix_(mask.any(1),mask.any(0))].shape[0]
if (check_shape == 0): # image is too dark so that we crop out everything,
return img # return original image
else:
img1=img[:,:,0][np.ix_(mask.any(1),mask.any(0))]
img2=img[:,:,1][np.ix_(mask.any(1),mask.any(0))]
img3=img[:,:,2][np.ix_(mask.any(1),mask.any(0))]
img = np.stack([img1,img2,img3],axis=-1)
return img
def circle_crop(img):
img = crop_image(img)
height, width, depth = img.shape
largest_side = np.max((height, width))
img = cv2.resize(img, (largest_side, largest_side))
height, width, depth = img.shape
x = width//2
y = height//2
r = np.amin((x, y))
circle_img = np.zeros((height, width), np.uint8)
cv2.circle(circle_img, (x, y), int(r), 1, thickness=-1)
img = cv2.bitwise_and(img, img, mask=circle_img)
img = crop_image(img)
return img
def preprocess_image(base_path, save_path, image_id, HEIGHT, WIDTH, sigmaX=10):
image = cv2.imread(base_path + image_id)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
image = circle_crop(image)
image = cv2.resize(image, (HEIGHT, WIDTH))
image = cv2.addWeighted(image, 4, cv2.GaussianBlur(image, (0,0), sigmaX), -4 , 128)
cv2.imwrite(save_path + image_id, image)
# Pre-process train set
for i, image_id in enumerate(X_train['id_code']):
preprocess_image(train_base_path, train_dest_path, image_id, HEIGHT, WIDTH)
# Pre-process validation set
for i, image_id in enumerate(X_val['id_code']):
preprocess_image(train_base_path, validation_dest_path, image_id, HEIGHT, WIDTH)
# Pre-process test set
for i, image_id in enumerate(test['id_code']):
preprocess_image(test_base_path, test_dest_path, image_id, HEIGHT, WIDTH)
```
# Data generator
```
train_datagen=ImageDataGenerator(rescale=1./255,
rotation_range=360,
horizontal_flip=True,
vertical_flip=True)
valid_datagen=ImageDataGenerator(rescale=1./255)
train_generator=train_datagen.flow_from_dataframe(
dataframe=X_train,
directory=train_dest_path,
x_col="id_code",
y_col="diagnosis",
class_mode="categorical",
batch_size=BATCH_SIZE,
target_size=(HEIGHT, WIDTH),
seed=seed)
valid_generator=valid_datagen.flow_from_dataframe(
dataframe=X_val,
directory=validation_dest_path,
x_col="id_code",
y_col="diagnosis",
class_mode="categorical",
batch_size=BATCH_SIZE,
target_size=(HEIGHT, WIDTH),
seed=seed)
test_generator=valid_datagen.flow_from_dataframe(
dataframe=test,
directory=test_dest_path,
x_col="id_code",
batch_size=1,
class_mode=None,
shuffle=False,
target_size=(HEIGHT, WIDTH),
seed=seed)
```
# Model
```
def create_model(input_shape, n_out):
input_tensor = Input(shape=input_shape)
base_model = applications.DenseNet169(weights=None,
include_top=False,
input_tensor=input_tensor)
base_model.load_weights('../input/keras-notop/densenet169_weights_tf_dim_ordering_tf_kernels_notop.h5')
x = GlobalAveragePooling2D()(base_model.output)
x = Dropout(0.5)(x)
x = Dense(2048, activation='relu')(x)
x = Dropout(0.5)(x)
final_output = Dense(n_out, activation='softmax', name='final_output')(x)
model = Model(input_tensor, final_output)
return model
```
# Train top layers
```
model = create_model(input_shape=(HEIGHT, WIDTH, CHANNELS), n_out=N_CLASSES)
for layer in model.layers:
layer.trainable = False
for i in range(-5, 0):
model.layers[i].trainable = True
class_weights = class_weight.compute_class_weight('balanced', np.unique(X_train['diagnosis'].astype('int').values), X_train['diagnosis'].astype('int').values)
metric_list = ["accuracy", kappa]
optimizer = optimizers.Adam(lr=WARMUP_LEARNING_RATE)
model.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=metric_list)
model.summary()
STEP_SIZE_TRAIN = train_generator.n//train_generator.batch_size
STEP_SIZE_VALID = valid_generator.n//valid_generator.batch_size
history_warmup = model.fit_generator(generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=valid_generator,
validation_steps=STEP_SIZE_VALID,
epochs=WARMUP_EPOCHS,
class_weight=class_weights,
verbose=1).history
```
# Fine-tune the complete model
```
for layer in model.layers:
layer.trainable = True
# lrstep = LearningRateScheduler(step_decay)
es = EarlyStopping(monitor='val_loss', mode='min', patience=ES_PATIENCE, restore_best_weights=True, verbose=1)
rlrop = ReduceLROnPlateau(monitor='val_loss', mode='min', patience=RLROP_PATIENCE, factor=DECAY_DROP, min_lr=1e-6, verbose=1)
callback_list = [es, rlrop]
optimizer = optimizers.Adam(lr=LEARNING_RATE)
model.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=metric_list)
model.summary()
history = model.fit_generator(generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=valid_generator,
validation_steps=STEP_SIZE_VALID,
epochs=EPOCHS,
callbacks=callback_list,
class_weight=class_weights,
verbose=1).history
```
# Model loss graph
```
sns.set_style("whitegrid")
fig, (ax1, ax2, ax3) = plt.subplots(3, 1, sharex='col', figsize=(20, 18))
ax1.plot(history['loss'], label='Train loss')
ax1.plot(history['val_loss'], label='Validation loss')
ax1.legend(loc='best')
ax1.set_title('Loss')
ax2.plot(history['acc'], label='Train accuracy')
ax2.plot(history['val_acc'], label='Validation accuracy')
ax2.legend(loc='best')
ax2.set_title('Accuracy')
ax3.plot(history['kappa'], label='Train kappa')
ax3.plot(history['val_kappa'], label='Validation kappa')
ax3.legend(loc='best')
ax3.set_title('Kappa')
plt.xlabel('Epochs')
sns.despine()
plt.show()
# Create empty arrays to keep the predictions and labels
lastFullTrainPred = np.empty((0, N_CLASSES))
lastFullTrainLabels = np.empty((0, N_CLASSES))
lastFullValPred = np.empty((0, N_CLASSES))
lastFullValLabels = np.empty((0, N_CLASSES))
# Add train predictions and labels
for i in range(STEP_SIZE_TRAIN+1):
im, lbl = next(train_generator)
scores = model.predict(im, batch_size=train_generator.batch_size)
lastFullTrainPred = np.append(lastFullTrainPred, scores, axis=0)
lastFullTrainLabels = np.append(lastFullTrainLabels, lbl, axis=0)
# Add validation predictions and labels
for i in range(STEP_SIZE_VALID+1):
im, lbl = next(valid_generator)
scores = model.predict(im, batch_size=valid_generator.batch_size)
lastFullValPred = np.append(lastFullValPred, scores, axis=0)
lastFullValLabels = np.append(lastFullValLabels, lbl, axis=0)
lastFullComPred = np.concatenate((lastFullTrainPred, lastFullValPred))
lastFullComLabels = np.concatenate((lastFullTrainLabels, lastFullValLabels))
train_preds = [np.argmax(pred) for pred in lastFullTrainPred]
train_labels = [np.argmax(label) for label in lastFullTrainLabels]
validation_preds = [np.argmax(pred) for pred in lastFullValPred]
validation_labels = [np.argmax(label) for label in lastFullValLabels]
complete_labels = [np.argmax(label) for label in lastFullComLabels]
```
# Model Evaluation
## Confusion Matrix
### Original thresholds
```
labels = ['0 - No DR', '1 - Mild', '2 - Moderate', '3 - Severe', '4 - Proliferative DR']
def plot_confusion_matrix(train, validation, labels=labels):
train_labels, train_preds = train
validation_labels, validation_preds = validation
fig, (ax1, ax2) = plt.subplots(1, 2, sharex='col', figsize=(24, 7))
train_cnf_matrix = confusion_matrix(train_labels, train_preds)
validation_cnf_matrix = confusion_matrix(validation_labels, validation_preds)
train_cnf_matrix_norm = train_cnf_matrix.astype('float') / train_cnf_matrix.sum(axis=1)[:, np.newaxis]
validation_cnf_matrix_norm = validation_cnf_matrix.astype('float') / validation_cnf_matrix.sum(axis=1)[:, np.newaxis]
train_df_cm = pd.DataFrame(train_cnf_matrix_norm, index=labels, columns=labels)
validation_df_cm = pd.DataFrame(validation_cnf_matrix_norm, index=labels, columns=labels)
sns.heatmap(train_df_cm, annot=True, fmt='.2f', cmap="Blues",ax=ax1).set_title('Train')
sns.heatmap(validation_df_cm, annot=True, fmt='.2f', cmap=sns.cubehelix_palette(8),ax=ax2).set_title('Validation')
plt.show()
plot_confusion_matrix((train_labels, train_preds), (validation_labels, validation_preds))
```
## Quadratic Weighted Kappa
```
def evaluate_model(train, validation):
train_labels, train_preds = train
validation_labels, validation_preds = validation
print("Train Cohen Kappa score: %.3f" % cohen_kappa_score(train_preds, train_labels, weights='quadratic'))
print("Validation Cohen Kappa score: %.3f" % cohen_kappa_score(validation_preds, validation_labels, weights='quadratic'))
print("Complete set Cohen Kappa score: %.3f" % cohen_kappa_score(train_preds+validation_preds, train_labels+validation_labels, weights='quadratic'))
evaluate_model((train_labels, train_preds), (validation_labels, validation_preds))
```
## Apply model to test set and output predictions
```
step_size = test_generator.n//test_generator.batch_size
test_generator.reset()
preds = model.predict_generator(test_generator, steps=step_size)
predictions = np.argmax(preds, axis=1)
results = pd.DataFrame({'id_code':test['id_code'], 'diagnosis':predictions})
results['id_code'] = results['id_code'].map(lambda x: str(x)[:-4])
# Cleaning created directories
if os.path.exists(train_dest_path):
shutil.rmtree(train_dest_path)
if os.path.exists(validation_dest_path):
shutil.rmtree(validation_dest_path)
if os.path.exists(test_dest_path):
shutil.rmtree(test_dest_path)
```
# Predictions class distribution
```
fig, ax = plt.subplots(figsize=(24, 8.7))
sns.countplot(x="diagnosis", data=results, palette="GnBu_d").set_title('Test')
sns.despine()
plt.show()
results.to_csv('submission.csv', index=False)
display(results.head())
```
| github_jupyter |
```
from gs_quant.data import Dataset
from gs_quant.markets.securities import Asset, AssetIdentifier, SecurityMaster
from gs_quant.timeseries import *
from gs_quant.target.instrument import FXOption, IRSwaption
from gs_quant.markets import PricingContext, HistoricalPricingContext, BackToTheFuturePricingContext
from gs_quant.risk import CarryScenario, MarketDataPattern, MarketDataShock, MarketDataShockBasedScenario, MarketDataShockType, CurveScenario
from gs_quant.markets.portfolio import Portfolio
from gs_quant.risk import IRAnnualImpliedVol
from gs_quant.timeseries import percentiles
from gs_quant.datetime import business_day_offset
import seaborn as sns
import pandas as pd
import matplotlib.pyplot as plt
from scipy import stats
import warnings
from datetime import date
warnings.filterwarnings('ignore')
sns.set(style="darkgrid", color_codes=True)
from gs_quant.session import GsSession
# external users should substitute their client id and secret; please skip this step if using internal jupyterhub
GsSession.use(client_id=None, client_secret=None, scopes=('run_analytics',))
```
In this notebook, we'll look at entry points for G10 vol, look for crosses with the largest downside sensitivity to SPX, indicatively price several structures and analyze their carry profile.
* [1: FX entry point vs richness](#1:-FX-entry-point-vs-richness)
* [2: Downside sensitivity to SPX](#2:-Downside-sensitivity-to-SPX)
* [3: AUDJPY conditional relationship with SPX](#3:-AUDJPY-conditional-relationship-with-SPX)
* [4: Price structures](#4:-Price-structures)
* [5: Analyse rates package](#5:-Analyse-rates-package)
### 1: FX entry point vs richness
Let's pull [GS FX Spot](https://marquee.gs.com/s/developer/datasets/FXSPOT_PREMIUM) and [GS FX Implied Volatility](https://marquee.gs.com/s/developer/datasets/FXIMPLIEDVOL_PREMIUM) and look at implied vs realized vol as well as current implied level as percentile relative to the last 2 years.
```
def format_df(data_dict):
df = pd.concat(data_dict, axis=1)
df.columns = data_dict.keys()
return df.fillna(method='ffill').dropna()
g10 = ['USDJPY', 'EURUSD', 'AUDUSD', 'GBPUSD', 'USDCAD', 'USDNOK', 'NZDUSD', 'USDSEK', 'USDCHF', 'AUDJPY']
start_date = date(2005, 8, 26)
end_date = business_day_offset(date.today(), -1, roll='preceding')
fxspot_dataset, fxvol_dataset = Dataset('FXSPOT_PREMIUM'), Dataset('FXIMPLIEDVOL_PREMIUM')
spot_data, impvol_data, spot_fx = {}, {}, {}
for cross in g10:
spot = fxspot_dataset.get_data(start_date, end_date, bbid=cross)[['spot']].drop_duplicates(keep='last')
spot_fx[cross] = spot['spot']
spot_data[cross] = volatility(spot['spot'], 63) # realized vol
vol = fxvol_dataset.get_data(start_date, end_date, bbid=cross, tenor='3m', deltaStrike='DN', location='NYC')[['impliedVolatility']]
impvol_data[cross] = vol.drop_duplicates(keep='last') * 100
spdata, ivdata = format_df(spot_data), format_df(impvol_data)
diff = ivdata.subtract(spdata).dropna()
_slice = ivdata['2018-09-01': '2020-09-08']
pct_rank = {}
for x in _slice.columns:
pct = percentiles(_slice[x])
pct_rank[x] = pct.iloc[-1]
for fx in pct_rank:
plt.scatter(pct_rank[fx], diff[fx]['2020-09-08'])
plt.legend(pct_rank.keys(),loc='best', bbox_to_anchor=(0.9, -0.13), ncol=3)
plt.xlabel('Percentile of Current Implied Vol')
plt.ylabel('Implied vs Realized Vol')
plt.title('Entry Point vs Richness')
plt.show()
```
### 2: Downside sensitivity to SPX
Let's now look at beta and correlation with SPX across G10.
```
spx_spot = Dataset('TREOD').get_data(start_date, end_date, bbid='SPX')[['closePrice']]
spx_spot = spx_spot.fillna(method='ffill').dropna()
df = pd.DataFrame(spx_spot)
#FX Spot data
fx_spots = format_df(spot_fx)
data = pd.concat([spx_spot, fx_spots], axis=1).dropna()
data.columns = ['SPX'] + g10
beta_spx, corr_spx = {}, {}
#calculate rolling 84d or 4m beta to S&P
for cross in g10:
beta_spx[cross] = beta(data[cross],data['SPX'], 84)
corr_spx[cross] = correlation(data['SPX'], data[cross], 84)
fig, axs = plt.subplots(5, 2, figsize=(18, 20))
for j in range(2):
for i in range(5):
color='tab:blue'
axs[i,j].plot(beta_spx[g10[i + j*5]], color=color)
axs[i,j].set_title(g10[i + j*5])
color='tab:blue'
axs[i,j].set_ylabel('Beta', color=color)
axs[i,j].plot(beta_spx[g10[i + j*5]], color=color)
ax2 = axs[i,j].twinx()
color = 'tab:orange'
ax2.plot(corr_spx[g10[i + j*5]], color=color)
ax2.set_ylabel('Correlation', color=color)
plt.show()
```
### 3: AUDJPY conditional relationship with SPX
Let's focus on AUDJPY and look at its relationship with SPX when SPX is significantly up and down.
```
# resample data to weekly from daily & get weekly returns
wk_data = data.resample('W-FRI').last()
rets = returns(wk_data, 1)
sns.set(style='white', color_codes=True)
spx_returns = [-.1, -.05, .05, .1]
r2 = lambda x,y: stats.pearsonr(x,y)[0]**2
betas = pd.DataFrame(index=spx_returns, columns=g10)
for ret in spx_returns:
dns = rets[rets.SPX <= ret].dropna() if ret < 0 else rets[rets.SPX >= ret].dropna()
j = sns.jointplot(x='SPX', y='AUDJPY', data=dns, kind='reg')
j.set_axis_labels('SPX with {}% Returns'.format(ret*100), 'AUDJPY')
j.fig.subplots_adjust(wspace=.02)
plt.show()
```
Let's use the beta for all S&P returns to price a structure
```
sns.jointplot(x='SPX', y='AUDJPY', data=rets, kind='reg', stat_func=r2)
```
### 4: Price structures
##### Let's now look at a few AUDJPY structures as potential hedges
* Buy 4m AUDJPY put using spx beta to size. Max loss limited to premium paid.
* Buy 4m AUDJPY put spread (4.2%/10.6% OTMS). Max loss limited to premium paid.
For more info on this trade, check out our market strats piece [here](https://marquee.gs.com/content/#/article/2020/08/28/gs-marketstrats-audjpy-as-us-election-hedge)
```
#buy 4m AUDJPY put
audjpy_put = FXOption(option_type='Put', pair='AUDJPY', strike_price= 's-4.2%', expiration_date='4m', buy_sell='Buy')
print('cost in bps: {:,.2f}'.format(audjpy_put.premium / audjpy_put.notional_amount * 1e4))
#buy 4m AUDJPY put spread (4.2%/10.6% OTMS)
from gs_quant.markets.portfolio import Portfolio
put1 = FXOption(option_type='Put', pair='AUDJPY', strike_price= 's-4.2%', expiration_date='4m', buy_sell='Buy')
put2 = FXOption(option_type='Put', pair='AUDJPY', strike_price= 's-10.6%', expiration_date='4m', buy_sell='Sell')
fx_package = Portfolio((put1, put2))
cost = put2.premium/put2.notional_amount - put1.premium/put1.notional_amount
print('cost in bps: {:,.2f}'.format(cost * 1e4))
```
##### ...And some rates ideas
* Sell straddle. Max loss unlimited.
* Sell 3m30y straddle, buy 2y30y straddle in a 0 pv package. Max loss unlimited.
```
leg = IRSwaption('Straddle', '30y', notional_currency='USD', expiration_date='3m', buy_sell='Sell')
print('PV in USD: {:,.2f}'.format(leg.dollar_price()))
leg1 = IRSwaption('Straddle', '30y', notional_currency='USD', expiration_date='3m', buy_sell='Sell',name='3m30y ATM Straddle')
leg2 = IRSwaption('Straddle', '30y', notional_currency='USD', expiration_date='2y', notional_amount='{}/pv'.format(leg1.price()), buy_sell='Buy', name = '2y30y ATM Straddle')
rates_package = Portfolio((leg1, leg2))
rates_package.resolve()
print('Package cost in USD: {:,.2f}'.format(rates_package.price().aggregate()))
print('PV Flat notionals ($$m):', round(leg1.notional_amount/1e6, 1),' by ',round(leg2.notional_amount/1e6, 1))
```
### 5: Analyse rates package
```
dates = pd.bdate_range(date(2020, 6, 8), leg1.expiration_date, freq='5B').date.tolist()
with BackToTheFuturePricingContext(dates=dates, roll_to_fwds=True):
future = rates_package.price()
rates_future = future.result().aggregate()
rates_future.plot(figsize=(10, 6), title='Historical PV and carry for rates package')
print('PV breakdown between legs:')
results = future.result().to_frame()
results /= 1e6
results.index=[leg1.name,leg2.name]
results.loc['Total'] = results.sum()
results.round(1)
```
Let's focus on the next 3m and how the calendar carries in different rates shocks.
```
dates = pd.bdate_range(date.today(), leg1.expiration_date, freq='5B').date.tolist()
shocked_pv = pd.DataFrame(columns=['Base', '5bp per week', '50bp instantaneous'], index=dates)
p1, p2, p3 = [], [], []
with PricingContext(is_batch=True):
for t, d in enumerate(dates):
with CarryScenario(date=d, roll_to_fwds=True):
p1.append(rates_package.price())
with MarketDataShockBasedScenario({MarketDataPattern('IR', 'USD'): MarketDataShock(MarketDataShockType.Absolute, t*0.0005)}):
p2.append(rates_package.price())
with MarketDataShockBasedScenario({MarketDataPattern('IR', 'USD'): MarketDataShock(MarketDataShockType.Absolute, 0.005)}):
p3.append(rates_package.price())
shocked_pv.Base = [p.result().aggregate() for p in p1]
shocked_pv['5bp per week'] = [p.result().aggregate() for p in p2]
shocked_pv['50bp instantaneous'] = [p.result().aggregate() for p in p3]
shocked_pv/=1e6
shocked_pv.round(1)
shocked_pv.plot(figsize=(10, 6), title='Carry + scenario analysis')
```
### Disclaimers
Scenarios/predictions: Simulated results are for illustrative purposes only. GS provides no assurance or guarantee that the strategy will operate or would have operated in the past in a manner consistent with the above analysis. Past performance figures are not a reliable indicator of future results.
Indicative Terms/Pricing Levels: This material may contain indicative terms only, including but not limited to pricing levels. There is no representation that any transaction can or could have been effected at such terms or prices. Proposed terms and conditions are for discussion purposes only. Finalized terms and conditions are subject to further discussion and negotiation.
www.goldmansachs.com/disclaimer/sales-and-trading-invest-rec-disclosures.html If you are not accessing this material via Marquee ContentStream, a list of the author's investment recommendations disseminated during the preceding 12 months and the proportion of the author's recommendations that are 'buy', 'hold', 'sell' or other over the previous 12 months is available by logging into Marquee ContentStream using the link below. Alternatively, if you do not have access to Marquee ContentStream, please contact your usual GS representative who will be able to provide this information to you.
Backtesting, Simulated Results, Sensitivity/Scenario Analysis or Spreadsheet Calculator or Model: There may be data presented herein that is solely for illustrative purposes and which may include among other things back testing, simulated results and scenario analyses. The information is based upon certain factors, assumptions and historical information that Goldman Sachs may in its discretion have considered appropriate, however, Goldman Sachs provides no assurance or guarantee that this product will operate or would have operated in the past in a manner consistent with these assumptions. In the event any of the assumptions used do not prove to be true, results are likely to vary materially from the examples shown herein. Additionally, the results may not reflect material economic and market factors, such as liquidity, transaction costs and other expenses which could reduce potential return.
OTC Derivatives Risk Disclosures:
Terms of the Transaction: To understand clearly the terms and conditions of any OTC derivative transaction you may enter into, you should carefully review the Master Agreement, including any related schedules, credit support documents, addenda and exhibits. You should not enter into OTC derivative transactions unless you understand the terms of the transaction you are entering into as well as the nature and extent of your risk exposure. You should also be satisfied that the OTC derivative transaction is appropriate for you in light of your circumstances and financial condition. You may be requested to post margin or collateral to support written OTC derivatives at levels consistent with the internal policies of Goldman Sachs.
Liquidity Risk: There is no public market for OTC derivative transactions and, therefore, it may be difficult or impossible to liquidate an existing position on favorable terms. Transfer Restrictions: OTC derivative transactions entered into with one or more affiliates of The Goldman Sachs Group, Inc. (Goldman Sachs) cannot be assigned or otherwise transferred without its prior written consent and, therefore, it may be impossible for you to transfer any OTC derivative transaction to a third party.
Conflict of Interests: Goldman Sachs may from time to time be an active participant on both sides of the market for the underlying securities, commodities, futures, options or any other derivative or instrument identical or related to those mentioned herein (together, "the Product"). Goldman Sachs at any time may have long or short positions in, or buy and sell Products (on a principal basis or otherwise) identical or related to those mentioned herein. Goldman Sachs hedging and trading activities may affect the value of the Products.
Counterparty Credit Risk: Because Goldman Sachs, may be obligated to make substantial payments to you as a condition of an OTC derivative transaction, you must evaluate the credit risk of doing business with Goldman Sachs or its affiliates.
Pricing and Valuation: The price of each OTC derivative transaction is individually negotiated between Goldman Sachs and each counterparty and Goldman Sachs does not represent or warrant that the prices for which it offers OTC derivative transactions are the best prices available, possibly making it difficult for you to establish what is a fair price for a particular OTC derivative transaction; The value or quoted price of the Product at any time, however, will reflect many factors and cannot be predicted. If Goldman Sachs makes a market in the offered Product, the price quoted by Goldman Sachs would reflect any changes in market conditions and other relevant factors, and the quoted price (and the value of the Product that Goldman Sachs will use for account statements or otherwise) could be higher or lower than the original price, and may be higher or lower than the value of the Product as determined by reference to pricing models used by Goldman Sachs. If at any time a third party dealer quotes a price to purchase the Product or otherwise values the Product, that price may be significantly different (higher or lower) than any price quoted by Goldman Sachs. Furthermore, if you sell the Product, you will likely be charged a commission for secondary market transactions, or the price will likely reflect a dealer discount. Goldman Sachs may conduct market making activities in the Product. To the extent Goldman Sachs makes a market, any price quoted for the OTC derivative transactions, Goldman Sachs may differ significantly from (i) their value determined by reference to Goldman Sachs pricing models and (ii) any price quoted by a third party. The market price of the OTC derivative transaction may be influenced by many unpredictable factors, including economic conditions, the creditworthiness of Goldman Sachs, the value of any underlyers, and certain actions taken by Goldman Sachs.
Market Making, Investing and Lending: Goldman Sachs engages in market making, investing and lending businesses for its own account and the accounts of its affiliates in the same or similar instruments underlying OTC derivative transactions (including such trading as Goldman Sachs deems appropriate in its sole discretion to hedge its market risk in any OTC derivative transaction whether between Goldman Sachs and you or with third parties) and such trading may affect the value of an OTC derivative transaction.
Early Termination Payments: The provisions of an OTC Derivative Transaction may allow for early termination and, in such cases, either you or Goldman Sachs may be required to make a potentially significant termination payment depending upon whether the OTC Derivative Transaction is in-the-money to Goldman Sachs or you at the time of termination. Indexes: Goldman Sachs does not warrant, and takes no responsibility for, the structure, method of computation or publication of any currency exchange rates, interest rates, indexes of such rates, or credit, equity or other indexes, unless Goldman Sachs specifically advises you otherwise.
Risk Disclosure Regarding futures, options, equity swaps, and other derivatives as well as non-investment-grade securities and ADRs: Please ensure that you have read and understood the current options, futures and security futures disclosure document before entering into any such transactions. Current United States listed options, futures and security futures disclosure documents are available from our sales representatives or at http://www.theocc.com/components/docs/riskstoc.pdf, http://www.goldmansachs.com/disclosures/risk-disclosure-for-futures.pdf and https://www.nfa.futures.org/investors/investor-resources/files/security-futures-disclosure.pdf, respectively. Certain transactions - including those involving futures, options, equity swaps, and other derivatives as well as non-investment-grade securities - give rise to substantial risk and are not available to nor suitable for all investors. If you have any questions about whether you are eligible to enter into these transactions with Goldman Sachs, please contact your sales representative. Foreign-currency-denominated securities are subject to fluctuations in exchange rates that could have an adverse effect on the value or price of, or income derived from, the investment. In addition, investors in securities such as ADRs, the values of which are influenced by foreign currencies, effectively assume currency risk.
Options Risk Disclosures: Options may trade at a value other than that which may be inferred from the current levels of interest rates, dividends (if applicable) and the underlier due to other factors including, but not limited to, expectations of future levels of interest rates, future levels of dividends and the volatility of the underlier at any time prior to maturity. Note: Options involve risk and are not suitable for all investors. Please ensure that you have read and understood the current options disclosure document before entering into any standardized options transactions. United States listed options disclosure documents are available from our sales representatives or at http://theocc.com/publications/risks/riskstoc.pdf. A secondary market may not be available for all options. Transaction costs may be a significant factor in option strategies calling for multiple purchases and sales of options, such as spreads. When purchasing long options an investor may lose their entire investment and when selling uncovered options the risk is potentially unlimited. Supporting documentation for any comparisons, recommendations, statistics, technical data, or other similar information will be supplied upon request.
This material is for the private information of the recipient only. This material is not sponsored, endorsed, sold or promoted by any sponsor or provider of an index referred herein (each, an "Index Provider"). GS does not have any affiliation with or control over the Index Providers or any control over the computation, composition or dissemination of the indices. While GS will obtain information from publicly available sources it believes reliable, it will not independently verify this information. Accordingly, GS shall have no liability, contingent or otherwise, to the user or to third parties, for the quality, accuracy, timeliness, continued availability or completeness of the data nor for any special, indirect, incidental or consequential damages which may be incurred or experienced because of the use of the data made available herein, even if GS has been advised of the possibility of such damages.
Standard & Poor's ® and S&P ® are registered trademarks of The McGraw-Hill Companies, Inc. and S&P GSCI™ is a trademark of The McGraw-Hill Companies, Inc. and have been licensed for use by the Issuer. This Product (the "Product") is not sponsored, endorsed, sold or promoted by S&P and S&P makes no representation, warranty or condition regarding the advisability of investing in the Product.
Notice to Brazilian Investors
Marquee is not meant for the general public in Brazil. The services or products provided by or through Marquee, at any time, may not be offered or sold to the general public in Brazil. You have received a password granting access to Marquee exclusively due to your existing relationship with a GS business located in Brazil. The selection and engagement with any of the offered services or products through Marquee, at any time, will be carried out directly by you. Before acting to implement any chosen service or products, provided by or through Marquee you should consider, at your sole discretion, whether it is suitable for your particular circumstances and, if necessary, seek professional advice. Any steps necessary in order to implement the chosen service or product, including but not limited to remittance of funds, shall be carried out at your discretion. Accordingly, such services and products have not been and will not be publicly issued, placed, distributed, offered or negotiated in the Brazilian capital markets and, as a result, they have not been and will not be registered with the Brazilian Securities and Exchange Commission (Comissão de Valores Mobiliários), nor have they been submitted to the foregoing agency for approval. Documents relating to such services or products, as well as the information contained therein, may not be supplied to the general public in Brazil, as the offering of such services or products is not a public offering in Brazil, nor used in connection with any offer for subscription or sale of securities to the general public in Brazil.
The offer of any securities mentioned in this message may not be made to the general public in Brazil. Accordingly, any such securities have not been nor will they be registered with the Brazilian Securities and Exchange Commission (Comissão de Valores Mobiliários) nor has any offer been submitted to the foregoing agency for approval. Documents relating to the offer, as well as the information contained therein, may not be supplied to the public in Brazil, as the offer is not a public offering of securities in Brazil. These terms will apply on every access to Marquee.
Ouvidoria Goldman Sachs Brasil: 0800 727 5764 e/ou ouvidoriagoldmansachs@gs.com
Horário de funcionamento: segunda-feira à sexta-feira (exceto feriados), das 9hs às 18hs.
Ombudsman Goldman Sachs Brazil: 0800 727 5764 and / or ouvidoriagoldmansachs@gs.com
Available Weekdays (except holidays), from 9 am to 6 pm.
| github_jupyter |
# 💡 Solutions
Before trying out these solutions, please start the [gqlalchemy-workshop notebook](../workshop/gqlalchemy-workshop.ipynb) to import all data. Also, this solutions manual is here to help you out, and it is recommended you try solving the exercises first by yourself.
## Exercise 1
**Find out how many genres there are in the database.**
The correct Cypher query is:
```
MATCH (g:Genre)
RETURN count(g) AS num_of_genres;
```
You can try it out in Memgraph Lab at `localhost:3000`.
With GQLAlchemy's query builder, the solution is:
```
from gqlalchemy import match
total_genres = (
match()
.node(labels="Genre", variable="g")
.return_({"count(g)": "num_of_genres"})
.execute()
)
results = list(total_genres)
for result in results:
print(result["num_of_genres"])
```
## Exercise 2
**Find out to how many genres movie 'Matrix, The (1999)' belongs to.**
The correct Cypher query is:
```
MATCH (:Movie {title: 'Matrix, The (1999)'})-[:OF_GENRE]->(g:Genre)
RETURN count(g) AS num_of_genres;
```
You can try it out in Memgraph Lab at `localhost:3000`.
With GQLAlchemy's query builder, the solution is:
```
matrix = (
match()
.node(labels="Movie", variable="m")
.to("OF_GENRE")
.node(labels="Genre", variable="g")
.where("m.title", "=", "Matrix, The (1999)")
.return_({"count(g)": "num_of_genres"})
.execute()
)
results = list(matrix)
for result in results:
print(result["num_of_genres"])
```
## Exercise 3
**Find out the title of the movies that the user with `id` 1 rated.**
The correct Cypher query is:
```
MATCH (:User {id: 1})-[:RATED]->(m:Movie)
RETURN m.title;
```
You can try it out in Memgraph Lab at `localhost:3000`.
With GQLAlchemy's query builder, the solution is:
```
movies = (
match()
.node(labels="User", variable="u")
.to("RATED")
.node(labels="Movie", variable="m")
.where("u.id", "=", 1)
.return_({"m.title": "movie"})
.execute()
)
results = list(movies)
for result in results:
print(result["movie"])
```
## Exercise 4
**List 15 movies of 'Documentary' and 'Comedy' genres and sort them by title descending.**
The correct Cypher query is:
```
MATCH (m:Movie)-[:OF_GENRE]->(:Genre {name: "Documentary"})
MATCH (m)-[:OF_GENRE]->(:Genre {name: "Comedy"})
RETURN m.title
ORDER BY m.title DESC
LIMIT 15;
```
You can try it out in Memgraph Lab at `localhost:3000`.
With GQLAlchemy's query builder, the solution is:
```
movies = (
match()
.node(labels="Movie", variable="m")
.to("OF_GENRE")
.node(labels="Genre", variable="g1")
.where("g1.name", "=", "Documentary")
.match()
.node(labels="Movie", variable="m")
.to("OF_GENRE")
.node(labels="Genre", variable="g2")
.where("g2.name", "=", "Comedy")
.return_({"m.title": "movie"})
.order_by("m.title DESC")
.limit(15)
.execute()
)
results = list(movies)
for result in results:
print(result["movie"])
```
## Exercise 5
**Find out the minimum rating of the 'Star Wars: Episode I - The Phantom Menace (1999)' movie.**
The correct Cypher query is:
```
MATCH (:User)-[r:RATED]->(:Movie {title: 'Star Wars: Episode I - The Phantom Menace (1999)'})
RETURN min(r.rating);
```
You can try it out in Memgraph Lab at `localhost:3000`.
With GQLAlchemy's query builder, the solution is:
```
rating = (
match()
.node(labels="User")
.to("RATED", variable="r")
.node(labels="Movie", variable="m")
.where("m.title", "=", "Star Wars: Episode I - The Phantom Menace (1999)")
.return_({"min(r.rating)": "min_rating"})
.execute()
)
results = list(rating)
for result in results:
print(result["min_rating"])
```
And that's it! If you have any issues with this notebook, feel free to open an issue on the [GitHub repository](https://github.com/pyladiesams/graphdbs-gqlalchemy-beginner-mar2022), or [join the Discord server](https://discord.gg/memgraph) and get your answer instantly. If you are interested in the Cypher query language and want to learn more, sign up for the free [Cypher Email Course](https://memgraph.com/learn-cypher-query-language).
| github_jupyter |
```
# "PGA Tour Wins Classification"
```
Can We Predict If a PGA Tour Player Won a Tournament in a Given Year?
Golf is picking up popularity, so I thought it would be interesting to focus my project here. I set out to find what sets apart the best golfers from the rest.
I decided to explore their statistics and to see if I could predict which golfers would win in a given year. My original dataset was found on Kaggle, and the data was scraped from the PGA Tour website.
From this data, I performed an exploratory data analysis to explore the distribution of players on numerous aspects of the game, discover outliers, and further explore how the game has changed from 2010 to 2018. I also utilized numerous supervised machine learning models to predict a golfer's earnings and wins.
To predict the golfer's wins, I used classification methods such as logistic regression and Random Forest classification. The best performance came from the Random Forest classification method.
1. The Data
pgaTourData.csv contains 1674 rows and 18 columns. Each row indicates a golfer's performance for that year.
```
# Player Name: Name of the golfer
# Rounds: The number of games that a player played
# Fairway Percentage: The percentage of time a tee shot lands on the fairway
# Year: The year in which the statistic was collected
# Avg Distance: The average distance of the tee-shot
# gir: (Green in Regulation) is met if any part of the ball is touching the putting surface while the number of strokes taken is at least two fewer than par
# Average Putts: The average number of strokes taken on the green
# Average Scrambling: Scrambling is when a player misses the green in regulation, but still makes par or better on a hole
# Average Score: Average Score is the average of all the scores a player has played in that year
# Points: The number of FedExCup points a player earned in that year
# Wins: The number of competition a player has won in that year
# Top 10: The number of competitions where a player has placed in the Top 10
# Average SG Putts: Strokes gained: putting measures how many strokes a player gains (or loses) on the greens
# Average SG Total: The Off-the-tee + approach-the-green + around-the-green + putting statistics combined
# SG:OTT: Strokes gained: off-the-tee measures player performance off the tee on all par-4s and par-5s
# SG:APR: Strokes gained: approach-the-green measures player performance on approach shots
# SG:ARG: Strokes gained: around-the-green measures player performance on any shot within 30 yards of the edge of the green
# Money: The amount of prize money a player has earned from tournaments
#collapse
# importing packages
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# Importing the data
df = pd.read_csv('pgaTourData.csv')
# Examining the first 5 data
df.head()
#collapse
df.info()
#collapse
df.shape
```
2. Data Cleaning
After looking at the dataframe, the data needs to be cleaned:
- For the columns Top 10 and Wins, convert the NaNs to 0s
- Change Top 10 and Wins into ints
- Drop NaN values for players who do not have the full statistics
- Change the column Rounds into an int
- Change Points to an int
- Remove the dollar sign ($) and commas in the column Money
```
# Replace NaN with 0 in Top 10
df['Top 10'].fillna(0, inplace=True)
df['Top 10'] = df['Top 10'].astype(int)
# Replace NaN with 0 in # of wins
df['Wins'].fillna(0, inplace=True)
df['Wins'] = df['Wins'].astype(int)
# Drop NaN values
df.dropna(axis = 0, inplace=True)
# Change Rounds to int
df['Rounds'] = df['Rounds'].astype(int)
# Change Points to int
df['Points'] = df['Points'].apply(lambda x: x.replace(',',''))
df['Points'] = df['Points'].astype(int)
# Remove the $ and commas in money
df['Money'] = df['Money'].apply(lambda x: x.replace('$',''))
df['Money'] = df['Money'].apply(lambda x: x.replace(',',''))
df['Money'] = df['Money'].astype(float)
#collapse
df.info()
#collapse
df.describe()
```
3. Exploratory Data Analysis
```
#collapse_output
# Looking at the distribution of data
f, ax = plt.subplots(nrows = 6, ncols = 3, figsize=(20,20))
distribution = df.loc[:,df.columns!='Player Name'].columns
rows = 0
cols = 0
for i, column in enumerate(distribution):
p = sns.distplot(df[column], ax=ax[rows][cols])
cols += 1
if cols == 3:
cols = 0
rows += 1
```
From the distributions plotted, most of the graphs are normally distributed. However, we can observe that Money, Points, Wins, and Top 10s are all skewed to the right. This can be explained by the separation between the best players and the average PGA Tour player: the best players have multiple Top 10 placings and wins that allow them to earn more from tournaments, while the average player has no wins and only a few Top 10 placings, which limits how much he can earn.
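As a quick, optional check (not part of the original analysis), pandas' `skew()` quantifies this: large positive values confirm the long right tails of these columns.
```
# Positive skewness values confirm the right skew noted above
print(df[['Money', 'Points', 'Wins', 'Top 10']].skew())
```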
```
#collapse_output
# Looking at the number of players with Wins for each year
win = df.groupby('Year')['Wins'].value_counts()
win = win.unstack()
win.fillna(0, inplace=True)
# Converting win into ints
win = win.astype(int)
print(win)
```
From this table, we can see that most players end the year without a win. It's pretty rare to find a player that has won more than once!
```
# Looking at the percentage of players without a win in that year
players = win.apply(lambda x: np.sum(x), axis=1)
percent_no_win = win[0]/players
percent_no_win = percent_no_win*100
print(percent_no_win)
#collapse_output
# Plotting percentage of players without a win each year
fig, ax = plt.subplots()
bar_width = 0.8
opacity = 0.7
index = np.arange(2010, 2019)
plt.bar(index, percent_no_win, bar_width, alpha = opacity)
plt.xticks(index)
plt.xlabel('Year')
plt.ylabel('%')
plt.title('Percentage of Players without a Win')
```
From the bar chart above, we can observe that the percentage of players without a win hovers around 80%, with very little year-to-year variation over the period shown.
```
#collapse_output
# Plotting the number of wins on a bar chart
fig, ax = plt.subplots()
index = np.arange(2010, 2019)
bar_width = 0.2
opacity = 0.7
def plot_bar(index, win, labels):
plt.bar(index, win, bar_width, alpha=opacity, label=labels)
# Plotting the bars
rects = plot_bar(index, win[0], labels = '0 Wins')
rects1 = plot_bar(index + bar_width, win[1], labels = '1 Wins')
rects2 = plot_bar(index + bar_width*2, win[2], labels = '2 Wins')
rects3 = plot_bar(index + bar_width*3, win[3], labels = '3 Wins')
rects4 = plot_bar(index + bar_width*4, win[4], labels = '4 Wins')
rects5 = plot_bar(index + bar_width*5, win[5], labels = '5 Wins')
plt.xticks(index + bar_width, index)
plt.xlabel('Year')
plt.ylabel('Number of Players')
plt.title('Distribution of Wins each Year')
plt.legend()
```
By looking at the distribution of Wins each year, we can see that it is rare for most players to even win a tournament in the PGA Tour. Majority of players do not win, and a very few number of players win more than once a year.
```
# Percentage of people who did not place in the top 10 each year
top10 = df.groupby('Year')['Top 10'].value_counts()
top10 = top10.unstack()
top10.fillna(0, inplace=True)
players = top10.apply(lambda x: np.sum(x), axis=1)
no_top10 = top10[0]/players * 100
print(no_top10)
```
Looking at the percentage of players that did not place in the Top 10 by year, we can observe that only approximately 20% of players failed to record a Top 10 finish. In addition, the range of these yearly percentages is only 9.47 percentage points, so this statistic does not vary much from year to year (the quick check below computes this range).
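For completeness, the spread quoted above can be computed directly (an added check, not in the original notebook):
```
# Range (max minus min) of the yearly percentages of players without a Top 10
print(no_top10.max() - no_top10.min())
```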
```
# Who are some of the longest hitters
distance = df[['Year','Player Name','Avg Distance']].copy()
distance.sort_values(by='Avg Distance', inplace=True, ascending=False)
print(distance.head())
```
Rory McIlroy is one of the longest hitters in the game, setting the average driver distance to be 319.7 yards in 2018. He was also the longest hitter in 2017 with an average of 316.7 yards.
```
# Who made the most money
money_ranking = df[['Year','Player Name','Money']].copy()
money_ranking.sort_values(by='Money', inplace=True, ascending=False)
print(money_ranking.head())
```
We can see that Jordan Spieth has made the most amount of money in a year, earning a total of 12 million dollars in 2015.
```
#collapse_output
# Who made the most money each year
money_rank = money_ranking.groupby('Year')['Money'].max()
money_rank = pd.DataFrame(money_rank)
indexs = np.arange(2010, 2019)
names = []
for i in range(money_rank.shape[0]):
temp = df.loc[df['Money'] == money_rank.iloc[i,0],'Player Name']
names.append(str(temp.values[0]))
money_rank['Player Name'] = names
print(money_rank)
```
With this table, we can examine the earnings of each player by year. Some of the most notable were Jordan Speith's earning of 12 million dollars and Justin Thomas earning the most money in both 2017 and 2018.
```
#collapse_output
# Plot the correlation matrix between variables
corr = df.corr()
sns.heatmap(corr,
xticklabels=corr.columns.values,
yticklabels=corr.columns.values,
cmap='coolwarm')
df.corr()['Wins']
```
From the correlation matrix, we can observe that Money is highly correlated to wins along with the FedExCup Points. We can also observe that the fairway percentage, year, and rounds are not correlated to Wins.
4. Machine Learning Model (Classification)
To predict winners, I used multiple machine learning models to explore which models could accurately classify if a player is going to win in that year.
To measure the models, I used the Receiver Operating Characteristic Area Under the Curve (ROC AUC). The ROC AUC tells us how capable the model is at distinguishing players with a win. In addition, as the data is skewed, with roughly 83% of players having no wins in a given year, ROC AUC is a much better metric than the accuracy of the model.
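To make this concrete, here is a small illustrative baseline (added for this write-up, not part of the original notebook): a classifier that always predicts "no win" scores high on accuracy purely because of the class imbalance, while its ROC AUC stays at 0.5, showing no real skill.
```
# Majority-class baseline: high accuracy from imbalance alone, ROC AUC of 0.5
from sklearn.dummy import DummyClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

X_base = df.drop(['Player Name', 'Wins'], axis=1)
y_base = df['Wins'].apply(lambda x: 1 if x > 0 else 0)
X_tr, X_te, y_tr, y_te = train_test_split(X_base, y_base, random_state=10)
dummy = DummyClassifier(strategy='most_frequent').fit(X_tr, y_tr)
print('Baseline accuracy: {:.2f}'.format(dummy.score(X_te, y_te)))
print('Baseline ROC AUC: {:.2f}'.format(roc_auc_score(y_te, dummy.predict(X_te))))
```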
```
#collapse
# Importing the Machine Learning modules
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score
from sklearn.metrics import confusion_matrix
from sklearn.feature_selection import RFE
from sklearn.metrics import classification_report
from sklearn.preprocessing import PolynomialFeatures
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import MinMaxScaler
```
Preparing the Data for Classification
We know from the calculation above that the data for wins is skewed: roughly 83% of player-seasons end without a win, even before any modelling. Therefore, we will be utilizing ROC AUC as the metric for these models.
```
# Adding the Winner column to determine if the player won that year or not
df['Winner'] = df['Wins'].apply(lambda x: 1 if x>0 else 0)
# New DataFrame
ml_df = df.copy()
# Y value for machine learning is the Winner column
target = df['Winner']
# Removing the columns Player Name, Wins, and Winner from the dataframe to avoid leakage
ml_df.drop(['Player Name','Wins','Winner'], axis=1, inplace=True)
print(ml_df.head())
## Logistic Regression Baseline
per_no_win = target.value_counts()[0] / (target.value_counts()[0] + target.value_counts()[1])
per_no_win = per_no_win.round(4)*100
print(str(per_no_win)+str('%'))
#collapse_show
# Function for the logisitic regression
def log_reg(X, y):
X_train, X_test, y_train, y_test = train_test_split(X, y,
random_state = 10)
clf = LogisticRegression().fit(X_train, y_train)
y_pred = clf.predict(X_test)
print('Accuracy of Logistic regression classifier on training set: {:.2f}'
.format(clf.score(X_train, y_train)))
print('Accuracy of Logistic regression classifier on test set: {:.2f}'
.format(clf.score(X_test, y_test)))
cf_mat = confusion_matrix(y_test, y_pred)
confusion = pd.DataFrame(data = cf_mat)
print(confusion)
print(classification_report(y_test, y_pred))
# Returning the 5 important features
#rfe = RFE(clf, 5)
# rfe = rfe.fit(X, y)
# print('Feature Importance')
# print(X.columns[rfe.ranking_ == 1].values)
print('ROC AUC Score: {:.2f}'.format(roc_auc_score(y_test, y_pred)))
#collapse_show
log_reg(ml_df, target)
```
From the logistic regression, we got an accuracy of 0.9 on the training set and an accuracy of 0.91 on the test set. This was surprisingly accurate for a first run. However, the ROC AUC score of 0.78 could be improved. Therefore, I decided to add more features as a way of possibly improving the model.
```
## Feature Engineering
# Adding Domain Features
ml_d = ml_df.copy()
# Top 10 / Money might give us a better understanding on how well they placed in the top 10
ml_d['Top10perMoney'] = ml_d['Top 10'] / ml_d['Money']
# Avg Distance / Fairway Percentage to give us a ratio that determines how accurate and far a player hits
ml_d['DistanceperFairway'] = ml_d['Avg Distance'] / ml_d['Fairway Percentage']
# Money / Rounds to see on average how much money they would make playing a round of golf
ml_d['MoneyperRound'] = ml_d['Money'] / ml_d['Rounds']
#collapse_show
log_reg(ml_d, target)
#collapse_show
# Adding Polynomial Features to the ml_df
mldf2 = ml_df.copy()
poly = PolynomialFeatures(2)
poly = poly.fit(mldf2)
poly_feature = poly.transform(mldf2)
print(poly_feature.shape)
# Creating a DataFrame with the polynomial features
poly_feature = pd.DataFrame(poly_feature, columns = poly.get_feature_names(ml_df.columns))
print(poly_feature.head())
#collapse_show
log_reg(poly_feature, target)
```
From feature engineering, there were no improvements in the ROC AUC score. In fact, as I added more features, the accuracy and the ROC AUC score decreased. This could signal that another machine learning algorithm might better predict winners.
```
#collapse_show
## Random Forest Model
def random_forest(X, y):
X_train, X_test, y_train, y_test = train_test_split(X, y,
random_state = 10)
clf = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)
y_pred = clf.predict(X_test)
print('Accuracy of Random Forest classifier on training set: {:.2f}'
.format(clf.score(X_train, y_train)))
print('Accuracy of Random Forest classifier on test set: {:.2f}'
.format(clf.score(X_test, y_test)))
cf_mat = confusion_matrix(y_test, y_pred)
confusion = pd.DataFrame(data = cf_mat)
print(confusion)
print(classification_report(y_test, y_pred))
# Returning the 5 important features
rfe = RFE(clf, 5)
rfe = rfe.fit(X, y)
print('Feature Importance')
print(X.columns[rfe.ranking_ == 1].values)
print('ROC AUC Score: {:.2f}'.format(roc_auc_score(y_test, y_pred)))
#collapse_show
random_forest(ml_df, target)
#collapse_show
random_forest(ml_d, target)
#collapse_show
random_forest(poly_feature, target)
```
The Random Forest model scored highest, obtaining an ROC AUC of 0.89. With this, we observed that the Random Forest model could accurately distinguish players with and without a win.
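The helper function above does not return the fitted classifier, so as an illustrative follow-up (added here, not in the original notebook) we can refit a random forest on the full feature set and rank the features by importance:
```
# Refit a random forest and inspect which features drive the classification
from sklearn.ensemble import RandomForestClassifier

rf = RandomForestClassifier(n_estimators=200, random_state=10).fit(ml_df, target)
importances = pd.Series(rf.feature_importances_, index=ml_df.columns)
print(importances.sort_values(ascending=False))
```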
6. Conclusion
It's been interesting to learn about the aspects of the game that differentiate the winners from the average PGA Tour player. For example, we can see that fairway percentage and greens in regulation do not seem to contribute as much to a player's wins. However, all of the strokes gained statistics contribute strongly to wins for these players. It was interesting to see which aspects of the game the professionals should put their time into. This also gave me the idea of tracking my personal golf statistics, so that I could compare them to the pros and find the areas of my game that need the most improvement.
Machine Learning Model
I've been able to examine the data of PGA Tour players and classify whether a player won in a given year. With the random forest classification model, I was able to achieve an ROC AUC of 0.89 and an accuracy of 0.95 on the test set, a significant improvement over the logistic regression's ROC AUC of 0.78 and accuracy of 0.91. Because the data is skewed, with approximately 80% of players not earning a win, the primary measure of the model was ROC AUC, and it improved from 0.78 to 0.89 by trying different models and adding domain and polynomial features.
The End!!
| github_jupyter |
# Minimum spanning trees
*Selected Topics in Mathematical Optimization*
**Michiel Stock** ([email](michiel.stock@ugent.be))

```
import matplotlib.pyplot as plt
%matplotlib inline
from minimumspanningtrees import red, green, blue, orange, yellow
```
## Graphs in python
Consider the following example graph:

This graph can be represented using an *adjacency list*. We do this using a `dict`. Every vertex is a key with the adjacent vertices given as a `set` containing tuples `(weight, neighbor)`. The weight comes first because this makes it easy to compare the weights of two edges. Note that for every ingoing edge there is also an outgoing edge: this is an undirected graph.
```
graph = {
'A' : set([(2, 'B'), (3, 'D')]),
'B' : set([(2, 'A'), (1, 'C'), (2, 'E')]),
'C' : set([(1, 'B'), (2, 'D'), (1, 'E')]),
'D' : set([(2, 'C'), (3, 'A'), (3, 'E')]),
'E' : set([(2, 'B'), (1, 'C'), (3, 'D')])
}
```
Sometimes we will use an *edge list*, i.e. a list of (weighted) edges. This is often a more compact way of storing a graph. The edge list is given below. Note that every edge again appears twice: both the in- and outgoing edge are included.
```
edges = [
(2, 'B', 'A'),
(3, 'D', 'A'),
(2, 'C', 'D'),
(3, 'A', 'D'),
(3, 'E', 'D'),
(2, 'B', 'E'),
(3, 'D', 'E'),
(1, 'C', 'E'),
(2, 'E', 'B'),
(2, 'A', 'B'),
(1, 'C', 'B'),
(1, 'E', 'C'),
(1, 'B', 'C'),
(2, 'D', 'C')]
```
We can easily turn one representation into the other (with a time complexity proportional to the number of edges) using the provided functions `edges_to_adj_list` and `adj_list_to_edges`.
```
from minimumspanningtrees import edges_to_adj_list, adj_list_to_edges
adj_list_to_edges(graph)
edges_to_adj_list(edges)
```
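The implementations themselves live in the accompanying `minimumspanningtrees` module and are not shown in this notebook. A minimal sketch of how such converters could look, inferring the tuple order `(weight, neighbor, vertex)` from the example output above, is:
```
def adj_list_to_edges_sketch(adj_list):
    """Turn an adjacency list {vertex: {(weight, neighbor), ...}} into an edge list."""
    edges = []
    for vertex, neighbors in adj_list.items():
        for weight, neighbor in neighbors:
            edges.append((weight, neighbor, vertex))
    return edges

def edges_to_adj_list_sketch(edges):
    """Turn an edge list [(weight, neighbor, vertex), ...] back into an adjacency list."""
    adj_list = {}
    for weight, neighbor, vertex in edges:
        adj_list.setdefault(vertex, set()).add((weight, neighbor))
    return adj_list
```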
## Disjoint-set data structure
Implementing an algorithm for finding the minimum spanning tree is fairly straightforward. The only bottleneck is that the algorithm requires a disjoint-set data structure to keep track of a set partitioned into a number of disjoint subsets.
For example, consider the following initial set of eight elements.

We decide to group elements A, B and C together in a subset and F and G in another subset.

The disjoint-set data structure supports the following operations:
- **Find**: check which subset an element is in. It is typically used to check whether two objects are in the same subset;
- **Union**: merges two subsets into a single subset.
A Python implementation of a disjoint-set is available as a union-set forest. A simple example will make everything clear!
```
from union_set_forest import USF
animals = ['mouse', 'bat', 'robin', 'trout', 'seagull', 'hummingbird',
'salmon', 'goldfish', 'hippopotamus', 'whale', 'sparrow']
union_set_forest = USF(animals)
# group mammals together
union_set_forest.union('mouse', 'bat')
union_set_forest.union('mouse', 'hippopotamus')
union_set_forest.union('whale', 'bat')
# group birds together
union_set_forest.union('robin', 'seagull')
union_set_forest.union('seagull', 'sparrow')
union_set_forest.union('seagull', 'hummingbird')
union_set_forest.union('robin', 'hummingbird')
# group fishes together
union_set_forest.union('goldfish', 'salmon')
union_set_forest.union('trout', 'salmon')
# mouse and whale in same subset?
print(union_set_forest.find('mouse') == union_set_forest.find('whale'))
# robin and salmon in the same subset?
print(union_set_forest.find('robin') == union_set_forest.find('salmon'))
```
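The `USF` class itself is provided by the `union_set_forest` module, and its internals are not shown here. A minimal sketch of a union-find forest with union by rank and path compression (one plausible implementation, not necessarily the one used by `USF`) is:
```
class UnionFindForest:
    """Minimal disjoint-set (union-find) forest with path compression and union by rank."""
    def __init__(self, elements):
        self.parent = {el: el for el in elements}  # every element starts as its own root
        self.rank = {el: 0 for el in elements}

    def find(self, el):
        # walk up to the root, halving the path along the way
        while self.parent[el] != el:
            self.parent[el] = self.parent[self.parent[el]]
            el = self.parent[el]
        return el

    def union(self, a, b):
        root_a, root_b = self.find(a), self.find(b)
        if root_a == root_b:
            return  # already in the same subset
        # attach the shallower tree under the deeper one
        if self.rank[root_a] < self.rank[root_b]:
            root_a, root_b = root_b, root_a
        self.parent[root_b] = root_a
        if self.rank[root_a] == self.rank[root_b]:
            self.rank[root_a] += 1
```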
## Heap queue
A heap queue can be used to find the minimum of a changing list without having to re-sort the list after every update.
```
from heapq import heapify, heappop, heappush
heap = [(5, 'A'), (3, 'B'), (2, 'C'), (7, 'D')]
heapify(heap) # turn in a heap
print(heap)
# return item lowest value while retaining heap property
print(heappop(heap))
print(heap)
# add new item and retain heap prop
heappush(heap, (4, 'E'))
print(heap)
```
## Prim's algorithm
Prim's algorithm starts with a single vertex and adds $|V|-1$ edges to it, always taking the next edge with minimal weight that connects a vertex on the MST to a vertex not yet in the MST.
```
def prim(vertices, edges, start):
"""
Prim's algorithm for finding a minimum spanning tree.
Inputs :
- vertices : a set of the vertices of the Graph
- edges : a list of weighted edges (e.g. (0.7, 'A', 'B') for an
edge from node A to node B with weight 0.7)
- start : a vertex to start with
Output:
- edges : a minimum spanning tree represented as a list of edges
- total_cost : total cost of the tree
"""
adj_list = edges_to_adj_list(edges) # easier using an adjacency list
... # to complete
return mst_edges, total_cost
```
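The body above is deliberately left as an exercise. One possible completion, using the heap queue introduced earlier together with the provided `edges_to_adj_list` (a sketch, not the reference solution), is:
```
import heapq

def prim_sketch(vertices, edges, start):
    """Possible completion of Prim's algorithm using a heap of candidate edges."""
    adj_list = edges_to_adj_list(edges)      # vertex -> {(weight, neighbor), ...}
    mst_edges, total_cost = [], 0
    visited = {start}
    heap = [(w, start, v) for w, v in adj_list[start]]
    heapq.heapify(heap)
    while heap and len(visited) < len(vertices):
        w, u, v = heapq.heappop(heap)        # cheapest edge leaving the current tree
        if v in visited:
            continue                         # this edge would close a cycle
        visited.add(v)
        mst_edges.append((w, u, v))
        total_cost += w
        for w2, n in adj_list[v]:
            if n not in visited:
                heapq.heappush(heap, (w2, v, n))
    return mst_edges, total_cost
```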
## Kruskal's algorithm
Kruskal's algorithm is a very simple algorithm to find the minimum spanning tree. The main idea is to start with an initial 'forest' of the individual nodes of the graph. In each step of the algorithm we add the edge with the smallest possible weight that connects two disjoint trees in the forest. This process continues until we have a single tree, which is a minimum spanning tree, or until all edges have been considered. In the latter case, the algorithm returns a minimum spanning forest.
```
from minimumspanningtrees import kruskal
def kruskal(vertices, edges):
"""
Kruskal's algorithm for finding a minimum spanning tree.
Inputs :
- vertices : a set of the vertices of the Graph
- edges : a list of weighted edges (e.g. (0.7, 'A', 'B') for an
edge from node A to node B with weight 0.7)
Output:
- edges : a minimum spanning tree represented as a list of edges
- total_cost : total cost of the tree
"""
... # to complete
return mst_edges, total_cost
```
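Again, the body is left as an exercise. One possible completion, using the union-set forest introduced above and assuming edges are `(weight, u, v)` tuples (a sketch, not the reference solution), is:
```
def kruskal_sketch(vertices, edges):
    """Possible completion of Kruskal's algorithm using a disjoint-set forest."""
    usf = USF(list(vertices))           # union-set forest from the union_set_forest module
    mst_edges, total_cost = [], 0
    for w, u, v in sorted(edges):       # consider edges by increasing weight
        if usf.find(u) != usf.find(v):  # only keep edges that join two different trees
            usf.union(u, v)
            mst_edges.append((w, u, v))
            total_cost += w
    return mst_edges, total_cost
```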
```
print(vertices)
print(edges[:5])
# compute the minimum spanning tree of the ticket to ride data set
...
```
## Clustering
Minimum spanning trees on a distance graph can be used to cluster a data set.
```
# import features and distance
from clustering import X, D
fig, ax = plt.subplots()
ax.scatter(X[:,0], X[:,1], color=green)
# cluster the data based on the distance
```
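The clustering step itself is left open. A common approach is to build the minimum spanning tree of the complete distance graph and then cut its `k-1` heaviest edges, leaving `k` connected components that serve as clusters. A sketch, assuming `D` is an n×n NumPy distance matrix for the points in `X` and a completed `kruskal` that returns `(weight, u, v)` edges like the sketch above, is:
```
import numpy as np

def mst_clusters_sketch(D, k=2):
    """Cluster points by cutting the k-1 heaviest edges of the MST of the distance graph."""
    n = D.shape[0]
    vertices = set(range(n))
    # complete weighted graph over all pairs of points
    edges = [(D[i, j], i, j) for i in range(n) for j in range(i + 1, n)]
    mst_edges, _ = kruskal(vertices, edges)
    # keep all but the k-1 heaviest MST edges, then label the connected components
    kept = sorted(mst_edges)[:-(k - 1)] if k > 1 else mst_edges
    usf = USF(list(vertices))
    for weight, u, v in kept:
        usf.union(u, v)
    return np.array([usf.find(i) for i in range(n)])

# labels = mst_clusters_sketch(D, k=2)
# ax.scatter(X[:, 0], X[:, 1], c=labels)
```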
| github_jupyter |
**INITIALIZATION:**
- I use these three lines of code at the top of each of my notebooks because they help prevent problems when reloading the project. The third line enables visualization inside the notebook.
```
#@ INITIALIZATION:
%reload_ext autoreload
%autoreload 2
%matplotlib inline
```
**LIBRARIES AND DEPENDENCIES:**
- I have imported all the libraries and dependencies required for the project in a single cell.
```
#@ IMPORTING NECESSARY LIBRARIES AND DEPENDENCIES:
from keras.models import Sequential
from keras.layers import BatchNormalization
from keras.layers.convolutional import Conv2D
from keras.layers.convolutional import MaxPooling2D
from keras.layers.core import Activation
from keras.layers.core import Flatten
from keras.layers.core import Dense, Dropout
from keras import backend as K
from tensorflow.keras.optimizers import SGD
from tensorflow.keras.datasets import cifar10
from keras.callbacks import LearningRateScheduler
from sklearn.preprocessing import LabelBinarizer
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
import matplotlib.pyplot as plt
import numpy as np
```
**VGG ARCHITECTURE:**
- I will define the build method of the MiniVGGNet architecture below. It requires four parameters: the width of the input image, the height of the input image, the depth of the image, and the number of class labels in the classification task. The Sequential class, the building block of sequential networks that stacks one layer on top of another, is initialized below. Batch normalization operates over the channels, so in order to apply BN we need to know which axis to normalize over.
```
#@ DEFINING VGGNET ARCHITECTURE:
class MiniVGGNet: # Defining VGG Network.
@staticmethod
def build(width, height, depth, classes): # Defining Build Method.
model = Sequential() # Initializing Sequential Model.
inputShape = (width, height, depth) # Initializing Input Shape.
chanDim = -1 # Index of Channel Dimension.
if K.image_data_format() == "channels_first":
inputShape = (depth, width, height) # Initializing Input Shape.
chanDim = 1 # Index of Channel Dimension.
model.add(Conv2D(32, (3, 3), padding='same',
input_shape=inputShape)) # Adding Convolutional Layer.
model.add(Activation("relu")) # Adding RELU Activation Function.
model.add(BatchNormalization(axis=chanDim)) # Adding Batch Normalization Layer.
model.add(Conv2D(32, (3, 3), padding='same')) # Adding Convolutional Layer.
model.add(Activation("relu")) # Adding RELU Activation Function.
model.add(BatchNormalization(axis=chanDim)) # Adding Batch Normalization Layer.
model.add(MaxPooling2D(pool_size=(2, 2))) # Adding Max Pooling Layer.
model.add(Dropout(0.25)) # Adding Dropout Layer.
model.add(Conv2D(64, (3, 3), padding="same")) # Adding Convolutional Layer.
model.add(Activation("relu")) # Adding RELU Activation Function.
model.add(BatchNormalization(axis=chanDim)) # Adding Batch Normalization Layer.
model.add(Conv2D(64, (3, 3), padding='same')) # Adding Convolutional Layer.
model.add(Activation("relu")) # Adding RELU Activation Function.
model.add(BatchNormalization(axis=chanDim)) # Adding Batch Normalization Layer.
model.add(MaxPooling2D(pool_size=(2, 2))) # Adding Max Pooling Layer.
model.add(Dropout(0.25)) # Adding Dropout Layer.
model.add(Flatten()) # Adding Flatten Layer.
model.add(Dense(512)) # Adding FC Dense Layer.
model.add(Activation("relu")) # Adding Activation Layer.
model.add(BatchNormalization()) # Adding Batch Normalization Layer.
model.add(Dropout(0.5)) # Adding Dropout Layer.
model.add(Dense(classes)) # Adding Dense Output Layer.
model.add(Activation("softmax")) # Adding Softmax Layer.
return model
#@ CUSTOM LEARNING RATE SCHEDULER:
def step_decay(epoch): # Defining step decay function.
initAlpha = 0.01 # Initializing initial LR.
factor = 0.25 # Initializing drop factor.
dropEvery = 5 # Initializing epochs to drop.
alpha = initAlpha*(factor ** np.floor((1 + epoch) / dropEvery))
return float(alpha)
```
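As a quick sanity check of the scheduler, the learning rate produced by `step_decay` can be printed for a few epochs; with the settings above it starts at 0.01 and drops by a factor of 0.25 every 5 epochs:
```
#@ INSPECTING THE LEARNING RATE SCHEDULE:
for epoch in [0, 4, 9, 14, 19]:                                   # sample a few epochs
    print("Epoch {:2d} -> learning rate {:.8f}".format(epoch + 1, step_decay(epoch)))
```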
**VGGNET ON CIFAR10**
```
#@ GETTING THE DATASET:
((trainX, trainY), (testX, testY)) = cifar10.load_data() # Loading Dataset.
trainX = trainX.astype("float") / 255.0 # Normalizing Dataset.
testX = testX.astype("float") / 255.0 # Normalizing Dataset.
#@ PREPARING THE DATASET:
lb = LabelBinarizer() # Initializing LabelBinarizer.
trainY = lb.fit_transform(trainY) # Converting Labels to Vectors.
testY = lb.transform(testY) # Converting Labels to Vectors.
labelNames = ["airplane", "automobile", "bird", "cat", "deer",
"dog", "frog", "horse", "ship", "truck"] # Initializing LabelNames.
#@ INITIALIZING OPTIMIZER AND MODEL:
callbacks = [LearningRateScheduler(step_decay)] # Initializing Callbacks.
opt = SGD(0.01, nesterov=True, momentum=0.9) # Initializing SGD Optimizer.
model = MiniVGGNet.build(width=32, height=32, depth=3, classes=10) # Initializing VGGNet Architecture.
model.compile(loss="categorical_crossentropy", optimizer=opt,
metrics=["accuracy"]) # Compiling VGGNet Model.
H = model.fit(trainX, trainY,
validation_data=(testX, testY), batch_size=64,
epochs=40, verbose=1, callbacks=callbacks) # Training VGGNet Model.
```
**MODEL EVALUATION:**
```
#@ INITIALIZING MODEL EVALUATION:
predictions = model.predict(testX, batch_size=64) # Getting Model Predictions.
print(classification_report(testY.argmax(axis=1),
predictions.argmax(axis=1),
target_names=labelNames)) # Inspecting Classification Report.
#@ INSPECTING TRAINING LOSS AND ACCURACY:
plt.style.use("ggplot")
plt.figure()
plt.plot(np.arange(0, 40), H.history["loss"], label="train_loss")
plt.plot(np.arange(0, 40), H.history["val_loss"], label="val_loss")
plt.plot(np.arange(0, 40), H.history["accuracy"], label="train_acc")
plt.plot(np.arange(0, 40), H.history["val_accuracy"], label="val_acc")
plt.title("Training Loss and Accuracy")
plt.xlabel("Epoch")
plt.ylabel("Loss/Accuracy")
plt.legend()
plt.show();
```
**Note:**
- Batch Normalization can lead to a faster, more stable convergence with higher accuracy.
- Batch Normalization will require more wall time to train the network, even though the network obtains higher accuracy in fewer epochs.
| github_jupyter |
# Which celebrity do I look like?
Collecting photos
Cropping the face regions
Extracting embeddings from the face regions
Comparing distances to the celebrities' faces
Visualization
Retrospective
1. Collecting photos
2. Cropping the face regions
Crop the face region out of each image
Convert it to a PIL image with Image.fromarray so it can be used later for visualization
```
# Import the required modules
import os
import re
import glob
import pickle
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.image as img
import face_recognition
%matplotlib inline
from PIL import Image
import numpy as np
dir_path = os.getenv('HOME')+'/aiffel/EXP_07_face_embedding/data'
file_list = os.listdir(dir_path)
print(len(file_list))
# Load the image files
print('Number of celebrity image files:', len(file_list) - 5) # count the remaining files after subtracting my own added photos
# Check the list of image files
print ("File list:\n{}".format(file_list))
# Check some of the image files
# Set figsize here
fig, axes = plt.subplots(nrows=2, ncols=3, figsize=(24,10))
# flatten axes for easy iterating
for i, ax in enumerate(axes.flatten()):
image = img.imread(dir_path+'/'+file_list[i])
ax.imshow(image)
plt.show()
fig.tight_layout()
# Function that takes an image file path and returns only the cropped face region
def get_cropped_face(image_file):
image = face_recognition.load_image_file(image_file)
face_locations = face_recognition.face_locations(image)
a, b, c, d = face_locations[0]
cropped_face = image[a:c,d:b,:]
return cropped_face
# Check that the face region is cropped correctly
image_path = os.getenv('HOME')+'/aiffel/EXP_07_face_embedding/data/이원재_02.jpg'
cropped_face = get_cropped_face(image_path)
plt.imshow(cropped_face)
```
## Step3. Extracting embeddings from the face regions
```
# Function that takes a face region and computes the face embedding vector
def get_face_embedding(face):
return face_recognition.face_encodings(face, model='cnn')
# Function that takes a directory path and returns an embedding_dict
def get_face_embedding_dict(dir_path):
file_list = os.listdir(dir_path)
embedding_dict = {}
for file in file_list:
try:
img_path = os.path.join(dir_path, file)
face = get_cropped_face(img_path)
embedding = get_face_embedding(face)
if len(embedding) > 0:
# if the face region is not detected properly, len(embedding) == 0 can occur
# os.path.splitext(file)[0] holds the image file name with its extension removed
embedding_dict[os.path.splitext(file)[0]] = embedding[0]
# embedding_dict stores the embedding of each image file: key = person name, value = embedding vector
# os.path.splitext(file)[0] extracts only the file name without its extension
# embedding[0] is the element we want to store
except:
continue
return embedding_dict
embedding_dict = get_face_embedding_dict(dir_path)
```
## Step4. Comparing with the collected celebrities
```
# Function that computes the distance between two images' embeddings
def get_distance(name1, name2):
return np.linalg.norm(embedding_dict[name1]-embedding_dict[name2], ord=2)
# Let's check the distance between my own photos
print('Distance between my own photos:', get_distance('이원재_01', '이원재_02'))
# Create a function that compares the distance between name1 and name2, where name1 is fixed in advance and name2 is passed as an argument at call time.
def get_sort_key_func(name1):
def get_distance_from_name1(name2):
return get_distance(name1, name2)
return get_distance_from_name1
# Function that prints a Top-5 list of look-alikes with rank, name, and embedding distance
def get_nearest_face(name, top=5):
sort_key_func = get_sort_key_func(name)
sorted_faces = sorted(embedding_dict.items(), key=lambda x:sort_key_func(x[0]))
rank_cnt = 1 # 순위를 세는 변수
pass_cnt = 1 # 건너뛴 숫자를 세는 변수(본인 사진 카운트)
end = 0 # 닮은 꼴 5번 출력시 종료하기 위해 세는 변수
for i in range(top+15):
rank_cnt += 1
if sorted_faces[i][0].find('이원재_02') == 0: # 본인 사진인 mypicture라는 파일명으로 시작하는 경우 제외합니다.
pass_cnt += 1
continue
if sorted_faces[i]:
print('순위 {} : 이름({}), 거리({})'.format(rank_cnt - pass_cnt, sorted_faces[i][0], sort_key_func(sorted_faces[i][0])))
end += 1
if end == 5: # end가 5가 된 경우 연예인 5명 출력되었기에 종료합니다.
break
# Who looks most like '이원재_01'?
get_nearest_face('이원재_01')
# Who looks most like '이원재_02'?
get_nearest_face('이원재_02')
```
## Step5. Trying out various fun visualizations
```
# Set the photo paths
mypicture1 = os.getenv('HOME')+'/aiffel/EXP_07_face_embedding/data/이원재_01.jpg'
mypicture2 = os.getenv('HOME')+'/aiffel/EXP_07_face_embedding/data/이원재_02.jpg'
mc= os.getenv('HOME')+'/aiffel/EXP_07_face_embedding/data/MC몽.jpg'
gahee = os.getenv('HOME')+'/aiffel/EXP_07_face_embedding/data/가희.jpg'
seven = os.getenv('HOME')+'/aiffel/EXP_07_face_embedding/data/SE7EN.jpg'
gam = os.getenv('HOME')+'/aiffel/EXP_07_face_embedding/data/감우성.jpg'
gang = os.getenv('HOME')+'/aiffel/EXP_07_face_embedding/data/강경준.jpg'
gyung = os.getenv('HOME')+'/aiffel/EXP_07_face_embedding/data/강경현.jpg'
gi = os.getenv('HOME')+'/aiffel/EXP_07_face_embedding/data/강기영.jpg'
# Save the cropped faces
a1 = get_cropped_face(mypicture1)
a2 = get_cropped_face(mypicture2)
b1 = get_cropped_face(mc)
b2 = get_cropped_face(gahee)
b3 = get_cropped_face(gam)
plt.figure(figsize=(10,8))
plt.subplot(231)
plt.imshow(a1)
plt.axis('off')
plt.title('1st')
plt.subplot(232)
plt.imshow(a2)
plt.axis('off')
plt.title('me')
plt.subplot(233)
plt.imshow(b1)
plt.axis('off')
plt.title('2nd')
plt.subplot(234)
print('''Ranking for mypicture
Rank 1 : name(사쿠라), distance(0.36107689719729225)
Rank 2 : name(트와이스나연), distance(0.36906292012955577)
Rank 3 : name(아이유), distance(0.3703590842312735)
Rank 4 : name(유트루), distance(0.3809516850126146)
Rank 5 : name(지호), distance(0.3886670633997685)''')
```
| github_jupyter |
<h1>Notebook Content</h1>
1. [Import Packages](#1)
1. [Helper Functions](#2)
1. [Input](#3)
1. [Model](#4)
1. [Prediction](#5)
1. [Complete Figure](#6)
<h1 id="1">1. Import Packages</h1>
Importing all necessary and useful packages in a single cell.
```
import numpy as np
import keras
import tensorflow as tf
from numpy import array
from keras.models import Sequential
from keras.layers import LSTM
from keras.layers import Dense
from keras.layers import Flatten
from keras.layers import TimeDistributed
from keras.layers.convolutional import Conv1D
from keras.layers.convolutional import MaxPooling1D
from keras_tqdm import TQDMNotebookCallback
from sklearn.preprocessing import MinMaxScaler
from tqdm import tqdm_notebook
import matplotlib.pyplot as plt
import pandas as pd
import random
from random import randint
```
<h1 id="2">2. Helper Functions</h1>
Defining some helper functions that we will need later in the code
```
# split a univariate sequence into samples
def split_sequence(sequence, n_steps, look_ahead=0):
X, y = list(), list()
for i in range(len(sequence)-look_ahead):
# find the end of this pattern
end_ix = i + n_steps
# check if we are beyond the sequence
if end_ix > len(sequence)-1-look_ahead:
break
# gather input and output parts of the pattern
seq_x, seq_y = sequence[i:end_ix], sequence[end_ix+look_ahead]
X.append(seq_x)
y.append(seq_y)
return array(X), array(y)
def plot_multi_graph(xAxis,yAxes,title='',xAxisLabel='number',yAxisLabel='Y'):
linestyles = ['-', '--', '-.', ':']
plt.figure()
plt.title(title)
plt.xlabel(xAxisLabel)
plt.ylabel(yAxisLabel)
for key, value in yAxes.items():
plt.plot(xAxis, np.array(value), label=key, linestyle=linestyles[randint(0,3)])
plt.legend()
def normalize(values):
values = array(values, dtype="float64").reshape((len(values), 1))
# train the normalization
scaler = MinMaxScaler(feature_range=(0, 1))
scaler = scaler.fit(values)
#print('Min: %f, Max: %f' % (scaler.data_min_, scaler.data_max_))
# normalize the dataset and print the first 5 rows
normalized = scaler.transform(values)
return normalized,scaler
```
<h1 id="3">3. Input</h1>
<h3 id="3-1">3-1. Sequence PreProcessing</h3>
Splitting and Reshaping
```
n_features = 1
n_seq = 20
n_steps = 1
def sequence_preprocessed(values, sliding_window, look_ahead=0):
# Normalization
normalized,scaler = normalize(values)
# Try the following if randomizing the sequence:
# random.seed('sam') # set the seed
# raw_seq = random.sample(raw_seq, 100)
# split into samples
X, y = split_sequence(normalized, sliding_window, look_ahead)
# reshape from [samples, timesteps] into [samples, subsequences, timesteps, features]
X = X.reshape((X.shape[0], n_seq, n_steps, n_features))
return X,y,scaler
```
<h3 id="3-2">3-2. Providing Sequence</h3>
Defining a raw sequence, the sliding window of data to consider, and the number of look-ahead future timesteps
```
# define input sequence
sequence_val = [i for i in range(5000,7000)]
sequence_train = [i for i in range(1000,2000)]
sequence_test = [i for i in range(10000,14000)]
# choose a number of time steps for sliding window
sliding_window = 20
# choose a number of further time steps after end of sliding_window till target start (gap between data and target)
look_ahead = 20
X_train, y_train, scaler_train = sequence_preprocessed(sequence_train, sliding_window, look_ahead)
X_val, y_val ,scaler_val = sequence_preprocessed(sequence_val, sliding_window, look_ahead)
X_test,y_test,scaler_test = sequence_preprocessed(sequence_test, sliding_window, look_ahead)
```
<h1 id="4">4. Model</h1>
<h3 id="4-1">4-1. Defining Layers</h3>
Adding 1D Convolution, Max Pooling, LSTM and finally Dense (MLP) layer
```
# define model
model = Sequential()
model.add(TimeDistributed(Conv1D(filters=64, kernel_size=1, activation='relu'),
input_shape=(None, n_steps, n_features)
))
model.add(TimeDistributed(MaxPooling1D(pool_size=1)))
model.add(TimeDistributed(Flatten()))
model.add(LSTM(50, activation='relu', stateful=False))
model.add(Dense(1))
```
<h3 id="4-2">4-2. Training Model</h3>
An early stop callback is defined and can be passed to the `callbacks` parameter of `model.fit`. It is not used for now, since early stopping is not recommended during the first few iterations of experimentation with new data.
```
# Defining multiple metrics, leaving it to a choice, some may be useful and few may even surprise on some problems
metrics = ['mean_squared_error',
'mean_absolute_error',
'mean_absolute_percentage_error',
'mean_squared_logarithmic_error',
'logcosh']
# Compiling Model
model.compile(optimizer='adam', loss='mape', metrics=metrics)
# Defining early stop, call it in model fit callback
early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=10)
# Fit model
history = model.fit(X_train, y_train, epochs=100, verbose=3, validation_data=(X_val,y_val))
```
<h3 id="4-3">4-3. Evaluating Model</h3>
Plotting the training and validation error curves for each metric
```
# Plot Errors
for metric in metrics:
xAxis = history.epoch
yAxes = {}
yAxes["Training"]=history.history[metric]
yAxes["Validation"]=history.history['val_'+metric]
plot_multi_graph(xAxis,yAxes, title=metric,xAxisLabel='Epochs')
```
<h1 id="5">5. Prediction</h1>
<h3 id="5-1">5-1. Single Value Prediction</h3>
Predicting a single value 20 steps ahead (the `look_ahead` figure we provided above)
```
# demonstrate prediction
x_input = array([i for i in range(100,120)])
print(x_input)
x_input = x_input.reshape((1, n_seq, n_steps, n_features))
yhat = model.predict(x_input)
print(yhat)
```
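One detail worth noting: the model above was trained on MinMax-normalized sequences, while `x_input` here is raw. A hedged sketch of scaling the query with the scaler fitted on the training data (assuming `scaler_train` from above is the appropriate choice; note that 100–120 lies outside the 1000–2000 training range, so the scaled values fall outside [0, 1]) would be:
```
# Scale the query with the training scaler before predicting, then map the prediction back
x_raw = array([i for i in range(100, 120)], dtype="float64").reshape(-1, 1)
x_scaled = scaler_train.transform(x_raw)
x_scaled = x_scaled.reshape((1, n_seq, n_steps, n_features))
yhat_scaled = model.predict(x_scaled)
print(scaler_train.inverse_transform(yhat_scaled))  # prediction on the original scale
```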
<h3 id="5-2">5-2. Sequence Prediction</h3>
Predicting the complete sequence (to determine how closely the predictions track the target) based on the data <br />
<i>change the variable to use any other sequence</i>
```
# Prediction from Training Set
predict_train = model.predict(X_train)
# Prediction from Test Set
predict_test = model.predict(X_test)
"""
df = pd.DataFrame(({"normalized y_train":y_train.flatten(),
"normalized predict_train":predict_train.flatten(),
"actual y_train":scaler_train.inverse_transform(y_train).flatten(),
"actual predict_train":scaler_train.inverse_transform(predict_train).flatten(),
}))
"""
df = pd.DataFrame(({
"normalized y_test":y_test.flatten(),
"normalized predict_test":predict_test.flatten(),
"actual y_test":scaler_test.inverse_transform(y_test).flatten(),
"actual predict_test":scaler_test.inverse_transform(predict_test).flatten()
}))
df
```
<h1 id="6">6. Complete Figure</h1>
Data, Target, Prediction - all in one single graph
```
xAxis = [i for i in range(len(y_train))]
yAxes = {}
yAxes["Data"]=sequence_train[sliding_window:len(sequence_train)-look_ahead]
yAxes["Target"]=scaler_train.inverse_transform(y_train)
yAxes["Prediction"]=scaler_train.inverse_transform(predict_train)
plot_multi_graph(xAxis,yAxes,title='')
xAxis = [i for i in range(len(y_test))]
yAxes = {}
yAxes["Data"]=sequence_test[sliding_window:len(sequence_test)-look_ahead]
yAxes["Target"]=scaler_test.inverse_transform(y_test)
yAxes["Prediction"]=scaler_test.inverse_transform(predict_test)
plot_multi_graph(xAxis,yAxes,title='')
print(metrics)
print(model.evaluate(X_test,y_test))
```
| github_jupyter |
# **Libraries**
```
from google.colab import drive
drive.mount('/content/drive')
# ***********************
# *****| LIBRARIES |*****
# ***********************
%tensorflow_version 2.x
import pandas as pd
import numpy as np
import os
import json
from sklearn.model_selection import train_test_split
import tensorflow as tf
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.layers import Input, Embedding, Activation, Flatten, Dense
from keras.layers import Conv1D, MaxPooling1D, Dropout
from keras.models import Model
from keras.utils import to_categorical
from keras.optimizers import SGD
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import RandomizedSearchCV, GridSearchCV
device_name = tf.test.gpu_device_name()
if device_name != '/device:GPU:0':
print("GPU not found")
else:
print('Found GPU at: {}'.format(device_name))
# ******************************
# *****| GLOBAL VARIABLES |*****
# ******************************
test_size = 0.2
convsize = 256
convsize2 = 1024
embedding_size = 27
input_size = 1000
conv_layers = [
[convsize, 7, 3],
[convsize, 7, 3],
[convsize, 3, -1],
[convsize, 3, -1],
[convsize, 3, -1],
[convsize, 3, 3]
]
fully_connected_layers = [convsize2, convsize2]
num_of_classes= 2
dropout_p = 0.5
optimizer= 'adam'
batch = 128
loss = 'categorical_crossentropy'
```
# **Utility functions**
```
# *****************
# *** GET FILES ***
# *****************
def getFiles( driverPath, directory, basename, extension): # Define a function that will return a list of files
pathList = [] # Declare an empty array
directory = os.path.join( driverPath, directory) #
for root, dirs, files in os.walk( directory): # Iterate through roots, dirs and files recursively
for file in files: # For every file in files
if os.path.basename(root) == basename: # If the parent directory of the current file is equal with the parameter
if file.endswith('.%s' % (extension)): # If the searched file ends in the parameter
path = os.path.join(root, file) # Join together the root path and file name
pathList.append(path) # Append the new path to the list
return pathList
# ****************************************
# *** GET DATA INTO A PANDAS DATAFRAME ***
# ****************************************
def getDataFrame( listFiles, maxFiles, minWords, limit):
counter_real, counter_max, limitReached = 0, 0, 0
text_list, label_list = [], []
print("Word min set to: %i." % ( minWords))
# Iterate through all the files
for file in listFiles:
# Open each file and look into it
with open(file) as f:
if(limitReached):
break
if maxFiles == 0:
break
else:
maxFiles -= 1
objects = json.loads( f.read())['data'] # Get the data from the JSON file
# Look into each object from the file and test for limiters
for object in objects:
if limit > 0 and counter_real >= (limit * 1000):
limitReached = 1
break
if len( object['text'].split()) >= minWords:
text_list.append(object['text'])
label_list.append(object['label'])
counter_real += 1
counter_max += 1
if(counter_real > 0 and counter_max > 0):
ratio = counter_real / counter_max * 100
else:
ratio = 0
# Print the final result
print("Lists created with %i/%i (%.2f%%) data objects." % ( counter_real, counter_max, ratio))
print("Rest ignored due to minimum words limit of %i or the limit of %i data objects maximum." % ( minWords, limit * 1000))
# Return the final Pandas DataFrame
return text_list, label_list, counter_real
```
# **Gather the path to files**
```
# ***********************************
# *** GET THE PATHS FOR THE FILES ***
# ***********************************
# Path to the content of the Google Drive
driverPath = "/content/drive/My Drive"
# Sub-directories in the driver
paths = ["processed/depression/submission",
"processed/depression/comment",
"processed/AskReddit/submission",
"processed/AskReddit/comment"]
files = [None] * len(paths)
for i in range(len(paths)):
files[i] = getFiles( driverPath, paths[i], "text", "json")
print("Gathered %i files from %s." % ( len(files[i]), paths[i]))
```
# **Gather the data from files**
```
# ************************************
# *** GATHER THE DATA AND SPLIT IT ***
# ************************************
# Local variables
rand_state_splitter = 1000
test_size = 0.2
min_files = [ 750, 0, 1300, 0]
max_words = [ 50, 0, 50, 0]
limit_packets = [300, 0, 300, 0]
message = ["Depression submissions", "Depression comments", "AskReddit submissions", "AskReddit comments"]
text, label = [], []
# Get the pandas data frames for each category
print("Build the Pandas DataFrames for each category.")
for i in range(4):
dummy_text, dummy_label, counter = getDataFrame( files[i], min_files[i], max_words[i], limit_packets[i])
if counter > 0:
text += dummy_text
label += dummy_label
dummy_text, dummy_label = None, None
print("Added %i samples to data list: %s.\n" % ( counter ,message[i]) )
# Splitting the data
x_train, x_test, y_train, y_test = train_test_split(text,
label,
test_size = test_size,
shuffle = True,
random_state = rand_state_splitter)
print("Training data: %i samples." % ( len(y_train)) )
print("Testing data: %i samples." % ( len(y_test)) )
# Clear data no longer needed
del rand_state_splitter, min_files, max_words, message, dummy_label, dummy_text
```
# **Process the data at a character-level**
```
# *******************************
# *** CONVERT STRING TO INDEX ***
# *******************************
print("Convert the strings to indexes.")
tk = Tokenizer(num_words = None, char_level = True, oov_token='UNK')
tk.fit_on_texts(x_train)
print("Original:", x_train[0])
# *********************************
# *** CONSTRUCT A NEW VOCABULARY***
# *********************************
print("Construct a new vocabulary")
alphabet = "abcdefghijklmnopqrstuvwxyz"
char_dict = {}
for i, char in enumerate(alphabet):
char_dict[char] = i + 1
print("dictionary")
tk.word_index = char_dict.copy() # Use char_dict to replace the tk.word_index
print(tk.word_index)
tk.word_index[tk.oov_token] = max(char_dict.values()) + 1 # Add 'UNK' to the vocabulary
print(tk.word_index)
# *************************
# *** TEXT TO SEQUENCES ***
# *************************
print("Text to sequence.")
x_train = tk.texts_to_sequences(x_train)
x_test = tk.texts_to_sequences(x_test)
print("After sequences:", x_train[0])
# ***************
# *** PADDING ***
# ***************
print("Padding the sequences.")
x_train = pad_sequences( x_train, maxlen = input_size, padding = 'post')
x_test = pad_sequences( x_test, maxlen= input_size , padding = 'post')
# ************************
# *** CONVERT TO NUMPY ***
# ************************
print("Convert to Numpy arrays")
x_train = np.array( x_train, dtype = 'float32')
x_test = np.array(x_test, dtype = 'float32')
# **************************************
# *** GET CLASSES FOR CLASSIFICATION ***
# **************************************
y_test_copy = y_test
y_train_list = [x-1 for x in y_train]
y_test_list = [x-1 for x in y_test]
y_train = to_categorical( y_train_list, num_of_classes)
y_test = to_categorical( y_test_list, num_of_classes)
```
# **Load embedding words**
```
# ***********************
# *** LOAD EMBEDDINGS ***
# ***********************
embedding_weights = []
vocab_size = len(tk.word_index)
embedding_weights.append(np.zeros(vocab_size))
for char, i in tk.word_index.items():
onehot = np.zeros(vocab_size)
onehot[i-1] = 1
embedding_weights.append(onehot)
embedding_weights = np.array(embedding_weights)
print("Vocabulary size: ",vocab_size)
print("Embedding weights: ", embedding_weights)
```
# **Build the CNN model**
```
def KerasModel():
# ***************************************
# *****| BUILD THE NEURAL NETWORK |******
# ***************************************
embedding_layer = Embedding(vocab_size+1,
embedding_size,
input_length = input_size,
weights = [embedding_weights])
# Input layer
inputs = Input(shape=(input_size,), name='input', dtype='int64')
# Embedding layer
x = embedding_layer(inputs)
# Convolution
for filter_num, filter_size, pooling_size in conv_layers:
x = Conv1D(filter_num, filter_size)(x)
x = Activation('relu')(x)
if pooling_size != -1:
x = MaxPooling1D( pool_size = pooling_size)(x)
x = Flatten()(x)
# Fully Connected layers
for dense_size in fully_connected_layers:
x = Dense( dense_size, activation='relu')(x)
x = Dropout( dropout_p)(x)
# Output Layer
predictions = Dense(num_of_classes, activation = 'softmax')(x)
# BUILD MODEL
model = Model( inputs = inputs, outputs = predictions)
model.compile(optimizer = optimizer, loss = loss, metrics = ['accuracy'])
model.summary()
return model
```
# **Train the CNN**
```
#with tf.device("/gpu:0"):
# history = model.fit(x_train, y_train,
# validation_data = ( x_test, y_test),
# epochs = 10,
# batch_size = batch,
# verbose = True)
with tf.device("/gpu:0"):
grid = KerasClassifier(build_fn = KerasModel, epochs = 15, verbose= True)
param_grid = dict(
epochs = [15]
)
#grid = GridSearchCV(estimator = model,
# param_grid = param_grid,
# cv = 5,
# verbose = 10,
# return_train_score = True)
grid_result = grid.fit(x_train, y_train)
```
# **Test the CNN**
```
#loss, accuracy = model.evaluate( x_train, y_train, verbose = True)
#print("Training Accuracy: {:.4f}".format( accuracy))
#loss, accuracy = model.evaluate( x_test, y_test, verbose = True)
#print("Testing Accuracy: {:.4f}".format( accuracy))
from sklearn.metrics import classification_report, confusion_matrix
y_predict = grid.predict( x_test)
# Build the confusion matrix
y_tested = y_test
print( type(y_test))
print(y_tested)
y_tested = np.argmax( y_tested, axis = 1)
print(y_tested)
confMatrix = confusion_matrix(y_tested, y_predict)
tn, fp, fn, tp = confMatrix.ravel()
# Build a classification report
classification_reports = classification_report( y_tested, y_predict, target_names = ['Non-depressed', 'Depressed'], digits=3)
print(confMatrix)
print(classification_reports)
```
| github_jupyter |
# Text classification of movie reviews
This notebook classifies movie reviews as *positive* or *negative* using the text of the review. This is a *binary* (two-class) classification problem, an important and widely applicable kind of machine learning problem.
We will use the [IMDB dataset](https://tensorflow.google.cn/api_docs/python/tf/keras/datasets/imdb) from the [Internet Movie Database](https://www.imdb.com/), which contains the text of 50,000 movie reviews. 25,000 reviews from this dataset are used for training and the other 25,000 for testing. The training and test sets are *balanced*, meaning they contain an equal number of positive and negative reviews.
This notebook uses [tf.keras](https://tensorflow.google.cn/guide/keras), a high-level API for building and training models in TensorFlow. For a more advanced text-classification tutorial using `tf.keras`, see the [MLCC Text Classification Guide](https://developers.google.com/machine-learning/guides/text-classification/).
```
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# Colab only
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
from tensorflow import keras
import numpy as np
print(tf.__version__)
```
## Download the IMDB dataset
The IMDB dataset ships with TensorFlow. It has already been preprocessed: the reviews (sequences of words) have been converted to sequences of integers, where each integer represents a specific word in a dictionary.
The following code downloads the IMDB dataset to your machine (or uses a cached copy if you have already downloaded it):
```
imdb = keras.datasets.imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
```
The argument `num_words=10000` keeps the 10,000 most frequently occurring words in the training data. The rare words are discarded to keep the size of the data manageable.
## Explore the data
Let's take a moment to understand the format of the data. The dataset comes preprocessed: each example is an array of integers representing the words of the movie review. Each label is an integer value of 0 or 1, where 0 is a negative review and 1 is a positive review.
```
print("Training entries: {}, labels: {}".format(len(train_data), len(train_labels)))
```
The review text has been converted to integers, where each integer represents a word in a dictionary. Here is what the first review looks like:
```
print(train_data[0])
```
Movie reviews may have different lengths. The code below shows the number of words in the first and second reviews. Since inputs to a neural network must have the same length, we will need to resolve this later.
```
len(train_data[0]), len(train_data[1])
```
### Convert the integers back to words
It may be useful to know how to convert the integers back to text. Here we create a helper function to query a dictionary object that maps integers to strings:
```
# A dictionary mapping words to integer indices
word_index = imdb.get_word_index()
# The first indices are reserved
word_index = {k:(v+3) for k,v in word_index.items()}
word_index["<PAD>"] = 0
word_index["<START>"] = 1
word_index["<UNK>"] = 2 # unknown
word_index["<UNUSED>"] = 3
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
def decode_review(text):
return ' '.join([reverse_word_index.get(i, '?') for i in text])
```
Now we can use the `decode_review` function to display the text of the first review:
```
decode_review(train_data[0])
```
## Prepare the data
The reviews, which are arrays of integers, must be converted to tensors before being fed into the neural network. This conversion can be done in either of two ways:
* Convert the arrays into vectors of 0s and 1s indicating word occurrence, similar to one-hot encoding. For example, the sequence [3, 5] would become a 10,000-dimensional vector that is all zeros except for indices 3 and 5, which are ones. This could then be the first layer of the network, a dense layer that can handle floating-point vector data. This approach is memory intensive, however, requiring a matrix of size `num_words * num_reviews`.
* Alternatively, we can pad the arrays so they all have the same length, then create an integer tensor of shape `max_length * num_reviews`. We can use an embedding layer capable of handling this shape as the first layer of the network.
In this tutorial we will use the second approach; a short sketch of the first approach is included below for reference.
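For reference, the first (multi-hot) approach could be sketched as follows; it is not used in the rest of this tutorial, and the function name is only illustrative:
```
import numpy as np

def multi_hot_encode(sequences, dimension=10000):
    # Turn integer word-index sequences into 0/1 vectors of length `dimension`
    results = np.zeros((len(sequences), dimension))
    for i, word_indices in enumerate(sequences):
        results[i, word_indices] = 1.0  # set the positions of the words that occur to 1
    return results

# Example: multi_hot_encode(train_data) would produce a (25000, 10000) matrix
```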
Since the movie reviews must all be the same length, we will use the [pad_sequences](https://tensorflow.google.cn/api_docs/python/tf/keras/preprocessing/sequence/pad_sequences) function to standardize the lengths:
```
train_data = keras.preprocessing.sequence.pad_sequences(train_data,
value=word_index["<PAD>"],
padding='post',
maxlen=256)
test_data = keras.preprocessing.sequence.pad_sequences(test_data,
value=word_index["<PAD>"],
padding='post',
maxlen=256)
```
Now let's look at the lengths of the examples:
```
len(train_data[0]), len(train_data[1])
```
And inspect the (now padded) first review:
```
print(train_data[0])
```
## Build the model
A neural network is built by stacking layers, which requires two main architectural decisions:
* How many layers should the model have?
* How many *hidden units* should each layer have?
In this example, the input data consists of arrays of word indices. The labels to predict are 0 or 1. Let's build a model for this problem:
```
# The input shape is the vocabulary size used for the movie reviews (10,000 words)
vocab_size = 10000
model = keras.Sequential()
model.add(keras.layers.Embedding(vocab_size, 16))
model.add(keras.layers.GlobalAveragePooling1D())
model.add(keras.layers.Dense(16, activation='relu'))
model.add(keras.layers.Dense(1, activation='sigmoid'))
model.summary()
```
The layers are stacked sequentially to build the classifier:
1. The first layer is an `Embedding` layer. It takes the integer-encoded vocabulary and looks up the embedding vector for each word index. These vectors are learned as the model trains. The vectors add a dimension to the output array. The resulting dimensions are `(batch, sequence, embedding)`.
2. Next, a `GlobalAveragePooling1D` layer returns a fixed-length output vector for each example by averaging over the sequence dimension. This allows the model to handle variable-length input in the simplest way possible.
3. This fixed-length output vector is piped through a fully connected (`Dense`) layer with 16 hidden units.
4. The last layer is densely connected to a single output node. Using the `sigmoid` activation function, its value is a float between 0 and 1, representing a probability or confidence level.
### Hidden units
The above model has two intermediate or "hidden" layers between the input and the output. The number of outputs (units, nodes, or neurons) is the dimension of the representational space of the layer. In other words, it is the amount of freedom the network is allowed when learning an internal representation.
If a model has more hidden units (a higher-dimensional representation space) and/or more layers, it can learn more complex representations. However, this makes the network more computationally expensive and may lead to learning unwanted patterns: patterns that improve performance on the training data but not on the test data. This is called *overfitting*, which we will explore later.
### Loss function and optimizer
A model needs a loss function and an optimizer for training. Since this is a binary classification problem and the model outputs a probability (a single-unit layer with a sigmoid activation), we will use the `binary_crossentropy` loss function.
This isn't the only choice of loss function; you could, for instance, choose `mean_squared_error`. But, generally, `binary_crossentropy` is better suited for dealing with probabilities: it measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and the predictions.
Later, when we explore regression problems (say, predicting the price of a house), we will see how to use another loss function called mean squared error.
Now, configure the model to use an optimizer and a loss function:
```
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
```
## Create a validation set
When training, we want to check the accuracy of the model on data it hasn't seen before. Create a *validation set* by setting apart 10,000 examples from the original training data. (Why not use the test set now? Our goal is to develop and tune the model using only the training data, then use the test data just once to evaluate accuracy.)
```
x_val = train_data[:10000]
partial_x_train = train_data[10000:]
y_val = train_labels[:10000]
partial_y_train = train_labels[10000:]
```
## Train the model
Train the model for 40 epochs in mini-batches of 512 samples. This is 40 iterations over all samples in the `x_train` and `y_train` tensors. While training, monitor the model's loss and accuracy on the 10,000 samples from the validation set:
```
history = model.fit(partial_x_train,
partial_y_train,
epochs=40,
batch_size=512,
validation_data=(x_val, y_val),
verbose=1)
```
## Evaluate the model
Let's see how the model performs. Two values are returned: the loss (a number representing the error, lower is better) and the accuracy.
```
results = model.evaluate(test_data, test_labels, verbose=2)
print(results)
```
This fairly naive approach achieves an accuracy of about 87%. With more advanced approaches, the model's accuracy should get closer to 95%.
## Create a graph of accuracy and loss over time
`model.fit()` returns a `History` object that contains a dictionary with everything that happened during training:
```
history_dict = history.history
history_dict.keys()
```
There are four entries: one for each monitored metric during training and validation. We can use these entries to plot the training and validation loss and accuracy for comparison.
```
import matplotlib.pyplot as plt
acc = history_dict['accuracy']
val_acc = history_dict['val_accuracy']
loss = history_dict['loss']
val_loss = history_dict['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" stands for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# "b" stands for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear the figure
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
```
In this plot, the dots represent the training loss and accuracy, and the solid lines are the validation loss and accuracy.
Notice that the training loss *decreases* with each epoch and the training accuracy *increases* with each epoch. This is expected when using gradient descent optimization: it should minimize the desired quantity on every iteration.
This isn't the case for the validation loss and accuracy: they seem to peak after about twenty epochs. This is an example of overfitting: the model performs better on the training data than it does on data it has never seen before. After this point, the model over-optimizes and learns representations *specific* to the training data that do not *generalize* to the test data.
For this particular case, we could prevent overfitting by simply stopping the training after twenty or so epochs. Later, you will see how to do this automatically with a callback.
| github_jupyter |
## 8. Classification
[Data Science Playlist on YouTube](https://www.youtube.com/watch?v=VLKEj9EN2ew&list=PLLBUgWXdTBDg1Qgmwt4jKtVn9BWh5-zgy)
[](https://www.youtube.com/watch?v=VLKEj9EN2ew&list=PLLBUgWXdTBDg1Qgmwt4jKtVn9BWh5-zgy "Python Data Science")
**Classification** predicts *discrete labels (outcomes)* such as `yes`/`no`, `True`/`False`, or any number of discrete levels such as a letter from text recognition, or a word from speech recognition. There are two main methods for training classifiers: unsupervised and supervised learning. The difference between the two is that unsupervised learning does not use labels while supervised learning uses labels to build the classifier. The goal of unsupervised learning is to cluster input features but without labels to guide the grouping.

### Supervised Learning to Classify Numbers
A dataset that is included with sklearn is a set of 1797 images of numbers that are 64 pixels (8x8) each. There are labels with each to indicate the correct answer. A Support Vector Classifier is trained on the first half of the images.
```
from sklearn import datasets, svm
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
# train classifier
digits = datasets.load_digits()
n_samples = len(digits.images)
data = digits.images.reshape((n_samples, -1))
svc = svm.SVC(gamma=0.001)
X_train, X_test, y_train, y_test = train_test_split(
data, digits.target, test_size=0.5, shuffle=False)
svc.fit(X_train, y_train)
print('SVC Trained')
```

### Test Number Classifier
The image classifier is evaluated on 10 randomly selected images from the other half of the data set to test the training. Run the classifier test until you observe a misclassified number.
```
plt.figure(figsize=(10,4))
for i in range(10):
n = np.random.randint(int(n_samples/2),n_samples)
predict = svc.predict(digits.data[n:n+1])[0]
plt.subplot(2,5,i+1)
plt.imshow(digits.images[n], cmap=plt.cm.gray_r, interpolation='nearest')
plt.text(0,7,'Actual: ' + str(digits.target[n]),color='r')
plt.text(0,1,'Predict: ' + str(predict),color='b')
if predict==digits.target[n]:
plt.text(0,4,'Correct',color='g')
else:
plt.text(0,4,'Incorrect',color='orange')
plt.show()
```

### Classification with Supervised Learning
Select a data set option with `moons`, `circles`, or `blobs`. Run the following cell to generate the data that will be used to test the classifiers.
```
option = 'moons' # moons, circles, or blobs
n = 2000 # number of data points
X = np.random.random((n,2))
mixing = 0.0 # add random mixing element to data
xplot = np.linspace(0,1,100)
if option=='moons':
X, y = datasets.make_moons(n_samples=n,noise=0.1)
yplot = xplot*0.0
elif option=='circles':
X, y = datasets.make_circles(n_samples=n,noise=0.1,factor=0.5)
yplot = xplot*0.0
elif option=='blobs':
X, y = datasets.make_blobs(n_samples=n,centers=[[-5,3],[5,-3]],cluster_std=2.0)
yplot = xplot*0.0
# Split into train and test subsets (50% each)
XA, XB, yA, yB = train_test_split(X, y, test_size=0.5, shuffle=False)
# Plot regression results
def assess(P):
plt.figure()
plt.scatter(XB[P==1,0],XB[P==1,1],marker='^',color='blue',label='True')
plt.scatter(XB[P==0,0],XB[P==0,1],marker='x',color='red',label='False')
plt.scatter(XB[P!=yB,0],XB[P!=yB,1],marker='s',color='orange',\
alpha=0.5,label='Incorrect')
plt.legend()
```

### S.1 Logistic Regression
**Definition:** Logistic regression is a machine learning algorithm for classification. In this algorithm, the probabilities describing the possible outcomes of a single trial are modelled using a logistic function.
**Advantages:** Logistic regression is designed for this purpose (classification), and is most useful for understanding the influence of several independent variables on a single outcome variable.
**Disadvantages:** Works only when the predicted variable is binary, assumes all predictors are independent of each other, and assumes data is free of missing values.
```
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression(solver='lbfgs')
lr.fit(XA,yA)
yP = lr.predict(XB)
assess(yP)
```

### S.2 Naïve Bayes
**Definition:** The Naive Bayes algorithm is based on Bayes’ theorem with the assumption of independence between every pair of features. Naive Bayes classifiers work well in many real-world situations such as document classification and spam filtering.
**Advantages:** This algorithm requires a small amount of training data to estimate the necessary parameters. Naive Bayes classifiers are extremely fast compared to more sophisticated methods.
**Disadvantages:** Naive Bayes is known to be a bad estimator.
```
from sklearn.naive_bayes import GaussianNB
nb = GaussianNB()
nb.fit(XA,yA)
yP = nb.predict(XB)
assess(yP)
```

### S.3 Stochastic Gradient Descent
**Definition:** Stochastic gradient descent is a simple and very efficient approach to fit linear models. It is particularly useful when the number of samples is very large. It supports different loss functions and penalties for classification.
**Advantages:** Efficiency and ease of implementation.
**Disadvantages:** Requires a number of hyper-parameters and it is sensitive to feature scaling.
```
from sklearn.linear_model import SGDClassifier
sgd = SGDClassifier(loss='modified_huber', shuffle=True,random_state=101)
sgd.fit(XA,yA)
yP = sgd.predict(XB)
assess(yP)
```

### S.4 K-Nearest Neighbours
**Definition:** Neighbours based classification is a type of lazy learning as it does not attempt to construct a general internal model, but simply stores instances of the training data. Classification is computed from a simple majority vote of the k nearest neighbours of each point.
**Advantages:** This algorithm is simple to implement, robust to noisy training data, and effective if training data is large.
**Disadvantages:** Need to determine the value of `K`, and the computation cost is high as it needs to compute the distance of each instance to all the training samples. One possible solution to determine `K` is to add a feedback loop to determine the number of neighbors.
```
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(XA,yA)
yP = knn.predict(XB)
assess(yP)
```

### S.5 Decision Tree
**Definition:** Given a data of attributes together with its classes, a decision tree produces a sequence of rules that can be used to classify the data.
**Advantages:** Decision Tree is simple to understand and visualise, requires little data preparation, and can handle both numerical and categorical data.
**Disadvantages:** Decision tree can create complex trees that do not generalise well, and decision trees can be unstable because small variations in the data might result in a completely different tree being generated.
```
from sklearn.tree import DecisionTreeClassifier
dtree = DecisionTreeClassifier(max_depth=10,random_state=101,\
max_features=None,min_samples_leaf=5)
dtree.fit(XA,yA)
yP = dtree.predict(XB)
assess(yP)
```

### S.6 Random Forest
**Definition:** Random forest classifier is a meta-estimator that fits a number of decision trees on various sub-samples of datasets and uses average to improve the predictive accuracy of the model and controls over-fitting. The sub-sample size is always the same as the original input sample size but the samples are drawn with replacement.
**Advantages:** Reduction in over-fitting and random forest classifier is more accurate than decision trees in most cases.
**Disadvantages:** Slow real time prediction, difficult to implement, and complex algorithm.
```
from sklearn.ensemble import RandomForestClassifier
rfm = RandomForestClassifier(n_estimators=70,oob_score=True,\
n_jobs=1,random_state=101,max_features=None,\
min_samples_leaf=3) #change min_samples_leaf from 30 to 3
rfm.fit(XA,yA)
yP = rfm.predict(XB)
assess(yP)
```

### S.7 Support Vector Classifier
**Definition:** Support vector machine is a representation of the training data as points in space separated into categories by a clear gap that is as wide as possible. New examples are then mapped into that same space and predicted to belong to a category based on which side of the gap they fall.
**Advantages:** Effective in high dimensional spaces and uses a subset of training points in the decision function so it is also memory efficient.
**Disadvantages:** The algorithm does not directly provide probability estimates, these are calculated using an expensive five-fold cross-validation.
```
from sklearn.svm import SVC
svm = SVC(gamma='scale', C=1.0, random_state=101)
svm.fit(XA,yA)
yP = svm.predict(XB)
assess(yP)
```

### S.8 Neural Network
The `MLPClassifier` implements a multi-layer perceptron (MLP) algorithm that trains using Backpropagation.
**Definition:** A neural network is a set of neurons (activation functions) in layers that are processed sequentially to relate an input to an output.
**Advantages:** Effective in nonlinear spaces where the structure of the relationship is not linear. No prior knowledge or specialized equation structure is defined although there are different network architectures that may lead to a better result.
**Disadvantages:** Neural networks do not extrapolate well outside of the training domain. They may also require longer to train by adjusting the parameter weights to minimize a loss (objective) function. It is also more challenging to explain the outcome of the training and changes in initialization or number of epochs (iterations) may lead to different results. Too many epochs may lead to overfitting, especially if there are excess parameters beyond the minimum needed to capture the input to output relationship.

MLP trains on two arrays: array X of size (n_samples, n_features), which holds the training samples represented as floating point feature vectors; and array y of size (n_samples,), which holds the target values (class labels) for the training samples.
MLP can fit a non-linear model to the training data. clf.coefs_ contains the weight matrices that constitute the model parameters. Currently, MLPClassifier supports only the Cross-Entropy loss function, which allows probability estimates by running the predict_proba method. MLP trains using Backpropagation. More precisely, it trains using some form of gradient descent and the gradients are calculated using Backpropagation. For classification, it minimizes the Cross-Entropy loss function, giving a vector of probability estimates. MLPClassifier supports multi-class classification by applying Softmax as the output function. Further, the model supports multi-label classification in which a sample can belong to more than one class. For each class, the raw output passes through the logistic function. Values larger or equal to 0.5 are rounded to 1, otherwise to 0. For a predicted output of a sample, the indices where the value is 1 represents the assigned classes of that sample.
```
from sklearn.neural_network import MLPClassifier
clf = MLPClassifier(solver='lbfgs',alpha=1e-5,max_iter=200,activation='relu',\
hidden_layer_sizes=(10,30,10), random_state=1, shuffle=True)
clf.fit(XA,yA)
yP = clf.predict(XB)
assess(yP)
```

### Unsupervised Classification
Additional examples show the potential for unsupervised learning to classify the groups. Unsupervised learning does not use the labels (`True`/`False`), so the results may need to be switched to align with the test set with `if len(XB[yP!=yB]) > n/4: yP = 1 - yP`.

### U.1 K-Means Clustering
**Definition:** Specify how many possible clusters (or K) there are in the dataset. The algorithm then iteratively moves the K-centers and selects the datapoints that are closest to that centroid in the cluster.
**Advantages:** The most common and simplest clustering algorithm.
**Disadvantages:** Must specify the number of clusters although this can typically be determined by increasing the number of clusters until the objective function does not change significantly.
```
from sklearn.cluster import KMeans
km = KMeans(n_clusters=2)
km.fit(XA)
yP = km.predict(XB)
if len(XB[yP!=yB]) > n/4: yP = 1 - yP
assess(yP)
```

### U.2 Gaussian Mixture Model
**Definition:** Data points that exist at the boundary of clusters may simply have similar probabilities of being on either clusters. A mixture model predicts a probability instead of a hard classification such as K-Means clustering.
**Advantages:** Incorporates uncertainty into the solution.
**Disadvantages:** Uncertainty may not be desirable for some applications. This method is not as common as the K-Means method for clustering.
```
from sklearn.mixture import GaussianMixture
gmm = GaussianMixture(n_components=2)
gmm.fit(XA)
yP = gmm.predict_proba(XB) # produces probabilities
if len(XB[np.round(yP[:,0])!=yB]) > n/4: yP = 1 - yP
assess(np.round(yP[:,0]))
```

### U.3 Spectral Clustering
**Definition:** Spectral clustering is known as segmentation-based object categorization. It is a technique with roots in graph theory, where identify communities of nodes in a graph are based on the edges connecting them. The method is flexible and allows clustering of non graph data as well.
It uses information from the eigenvalues of special matrices built from the graph or the data set.
**Advantages:** Flexible approach for finding clusters when data doesn’t meet the requirements of other common algorithms.
**Disadvantages:** For large-sized graphs, the second eigenvalue of the (normalized) graph Laplacian matrix is often ill-conditioned, leading to slow convergence of iterative eigenvalue solvers. Spectral clustering is computationally expensive unless the graph is sparse and the similarity matrix can be efficiently constructed.
```
from sklearn.cluster import SpectralClustering
sc = SpectralClustering(n_clusters=2,eigen_solver='arpack',\
affinity='nearest_neighbors')
yP = sc.fit_predict(XB) # No separation between fit and predict calls
# need to fit and predict on same dataset
if len(XB[yP!=yB]) > n/4: yP = 1 - yP
assess(yP)
```

### TCLab Activity
Train a classifier to predict if the heater is on (100%) or off (0%). Generate data with 10 minutes of 1 second data. If you do not have a TCLab, use one of the sample data sets.
- [Sample Data Set 1 (10 min)](http://apmonitor.com/do/uploads/Main/tclab_data5.txt): http://apmonitor.com/do/uploads/Main/tclab_data5.txt
- [Sample Data Set 2 (60 min)](http://apmonitor.com/do/uploads/Main/tclab_data6.txt): http://apmonitor.com/do/uploads/Main/tclab_data6.txt
```
# 10 minute data collection
import tclab, time
import numpy as np
import pandas as pd
with tclab.TCLab() as lab:
n = 600; on=100; t = np.linspace(0,n-1,n)
Q1 = np.zeros(n); T1 = np.zeros(n)
Q2 = np.zeros(n); T2 = np.zeros(n)
Q1[20:41]=on; Q1[60:91]=on; Q1[150:181]=on
Q1[190:206]=on; Q1[220:251]=on; Q1[260:291]=on
Q1[300:316]=on; Q1[340:351]=on; Q1[400:431]=on
Q1[500:521]=on; Q1[540:571]=on; Q1[20:41]=on
Q1[60:91]=on; Q1[150:181]=on; Q1[190:206]=on
Q1[220:251]=on; Q1[260:291]=on
print('Time Q1 Q2 T1 T2')
for i in range(n):
T1[i] = lab.T1; T2[i] = lab.T2
lab.Q1(Q1[i])
if i%5==0:
print(int(t[i]),Q1[i],Q2[i],T1[i],T2[i])
time.sleep(1)
data = np.column_stack((t,Q1,Q2,T1,T2))
data8 = pd.DataFrame(data,columns=['Time','Q1','Q2','T1','T2'])
data8.to_csv('08-tclab.csv',index=False)
```
Use the data file `08-tclab.csv` to train and test the classifier. Select and scale (0-1) the features of the data including `T1`, `T2`, and the 1st and 2nd derivatives of `T1`. Use the measured temperatures, derivatives, and heater value label to create a classifier that predicts when the heater is on or off. Validate the classifier with new data that was not used for training. Starting code is provided below but does not include `T2` as a feature input. **Add `T2` as an input feature to the classifier. Does it improve the classifier performance?** A sketch of this change is shown after the code below.
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import gridspec
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
try:
data = pd.read_csv('08-tclab.csv')
except:
print('Warning: Unable to load 08-tclab.csv, using online data')
url = 'http://apmonitor.com/do/uploads/Main/tclab_data5.txt'
data = pd.read_csv(url)
# Input Features: Temperature and 1st / 2nd Derivatives
# Cubic polynomial fit of temperature using 10 data points
data['dT1'] = np.zeros(len(data))
data['d2T1'] = np.zeros(len(data))
for i in range(len(data)):
if i<len(data)-10:
x = data['Time'][i:i+10]-data['Time'][i]
y = data['T1'][i:i+10]
p = np.polyfit(x,y,3)
# evaluate derivatives at mid-point (5 sec)
t = 5.0
data['dT1'][i] = 3.0*p[0]*t**2 + 2.0*p[1]*t+p[2]
data['d2T1'][i] = 6.0*p[0]*t + 2.0*p[1]
else:
data['dT1'][i] = np.nan
data['d2T1'][i] = np.nan
# Remove last 10 values
X = np.array(data[['T1','dT1','d2T1']][0:-10])
y = np.array(data[['Q1']][0:-10])
# Scale data
# Input features (Temperature and 2nd derivative at 5 sec)
s1 = MinMaxScaler(feature_range=(0,1))
Xs = s1.fit_transform(X)
# Output labels (heater On / Off)
ys = [True if y[i]>50.0 else False for i in range(len(y))]
# Split into train and test subsets (50% each)
XA, XB, yA, yB = train_test_split(Xs, ys, \
test_size=0.5, shuffle=False)
# Supervised Classification
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import SGDClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
# Create supervised classification models
lr = LogisticRegression(solver='lbfgs') # Logistic Regression
nb = GaussianNB() # Naïve Bayes
sgd = SGDClassifier(loss='modified_huber', shuffle=True,\
random_state=101) # Stochastic Gradient Descent
knn = KNeighborsClassifier(n_neighbors=5) # K-Nearest Neighbors
dtree = DecisionTreeClassifier(max_depth=10,random_state=101,\
max_features=None,min_samples_leaf=5) # Decision Tree
rfm = RandomForestClassifier(n_estimators=70,oob_score=True,n_jobs=1,\
random_state=101,max_features=None,min_samples_leaf=3) # Random Forest
svm = SVC(gamma='scale', C=1.0, random_state=101) # Support Vector Classifier
clf = MLPClassifier(solver='lbfgs',alpha=1e-5,max_iter=200,\
activation='relu',hidden_layer_sizes=(10,30,10),\
random_state=1, shuffle=True) # Neural Network
models = [lr,nb,sgd,knn,dtree,rfm,svm,clf]
# Supervised learning
yP = [None]*(len(models)+3) # 3 for unsupervised learning
for i,m in enumerate(models):
m.fit(XA,yA)
yP[i] = m.predict(XB)
# Unsupervised learning modules
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture
from sklearn.cluster import SpectralClustering
km = KMeans(n_clusters=2)
gmm = GaussianMixture(n_components=2)
sc = SpectralClustering(n_clusters=2,eigen_solver='arpack',\
affinity='nearest_neighbors')
km.fit(XA)
yP[8] = km.predict(XB)
gmm.fit(XA)
yP[9] = gmm.predict_proba(XB)[:,0]
yP[10] = sc.fit_predict(XB)
plt.figure(figsize=(10,7))
gs = gridspec.GridSpec(3, 1, height_ratios=[1,1,5])
plt.subplot(gs[0])
plt.plot(data['Time']/60,data['T1'],'r-',\
label='Temperature (°C)')
plt.ylabel('T (°C)')
plt.legend()
plt.subplot(gs[1])
plt.plot(data['Time']/60,data['dT1'],'b:',\
label='dT/dt (°C/sec)')
plt.plot(data['Time']/60,data['d2T1'],'k--',\
label=r'$d^2T/dt^2$ ($°C^2/sec^2$)')
plt.ylabel('Derivatives')
plt.legend()
plt.subplot(gs[2])
plt.plot(data['Time']/60,data['Q1']/100,'k-',\
label='Heater (On=1/Off=0)')
t2 = data['Time'][len(yA):-10].values
desc = ['Logistic Regression','Naïve Bayes','Stochastic Gradient Descent',\
'K-Nearest Neighbors','Decision Tree','Random Forest',\
'Support Vector Classifier','Neural Network',\
'K-Means Clustering','Gaussian Mixture Model','Spectral Clustering']
for i in range(11):
plt.plot(t2/60,yP[i]-i-1,label=desc[i])
plt.ylabel('Heater')
plt.legend()
plt.xlabel(r'Time (min)')
plt.legend()
plt.show()
```
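One way the starting code above could be modified to include `T2` as a feature (a sketch of a possible solution, not the only one) is to add it to the feature matrix before scaling; everything downstream of `X` stays the same.
```
# Sketch: add T2 to the input feature matrix (replaces the X definition above)
X = np.array(data[['T1','T2','dT1','d2T1']][0:-10])
y = np.array(data[['Q1']][0:-10])

# Scaling, label creation, and the train/test split are unchanged
s1 = MinMaxScaler(feature_range=(0,1))
Xs = s1.fit_transform(X)
ys = [True if y[i]>50.0 else False for i in range(len(y))]
XA, XB, yA, yB = train_test_split(Xs, ys, test_size=0.5, shuffle=False)
```
Comparing the classifier scores with and without `T2` then answers whether the extra temperature measurement improves performance.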
| github_jupyter |
## Installing & importing necessary libs
```
!pip install -q transformers
import numpy as np
import pandas as pd
from sklearn import metrics
import transformers
import torch
from torch.utils.data import Dataset, DataLoader, RandomSampler, SequentialSampler
from transformers import AlbertTokenizer, AlbertModel, AlbertConfig
from tqdm.notebook import tqdm
from transformers import get_linear_schedule_with_warmup
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
n_gpu = torch.cuda.device_count()
torch.cuda.get_device_name(0)
```
## Data Preprocessing
```
df = pd.read_csv("../input/avjantahack/data/train.csv")
df['list'] = df[df.columns[3:]].values.tolist()
new_df = df[['ABSTRACT', 'list']].copy()
new_df.head()
```
## Model configurations
```
# Defining some key variables that will be used later on in the training
MAX_LEN = 512
TRAIN_BATCH_SIZE = 16
VALID_BATCH_SIZE = 8
EPOCHS = 5
LEARNING_RATE = 3e-05
tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
```
## Custom Dataset Class
```
class CustomDataset(Dataset):
def __init__(self, dataframe, tokenizer, max_len):
self.tokenizer = tokenizer
self.data = dataframe
self.abstract = dataframe.ABSTRACT
self.targets = self.data.list
self.max_len = max_len
def __len__(self):
return len(self.abstract)
def __getitem__(self, index):
abstract = str(self.abstract[index])
abstract = " ".join(abstract.split())
inputs = self.tokenizer.encode_plus(
abstract,
None,
add_special_tokens = True,
max_length = self.max_len,
pad_to_max_length = True,
return_token_type_ids=True,
truncation = True
)
ids = inputs['input_ids']
mask = inputs['attention_mask']
token_type_ids = inputs['token_type_ids']
return{
'ids': torch.tensor(ids, dtype=torch.long),
'mask': torch.tensor(mask, dtype=torch.long),
'token_type_ids': torch.tensor(token_type_ids, dtype=torch.long),
'targets': torch.tensor(self.targets[index], dtype=torch.float)
}
train_size = 0.8
train_dataset=new_df.sample(frac=train_size,random_state=200)
test_dataset=new_df.drop(train_dataset.index).reset_index(drop=True)
train_dataset = train_dataset.reset_index(drop=True)
print("FULL Dataset: {}".format(new_df.shape))
print("TRAIN Dataset: {}".format(train_dataset.shape))
print("TEST Dataset: {}".format(test_dataset.shape))
training_set = CustomDataset(train_dataset, tokenizer, MAX_LEN)
testing_set = CustomDataset(test_dataset, tokenizer, MAX_LEN)
train_params = {'batch_size': TRAIN_BATCH_SIZE,
'shuffle': True,
'num_workers': 0
}
test_params = {'batch_size': VALID_BATCH_SIZE,
'shuffle': True,
'num_workers': 0
}
training_loader = DataLoader(training_set, **train_params)
testing_loader = DataLoader(testing_set, **test_params)
```
## Albert model
```
class AlbertClass(torch.nn.Module):
def __init__(self):
super(AlbertClass, self).__init__()
self.albert = transformers.AlbertModel.from_pretrained('albert-base-v2')
self.drop = torch.nn.Dropout(0.1)
self.linear = torch.nn.Linear(768, 6)
def forward(self, ids, mask, token_type_ids):
_, output= self.albert(ids, attention_mask = mask)
output = self.drop(output)
output = self.linear(output)
return output
model = AlbertClass()
model.to(device)
```
## Hyperparameters & Loss function
```
def loss_fn(outputs, targets):
return torch.nn.BCEWithLogitsLoss()(outputs, targets)
param_optimizer = list(model.named_parameters())
no_decay = ["bias", "LayerNorm.bias", "LayerNorm.weight"]
optimizer_parameters = [
{
"params": [
p for n, p in param_optimizer if not any(nd in n for nd in no_decay)
],
"weight_decay": 0.001,
},
{
"params": [
p for n, p in param_optimizer if any(nd in n for nd in no_decay)
],
"weight_decay": 0.0,
},
]
optimizer = torch.optim.AdamW(optimizer_parameters, lr=1e-5)
num_training_steps = int(len(train_dataset) / TRAIN_BATCH_SIZE * EPOCHS)
scheduler = get_linear_schedule_with_warmup(
optimizer,
num_warmup_steps = 0,
num_training_steps = num_training_steps
)
```
## Train & Eval Functions
```
def train(epoch):
model.train()
for _,data in tqdm(enumerate(training_loader, 0), total=len(training_loader)):
ids = data['ids'].to(device, dtype = torch.long)
mask = data['mask'].to(device, dtype = torch.long)
token_type_ids = data['token_type_ids'].to(device, dtype = torch.long)
targets = data['targets'].to(device, dtype = torch.float)
outputs = model(ids, mask, token_type_ids)
optimizer.zero_grad()
loss = loss_fn(outputs, targets)
if _%1000==0:
print(f'Epoch: {epoch}, Loss: {loss.item()}')
optimizer.zero_grad()
loss.backward()
optimizer.step()
scheduler.step()
def validation(epoch):
model.eval()
fin_targets=[]
fin_outputs=[]
with torch.no_grad():
for _, data in tqdm(enumerate(testing_loader, 0), total=len(testing_loader)):
ids = data['ids'].to(device, dtype = torch.long)
mask = data['mask'].to(device, dtype = torch.long)
token_type_ids = data['token_type_ids'].to(device, dtype = torch.long)
targets = data['targets'].to(device, dtype = torch.float)
outputs = model(ids, mask, token_type_ids)
fin_targets.extend(targets.cpu().detach().numpy().tolist())
fin_outputs.extend(torch.sigmoid(outputs).cpu().detach().numpy().tolist())
return fin_outputs, fin_targets
```
## Training Model
```
MODEL_PATH = "/kaggle/working/albert-multilabel-model.bin"
best_micro = 0
for epoch in range(EPOCHS):
train(epoch)
outputs, targets = validation(epoch)
outputs = np.array(outputs) >= 0.5
accuracy = metrics.accuracy_score(targets, outputs)
f1_score_micro = metrics.f1_score(targets, outputs, average='micro')
f1_score_macro = metrics.f1_score(targets, outputs, average='macro')
print(f"Accuracy Score = {accuracy}")
print(f"F1 Score (Micro) = {f1_score_micro}")
print(f"F1 Score (Macro) = {f1_score_macro}")
if f1_score_micro > best_micro:
torch.save(model.state_dict(), MODEL_PATH)
best_micro = f1_score_micro
def predict(id, abstract):
MAX_LENGTH = 512
inputs = tokenizer.encode_plus(
abstract,
None,
add_special_tokens=True,
max_length=512,
pad_to_max_length=True,
return_token_type_ids=True,
truncation = True
)
ids = inputs['input_ids']
mask = inputs['attention_mask']
token_type_ids = inputs['token_type_ids']
ids = torch.tensor(ids, dtype=torch.long).unsqueeze(0)
mask = torch.tensor(mask, dtype=torch.long).unsqueeze(0)
token_type_ids = torch.tensor(token_type_ids, dtype=torch.long).unsqueeze(0)
ids = ids.to(device)
mask = mask.to(device)
token_type_ids = token_type_ids.to(device)
with torch.no_grad():
outputs = model(ids, mask, token_type_ids)
outputs = torch.sigmoid(outputs).squeeze()
outputs = np.round(outputs.cpu().numpy())
out = np.insert(outputs, 0, id)
return out
def submit():
test_df = pd.read_csv('../input/avjantahack/data/test.csv')
sample_submission = pd.read_csv('../input/avjantahack/data/sample_submission_UVKGLZE.csv')
y = []
for id, abstract in tqdm(zip(test_df['ID'], test_df['ABSTRACT']),
total=len(test_df)):
out = predict(id, abstract)
y.append(out)
y = np.array(y)
submission = pd.DataFrame(y, columns=sample_submission.columns).astype(int)
return submission
submission = submit()
submission
submission.to_csv('/kaggle/working/alberta-tuned-lr-ws-dr.csv', index=False)
```
| github_jupyter |
# 3D Map
While representing the configuration space in 3 dimensions isn't entirely practical, it's fun (and useful) to visualize things in 3D.
In this exercise you'll finish the implementation of `create_grid` such that a 3D grid is returned where cells containing a voxel are set to `True`. We'll then plot the result!
```
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
%matplotlib inline
plt.rcParams['figure.figsize'] = 16, 16
# This is the same obstacle data from the previous lesson.
filename = 'colliders.csv'
data = np.loadtxt(filename, delimiter=',', dtype='Float64', skiprows=2)
print(data)
def create_voxmap(data, voxel_size=5):
"""
Returns a grid representation of a 3D configuration space
based on given obstacle data.
The `voxel_size` argument sets the resolution of the voxel map.
"""
# minimum and maximum north coordinates
north_min = np.floor(np.amin(data[:, 0] - data[:, 3]))
north_max = np.ceil(np.amax(data[:, 0] + data[:, 3]))
# minimum and maximum east coordinates
east_min = np.floor(np.amin(data[:, 1] - data[:, 4]))
east_max = np.ceil(np.amax(data[:, 1] + data[:, 4]))
alt_max = np.ceil(np.amax(data[:, 2] + data[:, 5]))
# given the minimum and maximum coordinates we can
# calculate the size of the grid.
north_size = int(np.ceil((north_max - north_min))) // voxel_size
east_size = int(np.ceil((east_max - east_min))) // voxel_size
alt_size = int(alt_max) // voxel_size
    voxmap = np.zeros((north_size, east_size, alt_size), dtype=bool)
for datum in data:
x, y, z, dx, dy, dz = datum.astype(np.int32)
obstacle = np.array(((x-dx, x+dx),
(y-dy, y+dy),
(z-dz, z+dz)))
obstacle[0] = (obstacle[0] - north_min) // voxel_size
obstacle[1] = (obstacle[1] - east_min) // voxel_size
obstacle[2] = obstacle[2] // voxel_size
voxmap[obstacle[0][0]:obstacle[0][1], obstacle[1][0]:obstacle[1][1], obstacle[2][0]:obstacle[2][1]] = True
return voxmap
```
Create 3D grid.
```
voxel_size = 10
voxmap = create_voxmap(data, voxel_size)
print(voxmap.shape)
```
Plot the 3D grid.
```
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.voxels(voxmap, edgecolor='k')
ax.set_xlim(voxmap.shape[0], 0)
ax.set_ylim(0, voxmap.shape[1])
# add 100 to the height so the buildings aren't so tall
ax.set_zlim(0, voxmap.shape[2]+100//voxel_size)
plt.xlabel('North')
plt.ylabel('East')
plt.show()
```
Isn't the city pretty?
| github_jupyter |
# FloPy
## Plotting SWR Process Results
This notebook demonstrates the use of the `SwrObs` and `SwrStage`, `SwrBudget`, `SwrFlow`, and `SwrExchange`, `SwrStructure`, classes to read binary SWR Process observation, stage, budget, reach to reach flows, reach-aquifer exchange, and structure files. It demonstrates these capabilities by loading these binary file types and showing examples of plotting SWR Process data. An example showing how the simulated water surface profile at a selected time along a selection of reaches can be plotted is also presented.
```
%matplotlib inline
from IPython.display import Image
import os
import sys
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
# run installed version of flopy or add local path
try:
import flopy
except:
fpth = os.path.abspath(os.path.join('..', '..'))
sys.path.append(fpth)
import flopy
print(sys.version)
print('numpy version: {}'.format(np.__version__))
print('matplotlib version: {}'.format(mpl.__version__))
print('flopy version: {}'.format(flopy.__version__))
#Set the paths
datapth = os.path.join('..', 'data', 'swr_test')
# SWR Process binary files
files = ('SWR004.obs', 'SWR004.vel', 'SWR004.str', 'SWR004.stg', 'SWR004.flow')
```
### Load SWR Process observations
Create an instance of the `SwrObs` class and load the observation data.
```
sobj = flopy.utils.SwrObs(os.path.join(datapth, files[0]))
ts = sobj.get_data()
```
#### Plot the data from the binary SWR Process observation file
```
fig = plt.figure(figsize=(6, 12))
ax1 = fig.add_subplot(3, 1, 1)
ax1.semilogx(ts['totim']/3600., -ts['OBS1'], label='OBS1')
ax1.semilogx(ts['totim']/3600., -ts['OBS2'], label='OBS2')
ax1.semilogx(ts['totim']/3600., -ts['OBS9'], label='OBS3')
ax1.set_ylabel('Flow, in cubic meters per second')
ax1.legend()
ax = fig.add_subplot(3, 1, 2, sharex=ax1)
ax.semilogx(ts['totim']/3600., -ts['OBS4'], label='OBS4')
ax.semilogx(ts['totim']/3600., -ts['OBS5'], label='OBS5')
ax.set_ylabel('Flow, in cubic meters per second')
ax.legend()
ax = fig.add_subplot(3, 1, 3, sharex=ax1)
ax.semilogx(ts['totim']/3600., ts['OBS6'], label='OBS6')
ax.semilogx(ts['totim']/3600., ts['OBS7'], label='OBS7')
ax.set_xlim(1, 100)
ax.set_ylabel('Stage, in meters')
ax.set_xlabel('Time, in hours')
ax.legend();
```
### Load the same data from the individual binary SWR Process files
Load discharge data from the flow file. The flow file contains the simulated flow between connected reaches for each connection in the model.
```
sobj = flopy.utils.SwrFlow(os.path.join(datapth, files[1]))
times = np.array(sobj.get_times())/3600.
obs1 = sobj.get_ts(irec=1, iconn=0)
obs2 = sobj.get_ts(irec=14, iconn=13)
obs4 = sobj.get_ts(irec=4, iconn=3)
obs5 = sobj.get_ts(irec=5, iconn=4)
```
Load discharge data from the structure file. The structure file contains the simulated structure flow for each reach with a structure.
```
sobj = flopy.utils.SwrStructure(os.path.join(datapth, files[2]))
obs3 = sobj.get_ts(irec=17, istr=0)
```
Load stage data from the stage file. The flow file contains the simulated stage for each reach in the model.
```
sobj = flopy.utils.SwrStage(os.path.join(datapth, files[3]))
obs6 = sobj.get_ts(irec=13)
```
Load budget data from the budget file. The budget file contains the simulated budget for each reach group in the model. The budget file also contains the stage data for each reach group. In this case the number of reach groups equals the number of reaches in the model.
```
sobj = flopy.utils.SwrBudget(os.path.join(datapth, files[4]))
obs7 = sobj.get_ts(irec=17)
```
#### Plot the data loaded from the individual binary SWR Process files.
Note that the plots are identical to the plots generated from the binary SWR observation data.
```
fig = plt.figure(figsize=(6, 12))
ax1 = fig.add_subplot(3, 1, 1)
ax1.semilogx(times, obs1['flow'], label='OBS1')
ax1.semilogx(times, obs2['flow'], label='OBS2')
ax1.semilogx(times, -obs3['strflow'], label='OBS3')
ax1.set_ylabel('Flow, in cubic meters per second')
ax1.legend()
ax = fig.add_subplot(3, 1, 2, sharex=ax1)
ax.semilogx(times, obs4['flow'], label='OBS4')
ax.semilogx(times, obs5['flow'], label='OBS5')
ax.set_ylabel('Flow, in cubic meters per second')
ax.legend()
ax = fig.add_subplot(3, 1, 3, sharex=ax1)
ax.semilogx(times, obs6['stage'], label='OBS6')
ax.semilogx(times, obs7['stage'], label='OBS7')
ax.set_xlim(1, 100)
ax.set_ylabel('Stage, in meters')
ax.set_xlabel('Time, in hours')
ax.legend();
```
### Plot simulated water surface profiles
Simulated water surface profiles can be created using the `ModelCrossSection` class.
Several things that we need in addition to the stage data include reach lengths and bottom elevations. We load these data from an existing file.
```
sd = np.genfromtxt(os.path.join(datapth, 'SWR004.dis.ref'), names=True)
```
The contents of the file are shown in the cell below.
```
fc = open(os.path.join(datapth, 'SWR004.dis.ref')).readlines()
fc
```
Create an instance of the `SwrStage` class for SWR Process stage data.
```
sobj = flopy.utils.SwrStage(os.path.join(datapth, files[3]))
```
Create a selection condition (`iprof`) that can be used to extract data for the reaches of interest (reaches 0, 1, and 8 through 17). Use this selection condition to extract reach lengths (from `sd['RLEN']`) and the bottom elevation (from `sd['BELEV']`) for the reaches of interest. The selection condition will also be used to extract the stage data for reaches of interest.
```
iprof = sd['IRCH'] > 0
iprof[2:8] = False
dx = np.extract(iprof, sd['RLEN'])
belev = np.extract(iprof, sd['BELEV'])
```
Create a fake model instance so that the `ModelCrossSection` class can be used.
```
ml = flopy.modflow.Modflow()
dis = flopy.modflow.ModflowDis(ml, nrow=1, ncol=dx.shape[0], delr=dx, top=4.5, botm=belev.reshape(1,1,12))
```
Create an array with the x position at the downstream end of each reach, which will be used to color the plots below each reach.
```
x = np.cumsum(dx)
```
Plot simulated water surface profiles for 8 times.
```
fig = plt.figure(figsize=(12, 12))
for idx, v in enumerate([19, 29, 34, 39, 44, 49, 54, 59]):
ax = fig.add_subplot(4, 2, idx+1)
s = sobj.get_data(idx=v)
stage = np.extract(iprof, s['stage'])
xs = flopy.plot.ModelCrossSection(model=ml, line={'Row': 0})
xs.plot_fill_between(stage.reshape(1,1,12), colors=['none', 'blue'], ax=ax, edgecolors='none')
linecollection = xs.plot_grid(ax=ax, zorder=10)
ax.fill_between(np.append(0., x), y1=np.append(belev[0], belev), y2=-0.5,
facecolor='0.5', edgecolor='none', step='pre')
ax.set_title('{} hours'.format(times[v]))
ax.set_ylim(-0.5, 4.5)
```
## Summary
This notebook demonstrates flopy functionality for reading binary output generated by the SWR Process. Binary files that can be read include observations, stages, budgets, flow, reach-aquifer exchanges, and structure data. The binary stage data can also be used to create water-surface profiles.
Hope this gets you started!
| github_jupyter |
You can start working in Colab right away through [this link that I prepared in advance](https://colab.research.google.com/github/heartcored98/Standalone-DeepLearning/blob/master/Lec4/Lab6_result_report.ipynb)!
Make sure the runtime type is set to Python 3 with GPU acceleration enabled!
```
!mkdir results
import torch
import torchvision
import torchvision.transforms as transforms
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import argparse
import numpy as np
import time
from copy import deepcopy # Add Deepcopy for args
```
## Data Preparation
```
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
download=True, transform=transform)
trainset, valset = torch.utils.data.random_split(trainset, [40000, 10000])
testset = torchvision.datasets.CIFAR10(root='./data', train=False,
download=True, transform=transform)
partition = {'train': trainset, 'val':valset, 'test':testset}
```
## Model Architecture
```
class MLP(nn.Module):
def __init__(self, in_dim, out_dim, hid_dim, n_layer, act, dropout, use_bn, use_xavier):
super(MLP, self).__init__()
self.in_dim = in_dim
self.out_dim = out_dim
self.hid_dim = hid_dim
self.n_layer = n_layer
self.act = act
self.dropout = dropout
self.use_bn = use_bn
self.use_xavier = use_xavier
# ====== Create Linear Layers ====== #
self.fc1 = nn.Linear(self.in_dim, self.hid_dim)
self.linears = nn.ModuleList()
self.bns = nn.ModuleList()
for i in range(self.n_layer-1):
self.linears.append(nn.Linear(self.hid_dim, self.hid_dim))
if self.use_bn:
self.bns.append(nn.BatchNorm1d(self.hid_dim))
self.fc2 = nn.Linear(self.hid_dim, self.out_dim)
# ====== Create Activation Function ====== #
if self.act == 'relu':
self.act = nn.ReLU()
elif self.act == 'tanh':
            self.act = nn.Tanh()
elif self.act == 'sigmoid':
self.act = nn.Sigmoid()
else:
raise ValueError('no valid activation function selected!')
# ====== Create Regularization Layer ======= #
self.dropout = nn.Dropout(self.dropout)
if self.use_xavier:
self.xavier_init()
def forward(self, x):
x = self.act(self.fc1(x))
for i in range(len(self.linears)):
x = self.act(self.linears[i](x))
            if self.use_bn:
                x = self.bns[i](x)
x = self.dropout(x)
x = self.fc2(x)
return x
def xavier_init(self):
for linear in self.linears:
nn.init.xavier_normal_(linear.weight)
linear.bias.data.fill_(0.01)
net = MLP(3072, 10, 100, 4, 'relu', 0.1, True, True) # Testing Model Construction
```
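A quick sanity check of the constructed model could look like the following; the batch of random data is an assumption purely for illustration (CIFAR-10 images flattened to 3072 values).
```
# Sketch: push a dummy batch through the freshly built MLP and check the output shape
dummy = torch.randn(4, 3072)   # batch of 4 flattened 32x32x3 images
out = net(dummy)
print(out.shape)               # expected: torch.Size([4, 10])
```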
## Train, Validate, Test and Experiment
```
def train(net, partition, optimizer, criterion, args):
trainloader = torch.utils.data.DataLoader(partition['train'],
batch_size=args.train_batch_size,
shuffle=True, num_workers=2)
net.train()
correct = 0
total = 0
train_loss = 0.0
for i, data in enumerate(trainloader, 0):
        optimizer.zero_grad() # [Fix 2021-01-05] moved so .zero_grad() runs every iteration instead of once per epoch
# get the inputs
inputs, labels = data
inputs = inputs.view(-1, 3072)
inputs = inputs.cuda()
labels = labels.cuda()
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
train_loss += loss.item()
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
train_loss = train_loss / len(trainloader)
train_acc = 100 * correct / total
return net, train_loss, train_acc
def validate(net, partition, criterion, args):
valloader = torch.utils.data.DataLoader(partition['val'],
batch_size=args.test_batch_size,
shuffle=False, num_workers=2)
net.eval()
correct = 0
total = 0
val_loss = 0
with torch.no_grad():
for data in valloader:
images, labels = data
images = images.view(-1, 3072)
images = images.cuda()
labels = labels.cuda()
outputs = net(images)
loss = criterion(outputs, labels)
val_loss += loss.item()
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
val_loss = val_loss / len(valloader)
val_acc = 100 * correct / total
return val_loss, val_acc
def test(net, partition, args):
testloader = torch.utils.data.DataLoader(partition['test'],
batch_size=args.test_batch_size,
shuffle=False, num_workers=2)
net.eval()
correct = 0
total = 0
with torch.no_grad():
for data in testloader:
images, labels = data
images = images.view(-1, 3072)
images = images.cuda()
labels = labels.cuda()
outputs = net(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
test_acc = 100 * correct / total
return test_acc
def experiment(partition, args):
net = MLP(args.in_dim, args.out_dim, args.hid_dim, args.n_layer, args.act, args.dropout, args.use_bn, args.use_xavier)
net.cuda()
criterion = nn.CrossEntropyLoss()
if args.optim == 'SGD':
        optimizer = optim.SGD(net.parameters(), lr=args.lr, weight_decay=args.l2)
elif args.optim == 'RMSprop':
optimizer = optim.RMSprop(net.parameters(), lr=args.lr, weight_decay=args.l2)
elif args.optim == 'Adam':
optimizer = optim.Adam(net.parameters(), lr=args.lr, weight_decay=args.l2)
else:
raise ValueError('In-valid optimizer choice')
# ===== List for epoch-wise data ====== #
train_losses = []
val_losses = []
train_accs = []
val_accs = []
# ===================================== #
for epoch in range(args.epoch): # loop over the dataset multiple times
ts = time.time()
net, train_loss, train_acc = train(net, partition, optimizer, criterion, args)
val_loss, val_acc = validate(net, partition, criterion, args)
te = time.time()
# ====== Add Epoch Data ====== #
train_losses.append(train_loss)
val_losses.append(val_loss)
train_accs.append(train_acc)
val_accs.append(val_acc)
# ============================ #
print('Epoch {}, Acc(train/val): {:2.2f}/{:2.2f}, Loss(train/val) {:2.2f}/{:2.2f}. Took {:2.2f} sec'.format(epoch, train_acc, val_acc, train_loss, val_loss, te-ts))
test_acc = test(net, partition, args)
# ======= Add Result to Dictionary ======= #
result = {}
result['train_losses'] = train_losses
result['val_losses'] = val_losses
result['train_accs'] = train_accs
result['val_accs'] = val_accs
result['train_acc'] = train_acc
result['val_acc'] = val_acc
result['test_acc'] = test_acc
return vars(args), result
# ===================================== #
```
# Manage Experiment Result
```
import hashlib
import json
from os import listdir
from os.path import isfile, join
import pandas as pd
def save_exp_result(setting, result):
exp_name = setting['exp_name']
del setting['epoch']
del setting['test_batch_size']
hash_key = hashlib.sha1(str(setting).encode()).hexdigest()[:6]
filename = './results/{}-{}.json'.format(exp_name, hash_key)
result.update(setting)
with open(filename, 'w') as f:
json.dump(result, f)
def load_exp_result(exp_name):
dir_path = './results'
filenames = [f for f in listdir(dir_path) if isfile(join(dir_path, f)) if '.json' in f]
list_result = []
for filename in filenames:
if exp_name in filename:
with open(join(dir_path, filename), 'r') as infile:
results = json.load(infile)
list_result.append(results)
df = pd.DataFrame(list_result) # .drop(columns=[])
return df
```
## Experiment
```
# ====== Random Seed Initialization ====== #
seed = 123
np.random.seed(seed)
torch.manual_seed(seed)
parser = argparse.ArgumentParser()
args = parser.parse_args("")
args.exp_name = "exp1_n_layer_hid_dim"
# ====== Model Capacity ====== #
args.in_dim = 3072
args.out_dim = 10
args.hid_dim = 100
args.act = 'relu'
# ====== Regularization ======= #
args.dropout = 0.2
args.use_bn = True
args.l2 = 0.00001
args.use_xavier = True
# ====== Optimizer & Training ====== #
args.optim = 'RMSprop' #'RMSprop' #SGD, RMSprop, ADAM...
args.lr = 0.0015
args.epoch = 10
args.train_batch_size = 256
args.test_batch_size = 1024
# ====== Experiment Variable ====== #
name_var1 = 'n_layer'
name_var2 = 'hid_dim'
list_var1 = [1, 2, 3]
list_var2 = [500, 300]
for var1 in list_var1:
for var2 in list_var2:
setattr(args, name_var1, var1)
setattr(args, name_var2, var2)
print(args)
setting, result = experiment(partition, deepcopy(args))
save_exp_result(setting, result)
import seaborn as sns
import matplotlib.pyplot as plt
df = load_exp_result('exp1')
fig, ax = plt.subplots(1, 3)
fig.set_size_inches(15, 6)
sns.set_style("darkgrid", {"axes.facecolor": ".9"})
sns.barplot(x='n_layer', y='train_acc', hue='hid_dim', data=df, ax=ax[0])
sns.barplot(x='n_layer', y='val_acc', hue='hid_dim', data=df, ax=ax[1])
sns.barplot(x='n_layer', y='test_acc', hue='hid_dim', data=df, ax=ax[2])
var1 = 'n_layer'
var2 = 'hid_dim'
df = load_exp_result('exp1')
list_v1 = df[var1].unique()
list_v2 = df[var2].unique()
list_data = []
for value1 in list_v1:
for value2 in list_v2:
row = df.loc[df[var1]==value1]
row = row.loc[df[var2]==value2]
train_losses = list(row.train_losses)[0]
val_losses = list(row.val_losses)[0]
for epoch, train_loss in enumerate(train_losses):
list_data.append({'type':'train', 'loss':train_loss, 'epoch':epoch, var1:value1, var2:value2})
for epoch, val_loss in enumerate(val_losses):
list_data.append({'type':'val', 'loss':val_loss, 'epoch':epoch, var1:value1, var2:value2})
df = pd.DataFrame(list_data)
g = sns.FacetGrid(df, row=var2, col=var1, hue='type', margin_titles=True, sharey=False)
g = g.map(plt.plot, 'epoch', 'loss', marker='.')
g.add_legend()
g.fig.suptitle('Train loss vs Val loss')
plt.subplots_adjust(top=0.89)
var1 = 'n_layer'
var2 = 'hid_dim'
df = load_exp_result('exp1')
list_v1 = df[var1].unique()
list_v2 = df[var2].unique()
list_data = []
for value1 in list_v1:
for value2 in list_v2:
row = df.loc[df[var1]==value1]
row = row.loc[df[var2]==value2]
train_accs = list(row.train_accs)[0]
val_accs = list(row.val_accs)[0]
test_acc = list(row.test_acc)[0]
for epoch, train_acc in enumerate(train_accs):
list_data.append({'type':'train', 'Acc':train_acc, 'test_acc':test_acc, 'epoch':epoch, var1:value1, var2:value2})
for epoch, val_acc in enumerate(val_accs):
list_data.append({'type':'val', 'Acc':val_acc, 'test_acc':test_acc, 'epoch':epoch, var1:value1, var2:value2})
df = pd.DataFrame(list_data)
g = sns.FacetGrid(df, row=var2, col=var1, hue='type', margin_titles=True, sharey=False)
g = g.map(plt.plot, 'epoch', 'Acc', marker='.')
def show_acc(x, y, metric, **kwargs):
plt.scatter(x, y, alpha=0.3, s=1)
metric = "Test Acc: {:1.3f}".format(list(metric.values)[0])
plt.text(0.05, 0.95, metric, horizontalalignment='left', verticalalignment='center', transform=plt.gca().transAxes, bbox=dict(facecolor='yellow', alpha=0.5, boxstyle="round,pad=0.1"))
g = g.map(show_acc, 'epoch', 'Acc', 'test_acc')
g.add_legend()
g.fig.suptitle('Train Accuracy vs Val Accuracy')
plt.subplots_adjust(top=0.89)
```
| github_jupyter |
## 1. Visualizing what the DataGeneratorHomographyNet module does
```
import glob
import os
import cv2
import numpy as np
from dataGenerator import DataGeneratorHomographyNet
img_dir = os.path.join(os.path.expanduser("~"), "/home/nvidia/test2017")
img_ext = ".jpg"
img_paths = glob.glob(os.path.join(img_dir, '*' + img_ext))
dg = DataGeneratorHomographyNet(img_paths, input_dim=(240, 240))
data, label = dg.__getitem__(0)
for idx in range(dg.batch_size):
cv2.imshow("orig", data[idx, :, :, 0])
cv2.imshow("transformed", data[idx, :, :, 1])
cv2.waitKey(0)
```
## 2. Start training
```
import os
import glob
import datetime
import pandas as pd
import matplotlib.pyplot as plt
import keras
from keras.callbacks import ModelCheckpoint
from sklearn.model_selection import train_test_split
import tensorflow as tf
from homographyNet import HomographyNet
import dataGenerator as dg
keras.__version__
batch_size = 2
# verbose takes 0, 1, or 2: 0 = silent, 1 = progress bar, 2 = one line of output per epoch
verbose = 1
#Epoch
nb_epo = 150
# start timing
start_ts = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
# directory of images used for training
data_path = "/home/nvidia/test2017"
# directory where the model is saved
model_dir = "/home/nvidia"
img_dir = os.path.join(os.path.expanduser("~"), data_path)
model_dir = os.path.join(os.path.expanduser("~"), model_dir, start_ts)
# create a directory named with the timestamp
if not os.path.exists(model_dir):
os.makedirs(model_dir)
img_ext = ".jpg"
# get the paths of all images
img_paths = glob.glob(os.path.join(img_dir, '*' + img_ext))
input_size = (360, 360, 2)
# split into training and validation sets; keep the validation set small, otherwise each epoch takes too long to finish
train_idx, val_idx = train_test_split(img_paths, test_size=0.01)
# training data generator
train_dg = dg.DataGeneratorHomographyNet(train_idx, input_dim=input_size[0:2], batch_size=batch_size)
# validation data generator (the ground-truth labels)
val_dg = dg.DataGeneratorHomographyNet(val_idx, input_dim=input_size[0:2], batch_size=batch_size)
# To the neural network, this odd-looking stacked image is the input; it learns the homography matrix from the left and right parts of the image on its own. Neat, right?
# set up the network input head
homo_net = HomographyNet(input_size)
# instantiate the network architecture
model = homo_net.build_model()
# print the model summary
model.summary()
# checkpoint callback; no TensorBoard callback here, since watching the printed loss is enough
checkpoint = ModelCheckpoint(
os.path.join(model_dir, 'model.h5'),
monitor='val_loss',
verbose=verbose,
save_best_only=True,
save_weights_only=False,
mode='auto'
)
# simpler to redefine the value here than to change it above
# start training
# without steps_per_epoch=32, every epoch runs over the full dataset
history = model.fit_generator(train_dg,
validation_data = val_dg,
#steps_per_epoch = 32,
callbacks = [checkpoint],
epochs = 15,
verbose = 1)
```
```
# plot the whole training history
history_df = pd.DataFrame(history.history)
history_df.to_csv(os.path.join(model_dir, 'history.csv'))
history_df[['loss', 'val_loss']].plot()
history_df[['mean_squared_error', 'val_mean_squared_error']].plot()
plt.show()
```
## Prediction & Evaluation
```
TODO
```
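The prediction and evaluation cell is left as a TODO. A minimal sketch of what it might contain, assuming the best checkpoint was saved as `model.h5` inside `model_dir` and that the generator labels are the homography offsets predicted by the network:
```
# Sketch: load the best checkpoint and evaluate it on one validation batch
import os
import numpy as np
from keras.models import load_model

best_model = load_model(os.path.join(model_dir, 'model.h5'))

# one batch of (stacked image pairs, ground-truth offsets) from the validation generator
data_batch, label_batch = val_dg.__getitem__(0)
pred = best_model.predict(data_batch)

# mean absolute error between predicted and true offsets (assumed label format)
print('mean absolute offset error:', np.mean(np.abs(pred - label_batch)))
```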
| github_jupyter |
# Diamond Prices: Model Tuning and Improving Performance
#### Importing libraries
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import os
pd.options.mode.chained_assignment = None
%matplotlib inline
```
#### Loading the dataset
```
DATA_DIR = '../data'
FILE_NAME = 'diamonds.csv'
data_path = os.path.join(DATA_DIR, FILE_NAME)
diamonds = pd.read_csv(data_path)
```
#### Preparing the dataset
```
## Preparation done from Chapter 2
diamonds = diamonds.loc[(diamonds['x']>0) | (diamonds['y']>0)]
diamonds.loc[11182, 'x'] = diamonds['x'].median()
diamonds.loc[11182, 'z'] = diamonds['z'].median()
diamonds = diamonds.loc[~((diamonds['y'] > 30) | (diamonds['z'] > 30))]
diamonds = pd.concat([diamonds, pd.get_dummies(diamonds['cut'], prefix='cut', drop_first=True)], axis=1)
diamonds = pd.concat([diamonds, pd.get_dummies(diamonds['color'], prefix='color', drop_first=True)], axis=1)
diamonds = pd.concat([diamonds, pd.get_dummies(diamonds['clarity'], prefix='clarity', drop_first=True)], axis=1)
## Dimensionality reduction
from sklearn.decomposition import PCA
pca = PCA(n_components=1, random_state=123)
diamonds['dim_index'] = pca.fit_transform(diamonds[['x','y','z']])
diamonds.drop(['x','y','z'], axis=1, inplace=True)
diamonds.columns
```
#### Train-test split
```
X = diamonds.drop(['cut','color','clarity','price'], axis=1)
y = diamonds['price']
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=7)
```
#### Standarization: centering and scaling
```
numerical_features = ['carat', 'depth', 'table', 'dim_index']
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(X_train[numerical_features])
X_train.loc[:, numerical_features] = scaler.fit_transform(X_train[numerical_features])
X_test.loc[:, numerical_features] = scaler.transform(X_test[numerical_features])
```
## Optimizing a single hyper-parameter
```
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.1, random_state=13)
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_absolute_error
candidates = np.arange(4,16)
mae_metrics = []
for k in candidates:
model = KNeighborsRegressor(n_neighbors=k, weights='distance', metric='minkowski', leaf_size=50, n_jobs=4)
model.fit(X_train, y_train)
y_pred = model.predict(X_val)
metric = mean_absolute_error(y_true=y_val, y_pred=y_pred)
mae_metrics.append(metric)
fig, ax = plt.subplots(figsize=(8,5))
ax.plot(candidates, mae_metrics, "o-")
ax.set_xlabel('Hyper-parameter K', fontsize=14)
ax.set_ylabel('MAE', fontsize=14)
ax.set_xticks(candidates)
ax.grid();
```
#### Recalculating train-set split
```
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=7)
scaler = StandardScaler()
scaler.fit(X_train[numerical_features])
X_train.loc[:, numerical_features] = scaler.fit_transform(X_train[numerical_features])
X_test.loc[:, numerical_features] = scaler.transform(X_test[numerical_features])
```
#### Optimizing with cross-validation
```
from sklearn.model_selection import cross_val_score
candidates = np.arange(4,16)
mean_mae = []
std_mae = []
for k in candidates:
model = KNeighborsRegressor(n_neighbors=k, weights='distance', metric='minkowski', leaf_size=50, n_jobs=4)
cv_results = cross_val_score(model, X_train, y_train, scoring='neg_mean_absolute_error', cv=10)
mean_score, std_score = -1*cv_results.mean(), cv_results.std()
mean_mae.append(mean_score)
std_mae.append(std_score)
fig, ax = plt.subplots(figsize=(8,5))
ax.plot(candidates, mean_mae, "o-")
ax.set_xlabel('Hyper-parameter K', fontsize=14)
ax.set_ylabel('Mean MAE', fontsize=14)
ax.set_xticks(candidates)
ax.grid();
fig, ax = plt.subplots(figsize=(8,5))
ax.plot(candidates, std_mae, "o-")
ax.set_xlabel('Hyper-parameter K', fontsize=14)
ax.set_ylabel('Standard deviation of MAE', fontsize=14)
ax.set_xticks(candidates)
ax.grid();
```
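As an alternative to the manual loop, scikit-learn's `GridSearchCV` can run the same cross-validated search over `K` and keep track of the scores; this is only a sketch of the equivalent call, not part of the original analysis.
```
# Sketch: the same K search with GridSearchCV, scored by negated MAE
from sklearn.model_selection import GridSearchCV

param_grid = {'n_neighbors': np.arange(4, 16)}
knn = KNeighborsRegressor(weights='distance', metric='minkowski', leaf_size=50, n_jobs=4)
grid = GridSearchCV(knn, param_grid, scoring='neg_mean_absolute_error', cv=10)
grid.fit(X_train, y_train)
print("Best K:", grid.best_params_['n_neighbors'])
print("Best mean MAE:", -grid.best_score_)
```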
# Improving Performance
## Improving our diamond price predictions
### Fitting a neural network
```
from keras.models import Sequential
from keras.layers import Dense
n_input = X_train.shape[1]
n_hidden1 = 32
n_hidden2 = 16
n_hidden3 = 8
nn_reg = Sequential()
nn_reg.add(Dense(units=n_hidden1, activation='relu', input_shape=(n_input,)))
nn_reg.add(Dense(units=n_hidden2, activation='relu'))
nn_reg.add(Dense(units=n_hidden3, activation='relu'))
# output layer
nn_reg.add(Dense(units=1, activation=None))
batch_size = 32
n_epochs = 40
nn_reg.compile(loss='mean_absolute_error', optimizer='adam')
nn_reg.fit(X_train, y_train, epochs=n_epochs, batch_size=batch_size, validation_split=0.05)
y_pred = nn_reg.predict(X_test).flatten()
mae_neural_net = mean_absolute_error(y_test, y_pred)
print("MAE Neural Network: {:0.2f}".format(mae_neural_net))
```
### Transforming the target
```
diamonds['price'].hist(bins=25, ec='k', figsize=(8,5))
plt.title("Distribution of diamond prices", fontsize=16)
plt.grid(False);
y_train = np.log(y_train)
pd.Series(y_train).hist(bins=25, ec='k', figsize=(8,5))
plt.title("Distribution of log diamond prices", fontsize=16)
plt.grid(False);
nn_reg = Sequential()
nn_reg.add(Dense(units=n_hidden1, activation='relu', input_shape=(n_input,)))
nn_reg.add(Dense(units=n_hidden2, activation='relu'))
nn_reg.add(Dense(units=n_hidden3, activation='relu'))
# output layer
nn_reg.add(Dense(units=1, activation=None))
batch_size = 32
n_epochs = 40
nn_reg.compile(loss='mean_absolute_error', optimizer='adam')
nn_reg.fit(X_train, y_train, epochs=n_epochs, batch_size=batch_size, validation_split=0.05)
y_pred = nn_reg.predict(X_test).flatten()
y_pred = np.exp(y_pred)
mae_neural_net2 = mean_absolute_error(y_test, y_pred)
print("MAE Neural Network (modified target): {:0.2f}".format(mae_neural_net2))
100*(mae_neural_net - mae_neural_net2)/mae_neural_net2
```
#### Analyzing the results
```
fig, ax = plt.subplots(figsize=(8,5))
residuals = y_test - y_pred
ax.scatter(y_test, residuals, s=3)
ax.set_title('Residuals vs. Observed Prices', fontsize=16)
ax.set_xlabel('Observed prices', fontsize=14)
ax.set_ylabel('Residuals', fontsize=14)
ax.grid();
mask_7500 = y_test <=7500
mae_neural_less_7500 = mean_absolute_error(y_test[mask_7500], y_pred[mask_7500])
print("MAE considering price <= 7500: {:0.2f}".format(mae_neural_less_7500))
fig, ax = plt.subplots(figsize=(8,5))
percent_residuals = (y_test - y_pred)/y_test
ax.scatter(y_test, percent_residuals, s=3)
ax.set_title('Percent residuals vs. Observed Prices', fontsize=16)
ax.set_xlabel('Observed prices', fontsize=14)
ax.set_ylabel('Percent residuals', fontsize=14)
ax.axhline(y=0.15, color='r'); ax.axhline(y=-0.15, color='r');
ax.grid();
```
| github_jupyter |
# Loads pre-trained model and get prediction on validation samples
### 1. Info
Please provide path to the relevant config file
```
config_file_path = "../configs/pretrained/config_model1.json"
```
### 2. Importing required modules
```
import os
import cv2
import sys
import importlib
import torch
import torchvision
import numpy as np
sys.path.insert(0, "../")
# imports for displaying a video an IPython cell
import io
import base64
from IPython.display import HTML
from data_parser import WebmDataset
from data_loader_av import VideoFolder
from models.multi_column import MultiColumn
from transforms_video import *
from utils import load_json_config, remove_module_from_checkpoint_state_dict
from pprint import pprint
```
### 3. Loading configuration file, model definition and its path
```
# Load config file
config = load_json_config(config_file_path)
# set column model
column_cnn_def = importlib.import_module("{}".format(config['conv_model']))
model_name = config["model_name"]
print("=> Name of the model -- {}".format(model_name))
# checkpoint path to a trained model
checkpoint_path = os.path.join("../", config["output_dir"], config["model_name"], "model_best.pth.tar")
print("=> Checkpoint path --> {}".format(checkpoint_path))
```
### 3. Load model
_Note: without cuda() for ease_
```
model = MultiColumn(config['num_classes'], column_cnn_def.Model, int(config["column_units"]))
model.eval();
print("=> loading checkpoint")
checkpoint = torch.load(checkpoint_path)
checkpoint['state_dict'] = remove_module_from_checkpoint_state_dict(
checkpoint['state_dict'])
model.load_state_dict(checkpoint['state_dict'])
print("=> loaded checkpoint '{}' (epoch {})"
.format(checkpoint_path, checkpoint['epoch']))
```
### 4. Load data
```
# Center crop videos during evaluation
transform_eval_pre = ComposeMix([
[Scale(config['input_spatial_size']), "img"],
[torchvision.transforms.ToPILImage(), "img"],
[torchvision.transforms.CenterCrop(config['input_spatial_size']), "img"]
])
transform_post = ComposeMix([
[torchvision.transforms.ToTensor(), "img"],
[torchvision.transforms.Normalize(
mean=[0.485, 0.456, 0.406], # default values for imagenet
std=[0.229, 0.224, 0.225]), "img"]
])
val_data = VideoFolder(root=config['data_folder'],
json_file_input=config['json_data_val'],
json_file_labels=config['json_file_labels'],
clip_size=config['clip_size'],
nclips=config['nclips_val'],
step_size=config['step_size_val'],
is_val=True,
transform_pre=transform_eval_pre,
transform_post=transform_post,
get_item_id=True,
)
dict_two_way = val_data.classes_dict
```
### 5. Get predictions
#### 5.1. Select random sample (or specify the index)
```
selected_indx = np.random.randint(len(val_data))
# selected_indx = 136
```
#### 5.2 Get data in required format
```
input_data, target, item_id = val_data[selected_indx]
input_data = input_data.unsqueeze(0)
print("Id of the video sample = {}".format(item_id))
print("True label --> {} ({})".format(target, dict_two_way[target]))
if config['nclips_val'] > 1:
input_var = list(input_data.split(config['clip_size'], 2))
for idx, inp in enumerate(input_var):
input_var[idx] = torch.autograd.Variable(inp)
else:
input_var = [torch.autograd.Variable(input_data)]
```
#### 5.3 Compute output from the model
```
output = model(input_var).squeeze(0)
output = torch.nn.functional.softmax(output, dim=0)
# compute top5 predictions
pred_prob, pred_top5 = output.data.topk(5)
pred_prob = pred_prob.numpy()
pred_top5 = pred_top5.numpy()
```
#### 5.4 Visualize predictions
```
print("Id of the video sample = {}".format(item_id))
print("True label --> {} ({})".format(target, dict_two_way[target]))
print("\nTop-5 Predictions:")
for i, pred in enumerate(pred_top5):
print("Top {} :== {}. Prob := {:.2f}%".format(i + 1, dict_two_way[pred], pred_prob[i] * 100))
path_to_vid = os.path.join(config["data_folder"], item_id + ".webm")
video = io.open(path_to_vid, 'r+b').read()
encoded = base64.b64encode(video)
HTML(data='''<video alt="test" controls>
<source src="data:video/mp4;base64,{0}" type="video/mp4" />
</video>'''.format(encoded.decode('ascii')))
```
| github_jupyter |
# Bar charts
This is 'abusing' the scatter object to create a 3d bar chart
```
import ipyvolume as ipv
import numpy as np
# set up data similar to animation notebook
u_scale = 10
Nx, Ny = 30, 15
u = np.linspace(-u_scale, u_scale, Nx)
v = np.linspace(-u_scale, u_scale, Ny)
x, y = np.meshgrid(u, v, indexing='ij')
r = np.sqrt(x**2+y**2)
x = x.flatten()
y = y.flatten()
r = r.flatten()
time = np.linspace(0, np.pi*2, 15)
z = np.array([(np.cos(r + t) * np.exp(-r/5)) for t in time])
zz = z
fig = ipv.figure()
s = ipv.scatter(x, 0, y, aux=zz, marker="sphere")
dx = u[1] - u[0]
dy = v[1] - v[0]
# make the x and z lim half a 'box' larger
ipv.xlim(-u_scale-dx/2, u_scale+dx/2)
ipv.zlim(-u_scale-dx/2, u_scale+dx/2)
ipv.ylim(-1.2, 1.2)
ipv.show()
```
We now make boxes, that fit exactly in the volume, by giving them a size of 1, in domain coordinates (so 1 unit as read of by the x-axis etc)
```
# make the size 1, in domain coordinates (so 1 unit as read of by the x-axis etc)
s.geo = 'box'
s.size = 1
s.size_x_scale = fig.scales['x']
s.size_y_scale = fig.scales['y']
s.size_z_scale = fig.scales['z']
s.shader_snippets = {'size':
'size_vector.y = SCALE_SIZE_Y(aux_current); '
}
```
Using a shader snippet (that runs on the GPU), we set the y size equal to the aux value. However, since the box has size 1 around the origin of (0,0,0), we need to translate it up in the y direction by 0.5.
```
s.shader_snippets = {'size':
'size_vector.y = SCALE_SIZE_Y(aux_current) - SCALE_SIZE_Y(0.0) ; '
}
s.geo_matrix = [dx, 0, 0, 0, 0, 1, 0, 0, 0, 0, dy, 0, 0.0, 0.5, 0, 1]
```
Since we see the boxes with negative sizes inside out, we made the material double sided
```
# since we see the boxes with negative sizes inside out, we made the material double sided
s.material.side = "DoubleSide"
# Now also include, color, which containts rgb values
color = np.array([[np.cos(r + t), 1-np.abs(z[i]), 0.1+z[i]*0] for i, t in enumerate(time)])
color = np.transpose(color, (0, 2, 1)) # flip the last axes
s.color = color
ipv.animation_control(s, interval=200)
```
# Spherical bar charts
```
# Create spherical coordinates
u = np.linspace(0, 1, Nx)
v = np.linspace(0, 1, Ny)
u, v = np.meshgrid(u, v, indexing='ij')
phi = u * 2 * np.pi
theta = v * np.pi
radius = 1
xs = radius * np.cos(phi) * np.sin(theta)
ys = radius * np.sin(phi) * np.sin(theta)
zs = radius * np.cos(theta)
xs = xs.flatten()
ys = ys.flatten()
zs = zs.flatten()
fig = ipv.figure()
# we use the coordinates as the normals, and thus direction
s = ipv.scatter(xs, ys, zs, vx=xs, vy=ys, vz=zs, aux=zz, color=color, marker="cylinder_hr")
ipv.xyzlim(2)
ipv.show()
ipv.animation_control(s, interval=200)
import bqplot
# the aux range is from -1 to 1, but if we put 0 as min, negative values will go inside
# the max determines the 'height' of the bars
aux_scale = bqplot.LinearScale(min=0, max=5)
s.aux_scale = aux_scale
s.shader_snippets = {'size':
'''float sc = (SCALE_AUX(aux_current) - SCALE_AUX(0.0)); size_vector.y = sc;
'''}
s.material.side = "DoubleSide"
s.size = 2
s.geo_matrix = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0.0, 0.5, 0, 1]
ipv.style.box_off()
ipv.style.axes_off()
```
[screenshot](screenshot/bars.gif)
| github_jupyter |
# Emukit tutorials
Emukit tutorials can be added and used through the links below. The goal of each of these tutorials is to explain a particular functionality of the Emukit project. These tutorials are stand-alone notebooks that don't require any extra files and fully sit on Emukit components (apart from the creation of the model).
Some tutorials have been written with the purpose of explaining scientific concepts and can be used for learning about different topics in emulation and uncertainty quantification. Other tutorials are short guides that describe specific features of the library.
Another great resource for learning Emukit is the [examples](../emukit/examples) directory, which contains more elaborate modules focused either on the implementation of a new method with Emukit components or on the analysis and solution of a specific problem.
### Getting Started
Tutorials in this section will get you up and running with Emukit as quickly as possible.
* [5 minutes introduction to Emukit](Emukit-tutorial-intro.ipynb)
* [Philosophy and Basic use of the library](Emukit-tutorial-basic-use-of-the-library.ipynb)
### Scientific tutorials
Tutorials in this section will teach you about the theoretical foundations of surrogate optimization using Emukit.
* [Introduction to Bayesian optimization](Emukit-tutorial-Bayesian-optimization-introduction.ipynb)
* [Introduction to multi-fidelity Gaussian processes](Emukit-tutorial-multi-fidelity.ipynb)
* [Introduction to sensitivity analysis](Emukit-tutorial-sensitivity-montecarlo.ipynb)
* [Introduction to Bayesian Quadrature](Emukit-tutorial-Bayesian-quadrature-introduction.ipynb)
* [Introduction to Experimental Design](Emukit-tutorial-experimental-design-introduction.ipynb)
### Features tutorials
Tutorials in this section will give you code snippets and explanations of various practical features included in the Emukit project.
* [Bayesian optimization with external evaluation of the objective](Emukit-tutorial-bayesian-optimization-external-objective-evaluation.ipynb)
* [Bayesian optimization with context variables](Emukit-tutorial-bayesian-optimization-context-variables.ipynb)
* [Learn how to to combine an acquisition function (entropy search) with a multi-source (fidelity) Gaussian process](Emukit-tutorial-multi-fidelity-bayesian-optimization.ipynb)
* [How to benchmark several Bayesian optimization methods with Emukit](Emukit-tutorial-bayesian-optimization-benchmark.ipynb)
* [How to perform Bayesian optimization with non-linear constraints](Emukit-tutorial-constrained-optimization.ipynb)
* [Bayesian optimization integrating the hyper-parameters of the model](Emukit-tutorial-bayesian-optimization-integrating-model-hyperparameters.ipynb)
* [How to use custom model](Emukit-tutorial-custom-model.ipynb)
* [How to select neural network hyperparameters: categorical variables in Emukit](Emukit-tutorial-select-neural-net-hyperparameters.ipynb)
* [How to parallelize external objective function evaluations in Bayesian optimization](Emukit-tutorial-parallel-eval-of-obj-fun.ipynb)
## Contribution guide
Community contributions are vital to the success of any open source project. [Tutorials](Emukit-tutorial-how-to-write-a-notebook.ipynb) and [examples](https://github.com/emukit/emukit/tree/main/emukit/examples) are a great way to spread what you have learned about Emukit across the community and an excellent way to showcase new features. If you want to contribute with a new tutorial please follow [these steps](Emukit-tutorial-how-to-write-a-notebook.ipynb).
We also welcome feedback, so if there is any aspect of Emukit that we can improve, please [raise an issue](https://github.com/EmuKit/emukit/issues/new)!
| github_jupyter |
```
%run ../setup/nb_setup
%matplotlib inline
```
# Compute a Galactic orbit for a star using Gaia data
Author(s): Adrian Price-Whelan
## Learning goals
In this tutorial, we will retrieve the sky coordinates, astrometry, and radial velocity for a star — [Kepler-444](https://en.wikipedia.org/wiki/Kepler-444) — and compute its orbit in the default Milky Way mass model implemented in Gala. We will compare the orbit of Kepler-444 to the orbit of the Sun and a random sample of nearby stars.
### Notebook Setup and Package Imports
```
import astropy.coordinates as coord
import astropy.units as u
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from pyia import GaiaData
# Gala
import gala.dynamics as gd
import gala.potential as gp
```
## Define a Galactocentric Coordinate Frame
We will start by defining a Galactocentric coordinate system using `astropy.coordinates`. We will adopt the latest parameter set assumptions for the solar Galactocentric position and velocity as implemented in Astropy, but note that these parameters are customizable by passing parameters into the `Galactocentric` class below (e.g., you could change the sun-galactic center distance by setting `galcen_distance=...`).
```
with coord.galactocentric_frame_defaults.set("v4.0"):
galcen_frame = coord.Galactocentric()
galcen_frame
```
## Define the Solar Position and Velocity
In this coordinate system, the sun is along the $x$-axis (at a negative $x$ value), and the Galactic rotation at this position is in the $+y$ direction. The 3D position of the sun is therefore given by:
```
sun_xyz = u.Quantity(
[-galcen_frame.galcen_distance, 0 * u.kpc, galcen_frame.z_sun] # x,y,z
)
```
We can combine this with the solar velocity vector (defined in the `astropy.coordinates.Galactocentric` frame) to define the sun's phase-space position, which we will use as initial conditions shortly to compute the orbit of the Sun:
```
sun_vxyz = galcen_frame.galcen_v_sun
sun_vxyz
sun_w0 = gd.PhaseSpacePosition(pos=sun_xyz, vel=sun_vxyz)
```
To compute the sun's orbit, we need to specify a mass model for the Galaxy. Here, we will use the default Milky Way mass model implemented in Gala, which is defined in detail in the Gala documentation: [Defining a Milky Way model](define-milky-way-model.html). Here, we will initialize the potential model with default parameters:
```
mw_potential = gp.MilkyWayPotential()
mw_potential
```
This potential is composed of four mass components meant to represent simple models of the different structural components of the Milky Way:
```
for k, pot in mw_potential.items():
print(f"{k}: {pot!r}")
```
With a potential model for the Galaxy and initial conditions for the sun, we can now compute the Sun's orbit using the default integrator (Leapfrog integration): We will compute the orbit for 4 Gyr, which is about 16 orbital periods.
```
sun_orbit = mw_potential.integrate_orbit(sun_w0, dt=0.5 * u.Myr, t1=0, t2=4 * u.Gyr)
```
Let's plot the Sun's orbit in 3D to get a feel for the geometry of the orbit:
```
fig, ax = sun_orbit.plot_3d()
lim = (-12, 12)
ax.set(xlim=lim, ylim=lim, zlim=lim)
```
## Retrieve Gaia Data for Kepler-444
As a comparison, we will compute the orbit of the exoplanet-hosting star "Kepler-444." To get Gaia data for this star, we first have to retrieve its sky coordinates so that we can do a positional cross-match query on the Gaia catalog. We can retrieve the sky position of Kepler-444 from Simbad using the `SkyCoord.from_name()` classmethod, which queries Simbad under the hood to resolve the name:
```
star_sky_c = coord.SkyCoord.from_name("Kepler-444")
star_sky_c
```
We happen to know a priori that Kepler-444 has a large proper motion, so the sky position reported by Simbad could be off from the Gaia sky position (epoch=2016) by many arcseconds. To run and retrieve the Gaia data, we will use the [pyia](http://pyia.readthedocs.io/) package: We can pass in an ADQL query, which `pyia` uses to query the Gaia science archive using `astroquery`, and returns the data as a `pyia.GaiaData` object. To run the query, we will do a sky position cross-match with a large positional tolerance by setting the cross-match radius to 15 arcseconds, but we will take the brightest cross-matched source within this region as our match:
```
star_gaia = GaiaData.from_query(
f"""
SELECT TOP 1 * FROM gaiaedr3.gaia_source
WHERE 1=CONTAINS(
POINT('ICRS', {star_sky_c.ra.degree}, {star_sky_c.dec.degree}),
CIRCLE('ICRS', ra, dec, {(15*u.arcsec).to_value(u.degree)})
)
ORDER BY phot_g_mean_mag
"""
)
star_gaia
```
We will assume (and hope!) that this source is Kepler-444, but we know that it is fairly bright compared to a typical Gaia source, so we should be safe.
We can now use the returned `pyia.GaiaData` object to retrieve an astropy `SkyCoord` object with all of the position and velocity measurements taken from the Gaia archive record for this source:
```
star_gaia_c = star_gaia.get_skycoord()
star_gaia_c
```
To compute this star's Galactic orbit, we need to convert its observed, Heliocentric (actually solar system barycentric) data into the Galactocentric coordinate frame we defined above. To do this, we will use the `astropy.coordinates` transformation framework using the `.transform_to()` method, and we will pass in the `Galactocentric` coordinate frame we defined above:
```
star_galcen = star_gaia_c.transform_to(galcen_frame)
star_galcen
```
Let's print out the Cartesian position and velocity for Kepler-444:
```
print(star_galcen.cartesian)
print(star_galcen.velocity)
```
Now with Galactocentric position and velocity components for Kepler-444, we can create Gala initial conditions and compute its orbit on the time grid used to compute the Sun's orbit above:
```
star_w0 = gd.PhaseSpacePosition(star_galcen.data)
star_orbit = mw_potential.integrate_orbit(star_w0, t=sun_orbit.t)
```
We can now compare the orbit of Kepler-444 to the solar orbit we computed above. We will plot the two orbits in two projections: First in the $x$-$y$ plane (Cartesian positions), then in the *meridional plane*, showing the cylindrical $R$ and $z$ position dependence of the orbits:
```
fig, axes = plt.subplots(1, 2, figsize=(10, 5), constrained_layout=True)
sun_orbit.plot(["x", "y"], axes=axes[0])
star_orbit.plot(["x", "y"], axes=axes[0])
axes[0].set_xlim(-10, 10)
axes[0].set_ylim(-10, 10)
sun_orbit.cylindrical.plot(
["rho", "z"],
axes=axes[1],
auto_aspect=False,
labels=["$R$ [kpc]", "$z$ [kpc]"],
label="Sun",
)
star_orbit.cylindrical.plot(
["rho", "z"],
axes=axes[1],
auto_aspect=False,
labels=["$R$ [kpc]", "$z$ [kpc]"],
label="Kepler-444",
)
axes[1].set_xlim(0, 10)
axes[1].set_ylim(-5, 5)
axes[1].set_aspect("auto")
axes[1].legend(loc="best", fontsize=15)
```
### Exercise: How does Kepler-444's orbit differ from the Sun's?
- What are the guiding center radii of the two orbits? (a hedged code sketch for the first three bullets follows this list)
- What is the maximum $z$ height reached by each orbit?
- What are their eccentricities?
- Can you guess which star is older based on their kinematics?
- Which star do you think has a higher metallicity?
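A hedged starting point for the first three bullets, reusing the `eccentricity()` and `zmax()` orbit methods that the plotting code further below also uses, plus Gala's `pericenter()`/`apocenter()`; the guiding-center radius is only crudely approximated here by the mean of the peri- and apocenter:
```
# a sketch, not the official solution
for name, orb in [("Sun", sun_orbit), ("Kepler-444", star_orbit)]:
    R_guide_approx = 0.5 * (orb.pericenter() + orb.apocenter())  # crude guiding-radius proxy
    print(name, R_guide_approx, orb.zmax(), orb.eccentricity())
```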
### Exercise: Compute orbits for Monte Carlo sampled initial conditions using the Gaia error distribution
*Hint: Use the `pyia.GaiaData.get_error_samples()` method to generate samples from the Gaia error distribution — a hedged sketch follows the list below*
- Generate 128 samples from the error distribution
- Construct a `SkyCoord` object with all of these Monte Carlo samples
- Transform the error sample coordinates to the Galactocentric frame and define Gala initial conditions (a `PhaseSpacePosition` object)
- Compute orbits for all error samples using the same time grid we used above
- Compute the eccentricity and $L_z$ for all samples: what is the standard deviation of the eccentricity and $L_z$ values?
- With what fractional precision can we measure this star's eccentricity and $L_z$? (i.e. what is $\textrm{std}(e) / \textrm{mean}(e)$ and the same for $L_z$)
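Following the hint, one hedged outline of these steps — the `size` keyword for `get_error_samples` and the `angular_momentum()` indexing are assumptions to verify against the pyia and Gala documentation; the rest reuses calls demonstrated above:
```
# a sketch only -- check argument names against the pyia docs
star_samples = star_gaia.get_error_samples(size=128)            # assumed signature
star_samples_c = star_samples.get_skycoord()
star_samples_galcen = star_samples_c.transform_to(galcen_frame)
star_samples_w0 = gd.PhaseSpacePosition(star_samples_galcen.data)
star_samples_orbits = mw_potential.integrate_orbit(star_samples_w0, t=sun_orbit.t)
ecc = star_samples_orbits.eccentricity()
Lz = star_samples_orbits.angular_momentum()[2]                  # assumed: z-component of L
print(ecc.std() / ecc.mean())                                   # fractional precision on e
```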
### Exercise: Comparing these orbits to the orbits of other Gaia stars
Retrieve Gaia data for a set of 100 random Gaia stars within 200 pc of the sun with measured radial velocities and well-measured parallaxes using the query:

    SELECT TOP 100 * FROM gaiaedr3.gaia_source
    WHERE dr2_radial_velocity IS NOT NULL AND
          parallax_over_error > 10 AND
          ruwe < 1.2 AND
          parallax > 5
    ORDER BY random_index
```
# random_stars_g = ..
```
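One possible way to fill this in, reusing the `GaiaData.from_query` pattern shown above for Kepler-444 (a sketch, not the official solution):
```
# a sketch -- fill-in for the placeholder above
random_stars_g = GaiaData.from_query(
    """
    SELECT TOP 100 * FROM gaiaedr3.gaia_source
    WHERE dr2_radial_velocity IS NOT NULL AND
          parallax_over_error > 10 AND
          ruwe < 1.2 AND
          parallax > 5
    ORDER BY random_index
    """
)
```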
Compute orbits for these stars for the same time grid used above to compute the sun's orbit:
```
# random_stars_c = ...
# random_stars_galcen = ...
# random_stars_w0 = ...
# random_stars_orbits = ...
```
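A hedged sketch of the same pipeline used for Kepler-444 above (get a `SkyCoord`, transform to the Galactocentric frame, build initial conditions, integrate):
```
# a sketch -- fill-in for the placeholders above
random_stars_c = random_stars_g.get_skycoord()
random_stars_galcen = random_stars_c.transform_to(galcen_frame)
random_stars_w0 = gd.PhaseSpacePosition(random_stars_galcen.data)
random_stars_orbits = mw_potential.integrate_orbit(random_stars_w0, t=sun_orbit.t)
```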
Plot the initial (present-day) positions of all of these stars in Galactocentric Cartesian coordinates:
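A hedged one-liner for this step, assuming the sketch above for `random_stars_w0` and Gala's `PhaseSpacePosition.plot()` method:
```
# a sketch -- present-day Galactocentric Cartesian positions
fig = random_stars_w0.plot()
```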
Now plot the orbits of these stars in the x-y and R-z planes:
```
fig, axes = plt.subplots(1, 2, figsize=(10, 5), constrained_layout=True)
random_stars_orbits.plot(["x", "y"], axes=axes[0])
axes[0].set_xlim(-15, 15)
axes[0].set_ylim(-15, 15)
random_stars_orbits.cylindrical.plot(
["rho", "z"],
axes=axes[1],
auto_aspect=False,
labels=["$R$ [kpc]", "$z$ [kpc]"],
)
axes[1].set_xlim(0, 15)
axes[1].set_ylim(-5, 5)
axes[1].set_aspect("auto")
```
Compute maximum $z$ heights ($z_\textrm{max}$) and eccentricities for all of these orbits. Compare the Sun, Kepler-444, and this random sampling of nearby stars. Where do the Sun and Kepler-444 sit relative to the random sample of nearby stars in terms of $z_\textrm{max}$ and eccentricity? (Hint: plot $z_\textrm{max}$ vs. eccentricity and highlight the Sun and Kepler-444!) Are either of them outliers in any way?
```
# rand_zmax = ...
# rand_ecc = ...
fig, ax = plt.subplots(figsize=(8, 6))
ax.scatter(
rand_ecc, rand_zmax, color="k", alpha=0.4, s=14, lw=0, label="random nearby stars"
)
ax.scatter(sun_orbit.eccentricity(), sun_orbit.zmax(), color="tab:orange", label="Sun")
ax.scatter(
star_orbit.eccentricity(), star_orbit.zmax(), color="tab:cyan", label="Kepler-444"
)
ax.legend(loc="best", fontsize=14)
ax.set_xlabel("eccentricity, $e$")
ax.set_ylabel(r"max. $z$ height, $z_{\rm max}$ [kpc]")
```
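For the two commented placeholders at the top of the cell above, one hedged possibility, assuming `zmax()` and `eccentricity()` broadcast over the whole orbit collection as they do for the single orbits:
```
# a sketch -- fill-in for the commented placeholders above
rand_zmax = random_stars_orbits.zmax()
rand_ecc = random_stars_orbits.eccentricity()
```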
| github_jupyter |
# Lab 04 : Train vanilla neural network -- solution
# Training a one-layer net on FASHION-MNIST
```
# For Google Colaboratory
import sys, os
if 'google.colab' in sys.modules:
from google.colab import drive
drive.mount('/content/gdrive')
file_name = 'train_vanilla_nn_solution.ipynb'
import subprocess
path_to_file = subprocess.check_output('find . -type f -name ' + str(file_name), shell=True).decode("utf-8")
print(path_to_file)
path_to_file = path_to_file.replace(file_name,"").replace('\n',"")
os.chdir(path_to_file)
!pwd
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from random import randint
import utils
```
### Download the TRAINING SET (data+labels)
```
from utils import check_fashion_mnist_dataset_exists
data_path=check_fashion_mnist_dataset_exists()
train_data=torch.load(data_path+'fashion-mnist/train_data.pt')
train_label=torch.load(data_path+'fashion-mnist/train_label.pt')
print(train_data.size())
print(train_label.size())
```
### Download the TEST SET (data only)
```
test_data=torch.load(data_path+'fashion-mnist/test_data.pt')
print(test_data.size())
```
### Make a one layer net class
```
class one_layer_net(nn.Module):
def __init__(self, input_size, output_size):
super(one_layer_net , self).__init__()
self.linear_layer = nn.Linear( input_size, output_size , bias=False)
def forward(self, x):
y = self.linear_layer(x)
prob = F.softmax(y, dim=1)
return prob
```
### Build the net
```
net=one_layer_net(784,10)
print(net)
```
### Take the 4th image of the test set:
```
im=test_data[4]
utils.show(im)
```
### And feed it to the UNTRAINED network:
```
p = net( im.view(1,784))
print(p)
```
### Display visually the confidence scores
```
utils.show_prob_fashion_mnist(p)
```
### Train the network (only 5000 iterations) on the train set
```
criterion = nn.NLLLoss()
optimizer=torch.optim.SGD(net.parameters() , lr=0.01 )
for iter in range(1,5000):
# choose a random integer between 0 and 59,999
# extract the corresponding picture and label
# and reshape them to fit the network
idx=randint(0, 60000-1)
input=train_data[idx].view(1,784)
label=train_label[idx].view(1)
# feed the input to the net
input.requires_grad_()
prob=net(input)
# update the weights (all the magic happens here -- we will discuss it later)
log_prob=torch.log(prob)
loss = criterion(log_prob, label)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```
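A small aside that is not part of the original lab: taking the log of the softmax output and passing it to `NLLLoss`, as done above, is mathematically the cross-entropy loss. The hedged sketch below shows the equivalent (and numerically stabler) `nn.CrossEntropyLoss` formulation; `score_layer` is a hypothetical stand-in for a network that returns raw, unnormalized scores:
```
# aside (not part of the original lab)
score_layer = nn.Linear(784, 10, bias=False)   # hypothetical raw-score ("logit") layer
criterion_ce = nn.CrossEntropyLoss()           # applies log-softmax + NLL internally
scores = score_layer(input)                    # raw scores for the last training example
loss_ce = criterion_ce(scores, label)          # cross-entropy computed from raw scores
```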
### Take the 34th image of the test set:
```
im=test_data[34]
utils.show(im)
```
### Feed it to the TRAINED net:
```
p = net( im.view(1,784))
print(p)
```
### Display visually the confidence scores
```
utils.show_prob_fashion_mnist(p)
```
### Choose image at random from the test set and see how good/bad are the predictions
```
# choose a picture at random
idx=randint(0, 10000-1)
im=test_data[idx]
# display the picture
utils.show(im)
# feed it to the net and display the confidence scores
prob = net( im.view(1,784))
utils.show_prob_fashion_mnist(prob)
```
| github_jupyter |
```
from pandas import read_csv
import cv2
import glob
import os
import numpy as np
import logging
import coloredlogs
logger = logging.getLogger(__name__)
coloredlogs.install(level='DEBUG')
coloredlogs.install(level='DEBUG', logger=logger)
IM_EXTENSIONS = ['png', 'jpg', 'jpeg', 'bmp']
def read_img(img_path, img_shape=(128, 128)):
"""
load image file and divide by 255.
"""
img = cv2.imread(img_path)
img = cv2.resize(img, img_shape)
img = img.astype('float')
img /= 255.
return img
dataset_dir = './data/images/'
label_path = './data/label.csv'
batch_size = 32
img_shape=(128, 128)
label_df = read_csv(label_path)
# img_files = glob.glob(dataset_dir + '*')
# img_files = [f for f in img_files if f[-3:] in IM_EXTENSIONS]
label_idx = label_df.set_index('filename')
img_files = label_idx.index.unique().values
label_idx.loc['0_Parade_Parade_0_628.jpg'].head()
label_idx.iloc[0:5]
len(img_files)
def append_zero(arr):
return np.append([0], arr)
# temp = label_idx.loc[img_files[0]].values[:, :4] #[0, 26, 299, 36, 315]
# np.apply_along_axis(append_zero, 1, temp)
"""
data loader
return image, [class_label, class_and_location_label]
"""
numofData = len(img_files) # endwiths(png,jpg ...)
data_idx = np.arange(numofData)
while True:
    batch_idx = np.random.choice(data_idx, size=batch_size, replace=False)
    batch_img = []
    batch_label = []
    batch_label_cls = []
    for i in batch_idx:
        img = read_img(dataset_dir + img_files[i], img_shape=img_shape)
        label_idx = label_df.set_index('filename')
        img_files = label_idx.index.unique().values
        label = label_idx.loc[img_files[i]].values
        label = np.array(label, ndmin=2)
        label = label[:, :4]
        cls_loc_label = np.apply_along_axis(append_zero, 1, label)
        batch_img.append(img)
        batch_label.append(label)
        batch_label_cls.append(0)  # label[0:1]) ---> face
    # yield ({'input_1': np.array(batch_img, dtype=np.float32)},
    #        {'clf_output': np.array(batch_label_cls, dtype=np.float32),
    #         'bb_output': np.array(batch_label, dtype=np.float32)})
    break  # prototype cell: stop after one batch so the bare loop does not run forever
import tensorflow as tf
def dataloader(dataset_dir, label_path, batch_size=1000, img_shape=(128, 128)):
"""
data loader
return image, [class_label, class_and_location_label]
"""
label_df = read_csv(label_path)
label_idx = label_df.set_index('filename')
img_files = label_idx.index.unique().values
numofData = len(img_files) # endwiths(png,jpg ...)
data_idx = np.arange(numofData)
while True:
batch_idx = np.random.choice(data_idx, size=batch_size, replace=False)
batch_img = []
batch_label = []
batch_class = []
for i in batch_idx:
img = read_img(dataset_dir + img_files[i], img_shape=img_shape)
label = label_idx.loc[img_files[i]].values
label = np.array(label, ndmin=2)
label = label[:, :4]
cls_loc_label = np.apply_along_axis(append_zero, 1, label)
batch_img.append(img)
batch_label.append(cls_loc_label) # face + bb
batch_class.append(cls_loc_label[:, 0:1]) # label[:, 0:1]) ---> face
# yield {'input_1': np.array(batch_img, dtype=np.float32)}, {'clf_output': np.array(batch_class, dtype=np.float32),'bb_output': np.array(batch_label, dtype=np.float32)}
yield np.array(batch_img, dtype=np.float32), [np.array(batch_class, dtype=np.float32), np.array(batch_label, dtype=np.float32)]
data_gen = dataloader(dataset_dir, label_path, batch_size=1, img_shape=(128, 128))
data = next(data_gen)
len(data)
```
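To sanity-check the structure the generator yields — an image batch plus the `[class, class_and_location]` label pair described in the docstring — a quick hedged inspection for `batch_size=1`:
```
# a sketch -- unpack and inspect one yielded batch
imgs, (cls_labels, bb_labels) = data
print(imgs.shape)        # e.g. (1, 128, 128, 3): one image, height x width, 3 channels
print(cls_labels.shape)  # e.g. (1, n_faces, 1): class flag per face
print(bb_labels.shape)   # e.g. (1, n_faces, 5): class flag + 4 box values per face
```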
| github_jupyter |
<font color = "mediumblue">Note: Notebook was updated July 2, 2019 with bug fixes.</font>
#### If you were working on the older version:
* Please click on the "Coursera" icon in the top right to open up the folder directory.
* Navigate to the folder: Week 3/ Planar data classification with one hidden layer. You can see your prior work in version 5: Planar data classification with one hidden layer v5.ipynb
#### List of bug fixes and enhancements
* Clarifies that the classifier will learn to classify regions as either red or blue.
* compute_cost function fixes np.squeeze by casting it as a float.
* compute_cost instructions clarify the purpose of np.squeeze.
* compute_cost clarifies that "parameters" parameter is not needed, but is kept in the function definition until the auto-grader is also updated.
* nn_model removes extraction of parameter values, as the entire parameter dictionary is passed to the invoked functions.
# Planar data classification with one hidden layer
Welcome to your week 3 programming assignment. It's time to build your first neural network, which will have a hidden layer. You will see a big difference between this model and the one you implemented using logistic regression.
**You will learn how to:**
- Implement a 2-class classification neural network with a single hidden layer
- Use units with a non-linear activation function, such as tanh
- Compute the cross entropy loss
- Implement forward and backward propagation
## 1 - Packages ##
Let's first import all the packages that you will need during this assignment.
- [numpy](https://www.numpy.org/) is the fundamental package for scientific computing with Python.
- [sklearn](http://scikit-learn.org/stable/) provides simple and efficient tools for data mining and data analysis.
- [matplotlib](http://matplotlib.org) is a library for plotting graphs in Python.
- testCases provides some test examples to assess the correctness of your functions
- planar_utils provide various useful functions used in this assignment
```
# Package imports
import numpy as np
import matplotlib.pyplot as plt
from testCases_v2 import *
import sklearn
import sklearn.datasets
import sklearn.linear_model
from planar_utils import plot_decision_boundary, sigmoid, load_planar_dataset, load_extra_datasets
%matplotlib inline
np.random.seed(1) # set a seed so that the results are consistent
```
## 2 - Dataset ##
First, let's get the dataset you will work on. The following code will load a "flower" 2-class dataset into variables `X` and `Y`.
```
X, Y = load_planar_dataset()
```
Visualize the dataset using matplotlib. The data looks like a "flower" with some red (label y=0) and some blue (y=1) points. Your goal is to build a model to fit this data. In other words, we want the classifier to define regions as either red or blue.
```
# Visualize the data:
plt.scatter(X[0, :], X[1, :], c=Y, s=40, cmap=plt.cm.Spectral);
```
You have:
- a numpy-array (matrix) X that contains your features (x1, x2)
- a numpy-array (vector) Y that contains your labels (red:0, blue:1).
Let's first get a better sense of what our data is like.
**Exercise**: How many training examples do you have? In addition, what is the `shape` of the variables `X` and `Y`?
**Hint**: How do you get the shape of a numpy array? [(help)](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.shape.html)
```
### START CODE HERE ### (≈ 3 lines of code)
shape_X = None
shape_Y = None
m = X.shape[1] # training set size
### END CODE HERE ###
print ('The shape of X is: ' + str(shape_X))
print ('The shape of Y is: ' + str(shape_Y))
print ('I have m = %d training examples!' % (m))
```
**Expected Output**:
<table style="width:20%">
<tr>
<td>**shape of X**</td>
<td> (2, 400) </td>
</tr>
<tr>
<td>**shape of Y**</td>
<td>(1, 400) </td>
</tr>
<tr>
<td>**m**</td>
<td> 400 </td>
</tr>
</table>
## 3 - Simple Logistic Regression
Before building a full neural network, let's first see how logistic regression performs on this problem. You can use sklearn's built-in functions to do that. Run the code below to train a logistic regression classifier on the dataset.
```
# Train the logistic regression classifier
clf = sklearn.linear_model.LogisticRegressionCV();
clf.fit(X.T, Y.T);
```
You can now plot the decision boundary of these models. Run the code below.
```
# Plot the decision boundary for logistic regression
plot_decision_boundary(lambda x: clf.predict(x), X, Y)
plt.title("Logistic Regression")
# Print accuracy
LR_predictions = clf.predict(X.T)
print ('Accuracy of logistic regression: %d ' % float((np.dot(Y,LR_predictions) + np.dot(1-Y,1-LR_predictions))/float(Y.size)*100) +
'% ' + "(percentage of correctly labelled datapoints)")
```
**Expected Output**:
<table style="width:20%">
<tr>
<td>**Accuracy**</td>
<td> 47% </td>
</tr>
</table>
**Interpretation**: The dataset is not linearly separable, so logistic regression doesn't perform well. Hopefully a neural network will do better. Let's try this now!
## 4 - Neural Network model
Logistic regression did not work well on the "flower dataset". You are going to train a Neural Network with a single hidden layer.
**Here is our model**:
<img src="images/classification_kiank.png" style="width:600px;height:300px;">
**Mathematically**:
For one example $x^{(i)}$:
$$z^{[1] (i)} = W^{[1]} x^{(i)} + b^{[1]}\tag{1}$$
$$a^{[1] (i)} = \tanh(z^{[1] (i)})\tag{2}$$
$$z^{[2] (i)} = W^{[2]} a^{[1] (i)} + b^{[2]}\tag{3}$$
$$\hat{y}^{(i)} = a^{[2] (i)} = \sigma(z^{ [2] (i)})\tag{4}$$
$$y^{(i)}_{prediction} = \begin{cases} 1 & \mbox{if } a^{[2](i)} > 0.5 \\ 0 & \mbox{otherwise } \end{cases}\tag{5}$$
Given the predictions on all the examples, you can also compute the cost $J$ as follows:
$$J = - \frac{1}{m} \sum\limits_{i = 1}^{m} \large\left(\small y^{(i)}\log\left(a^{[2] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[2] (i)}\right) \large \right) \small \tag{6}$$
**Reminder**: The general methodology to build a Neural Network is to:
1. Define the neural network structure ( # of input units, # of hidden units, etc).
2. Initialize the model's parameters
3. Loop:
- Implement forward propagation
- Compute loss
- Implement backward propagation to get the gradients
- Update parameters (gradient descent)
You often build helper functions to compute steps 1-3 and then merge them into one function we call `nn_model()`. Once you've built `nn_model()` and learnt the right parameters, you can make predictions on new data.
### 4.1 - Defining the neural network structure ####
**Exercise**: Define three variables:
- n_x: the size of the input layer
- n_h: the size of the hidden layer (set this to 4)
- n_y: the size of the output layer
**Hint**: Use shapes of X and Y to find n_x and n_y. Also, hard code the hidden layer size to be 4.
```
# GRADED FUNCTION: layer_sizes
def layer_sizes(X, Y):
"""
Arguments:
X -- input dataset of shape (input size, number of examples)
Y -- labels of shape (output size, number of examples)
Returns:
n_x -- the size of the input layer
n_h -- the size of the hidden layer
n_y -- the size of the output layer
"""
### START CODE HERE ### (≈ 3 lines of code)
n_x = None # size of input layer
n_h = None
n_y = None # size of output layer
### END CODE HERE ###
return (n_x, n_h, n_y)
X_assess, Y_assess = layer_sizes_test_case()
(n_x, n_h, n_y) = layer_sizes(X_assess, Y_assess)
print("The size of the input layer is: n_x = " + str(n_x))
print("The size of the hidden layer is: n_h = " + str(n_h))
print("The size of the output layer is: n_y = " + str(n_y))
```
**Expected Output** (these are not the sizes you will use for your network, they are just used to assess the function you've just coded).
<table style="width:20%">
<tr>
<td>**n_x**</td>
<td> 5 </td>
</tr>
<tr>
<td>**n_h**</td>
<td> 4 </td>
</tr>
<tr>
<td>**n_y**</td>
<td> 2 </td>
</tr>
</table>
### 4.2 - Initialize the model's parameters ####
**Exercise**: Implement the function `initialize_parameters()`.
**Instructions**:
- Make sure your parameters' sizes are right. Refer to the neural network figure above if needed.
- You will initialize the weights matrices with random values.
- Use: `np.random.randn(a,b) * 0.01` to randomly initialize a matrix of shape (a,b).
- You will initialize the bias vectors as zeros.
- Use: `np.zeros((a,b))` to initialize a matrix of shape (a,b) with zeros.
```
# GRADED FUNCTION: initialize_parameters
def initialize_parameters(n_x, n_h, n_y):
"""
Argument:
n_x -- size of the input layer
n_h -- size of the hidden layer
n_y -- size of the output layer
Returns:
params -- python dictionary containing your parameters:
W1 -- weight matrix of shape (n_h, n_x)
b1 -- bias vector of shape (n_h, 1)
W2 -- weight matrix of shape (n_y, n_h)
b2 -- bias vector of shape (n_y, 1)
"""
np.random.seed(2) # we set up a seed so that your output matches ours although the initialization is random.
### START CODE HERE ### (≈ 4 lines of code)
W1 = None
b1 = None
W2 = None
b2 = None
### END CODE HERE ###
assert (W1.shape == (n_h, n_x))
assert (b1.shape == (n_h, 1))
assert (W2.shape == (n_y, n_h))
assert (b2.shape == (n_y, 1))
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
n_x, n_h, n_y = initialize_parameters_test_case()
parameters = initialize_parameters(n_x, n_h, n_y)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
```
**Expected Output**:
<table style="width:90%">
<tr>
<td>**W1**</td>
<td> [[-0.00416758 -0.00056267]
[-0.02136196 0.01640271]
[-0.01793436 -0.00841747]
[ 0.00502881 -0.01245288]] </td>
</tr>
<tr>
<td>**b1**</td>
<td> [[ 0.]
[ 0.]
[ 0.]
[ 0.]] </td>
</tr>
<tr>
<td>**W2**</td>
<td> [[-0.01057952 -0.00909008 0.00551454 0.02292208]]</td>
</tr>
<tr>
<td>**b2**</td>
<td> [[ 0.]] </td>
</tr>
</table>
### 4.3 - The Loop ####
**Question**: Implement `forward_propagation()`.
**Instructions**:
- Look above at the mathematical representation of your classifier.
- You can use the function `sigmoid()`. It is built-in (imported) in the notebook.
- You can use the function `np.tanh()`. It is part of the numpy library.
- The steps you have to implement are:
1. Retrieve each parameter from the dictionary "parameters" (which is the output of `initialize_parameters()`) by using `parameters[".."]`.
2. Implement Forward Propagation. Compute $Z^{[1]}, A^{[1]}, Z^{[2]}$ and $A^{[2]}$ (the vector of all your predictions on all the examples in the training set).
- Values needed in the backpropagation are stored in "`cache`". The `cache` will be given as an input to the backpropagation function.
```
# GRADED FUNCTION: forward_propagation
def forward_propagation(X, parameters):
"""
Argument:
X -- input data of size (n_x, m)
parameters -- python dictionary containing your parameters (output of initialization function)
Returns:
A2 -- The sigmoid output of the second activation
cache -- a dictionary containing "Z1", "A1", "Z2" and "A2"
"""
# Retrieve each parameter from the dictionary "parameters"
### START CODE HERE ### (≈ 4 lines of code)
W1 = None
b1 = None
W2 = None
b2 = None
### END CODE HERE ###
# Implement Forward Propagation to calculate A2 (probabilities)
### START CODE HERE ### (≈ 4 lines of code)
Z1 = None
A1 = None
Z2 = None
A2 = None
### END CODE HERE ###
assert(A2.shape == (1, X.shape[1]))
cache = {"Z1": Z1,
"A1": A1,
"Z2": Z2,
"A2": A2}
return A2, cache
X_assess, parameters = forward_propagation_test_case()
A2, cache = forward_propagation(X_assess, parameters)
# Note: we use the mean here just to make sure that your output matches ours.
print(np.mean(cache['Z1']) ,np.mean(cache['A1']),np.mean(cache['Z2']),np.mean(cache['A2']))
```
**Expected Output**:
<table style="width:50%">
<tr>
<td> 0.262818640198 0.091999045227 -1.30766601287 0.212877681719 </td>
</tr>
</table>
Now that you have computed $A^{[2]}$ (in the Python variable "`A2`"), which contains $a^{[2](i)}$ for every example, you can compute the cost function as follows:
$$J = - \frac{1}{m} \sum\limits_{i = 1}^{m} \large{(} \small y^{(i)}\log\left(a^{[2] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[2] (i)}\right) \large{)} \small\tag{13}$$
**Exercise**: Implement `compute_cost()` to compute the value of the cost $J$.
**Instructions**:
- There are many ways to implement the cross-entropy loss. To help you, we show how we would have implemented
$- \sum\limits_{i=1}^{m} y^{(i)}\log(a^{[2](i)})$:
```python
logprobs = np.multiply(np.log(A2),Y)
cost = - np.sum(logprobs) # no need to use a for loop!
```
(you can use either `np.multiply()` and then `np.sum()` or directly `np.dot()`).
Note that if you use `np.multiply` followed by `np.sum` the end result will be a type `float`, whereas if you use `np.dot`, the result will be a 2D numpy array. We can use `np.squeeze()` to remove redundant dimensions (in the case of single float, this will be reduced to a zero-dimension array). We can cast the array as a type `float` using `float()`.
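As a small illustration of that last point (an aside, not part of the graded code):
```python
a = np.dot(np.array([[1., 2.]]), np.array([[3.], [4.]]))  # np.dot gives a 2D array: [[11.]]
print(np.squeeze(a))         # 0-dimensional array: 11.0
print(float(np.squeeze(a)))  # plain Python float: 11.0
```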
```
# GRADED FUNCTION: compute_cost
def compute_cost(A2, Y, parameters):
"""
Computes the cross-entropy cost given in equation (13)
Arguments:
A2 -- The sigmoid output of the second activation, of shape (1, number of examples)
Y -- "true" labels vector of shape (1, number of examples)
parameters -- python dictionary containing your parameters W1, b1, W2 and b2
[Note that the parameters argument is not used in this function,
but the auto-grader currently expects this parameter.
Future version of this notebook will fix both the notebook
and the auto-grader so that `parameters` is not needed.
For now, please include `parameters` in the function signature,
and also when invoking this function.]
Returns:
cost -- cross-entropy cost given equation (13)
"""
m = Y.shape[1] # number of example
# Compute the cross-entropy cost
### START CODE HERE ### (≈ 2 lines of code)
logprobs = None
cost = None
### END CODE HERE ###
cost = float(np.squeeze(cost)) # makes sure cost is the dimension we expect.
# E.g., turns [[17]] into 17
assert(isinstance(cost, float))
return cost
A2, Y_assess, parameters = compute_cost_test_case()
print("cost = " + str(compute_cost(A2, Y_assess, parameters)))
```
**Expected Output**:
<table style="width:20%">
<tr>
<td>**cost**</td>
<td> 0.693058761... </td>
</tr>
</table>
Using the cache computed during forward propagation, you can now implement backward propagation.
**Question**: Implement the function `backward_propagation()`.
**Instructions**:
Backpropagation is usually the hardest (most mathematical) part in deep learning. To help you, here again is the slide from the lecture on backpropagation. You'll want to use the six equations on the right of this slide, since you are building a vectorized implementation.
<img src="images/grad_summary.png" style="width:600px;height:300px;">
<!--
$\frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } = \frac{1}{m} (a^{[2](i)} - y^{(i)})$
$\frac{\partial \mathcal{J} }{ \partial W_2 } = \frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } a^{[1] (i) T} $
$\frac{\partial \mathcal{J} }{ \partial b_2 } = \sum_i{\frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)}}}$
$\frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)} } = W_2^T \frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } * ( 1 - a^{[1] (i) 2}) $
$\frac{\partial \mathcal{J} }{ \partial W_1 } = \frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)} } X^T $
$\frac{\partial \mathcal{J} _i }{ \partial b_1 } = \sum_i{\frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)}}}$
- Note that $*$ denotes elementwise multiplication.
- The notation you will use is common in deep learning coding:
- dW1 = $\frac{\partial \mathcal{J} }{ \partial W_1 }$
- db1 = $\frac{\partial \mathcal{J} }{ \partial b_1 }$
- dW2 = $\frac{\partial \mathcal{J} }{ \partial W_2 }$
- db2 = $\frac{\partial \mathcal{J} }{ \partial b_2 }$
!-->
- Tips:
- To compute dZ1 you'll need to compute $g^{[1]'}(Z^{[1]})$. Since $g^{[1]}(.)$ is the tanh activation function, if $a = g^{[1]}(z)$ then $g^{[1]'}(z) = 1-a^2$. So you can compute
$g^{[1]'}(Z^{[1]})$ using `(1 - np.power(A1, 2))` (a quick numerical check of this identity is sketched just below).
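A quick numerical check of this identity (an aside, not graded):
```python
z = np.array([-1.0, 0.0, 2.0])
a = np.tanh(z)
analytic = 1 - np.power(a, 2)                             # the expression from the tip
numeric = (np.tanh(z + 1e-6) - np.tanh(z - 1e-6)) / 2e-6  # centered finite difference
print(np.allclose(analytic, numeric))                     # True
```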
```
# GRADED FUNCTION: backward_propagation
def backward_propagation(parameters, cache, X, Y):
"""
Implement the backward propagation using the instructions above.
Arguments:
parameters -- python dictionary containing our parameters
cache -- a dictionary containing "Z1", "A1", "Z2" and "A2".
X -- input data of shape (2, number of examples)
Y -- "true" labels vector of shape (1, number of examples)
Returns:
grads -- python dictionary containing your gradients with respect to different parameters
"""
m = X.shape[1]
# First, retrieve W1 and W2 from the dictionary "parameters".
### START CODE HERE ### (≈ 2 lines of code)
W1 = None
W2 = None
### END CODE HERE ###
# Retrieve also A1 and A2 from dictionary "cache".
### START CODE HERE ### (≈ 2 lines of code)
A1 = None
A2 = None
### END CODE HERE ###
# Backward propagation: calculate dW1, db1, dW2, db2.
### START CODE HERE ### (≈ 6 lines of code, corresponding to 6 equations on slide above)
dZ2 = None
dW2 = None
db2 = None
dZ1 = None
dW1 = None
db1 = None
### END CODE HERE ###
grads = {"dW1": dW1,
"db1": db1,
"dW2": dW2,
"db2": db2}
return grads
parameters, cache, X_assess, Y_assess = backward_propagation_test_case()
grads = backward_propagation(parameters, cache, X_assess, Y_assess)
print ("dW1 = "+ str(grads["dW1"]))
print ("db1 = "+ str(grads["db1"]))
print ("dW2 = "+ str(grads["dW2"]))
print ("db2 = "+ str(grads["db2"]))
```
**Expected output**:
<table style="width:80%">
<tr>
<td>**dW1**</td>
<td> [[ 0.00301023 -0.00747267]
[ 0.00257968 -0.00641288]
[-0.00156892 0.003893 ]
[-0.00652037 0.01618243]] </td>
</tr>
<tr>
<td>**db1**</td>
<td> [[ 0.00176201]
[ 0.00150995]
[-0.00091736]
[-0.00381422]] </td>
</tr>
<tr>
<td>**dW2**</td>
<td> [[ 0.00078841 0.01765429 -0.00084166 -0.01022527]] </td>
</tr>
<tr>
<td>**db2**</td>
<td> [[-0.16655712]] </td>
</tr>
</table>
**Question**: Implement the update rule. Use gradient descent. You have to use (dW1, db1, dW2, db2) in order to update (W1, b1, W2, b2).
**General gradient descent rule**: $ \theta = \theta - \alpha \frac{\partial J }{ \partial \theta }$ where $\alpha$ is the learning rate and $\theta$ represents a parameter.
**Illustration**: The gradient descent algorithm with a good learning rate (converging) and a bad learning rate (diverging). Images courtesy of Adam Harley.
<img src="images/sgd.gif" style="width:400;height:400;"> <img src="images/sgd_bad.gif" style="width:400;height:400;">
```
# GRADED FUNCTION: update_parameters
def update_parameters(parameters, grads, learning_rate = 1.2):
"""
Updates parameters using the gradient descent update rule given above
Arguments:
parameters -- python dictionary containing your parameters
grads -- python dictionary containing your gradients
Returns:
parameters -- python dictionary containing your updated parameters
"""
# Retrieve each parameter from the dictionary "parameters"
### START CODE HERE ### (≈ 4 lines of code)
W1 = None
b1 = None
W2 = None
b2 = None
### END CODE HERE ###
# Retrieve each gradient from the dictionary "grads"
### START CODE HERE ### (≈ 4 lines of code)
dW1 = None
db1 = None
dW2 = None
db2 = None
## END CODE HERE ###
# Update rule for each parameter
### START CODE HERE ### (≈ 4 lines of code)
W1 = None
b1 = None
W2 = None
b2 = None
### END CODE HERE ###
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
parameters, grads = update_parameters_test_case()
parameters = update_parameters(parameters, grads)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
```
**Expected Output**:
<table style="width:80%">
<tr>
<td>**W1**</td>
<td> [[-0.00643025 0.01936718]
[-0.02410458 0.03978052]
[-0.01653973 -0.02096177]
[ 0.01046864 -0.05990141]]</td>
</tr>
<tr>
<td>**b1**</td>
<td> [[ -1.02420756e-06]
[ 1.27373948e-05]
[ 8.32996807e-07]
[ -3.20136836e-06]]</td>
</tr>
<tr>
<td>**W2**</td>
<td> [[-0.01041081 -0.04463285 0.01758031 0.04747113]] </td>
</tr>
<tr>
<td>**b2**</td>
<td> [[ 0.00010457]] </td>
</tr>
</table>
### 4.4 - Integrate parts 4.1, 4.2 and 4.3 in nn_model() ####
**Question**: Build your neural network model in `nn_model()`.
**Instructions**: The neural network model has to use the previous functions in the right order.
```
# GRADED FUNCTION: nn_model
def nn_model(X, Y, n_h, num_iterations = 10000, print_cost=False):
"""
Arguments:
X -- dataset of shape (2, number of examples)
Y -- labels of shape (1, number of examples)
n_h -- size of the hidden layer
num_iterations -- Number of iterations in gradient descent loop
print_cost -- if True, print the cost every 1000 iterations
Returns:
parameters -- parameters learnt by the model. They can then be used to predict.
"""
np.random.seed(3)
n_x = layer_sizes(X, Y)[0]
n_y = layer_sizes(X, Y)[2]
# Initialize parameters
### START CODE HERE ### (≈ 1 line of code)
parameters = None
### END CODE HERE ###
# Loop (gradient descent)
for i in range(0, num_iterations):
### START CODE HERE ### (≈ 4 lines of code)
# Forward propagation. Inputs: "X, parameters". Outputs: "A2, cache".
A2, cache = None
# Cost function. Inputs: "A2, Y, parameters". Outputs: "cost".
cost = None
# Backpropagation. Inputs: "parameters, cache, X, Y". Outputs: "grads".
grads = None
# Gradient descent parameter update. Inputs: "parameters, grads". Outputs: "parameters".
parameters = None
### END CODE HERE ###
# Print the cost every 1000 iterations
if print_cost and i % 1000 == 0:
print ("Cost after iteration %i: %f" %(i, cost))
return parameters
X_assess, Y_assess = nn_model_test_case()
parameters = nn_model(X_assess, Y_assess, 4, num_iterations=10000, print_cost=True)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
```
**Expected Output**:
<table style="width:90%">
<tr>
<td>
**cost after iteration 0**
</td>
<td>
0.692739
</td>
</tr>
<tr>
<td>
<center> $\vdots$ </center>
</td>
<td>
<center> $\vdots$ </center>
</td>
</tr>
<tr>
<td>**W1**</td>
<td> [[-0.65848169 1.21866811]
[-0.76204273 1.39377573]
[ 0.5792005 -1.10397703]
[ 0.76773391 -1.41477129]]</td>
</tr>
<tr>
<td>**b1**</td>
<td> [[ 0.287592 ]
[ 0.3511264 ]
[-0.2431246 ]
[-0.35772805]] </td>
</tr>
<tr>
<td>**W2**</td>
<td> [[-2.45566237 -3.27042274 2.00784958 3.36773273]] </td>
</tr>
<tr>
<td>**b2**</td>
<td> [[ 0.20459656]] </td>
</tr>
</table>
### 4.5 Predictions
**Question**: Use your model to predict by building predict().
Use forward propagation to predict results.
**Reminder**: predictions $= y_{prediction} = \mathbb{1}\{\text{activation} > 0.5\} = \begin{cases} 1 & \text{if } \text{activation} > 0.5 \\ 0 & \text{otherwise} \end{cases}$
As an example, if you would like to set the entries of a matrix X to 0 and 1 based on a threshold you would do: ```X_new = (X > threshold)```
```
# GRADED FUNCTION: predict
def predict(parameters, X):
"""
Using the learned parameters, predicts a class for each example in X
Arguments:
parameters -- python dictionary containing your parameters
X -- input data of size (n_x, m)
Returns
predictions -- vector of predictions of our model (red: 0 / blue: 1)
"""
# Computes probabilities using forward propagation, and classifies to 0/1 using 0.5 as the threshold.
### START CODE HERE ### (≈ 2 lines of code)
A2, cache = None
predictions = None
### END CODE HERE ###
return predictions
parameters, X_assess = predict_test_case()
predictions = predict(parameters, X_assess)
print("predictions mean = " + str(np.mean(predictions)))
```
**Expected Output**:
<table style="width:40%">
<tr>
<td>**predictions mean**</td>
<td> 0.666666666667 </td>
</tr>
</table>
It is time to run the model and see how it performs on a planar dataset. Run the following code to test your model with a single hidden layer of $n_h$ hidden units.
```
# Build a model with a n_h-dimensional hidden layer
parameters = nn_model(X, Y, n_h = 4, num_iterations = 10000, print_cost=True)
# Plot the decision boundary
plot_decision_boundary(lambda x: predict(parameters, x.T), X, Y)
plt.title("Decision Boundary for hidden layer size " + str(4))
```
**Expected Output**:
<table style="width:40%">
<tr>
<td>**Cost after iteration 9000**</td>
<td> 0.218607 </td>
</tr>
</table>
```
# Print accuracy
predictions = predict(parameters, X)
print ('Accuracy: %d' % float((np.dot(Y,predictions.T) + np.dot(1-Y,1-predictions.T))/float(Y.size)*100) + '%')
```
**Expected Output**:
<table style="width:15%">
<tr>
<td>**Accuracy**</td>
<td> 90% </td>
</tr>
</table>
Accuracy is really high compared to Logistic Regression. The model has learnt the leaf patterns of the flower! Neural networks are able to learn even highly non-linear decision boundaries, unlike logistic regression.
Now, let's try out several hidden layer sizes.
### 4.6 - Tuning hidden layer size (optional/ungraded exercise) ###
Run the following code. It may take 1-2 minutes. You will observe different behaviors of the model for various hidden layer sizes.
```
# This may take about 2 minutes to run
plt.figure(figsize=(16, 32))
hidden_layer_sizes = [1, 2, 3, 4, 5, 20, 50]
for i, n_h in enumerate(hidden_layer_sizes):
plt.subplot(5, 2, i+1)
plt.title('Hidden Layer of size %d' % n_h)
parameters = nn_model(X, Y, n_h, num_iterations = 5000)
plot_decision_boundary(lambda x: predict(parameters, x.T), X, Y)
predictions = predict(parameters, X)
accuracy = float((np.dot(Y,predictions.T) + np.dot(1-Y,1-predictions.T))/float(Y.size)*100)
print ("Accuracy for {} hidden units: {} %".format(n_h, accuracy))
```
**Interpretation**:
- The larger models (with more hidden units) are able to fit the training set better, until eventually the largest models overfit the data.
- The best hidden layer size seems to be around n_h = 5. Indeed, a value around here seems to fit the data well without incurring noticeable overfitting.
- You will also learn later about regularization, which lets you use very large models (such as n_h = 50) without much overfitting.
**Optional questions**:
**Note**: Remember to submit the assignment by clicking the blue "Submit Assignment" button at the upper-right.
Some optional/ungraded questions that you can explore if you wish:
- What happens when you change the tanh activation for a sigmoid activation or a ReLU activation?
- Play with the learning_rate. What happens?
- What if we change the dataset? (See part 5 below!)
<font color='blue'>
**You've learnt to:**
- Build a complete neural network with a hidden layer
- Make good use of a non-linear unit
- Implement forward propagation and backpropagation, and train a neural network
- See the impact of varying the hidden layer size, including overfitting.
Nice work!
## 5) Performance on other datasets
If you want, you can rerun the whole notebook (minus the dataset part) for each of the following datasets.
```
# Datasets
noisy_circles, noisy_moons, blobs, gaussian_quantiles, no_structure = load_extra_datasets()
datasets = {"noisy_circles": noisy_circles,
"noisy_moons": noisy_moons,
"blobs": blobs,
"gaussian_quantiles": gaussian_quantiles}
### START CODE HERE ### (choose your dataset)
dataset = "noisy_moons"
### END CODE HERE ###
X, Y = datasets[dataset]
X, Y = X.T, Y.reshape(1, Y.shape[0])
# make blobs binary
if dataset == "blobs":
Y = Y%2
# Visualize the data
plt.scatter(X[0, :], X[1, :], c=Y, s=40, cmap=plt.cm.Spectral);
```
Congrats on finishing this Programming Assignment!
Reference:
- http://scs.ryerson.ca/~aharley/neural-networks/
- http://cs231n.github.io/neural-networks-case-study/
| github_jupyter |
<a href="https://cognitiveclass.ai/">
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Ad/CCLog.png" width="200" align="center">
</a>
<h1>2D <code>Numpy</code> in Python</h1>
<p><strong>Welcome!</strong> This notebook will teach you about using <code>Numpy</code> in the Python Programming Language. By the end of this lab, you'll know what <code>Numpy</code> is and the <code>Numpy</code> operations.</p>
<h2>Table of Contents</h2>
<div class="alert alert-block alert-info" style="margin-top: 20px">
<ul>
    <li><a href="#create">Create a 2D Numpy Array</a></li>
    <li><a href="#access">Accessing different elements of a Numpy Array</a></li>
    <li><a href="#op">Basic Operations</a></li>
</ul>
<p>
Estimated time needed: <strong>20 min</strong>
</p>
</div>
<hr>
<h2 id="create">Create a 2D Numpy Array</h2>
```
# Import the libraries
import numpy as np
import matplotlib.pyplot as plt
```
Consider the list <code>a</code>, the list contains three nested lists **each of equal size**.
```
# Create a list
a = [[11, 12, 13], [21, 22, 23], [31, 32, 33]]
a
```
We can cast the list to a Numpy Array as follows
```
# Convert list to Numpy Array
# Every element is the same type
A = np.array(a)
A
```
We can use the attribute <code>ndim</code> to obtain the number of axes or dimensions referred to as the rank.
```
# Show the numpy array dimensions
A.ndim
```
Attribute <code>shape</code> returns a tuple corresponding to the size or number of each dimension.
```
# Show the numpy array shape
A.shape
```
The total number of elements in the array is given by the attribute <code>size</code>.
```
# Show the numpy array size
A.size
```
<hr>
<h2 id="access">Accessing different elements of a Numpy Array</h2>
We can use square brackets to access the different elements of the array. The correspondence between the bracket indices, the nested list, and the rectangular (matrix) representation is shown in the following figure for a 3x3 array:
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%205/Images/NumTwoEg.png" width="500" />
We can access the 2nd-row 3rd column as shown in the following figure:
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%205/Images/NumTwoFT.png" width="400" />
We simply use the square brackets and the indices corresponding to the element we would like:
```
# Access the element on the second row and third column
A[1, 2]
```
We can also use the following notation to obtain the elements:
```
# Access the element on the second row and third column
A[1][2]
```
Consider the elements shown in the following figure
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%205/Images/NumTwoFF.png" width="400" />
We can access the element as follows
```
# Access the element on the first row and first column
A[0][0]
```
We can also use slicing in numpy arrays. Consider the following figure. We would like to obtain the first two columns in the first row
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%205/Images/NumTwoFSF.png" width="400" />
This can be done with the following syntax
```
# Access the element on the first row and first and second columns
A[0][0:2]
```
Similarly, we can obtain the first two rows of the 3rd column as follows:
```
# Access the element on the first and second rows and third column
A[0:2, 2]
```
Corresponding to the following figure:
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%205/Images/NumTwoTST.png" width="400" />
<hr>
<h2 id="op">Basic Operations</h2>
We can also add arrays. The process is identical to matrix addition. Matrix addition of <code>X</code> and <code>Y</code> is shown in the following figure:
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%205/Images/NumTwoAdd.png" width="500" />
The numpy array is given by <code>X</code> and <code>Y</code>
```
# Create a numpy array X
X = np.array([[1, 0], [0, 1]])
X
# Create a numpy array Y
Y = np.array([[2, 1], [1, 2]])
Y
```
We can add the numpy arrays as follows.
```
# Add X and Y
Z = X + Y
Z
```
Multiplying a numpy array by a scalar is identical to multiplying a matrix by a scalar. If we multiply the matrix <code>Y</code> by the scalar 2, we simply multiply every element in the matrix by 2, as shown in the figure.
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%205/Images/NumTwoDb.png" width="500" />
We can perform the same operation in numpy as follows
```
# Create a numpy array Y
Y = np.array([[2, 1], [1, 2]])
Y
# Multiply Y with 2
Z = 2 * Y
Z
```
Multiplication of two arrays corresponds to an element-wise product or Hadamard product. Consider matrix <code>X</code> and <code>Y</code>. The Hadamard product corresponds to multiplying each of the elements in the same position, i.e. multiplying elements contained in the same color boxes together. The result is a new matrix that is the same size as matrix <code>Y</code> or <code>X</code>, as shown in the following figure.
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%205/Images/NumTwoMul.png" width="500" />
We can perform element-wise product of the array <code>X</code> and <code>Y</code> as follows:
```
# Create a numpy array Y
Y = np.array([[2, 1], [1, 2]])
Y
# Create a numpy array X
X = np.array([[1, 0], [0, 1]])
X
# Multiply X with Y
Z = X * Y
Z
```
We can also perform matrix multiplication with the numpy arrays <code>A</code> and <code>B</code> as follows:
First, we define matrix <code>A</code> and <code>B</code>:
```
# Create a matrix A
A = np.array([[0, 1, 1], [1, 0, 1]])
A
# Create a matrix B
B = np.array([[1, 1], [1, 1], [-1, 1]])
B
```
We use the numpy function <code>dot</code> to multiply the arrays together.
```
# Calculate the dot product
Z = np.dot(A,B)
Z
# Calculate the sine of Z
np.sin(Z)
```
We use the numpy attribute <code>T</code> to calculate the transposed matrix
```
# Create a matrix C
C = np.array([[1,1],[2,2],[3,3]])
C
# Get the transposed of C
C.T
```
<hr>
<h2>The last exercise!</h2>
<p>Congratulations, you have completed your first lesson and hands-on lab in Python. However, there is one more thing you need to do. The Data Science community encourages sharing work. The best way to share and showcase your work is to share it on GitHub. By sharing your notebook on GitHub you are not only building your reputation with fellow data scientists, but you can also show it off when applying for a job. Even though this was your first piece of work, it is never too early to start building good habits. So, please read and follow <a href="https://cognitiveclass.ai/blog/data-scientists-stand-out-by-sharing-your-notebooks/" target="_blank">this article</a> to learn how to share your work.
<hr>
<div class="alert alert-block alert-info" style="margin-top: 20px">
<h2>Get IBM Watson Studio free of charge!</h2>
<p><a href="https://cocl.us/bottemNotebooksPython101Coursera"><img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Ad/BottomAd.png" width="750" align="center"></a></p>
</div>
<h3>About the Authors:</h3>
<p><a href="https://www.linkedin.com/in/joseph-s-50398b136/" target="_blank">Joseph Santarcangelo</a> is a Data Scientist at IBM, and holds a PhD in Electrical Engineering. His research focused on using Machine Learning, Signal Processing, and Computer Vision to determine how videos impact human cognition. Joseph has been working for IBM since he completed his PhD.</p>
Other contributors: <a href="www.linkedin.com/in/jiahui-mavis-zhou-a4537814a">Mavis Zhou</a>
<hr>
<p>Copyright © 2018 IBM Developer Skills Network. This notebook and its source code are released under the terms of the <a href="https://cognitiveclass.ai/mit-license/">MIT License</a>.</p>
| github_jupyter |
# Segmentation
This notebook shows how to use Stardist (Object Detection with Star-convex Shapes) as a part of a segmentation-classification-tracking analysis pipeline.
The sections of this notebook are as follows:
1. Load images
2. Load model of choice and segment an initial image to test Stardist parameters
3. Batch segment a sequence of images
The data used in this notebook is timelapse microscopy data with h2b-gfp/rfp markers that show the spatial extent of the nucleus and its mitotic state.
This notebook uses the dask octopuslite image loader from the CellX/Lowe lab project.
```
import matplotlib.pyplot as plt
import numpy as np
import os
from octopuslite import DaskOctopusLiteLoader
from stardist.models import StarDist2D
from stardist.plot import render_label
from csbdeep.utils import normalize
from tqdm.auto import tqdm
from skimage.io import imread, imsave
import json
from scipy import ndimage as nd
%matplotlib inline
plt.rcParams['figure.figsize'] = [18,8]
```
## 1. Load images
```
# define experiment ID and select a position
expt = 'ND0011'
pos = 'Pos6'
# point to where the data is
root_dir = '/home/nathan/data'
image_path = f'{root_dir}/{expt}/{pos}/{pos}_images'
# lazily load images
images = DaskOctopusLiteLoader(image_path,
remove_background = True)
images.channels
```
Set segmentation channel and load test image
```
# segmentation channel
segmentation_channel = images.channels[3]
# set test image index
frame = 1000
# load test image
irfp = images[segmentation_channel.name][frame].compute()
# create 1-channel XYC image
img = np.expand_dims(irfp, axis = -1)
img.shape
```
## 2. Load model and test segment single image
```
model = StarDist2D.from_pretrained('2D_versatile_fluo')
model
```
### 2.1 Test run and display initial results
```
# initialise test segmentation
labels, details = model.predict_instances(normalize(img))
# plot input image and prediction
plt.clf()
plt.subplot(1,2,1)
plt.imshow(normalize(img[:,:,0]), cmap="PiYG")
plt.axis("off")
plt.title("input image")
plt.subplot(1,2,2)
plt.imshow(render_label(labels, img = img))
plt.axis("off")
plt.title("prediction + input overlay")
plt.show()
```
## 3. Batch segment a whole stack of images
When you segment a whole dataset, do not apply any image transformation at segmentation time: that way, when you load the images and masks later on, you can apply the same transformation to both. You can apply a crop, but you must use that same crop consistently from this point on, otherwise the images and masks will be shifted relative to each other.
```
for expt in tqdm(['ND0009', 'ND0010', 'ND0011']):
for pos in tqdm(['Pos0', 'Pos1', 'Pos2', 'Pos3', 'Pos4']):
print('Starting experiment position:', expt, pos)
# load images
image_path = f'{root_dir}/{expt}/{pos}/{pos}_images'
images = DaskOctopusLiteLoader(image_path,
remove_background = True)
# iterate over images filenames
for fn in tqdm(images.files(segmentation_channel.name)):
# compile 1-channel into XYC array
img = np.expand_dims(imread(fn), axis = -1)
# predict labels
labels, details = model.predict_instances(normalize(img))
# set filename as mask format (channel099)
fn = fn.replace(f'channel00{segmentation_channel.value}', 'channel099')
# save out labelled image
imsave(fn, labels.astype(np.uint16), check_contrast=False)
```
| github_jupyter |
# Introduction to Language Processing Concepts
### Original tutorial by Brain Lehman, with updates by Fiona Pigott
The goal of this tutorial is to introduce a few basic vocabulary terms, ideas, and Python libraries for thinking about topic modeling, in order to make sure that we have a good shared vocabulary for talking in more depth about processing language with Python later. We'll spend some time defining vocabulary for topic modeling and using basic topic modeling tools.
A big thank-you to the good people at the Stanford NLP group, for their informative and helpful online book: https://nlp.stanford.edu/IR-book/.
### Definitions.
1. **Document**: a body of text (eg. tweet)
2. **Tokenization**: dividing a document into pieces (and maybe throwing away some characters); in English this often (but not necessarily) means words separated by spaces and punctuation.
3. **Text corpus**: the set of documents that contains the text for the analysis (eg. many tweets)
4. **Stop words**: words that occur so frequently, or have so little topical meaning, that they are excluded (e.g., "and")
5. **Vectorize**: Turn some documents into vectors
6. **Vector corpus**: the set of documents transformed such that each token is a tuple (token_id , doc_freq)
```
# first, get some text:
import fileinput
try:
import ujson as json
except ImportError:
import json
documents = []
for line in fileinput.FileInput("example_tweets.json"):
documents.append(json.loads(line)["text"])
```
### 1) Document
In the case of the text that we just imported, each entry in the list is a "document"--a single body of text, hopefully with some coherent meaning.
```
print("One document: \"{}\"".format(documents[0]))
```
### 2) Tokenization
We split each document into smaller pieces ("tokens") in a process called tokenization. Tokens can be counted and, most importantly, compared between documents. There are potentially many different ways to tokenize text--splitting on spaces, removing punctuation, dividing the document into n-character pieces--anything that gives us tokens that we can, hopefully, effectively compare across documents and derive meaning from.
Related to tokenization are processes called *stemming* and *lemmatization*, which can help when using tokens to model topics based on the meaning of a word. In the phrases "they run" and "he runs" (space separated tokens: ["they", "run"] and ["he", "runs"]) the words "run" and "run*s*" mean basically the same thing, but are two different tokens. Stemming and/or lemmatization help us compare tokens with the same meaning but different spelling/suffixes.
#### Lemmatization:
Uses a dictionary of words and their possible morphologies to map many different forms of a base word ("lemma") to a single lemma, comparable across documents. E.g.: "run", "ran", "runs", and "running" might all map to the lemma "run"
#### Stemming:
Uses a set of heuristic rules to try to approximate lemmatization, without knowing the words in advance. For the English language, a simple and effective stemming algorithm might simply be to remove an "s" from the ends of words, or an "ing" from the end of words. E.g.: "run", "runs", and "running" all map to "run," but "ran" (an irregularrly conjugated verb) would not.
Stemming is particularly interesting and applicable in social data, because while some words are decidedly *not* standard English, conventional rules of grammar still apply. A fan of the popular singer Justin Bieber might call herself a "belieber," while a group of fans call themselves "beliebers." You won't find "belieber" in any English lemmatization dictionary, but a good stemming algorithm will still map "belieber" and "beliebers" to the same token ("belieber", or even "belieb", if we remove the common suffix "er").
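To see this concretely with the Porter stemmer used in the next cell (the exact stem string may differ from what you expect, but both forms reduce to the same token):
```
from nltk.stem import porter
stemmer = porter.PorterStemmer()
print(stemmer.stem("belieber"), stemmer.stem("beliebers"))  # both map to the same stem
```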
```
from nltk.stem import porter
from nltk.tokenize import TweetTokenizer
# tokenize the documents
# find good information on tokenization:
# https://nlp.stanford.edu/IR-book/html/htmledition/tokenization-1.html
# find documentation on pre-made tokenizers and options here:
# http://www.nltk.org/api/nltk.tokenize.html
tknzr = TweetTokenizer(reduce_len = True)
# stem the documents
# find good information on stemming and lemmatization:
# https://nlp.stanford.edu/IR-book/html/htmledition/stemming-and-lemmatization-1.html
# find documentation on available pre-implemented stemmers here:
# http://www.nltk.org/api/nltk.stem.html
stemmer = porter.PorterStemmer()
for doc in documents[0:10]:
tokenized = tknzr.tokenize(doc)
stemmed = [stemmer.stem(x) for x in tokenized]
print("Original document:\n{}\nTokenized result:\n{}\nStemmed result:\n{}\n".format(
doc, tokenized, stemmed))
```
### 3) Text corpus
The text corpus is a collection of all of the documents (Tweets) that we're interested in modeling. Topic modeling and/or clustering on a corpus tends to work best if that corpus has some similar themes--this will mean that some tokens overlap, and we can get signal out of when documents share (or do not share) tokens.
Modeling text tends to get much harder the more different, uncommon and unrelated tokens appear in a text, especially when we are working with social data, where tokens don't necessarily appear in a dictionary. This difficultly (of having many, many unrelated tokens as dimension in our model) is one example of the [curse of dimensionality](https://en.wikipedia.org/wiki/Curse_of_dimensionality).
```
# number of documents in the corpus
print("There are {} documents in the corpus.".format(len(documents)))
```
### 4) Stop words:
Stop words are simply tokens that we've chosen to remove from the corpus, for any reason. In English, removing words like "and", "the", "a", "at", and "it" are common choices for stop words. Stop words can also be edited per project requirement, in case some words are too common in a particular dataset to be meaningful (another way to do stop word removal is to simply remove any word that appears in more than some fixed percentage of documents).
```
from nltk.corpus import stopwords
stopset = set(stopwords.words('english'))
print("The English stop words list provided by NLTK: ")
print(stopset)
stopset.update(["twitter"]) # add token
stopset.remove("i") # remove token
print("\nAdd or remove stop words form the set: ")
print(stopset)
```
### 5) Vectorize:
Transform each document into a vector. There are several good choices that you can make about how to do this transformation, and I'll talk about each of them in a second.
In order to vectorize documents in a corpus (without any dimensional reduction around the vocabulary), think of each document as a row in a matrix, and each column as a word in the vocabulary of the entire corpus. In order to vectorize a corpus, we must read the entire corpus, assign one word to each column, and then turn each document into a row.
**Example**:
**Documents**: "I love cake", "I hate chocolate", "I love chocolate cake", "I love cake, but I hate chocolate cake"
**Stopwords**: Say, because the word "but" is a conjunction, we want to make it a stop word (not include it in our document vectors)
**Vocabulary**: "I" (column 1), "love" (column 2), "cake" (column 3), "hate" (column 4), "chocolate" (column 5)
\begin{equation*}
\begin{matrix}
\text{"I love cake" } & =\\
\text{"I hate chocolate" } & =\\
\text{"I love chocolate cake" } & = \\
\text{"I love cake, but I hate chocolate cake"} & =
\end{matrix}
\qquad
\begin{bmatrix}
1 & 1 & 1 & 0 & 0\\
1 & 0 & 0 & 1 & 1\\
1 & 1 & 1 & 0 & 1\\
2 & 1 & 2 & 1 & 1
\end{bmatrix}
\end{equation*}
A vectorization like this doesn't take word order into account (we call this property "bag of words"), and in the above example I am simply counting the frequency of each term in each document.
```
# we're going to use the vectorizer functions that scikit learn provides
# define the tokenizer that we want to use
# must be a callable function that takes a document and returns a list of tokens
tknzr = TweetTokenizer(reduce_len = True)
stemmer = porter.PorterStemmer()
def myTokenizer(doc):
return [stemmer.stem(x) for x in tknzr.tokenize(doc)]
# choose the stopword set that we want to use
stopset = set(stopwords.words('english'))
stopset.update(["http","https","twitter","amp"])
# vectorize
# we're using the scikit learn CountVectorizer function, which is very handy
# documentation here:
# http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
vectorizer = CountVectorizer(tokenizer = myTokenizer, stop_words = stopset)
vectorized_documents = vectorizer.fit_transform(documents)
vectorized_documents
import matplotlib.pyplot as plt
%matplotlib inline
_ = plt.hist(vectorized_documents.todense().sum(axis = 1))
_ = plt.title("Number of tokens per document")
_ = plt.xlabel("Number of tokens")
_ = plt.ylabel("Number of documents with x tokens")
from numpy import logspace, ceil, histogram, array
# get the token frequency
token_freq = sorted(vectorized_documents.todense().astype(bool).sum(axis = 0).tolist()[0], reverse = False)
# make a histogram with log scales
bins = array([ceil(x) for x in logspace(0, 3, 5)])
widths = (bins[1:] - bins[:-1])
hist = histogram(token_freq, bins=bins)
hist_norm = hist[0]/widths
# plot (notice that most tokens only appear in one document)
plt.bar(bins[:-1], hist_norm, widths)
plt.xscale('log')
plt.yscale('log')
_ = plt.title("Number of documents in which each token appears")
_ = plt.xlabel("Number of documents")
_ = plt.ylabel("Number of tokens")
```
#### Bag of words
Taking all the words from a document, and sticking them in a bag. Order does not matter, which could cause a problem. "Alice loves cake" might have a different meaning than "Cake loves Alice."
#### Frequency
Counting the number of times a word appears in a document.
#### Tf-Idf (term frequency inverse document frequency):
A statistic that is intended to reflect how important a word is to a document in a collection or corpus. The Tf-Idf value increases proportionally to the number of times a word appears in the document and is inversely proportional to the frequency of the word in the corpus--this helps control words that are generally more common than others.
There are several different possibilities for computing the tf-idf statistic--choosing whether to normalize the vectors, choosing whether to use counts or the logarithm of counts, etc. I'm going to show how scikit-learn computes the tf-idf statistic by default, with more information available in the documentation of the scikit-learn [TfidfVectorizer](http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfTransformer.html).
$tf(t)$ : Term frequency, the count of the number of times the term appears in the document.
$df(d,t)$ : Document frequency, the count of the number of documents in which the term appears.
$n$ : The total number of documents in the corpus.
$idf(d,t)$ : Inverse document frequency, computed from $n$ and $df(d,t)$ inside the logarithm below.
$$
tfidf(t) = tf(t) \times \Big(\log\big(\frac{1 + n}{1 + df(d, t)}\big) + 1\Big)
$$
We also then take the Euclidean ($l2$) norm of each document vector, so that long documents (documents with many non-stopword tokens) have the same norm as shorter documents.
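Before reaching for the built-in vectorizer, here is a minimal NumPy sketch of the default recipe described above (smoothed idf plus $l2$ normalization) on a made-up count matrix; the numbers are purely illustrative:
```
import numpy as np

# hypothetical toy counts: 3 documents (rows), 2 terms (columns)
counts = np.array([[3, 0],
                   [2, 1],
                   [0, 2]], dtype=float)
n_docs = counts.shape[0]
df = (counts > 0).sum(axis=0)              # document frequency of each term
idf = np.log((1 + n_docs) / (1 + df)) + 1  # smoothed idf, matching the formula above
tfidf = counts * idf                       # term frequency times idf
tfidf /= np.linalg.norm(tfidf, axis=1, keepdims=True)  # l2-normalize each document vector
print(tfidf)
```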
```
# documentation on this scikit-learn function here:
# http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfTransformer.html
tfidf_vectorizer = TfidfVectorizer(tokenizer = myTokenizer, stop_words = stopset)
tfidf_vectorized_documents = tfidf_vectorizer.fit_transform(documents)
tfidf_vectorized_documents
# you can look at two vectors for the same document, from 2 different vectorizers:
tfidf_vectorized_documents[0].todense().tolist()[0]
vectorized_documents[0].todense().tolist()[0]
```
## That's all for now!
| github_jupyter |
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W1D3_ModelFitting/W1D3_Tutorial3.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Neuromatch Academy: Week 1, Day 3, Tutorial 3
# Model Fitting: Confidence intervals and bootstrapping
**Content creators**: Pierre-Étienne Fiquet, Anqi Wu, Alex Hyafil with help from Byron Galbraith
**Content reviewers**: Lina Teichmann, Saeed Salehi, Patrick Mineault, Ella Batty, Michael Waskom
# Tutorial Objectives
This is Tutorial 3 of a series on fitting models to data. We start with simple linear regression, using least squares optimization (Tutorial 1) and Maximum Likelihood Estimation (Tutorial 2). We will use bootstrapping to build confidence intervals around the inferred linear model parameters (Tutorial 3). We'll finish our exploration of regression models by generalizing to multiple linear regression and polynomial regression (Tutorial 4). We end by learning how to choose between these various models. We discuss the bias-variance trade-off (Tutorial 5) and Cross Validation for model selection (Tutorial 6).
In this tutorial, we will discuss how to gauge how good our estimated model parameters are.
- Learn how to use bootstrapping to generate new sample datasets
- Estimate our model parameter on these new sample datasets
- Quantify the variance of our estimate using confidence intervals
```
#@title Video 1: Confidence Intervals & Bootstrapping
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="hs6bVGQNSIs", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
```
Up to this point we have been finding ways to estimate model parameters to fit some observed data. Our approach has been to optimize some criterion, either minimize the mean squared error or maximize the likelihood while using the entire dataset. How good is our estimate really? How confident are we that it will generalize to describe new data we haven't seen yet?
One solution to this is to just collect more data and check the MSE on this new dataset with the previously estimated parameters. However, this is not always feasible, and it still leaves open the question of how quantifiably confident we are in the accuracy of our model.
In Section 1, we will explore how to implement bootstrapping. In Section 2, we will build confidence intervals of our estimates using the bootstrapping method.
---
# Setup
```
import numpy as np
import matplotlib.pyplot as plt
#@title Figure Settings
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
#@title Helper Functions
def solve_normal_eqn(x, y):
"""Solve the normal equations to produce the value of theta_hat that minimizes
MSE.
Args:
x (ndarray): An array of shape (samples,) that contains the input values.
y (ndarray): An array of shape (samples,) that contains the corresponding
measurement values to the inputs.
  Returns:
    float: the value for theta_hat arrived at by minimizing MSE
"""
theta_hat = (x.T @ y) / (x.T @ x)
return theta_hat
```
---
# Section 1: Bootstrapping
[Bootstrapping](https://en.wikipedia.org/wiki/Bootstrapping_(statistics)) is a widely applicable method to assess confidence/uncertainty about estimated parameters, it was originally [proposed](https://projecteuclid.org/euclid.aos/1176344552) by [Bradley Efron](https://en.wikipedia.org/wiki/Bradley_Efron). The idea is to generate many new synthetic datasets from the initial true dataset by randomly sampling from it, then finding estimators for each one of these new datasets, and finally looking at the distribution of all these estimators to quantify our confidence.
Note that each new resampled dataset will be the same size as our original one, with the new data points sampled with replacement, i.e. we can repeat the same data point multiple times. Also note that in practice we need a lot of resampled datasets; here we use 2000.
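To see what sampling with replacement means in isolation, here is a tiny sketch on a made-up array (the seeded generator is only there to make the demo reproducible):
```
import numpy as np

rng_demo = np.random.default_rng(0)
toy = np.array([10, 20, 30, 40, 50])
# a resample has the same size as the original, but values can repeat (and others can be missing)
print(rng_demo.choice(toy, size=len(toy), replace=True))
```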
To explore this idea, we will start again with our noisy samples along the line $y_n = 1.2x_n + \epsilon_n$, but this time use only half as many data points as last time (15 instead of 30).
```
#@title
#@markdown Execute this cell to simulate some data
# setting a fixed seed to our random number generator ensures we will always
# get the same pseudorandom number sequence
np.random.seed(121)
# Let's set some parameters
theta = 1.2
n_samples = 15
# Draw x and then calculate y
x = 10 * np.random.rand(n_samples) # sample from a uniform distribution over [0,10)
noise = np.random.randn(n_samples) # sample from a standard normal distribution
y = theta * x + noise
fig, ax = plt.subplots()
ax.scatter(x, y) # produces a scatter plot
ax.set(xlabel='x', ylabel='y');
```
### Exercise 1: Resample Dataset with Replacement
In this exercise you will implement a method to resample a dataset with replacement. The method accepts $x$ and $y$ arrays. It should return a new set of $x'$ and $y'$ arrays that are created by randomly sampling from the originals.
We will then compare the original dataset to a resampled dataset.
TIP: The [numpy.random.choice](https://numpy.org/doc/stable/reference/random/generated/numpy.random.choice.html) method would be useful here.
```
def resample_with_replacement(x, y):
"""Resample data points with replacement from the dataset of `x` inputs and
`y` measurements.
Args:
x (ndarray): An array of shape (samples,) that contains the input values.
y (ndarray): An array of shape (samples,) that contains the corresponding
measurement values to the inputs.
Returns:
ndarray, ndarray: The newly resampled `x` and `y` data points.
"""
#######################################################
## TODO for students: resample dataset with replacement
# Fill out function and remove
raise NotImplementedError("Student exercise: resample dataset with replacement")
#######################################################
# Get array of indices for resampled points
sample_idx = ...
# Sample from x and y according to sample_idx
x_ = ...
y_ = ...
return x_, y_
fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(12, 5))
ax1.scatter(x, y)
ax1.set(title='Original', xlabel='x', ylabel='y')
# Uncomment below to test your function
#x_, y_ = resample_with_replacement(x, y)
#ax2.scatter(x_, y_, color='c')
ax2.set(title='Resampled', xlabel='x', ylabel='y',
xlim=ax1.get_xlim(), ylim=ax1.get_ylim());
# to_remove solution
def resample_with_replacement(x, y):
"""Resample data points with replacement from the dataset of `x` inputs and
`y` measurements.
Args:
x (ndarray): An array of shape (samples,) that contains the input values.
y (ndarray): An array of shape (samples,) that contains the corresponding
measurement values to the inputs.
Returns:
ndarray, ndarray: The newly resampled `x` and `y` data points.
"""
# Get array of indices for resampled points
sample_idx = np.random.choice(len(x), size=len(x), replace=True)
# Sample from x and y according to sample_idx
x_ = x[sample_idx]
y_ = y[sample_idx]
return x_, y_
with plt.xkcd():
fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(12, 5))
ax1.scatter(x, y)
ax1.set(title='Original', xlabel='x', ylabel='y')
x_, y_ = resample_with_replacement(x, y)
ax2.scatter(x_, y_, color='c')
ax2.set(title='Resampled', xlabel='x', ylabel='y',
xlim=ax1.get_xlim(), ylim=ax1.get_ylim());
```
In the resampled plot on the right, the actual number of points is the same, but some points have been repeated, so overlapping duplicates display as a single point.
Now that we have a way to resample the data, we can use that in the full bootstrapping process.
### Exercise 2: Bootstrap Estimates
In this exercise you will implement a method to run the bootstrap process of generating a set of $\hat\theta$ values from a dataset of $x$ inputs and $y$ measurements. You should use `resample_with_replacement` here, and you may also invoke helper function `solve_normal_eqn` from Tutorial 1 to produce the MSE-based estimator.
We will then use this function to look at the theta_hat from different samples.
```
def bootstrap_estimates(x, y, n=2000):
"""Generate a set of theta_hat estimates using the bootstrap method.
Args:
x (ndarray): An array of shape (samples,) that contains the input values.
y (ndarray): An array of shape (samples,) that contains the corresponding
measurement values to the inputs.
n (int): The number of estimates to compute
Returns:
ndarray: An array of estimated parameters with size (n,)
"""
theta_hats = np.zeros(n)
##############################################################################
## TODO for students: implement bootstrap estimation
# Fill out function and remove
raise NotImplementedError("Student exercise: implement bootstrap estimation")
##############################################################################
# Loop over number of estimates
for i in range(n):
# Resample x and y
x_, y_ = ...
# Compute theta_hat for this sample
theta_hats[i] = ...
return theta_hats
np.random.seed(123) # set random seed for checking solutions
# Uncomment below to test function
# theta_hats = bootstrap_estimates(x, y, n=2000)
# print(theta_hats[0:5])
# to_remove solution
def bootstrap_estimates(x, y, n=2000):
"""Generate a set of theta_hat estimates using the bootstrap method.
Args:
x (ndarray): An array of shape (samples,) that contains the input values.
y (ndarray): An array of shape (samples,) that contains the corresponding
measurement values to the inputs.
n (int): The number of estimates to compute
Returns:
ndarray: An array of estimated parameters with size (n,)
"""
theta_hats = np.zeros(n)
# Loop over number of estimates
for i in range(n):
# Resample x and y
x_, y_ = resample_with_replacement(x, y)
# Compute theta_hat for this sample
theta_hats[i] = solve_normal_eqn(x_, y_)
return theta_hats
np.random.seed(123) # set random seed for checking solutions
theta_hats = bootstrap_estimates(x, y, n=2000)
print(theta_hats[0:5])
```
You should see `[1.27550888 1.17317819 1.18198819 1.25329255 1.20714664]` as the first five estimates.
Now that we have our bootstrap estimates, we can visualize all the potential models (models computed with different resampling) together to see how distributed they are.
```
#@title
#@markdown Execute this cell to visualize all potential models
fig, ax = plt.subplots()
# For each theta_hat, plot model
theta_hats = bootstrap_estimates(x, y, n=2000)
for i, theta_hat in enumerate(theta_hats):
y_hat = theta_hat * x
ax.plot(x, y_hat, c='r', alpha=0.01, label='Resampled Fits' if i==0 else '')
# Plot observed data
ax.scatter(x, y, label='Observed')
# Plot true fit data
y_true = theta * x
ax.plot(x, y_true, 'g', linewidth=2, label='True Model')
ax.set(
title='Bootstrapped Slope Estimation',
xlabel='x',
ylabel='y'
)
# Change legend line alpha property
handles, labels = ax.get_legend_handles_labels()
handles[0].set_alpha(1)
ax.legend();
```
This looks pretty good! The bootstrapped estimates spread around the true model, as we would have hoped. Note that here we have the luxury to know the ground truth value for $\theta$, but in applications we are trying to guess it from data. Therefore, assessing the quality of estimates based on finite data is a task of fundamental importance in data analysis.
---
# Section 2: Confidence Intervals
Let us now quantify how uncertain our estimated slope is. We do so by computing [confidence intervals](https://en.wikipedia.org/wiki/Confidence_interval) (CIs) from our bootstrapped estimates. The most direct approach is to compute percentiles from the empirical distribution of bootstrapped estimates. Note that this is widely applicable as we are not assuming that this empirical distribution is Gaussian.
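In code, that amounts to taking the 2.5th and 97.5th percentiles of the bootstrapped estimates (a minimal sketch, assuming the `theta_hats` array from Section 1 is available):
```
# 95% bootstrap confidence interval from the empirical distribution of estimates
lower, upper = np.percentile(theta_hats, [2.5, 97.5])
print(f"95% bootstrap CI for theta_hat: [{lower:.3f}, {upper:.3f}]")
```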
```
#@title
#@markdown Execute this cell to plot bootstrapped CI
theta_hats = bootstrap_estimates(x, y, n=2000)
print(f"mean = {np.mean(theta_hats):.2f}, std = {np.std(theta_hats):.2f}")
fig, ax = plt.subplots()
ax.hist(theta_hats, bins=20, facecolor='C1', alpha=0.75)
ax.axvline(theta, c='g', label=r'True $\theta$')
ax.axvline(np.percentile(theta_hats, 50), color='r', label='Median')
ax.axvline(np.percentile(theta_hats, 2.5), color='b', label='95% CI')
ax.axvline(np.percentile(theta_hats, 97.5), color='b')
ax.legend()
ax.set(
title='Bootstrapped Confidence Interval',
xlabel=r'$\hat{{\theta}}$',
ylabel='count',
xlim=[1.0, 1.5]
);
```
Looking at the distribution of bootstrapped $\hat{\theta}$ values, we see that the true $\theta$ falls well within the 95% confidence interval, which is reassuring. We also see that the value $\theta = 1$ does not fall within the confidence interval. From this we would reject the hypothesis that the slope was 1.
---
# Summary
- Bootstrapping is a resampling procedure that allows us to build confidence intervals around inferred parameter values
- it is a widely applicable and very practical method that relies on computational power and pseudo-random number generators (as opposed to more classical approaches that depend on analytical derivations)
**Suggested readings**
Computer Age Statistical Inference: Algorithms, Evidence and Data Science, by Bradley Efron and Trevor Hastie
| github_jupyter |
# Load raw data
```
import numpy as np
data = np.loadtxt('SlowSteps1.csv', delimiter = ',') # load the raw data, change the filename as required!
```
# Find spikes
```
time_s = (data[:,8]-data[0,8])/1000000 # set the timing array to seconds and subtract 1st entry to zero it
n_spikes = 0
spike_times = [] # in seconds
spike_points = [] # in timepoints
for x in range(1, data.shape[0]-1):
if (data[x,0]>10 and data[x-1,0]<10): # looks for all instances where subsequent Vm points jump from <10 to >10
spike_times.append(time_s[x])
spike_points.append(x)
n_spikes+=1
print(n_spikes, "spikes detected")
```
# Compute spike rate
```
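# instantaneous firing rate: 1 / inter-spike interval, held constant between consecutive spikes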
spike_rate = np.zeros(data.shape[0])
for x in range(0, n_spikes-1):
current_rate = 1/(spike_times[x+1]-spike_times[x])
spike_rate[spike_points[x]:spike_points[x+1]]=current_rate
```
# Plot raw data and spike rate
```
from bokeh.plotting import figure, output_file, show
from bokeh.layouts import column
from bokeh.models import Range1d
output_file("RawDataPlot.html")
spike_plot = figure(plot_width=1200, plot_height = 100)
spike_plot.line(time_s[:],spike_rate[:], line_width=1, line_color="black") # Spike rate
spike_plot.yaxis[0].axis_label = 'Rate (Hz)'
spike_plot.xgrid.grid_line_color =None
spike_plot.ygrid.grid_line_color =None
spike_plot.xaxis.major_label_text_font_size = '0pt' # turn off x-axis tick labels
spike_plot.yaxis.minor_tick_line_color = None # turn off y-axis minor ticks
vm_plot = figure(plot_width=1200, plot_height = 300, y_range=Range1d(-100, 50),x_range=spike_plot.x_range)
vm_plot.line(time_s[:],data[:,0], line_width=1, line_color="black") # Vm
vm_plot.scatter(spike_times[:],45, line_color="black") # Rasterplot over spikes
vm_plot.yaxis[0].axis_label = 'Vm (mV)'
vm_plot.xgrid.grid_line_color =None
vm_plot.ygrid.grid_line_color =None
vm_plot.xaxis.major_label_text_font_size = '0pt' # turn off x-axis tick labels
itotal_plot = figure(plot_width=1200, plot_height = 200, x_range=spike_plot.x_range)
itotal_plot.line(time_s[:], data[:,1], line_width=1, line_color="black") # Itotal
itotal_plot.yaxis[0].axis_label = 'I total (a.u.)'
itotal_plot.xgrid.grid_line_color =None
itotal_plot.xaxis.major_label_text_font_size = '0pt' # turn off x-axis tick labels
in_spikes_plot = figure(plot_width=1200, plot_height = 80, y_range=Range1d(-0.1,1.1), x_range=spike_plot.x_range)
in_spikes_plot.line(time_s[:], data[:,3], line_width=1, line_color="black") # Spikes in from Port 1
in_spikes_plot.line(time_s[:], data[:,4], line_width=1, line_color="grey") # Spikes in from Port 2
in_spikes_plot.yaxis[0].axis_label = 'Input spikes'
in_spikes_plot.xgrid.grid_line_color =None
in_spikes_plot.ygrid.grid_line_color =None
in_spikes_plot.yaxis.major_tick_line_color = None # turn off y-axis major ticks
in_spikes_plot.yaxis.minor_tick_line_color = None # turn off y-axis minor ticks
in_spikes_plot.xaxis.major_label_text_font_size = '0pt' # turn off x-axis tick labels
in_spikes_plot.yaxis.major_label_text_font_size = '0pt' # turn off y-axis tick labels
stim_plot = figure(plot_width=1200, plot_height = 100,y_range=Range1d(-0.1,1.1), x_range=spike_plot.x_range)
stim_plot.line(time_s[:], data[:,2], line_width=1, line_color="black") # Stimulus
stim_plot.yaxis[0].axis_label = 'Stimulus'
stim_plot.xaxis[0].axis_label = 'Time (s)'
stim_plot.xgrid.grid_line_color =None
stim_plot.ygrid.grid_line_color =None
stim_plot.yaxis.major_tick_line_color = None # turn off y-axis major ticks
stim_plot.yaxis.minor_tick_line_color = None # turn off y-axis minor ticks
stim_plot.yaxis.major_label_text_font_size = '0pt' # turn off y-axis tick labels
show(column(spike_plot,vm_plot,itotal_plot,in_spikes_plot,stim_plot))
```
# Analysis Option 1: Trigger stimuli and align
```
stimulus_times = []
stimulus_times_s = []
for x in range(0, data.shape[0]-1): # goes through each timepoint
if (data[x,2]<data[x+1,2]): # checks if the stimulus went from 0 to 1
stimulus_times.append(x) ## make a list of times (in points) when stimulus increased
stimulus_times_s.append(time_s[x]) ## also make a list of times (in seconds)
loop_duration = stimulus_times[1]-stimulus_times[0] # compute arraylength for single stimulus
loop_duration_s = stimulus_times_s[1]-stimulus_times_s[0] # compute arraylength for single stimulus also in s
print(loop_duration, "points per loop;", loop_duration_s, "seconds")
sr_loops = []
vm_loops = []
itotal_loops = []
stim_loops = []
stimulus_times = np.where(data[:,2]>np.roll(data[:,2], axis = 0, shift = 1)) ## make a list of times when stimulus increased (again)
sr_loops = np.vstack([spike_rate[x:x+loop_duration] for x in stimulus_times[0][:-1]])
vm_loops = np.vstack([data[x:x+loop_duration, 0] for x in stimulus_times[0][:-1]])
itotal_loops = np.vstack([data[x:x+loop_duration, 1] for x in stimulus_times[0][:-1]])
stim_loops = np.vstack([data[x:x+loop_duration, 2] for x in stimulus_times[0][:-1]])
st_loops = []
for i, x in enumerate(stimulus_times[0][:-1]):
st_loops.append([time_s[sp]-time_s[x] for sp in spike_points if sp > x and sp < x+loop_duration])
loops = vm_loops.shape[0]
print(loops, "loops")
```
# Make average arrays
```
sr_mean = np.mean(sr_loops, axis=0)
vm_mean = np.mean(vm_loops, axis=0)
itotal_mean = np.mean(itotal_loops, axis=0)
stim_mean = np.mean(stim_loops, axis=0)
```
# Plot stimulus aligned data
```
from bokeh.plotting import figure, output_file, show
from bokeh.layouts import column
from bokeh.models import Range1d
output_file("AlignedDataPlot.html")
spike_plot = figure(plot_width=400, plot_height = 100)
for i in range(0,loops-1):
spike_plot.line(time_s[0:loop_duration],sr_loops[i,:], line_width=1, line_color="gray") # Vm individual repeats
spike_plot.line(time_s[0:loop_duration],sr_mean[:], line_width=1.5, line_color="black") # Vm mean
spike_plot.yaxis[0].axis_label = 'Rate (Hz)'
spike_plot.xgrid.grid_line_color =None
spike_plot.ygrid.grid_line_color =None
spike_plot.xaxis.major_label_text_font_size = '0pt' # turn off x-axis tick labels
dot_plot = figure(plot_width=400, plot_height = 100, x_range=spike_plot.x_range)
for i in range(0,loops-1):
dot_plot.scatter(st_loops[i],i, line_color="black") # Rasterplot
dot_plot.yaxis[0].axis_label = 'Repeat'
dot_plot.xgrid.grid_line_color =None
dot_plot.ygrid.grid_line_color =None
dot_plot.xaxis.major_label_text_font_size = '0pt' # turn off x-axis tick labels
vm_plot = figure(plot_width=400, plot_height = 300, y_range=Range1d(-100, 40),x_range=spike_plot.x_range)
for i in range(0,loops-1):
vm_plot.line(time_s[0:loop_duration],vm_loops[i,:], line_width=1, line_color="gray") # Vm individual repeats
vm_plot.line(time_s[0:loop_duration],vm_mean[:], line_width=1.5, line_color="black") # Vm mean
vm_plot.yaxis[0].axis_label = 'Vm (mV)'
vm_plot.xgrid.grid_line_color =None
vm_plot.ygrid.grid_line_color =None
vm_plot.xaxis.major_label_text_font_size = '0pt' # turn off x-axis tick labels
itotal_plot = figure(plot_width=400, plot_height = 200, x_range=spike_plot.x_range)
for i in range(0,loops-1):
itotal_plot.line(time_s[0:loop_duration], itotal_loops[i,:], line_width=1, line_color="gray") # Itotal individual repeats
itotal_plot.line(time_s[0:loop_duration], itotal_mean[:], line_width=1.5, line_color="black") # Itotal mean
itotal_plot.yaxis[0].axis_label = 'Itotal (a.u.)'
itotal_plot.xgrid.grid_line_color =None
itotal_plot.xaxis.major_label_text_font_size = '0pt' # turn off x-axis tick labels
stim_plot = figure(plot_width=400, plot_height = 100,y_range=Range1d(-0.1,1.1), x_range=spike_plot.x_range)
for i in range(0,loops-1):
stim_plot.line(time_s[0:loop_duration], stim_loops[i,:], line_width=1, line_color="gray") # Stimulus individual repeats
stim_plot.line(time_s[0:loop_duration], stim_mean[:], line_width=1.5, line_color="black") # Stimulus mean
stim_plot.yaxis[0].axis_label = 'Stimulus'
stim_plot.xaxis[0].axis_label = 'Time (s)'
stim_plot.xgrid.grid_line_color =None
stim_plot.ygrid.grid_line_color =None
stim_plot.yaxis.major_tick_line_color = None # turn off y-axis major ticks
stim_plot.yaxis.minor_tick_line_color = None # turn off y-axis minor ticks
stim_plot.yaxis.major_label_text_font_size = '0pt' # turn off y-axis tick labels
show(column(spike_plot,dot_plot,vm_plot,itotal_plot,stim_plot))
```
# Analysis option 2: Spike triggered average (STA)
```
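# spike-triggered average (STA): average the stimulus channel (column 2) over the sta_points samples preceding each spike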
sta_points = 200 # number of points computed
sta_individual = []
sta_individual = np.vstack([data[x-sta_points:x,2] for x in spike_points[2:-1]])
sta = np.mean(sta_individual, axis=0)
import matplotlib.pyplot as plt
plt.plot(time_s[0:200],sta[:])
plt.ylabel('Kernel amplitude')
plt.xlabel('Time before spike (s)')
plt.show()
```
| github_jupyter |
# DIMAML for Autoencoder models
Training is on CelebA; evaluation is on Tiny ImageNet.
```
%load_ext autoreload
%autoreload 2
%env CUDA_VISIBLE_DEVICES=0
import os, sys, time
sys.path.insert(0, '..')
import lib
import math
import numpy as np
from copy import deepcopy
import torch, torch.nn as nn
import torch.nn.functional as F
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('seaborn-darkgrid')
plt.rcParams['pdf.fonttype'] = 42
plt.rcParams['ps.fonttype'] = 42
# For reproducibility
import random
seed = random.randint(0, 2 ** 32 - 1)
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
print(seed)
```
## Setting
```
model_type = 'AE'
# Dataset
data_dir = './data'
train_batch_size = 128
valid_batch_size = 256
test_batch_size = 128
num_workers = 3
pin_memory = True
device = 'cuda' if torch.cuda.is_available() else 'cpu'
# AE
latent_dim = 64
loss_function = F.mse_loss
# MAML
max_steps = 1500
inner_loop_steps_in_epoch = 200
inner_loop_epochs = 3
inner_loop_steps = inner_loop_steps_in_epoch * inner_loop_epochs
meta_grad_clip = 10.
loss_kwargs={'reduction':'mean'}
loss_interval = 50
first_val_step = 200
assert (inner_loop_steps - first_val_step) % loss_interval == 0
validation_steps = int((inner_loop_steps - first_val_step) / loss_interval + 1)
# Inner optimizer
inner_optimizer_type='momentum'
inner_optimizer_kwargs = dict(
lr=0.01, momentum=0.9,
nesterov=False, weight_decay=0.0
)
# Meta optimizer
meta_learning_rate = 1e-4
meta_betas = (0.9, 0.997)
meta_decay_interval = max_steps
checkpoint_steps = 15
recovery_step = None
kwargs = dict(
first_valid_step=first_val_step,
valid_loss_interval=loss_interval,
loss_kwargs=loss_kwargs,
)
exp_name = f"{model_type}{latent_dim}_celeba_{inner_optimizer_type}" + \
f"_steps{inner_loop_steps}_interval{loss_interval}" + \
f"_tr_bs{train_batch_size}_val_bs{valid_batch_size}_seed_{seed}"
print("Experiment name: ", exp_name)
logs_path = "./logs/{}".format(exp_name)
assert recovery_step is not None or not os.path.exists(logs_path)
# !rm -rf {logs_path}
```
## Prepare the CelebA dataset
```
import pandas as pd
import shutil
celeba_data_dir = 'data/celeba/'
data = pd.read_csv(os.path.join(celeba_data_dir, 'list_eval_partition.csv'))
try:
for partition in ['train', 'val', 'test']:
os.makedirs(os.path.join(celeba_data_dir, partition))
os.makedirs(os.path.join(celeba_data_dir, partition, 'images'))
for i in data.index:
partition = data.loc[i].partition
src_path = os.path.join(celeba_data_dir, 'img_align_celeba/img_align_celeba', data.loc[i].image_id)
if partition == 0:
shutil.copyfile(src_path, os.path.join(celeba_data_dir, 'train', 'images', data.loc[i].image_id))
elif partition == 1:
shutil.copyfile(src_path, os.path.join(celeba_data_dir, 'val', 'images', data.loc[i].image_id))
elif partition == 2:
shutil.copyfile(src_path, os.path.join(celeba_data_dir, 'test', 'images', data.loc[i].image_id))
except FileExistsError:
print('\'train\', \'val\', \'test\' already exist. Probably, you do not want to copy data again')
from torchvision import transforms, datasets
from torch.utils.data import DataLoader
celeba_transforms = transforms.Compose([
transforms.Resize((64, 64)),
transforms.ToTensor(),
])
# Create the train set
celeba_train_dataset = datasets.ImageFolder(celeba_data_dir+'train', transform=celeba_transforms)
celeba_train_images = torch.cat([celeba_train_dataset[i][0][None] for i in range(len(celeba_train_dataset))])
celeba_mean_image = celeba_train_images.mean(0)
celeba_std_image = celeba_train_images.std(0)
celeba_train_images = (celeba_train_images - celeba_mean_image) / celeba_std_image
# Create the val set
celeba_valid_dataset = datasets.ImageFolder(celeba_data_dir+'val', celeba_transforms)
celeba_valid_images = torch.cat([celeba_valid_dataset[i][0][None] for i in range(len(celeba_valid_dataset))])
celeba_valid_images = (celeba_valid_images - celeba_mean_image) / celeba_std_image
# Create the test set
celeba_test_dataset = datasets.ImageFolder(celeba_data_dir+'test', celeba_transforms)
celeba_test_images = torch.cat([celeba_test_dataset[i][0][None] for i in range(len(celeba_test_dataset))])
celeba_test_images = (celeba_test_images - celeba_mean_image) / celeba_std_image
# Create data loaders
train_loader = torch.utils.data.DataLoader(celeba_train_images, batch_size=train_batch_size, shuffle=True,
pin_memory=pin_memory, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(celeba_valid_images, batch_size=valid_batch_size, shuffle=True,
pin_memory=pin_memory, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(celeba_test_images, batch_size=test_batch_size,
pin_memory=pin_memory, num_workers=num_workers)
```
## Create the model and meta-optimizer
```
optimizer = lib.make_inner_optimizer(inner_optimizer_type, **inner_optimizer_kwargs)
model = lib.models.AE(latent_dim)
maml = lib.MAML(model, model_type, optimizer=optimizer,
checkpoint_steps=checkpoint_steps,
loss_function=loss_function
).to(device)
```
## Trainer
```
def samples_batches(dataloader, num_batches):
x_batches = []
for batch_i, x_batch in enumerate(dataloader):
if batch_i >= num_batches: break
x_batches.append(x_batch)
return x_batches
class TrainerAE(lib.Trainer):
def train_on_batch(self, train_loader, valid_loader, prefix='train/', **kwargs):
""" Performs a single gradient update and reports metrics """
# Sample train and val batches
x_batches = []
for _ in range(inner_loop_epochs):
x_batches.extend(samples_batches(train_loader, inner_loop_steps_in_epoch))
x_val_batches = samples_batches(valid_loader, validation_steps)
# Perform a meta training step
self.meta_optimizer.zero_grad()
with lib.training_mode(self.maml, is_train=True):
self.maml.resample_parameters()
_updated_model, train_loss_history, valid_loss_history, *etc = \
self.maml.forward(x_batches, x_batches, x_val_batches, x_val_batches,
device=self.device, **kwargs)
train_loss = torch.cat(train_loss_history).mean()
valid_loss = torch.cat(valid_loss_history).mean() if len(valid_loss_history) > 0 else torch.zeros(1)
valid_loss.backward()
# Check gradients
grad_norm = lib.utils.total_norm_frobenius(self.maml.initializers.parameters())
self.writer.add_scalar(prefix + "grad_norm", grad_norm, self.total_steps)
bad_grad = not math.isfinite(grad_norm)
if not bad_grad:
nn.utils.clip_grad_norm_(list(self.maml.initializers.parameters()), meta_grad_clip)
else:
print("Fix bad grad. Loss {} | Grad {}".format(train_loss.item(), grad_norm))
for param in self.maml.initializers.parameters():
param.grad = torch.where(torch.isfinite(param.grad),
param.grad, torch.zeros_like(param.grad))
self.meta_optimizer.step()
return self.record(train_loss=train_loss.item(),
valid_loss=valid_loss.item(), prefix=prefix)
def evaluate_metrics(self, train_loader, test_loader, prefix='val/', **kwargs):
""" Predicts and evaluates metrics over the entire dataset """
torch.cuda.empty_cache()
print('Baseline')
self.maml.resample_parameters(initializers=self.maml.untrained_initializers, is_final=True)
base_model = deepcopy(self.maml.model)
base_train_loss_history, base_test_loss_history = eval_model(base_model, train_loader, test_loader,
device=self.device, **kwargs)
print('DIMAML')
self.maml.resample_parameters(is_final=True)
maml_model = deepcopy(self.maml.model)
maml_train_loss_history, maml_test_loss_history = eval_model(maml_model, train_loader, test_loader,
device=self.device, **kwargs)
lib.utils.ae_draw_plots(base_train_loss_history, base_test_loss_history,
maml_train_loss_history, maml_test_loss_history)
self.writer.add_scalar(prefix + "train_AUC", sum(maml_train_loss_history), self.total_steps)
self.writer.add_scalar(prefix + "test_AUC", sum(maml_test_loss_history), self.total_steps)
self.writer.add_scalar(prefix + "test_loss", maml_test_loss_history[-1], self.total_steps)
########################
# Generate Train Batch #
########################
def generate_train_batches(train_loader, batches_in_epoch=150):
x_batches = []
for batch_i, x_batch in enumerate(train_loader):
if batch_i >= batches_in_epoch: break
x_batches.append(x_batch)
assert len(x_batches) == batches_in_epoch
local_x = torch.cat(x_batches, dim=0)
return DataLoader(local_x, batch_size=train_batch_size, shuffle=True,
num_workers=num_workers, pin_memory=pin_memory)
##################
# Eval functions #
##################
@torch.no_grad()
def compute_test_loss(model, loss_function, test_loader, device='cuda'):
model.eval()
test_loss = 0.
for batch_test in test_loader:
if isinstance(batch_test, (list, tuple)):
x_test = batch_test[0].to(device)
elif isinstance(batch_test, torch.Tensor):
x_test = batch_test.to(device)
else:
raise Exception("Wrong batch")
preds = model(x_test)
test_loss += loss_function(preds, x_test) * x_test.shape[0]
test_loss /= len(test_loader.dataset)
model.train()
return test_loss.item()
def eval_model(model, train_loader, test_loader, batches_in_epoch=150,
epochs=3, test_loss_interval=50, device='cuda', **kwargs):
optimizer = lib.optimizers.make_eval_inner_optimizer(
maml, model, inner_optimizer_type,
**inner_optimizer_kwargs
)
train_loss_history = []
test_loss_history = []
training_mode = model.training
total_iters = 0
for epoch in range(1, epochs + 1):
model.train()
for x_batch in train_loader:
optimizer.zero_grad()
x_batch = x_batch.to(device)
preds = model(x_batch)
loss = loss_function(preds, x_batch)
loss.backward()
optimizer.step()
train_loss_history.append(loss.item())
if (total_iters == 0) or (total_iters + 1) % test_loss_interval == 0:
model.eval()
test_loss = compute_test_loss(model, loss_function, test_loader, device=device)
print("Epoch {} | Total Iteration {} | Loss {}".format(epoch, total_iters+1, test_loss))
test_loss_history.append(test_loss)
model.train()
total_iters += 1
model.train(training_mode)
return train_loss_history, test_loss_history
train_loss_history = []
valid_loss_history = []
trainer = TrainerAE(maml, meta_lr=meta_learning_rate,
meta_betas=meta_betas, meta_grad_clip=meta_grad_clip,
exp_name=exp_name, recovery_step=recovery_step)
from IPython.display import clear_output
lib.free_memory()
t0 = time.time()
while trainer.total_steps <= max_steps:
local_train_loader = generate_train_batches(train_loader, inner_loop_steps_in_epoch)
with lib.activate_context_batchnorm(maml.model):
metrics = trainer.train_on_batch(
local_train_loader, valid_loader, **kwargs
)
train_loss = metrics['train_loss']
train_loss_history.append(train_loss)
valid_loss = metrics['valid_loss']
valid_loss_history.append(valid_loss)
if trainer.total_steps % 20 == 0:
clear_output(True)
print("Step: %d | Time: %f | Train Loss %.5f | Valid loss %.5f"
% (trainer.total_steps, time.time()-t0, train_loss, valid_loss))
plt.figure(figsize=[16, 5])
plt.subplot(1,2,1)
plt.title('Train Loss over time')
plt.plot(lib.utils.moving_average(train_loss_history, span=50))
plt.scatter(range(len(train_loss_history)), train_loss_history, alpha=0.1)
plt.subplot(1,2,2)
plt.title('Valid Loss over time')
plt.plot(lib.utils.moving_average(valid_loss_history, span=50))
plt.scatter(range(len(valid_loss_history)), valid_loss_history, alpha=0.1)
plt.show()
trainer.evaluate_metrics(local_train_loader, test_loader, epochs=inner_loop_epochs,
test_loss_interval=loss_interval)
lib.utils.ae_visualize_pdf(maml)
t0 = time.time()
if trainer.total_steps % 100 == 0:
trainer.save_model()
trainer.total_steps += 1
```
## Probability Functions
```
lib.utils.ae_visualize_pdf(maml)
```
# Evaluation
```
torch.backends.cudnn.deterministic = False
torch.backends.cudnn.benchmark = True
def genOrthgonal(dim):
a = torch.zeros((dim, dim)).normal_(0, 1)
q, r = torch.qr(a)
d = torch.diag(r, 0).sign()
diag_size = d.size(0)
d_exp = d.view(1, diag_size).expand(diag_size, diag_size)
q.mul_(d_exp)
return q
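# delta-orthogonal initialization for convolutional layers (Xiao et al., 2018): place an orthogonal matrix at the spatial center of the kernel, zeros elsewhere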
def makeDeltaOrthogonal(weights, gain):
rows = weights.size(0)
cols = weights.size(1)
if rows < cols:
print("In_filters should not be greater than out_filters.")
weights.data.fill_(0)
dim = max(rows, cols)
q = genOrthgonal(dim)
mid1 = weights.size(2) // 2
mid2 = weights.size(3) // 2
with torch.no_grad():
weights[:, :, mid1, mid2] = q[:weights.size(0), :weights.size(1)]
weights.mul_(gain)
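# gradient quotient objective from MetaInit (Dauphin & Schoenholz, 2019); metainit() below minimizes it by rescaling the norms of the initial parameters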
def gradient_quotient(loss, params, eps=1e-5):
grad = torch.autograd.grad(loss, params, retain_graph=True, create_graph=True)
prod = torch.autograd.grad(sum([(g**2).sum() / 2 for g in grad]),
params, retain_graph=True, create_graph=True)
out = sum([((g - p) / (g + eps * (2*(g >= 0).float() - 1).detach()) - 1).abs().sum()
for g, p in zip(grad, prod)])
return out / sum([p.data.nelement() for p in params])
def metainit(model, criterion, x_size, lr=0.1, momentum=0.9, steps=200, eps=1e-5):
model.eval()
params = [p for p in model.parameters()
if p.requires_grad and len(p.size()) >= 2]
memory = [0] * len(params)
for i in range(steps):
input = torch.Tensor(*x_size).normal_(0, 1).cuda()
loss = criterion(model(input), input)
gq = gradient_quotient(loss, list(model.parameters()), eps)
grad = torch.autograd.grad(gq, params)
for j, (p, g_all) in enumerate(zip(params, grad)):
norm = p.data.norm().item()
g = torch.sign((p.data * g_all).sum() / norm)
memory[j] = momentum * memory[j] - lr * g.item()
new_norm = norm + memory[j]
p.data.mul_(new_norm / (norm + eps))
print("%d/GQ = %.2f" % (i, gq.item()))
```
## Evaluation on Tiny ImageNet
```
class PixelNormalize(object):
def __init__(self, mean_image, std_image):
self.mean_image = mean_image
self.std_image = std_image
def __call__(self, image):
normalized_image = (image - self.mean_image) / self.std_image
return normalized_image
class Flip(object):
def __call__(self, image):
if random.random() > 0.5:
return image.flip(-1)
else:
return image
class CustomTensorDataset(torch.utils.data.Dataset):
""" TensorDataset with support of transforms """
def __init__(self, *tensors, transform=None):
assert all(tensors[0].size(0) == tensor.size(0) for tensor in tensors)
self.tensors = tensors
self.transform = transform
def __getitem__(self, index):
x = self.tensors[0][index]
if self.transform:
x = self.transform(x)
return x
def __len__(self):
return self.tensors[0].size(0)
# Load train and valid data
from torchvision import transforms, datasets
from torch.utils.data import DataLoader
data_dir = 'data/tiny-imagenet-200/'
train_image_dataset = datasets.ImageFolder(os.path.join(data_dir, 'train'), transforms.ToTensor())
train_images = torch.cat([train_image_dataset[i][0][None] for i in range(len(train_image_dataset))], dim=0)
mean_image = train_images.mean(0)
std_image = train_images.std(0)
train_transforms = transforms.Compose([
Flip(),
PixelNormalize(mean_image, std_image),
])
eval_transforms = transforms.Compose([
PixelNormalize(mean_image, std_image),
])
ti_train_dataset = CustomTensorDataset(train_images, transform=train_transforms)
valid_image_dataset = datasets.ImageFolder(os.path.join(data_dir, 'val'), transforms.ToTensor())
valid_images = torch.cat([valid_image_dataset[i][0][None] for i in range(len(valid_image_dataset))], dim=0)
ti_valid_dataset = CustomTensorDataset(valid_images, transform=eval_transforms)
test_image_dataset = datasets.ImageFolder(os.path.join(data_dir, 'test'), transforms.ToTensor())
test_images = torch.cat([test_image_dataset[i][0][None] for i in range(len(test_image_dataset))], dim=0)
ti_test_dataset = CustomTensorDataset(test_images, transform=eval_transforms)
# Create data loaders
ti_train_loader = DataLoader(
ti_train_dataset, batch_size=train_batch_size, shuffle=True,
num_workers=num_workers, pin_memory=pin_memory,
)
ti_valid_loader = DataLoader(
ti_valid_dataset, batch_size=valid_batch_size, shuffle=True,
num_workers=num_workers, pin_memory=pin_memory,
)
ti_test_loader = DataLoader(
ti_test_dataset, batch_size=test_batch_size, shuffle=False,
num_workers=num_workers, pin_memory=pin_memory
)
num_reruns = 10
ti_batches_in_epoch = len(ti_train_loader) #782 - full epoch
assert ti_batches_in_epoch == 782
ti_base_runs_10 = []
ti_base_runs_50 = []
ti_base_runs_100 = []
ti_metainit_runs_10 = []
ti_metainit_runs_50 = []
ti_metainit_runs_100 = []
ti_deltaorthogonal_runs_10 = []
ti_deltaorthogonal_runs_50 = []
ti_deltaorthogonal_runs_100 = []
ti_maml_runs_10 = []
ti_maml_runs_50 = []
ti_maml_runs_100 = []
for _ in range(num_reruns):
print("Baseline")
maml.resample_parameters(initializers=maml.untrained_initializers, is_final=True)
base_model = deepcopy(maml.model)
base_train_loss_history, base_test_loss_history = \
eval_model(base_model, ti_train_loader, ti_test_loader, epochs=100,
test_loss_interval=10*ti_batches_in_epoch, device=device)
print("MetaInit")
batch_x = next(iter(ti_train_loader))
maml.resample_parameters(initializers=maml.untrained_initializers, is_final=True)
metainit_model = deepcopy(maml.model)
metainit(metainit_model, loss_function, batch_x.shape, steps=200)
metainit_train_loss_history, metainit_test_loss_history = \
eval_model(metainit_model, ti_train_loader, ti_test_loader,
batches_in_epoch=ti_batches_in_epoch, epochs=100,
test_loss_interval=10*ti_batches_in_epoch, device=device)
print("Delta Orthogonal")
maml.resample_parameters(initializers=maml.untrained_initializers, is_final=True)
deltaorthogonal_model = deepcopy(maml.model)
for param in deltaorthogonal_model.parameters():
if len(param.size()) >= 4:
makeDeltaOrthogonal(param, nn.init.calculate_gain('relu'))
deltaorthogonal_train_loss_history, deltaorthogonal_test_loss_history = \
eval_model(deltaorthogonal_model, ti_train_loader, ti_test_loader,
batches_in_epoch=ti_batches_in_epoch, epochs=100,
test_loss_interval=10*ti_batches_in_epoch, device=device)
ti_deltaorthogonal_runs_10.append(deltaorthogonal_test_loss_history[1])
ti_deltaorthogonal_runs_50.append(deltaorthogonal_test_loss_history[5])
ti_deltaorthogonal_runs_100.append(deltaorthogonal_test_loss_history[10])
print("DIMAML")
maml.resample_parameters(is_final=True)
maml_model = deepcopy(maml.model)
maml_train_loss_history, maml_test_loss_history = \
eval_model(maml_model, ti_train_loader, ti_test_loader, epochs=100,
test_loss_interval=10*ti_batches_in_epoch, device=device)
ti_base_runs_10.append(base_test_loss_history[1])
ti_base_runs_50.append(base_test_loss_history[5])
ti_base_runs_100.append(base_test_loss_history[10])
ti_metainit_runs_10.append(metainit_test_loss_history[1])
ti_metainit_runs_50.append(metainit_test_loss_history[5])
ti_metainit_runs_100.append(metainit_test_loss_history[10])
ti_maml_runs_10.append(maml_test_loss_history[1])
ti_maml_runs_50.append(maml_test_loss_history[5])
ti_maml_runs_100.append(maml_test_loss_history[10])
print("Baseline 10 epoch: ", np.mean(ti_base_runs_10), np.std(ti_base_runs_10, ddof=1))
print("Baseline 50 epoch: ", np.mean(ti_base_runs_50), np.std(ti_base_runs_50, ddof=1))
print("Baseline 100 epoch: ", np.mean(ti_base_runs_100), np.std(ti_base_runs_100, ddof=1))
print()
print("DeltaOrthogonal 10 epoch: ", np.mean(ti_deltaorthogonal_runs_10), np.std(ti_deltaorthogonal_runs_10, ddof=1))
print("DeltaOrthogonal 50 epoch: ", np.mean(ti_deltaorthogonal_runs_50), np.std(ti_deltaorthogonal_runs_50, ddof=1))
print("DeltaOrthogonal 100 epoch: ", np.mean(ti_deltaorthogonal_runs_100), np.std(ti_deltaorthogonal_runs_100, ddof=1))
print()
print("MetaInit 10 epoch: ", np.mean(ti_metainit_runs_10), np.std(ti_metainit_runs_10, ddof=1))
print("MetaInit 50 epoch: ", np.mean(ti_metainit_runs_50), np.std(ti_metainit_runs_50, ddof=1))
print("MetaInit 100 epoch: ", np.mean(ti_metainit_runs_100), np.std(ti_metainit_runs_100, ddof=1))
print()
print("DIMAML 10 epoch: ", np.mean(ti_maml_runs_10), np.std(ti_maml_runs_10, ddof=1))
print("DIMAML 50 epoch: ", np.mean(ti_maml_runs_50), np.std(ti_maml_runs_50, ddof=1))
print("DIMAML 100 epoch: ", np.mean(ti_maml_runs_100), np.std(ti_maml_runs_100, ddof=1))
```
| github_jupyter |
# Autonomous Driving - Car Detection
Welcome to the Week 3 programming assignment! In this notebook, you'll implement object detection using the very powerful YOLO model. Many of the ideas in this notebook are described in the two YOLO papers: [Redmon et al., 2016](https://arxiv.org/abs/1506.02640) and [Redmon and Farhadi, 2016](https://arxiv.org/abs/1612.08242).
**By the end of this assignment, you'll be able to**:
- Detect objects in a car detection dataset
- Implement non-max suppression to increase accuracy
- Implement intersection over union
- Handle bounding boxes, a type of image annotation popular in deep learning
## Table of Contents
- [Packages](#0)
- [1 - Problem Statement](#1)
- [2 - YOLO](#2)
- [2.1 - Model Details](#2-1)
- [2.2 - Filtering with a Threshold on Class Scores](#2-2)
- [Exercise 1 - yolo_filter_boxes](#ex-1)
- [2.3 - Non-max Suppression](#2-3)
- [Exercise 2 - iou](#ex-2)
- [2.4 - YOLO Non-max Suppression](#2-4)
- [Exercise 3 - yolo_non_max_suppression](#ex-3)
- [2.5 - Wrapping Up the Filtering](#2-5)
- [Exercise 4 - yolo_eval](#ex-4)
- [3 - Test YOLO Pre-trained Model on Images](#3)
- [3.1 - Defining Classes, Anchors and Image Shape](#3-1)
- [3.2 - Loading a Pre-trained Model](#3-2)
- [3.3 - Convert Output of the Model to Usable Bounding Box Tensors](#3-3)
- [3.4 - Filtering Boxes](#3-4)
- [3.5 - Run the YOLO on an Image](#3-5)
- [4 - Summary for YOLO](#4)
- [5 - References](#5)
<a name='0'></a>
## Packages
Run the following cell to load the packages and dependencies that will come in handy as you build the object detector!
```
import argparse
import os
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
import scipy.io
import scipy.misc
import numpy as np
import pandas as pd
import PIL
from PIL import ImageFont, ImageDraw, Image
import tensorflow as tf
from tensorflow.python.framework.ops import EagerTensor
from tensorflow.keras.models import load_model
from yad2k.models.keras_yolo import yolo_head
from yad2k.utils.utils import draw_boxes, get_colors_for_classes, scale_boxes, read_classes, read_anchors, preprocess_image
%matplotlib inline
```
<a name='1'></a>
## 1 - Problem Statement
You are working on a self-driving car. Go you! As a critical component of this project, you'd like to first build a car detection system. To collect data, you've mounted a camera to the hood (meaning the front) of the car, which takes pictures of the road ahead every few seconds as you drive around.
<center>
<video width="400" height="200" src="nb_images/road_video_compressed2.mp4" type="video/mp4" controls>
</video>
</center>
<caption><center> Pictures taken from a car-mounted camera while driving around Silicon Valley. <br> Dataset provided by <a href="https://www.drive.ai/">drive.ai</a>.
</center></caption>
You've gathered all these images into a folder and labelled them by drawing bounding boxes around every car you found. Here's an example of what your bounding boxes look like:
<img src="nb_images/box_label.png" style="width:500px;height:250;">
<caption><center> <u><b>Figure 1</u></b>: Definition of a box<br> </center></caption>
If there are 80 classes you want the object detector to recognize, you can represent the class label $c$ either as an integer from 1 to 80, or as an 80-dimensional vector (with 80 numbers) one component of which is 1, and the rest of which are 0. The video lectures used the latter representation; in this notebook, you'll use both representations, depending on which is more convenient for a particular step.
In this exercise, you'll discover how YOLO ("You Only Look Once") performs object detection, and then apply it to car detection. Because the YOLO model is very computationally expensive to train, the pre-trained weights are already loaded for you to use.
<a name='2'></a>
## 2 - YOLO
"You Only Look Once" (YOLO) is a popular algorithm because it achieves high accuracy while also being able to run in real time. This algorithm "only looks once" at the image in the sense that it requires only one forward propagation pass through the network to make predictions. After non-max suppression, it then outputs recognized objects together with the bounding boxes.
<a name='2-1'></a>
### 2.1 - Model Details
#### Inputs and outputs
- The **input** is a batch of images, and each image has the shape (m, 608, 608, 3)
- The **output** is a list of bounding boxes along with the recognized classes. Each bounding box is represented by 6 numbers $(p_c, b_x, b_y, b_h, b_w, c)$ as explained above. If you expand $c$ into an 80-dimensional vector, each bounding box is then represented by 85 numbers.
#### Anchor Boxes
* Anchor boxes are chosen by exploring the training data to choose reasonable height/width ratios that represent the different classes. For this assignment, 5 anchor boxes were chosen for you (to cover the 80 classes), and stored in the file './model_data/yolo_anchors.txt'
* The dimension for anchor boxes is the second to last dimension in the encoding: $(m, n_H,n_W,anchors,classes)$.
* The YOLO architecture is: IMAGE (m, 608, 608, 3) -> DEEP CNN -> ENCODING (m, 19, 19, 5, 85).
#### Encoding
Let's look in greater detail at what this encoding represents.
<img src="nb_images/architecture.png" style="width:700px;height:400;">
<caption><center> <u><b> Figure 2 </u></b>: Encoding architecture for YOLO<br> </center></caption>
If the center/midpoint of an object falls into a grid cell, that grid cell is responsible for detecting that object.
Since you're using 5 anchor boxes, each of the 19x19 cells thus encodes information about 5 boxes. Anchor boxes are defined only by their width and height.
For simplicity, you'll flatten the last two dimensions of the shape (19, 19, 5, 85) encoding, so the output of the Deep CNN is (19, 19, 425).
<img src="nb_images/flatten.png" style="width:700px;height:400;">
<caption><center> <u><b> Figure 3 </u></b>: Flattening the last two dimensions<br> </center></caption>
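As a quick shape check, the flattening is just a reshape of the last two dimensions (a dummy array stands in for the real network output here):
```
import numpy as np

encoding = np.zeros((19, 19, 5, 85))      # placeholder for the Deep CNN output
flattened = encoding.reshape(19, 19, -1)  # merge the anchor and class dimensions
print(flattened.shape)                    # (19, 19, 425)
```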
#### Class score
Now, for each box (of each cell) you'll compute the following element-wise product and extract a probability that the box contains a certain class.
The class score is $score_{c,i} = p_{c} \times c_{i}$: the probability that there is an object $p_{c}$ times the probability that the object is a certain class $c_{i}$.
<img src="nb_images/probability_extraction.png" style="width:700px;height:400;">
<caption><center> <u><b>Figure 4</u></b>: Find the class detected by each box<br> </center></caption>
##### Example of figure 4
* In figure 4, let's say for box 1 (cell 1), the probability that an object exists is $p_{1}=0.60$. So there's a 60% chance that an object exists in box 1 (cell 1).
* The probability that the object is the class "category 3 (a car)" is $c_{3}=0.73$.
* The score for box 1 and for category "3" is $score_{1,3}=0.60 \times 0.73 = 0.44$.
* Let's say you calculate the score for all 80 classes in box 1, and find that the score for the car class (class 3) is the maximum. So you'll assign the score 0.44 and class "3" to this box "1".
#### Visualizing classes
Here's one way to visualize what YOLO is predicting on an image:
- For each of the 19x19 grid cells, find the maximum of the probability scores (taking a max across the 80 classes, one maximum for each of the 5 anchor boxes).
- Color that grid cell according to what object that grid cell considers the most likely.
Doing this results in this picture:
<img src="nb_images/proba_map.png" style="width:300px;height:300;">
<caption><center> <u><b>Figure 5</u></b>: Each one of the 19x19 grid cells is colored according to which class has the largest predicted probability in that cell.<br> </center></caption>
Note that this visualization isn't a core part of the YOLO algorithm itself for making predictions; it's just a nice way of visualizing an intermediate result of the algorithm.
#### Visualizing bounding boxes
Another way to visualize YOLO's output is to plot the bounding boxes that it outputs. Doing that results in a visualization like this:
<img src="nb_images/anchor_map.png" style="width:200px;height:200;">
<caption><center> <u><b>Figure 6</u></b>: Each cell gives you 5 boxes. In total, the model predicts: 19x19x5 = 1805 boxes just by looking once at the image (one forward pass through the network)! Different colors denote different classes. <br> </center></caption>
#### Non-Max suppression
In the figure above, the only boxes plotted are ones for which the model had assigned a high probability, but this is still too many boxes. You'd like to reduce the algorithm's output to a much smaller number of detected objects.
To do so, you'll use **non-max suppression**. Specifically, you'll carry out these steps:
- Get rid of boxes with a low score. Meaning, the box is not very confident about detecting a class, either due to the low probability of any object, or low probability of this particular class.
- Select only one box when several boxes overlap with each other and detect the same object.
<a name='2-2'></a>
### 2.2 - Filtering with a Threshold on Class Scores
You're going to first apply a filter by thresholding, meaning you'll get rid of any box for which the class "score" is less than a chosen threshold.
The model gives you a total of 19x19x5x85 numbers, with each box described by 85 numbers. It's convenient to rearrange the (19,19,5,85) (or (19,19,425)) dimensional tensor into the following variables:
- `box_confidence`: tensor of shape $(19, 19, 5, 1)$ containing $p_c$ (confidence probability that there's some object) for each of the 5 boxes predicted in each of the 19x19 cells.
- `boxes`: tensor of shape $(19, 19, 5, 4)$ containing the midpoint and dimensions $(b_x, b_y, b_h, b_w)$ for each of the 5 boxes in each cell.
- `box_class_probs`: tensor of shape $(19, 19, 5, 80)$ containing the "class probabilities" $(c_1, c_2, ... c_{80})$ for each of the 80 classes for each of the 5 boxes per cell.
<a name='ex-1'></a>
### Exercise 1 - yolo_filter_boxes
Implement `yolo_filter_boxes()`.
1. Compute box scores by doing the elementwise product as described in Figure 4 ($p \times c$).
The following code may help you choose the right operator:
```python
a = np.random.randn(19, 19, 5, 1)
b = np.random.randn(19, 19, 5, 80)
c = a * b # shape of c will be (19, 19, 5, 80)
```
This is an example of **broadcasting** (multiplying vectors of different sizes).
2. For each box, find:
- the index of the class with the maximum box score
- the corresponding box score
**Useful References**
* [tf.math.argmax](https://www.tensorflow.org/api_docs/python/tf/math/argmax)
* [tf.math.reduce_max](https://www.tensorflow.org/api_docs/python/tf/math/reduce_max)
**Helpful Hints**
* For the `axis` parameter of `argmax` and `reduce_max`, if you want to select the **last** axis, one way to do so is to set `axis=-1`. This is similar to Python array indexing, where you can select the last position of an array using `arrayname[-1]`.
* Applying `reduce_max` normally collapses the axis for which the maximum is applied. `keepdims=False` is the default option, and allows that dimension to be removed. You don't need to keep the last dimension after applying the maximum here.
3. Create a mask by using a threshold. As a reminder: `([0.9, 0.3, 0.4, 0.5, 0.1] < 0.4)` returns: `[False, True, False, False, True]`. The mask should be `True` for the boxes you want to keep.
4. Use TensorFlow to apply the mask to `box_class_scores`, `boxes` and `box_classes` to filter out the boxes you don't want. You should be left with just the subset of boxes you want to keep.
**One more useful reference**:
* [tf.boolean mask](https://www.tensorflow.org/api_docs/python/tf/boolean_mask)
**And one more helpful hint**: :)
* For the `tf.boolean_mask`, you can keep the default `axis=None`.
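Here is a tiny standalone sketch of the three TensorFlow ops mentioned in the hints, on made-up scores (this is not the graded function, just a demonstration of the calls):
```
import tensorflow as tf

demo_scores = tf.constant([[0.9, 0.3], [0.2, 0.7], [0.1, 0.05]])  # 3 boxes, 2 classes
demo_classes = tf.math.argmax(demo_scores, axis=-1)      # index of the best class per box
demo_best = tf.math.reduce_max(demo_scores, axis=-1)     # the corresponding best score
demo_mask = demo_best >= 0.6                             # keep only confident boxes
print(tf.boolean_mask(demo_best, demo_mask).numpy())     # [0.9 0.7]
print(tf.boolean_mask(demo_classes, demo_mask).numpy())  # [0 1]
```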
```
# GRADED FUNCTION: yolo_filter_boxes
def yolo_filter_boxes(boxes, box_confidence, box_class_probs, threshold = 0.6):
"""Filters YOLO boxes by thresholding on object and class confidence.
Arguments:
boxes -- tensor of shape (19, 19, 5, 4)
box_confidence -- tensor of shape (19, 19, 5, 1)
box_class_probs -- tensor of shape (19, 19, 5, 80)
threshold -- real value, if [ highest class probability score < threshold],
then get rid of the corresponding box
Returns:
scores -- tensor of shape (None,), containing the class probability score for selected boxes
boxes -- tensor of shape (None, 4), containing (b_x, b_y, b_h, b_w) coordinates of selected boxes
classes -- tensor of shape (None,), containing the index of the class detected by the selected boxes
Note: "None" is here because you don't know the exact number of selected boxes, as it depends on the threshold.
For example, the actual output size of scores would be (10,) if there are 10 boxes.
"""
# YOUR CODE STARTS HERE
# Step 1: Compute box scores
##(≈ 1 line)
box_scores = box_class_probs*box_confidence
# Step 2: Find the box_classes using the max box_scores, keep track of the corresponding score
##(≈ 2 lines)
box_classes = tf.math.argmax(box_scores,axis=-1)
box_class_scores = tf.math.reduce_max(box_scores,axis=-1)
# Step 3: Create a filtering mask based on "box_class_scores" by using "threshold". The mask should have the
# same dimension as box_class_scores, and be True for the boxes you want to keep (with probability >= threshold)
## (≈ 1 line)
filtering_mask = (box_class_scores >= threshold)
# Step 4: Apply the mask to box_class_scores, boxes and box_classes
## (≈ 3 lines)
scores = tf.boolean_mask(box_class_scores,filtering_mask)
boxes = tf.boolean_mask(boxes,filtering_mask)
classes = tf.boolean_mask(box_classes,filtering_mask)
# YOUR CODE ENDS HERE
return scores, boxes, classes
tf.random.set_seed(10)
box_confidence = tf.random.normal([19, 19, 5, 1], mean=1, stddev=4, seed = 1)
boxes = tf.random.normal([19, 19, 5, 4], mean=1, stddev=4, seed = 1)
box_class_probs = tf.random.normal([19, 19, 5, 80], mean=1, stddev=4, seed = 1)
scores, boxes, classes = yolo_filter_boxes(boxes, box_confidence, box_class_probs, threshold = 0.5)
print("scores[2] = " + str(scores[2].numpy()))
print("boxes[2] = " + str(boxes[2].numpy()))
print("classes[2] = " + str(classes[2].numpy()))
print("scores.shape = " + str(scores.shape))
print("boxes.shape = " + str(boxes.shape))
print("classes.shape = " + str(classes.shape))
assert type(scores) == EagerTensor, "Use tensorflow functions"
assert type(boxes) == EagerTensor, "Use tensorflow functions"
assert type(classes) == EagerTensor, "Use tensorflow functions"
assert scores.shape == (1789,), "Wrong shape in scores"
assert boxes.shape == (1789, 4), "Wrong shape in boxes"
assert classes.shape == (1789,), "Wrong shape in classes"
assert np.isclose(scores[2].numpy(), 9.270486), "Values are wrong on scores"
assert np.allclose(boxes[2].numpy(), [4.6399336, 3.2303846, 4.431282, -2.202031]), "Values are wrong on boxes"
assert classes[2].numpy() == 8, "Values are wrong on classes"
print("\033[92m All tests passed!")
```
**Expected Output**:
<table>
<tr>
<td>
<b>scores[2]</b>
</td>
<td>
9.270486
</td>
</tr>
<tr>
<td>
<b>boxes[2]</b>
</td>
<td>
[ 4.6399336 3.2303846 4.431282 -2.202031 ]
</td>
</tr>
<tr>
<td>
<b>classes[2]</b>
</td>
<td>
8
</td>
</tr>
<tr>
<td>
<b>scores.shape</b>
</td>
<td>
(1789,)
</td>
</tr>
<tr>
<td>
<b>boxes.shape</b>
</td>
<td>
(1789, 4)
</td>
</tr>
<tr>
<td>
<b>classes.shape</b>
</td>
<td>
(1789,)
</td>
</tr>
</table>
**Note** In the test for `yolo_filter_boxes`, you're using random numbers to test the function. In real data, the `box_class_probs` would contain non-zero values between 0 and 1 for the probabilities. The box coordinates in `boxes` would also be chosen so that lengths and heights are non-negative.
<a name='2-3'></a>
### 2.3 - Non-max Suppression
Even after filtering by thresholding over the class scores, you still end up with a lot of overlapping boxes. A second filter for selecting the right boxes is called non-maximum suppression (NMS).
<img src="nb_images/non-max-suppression.png" style="width:500px;height:400;">
<caption><center> <u> <b>Figure 7</b> </u>: In this example, the model has predicted 3 cars, but it's actually 3 predictions of the same car. Running non-max suppression (NMS) will select only the most accurate (highest probability) of the 3 boxes. <br> </center></caption>
Non-max suppression uses the very important function called **"Intersection over Union"**, or IoU.
<img src="nb_images/iou.png" style="width:500px;height:400;">
<caption><center> <u> <b>Figure 8</b> </u>: Definition of "Intersection over Union". <br> </center></caption>
<a name='ex-2'></a>
### Exercise 2 - iou
Implement `iou()`
Some hints:
- This code uses the convention that (0,0) is the top-left corner of an image, (1,0) is the upper-right corner, and (1,1) is the lower-right corner. In other words, the (0,0) origin starts at the top left corner of the image. As x increases, you move to the right. As y increases, you move down.
- For this exercise, a box is defined using its two corners: upper left $(x_1, y_1)$ and lower right $(x_2,y_2)$, instead of using the midpoint, height and width. This makes it a bit easier to calculate the intersection.
- To calculate the area of a rectangle, multiply its height $(y_2 - y_1)$ by its width $(x_2 - x_1)$. Since $(x_1,y_1)$ is the top left and $x_2,y_2$ are the bottom right, these differences should be non-negative.
- To find the **intersection** of the two boxes $(xi_{1}, yi_{1}, xi_{2}, yi_{2})$:
- Feel free to draw some examples on paper to clarify this conceptually.
- The top left corner of the intersection $(xi_{1}, yi_{1})$ is found by comparing the top left corners $(x_1, y_1)$ of the two boxes and finding a vertex that has an x-coordinate that is closer to the right, and y-coordinate that is closer to the bottom.
- The bottom right corner of the intersection $(xi_{2}, yi_{2})$ is found by comparing the bottom right corners $(x_2,y_2)$ of the two boxes and finding a vertex whose x-coordinate is closer to the left, and the y-coordinate that is closer to the top.
- The two boxes **may have no intersection**. You can detect this if the intersection coordinates you calculate end up being the top right and/or bottom left corners of an intersection box. Another way to think of this is if you calculate the height $(y_2 - y_1)$ or width $(x_2 - x_1)$ and find that at least one of these lengths is negative, then there is no intersection (intersection area is zero).
- The two boxes may intersect at the **edges or vertices**, in which case the intersection area is still zero. This happens when either the height or width (or both) of the calculated intersection is zero.
**Additional Hints**
- `xi1` = **max**imum of the x1 coordinates of the two boxes
- `yi1` = **max**imum of the y1 coordinates of the two boxes
- `xi2` = **min**imum of the x2 coordinates of the two boxes
- `yi2` = **min**imum of the y2 coordinates of the two boxes
- `inter_area` = You can use `max(height, 0)` and `max(width, 0)`
```
# GRADED FUNCTION: iou
def iou(box1, box2):
"""Implement the intersection over union (IoU) between box1 and box2
Arguments:
box1 -- first box, list object with coordinates (box1_x1, box1_y1, box1_x2, box_1_y2)
box2 -- second box, list object with coordinates (box2_x1, box2_y1, box2_x2, box2_y2)
"""
(box1_x1, box1_y1, box1_x2, box1_y2) = box1
(box2_x1, box2_y1, box2_x2, box2_y2) = box2
# YOUR CODE STARTS HERE
# Calculate the (yi1, xi1, yi2, xi2) coordinates of the intersection of box1 and box2. Calculate its Area.
##(≈ 7 lines)
xi1 = max(box1_x1,box2_x1)
yi1 = max(box1_y1,box2_y1)
xi2 = min(box1_x2,box2_x2)
yi2 = min(box1_y2,box2_y2)
inter_width = max(0, xi2 - xi1)
inter_height = max(0, yi2 - yi1)
inter_area = inter_width*inter_height
# Calculate the Union area by using Formula: Union(A,B) = A + B - Inter(A,B)
## (≈ 3 lines)
box1_area = (box1_x2 - box1_x1) * (box1_y2 - box1_y1)
box2_area = (box2_x2 - box2_x1) * (box2_y2 - box2_y1)
union_area = box1_area + box2_area - inter_area
# compute the IoU
## (≈ 1 line)
iou = inter_area/union_area
# YOUR CODE ENDS HERE
return iou
## Test case 1: boxes intersect
box1 = (2, 1, 4, 3)
box2 = (1, 2, 3, 4)
print("iou for intersecting boxes = " + str(iou(box1, box2)))
assert iou(box1, box2) < 1, "The intersection area must be always smaller or equal than the union area."
assert np.isclose(iou(box1, box2), 0.14285714), "Wrong value. Check your implementation. Problem with intersecting boxes"
## Test case 2: boxes do not intersect
box1 = (1,2,3,4)
box2 = (5,6,7,8)
print("iou for non-intersecting boxes = " + str(iou(box1,box2)))
assert iou(box1, box2) == 0, "Intersection must be 0"
## Test case 3: boxes intersect at vertices only
box1 = (1,1,2,2)
box2 = (2,2,3,3)
print("iou for boxes that only touch at vertices = " + str(iou(box1,box2)))
assert iou(box1, box2) == 0, "Intersection at vertices must be 0"
## Test case 4: boxes intersect at edge only
box1 = (1,1,3,3)
box2 = (2,3,3,4)
print("iou for boxes that only touch at edges = " + str(iou(box1,box2)))
assert iou(box1, box2) == 0, "Intersection at edges must be 0"
print("\033[92m All tests passed!")
```
**Expected Output**:
```
iou for intersecting boxes = 0.14285714285714285
iou for non-intersecting boxes = 0.0
iou for boxes that only touch at vertices = 0.0
iou for boxes that only touch at edges = 0.0
```
<a name='2-4'></a>
### 2.4 - YOLO Non-max Suppression
You are now ready to implement non-max suppression. The key steps are:
1. Select the box that has the highest score.
2. Compute the overlap of this box with all other boxes, and remove boxes that overlap significantly (iou >= `iou_threshold`).
3. Go back to step 1 and iterate until no boxes remain to be processed.
This will remove all boxes that have a large overlap with the selected boxes. Only the "best" boxes remain.
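For intuition only, here is a minimal NumPy sketch of that greedy loop. It assumes an `iou(box1, box2)` helper like the one from Exercise 2 (corner-format boxes); the graded function below uses TensorFlow's built-in `tf.image.non_max_suppression` instead.
```python
import numpy as np

def nms_sketch(scores, boxes, iou_threshold=0.5):
    order = list(np.argsort(scores)[::-1])   # box indices, highest score first
    keep = []
    while order:
        best = order.pop(0)                  # step 1: take the highest-scoring box
        keep.append(best)
        # step 2: discard the remaining boxes that overlap it too much
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep                              # step 3: repeat until no boxes are left
```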
<a name='ex-3'></a>
### Exercise 3 - yolo_non_max_suppression
Implement `yolo_non_max_suppression()` using TensorFlow. TensorFlow has two built-in functions that are used to implement non-max suppression (so you don't actually need to use your `iou()` implementation):
**Reference documentation**:
- [tf.image.non_max_suppression()](https://www.tensorflow.org/api_docs/python/tf/image/non_max_suppression)
```
tf.image.non_max_suppression(
boxes,
scores,
max_output_size,
iou_threshold=0.5,
name=None
)
```
Note that in the version of TensorFlow used here, there is no parameter `score_threshold` (it's shown in the documentation for the latest version) so trying to set this value will result in an error message: *got an unexpected keyword argument `score_threshold`.*
- [tf.gather()](https://www.tensorflow.org/api_docs/python/tf/gather)
```
tf.gather(
    params,
    indices
)
```
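As a quick sanity check of `tf.gather` (a sketch, independent of the exercise): it simply selects entries of a tensor by index.
```python
import tensorflow as tf

params  = tf.constant([10, 20, 30, 40, 50])
indices = tf.constant([4, 0, 2])
print(tf.gather(params, indices).numpy())   # [50 10 30]
```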
```
# GRADED FUNCTION: yolo_non_max_suppression
def yolo_non_max_suppression(scores, boxes, classes, max_boxes = 10, iou_threshold = 0.5):
"""
Applies Non-max suppression (NMS) to set of boxes
Arguments:
scores -- tensor of shape (None,), output of yolo_filter_boxes()
boxes -- tensor of shape (None, 4), output of yolo_filter_boxes() that have been scaled to the image size (see later)
classes -- tensor of shape (None,), output of yolo_filter_boxes()
max_boxes -- integer, maximum number of predicted boxes you'd like
iou_threshold -- real value, "intersection over union" threshold used for NMS filtering
Returns:
scores -- tensor of shape (None,), predicted score for each box
boxes -- tensor of shape (None, 4), predicted box coordinates
classes -- tensor of shape (None,), predicted class for each box
Note: The "None" dimension of the output tensors is at most max_boxes, since it depends on how many boxes survive NMS.
"""
max_boxes_tensor = tf.Variable(max_boxes, dtype='int32') # tensor to be used in tf.image.non_max_suppression()
# Use tf.image.non_max_suppression() to get the list of indices corresponding to boxes you keep
##(≈ 1 line)
nms_indices = tf.image.non_max_suppression(boxes,scores,max_boxes_tensor,iou_threshold)
# Use tf.gather() to select only nms_indices from scores, boxes and classes
##(≈ 3 lines)
scores = tf.gather(scores,nms_indices)
boxes = tf.gather(boxes,nms_indices)
classes = tf.gather(classes,nms_indices)
# YOUR CODE STARTS HERE
# YOUR CODE ENDS HERE
return scores, boxes, classes
tf.random.set_seed(10)
scores = tf.random.normal([54,], mean=1, stddev=4, seed = 1)
boxes = tf.random.normal([54, 4], mean=1, stddev=4, seed = 1)
classes = tf.random.normal([54,], mean=1, stddev=4, seed = 1)
scores, boxes, classes = yolo_non_max_suppression(scores, boxes, classes)
assert type(scores) == EagerTensor, "Use tensorflow functions"
print("scores[2] = " + str(scores[2].numpy()))
print("boxes[2] = " + str(boxes[2].numpy()))
print("classes[2] = " + str(classes[2].numpy()))
print("scores.shape = " + str(scores.numpy().shape))
print("boxes.shape = " + str(boxes.numpy().shape))
print("classes.shape = " + str(classes.numpy().shape))
assert type(scores) == EagerTensor, "Use tensorflow functions"
assert type(boxes) == EagerTensor, "Use tensorflow functions"
assert type(classes) == EagerTensor, "Use tensorflow functions"
assert scores.shape == (10,), "Wrong shape"
assert boxes.shape == (10, 4), "Wrong shape"
assert classes.shape == (10,), "Wrong shape"
assert np.isclose(scores[2].numpy(), 8.147684), "Wrong value on scores"
assert np.allclose(boxes[2].numpy(), [ 6.0797963, 3.743308, 1.3914018, -0.34089637]), "Wrong value on boxes"
assert np.isclose(classes[2].numpy(), 1.7079165), "Wrong value on classes"
print("\033[92m All tests passed!")
```
**Expected Output**:
<table>
<tr>
<td>
<b>scores[2]</b>
</td>
<td>
8.147684
</td>
</tr>
<tr>
<td>
<b>boxes[2]</b>
</td>
<td>
[ 6.0797963 3.743308 1.3914018 -0.34089637]
</td>
</tr>
<tr>
<td>
<b>classes[2]</b>
</td>
<td>
1.7079165
</td>
</tr>
<tr>
<td>
<b>scores.shape</b>
</td>
<td>
(10,)
</td>
</tr>
<tr>
<td>
<b>boxes.shape</b>
</td>
<td>
(10, 4)
</td>
</tr>
<tr>
<td>
<b>classes.shape</b>
</td>
<td>
(10,)
</td>
</tr>
</table>
<a name='2-5'></a>
### 2.5 - Wrapping Up the Filtering
It's time to implement a function taking the output of the deep CNN (the 19x19x5x85 dimensional encoding) and filtering through all the boxes using the functions you've just implemented.
<a name='ex-4'></a>
### Exercise 4 - yolo_eval
Implement `yolo_eval()`, which takes the output of the YOLO encoding and filters the boxes using score thresholding and NMS. There's just one last implementation detail you have to know. There are a few ways of representing boxes, such as via their corners or via their midpoint and height/width. YOLO converts between a few such formats at different times, using the following functions (which are provided):
```python
boxes = yolo_boxes_to_corners(box_xy, box_wh)
```
which converts the YOLO box coordinates (x, y, w, h) to box corner coordinates (x1, y1, x2, y2), to fit the input of `yolo_filter_boxes`.
```python
boxes = scale_boxes(boxes, image_shape)
```
YOLO's network was trained to run on 608x608 images. If you are testing this data on a different size image -- for example, the car detection dataset had 720x1280 images -- this step rescales the boxes so that they can be plotted on top of the original 720x1280 image.
Don't worry about these two functions; you'll see where they need to be called below.
```
def yolo_boxes_to_corners(box_xy, box_wh):
"""Convert YOLO box predictions to bounding box corners."""
box_mins = box_xy - (box_wh / 2.)
box_maxes = box_xy + (box_wh / 2.)
return tf.keras.backend.concatenate([
box_mins[..., 1:2], # y_min
box_mins[..., 0:1], # x_min
box_maxes[..., 1:2], # y_max
box_maxes[..., 0:1] # x_max
])
# GRADED FUNCTION: yolo_eval
def yolo_eval(yolo_outputs, image_shape = (720, 1280), max_boxes=10, score_threshold=.6, iou_threshold=.5):
"""
Converts the output of YOLO encoding (a lot of boxes) to your predicted boxes along with their scores, box coordinates and classes.
Arguments:
yolo_outputs -- output of the encoding model (for image_shape of (608, 608, 3)), contains 4 tensors:
box_xy: tensor of shape (None, 19, 19, 5, 2)
box_wh: tensor of shape (None, 19, 19, 5, 2)
box_confidence: tensor of shape (None, 19, 19, 5, 1)
box_class_probs: tensor of shape (None, 19, 19, 5, 80)
image_shape -- tensor of shape (2,) containing the input shape, in this notebook we use (608., 608.) (has to be float32 dtype)
max_boxes -- integer, maximum number of predicted boxes you'd like
score_threshold -- real value, if [ highest class probability score < threshold], then get rid of the corresponding box
iou_threshold -- real value, "intersection over union" threshold used for NMS filtering
Returns:
scores -- tensor of shape (None, ), predicted score for each box
boxes -- tensor of shape (None, 4), predicted box coordinates
classes -- tensor of shape (None,), predicted class for each box
"""
# Retrieve outputs of the YOLO model (≈1 line)
box_xy, box_wh, box_confidence, box_class_probs = yolo_outputs
# Convert boxes to be ready for filtering functions (convert boxes box_xy and box_wh to corner coordinates)
boxes = yolo_boxes_to_corners(box_xy, box_wh)
# Use one of the functions you've implemented to perform Score-filtering with a threshold of score_threshold (≈1 line)
scores, boxes, classes = yolo_filter_boxes(boxes, box_confidence, box_class_probs, score_threshold)
# Scale boxes back to original image shape (720, 1280 or whatever)
boxes = scale_boxes(boxes, image_shape) # Network was trained to run on 608x608 images
# Use one of the functions you've implemented to perform Non-max suppression with
# maximum number of boxes set to max_boxes and a threshold of iou_threshold (≈1 line)
scores, boxes, classes = yolo_non_max_suppression(scores, boxes, classes, max_boxes, iou_threshold)
# YOUR CODE STARTS HERE
# YOUR CODE ENDS HERE
return scores, boxes, classes
tf.random.set_seed(10)
yolo_outputs = (tf.random.normal([19, 19, 5, 2], mean=1, stddev=4, seed = 1),
tf.random.normal([19, 19, 5, 2], mean=1, stddev=4, seed = 1),
tf.random.normal([19, 19, 5, 1], mean=1, stddev=4, seed = 1),
tf.random.normal([19, 19, 5, 80], mean=1, stddev=4, seed = 1))
scores, boxes, classes = yolo_eval(yolo_outputs)
print("scores[2] = " + str(scores[2].numpy()))
print("boxes[2] = " + str(boxes[2].numpy()))
print("classes[2] = " + str(classes[2].numpy()))
print("scores.shape = " + str(scores.numpy().shape))
print("boxes.shape = " + str(boxes.numpy().shape))
print("classes.shape = " + str(classes.numpy().shape))
assert type(scores) == EagerTensor, "Use tensorflow functions"
assert type(boxes) == EagerTensor, "Use tensorflow functions"
assert type(classes) == EagerTensor, "Use tensorflow functions"
assert scores.shape == (10,), "Wrong shape"
assert boxes.shape == (10, 4), "Wrong shape"
assert classes.shape == (10,), "Wrong shape"
assert np.isclose(scores[2].numpy(), 171.60194), "Wrong value on scores"
assert np.allclose(boxes[2].numpy(), [-1240.3483, -3212.5881, -645.78, 2024.3052]), "Wrong value on boxes"
assert np.isclose(classes[2].numpy(), 16), "Wrong value on classes"
print("\033[92m All tests passed!")
```
**Expected Output**:
<table>
<tr>
<td>
<b>scores[2]</b>
</td>
<td>
171.60194
</td>
</tr>
<tr>
<td>
<b>boxes[2]</b>
</td>
<td>
[-1240.3483 -3212.5881 -645.78 2024.3052]
</td>
</tr>
<tr>
<td>
<b>classes[2]</b>
</td>
<td>
16
</td>
</tr>
<tr>
<td>
<b>scores.shape</b>
</td>
<td>
(10,)
</td>
</tr>
<tr>
<td>
<b>boxes.shape</b>
</td>
<td>
(10, 4)
</td>
</tr>
<tr>
<td>
<b>classes.shape</b>
</td>
<td>
(10,)
</td>
</tr>
</table>
<a name='3'></a>
## 3 - Test YOLO Pre-trained Model on Images
In this section, you are going to use a pre-trained model and test it on the car detection dataset.
<a name='3-1'></a>
### 3.1 - Defining Classes, Anchors and Image Shape
You're trying to detect 80 classes, and are using 5 anchor boxes. The information on the 80 classes and 5 boxes is gathered in two files: "coco_classes.txt" and "yolo_anchors.txt". You'll read class names and anchors from text files. The car detection dataset has 720x1280 images, which are pre-processed into 608x608 images.
```
class_names = read_classes("model_data/coco_classes.txt")
anchors = read_anchors("model_data/yolo_anchors.txt")
model_image_size = (608, 608) # Same as yolo_model input layer size
```
<a name='3-2'></a>
### 3.2 - Loading a Pre-trained Model
Training a YOLO model takes a very long time and requires a fairly large dataset of labelled bounding boxes for a large range of target classes. You are going to load an existing pre-trained Keras YOLO model stored in "yolo.h5". These weights come from the official YOLO website, and were converted using a function written by Allan Zelener. References are at the end of this notebook. Technically, these are the parameters from the "YOLOv2" model, but are simply referred to as "YOLO" in this notebook.
Run the cell below to load the model from this file.
```
yolo_model = load_model("model_data/", compile=False)
```
This loads the weights of a trained YOLO model. Here's a summary of the layers your model contains:
```
yolo_model.summary()
```
**Note**: On some computers, you may see a warning message from Keras. Don't worry about it if you do -- this is fine!
**Reminder**: This model converts a preprocessed batch of input images (shape: (m, 608, 608, 3)) into a tensor of shape (m, 19, 19, 5, 85) as explained in Figure (2).
<a name='3-3'></a>
### 3.3 - Convert Output of the Model to Usable Bounding Box Tensors
The output of `yolo_model` is a (m, 19, 19, 5, 85) tensor that needs to pass through non-trivial processing and conversion. You will need to call `yolo_head` to format the encoding of the model you got from `yolo_model` into something decipherable:
`yolo_model_outputs = yolo_model(image_data)`
`yolo_outputs = yolo_head(yolo_model_outputs, anchors, len(class_names))`
The variable `yolo_outputs` will be defined as a set of 4 tensors that you can then use as input by your yolo_eval function. If you are curious about how yolo_head is implemented, you can find the function definition in the file `keras_yolo.py`. The file is also located in your workspace in this path: `yad2k/models/keras_yolo.py`.
<a name='3-4'></a>
### 3.4 - Filtering Boxes
`yolo_outputs` gave you all the predicted boxes of `yolo_model` in the correct format. To perform filtering and select only the best boxes, you will call `yolo_eval`, which you previously implemented:
`out_scores, out_boxes, out_classes = yolo_eval(yolo_outputs, [image.size[1], image.size[0]], 10, 0.3, 0.5)`
<a name='3-5'></a>
### 3.5 - Run the YOLO on an Image
Let the fun begin! You will create a graph that can be summarized as follows:
`yolo_model.input` is given to `yolo_model`. The model is used to compute the output `yolo_model.output`
`yolo_model.output` is processed by `yolo_head`. It gives you `yolo_outputs`
`yolo_outputs` goes through a filtering function, `yolo_eval`. It outputs your predictions: `out_scores`, `out_boxes`, `out_classes`.
Now, we have implemented for you the `predict(image_file)` function, which runs the graph to test YOLO on an image to compute `out_scores`, `out_boxes`, `out_classes`.
The code below also uses the following function:
`image, image_data = preprocess_image("images/" + image_file, model_image_size = (608, 608))`
which opens the image file and scales, reshapes and normalizes the image. It returns the outputs:
- `image`: a python (PIL) representation of your image used for drawing boxes. You won't need to use it.
- `image_data`: a numpy-array representing the image. This will be the input to the CNN.
```
def predict(image_file):
"""
Runs the graph to predict boxes for "image_file". Prints and plots the predictions.
Arguments:
image_file -- name of an image stored in the "images" folder.
Returns:
out_scores -- tensor of shape (None, ), scores of the predicted boxes
out_boxes -- tensor of shape (None, 4), coordinates of the predicted boxes
out_classes -- tensor of shape (None, ), class index of the predicted boxes
Note: "None" actually represents the number of predicted boxes, it varies between 0 and max_boxes.
"""
# Preprocess your image
image, image_data = preprocess_image("images/" + image_file, model_image_size = (608, 608))
yolo_model_outputs = yolo_model(image_data) # Its output has shape (m, 19, 19, 5, 85)
# But yolo_eval takes as input a tuple of 4 tensors: box_xy, box_wh, box_confidence & box_class_probs
yolo_outputs = yolo_head(yolo_model_outputs, anchors, len(class_names))
out_scores, out_boxes, out_classes = yolo_eval(yolo_outputs, [image.size[1], image.size[0]], 10, 0.3, 0.5)
# Print predictions info
print('Found {} boxes for {}'.format(len(out_boxes), "images/" + image_file))
# Generate colors for drawing bounding boxes.
colors = get_colors_for_classes(len(class_names))
# Draw bounding boxes on the image file
#draw_boxes2(image, out_scores, out_boxes, out_classes, class_names, colors, image_shape)
draw_boxes(image, out_boxes, out_classes, class_names, out_scores)
# Save the predicted bounding box on the image
image.save(os.path.join("out", str(image_file).split('.')[0]+"_annotated." +str(image_file).split('.')[1] ), quality=100)
# Display the results in the notebook
output_image = Image.open(os.path.join("out", str(image_file).split('.')[0]+"_annotated." +str(image_file).split('.')[1] ))
imshow(output_image)
return out_scores, out_boxes, out_classes
```
Run the following cell on the "test.jpg" image to verify that your function is correct.
```
out_scores, out_boxes, out_classes = predict("0001.jpg")
```
**Expected Output**:
<table>
<tr>
<td>
<b>Found 10 boxes for images/test.jpg</b>
</td>
</tr>
<tr>
<td>
<b>car</b>
</td>
<td>
0.89 (367, 300) (745, 648)
</td>
</tr>
<tr>
<td>
<b>car</b>
</td>
<td>
0.80 (761, 282) (942, 412)
</td>
</tr>
<tr>
<td>
<b>car</b>
</td>
<td>
0.74 (159, 303) (346, 440)
</td>
</tr>
<tr>
<td>
<b>car</b>
</td>
<td>
0.70 (947, 324) (1280, 705)
</td>
</tr>
<tr>
<td>
<b>bus</b>
</td>
<td>
0.67 (5, 266) (220, 407)
</td>
</tr>
<tr>
<td>
<b>car</b>
</td>
<td>
0.66 (706, 279) (786, 350)
</td>
</tr>
<tr>
<td>
<b>car</b>
</td>
<td>
0.60 (925, 285) (1045, 374)
</td>
</tr>
<tr>
<td>
<b>car</b>
</td>
<td>
0.44 (336, 296) (378, 335)
</td>
</tr>
<tr>
<td>
<b>car</b>
</td>
<td>
0.37 (965, 273) (1022, 292)
</td>
</tr>
<tr>
<td>
<b>traffic light</b>
</td>
<td>
0.36 (681, 195) (692, 214)
</td>
</tr>
</table>
The model you've just run is actually able to detect the 80 different classes listed in "coco_classes.txt". To test the model on your own images:
1. Click on "File" in the upper bar of this notebook, then click "Open" to go to your Coursera Hub.
2. Add your image to this Jupyter Notebook's directory, in the "images" folder
3. Write your image's name in the code cell above
4. Run the code and see the output of the algorithm!
If you were to run your session in a for loop over all your images, here's what you would get:
<center>
<video width="400" height="200" src="nb_images/pred_video_compressed2.mp4" type="video/mp4" controls>
</video>
</center>
<caption><center> Predictions of the YOLO model on pictures taken from a camera while driving around the Silicon Valley <br> Thanks to <a href="https://www.drive.ai/">drive.ai</a> for providing this dataset! </center></caption>
<a name='4'></a>
## 4 - Summary for YOLO
- Input image (608, 608, 3)
- The input image goes through a CNN, resulting in a (19,19,5,85) dimensional output.
- After flattening the last two dimensions, the output is a volume of shape (19, 19, 425):
- Each cell in a 19x19 grid over the input image gives 425 numbers.
- 425 = 5 x 85 because each cell contains predictions for 5 boxes, corresponding to 5 anchor boxes, as seen in lecture.
- 85 = 5 + 80 where 5 is because $(p_c, b_x, b_y, b_h, b_w)$ has 5 numbers, and 80 is the number of classes we'd like to detect
- You then select only a few boxes based on:
- Score-thresholding: throw away boxes that have detected a class with a score less than the threshold
- Non-max suppression: Compute the Intersection over Union and avoid selecting overlapping boxes
- This gives you YOLO's final output.
<font color='blue'>
**What you should remember**:
- YOLO is a state-of-the-art object detection model that is fast and accurate
- It runs an input image through a CNN, which outputs a 19x19x5x85 dimensional volume.
- The encoding can be seen as a grid where each of the 19x19 cells contains information about 5 boxes.
- You filter through all the boxes using non-max suppression. Specifically:
- Score thresholding on the probability of detecting a class to keep only accurate (high probability) boxes
- Intersection over Union (IoU) thresholding to eliminate overlapping boxes
- Because training a YOLO model from randomly initialized weights is non-trivial and requires a large dataset as well as a lot of computation, previously trained model parameters were used in this exercise. If you wish, you can also try fine-tuning the YOLO model with your own dataset, though this would be a fairly non-trivial exercise.
**Congratulations!** You've come to the end of this assignment.
Here's a quick recap of all you've accomplished.
You've:
- Detected objects in a car detection dataset
- Implemented non-max suppression to achieve better accuracy
- Implemented intersection over union (IoU) as a helper function for NMS
- Created usable bounding box tensors from the model's predictions
Amazing work! If you'd like to know more about the origins of these ideas, spend some time on the papers referenced below.
<a name='5'></a>
## 5 - References
The ideas presented in this notebook came primarily from the two YOLO papers. The implementation here also took significant inspiration and used many components from Allan Zelener's GitHub repository. The pre-trained weights used in this exercise came from the official YOLO website.
- Joseph Redmon, Santosh Divvala, Ross Girshick, Ali Farhadi - [You Only Look Once: Unified, Real-Time Object Detection](https://arxiv.org/abs/1506.02640) (2015)
- Joseph Redmon, Ali Farhadi - [YOLO9000: Better, Faster, Stronger](https://arxiv.org/abs/1612.08242) (2016)
- Allan Zelener - [YAD2K: Yet Another Darknet 2 Keras](https://github.com/allanzelener/YAD2K)
- The official YOLO website (https://pjreddie.com/darknet/yolo/)
### Car detection dataset
<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br /><span xmlns:dct="http://purl.org/dc/terms/" property="dct:title">The Drive.ai Sample Dataset</span> (provided by drive.ai) is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>. Thanks to Brody Huval, Chih Hu and Rahul Patel for providing this data.
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Eager execution basics
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/2/tutorials/eager/eager_basics"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r2/tutorials/eager/eager_basics.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/r2/tutorials/eager/eager_basics.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
This is an introductory TensorFlow tutorial that shows how to:
* Import the required package
* Create and use tensors
* Use GPU acceleration
* Demonstrate `tf.data.Dataset`
```
!pip install tf-nightly-2.0-preview
```
## Import TensorFlow
Import the `tensorflow` module to get started. [Eager execution](../../guide/eager.ipynb) is enabled by default.
```
import tensorflow as tf
```
## Tensors
A Tensor is a multi-dimensional array. Similar to NumPy `ndarray` objects, `tf.Tensor` objects have a data type and a shape. Additionally, `tf.Tensor`s can reside in accelerator memory (like a GPU). TensorFlow offers a rich library of operations ([tf.add](https://www.tensorflow.org/api_docs/python/tf/add), [tf.matmul](https://www.tensorflow.org/api_docs/python/tf/matmul), [tf.linalg.inv](https://www.tensorflow.org/api_docs/python/tf/linalg/inv) etc.) that consume and produce `tf.Tensor`s. These operations automatically convert native Python types, for example:
```
print(tf.add(1, 2))
print(tf.add([1, 2], [3, 4]))
print(tf.square(5))
print(tf.reduce_sum([1, 2, 3]))
print(tf.io.encode_base64("hello world"))
# Operator overloading is also supported
print(tf.square(2) + tf.square(3))
```
Each `tf.Tensor` has a shape and a datatype:
```
x = tf.matmul([[1]], [[2, 3]])
print(x.shape)
print(x.dtype)
```
The most obvious differences between NumPy arrays and `tf.Tensor`s are:
1. Tensors can be backed by accelerator memory (like GPU, TPU).
2. Tensors are immutable.
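A small sketch of the second point, immutability (assuming eager execution, as in the rest of this tutorial):
```python
import numpy as np
import tensorflow as tf

a = np.ones(3)
a[0] = 5.0            # NumPy arrays can be modified in place

t = tf.ones(3)
try:
    t[0] = 5.0        # tf.Tensor objects do not support item assignment
except TypeError as err:
    print("Tensors are immutable:", err)
```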
### NumPy Compatibility
Converting between a TensorFlow `tf.Tensor`s and a NumPy `ndarray` is easy:
* TensorFlow operations automatically convert NumPy ndarrays to Tensors.
* NumPy operations automatically convert Tensors to NumPy ndarrays.
Tensors are explicitly converted to NumPy ndarrays using their `.numpy()` method. These conversions are typically cheap since the array and `tf.Tensor` share the underlying memory representation, if possible. However, sharing the underlying representation isn't always possible since the `tf.Tensor` may be hosted in GPU memory while NumPy arrays are always backed by host memory, and the conversion involves a copy from GPU to host memory.
```
import numpy as np
ndarray = np.ones([3, 3])
print("TensorFlow operations convert numpy arrays to Tensors automatically")
tensor = tf.multiply(ndarray, 42)
print(tensor)
print("And NumPy operations convert Tensors to numpy arrays automatically")
print(np.add(tensor, 1))
print("The .numpy() method explicitly converts a Tensor to a numpy array")
print(tensor.numpy())
```
## GPU acceleration
Many TensorFlow operations are accelerated using the GPU for computation. Without any annotations, TensorFlow automatically decides whether to use the GPU or CPU for an operation—copying the tensor between CPU and GPU memory, if necessary. Tensors produced by an operation are typically backed by the memory of the device on which the operation executed, for example:
```
x = tf.random.uniform([3, 3])
print("Is there a GPU available: "),
print(tf.test.is_gpu_available())
print("Is the Tensor on GPU #0: "),
print(x.device.endswith('GPU:0'))
```
### Device Names
The `Tensor.device` property provides a fully qualified string name of the device hosting the contents of the tensor. This name encodes many details, such as an identifier of the network address of the host on which this program is executing and the device within that host. This is required for distributed execution of a TensorFlow program. The string ends with `GPU:<N>` if the tensor is placed on the `N`-th GPU on the host.
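For example, you can inspect the property directly (the exact string depends on your machine):
```python
import tensorflow as tf

x = tf.random.uniform([2, 2])
print(x.device)   # e.g. '/job:localhost/replica:0/task:0/device:CPU:0' or '...GPU:0'
```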
### Explicit Device Placement
In TensorFlow, *placement* refers to how individual operations are assigned (placed on) a device for execution. As mentioned, when there is no explicit guidance provided, TensorFlow automatically decides which device to execute an operation and copies tensors to that device, if needed. However, TensorFlow operations can be explicitly placed on specific devices using the `tf.device` context manager, for example:
```
import time
def time_matmul(x):
start = time.time()
for loop in range(10):
tf.matmul(x, x)
result = time.time()-start
print("10 loops: {:0.2f}ms".format(1000*result))
# Force execution on CPU
print("On CPU:")
with tf.device("CPU:0"):
x = tf.random.uniform([1000, 1000])
assert x.device.endswith("CPU:0")
time_matmul(x)
# Force execution on GPU #0 if available
if tf.test.is_gpu_available():
with tf.device("GPU:0"): # Or GPU:1 for the 2nd GPU, GPU:2 for the 3rd etc.
x = tf.random.uniform([1000, 1000])
assert x.device.endswith("GPU:0")
time_matmul(x)
```
## Datasets
This section uses the [`tf.data.Dataset` API](https://www.tensorflow.org/guide/datasets) to build a pipeline for feeding data to your model. The `tf.data.Dataset` API is used to build performant, complex input pipelines from simple, re-usable pieces that will feed your model's training or evaluation loops.
### Create a source `Dataset`
Create a *source* dataset using one of the factory functions like [`Dataset.from_tensors`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_tensors), [`Dataset.from_tensor_slices`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_tensor_slices), or using objects that read from files like [`TextLineDataset`](https://www.tensorflow.org/api_docs/python/tf/data/TextLineDataset) or [`TFRecordDataset`](https://www.tensorflow.org/api_docs/python/tf/data/TFRecordDataset). See the [TensorFlow Dataset guide](https://www.tensorflow.org/guide/datasets#reading_input_data) for more information.
```
ds_tensors = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4, 5, 6])
# Create a CSV file
import tempfile
_, filename = tempfile.mkstemp()
with open(filename, 'w') as f:
f.write("""Line 1
Line 2
Line 3
""")
ds_file = tf.data.TextLineDataset(filename)
```
### Apply transformations
Use the transformations functions like [`map`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#map), [`batch`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#batch), and [`shuffle`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#shuffle) to apply transformations to dataset records.
```
ds_tensors = ds_tensors.map(tf.square).shuffle(2).batch(2)
ds_file = ds_file.batch(2)
```
### Iterate
`tf.data.Dataset` objects support iteration to loop over records:
```
print('Elements of ds_tensors:')
for x in ds_tensors:
print(x)
print('\nElements in ds_file:')
for x in ds_file:
print(x)
```
# Remote Sensing Hands-On Lesson, using TGO
EPSC Conference, Berlin, September 18, 2018
## Overview
In this lesson you will develop a series of simple programs that
demonstrate the usage of SpiceyPy to compute a variety of different
geometric quantities applicable to experiments carried out by a remote
sensing instrument flown on an interplanetary spacecraft. This
particular lesson focuses on a spectrometer flying on the ExoMars2016 TGO
spacecraft, but many of the concepts are easily extended and generalized
to other scenarios.
## Importing SpiceyPy and Loading the Kernels
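The kernel-loading code is not reproduced in this summary. A minimal sketch of what it could look like with SpiceyPy, assuming a meta-kernel file (the file name `em16_ops.tm` is illustrative, not the lesson's actual file) that lists the LSK, SCLK, SPK, CK, FK and PCK kernels needed for the lesson:
```python
import spiceypy

spiceypy.furnsh('kernels/em16_ops.tm')   # load every kernel listed in the meta-kernel
print(spiceypy.tkvrsn('TOOLKIT'))        # report the SPICE toolkit version in use
```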
## Time Conversion
Write a program that prompts the user for an input UTC time string,
converts it to the following time systems and output formats:
* Ephemeris Time (ET) in seconds past J2000
* Calendar Ephemeris Time
* Spacecraft Clock Time
and displays the results. Use the program to convert "2018 JUN 11
19:32:00" UTC into these alternate systems.
## Obtaining Target States and Positions
Write a program that prompts the user for an input UTC time string,
computes the following quantities at that epoch:
* The apparent state of Mars as seen from ExoMars2016 TGO in the J2000 frame, in kilometers and kilometers/second. This vector itself is not of any particular interest, but it is a useful intermediate quantity in some geometry calculations.
* The apparent position of the Earth as seen from ExoMars2016 TGO in the J2000 frame, in kilometers.
* The one-way light time between ExoMars2016 TGO and the apparent position of Earth, in seconds.
* The apparent position of the Sun as seen from Mars in the J2000 frame, in kilometers.
* The actual (geometric) distance between the Sun and Mars, in astronomical units.
and displays the results. Use the program to compute these quantities at
"2018 JUN 11 19:32:00" UTC.
## Spacecraft Orientation and Reference Frames
Write a program that prompts the user for an input time string, and
computes and displays the following at the epoch of interest:
* The apparent state of Mars as seen from ExoMars2016 TGO in the IAU_MARS body-fixed frame. This vector itself is not of any particular interest, but it is a useful intermediate quantity in some geometry calculations.
* The angular separation between the apparent position of Mars as seen from ExoMars2016 TGO and the nominal instrument view direction.
* The nominal instrument view direction is not provided by any kernel variable, but it is indicated in the ExoMars2016 TGO frame kernel.
Use the program to compute these quantities at the epoch 2018 JUN 11
19:32:00 UTC.
## Computing Sub-s/c and Sub-solar Points on an Ellipsoid and a DSK
Write a program that prompts the user for an input UTC time string and computes the following quantities at that epoch:
* The apparent sub-observer point of ExoMars2016 TGO on Mars, in the body fixed frame IAU_MARS, in kilometers.
* The apparent sub-solar point on Mars, as seen from ExoMars2016 TGO in the body fixed frame IAU_MARS, in kilometers.
The program computes each point twice: once using an ellipsoidal shape model and the
near point/ellipsoid
definition, and once using a DSK shape model and the
nadir/dsk/unprioritized
definition.
The program displays the results. Use the program to compute these
quantities at 2018 JUN 11 19:32:00 UTC.
## Intersecting Vectors with an Ellipsoid and a DSK (fovint)
Write a program that prompts the user for an input UTC time string and,
for that time, computes the intersection of the ExoMars2016 TGO NOMAD LNO
Nadir aperture boresight and field of view (FOV) boundary vectors with
the surface of Mars. Compute each intercept twice: once with Mars' shape
modeled as an ellipsoid, and once with Mars' shape modeled by DSK data.
The program presents each point of intersection as
* A Cartesian vector in the IAU_MARS frame
* Planetocentric (latitudinal) coordinates in the IAU_MARS frame.
For each of the camera FOV boundary and boresight vectors, if an
intersection is found, the program displays the results of the above
computations, otherwise it indicates no intersection exists.
At each point of intersection compute the following:
* Phase angle
* Solar incidence angle
* Emission angle
These angles should be computed using both ellipsoidal and DSK shape
models.
Additionally compute the local solar time at the intercept of the
spectrometer aperture boresight with the surface of Mars, using both
ellipsoidal and DSK shape models.
Use this program to compute values at 2018 JUN 11 19:32:00 UTC
```
import numpy
from context import vaeqst
from context import base
base.RandomCliffordGate(0,1)
```
# Random Clifford Circuit
## RandomCliffordGate
`RandomClifordGate(*qubits)` represents a random Clifford gate acting on a set of qubits. There is no further parameter to specify, as it is not any particular gate, but a placeholder for a generic random Clifford gate.
**Parameters**
- `*qubits`: indices of the set of qubits on which the gate acts on.
Example:
```
gate = vaeqst.RandomCliffordGate(0,1)
gate
```
`RandomCliffordGate.random_clifford_map()` evokes a random sampling of the Clifford unitary, return in the form of operator mapping table $M$ and the corresponding sign indicator $h$. Such that under the mapping, any Pauli operator $\sigma_g$ specified by the binary representation $g$ (and localized within the gate support) gets mapped to
$$\sigma_g \to \prod_{i=1}^{2n} (-)^{h_i}\sigma_{M_i}^{g_i}.$$
The binary representation is in the $g=(x_0,z_0,x_1,z_1,\cdots)$ basis.
```
gate.random_clifford_map()
```
## RandomCliffordLayer
`RandomCliffordLayer(*gates)` represents a layer of random Clifford gates.
**Parameters:**
* `*gates`: quantum gates contained in the layer.
The gates in the same layer should not overlap with each other (all gates need to commute). To ensure this, we do not manually add gates to the layer, but use the higher-level function `.gate()` provided by `RandomCliffordCircuit` (see discussion later).
Example:
```
layer = vaeqst.RandomCliffordLayer(vaeqst.RandomCliffordGate(0,1),vaeqst.RandomCliffordGate(3,5))
layer
```
It hosts a list of gates:
```
layer.gates
```
Given the total number of qubits $N$, the layer can sample the Clifford unitary (as a product of the gates) $U=\prod_{a}U_a$, and represent it as a single operator mapping (because the gates do not overlap, they map operators in different supports independently).
```
layer.random_clifford_map(6)
```
## RandomCliffordCircuit
`RandomCliffordCircuit()` represents a quantum circuit of random Clifford gates.
### Methods
#### Construct the Circuit
Example: create a random Clifford circuit.
```
circ = vaeqst.RandomCliffordCircuit()
```
Use `.gate(*qubits)` to add random Clifford gates to the circuit.
```
circ.gate(0,1)
circ.gate(2,4)
circ.gate(1,4)
circ.gate(0,2)
circ.gate(3,5)
circ.gate(3,4)
circ
```
Gates will automatically arranged into layers. Each new gate added to the circuit will commute through the layers if it is not blocked by the existing gates.
If the number of qubits `.N` is not explicitly defined, it will be dynamically inferred from the circuit width, as the largest qubit index of all gates + 1.
```
circ.N
```
#### Navigate in the Circuit
`.layers_forward()` and `.layers_backward()` provide two generators to iterate over layers in forward and backward order respectively.
```
list(circ.layers_forward())
list(circ.layers_backward())
```
`.first_layer` and `.last_layer` points to the first and the last layers.
```
circ.first_layer
circ.last_layer
```
Use `.next_layer` and `.prev_layer` to move forward and backward.
```
circ.first_layer.next_layer, circ.last_layer.prev_layer
```
Locate a gate in the circuit.
```
circ.first_layer.next_layer.next_layer.gates[0]
```
#### Apply Circuit to State
`.forward(state)` and `.backward(state)` applies the circuit to transform the state forward / backward.
* Each call will sample a new random realization of the random Clifford circuit.
* The transformation will create a new state, the original state remains untouched.
```
rho = vaeqst.StabilizerState(6, r=0)
rho
circ.forward(rho)
circ.backward(rho)
```
#### POVM
`.povm(nsample)` provides a generator to sample $n_\text{sample}$ from the prior POVM based on the circuit by back evolution.
```
list(circ.povm(3))
```
## BrickWallRCC
`BrickWallRCC(N, depth)` is a subclass of `RandomCliffordCircuit`. It represents the circuit with 2-qubit gates arranged following a brick wall pattern.
```
circ = vaeqst.BrickWallRCC(16,2)
circ
```
Create an inital state as a computational basis state.
```
rho = vaeqst.StabilizerState(16, r=0)
rho
```
Backward evolve the state to obtain the measurement operator.
```
circ.backward(rho)
```
## OnSiteRCC
`OnSiteRCC(N)` is a subclass of `RandomCliffordCircuit`. It represents the circuit of a single layer of on-site Clifford gates. It can be used to generate random Pauli states.
```
circ = vaeqst.OnSiteRCC(16)
circ
rho = vaeqst.StabilizerState(16, r=0)
circ.backward(rho)
```
## GlobalRCC
`GlobalRCC(N)` is a subclass of `RandomCliffordCircuit`. It represents the circuit consists of a global Clifford gate. It can be used to generate Clifford states.
```
circ = vaeqst.GlobalRCC(16)
circ
rho = vaeqst.StabilizerState(16, r=0)
circ.backward(rho)
```
<font size=6>
<b>Python Programming Course</b>
</font>
<font size=4>
Internal training course, CIEMAT. <br/>
Madrid, October 2021
Antonio Delgado Peris
</font>
https://github.com/andelpe/curso-intro-python/
<br/>
# Unit 9 - The Python ecosystem: the standard library and other popular packages
## Objectives
- Get to know some modules of the standard library
- Interacting with the Python interpreter itself
- Interacting with the operating system
- Managing the file system
- Process management and concurrency
- Development, debugging and profiling
- Numbers and mathematics
- Network access and functionality
- Utilities for advanced handling of functions and iterators
- Introduce the ecosystem of scientific Python libraries
- The NumPy/SciPy stack
- Graphics
- Mathematics and statistics
- Machine learning
- Natural language processing
- Biology
- Physics
## The standard library
One of Python's slogans is _batteries included_. It refers to the amount of functionality available in a basic Python installation, with no need to resort to external packages.
In this section we briefly review some of the available modules. For much more information: https://docs.python.org/3/library/
### Interacting with the Python interpreter: `sys`
Offers both information about, and the ability to manipulate, several aspects of the Python environment itself.
- `sys.argv`: list of the arguments passed to the running program.
- `sys.version`: string with the current Python version.
- `sys.stdin/out/err`: file objects used by the interpreter for standard input, output and error.
- `sys.exit`: function to terminate the program.
### Interacting with the operating system: `os`
A _portable_ interface to functionality that depends on the operating system.
It contains very varied, sometimes rather low-level, functionality.
- `os.environ`: dictionary of environment variables (modifiable)
- `os.getuid`, `os.getgid`, `os.getpid`...: obtain the UID, GID, process ID, etc. (Unix)
- `os.uname`: information about the operating system
- `os.getcwd`, `os.chdir`, `os.mkdir`, `os.remove`, `os.stat`...: operations on the file system
- `os.exec`, `os.fork`, `os.kill`...: process management
For some of these operations it is more convenient to use more specific, or higher-level, modules.
### File system operations
- For manipulating _paths_, deleting and creating directories, etc.: `pathlib` (modern), or `os.path` (classic)
- Expansion of filename _wildcards_ (Unix _globs_): `glob`
- For high-level copy (and other) operations: `shutil`
- For temporary (throwaway) files and directories: `tempfile`
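A small combined sketch of these modules (it only writes inside a throwaway temporary directory):
```python
from pathlib import Path
import glob, shutil, tempfile

p = Path('data') / 'results.txt'
print(p.suffix, p.parent, p.exists())        # path manipulation, no disk access needed

print(glob.glob('*.ipynb'))                  # expand Unix-style wildcards

with tempfile.TemporaryDirectory() as tmp:   # disposable directory, removed automatically
    src = Path(tmp) / 'a.txt'
    src.write_text('hello')
    shutil.copy(src, Path(tmp) / 'b.txt')    # high-level copy
```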
### Process management
- `threading`: high-level interface for _thread_ management.
- It suffers from Python's _Global Interpreter Lock_: a global lock that ensures only one thread is executing Python code at any given time (except while paused for I/O). This prevents performance gains from multiple CPUs.
- `queue`: implements multi-producer, multi-consumer queues for safely exchanging information between multiple _threads_.
- `multiprocessing`: an interface that mimics `threading`, but uses multiple processes instead of threads (avoiding the GIL problem). It supports Unix and Windows, and offers local and remote concurrency.
- The `multiprocessing.shared_memory` module makes it easy to allocate and manage shared memory between several processes.
- `subprocess`: allows launching and managing subprocesses (external commands) from Python.
- For Python >= 3.5, the `run` function is recommended, except for complex cases.
```
from subprocess import run
def showRes(res):
print('\n------- ret code:', res.returncode, '; err:', res.stderr)
if res.stdout:
print('\n'.join(res.stdout.splitlines()[:3]))
print()
print('NO SHELL')
res = run(['ls', '-l'], capture_output=True, text=True)
showRes(res)
print('WITH SHELL')
res = run('ls -l', shell=True, capture_output=True, text=True)
showRes(res)
print('NO OUTPUT')
res = run(['ls', '-l'])
showRes(res)
print('ERROR NO-CHECK')
res = run(['ls', '-l', 'XXXX'], capture_output=True, text=True)
showRes(res)
print('ERROR CHECK')
try:
res = run(['ls', '-l', 'XXXX'], capture_output=True, check=True)
showRes(res)
except Exception as ex:
print(f'--- Error of type {type(ex)}:\n {ex}\n')
print('NO OUTPUT')
res = run(['ls', '-l', 'XXXX'])
showRes(res)
```
### Numbers and mathematics
- `math`: mathematical operations defined by the C standard (`cmath` for complex numbers)
- `random`: pseudo-random number generators for several distributions
- `statistics`: basic statistics
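A quick taste of the three modules:
```python
import math, random, statistics

print(math.sqrt(2) * math.cos(math.pi / 4))           # ~1.0
print(random.gauss(mu=0, sigma=1))                    # one sample from N(0, 1)
data = [1, 2, 3, 4]
print(statistics.mean(data), statistics.stdev(data))  # 2.5  1.29...
```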
### Advanced handling of functions and iterators
- `itertools`: utilities to build iterators efficiently.
- `functools`: higher-order functions that manipulate other functions
- `operator`: functions corresponding to Python's intrinsic operators
```
import operator
operator.add(3, 4)
```
### Networking
- `socket`: low-level network operations
- `asyncio`: support for asynchronous I/O
- Several libraries exist for HTTP interaction, but the external `requests` library is recommended.
### Development, debugging and profiling
- `pydoc`: documentation generation (HTML) from docstrings
- Debugging
- Many IDEs, and JupyterLab, include debugging facilities in their environments.
- `pdb`: Python's official _debugger_
- Run scripts as `python3 -m pdb myscript.py`
- Insert a _breakpoint_ with `import pdb; pdb.set_trace()`
- `cProfile`: _profiler_
- `timeit`: measuring execution times of code/scripts
```python
$ python3 -m timeit '"-".join(str(n) for n in range(100))'
10000 loops, best of 5: 30.2 usec per loop
>>> import timeit
>>> timeit.timeit('"-".join(str(n) for n in range(100))', number=10000)
0.3018611848820001
%timeit "-".join(str(n) for n in range(100)) # Jupyter line mode
%%timeit ... # Jupyter cell mode
```
- `unittest`: writing tests for code validation (_test-driven programming_); a minimal example is sketched after this list
- The external `pytest` library simplifies some tasks and is very popular
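A minimal `unittest` example (runnable inside a notebook; in a standalone script you would simply call `unittest.main()`):
```python
import unittest

def divide(a, b):
    return a / b

class TestDivide(unittest.TestCase):
    def test_exact(self):
        self.assertEqual(divide(6, 3), 2)

    def test_zero_division(self):
        with self.assertRaises(ZeroDivisionError):
            divide(1, 0)

# argv[0] is ignored; exit=False keeps the notebook kernel alive.
unittest.main(argv=['ignored', '-v'], exit=False)
```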
### Other
- `argparse`: processing of command-line arguments and options
- My recommendation is to build yourself a template _skeleton_ as a basis for future scripts (see the sketch after this list).
- `re`: regular expression processing
- `time`, `datetime`: date and time handling (measuring and representing time, time deltas, etc.)
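As an example of such a reusable skeleton, a minimal `argparse` template might look like this (the argument names are just placeholders):
```python
import argparse

def parse_args(argv=None):
    parser = argparse.ArgumentParser(description='Skeleton for a command-line script')
    parser.add_argument('input', help='input file to process')
    parser.add_argument('-n', '--lines', type=int, default=10, help='number of lines to show')
    parser.add_argument('-v', '--verbose', action='store_true', help='verbose output')
    return parser.parse_args(argv)   # argv=None means: read sys.argv

args = parse_args(['data.txt', '-n', '5', '--verbose'])   # explicit list, handy for testing
print(args.input, args.lines, args.verbose)
```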
## The NumPy/SciPy stack
This set of open-source libraries forms the numerical, mathematical and visualization foundation on which the mathematical/scientific Python universe is built.
- **NumPy**: General-purpose package for high-performance processing of _array_ objects (vectors and matrices).
- It serves as the basis for most of the other mathematical packages.
- It allows efficient matrix operations (without using explicit loops)
- It uses compiled libraries (C and Fortran), with a Python API, to achieve better performance.
- **SciPy**: Built on NumPy, and serving as the basis for many of the following, it offers many utilities for numerical integration, interpolation, optimization, linear algebra, signal processing and statistics.
- Do not confuse the _SciPy library_ with the SciPy project or stack, which refers to all the libraries in this section.
- **Matplotlib**: Python's reference visualization (2D plotting) library.
- It also serves as the basis for other libraries, such as _Seaborn_ or _Pandas_.
- **Pandas**: Agile and efficient data manipulation.
- It uses a _DataFrame_ object, which represents information as labelled and indexed columns.
- It offers functionality to search, filter, sort, transform or extract information.
- **SymPy**: Symbolic mathematics library (in the style of _Mathematica_)
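A tiny sketch of the NumPy/Pandas style of working (vectorized operations instead of explicit loops, and labelled columns):
```python
import numpy as np
import pandas as pd

x = np.linspace(0, 2 * np.pi, 1_000_000)
y = np.sin(x) ** 2 + np.cos(x) ** 2           # vectorized: no Python loop over the elements

df = pd.DataFrame({'x': x[:5], 'y': y[:5]})   # labelled, indexed columns
print(df.describe())
```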
## Graphics
- **Seaborn**: Built on Matplotlib, it offers a high-level interface for easily building advanced plots for statistical models.
- **Bokeh**: Library for interactive visualization of plots on the web, or in Jupyter notebooks.
- **Plotly**: Interactive plots for the web. It is part of a larger project, **_Dash_**, a framework for building web applications for data analysis in Python (without writing _javascript_).
- **Scikit-image**: Algorithms for image _processing_ (a different purpose from the previous ones).
- Others: **ggplot2/plotnine** (based on R's _ggplot2_ library), **Altair** (a declarative library, based on _Vega-Lite_), `Geoplotlib` and `Folium` (for building maps).
## Mathematics and statistics
- **Statsmodels**: Estimation of statistical models, statistical tests and exploration of statistical data.
- **PyStan**: Bayesian inference.
- **NetworkX**: Creation, manipulation and analysis of networks and graphs.
## Machine Learning
- **Scikit-learn**: Librería de aprendizaje automático de propósito general, construida sobre NumPy. Ofrece múltiples algoritmos de ML, como _support vector machines_, o _random forests_, así como muchas utilidades para pre- y postprocesado de datos.
- **TensorFlow** y **PyTorch**: son dos librerías para programación de redes neuronales, incluyendo optimización para GPUs, muy extendidas.
- **Keras**: Es un interfaz simplificado (de alto nivel) para el uso de TensorFlow.
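A minimal sketch of the typical scikit-learn workflow, using one of its bundled toy datasets and a random forest:
```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Load a small toy dataset and split it into train and test sets.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Fit a random forest and evaluate it on the held-out data.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print('Test accuracy:', clf.score(X_test, y_test))
```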
## Others
### Natural Language Processing
The following libraries offer syntactic and semantic analysis of free text:
- **GenSim**
- **SpaCy**
- **NLTK**
### Biology
- **Scikit-bio**: Data structures, algorithms, and educational resources for bioinformatics.
- **BioPython**: Tools for biological computation.
- **PyEnsembl**: Python interface to Ensembl, a genomics database.
### Physics
- Astronomy: **Astropy** and **PyFITS**
- High-energy physics:
  - **PyROOT**: Python interface to ROOT, a framework with general-purpose ambitions that offers many utilities for data analysis and storage, statistics, and visualization.
  - **Scikit-HEP**: a collection of libraries that aim to work with ROOT data using exclusively Python code (integrated with NumPy), without using PyROOT. Some of them are **uproot**, **awkward array**, and **coffea**.
### HDF5 data
- **h5py**: Interface to HDF5 data that aims to expose the full functionality of the HDF5 C interface in Python, integrated with NumPy objects and types, so it can be used easily in Python code (see the sketch after this list).
- **pytables**: Another interface to HDF5 data, at a higher level than `h5py`, which offers additional database-style functionality (complex queries, advanced indexing, optimized computation on HDF5 data, etc.).
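A minimal h5py sketch (the file and dataset names here are arbitrary) that writes and reads back a NumPy array:
```python
import h5py
import numpy as np

data = np.random.rand(100, 3)

# Write a dataset (plus an attribute) to an HDF5 file.
with h5py.File('example.h5', 'w') as f:
    dset = f.create_dataset('measurements', data=data, compression='gzip')
    dset.attrs['units'] = 'arbitrary'

# Read it back; datasets slice like NumPy arrays.
with h5py.File('example.h5', 'r') as f:
    first_rows = f['measurements'][:10]
    print(f['measurements'].attrs['units'], first_rows.shape)
```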
| github_jupyter |
<a href="https://colab.research.google.com/github/google/evojax/blob/main/examples/notebooks/TutorialTaskImplementation.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Tutorial: Creating Tasks
## Pre-requisite
Before we start, we need to install EvoJAX and import some libraries.
**Note** In our [paper](https://arxiv.org/abs/2202.05008), we ran the experiments on NVIDIA V100 GPU(s). Your results can be different from ours.
```
from IPython.display import clear_output, Image
!pip install evojax
!pip install torchvision # We use torchvision.datasets.MNIST in this tutorial.
clear_output()
import os
import numpy as np
import jax
import jax.numpy as jnp
from evojax.task.cartpole import CartPoleSwingUp
from evojax.policy.mlp import MLPPolicy
from evojax.algo import PGPE
from evojax import Trainer
from evojax.util import create_logger
# Let's create a directory to save logs and models.
log_dir = './log'
logger = create_logger(name='EvoJAX', log_dir=log_dir)
logger.info('Welcome to the tutorial on Task creation!')
logger.info('Jax backend: {}'.format(jax.local_devices()))
!nvidia-smi --query-gpu=name --format=csv,noheader
```
## Introduction
EvoJAX has three major components: the *task*, the *policy network* and the *neuroevolution algorithm*. Once these components are implemented and instantiated, we can use a trainer to start the training process. The following code snippet provides an example of how we use EvoJAX.
```
seed = 42 # Wish me luck!
# We use the classic cart-pole swing up as our tasks, see
# https://github.com/google/evojax/tree/main/evojax/task for more example tasks.
# The test flag provides the opportunity for a user to
# 1. Return different signals as rewards. For example, in our MNIST example,
# we use negative cross-entropy loss as the reward in training tasks, and the
# classification accuracy as the reward in test tasks.
# 2. Perform reward shaping. It is common for RL practitioners to modify the
# rewards during training so that the agent learns more efficiently. But this
# modification should not be allowed in tests for fair evaluations.
hard = False
train_task = CartPoleSwingUp(harder=hard, test=False)
test_task = CartPoleSwingUp(harder=hard, test=True)
# We use a feedforward network as our policy.
# By default, MLPPolicy uses "tanh" as its activation function for the output.
policy = MLPPolicy(
input_dim=train_task.obs_shape[0],
hidden_dims=[64, 64],
output_dim=train_task.act_shape[0],
logger=logger,
)
# We use PGPE as our evolution algorithm.
# If you want to know more about the algorithm, please take a look at the paper:
# https://people.idsia.ch/~juergen/nn2010.pdf
solver = PGPE(
pop_size=64,
param_size=policy.num_params,
optimizer='adam',
center_learning_rate=0.05,
seed=seed,
)
# Now that we have all the three components instantiated, we can create a
# trainer and start the training process.
trainer = Trainer(
policy=policy,
solver=solver,
train_task=train_task,
test_task=test_task,
max_iter=600,
log_interval=100,
test_interval=200,
n_repeats=5,
n_evaluations=128,
seed=seed,
log_dir=log_dir,
logger=logger,
)
_ = trainer.run()
# Let's visualize the learned policy.
def render(task, algo, policy):
"""Render the learned policy."""
task_reset_fn = jax.jit(test_task.reset)
policy_reset_fn = jax.jit(policy.reset)
step_fn = jax.jit(test_task.step)
act_fn = jax.jit(policy.get_actions)
params = algo.best_params[None, :]
task_s = task_reset_fn(jax.random.PRNGKey(seed=seed)[None, :])
policy_s = policy_reset_fn(task_s)
images = [CartPoleSwingUp.render(task_s, 0)]
done = False
step = 0
reward = 0
while not done:
act, policy_s = act_fn(task_s, params, policy_s)
task_s, r, d = step_fn(task_s, act)
step += 1
reward = reward + r
done = bool(d[0])
if step % 3 == 0:
images.append(CartPoleSwingUp.render(task_s, 0))
print('reward={}'.format(reward))
return images
imgs = render(test_task, solver, policy)
gif_file = os.path.join(log_dir, 'cartpole.gif')
imgs[0].save(
gif_file, save_all=True, append_images=imgs[1:], duration=40, loop=0)
Image(open(gif_file,'rb').read())
```
EvoJAX implements the entire training pipeline, including these three major components, in JAX. In the first release, we created several [demo tasks](https://github.com/google/evojax/tree/main/evojax/task) to showcase EvoJAX's capabilities, and we encourage users to bring their own tasks. To this end, this tutorial walks you through the process of creating EvoJAX tasks.
To contribute a task implementation to EvoJAX, all you need to do is implement the `VectorizedTask` interface.
The interface is defined as follows, and you can see the related Python file [here](https://github.com/google/evojax/blob/main/evojax/task/base.py):
```python
class TaskState(ABC):
"""A template of the task state."""
obs: jnp.ndarray
class VectorizedTask(ABC):
"""Interface for all the EvoJAX tasks."""
max_steps: int
obs_shape: Tuple
act_shape: Tuple
test: bool
multi_agent_training: bool = False
@abstractmethod
def reset(self, key: jnp.array) -> TaskState:
"""This resets the vectorized task.
Args:
key - A jax random key.
Returns:
TaskState. Initial task state.
"""
raise NotImplementedError()
@abstractmethod
def step(self,
state: TaskState,
action: jnp.ndarray) -> Tuple[TaskState, jnp.ndarray, jnp.ndarray]:
"""This steps once the simulation.
Args:
state - System internal states of shape (num_tasks, *).
action - Vectorized actions of shape (num_tasks, action_size).
Returns:
TaskState. Task states.
jnp.ndarray. Reward.
jnp.ndarray. Task termination flag: 1 for done, 0 otherwise.
"""
raise NotImplementedError()
```
## MNIST classification
While one would obviously use gradient descent for MNIST in practice, the point is to show that neuroevolution can also solve it to some degree of accuracy within a short amount of time, which will be useful when these models are embedded in a more complicated task where gradient-based approaches may not work.
The following code snippet shows how we wrap the dataset and treat it as a one-step `VectorizedTask`.
```
from torchvision import datasets
from flax.struct import dataclass
from evojax.task.base import TaskState
from evojax.task.base import VectorizedTask
# This state contains the information we wish to carry over to the next step.
# The state will be used in `VectorizedTask.step` method.
# In supervised learning tasks, we want to store the data and the labels so that
# we can calculate the loss or the accuracy and use that as the reward signal.
@dataclass
class State(TaskState):
obs: jnp.ndarray
labels: jnp.ndarray
def sample_batch(key, data, labels, batch_size):
ix = jax.random.choice(
key=key, a=data.shape[0], shape=(batch_size,), replace=False)
return (jnp.take(data, indices=ix, axis=0),
jnp.take(labels, indices=ix, axis=0))
def loss(prediction, target):
target = jax.nn.one_hot(target, 10)
return -jnp.mean(jnp.sum(prediction * target, axis=1))
def accuracy(prediction, target):
predicted_class = jnp.argmax(prediction, axis=1)
return jnp.mean(predicted_class == target)
class MNIST(VectorizedTask):
"""MNIST classification task.
    We model the classification as a one-step task, i.e.,
`MNIST.reset` returns a batch of data to the agent, the agent outputs
predictions, `MNIST.step` returns the reward (loss or accuracy) and
terminates the rollout.
"""
def __init__(self, batch_size, test):
self.max_steps = 1
# These are similar to OpenAI Gym environment's
# observation_space and action_space.
# They are helpful for initializing the policy networks.
self.obs_shape = tuple([28, 28, 1])
self.act_shape = tuple([10, ])
# We download the dataset and normalize the value.
dataset = datasets.MNIST('./data', train=not test, download=True)
data = np.expand_dims(dataset.data.numpy() / 255., axis=-1)
labels = dataset.targets.numpy()
def reset_fn(key):
if test:
# In the test mode, we want to test on the entire test set.
batch_data, batch_labels = data, labels
else:
# In the training mode, we only sample a batch of training data.
batch_data, batch_labels = sample_batch(
key, data, labels, batch_size)
return State(obs=batch_data, labels=batch_labels)
# We use jax.vmap for auto-vectorization.
self._reset_fn = jax.jit(jax.vmap(reset_fn))
def step_fn(state, action):
if test:
# In the test mode, we report the classification accuracy.
reward = accuracy(action, state.labels)
else:
# In the training mode, we return the negative loss as the
# reward signal. It is legitimate to return accuracy as the
# reward signal in training too, but we find the performance is
# not as good as when we use the negative loss.
reward = -loss(action, state.labels)
            # This is a one-step task, so the last return value (the `done`
            # flag) is one.
return state, reward, jnp.ones(())
# We use jax.vmap for auto-vectorization.
self._step_fn = jax.jit(jax.vmap(step_fn))
def reset(self, key):
return self._reset_fn(key)
def step(self, state, action):
return self._step_fn(state, action)
# Okay, let's test out the task with a ConvNet policy.
from evojax.policy.convnet import ConvNetPolicy
batch_size = 1024
train_task = MNIST(batch_size=batch_size, test=False)
test_task = MNIST(batch_size=batch_size, test=True)
policy = ConvNetPolicy(logger=logger)
solver = PGPE(
pop_size=64,
param_size=policy.num_params,
optimizer='adam',
center_learning_rate=0.006,
stdev_learning_rate=0.09,
init_stdev=0.04,
logger=logger,
seed=seed,
)
trainer = Trainer(
policy=policy,
solver=solver,
train_task=train_task,
test_task=test_task,
max_iter=5000,
log_interval=100,
test_interval=1000,
n_repeats=1,
n_evaluations=1,
seed=seed,
log_dir=log_dir,
logger=logger,
)
_ = trainer.run()
```
Okay! Our implementation of the classification task is successful and EvoJAX achieved $>98\%$ test accuracy within 5 min on a V100 GPU.
As mentioned before, MNIST is a simple one-step task; we used it to get you familiar with the interfaces.
Next, we will build the classic cart-pole task from scratch.
## Cart-pole swing up
In our cart-pole swing up task, the agent applies an action $a \in [-1, 1]$ on the cart, and we maintain 4 states:
1. cart position $x$
2. cart velocity $\dot{x}$
3. the angle between the cart and the pole $\theta$
4. the pole's angular velocity $\dot{\theta}$
We randomly sample the initial states and will use the forward Euler integration to update them:
$\mathbf{x}(t + \Delta t) = \mathbf{x}(t) + \Delta t \mathbf{v}(t)$ and
$\mathbf{v}(t + \Delta t) = \mathbf{v}(t) + \Delta t f(a, \mathbf{x}(t), \mathbf{v}(t))$
where $\mathbf{x}(t) = [x, \theta]^{\intercal}$, $\mathbf{v}(t) = [\dot{x}, \dot{\theta}]^{\intercal}$ and $f(\cdot)$ is a function that represents the physical model.
Thanks to `jax.vmap`, we can write the task as if it were designed to deal with non-batch inputs; during training, JAX automatically vectorizes the task for us.
```
from evojax.task.base import TaskState
from evojax.task.base import VectorizedTask
import PIL
# Define some physics metrics.
GRAVITY = 9.82
CART_MASS = 0.5
POLE_MASS = 0.5
POLE_LEN = 0.6
FRICTION = 0.1
FORCE_SCALING = 10.0
DELTA_T = 0.01
CART_X_LIMIT = 2.4
# Define some constants for visualization.
SCREEN_W = 600
SCREEN_H = 600
CART_W = 40
CART_H = 20
VIZ_SCALE = 100
WHEEL_RAD = 5
@dataclass
class State(TaskState):
obs: jnp.ndarray # This is the tuple (x, x_dot, theta, theta_dot)
state: jnp.ndarray # This maintains the system's state.
steps: jnp.int32 # This tracks the rollout length.
key: jnp.ndarray # This serves as a random seed.
class CartPole(VectorizedTask):
"""A quick implementation of the cart-pole task."""
def __init__(self, max_steps=1000, test=False):
self.max_steps = max_steps
self.obs_shape = tuple([4, ])
self.act_shape = tuple([1, ])
def sample_init_state(sample_key):
return (
jax.random.normal(sample_key, shape=(4,)) * 0.2 +
jnp.array([0, 0, jnp.pi, 0])
)
def get_reward(x, x_dot, theta, theta_dot):
# We encourage
# the pole to be held upward (i.e., theta is close to 0) and
# the cart to be at the origin (i.e., x is close to 0).
reward_theta = (jnp.cos(theta) + 1.0) / 2.0
reward_x = jnp.cos((x / CART_X_LIMIT) * (jnp.pi / 2.0))
return reward_theta * reward_x
def update_state(action, x, x_dot, theta, theta_dot):
action = jnp.clip(action, -1.0, 1.0)[0] * FORCE_SCALING
s = jnp.sin(theta)
c = jnp.cos(theta)
total_m = CART_MASS + POLE_MASS
m_p_l = POLE_MASS * POLE_LEN
# This is the physical model: f-function.
x_dot_update = (
(-2 * m_p_l * (theta_dot ** 2) * s +
3 * POLE_MASS * GRAVITY * s * c +
4 * action - 4 * FRICTION * x_dot) /
(4 * total_m - 3 * POLE_MASS * c ** 2)
)
theta_dot_update = (
(-3 * m_p_l * (theta_dot ** 2) * s * c +
6 * total_m * GRAVITY * s +
6 * (action - FRICTION * x_dot) * c) /
(4 * POLE_LEN * total_m - 3 * m_p_l * c ** 2)
)
# This is the forward Euler integration.
x = x + x_dot * DELTA_T
theta = theta + theta_dot * DELTA_T
x_dot = x_dot + x_dot_update * DELTA_T
theta_dot = theta_dot + theta_dot_update * DELTA_T
return jnp.array([x, x_dot, theta, theta_dot])
def out_of_screen(x):
"""We terminate the rollout if the cart is out of the screen."""
beyond_boundary_l = jnp.where(x < -CART_X_LIMIT, 1, 0)
beyond_boundary_r = jnp.where(x > CART_X_LIMIT, 1, 0)
return jnp.bitwise_or(beyond_boundary_l, beyond_boundary_r)
def reset_fn(key):
next_key, key = jax.random.split(key)
state = sample_init_state(key)
return State(
obs=state, # We make the task fully-observable.
state=state,
steps=jnp.zeros((), dtype=int),
key=next_key,
)
self._reset_fn = jax.jit(jax.vmap(reset_fn))
def step_fn(state, action):
current_state = update_state(action, *state.state)
reward = get_reward(*current_state)
steps = state.steps + 1
done = jnp.bitwise_or(
out_of_screen(current_state[0]), steps >= max_steps)
# We reset the step counter to zero if the rollout has ended.
steps = jnp.where(done, jnp.zeros((), jnp.int32), steps)
# We automatically reset the states if the rollout has ended.
next_key, key = jax.random.split(state.key)
# current_state = jnp.where(
# done, sample_init_state(key), current_state)
return State(
state=current_state,
obs=current_state,
steps=steps,
key=next_key), reward, done
self._step_fn = jax.jit(jax.vmap(step_fn))
def reset(self, key):
return self._reset_fn(key)
def step(self, state, action):
return self._step_fn(state, action)
    # Optionally, we can implement a render method to visualize the task.
@staticmethod
def render(state, task_id):
"""Render a specified task."""
img = PIL.Image.new('RGB', (SCREEN_W, SCREEN_H), (255, 255, 255))
draw = PIL.ImageDraw.Draw(img)
x, _, theta, _ = np.array(state.state[task_id])
cart_y = SCREEN_H // 2 + 100
cart_x = x * VIZ_SCALE + SCREEN_W // 2
# Draw the horizon.
draw.line(
(0, cart_y + CART_H // 2 + WHEEL_RAD,
SCREEN_W, cart_y + CART_H // 2 + WHEEL_RAD),
fill=(0, 0, 0), width=1)
# Draw the cart.
draw.rectangle(
(cart_x - CART_W // 2, cart_y - CART_H // 2,
cart_x + CART_W // 2, cart_y + CART_H // 2),
fill=(255, 0, 0), outline=(0, 0, 0))
# Draw the wheels.
draw.ellipse(
(cart_x - CART_W // 2 - WHEEL_RAD,
cart_y + CART_H // 2 - WHEEL_RAD,
cart_x - CART_W // 2 + WHEEL_RAD,
cart_y + CART_H // 2 + WHEEL_RAD),
fill=(220, 220, 220), outline=(0, 0, 0))
draw.ellipse(
(cart_x + CART_W // 2 - WHEEL_RAD,
cart_y + CART_H // 2 - WHEEL_RAD,
cart_x + CART_W // 2 + WHEEL_RAD,
cart_y + CART_H // 2 + WHEEL_RAD),
fill=(220, 220, 220), outline=(0, 0, 0))
# Draw the pole.
draw.line(
(cart_x, cart_y,
cart_x + POLE_LEN * VIZ_SCALE * np.cos(theta - np.pi / 2),
cart_y + POLE_LEN * VIZ_SCALE * np.sin(theta - np.pi / 2)),
fill=(0, 0, 255), width=6)
return img
# Okay, let's test this simple cart-pole implementation.
rollout_key = jax.random.PRNGKey(seed=seed)
reset_key, rollout_key = jax.random.split(rollout_key, 2)
reset_key = reset_key[None, :] # Expand dim, the leading is the batch dim.
# Initialize the task.
cart_pole_task = CartPole()
t_state = cart_pole_task.reset(reset_key)
task_screens = [CartPole.render(t_state, 0)]
# Rollout with random actions.
done = False
step_cnt = 0
total_reward = 0
while not done:
action_key, rollout_key = jax.random.split(rollout_key, 2)
action = jax.random.uniform(
action_key, shape=(1, 1), minval=-1., maxval=1.)
t_state, reward, done = cart_pole_task.step(t_state, action)
total_reward = total_reward + reward
step_cnt += 1
if step_cnt % 4 == 0:
task_screens.append(CartPole.render(t_state, 0))
print('reward={}, steps={}'.format(total_reward, step_cnt))
# Visualize the rollout.
gif_file = os.path.join(log_dir, 'rand_cartpole.gif')
task_screens[0].save(
gif_file, save_all=True, append_images=task_screens[1:], loop=0)
Image(open(gif_file,'rb').read())
```
The random policy does not solve the cart-pole task, but our implementation seems to be correct. Let's now plug in this task to EvoJAX.
```
train_task = CartPole(test=False)
test_task = CartPole(test=True)
# We use the same policy and solver to solve this "new" task.
policy = MLPPolicy(
input_dim=train_task.obs_shape[0],
hidden_dims=[64, 64],
output_dim=train_task.act_shape[0],
logger=logger,
)
solver = PGPE(
pop_size=64,
param_size=policy.num_params,
optimizer='adam',
center_learning_rate=0.05,
seed=seed,
)
trainer = Trainer(
policy=policy,
solver=solver,
train_task=train_task,
test_task=test_task,
max_iter=600,
log_interval=100,
test_interval=200,
n_repeats=5,
n_evaluations=128,
seed=seed,
log_dir=log_dir,
logger=logger,
)
_ = trainer.run()
# Let's visualize the learned policy.
def render(task, algo, policy):
"""Render the learned policy."""
task_reset_fn = jax.jit(test_task.reset)
policy_reset_fn = jax.jit(policy.reset)
step_fn = jax.jit(test_task.step)
act_fn = jax.jit(policy.get_actions)
params = algo.best_params[None, :]
task_s = task_reset_fn(jax.random.PRNGKey(seed=seed)[None, :])
policy_s = policy_reset_fn(task_s)
images = [CartPole.render(task_s, 0)]
done = False
step = 0
reward = 0
while not done:
act, policy_s = act_fn(task_s, params, policy_s)
task_s, r, d = step_fn(task_s, act)
step += 1
reward = reward + r
done = bool(d[0])
if step % 3 == 0:
images.append(CartPole.render(task_s, 0))
print('reward={}'.format(reward))
return images
imgs = render(test_task, solver, policy)
gif_file = os.path.join(log_dir, 'trained_cartpole.gif')
imgs[0].save(
gif_file, save_all=True, append_images=imgs[1:], duration=40, loop=0)
Image(open(gif_file,'rb').read())
```
Nice! EvoJAX is able to solve the new cart-pole task within a minute.
In this tutorial, we walked you through the process of creating tasks from scratch. The two examples we used are simple and are supposed to help you understand the interfaces. If you are interested in learning more, please check out our GitHub [repo](https://github.com/google/evojax/tree/main/evojax/task).
Please let us (evojax-dev@google.com) know if you have any problems or suggestions, thanks!
| github_jupyter |
# Categorical encoders
Examples of how to use the different categorical encoders using the Titanic dataset.
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from feature_engine import categorical_encoders as ce
from feature_engine.missing_data_imputers import CategoricalVariableImputer
pd.set_option('display.max_columns', None)
# Load titanic dataset from OpenML
def load_titanic():
data = pd.read_csv('https://www.openml.org/data/get_csv/16826755/phpMYEkMl')
data = data.replace('?', np.nan)
data['cabin'] = data['cabin'].astype(str).str[0]
data['pclass'] = data['pclass'].astype('O')
data['age'] = data['age'].astype('float')
data['fare'] = data['fare'].astype('float')
data['embarked'].fillna('C', inplace=True)
data.drop(labels=['boat', 'body', 'home.dest'], axis=1, inplace=True)
return data
# load data
data = load_titanic()
data.head()
data.isnull().sum()
# we will encode the below variables, they have no missing values
data[['cabin', 'pclass', 'embarked']].isnull().sum()
data[['cabin', 'pclass', 'embarked']].dtypes
# let's separate into training and testing set
X_train, X_test, y_train, y_test = train_test_split(
data.drop(['survived', 'name', 'ticket'], axis=1), data['survived'], test_size=0.3, random_state=0)
X_train.shape, X_test.shape
```
## CountFrequencyCategoricalEncoder
The CountFrequencyCategoricalEncoder replaces the categories by the count or frequency of the observations in the train set for that category.
If we select "count" in the encoding_method, then for the variable colour, if there are 10 observations in the train set that show colour blue, blue will be replaced by 10. Alternatively, if we select "frequency" in the encoding_method, if 10% of the observations in the train set show blue colour, then blue will be replaced by 0.1.
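The mapping learned by the encoder is essentially what a pandas `value_counts()` over the train set gives; a toy sketch of the idea (the colour data here is made up):
```
import pandas as pd

train = pd.DataFrame({'colour': ['blue'] * 10 + ['red'] * 5 + ['green'] * 85})

count_map = train['colour'].value_counts().to_dict()                # e.g. {'green': 85, 'blue': 10, 'red': 5}
freq_map = train['colour'].value_counts(normalize=True).to_dict()   # e.g. {'green': 0.85, 'blue': 0.1, 'red': 0.05}

# Encoding a column then amounts to mapping each label to its count/frequency.
print(train['colour'].map(freq_map).head())
```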
### Frequency
Labels are replaced by the percentage of the observations that show that label in the train set.
```
count_enc = ce.CountFrequencyCategoricalEncoder(
encoding_method='frequency', variables=['cabin', 'pclass', 'embarked'])
count_enc.fit(X_train)
# we can explore the encoder_dict_ to find out the category replacements.
count_enc.encoder_dict_
# transform the data: see the change in the head view
train_t = count_enc.transform(X_train)
test_t = count_enc.transform(X_test)
test_t.head()
test_t['pclass'].value_counts().plot.bar()
```
### Count
Labels are replaced by the number of the observations that show that label in the train set.
```
# this time we encode only 1 variable
count_enc = ce.CountFrequencyCategoricalEncoder(encoding_method='count',
variables='cabin')
count_enc.fit(X_train)
# we can find the mappings in the encoder_dict_ attribute.
count_enc.encoder_dict_
# transform the data: see the change in the head view for Cabin
train_t = count_enc.transform(X_train)
test_t = count_enc.transform(X_test)
test_t.head()
test_t['pclass'].value_counts().plot.bar()
```
### Select categorical variables automatically
If we don't indicate which variables we want to encode, the encoder will find all categorical variables
```
# this time we omit the variables argument
count_enc = ce.CountFrequencyCategoricalEncoder(encoding_method = 'count')
count_enc.fit(X_train)
# we can see that the encoder selected automatically all the categorical variables
count_enc.variables
# transform the data: see the change in the head view
train_t = count_enc.transform(X_train)
test_t = count_enc.transform(X_test)
test_t.head()
```
Note that if there are labels in the test set that were not present in the train set, the transformer will introduce NaN, and raise a warning.
## MeanCategoricalEncoder
The MeanCategoricalEncoder replaces the labels of the variables by the mean value of the target for that label. For example, in the variable colour, if the mean value of the binary target is 0.5 for the label blue, then blue is replaced by 0.5
```
# we will transform 3 variables
mean_enc = ce.MeanCategoricalEncoder(variables=['cabin', 'pclass', 'embarked'])
# Note: the MeanCategoricalEncoder needs the target to fit
mean_enc.fit(X_train, y_train)
# see the dictionary with the mappings per variable
mean_enc.encoder_dict_
mean_enc.variables
# we can see the transformed variables in the head view
train_t = mean_enc.transform(X_train)
test_t = mean_enc.transform(X_test)
test_t.head()
```
### Automatically select the variables
This encoder will select all categorical variables to encode when no variables are specified in the call to the encoder.
```
mean_enc = ce.MeanCategoricalEncoder()
mean_enc.fit(X_train, y_train)
mean_enc.variables
# we can see the transformed variables in the head view
train_t = mean_enc.transform(X_train)
test_t = mean_enc.transform(X_test)
test_t.head()
```
## WoERatioCategoricalEncoder
This encoder replaces the labels by the weight of evidence or the ratio of probabilities. It only works for binary classification.
The weight of evidence is given by: np.log( p(1) / p(0) )
The target probability ratio is given by: p(1) / p(0)
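A toy numerical illustration of these two formulas, assuming `p1` and `p0` denote the target probabilities p(1) and p(0) for a given label:
```
import numpy as np

# Hypothetical label with 30 positive and 50 negative observations in the train set.
p1 = 30.0 / 80.0
p0 = 50.0 / 80.0

woe = np.log(p1 / p0)    # weight of evidence
ratio = p1 / p0          # target probability ratio
print(woe, ratio)
```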
### Weight of evidence
```
## Rare value encoder first to reduce the cardinality
# see below for more details on this encoder
rare_encoder = ce.RareLabelCategoricalEncoder(
tol=0.03, n_categories=2, variables=['cabin', 'pclass', 'embarked'])
rare_encoder.fit(X_train)
# transform
train_t = rare_encoder.transform(X_train)
test_t = rare_encoder.transform(X_test)
woe_enc = ce.WoERatioCategoricalEncoder(
encoding_method='woe', variables=['cabin', 'pclass', 'embarked'])
# to fit you need to pass the target y
woe_enc.fit(train_t, y_train)
woe_enc.encoder_dict_
# transform and visualise the data
train_t = woe_enc.transform(train_t)
test_t = woe_enc.transform(test_t)
test_t.head()
```
### Ratio
Similarly, it is recommended to group rare labels and reduce high cardinality before using this encoder.
```
# rare label encoder first: transform
train_t = rare_encoder.transform(X_train)
test_t = rare_encoder.transform(X_test)
ratio_enc = ce.WoERatioCategoricalEncoder(
encoding_method='ratio', variables=['cabin', 'pclass', 'embarked'])
# to fit we need to pass the target y
ratio_enc.fit(train_t, y_train)
ratio_enc.encoder_dict_
# transform and visualise the data
train_t = ratio_enc.transform(train_t)
test_t = ratio_enc.transform(test_t)
test_t.head()
```
## OrdinalCategoricalEncoder
The OrdinalCategoricalEncoder will replace the variable labels by digits, from 1 to the number of different labels. If we select "arbitrary", then the encoder will assign numbers as the labels appear in the variable (first come first served). If we select "ordered", the encoder will assign numbers following the mean of the target value for that label. So labels for which the mean of the target is higher will get the number 1, and those where the mean of the target is smallest will get the number n.
### Ordered
```
# we will encode 3 variables:
ordinal_enc = ce.OrdinalCategoricalEncoder(
encoding_method='ordered', variables=['pclass', 'cabin', 'embarked'])
# for this encoder, we need to pass the target as argument
# if encoding_method='ordered'
ordinal_enc.fit(X_train, y_train)
# here we can see the mappings
ordinal_enc.encoder_dict_
# transform and visualise the data
train_t = ordinal_enc.transform(X_train)
test_t = ordinal_enc.transform(X_test)
test_t.head()
```
### Arbitrary
```
ordinal_enc = ce.OrdinalCategoricalEncoder(encoding_method='arbitrary',
variables='cabin')
# for this encoder we don't need to add the target. You can leave it or remove it.
ordinal_enc.fit(X_train, y_train)
ordinal_enc.encoder_dict_
```
Note that the ordering of the different labels is not the same when we select "arbitrary" or "ordered"
```
# transform: see the numerical values in the former categorical variables
train_t = ordinal_enc.transform(X_train)
test_t = ordinal_enc.transform(X_test)
test_t.head()
```
### Automatically select categorical variables
This encoder also selects all the categorical variables if None is passed to the variables argument when calling the encoder.
```
ordinal_enc = ce.OrdinalCategoricalEncoder(encoding_method = 'arbitrary')
# for this encoder we don't need to add the target. You can leave it or remove it.
ordinal_enc.fit(X_train)
ordinal_enc.variables
# transform: see the numerical values in the former categorical variables
train_t = ordinal_enc.transform(X_train)
test_t = ordinal_enc.transform(X_test)
test_t.head()
```
## OneHotCategoricalEncoder
Performs one-hot encoding. The encoder can select how many different labels per variable to encode into binaries. When top_categories is set to None, all the categories will be transformed into binary variables. However, when top_categories is set to an integer, for example 10, then only the 10 most popular categories will be transformed into binary variables, and the rest will be discarded.
The encoder also has the option to create binary variables for all categories (drop_last=False), or to drop the binary variable for the last category (drop_last=True), which is useful for linear models.
### All binary, no top_categories
```
ohe_enc = ce.OneHotCategoricalEncoder(
top_categories=None,
variables=['pclass', 'cabin', 'embarked'],
drop_last=False)
ohe_enc.fit(X_train)
ohe_enc.drop_last
ohe_enc.encoder_dict_
train_t = ohe_enc.transform(X_train)
test_t = ohe_enc.transform(X_test)
test_t.head()
```
### Dropping the last category for linear models
```
ohe_enc = ce.OneHotCategoricalEncoder(
top_categories=None,
variables=['pclass', 'cabin', 'embarked'],
drop_last=True)
ohe_enc.fit(X_train)
ohe_enc.encoder_dict_
train_t = ohe_enc.transform(X_train)
test_t = ohe_enc.transform(X_test)
test_t.head()
```
### Selecting top_categories to encode
```
ohe_enc = ce.OneHotCategoricalEncoder(
top_categories=2,
variables=['pclass', 'cabin', 'embarked'],
drop_last=False)
ohe_enc.fit(X_train)
ohe_enc.encoder_dict_
train_t = ohe_enc.transform(X_train)
test_t = ohe_enc.transform(X_test)
test_t.head()
```
## RareLabelCategoricalEncoder
The RareLabelCategoricalEncoder groups labels that show a small number of observations in the dataset into a new category called 'Rare'. This helps to avoid overfitting.
The argument tol indicates the percentage of observations that the label needs to have in order not to be re-grouped into the "Rare" label. The argument n_categories indicates the minimum number of distinct categories that a variable needs to have for any of the labels to be re-grouped into rare. If the number of labels is smaller than n_categories, then the encoder will not group the labels for that variable.
```
## Rare value encoder
rare_encoder = ce.RareLabelCategoricalEncoder(
tol=0.03, n_categories=5, variables=['cabin', 'pclass', 'embarked'])
rare_encoder.fit(X_train)
# the encoder_dict_ contains a dictionary of the {variable: frequent labels} pair
rare_encoder.encoder_dict_
train_t = rare_encoder.transform(X_train)
test_t = rare_encoder.transform(X_test)
test_t.head()
```
### Automatically select all categorical variables
If no variable list is passed as argument, it selects all the categorical variables.
```
## Rare value encoder
rare_encoder = ce.RareLabelCategoricalEncoder(tol = 0.03, n_categories=5)
rare_encoder.fit(X_train)
rare_encoder.encoder_dict_
train_t = rare_encoder.transform(X_train)
test_t = rare_encoder.transform(X_test)
test_t.head()
```
| github_jupyter |
```
__author__ = 'Mike Fitzpatrick <mike.fitzpatrick@noirlab.edu>, Robert Nikutta <robert.nikutta@noirlab.edu>'
__version__ = '20211130'
__datasets__ = []
__keywords__ = []
```
## How to use the Data Lab *Store Client* Service
This notebook documents how to use the Data Lab virtual storage system via the store client service. This can be done either from a Python script (e.g. within this notebook) or from the command line using the <i>datalab</i> command.
### The storage manager service interface
The store client service simplifies access to the Data Lab virtual storage system. This section describes the store client service interface in case we want to write our own code against that rather than using one of the provided tools. The store client service accepts an HTTP GET call to the appropriate endpoint for the particular operation:
| Endpoint | Description | Req'd Parameters |
|----------|-------------|------------|
| /get | Retrieve a file | name |
| /put | Upload a file | name |
| /load | Load a file to vospace | name, endpoint |
| /cp | Copy a file/directory | from, to |
| /ln | Link a file/directory | from, to |
| /lock | Lock a node from write updates | name |
| /ls | Get a file/directory listing | name |
| /access | Determine file accessibility | name |
| /stat | File status info | name,verbose |
| /mkdir | Create a directory | name |
| /mv | Move/rename a file/directory | from, to |
| /rm | Delete a file | name |
| /rmdir | Delete a directory | name |
| /tag | Annotate a file/directory | name, tag |
For example, a call to <i>http://datalab.noirlab.edu/storage/get?name=vos://mag.csv</i> will retrieve the file '_mag.csv_' from the root directory of the user's virtual storage. Likewise, a python call using the _storeClient_ interface such as "_storeClient.get('vos://mag.csv')_" would get the same file.
#### Virtual storage identifiers
Files in the virtual storage are usually identified via the prefix "_vos://_". This shorthand identifier is resolved to a user's home directory of the storage space in the service. As a convenience, the prefix may optionally be omitted when the parameter refers to a node in the virtual storage. Navigation above a user's home directory is not supported; however, subdirectories within the space may be created and used as needed.
#### Authentication
The storage manager service requires a Data Lab security token. This needs to be passed as the value of the header keyword "X-DL-AuthToken" in any HTTP GET call to the service. If the token is not supplied, anonymous access is assumed, which provides access only to public storage spaces.
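As an illustration, a direct HTTP call to the `/get` endpoint might look like the following sketch (using the `requests` package; the token value is a placeholder for one obtained from the authentication service):
```python
import requests

token = 'my-datalab-token'   # placeholder: obtain a real token via authClient.login()

resp = requests.get('http://datalab.noirlab.edu/storage/get',
                    params={'name': 'vos://mag.csv'},
                    headers={'X-DL-AuthToken': token})
print(resp.status_code)
print(resp.text[:200])       # first few characters of the retrieved file
```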
### From Python code
The store client service can be called from Python code using the <i>datalab</i> module. This provides methods to access the various functions in the <i>storeClient</i> subpackage.
#### Initialization
This is the setup that is required to use the store client. The first thing to do is import the relevant Python modules and also retrieve our DataLab security token.
```
# Standard notebook imports
from getpass import getpass
from dl import authClient, storeClient
```
Comment out and run the cell below if you need to login to Data Lab:
```
## Get the authentication token for the user
#token = authClient.login(input("Enter user name: (+ENTER) "),getpass("Enter password: (+ENTER) "))
#if not authClient.isValidToken(token):
#    raise Exception('Token is not valid. Please check your username/password and execute this cell again.')
```
#### Listing a file/directory
We can see all the files that are in a specific directory or get a full listing for a specific file. In this case, we'll list the default virtual storage directory to use as a basis for changes we'll make below.
```
listing = storeClient.ls (name = 'vos://')
print (listing)
```
The *public* directory shown here is visible to all Data Lab users and provides a means of sharing data without having to setup special access. Similarly, the *tmp* directory is read-protected and provides a convenient temporary directory to be used in a workflow.
#### File Existence and Info
Aside from simply listing files, it's possible to test whether a named file already exists or to determine more information about it.
```
# A simple file existence test:
if storeClient.access ('vos://public'):
print ('User "public" directory exists')
if storeClient.access ('vos://public', mode='w'):
print ('User "public" directory is group/world writable')
else:
print ('User "public" directory is not group/world writable')
if storeClient.access ('vos://tmp'):
print ('User "tmp" directory exists')
if storeClient.access ('vos://tmp', mode='w'):
print ('User "tmp" directory is group/world writable')
else:
print ('User "tmp" directory is not group/world writable')
```
#### Uploading a file
Now we want to upload a new data file from our local disk to the virtual storage:
```
storeClient.put (to = 'vos://newmags.csv', fr = './newmags.csv')
print(storeClient.ls (name='vos://'))
```
#### Downloading a file
Let's say we want to download a file from our virtual storage space, in this case a query result that we saved to it in the "How to use the Data Lab query manager service" notebook:
```
storeClient.get (fr = 'vos://newmags.csv', to = './mymags.csv')
```
It is also possible to get the contents of a remote file directly into your notebook by specifying the location as an empty string:
```
data = storeClient.get (fr = 'vos://newmags.csv', to = '')
print (data)
```
#### Loading a file from a remote URL
It is possible to load a file directly to virtual storage from a remote URL (e.g. an "accessURL" for an image cutout, a remote data file, etc.) using the "storeClient.load()" method:
```
url = "http://datalab.noirlab.edu/svc/cutout?col=&siaRef=c4d_161005_022804_ooi_g_v1.fits.fz&extn=31&POS=335.0,0.0&SIZE=0.1"
storeClient.load('vos://cutout.fits',url)
```
#### Creating a directory
We can create a directory on the remote storage to be used for saving data later:
```
storeClient.mkdir ('vos://results')
```
#### Copying a file/directory
We want to put a copy of the file in a remote work directory:
```
storeClient.mkdir ('vos://temp')
print ("Before: " + storeClient.ls (name='vos://temp/'))
storeClient.cp (fr = 'vos://newmags.csv', to = 'vos://temp/newmags.csv',verbose=True)
print ("After: " + storeClient.ls (name='vos://temp/'))
print(storeClient.ls('vos://',format='long'))
```
Notice that in the *ls()* call we append the directory name with a trailing '/' to list the contents of the directory rather than the directory itself.
#### Linking to a file/directory
**WARNING**: Linking is currently **not** working in the Data Lab storage manager. This notebook will be updated when the problem has been resolved.
Sometimes we want to create a link to a file or directory. In this case, the link named by the *'fr'* parameter is created and points to the file/container named by the *'target'* parameter.
```
storeClient.ln ('vos://mags.csv', 'vos://temp/newmags.csv')
print ("Root dir: " + storeClient.ls (name='vos://'))
print ("Temp dir: " + storeClient.ls (name='vos://temp/'))
```
#### Moving/renaming a file/directory
We can move a file or directory:
```
storeClient.mv(fr = 'vos://temp/newmags.csv', to = 'vos://results')
print ("Results dir: " + storeClient.ls (name='vos://results/'))
```
#### Deleting a file
We can delete a file:
```
print ("Before: " + storeClient.ls (name='vos://'))
storeClient.rm (name = 'vos://mags.csv')
print ("After: " + storeClient.ls (name='vos://'))
```
#### Deleting a directory
We can also delete a directory, doing so also deletes the contents of that directory:
```
storeClient.rmdir(name = 'vos://temp')
```
#### Tagging a file/directory
**Warning**: Tagging is currently **not** working in the Data Lab storage manager. This notebook will be updated when the problem has been resolved.
We can tag any file or directory with arbitrary metadata:
```
storeClient.tag('vos://results', 'The results from my analysis')
```
#### Cleanup the demo directory of remaining files
```
storeClient.rm (name = 'vos://newmags.csv')
storeClient.rm (name = 'vos://results')
storeClient.ls (name = 'vos://')
```
### Using the datalab command
The <i>datalab</i> command provides an alternate command-line way to work with the virtual storage system through its file-management subcommands, which is especially useful if you want to interact with the store client from your local computer. Please have the `datalab` command line utility installed first (for install instructions see https://github.com/astro-datalab/datalab ).
The cells below are commented out. Copy and paste any of them (without the comment sign) and run locally.
#### Log in once
```
#!datalab login
```
and enter the credentials as prompted.
#### Downloading a file
Let's say we want to download a file from our virtual storage space:
```
#!datalab get fr="vos://mags.csv" to="./mags.csv"
```
#### Uploading a file
Now we want to upload a new data file from our local disk:
```
#!datalab put fr="./newmags.csv" to="vos://newmags.csv"
```
#### Copying a file/directory
We want to put a copy of the file in a remote work directory:
```
#!datalab cp fr="vos://newmags.csv" to="vos://temp/newmags.csv"
```
#### Linking to a file/directory
Sometimes we want to create a link to a file or directory:
```
#!datalab ln fr="vos://temp/mags.csv" to="vos://mags.csv"
```
#### Listing a file/directory
We can see all the files that are in a specific directory or get a full listing for a specific file:
```
#!datalab ls name="vos://temp"
```
#### Creating a directory
We can create a directory:
```
#!datalab mkdir name="vos://results"
```
#### Moving/renaming a file/directory
We can move a file or directory:
```
#!datalab mv fr="vos://temp/newmags.csv" to="vos://results"
```
#### Deleting a file
We can delete a file:
```
#!datalab rm name="vos://temp/mags.csv"
```
#### Deleting a directory
We can also delete a directory:
```
#!datalab rmdir name="vos://temp"
```
#### Tagging a file/directory
We can tag any file or directory with arbitrary metadata:
```
#!datalab tag name="vos://results" tag="The results from my analysis"
```
| github_jupyter |
# *Bosonic statistics and the Bose-Einstein condensation*
`Doruk Efe Gökmen -- 30/08/2018 -- Ankara`
## Non-interacting ideal bosons
The non-interacting Bose gas is the only system in physics that can undergo a phase transition without mutual interactions between its components.
Let us enumerate the energy eigenstates of a single 3D boson in a harmonic trap with the following program.
```
Emax = 30
States = []
for E_x in range(Emax):
for E_y in range(Emax):
for E_z in range(Emax):
States.append(((E_x + E_y + E_z), (E_x, E_y, E_z)))
States.sort()
for k in range(Emax):
print '%3d' % k, States[k][0], States[k][1]
```
Here we can see that the degeneracy at energy level $E_n$, which we denote by $\mathcal{N}(E_n)$, is $\frac{(n+1)(n+2)}{2}$. Alternatively, we may use a more systematic approach. We can calculate the number of states at the $n$th energy level as $\mathcal{N}(E_n)=\sum_{E_x=0}^{E_n}\sum_{E_y=0}^{E_n}\sum_{E_z=0}^{E_n}\delta_{(E_x+E_y+E_z),E_n}$, where $\delta_{j,k}$ is the Kronecker delta. In the continuous limit we have the Dirac delta function
$\delta_{j,k}\rightarrow\delta(j-k) =\int_{-\pi}^\pi \frac{\text{d}\lambda}{2\pi}e^{i(j-k)\lambda}$. (1)
If we insert this function into the above expression, we get $\mathcal{N}(E_n)=\int_{-\pi}^\pi \frac{\text{d}\lambda}{2\pi}e^{-iE_n\lambda}\left(\sum_{E_x=0}^{E_n}e^{iE_x\lambda}\right)^3$. The geometric sum can be evaluated, hence we have the integral $\mathcal{N}(E_n)=\int_{-\pi}^\pi \frac{\text{d}\lambda}{2\pi}e^{-iE_n\lambda}\left[\frac{1-e^{i\lambda (n+1)}}{1-e^{i\lambda}}\right]^3$. The integration range corresponds to a circular contour $\mathcal{C}$ of radius 1 centered at 0 in the complex plane. If we define $z=e^{i\lambda}$, the integral transforms into $\mathcal{N}(E_n)=\frac{1}{2\pi i}\oint_{\mathcal{C}}\frac{\text{d}z}{z^{n+1}}\left[\frac{1-z^{n+1}}{1-z}\right]^3$. Using the residue theorem, this integral can be evaluated by determining the coefficient of the $z^{-1}$ term in the Laurent series of $\frac{1}{z^{n+1}}\left[\frac{1-z^{n+1}}{1-z}\right]^3$, which is $(n+1)(n+2)/2$. Hence we recover the previous result.
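A quick numerical check of this degeneracy formula, reusing the brute-force enumeration idea from the cell above (written in the same Python 2 style as the rest of this notebook):
```
# Compare the brute-force count of 3D harmonic-oscillator states at level n
# with the closed-form degeneracy (n + 1)(n + 2) / 2.
Emax = 10
for n in range(Emax):
    count = len([1 for E_x in range(n + 1) for E_y in range(n + 1)
                 for E_z in range(n + 1) if E_x + E_y + E_z == n])
    print n, count, (n + 1) * (n + 2) // 2
```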
##### Five boson bounded trap model
Consider 5 bosons in the harmonic trap, but with a cutoff on the single-particle energies: $E_\sigma\leq 4$. There are $35$ possible single-particle energy states, labelled $\sigma = 0,\dots,34$. For this model, the above naive enumeration of the energy states still works. We can label the state of each of the 5 particles by $\sigma_i$, so that $\{\text{5-particle state}\}=\{\sigma_1,\cdots,\sigma_5\}$. The partition function for this system is given by $Z(\beta)=\sum_{0\leq\sigma_1\leq\cdots\leq\sigma_5\leq 34}e^{-\beta E(\sigma_1,\cdots,\sigma_5)}$. In the following program, the average occupation number of the ground state per particle (which corresponds to the condensate fraction) is calculated at different temperatures. However, due to the nested for loops, this method becomes very inconvenient for larger numbers of particles.
```
%pylab inline
import math, numpy as np, pylab as plt
#calculate the partition function for 5 bosons by stacking the bosons in one of the N_states
#number of possible states and counting only a specific order of them (they are indistinguishable)
def bosons_bounded_harmonic(beta, N):
Energy = [] #initialise the vector that the energy values are saved with enumeration
n_states_1p = 0 #initialise the total number of single trapped boson states
for n in range(N + 1):
degeneracy = (n + 1) * (n + 2) / 2 #degeneracy in the 3D harmonic oscillator
Energy += [float(n)] * degeneracy
n_states_1p += degeneracy
n_states_5p = 0 #initialise the total number states of 5 trapped bosons
Z = 0.0 #initialise the partition function
N0_mean = 0.0
E_mean = 0.0
for s_0 in range(n_states_1p):
for s_1 in range(s_0, n_states_1p): #consider the order s_0<s_1... to avoid overcounting
for s_2 in range(s_1, n_states_1p):
for s_3 in range(s_2, n_states_1p):
for s_4 in range(s_3, n_states_1p):
n_states_5p += 1
state = [s_0, s_1, s_2, s_3, s_4] #construct the state of each 5 boson
E = sum(Energy[s] for s in state) #calculate the total energy by above enumeration
Z += math.exp(-beta * E) #canonical partition function
E_mean += E * math.exp(-beta * E) #avg. total energy
N0_mean += state.count(0) * math.exp(-beta * E) #avg. ground level occupation number
return n_states_5p, Z, E_mean, N0_mean
N = 4 #the energy cutoff for each boson
beta = 1.0 #inverse temperature
n_states_5p, Z, E_mean, N0_mean = bosons_bounded_harmonic(beta, N)
print 'Temperature:', 1 / beta, 'Total number of possible states:', n_states_5p, '| Partition function:', Z,\
'| Average energy per particle:', E_mean / Z / 5.0,\
'| Condensate fraction (ground state occupation per particle):', N0_mean / Z / 5.0
cond_frac = []
temperature = []
for T in np.linspace(0.1, 1.0, 10):
n_states_5p, Z, E_mean, N0_mean = bosons_bounded_harmonic(1.0 / T, N)
cond_frac.append(N0_mean / Z / 5.0)
temperature.append(T)
plt.plot(temperature, cond_frac)
plt.title('Condensate? fraction for the $N=5$ bosons bounded trap model ($N_{bound}=%i$)' % N, fontsize = 14)
plt.xlabel('$T$', fontsize = 14)
plt.ylabel('$\\langle N_0 \\rangle$ / N', fontsize = 14)
plt.grid()
```
Here we see that all particles are in the ground state at very low temperatures; this is a simple consequence of Boltzmann statistics, since at zero temperature all the particles populate the ground state. Bose-Einstein condensation is something else: it means that a finite fraction of the system is in the ground state at temperatures much higher than the gap between the ground state and the first excited state (which is one in our system). Bose-Einstein condensation occurs when, all of a sudden, a finite fraction of particles populate the single-particle ground state. In a trap, this happens at higher and higher temperatures as we increase the particle number.
Alternatively, we can characterise any single particle state $\sigma=0,\cdots,34$ by an occupation number $n_\sigma$. Using this occupation number representation, the energy is given by $E=n_0E_0+\cdots + n_{34}E_{34}$, and the partition function is $Z(\beta)=\sum^{N=5}_{n_0=0}\cdots\sum^{N=5}_{n_{34}=0}e^{-\beta(n_0E_0+\cdots + n_{34}E_{34})}\delta_{(n_0+\cdots + n_{34}),N=5}$. Using the integral representation of the Kronecker delta given in (1), and evaluating the resulting sums, we have
$Z(\beta)=\int_{-\pi}^\pi\frac{\text{d}\lambda}{2\pi}e^{-iN\lambda}\Pi_{E=0}^{E_\text{max}}[f_E(\beta,\lambda)]^{\mathcal{N}(E)}$. (2)
### The bosonic density matrix
**Distinguishable particles:** The partition function of $N$ distinguishable particles is given by $Z^D(\beta)=\int \text{d}\mathbf{x}\,\rho(\mathbf{x},\mathbf{x},\beta)$, where $\mathbf{x}=\{x_0,\cdots,x_{N-1}\}$ collects the positions of the particles $i=0,\cdots,N-1$, and $\rho$ is the $N$-distinguishable-particle density matrix. If the particles are non-interacting (ideal), then the density matrix can simply be decomposed into $N$ single-particle density matrices as
$\rho^{D,\text{ideal}}(\mathbf{x},\mathbf{x}',\beta)=\Pi_{i=0}^{N-1}\rho(x_i,x_i',\beta)$, (3)
with the single particle density matrix $\rho(x_i,x_i',\beta)=\sum_{\lambda_i=0}^{\infty}\psi_{\lambda_i}(x_i)\psi_{\lambda_i}^{*}(x'_i)e^{-\beta E_{\lambda_i}}$, where $\lambda_i$ is the energy eigenstate of the $i$th particle. That means that the quantum statistical paths of the two particles are independent. More generally, the interacting many distinguishable particle density matrix is
$\rho^{D}(\mathbf{x},\mathbf{x}',\beta)=\sum_{\sigma}\Psi_{\sigma}(\mathbf{x})\Psi_{\sigma}^{*}(\mathbf{x}')e^{-\beta E_{\sigma}}$, (4)
where the sum is done over the all possible $N$ particle states $\sigma=\{\lambda_0,\cdots,\lambda_{N-1}\}$. The interacting paths are described by the paths whose weight are modified through Trotter decomposition, which *correlates* those paths.
**Indistinguishable particles:** The particles $\{0,\cdots,N-1\}$ are indistinguishable if and only if
$\Psi_{\sigma_\text{id}}(\mathbf{x})=\xi^\mathcal{P}\Psi_{\sigma_\text{id}}(\mathcal{P}\mathbf{x})$ $\forall \sigma$, (5)
where they are in an indistinguishable state ${\sigma_\text{id}}$, $\mathcal{P}$ is any $N$ particle permutation and the *species factor* $\xi$ is $-1$ (antisymmetric) for fermions, and $1$ (symmetric) for bosons. Here we focus on the bosonic case. Since there are $N!$ such permutations, if the particles are indistinguishable bosons, using (5) we get $\frac{1}{N!}\sum_{\mathcal{P}}\Psi_\sigma(\mathcal{P}x)=\Psi_\sigma(\mathbf{x})$, i.e. $\Psi_\sigma(x)=\Psi_{\sigma_\text{id}}(x)$. Furthermore, from a group theory argument it follows that $\frac{1}{N!}\sum_{\mathcal{P}}\Psi_\sigma(\mathcal{P}x)=0$ otherwise (fermionic or distinguishable). This can be expressed in a more compact form as
$\frac{1}{N!}\sum_{\mathcal{P}}\Psi_\sigma(\mathcal{P}x)=\delta_{{\sigma_\text{id}},\sigma}\Psi_\sigma(x)$. (6)
By definition, the bosonic density matrix should be $\rho^\text{bose}(\mathbf{x},\mathbf{x}',\beta)=\sum_{\sigma=\{\sigma_\text{id}\}}\Psi_\sigma(\mathbf{x})\Psi^{*}_\sigma(\mathbf{x}')e^{-\beta E_\sigma}=\sum_{\sigma}\delta_{{\sigma_\text{id}},\sigma}\Psi_\sigma(\mathbf{x})\Psi^{*}_\sigma(\mathbf{x}')e^{-\beta E_\sigma}$, i.e. a sum over all $N$ particle states which are symmetric. If we insert Eqn. (6) here in the latter equality, we get $\rho^\text{bose}(\mathbf{x},\mathbf{x}',\beta)=\frac{1}{N!}\sum_\sigma\Psi_\sigma(\mathbf{x})\sum_\mathcal{P}\Psi^{*}_\sigma(\mathcal{P}\mathbf{x}')e^{-\beta E_\sigma}$. Exchanging the sums, we get $\rho^\text{bose}(\mathbf{x},\mathbf{x}',\beta)=\frac{1}{N!}\sum_\mathcal{P}\sum_\sigma\Psi_\sigma(\mathbf{x})\Psi^{*}_\sigma(\mathcal{P}\mathbf{x}')e^{-\beta E_\sigma}$. In other words, we simply have
$\boxed{\rho^\text{bose}(\mathbf{x},\mathbf{x}',\beta)=\frac{1}{N!}\sum_\mathcal{P}\rho^D(\mathbf{x},\mathcal{P}\mathbf{x}',\beta)}$, (7)
that is the average of the distinguishable density matrices over all permutations of $N$ particles.
For ideal bosons, we have $\boxed{\rho^\text{bose, ideal}(\mathbf{x},\mathbf{x}',\beta)=\frac{1}{N!}\sum_\mathcal{P}\rho(x_0,\mathcal{P}x_0',\beta)\rho(x_1,\mathcal{P}x_1',\beta)\cdots\rho(x_{N-1},\mathcal{P}x_{N-1}',\beta)}$. (8)
The partition function is therefore
$Z^\text{bose}(\beta)=\frac{1}{N!}\int \text{d}x_0\cdots\text{d}x_{N-1}\sum_\mathcal{P}\rho^D(\mathbf{x},\mathcal{P}\mathbf{x},\beta)=\frac{1}{N!}\sum_\mathcal{P}Z_\mathcal{P}$, (9)
i.e. an integral over paths and an average over all permutations. We should therefore sample both positions and permutations.
For fermions, the sum over permutations $\mathcal{P}$ involve a weighting with factor $(-1)^{\mathcal{P}}$:
$\rho^\text{fermi}(\mathbf{x},\mathbf{x}',\beta)=\frac{1}{N!}\sum_\mathcal{P}(-1)^\mathcal{P}\rho^D(\mathbf{x},\mathcal{P}\mathbf{x}',\beta)$
Therefore for fermions corresponding path integrals are nontrivial, and they involve Grassmann variables (see e.g. Negele, Orland https://www.amazon.com/Quantum-Many-particle-Systems-Advanced-Classics/dp/0738200522 ).
#### Sampling permutations
The following Markov-chain algorithm samples permutations of $N$ elements of a list $L$. The partition function for uniformly distributed permutations $\mathcal{P}$ is $Y_N=\sum_\mathcal{P}1=N!$.
```
import random
N = 3 #length of the list
statistics = {}
L = range(N) #initialise the list
nsteps = 10
for step in range(nsteps):
i = random.randint(0, N - 1) #pick two random indices i and j from the list L
j = random.randint(0, N - 1)
L[i], L[j] = L[j], L[i] #exchange the i'th and j'th elements
if tuple(L) in statistics:
statistics[tuple(L)] += 1 #if a certain configuration appears again, add 1 to its count
else:
statistics[tuple(L)] = 1 #if a certain configuration for the first time, give it a count of 1
print L
print range(N)
print
for item in statistics:
print item, statistics[item]
```
Let us look at the permutation cycles and their frequency of occurrence:
```
import random
N = 20 #length of the list
stats = [0] * (N + 1) #initialise the "stats" vector
L = range(N) #initialise the list
nsteps = 1000000 #number of steps
for step in range(nsteps):
i = random.randint(0, N - 1) #pick two random indices i and j from the list L
j = random.randint(0, N - 1)
L[i], L[j] = L[j], L[i] #exchange the i'th and j'th elements in the list L
#Calculate the lengths of the permutation cycles in list L
if step % 100 == 0: #i.e. at each 100 steps
cycle_dict = {} #initialise the permutation cycle dictionary
for k in range(N): #loop over the list length,where keys (k) represent the particles
cycle_dict[k] = L[k] #and the values (L) are for the successors of the particles in the perm. cycle
while cycle_dict != {}: #i.e. when the cycle dictionary is not empty?
starting_element = cycle_dict.keys()[0] #save the first (0th) element in the cycle as the starting element
cycle_length = 0 #initialise the cycle length
old_element = starting_element #ancillary variable
while True:
cycle_length += 1 #increase the cycle length while...
new_element = cycle_dict.pop(old_element) #get the successor of the old element in the perm. cycle
if new_element == starting_element: break #the new element is the same as the first one (cycle complete)
else: old_element = new_element #move on to the next successor in the perm. cycle
stats[cycle_length] += 1 #increase the number of occurrences of a cycle of that length by 1
for k in range(1, N + 1): #print the cycle lengths and their number of occurrences
print k, stats[k]
```
The partition function of permutations $\mathcal{P}$ on a list of lentgth $N$ is $Y_N=\sum_\mathcal{P}\text{weight}(\mathcal{P})$. Let $z_n$ be the weight of a permutation cycle of length $n$. Then, the permutation $[0,1,2,3]\rightarrow[0,1,2,3]$, which can be represented as $(0)(1)(2)(3)$, has the weight $z_1^4$; similarly, $(0)(12)(3)$ would have $z_1^2z_2$, etc.
Generally, the cycle $\{n_1,\cdots,n_{k-1},\text{last element}\}$, i.e. the cycle containing the last element, has a length $k$, with the weight $z_k$. The remaining $N-k$ elements have the partition function $Y_{(N-k)}$. Hence, the total partition function is given by $Y_N=\sum_{k=1}^Nz_k\{\text{# of choices for} \{n_1,\cdots,n_{k-1}\}\}\{\text{# of cycles with} \{n_1,\cdots,n_{k}\}\}Y_{N-k}$
$\implies Y_N=\sum_{k=1}^N z_k{{N-1}\choose{k-1}}(k-1)!Y_{N-k}$ which leads to the following recursion formula
$\boxed{Y_N=\frac{1}{N}\sum_{k=1}^N z_k\frac{N!}{(N-k)!}Y_{N-k}, (\text{with }Y_0=1)}$. (10)
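A short sketch of this recursion in the same Python 2 style as the rest of this notebook; with all cycle weights $z_k = 1$ it reproduces $Y_N = N!$:
```
import math

def Y_recursion(N, z):
    # Partition function of permutations, with z[k] the weight of a cycle of length k.
    Y = [1.0]  # Y_0 = 1
    for M in range(1, N + 1):
        Y_M = sum(z[k] * math.factorial(M) / math.factorial(M - k) * Y[M - k]
                  for k in range(1, M + 1)) / float(M)
        Y.append(Y_M)
    return Y

N = 6
z = dict((k, 1.0) for k in range(1, N + 1))  # uniform cycle weights z_k = 1
print Y_recursion(N, z)[N], math.factorial(N)  # both give N! = 720
```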
***Using the convolution property, we can regard the $l$ bosons in a permutation cycle of length $l$ at temperature $1/\beta$ as a single boson at a temperature $1/(l\beta)$.***
*Example 1:* Consider the permutation $[0,3,1,2]\rightarrow[0,1,2,3]$ consists of the following permutation cycle $1\rightarrow 2 \rightarrow 3 \rightarrow 1$ of length 3 ($\mathcal{P}=(132)$). This corresponds to the partition function $Z^\text{bose}_{(0)(132)}(\beta)=\int \text{d}x_0\rho(x_0,x_0,\beta)\int\text{d}x_1\int\text{d}x_2\int\text{d}x_3\rho(x_1,x_3,\beta)\rho(x_3,x_2,\beta)\rho(x_2,x_1,\beta)$. Using the convolution property, we have: $\int\text{d}x_3\rho(x_1,x_3,\beta)\rho(x_3,x_2,\beta)=\rho(x_1,x_2,2\beta)\implies\int\text{d}x_2\rho(x_1,x_2,2\beta)\rho(x_2,x_1,\beta)=\rho(x_1,x_1,3\beta)$. The single particle partition function is defined as $z(\beta)=\int\text{d}\mathbf{x}\rho(\mathbf{x},\mathbf{x},\beta) =\left[ \int\text{d}x\rho(x,x,\beta)\right]^3$.
$\implies Z^\text{bose}_{(0)(132)}(\beta)=\int \text{d}x_0\rho(x_0,x_0,\beta)\int\text{d}x_1\rho(x_1,x_1,3\beta)=z(\beta)z(3\beta)$.
*Example 2:* $Z^\text{bose}_{(0)(1)(2)(3)}(\beta)=z(\beta)^4$.
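The convolution property used in these examples can be checked numerically with the normalised harmonic-oscillator density matrix. A small sketch, using the standard closed-form expression for $m=\omega=\hbar=1$ (this expression is not part of the course code above):
```
import math, numpy

def rho_harm_exact(x, xp, beta):
    #normalised 1-d harmonic-oscillator density matrix (m = omega = hbar = 1)
    return (math.exp(-((x + xp) ** 2 * math.tanh(beta / 2.0) +
                       (x - xp) ** 2 / math.tanh(beta / 2.0)) / 4.0) /
            math.sqrt(2.0 * math.pi * math.sinh(beta)))

beta = 1.0
x, xp = 0.3, -0.7
grid = numpy.linspace(-10.0, 10.0, 2001)
#numerical convolution over the intermediate coordinate
integrand = [rho_harm_exact(x, y, beta) * rho_harm_exact(y, xp, beta) for y in grid]
lhs = numpy.trapz(integrand, grid)
rhs = rho_harm_exact(x, xp, 2.0 * beta)
print('convolution: %.8f   direct: %.8f' % (lhs, rhs))
```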
Simulation of bosons in a harmonic trap (note that there are no intermediate slices in the sampled paths, since the paths are sampled from the exact distribution):
```
import random, math, pylab, mpl_toolkits.mplot3d
#3 dimensional Levy algorithm, used for resampling the positions of entire permutation cycles of bosons
#to sample positions
def levy_harmonic_path(k, beta):
#direct sample (rejection-free) three coordinate values, use diagonal density matrix
#k corresponds to the length of the permutation cycle
xk = tuple([random.gauss(0.0, 1.0 / math.sqrt(2.0 *
math.tanh(k * beta / 2.0))) for d in range(3)])
x = [xk] #save the 3 coordinate values xk into a 3d vector x (final point)
for j in range(1, k): #loop runs through the permutation cycle
#Levy sampling (sample a point given the latest sample and the final point)
Upsilon_1 = (1.0 / math.tanh(beta) +
1.0 / math.tanh((k - j) * beta))
Upsilon_2 = [x[j - 1][d] / math.sinh(beta) + xk[d] /
math.sinh((k - j) * beta) for d in range(3)]
x_mean = [Upsilon_2[d] / Upsilon_1 for d in range(3)]
sigma = 1.0 / math.sqrt(Upsilon_1)
dummy = [random.gauss(x_mean[d], sigma) for d in range(3)] #direct sample the j'th point
x.append(tuple(dummy)) #construct the 3d path (permutation cycle) by appending tuples
return x
#(Non-diagonal) harmonic oscillator density matrix, used for organising the exchange of two elements
#to sample permutations
def rho_harm(x, xp, beta):
Upsilon_1 = sum((x[d] + xp[d]) ** 2 / 4.0 *
math.tanh(beta / 2.0) for d in range(3))
Upsilon_2 = sum((x[d] - xp[d]) ** 2 / 4.0 /
math.tanh(beta / 2.0) for d in range(3))
return math.exp(- Upsilon_1 - Upsilon_2)
N = 256 #number of bosons
T_star = 0.3
beta = 1.0 / (T_star * N ** (1.0 / 3.0)) #T* is the temperature rescaled by N**(1/3), the natural temperature scale of the trapped ideal Bose gas
nsteps = 1000000
positions = {} #initial position dictionary
for j in range(N): #loop over all particles, initial permutation is identity (k=1)
a = levy_harmonic_path(1, beta) #initial positions (outputs a single 3d point)
positions[a[0]] = a[0] #positions of particles are keys for themselves in the initial position dict.
for step in range(nsteps):
boson_a = random.choice(positions.keys()) #randomly pick the position of boson "a" from the dict.
perm_cycle = [] #initialise the permutation cycle
while True: #compute the permutation cycle of the boson "a":
perm_cycle.append(boson_a) #construct the permutation cycle by appending the updated position of boson "a"
boson_b = positions.pop(boson_a) #remove and return (pop) the position of "a", save it as a temp. var.
if boson_b == perm_cycle[0]: break #if the cycle is completed, break the while loop
else: boson_a = boson_b #move boson "a" to position of "b" and continue permuting
k = len(perm_cycle) #length of the permutation cycle
#SAMPLE POSITIONS:
perm_cycle = levy_harmonic_path(k, beta) #resample the particle positions in the current permutation cycle
positions[perm_cycle[-1]] = perm_cycle[0] #assures that the new path is a "cycle" (last term maps to the first term)
for j in range(len(perm_cycle) - 1): #update the positions of bosons
positions[perm_cycle[j]] = perm_cycle[j + 1] #construct the "cycle": j -> j+1
#SAMPLE PERMUTATION CYCLES by exchanges:
#Pick two particles and attempt an exchange to sample permutations (with Metropolis acceptance rate):
a_1 = random.choice(positions.keys()) #pick the first random particle
b_1 = positions.pop(a_1) #save the random particle to a temporary variable
a_2 = random.choice(positions.keys()) #pick the second random particle
b_2 = positions.pop(a_2) #save the random particle to a temporary variable
weight_new = rho_harm(a_1, b_2, beta) * rho_harm(a_2, b_1, beta) #the new Metropolis acceptance rate
weight_old = rho_harm(a_1, b_1, beta) * rho_harm(a_2, b_2, beta) #the old Metropolis acceptance rate
if random.uniform(0.0, 1.0) < weight_new / weight_old:
positions[a_1] = b_2 #accept
positions[a_2] = b_1
else:
positions[a_1] = b_1 #reject
positions[a_2] = b_2
#Figure output:
fig = pylab.figure()
ax = mpl_toolkits.mplot3d.axes3d.Axes3D(fig)
ax.set_aspect('equal')
list_colors = ['b', 'g', 'r', 'c', 'm', 'y', 'k']
n_colors = len(list_colors)
dict_colors = {}
i_color = 0
# find and plot permutation cycles:
while positions:
x, y, z = [], [], []
starting_boson = positions.keys()[0]
boson_old = starting_boson
while True:
x.append(boson_old[0])
y.append(boson_old[1])
z.append(boson_old[2])
boson_new = positions.pop(boson_old)
if boson_new == starting_boson: break
else: boson_old = boson_new
len_cycle = len(x)
if len_cycle > 2:
x.append(x[0])
y.append(y[0])
z.append(z[0])
if len_cycle in dict_colors:
color = dict_colors[len_cycle]
ax.plot(x, y, z, color + '+-', lw=0.75)
else:
color = list_colors[i_color]
i_color = (i_color + 1) % n_colors
dict_colors[len_cycle] = color
ax.plot(x, y, z, color + '+-', label='k=%i' % len_cycle, lw=0.75)
# finalize plot
pylab.title('$N=%i$, $T^*=%s$' % (N, T_star))
pylab.legend()
ax.set_xlabel('$x$', fontsize=16)
ax.set_ylabel('$y$', fontsize=16)
ax.set_zlabel('$z$', fontsize=16)
ax.set_xlim3d([-8, 8])
ax.set_ylim3d([-8, 8])
ax.set_zlim3d([-8, 8])
pylab.savefig('snapshot_bosons_3d_N%04i_Tstar%04.2f.png' % (N, T_star))
pylab.show()
```

But we do know that for the harmonic trap the single 3-dimensional particle partition function is given analytically: each of the three dimensions contributes $\sum_{n\ge0}e^{-\beta n}=1/(1-e^{-\beta})$ (energies measured from the ground state), so $z(\beta)=\left(\frac{1}{1-e^{-\beta}}\right)^3$. The permutation cycle of length $k$ corresponds to $z_k=z(k\beta)=\left(\frac{1}{1-e^{-k\beta}}\right)^3$. Hence, using (9) and (10), we have that
$Z^\text{bose}_N=Y_N/{N!}=\frac{1}{N}\sum_{k=1}^N z_k Z^\text{bose}_{N-k}, (\text{with }Z^\text{bose}_0=1)$. (11)
(Due to Landsberg, 1961 http://store.doverpublications.com/0486664937.html)
This recursion relation relates the partition function of a system of $N$ ideal bosons to the partition function of a single particle and the partition functions of systems with fewer particles.
```
import math, pylab
def z(k, beta):
return 1.0 / (1.0 - math.exp(- k * beta)) ** 3 #partition function of a single particle in a harmonic trap
def canonic_recursion(N, beta): #Landsberg recursion relations for the partition function of N bosons
Z = [1.0] #Z_0 = 1
for M in range(1, N + 1):
Z.append(sum(Z[k] * z(M - k, beta) \
for k in range(M)) / M)
return Z #list of partition functions for boson numbers up to N
N = 256 #number of bosons
T_star = 0.5 #temperature
beta = 1.0 / N ** (1.0 / 3.0) / T_star
Z = canonic_recursion(N, beta) #partition function
pi_k = [(z(k, beta) * Z[N - k] / Z[-1]) / float(N) for k in range(1, N + 1)] #probability of a cycle of length k
# graphics output
pylab.plot(range(1, N + 1), pi_k, 'b-', lw=2.5)
pylab.ylim(0.0, 0.01)
pylab.xlabel('cycle length $k$', fontsize=16)
pylab.ylabel('cycle probability $\pi_k$', fontsize=16)
pylab.title('Cycle length distribution ($N=%i$, $T^*=%s$)' % (N, T_star), fontsize=16)
pylab.savefig('plot-prob_cycle_length.png')
phase = [pi_k[k + 1] - pi_k[k] for k in range(N - 1)] #discrete derivative of the cycle-length distribution (not used further)
```
Since we have an analytical solution to the problem, we can now implement a rejection-free direct sampling algorithm for the permutations.
```
import math, random
def z(k, beta): #partition function of a single particle in a harmonic trap
return (1.0 - math.exp(- k * beta)) ** (-3)
def canonic_recursion(N, beta): #Landsberg recursion relation for the partition function of N bosons in a harmonic trap
Z = [1.0]
for M in range(1, N + 1):
Z.append(sum(Z[k] * z(M - k, beta) for k in range(M)) / M)
return Z
def make_pi_list(Z, M): #cumulative probabilities that a given boson (out of the M remaining ones) is in a cycle of length k = 1, ..., M
pi_list = [0.0] + [z(k, beta) * Z[M - k] / Z[M] / M for k in range(1, M + 1)]
pi_cumulative = [0.0]
for k in range(1, M + 1):
pi_cumulative.append(pi_cumulative[k - 1] + pi_list[k])
return pi_cumulative
def naive_tower_sample(pi_cumulative):
eta = random.uniform(0.0, 1.0)
for k in range(len(pi_cumulative)):
if eta < pi_cumulative[k]: break
return k
def levy_harmonic_path(dtau, N): #path sampling (to sample permutation positions)
beta = N * dtau
x_N = random.gauss(0.0, 1.0 / math.sqrt(2.0 * math.tanh(beta / 2.0)))
x = [x_N]
for k in range(1, N):
dtau_prime = (N - k) * dtau
Upsilon_1 = 1.0 / math.tanh(dtau) + 1.0 / math.tanh(dtau_prime)
Upsilon_2 = x[k - 1] / math.sinh(dtau) + x_N / math.sinh(dtau_prime)
x_mean = Upsilon_2 / Upsilon_1
sigma = 1.0 / math.sqrt(Upsilon_1)
x.append(random.gauss(x_mean, sigma))
return x
### main program starts here ###
N = 8 #number of bosons
T_star = 0.1 #temperature
beta = 1.0 / N ** (1.0 / 3.0) / T_star
n_steps = 1000
Z = canonic_recursion(N, beta) #{N} boson partition function
for step in range(n_steps):
N_tmp = N #ancillary
x_config, y_config, z_config = [], [], [] #initialise the configurations in each 3 directions
while N_tmp > 0: #iterate through all particles
pi_sum = make_pi_list(Z, N_tmp)
k = naive_tower_sample(pi_sum)
x_config += levy_harmonic_path(beta, k)
y_config += levy_harmonic_path(beta, k)
z_config += levy_harmonic_path(beta, k)
N_tmp -= k #reduce the number of particles that are in the permutation cycle of length k
```
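The program above samples configurations but never uses them. One way to inspect the output is to collect the sampled coordinates and histogram them, as in the sketch below (it assumes the cell above has been run, so that its functions and variables are available):
```
import pylab
#a sketch: rerun the direct-sampling sweep, this time collecting the x coordinates,
#and histogram them
data_x = []
for step in range(n_steps):
    N_tmp = N
    while N_tmp > 0:
        pi_sum = make_pi_list(Z, N_tmp)
        k = naive_tower_sample(pi_sum)
        data_x += levy_harmonic_path(beta, k)
        N_tmp -= k
pylab.hist(data_x, normed=True, bins=80)
pylab.xlabel('$x$')
pylab.ylabel('$\\pi(x)$')
pylab.title('Direct sampling, $N=%i$, $T^*=%s$' % (N, T_star))
pylab.show()
```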
### Physical properties of the 1-dimensional classical and bosonic systems
* Consider 2 non-interacting **distinguishable particles** in a 1-dimensional harmonic trap:
```
import random, math, pylab
#There are only two possible cases: For k=1, we sample a single position (cycle of length 1),
#for k=2, we sample two positions (a cycle of length two).
def levy_harmonic_path(k):
x = [random.gauss(0.0, 1.0 / math.sqrt(2.0 * math.tanh(k * beta / 2.0)))] #direct-sample the first position
if k == 2:
Ups1 = 2.0 / math.tanh(beta)
Ups2 = 2.0 * x[0] / math.sinh(beta)
x.append(random.gauss(Ups2 / Ups1, 1.0 / math.sqrt(Ups1)))
return x[:]
def pi_x(x, beta):
sigma = 1.0 / math.sqrt(2.0 * math.tanh(beta / 2.0))
return math.exp(-x ** 2 / (2.0 * sigma ** 2)) / math.sqrt(2.0 * math.pi) / sigma
beta = 2.0
nsteps = 1000000
#initial sample has identity permutation
low = levy_harmonic_path(2) #tau=0
high = low[:] #tau=beta
data = []
for step in xrange(nsteps):
k = random.choice([0, 1])
low[k] = levy_harmonic_path(1)[0]
high[k] = low[k]
data.append(high[k])
list_x = [0.1 * a for a in range (-30, 31)]
y = [pi_x(a, beta) for a in list_x]
pylab.plot(list_x, y, linewidth=2.0, label='Exact distribution')
pylab.hist(data, normed=True, bins=80, label='QMC', alpha=0.5, color='green')
pylab.legend()
pylab.xlabel('$x$',fontsize=14)
pylab.ylabel('$\\pi(x)$',fontsize=14)
pylab.title('2 non-interacting distinguishable 1-d particles',fontsize=14)
pylab.xlim(-3, 3)
pylab.savefig('plot_A1_beta%s.png' % beta)
```
* Consider two non-interacting **indistinguishable bosonic** quantum particles in a one-dimensional harmonic trap:
```
import math, random, pylab, numpy as np
def z(beta):
return 1.0 / (1.0 - math.exp(- beta))
def pi_two_bosons(x, beta): #exact two boson position distribution
pi_x_1 = math.sqrt(math.tanh(beta / 2.0)) / math.sqrt(math.pi) * math.exp(-x ** 2 * math.tanh(beta / 2.0))
pi_x_2 = math.sqrt(math.tanh(beta)) / math.sqrt(math.pi) * math.exp(-x ** 2 * math.tanh(beta))
weight_1 = z(beta) ** 2 / (z(beta) ** 2 + z(2.0 * beta))
weight_2 = z(2.0 * beta) / (z(beta) ** 2 + z(2.0 * beta))
pi_x = pi_x_1 * weight_1 + pi_x_2 * weight_2
return pi_x
def levy_harmonic_path(k):
x = [random.gauss(0.0, 1.0 / math.sqrt(2.0 * math.tanh(k * beta / 2.0)))]
if k == 2:
Ups1 = 2.0 / math.tanh(beta)
Ups2 = 2.0 * x[0] / math.sinh(beta)
x.append(random.gauss(Ups2 / Ups1, 1.0 / math.sqrt(Ups1)))
return x[:]
def rho_harm_1d(x, xp, beta):
Upsilon_1 = (x + xp) ** 2 / 4.0 * math.tanh(beta / 2.0)
Upsilon_2 = (x - xp) ** 2 / 4.0 / math.tanh(beta / 2.0)
return math.exp(- Upsilon_1 - Upsilon_2)
beta = 2.0
list_beta = np.linspace(0.1, 5.0)
nsteps = 10000
low = levy_harmonic_path(2)
high = low[:]
fract_one_cycle_dat, fract_two_cycles_dat = [], []
for beta in list_beta:
one_cycle_dat = 0.0 #initialise the permutation fractions for each temperature
data = []
for step in xrange(nsteps):
# move 1 (direct-sample the positions)
if low[0] == high[0]: #if the cycle is of length 1
k = random.choice([0, 1])
low[k] = levy_harmonic_path(1)[0]
high[k] = low[k] #assures the cycle
else: #if the cycle is of length 2s
low[0], low[1] = levy_harmonic_path(2)
high[1] = low[0] #assures the cycle
high[0] = low[1]
one_cycle_dat += 1.0 / float(nsteps) #calculate the fraction of the single cycle cases
data += low[:] #save the position histogram data
# move 2 (Metropolis for sampling the permutations)
weight_old = (rho_harm_1d(low[0], high[0], beta) * rho_harm_1d(low[1], high[1], beta))
weight_new = (rho_harm_1d(low[0], high[1], beta) * rho_harm_1d(low[1], high[0], beta))
if random.uniform(0.0, 1.0) < weight_new / weight_old:
high[0], high[1] = high[1], high[0]
fract_one_cycle_dat.append(one_cycle_dat)
fract_two_cycles_dat.append(1.0 - one_cycle_dat) #save the fraction of the two cycles cases
#Exact permutation distributions for all temperatures
fract_two_cycles = [z(beta) ** 2 / (z(beta) ** 2 + z(2.0 * beta)) for beta in list_beta]
fract_one_cycle = [z(2.0 * beta) / (z(beta) ** 2 + z(2.0 * beta)) for beta in list_beta]
#Graphics output:
list_x = [0.1 * a for a in range (-30, 31)]
y = [pi_two_bosons(a, beta) for a in list_x]
pylab.plot(list_x, y, linewidth=2.0, label='Exact distribution')
pylab.hist(data, normed=True, bins=80, label='QMC', alpha=0.5, color='green')
pylab.legend()
pylab.xlabel('$x$',fontsize=14)
pylab.ylabel('$\\pi(x)$',fontsize=14)
pylab.title('2 non-interacting bosonic 1-d particles',fontsize=14)
pylab.xlim(-3, 3)
pylab.savefig('plot_A2_beta%s.png' % beta)
pylab.show()
pylab.clf()
fig = pylab.figure(figsize=(10, 5))
ax = fig.add_subplot(1, 2, 1)
ax.plot(list_beta, fract_one_cycle_dat, linewidth=4, label='QMC')
ax.plot(list_beta, fract_one_cycle, linewidth=2, label='exact')
ax.legend()
ax.set_xlabel('$\\beta$',fontsize=14)
ax.set_ylabel('$\\pi_2(\\beta)$',fontsize=14)
ax.set_title('Fraction of cycles of length 2',fontsize=14)
ax = fig.add_subplot(1, 2, 2)
ax.plot(list_beta, fract_two_cycles_dat, linewidth=4, label='QMC')
ax.plot(list_beta, fract_two_cycles, linewidth=2,label='exact')
ax.legend()
ax.set_xlabel('$\\beta$',fontsize=14)
ax.set_ylabel('$\\pi_1(\\beta)$',fontsize=14)
ax.set_title('Fraction of cycles of length 1',fontsize=14)
pylab.savefig('plot_A2.png')
pylab.show()
pylab.clf()
```
We can use dictionaries instead of lists. The implementation is in the following program.
Here we also calculate the correlation between the two particles, i.e. we sample the absolute distance $r$ between the two bosons. Comparing the resulting distribution with the one for the distinguishable case shows boson bunching (a higher weight for small distances between the bosons).
```
import math, random, pylab
def prob_r_distinguishable(r, beta): #the exact correlation function for two particles
sigma = math.sqrt(2.0) / math.sqrt(2.0 * math.tanh(beta / 2.0))
prob = (math.sqrt(2.0 / math.pi) / sigma) * math.exp(- r ** 2 / 2.0 / sigma ** 2)
return prob
def levy_harmonic_path(k):
x = [random.gauss(0.0, 1.0 / math.sqrt(2.0 * math.tanh(k * beta / 2.0)))]
if k == 2:
Ups1 = 2.0 / math.tanh(beta)
Ups2 = 2.0 * x[0] / math.sinh(beta)
x.append(random.gauss(Ups2 / Ups1, 1.0 / math.sqrt(Ups1)))
return x[:]
def rho_harm_1d(x, xp, beta):
Upsilon_1 = (x + xp) ** 2 / 4.0 * math.tanh(beta / 2.0)
Upsilon_2 = (x - xp) ** 2 / 4.0 / math.tanh(beta / 2.0)
return math.exp(- Upsilon_1 - Upsilon_2)
beta = 0.1
nsteps = 1000000
low_1, low_2 = levy_harmonic_path(2)
x = {low_1:low_1, low_2:low_2}
data_corr = []
for step in xrange(nsteps):
# move 1
a = random.choice(x.keys())
if a == x[a]:
dummy = x.pop(a)
a_new = levy_harmonic_path(1)[0]
x[a_new] = a_new
else:
a_new, b_new = levy_harmonic_path(2)
x = {a_new:b_new, b_new:a_new}
r = abs(x.keys()[1] - x.keys()[0])
data_corr.append(r)
# move 2
(low1, high1), (low2, high2) = x.items()
weight_old = rho_harm_1d(low1, high1, beta) * rho_harm_1d(low2, high2, beta)
weight_new = rho_harm_1d(low1, high2, beta) * rho_harm_1d(low2, high1, beta)
if random.uniform(0.0, 1.0) < weight_new / weight_old:
x = {low1:high2, low2:high1}
#Graphics output:
list_x = [0.1 * a for a in range (0, 100)]
y = [prob_r_distinguishable(a, beta) for a in list_x]
pylab.plot(list_x, y, linewidth=2.0, label='Exact distinguishable distribution')
pylab.hist(data_corr, normed=True, bins=120, label='Indistinguishable QMC', alpha=0.5, color='green')
pylab.legend()
pylab.xlabel('$r$',fontsize=14)
pylab.ylabel('$\\pi_{corr}(r)$',fontsize=14)
pylab.title('Correlation function of non-interacting 1-d bosons',fontsize=14)
pylab.xlim(0, 10)
pylab.savefig('plot_A3_beta%s.png' % beta)
pylab.show()
pylab.clf()
```
### 3-dimensional bosons
#### Isotropic trap
```
import random, math, numpy, sys, os
import matplotlib.pyplot as plt
def harmonic_ground_state(x):
return math.exp(-x ** 2)/math.sqrt(math.pi)
def levy_harmonic_path_3d(k):
x0 = tuple([random.gauss(0.0, 1.0 / math.sqrt(2.0 *
math.tanh(k * beta / 2.0))) for d in range(3)])
x = [x0]
for j in range(1, k):
Upsilon_1 = 1.0 / math.tanh(beta) + 1.0 / \
math.tanh((k - j) * beta)
Upsilon_2 = [x[j - 1][d] / math.sinh(beta) + x[0][d] /
math.sinh((k - j) * beta) for d in range(3)]
x_mean = [Upsilon_2[d] / Upsilon_1 for d in range(3)]
sigma = 1.0 / math.sqrt(Upsilon_1)
dummy = [random.gauss(x_mean[d], sigma) for d in range(3)]
x.append(tuple(dummy))
return x
def rho_harm_3d(x, xp):
Upsilon_1 = sum((x[d] + xp[d]) ** 2 / 4.0 *
math.tanh(beta / 2.0) for d in range(3))
Upsilon_2 = sum((x[d] - xp[d]) ** 2 / 4.0 /
math.tanh(beta / 2.0) for d in range(3))
return math.exp(- Upsilon_1 - Upsilon_2)
N = 512
T_star = 0.8
list_T = numpy.linspace(0.8,0.1,5)
beta = 1.0 / (T_star * N ** (1.0 / 3.0))
cycle_min = 10
nsteps = 50000
data_x, data_y, data_x_l, data_y_l = [], [], [], []
for T_star in list_T:
# Initial condition
filename = 'data_boson_configuration_N%i_T%.1f.txt' % (N,T_star)
positions = {}
if os.path.isfile(filename):
f = open(filename, 'r')
for line in f:
a = line.split()
positions[tuple([float(a[0]), float(a[1]), float(a[2])])] = \
tuple([float(a[3]), float(a[4]), float(a[5])])
f.close()
if len(positions) != N:
sys.exit('ERROR in the input file.')
print 'starting from file', filename
else:
for k in range(N):
a = levy_harmonic_path_3d(1)
positions[a[0]] = a[0]
print 'Starting from a new configuration'
# Monte Carlo loop
for step in range(nsteps):
# move 1: resample one permutation cycle
boson_a = random.choice(positions.keys())
perm_cycle = []
while True:
perm_cycle.append(boson_a)
boson_b = positions.pop(boson_a)
if boson_b == perm_cycle[0]:
break
else:
boson_a = boson_b
k = len(perm_cycle)
data_x.append(boson_a[0])
data_y.append(boson_a[1])
if k > cycle_min:
data_x_l.append(boson_a[0])
data_y_l.append(boson_a[1])
perm_cycle = levy_harmonic_path_3d(k)
positions[perm_cycle[-1]] = perm_cycle[0]
for k in range(len(perm_cycle) - 1):
positions[perm_cycle[k]] = perm_cycle[k + 1]
# move 2: exchange
a_1 = random.choice(positions.keys())
b_1 = positions.pop(a_1)
a_2 = random.choice(positions.keys())
b_2 = positions.pop(a_2)
weight_new = rho_harm_3d(a_1, b_2) * rho_harm_3d(a_2, b_1)
weight_old = rho_harm_3d(a_1, b_1) * rho_harm_3d(a_2, b_2)
if random.uniform(0.0, 1.0) < weight_new / weight_old:
positions[a_1] = b_2
positions[a_2] = b_1
else:
positions[a_1] = b_1
positions[a_2] = b_2
f = open(filename, 'w')
for a in positions:
b = positions[a]
f.write(str(a[0]) + ' ' + str(a[1]) + ' ' + str(a[2]) + ' ' +
str(b[0]) + ' ' + str(b[1]) + ' ' + str(b[2]) + '\n')
f.close()
# Analyze cycles, do 3d plot
import pylab, mpl_toolkits.mplot3d
fig = pylab.figure()
ax = mpl_toolkits.mplot3d.axes3d.Axes3D(fig)
ax.set_aspect('equal')
n_colors = 10
list_colors = pylab.cm.rainbow(numpy.linspace(0, 1, n_colors))[::-1]
dict_colors = {}
i_color = 0
positions_copy = positions.copy()
while positions_copy:
x, y, z = [], [], []
starting_boson = positions_copy.keys()[0]
boson_old = starting_boson
while True:
x.append(boson_old[0])
y.append(boson_old[1])
z.append(boson_old[2])
boson_new = positions_copy.pop(boson_old)
if boson_new == starting_boson: break
else: boson_old = boson_new
len_cycle = len(x)
if len_cycle > 2:
x.append(x[0])
y.append(y[0])
z.append(z[0])
if len_cycle in dict_colors:
color = dict_colors[len_cycle]
ax.plot(x, y, z, '+-', c=color, lw=0.75)
else:
color = list_colors[i_color]
i_color = (i_color + 1) % n_colors
dict_colors[len_cycle] = color
ax.plot(x, y, z, '+-', c=color, label='k=%i' % len_cycle, lw=0.75)
pylab.title(str(N) + ' bosons at T* = ' + str(T_star))
pylab.legend()
ax.set_xlabel('$x$', fontsize=16)
ax.set_ylabel('$y$', fontsize=16)
ax.set_zlabel('$z$', fontsize=16)
xmax = 6.0
ax.set_xlim3d([-xmax, xmax])
ax.set_ylim3d([-xmax, xmax])
ax.set_zlim3d([-xmax, xmax])
pylab.savefig('plot_boson_configuration_N%i_T%.1f.png' %(N,T_star))
pylab.show()
pylab.clf()
#Plot the histograms
list_x = [0.1 * a for a in range (-50, 51)]
y = [harmonic_ground_state(a) for a in list_x]
pylab.plot(list_x, y, linewidth=2.0, label='Ground state')
pylab.hist(data_x, normed=True, bins=120, alpha = 0.5, label='All bosons')
pylab.hist(data_x_l, normed=True, bins=120, alpha = 0.5, label='Bosons in longer cycle')
pylab.xlim(-3.0, 3.0)
pylab.xlabel('$x$',fontsize=14)
pylab.ylabel('$\pi(x)$',fontsize=14)
pylab.title('3-d non-interacting bosons $x$ distribution $N= %i$, $T= %.1f$' %(N,T_star))
pylab.legend()
pylab.savefig('position_distribution_N%i_T%.1f.png' %(N,T_star))
pylab.show()
pylab.clf()
plt.hist2d(data_x_l, data_y_l, bins=40, normed=True)
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.title('The distribution of the $x$ and $y$ positions')
plt.colorbar()
plt.xlim(-3.0, 3.0)
plt.ylim(-3.0, 3.0)
plt.show()
```
#### Anisotropic trap
We can imitate the experiments that realise quasi-1-d bosons in *cigar-shaped* anisotropic harmonic traps, and quasi-2-d bosons in *pancake-shaped* anisotropic harmonic traps.
```
%pylab inline
import random, math, numpy, os, sys
def levy_harmonic_path_3d_anisotropic(k, omega):
sigma = [1.0 / math.sqrt(2.0 * omega[d] *
math.tanh(0.5 * k * beta * omega[d])) for d in xrange(3)]
xk = tuple([random.gauss(0.0, sigma[d]) for d in xrange(3)])
x = [xk]
for j in range(1, k):
Upsilon_1 = [1.0 / math.tanh(beta * omega[d]) +
1.0 / math.tanh((k - j) * beta * omega[d]) for d in range(3)]
Upsilon_2 = [x[j - 1][d] / math.sinh(beta * omega[d]) + \
xk[d] / math.sinh((k - j) * beta * omega[d]) for d in range(3)]
x_mean = [Upsilon_2[d] / Upsilon_1[d] for d in range(3)]
sigma = [1.0 / math.sqrt(Upsilon_1[d] * omega[d]) for d in range(3)]
dummy = [random.gauss(x_mean[d], sigma[d]) for d in range(3)]
x.append(tuple(dummy))
return x
def rho_harm_3d_anisotropic(x, xp, beta, omega):
Upsilon_1 = sum(omega[d] * (x[d] + xp[d]) ** 2 / 4.0 *
math.tanh(beta * omega[d] / 2.0) for d in range(3))
Upsilon_2 = sum(omega[d] * (x[d] - xp[d]) ** 2 / 4.0 /
math.tanh(beta * omega[d] / 2.0) for d in range(3))
return math.exp(- Upsilon_1 - Upsilon_2)
omegas = numpy.array([[4.0, 4.0, 1.0], [1.0, 5.0, 1.0]])
for i in range(len(omegas[:,1])):
N = 512
nsteps = 100000
omega_harm = 1.0
omega = omegas[i,:]
for d in range(3):
omega_harm *= omega[d] ** (1.0 / 3.0)
T_star = 0.5
T = T_star * omega_harm * N ** (1.0 / 3.0)
beta = 1.0 / T
print 'omega: ', omega
# Initial condition
if i == 0:
filename = 'data_boson_configuration_anisotropic_N%i_T%.1f_cigar.txt' % (N,T_star)
elif i == 1:
filename = 'data_boson_configuration_anisotropic_N%i_T%.1f_pancake.txt' % (N,T_star)
positions = {}
if os.path.isfile(filename):
f = open(filename, 'r')
for line in f:
a = line.split()
positions[tuple([float(a[0]), float(a[1]), float(a[2])])] = \
tuple([float(a[3]), float(a[4]), float(a[5])])
f.close()
if len(positions) != N:
sys.exit('ERROR in the input file.')
print 'starting from file', filename
else:
for k in range(N):
a = levy_harmonic_path_3d_anisotropic(1,omega)
positions[a[0]] = a[0]
print 'Starting from a new configuration'
for step in range(nsteps):
boson_a = random.choice(positions.keys())
perm_cycle = []
while True:
perm_cycle.append(boson_a)
boson_b = positions.pop(boson_a)
if boson_b == perm_cycle[0]: break
else: boson_a = boson_b
k = len(perm_cycle)
perm_cycle = levy_harmonic_path_3d_anisotropic(k,omega)
positions[perm_cycle[-1]] = perm_cycle[0]
for j in range(len(perm_cycle) - 1):
positions[perm_cycle[j]] = perm_cycle[j + 1]
a_1 = random.choice(positions.keys())
b_1 = positions.pop(a_1)
a_2 = random.choice(positions.keys())
b_2 = positions.pop(a_2)
weight_new = (rho_harm_3d_anisotropic(a_1, b_2, beta, omega) *
rho_harm_3d_anisotropic(a_2, b_1, beta, omega))
weight_old = (rho_harm_3d_anisotropic(a_1, b_1, beta, omega) *
rho_harm_3d_anisotropic(a_2, b_2, beta, omega))
if random.uniform(0.0, 1.0) < weight_new / weight_old:
positions[a_1], positions[a_2] = b_2, b_1
else:
positions[a_1], positions[a_2] = b_1, b_2
f = open(filename, 'w')
for a in positions:
b = positions[a]
f.write(str(a[0]) + ' ' + str(a[1]) + ' ' + str(a[2]) + ' ' +
str(b[0]) + ' ' + str(b[1]) + ' ' + str(b[2]) + '\n')
f.close()
import pylab, mpl_toolkits.mplot3d
fig = pylab.figure()
ax = mpl_toolkits.mplot3d.axes3d.Axes3D(fig)
ax.set_aspect('equal')
n_colors = 10
list_colors = pylab.cm.rainbow(numpy.linspace(0, 1, n_colors))[::-1]
dict_colors = {}
i_color = 0
positions_copy = positions.copy()
while positions_copy:
x, y, z = [], [], []
starting_boson = positions_copy.keys()[0]
boson_old = starting_boson
while True:
x.append(boson_old[0])
y.append(boson_old[1])
z.append(boson_old[2])
boson_new = positions_copy.pop(boson_old)
if boson_new == starting_boson: break
else: boson_old = boson_new
len_cycle = len(x)
if len_cycle > 2:
x.append(x[0])
y.append(y[0])
z.append(z[0])
if len_cycle in dict_colors:
color = dict_colors[len_cycle]
ax.plot(x, y, z, '+-', c=color, lw=0.75)
else:
color = list_colors[i_color]
i_color = (i_color + 1) % n_colors
dict_colors[len_cycle] = color
ax.plot(x, y, z, '+-', c=color, label='k=%i' % len_cycle, lw=0.75)
pylab.legend()
ax.set_xlabel('$x$', fontsize=16)
ax.set_ylabel('$y$', fontsize=16)
ax.set_zlabel('$z$', fontsize=16)
xmax = 8.0
ax.set_xlim3d([-xmax, xmax])
ax.set_ylim3d([-xmax, xmax])
ax.set_zlim3d([-xmax, xmax])
if i == 0:
pylab.title(str(N) + ' bosons at T* = ' + str(T_star) + ' cigar potential')
pylab.savefig('position_distribution_N%i_T%.1f_cigar.png' %(N,T_star))
elif i == 1:
pylab.title(str(N) + ' bosons at T* = ' + str(T_star) + ' pancake potential')
pylab.savefig('position_distribution_N%i_T%.1f_pancake.png' %(N,T_star))
pylab.show()
```
From such simulations it is found that the critical temperature for Bose-Einstein condensation is around $T^*\sim 0.9$.
## To do:
* Calculate the pair correlation function
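One possible approach to this to-do item is sketched below (not a reference solution): histogram all pairwise distances between the boson positions stored in the `positions` dictionary of one of the simulations above, with `N` and `T_star` as defined there.
```
import math, pylab

def pair_distances(positions):
    #the keys of the `positions` dictionary are the 3-d coordinates of the bosons
    points = list(positions.keys())
    distances = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            d = math.sqrt(sum((points[i][c] - points[j][c]) ** 2 for c in range(3)))
            distances.append(d)
    return distances

#for decent statistics this should be accumulated over many configurations inside
#the Monte Carlo loop (e.g. every few hundred steps), not just the final snapshot
data_r = pair_distances(positions)
#note: this is the distribution of pair distances; dividing the histogram by the
#spherical-shell factor r**2 would give a quantity closer to the usual pair correlation
pylab.hist(data_r, normed=True, bins=100)
pylab.xlabel('$r$')
pylab.ylabel('$\\pi(r)$')
pylab.title('Pair distances, $N=%i$, $T^*=%s$' % (N, T_star))
pylab.show()
```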
# Working with 3D city models in Python
**Balázs Dukai** [*@BalazsDukai*](https://twitter.com/balazsdukai), **FOSS4G 2019**
Tweet <span style="color:blue">#CityJSON</span>
[3D geoinformation research group, TU Delft, Netherlands](https://3d.bk.tudelft.nl/)

Repo of this talk: [https://github.com/balazsdukai/foss4g2019](https://github.com/balazsdukai/foss4g2019)
# 3D + city + model ?

Probably the best-known 3d city model is what we see in Google Earth. It is a very nice model to look at, and it is improving continuously. However, certain applications require more information than what is stored in such a mesh model: they need to know what an object in the model represents in the real world.
# Semantic models

That is why we have semantic models, where for each object in the model we store a label of its meaning.
Once we have labels on the objects and on their parts, data preparation becomes simpler, which is an important property for analytical applications such as wind-flow simulations.
# Useful for urban analysis

García-Sánchez, C., van Beeck, J., Gorlé, C., Predictive Large Eddy Simulations for Urban Flows: Challenges and Opportunities, Building and Environment, 139, 146-156, 2018.
But we can do much more with 3d city models. We can use them to better estimate the energy consumption in buildings, simulate noise in cities or analyse views and shadows. In the Netherlands sunshine is a precious commodity, so we like to get as much of it as we can.
# And many more...

There are many open 3d city models available. They come in different formats and quality. However, at our group we are still waiting for the "year of the 3d city model" to come. We don't really see mainstream use, apart from visualisation. Which is nice, but I believe they can provide much more value than simply being a nice thing to look at.
# ...mostly just production of the models
many available, but who **uses** them? **For more than visualisation?**

# In truth, 3D CMs are a bit difficult to work with
### Our built environment is complex, and the objects are complex too

### Software are lagging behind
+ not many software supports 3D city models
+ if they do, mostly propietary data model and format
+ large, *"enterprise"*-type applications (think Esri, FME, Bentley ... )
+ few tools accessible for the individual developer / hobbyist
+ GML doesn't help ( *[GML madness](http://erouault.blogspot.com/2014/04/gml-madness.html) by Even Rouault* )
That is why we are developing CityJSON, which is a data format for 3d city models. Essentially, it aims to increase the value of 3d city models by making it simpler to work with them and by lowering the barrier to entry for a wider audience than cadastral organisations.

## Key concepts of CityJSON
+ *simple*, as in easy to implement
+ designed with programmers in mind
+ fully developed in the open
+ flattened hierarchy of objects
+ <span style="color:red">implementation first</span>

CityJSON implements the data model of CityGML. CityGML is an international standard for 3d city models and it is coupled with its GML-based encoding.
We don't really like GML, because it's verbose, files are deeply nested and large (often several GB). And there are many different ways to do one thing.
Also, I'm not a web-developer, but I would be surprised if anyone prefers GML over JSON for sending stuff around the web.
# JSON-based encoding of the CityGML data model

<blockquote class="twitter-tweet"><p lang="en" dir="ltr">I just got sent a CityGML file. <a href="https://t.co/jnTVoRnVLS">pic.twitter.com/jnTVoRnVLS</a></p>— James Fee (@jamesmfee) <a href="https://twitter.com/jamesmfee/status/748270105319006208?ref_src=twsrc%5Etfw">June 29, 2016</a></blockquote>
+ files are deeply nested, and large
+ many "points of entry"
+ many different ways to do one thing (GML doesn't help, *[GML madness](http://erouault.blogspot.com/2014/04/gml-madness.html) by Even Rouault* )
## The CityGML data model

## Compression ~6x over CityGML

## Compression
| file | CityGML size (original) | CityGML size (w/o spaces) | textures | CityJSON | compression |
| -------- | ----------------------- | ----------------------------- |--------- | ------------ | --------------- |
| [CityGML demo "GeoRes"](https://www.citygml.org/samplefiles/) | 4.3MB | 4.1MB | yes | 524KB | 8.0 |
| [CityGML v2 demo "Railway"](https://www.citygml.org/samplefiles/) | 45MB | 34MB | yes | 4.3MB | 8.1 |
| [Den Haag "tile 01"](https://data.overheid.nl/data/dataset/ngr-3d-model-den-haag) | 23MB | 18MB | no, material | 2.9MB | 6.2 |
| [Montréal VM05](http://donnees.ville.montreal.qc.ca/dataset/maquette-numerique-batiments-citygml-lod2-avec-textures/resource/36047113-aa19-4462-854a-cdcd6281a5af) | 56MB | 42MB | yes | 5.4MB | 7.8 |
| [New York LoD2 (DA13)](https://www1.nyc.gov/site/doitt/initiatives/3d-building.page) | 590MB | 574MB | no | 105MB | 5.5 |
| [Rotterdam Delfshaven](http://rotterdamopendata.nl/dataset/rotterdam-3d-bestanden/resource/edacea54-76ce-41c7-a0cc-2ebe5750ac18) | 16MB | 15MB | yes | 2.6MB | 5.8 |
| [Vienna (the demo file)](https://www.data.gv.at/katalog/dataset/86d88cae-ad97-4476-bae5-73488a12776d) | 37MB | 36MB | no | 5.3MB | 6.8 |
| [Zürich LoD2](https://www.data.gv.at/katalog/dataset/86d88cae-ad97-4476-bae5-73488a12776d) | 3.03GB | 2.07GB | no | 292MB | 7.1 |
If you are interested in a more detailed comparison between CityGML and CityJSON you can read our article, its open access.

And yes, we are guilty of charge.

[https://xkcd.com/927/](https://xkcd.com/927/)
# Let's have a look-see, shall we?

Now let's take a peek under the hood, what's going on in a CityJSON file.
## An empty CityJSON file

In a city model we represent the real-world objects such as buildings, bridges, trees as different types of CityObjects. Each CityObject has its
+ unique ID,
+ attributes,
+ geometry,
+ and it can have children objects or it can be part of a parent object.
Note, however, that CityObjects are not nested. Each of them is stored at the root, and the hierarchy is represented by linking to object IDs.
## A CityObject

Each CityObject has a geometry representation. This geometry is composed of *boundaries* and *semantics*.
## Geometry
+ **boundaries** definition uses vertex indices (inspired by Wavefront OBJ)
+ We have a vertex list at the root of the document
+ Vertices are not repeated (unlike Simple Features)
+ **semantics** are linked to the boundary surfaces

This `MultiSurface` has
5 surfaces
```json
[[0, 3, 2, 1]], [[4, 5, 6, 7]], [[0, 1, 5, 4]], [[0, 2, 3, 8]], [[10, 12, 23, 48]]
```
each surface has only an exterior ring (the first array)
```json
[ [0, 3, 2, 1] ]
```
The semantic surfaces in the `semantics` json-object are linked to the boundary surfaces through the `values` array: it holds one integer per boundary surface, and each integer is the 0-based index of the corresponding semantic surface in `surfaces`.
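A small self-contained sketch of how this linking works (the geometry below is made up for illustration and follows the CityJSON 1.0 schema):
```
#values[i] points to the semantic surface of the i-th boundary surface
geom = {
    "type": "MultiSurface",
    "boundaries": [[[0, 3, 2, 1]], [[4, 5, 6, 7]], [[0, 1, 5, 4]]],
    "semantics": {
        "surfaces": [{"type": "WallSurface"}, {"type": "RoofSurface"}],
        "values": [0, 1, 0]
    }
}
for i, surface in enumerate(geom["boundaries"]):
    semantic = geom["semantics"]["surfaces"][geom["semantics"]["values"][i]]
    print("boundary surface {} -> {}".format(i, semantic["type"]))
```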
```
import json
import os
path = os.path.join('data', 'rotterdam_subset.json')
with open(path) as fin:
cm = json.loads(fin.read())
print(f"There are {len(cm['CityObjects'])} CityObjects")
# list all IDs
for id in cm['CityObjects']:
print(id, "\t")
```
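Because the hierarchy is flattened, parent/child relations can be followed with the same plain dictionary access (a small sketch, reusing the `cm` dict loaded in the previous cell; `parents` and `children` are optional keys in the CityJSON 1.0 schema):
```
#list each CityObject with the number of linked children and parents
for co_id, co in cm['CityObjects'].items():
    children = co.get('children', [])
    parents = co.get('parents', [])
    print("{} ({}): {} children, {} parents".format(co_id, co['type'], len(children), len(parents)))
```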
+ Working with a CityJSON file is straightforward. One can open it with the standard library and get going.
+ But you need to know the schema well.
+ And you need to write everything from scratch.
That is why we are developing **cjio**.
**cjio** is how *we eat what we cook*
Aims to help to actually work with and analyse 3D city models, and extract more value from them. Instead of letting them gather dust in some governmental repository.

## `cjio` has a (quite) stable CLI
```bash
$ cjio city_model.json reproject 2056 export --format glb /out/model.glb
```
## and an experimental API
```python
from cjio import cityjson
cm = cityjson.load('city_model.json')
cm.get_cityobjects(type='building')
```
**`pip install cjio`**
This notebook is based on the develop branch.
**`pip install git+https://github.com/tudelft3d/cjio@develop`**
# `cjio`'s CLI
```
! cjio --help
! cjio data/rotterdam_subset.json info
! cjio data/rotterdam_subset.json validate
! cjio data/rotterdam_subset.json \
subset --exclude --id "{CD98680D-A8DD-4106-A18E-15EE2A908D75}" \
merge data/rotterdam_one.json \
reproject 2056 \
save data/test_rotterdam.json
```
+ The CLI was first, no plans for API
+ **Works with whole city model only**
+ Functions for the CLI work with the JSON directly, passing it along
+ Simple and effective architecture
# `cjio`'s API
Allow *read* --> *explore* --> *modify* --> *write* iteration
Work with CityObjects and their parts
Functions for common operations
Inspired by the *tidyverse* from the R ecosystem
```
import os
from copy import deepcopy
from cjio import cityjson
from shapely.geometry import Polygon
import matplotlib.pyplot as plt
plt.close('all')
from sklearn.preprocessing import FunctionTransformer
from sklearn import cluster
import numpy as np
```
In the following we work with a subset of the 3D city model of Rotterdam

## Load a CityJSON
The `load()` method loads a CityJSON file into a CityJSON object.
```
path = os.path.join('data', 'rotterdam_subset.json')
cm = cityjson.load(path)
print(type(cm))
```
## Using the CLI commands in the API
You can use any of the CLI commands on a CityJSON object
*However,* not all CLI commands are mapped 1-to-1 to `CityJSON` methods
And we haven't harmonized the CLI and the API yet.
```
cm.validate()
```
## Explore the city model
Print the basic information about the city model. Note that `print()` returns the same information as the `info` command in the CLI.
```
print(cm)
```
## Getting objects from the model
Get CityObjects by their *type*, or a list of types. Also by their IDs.
Note that `get_cityobjects()` == `cm.cityobjects`
```
buildings = cm.get_cityobjects(type='building')
# both Building and BuildingPart objects
buildings_parts = cm.get_cityobjects(type=['building', 'buildingpart'])
r_ids = ['{C9D4A5CF-094A-47DA-97E4-4A3BFD75D3AE}',
'{6271F75F-E8D8-4EE4-AC46-9DB02771A031}']
buildings_ids = cm.get_cityobjects(id=r_ids)
```
## Properties and geometry of objects
```
b01 = buildings_ids['{C9D4A5CF-094A-47DA-97E4-4A3BFD75D3AE}']
print(b01)
b01.attributes
```
CityObjects can have *children* and *parents*
```
b01.children is None and b01.parents is None
```
CityObject geometry is a list of `Geometry` objects. That is because a CityObject can have multiple geometry representations in different levels of detail, eg. a geometry in LoD1 and a second geometry in LoD2.
```
b01.geometry
geom = b01.geometry[0]
print("{}, lod {}".format(geom.type, geom.lod))
```
### Geometry boundaries and Semantic Surfaces
On the contrary to a CityJSON file, the geometry boundaries are dereferenced when working with the API. This means that the vertex coordinates are included in the boundary definition, not only the vertex indices.
`cjio` doesn't provide specific geometry classes (yet), eg. a MultiSurface or Solid class. If you are working with the geometry boundaries, you need to do the geometric operations yourself, or cast the boundary to a geometry class of some other library, for example `shapely` if 2D is enough.
Vertex coordinates are kept 'as is' on loading the geometry. CityJSON files are often compressed and coordinates are shifted and transformed into integers so probably you'll want to transform them back. Otherwise geometry operations won't make sense.
```
transformation_object = cm.transform
geom_transformed = geom.transform(transformation_object)
geom_transformed.boundaries[0][0]
```
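With the coordinates transformed, a boundary can be cast to a Shapely geometry, as mentioned above. A minimal sketch, assuming the geometry is a MultiSurface so that `boundaries[0][0]` is the exterior ring of its first surface:
```
from shapely.geometry import Polygon
#Shapely simply ignores the z coordinate in 2-d operations such as `area`
ring = geom_transformed.boundaries[0][0]
print(Polygon(ring).area)
```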
But it might be easier to transform (decompress) the whole model on load.
```
cm_transformed = cityjson.load(path, transform=True)
print(cm_transformed)
```
Semantic Surfaces are stored in a similar fashion as in a CityJSON file, in the `surfaces` attribute of a Geometry object.
```
geom.surfaces
```
`surfaces` does not store geometry boundaries, just references (`surface_idx`). Use the `get_surface_boundaries()` method to obtain the boundary-parts connected to the semantic surface.
```
roofs = geom.get_surfaces(type='roofsurface')
roofs
roof_boundaries = []
for r in roofs.values():
roof_boundaries.append(geom.get_surface_boundaries(r))
roof_boundaries
```
### Assigning attributes to Semantic Surfaces
1. extract the surfaces,
2. make the changes on the surface,
3. overwrite the CityObjects with the changes.
```
cm_copy = deepcopy(cm)
new_cos = {}
for co_id, co in cm.cityobjects.items():
new_geoms = []
for geom in co.geometry:
# Only LoD >= 2 models have semantic surfaces
if geom.lod >= 2.0:
# Extract the surfaces
roofsurfaces = geom.get_surfaces('roofsurface')
for i, rsrf in roofsurfaces.items():
# Change the attributes
if 'attributes' in rsrf.keys():
rsrf['attributes']['cladding'] = 'tiles'
else:
rsrf['attributes'] = {}
rsrf['attributes']['cladding'] = 'tiles'
geom.surfaces[i] = rsrf
new_geoms.append(geom)
else:
# Use the unchanged geometry
new_geoms.append(geom)
co.geometry = new_geoms
new_cos[co_id] = co
cm_copy.cityobjects = new_cos
print(cm_copy.cityobjects['{C9D4A5CF-094A-47DA-97E4-4A3BFD75D3AE}'])
```
### Create new Semantic Surfaces
The process is similar to the previous one. However, in this example we create new SemanticSurfaces that hold the values which we compute from the geometry. The input city model has a single semantic "WallSurface", without attributes, for all the walls of a building. The snippet below illustrates how to separate the surfaces and assign the semantics to them.
```
new_cos = {}
for co_id, co in cm_copy.cityobjects.items():
new_geoms = []
for geom in co.geometry:
if geom.lod >= 2.0:
max_id = max(geom.surfaces.keys())
old_ids = []
for w_i, wsrf in geom.get_surfaces('wallsurface').items():
old_ids.append(w_i)
del geom.surfaces[w_i]
boundaries = geom.get_surface_boundaries(wsrf)
for j, boundary_geometry in enumerate(boundaries):
# The original geometry has the same Semantic for all wall,
# but we want to divide the wall surfaces by their orientation,
# thus we need to have the correct surface index
surface_index = wsrf['surface_idx'][j]
new_srf = {
'type': wsrf['type'],
'surface_idx': surface_index
}
for multisurface in boundary_geometry:
# Do any operation here
x, y, z = multisurface[0]
if j % 2 > 0:
orientation = 'north'
else:
orientation = 'south'
# Add the new attribute to the surface
if 'attributes' in wsrf.keys():
wsrf['attributes']['orientation'] = orientation
else:
wsrf['attributes'] = {}
wsrf['attributes']['orientation'] = orientation
new_srf['attributes'] = wsrf['attributes']
# if w_i in geom.surfaces.keys():
# del geom.surfaces[w_i]
max_id = max_id + 1
geom.surfaces[max_id] = new_srf
new_geoms.append(geom)
else:
# If LoD1, just add the geometry unchanged
new_geoms.append(geom)
co.geometry = new_geoms
new_cos[co_id] = co
cm_copy.cityobjects = new_cos
```
# Analysing CityModels

In the following I show how to compute some attributes from CityObject geometry and use these attributes as input for machine learning. For this we use the LoD2 model of Zürich.
Download the Zürich data set from https://3d.bk.tudelft.nl/opendata/cityjson/1.0/Zurich_Building_LoD2_V10.json
```
path = os.path.join('data', 'zurich.json')
zurich = cityjson.load(path, transform=True)
```
## A simple geometry function
Here is a simple geometry function that computes the area of the groundsurface (footprint) of the buildings in the model. It also shows how to cast surfaces, in this case the ground surface, to Shapely Polygons.
```
def compute_footprint_area(co):
"""Compute the area of the footprint"""
footprint_area = 0
for geom in co.geometry:
# only LoD2 (or higher) objects have semantic surfaces
if geom.lod >= 2.0:
footprints = geom.get_surfaces(type='groundsurface')
# there can be many surfaces with label 'groundsurface'
for i,f in footprints.items():
for multisurface in geom.get_surface_boundaries(f):
for surface in multisurface:
# cast to Shapely polygon
shapely_poly = Polygon(surface)
footprint_area += shapely_poly.area
return footprint_area
```
## Compute new attributes
Then we need to loop through the CityObjects and add the new attributes. Note that the `attributes` property of a CityObject is just a dictionary.
Thus we compute the number of vertices of the CityObject and the area of its footprint, and then we cluster these two variables. This is a completely arbitrary exercise which is simply meant to illustrate how to transform a city model into machine-learnable features.
```
for co_id, co in zurich.cityobjects.items():
co.attributes['nr_vertices'] = len(co.get_vertices())
co.attributes['fp_area'] = compute_footprint_area(co)
zurich.cityobjects[co_id] = co
```
It is possible to export the city model into a pandas DataFrame. Note that only the CityObject attributes are exported into the dataframe, with CityObject IDs as the index of the dataframe. Thus if you want to export the attributes of SemanticSurfaces for example, then you need to add them as CityObject attributes.
The function below illustrates this operation.
```
def assign_cityobject_attribute(cm):
"""Copy the semantic surface attributes to CityObject attributes.
Returns a copy of the citymodel.
"""
new_cos = {}
cm_copy = deepcopy(cm)
for co_id, co in cm.cityobjects.items():
for geom in co.geometry:
for srf in geom.surfaces.values():
if 'attributes' in srf:
for attr,a_v in srf['attributes'].items():
if (attr not in co.attributes) or (co.attributes[attr] is None):
co.attributes[attr] = [a_v]
else:
co.attributes[attr].append(a_v)
new_cos[co_id] = co
cm_copy.cityobjects = new_cos
return cm_copy
df = zurich.to_dataframe()
df.head()
```
In order to have a nicer distribution of the data, we remove the missing values and apply a log-transform to the two variables. Note that `FunctionTransformer.transform` transforms a DataFrame into a numpy array that is ready to be used in `scikit-learn`. The details of a machine learning workflow are beyond the scope of this tutorial, however.
```
df_subset = df[df['Geomtype'].notnull() & (df['fp_area'] > 0.0)].loc[:, ['nr_vertices', 'fp_area']]
transformer = FunctionTransformer(np.log, validate=True)
df_logtransform = transformer.transform(df_subset)
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.scatter(df_logtransform[:,0], df_logtransform[:,1], alpha=0.3, s=1.0)
plt.show()
def plot_model_results(model, data):
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
colormap = np.array(['lightblue', 'red', 'lime', 'blue','black'])
ax.scatter(data[:,0], data[:,1], c=colormap[model.labels_], s=10, alpha=0.5)
ax.set_xlabel('Number of vertices [log]')
ax.set_ylabel('Footprint area [log]')
plt.title(f"DBSCAN clustering with estimated {len(set(model.labels_))} clusters")
plt.show()
```
Since we transformed our DataFrame, we can fit any model in `scikit-learn`. I use DBSCAN because I wanted to find the data points on the fringes of the central cluster.
```
%matplotlib notebook
model = cluster.DBSCAN(eps=0.2).fit(df_logtransform)
plot_model_results(model, df_logtransform)
# merge the cluster labels back to the data frame
df_subset['dbscan'] = model.labels_
```
## Save the results back to CityJSON
And merge the DataFrame with cluster labels back to the city model.
```
for co_id, co in zurich.cityobjects.items():
if co_id in df_subset.index:
ml_results = dict(df_subset.loc[co_id])
else:
ml_results = {'nr_vertices': 'nan', 'fp_area': 'nan', 'dbscan': 'nan'}
new_attrs = {**co.attributes, **ml_results}
co.attributes = new_attrs
zurich.cityobjects[co_id] = co
```
At the end, the `save()` method saves the edited city model into a CityJSON file.
```
path_out = os.path.join('data', 'zurich_output.json')
cityjson.save(zurich, path_out)
```
## And view the results in QGIS again

However, you'll need to set up the styling based on the cluster labels by hand.
# Other software
## Online CityJSON viewer

## QGIS plugin

## Azul

# Full conversion CityGML <--> CityJSON

# Thank you!
Balázs Dukai
b.dukai@tudelft.nl
@BalazsDukai
## A few links
Repo of this talk: [https://github.com/balazsdukai/foss4g2019](https://github.com/balazsdukai/foss4g2019)
[cityjson.org](cityjson.org)
[viewer.cityjson.org](viewer.cityjson.org)
QGIS plugin: [github.com/tudelft3d/cityjson-qgis-plugin](github.com/tudelft3d/cityjson-qgis-plugin)
Azul – CityJSON viewer on Mac – check the [AppStore](https://apps.apple.com/nl/app/azul/id1173239678?mt=12)
cjio: [github.com/tudelft3d/cjio](github.com/tudelft3d/cjio) & [tudelft3d.github.io/cjio/](tudelft3d.github.io/cjio/)
# 100 pandas puzzles
Inspired by [100 Numpy exerises](https://github.com/rougier/numpy-100), here are 100* short puzzles for testing your knowledge of [pandas'](http://pandas.pydata.org/) power.
Since pandas is a large library with many different specialist features and functions, these exercises focus mainly on the fundamentals of manipulating data (indexing, grouping, aggregating, cleaning), making use of the core DataFrame and Series objects.
Many of the exercises here are straightforward in that the solutions require no more than a few lines of code (in pandas or NumPy... don't go using pure Python or Cython!). Choosing the right methods and following best practices is the underlying goal.
The exercises are loosely divided into sections. Each section has a difficulty rating; these ratings are subjective, of course, but should be seen as a rough guide as to how inventive the required solution is.
If you're just starting out with pandas and you are looking for some other resources, the official documentation is very extensive. In particular, some good places to get a broader overview of pandas are...
- [10 minutes to pandas](http://pandas.pydata.org/pandas-docs/stable/10min.html)
- [pandas basics](http://pandas.pydata.org/pandas-docs/stable/basics.html)
- [tutorials](http://pandas.pydata.org/pandas-docs/stable/tutorials.html)
- [cookbook and idioms](http://pandas.pydata.org/pandas-docs/stable/cookbook.html#cookbook)
Enjoy the puzzles!
\* *the list of exercises is not yet complete! Pull requests or suggestions for additional exercises, corrections and improvements are welcomed.*
## Importing pandas
### Getting started and checking your pandas setup
Difficulty: *easy*
**1.** Import pandas under the alias `pd`.
**2.** Print the version of pandas that has been imported.
**3.** Print out all the *version* information of the libraries that are required by the pandas library.
## DataFrame basics
### A few of the fundamental routines for selecting, sorting, adding and aggregating data in DataFrames
Difficulty: *easy*
Note: remember to import numpy using:
```python
import numpy as np
```
Consider the following Python dictionary `data` and Python list `labels`:
``` python
data = {'animal': ['cat', 'cat', 'snake', 'dog', 'dog', 'cat', 'snake', 'cat', 'dog', 'dog'],
'age': [2.5, 3, 0.5, np.nan, 5, 2, 4.5, np.nan, 7, 3],
'visits': [1, 3, 2, 3, 2, 3, 1, 1, 2, 1],
'priority': ['yes', 'yes', 'no', 'yes', 'no', 'no', 'no', 'yes', 'no', 'no']}
labels = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j']
```
(This is just some meaningless data I made up with the theme of animals and trips to a vet.)
**4.** Create a DataFrame `df` from this dictionary `data` which has the index `labels`.
```
import numpy as np
data = {'animal': ['cat', 'cat', 'snake', 'dog', 'dog', 'cat', 'snake', 'cat', 'dog', 'dog'],
'age': [2.5, 3, 0.5, np.nan, 5, 2, 4.5, np.nan, 7, 3],
'visits': [1, 3, 2, 3, 2, 3, 1, 1, 2, 1],
'priority': ['yes', 'yes', 'no', 'yes', 'no', 'no', 'no', 'yes', 'no', 'no']}
labels = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j']
df = # (complete this line of code)
```
**5.** Display a summary of the basic information about this DataFrame and its data (*hint: there is a single method that can be called on the DataFrame*).
**6.** Return the first 3 rows of the DataFrame `df`.
**7.** Select just the 'animal' and 'age' columns from the DataFrame `df`.
**8.** Select the data in rows `[3, 4, 8]` *and* in columns `['animal', 'age']`.
**9.** Select only the rows where the number of visits is greater than 3.
**10.** Select the rows where the age is missing, i.e. it is `NaN`.
**11.** Select the rows where the animal is a cat *and* the age is less than 3.
**12.** Select the rows the age is between 2 and 4 (inclusive).
**13.** Change the age in row 'f' to 1.5.
**14.** Calculate the sum of all visits in `df` (i.e. find the total number of visits).
**15.** Calculate the mean age for each different animal in `df`.
**16.** Append a new row 'k' to `df` with your choice of values for each column. Then delete that row to return the original DataFrame.
**17.** Count the number of each type of animal in `df`.
**18.** Sort `df` first by the values in the 'age' in *decending* order, then by the value in the 'visit' column in *ascending* order (so row `i` should be first, and row `d` should be last).
**19.** The 'priority' column contains the values 'yes' and 'no'. Replace this column with a column of boolean values: 'yes' should be `True` and 'no' should be `False`.
**20.** In the 'animal' column, change the 'snake' entries to 'python'.
**21.** For each animal type and each number of visits, find the mean age. In other words, each row is an animal, each column is a number of visits and the values are the mean ages (*hint: use a pivot table*).
## DataFrames: beyond the basics
### Slightly trickier: you may need to combine two or more methods to get the right answer
Difficulty: *medium*
The previous section was a tour through some basic but essential DataFrame operations. Below are some ways that you might need to cut your data, but for which there is no single "out of the box" method.
**22.** You have a DataFrame `df` with a column 'A' of integers. For example:
```python
df = pd.DataFrame({'A': [1, 2, 2, 3, 4, 5, 5, 5, 6, 7, 7]})
```
How do you filter out rows which contain the same integer as the row immediately above?
You should be left with a column containing the following values:
```python
1, 2, 3, 4, 5, 6, 7
```
**23.** Given a DataFrame of numeric values, say
```python
df = pd.DataFrame(np.random.random(size=(5, 3))) # a 5x3 frame of float values
```
how do you subtract the row mean from each element in the row?
**24.** Suppose you have DataFrame with 10 columns of real numbers, for example:
```python
df = pd.DataFrame(np.random.random(size=(5, 10)), columns=list('abcdefghij'))
```
Which column of numbers has the smallest sum? Return that column's label.
**25.** How do you count how many unique rows a DataFrame has (i.e. ignore all rows that are duplicates)? As input, use a DataFrame of zeros and ones with 10 rows and 3 columns.
```python
df = pd.DataFrame(np.random.randint(0, 2, size=(10, 3)))
```
The next three puzzles are slightly harder.
**26.** In the cell below, you have a DataFrame `df` that consists of 10 columns of floating-point numbers. Exactly 5 entries in each row are NaN values.
For each row of the DataFrame, find the *column* which contains the *third* NaN value.
You should return a Series of column labels: `e, c, d, h, d`
```
nan = np.nan
data = [[0.04, nan, nan, 0.25, nan, 0.43, 0.71, 0.51, nan, nan],
[ nan, nan, nan, 0.04, 0.76, nan, nan, 0.67, 0.76, 0.16],
[ nan, nan, 0.5 , nan, 0.31, 0.4 , nan, nan, 0.24, 0.01],
[0.49, nan, nan, 0.62, 0.73, 0.26, 0.85, nan, nan, nan],
[ nan, nan, 0.41, nan, 0.05, nan, 0.61, nan, 0.48, 0.68]]
columns = list('abcdefghij')
df = pd.DataFrame(data, columns=columns)
# write a solution to the question here
```
**27.** A DataFrame has a column of groups 'grps' and a column of integer values 'vals':
```python
df = pd.DataFrame({'grps': list('aaabbcaabcccbbc'),
'vals': [12,345,3,1,45,14,4,52,54,23,235,21,57,3,87]})
```
For each *group*, find the sum of the three greatest values. You should end up with the answer as follows:
```
grps
a 409
b 156
c 345
```
```
df = pd.DataFrame({'grps': list('aaabbcaabcccbbc'),
'vals': [12,345,3,1,45,14,4,52,54,23,235,21,57,3,87]})
# write a solution to the question here
```
**28.** The DataFrame `df` constructed below has two integer columns 'A' and 'B'. The values in 'A' are between 1 and 100 (inclusive).
For each group of 10 consecutive integers in 'A' (i.e. `(0, 10]`, `(10, 20]`, ...), calculate the sum of the corresponding values in column 'B'.
The answer should be a Series as follows:
```
A
(0, 10] 635
(10, 20] 360
(20, 30] 315
(30, 40] 306
(40, 50] 750
(50, 60] 284
(60, 70] 424
(70, 80] 526
(80, 90] 835
(90, 100] 852
```
```
df = pd.DataFrame(np.random.RandomState(8765).randint(1, 101, size=(100, 2)), columns = ["A", "B"])
# write a solution to the question here
```
## DataFrames: harder problems
### These might require a bit of thinking outside the box...
...but all are solvable using just the usual pandas/NumPy methods (and so avoid using explicit `for` loops).
Difficulty: *hard*
**29.** Consider a DataFrame `df` where there is an integer column 'X':
```python
df = pd.DataFrame({'X': [7, 2, 0, 3, 4, 2, 5, 0, 3, 4]})
```
For each value, count the difference back to the previous zero (or the start of the Series, whichever is closer). These values should therefore be
```
[1, 2, 0, 1, 2, 3, 4, 0, 1, 2]
```
Make this a new column 'Y'.
**30.** Consider the DataFrame constructed below which contains rows and columns of numerical data.
Create a list of the column-row index locations of the 3 largest values in this DataFrame. In this case, the answer should be:
```
[(5, 7), (6, 4), (2, 5)]
```
```
df = pd.DataFrame(np.random.RandomState(30).randint(1, 101, size=(8, 8)))
```
**31.** You are given the DataFrame below with a column of group IDs, 'grps', and a column of corresponding integer values, 'vals'.
```python
df = pd.DataFrame({"vals": np.random.RandomState(31).randint(-30, 30, size=15),
"grps": np.random.RandomState(31).choice(["A", "B"], 15)})
```
Create a new column 'patched_vals' which contains the same values as 'vals', but with any negative values in 'vals' replaced by the group mean:
```
vals grps patched_vals
0 -12 A 13.6
1 -7 B 28.0
2 -14 A 13.6
3 4 A 4.0
4 -7 A 13.6
5 28 B 28.0
6 -2 A 13.6
7 -1 A 13.6
8 8 A 8.0
9 -2 B 28.0
10 28 A 28.0
11 12 A 12.0
12 16 A 16.0
13 -24 A 13.6
14 -12 A 13.6
```
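One possible solution (a sketch), using `transform` so the result keeps the original row order; the helper name is my own.
```python
def fill_negatives_with_group_mean(v):
    mask = v < 0
    v[mask] = v[~mask].mean()
    return v

df['patched_vals'] = df.groupby('grps')['vals'].transform(fill_negatives_with_group_mean)
```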
**32.** Implement a rolling mean over groups with window size 3, which ignores NaN values. For example, consider the following DataFrame:
```python
>>> df = pd.DataFrame({'group': list('aabbabbbabab'),
'value': [1, 2, 3, np.nan, 2, 3, np.nan, 1, 7, 3, np.nan, 8]})
>>> df
group value
0 a 1.0
1 a 2.0
2 b 3.0
3 b NaN
4 a 2.0
5 b 3.0
6 b NaN
7 b 1.0
8 a 7.0
9 b 3.0
10 a NaN
11 b 8.0
```
The goal is to compute the Series:
```
0 1.000000
1 1.500000
2 3.000000
3 3.000000
4 1.666667
5 3.000000
6 3.000000
7 2.000000
8 3.666667
9 2.000000
10 4.500000
11 4.000000
```
E.g. the first window of size three for group 'b' has values 3.0, NaN and 3.0 and occurs at row index 5. Instead of being NaN, the value in the new column at this row index should be 3.0 (just the two non-NaN values are used to compute the mean: (3+3)/2).
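One possible solution (a sketch): take a rolling sum over the NaN-filled values and divide by a rolling count of the non-NaN values, group by group.
```python
g1 = df.groupby('group')['value']             # original values, NaNs intact
g2 = df.fillna(0).groupby('group')['value']   # NaNs replaced by 0 for the sums

s = g2.rolling(3, min_periods=1).sum() / g1.rolling(3, min_periods=1).count()
s.reset_index(level=0, drop=True).sort_index()  # back to the original row order
```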
## Series and DatetimeIndex
### Exercises for creating and manipulating Series with datetime data
Difficulty: *easy/medium*
pandas is fantastic for working with dates and times. These puzzles explore some of this functionality.
**33.** Create a DatetimeIndex that contains each business day of 2015 and use it to index a Series of random numbers. Let's call this Series `s`.
**34.** Find the sum of the values in `s` for every Wednesday.
**35.** For each calendar month in `s`, find the mean of values.
**36.** For each group of four consecutive calendar months in `s`, find the date on which the highest value occurred.
**37.** Create a DateTimeIndex consisting of the third Thursday in each month for the years 2015 and 2016.
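Possible approaches for the five puzzles above (a sketch; the frequency strings are standard pandas offset aliases):
```python
dti = pd.date_range(start='2015-01-01', end='2015-12-31', freq='B')  # 33: business days of 2015
s = pd.Series(np.random.rand(len(dti)), index=dti)

s[s.index.weekday == 2].sum()                               # 34: Wednesdays (Monday == 0)
s.resample('M').mean()                                      # 35: mean per calendar month
s.groupby(pd.Grouper(freq='4M')).idxmax()                   # 36: date of the max in each 4-month block
pd.date_range('2015-01-01', '2016-12-31', freq='WOM-3THU')  # 37: third Thursday of each month
```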
## Cleaning Data
### Making a DataFrame easier to work with
Difficulty: *easy/medium*
It happens all the time: someone gives you data containing malformed strings, Python lists, and missing data. How do you tidy it up so you can get on with the analysis?
Take this monstrosity as the DataFrame to use in the following puzzles:
```python
df = pd.DataFrame({'From_To': ['LoNDon_paris', 'MAdrid_miLAN', 'londON_StockhOlm',
'Budapest_PaRis', 'Brussels_londOn'],
'FlightNumber': [10045, np.nan, 10065, np.nan, 10085],
'RecentDelays': [[23, 47], [], [24, 43, 87], [13], [67, 32]],
'Airline': ['KLM(!)', '<Air France> (12)', '(British Airways. )',
'12. Air France', '"Swiss Air"']})
```
Formatted, it looks like this:
```
From_To FlightNumber RecentDelays Airline
0 LoNDon_paris 10045.0 [23, 47] KLM(!)
1 MAdrid_miLAN NaN [] <Air France> (12)
2 londON_StockhOlm 10065.0 [24, 43, 87] (British Airways. )
3 Budapest_PaRis NaN [13] 12. Air France
4 Brussels_londOn 10085.0 [67, 32] "Swiss Air"
```
(It's some flight data I made up; it's not meant to be accurate in any way.)
**38.** Some values in the **FlightNumber** column are missing (they are `NaN`). These numbers are meant to increase by 10 with each row, so 10055 and 10075 need to be put in place. Modify `df` to fill in these missing numbers and make the column an integer column (instead of a float column).
**39.** The **From\_To** column would be better as two separate columns! Split each string on the underscore delimiter `_` to give a new temporary DataFrame called 'temp' with the correct values. Assign the correct column names 'From' and 'To' to this temporary DataFrame.
**40.** Notice how the capitalisation of the city names is all mixed up in this temporary DataFrame 'temp'. Standardise the strings so that only the first letter is uppercase (e.g. "londON" should become "London".)
**41.** Delete the **From_To** column from `df` and attach the temporary DataFrame 'temp' from the previous questions.
**42**. In the **Airline** column, you can see some extra punctuation and symbols have appeared around the airline names. Pull out just the airline name. E.g. `'(British Airways. )'` should become `'British Airways'`.
**43**. In the RecentDelays column, the values have been entered into the DataFrame as a list. We would like each first value in its own column, each second value in its own column, and so on. If there isn't an Nth value, the value should be NaN.
Expand the Series of lists into a DataFrame named `delays`, rename the columns `delay_1`, `delay_2`, etc. and replace the unwanted RecentDelays column in `df` with `delays`.
The DataFrame should look much better now.
```
FlightNumber Airline From To delay_1 delay_2 delay_3
0 10045 KLM London Paris 23.0 47.0 NaN
1 10055 Air France Madrid Milan NaN NaN NaN
2 10065 British Airways London Stockholm 24.0 43.0 87.0
3 10075 Air France Budapest Paris 13.0 NaN NaN
4 10085 Swiss Air Brussels London 67.0 32.0 NaN
```
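One possible route through puzzles 38-43 (a sketch; it assumes `df` as constructed above and overwrites it step by step):
```python
df['FlightNumber'] = df['FlightNumber'].interpolate().astype(int)                      # 38

temp = df['From_To'].str.split('_', expand=True)                                       # 39
temp.columns = ['From', 'To']
temp = temp.apply(lambda col: col.str.capitalize())                                    # 40

df = df.drop('From_To', axis=1).join(temp)                                             # 41
df['Airline'] = df['Airline'].str.extract(r'([a-zA-Z\s]+)', expand=False).str.strip()  # 42

delays = df['RecentDelays'].apply(pd.Series)                                           # 43
delays.columns = ['delay_{}'.format(n) for n in range(1, len(delays.columns) + 1)]
df = df.drop('RecentDelays', axis=1).join(delays)
```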
## Using MultiIndexes
### Go beyond flat DataFrames with additional index levels
Difficulty: *medium*
Previous exercises have seen us analysing data from DataFrames equipped with a single index level. However, pandas also gives you the possibility of indexing your data using *multiple* levels. This is very much like adding new dimensions to a Series or a DataFrame. For example, a Series is 1D, but by using a MultiIndex with 2 levels we gain much of the same functionality as a 2D DataFrame.
The set of puzzles below explores how you might use multiple index levels to enhance data analysis.
To warm up, we'll make a Series with two index levels.
**44**. Given the lists `letters = ['A', 'B', 'C']` and `numbers = list(range(10))`, construct a MultiIndex object from the product of the two lists. Use it to index a Series of random numbers. Call this Series `s`.
**45.** Check the index of `s` is lexicographically sorted (this is a necessary property for indexing to work correctly with a MultiIndex).
**46**. Select the labels `1`, `3` and `6` from the second level of the MultiIndexed Series.
**47**. Slice the Series `s`; slice up to label 'B' for the first level and from label 5 onwards for the second level.
**48**. Sum the values in `s` for each label in the first level (you should have Series giving you a total for labels A, B and C).
**49**. Suppose that `sum()` (and other methods) did not accept a `level` keyword argument. How else could you perform the equivalent of `s.sum(level=1)`?
**50**. Exchange the levels of the MultiIndex so we have an index of the form (letters, numbers). Is this new Series properly lexsorted? If not, sort it.
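Possible approaches for puzzles 44-50 (a sketch):
```python
letters = ['A', 'B', 'C']
numbers = list(range(10))

s = pd.Series(np.random.rand(30),
              index=pd.MultiIndex.from_product([letters, numbers]))  # 44

s.index.is_monotonic_increasing      # 45: True when the index is sorted
s.loc[:, [1, 3, 6]]                  # 46: labels from the second level
s.loc[pd.IndexSlice[:'B', 5:]]       # 47: slice on both levels
s.groupby(level=0).sum()             # 48: totals for A, B and C
s.groupby(level=1).sum()             # 49: equivalent to s.sum(level=1)
new_s = s.swaplevel(0, 1)            # 50: swap the levels...
new_s = new_s.sort_index()           # ...then sort, since swapping breaks the ordering
```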
## Minesweeper
### Generate the numbers for safe squares in a Minesweeper grid
Difficulty: *medium* to *hard*
If you've ever used an older version of Windows, there's a good chance you've played with Minesweeper:
- https://en.wikipedia.org/wiki/Minesweeper_(video_game)
If you're not familiar with the game, imagine a grid of squares: some of these squares conceal a mine. If you click on a mine, you lose instantly. If you click on a safe square, you reveal a number telling you how many mines are found in the squares that are immediately adjacent. The aim of the game is to uncover all squares in the grid that do not contain a mine.
In this section, we'll make a DataFrame that contains the necessary data for a game of Minesweeper: coordinates of the squares, whether the square contains a mine and the number of mines found on adjacent squares.
**51**. Let's suppose we're playing Minesweeper on a 5 by 4 grid, i.e.
```
X = 5
Y = 4
```
To begin, generate a DataFrame `df` with two columns, `'x'` and `'y'` containing every coordinate for this grid. That is, the DataFrame should start:
```
x y
0 0 0
1 0 1
2 0 2
```
**52**. For this DataFrame `df`, create a new column of zeros (safe) and ones (mine). The probability of a mine occurring at each location should be 0.4.
**53**. Now create a new column for this DataFrame called `'adjacent'`. This column should contain the number of mines found on adjacent squares in the grid.
(E.g. for the first row, which is the entry for the coordinate `(0, 0)`, count how many mines are found on the coordinates `(0, 1)`, `(1, 0)` and `(1, 1)`.)
**54**. For rows of the DataFrame that contain a mine, set the value in the `'adjacent'` column to NaN.
**55**. Finally, convert the DataFrame to a grid of the adjacent mine counts: columns are the `x` coordinate, rows are the `y` coordinate.
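A possible end-to-end sketch for puzzles 51-55 (the column name `'mine'` is my own choice; the cross merge, available in pandas 1.2+, pairs every square with its neighbours):
```python
X, Y = 5, 4

df = pd.DataFrame([(x, y) for x in range(X) for y in range(Y)], columns=['x', 'y'])  # 51
df['mine'] = np.random.binomial(1, 0.4, X * Y)                                       # 52

neighbours = (df.merge(df, how='cross', suffixes=('', '_2'))                         # 53
                .query('abs(x - x_2) <= 1 and abs(y - y_2) <= 1 '
                       'and (x != x_2 or y != y_2)'))
df['adjacent'] = neighbours.groupby(['x', 'y'])['mine_2'].sum().values

df.loc[df['mine'] == 1, 'adjacent'] = np.nan                                         # 54
grid = df.pivot(index='y', columns='x', values='adjacent')                           # 55
```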
## Plotting
### Visualize trends and patterns in data
Difficulty: *medium*
To really get a good understanding of the data contained in your DataFrame, it is often essential to create plots: if you're lucky, trends and anomalies will jump right out at you. This functionality is baked into pandas and the puzzles below explore some of what's possible with the library.
**56.** Pandas is highly integrated with the plotting library matplotlib, and makes plotting DataFrames very user-friendly! Plotting in a notebook environment usually makes use of the following boilerplate:
```python
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
```
matplotlib is the plotting library which pandas' plotting functionality is built upon, and it is usually aliased to ```plt```.
```%matplotlib inline``` tells the notebook to show plots inline, instead of creating them in a separate window.
```plt.style.use('ggplot')``` is a style theme that most people find agreeable, based upon the styling of R's ggplot package.
For starters, make a scatter plot of this random data, but use black X's instead of the default markers.
```df = pd.DataFrame({"xs":[1,5,2,8,1], "ys":[4,2,1,9,6]})```
Consult the [documentation](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.plot.html) if you get stuck!
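One way to do it (a sketch):
```python
df = pd.DataFrame({"xs": [1, 5, 2, 8, 1], "ys": [4, 2, 1, 9, 6]})
df.plot.scatter('xs', 'ys', color='black', marker='x')
```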
**57.** Columns in your DataFrame can also be used to modify colors and sizes. Bill has been keeping track of his performance at work over time, as well as how good he was feeling that day, and whether he had a cup of coffee in the morning. Make a plot which incorporates all four features of this DataFrame.
(Hint: If you're having trouble seeing the plot, try multiplying the Series which you choose to represent size by 10 or more)
*The chart doesn't have to be pretty: this isn't a course in data viz!*
```
df = pd.DataFrame({"productivity":[5,2,3,1,4,5,6,7,8,3,4,8,9],
"hours_in" :[1,9,6,5,3,9,2,9,1,7,4,2,2],
"happiness" :[2,1,3,2,3,1,2,3,1,2,2,1,3],
"caffienated" :[0,0,1,1,0,0,0,0,1,1,0,1,0]})
```
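One possible plot (a sketch): position encodes hours and productivity, size encodes happiness, colour encodes caffeine.
```python
df.plot.scatter('hours_in', 'productivity', s=df.happiness * 30, c=df.caffienated)
```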
**58.** What if we want to plot multiple things? Pandas allows you to pass in a matplotlib *Axis* object for plots, and plots will also return an Axis object.
Make a bar plot of monthly revenue with a line plot of monthly advertising spending (numbers in millions)
```
df = pd.DataFrame({"revenue":[57,68,63,71,72,90,80,62,59,51,47,52],
"advertising":[2.1,1.9,2.7,3.0,3.6,3.2,2.7,2.4,1.8,1.6,1.3,1.9],
"month":range(12)
})
```
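One possible solution (a sketch): draw the bars first, capture the returned Axes, and draw the line on a secondary y-axis of the same Axes.
```python
ax = df.plot.bar('month', 'revenue', color='green')
df.plot.line('month', 'advertising', secondary_y=True, ax=ax)
ax.set_xlim((-1, 12))
```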
Now we're finally ready to create a candlestick chart, which is a very common tool used to analyze stock price data. A candlestick chart shows the opening, closing, highest, and lowest price for a stock during a time window. The color of the "candle" (the thick part of the bar) is green if the stock closed above its opening price, or red if below.

This was initially designed to be a pandas plotting challenge, but it just so happens that this type of plot is just not feasible using pandas' methods. If you are unfamiliar with matplotlib, we have provided a function that will plot the chart for you so long as you can use pandas to get the data into the correct format.
Your first step should be to get the data in the correct format using pandas' time-series grouping function. We would like each candle to represent an hour's worth of data. You can write your own aggregation function which returns the open/high/low/close, but pandas has a built-in which also does this.
The below cell contains helper functions. Call ```day_stock_data()``` to generate a DataFrame containing the prices a hypothetical stock sold for, and the time the sale occurred. Call ```plot_candlestick(df)``` on your properly aggregated and formatted stock data to print the candlestick chart.
```
import numpy as np
def float_to_time(x):
return str(int(x)) + ":" + str(int(x%1 * 60)).zfill(2) + ":" + str(int(x*60 % 1 * 60)).zfill(2)
def day_stock_data():
#NYSE is open from 9:30 to 4:00
time = 9.5
price = 100
results = [(float_to_time(time), price)]
while time < 16:
elapsed = np.random.exponential(.001)
time += elapsed
if time > 16:
break
price_diff = np.random.uniform(.999, 1.001)
price *= price_diff
results.append((float_to_time(time), price))
df = pd.DataFrame(results, columns = ['time','price'])
df.time = pd.to_datetime(df.time)
return df
#Don't read me unless you get stuck!
def plot_candlestick(agg):
"""
agg is a DataFrame which has a DatetimeIndex and five columns: ["open","high","low","close","color"]
"""
fig, ax = plt.subplots()
for time in agg.index:
ax.plot([time.hour] * 2, agg.loc[time, ["high","low"]].values, color = "black")
ax.plot([time.hour] * 2, agg.loc[time, ["open","close"]].values, color = agg.loc[time, "color"], linewidth = 10)
ax.set_xlim((8,16))
ax.set_ylabel("Price")
ax.set_xlabel("Hour")
ax.set_title("OHLC of Stock Value During Trading Day")
plt.show()
```
**59.** Generate a day's worth of random stock data, and aggregate / reformat it so that it has hourly summaries of the opening, highest, lowest, and closing prices
**60.** Now that you have your properly-formatted data, try to plot it yourself as a candlestick chart. Use the ```plot_candlestick(df)``` function above, or matplotlib's [```plot``` documentation](https://matplotlib.org/api/_as_gen/matplotlib.axes.Axes.plot.html) if you get stuck.
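A possible sketch for puzzles 59 and 60, using `resample(...).ohlc()` as the built-in aggregation mentioned above:
```python
df = day_stock_data()
df = df.set_index('time')

agg = df['price'].resample('H').ohlc()
agg['color'] = np.where(agg['close'] > agg['open'], 'green', 'red')

plot_candlestick(agg)
```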
*More exercises to follow soon...*
| github_jupyter |
<a href="https://colab.research.google.com/github/rjrahul24/ai-with-python-series/blob/main/01.%20Getting%20Started%20with%20Python/Python_Revision_and_Statistical_Methods.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
**Inheritance in Python**
Object-Oriented Programming is a coding paradigm that revolves around creating modular code and avoiding repeated definitions of the same structure. It is aimed at increasing the stability and usability of code. It consists of some well-known concepts, stated below:
1. Classes: A collection of functions and attributes that are bound to a specific name and represent an abstract container.
2. Attributes: Generally, the data that is associated with each class. Examples are variables declared during creation of the class.
3. Objects: An instance generated from the class. There can be multiple objects of a class and every individual object takes on the properties of the class.
```
# Implementation of Classes in Python
# Creating a Class Math with 2 functions
class Math:
def subtract (self, i, j):
return i-j
def add (self, x, y):
return x+y
# Creating an object of the class Math
math_child = Math()
test_int_A = 10
test_int_B = 20
print(math_child.subtract(test_int_B, test_int_A))
# Creating a Class Person with an attribute and an initialization function
class Person:
name = 'George'
def __init__ (self):
self.age = 34
# Creating an object of the class and printing its attributes
p1 = Person()
print (p1.name)
print (p1.age)
```
**Constructors and Inheritance**
The constructor is an initialization function that is always called when a class's instance is created. The constructor is named `__init__()` in Python and defines the specifics of instantiating a class and its attributes.
Class inheritance is a concept of taking values of a class from its origin and giving the same properties to a child class. It creates relationship models like “Class A is a Class B”, like a triangle (child class) is a shape (parent class). All the functions and attributes of a superclass are inherited by the subclass.
1. Overriding: During inheritance, the behavior of the child class (or subclass) can be modified. Modifying functions in this way is called “overriding” and is achieved by declaring functions in the subclass with the same name. Functions created in the subclass will take precedence over those in the parent class.
2. Composition: Classes can also be built from other smaller classes that support relationship models like “Class A has a Class B”, like a Department has Students.
3. Polymorphism: The functionality of similar-looking functions can be changed at run time, depending on the implementation. This is achieved using polymorphism: two objects of different classes expose the same set of functions, so the outward appearance of these functions is the same, but the implementations differ.
```
# Creating a class and instantiating variables
class Animal_Dog:
species = "Canis"
def __init__(self, name, age):
self.name = name
self.age = age
# Instance method
def description(self):
return f"{self.name} is {self.age} years old"
# Another instance method
def animal_sound(self, sound):
return f"{self.name} says {sound}"
# Check the object’s type
Animal_Dog("Bunny", 7)
# Even though a and b are both instances of the Dog class, they represent two distinct objects in memory.
a = Animal_Dog("Fog", 6)
b = Animal_Dog("Bunny", 7)
a == b
# Instantiating objects with the class’s constructor arguments
fog = Animal_Dog("Fog", 6)
bunny = Animal_Dog("Bunny", 7)
print (bunny.name)
print (bunny.age)
# Accessing attributes directly
print (bunny.species)
# Creating a new Object to access through instance functions
fog = Animal_Dog("Fog", 6)
fog.description()
fog.animal_sound("Whoof Whoof")
fog.animal_sound("Bhoof Whoof")
# Inheriting the Class
class GoldRet(Animal_Dog):
def speak(self, sound="Warf"):
return f"{self.name} says {sound}"
bunny = GoldRet("Bunny", 5)
bunny.speak()
bunny.speak("Grrr Grrr")
# Code Snippet 3: Variables and data types
int_var = 100 # Integer variable
float_var = 1000.0 # Float value
string_var = "John" # String variable
print (int_var)
print (float_var)
print (string_var)
```
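To make the overriding and polymorphism points above concrete, here is a small sketch (the class names are illustrative, not part of the original example):
```python
class Shape:
    def area(self):
        return 0

class Square(Shape):
    def __init__(self, side):
        self.side = side
    def area(self):  # overrides Shape.area
        return self.side ** 2

class Circle(Shape):
    def __init__(self, radius):
        self.radius = radius
    def area(self):  # same interface, different implementation
        return 3.14159 * self.radius ** 2

# Polymorphism: the same call works on any of these objects
for shape in [Square(3), Circle(2)]:
    print(shape.area())
```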
Variables and Data Types in Python
Variables are reserved locations in the computer's memory that store values defined within them. Whenever a variable is created, a piece of the computer's memory is allocated to it. Based on the data type of the declared variable, the interpreter allocates a different chunk of memory. Therefore, depending on whether a variable is assigned an integer, float, string, etc., different sizes of memory allocation are invoked.
• Declaration: Variables in Python do not need explicit declaration to reserve memory space. This happens automatically when a value is assigned. The (=) sign is used to assign values to variables.
• Multiple Assignment: Python allows for multiple variables to hold a single value and this declaration can be done together for all variables.
• Deleting References: Memory reference once created can also be deleted. The 'del' statement is used to delete the reference to a number object. Multiple object deletion is also supported by the 'del' statement.
• Strings: Strings are a set of characters, that Python allows representation through single or double quotes. String subsets can be formed using the slice operator ([ ] and [:] ) where indexing starts from 0 on the left and -1 on the right. The (+) sign is the string concatenation operator and the (*) sign is the repetition operator.
Datatype Conversion

| Function | Description |
| --- | --- |
| `int(x [,base])` | Converts the given input to an integer. `base` is used for string conversions. |
| `long(x [,base])` | Converts the given input to a long integer. |
| `float(x)` | Converts the given input to a floating-point number. |
| `complex(real [,imag])` | Creates a complex number. |
| `str(x)` | Converts any given object to a string. |
| `eval(str)` | Evaluates the given string and returns an object. |
| `tuple(s)` | Converts the given input to a tuple. |
| `list(s)` | Converts the given input to a list. |
| `set(s)` | Converts the given value to a set. |
| `unichr(x)` | Converts an integer to a Unicode character. |
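A few of these conversions in action (note that `long()` and `unichr()` exist only in Python 2; in Python 3 use `int()` and `chr()` instead):
```python
print(int("1010", 2))    # 10 (parse a base-2 string)
print(float("3.5"))      # 3.5
print(complex(2, 3))     # (2+3j)
print(str(42))           # the string '42'
print(tuple([1, 2, 3]))  # (1, 2, 3)
print(list("abc"))       # ['a', 'b', 'c']
print(set("banana"))     # {'b', 'a', 'n'} (order may vary)
print(chr(9731))         # '☃' (the Python 3 counterpart of unichr)
```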
Looking at Variables and Datatypes
Data stored as Python’s variables is abstracted as objects. Data is represented by objects or through relations between individual objects. Therefore, every variable and its corresponding values are an object of a class, depending on the stored data.
```
# Multiple Assignment: All are assigned to the same memory location
a = b = c = 1
# Assigning multiple variables with multiple values
a,b,c = 1,2,"jacob"
# Assigning and deleting variable references
var1 = 1
var2 = 10
del var1 # Removes the reference of var1
del var2
# Basic String Operations in Python
str = 'Hello World!'
print (str)
# Print the first character of string variable
print (str[0])
# Prints characters from 3rd to 5th positions
print (str[2:5])
# Print the string twice
print (str * 2)
# Concatenate the string and print
print (str + "TEST")
```
| github_jupyter |
# Continuous Control
---
In this notebook, you will learn how to use the Unity ML-Agents environment for the second project of the [Deep Reinforcement Learning Nanodegree](https://www.udacity.com/course/deep-reinforcement-learning-nanodegree--nd893) program.
### 1. Start the Environment
We begin by importing the necessary packages. If the code cell below returns an error, please revisit the project instructions to double-check that you have installed [Unity ML-Agents](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Installation.md) and [NumPy](http://www.numpy.org/).
```
import numpy as np
import torch
import matplotlib.pyplot as plt
import time
from unityagents import UnityEnvironment
from collections import deque
from itertools import count
import datetime
from ddpg import DDPG, ReplayBuffer
%matplotlib inline
```
Next, we will start the environment! **_Before running the code cell below_**, change the `file_name` parameter to match the location of the Unity environment that you downloaded.
- **Mac**: `"path/to/Reacher.app"`
- **Windows** (x86): `"path/to/Reacher_Windows_x86/Reacher.exe"`
- **Windows** (x86_64): `"path/to/Reacher_Windows_x86_64/Reacher.exe"`
- **Linux** (x86): `"path/to/Reacher_Linux/Reacher.x86"`
- **Linux** (x86_64): `"path/to/Reacher_Linux/Reacher.x86_64"`
- **Linux** (x86, headless): `"path/to/Reacher_Linux_NoVis/Reacher.x86"`
- **Linux** (x86_64, headless): `"path/to/Reacher_Linux_NoVis/Reacher.x86_64"`
For instance, if you are using a Mac, then you downloaded `Reacher.app`. If this file is in the same folder as the notebook, then the line below should appear as follows:
```
env = UnityEnvironment(file_name="Reacher.app")
```
```
#env = UnityEnvironment(file_name='envs/Reacher_Linux_NoVis_20/Reacher.x86_64') # Headless
env = UnityEnvironment(file_name='envs/Reacher_Linux_20/Reacher.x86_64') # Visual
```
Environments contain **_brains_** which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python.
```
# get the default brain
brain_name = env.brain_names[0]
brain = env.brains[brain_name]
```
### 2. Examine the State and Action Spaces
In this environment, a double-jointed arm can move to target locations. A reward of `+0.1` is provided for each step that the agent's hand is in the goal location. Thus, the goal of your agent is to maintain its position at the target location for as many time steps as possible.
The observation space consists of `33` variables corresponding to position, rotation, velocity, and angular velocities of the arm. Each action is a vector with four numbers, corresponding to torque applicable to two joints. Every entry in the action vector must be a number between `-1` and `1`.
Run the code cell below to print some information about the environment.
```
# reset the environment
env_info = env.reset(train_mode=True)[brain_name]
# number of agents
num_agents = len(env_info.agents)
print('Number of agents:', num_agents)
# size of each action
action_size = brain.vector_action_space_size
print('Size of each action:', action_size)
# examine the state space
states = env_info.vector_observations
state_size = states.shape[1]
print('There are {} agents. Each observes a state with length: {}'.format(states.shape[0], state_size))
print('The state for the first agent looks like:', states[0])
```
### 3. Take Random Actions in the Environment
In the next code cell, you will learn how to use the Python API to control the agent and receive feedback from the environment.
Once this cell is executed, you will watch the agent's performance, if it selects an action at random with each time step. A window should pop up that allows you to observe the agent, as it moves through the environment.
Of course, as part of the project, you'll have to change the code so that the agent is able to use its experience to gradually choose better actions when interacting with the environment!
```
env_info = env.reset(train_mode=False)[brain_name] # reset the environment
states = env_info.vector_observations # get the current state (for each agent)
scores = np.zeros(num_agents) # initialize the score (for each agent)
while True:
actions = np.random.randn(num_agents, action_size) # select an action (for each agent)
actions = np.clip(actions, -1, 1) # all actions between -1 and 1
    env_info = env.step(actions)[brain_name]           # send all actions to the environment
next_states = env_info.vector_observations # get next state (for each agent)
rewards = env_info.rewards # get reward (for each agent)
dones = env_info.local_done # see if episode finished
scores += env_info.rewards # update the score (for each agent)
states = next_states # roll over states to next time step
if np.any(dones): # exit loop if episode finished
break
break
print('Total score (averaged over agents) this episode: {}'.format(np.mean(scores)))
```
When finished, you can close the environment.
### 4. It's Your Turn!
Now it's your turn to train your own agent to solve the environment! When training the environment, set `train_mode=True`, so that the line for resetting the environment looks like the following:
```python
env_info = env.reset(train_mode=True)[brain_name]
```
```
BUFFER_SIZE = int(5e5) # replay buffer size
CACHE_SIZE = int(6e4)
BATCH_SIZE = 256 # minibatch size
GAMMA = 0.99 # discount factor
TAU = 1e-3 # for soft update of target parameters
LR_ACTOR = 1e-3 # learning rate of the actor
LR_CRITIC = 1e-3 # learning rate of the critic
WEIGHT_DECAY = 0 # L2 weight decay
UPDATE_EVERY = 20 # timesteps between updates
NUM_UPDATES = 15 # num of update passes when updating
EPSILON = 1.0 # epsilon for the noise process added to the actions
EPSILON_DECAY = 1e-6 # decay for epsilon above
NOISE_SIGMA = 0.05
# 96 Neurons solves the environment consistently and usually fastest
fc1_units=96
fc2_units=96
random_seed=23
def store(buffers, states, actions, rewards, next_states, dones, timestep):
memory, cache = buffers
for state, action, reward, next_state, done in zip(states, actions, rewards, next_states, dones):
memory.add(state, action, reward, next_state, done)
cache.add(state, action, reward, next_state, done)
store
def learn(agent, buffers, timestep):
memory, cache = buffers
if len(memory) > BATCH_SIZE and timestep % UPDATE_EVERY == 0:
for _ in range(NUM_UPDATES):
experiences = memory.sample()
agent.learn(experiences, GAMMA)
for _ in range(3):
experiences = cache.sample()
agent.learn(experiences, GAMMA)
learn
avg_over = 100
print_every = 10
def ddpg(agent, buffers, n_episodes=200, stopOnSolved=True):
print('Start: ',datetime.datetime.now())
scores_deque = deque(maxlen=avg_over)
scores_global = []
average_global = []
min_global = []
max_global = []
best_avg = -np.inf
tic = time.time()
print('\rEpis,EpAvg,GlAvg, Max, Min, Time')
for i_episode in range(1, n_episodes+1):
env_info = env.reset(train_mode=True)[brain_name] # reset the environment
states = env_info.vector_observations # get the current state (for each agent)
scores = np.zeros(num_agents) # initialize the score (for each agent)
agent.reset()
score_average = 0
timestep = time.time()
for t in count():
actions = agent.act(states, add_noise=True)
            env_info = env.step(actions)[brain_name]           # send all actions to the environment
next_states = env_info.vector_observations # get next state (for each agent)
rewards = env_info.rewards # get reward (for each agent)
dones = env_info.local_done # see if episode finished
store(buffers, states, actions, rewards, next_states, dones, t)
learn(agent, buffers, t)
states = next_states # roll over states to next time step
scores += rewards # update the score (for each agent)
if np.any(dones): # exit loop if episode finished
break
score = np.mean(scores)
scores_deque.append(score)
score_average = np.mean(scores_deque)
scores_global.append(score)
average_global.append(score_average)
min_global.append(np.min(scores))
max_global.append(np.max(scores))
print('\r {}, {:.2f}, {:.2f}, {:.2f}, {:.2f}, {:.2f}'\
.format(str(i_episode).zfill(3), score, score_average, np.max(scores),
np.min(scores), time.time() - timestep), end="\n")
if i_episode % print_every == 0:
agent.save('./')
if stopOnSolved and score_average >= 30.0:
toc = time.time()
print('\nSolved in {:d} episodes!\tAvg Score: {:.2f}, time: {}'.format(i_episode, score_average, toc-tic))
agent.save('./'+str(i_episode)+'_')
break
print('End: ',datetime.datetime.now())
return scores_global, average_global, max_global, min_global
ddpg
# Create new empty buffers to start training from scratch
buffers = [ReplayBuffer(action_size, BUFFER_SIZE, BATCH_SIZE, random_seed),
ReplayBuffer(action_size, CACHE_SIZE, BATCH_SIZE, random_seed)]
agent = DDPG(state_size=state_size, action_size=action_size, random_seed=23,
fc1_units=96, fc2_units=96)
scores, averages, maxima, minima = ddpg(agent, buffers, n_episodes=130)
plt.plot(np.arange(1, len(scores)+1), scores)
plt.plot(np.arange(1, len(averages)+1), averages)
plt.plot(np.arange(1, len(maxima)+1), maxima)
plt.plot(np.arange(1, len(minima)+1), minima)
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.legend(['EpAvg', 'GlAvg', 'Max', 'Min'], loc='upper left')
plt.show()
# Smaller agent learning this task from larger agent experiences
agent = DDPG(state_size=state_size, action_size=action_size, random_seed=23,
fc1_units=48, fc2_units=48)
scores, averages, maxima, minima = ddpg(agent, buffers, n_episodes=200)
plt.plot(np.arange(1, len(scores)+1), scores)
plt.plot(np.arange(1, len(averages)+1), averages)
plt.plot(np.arange(1, len(maxima)+1), maxima)
plt.plot(np.arange(1, len(minima)+1), minima)
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.legend(['EpAvg', 'GlAvg', 'Max', 'Min'], loc='lower center')
plt.show()
```
Save the experiences for training future agents. Warning: the file is quite large.
```
memory, cache = buffers
memory.save('experiences.pkl')
#env.close()
```
### 5. See the pre-trained agent in action
```
agent = DDPG(state_size=state_size, action_size=action_size, random_seed=23,
fc1_units=96, fc2_units=96)
agent.load('./saves/96_96_108_actor.pth', './saves/96_96_108_critic.pth')
def play(agent, episodes=3):
for i_episode in range(episodes):
env_info = env.reset(train_mode=False)[brain_name] # reset the environment
states = env_info.vector_observations # get the current state (for each agent)
scores = np.zeros(num_agents) # initialize the score (for each agent)
while True:
actions = np.random.randn(num_agents, action_size) # select an action (for each agent)
actions = agent.act(states, add_noise=False) # all actions between -1 and 1
            env_info = env.step(actions)[brain_name]           # send all actions to the environment
next_states = env_info.vector_observations # get next state (for each agent)
rewards = env_info.rewards # get reward (for each agent)
dones = env_info.local_done # see if episode finished
scores += env_info.rewards # update the score (for each agent)
states = next_states # roll over states to next time step
if np.any(dones): # exit loop if episode finished
break
#break
print('Ep No: {} Total score (averaged over agents): {}'.format(i_episode, np.mean(scores)))
play(agent, 10)
```
### 6. Experiences
Experiences from the Replay Buffer could be saved and loaded for training different agents.
As an example I've provided `experiences.pkl.7z` which you should unpack with your favorite archiver.
Create new ReplayBuffer and load saved experiences
```
savedBuffer = ReplayBuffer(action_size, BUFFER_SIZE, BATCH_SIZE, random_seed)
savedBuffer.load('experiences.pkl')
```
Afterward, you can use it to train your agent:
```
savedBuffer.sample()
```
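For example, a new agent could be warmed up on the saved experiences before it ever touches the environment (a sketch, assuming the `DDPG`/`ReplayBuffer` interfaces used earlier in this notebook):
```python
new_agent = DDPG(state_size=state_size, action_size=action_size, random_seed=23,
                 fc1_units=48, fc2_units=48)

# Pre-train purely from the saved replay buffer
for _ in range(1000):
    experiences = savedBuffer.sample()
    new_agent.learn(experiences, GAMMA)
```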
| github_jupyter |
```
def download(url, params={}, retries=3):
resp = None
header = {"user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.108 Safari/537.36"}
try:
resp = requests.get(url, params=params, headers = header)
resp.raise_for_status()
except requests.exceptions.HTTPError as e:
if 500 <= e.response.status_code < 600 and retries > 0:
print(retries)
resp = download(url, params, retries - 1)
else:
print(e.response.status_code)
print(e.response.reason)
print(e.request.headers)
return resp
from bs4 import BeautifulSoup
import requests
html = download("https://media.daum.net/breakingnews/society")
daumnews = BeautifulSoup(html.text, "lxml")
daumnewstitellists = daumnews.select("strong > a")
output_file_name = "DaumNews_Urls.txt"
output_file = open(output_file_name, "w", encoding="utf-8")
for links in daumnewstitellists:
#print(links.text)
    print(links.get('href'), file=output_file)
import re
import time

output_file_name = "DaumNews_Urls.txt"
output_file = open(output_file_name, "w", encoding="utf-8")
page_num = 1
max_page_num = 2
user_agent = "'Mozilla/5.0"
headers ={"User-Agent" : user_agent}
while page_num<=max_page_num:
page_url = "https://media.daum.net/breakingnews/society"
response = requests.get(page_url, headers=headers)
html = response.text
"""
    Extract article URLs from the given HTML.
"""
url_frags = re.findall('<a href="(.*?)"',html)
urls = []
for url_frag in url_frags:
urls.append(url_frag)
for url in urls:
print(url, file=output_file)
time.sleep(2)
page_num+=1
output_file.close()
html = download('http://v.media.daum.net/v/20190512030900250')
daumnews = BeautifulSoup(html.text, "lxml")
import json
daumnewstitellists = daumnews.select("p")
print(daumnewstitellists)
for links in daumnewstitellists:
a = links.text
print(a)
with open('사회-2019051101.txt', 'w+', encoding='utf-8') as json_file:
json.dump(a, json_file, ensure_ascii=False, indent='\n', sort_keys=True)
import requests
from bs4 import BeautifulSoup
import re
import ast
base_url = 'https://media.daum.net/society/'
req = requests.get(base_url)
html = req.content
soup = BeautifulSoup(html, 'lxml')
newslist = soup.find(name="div", attrs={"class":"section_cate section_headline"})
newslist_atag = newslist.find_all('a')
#print(newslist_atag)
url_list = []
for a in newslist_atag:
url_list.append(a.get('href'))
print(url_list)
#print(url_list)
# Extract and clean only the text from each article
req = requests.get(url_list[0])
#print(req)
html = req.content
#print(html)
soup = BeautifulSoup(html, 'lxml')
text = ''
doc = None
for item in soup.find_all('div', id='mArticle'):
text = text + str(item.find_all(text=True))
text = ast.literal_eval(text)
print(text)
print(url_list[3])
req = requests.get(url_list[3])
#print(req)
html = req.content
#print(html)
soup = BeautifulSoup(html, 'lxml')
text = ''
doc = None
for item in soup.find_all('div', id='mArticle'):
text = text + str(item.find_all(text=True))
text = ast.literal_eval(text)
print(text)
from selenium import webdriver
import json
driver = webdriver.Chrome()
driver.get('https://media.daum.net/society/')
driver.find_element_by_xpath('//*[@id="cSub"]/div/div[1]/div[1]/div/strong/a').click()
driver.implicitly_wait(5)
html = driver.page_source
daumnews = BeautifulSoup(html, "lxml")
lists = daumnews.select("p")
data = {}
for contents in lists:
a = contents.text
print(a)
with open('daumnews-society.json', 'w+') as json_file:
json.dump(data, json_file)
#ensure_ascii=False, indent='\t'
# encoding='utf-8'
#driver.close()
driver.close()
from bs4 import BeautifulSoup
import requests
html = download("https://media.daum.net/society/")
daumnews = BeautifulSoup(html.text, "lxml")
req = requests.get("https://media.daum.net/society/")
html = req.content
soup = BeautifulSoup(html, 'lxml')
#!/usr/bin/env python3
#-*- coding: utf-8 -*
"""
Collect article URLs for securities-related stories among Naver economy news. Only the 10 most recent pages will be fetched.
"""
import time
import re
import requests
eval_d = "20190511"
output_file_name = "DaumNews_Urls.txt"
output_file = open(output_file_name, "w", encoding="utf-8")
page_num = 1
max_page_num = 2
user_agent = "'Mozilla/5.0"
headers ={"User-Agent" : user_agent}
while page_num<=max_page_num:
page_url = "https://media.daum.net/breakingnews/society"
response = requests.get(page_url, headers=headers)
html = response.text
"""
    Extract article URLs from the given HTML.
"""
url_frags = re.findall('<a href="(.*?)"',html)
urls = []
for url_frag in url_frags:
urls.append(url_frag)
for url in urls:
print(url, file=output_file)
time.sleep(2)
page_num+=1
output_file.close()
# [Source] Securities news data collection (1/3) | Author: 엉드루
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
Collect Naver news articles.
"""
import time
import requests
import os
def get_url_file_name() :
"""
    Prompt for the URL file name and return it.
:return:
"""
url_file_name = input("Enter url file name : ")
return url_file_name
def get_output_file_name():
"""
    Prompt for the output file name and return it.
:return:
"""
output_file_name = input("Enter output file name : ")
return output_file_name
def open_url_file(url_file_name):
"""
    Open the URL file.
:param url_file_name:
:return:
"""
url_file = open(url_file_name, "r", encoding ="utf-8")
return url_file
def create_output_file(output_file_name):
"""
    Create the output file.
:param output_file_name:
:return:
"""
output_file = open(output_file_name, "w", encoding='utf-8')
return output_file
def gen_print_url(url_line):
"""
    Build and return a print-friendly URL from the given article link URL.
:param url_line:
:return:
"""
article_id = url_line[(len(url_line)-24):len(url_line)]
print_url = "https://media.daum.net/breakingnews/society" + article_id
return print_url
def get_html(print_url) :
"""
    Fetch the given print URL and return its HTML.
:param print_url:
:return:
"""
user_agent = "'Mozilla/5.0"
headers ={"User-Agent" : user_agent}
response = requests.get(print_url, headers=headers)
html = response.text
return html
def write_html(output_file, html):
"""
    Write the given HTML text to the output file.
:param output_file:
:param html:
:return:
"""
output_file.write("{}\n".format(html))
output_file.write("@@@@@ ARTICLE DELMITER @@@@\n")
def pause():
"""
    Sleep for 3 seconds.
:return:
"""
time.sleep(3)
def close_output_file(output_file):
"""
    Close the output file.
:param output_file:
:return:
"""
output_file.close()
def close_url_file(url_file):
"""
    Close the URL file.
:param url_file:
:return:
"""
url_file.close()
def main():
"""
    Collect Naver news articles.
:return:
"""
url_file_name = get_url_file_name()
output_file_name = get_output_file_name()
url_file = open_url_file(url_file_name)
output_file = create_output_file(output_file_name)
for line in url_file:
print_url = gen_print_url(line)
html = get_html(print_url)
write_html(output_file,html)
close_output_file(output_file)
close_url_file(url_file)
main()
# [Source] Securities news data collection (2/3) | Author: 엉드루
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
Extract plain-text articles from Naver news article HTML.
"""
import bs4
import time
import requests
import os
ARTICLE_DELIMITER = "@@@@@ ARTICLE DELMITER @@@@\n"
TITLE_START_PAT = '<h3 class="tit_view" data-translation="">'
TITLE_END_PAT = '</h3>'
DATE_TIME_START_PAT = '<span class="txt_info">입력 </span>'
BODY_START_PAT = '<p dmcf-pid="" dmcf-ptype="">'
BODY_END_PAT = '</p>'
TIDYUP_START_PAT = '<div class="foot_view">'
def get_html_file_name():
"""
    Prompt the user for the HTML file name and return it.
:return:
"""
html_file_name = input("Enter HTML File name : ")
return html_file_name
def get_text_file_name():
"""
    Prompt the user for the text file name and return it.
:return:
"""
text_file_name = input("Enter text file name : ")
return text_file_name
def open_html_file(html_file_name):
"""
    Open the HTML article file and return the file object.
:param html_file_name:
:return:
"""
html_file = open(html_file_name, "r", encoding="utf-8")
return html_file
def create_text_file(text_file_name):
"""
    Create the text article file and return the file object.
:param text_file_name:
:return:
"""
text_file = open(text_file_name, "w", encoding="utf-8")
return text_file
def read_html_article(html_file):
"""
    Read one article from the HTML file and return it.
:param html_file:
:return:
"""
lines = []
for line in html_file:
if line.startswith(ARTICLE_DELIMITER):
html_text = "".join(lines).strip()
return html_text
lines.append(line)
return None
def ext_title(html_text):
"""
    Extract the title from the HTML article and return it.
:param html_text:
:return:
"""
p = html_text.find(TITLE_START_PAT)
q = html_text.find(TITLE_END_PAT)
title = html_text[p + len(TITLE_START_PAT):q]
title = title.strip()
return title
def ext_date_time(html_text):
"""
    Extract the date and time from the HTML article and return them.
:param html_text:
:return:
"""
start_p = html_text.find(DATE_TIME_START_PAT)+len(DATE_TIME_START_PAT)
end_p = start_p + 10
date_time = html_text[start_p:end_p]
date_time = date_time.strip()
return date_time
def strip_html(html_body):
"""
    Remove the HTML tags from the HTML body and return the text.
:param html_body:
:return:
"""
page = bs4.BeautifulSoup(html_body, "html.parser")
body = page.text
return body
def tidyup(body):
"""
    Trim the unneeded parts from the body and return it.
:param body:
:return:
"""
p = body.find(TIDYUP_START_PAT)
body = body[:p]
body = body.strip()
return body
def ext_body(html_text):
"""
    Extract the body from the HTML article and return it.
:param html_text:
:return:
"""
p = html_text.find(BODY_START_PAT)
q = html_text.find(BODY_END_PAT)
html_body = html_text[p + len(BODY_START_PAT):q]
html_body = html_body.replace("<br />","\n")
html_body = html_body.strip()
body = strip_html(html_body)
body = tidyup(body)
return body
def write_article(text_file, title, date_time, body):
"""
    Write the article, with its fields delimited, to the text file.
:param text_file:
:param title:
:param date_time:
:param body:
:return:
"""
text_file.write("{}\n".format(title))
text_file.write("{}\n".format(date_time))
text_file.write("{}\n".format(body))
text_file.write("{}\n".format(ARTICLE_DELIMITER))
# Collect Daum news articles with the newspaper3k package.
# !pip install newspaper3k
from newspaper import Article

'''
http://v.media.daum.net/v/20190513202543774
http://v.media.daum.net/v/20190513202526771
http://v.media.daum.net/v/20190513202442768
http://v.media.daum.net/v/20190513202100733
http://v.media.daum.net/v/20190513201951713
http://v.media.daum.net/v/20190513201912711
http://v.media.daum.net/v/20190513201708688
http://v.media.daum.net/v/20190513201646686
http://v.media.daum.net/v/20190513201515670
http://v.media.daum.net/v/20190513201343654
http://v.media.daum.net/v/20190513201042627
http://v.media.daum.net/v/20190513200900613
http://v.media.daum.net/v/20190513200731602
http://v.media.daum.net/v/20190513200601595
http://v.media.daum.net/v/20190513200601594
http://v.media.daum.net/v/20190513201012624
http://v.media.daum.net/v/20190513200300564
'''

# Download and parse a single article, then save its text.
url = 'http://v.media.daum.net/v/20190513202526771'
a = Article(url, language='ko')
a.download()
a.parse()
print(a.title)
print(a.text)
with open("F:/daumnews/sports/02.txt", "w") as f:
    f.write(a.text)
f.close()

# Collect links from the culture section and save each article to its own file.
from bs4 import BeautifulSoup
import requests

html = download("https://media.daum.net/breakingnews/culture")
daumnews = BeautifulSoup(html.text, "lxml")
daumnewstitellists = daumnews.select("div > strong > a")
k = []
t = 18
for links in daumnewstitellists:
    l = links.get('href')
    k.append(l)
for i in range(0, 17):
    url = k[i]
    a = Article(url, language='ko')
    a.download()
    a.parse()
    with open("F:/daumnews/culture/%d.txt" % int(i + t), "w", encoding="utf-8") as f:
        f.write(a.title)
        f.write(a.text)
    f.close()

# List the article links from the sports section.
html = download("https://media.daum.net/breakingnews/sports")
daumnews = BeautifulSoup(html.text, "lxml")
daumnewstitellists = daumnews.select("div > strong > a")
for links in daumnewstitellists:
    #print(links.text)
    print(links.get('href'))

# Save a sports article to a file.
html = download("https://media.daum.net/breakingnews/sports")
daumnews = BeautifulSoup(html.text, "lxml")
daumnewstitellists = daumnews.select("div > strong > a")
for links in daumnewstitellists:
    b = links.get('href')
    a = Article(b, language='ko')
    a.download()
    a.parse()
    with open("F:/daumnews/sports/01.txt", "w") as f:
        f.write(a.text)
    f.close()

def main():
"""
    Extract plain-text articles from Nate news article HTML.
:return:
"""
html_file_name = get_html_file_name()
text_file_name = get_text_file_name()
html_file = open_html_file(html_file_name)
text_file = create_text_file(text_file_name)
while True:
html_text = read_html_article(html_file)
if not html_text:
break
title = ext_title(html_text)
date_time = ext_date_time(html_text)
body = ext_body(html_text)
write_article(text_file, title, date_time, body)
html_file.close()
text_file.close()
main()
```
| github_jupyter |
```
%matplotlib inline
```
# Species distribution modeling
Modeling species' geographic distributions is an important
problem in conservation biology. In this example we
model the geographic distribution of two South American
mammals given past observations and 14 environmental
variables. Since we have only positive examples (there are
no unsuccessful observations), we cast this problem as a
density estimation problem and use the :class:`sklearn.svm.OneClassSVM`
as our modeling tool. The dataset is provided by Phillips et al. (2006).
If available, the example uses
`basemap <https://matplotlib.org/basemap/>`_
to plot the coast lines and national boundaries of South America.
The two species are:
- `"Bradypus variegatus"
<http://www.iucnredlist.org/details/3038/0>`_ ,
the Brown-throated Sloth.
- `"Microryzomys minutus"
<http://www.iucnredlist.org/details/13408/0>`_ ,
also known as the Forest Small Rice Rat, a rodent that lives in Peru,
 Colombia, Ecuador, and Venezuela.
References
----------
* `"Maximum entropy modeling of species geographic distributions"
<http://rob.schapire.net/papers/ecolmod.pdf>`_
S. J. Phillips, R. P. Anderson, R. E. Schapire - Ecological Modelling,
190:231-259, 2006.
```
# Authors: Peter Prettenhofer <peter.prettenhofer@gmail.com>
# Jake Vanderplas <vanderplas@astro.washington.edu>
#
# License: BSD 3 clause
from time import time
import numpy as np
import matplotlib.pyplot as plt
from sklearn.utils import Bunch
from sklearn.datasets import fetch_species_distributions
from sklearn import svm, metrics
# if basemap is available, we'll use it.
# otherwise, we'll improvise later...
try:
from mpl_toolkits.basemap import Basemap
basemap = True
except ImportError:
basemap = False
print(__doc__)
def construct_grids(batch):
"""Construct the map grid from the batch object
Parameters
----------
batch : Batch object
The object returned by :func:`fetch_species_distributions`
Returns
-------
(xgrid, ygrid) : 1-D arrays
The grid corresponding to the values in batch.coverages
"""
# x,y coordinates for corner cells
xmin = batch.x_left_lower_corner + batch.grid_size
xmax = xmin + (batch.Nx * batch.grid_size)
ymin = batch.y_left_lower_corner + batch.grid_size
ymax = ymin + (batch.Ny * batch.grid_size)
# x coordinates of the grid cells
xgrid = np.arange(xmin, xmax, batch.grid_size)
# y coordinates of the grid cells
ygrid = np.arange(ymin, ymax, batch.grid_size)
return (xgrid, ygrid)
def create_species_bunch(species_name, train, test, coverages, xgrid, ygrid):
"""Create a bunch with information about a particular organism
This will use the test/train record arrays to extract the
data specific to the given species name.
"""
bunch = Bunch(name=' '.join(species_name.split("_")[:2]))
species_name = species_name.encode('ascii')
points = dict(test=test, train=train)
for label, pts in points.items():
# choose points associated with the desired species
pts = pts[pts['species'] == species_name]
bunch['pts_%s' % label] = pts
# determine coverage values for each of the training & testing points
ix = np.searchsorted(xgrid, pts['dd long'])
iy = np.searchsorted(ygrid, pts['dd lat'])
bunch['cov_%s' % label] = coverages[:, -iy, ix].T
return bunch
def plot_species_distribution(species=("bradypus_variegatus_0",
"microryzomys_minutus_0")):
"""
Plot the species distribution.
"""
if len(species) > 2:
print("Note: when more than two species are provided,"
" only the first two will be used")
t0 = time()
# Load the compressed data
data = fetch_species_distributions()
# Set up the data grid
xgrid, ygrid = construct_grids(data)
# The grid in x,y coordinates
X, Y = np.meshgrid(xgrid, ygrid[::-1])
# create a bunch for each species
BV_bunch = create_species_bunch(species[0],
data.train, data.test,
data.coverages, xgrid, ygrid)
MM_bunch = create_species_bunch(species[1],
data.train, data.test,
data.coverages, xgrid, ygrid)
# background points (grid coordinates) for evaluation
np.random.seed(13)
background_points = np.c_[np.random.randint(low=0, high=data.Ny,
size=10000),
np.random.randint(low=0, high=data.Nx,
size=10000)].T
# We'll make use of the fact that coverages[6] has measurements at all
# land points. This will help us decide between land and water.
land_reference = data.coverages[6]
# Fit, predict, and plot for each species.
for i, species in enumerate([BV_bunch, MM_bunch]):
print("_" * 80)
print("Modeling distribution of species '%s'" % species.name)
# Standardize features
mean = species.cov_train.mean(axis=0)
std = species.cov_train.std(axis=0)
train_cover_std = (species.cov_train - mean) / std
# Fit OneClassSVM
print(" - fit OneClassSVM ... ", end='')
clf = svm.OneClassSVM(nu=0.1, kernel="rbf", gamma=0.5)
clf.fit(train_cover_std)
print("done.")
# Plot map of South America
plt.subplot(1, 2, i + 1)
if basemap:
print(" - plot coastlines using basemap")
m = Basemap(projection='cyl', llcrnrlat=Y.min(),
urcrnrlat=Y.max(), llcrnrlon=X.min(),
urcrnrlon=X.max(), resolution='c')
m.drawcoastlines()
m.drawcountries()
else:
print(" - plot coastlines from coverage")
plt.contour(X, Y, land_reference,
levels=[-9998], colors="k",
linestyles="solid")
plt.xticks([])
plt.yticks([])
print(" - predict species distribution")
# Predict species distribution using the training data
Z = np.ones((data.Ny, data.Nx), dtype=np.float64)
# We'll predict only for the land points.
idx = np.where(land_reference > -9999)
coverages_land = data.coverages[:, idx[0], idx[1]].T
pred = clf.decision_function((coverages_land - mean) / std)
Z *= pred.min()
Z[idx[0], idx[1]] = pred
levels = np.linspace(Z.min(), Z.max(), 25)
Z[land_reference == -9999] = -9999
# plot contours of the prediction
plt.contourf(X, Y, Z, levels=levels, cmap=plt.cm.Reds)
plt.colorbar(format='%.2f')
# scatter training/testing points
plt.scatter(species.pts_train['dd long'], species.pts_train['dd lat'],
s=2 ** 2, c='black',
marker='^', label='train')
plt.scatter(species.pts_test['dd long'], species.pts_test['dd lat'],
s=2 ** 2, c='black',
marker='x', label='test')
plt.legend()
plt.title(species.name)
plt.axis('equal')
# Compute AUC with regards to background points
pred_background = Z[background_points[0], background_points[1]]
pred_test = clf.decision_function((species.cov_test - mean) / std)
scores = np.r_[pred_test, pred_background]
y = np.r_[np.ones(pred_test.shape), np.zeros(pred_background.shape)]
fpr, tpr, thresholds = metrics.roc_curve(y, scores)
roc_auc = metrics.auc(fpr, tpr)
plt.text(-35, -70, "AUC: %.3f" % roc_auc, ha="right")
print("\n Area under the ROC curve : %f" % roc_auc)
print("\ntime elapsed: %.2fs" % (time() - t0))
plot_species_distribution()
plt.show()
```
| github_jupyter |
```
%pushd ../../
%env CUDA_VISIBLE_DEVICES=3
import json
import logging
import os
import sys
import tempfile
from tqdm.auto import tqdm
import torch
import torchvision
from torchvision import transforms
from PIL import Image
import numpy as np
torch.cuda.set_device(0)
from netdissect import setting
segopts = 'netpqc'
segmodel, seglabels, _ = setting.load_segmenter(segopts)
segmodel.get_label_and_category_names()
!ls notebooks/stats/churches
import glob
ns = []
for f in glob.glob('/data/vision/torralba/ganprojects/placesgan/tracer/utils/samples/domes/*.png'):
ns.append(int(os.path.split(f)[1][6:][:-4]))
ns = sorted(ns)
label2idx = {l: i for i, l in enumerate(seglabels)}
label2idx['dome']
label2idx['building']
label2idx['tree']
class Dataset():
def __init__(self, before, before_prefix, after, after_prefix, device='cpu'):
self.before = before
self.before_prefix = before_prefix
self.after = after
self.after_prefix = after_prefix
self.device = device
def __getitem__(self, key):
before_seg = torch.load(os.path.join(self.before, f'{self.before_prefix}{key}.pth'), map_location=self.device)
after_seg = torch.load(os.path.join(self.after, f'{self.after_prefix}{key}.pth'), map_location=self.device)
mapped = after_seg.permute(1, 2, 0)[(before_seg == 1708).sum(0).nonzero(as_tuple=True)]
assert mapped.shape[1] == 6
return (mapped == 5).sum(), mapped.shape[0]
class Sampler(torch.utils.data.Sampler):
def __init__(self, indices):
self.indices = indices
def __len__(self):
return len(self.indices)
def __iter__(self):
yield from self.indices
def compute(before, before_pref, after, after_pref, tgt=5, tgtc=0, src=1708, srcc=2, ns=ns):
total = 0
count = 0
import time
for subn in tqdm(torch.as_tensor(ns).split(100)):
t0 = time.time()
before_segs = [
torch.load(os.path.join(before, f'{before_pref}{n}.pth'), map_location='cpu') for n in subn]
after_segs = [
torch.load(os.path.join(after, f'{after_pref}{n}.pth'), map_location='cpu') for n in subn]
t1 = time.time()
before_segs = torch.stack(before_segs).cuda()
after_segs = torch.stack(after_segs).cuda()
mapped = after_segs[:, tgtc][before_segs[:, srcc] == src]
t2 = time.time()
total += (mapped == tgt).sum()
count += mapped.shape[0]
print(total, count, t1-t0,t2-t1)
return total.item(), count
before = 'notebooks/stats/churches/domes'
before_pref = 'domes_'
after = 'notebooks/stats/churches/dome2tree/ours'
after_pref = 'dome2tree_'
dome2tree_ours = compute(before, before_pref, after, after_pref, tgt=4)
before = 'notebooks/stats/churches/domes'
before_pref = 'domes_'
after = 'notebooks/stats/churches/dome2tree/overfit'
after_pref = 'image_'
dome2tree_overfit = compute(before, before_pref, after, after_pref, tgt=4)
before = 'notebooks/stats/churches/church'
before_pref = 'church_'
after = 'notebooks/stats/churches/dome2tree_all/ours'
after_pref = 'dome2tree_all_'
dome2tree_all_ours = compute(before, before_pref, after, after_pref, ns=torch.arange(10000))
dome2tree_all_overfit[0] / dome2tree_all_overfit[1]
!ls /data/vision/torralba/ganprojects/placesgan/tracer/results/ablations/stylegan-church-dome2tree-8-1-2001-0.0001-overfit
Image.open('/data/vision/torralba/ganprojects/placesgan/tracer/utils/samples/church/church_1.png')
Image.open('/data/vision/torralba/ganprojects/placesgan/tracer/utils/samples/dome2spire_all/dome2spire_all_1.png')
Image.open('/data/vision/torralba/distillation/gan_rewriting/results/ablations/stylegan-church-dome2spire-8-10-2001-0.05-ours-10-stdcovariance/images/image_0.png')
before = 'notebooks/stats/churches/church'
before_pref = 'church_'
after = 'notebooks/stats/churches/dome2tree_all/overfit'
after_pref = 'image_'
dome2tree_all_overfit = compute(before, before_pref, after, after_pref, ns=torch.arange(10000), tgt=4)
before = 'notebooks/stats/churches/domes'
before_pref = 'domes_'
after = 'notebooks/stats/churches/dome2spire/ours'
after_pref = 'dome2spire_'
all_mapped = []
total = 0
count = 0
import time
for subn in tqdm(torch.as_tensor(ns).split(100)):
t0 = time.time()
before_segs = [
torch.load(os.path.join(before, f'{before_pref}{n}.pth'), map_location='cpu') for n in subn]
after_segs = [
torch.load(os.path.join(after, f'{after_pref}{n}.pth'), map_location='cpu') for n in subn]
t1 = time.time()
before_segs = torch.stack(before_segs).cuda()
after_segs = torch.stack(after_segs).cuda()
# mapped = after_segs.permute(0, 2, 3, 1)[before_segs[:, 2] == 1708]
mapped = after_segs[:, 0][before_segs[:, 2] == 1708]
# all_mapped.append()
t2 = time.time()
total += (mapped == 5).sum()
count += mapped.shape[0]
print(total, count, t1-t0,t2-t1)
before = 'notebooks/stats/churches/domes'
before_pref = 'domes_'
after = 'notebooks/stats/churches/dome2spire/ours'
after_pref = 'dome2spire_'
dataset = Dataset(before, before_pref, after, after_pref)
def wif(*args):
torch.set_num_threads(8)
def cfn(l):
return torch.stack([p[0] for p in l]).sum(), sum(p[1] for p in l)
loader = torch.utils.data.DataLoader(dataset, num_workers=10, batch_size=50, sampler=Sampler(ns), collate_fn=cfn, worker_init_fn=wif)
all_mapped = []
for mapped in tqdm(loader):
all_mapped.append(mapped)
after_seg.permute(1, 2, 0)[(before_seg == 1708).to(torch.int64).sum(0).nonzero(as_tuple=True)].shape
!ls notebooks/stats/churches/dome2spire/ours
class UnsupervisedImageFolder(torchvision.datasets.ImageFolder):
def __init__(self, root, transform=None, max_size=None, get_path=False):
self.temp_dir = tempfile.TemporaryDirectory()
os.symlink(root, os.path.join(self.temp_dir.name, 'dummy'))
root = self.temp_dir.name
super().__init__(root, transform=transform)
self.get_path = get_path
self.perm = None
if max_size is not None:
actual_size = super().__len__()
if actual_size > max_size:
self.perm = torch.randperm(actual_size)[:max_size].clone()
logging.info(f"{root} has {actual_size} images, downsample to {max_size}")
else:
logging.info(f"{root} has {actual_size} images <= max_size={max_size}")
def _find_classes(self, dir):
return ['./dummy'], {'./dummy': 0}
def __getitem__(self, key):
if self.perm is not None:
key = self.perm[key].item()
sample = super().__getitem__(key)[0]
if self.get_path:
path, _ = self.samples[key]
return sample, path
else:
return sample
def __len__(self):
if self.perm is not None:
return self.perm.size(0)
else:
return super().__len__()
len(seglabels)
transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])
def process(img_path, seg_path, device='cuda', batch_size=128, **kwargs):
os.makedirs(seg_path, exist_ok=True)
dataset = UnsupervisedImageFolder(img_path, transform=transform, get_path=True)
loader = torch.utils.data.DataLoader(dataset, num_workers=24, batch_size=batch_size, pin_memory=True)
with torch.no_grad():
for x, paths in tqdm(loader):
segs = segmodel.segment_batch(x.to(device), **kwargs).detach().cpu()
for path, seg in zip(paths, segs):
k = os.path.splitext(os.path.basename(path))[0]
torch.save(seg, os.path.join(seg_path, k + '.pth'))
del segs
import glob
torch.backends.cudnn.benchmark=True
process(
'/data/vision/torralba/ganprojects/placesgan/tracer/utils/samples/domes',
'churches/domes',
batch_size=12)
process(
'/data/vision/torralba/ganprojects/placesgan/tracer/utils/samples/dome2tree',
'churches/dome2tree/ours',
batch_size=8)
process(
'/data/vision/torralba/ganprojects/placesgan/tracer/utils/samples/dome2spire',
'churches/dome2spire/ours',
batch_size=8)
```
| github_jupyter |
# ML Project 6033657523 - Feedforward neural network
## Importing the libraries
```
from sklearn.metrics import mean_absolute_error
from sklearn.svm import SVR
from sklearn.model_selection import KFold, train_test_split
from math import sqrt
import pandas as pd
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_error
import matplotlib.pyplot as plt
```
## Importing the cleaned dataset
```
dataset = pd.read_csv('cleanData_Final.csv')
X = dataset[['PrevAVGCost', 'PrevAssignedCost', 'AVGCost', 'LatestDateCost', 'A', 'B', 'C', 'D', 'E', 'F', 'G']]
y = dataset['GenPrice']
X
```
## Splitting the dataset into the Training set and Test set
```
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)
```
## Feedforward neural network
### Fitting Feedforward neural network to the Training Set
```
from sklearn.neural_network import MLPRegressor
regressor = MLPRegressor(hidden_layer_sizes = (200, 200, 200, 200, 200), activation = 'relu', solver = 'adam', max_iter = 500, learning_rate = 'adaptive')
regressor.fit(X_train, y_train)
trainSet = pd.concat([X_train, y_train], axis = 1)
trainSet.head()
```
## Evaluate model accuracy
```
y_pred = regressor.predict(X_test)
y_pred
testSet = pd.concat([X_test, y_test], axis = 1)
testSet.head()
```
Compare GenPrice with PredictedGenPrice
```
datasetPredict = pd.concat([testSet.reset_index(), pd.Series(y_pred, name = 'PredictedGenPrice')], axis = 1).round(2)
datasetPredict.head(10)
datasetPredict.corr()
print("Training set accuracy = " + str(regressor.score(X_train, y_train)))
print("Test set accuracy = " + str(regressor.score(X_test, y_test)))
```
Training set accuracy = 0.9885445650077587<br>
Test set accuracy = 0.9829187423043221
### MSE
```
from sklearn import metrics
print('MSE:', metrics.mean_squared_error(y_test, y_pred))
```
MSE v1: 177.15763887557458<br>
MSE v2: 165.73161615532584<br>
MSE v3: 172.98494783761967
### MAPE
```
def mean_absolute_percentage_error(y_test, y_pred):
y_test, y_pred = np.array(y_test), np.array(y_pred)
return np.mean(np.abs((y_test - y_pred)/y_test)) * 100
print('MAPE:', mean_absolute_percentage_error(y_test, y_pred))
```
MAPE v1: 6.706572320387714<br>
MAPE v2: 6.926678067146115<br>
MAPE v3: 7.34081953098462
### Visualize
```
import matplotlib.pyplot as plt
plt.plot([i for i in range(len(y_pred))], y_pred, color = 'r')
plt.scatter([i for i in range(len(y_pred))], y_test, color = 'b')
plt.ylabel('Price')
plt.xlabel('Index')
plt.legend(['Predict', 'True'], loc = 'best')
plt.show()
```
| github_jupyter |
# PyFunc Model + Transformer Example
This notebook demonstrates how to deploy a Python function based model and a custom transformer. This type of model is useful because the user can define their own logic inside the model, as long as it satisfies the contract given in `merlin.PyFuncModel`. If the pre/post-processing steps can be implemented in Python, it is encouraged to write them in the PyFunc model code instead of separating them into another transformer.
The model we are going to develop and deploy is a cifar10 model that accepts a tensor input. The transformer has a preprocessing step that lets the user send raw image data, which it converts into the tensor input the model expects.
## Requirements
- Authenticated to gcloud (```gcloud auth application-default login```)
```
!pip install --upgrade -r requirements.txt > /dev/null
import warnings
warnings.filterwarnings('ignore')
```
## 1. Initialize Merlin
### 1.1 Set Merlin Server
```
import merlin
MERLIN_URL = "<MERLIN_HOST>/api/merlin"
merlin.set_url(MERLIN_URL)
```
### 1.2 Set Active Project
`project` represents a project in real life. You may have multiple models within a project.
`merlin.set_project(<project-name>)` sets the active project to the name given by the argument. You can only set it to an existing project; if you would like to create a new project, please do so from the MLP UI.
```
PROJECT_NAME = "sample"
merlin.set_project(PROJECT_NAME)
```
### 1.3 Set Active Model
`model` represents an abstract ML model. Conceptually, `model` in Merlin is similar to a class in a programming language. To instantiate a `model` you'll have to create a `model_version`.
Each `model` has a type; the model types currently supported by Merlin are: sklearn, xgboost, tensorflow, pytorch, and user-defined models (i.e. pyfunc models).
`model_version` represents a snapshot of a particular `model` iteration. You'll be able to attach information such as metrics and tags to a given `model_version`, as well as deploy it as a model service.
`merlin.set_model(<model_name>, <model_type>)` sets the active model to the given name; if no model with that name is found, a new model will be created.
```
from merlin.model import ModelType
MODEL_NAME = "transformer-pyfunc"
merlin.set_model(MODEL_NAME, ModelType.PYFUNC)
```
## 2. Train Model
In this step, we are going to train a cifar10 model using PyTorch and create a PyFunc model class that does the prediction using the trained PyTorch model.
### 2.1 Prepare Training Data
```
import torch
import torchvision
import torchvision.transforms as transforms
transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
shuffle=True, num_workers=2)
```
### 2.2 Create PyTorch Model
```
import torch.nn as nn
import torch.nn.functional as F
class PyTorchModel(nn.Module):
def __init__(self):
super(PyTorchModel, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16 * 5 * 5)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
```
### 2.3 Train Model
```
import torch.optim as optim
net = PyTorchModel()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
for epoch in range(2): # loop over the dataset multiple times
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
# get the inputs; data is a list of [inputs, labels]
inputs, labels = data
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.item()
if i % 2000 == 1999: # print every 2000 mini-batches
print('[%d, %5d] loss: %.3f' %
(epoch + 1, i + 1, running_loss / 2000))
running_loss = 0.0
```
### 2.4 Check Prediction
```
dataiter = iter(trainloader)
inputs, labels = next(dataiter)
predict_out = net(inputs[0:1])
predict_out
```
### 2.5 Serialize Model
```
import os
model_dir = "pytorch-model"
model_path = os.path.join(model_dir, "model.pt")
model_class_path = os.path.join(model_dir, "model.py")
torch.save(net.state_dict(), model_path)
```
### 2.6 Save PyTorchModel Class
We also need to save the PyTorchModel class and upload it to Merlin alongside the serialized trained model. The next cell will write the PyTorchModel we defined above to the `pytorch-model/model.py` file.
```
%%file pytorch-model/model.py
import torch.nn as nn
import torch.nn.functional as F
class PyTorchModel(nn.Module):
def __init__(self):
super(PyTorchModel, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16 * 5 * 5)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
```
## 3. Create PyFunc Model
To create a PyFunc model you'll have to extend the `merlin.PyFuncModel` class and implement its `initialize` and `infer` methods.
`initialize` is called once during model initialization. The argument to `initialize` is a dictionary containing key-value pairs of artifact names and their URLs. The artifact keys are the same as those received by `log_pyfunc_model`.
The `infer` method is the prediction method that needs to be implemented. It accepts a dictionary argument representing the incoming request body, and it should return a dictionary corresponding to the response body of the prediction result.
In the following example we create a PyFunc model called `CifarModel`. In its `initialize` method we expect two artifacts called `model_path` and `model_class_path`, which point to the serialized model and the PyTorch model class file. The `infer` method simply runs the model's prediction and returns the result.
```
import importlib
import sys
from merlin.model import PyFuncModel
MODEL_CLASS_NAME="PyTorchModel"
class CifarModel(PyFuncModel):
def initialize(self, artifacts):
model_path = artifacts["model_path"]
model_class_path = artifacts["model_class_path"]
# Load the python class into memory
sys.path.append(os.path.dirname(model_class_path))
modulename = os.path.basename(model_class_path).split('.')[0].replace('-', '_')
model_class = getattr(importlib.import_module(modulename), MODEL_CLASS_NAME)
# Make sure the model weight is transform with the right device in this machine
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
self._pytorch = model_class().to(device)
self._pytorch.load_state_dict(torch.load(model_path, map_location=device))
self._pytorch.eval()
def infer(self, request, **kwargs):
inputs = torch.tensor(request["instances"])
result = self._pytorch(inputs)
return {"predictions": result.tolist()}
```
Now, let's test it locally.
```
import json
with open(os.path.join("input-tensor.json"), "r") as f:
tensor_req = json.load(f)
m = CifarModel()
m.initialize({"model_path": model_path, "model_class_path": model_class_path})
m.infer(tensor_req)
```
## 4. Deploy Model
To deploy the model, we will have to create an iteration of the model (by creating a `model_version`), upload the serialized model to MLP, and then deploy it.
### 4.1 Create Model Version and Upload
`merlin.new_model_version()` is a convenient method to create a model version and start its development process. It is equivalent to the following code:
```
v = model.new_model_version()
v.start()
v.log_pyfunc_model(model_instance=EnsembleModel(),
conda_env="env.yaml",
artifacts={"xgb_model": model_1_path, "sklearn_model": model_2_path})
v.finish()
```
To upload a PyFunc model you have to provide the following arguments:
1. `model_instance` is the instance of the PyFunc model; it has to extend `merlin.PyFuncModel`
2. `conda_env` is the path to a conda environment yaml file. The environment yaml file must contain all dependencies required by the PyFunc model.
3. (Optional) `artifacts` are additional artifacts that you want to include in the model
4. (Optional) `code_path` is a list of directories containing python code that will be loaded during model initialization; this is required when `model_instance` depends on local python packages
```
with merlin.new_model_version() as v:
merlin.log_pyfunc_model(model_instance=CifarModel(),
conda_env="env.yaml",
artifacts={"model_path": model_path, "model_class_path": model_class_path})
```
### 4.2 Deploy Model and Transformer
To deploy a model and its transformer, you must pass a `transformer` object to the `deploy()` function. Each deployed model version will have its own generated URL.
```
from merlin.resource_request import ResourceRequest
from merlin.transformer import Transformer
# Create a transformer object and its resources requests
resource_request = ResourceRequest(min_replica=1, max_replica=1,
cpu_request="100m", memory_request="200Mi")
transformer = Transformer("gcr.io/kubeflow-ci/kfserving/image-transformer:latest",
resource_request=resource_request)
endpoint = merlin.deploy(v, transformer=transformer)
```
### 4.3 Send Test Request
```
import json
import requests
with open(os.path.join("input-raw-image.json"), "r") as f:
req = json.load(f)
resp = requests.post(endpoint.url, json=req)
resp.text
```
## 5. Clean Up
### 5.1 Delete Deployment
```
merlin.undeploy(v)
```
| github_jupyter |
# CIFAR-10 PROJECT
## CARLOS CABAÑÓ
## 1. Libraries
We load the libraries for array handling and Keras preprocessing
```
from tensorflow import keras as ks
from matplotlib import pyplot as plt
import numpy as np
import time
import datetime
import random
from sklearn.preprocessing import LabelEncoder
from tensorflow.keras.regularizers import l2
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.preprocessing.image import ImageDataGenerator
```
## 2. Model network architecture
We adopt the architecture from model 11 with the adjustments to Batch Normalization, Kernel Regularizer and Kernel Initializer. We add Batch Normalization to the convolutional layers.
```
model = ks.Sequential()
model.add(ks.layers.Conv2D(64, (3, 3), strides=1, activation='relu', kernel_regularizer=l2(0.0005), kernel_initializer="he_uniform", padding='same', input_shape=(32,32,3)))
model.add(ks.layers.BatchNormalization())
model.add(ks.layers.Conv2D(64, (3, 3), strides=1, activation='relu', kernel_regularizer=l2(0.0005), kernel_initializer="he_uniform", padding='same'))
model.add(ks.layers.BatchNormalization())
model.add(ks.layers.MaxPooling2D((2, 2)))
model.add(ks.layers.Dropout(0.2))
model.add(ks.layers.Conv2D(128, (3, 3), strides=1, activation='relu', kernel_regularizer=l2(0.0005), kernel_initializer="he_uniform", padding='same'))
model.add(ks.layers.BatchNormalization())
model.add(ks.layers.Conv2D(128, (3, 3), strides=1, activation='relu', kernel_regularizer=l2(0.0005), kernel_initializer="he_uniform", padding='same'))
model.add(ks.layers.BatchNormalization())
model.add(ks.layers.MaxPooling2D(pool_size=(2, 2)))
model.add(ks.layers.Dropout(0.2))
model.add(ks.layers.Conv2D(256, (3, 3), strides=1, activation='relu', kernel_regularizer=l2(0.0005), kernel_initializer="he_uniform", padding='same'))
model.add(ks.layers.BatchNormalization())
model.add(ks.layers.Conv2D(256, (3, 3), strides=1, activation='relu', kernel_regularizer=l2(0.0005), kernel_initializer="he_uniform", padding='same'))
model.add(ks.layers.BatchNormalization())
model.add(ks.layers.Conv2D(256, (3, 3), strides=1, activation='relu', kernel_regularizer=l2(0.0005), kernel_initializer="he_uniform", padding='same'))
model.add(ks.layers.BatchNormalization())
model.add(ks.layers.MaxPooling2D(pool_size=(2, 2)))
model.add(ks.layers.Dropout(0.2))
model.add(ks.layers.Conv2D(512, (3, 3), strides=1, activation='relu', kernel_regularizer=l2(0.0005), kernel_initializer="he_uniform", padding='same'))
model.add(ks.layers.BatchNormalization())
model.add(ks.layers.Conv2D(512, (3, 3), strides=1, activation='relu', kernel_regularizer=l2(0.0005), kernel_initializer="he_uniform", padding='same'))
model.add(ks.layers.BatchNormalization())
model.add(ks.layers.Conv2D(512, (3, 3), strides=1, activation='relu', kernel_regularizer=l2(0.0005), kernel_initializer="he_uniform", padding='same'))
model.add(ks.layers.BatchNormalization())
model.add(ks.layers.Conv2D(512, (3, 3), strides=1, activation='relu', kernel_regularizer=l2(0.0005), kernel_initializer="he_uniform", padding='same'))
model.add(ks.layers.BatchNormalization())
model.add(ks.layers.MaxPooling2D(pool_size=(2, 2)))
model.add(ks.layers.Conv2D(512, (3, 3), strides=1, activation='relu', kernel_regularizer=l2(0.0005), kernel_initializer="he_uniform", padding='same'))
model.add(ks.layers.BatchNormalization())
model.add(ks.layers.Conv2D(512, (3, 3), strides=1, activation='relu', kernel_regularizer=l2(0.0005), kernel_initializer="he_uniform", padding='same'))
model.add(ks.layers.BatchNormalization())
model.add(ks.layers.Dropout(0.3))
model.add(ks.layers.Flatten())
model.add(ks.layers.Dense(512, activation='relu', kernel_regularizer=l2(0.001), kernel_initializer="he_uniform"))
model.add(ks.layers.BatchNormalization())
model.add(ks.layers.Dropout(0.4))
model.add(ks.layers.Dense(512, activation='relu', kernel_regularizer=l2(0.001), kernel_initializer="he_uniform"))
model.add(ks.layers.BatchNormalization())
model.add(ks.layers.Dropout(0.5))
model.add(ks.layers.Dense(10, activation='softmax'))
model.summary()
```
## 3. Optimizer and loss function
We add the learning rate to the optimizer
```
from keras.optimizers import SGD
model.compile(optimizer=SGD(lr=0.001, momentum=0.9),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
```
## 4. Preparing the data
```
cifar10 = ks.datasets.cifar10
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
cifar10_labels = [
'airplane', # id 0
'automobile',
'bird',
'cat',
'deer',
'dog',
'frog',
'horse',
'ship',
'truck',
]
print('Number of labels: %s' % len(cifar10_labels))
```
Let's plot a sample of the images from the CIFAR10 dataset:
```
# Plot a sample of the images from the CIFAR10 dataset
print('Train: X=%s, y=%s' % (x_train.shape, y_train.shape))
print('Test: X=%s, y=%s' % (x_test.shape, y_test.shape))
for i in range(9):
plt.subplot(330 + 1 + i)
plt.imshow(x_train[i], cmap=plt.get_cmap('gray'))
plt.title(cifar10_labels[y_train[i,0]])
plt.subplots_adjust(hspace = 1)
plt.show()
```
We validate at the same time as we train, so we split off a validation set:
```
x_val = x_train[-10000:]
y_val = y_train[-10000:]
x_train = x_train[:-10000]
y_train = y_train[:-10000]
```
We encode the class labels for classification
```
le = LabelEncoder()
le.fit(y_train.ravel())
y_train_encoded = le.transform(y_train.ravel())
y_val_encoded = le.transform(y_val.ravel())
y_test_encoded = le.transform(y_test.ravel())
```
## 5. Adjustments: Early Stopping
We define early stopping callbacks based on the validation loss and the validation accuracy, with the "patience" parameter set to 5 and 10 epochs respectively to give some margin. Early stopping lets us halt training at the optimal point, so the model does not keep training once overfitting sets in.
```
callback_val_loss = EarlyStopping(monitor="val_loss", patience=5)
callback_val_accuracy = EarlyStopping(monitor="val_accuracy", patience=10)
```
## 6. Image transformer
### 6.1 Training images
```
train_datagen = ImageDataGenerator(
horizontal_flip=True,
width_shift_range=0.2,
height_shift_range=0.2,
)
train_generator = train_datagen.flow(
x_train,
y_train_encoded,
batch_size=64
)
```
### 6.2 Validation and test images
```
validation_datagen = ImageDataGenerator(
horizontal_flip=True,
width_shift_range=0.2,
height_shift_range=0.2,
)
validation_generator = validation_datagen.flow(
x_val,
y_val_encoded,
batch_size=64
)
test_datagen = ImageDataGenerator(
horizontal_flip=True,
width_shift_range=0.2,
height_shift_range=0.2,
)
test_generator = test_datagen.flow(
x_test,
y_test_encoded,
batch_size=64
)
```
### 6.3 Data generator
```
sample = random.choice(range(0,1457))
image = x_train[sample]
plt.imshow(image, cmap=plt.cm.binary)
sample = random.choice(range(0,1457))
example_generator = train_datagen.flow(
x_train[sample:sample+1],
y_train_encoded[sample:sample+1],
batch_size=64
)
plt.figure(figsize=(12, 12))
for i in range(0, 15):
plt.subplot(5, 3, i+1)
for X, Y in example_generator:
image = X[0]
plt.imshow(image)
break
plt.tight_layout()
plt.show()
```
## 7. Training
```
t = time.perf_counter()
steps=int(x_train.shape[0]/64)
history = model.fit(train_generator, epochs=100, use_multiprocessing=False, batch_size= 64, validation_data=validation_generator, steps_per_epoch=steps, callbacks=[callback_val_loss, callback_val_accuracy])
elapsed_time = datetime.timedelta(seconds=(time.perf_counter() - t))
print('Training time:', elapsed_time)
```
## 8. Evaluating the results
```
_, acc = model.evaluate(x_test, y_test_encoded, verbose=0)
print('> %.3f' % (acc * 100.0))
plt.title('Cross Entropy Loss')
plt.plot(history.history['loss'], color='blue', label='train')
plt.plot(history.history['val_loss'], color='orange', label='test')
plt.show()
plt.title('Classification Accuracy')
plt.plot(history.history['accuracy'], color='blue', label='train')
plt.plot(history.history['val_accuracy'], color='orange', label='test')
plt.show()
predictions = model.predict(x_test)
def plot_image(i, predictions_array, true_label, img):
predictions_array, true_label, img = predictions_array, true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'blue'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(predicted_label,
100*np.max(predictions_array),
true_label[0]),
color=color)
def plot_value_array(i, predictions_array, true_label):
predictions_array, true_label = predictions_array, true_label[i]
plt.grid(False)
plt.xticks(range(10))
plt.yticks([])
thisplot = plt.bar(range(10), predictions_array, color="#777777")
plt.ylim([0, 1])
predicted_label = np.argmax(predictions_array)
thisplot[predicted_label].set_color('red')
thisplot[true_label[0]].set_color('blue')
```
Let's draw the first images:
```
i = 0
for l in cifar10_labels:
print(i, l)
i += 1
num_rows = 5
num_cols = 4
start = 650
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
plot_image(i+start, predictions[i+start], y_test, x_test)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
plot_value_array(i+start, predictions[i+start], y_test)
plt.tight_layout()
plt.show()
```
| github_jupyter |
# NumPy arrays
Nikolay Koldunov
koldunovn@gmail.com
This is part of [**Python for Geosciences**](https://github.com/koldunovn/python_for_geosciences) notes.
================
<img height="100" src="files/numpy.png" >
- a powerful N-dimensional array object
- sophisticated (broadcasting) functions
- tools for integrating C/C++ and Fortran code
- useful linear algebra, Fourier transform, and random number capabilities
```
#allow graphics inline
%matplotlib inline
import matplotlib.pylab as plt #import plotting library
import numpy as np #import numpy library
np.set_printoptions(precision=3) # this is just to make the output look better
```
## Load data
I am going to use some real data as an example of array manipulations. This will be the AO index downloaded by wget through a system call (you have to be on Linux of course):
```
!wget www.cpc.ncep.noaa.gov/products/precip/CWlink/daily_ao_index/monthly.ao.index.b50.current.ascii
```
This is what the data in the file looks like (we again use a system call, this time for the *head* command):
```
!head monthly.ao.index.b50.current.ascii
```
Load the data into a variable:
```
ao = np.loadtxt('monthly.ao.index.b50.current.ascii')
ao
ao.shape
```
So it's *row-major* order. Matlab and Fortran use *column-major* order for arrays.
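As a small aside (an illustrative example added here, not part of the original analysis), the difference in memory layout can be seen by flattening a tiny array in C order versus Fortran order:
```
demo = np.arange(6).reshape(2, 3)  # [[0, 1, 2], [3, 4, 5]]
demo.ravel(order='C')              # row-major walk: array([0, 1, 2, 3, 4, 5])
demo.ravel(order='F')              # column-major (Fortran/Matlab) walk: array([0, 3, 1, 4, 2, 5])
```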
```
type(ao)
```
NumPy arrays are statically typed, which allows faster operations
```
ao.dtype
```
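To get a feel for why the fixed dtype matters, here is a small timing comparison (illustrative only; exact numbers depend on your machine) of a Python-level sum versus the vectorized NumPy sum over the AO values:
```
%timeit sum(ao[:, 2])     # Python loop over array elements
%timeit np.sum(ao[:, 2])  # vectorized sum over the typed buffer
```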
You can't assign a value of a different type to an element of the NumPy array:
```
ao[0,0] = 'Year'
```
Slicing works similarly to Matlab:
```
ao[0:5,:]
```
One can look at the data. This is done with the matplotlib.pylab module that we imported at the beginning as `plt`. We will plot only the first 780 points:
```
plt.plot(ao[:780,2])
```
## Index slicing
In general it is similar to Matlab
The first 12 elements of the **second** column (months). Remember that indexing starts at 0:
```
ao[0:12,1]
```
First row:
```
ao[0,:]
```
We can create a mask, selecting all rows where the value in the second column (months) equals 10 (October):
```
mask = (ao[:,1]==10)
```
Here we apply this mask and show only the first 5 rows of the array:
```
ao[mask][:5,:]
```
You don't have to create a separate variable for the mask; you can apply it directly. Here, instead of the first five rows, I show the last five rows:
```
ao[ao[:,1]==10][-5:,:]
```
You can combine conditions. In this case we select October–December data (only the first 10 rows are shown):
```
ao[(ao[:,1]>=10)&(ao[:,1]<=12)][0:10,:]
```
You can assign values to a subset of the array (*this expression fixes the problem with a very small value at 2015-04*)
```
ao[ao<-10]=0
```
## Basic operations
Create an example array from the first 12 values of the second column and perform some basic operations:
```
months = ao[0:12,1]
months
months+10
months*20
months*months
```
## Basic statistics
Create *ao_values*, which will contain only the data values:
```
ao_values = ao[:,2]
```
Simple statistics:
```
ao_values.min()
ao_values.max()
ao_values.mean()
ao_values.std()
ao_values.sum()
```
You can also use the *np.sum* function:
```
np.sum(ao_values)
```
One can also perform operations on subsets:
```
np.mean(ao[ao[:,1]==1,2]) # January monthly mean
```
The result will be the same if we use the method on our selected data:
```
ao[ao[:,1]==1,2].mean()
```
## Saving data
You can save your data as a text file
```
np.savetxt('ao_only_values.csv',ao[:, 2], fmt='%.4f')
```
Head of resulting file:
```
!head ao_only_values.csv
```
You can also save it as binary:
```
f = open('ao_only_values.bin', 'wb')  # open in binary mode for tofile
ao[:,2].tofile(f)
f.close()
```
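To read the binary file back later you can use `np.fromfile` (a short sketch; it assumes the values were written as float64, which is the dtype of the array saved above):
```
ao_from_bin = np.fromfile('ao_only_values.bin', dtype=np.float64)
ao_from_bin[:5]
```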
| github_jupyter |
<img src="http://hilpisch.com/tpq_logo.png" alt="The Python Quants" width="35%" align="right" border="0"><br>
# Python for Finance (2nd ed.)
**Mastering Data-Driven Finance**
© Dr. Yves J. Hilpisch | The Python Quants GmbH
<img src="http://hilpisch.com/images/py4fi_2nd_shadow.png" width="300px" align="left">
# Data Analysis with pandas
## pandas Basics
### First Steps with DataFrame Class
```
import pandas as pd
df = pd.DataFrame([10, 20, 30, 40],
columns=['numbers'],
index=['a', 'b', 'c', 'd'])
df
df.index
df.columns
df.loc['c']
df.loc[['a', 'd']]
df.iloc[1:3]
df.sum()
df.apply(lambda x: x ** 2)
df ** 2
df['floats'] = (1.5, 2.5, 3.5, 4.5)
df
df['floats']
df['names'] = pd.DataFrame(['Yves', 'Sandra', 'Lilli', 'Henry'],
index=['d', 'a', 'b', 'c'])
df
df.append({'numbers': 100, 'floats': 5.75, 'names': 'Jil'},
ignore_index=True)
df = df.append(pd.DataFrame({'numbers': 100, 'floats': 5.75,
'names': 'Jil'}, index=['y',]))
df
df = df.append(pd.DataFrame({'names': 'Liz'}, index=['z',]), sort=False)
df
df.dtypes
df[['numbers', 'floats']].mean()
df[['numbers', 'floats']].std()
```
### Second Steps with DataFrame Class
```
import numpy as np
np.random.seed(100)
a = np.random.standard_normal((9, 4))
a
df = pd.DataFrame(a)
df
df.columns = ['No1', 'No2', 'No3', 'No4']
df
df['No2'].mean()
dates = pd.date_range('2019-1-1', periods=9, freq='M')
dates
df.index = dates
df
df.values
np.array(df)
```
## Basic Analytics
```
df.info()
df.describe()
df.sum()
df.mean()
df.mean(axis=0)
df.mean(axis=1)
df.cumsum()
np.mean(df)
# raises warning
np.log(df)
np.sqrt(abs(df))
np.sqrt(abs(df)).sum()
100 * df + 100
```
## Basic Visualization
```
from pylab import plt, mpl
plt.style.use('seaborn')
mpl.rcParams['font.family'] = 'serif'
%matplotlib inline
df.cumsum().plot(lw=2.0, figsize=(10, 6));
# plt.savefig('../../images/ch05/pd_plot_01.png')
df.plot.bar(figsize=(10, 6), rot=30);
# df.plot(kind='bar', figsize=(10, 6))
# plt.savefig('../../images/ch05/pd_plot_02.png')
```
## Series Class
```
type(df)
S = pd.Series(np.linspace(0, 15, 7), name='series')
S
type(S)
s = df['No1']
s
type(s)
s.mean()
s.plot(lw=2.0, figsize=(10, 6));
# plt.savefig('../../images/ch05/pd_plot_03.png')
```
## GroupBy Operations
```
df['Quarter'] = ['Q1', 'Q1', 'Q1', 'Q2', 'Q2',
'Q2', 'Q3', 'Q3', 'Q3']
df
groups = df.groupby('Quarter')
groups.size()
groups.mean()
groups.max()
groups.aggregate([min, max]).round(2)
df['Odd_Even'] = ['Odd', 'Even', 'Odd', 'Even', 'Odd', 'Even',
'Odd', 'Even', 'Odd']
groups = df.groupby(['Quarter', 'Odd_Even'])
groups.size()
groups[['No1', 'No4']].aggregate([sum, np.mean])
```
## Complex Selection
```
data = np.random.standard_normal((10, 2))
df = pd.DataFrame(data, columns=['x', 'y'])
df.info()
df.head()
df.tail()
df['x'] > 0.5
(df['x'] > 0) & (df['y'] < 0)
(df['x'] > 0) | (df['y'] < 0)
df[df['x'] > 0]
df.query('x > 0')
df[(df['x'] > 0) & (df['y'] < 0)]
df.query('x > 0 & y < 0')
df[(df.x > 0) | (df.y < 0)]
df > 0
df[df > 0]
```
## Concatenation, Joining and Merging
```
df1 = pd.DataFrame(['100', '200', '300', '400'],
index=['a', 'b', 'c', 'd'],
columns=['A',])
df1
df2 = pd.DataFrame(['200', '150', '50'],
index=['f', 'b', 'd'],
columns=['B',])
df2
```
#### Concatenation
```
df1.append(df2, sort=False)
df1.append(df2, ignore_index=True, sort=False)
pd.concat((df1, df2), sort=False)
pd.concat((df1, df2), ignore_index=True, sort=False)
```
#### Joining
```
df1.join(df2)
df2.join(df1)
df1.join(df2, how='left')
df1.join(df2, how='right')
df1.join(df2, how='inner')
df1.join(df2, how='outer')
df = pd.DataFrame()
df['A'] = df1['A']
df
df['B'] = df2
df
df = pd.DataFrame({'A': df1['A'], 'B': df2['B']})
df
```
#### Merging
```
c = pd.Series([250, 150, 50], index=['b', 'd', 'c'])
df1['C'] = c
df2['C'] = c
df1
df2
pd.merge(df1, df2)
pd.merge(df1, df2, on='C')
pd.merge(df1, df2, how='outer')
pd.merge(df1, df2, left_on='A', right_on='B')
pd.merge(df1, df2, left_on='A', right_on='B', how='outer')
pd.merge(df1, df2, left_index=True, right_index=True)
pd.merge(df1, df2, on='C', left_index=True)
pd.merge(df1, df2, on='C', right_index=True)
pd.merge(df1, df2, on='C', left_index=True, right_index=True)
```
## Performance Aspects
```
data = np.random.standard_normal((1000000, 2))
data.nbytes
df = pd.DataFrame(data, columns=['x', 'y'])
df.info()
%time res = df['x'] + df['y']
res[:3]
%time res = df.sum(axis=1)
res[:3]
%time res = df.values.sum(axis=1)
res[:3]
%time res = np.sum(df, axis=1)
res[:3]
%time res = np.sum(df.values, axis=1)
res[:3]
%time res = df.eval('x + y')
res[:3]
%time res = df.apply(lambda row: row['x'] + row['y'], axis=1)
res[:3]
```
<img src="http://hilpisch.com/tpq_logo.png" alt="The Python Quants" width="35%" align="right" border="0"><br>
<a href="http://tpq.io" target="_blank">http://tpq.io</a> | <a href="http://twitter.com/dyjh" target="_blank">@dyjh</a> | <a href="mailto:training@tpq.io">training@tpq.io</a>
| github_jupyter |
Deep Learning
=============
Assignment 4
------------
Previously in `2_fullyconnected.ipynb` and `3_regularization.ipynb`, we trained fully connected networks to classify [notMNIST](http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html) characters.
The goal of this assignment is to make the neural network convolutional.
```
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import time
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle
from six.moves import range
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
save = pickle.load(f)
train_dataset = save['train_dataset']
train_labels = save['train_labels']
valid_dataset = save['valid_dataset']
valid_labels = save['valid_labels']
test_dataset = save['test_dataset']
test_labels = save['test_labels']
del save # hint to help gc free up memory
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
```
Reformat into a TensorFlow-friendly shape:
- convolutions need the image data formatted as a cube (width by height by #channels)
- labels as float 1-hot encodings.
```
image_size = 28
num_labels = 10
num_channels = 1 # grayscale
import numpy as np
def reformat(dataset, labels):
dataset = dataset.reshape(
(-1, image_size, image_size, num_channels)).astype(np.float32)
labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
def accuracy(predictions, labels):
return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
/ predictions.shape[0])
```
Let's build a small network with two convolutional layers, followed by one fully connected layer. Convolutional networks are more computationally expensive, so we'll limit the network's depth and the number of fully connected nodes.
```
batch_size = 16
patch_size = 5
depth = 16
num_hidden = 64
graph = tf.Graph()
with graph.as_default():
# Input data.
tf_train_dataset = tf.placeholder(
tf.float32, shape=(batch_size, image_size, image_size, num_channels))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
layer1_weights = tf.Variable(tf.truncated_normal(
[patch_size, patch_size, num_channels, depth], stddev=0.1))
layer1_biases = tf.Variable(tf.zeros([depth]))
layer2_weights = tf.Variable(tf.truncated_normal(
[patch_size, patch_size, depth, depth], stddev=0.1))
layer2_biases = tf.Variable(tf.constant(1.0, shape=[depth]))
layer3_weights = tf.Variable(tf.truncated_normal(
[image_size // 4 * image_size // 4 * depth, num_hidden], stddev=0.1))
layer3_biases = tf.Variable(tf.constant(1.0, shape=[num_hidden]))
layer4_weights = tf.Variable(tf.truncated_normal(
[num_hidden, num_labels], stddev=0.1))
layer4_biases = tf.Variable(tf.constant(1.0, shape=[num_labels]))
# Model.
def model(data):
conv = tf.nn.conv2d(data, layer1_weights, [1, 2, 2, 1], padding='SAME')
hidden = tf.nn.relu(conv + layer1_biases)
conv = tf.nn.conv2d(hidden, layer2_weights, [1, 2, 2, 1], padding='SAME')
hidden = tf.nn.relu(conv + layer2_biases)
shape = hidden.get_shape().as_list()
reshape = tf.reshape(hidden, [shape[0], shape[1] * shape[2] * shape[3]])
hidden = tf.nn.relu(tf.matmul(reshape, layer3_weights) + layer3_biases)
return tf.matmul(hidden, layer4_weights) + layer4_biases
# Training computation.
logits = model(tf_train_dataset)
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.05).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(model(tf_valid_dataset))
test_prediction = tf.nn.softmax(model(tf_test_dataset))
num_steps = 1001
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print('Initialized')
for step in range(num_steps):
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
batch_data = train_dataset[offset:(offset + batch_size), :, :, :]
batch_labels = train_labels[offset:(offset + batch_size), :]
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 50 == 0):
print('Minibatch loss at step %d: %f' % (step, l))
print('Minibatch accuracy: %.1f%%' % accuracy(predictions, batch_labels))
print('Validation accuracy: %.1f%%' % accuracy(
valid_prediction.eval(), valid_labels))
print('Test accuracy: %.1f%%' % accuracy(test_prediction.eval(), test_labels))
```
---
Problem 1
---------
The convolutional model above uses convolutions with stride 2 to reduce the dimensionality. Replace the strides by a max pooling operation (`nn.max_pool()`) of stride 2 and kernel size 2.
---
```
# TODO
```
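One possible sketch for Problem 1 (not the only valid answer; it assumes the function is defined inside the same `graph.as_default()` block so that the weight variables above are in scope): keep the convolutions at stride 1 and downsample with 2x2 max pooling of stride 2.
```
  def model_max_pool(data):
    conv = tf.nn.conv2d(data, layer1_weights, [1, 1, 1, 1], padding='SAME')
    hidden = tf.nn.relu(conv + layer1_biases)
    pool = tf.nn.max_pool(hidden, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
    conv = tf.nn.conv2d(pool, layer2_weights, [1, 1, 1, 1], padding='SAME')
    hidden = tf.nn.relu(conv + layer2_biases)
    pool = tf.nn.max_pool(hidden, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
    # two 2x2 pools reduce 28x28 to 7x7, matching the input size expected by layer3_weights
    shape = pool.get_shape().as_list()
    reshape = tf.reshape(pool, [shape[0], shape[1] * shape[2] * shape[3]])
    hidden = tf.nn.relu(tf.matmul(reshape, layer3_weights) + layer3_biases)
    return tf.matmul(hidden, layer4_weights) + layer4_biases
```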
---
Problem 2
---------
Try to get the best performance you can using a convolutional net. Look for example at the classic [LeNet5](http://yann.lecun.com/exdb/lenet/) architecture, adding Dropout, and/or adding learning rate decay.
---
```
batch_size = 16
patch_size = 3
depth = 16
num_hidden = 705
num_hidden_last = 205
graph = tf.Graph()
with graph.as_default():
# Input data.
tf_train_dataset = tf.placeholder(
tf.float32, shape=(batch_size, image_size, image_size, num_channels))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
layerconv1_weights = tf.Variable(tf.truncated_normal(
[patch_size, patch_size, num_channels, depth], stddev=0.1))
layerconv1_biases = tf.Variable(tf.zeros([depth]))
layerconv2_weights = tf.Variable(tf.truncated_normal(
[patch_size, patch_size, depth, depth * 2], stddev=0.1))
layerconv2_biases = tf.Variable(tf.zeros([depth * 2]))
layerconv3_weights = tf.Variable(tf.truncated_normal(
[patch_size, patch_size, depth * 2, depth * 4], stddev=0.03))
layerconv3_biases = tf.Variable(tf.zeros([depth * 4]))
layerconv4_weights = tf.Variable(tf.truncated_normal(
[patch_size, patch_size, depth * 4, depth * 4], stddev=0.03))
layerconv4_biases = tf.Variable(tf.zeros([depth * 4]))
layerconv5_weights = tf.Variable(tf.truncated_normal(
[patch_size, patch_size, depth * 4, depth * 16], stddev=0.03))
layerconv5_biases = tf.Variable(tf.zeros([depth * 16]))
layer3_weights = tf.Variable(tf.truncated_normal(
[image_size // 7 * image_size // 7 * (depth * 4), num_hidden], stddev=0.03))
layer3_biases = tf.Variable(tf.zeros([num_hidden]))
layer4_weights = tf.Variable(tf.truncated_normal(
[num_hidden, num_hidden_last], stddev=0.0532))
layer4_biases = tf.Variable(tf.zeros([num_hidden_last]))
layer5_weights = tf.Variable(tf.truncated_normal(
[num_hidden_last, num_labels], stddev=0.1))
layer5_biases = tf.Variable(tf.zeros([num_labels]))
# Model.
def model(data, use_dropout=False):
conv = tf.nn.conv2d(data, layerconv1_weights, [1, 1, 1, 1], padding='SAME')
hidden = tf.nn.elu(conv + layerconv1_biases)
pool = tf.nn.max_pool(hidden, [1, 2, 2, 1], [1, 2, 2, 1], padding='SAME')
conv = tf.nn.conv2d(pool, layerconv2_weights, [1, 1, 1, 1], padding='SAME')
hidden = tf.nn.elu(conv + layerconv2_biases)
#pool = tf.nn.max_pool(hidden, [1, 2, 2, 1], [1, 2, 2, 1], padding='SAME')
conv = tf.nn.conv2d(hidden, layerconv3_weights, [1, 1, 1, 1], padding='SAME')
hidden = tf.nn.elu(conv + layerconv3_biases)
pool = tf.nn.max_pool(hidden, [1, 2, 2, 1], [1, 2, 2, 1], padding='SAME')
# norm1
# norm1 = tf.nn.lrn(pool, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75)
conv = tf.nn.conv2d(pool, layerconv4_weights, [1, 1, 1, 1], padding='SAME')
hidden = tf.nn.elu(conv + layerconv4_biases)
pool = tf.nn.max_pool(hidden, [1, 2, 2, 1], [1, 2, 2, 1], padding='SAME')
# norm1 = tf.nn.lrn(pool, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75)
conv = tf.nn.conv2d(pool, layerconv5_weights, [1, 1, 1, 1], padding='SAME')
hidden = tf.nn.elu(conv + layerconv5_biases)
pool = tf.nn.max_pool(hidden, [1, 2, 2, 1], [1, 2, 2, 1], padding='SAME')
# norm1 = tf.nn.lrn(pool, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75)
shape = pool.get_shape().as_list()
#print(shape)
reshape = tf.reshape(pool, [shape[0], shape[1] * shape[2] * shape[3]])
hidden = tf.nn.elu(tf.matmul(reshape, layer3_weights) + layer3_biases)
if use_dropout:
hidden = tf.nn.dropout(hidden, 0.75)
nn_hidden_layer = tf.matmul(hidden, layer4_weights) + layer4_biases
hidden = tf.nn.elu(nn_hidden_layer)
if use_dropout:
hidden = tf.nn.dropout(hidden, 0.75)
return tf.matmul(hidden, layer5_weights) + layer5_biases
# Training computation.
logits = model(tf_train_dataset, True)
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
global_step = tf.Variable(0) # count the number of steps taken.
learning_rate = tf.train.exponential_decay(0.1, global_step, 3000, 0.86, staircase=True)
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(model(tf_valid_dataset))
test_prediction = tf.nn.softmax(model(tf_test_dataset))
num_steps = 45001
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print("Initialized")
for step in range(num_steps):
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
batch_data = train_dataset[offset:(offset + batch_size), :, :, :]
batch_labels = train_labels[offset:(offset + batch_size), :]
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step", step, ":", l)
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
print(time.ctime())
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
```
| github_jupyter |
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import warnings
import ipdb
import dan_utils
warnings.filterwarnings("ignore")
from statsmodels.graphics.tsaplots import plot_acf
from statsmodels.graphics.tsaplots import plot_pacf
from statsmodels.tsa.stattools import adfuller as ADF
from statsmodels.stats.diagnostic import acorr_ljungbox
from statsmodels.tsa.arima_model import ARIMA
def base_arima(data, pre_step):
    # data should be a 1-D DataFrame; pre_step is the number of steps to forecast ahead
    D_data = data.diff(periods=1).dropna()  # first difference (not used further below)
    data = np.array(data)
    model = ARIMA(data, (1, 1, 1)).fit()  # fit an ARIMA(1,1,1) model
    forecast = model.forecast(pre_step)  # forecast returns (predictions, stderr, conf_int)
    return (forecast[0])  # predicted values
randseed = 25
dan_utils.setup_seed(randseed)
res = 11
v = pd.read_csv('../data/q_20_aggragated.csv')
v = v.rename(columns={'Unnamed: 0': 'id'})
det_with_class = pd.read_csv('../res/%i_res%i_id_402_withclass.csv'%(randseed, res), index_col=0)
v['class_i'] = ''
for i in range(len(v)):
v.loc[i, 'class_i'] = det_with_class[det_with_class['id']==v.loc[i, 'id']].iloc[0, 5] # 5 stands for 'class_i'
num_class = det_with_class['class_i'].drop_duplicates().size
v_class = []
for i in range(num_class):
v_class.append(v[v['class_i']==i])
print('There are %i class(es)'%num_class)
dist_mat = pd.read_csv('../data/dist_mat.csv', index_col=0)
id_info = pd.read_csv('../data/id2000.csv', index_col=0)
dist_mat.index = id_info['id2']
dist_mat.columns = id_info['id2']
for i in range(len(dist_mat)):
for j in range(len(dist_mat)):
if i==j:
dist_mat.iloc[i, j] = 0
near_id = pd.DataFrame(np.argsort(np.array(dist_mat)), index = id_info['id2'], columns = id_info['id2'])
seg = pd.read_csv('../data/segement.csv', header=None)
num_dets = 25
det_list_class = []
for i in range(num_class):
det_list_class_temp, v_class_temp = dan_utils.get_class_with_node(seg, v_class[i])
det_list_class.append(det_list_class_temp)
v_class_temp = v_class_temp[v_class_temp['id'].isin(det_list_class_temp[:num_dets])]
v_class[i] = v_class_temp
near_road_set = []
for i in range(num_class):
near_road_set.append(dan_utils.rds_mat(dist_mat, det_list_class[i][:num_dets], seg))
# ind, class
# 0 , blue
# 1 , green
# 2 , yellow <--
# 3 , black <--
# 4 , red <--
class_color_set = ['b', 'g', 'y', 'black', 'r']
class_i = 4
# v_class[4].iloc[:, 2:-1]
data = np.array(v_class[4].iloc[:, 2:-1])
window = 100
pred_num = 6
pred_mat_all = []
label_mat_all = []
for i in range(data.shape[0]): # iterate over detectors
pred_mat = []
label_mat = []
for j in range(data.shape[1] - window - pred_num):
data_temp = data[i, j:j+window]
label = data[i, j:j+window+pred_num]
pred = base_arima(pd.DataFrame(data_temp), pred_num)
pred_mat.append(pred)
label_mat.append(label)
pred_mat_all.append(np.array(pred_mat))
label_mat_all.append(np.array(label_mat))
```
| github_jupyter |
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Custom training: basics
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r1/tutorials/eager/custom_training.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/r1/tutorials/eager/custom_training.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
In the previous tutorial we covered the TensorFlow APIs for automatic differentiation, a basic building block for machine learning.
In this tutorial we will use the TensorFlow primitives introduced in the prior tutorials to do some simple machine learning.
TensorFlow also includes a higher-level neural networks API (`tf.keras`) which provides useful abstractions to reduce boilerplate. We strongly recommend those higher level APIs for people working with neural networks. However, in this short tutorial we cover neural network training from first principles to establish a strong foundation.
## Setup
```
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow.compat.v1 as tf
```
## Variables
Tensors in TensorFlow are immutable stateless objects. Machine learning models, however, need to have changing state: as your model trains, the same code to compute predictions should behave differently over time (hopefully with a lower loss!). To represent this state which needs to change over the course of your computation, you can choose to rely on the fact that Python is a stateful programming language:
```
# Using python state
x = tf.zeros([10, 10])
x += 2 # This is equivalent to x = x + 2, which does not mutate the original
# value of x
print(x)
```
TensorFlow, however, has stateful operations built in, and these are often more pleasant to use than low-level Python representations of your state. To represent weights in a model, for example, it's often convenient and efficient to use TensorFlow variables.
A Variable is an object which stores a value and, when used in a TensorFlow computation, will implicitly read from this stored value. There are operations (`tf.assign_sub`, `tf.scatter_update`, etc) which manipulate the value stored in a TensorFlow variable.
```
v = tf.Variable(1.0)
assert v.numpy() == 1.0
# Re-assign the value
v.assign(3.0)
assert v.numpy() == 3.0
# Use `v` in a TensorFlow operation like tf.square() and reassign
v.assign(tf.square(v))
assert v.numpy() == 9.0
```
Computations using Variables are automatically traced when computing gradients. For Variables representing embeddings TensorFlow will do sparse updates by default, which are more computation and memory efficient.
Using Variables is also a way to quickly let a reader of your code know that this piece of state is mutable.
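For example (a small illustration added here, not part of the original text), a gradient with respect to a `Variable` is tracked automatically by a `GradientTape`:
```
w = tf.Variable(3.0)
with tf.GradientTape() as tape:
  y = w * w                    # the tape records operations that use the variable
dy_dw = tape.gradient(y, w)
assert dy_dw.numpy() == 6.0    # d(w^2)/dw = 2w
```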
## Example: Fitting a linear model
Let's now use the few concepts we have so far ---`Tensor`, `GradientTape`, `Variable` --- to build and train a simple model. This typically involves a few steps:
1. Define the model.
2. Define a loss function.
3. Obtain training data.
4. Run through the training data and use an "optimizer" to adjust the variables to fit the data.
In this tutorial, we'll walk through a trivial example of a simple linear model: `f(x) = x * W + b`, which has two variables - `W` and `b`. Furthermore, we'll synthesize data such that a well trained model would have `W = 3.0` and `b = 2.0`.
### Define the model
Let's define a simple class to encapsulate the variables and the computation.
```
class Model(object):
def __init__(self):
# Initialize variable to (5.0, 0.0)
# In practice, these should be initialized to random values.
self.W = tf.Variable(5.0)
self.b = tf.Variable(0.0)
def __call__(self, x):
return self.W * x + self.b
model = Model()
assert model(3.0).numpy() == 15.0
```
### Define a loss function
A loss function measures how well the output of a model for a given input matches the desired output. Let's use the standard L2 loss.
```
def loss(predicted_y, desired_y):
return tf.reduce_mean(tf.square(predicted_y - desired_y))
```
### Obtain training data
Let's synthesize the training data with some noise.
```
TRUE_W = 3.0
TRUE_b = 2.0
NUM_EXAMPLES = 1000
inputs = tf.random_normal(shape=[NUM_EXAMPLES])
noise = tf.random_normal(shape=[NUM_EXAMPLES])
outputs = inputs * TRUE_W + TRUE_b + noise
```
Before we train the model let's visualize where the model stands right now. We'll plot the model's predictions in red and the training data in blue.
```
import matplotlib.pyplot as plt
plt.scatter(inputs, outputs, c='b')
plt.scatter(inputs, model(inputs), c='r')
plt.show()
print('Current loss: ')
print(loss(model(inputs), outputs).numpy())
```
### Define a training loop
We now have our network and our training data. Let's train it, i.e., use the training data to update the model's variables (`W` and `b`) so that the loss goes down using [gradient descent](https://en.wikipedia.org/wiki/Gradient_descent). There are many variants of the gradient descent scheme that are captured in `tf.train.Optimizer` implementations. We'd highly recommend using those implementations, but in the spirit of building from first principles, in this particular example we will implement the basic math ourselves.
```
def train(model, inputs, outputs, learning_rate):
with tf.GradientTape() as t:
current_loss = loss(model(inputs), outputs)
dW, db = t.gradient(current_loss, [model.W, model.b])
model.W.assign_sub(learning_rate * dW)
model.b.assign_sub(learning_rate * db)
```
Finally, let's repeatedly run through the training data and see how `W` and `b` evolve.
```
model = Model()
# Collect the history of W-values and b-values to plot later
Ws, bs = [], []
epochs = range(10)
for epoch in epochs:
Ws.append(model.W.numpy())
bs.append(model.b.numpy())
current_loss = loss(model(inputs), outputs)
train(model, inputs, outputs, learning_rate=0.1)
print('Epoch %2d: W=%1.2f b=%1.2f, loss=%2.5f' %
(epoch, Ws[-1], bs[-1], current_loss))
# Let's plot it all
plt.plot(epochs, Ws, 'r',
epochs, bs, 'b')
plt.plot([TRUE_W] * len(epochs), 'r--',
[TRUE_b] * len(epochs), 'b--')
plt.legend(['W', 'b', 'true W', 'true_b'])
plt.show()
```
## Next Steps
In this tutorial we covered `Variable`s and built and trained a simple linear model using the TensorFlow primitives discussed so far.
In theory, this is pretty much all you need to use TensorFlow for your machine learning research.
In practice, particularly for neural networks, higher level APIs like `tf.keras` will be much more convenient, since they provide higher level building blocks (called "layers"), utilities to save and restore state, a suite of loss functions, a suite of optimization strategies, etc.
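As a rough sketch of what that can look like (assuming the synthetic `inputs` and `outputs` tensors defined above; the optimizer settings here are illustrative, not prescriptive):
```
keras_model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
keras_model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.1), loss='mse')
keras_model.fit(tf.reshape(inputs, (-1, 1)), tf.reshape(outputs, (-1, 1)),
                epochs=10, verbose=0)
keras_model.weights  # the kernel and bias should approach W = 3.0 and b = 2.0
```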
| github_jupyter |
# Transposed Convolution
:label:`sec_transposed_conv`
The CNN layers we have seen so far, such as convolutional layers ( :numref:`sec_conv_layer`) and pooling layers ( :numref:`sec_pooling`), typically reduce (downsample) the spatial dimensions (height and width) of the input image.
However, it is convenient for the input and output images to have the same spatial dimensions in semantic segmentation, which classifies at the pixel level.
For example, the channel dimension at an output pixel can then hold the classification result for the input pixel at the same position.
To achieve this, especially after the spatial dimensions have been reduced by CNN layers, we can use another type of CNN layer that increases (upsamples) the spatial dimensions of intermediate feature maps.
In this section, we will introduce
*transposed convolution* :cite:`Dumoulin.Visin.2016`,
which reverses the reduction in spatial size caused by downsampling.
```
from mxnet import init, np, npx
from mxnet.gluon import nn
from d2l import mxnet as d2l
npx.set_np()
```
## Basic Operation
Ignoring channels for now, let's begin with the basic transposed convolution with a stride of 1 and no padding.
Suppose that we are given an $n_h \times n_w$ input tensor and a $k_h \times k_w$ kernel.
Sliding the kernel window with a stride of 1, $n_w$ times in each row and $n_h$ times in each column, yields a total of $n_h n_w$ intermediate results.
Each intermediate result is an $(n_h + k_h - 1) \times (n_w + k_w - 1)$ tensor that is initialized to zero.
To compute each intermediate tensor, each element in the input tensor is multiplied by the kernel, so that the resulting $k_h \times k_w$ tensor replaces a portion of the intermediate tensor.
Note that the position of the replaced portion in each intermediate tensor corresponds to the position of the element in the input tensor.
In the end, all the intermediate results are summed to produce the final result.
As an example, :numref:`fig_trans_conv` illustrates how a transposed convolution with a $2\times 2$ kernel is computed for a $2\times 2$ input tensor.
![Transposed convolution with a $2\times 2$ kernel. The shaded portions are a portion of an intermediate tensor as well as the input and kernel tensor elements used for the computation.](../img/trans_conv.svg)
:label:`fig_trans_conv`
We can (**implement this basic transposed convolution operation**) as `trans_conv` for an input matrix `X` and a kernel matrix `K`.
```
def trans_conv(X, K):
h, w = K.shape
Y = np.zeros((X.shape[0] + h - 1, X.shape[1] + w - 1))
for i in range(X.shape[0]):
for j in range(X.shape[1]):
Y[i: i + h, j: j + w] += X[i, j] * K
return Y
```
In contrast to the regular convolution (in :numref:`sec_conv_layer`), which *reduces* input elements via the kernel, the transposed convolution *broadcasts* input elements via the kernel, thereby producing an output that is larger than the input.
We can construct the input tensor `X` and the kernel tensor `K` from :numref:`fig_trans_conv` to [**validate the output of the above implementation**].
This implementation is the basic two-dimensional transposed convolution operation.
```
X = np.array([[0.0, 1.0], [2.0, 3.0]])
K = np.array([[0.0, 1.0], [2.0, 3.0]])
trans_conv(X, K)
```
Alternatively, when the input `X` and the kernel `K` are both four-dimensional tensors, we can [**use high-level APIs to obtain the same results**].
```
X, K = X.reshape(1, 1, 2, 2), K.reshape(1, 1, 2, 2)
tconv = nn.Conv2DTranspose(1, kernel_size=2)
tconv.initialize(init.Constant(K))
tconv(X)
```
## [**Padding, Strides, and Multiple Channels**]
Different from regular convolution, in transposed convolution padding is applied to the output (regular convolution applies padding to the input).
For example, when specifying the padding number on either side of the height and width as 1, the first and last rows and columns will be removed from the transposed convolution output.
```
tconv = nn.Conv2DTranspose(1, kernel_size=2, padding=1)
tconv.initialize(init.Constant(K))
tconv(X)
```
In transposed convolution, strides are specified for the intermediate results (and thus the output), not for the input.
Using the same input and kernel tensors from :numref:`fig_trans_conv`, changing the stride from 1 to 2 increases both the height and width of the intermediate tensors, hence the output tensor shown in :numref:`fig_trans_conv_stride2`.

:label:`fig_trans_conv_stride2`
The following code can validate the transposed convolution output for a stride of 2 in :numref:`fig_trans_conv_stride2`.
```
tconv = nn.Conv2DTranspose(1, kernel_size=2, strides=2)
tconv.initialize(init.Constant(K))
tconv(X)
```
For multiple input and output channels, the transposed convolution works in the same way as the regular convolution.
Suppose that the input has $c_i$ channels, and that the transposed convolution assigns a $k_h\times k_w$ kernel tensor to each input channel.
When multiple output channels are specified, we will have a $c_i\times k_h\times k_w$ kernel for each output channel.
Likewise, if we feed $\mathsf{X}$ into a convolutional layer $f$ to output $\mathsf{Y}=f(\mathsf{X})$ and create a transposed convolutional layer $g$ with the same hyperparameters as $f$, except for the number of output channels being the number of channels in $\mathsf{X}$, then $g(Y)$ will have the same shape as $\mathsf{X}$.
This can be illustrated by the following example.
```
X = np.random.uniform(size=(1, 10, 16, 16))
conv = nn.Conv2D(20, kernel_size=5, padding=2, strides=3)
tconv = nn.Conv2DTranspose(10, kernel_size=5, padding=2, strides=3)
conv.initialize()
tconv.initialize()
tconv(conv(X)).shape == X.shape
```
## [**Connection to Matrix Transposition**]
:label:`subsec-connection-to-mat-transposition`
The transposed convolution is named after matrix transposition.
To explain, let's first see how to implement convolutions using matrix multiplications.
In the example below, we define a $3\times 3$ input `X` and a $2\times 2$ convolution kernel `K`, and then use the `corr2d` function to compute the convolution output `Y`.
```
X = np.arange(9.0).reshape(3, 3)
K = np.array([[1.0, 2.0], [3.0, 4.0]])
Y = d2l.corr2d(X, K)
Y
```
Next, we rewrite the convolution kernel `K` as a sparse weight matrix `W` containing a lot of zeros.
The shape of the weight matrix is ($4$, $9$), where the non-zero elements come from the convolution kernel `K`.
```
def kernel2matrix(K):
k, W = np.zeros(5), np.zeros((4, 9))
k[:2], k[3:5] = K[0, :], K[1, :]
W[0, :5], W[1, 1:6], W[2, 3:8], W[3, 4:] = k, k, k, k
return W
W = kernel2matrix(K)
W
```
Concatenate the input `X` row by row to get a vector of length 9.
Then the matrix multiplication of `W` and the vectorized `X` gives a vector of length 4.
After reshaping it, we obtain the same result `Y` as from the original convolution operation above: we have just implemented convolutions using matrix multiplications.
```
Y == np.dot(W, X.reshape(-1)).reshape(2, 2)
```
Likewise, we can implement transposed convolutions using matrix multiplications.
In the following example, we take the $2 \times 2$ output `Y` from the regular convolution above as the input to the transposed convolution.
To implement this operation by multiplying matrices, we only need to transpose the weight matrix `W`, giving it the shape $(9, 4)$.
```
Z = trans_conv(Y, K)
Z == np.dot(W.T, Y.reshape(-1)).reshape(3, 3)
```
Abstractly, consider an input vector $\mathbf{x}$ and a weight matrix $\mathbf{W}$: the forward propagation function of the convolution can be implemented by multiplying its input with the weight matrix and outputting a vector $\mathbf{y}=\mathbf{W}\mathbf{x}$.
Since backpropagation follows the chain rule and $\nabla_{\mathbf{x}}\mathbf{y}=\mathbf{W}^\top$, the backpropagation function of the convolution can be implemented by multiplying its input with the transposed weight matrix $\mathbf{W}^\top$.
Therefore, the transposed convolutional layer simply exchanges the forward propagation function and the backpropagation function of the convolutional layer: its forward propagation and backpropagation functions multiply their input vector with $\mathbf{W}^\top$ and $\mathbf{W}$, respectively.
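Written out for a scalar loss $L$, this is just the chain rule:
$$\mathbf{y}=\mathbf{W}\mathbf{x} \quad\Longrightarrow\quad \frac{\partial L}{\partial \mathbf{x}} = \mathbf{W}^\top \frac{\partial L}{\partial \mathbf{y}},$$
so the backward pass of the convolution multiplies by $\mathbf{W}^\top$, which is exactly the forward pass of the corresponding transposed convolution.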
## Summary
* In contrast to the regular convolution, which reduces input elements via the kernel, the transposed convolution broadcasts input elements via the kernel, thereby producing an output that is larger than the input.
* If we feed $\mathsf{X}$ into a convolutional layer $f$ to output $\mathsf{Y}=f(\mathsf{X})$ and create a transposed convolutional layer $g$ with the same hyperparameters as $f$, except for the number of output channels being the number of channels in $\mathsf{X}$, then $g(Y)$ will have the same shape as $\mathsf{X}$.
* We can implement convolutions using matrix multiplications. The transposed convolutional layer simply exchanges the forward propagation function and the backpropagation function of the convolutional layer.
## Exercises
1. In :numref:`subsec-connection-to-mat-transposition`, the convolution input `X` and the transposed convolution output `Z` have the same shape. Do they also have the same values? Why?
1. Is it efficient to implement convolutions using matrix multiplications? Why?
[Discussions](https://discuss.d2l.ai/t/3301)
| github_jupyter |
# The YUSAG Football Model
by Matt Robinson, matthew.robinson@yale.edu, Yale Undergraduate Sports Analytics Group
This notebook introduces the model we at the Yale Undergraduate Sports Analytics Group (YUSAG) use for our college football rankings. This specific notebook details our FBS rankings at the beginning of the 2017 season.
```
import numpy as np
import pandas as pd
import math
```
Let's start by reading in the NCAA FBS football data from 2013-2016:
```
df_1 = pd.read_csv('NCAA_FBS_Results_2013_.csv')
df_2 = pd.read_csv('NCAA_FBS_Results_2014_.csv')
df_3 = pd.read_csv('NCAA_FBS_Results_2015_.csv')
df_4 = pd.read_csv('NCAA_FBS_Results_2016_.csv')
df = pd.concat([df_1,df_2,df_3,df_4],ignore_index=True)
df.head()
```
As you can see, the `OT` column has some `NaN` values that we will replace with 0.
```
# fill missing data with 0
df = df.fillna(0)
df.head()
```
I'm also going to make some weights for when we run our linear regression. I have found that using the factorial of the difference between the year and 2012 seems to work decently well. Clearly, the most recent seasons are weighted quite heavily in this scheme.
```
# update the weights based on a factorial scheme
df['weights'] = (df['year']-2012)
df['weights'] = df['weights'].apply(lambda x: math.factorial(x))
```
And now, we also are going to make a `scorediff` column that we can use in our linear regression.
```
df['scorediff'] = (df['teamscore']-df['oppscore'])
df.head()
```
Since we need numerical values for the linear regression algorithm, I am going to replace the locations with what seem like reasonable numbers:
* Visiting = -1
* Neutral = 0
* Home = 1
The reason we picked these exact numbers will become clearer in a little bit.
```
df['location'] = df['location'].replace('V',-1)
df['location'] = df['location'].replace('N',0)
df['location'] = df['location'].replace('H',1)
df.head()
```
The way our linear regression model works is a little tricky to code up in scikit-learn. It's much easier to do in R, but then you don't have a full understanding of what's happening when we make the model.
In simplest terms, our model predicts the score differential (`scorediff`) of each game based on three things: the strength of the `team`, the strength of the `opponent`, and the `location`.
You'll notice that the `team` and `opponent` features are categorical, and thus are not currently ripe for use with linear regression. However, we can use what is called 'one hot encoding' in order to transform these features into a usable form. One hot encoding works by taking the `team` feature, for example, and transforming it into many features such as `team_Yale` and `team_Harvard`. This `team_Yale` feature will usually equal zero, except when the team is actually Yale, in which case `team_Yale` will equal 1. In this way, it's a binary encoding (which is actually very useful for us, as we'll see later).
One can use `sklearn.preprocessing.OneHotEncoder` for this task, but I am going to use Pandas instead:
```
# create dummy variables, need to do this in python b/c does not handle automatically like R
team_dummies = pd.get_dummies(df.team, prefix='team')
opponent_dummies = pd.get_dummies(df.opponent, prefix='opponent')
df = pd.concat([df, team_dummies, opponent_dummies], axis=1)
df.head()
```
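For reference, the `OneHotEncoder` route mentioned above might look roughly like the sketch below (it assumes scikit-learn 0.20 or later, which accepts string categories directly); the notebook itself continues with the pandas dummies.
```
# A sketch of the scikit-learn alternative to pd.get_dummies (illustration only).
from sklearn.preprocessing import OneHotEncoder

encoder = OneHotEncoder(handle_unknown='ignore')
team_onehot = encoder.fit_transform(df[['team', 'opponent']]).toarray()
print(team_onehot.shape)            # one binary column per team and per opponent level
print(encoder.categories_[0][:5])   # the first few team names seen during fitting
```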
Now let's make our training data, so that we can construct the model. At this point, I am going to use all the available data to train the model, using our predetermined hyperparameters. This way, the model is ready to make predictions for the 2017 season.
```
# make the training data
X = df.drop(['year','month','day','team','opponent','teamscore','oppscore','D1','OT','weights','scorediff'], axis=1)
y = df['scorediff']
weights = df['weights']
X.head()
y.head()
weights.head()
```
Now let's train the linear regression model. You'll notice that I'm actually using ridge regression (adds an l2 penalty with alpha = 1.0) because that prevents the model from overfitting and also limits the values of the coefficients to be more interpretable. If I did not add this penalty, the coefficients would be huge.
```
from sklearn.linear_model import Ridge
ridge_reg = Ridge()
ridge_reg.fit(X, y, sample_weight=weights)
# get the R^2 value
r_squared = ridge_reg.score(X, y, sample_weight=weights)
print('R^2 on the training data:')
print(r_squared)
```
Now that the model is trained, we can use it to provide our rankings. Note that in this model, a team's ranking is simply defined as its linear regression coefficient, which we call the YUSAG coefficient.
When predicting a game's score differential on a neutral field, the predicted score differential (`scorediff`) is just the difference in YUSAG coefficients. The reason this works is the binary encoding we did earlier.
#### More details below on how it actually works
Ok, so you may have noticed that every game in our dataframe is actually duplicated, just with the `team` and `opponent` variables switched. This may have seemed like a mistake but it is actually useful for making the model more interpretable.
When we run the model, we get a coefficient for the `team_Yale` variable, which we call the YUSAG coefficient, and a coefficient for the `opponent_Yale` variable. Since we allow every game to be repeated, these variables end up just being negatives of each other.
So let's think about what we are doing when we predict the score differential for the Harvard-Penn game with `team` = Harvard and `opponent` = Penn.
In our model, the coefficients are as follows:
- team_Harvard_coef = 7.78
- opponent_Harvard_coef = -7.78
- team_Penn_coef = 6.68
- opponent_Penn_coef = -6.68
when we go to use the model for this game, it looks like this:
`scorediff` = (location_coef $*$ `location`) + (team_Harvard_coef $*$ `team_Harvard`) + (opponent_Harvard_coef $*$ `opponent_Harvard`) + (team_Penn_coef $*$ `team_Penn`) + (opponent_Penn_coef $*$ `opponent_Penn`) + (team_Yale_coef $*$ `team_Yale`) + (opponent_Yale_coef $*$ `opponent_Yale`) + $\cdots$
where the $\cdots$ represent data for many other teams, which will all just equal $0$.
To put numbers in for the variables, the model looks like this:
`scorediff` = (location_coef $*$ $0$) + (team_Harvard_coef $*$ $1$) + (opponent_Harvard_coef $*$ $0$) + (team_Penn_coef $*$ $0$) + (opponent_Penn_coef $*$ $1$) + (team_Yale_coef $*$ $0$) + (opponent_Yale_coef $*$ $0$) + $\cdots$
Which is just:
`scorediff` = (location_coef $*$ $0$) + (7.78 $*$ $1$) + (-6.68 $*$ $1$) = $7.78 - 6.68$ = Harvard_YUSAG_coef - Penn_YUSAG_coef
Thus showing how the difference in YUSAG coefficients is the same as the predicted score differential. Furthermore, the higher YUSAG coefficient a team has, the better they are.
Lastly, if the Harvard-Penn game was to be home at Harvard, we would just add the location_coef:
`scorediff` = (location_coef $*$ $1$) + (team_Harvard_coef $*$ $1$) + (opponent_Penn_coef $*$ $1$) = $1.77 + 7.78 - 6.68$ = Location_coef + Harvard_YUSAG_coef - Penn_YUSAG_coef
```
# get the coefficients for each feature
coef_data = list(zip(X.columns,ridge_reg.coef_))
coef_df = pd.DataFrame(coef_data,columns=['feature','feature_coef'])
coef_df.head()
```
Let's get only the team variables, so that it is a proper ranking
```
# first get rid of opponent_ variables
team_df = coef_df[~coef_df['feature'].str.contains("opponent")]
# get rid of the location variable
team_df = team_df.iloc[1:]
team_df.head()
# rank them by coef, not alphabetical order
ranked_team_df = team_df.sort_values(['feature_coef'],ascending=False)
# reset the indices at 0
ranked_team_df = ranked_team_df.reset_index(drop=True);
ranked_team_df.head()
```
I'm going to change the names of the columns and remove the 'team_' part of every string:
```
ranked_team_df.rename(columns={'feature':'team', 'feature_coef':'YUSAG_coef'}, inplace=True)
ranked_team_df['team'] = ranked_team_df['team'].str.replace('team_', '')
ranked_team_df.head()
```
Lastly, I'm just going to shift the index to start at 1, so that it corresponds to the ranking.
```
ranked_team_df.index = ranked_team_df.index + 1
ranked_team_df.to_csv("FBS_power_rankings.csv")
```
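To illustrate how the rankings translate into predictions, here is a rough sketch that looks up the location coefficient from `coef_df` and two teams' YUSAG coefficients from the table above; the example matchup simply takes the first- and fifth-ranked teams.
```
# A sketch of turning the rankings into a point-spread prediction (illustration only).
location_coef = coef_df.loc[coef_df['feature'] == 'location', 'feature_coef'].values[0]

def predicted_spread(team, opponent, location=0):
    # location: -1 = away, 0 = neutral field, 1 = home
    team_coef = ranked_team_df.loc[ranked_team_df['team'] == team, 'YUSAG_coef'].values[0]
    opp_coef = ranked_team_df.loc[ranked_team_df['team'] == opponent, 'YUSAG_coef'].values[0]
    return location_coef * location + team_coef - opp_coef

# Example: predicted margin between the top-ranked and fifth-ranked teams on a neutral field
print(predicted_spread(ranked_team_df.loc[1, 'team'], ranked_team_df.loc[5, 'team']))
```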
## Additional stuff: Testing the model
This section is mostly about how one could test the performance of the model and how one could choose appropriate hyperparameters.
#### Creating a new dataframe
First let's take the original dataframe and sort it by date, so that the order of games in the dataframe matches the order the games were played.
```
# sort by date and reset the indices to 0
df_dated = df.sort_values(['year', 'month','day'], ascending=[True, True, True])
df_dated = df_dated.reset_index(drop=True)
df_dated.head()
```
Let's first make a dataframe with training data (the first three years of results)
```
thirteen_df = df_dated.loc[df_dated['year']==2013]
fourteen_df = df_dated.loc[df_dated['year']==2014]
fifteen_df = df_dated.loc[df_dated['year']==2015]
train_df = pd.concat([thirteen_df,fourteen_df,fifteen_df], ignore_index=True)
```
Now let's make an initial testing dataframe with the data from this past year.
```
sixteen_df = df_dated.loc[df_dated['year']==2016]
seventeen_df = df_dated.loc[df_dated['year']==2017]
test_df = pd.concat([sixteen_df,seventeen_df], ignore_index=True)
```
I am now going to set up a testing/validation scheme for the model. It works like this:
First, I start off with training data consisting of all games from 2013-2015. Using the model trained on this data, I then predict the games from the first week of the 2016 season and look at the results.
Next, I add that first week's worth of games to the training data, so that I now train on all 2013-2015 results plus the first week of 2016. After training the model on this data, I then test on the second week of games. I then add that week's games to the training data and repeat the same procedure week after week.
In this way, I am never testing on a result that I have trained on. Though, it should be noted that I have also used this as a validation scheme, so I have technically done some sloppy 'data snooping' and this is not a great predictor of my generalization error.
```
def train_test_model(train_df, test_df):
# make the training data
X_train = train_df.drop(['year','month','day','team','opponent','teamscore','oppscore','D1','OT','weights','scorediff'], axis=1)
y_train = train_df['scorediff']
weights_train = train_df['weights']
# train the model
ridge_reg = Ridge()
ridge_reg.fit(X_train, y_train, weights_train)
fit = ridge_reg.score(X_train,y_train,sample_weight=weights_train)
print('R^2 on the training data:')
print(fit)
# get the test data
X_test = test_df.drop(['year','month','day','team','opponent','teamscore','oppscore','D1','OT','weights','scorediff'], axis=1)
y_test = test_df['scorediff']
# get the metrics
compare_data = list(zip(ridge_reg.predict(X_test),y_test))
right_count = 0
for tpl in compare_data:
if tpl[0] >= 0 and tpl[1] >=0:
right_count = right_count + 1
elif tpl[0] <= 0 and tpl[1] <=0:
right_count = right_count + 1
accuracy = right_count/len(compare_data)
print('accuracy on this weeks games')
print(right_count/len(compare_data))
total_squared_error = 0.0
for tpl in compare_data:
total_squared_error = total_squared_error + (tpl[0]-tpl[1])**2
RMSE = (total_squared_error / float(len(compare_data)))**(0.5)
print('RMSE on this weeks games:')
print(RMSE)
return fit, accuracy, RMSE, right_count, total_squared_error
#Now the code for running the week by week testing.
base_df = train_df
new_indices = []
# this is the hash for the first date
last_date_hash = 2018
fit_list = []
accuracy_list = []
RMSE_list = []
total_squared_error = 0
total_right_count = 0
for index, row in test_df.iterrows():
year = row['year']
month = row['month']
day = row['day']
date_hash = year+month+day
if date_hash != last_date_hash:
last_date_hash = date_hash
test_week = test_df.iloc[new_indices]
fit, accuracy, RMSE, correct_calls, squared_error = train_test_model(base_df,test_week)
fit_list.append(fit)
accuracy_list.append(accuracy)
RMSE_list.append(RMSE)
total_squared_error = total_squared_error + squared_error
total_right_count = total_right_count + correct_calls
base_df = pd.concat([base_df,test_week],ignore_index=True)
new_indices = [index]
else:
new_indices.append(index)
# get the number of games it called correctly in 2016
total_accuracy = total_right_count/test_df.shape[0]
total_accuracy
# get the Root Mean Squared Error
overall_RMSE = (total_squared_error/test_df.shape[0])**(0.5)
overall_RMSE
```
| github_jupyter |
### Using fmriprep
[fmriprep](https://fmriprep.readthedocs.io/en/stable/) is a package developed by the Poldrack lab to do the minimal preprocessing of fMRI data required. It covers brain extraction, motion correction, field unwarping, and registration. It uses a combination of well-known software packages (e.g., FSL, SPM, ANTS, AFNI) and selects the 'best' implementation of each preprocessing step.
Once installed, `fmriprep` can be invoked from the command line. We can even run it inside this notebook! The following command should work after you remove the 'hashtag' `#`.
However, running fmriprep takes quite some time (we included the hashtag to prevent you from accidentally running it). You'll most likely want to run it in parallel on a computing cluster.
```
#!fmriprep \
# --ignore slicetiming \
# --ignore fieldmaps \
# --output-space template \
# --template MNI152NLin2009cAsym \
# --template-resampling-grid 2mm \
# --fs-no-reconall \
# --fs-license-file \
# ../license.txt \
# ../data/ds000030 ../data/ds000030/derivatives/fmriprep participant
```
The command above consists of the following parts:
- \"fmriprep\" calls fmriprep
- `--ignore slicetiming` tells fmriprep to _not_ perform slice timing correction
- `--ignore fieldmaps` tells fmriprep to _not_ perform distortion correction (unfortunately, there are no field maps available in this data set)
- `--output-space template` tells fmriprep to normalize (register) data to a template
- `--template MNI152NLin2009cAsym` tells fmriprep to normalize to the MNI152 nonlinear 2009c asymmetric template
- `--template-resampling-grid 2mm` tells fmriprep to resample the output images to 2mm isotropic resolution
- `--fs-license-file ../../license.txt` tells fmriprep where to find the license.txt-file for freesurfer - you can ignore this
- `bids` is the name of the folder containing the data in bids format
- `output_folder` is the name of the folder where we want the preprocessed data to be stored,
- `participant` tells fmriprep to run only at the participant level (and not, for example, at the group level - you can forget about this)
The [official documentation](http://fmriprep.readthedocs.io/) contains all possible arguments you can pass.
### Using nipype
fmriprep makes use of [Nipype](https://nipype.readthedocs.io/en/latest/), a pipelining tool for preprocessing neuroimaging data. Nipype makes it easy to share and document pipelines and run them in parallel on a computing cluster. If you would like to build your own preprocessing pipelines, a good resource to get started is [this tutorial](https://miykael.github.io/nipype_tutorial/).
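To give a flavour of what such a pipeline looks like, here is a minimal Nipype sketch chaining FSL brain extraction and motion correction; the input filename is a placeholder and FSL needs to be installed for it to run.
```
# A minimal Nipype sketch (illustration only): brain extraction + motion correction.
# 'my_func.nii.gz' is a placeholder for one of your functional images.
from nipype import Node, Workflow
from nipype.interfaces.fsl import BET, MCFLIRT

bet = Node(BET(in_file='my_func.nii.gz', functional=True), name='brain_extraction')
mcflirt = Node(MCFLIRT(), name='motion_correction')

wf = Workflow(name='minimal_preproc', base_dir='working_dir')
wf.connect(bet, 'out_file', mcflirt, 'in_file')
wf.run()
```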
| github_jupyter |
SVM
```
import pandas as pd
from sklearn import svm, metrics
from sklearn.model_selection import train_test_split
wesad_eda = pd.read_csv(r'D:\data\wesad-chest-combined-classification-eda.csv')  # adjust this path to your copy of the dataset
wesad_eda.columns
original_column_list = ['MEAN', 'MAX', 'MIN', 'RANGE', 'KURT', 'SKEW', 'MEAN_1ST_GRAD',
'STD_1ST_GRAD', 'MEAN_2ND_GRAD', 'STD_2ND_GRAD', 'ALSC', 'INSC', 'APSC',
'RMSC', 'subject id', 'MEAN_LOG', 'INSC_LOG', 'APSC_LOG', 'RMSC_LOG',
'RANGE_LOG', 'ALSC_LOG', 'MIN_LOG', 'MEAN_1ST_GRAD_LOG',
'MEAN_2ND_GRAD_LOG', 'MIN_LOG_LOG', 'MEAN_1ST_GRAD_LOG_LOG',
'MEAN_2ND_GRAD_LOG_LOG', 'APSC_LOG_LOG', 'ALSC_LOG_LOG', 'APSC_BOXCOX',
'RMSC_BOXCOX', 'RANGE_BOXCOX', 'MEAN_YEO_JONSON', 'SKEW_YEO_JONSON',
'KURT_YEO_JONSON', 'APSC_YEO_JONSON', 'MIN_YEO_JONSON',
'MAX_YEO_JONSON', 'MEAN_1ST_GRAD_YEO_JONSON', 'RMSC_YEO_JONSON',
'STD_1ST_GRAD_YEO_JONSON', 'RANGE_SQRT', 'RMSC_SQUARED',
'MEAN_2ND_GRAD_CUBE', 'INSC_APSC', 'condition', 'SSSQ class',
'SSSQ Label', 'condition label']
original_column_list_withoutString = ['MEAN', 'MAX', 'MIN', 'RANGE', 'KURT', 'SKEW', 'MEAN_1ST_GRAD',
'STD_1ST_GRAD', 'MEAN_2ND_GRAD', 'STD_2ND_GRAD', 'ALSC', 'INSC', 'APSC',
'RMSC', 'MEAN_LOG', 'INSC_LOG', 'APSC_LOG', 'RMSC_LOG',
'RANGE_LOG', 'ALSC_LOG', 'MIN_LOG', 'MEAN_1ST_GRAD_LOG',
'MEAN_2ND_GRAD_LOG', 'MIN_LOG_LOG', 'MEAN_1ST_GRAD_LOG_LOG',
'MEAN_2ND_GRAD_LOG_LOG', 'APSC_LOG_LOG', 'ALSC_LOG_LOG', 'APSC_BOXCOX',
'RMSC_BOXCOX', 'RANGE_BOXCOX', 'MEAN_YEO_JONSON', 'SKEW_YEO_JONSON',
'KURT_YEO_JONSON', 'APSC_YEO_JONSON', 'MIN_YEO_JONSON',
'MAX_YEO_JONSON', 'MEAN_1ST_GRAD_YEO_JONSON', 'RMSC_YEO_JONSON',
'STD_1ST_GRAD_YEO_JONSON', 'RANGE_SQRT', 'RMSC_SQUARED',
'MEAN_2ND_GRAD_CUBE', 'INSC_APSC']
selected_colum_list = ['MEAN', 'MAX', 'MIN', 'RANGE', 'KURT', 'SKEW', 'MEAN_1ST_GRAD',
'STD_1ST_GRAD', 'MEAN_2ND_GRAD', 'STD_2ND_GRAD', 'ALSC', 'INSC', 'APSC',
'RMSC', 'subject id', 'MEAN_LOG', 'INSC_LOG', 'APSC_LOG', 'RMSC_LOG',
'RANGE_LOG', 'ALSC_LOG', 'MIN_LOG']
stress_data = wesad_eda[original_column_list_withoutString]
stress_label = wesad_eda['condition label']
stress_data
train_data, test_data, train_label, test_label = train_test_split(stress_data, stress_label)
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
pca.fit(train_data)
X_t_train = pca.transform(train_data)
X_t_test = pca.transform(test_data)
model = svm.SVC()
model.fit(X_t_train, train_label)
predict = model.predict(X_t_test)
acc_score = metrics.accuracy_score(test_label, predict)
print(acc_score)
import pickle
import joblib  # sklearn.externals.joblib is deprecated in recent scikit-learn; use joblib directly
saved_model = pickle.dumps(model)
joblib.dump(model, 'SVMmodel1.pkl')
model_from_pickle = joblib.load('SVMmodel1.pkl')
# The model was trained on PCA-transformed features, so apply the same transform before predicting
predict = model_from_pickle.predict(X_t_test)
acc_score = metrics.accuracy_score(test_label, predict)
print(acc_score)
```
| github_jupyter |
# Time series analysis on AWS
*Chapter 1 - Time series analysis overview*
## Initializations
---
```
!pip install --quiet tqdm kaggle tsia ruptures
```
### Imports
```
import matplotlib.colors as mpl_colors
import matplotlib.dates as mdates
import matplotlib.ticker as ticker
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
import ruptures as rpt
import sys
import tsia
import warnings
import zipfile
from matplotlib import gridspec
from sklearn.preprocessing import normalize
from tqdm import tqdm
from urllib.request import urlretrieve
```
### Parameters
```
RAW_DATA = os.path.join('..', 'Data', 'raw')
DATA = os.path.join('..', 'Data')
warnings.filterwarnings("ignore")
os.makedirs(RAW_DATA, exist_ok=True)
%matplotlib inline
# plt.style.use('Solarize_Light2')
plt.style.use('fivethirtyeight')
prop_cycle = plt.rcParams['axes.prop_cycle']
colors = prop_cycle.by_key()['color']
plt.rcParams['figure.dpi'] = 300
plt.rcParams['lines.linewidth'] = 0.3
plt.rcParams['axes.titlesize'] = 6
plt.rcParams['axes.labelsize'] = 6
plt.rcParams['xtick.labelsize'] = 4.5
plt.rcParams['ytick.labelsize'] = 4.5
plt.rcParams['grid.linewidth'] = 0.2
plt.rcParams['legend.fontsize'] = 5
```
### Helper functions
```
def progress_report_hook(count, block_size, total_size):
mb = int(count * block_size // 1048576)
if count % 500 == 0:
sys.stdout.write("\r{} MB downloaded".format(mb))
sys.stdout.flush()
```
### Downloading datasets
#### **Dataset 1:** Household energy consumption
```
ORIGINAL_DATA = 'https://archive.ics.uci.edu/ml/machine-learning-databases/00321/LD2011_2014.txt.zip'
ARCHIVE_PATH = os.path.join(RAW_DATA, 'energy-consumption.zip')
FILE_NAME = 'energy-consumption.csv'
FILE_PATH = os.path.join(DATA, 'energy', FILE_NAME)
FILE_DIR = os.path.dirname(FILE_PATH)
if not os.path.isfile(FILE_PATH):
print("Downloading dataset (258MB), can take a few minutes depending on your connection")
urlretrieve(ORIGINAL_DATA, ARCHIVE_PATH, reporthook=progress_report_hook)
os.makedirs(os.path.join(DATA, 'energy'), exist_ok=True)
print("\nExtracting data archive")
zip_ref = zipfile.ZipFile(ARCHIVE_PATH, 'r')
zip_ref.extractall(FILE_DIR + '/')
zip_ref.close()
!rm -Rf $FILE_DIR/__MACOSX
!mv $FILE_DIR/LD2011_2014.txt $FILE_PATH
else:
print("File found, skipping download")
```
#### **Dataset 2:** Nasa Turbofan remaining useful lifetime
```
ok = True
ok = ok and os.path.exists(os.path.join(DATA, 'turbofan', 'train_FD001.txt'))
ok = ok and os.path.exists(os.path.join(DATA, 'turbofan', 'test_FD001.txt'))
ok = ok and os.path.exists(os.path.join(DATA, 'turbofan', 'RUL_FD001.txt'))
if (ok):
print("File found, skipping download")
else:
print('Some datasets are missing, create working directories and download original dataset from the NASA repository.')
# Making sure the directory already exists:
os.makedirs(os.path.join(DATA, 'turbofan'), exist_ok=True)
# Download the dataset from the NASA repository, unzip it and set
# aside the first training file to work on:
!wget https://ti.arc.nasa.gov/c/6/ --output-document=$RAW_DATA/CMAPSSData.zip
!unzip $RAW_DATA/CMAPSSData.zip -d $RAW_DATA
!cp $RAW_DATA/train_FD001.txt $DATA/turbofan/train_FD001.txt
!cp $RAW_DATA/test_FD001.txt $DATA/turbofan/test_FD001.txt
!cp $RAW_DATA/RUL_FD001.txt $DATA/turbofan/RUL_FD001.txt
```
#### **Dataset 3:** Human heartbeat
```
ECG_DATA_SOURCE = 'http://www.timeseriesclassification.com/Downloads/ECG200.zip'
ARCHIVE_PATH = os.path.join(RAW_DATA, 'ECG200.zip')
FILE_NAME = 'ecg.csv'
FILE_PATH = os.path.join(DATA, 'ecg', FILE_NAME)
FILE_DIR = os.path.dirname(FILE_PATH)
if not os.path.isfile(FILE_PATH):
urlretrieve(ECG_DATA_SOURCE, ARCHIVE_PATH)
os.makedirs(os.path.join(DATA, 'ecg'), exist_ok=True)
print("\nExtracting data archive")
zip_ref = zipfile.ZipFile(ARCHIVE_PATH, 'r')
zip_ref.extractall(FILE_DIR + '/')
zip_ref.close()
!mv $DATA/ecg/ECG200_TRAIN.txt $FILE_PATH
else:
print("File found, skipping download")
```
#### **Dataset 4:** Industrial pump data
To download this dataset from Kaggle, you will need to have an account and create a token that you install on your machine. You can follow [**this link**](https://www.kaggle.com/docs/api) to get started with the Kaggle API. Once generated, make sure your Kaggle token is stored in the `~/.kaggle/kaggle.json` file, or the next cells will issue an error. In some cases, you may still get an error when using this location; try moving your token to this location instead: `~/kaggle/kaggle.json` (note the absence of the `.` in the folder name).
To get a Kaggle token, go to kaggle.com and create an account. Then navigate to **My account** and scroll down to the API section. There, click the **Create new API token** button:
<img src="../Assets/kaggle_api.png" />
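The token itself is a small JSON file; the cell below sketches its format (the username and key are placeholders) and how to move it into place with the permissions the Kaggle CLI expects, assuming the downloaded `kaggle.json` sits in the current directory.
```
# The downloaded kaggle.json has the form (placeholder values):
#   {"username": "your-kaggle-username", "key": "0123456789abcdef"}
# Install it where the Kaggle CLI looks for it, with restrictive permissions:
!mkdir -p ~/.kaggle
!cp kaggle.json ~/.kaggle/kaggle.json
!chmod 600 ~/.kaggle/kaggle.json
```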
```
FILE_NAME = 'pump-sensor-data.zip'
ARCHIVE_PATH = os.path.join(RAW_DATA, FILE_NAME)
FILE_PATH = os.path.join(DATA, 'pump', 'sensor.csv')
FILE_DIR = os.path.dirname(FILE_PATH)
if not os.path.isfile(FILE_PATH):
if not os.path.exists('/home/ec2-user/.kaggle/kaggle.json'):
os.makedirs('/home/ec2-user/.kaggle/', exist_ok=True)
raise Exception('The kaggle.json token was not found.\nCreating the /home/ec2-user/.kaggle/ directory: put your kaggle.json file there once you have generated it from the Kaggle website')
else:
print('The kaggle.json token file was found: making sure it is not readable by other users on this system.')
!chmod 600 /home/ec2-user/.kaggle/kaggle.json
os.makedirs(os.path.join(DATA, 'pump'), exist_ok=True)
!kaggle datasets download -d nphantawee/pump-sensor-data -p $RAW_DATA
print("\nExtracting data archive")
zip_ref = zipfile.ZipFile(ARCHIVE_PATH, 'r')
zip_ref.extractall(FILE_DIR + '/')
zip_ref.close()
else:
print("File found, skipping download")
```
#### **Dataset 5:** London household energy consumption with weather data
```
FILE_NAME = 'smart-meters-in-london.zip'
ARCHIVE_PATH = os.path.join(RAW_DATA, FILE_NAME)
FILE_PATH = os.path.join(DATA, 'energy-london', 'smart-meters-in-london.zip')
FILE_DIR = os.path.dirname(FILE_PATH)
# Checks if the data were already downloaded:
if os.path.exists(os.path.join(DATA, 'energy-london', 'acorn_details.csv')):
print("File found, skipping download")
else:
# Downloading and unzipping datasets from Kaggle:
print("Downloading dataset (2.26G), can take a few minutes depending on your connection")
os.makedirs(os.path.join(DATA, 'energy-london'), exist_ok=True)
!kaggle datasets download -d jeanmidev/smart-meters-in-london -p $RAW_DATA
print('Unzipping files...')
zip_ref = zipfile.ZipFile(ARCHIVE_PATH, 'r')
zip_ref.extractall(FILE_DIR + '/')
zip_ref.close()
!rm $DATA/energy-london/*zip
!rm $DATA/energy-london/*gz
!mv $DATA/energy-london/halfhourly_dataset/halfhourly_dataset/* $DATA/energy-london/halfhourly_dataset
!rm -Rf $DATA/energy-london/halfhourly_dataset/halfhourly_dataset
!mv $DATA/energy-london/daily_dataset/daily_dataset/* $DATA/energy-london/daily_dataset
!rm -Rf $DATA/energy-london/daily_dataset/daily_dataset
```
## Dataset visualization
---
### **1.** Household energy consumption
```
%%time
FILE_PATH = os.path.join(DATA, 'energy', 'energy-consumption.csv')
energy_df = pd.read_csv(FILE_PATH, sep=';', decimal=',')
energy_df = energy_df.rename(columns={'Unnamed: 0': 'Timestamp'})
energy_df['Timestamp'] = pd.to_datetime(energy_df['Timestamp'])
energy_df = energy_df.set_index('Timestamp')
energy_df.iloc[100000:, 1:5].head()
fig = plt.figure(figsize=(5, 1.876))
plt.plot(energy_df['MT_002'])
plt.title('Energy consumption for household MT_002')
plt.show()
```
### **2.** NASA Turbofan data
```
FILE_PATH = os.path.join(DATA, 'turbofan', 'train_FD001.txt')
turbofan_df = pd.read_csv(FILE_PATH, header=None, sep=' ')
turbofan_df.dropna(axis='columns', how='all', inplace=True)
print('Shape:', turbofan_df.shape)
turbofan_df.head(5)
columns = [
'unit_number',
'cycle',
'setting_1',
'setting_2',
'setting_3',
] + ['sensor_{}'.format(s) for s in range(1,22)]
turbofan_df.columns = columns
turbofan_df.head()
# Add a RUL column and group the data by unit_number:
turbofan_df['rul'] = 0
grouped_data = turbofan_df.groupby(by='unit_number')
# Loops through each unit number to get the lifecycle counts:
for unit, rul in enumerate(grouped_data.count()['cycle']):
current_df = turbofan_df[turbofan_df['unit_number'] == (unit+1)].copy()
current_df['rul'] = rul - current_df['cycle']
turbofan_df[turbofan_df['unit_number'] == (unit+1)] = current_df
df = turbofan_df.iloc[:, [0,1,2,3,4,5,6,25,26]].copy()
df = df[df['unit_number'] == 1]
def highlight_cols(s):
return f'background-color: rgba(0, 143, 213, 0.3)'
df.head(10).style.applymap(highlight_cols, subset=['rul'])
```
### **3.** ECG Data
```
FILE_PATH = os.path.join(DATA, 'ecg', 'ecg.csv')
ecg_df = pd.read_csv(FILE_PATH, header=None, sep=' ')
print('Shape:', ecg_df.shape)
ecg_df.head()
plt.rcParams['lines.linewidth'] = 0.7
fig = plt.figure(figsize=(5,2))
label_normal = False
label_ischemia = False
for i in range(0,100):
label = ecg_df.iloc[i, 0]
if (label == -1):
color = colors[1]
if label_ischemia:
plt.plot(ecg_df.iloc[i,1:96], color=color, alpha=0.5, linestyle='--', linewidth=0.5)
else:
plt.plot(ecg_df.iloc[i,1:96], color=color, alpha=0.5, label='Ischemia', linestyle='--')
label_ischemia = True
else:
color = colors[0]
if label_normal:
plt.plot(ecg_df.iloc[i,1:96], color=color, alpha=0.5)
else:
plt.plot(ecg_df.iloc[i,1:96], color=color, alpha=0.5, label='Normal')
label_normal = True
plt.title('Human heartbeat activity')
plt.legend(loc='upper right', ncol=2)
plt.show()
```
### **4.** Industrial pump data
```
FILE_PATH = os.path.join(DATA, 'pump', 'sensor.csv')
pump_df = pd.read_csv(FILE_PATH, sep=',')
pump_df.drop(columns={'Unnamed: 0'}, inplace=True)
pump_df['timestamp'] = pd.to_datetime(pump_df['timestamp'], format='%Y-%m-%d %H:%M:%S')
pump_df = pump_df.set_index('timestamp')
pump_df['machine_status'].replace(to_replace='NORMAL', value=np.nan, inplace=True)
pump_df['machine_status'].replace(to_replace='BROKEN', value=1, inplace=True)
pump_df['machine_status'].replace(to_replace='RECOVERING', value=1, inplace=True)
print('Shape:', pump_df.shape)
pump_df.head()
file_structure_df = pump_df.iloc[:, 0:10].resample('5D').mean()
plt.rcParams['hatch.linewidth'] = 0.5
plt.rcParams['lines.linewidth'] = 0.5
fig = plt.figure(figsize=(5,1))
ax1 = fig.add_subplot(1,1,1)
plot1 = ax1.plot(pump_df['sensor_00'], label='Healthy pump')
ax2 = ax1.twinx()
plot2 = ax2.fill_between(
x=pump_df.index,
y1=0.0,
y2=pump_df['machine_status'],
color=colors[1],
linewidth=0.0,
edgecolor='#000000',
alpha=0.5,
hatch="//////",
label='Broken pump'
)
ax2.grid(False)
ax2.set_yticks([])
labels = [plot1[0].get_label(), plot2.get_label()]
plt.legend(handles=[plot1[0], plot2], labels=labels, loc='lower center', ncol=2, bbox_to_anchor=(0.5, -.4))
plt.title('Industrial pump sensor data')
plt.show()
```
### **5.** London household energy consumption with weather data
We want to filter out households that are subject to the dToU tariff and keep only the ones with a known ACORN group (i.e., not in the ACORN-U group): this will allow us to enrich future analyses with the ACORN details (which, by definition, are not available for the ACORN-U group).
```
household_filename = os.path.join(DATA, 'energy-london', 'informations_households.csv')
household_df = pd.read_csv(household_filename)
household_df = household_df[(household_df['stdorToU'] == 'Std') & (household_df['Acorn'] == 'ACORN-E')]
print(household_df.shape)
household_df.head()
```
#### Associating households with their energy consumption data
Each household (with an ID starting with `MACxxxxx` in the table above) has its consumption data stored in a block file named `block_xx`. This mapping is also available in the `informations_households.csv` file loaded above. Since we have the association between `household_id` and `block_file`, we can open each block file and keep the consumption data for the households of interest. All these data will be concatenated into an `energy_df` dataframe:
```
%%time
household_ids = household_df['LCLid'].tolist()
consumption_file = os.path.join(DATA, 'energy-london', 'hourly_consumption.csv')
min_data_points = ((pd.to_datetime('2020-12-31') - pd.to_datetime('2020-01-01')).days + 1)*24*2
if os.path.exists(consumption_file):
print('Half-hourly consumption file already exists, loading from disk...')
energy_df = pd.read_csv(consumption_file)
energy_df['timestamp'] = pd.to_datetime(energy_df['timestamp'], format='%Y-%m-%d %H:%M:%S.%f')
print('Done.')
else:
print('Half-hourly consumption file not found. We need to generate it.')
# We know have the block number we can use to open the right file:
energy_df = pd.DataFrame()
target_block_files = household_df['file'].unique().tolist()
print('- {} block files to process: '.format(len(target_block_files)), end='')
df_list = []
for block_file in tqdm(target_block_files):
# Reads the current block file:
current_filename = os.path.join(DATA, 'energy-london', 'halfhourly_dataset', '{}.csv'.format(block_file))
df = pd.read_csv(current_filename)
# Set readable column names and adjust data types:
df.columns = ['household_id', 'timestamp', 'energy']
df = df.replace(to_replace='Null', value=0.0)
df['energy'] = df['energy'].astype(np.float64)
df['timestamp'] = pd.to_datetime(df['timestamp'], format='%Y-%m-%d %H:%M:%S.%f')
# We filter on the households sampled earlier:
df_list.append(df[df['household_id'].isin(household_ids)].reset_index(drop=True))
# Concatenate with the main dataframe:
energy_df = pd.concat(df_list, axis='index', ignore_index=True)
datapoints = energy_df.groupby(by='household_id').count()
datapoints = datapoints[datapoints['timestamp'] < min_data_points]
hhid_to_remove = datapoints.index.tolist()
energy_df = energy_df[~energy_df['household_id'].isin(hhid_to_remove)]
# Let's save this dataset to disk, we will use it from now on:
print('Saving file to disk... ', end='')
energy_df.to_csv(consumption_file, index=False)
print('Done.')
start = np.min(energy_df['timestamp'])
end = np.max(energy_df['timestamp'])
weather_filename = os.path.join(DATA, 'energy-london', 'weather_hourly_darksky.csv')
weather_df = pd.read_csv(weather_filename)
weather_df['time'] = pd.to_datetime(weather_df['time'], format='%Y-%m-%d %H:%M:%S')
weather_df = weather_df.drop(columns=['precipType', 'icon', 'summary'])
weather_df = weather_df.sort_values(by='time')
weather_df = weather_df.set_index('time')
weather_df = weather_df[start:end]
# Let's make sure we have one datapoint per hour to match
# the frequency used for the household energy consumption data:
weather_df = weather_df.resample(rule='1H').mean()    # This will generate NaN values for timestamps with missing data
weather_df = weather_df.interpolate(method='linear')  # This will fill the missing values by linear interpolation
print(weather_df.shape)
weather_df
energy_df = energy_df.set_index(['household_id', 'timestamp'])
energy_df
hhid = household_ids[2]
hh_energy = energy_df.loc[hhid, :]
start = '2012-07-01'
end = '2012-07-15'
fig = plt.figure(figsize=(5,1))
ax1 = fig.add_subplot(1,1,1)
plot2 = ax1.fill_between(
x=weather_df.loc[start:end, 'temperature'].index,
y1=0.0,
y2=weather_df.loc[start:end, 'temperature'],
color=colors[1],
linewidth=0.0,
edgecolor='#000000',
alpha=0.25,
hatch="//////",
label='Temperature'
)
ax1.set_ylim((0,40))
ax1.grid(False)
ax2 = ax1.twinx()
ax2.plot(hh_energy[start:end], label='Energy consumption', linewidth=2, color='#FFFFFF', alpha=0.5)
plot1 = ax2.plot(hh_energy[start:end], label='Energy consumption', linewidth=0.7)
ax2.set_title(f'Energy consumption for household {hhid}')
labels = [plot1[0].get_label(), plot2.get_label()]
plt.legend(handles=[plot1[0], plot2], labels=labels, loc='upper left', fontsize=3, ncol=2)
plt.show()
acorn_filename = os.path.join(DATA, 'energy-london', 'acorn_details.csv')
acorn_df = pd.read_csv(acorn_filename, encoding='ISO-8859-1')
acorn_df = acorn_df.sample(10).loc[:, ['MAIN CATEGORIES', 'CATEGORIES', 'REFERENCE', 'ACORN-A', 'ACORN-B', 'ACORN-E']]
acorn_df
```
## File structure exploration
---
```
from IPython.display import display_html
def display_multiple_dataframe(*args, max_rows=None, max_cols=None):
html_str = ''
for df in args:
html_str += df.to_html(max_cols=max_cols, max_rows=max_rows)
display_html(html_str.replace('table','table style="display:inline"'), raw=True)
display_multiple_dataframe(
file_structure_df[['sensor_00']],
file_structure_df[['sensor_01']],
file_structure_df[['sensor_03']],
max_rows=10, max_cols=None
)
display_multiple_dataframe(
file_structure_df.loc['2018-04', :].head(6),
file_structure_df.loc['2018-05', :].head(6),
file_structure_df.loc['2018-06', :].head(6),
max_rows=None, max_cols=2
)
display_multiple_dataframe(
file_structure_df.loc['2018-04', ['sensor_00']].head(6),
file_structure_df.loc['2018-05', ['sensor_00']].head(6),
file_structure_df.loc['2018-06', ['sensor_00']].head(6),
max_rows=10, max_cols=None
)
display_multiple_dataframe(
file_structure_df.loc['2018-04', ['sensor_01']].head(6),
file_structure_df.loc['2018-05', ['sensor_01']].head(6),
file_structure_df.loc['2018-06', ['sensor_01']].head(6),
max_rows=10, max_cols=None
)
print('.\n.\n.')
display_multiple_dataframe(
file_structure_df.loc['2018-04', ['sensor_09']].head(6),
file_structure_df.loc['2018-05', ['sensor_09']].head(6),
file_structure_df.loc['2018-06', ['sensor_09']].head(6),
max_rows=10, max_cols=None
)
df1 = pump_df.iloc[:, [0]].resample('5D').mean()
df2 = pump_df.iloc[:, [1]].resample('2D').mean()
df3 = pump_df.iloc[:, [2]].resample('7D').mean()
display_multiple_dataframe(
df1.head(10), df2.head(10), df3.head(10),
pd.merge(pd.merge(df1, df2, left_index=True, right_index=True, how='outer'), df3, left_index=True, right_index=True, how='outer').head(10),
max_rows=None, max_cols=None
)
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', 10)
pd.merge(pd.merge(df1, df2, left_index=True, right_index=True, how='outer'), df3, left_index=True, right_index=True, how='outer').head(10)
plt.figure(figsize=(5,1))
for i in range(len(colors)):
plt.plot(file_structure_df[f'sensor_0{i}'], linewidth=2, alpha=0.5, label=colors[i])
plt.legend()
plt.show()
```
## Visualization
---
```
fig = plt.figure(figsize=(5,1))
ax1 = fig.add_subplot(1,1,1)
ax2 = ax1.twinx()
plot_sensor_0 = ax1.plot(pump_df['sensor_00'], label='Sensor 0', color=colors[0], linewidth=1, alpha=0.8)
plot_sensor_1 = ax2.plot(pump_df['sensor_01'], label='Sensor 1', color=colors[1], linewidth=1, alpha=0.8)
ax2.grid(False)
plt.title('Pump sensor values (2 sensors)')
plt.legend(handles=[plot_sensor_0[0], plot_sensor_1[0]], ncol=2, loc='lower right')
plt.show()
reduced_pump_df = pump_df.loc[:, 'sensor_00':'sensor_14']
reduced_pump_df = reduced_pump_df.replace([np.inf, -np.inf], np.nan)
reduced_pump_df = reduced_pump_df.fillna(0.0)
reduced_pump_df = reduced_pump_df.astype(np.float32)
scaled_pump_df = pd.DataFrame(normalize(reduced_pump_df), index=reduced_pump_df.index, columns=reduced_pump_df.columns)
scaled_pump_df
fig = plt.figure(figsize=(5,1))
for i in range(0,15):
plt.plot(scaled_pump_df.iloc[:, i], alpha=0.6)
plt.title('Pump sensor values (15 sensors)')
plt.show()
pump_df2 = pump_df.copy()
pump_df2 = pump_df2.replace([np.inf, -np.inf], np.nan)
pump_df2 = pump_df2.fillna(0.0)
pump_df2 = pump_df2.astype(np.float32)
pump_description = pump_df2.describe().T
constant_signals = pump_description[pump_description['min'] == pump_description['max']].index.tolist()
pump_df2 = pump_df2.drop(columns=constant_signals)
features = pump_df2.columns.tolist()
def hex_to_rgb(hex_color):
"""
Converts a color string in hexadecimal format to RGB format.
PARAMS
======
hex_color: string
A string describing the color to convert from hexadecimal. It can
include the leading # character or not
RETURNS
=======
rgb_color: tuple
Each color component of the returned tuple will be a float value
between 0.0 and 1.0
"""
hex_color = hex_color.lstrip('#')
rgb_color = tuple(int(hex_color[i:i+2], base=16) / 255.0 for i in [0, 2, 4])
return rgb_color
def plot_timeseries_strip_chart(binned_timeseries, signal_list, fig_width=12, signal_height=0.15, dates=None, day_interval=7):
# Build a suitable colormap:
colors_list = [
hex_to_rgb('#DC322F'),
hex_to_rgb('#B58900'),
hex_to_rgb('#2AA198')
]
cm = mpl_colors.LinearSegmentedColormap.from_list('RdAmGr', colors_list, N=len(colors_list))
fig = plt.figure(figsize=(fig_width, signal_height * binned_timeseries.shape[0]))
ax = fig.add_subplot(1,1,1)
# Devising the extent of the actual plot:
if dates is not None:
dnum = mdates.date2num(dates)
start = dnum[0] - (dnum[1]-dnum[0])/2.
stop = dnum[-1] + (dnum[1]-dnum[0])/2.
extent = [start, stop, 0, signal_height * (binned_timeseries.shape[0])]
else:
extent = None
# Plot the matrix:
im = ax.imshow(binned_timeseries,
extent=extent,
aspect="auto",
cmap=cm,
origin='lower')
# Adjusting the x-axis if we provide dates:
if dates is not None:
ax.xaxis.set_major_locator(mdates.MonthLocator())
ax.xaxis.set_major_formatter(mdates.DateFormatter('%Y-%m-%d'))
for tick in ax.xaxis.get_major_ticks():
tick.label.set_fontsize(4)
tick.label.set_rotation(60)
tick.label.set_fontweight('bold')
ax.tick_params(axis='x', which='major', pad=7, labelcolor='#000000')
plt.xticks(ha='right')
# Adjusting the y-axis:
ax.yaxis.set_major_locator(ticker.MultipleLocator(signal_height))
ax.set_yticklabels(signal_list, verticalalignment='bottom', fontsize=4)
ax.set_yticks(np.arange(len(signal_list)) * signal_height)
plt.grid()
return ax
from IPython.display import display, Markdown, Latex
# Build a list of dataframes, one per sensor:
df_list = []
for f in features[:1]:
df_list.append(pump_df2[[f]])
# Discretize each signal in 3 bins:
array = tsia.markov.discretize_multivariate(df_list)
fig = plt.figure(figsize=(5.5, 0.6))
plt.plot(pump_df2['sensor_00'], linewidth=0.7, alpha=0.6)
plt.title('Line plot of the pump sensor 0')
plt.show()
display(Markdown('<img src="arrow.png" align="left" style="padding-left: 730px"/>'))
# Plot the strip chart:
ax = plot_timeseries_strip_chart(
array,
signal_list=features[:1],
fig_width=5.21,
signal_height=0.2,
dates=df_list[0].index.to_pydatetime(),
day_interval=2
)
ax.set_title('Strip chart of the pump sensor 0');
# Build a list of dataframes, one per sensor:
df_list = []
for f in features:
df_list.append(pump_df2[[f]])
# Discretize each signal in 3 bins:
array = tsia.markov.discretize_multivariate(df_list)
# Plot the strip chart:
fig = plot_timeseries_strip_chart(
array,
signal_list=features,
fig_width=5.5,
signal_height=0.1,
dates=df_list[0].index.to_pydatetime(),
day_interval=2
)
```
### Recurrence plot
```
from pyts.image import RecurrencePlot
from pyts.image import GramianAngularField
from pyts.image import MarkovTransitionField
hhid = household_ids[2]
hh_energy = energy_df.loc[hhid, :]
pump_extract_df = pump_df.iloc[:800, 0].copy()
rp = RecurrencePlot(threshold='point', percentage=30)
weather_rp = rp.fit_transform(weather_df.loc['2013-01-01':'2013-01-31']['temperature'].values.reshape(1, -1))
energy_rp = rp.fit_transform(hh_energy['2012-07-01':'2012-07-15'].values.reshape(1, -1))
pump_rp = rp.fit_transform(pump_extract_df.values.reshape(1, -1))
fig = plt.figure(figsize=(5.5, 2.4))
gs = gridspec.GridSpec(nrows=3, ncols=2, width_ratios=[3,1], hspace=0.8, wspace=0.0)
# Pump sensor 0:
ax = fig.add_subplot(gs[0])
ax.plot(pump_extract_df, label='Pump sensor 0')
ax.set_title(f'Pump sensor 0')
ax = fig.add_subplot(gs[1])
ax.imshow(pump_rp[0], cmap='binary', origin='lower')
ax.axis('off')
# Energy consumption line plot and recurrence plot:
ax = fig.add_subplot(gs[2])
plot1 = ax.plot(hh_energy['2012-07-01':'2012-07-15'], color=colors[1])
ax.set_title(f'Energy consumption for household {hhid}')
ax = fig.add_subplot(gs[3])
ax.imshow(energy_rp[0], cmap='binary', origin='lower')
ax.axis('off')
# Daily temperature line plot and recurrence plot:
ax = fig.add_subplot(gs[4])
start = '2012-07-01'
end = '2012-07-15'
ax.plot(weather_df.loc['2013-01-01':'2013-01-31']['temperature'], color=colors[2])
ax.set_title(f'Daily temperature')
ax = fig.add_subplot(gs[5])
ax.imshow(weather_rp[0], cmap='binary', origin='lower')
ax.axis('off')
plt.show()
hhid = household_ids[2]
hh_energy = energy_df.loc[hhid, :]
pump_extract_df = pump_df.iloc[:800, 0].copy()
gaf = GramianAngularField(image_size=48, method='summation')
weather_gasf = gaf.fit_transform(weather_df.loc['2013-01-01':'2013-01-31']['temperature'].values.reshape(1, -1))
energy_gasf = gaf.fit_transform(hh_energy['2012-07-01':'2012-07-15'].values.reshape(1, -1))
pump_gasf = gaf.fit_transform(pump_extract_df.values.reshape(1, -1))
fig = plt.figure(figsize=(5.5, 2.4))
gs = gridspec.GridSpec(nrows=3, ncols=2, width_ratios=[3,1], hspace=0.8, wspace=0.0)
# Pump sensor 0:
ax = fig.add_subplot(gs[0])
ax.plot(pump_extract_df, label='Pump sensor 0')
ax.set_title(f'Pump sensor 0')
ax = fig.add_subplot(gs[1])
ax.imshow(pump_gasf[0], cmap='RdBu_r', origin='lower')
ax.axis('off')
# Energy consumption line plot and recurrence plot:
ax = fig.add_subplot(gs[2])
plot1 = ax.plot(hh_energy['2012-07-01':'2012-07-15'], color=colors[1])
ax.set_title(f'Energy consumption for household {hhid}')
ax = fig.add_subplot(gs[3])
ax.imshow(energy_gasf[0], cmap='RdBu_r', origin='lower')
ax.axis('off')
# Daily temperature line plot and recurrence plot:
ax = fig.add_subplot(gs[4])
start = '2012-07-01'
end = '2012-07-15'
ax.plot(weather_df.loc['2013-01-01':'2013-01-31']['temperature'], color=colors[2])
ax.set_title(f'Daily temperature')
ax = fig.add_subplot(gs[5])
ax.imshow(weather_gasf[0], cmap='RdBu_r', origin='lower')
ax.axis('off')
plt.show()
mtf = MarkovTransitionField(image_size=48)
weather_mtf = mtf.fit_transform(weather_df.loc['2013-01-01':'2013-01-31']['temperature'].values.reshape(1, -1))
energy_mtf = mtf.fit_transform(hh_energy['2012-07-01':'2012-07-15'].values.reshape(1, -1))
pump_mtf = mtf.fit_transform(pump_extract_df.values.reshape(1, -1))
fig = plt.figure(figsize=(5.5, 2.4))
gs = gridspec.GridSpec(nrows=3, ncols=2, width_ratios=[3,1], hspace=0.8, wspace=0.0)
# Pump sensor 0:
ax = fig.add_subplot(gs[0])
ax.plot(pump_extract_df, label='Pump sensor 0')
ax.set_title(f'Pump sensor 0')
ax = fig.add_subplot(gs[1])
ax.imshow(pump_mtf[0], cmap='RdBu_r', origin='lower')
ax.axis('off')
# Energy consumption line plot and recurrence plot:
ax = fig.add_subplot(gs[2])
plot1 = ax.plot(hh_energy['2012-07-01':'2012-07-15'], color=colors[1])
ax.set_title(f'Energy consumption for household {hhid}')
ax = fig.add_subplot(gs[3])
ax.imshow(energy_mtf[0], cmap='RdBu_r', origin='lower')
ax.axis('off')
# Daily temperature line plot and recurrence plot:
ax = fig.add_subplot(gs[4])
start = '2012-07-01'
end = '2012-07-15'
ax.plot(weather_df.loc['2013-01-01':'2013-01-31']['temperature'], color=colors[2])
ax.set_title(f'Daily temperature')
ax = fig.add_subplot(gs[5])
ax.imshow(weather_mtf[0], cmap='RdBu_r', origin='lower')
ax.axis('off')
plt.show()
import matplotlib
import matplotlib.cm as cm
import networkx as nx
import community
def compute_network_graph(markov_field):
G = nx.from_numpy_matrix(markov_field[0])
# Uncover the communities in the current graph:
communities = community.best_partition(G)
nb_communities = len(pd.Series(communities).unique())
cmap = 'autumn'
# Compute node colors and edges colors for the modularity encoding:
edge_colors = [matplotlib.colors.to_hex(cm.get_cmap(cmap)(communities.get(v)/(nb_communities - 1))) for u,v in G.edges()]
node_colors = [communities.get(node) for node in G.nodes()]
node_size = [nx.average_clustering(G, [node])*90 for node in G.nodes()]
# Builds the options set to draw the network graph in the "modularity" configuration:
options = {
'node_size': 10,
'edge_color': edge_colors,
'node_color': node_colors,
'linewidths': 0,
'width': 0.1,
'alpha': 0.6,
'with_labels': False,
'cmap': cmap
}
return G, options
fig = plt.figure(figsize=(5.5, 2.4))
gs = gridspec.GridSpec(nrows=3, ncols=2, width_ratios=[3,1], hspace=0.8, wspace=0.0)
# Pump sensor 0:
ax = fig.add_subplot(gs[0])
ax.plot(pump_extract_df, label='Pump sensor 0')
ax.set_title(f'Pump sensor 0')
ax = fig.add_subplot(gs[1])
G, options = compute_network_graph(pump_mtf)  # use the pump MTF for the pump sensor panel
nx.draw_networkx(G, **options, pos=nx.spring_layout(G), ax=ax)
ax.axis('off')
# Energy consumption line plot and recurrence plot:
ax = fig.add_subplot(gs[2])
plot1 = ax.plot(hh_energy['2012-07-01':'2012-07-15'], color=colors[1])
ax.set_title(f'Energy consumption for household {hhid}')
ax = fig.add_subplot(gs[3])
G, options = compute_network_graph(energy_mtf)
nx.draw_networkx(G, **options, pos=nx.spring_layout(G), ax=ax)
ax.axis('off')
# Daily temperature line plot and recurrence plot:
ax = fig.add_subplot(gs[4])
start = '2012-07-01'
end = '2012-07-15'
ax.plot(weather_df.loc['2013-01-01':'2013-01-31']['temperature'], color=colors[2])
ax.set_title(f'Daily temperature')
ax = fig.add_subplot(gs[5])
G, options = compute_network_graph(weather_mtf)
nx.draw_networkx(G, **options, pos=nx.spring_layout(G), ax=ax)
ax.axis('off')
plt.show()
```
## Symbolic representation
---
```
from pyts.bag_of_words import BagOfWords
window_size, word_size = 30, 5
bow = BagOfWords(window_size=window_size, word_size=word_size, window_step=window_size, numerosity_reduction=False)
X = weather_df.loc['2013-01-01':'2013-01-31']['temperature'].values.reshape(1, -1)
X_bow = bow.transform(X)
time_index = weather_df.loc['2013-01-01':'2013-01-31']['temperature'].index
len(X_bow[0].replace(' ', ''))
# Plot the considered subseries
plt.figure(figsize=(5, 2))
splits_series = np.linspace(0, X.shape[1], 1 + X.shape[1] // window_size, dtype='int64')
for start, end in zip(splits_series[:-1], np.clip(splits_series[1:] + 1, 0, X.shape[1])):
plt.plot(np.arange(start, end), X[0, start:end], 'o-', linewidth=0.5, ms=0.1)
# Plot the corresponding letters
splits_letters = np.linspace(0, X.shape[1], 1 + word_size * X.shape[1] // window_size)
splits_letters = ((splits_letters[:-1] + splits_letters[1:]) / 2)
splits_letters = splits_letters.astype('int64')
for i, (x, text) in enumerate(zip(splits_letters, X_bow[0].replace(' ', ''))):
t = plt.text(x, X[0, x], text, color="C{}".format(i // 5), fontsize=3.5)
t.set_bbox(dict(facecolor='#FFFFFF', alpha=0.5, edgecolor="C{}".format(i // 5), boxstyle='round4'))
plt.title('Bag-of-words representation for weather temperature')
plt.tight_layout()
plt.show()
from pyts.transformation import WEASEL
from sklearn.preprocessing import LabelEncoder
X_train = ecg_df.iloc[:, 1:].values
y_train = ecg_df.iloc[:, 0]
y_train = LabelEncoder().fit_transform(y_train)
weasel = WEASEL(word_size=3, n_bins=3, window_sizes=[10, 25], sparse=False)
X_weasel = weasel.fit_transform(X_train, y_train)
vocabulary_length = len(weasel.vocabulary_)
plt.figure(figsize=(5,1.5))
width = 0.4
x = np.arange(vocabulary_length) - width / 2
for i in range(len(X_weasel[y_train == 0])):
if i == 0:
plt.bar(x, X_weasel[y_train == 0][i], width=width, alpha=0.25, color=colors[1], label='Time series for Ischemia')
else:
plt.bar(x, X_weasel[y_train == 0][i], width=width, alpha=0.25, color=colors[1])
for i in range(len(X_weasel[y_train == 1])):
if i == 0:
plt.bar(x+width, X_weasel[y_train == 1][i], width=width, alpha=0.25, color=colors[0], label='Time series for Normal heartbeat')
else:
plt.bar(x+width, X_weasel[y_train == 1][i], width=width, alpha=0.25, color=colors[0])
plt.xticks(
np.arange(vocabulary_length),
np.vectorize(weasel.vocabulary_.get)(np.arange(X_weasel[0].size)),
fontsize=2,
rotation=60
)
plt.legend(loc='upper right')
plt.show()
```
## Statistics
---
```
plt.rcParams['xtick.labelsize'] = 3
import statsmodels.api as sm
fig = plt.figure(figsize=(5.5, 3))
gs = gridspec.GridSpec(nrows=3, ncols=2, width_ratios=[1,1], hspace=0.8)
# Pump
ax = fig.add_subplot(gs[0])
ax.plot(pump_extract_df, label='Pump sensor 0')
ax.set_title(f'Pump sensor 0')
ax.tick_params(axis='x', which='both', labelbottom=False)
ax = fig.add_subplot(gs[1])
sm.graphics.tsa.plot_acf(pump_extract_df.values.squeeze(), ax=ax, markersize=1, title='')
ax.set_ylim(-1.2, 1.2)
ax.tick_params(axis='x', which='major', labelsize=4)
# Energy consumption
ax = fig.add_subplot(gs[2])
ax.plot(hh_energy['2012-07-01':'2012-07-15'], color=colors[1])
ax.set_title(f'Energy consumption for household {hhid}')
ax.tick_params(axis='x', which='both', labelbottom=False)
ax = fig.add_subplot(gs[3])
sm.graphics.tsa.plot_acf(hh_energy['2012-07-01':'2012-07-15'].values.squeeze(), ax=ax, markersize=1, title='')
ax.set_ylim(-0.3, 0.3)
ax.tick_params(axis='x', which='major', labelsize=4)
# Daily temperature:
ax = fig.add_subplot(gs[4])
start = '2012-07-01'
end = '2012-07-15'
ax.plot(weather_df.loc['2013-01-01':'2013-01-31']['temperature'], color=colors[2])
ax.set_title(f'Daily temperature')
ax.tick_params(axis='x', which='both', labelbottom=False)
ax = fig.add_subplot(gs[5])
sm.graphics.tsa.plot_acf(weather_df.loc['2013-01-01':'2013-01-31']['temperature'].values.squeeze(), ax=ax, markersize=1, title='')
ax.set_ylim(-1.2, 1.2)
ax.tick_params(axis='x', which='major', labelsize=4)
plt.show()
from statsmodels.tsa.seasonal import STL
plt.rcParams['lines.markersize'] = 1
title = f'Energy consumption for household {hhid}'
endog = hh_energy['2012-07-01':'2012-07-15']
endog.columns = [title]
endog = endog[title]
# Resample to a regular 30-minute frequency before the STL decomposition
endog = endog.resample('30T').mean()
stl = STL(endog, period=48)
res = stl.fit()
fig = res.plot()
fig = plt.gcf()
fig.set_size_inches(5.5, 4)
plt.show()
```
## Binary segmentation
---
```
signal = weather_df.loc['2013-01-01':'2013-01-31']['temperature'].values.squeeze()
algo = rpt.Binseg(model='l2').fit(signal)
my_bkps = algo.predict(n_bkps=3)
my_bkps = [0] + my_bkps
my_bkps
fig = plt.figure(figsize=(5.5,1))
start = '2012-07-01'
end = '2012-07-15'
plt.plot(weather_df.loc['2013-01-01':'2013-01-31']['temperature'], color='#FFFFFF', linewidth=1.2, alpha=0.8)
plt.plot(weather_df.loc['2013-01-01':'2013-01-31']['temperature'], color=colors[2], linewidth=0.7)
plt.title(f'Daily temperature')
plt.xticks(rotation=60, fontsize=4)
weather_index = weather_df.loc['2013-01-01':'2013-01-31']['temperature'].index
for index, bkps in enumerate(my_bkps[:-1]):
x1 = weather_index[my_bkps[index]]
x2 = weather_index[np.clip(my_bkps[index+1], 0, len(weather_index)-1)]
plt.axvspan(x1, x2, color=colors[index % 5], alpha=0.2)
plt.title('Daily temperature segmentation')
plt.show()
```
| github_jupyter |
## ML Lab 3
### Neural Networks
In the following exercise class we explore how to design and train neural networks in various ways.
#### Prerequisites:
In order to follow the exercises you need to:
1. Activate your conda environment from last week via: `source activate <env-name>`
2. Install tensorflow (https://www.tensorflow.org) via: `pip install tensorflow` (CPU-only)
3. Install keras (provides high level wrapper for tensorflow) (https://keras.io) via: `pip install keras`
## Exercise 1: Create a 2 layer network that acts as an XOR gate using numpy.
XOR is a fundamental logic gate that outputs a one whenever there is an odd parity of ones in its input and zero otherwise. For two inputs this can be thought of as an exclusive or operation and the associated boolean function is fully characterized by the following truth table.
| X | Y | XOR(X,Y) |
|---|---|----------|
| 0 | 0 | 0 |
| 0 | 1 | 1 |
| 1 | 0 | 1 |
| 1 | 1 | 0 |
The function of an XOR gate can also be understood as a classification problem on $v \in \{0,1\}^2$ and we can think about designing a classifier acting as an XOR gate. It turns out that this problem is not solvable by any single layer perceptron (https://en.wikipedia.org/wiki/Perceptron) because the two classes $\{(0,0), (1,1)\}$ and $\{(0,1), (1,0)\}$ are not linearly separable.
**Design a two layer perceptron using basic numpy matrix operations that implements an XOR Gate on two inputs. Think about the flow of information and accordingly set the weight values by hand.**
### Data
```
import numpy as np
def generate_xor_data():
X = [(i,j) for i in [0,1] for j in [0,1]]
y = [int(np.logical_xor(x[0], x[1])) for x in X]
return X, y
print(generate_xor_data())
```
### Hints
A single layer in a multilayer perceptron can be described by the equation $y = f(\vec{b} + W\vec{x})$ with $f$ the logistic function, a smooth and differentiable version of the step function, and defined as $f(z) = \frac{1}{1+e^{-z}}$. $\vec{b}$ is the so called bias, a constant offset vector and $W$ is the weight matrix. However, since we set the weights by hand feel free to use hard thresholding instead of using the logistic function. Write down the equation for a two layer MLP and implement it with numpy. For documentation see https://docs.scipy.org/doc/numpy-1.13.0/reference/
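If you prefer the smooth version over hard thresholding, a minimal sketch of the logistic activation and of a single layer could look like this (the function names are placeholders, not part of the required solution):
```
# A minimal sketch of the pieces described in the hint:
# the logistic activation and one perceptron layer y = f(b + Wx).
import numpy as np

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

def layer(x, W, b, f=logistic):
    return f(np.dot(W, x) + b)
```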
```
"""
Implement your solution here.
"""
```
### Solution
| X | Y | AND(NOT X, Y) | AND(X,NOT Y) | OR[AND(NOT X, Y), AND(X, NOT Y)]| XOR(X,Y) |
|---|---|---------------|--------------|---------------------------------|----------|
| 0 | 0 | 0 | 0 | 0 | 0 |
| 0 | 1 | 1 | 0 | 1 | 1 |
| 1 | 0 | 0 | 1 | 1 | 1 |
| 1 | 1 | 0 | 0 | 0 | 0 |
Implement XOR as a combination of 2 AND Gates and 1 OR gate where each neuron in the network acts as one of these gates.
```
"""
Definitions:
Input = np.array([X,Y])
0 if value < 0.5
1 if value >= 0.5
"""
def threshold(vector):
return (vector>=0.5).astype(float)
def mlp(x, W0, W1, b0, b1, f):
x0 = f(np.dot(W0, x) + b0)
x1 = f(np.dot(W1, x0) + b1)
return x1
# AND(NOT X, Y)
w_andnotxy = np.array([-1.0, 1.0])
# AND(X, NOT Y)
w_andxnoty = np.array([1.0, -1.0])
# W0 weight matrix:
W0 = np.vstack([w_andnotxy, w_andxnoty])
# OR(X,Y)
w_or = np.array([1., 1.])
W1 = w_or
# No biases needed
b0 = np.array([0.0,0.0])
b1 = 0.0
print("Input", "Output", "XOR")
xx,yy = generate_xor_data()
for x,y in zip(xx, yy):
print(x, int(mlp(x, W0, W1, b0, b1, threshold))," ", y)
```
## Exercise 2: Use Keras to design, train and evaluate a neural network that can classify points on a 2D plane.
### Data generator
```
import numpy as np
import matplotlib.pyplot as plt
def generate_spiral_data(n_points, noise=1.0):
n = np.sqrt(np.random.rand(n_points,1)) * 780 * (2*np.pi)/360
d1x = -np.cos(n)*n + np.random.rand(n_points,1) * noise
d1y = np.sin(n)*n + np.random.rand(n_points,1) * noise
return (np.vstack((np.hstack((d1x,d1y)),np.hstack((-d1x,-d1y)))),
np.hstack((np.zeros(n_points),np.ones(n_points))))
```
### Training data
```
X_train, y_train = generate_spiral_data(1000)
plt.title('Training set')
plt.plot(X_train[y_train==0,0], X_train[y_train==0,1], '.', label='Class 1')
plt.plot(X_train[y_train==1,0], X_train[y_train==1,1], '.', label='Class 2')
plt.legend()
plt.show()
```
### Test data
```
X_test, y_test = generate_spiral_data(1000)
plt.title('Test set')
plt.plot(X_test[y_test==0,0], X_test[y_test==0,1], '.', label='Class 1')
plt.plot(X_test[y_test==1,0], X_test[y_test==1,1], '.', label='Class 2')
plt.legend()
plt.show()
```
### 2.1. Design and train your model
The current model performs badly; try to find a more advanced architecture that is able to solve the classification problem. Read the following code snippet and understand the involved functions. Vary the width and depth of the network and play around with activation functions, loss functions, and optimizers to achieve a better result. Read up on parameters and functions for sequential models at https://keras.io/getting-started/sequential-model-guide/.
```
from keras.models import Sequential
from keras.layers import Dense
"""
Replace the following model with yours and try to achieve better classification performance
"""
bad_model = Sequential()
bad_model.add(Dense(12, input_dim=2, activation='tanh'))
bad_model.add(Dense(1, activation='sigmoid'))
bad_model.compile(loss='mean_squared_error',
optimizer='SGD', # SGD = Stochastic Gradient Descent
metrics=['accuracy'])
# Train the model
bad_model.fit(X_train, y_train, epochs=150, batch_size=10, verbose=0)
```
### Predict
```
bad_prediction = np.round(bad_model.predict(X_test).T[0])
```
### Visualize
```
plt.subplot(1,2,1)
plt.title('Test set')
plt.plot(X_test[y_test==0,0], X_test[y_test==0,1], '.')
plt.plot(X_test[y_test==1,0], X_test[y_test==1,1], '.')
plt.subplot(1,2,2)
plt.title('Bad model classification')
plt.plot(X_test[bad_prediction==0,0], X_test[bad_prediction==0,1], '.')
plt.plot(X_test[bad_prediction==1,0], X_test[bad_prediction==1,1], '.')
plt.show()
```
### 2.2. Visualize the decision boundary of your model.
```
"""
Implement your solution here.
"""
```
## Solution
### Model design and training
```
from keras.layers import Dense, Dropout
good_model = Sequential()
good_model.add(Dense(64, input_dim=2, activation='relu'))
good_model.add(Dense(64, activation='relu'))
good_model.add(Dense(64, activation='relu'))
good_model.add(Dense(1, activation='sigmoid'))
good_model.compile(loss='binary_crossentropy',
optimizer='rmsprop',
metrics=['accuracy'])
good_model.fit(X_train, y_train, epochs=150, batch_size=10, verbose=0)
```
### Prediction
```
good_prediction = np.round(good_model.predict(X_test).T[0])
```
### Visualization
#### Performance
```
plt.subplot(1,2,1)
plt.title('Test set')
plt.plot(X_test[y_test==0,0], X_test[y_test==0,1], '.')
plt.plot(X_test[y_test==1,0], X_test[y_test==1,1], '.')
plt.subplot(1,2,2)
plt.title('Good model classification')
plt.plot(X_test[good_prediction==0,0], X_test[good_prediction==0,1], '.')
plt.plot(X_test[good_prediction==1,0], X_test[good_prediction==1,1], '.')
plt.show()
```
#### Decision boundary
```
# Generate grid:
line = np.linspace(-15,15)
xx, yy = np.meshgrid(line,line)
grid = np.stack((xx,yy))
# Reshape to fit model input size:
grid = grid.T.reshape(-1,2)
# Predict:
good_prediction = good_model.predict(grid)
bad_prediction = bad_model.predict(grid)
# Reshape to grid for visualization:
plt.title("Good Decision Boundary")
good_prediction = good_prediction.T[0].reshape(len(line),len(line))
plt.contourf(xx,yy,good_prediction)
plt.show()
plt.title("Bad Decision Boundary")
bad_prediction = bad_prediction.T[0].reshape(len(line),len(line))
plt.contourf(xx,yy,bad_prediction)
plt.show()
```
## Exercise 3: Design, train and test a neural network that is able to classify MNIST digits using Keras.
### Data
```
from keras.datasets import mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
"""
Returns:
2 tuples:
x_train, x_test: uint8 array of grayscale image data with shape (num_samples, 28, 28).
y_train, y_test: uint8 array of digit labels (integers in range 0-9) with shape (num_samples,).
"""
# Show example data
plt.subplot(1,4,1)
plt.imshow(x_train[0], cmap=plt.get_cmap('gray'))
plt.subplot(1,4,2)
plt.imshow(x_train[1], cmap=plt.get_cmap('gray'))
plt.subplot(1,4,3)
plt.imshow(x_train[2], cmap=plt.get_cmap('gray'))
plt.subplot(1,4,4)
plt.imshow(x_train[3], cmap=plt.get_cmap('gray'))
plt.show()
"""
Implement your solution here.
"""
```
### Solution
```
from keras.utils import to_categorical
from keras.models import Sequential
from keras.layers import Dense, Flatten, Dropout, Conv2D, MaxPooling2D
"""
We need to add a channel dimension
to the image input.
"""
x_train = x_train.reshape(x_train.shape[0],
x_train.shape[1],
x_train.shape[2],
1)
x_test = x_test.reshape(x_test.shape[0],
x_test.shape[1],
x_test.shape[2],
1)
"""
Convert the image data to 32-bit floats normalized
between 0 and 1 for numerical stability.
"""
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
input_shape = (x_train.shape[1], x_train.shape[2], 1)
"""
Output should be a 10 dimensional 1-hot vector,
not just an integer denoting the digit.
This is due to our use of softmax to "squish" network
output for classification.
"""
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)
"""
We construct a CNN with 2 convolution layers
and use max-pooling between each convolution layer;
we finish with two dense layers for classification.
"""
cnn_model = Sequential()
cnn_model.add(Conv2D(filters=32,
kernel_size=(3,3),
activation='relu',
input_shape=input_shape))
cnn_model.add(MaxPooling2D(pool_size=(2, 2)))
cnn_model.add(Conv2D(filters=32,
kernel_size=(3, 3),
activation='relu'))
cnn_model.add(MaxPooling2D(pool_size=(2, 2)))
cnn_model.add(Flatten())
cnn_model.add(Dense(64, activation='relu'))
cnn_model.add(Dense(10, activation='softmax')) # softmax for classification
cnn_model.compile(loss='categorical_crossentropy',
optimizer='adagrad', # adaptive optimizer (still similar to SGD)
metrics=['accuracy'])
"""Train the CNN model and evaluate test accuracy."""
cnn_model.fit(x_train,
y_train,
batch_size=128,
epochs=10,
verbose=1,
validation_data=(x_test, y_test)) # never actually validate using test data!
score = cnn_model.evaluate(x_test, y_test, verbose=0)
print('MNIST test set accuracy:', score[1])
"""Visualize some test data and network output."""
y_predict = cnn_model.predict(x_test, verbose=0)
y_predict_digits = [np.argmax(y_predict[i]) for i in range(y_predict.shape[0])]
plt.subplot(1,4,1)
plt.imshow(x_test[0,:,:,0], cmap=plt.get_cmap('gray'))
plt.subplot(1,4,2)
plt.imshow(x_test[1,:,:,0], cmap=plt.get_cmap('gray'))
plt.subplot(1,4,3)
plt.imshow(x_test[2,:,:,0], cmap=plt.get_cmap('gray'))
plt.subplot(1,4,4)
plt.imshow(x_test[3,:,:,0], cmap=plt.get_cmap('gray'))
plt.show()
print("CNN predictions: {0}, {1}, {2}, {3}".format(y_predict_digits[0],
y_predict_digits[1],
y_predict_digits[2],
y_predict_digits[3]))
```
# Initial Modelling notebook
```
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import warnings
import bay12_solution_eposts as solution
```
## Load data
```
post, thread = solution.prepare.load_dfs('train')
post.head(2)
thread.head(2)
```
I will set the thread number to be the index, to simplify matching in the future:
```
thread = thread.set_index('thread_num')
thread.head(2)
```
We'll load the label map as well, which tells us which index goes to which label
```
label_map = solution.prepare.load_label_map()
label_map
```
## Create features from thread dataframe
We will fit a CountVectorizer, which is a simple transformation that counts how many times each word appears.
The parameter `min_df` sets the minimum number of occurrences a word needs in our set in order to join our vocabulary.
```
from sklearn.feature_extraction.text import CountVectorizer
cv = CountVectorizer(ngram_range=(1, 1), min_df=3)
word_vectors_raw = cv.fit_transform(thread['thread_name'])
```
To save space, this outputs a sparse matrix:
```
word_vectors_raw
```
However, since we'll be using it with a DataFrame, we need to convert it into a Pandas DataFrame:
```
word_df = pd.DataFrame(word_vectors_raw.toarray(), columns=cv.get_feature_names(), index=thread.index)
word_df.head()
```
The only other feature we have from our thread data is the number of replies. Let's add one to the reply count to get the number of posts. Also, let's use the logarithm of the post count as well, just for fun.
We'll concatenate those into our X dataframe (Note that I'm renaming the columns, to keep track more easily):
```
X = pd.concat([
(thread['thread_replies'] + 1).rename('posts'),
np.log(thread['thread_replies'] + 1).rename('log_posts'),
word_df,
], axis='columns')
X.head()
```
Our target is the category number. Remember that this isn't a regression task - there is no actual order between these categories! Also, our Y is one-dimensional, so we'll keep it as a Series (even though it prints less prettily).
```
y = thread['thread_label_id']
y.head()
```
## Split dataset into "training" and "validation"
In order to check the quality of our model in a more realistic setting, we will split all our input (training) data into a "training set" (which our model will see and learn from) and a "validation set" (where we see how well our model generalized). [Relevant link](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html).
```
from sklearn.model_selection import train_test_split
# NOTE: setting the `random_state` lets you get the same results with the pseudo-random generator
validation_pct = 0.25
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=validation_pct, random_state=99)
X_train.shape, y_train.shape
X_val.shape, y_val.shape
```
## Fit a model
Since we are fitting a multiclass model, [this scikit-learn link](https://scikit-learn.org/stable/modules/multiclass.html) is very relevant. To simplify things, we will be using an algorithm that is inherently multi-class.
```
from sklearn.tree import DecisionTreeClassifier
# Just using default parameters... what can go wrong?
cls = DecisionTreeClassifier(random_state=1337)
# Fit
cls.fit(X_train, y_train)
# In-sample and out-of-sample predictions
# NOTE: we wrap the predictions in pandas Series so they keep the original row index
y_train_pred = pd.Series(
cls.predict(X_train),
index=X_train.index,
)
y_val_pred = pd.Series(
cls.predict(X_val),
index=X_val.index,
)
y_val_pred.head()
```
## Score the model
To find out how well the model did, we'll use the [model evaluation functionality of sklearn](https://scikit-learn.org/stable/modules/model_evaluation.html); specifically, the [multiclass classification metrics](https://scikit-learn.org/stable/modules/model_evaluation.html#classification-metrics).
```
from sklearn.metrics import confusion_matrix, accuracy_score, classification_report
```
The [confusion matrix](https://en.wikipedia.org/wiki/Confusion_matrix) shows how our predictions differ from the actual values.
It's important to note how strongly our in-sample (training) and out-of-sample (validation/test) metrics differ.
```
def confusion_df(y_actual, y_pred):
res = pd.DataFrame(
confusion_matrix(y_actual, y_pred, labels=label_map.values),
        index=label_map.index.rename('actual'),
        columns=label_map.index.rename('predicted'),
)
return res
confusion_df(y_train, y_train_pred).style.highlight_max()
confusion_df(y_val, y_val_pred).style.highlight_max()
```
Oh boy. That's pretty bad - we didn't predict anything for several columns!
Let's look at the metrics to confirm that it is indeed bad.
```
print("Test accuracy:", accuracy_score(y_train, y_train_pred))
print("Validation accuracy:", accuracy_score(y_val, y_val_pred))
report = classification_report(y_val, y_val_pred, labels=label_map.values, target_names=label_map.index)
print(report)
```
Well, that's pretty bad. We seriously overfit our training set... which is sort-of what I expected. Oh well.
By the way, the warnings at the bottom say that precision and F-score are ill-defined for the classes that received no predictions.
# Predict with the model
Here, we will predict on the test set (predictions to send in), then save the results and the model.
**IMPORTANT NOTE**: In reality, you need to re-train the same model on the entire training set before predicting! However, I'm just using the same model as before, as it will be bad anyway. ;)
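As a rough sketch of what that re-training step could look like (illustrative only; `cls_full` is a hypothetical name, reusing the `X`, `y`, and classifier settings defined above):
```
# Re-fit the same kind of classifier on ALL labeled data before predicting on the test set.
cls_full = DecisionTreeClassifier(random_state=1337)
cls_full.fit(X, y)
```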
```
post_test, thread_test = solution.prepare.load_dfs('test')
thread_test = thread_test.set_index('thread_num')
thread_test.head(2)
```
We need to attach a `thread_label_id` column, as given in the training set:
```
thread.head(2)
```
Use the fitted CountVectorizer and other features to make our X dataframe:
```
word_vectors_raw_test = cv.transform(thread_test['thread_name'])
word_df_test = pd.DataFrame(word_vectors_raw_test.toarray(), columns=cv.get_feature_names(), index=thread_test.index)
word_df_test.head()
X_test = pd.concat([
(thread_test['thread_replies'] + 1).rename('posts'),
np.log(thread_test['thread_replies'] + 1).rename('log_posts'),
word_df_test,
], axis='columns')
X_test.head()
```
Now we predict with our model, then paste it to a copy of `thread_test` as column `thread_label_id`.
```
y_test_pred = pd.Series(
cls.predict(X_test),
index=X_test.index,
)
y_test_pred.head()
result = thread_test.copy()
result['thread_label_id'] = y_test_pred
result.head()
```
We need to reshape to conform to the submission format specified [here](https://www.kaggle.com/c/ni-mafia-gametype#evaluation).
```
result = result.reset_index()[['thread_num', 'thread_label_id']]
result.head()
```
# Export predictions, model
Our model consists of the text vectorizer `cv` and classifier `cls`. We already formatted our results, we just need to make sure not to write an extra index column.
```
# NOTE: Exporting next to the notebooks - the files are small, but usually you don't want to do this.
out_dir = os.path.abspath('1_output')
os.makedirs(out_dir, exist_ok=True)
result.to_csv(
os.path.join(out_dir, 'baseline_predict.csv'),
index=False, header=True, encoding='utf-8',
)
import joblib
joblib.dump(cv, os.path.join(out_dir, 'cv.joblib'))
joblib.dump(cls, os.path.join(out_dir, 'cls.joblib'))
print("Done. :)")
```
# Final Remarks
I'd like to mention that the above notebook is here JUST TO GET YOU STARTED. Feel free to change anything or everything above.
It may be a good idea to keep a piece of paper with you, and draw out your entire pipeline there, to keep organized.
This model is severely overfit because of a huge number of features from the names. Some ways to combat this are PCA and lowering dimensionality, increasing regularization, using a more feature-limited classifier, etc. You can also split this into two sub-problems: a classifier to tell whether it is a game or `"other"`, then classify game type if it's a game.
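For instance, here is a minimal sketch of the dimensionality-reduction idea (compressing the features with a truncated SVD before fitting; the names `svd`, `X_train_reduced`, and `cls_small`, and the choice of `n_components`, are illustrative and not part of the original notebook):
```
from sklearn.decomposition import TruncatedSVD

# Compress the (mostly word-count) feature columns into a few dense components,
# which limits how finely the tree can memorize individual thread names.
svd = TruncatedSVD(n_components=50, random_state=0)
X_train_reduced = svd.fit_transform(X_train)
X_val_reduced = svd.transform(X_val)

cls_small = DecisionTreeClassifier(max_depth=8, random_state=1337)
cls_small.fit(X_train_reduced, y_train)
print("Validation accuracy:", accuracy_score(y_val, cls_small.predict(X_val_reduced)))
```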
```
# Import Dependencies
import os
import csv
# Establish filepath
budget_csv = os.path.join(".", "resources", "budget_data.csv")
output_file = os.path.join(".", "financial_analysis.txt")
# Index Reference for the Profit and Loss List
# Track Financial Parameters
# Open and read csv file
with open(budget_csv, newline='') as csvfile:
csvreader = csv.reader(csvfile, delimiter=',')
# Captures and removes the header row (list) into csvheader
csvheader = next(csvreader)
    # Set up the month counter; had to circle back and adjust this because next(csvreader) is used twice
total_months = 0
total_months = total_months + 1
# Setup for change analysis and calculations
financial_data = [867884]
# Calculating the "Average of Changes" and Tracking the Month
netchange_list = []
month_of_change_list = []
# Greatest Increase / Decrease- use list, save spot for Period and Value
# counter intuitive
greatest_increase = ["", 0]
greatest_decrease = ["", 999999]
# Captures and removes the next row into first_row (Python knows to go to the next line / list down in the csvreader)
first_row = next(csvreader) # first whole row is a list month & value
# Isolate the first value of "Profit/Losses"
# Note: the first_row[0] is Jan-10
prev_net = int(first_row[1])
for row in csvreader:
#print(f"{row[0]} , {row[1]}")
# Loop Thru and count the total number of months included in the dataset
total_months += 1
# The net total amount of “Profit/Losses” over the entire period
financial_data.append(int(row[1]))
# Average of the changes in “Profit/Losses” over the entire period
        # Part one: "Numerator" Net Change
# Track the net change
# This calculates Month to Month (differences) aka changes
net_change = int(row[1]) - int(prev_net) # @ this point prev_net = first value
# This appends those changes to the list
netchange_list.append(net_change) #- JG initial thought
prev_net = int(row[1])
#netchange_list.append(net_change) # solution. test after
# Track month of change as well
#month_of_change_list = month_of_change_list + [row[0]] # concatenate row[0] to the list
month_of_change_list.append(row[0]) # add the month of change to list
# will not need this for calculations
# Greatest increase and decrease in the dataset caculations
if net_change > greatest_increase[1]:
greatest_increase[1] = net_change
greatest_increase[0] = row[0] #capture the month
if net_change < greatest_decrease[1]:
greatest_decrease[1] = net_change
greatest_decrease[0] = row[0]
net = sum(financial_data)
print(f"Financial Analysis")
print("="*60)
print(f"Total Months: {total_months}")
print(f"Total: ${net}")
print(f"Average Change: {sum(netchange_list)/len(netchange_list)}")
print(f"Greatest Increase in Profits: {greatest_increase[0]} '({greatest_increase[1]})'")
print(f"Greatest Decrease in Profits: {greatest_decrease[0]} '({greatest_decrease[1]})'")
print("="*60)
output = (
f"\nFinancial Analysis\n"
f"----------------------------\n"
f"Total Months: {total_months}\n"
f"Total: ${net}\n"
f"Average Change: {sum(netchange_list)/len(netchange_list)}\n"
f"Greatest Increase in Profits: {greatest_increase[0]} '({greatest_increase[1]})'\n"
f"Greatest Decrease in Profits: {greatest_decrease[0]} '({greatest_decrease[1]})'\n"
)
with open ("financial_analysis.txt", 'w') as txt_file:
txt_file.write(output)
# Test Cells
with open(budget_csv, newline='') as csvfile:
csvreader = csv.reader(csvfile, delimiter=',')
csvheader = next(csvreader)
total_months = 0
financial_data = []
rolling_average = []
first_row = next(csvreader)
print(first_row[1])
# test cells below.
```
```
import numpy as np
import pandas as pd
```
# Pandas Methods and Properties
### Important Topics for Data Analysis
#### Missing Values
```
data = {'Istanbul':[30,29,np.nan],'Ankara':[20,np.nan,25],'Izmir':[40,39,38],'Antalya':[40,np.nan,np.nan]}
weather = pd.DataFrame(data,index=['pzt','sali','car'])
weather
```
The **dropna** function is used to drop rows or columns that contain missing values.
```
weather.dropna()
weather.dropna(axis=1)
# drops columns that have fewer than 2 non-NaN values (here: 2 or more NaN)
weather.dropna(axis=1, thresh=2)
```
We use the **fillna** function to fill in missing values.
```
weather.fillna(22)
```
#### Grouping (Group By)
```
data = {'Departman':['Yazılım','Pazarlama','Yazılım','Pazarlama','Hukuk','Hukuk'],
'Calisanlar':['Ahmet','Mehmet','Enes','Burak','Zeynep','Fatma'],
'Maas':[150,100,200,300,400,500]}
workers = pd.DataFrame(data)
workers
groupbyobje = workers.groupby('Departman')
groupbyobje.count()
groupbyobje.mean()
groupbyobje.min()
groupbyobje.max()
groupbyobje.describe()
```
#### Concatenation
```
data1 = {'Isim':['Ahmet','Mehmet','Zeynep','Enes'],
'Spor':['Koşu','Yüzme','Koşu','Basketbol'],
'Kalori':[100,200,300,400]}
data2 = {'Isim':['Osman','Levent','Atlas','Fatma'],
'Spor':['Koşu','Yüzme','Koşu','Basketbol'],
'Kalori':[200,200,30,400]}
data3 = {'Isim':['Ayse','Mahmut','Duygu','Nur'],
'Spor':['Koşu','Yüzme','Badminton','Tenis'],
'Kalori':[150,200,350,400]}
df1 = pd.DataFrame(data1)
df2 = pd.DataFrame(data2)
df3 = pd.DataFrame(data3)
pd.concat([df1,df2,df3], ignore_index=True, axis=0)
```
#### Merging
```
mdata1 = {'Isim':['Ahmet','Mehmet','Zeynep','Enes'],
'Spor':['Koşu','Yüzme','Koşu','Basketbol']}
mdata2 = {'Isim':['Ahmet','Mehmet','Zeynep','Enes'],
'Kalori':[100,200,300,400]}
mdf1 = pd.DataFrame(mdata1)
mdf1
mdf2 = pd.DataFrame(mdata2)
mdf2
pd.merge(mdf1,mdf2,on='Isim')
```
### Important Methods and Properties
```
data = {'Departman' : ['Yazılım','Pazarlama','Yazılım','Pazarlama','Hukuk','Hukuk'],
'Isim' : ['Ahmet','Mehmet','Enes','Burak','Zeynep','Fatma'],
'Maas' : [150,100,200,300,400,500]}
workerdf = pd.DataFrame(data)
workerdf
```
#### Listing Unique Values and Counting Them
```
workerdf['Departman'].unique()
workerdf['Departman'].nunique()
```
#### How Many of Each Value Are There in a Column?
```
workerdf['Departman'].value_counts()
```
#### Applying Functions to Values
```
workerdf['Maas'].apply(lambda maas : maas*0.66)
```
#### Are There Null Values in the DataFrame?
```
workerdf.isnull()
```
#### Pivot Table
```
characters = {'Karakter Sınıfı':['South Park','South Park','Simpson','Simpson','Simpson'],
'Karakter Ismi':['Cartman','Kenny','Homer','Bart','Bart'],
'Puan':[9,10,50,20,10]}
dfcharacters = pd.DataFrame(characters)
dfcharacters
dfcharacters.pivot_table(values='Puan',index=['Karakter Sınıfı','Karakter Ismi'],aggfunc=np.sum)
```
#### Sorting Values by a Specific Column
```
workerdf.sort_values(by='Maas', ascending=False)
```
#### Duplicate Data
```
employees = [('Stuti', 28, 'Varanasi'),
('Saumya', 32, 'Delhi'),
('Aaditya', 25, 'Mumbai'),
('Saumya', 32, 'Delhi'),
('Saumya', 32, 'Delhi'),
('Saumya', 32, 'Mumbai'),
('Aaditya', 40, 'Dehradun'),
('Seema', 32, 'Delhi')]
df = pd.DataFrame(employees, columns = ['Name', 'Age', 'City'])
duplicate = df[df.duplicated()]
print("Duplicate Rows :")
duplicate
duplicate = df[df.duplicated('City')]
print("Duplicate Rows based on City :")
duplicate
df.drop_duplicates()
```
<a href="https://colab.research.google.com/github/keivanipchihagh/Intro_To_MachineLearning/blob/master/Models/Newswires_Classification_with_Reuters.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Newswires Classification with Reuters
##### Imports
```
import numpy as np # Numpy
from matplotlib import pyplot as plt # Matplotlib
import keras # Keras
import pandas as pd # Pandas
from keras.datasets import reuters # Reuters Dataset
from keras.utils.np_utils import to_categorical # Categorical encoding for labels
import random # Random
```
##### Load dataset
```
(train_data, train_labels), (test_data, test_labels) = reuters.load_data(num_words = 10000)
print('Size:', len(train_data))
print('Training Data:', train_data[0])
```
##### Get the feel of data
```
# Keep a copy of the raw integer sequences so decoding still works after one-hot encoding below.
raw_test_data = test_data
def decode(index): # Decode the integer sequence of test sample `index` back into words
    word_index = reuters.get_word_index()
    reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
    decoded_newswire = ' '.join([reverse_word_index.get(i - 3, '?') for i in raw_test_data[index]])
    return decoded_newswire
print("Decoded test data sample [0]: ", decode(0))
```
##### Data Prep (One-Hot Encoding)
```
def vectorize_sequences(sequences, dimension = 10000): # Encoding the integer sequences into a binary matrix
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1.
return results
train_data = vectorize_sequences(train_data)
test_data = vectorize_sequences(test_data)
train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)
```
##### Building the model
```
model = keras.models.Sequential()
model.add(keras.layers.Dense(units = 64, activation = 'relu', input_shape = (10000,)))
model.add(keras.layers.Dense(units = 64, activation = 'relu'))
model.add(keras.layers.Dense(units = 46, activation = 'softmax'))
model.compile( optimizer = 'rmsprop', loss = 'categorical_crossentropy', metrics = ['accuracy'])
model.summary()
```
##### Training the model
```
x_val = train_data[:1000]
train_data = train_data[1000:]
y_val = train_labels[:1000]
train_labels = train_labels[1000:]
history = model.fit(train_data, train_labels, batch_size = 512, epochs = 10, validation_data = (x_val, y_val), verbose = False)
```
##### Evaluating the model
```
result = model.evaluate(test_data, test_labels)
print('Test loss:', result[0])
print('Test accuracy:', result[1] * 100)
```
##### Statistics
```
epochs = range(1, len(history.history['loss']) + 1)
plt.plot(epochs, history.history['loss'], 'b', label = 'Training Loss')
plt.plot(epochs, history.history['val_loss'], 'r', label = 'Validation Loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf()
plt.plot(epochs, history.history['accuracy'], 'b', label = 'Training Accuracy')
plt.plot(epochs, history.history['val_accuracy'], 'r', label = 'Validation Accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
```
##### Making predictions
```
prediction_index = random.randint(0, len(test_data) - 1)
prediction_data = test_data[prediction_index]
decoded_prediction_data = decode(prediction_index)
# Info
print('Random prediction index:', prediction_index)
print('Original prediction Data:', prediction_data)
print('Decoded prediction Data:', decoded_prediction_data)
print('Expected prediction label:', np.argmax(test_labels[prediction_index]))
# Prediction
predictions = model.predict(test_data)
print('Prediction index: ', np.argmax(predictions[prediction_index]))
```
# autotimeseries
> Nixtla SDK. Time Series Forecasting pipeline at scale.
[](https://github.com/Nixtla/nixtla/actions/workflows/python-sdk.yml)
[](https://pypi.org/project/autotimeseries/)
[](https://pypi.org/project/autotimeseries/)
[](https://github.com/Nixtla/nixtla/blob/main/sdk/python-autotimeseries/LICENSE)
**autotimeseries** is a python SDK to consume the APIs developed in https://github.com/Nixtla/nixtla.
## Install
### PyPI
`pip install autotimeseries`
## How to use
Check the following examples for a full pipeline:
- [M5 state-of-the-art reproduction](https://github.com/Nixtla/autotimeseries/tree/main/examples/m5).
- [M5 state-of-the-art reproduction in Colab](https://colab.research.google.com/drive/1pmp4rqiwiPL-ambxTrJGBiNMS-7vm3v6?ts=616700c4)
### Basic usage
```python
import os
from autotimeseries.core import AutoTS
autotimeseries = AutoTS(bucket_name=os.environ['BUCKET_NAME'],
api_id=os.environ['API_ID'],
api_key=os.environ['API_KEY'],
aws_access_key_id=os.environ['AWS_ACCESS_KEY_ID'],
aws_secret_access_key=os.environ['AWS_SECRET_ACCESS_KEY'])
```
#### Upload dataset to S3
```python
train_dir = '../data/m5/parquet/train'
# File with target variables
filename_target = autotimeseries.upload_to_s3(f'{train_dir}/target.parquet')
# File with static variables
filename_static = autotimeseries.upload_to_s3(f'{train_dir}/static.parquet')
# File with temporal variables
filename_temporal = autotimeseries.upload_to_s3(f'{train_dir}/temporal.parquet')
```
Each time series of the uploaded datasets is identified by the column `item_id`. Meanwhile, the time column is defined by `timestamp` and the target column by `demand`. We need to pass these arguments to each call.
```python
columns = dict(unique_id_column='item_id',
ds_column='timestamp',
y_column='demand')
```
#### Send the job to make forecasts
```python
response_forecast = autotimeseries.tsforecast(filename_target=filename_target,
freq='D',
horizon=28,
filename_static=filename_static,
filename_temporal=filename_temporal,
objective='tweedie',
metric='rmse',
n_estimators=170,
**columns)
```
#### Download forecasts
```python
autotimeseries.download_from_s3(filename='forecasts_2021-10-12_19-04-32.csv', filename_output='../data/forecasts.csv')
```
```
#!/usr/bin/env python
# encoding: utf-8
"""
@Author: yangwenhao
@Contact: 874681044@qq.com
@Software: PyCharm
@File: cam_2.py
@Time: 2021/4/12 21:47
@Overview:
Created on 2019/8/4 9:37 AM
@author: mick.yi
"""
import os
import pdb
import numpy as np
import torch
from torch.nn.parallel.distributed import DistributedDataParallel
from Define_Model.ResNet import ThinResNet
os.environ['CUDA_VISIBLE_DEVICES'] = "0,1"
torch.distributed.init_process_group(backend="nccl", init_method='tcp://localhost:12556', rank=0,
world_size=1)
class GradCAM(object):
"""
    1: The network's weights are not updated; the input requires gradient updates.
    2: Backpropagation uses the score of the target class.
"""
def __init__(self, net, layer_name):
self.net = net
self.layer_name = layer_name
self.feature = {}
self.gradient = {}
self.net.eval()
self.handlers = []
self._register_hook()
def _get_features_hook(self, module, input, output):
print(type(module))
if isinstance(self.net, DistributedDataParallel):
self.feature[input[0].device] = output[0]
else:
self.feature = output[0]
# print("Device {}, forward out feature shape:{}".format(input[0].device, output[0].size()))
def _get_grads_hook(self, module, input_grad, output_grad):
"""
:param input_grad: tuple, input_grad[0]: None
input_grad[1]: weight
input_grad[2]: bias
        :param output_grad: tuple of length 1
:return:
"""
if isinstance(self.net, DistributedDataParallel):
if input_grad[0].device not in self.gradient:
self.gradient[input_grad[0].device] = output_grad[0]
else:
self.gradient[input_grad[0].device] += output_grad[0]
        else:
            # Non-DDP case: initialize on the first backward call, then accumulate.
            self.gradient = output_grad[0] if isinstance(self.gradient, dict) else self.gradient + output_grad[0]
# print(output_grad[0])
# print("Device {}, backward out gradient shape:{}".format(input_grad[0].device, output_grad[0].size()))
def _register_hook(self):
if isinstance(self.net, DistributedDataParallel):
modules = self.net.module.named_modules()
else:
modules = self.net.named_modules()
for (name, module) in modules:
if name == self.layer_name:
                self.handlers.append(module.register_forward_hook(self._get_features_hook))
self.handlers.append(module.register_backward_hook(self._get_grads_hook))
def remove_handlers(self):
for handle in self.handlers:
handle.remove()
def __call__(self, inputs, index):
"""
:param inputs: [1,3,H,W]
:param index: class id
:return:
"""
# self.net.zero_grad()
output, _ = self.net(inputs) # [1,num_classes]
pdb.set_trace()
if index is None:
index = torch.argmax(output)
target = output.gather(1, index)# .mean()
# target = output[0][index]
for i in target:
i.backward(retain_graph=True)
if isinstance(self.net, DistributedDataParallel):
feature = []
gradient = []
for d in self.gradient:
feature.append(self.feature[d])
gradient.append(self.gradient[d])
feature = torch.cat(feature, dim=0)
gradient = torch.cat(gradient, dim=0)
else:
feature = self.feature
gradient = self.gradient
return feature, gradient
# gradient = self.gradient[0].cpu().data.numpy() # [C,H,W]
# weight = np.mean(gradient, axis=(1, 2)) # [C]
# feature = self.feature[0].cpu().data.numpy() # [C,H,W]
# cam = feature * weight[:, np.newaxis, np.newaxis] # [C,H,W]
# cam = np.sum(cam, axis=0) # [H,W]
# cam = np.maximum(cam, 0) # ReLU
#
        # # normalize the values
# cam -= np.min(cam)
# cam /= np.max(cam)
# # resize to 224*224
# cam = cv2.resize(cam, (224, 224))
# return cam
# print("gradient shape: ", gradient.shape)
# print("feature shape: ", feature.shape)
class Sum_GradCAM(object):
"""
    1: The network's weights are not updated; the input requires gradient updates.
    2: Backpropagation uses the score of the target class.
"""
def __init__(self, net, layer_name):
self.net = net
self.layer_name = layer_name
self.feature = {}
self.gradient = {}
self.net.eval()
self.handlers = []
self._register_hook()
def _get_features_hook(self, module, input, output):
if isinstance(self.net, DistributedDataParallel):
self.feature[input[0].device] = output[0]
else:
self.feature = output[0]
# print("Device {}, forward out feature shape:{}".format(input[0].device, output[0].size()))
def _get_grads_hook(self, module, input_grad, output_grad):
"""
:param input_grad: tuple, input_grad[0]: None
input_grad[1]: weight
input_grad[2]: bias
        :param output_grad: tuple of length 1
:return:
"""
if isinstance(self.net, DistributedDataParallel):
if input_grad[0].device not in self.gradient:
self.gradient[input_grad[0].device] = output_grad[0]
else:
self.gradient[input_grad[0].device] += output_grad[0]
else:
self.gradient = output_grad[0]
# print(output_grad[0])
# print("Device {}, backward out gradient shape:{}".format(input_grad[0].device, output_grad[0].size()))
def _register_hook(self):
if isinstance(self.net, DistributedDataParallel):
modules = self.net.module.named_modules()
else:
modules = self.net.named_modules()
for (name, module) in modules:
if name == self.layer_name:
                self.handlers.append(module.register_forward_hook(self._get_features_hook))
self.handlers.append(module.register_backward_hook(self._get_grads_hook))
def remove_handlers(self):
for handle in self.handlers:
handle.remove()
def __call__(self, inputs, index):
"""
:param inputs: [1,3,H,W]
:param index: class id
:return:
"""
# self.net.zero_grad()
output, _ = self.net(inputs) # [1,num_classes]
pdb.set_trace()
if index is None:
index = torch.argmax(output)
target = output.gather(1, index).mean()
target.backward(retain_graph=True)
if isinstance(self.net, DistributedDataParallel):
feature = []
gradient = []
for d in self.gradient:
feature.append(self.feature[d])
gradient.append(self.gradient[d])
feature = torch.cat(feature, dim=0)
gradient = torch.cat(gradient, dim=0)
else:
feature = self.feature
gradient = self.gradient
return feature, gradient
# print("gradient shape: ", gradient.shape)
# print("feature shape: ", feature.shape)
model = ThinResNet()
model = model.cuda()
model = DistributedDataParallel(model)
gc = GradCAM(model, 'layer4')
x = torch.randn((20, 1, 224, 224)).cuda() # *1.2 +1.
l = torch.arange(0, 20).long().unsqueeze(1).cuda()  # class indices 0..19 (arange avoids the deprecated torch.range)
y = model(x)
#
cam = gc(x, l)
# print(cam.shape)
```
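As a rough sketch (not part of the original script) of how the returned feature/gradient pair could be turned into a class-activation map, mirroring the commented-out block inside `GradCAM.__call__`; it assumes the returned tensors have shape `[C, H, W]` for a single sample, so the indexing may need adjusting for the batched hooks above:
```
feature_t, gradient_t = cam
f = feature_t.detach().cpu().numpy()
g = gradient_t.detach().cpu().numpy()
w = g.mean(axis=(1, 2))                                      # [C] per-channel weights
heatmap = np.maximum((f * w[:, None, None]).sum(axis=0), 0)  # [H, W] after ReLU
heatmap = (heatmap - heatmap.min()) / (heatmap.max() - heatmap.min() + 1e-8)  # normalize to [0, 1]
print(heatmap.shape)
```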
# The Structure and Geometry of the Human Brain
[Noah C. Benson](https://nben.net/) <[nben@uw.edu](mailto:nben@uw.edu)>
[eScience Institute](https://escience.washington.edu/)
[University of Washington](https://www.washington.edu/)
[Seattle, WA 98195](https://seattle.gov/)
## Introduction
This notebook is designed to accompany the lecture "Introduction to the Structure and Geometry of the Human Brain" as part of the Neurohackademy 2020 curriculum. It can be run either in Neurohackademy's Jupyterhub environment, or using the `docker-compose.yml` file (see the `README.md` file for instructions).
In this notebook we will examine various structural and geometric data used commonly in neuroscience. These demos will primarily use [FreeSurfer](http://surfer.nmr.mgh.harvard.edu/) subjects. In the lecture and the Neurohackademy Jupyterhub environment, we will look primarily at a subject named `nben`; however, you can alternately use the subject `bert`, which is an example subject that comes with FreeSurfer. Optionally, this notebook can be used with a subject from the [Human Connectome Project (HCP)](https://db.humanconnectome.org/)--see the `README.md` file for instructions on getting credentials for use with the HCP.
We will look at these data using both the [`nibabel`](https://nipy.org/nibabel/), which is an excellent core library for importing various kinds of neuroimaging data, as well as [`neuropythy`](https://github.com/noahbenson/neuropythy), which builds on `nibabel` to provide a user-friendly API for interacting with subjects. At its core, `neuropythy` is a library for interacting with neuroscientific data in the context of brain structure.
This notebook itself consists of this introduction as well as four sections that follow the topic areas in the slide-deck from the lecture. These sections are intended to be explored in order.
### Libraries
Before running any of the code in this notebook, we need to start by importing a few libraries and making sure we have configured those that need to be configured (mainly, `matplotlib`).
```
# We will need os for paths:
import os
# Numpy, Scipy, and Matplotlib are effectively standard libraries.
import numpy as np
import scipy as sp
import matplotlib as mpl
import matplotlib.pyplot as plt
# Ipyvolume is a 3D plotting library that is used by neuropythy.
import ipyvolume as ipv
# Nibabel is the library that understands various neuroimaging file
# formats; it is also used by neuropythy.
import nibabel as nib
# Neuropythy is the main library we will be using in this notebook.
import neuropythy as ny
%matplotlib inline
```
## MRI and Volumetric Data
The first section of this notebook will deal with MR images and volumetric data. We will start by loading in an MRImage. We will use the same image that was visualized in the lecture (if you are not using the Jupyterhub, you won't have access to this subject, but you can use the subject `'bert'` instead).
---
### Load a subject.
---
For starters, we will load the subject.
```
subject_id = 'nben'
subject = ny.freesurfer_subject(subject_id)
# If you have configured the HCP credentials and wish to use an HCP
# subject instead of nben:
#
#subject_id = 111312
#subject = ny.hcp_subject(subject_id)
```
The `freesurfer_subject` function returns a `neuropythy` `Subject` object.
```
subject
```
---
### Load an MRImage file.
---
Let's load in an image file. FreeSurfer directories contain a subdirectory `mri/` that contains all of the volumetric/image data for the subject. This includes images that have been preprocessed as well as copies of the original T1-weighted image. We will load an image called `T1.mgz`.
```
# This function will load data from a subject's directory using neuropythy's
# builtin ny.load() function; in most cases, this calls down to nibabel's own
# nib.load() function.
im = subject.load('mri/T1.mgz')
# For an HCP subject, use this file instead:
#im = subject.load("T1w/T1w_acpc_dc.nii.gz")
# The return value should be a nibabel image object.
im
# In fact, we could just as easily have loaded the same object using nibabel:
im_from_nibabel = nib.load(subject.path + '/mri/T1.mgz')
print('From neuropythy: ', im.get_filename())
print('From nibabel: ', im_from_nibabel.get_filename())
# And neuropythy manages this image as part of the subject-data. Neuropythy's
# name for it is 'intensity_normalized', which is due to its position as an
# output in FreeSurfer's processing pipeline.
ny_im = subject.images['intensity_normalized']
(ny_im.dataobj == im.dataobj).all()
```
---
### Visualize some slices of the image.
---
Next, we will make 2D plots of some of the image slices. Feel free to change which slices you visualize; I have just chosen some defaults.
```
# What axis do we want to plot slices along? 0, 1, or 2 (for the first, second,
# or third 3D image axis).
axis = 2
# Which slices along this axis should we plot? These must be at least 0 and at
# most 255 (There are 256 slices in each dimension of these images).
slices = [75, 125, 175]
# Make a figure and axes using matplotlib.pyplot:
(fig, axes) = plt.subplots(1, len(slices), figsize=(5, 5/len(slices)), dpi=144)
# Plot each of the slices:
for (ax, slice_num) in zip(axes, slices):
# Get the slice:
if axis == 0:
imslice = im.dataobj[slice_num,:,:]
elif axis == 1:
imslice = im.dataobj[:,slice_num,:]
else:
imslice = im.dataobj[:,:,slice_num]
ax.imshow(imslice, cmap='gray')
# Turn off labels:
ax.axis('off')
```
---
### Visualize the 3D Image as a whole.
---
Next we will use `ipyvolume` to render a 3D View of the volume. The volume plotting function is part of `ipyvolume` and has a variety of options that are beyond the scope of this demo.
```
# Note that this will generate a warning, which can be safely ignored.
fig = ipv.figure()
ipv.quickvolshow(subject.images['intensity_normalized'].dataobj)
ipv.show()
```
---
### Load and visualize anatomical segments.
---
FreeSurfer creates a segmentation image file called `aseg.mgz`, which we can load and use to identify ROIs. First, we will load this file and plot some slices from it.
```
# First load the file; any of these lines will work:
#aseg = subject.load('mri/aseg.mgz')
#aseg = nib.load(subject.path + '/mri/aseg.mgz')
aseg = subject.images['segmentation']
```
We can plot this as-is, but we don't yet know what the numeric values correspond to. Nonetheless, let's go ahead. This code block is the same as the block we used to plot slices above except that it uses the new image `aseg` we just loaded.
```
# What axis do we want to plot slices along? 0, 1, or 2 (for the first, second,
# or third 3D image axis).
axis = 2
# Which slices along this axis should we plot? These must be at least 0 and at
# most 255 (There are 256 slices in each dimension of these images).
slices = [75, 125, 175]
# Make a figure and axes using matplotlib.pyplot:
(fig, axes) = plt.subplots(1, len(slices), figsize=(5, 5/len(slices)), dpi=144)
# Plot each of the slices:
for (ax, slice_num) in zip(axes, slices):
# Get the slice:
if axis == 0:
imslice = aseg.dataobj[slice_num,:,:]
elif axis == 1:
imslice = aseg.dataobj[:,slice_num,:]
else:
imslice = aseg.dataobj[:,:,slice_num]
ax.imshow(imslice, cmap='gray')
# Turn off labels:
ax.axis('off')
```
Clearly, the values in the plots above are discretized, but it's not clear what they correspond to. The mapping from numbers to structure names and colors can be found in the various FreeSurfer color LUT files. These are all located in the FreeSurfer home directory and end with `LUT.txt`. They are essentially spreadsheets and are loaded by `neuropythy` as `pandas.DataFrame` objects. In `neuropythy`, the LUT objects are associated with the `'freesurfer_home'` configuration variable. This has been set up automatically in the course and the `neuropythy` docker-image.
```
ny.config['freesurfer_home'].luts['aseg']
```
So suppose we want to look at left cerebral cortex. In the table, this has value 3. We can find this value in the images we are plotting and plot only it to see the ROI in each of the slices we plot.
```
# We want to plot left cerebral cortex (label ID = 3, per the LUT)
label = 3
(fig, axes) = plt.subplots(1, len(slices), figsize=(5, 5/len(slices)), dpi=144)
# Plot each of the slices:
for (ax, slice_num) in zip(axes, slices):
# Get the slice:
if axis == 0:
imslice = aseg.dataobj[slice_num,:,:]
elif axis == 1:
imslice = aseg.dataobj[:,slice_num,:]
else:
imslice = aseg.dataobj[:,:,slice_num]
# Plot only the values that are equal to the label ID.
imslice = (imslice == label)
ax.imshow(imslice, cmap='gray')
# Turn off labels:
ax.axis('off')
```
By plotting the LH cortex specifically, we can see that LEFT is in the direction of increasing rows (down the image slices, if you used `axis = 2`), thus RIGHT must be in the direction of decreasing rows in the image.
Let's also make some images from these slices in which we replace each of the pixels in each slice with the color recommended by the color LUT.
```
# We are using this color LUT:
lut = ny.config['freesurfer_home'].luts['aseg']
# The axis:
axis = 2
(fig, axes) = plt.subplots(1, len(slices), figsize=(5, 5/len(slices)), dpi=144)
# Plot each of the slices:
for (ax, slice_num) in zip(axes, slices):
# Get the slice:
if axis == 0:
imslice = aseg.dataobj[slice_num,:,:]
elif axis == 1:
imslice = aseg.dataobj[:,slice_num,:]
else:
imslice = aseg.dataobj[:,:,slice_num]
# Convert the slice into an RGBA image using the color LUT:
rgba_im = np.zeros(imslice.shape + (4,))
for (label_id, row) in lut.iterrows():
rgba_im[imslice == label_id,:] = row['color']
ax.imshow(rgba_im)
# Turn off labels:
ax.axis('off')
```
## Cortical Surface Data
Cortical surface data is handled and represented much differently than volumetric data. This section demonstrates how to interact with cortical surface data in a Jupyter notebook, primarily using `neuropythy`.
To start off, however, we will just load a surface file using `nibabel` to see what one contains.
---
### Load a Surface-Geometry File Using `nibabel`
---
```
# Each subject has a number of surface files; we will look at the
# left hemisphere, white surface.
hemi = 'lh'
surf = 'white'
# Feel free to change hemi to 'rh' for the RH and surf to 'pial'
# or 'inflated'.
# We load the surface from the subject's 'surf' directory in FreeSurfer.
# Nibabel refers to these files as "geometry" files.
filename = subject.path + f'/surf/{hemi}.{surf}'
# If you are using an HCP subject, you should instead load from this path:
#relpath = f'T1w/{subject.name}/surf/{hemi}.{surf}'
#filename = subject.pseudo_path.local_path(relpath)
# Read the file, using nibabel.
surface_data = nib.freesurfer.read_geometry(filename)
# What does this return?
surface_data
```
So when `nibabel` reads in one of these surface files, what we get back is an `n x 3` matrix of real numbers (coordinates) and an `m x 3` matrix of integers (triangle indices).
The `ipyvolume` module has support for plotting triangle meshes--let's see how it works.
```
# Extract the coordinates and triangle-faces.
(coords, faces) = surface_data
# And get the (x,y,z) from coordinates.
(x, y, z) = coords.T
# Now, plot the triangle mesh.
fig = ipv.figure()
ipv.plot_trisurf(x, y, z, triangles=faces)
# Adjust the plot limits (making them equal makes the plot look good).
ipv.pylab.xlim(-100,100)
ipv.pylab.ylim(-100,100)
ipv.pylab.zlim(-100,100)
# Generally, one must call show() with ipyvolume.
ipv.show()
```
---
### Hemisphere (`neuropythy.mri.Cortex`) objects
---
Although one can load and plot cortical surfaces with `nibabel`, `neuropythy` builds on `nibabel` by providing a framework around which the cortical surface can be represented. It includes a number of utilities related specifically to cortical surface analysis, and allows much of the power of FreeSurfer to be leveraged through simple Python data structures.
To start with, we will look at our subject's hemispheres (`neuropythy.mri.Cortex` objects) and how they represent surfaces.
```
# Grab the hemisphere for our subject.
cortex = subject.hemis[hemi]
# Note that `cortex = subject.lh` and `cortex = subject.rh` are equivalent
# to `cortex = subject.hemis['lh']` and `cortex = subject.hemis['rh']`.
# What is cortex?
cortex
```
From this we can see which hemisphere we have selected, the number of triangle faces that it has, and the number of vertices that it has. Let's look at a few of its properties.
#### Surfaces
Each hemisphere has a number of surfaces; we can view them through the `cortex.surfaces` dictionary.
```
cortex.surfaces.keys()
cortex.surfaces['white_smooth']
```
The `'white_smooth'` mesh is a version of the white surface that has been heavily smoothed. You might notice that there is a `'midgray'` surface, even though FreeSurfer does not include a mid-gray mesh file. The `'midgray'` mesh, however, can be made by averaging the white and pial mesh vertices.
Recall that all surfaces of a hemisphere have equivalent vertices and identical triangles. We can test that here.
```
np.array_equal(cortex.surfaces['white'].tess.faces,
cortex.surfaces['pial'].tess.faces)
```
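As a quick sanity check of the mid-gray claim above (hedged: this assumes the surface meshes expose their vertex coordinates via a `coordinates` attribute, as neuropythy meshes do):
```
np.allclose(cortex.surfaces['midgray'].coordinates,
            (cortex.surfaces['white'].coordinates +
             cortex.surfaces['pial'].coordinates) / 2)
```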
Surfaces track a large amount of data about their meshes and vertices and inherit most of the properties of hemispheres that are discussed below. In addition, surfaces uniquely carry data about cortical distances and surface areas. For example:
```
# The area of each of the triangle-faces in the white surface mesh, in mm^2.
cortex.surfaces['white'].face_areas
# The length of each edge in the white surface mesh, in mm.
cortex.surfaces['white'].edge_lengths
# And the edges themselves, as indices like the faces.
cortex.surfaces['white'].tess.edges
```
#### Vertex Properties
Properties are values assigned to each surface vertex. They can include anatomical or geometric properties, such as ROI labels (i.e., a vector of values for each vertex: `True` if the vertex is in the ROI and `False` if not), cortical thickness (in mm), the vertex surface-area (in square mm), the curvature, or data from other functional measurements, such as BOLD time-series data or source-localized MEG data.
The properties of a hemisphere are stored in the `properties` value. `Cortex.properties` is a kind of dictionary object and can generally be treated as a dictionary. One can also access property vectors via `cortex.prop(property_name)` rather than `cortex.properties[property_name]`; the former is largely short-hand for the latter.
```
sorted(cortex.properties.keys())
```
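As a small sanity check of the short-hand mentioned above (using the `thickness` property, which this subject has):
```
# Both access patterns should return the same per-vertex vector.
np.array_equal(cortex.prop('thickness'), cortex.properties['thickness'])
```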
A few things worth noting: First, not all FreeSurfer subjects will have all of the properties listed. This is because different versions of FreeSurfer include different files, and sometimes subjects are distributed without their full set of files (e.g., to save storage space). However, rather than go and try to load all of these files right away, `neuropythy` makes place-holders for them and loads them only when first requested (this saves on loading time drastically). Accordingly, if you try to use a property whose file doesn't exist, an exception will be raised.
Additionally, notice that the first several properties are for Brodmann Area labels. The ones ending in `_label` are `True` / `False` boolean labels indicating whether the vertex is in the given ROI (according to an estimation based on anatomy). The subject we are using in the Jupyterhub environment does not actually have these files included, but they do have, for example `BA1_weight` files. The weights represent the probability that a vertex is in the associated ROI, so we can make a label from this.
```
ba1_label = cortex.prop('BA1_weight') >= 0.5
```
We can now plot this property using `neuropythy`'s `cortex_plot()` function.
```
ny.cortex_plot(cortex.surfaces['white'], color=ba1_label)
```
**Improving this plot.** While this plot shows us where the ROI is, it's rather hard to interpret. Rather, we would prefer to plot the ROI in red and the rest of the brain using a binarized curvature map. `neuropythy` supports this kind of binarized curvature map as a default underlay, so, in fact, the easiest way to accomplish this is to tell `cortex_plot` to color the surface red, but to add a vertex mask that instructs the function to *only* color the ROI vertices.
Additionally, it is easier to see the inflated surface, so we will switch to that.
```
ny.cortex_plot(cortex.surfaces['inflated'], color='r', mask=ba1_label)
```
We can optionally make this red ROI plot a little bit transparent as well.
```
ny.cortex_plot(cortex.surfaces['inflated'], color='r', mask=ba1_label, alpha=0.4)
```
**Plotting the weight instead of the label.** Alternately, we might have wanted to plot the weight / probability of the ROI. Continuous properties like probability can be plotted using color-maps, similar to how they are plotted in `matplotlib`.
```
ny.cortex_plot(cortex.surfaces['inflated'], color='BA1_weight',
cmap='hot', vmin=0, vmax=1, alpha=0.6)
```
**Another property.** Other properties can be very informative. For example, the cortical thickness property, which is stored in mm, can show us which parts of the cortex are relatively thick or thin.
```
ny.cortex_plot(cortex.surfaces['inflated'], color='thickness',
cmap='hot', vmin=1, vmax=6)
```
---
### Interpolation (Surface to Image and Image to Surface)
---
Hemisphere/Cortex objects also manage interpolation, both to/from image volumes as well as to/from the cortical surfaces of other subjects (we will demo interpolation between subjects in the last section). Here we will focus on the former: interpolation to and from images.
**Cortex to Image Interpolation.**
Because our subjects only have structural data and do not have functional data, we do not have anything handy to interpolate out of a volume onto a surface. So instead, we will start by interpolating from the cortex into the volume. A good property for this is the subject's cortical thickness. Thickness is difficult to calculate in the volume, so if one wants thickness data in a volume, it would typically be calculated using surface meshes then projected back into the volume. We will do that now.
Note that in order to create a new image, we have to provide the interpolation method with some information about how the image is oriented and shaped. This includes two critical pieces of information: the `'image_shape'` (i.e., the `numpy.shape` of the image's array) and the `'affine'`, which is simply the affine-transformation that aligns the image with the subject. Usually, it is easiest to provide this information in the form of a template image. For all kinds of subjects (HCP and FreeSurfer), an image is correctly aligned with a subject and thus the subject's cortical surfaces if its affine transformation correctly aligns it with `subject.images['brain']`.
```
# We need a template image; the new image will have the same shape,
# affine, image type, and header as the template image.
template_im = subject.images['brain']
# We can use just the template's header for this.
template = template_im.header
# We can alternately just provide information about the image geometry:
#template = {'image_shape': (256,256,256), 'affine': template_im.affine}
# Alternately, we can provide an actual image into which the data will
# be inserted. In this case, we would want to make a cleared-duplicate
# of the brain image (i.e. all voxels set to 0)
#template = ny.image_clear(template_im)
# All of the above templates should provide the same result.
# We are going to save the property from both hemispheres into an image.
lh_prop = subject.lh.prop('thickness')
rh_prop = subject.rh.prop('thickness')
# This may be either 'linear' or 'nearest'; for thickness 'linear'
# is probably best, but the difference will be small.
method = 'linear'
# Do the interpolation. This may take a few minutes the first time it is run.
new_im = subject.cortex_to_image((lh_prop, rh_prop), template, method=method,
# The template is integer, so we override it.
dtype='float')
```
Now that we have made this new image, let's take a look at it by plotting some slices from it, once again.
```
# What axis do we want to plot slices along? 0, 1, or 2 (for the first, second,
# or third 3D image axis).
axis = 2
# Which slices along this axis should we plot? These must be at least 0 and at
# most 255 (There are 256 slices in each dimension of these images).
slices = [75, 125, 175]
# Make a figure and axes using matplotlib.pyplot:
(fig, axes) = plt.subplots(1, len(slices), figsize=(5, 5/len(slices)), dpi=144)
# Plot each of the slices:
for (ax, slice_num) in zip(axes, slices):
# Get the slice:
if axis == 0:
imslice = new_im.dataobj[slice_num,:,:]
elif axis == 1:
imslice = new_im.dataobj[:,slice_num,:]
else:
imslice = new_im.dataobj[:,:,slice_num]
ax.imshow(imslice, cmap='hot', vmin=0, vmax=6)
# Turn off labels:
ax.axis('off')
```
**Image to Cortex Interpolation.** A good test of our interpolation methods is now to ensure that, when we interpolate data from the image we just created back to the cortex, we get approximately the same values. The values we interpolate back out of the volume will not be identical to the values we started with because the resolution of the image is finite, but they should be close.
The `image_to_cortex()` method of the `Subject` class is capable of interpolating from an image to the cortical surface(s), based on the alignment of the image with the cortex.
```
(lh_prop_interp, rh_prop_interp) = subject.image_to_cortex(new_im, method=method)
```
We can plot the hemispheres together to visualize the difference between the original thickness and the thickness that was interpolated into an image then back onto the cortex.
```
fig = ny.cortex_plot(subject.lh, surface='midgray',
color=(lh_prop_interp - lh_prop)**2,
cmap='hot', vmin=0, vmax=2)
fig = ny.cortex_plot(subject.rh, surface='midgray',
color=(rh_prop_interp - rh_prop)**2,
cmap='hot', vmin=0, vmax=2,
figure=fig)
ipv.show()
```
## Intersubject Surface Alignment
Comparison between multiple subjects is usually accomplished by first aligning each subject's cortical surface with that of a template surface (*fsaverage* in FreeSurfer, *fs_LR* in the HCP), then interpolating between vertices in the aligned arrangements. The alignments to the template are calculated and saved by FreeSurfer, the HCPpipelines, and various other utilities, but as of when this tutorial was written, `neuropythy` only supports the first two formats. Alignments are calculated by warping the vertices of the subject's spherical (fully inflated) hemisphere in a diffeomorphic fashion with the goal of minimizing the difference between the sulcal topology (curvature and depth) of the subject's vertices and that of the nearby *fsaverage* vertices. The process involves a number of steps, and anyone who is interested should follow up with the various documentation and papers published by the [FreeSurfer group](https://surfer.nmr.mgh.harvard.edu/).
For practical purposes, it is not necessary to understand the details of this algorithm--FreeSurfer is a large, complex collection of software that has been under development for decades. However, to better understand what is produced by FreeSurfer's alignment procedure, let us start by looking at its outputs.
---
### Compare Subject Registrations
---
To better understand the various spherical surfaces produced by FreeSurfer, let's start by plotting three spherical surfaces in 3D. The first will be the subject's "native" inflated spherical surface. The next will be the subject's *fsaverage*-aligned sphere. The last will be the *fsaverage* subject's native sphere.
These spheres are accessed not through the `subject.surfaces` dictionary but through the `subject.registrations` dictionary. This is simply a design decision--registrations and surfaces are not fundamentally different except that registrations can be used for interpolation between subjects (more below).
Note that you may need to zoom out once the plot has been made.
```
# Get the fsaverage subject.
fsaverage = ny.freesurfer_subject('fsaverage')
# Get the hemispheres we will be examining.
fsa_hemi = fsaverage.hemis[hemi]
sub_hemi = subject.hemis[hemi]
# Next, get the three registrations we want to plot.
sub_native_reg = sub_hemi.registrations['native']
sub_fsaverage_reg = sub_hemi.registrations['fsaverage']
fsa_native_reg = fsa_hemi.registrations['native']
# We want to plot them all three together in one scene, so to do this
# we need to translate two of them a bit along the x-axis.
sub_native_reg = sub_native_reg.translate([-225,0,0])
fsa_native_reg = fsa_native_reg.translate([ 225,0,0])
# Now plot them all.
fig = ipv.figure(width=900, height=300)
ny.cortex_plot(sub_native_reg, figure=fig)
ny.cortex_plot(fsa_native_reg, figure=fig)
ny.cortex_plot(sub_fsaverage_reg, figure=fig)
ipv.show()
```
---
### Interpolate Between Subjects
---
Interpolation between subjects requires a shared registration through which to interpolate. For a subject and the *fsaverage*, these are the subject's *fsaverage*-aligned registration and *fsaverage*'s native registration. For two ordinary (non-template) subjects, the *fsaverage*-aligned registrations of both subjects are used.
We will first show how to interpolate from a subject over to the *fsaverage*. This is a very valuable operation to be able to do, as it allows you to compute statistics across subjects of cortical surface data (such as BOLD activation data or source-localized MEG data).
```
# The property we're going to interpolate over to fsaverage:
sub_prop = sub_hemi.prop('thickness')
# The method we use ('nearest' or 'linear'):
method = 'linear'
# Interpolate the subject's thickness onto the fsaverage surface.
fsa_prop = sub_hemi.interpolate(fsa_hemi, sub_prop, method=method)
# Let's make a plot of this:
ny.cortex_plot(fsa_hemi, surface='inflated',
color=fsa_prop, cmap='hot', vmin=0, vmax=6)
```
Okay, for our last exercise, let's interpolate back from the *fsaverage* subject to our subject. It is occasionally nice to be able to plot the *fsaverage*'s average curvature map as an underlay, so let's do that.
```
# This time we are going to interpolate curvature from the fsaverage
# back to the subject. When the property we are interpolating is a
# named property of the hemisphere, we can actually just specify it
# by name in the interpolation call.
fsa_curv_on_sub = fsa_hemi.interpolate(sub_hemi, 'curvature')
# We can make a duplicate subject hemisphere with this new property
# so that it's easy to plot this curvature map.
sub_hemi_fsacurv = sub_hemi.with_prop(curvature=fsa_curv_on_sub)
# Great, let's see what this looks like:
ny.cortex_plot(sub_hemi_fsacurv, surface='inflated')
```
# Bayes' Theorem
### Introduction
Before starting with *Bayes' Theorem*, let's have a look at some definitions.
**Conditional Probability :**
Conditional probability is the probability of one event occurring given its relationship to one or more other events.
Let A and B be two interdependent events, where A has already occurred; then the probability of B is
$$ P(B|A) = P(A \cap B)/P(A) $$
**Joint Probability :**
Joint probability is a statistical measure of the likelihood of two events occurring together at the same point in time.
$$ P(A \cap B) = P(A|B) * P(B) $$
### Bayes Theorem
Bayes' Theorem is named after **Thomas Bayes**, who worked in the field of decision theory; his result was published posthumously in **1763**.
Bayes' Theorem is a mathematical formula used to determine the **conditional probability** of an event from the reverse conditional probability and the prior probabilities, without needing to compute the **joint probability** directly.
**Statement**
If B$_{1}$, B$_{2}$, B$_{3}$, ..., B$_{n}$ are mutually exclusive and exhaustive events of a random experiment with P(B$_{i}$) $\not=$ 0 (i = 1, 2, 3, ..., n), then for any arbitrary event A of the sample space of that experiment with P(A) > 0, we have
$$ P(B_{i}|A) = P(B_{i})P(A|B_{i})/ \sum\limits_{i=1}^{n} P(B_{i})P(A|B_{i}) $$
**Proof**
Let S be the sample space of the random experiment. The events B$_{1}$, B$_{2}$, ..., B$_{n}$ being exhaustive,
$$ S = B_{1} \cup B_{2} \cup \cdots \cup B_{n} $$
$$ A = A \cap S = A \cap ( B_{1} \cup B_{2} \cup \cdots \cup B_{n}) \hspace{1cm} [\because A \subset S] $$
$$ = (A \cap B_{1}) \cup (A \cap B_{2}) \cup ... \cup (A \cap B_{n}) $$
$$ P(A) = P(A \cap B_{1}) + P(A \cap B_{2}) + \cdots + P(A \cap B_{n}) \hspace{1cm} [\because B_{1}, B_{2}, \ldots, B_{n} \text{ are mutually exclusive}] $$
$$ = P(B_{1})P(A|B_{1}) + P(B_{2})P(A|B_{2}) + \cdots + P(B_{n})P(A|B_{n}) $$
$$ = \sum\limits_{i=1}^{n} P(B_{i})P(A|B_{i}) $$
Now,
$$ P(A \cap B_{i}) = P(A)P(B_{i}|A) $$
$$ P(B_{i}|A) = P(A \cap B_{i})/P(A) = P(B_{i})P(A|B_{i})/\sum\limits_{i=1}^{n} P(B_{i})P(A|B_{i}) $$
**P(B)** is the probability of the occurrence of **B** before we know anything else. Once we know that the event **A** has already occurred, **P(B)** is updated to **P(B|A)**. With the help of **Bayes' Theorem we can calculate P(B|A)**.
**Naming Conventions:**
<br>
P(A|B) : Posterior Probability
<br>
P(A) : Prior Probability
<br>
P(B|A) : Likelihood
<br>
P(B) : Evidence
<br>
So, Bayes Theorem can be Restated as :
$$ Posterior = Likelihood * Prior / Evidence $$
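As a quick sketch (with purely illustrative numbers, separate from the worked examples below), the restated form maps directly to a small Python function:
```
# posterior = likelihood * prior / evidence
def posterior(likelihood, prior, evidence):
    return likelihood * prior / evidence

# illustrative values only
print(posterior(0.9, 0.001, 0.01089))  # about 0.0826
```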
Now let's look at some example problems on Bayes' Theorem.
**Example 1:** Suppose that the reliability of a Covid-19 test is specified as follows:
<br>
Of the population having Covid-19, the test detects the disease in 90% of cases, but 10% go undetected. Of the population free of Covid-19, 99% are judged Covid-19 negative, but 1% are diagnosed as Covid-19 positive. From a large population of which only 0.1% have Covid-19, one person is selected at random, given the Covid-19 test, and the pathologist reports him/her as Covid-19 positive. What is the probability that the person actually has Covid-19?
**Solution**<br>
Let,<br>
B$_{1}$ = The person selected actually has Covid-19.<br>
B$_{2}$ = The person selected does not have Covid-19.<br>
A = The person's Covid-19 test is diagnosed as positive.<br>
P(B$_{1}$) = 0.1% = 0.1/100 = 0.001<br>
P(B$_{2}$) = 1-P(B$_{1}$) = 1-0.001 = 0.999<br>
P(A|B$_{1}$) = Probability that the person tests Covid-19 positive given that he/she actually has Covid-19 = 90/100 = 0.9 <br>
P(A|B$_{2}$) = Probability that the person tests Covid-19 positive given that he/she does not have Covid-19 = 1/100 = 0.01 <br>
Required probability = P(B$_{1}$|A) = P(B$_{1}$) * P(A|B$_{1}$) / ((P(B$_{1}$) * P(A|B$_{1}$)) + (P(B$_{2}$) * P(A|B$_{2}$)))<br>
= (0.001 * 0.9) / (0.001 * 0.9 + 0.999 * 0.01) = 90/1089 = 0.08264
We will now use Python to calculate the same.
```
#calculate P(B1|A) given P(B1),P(A|B1),P(A|B2),P(B2)
def bayes_theorem(p_b1,p_a_given_b1,p_a_given_b2,p_b2):
p_b1_given_a=(p_b1*p_a_given_b1)/((p_b1*p_a_given_b1)+(p_b2*p_a_given_b2))
return p_b1_given_a
#P(B1)
p_b1=0.001
#P(B2)
p_b2=0.999
#P(A|B1)
p_a_given_b1=0.9
#P(A|B2)
p_a_given_b2=0.01
result=bayes_theorem(p_b1,p_a_given_b1,p_a_given_b2,p_b2)
print('P(B1|A)=% .3f %%'%(result*100))
```
**Example 2:** In a quiz, a contestant either guesses, cheats, or knows the answer to a multiple-choice question with four choices. The probability that he/she makes a guess is 1/3 and the probability that he/she cheats is 1/6. The probability that his/her answer is correct, given that he/she cheated, is 1/8. Find the probability that he/she knows the answer to the question, given that he/she answered it correctly.
**Solution**<br>
Let, <br>
B$_{1}$ = Contestant guesses the answer.<br>
B$_{2}$ = Contestant cheated the answer.<br>
B$_{3}$ = Contestant knows the answer.<br>
A = Contestant answer correctly.<br>
clearly,<br>
P(B$_{1}$) = 1/3 , P(B$_{2}$) =1/6<br>
Since B$_{1}$, B$_{2}$, B$_{3}$ are mutually exclusive and exhaustive events,
P(B$_{1}$) + P(B$_{2}$) + P(B$_{3}$) = 1 => P(B$_{3}$) = 1 - (P(B$_{1}$) + P(B$_{2}$))
= 1 - 1/3 - 1/6 = 1/2
If B$_{1}$ has already occurred, i.e., the contestant guessed, then there are four choices out of which only one is correct.<br>
$\therefore$ the probability that he/she answers correctly given that he/she has made a guess is 1/4, i.e. **P(A|B$_{1}$) = 1/4**<br>
The probability that he/she answers correctly given that he/she knew the answer is 1, i.e. **P(A|B$_{3}$) = 1**<br>
By Bayes Theorem,<br>
Required Probability = P(B$_{3}$|A)<br>
= P(B$_{3}$)P(A|B$_{3}$)/(P(B$_{1}$)P(A|B$_{1}$)+P(B$_{2}$)P(A|B$_{2}$)+P(B$_{3}$)P(A|B$_{3}$))
= (1/2 * 1) / ((1/3 * 1/4) + (1/6 * 1/8) + (1/2 * 1))=24/29
```
#calculate P(B1|A) given P(B1),P(A|B1),P(A|B2),P(B2),P(B3),P(A|B3)
def bayes_theorem(p_b1,p_a_given_b1,p_a_given_b2,p_b2,p_b3,p_a_given_b3):
p_b3_given_a=(p_b3*p_a_given_b3)/((p_b1*p_a_given_b1)+(p_b2*p_a_given_b2)+(p_b3*p_a_given_b3))
return p_b3_given_a
#P(B1)
p_b1=1/3
#P(B2)
p_b2=1/6
#P(B3)
p_b3=1/2
#P(A|B1)
p_a_given_b1=1/4
#P(A|B2)
p_a_given_b2=1/8
#P(A|B3)
p_a_given_b3=1
result=bayes_theorem(p_b1,p_a_given_b1,p_a_given_b2,p_b2,p_b3,p_a_given_b3)
print('P(B3|A)=% .3f %%'%(result*100))
```
[learning-python3.ipynb]: https://gist.githubusercontent.com/kenjyco/69eeb503125035f21a9d/raw/learning-python3.ipynb
Right-click -> "save link as" [https://gist.githubusercontent.com/kenjyco/69eeb503125035f21a9d/raw/learning-python3.ipynb][learning-python3.ipynb] to get the most up-to-date version of this notebook file.
## Quick note about Jupyter cells
When you are editing a cell in Jupyter notebook, you need to re-run the cell by pressing **`<Shift> + <Enter>`**. This will allow changes you made to be available to other cells.
Use **`<Enter>`** to make new lines inside a cell you are editing.
#### Code cells
Re-running will execute any statements you have written. To edit an existing code cell, click on it.
#### Markdown cells
Re-running will render the markdown text. To edit an existing markdown cell, double-click on it.
<hr>
## Common Jupyter operations
Near the top of the https://try.jupyter.org page, Jupyter provides a row of menu options (`File`, `Edit`, `View`, `Insert`, ...) and a row of tool bar icons (disk, plus sign, scissors, 2 files, clipboard and file, up arrow, ...).
#### Inserting and removing cells
- Use the "plus sign" icon to insert a cell below the currently selected cell
- Use "Insert" -> "Insert Cell Above" from the menu to insert above
#### Clear the output of all cells
- Use "Kernel" -> "Restart" from the menu to restart the kernel
- click on "clear all outputs & restart" to have all the output cleared
#### Save your notebook file locally
- Clear the output of all cells
- Use "File" -> "Download as" -> "IPython Notebook (.ipynb)" to download a notebook file representing your https://try.jupyter.org session
#### Load your notebook file in try.jupyter.org
1. Visit https://try.jupyter.org
2. Click the "Upload" button near the upper right corner
3. Navigate your filesystem to find your `*.ipynb` file and click "open"
4. Click the new "upload" button that appears next to your file name
5. Click on your uploaded notebook file
<hr>
## References
- https://try.jupyter.org
- https://docs.python.org/3/tutorial/index.html
- https://docs.python.org/3/tutorial/introduction.html
- https://daringfireball.net/projects/markdown/syntax
<hr>
## Python objects, basic types, and variables
Everything in Python is an **object** and every object in Python has a **type**. Some of the basic types include:
- **`int`** (integer; a whole number with no decimal place)
- `10`
- `-3`
- **`float`** (float; a number that has a decimal place)
- `7.41`
- `-0.006`
- **`str`** (string; a sequence of characters enclosed in single quotes, double quotes, or triple quotes)
- `'this is a string using single quotes'`
- `"this is a string using double quotes"`
- `'''this is a triple quoted string using single quotes'''`
- `"""this is a triple quoted string using double quotes"""`
- **`bool`** (boolean; a binary value that is either true or false)
- `True`
- `False`
- **`NoneType`** (a special type representing the absence of a value)
- `None`
In Python, a **variable** is a name you specify in your code that maps to a particular **object**, object **instance**, or value.
By defining variables, we can refer to things by names that make sense to us. Names for variables can only contain letters, underscores (`_`), or numbers (no spaces, dashes, or other characters). Variable names must start with a letter or underscore.
<hr>
## Basic operators
In Python, there are different types of **operators** (special symbols) that operate on different values. Some of the basic operators include:
- arithmetic operators
- **`+`** (addition)
- **`-`** (subtraction)
- **`*`** (multiplication)
- **`/`** (division)
- __`**`__ (exponent)
- assignment operators
- **`=`** (assign a value)
- **`+=`** (add and re-assign; increment)
- **`-=`** (subtract and re-assign; decrement)
- **`*=`** (multiply and re-assign)
- comparison operators (return either `True` or `False`)
- **`==`** (equal to)
- **`!=`** (not equal to)
- **`<`** (less than)
- **`<=`** (less than or equal to)
- **`>`** (greater than)
- **`>=`** (greater than or equal to)
When multiple operators are used in a single expression, **operator precedence** determines which parts of the expression are evaluated in which order. Operators with higher precedence are evaluated first (like PEMDAS in math). Operators with the same precedence are evaluated from left to right.
- `()` parentheses, for grouping
- `**` exponent
- `*`, `/` multiplication and division
- `+`, `-` addition and subtraction
- `==`, `!=`, `<`, `<=`, `>`, `>=` comparisons
> See https://docs.python.org/3/reference/expressions.html#operator-precedence
```
# Assigning some numbers to different variables
num1 = 10
num2 = -3
num3 = 7.41
num4 = -.6
num5 = 7
num6 = 3
num7 = 11.11
# Addition
num1 + num2
# Subtraction
num2 - num3
# Multiplication
num3 * num4
# Division
num4 / num5
# Exponent
num5 ** num6
# Increment existing variable
num7 += 4
num7
# Decrement existing variable
num6 -= 2
num6
# Multiply & re-assign
num3 *= 5
num3
# Assign the value of an expression to a variable
num8 = num1 + num2 * num3
num8
# Are these two expressions equal to each other?
num1 + num2 == num5
# Are these two expressions not equal to each other?
num3 != num4
# Is the first expression less than the second expression?
num5 < num6
# Is this expression True?
5 > 3 > 1
# Is this expression True?
5 > 3 < 4 == 3 + 1
# Assign some strings to different variables
simple_string1 = 'an example'
simple_string2 = "oranges "
# Addition
simple_string1 + ' of using the + operator'
# Notice that the string was not modified
simple_string1
# Multiplication
simple_string2 * 4
# This string wasn't modified either
simple_string2
# Are these two expressions equal to each other?
simple_string1 == simple_string2
# Are these two expressions equal to each other?
simple_string1 == 'an example'
# Add and re-assign
simple_string1 += ' that re-assigned the original string'
simple_string1
# Multiply and re-assign
simple_string2 *= 3
simple_string2
# Note: Subtraction, division, and decrement operators do not apply to strings.
```
## Basic containers
> Note: **mutable** objects can be modified after creation and **immutable** objects cannot.
Containers are objects that can be used to group other objects together. The basic container types include:
- **`str`** (string: immutable; indexed by integers; items are stored in the order they were added)
- **`list`** (list: mutable; indexed by integers; items are stored in the order they were added)
- `[3, 5, 6, 3, 'dog', 'cat', False]`
- **`tuple`** (tuple: immutable; indexed by integers; items are stored in the order they were added)
- `(3, 5, 6, 3, 'dog', 'cat', False)`
- **`set`** (set: mutable; not indexed at all; items are NOT stored in the order they were added; can only contain immutable objects; does NOT contain duplicate objects)
- `{3, 5, 6, 3, 'dog', 'cat', False}`
- **`dict`** (dictionary: mutable; key-value pairs are indexed by immutable keys; as of Python 3.7, items preserve the order in which they were added)
- `{'name': 'Jane', 'age': 23, 'fav_foods': ['pizza', 'fruit', 'fish']}`
When defining lists, tuples, or sets, use commas (,) to separate the individual items. When defining dicts, use a colon (:) to separate keys from values and commas (,) to separate the key-value pairs.
Strings, lists, and tuples are all **sequence types** that can use the `+`, `*`, `+=`, and `*=` operators.
```
# Assign some containers to different variables
list1 = [3, 5, 6, 3, 'dog', 'cat', False]
tuple1 = (3, 5, 6, 3, 'dog', 'cat', False)
set1 = {3, 5, 6, 3, 'dog', 'cat', False}
dict1 = {'name': 'Jane', 'age': 23, 'fav_foods': ['pizza', 'fruit', 'fish']}
# Items in the list object are stored in the order they were added
list1
# Items in the tuple object are stored in the order they were added
tuple1
# Items in the set object are not stored in the order they were added
# Also, notice that the value 3 only appears once in this set object
set1
# In Python 3.7+, items in the dict object preserve the order they were added
dict1
# Add and re-assign
list1 += [5, 'grapes']
list1
# Add and re-assign
tuple1 += (5, 'grapes')
tuple1
# Multiply
[1, 2, 3, 4] * 2
# Multiply
(1, 2, 3, 4) * 3
```
## Accessing data in containers
For strings, lists, tuples, and dicts, we can use **subscript notation** (square brackets) to access data at an index.
- strings, lists, and tuples are indexed by integers, **starting at 0** for first item
- these sequence types also support accessing a range of items, known as **slicing**
- use **negative indexing** to start at the back of the sequence
- dicts are indexed by their keys
> Note: sets are not indexed, so we cannot use subscript notation to access data elements.
```
# Access the first item in a sequence
list1[0]
# Access the last item in a sequence
tuple1[-1]
# Access a range of items in a sequence
simple_string1[3:8]
# Access a range of items in a sequence
tuple1[:-3]
# Access a range of items in a sequence
list1[4:]
# Access an item in a dictionary
dict1['name']
# Access an element of a sequence in a dictionary
dict1['fav_foods'][2]
```
## Python built-in functions and callables
A **function** is a Python object that you can "call" to **perform an action** or compute and **return another object**. You call a function by placing parentheses to the right of the function name. Some functions allow you to pass **arguments** inside the parentheses (separating multiple arguments with a comma). Internal to the function, these arguments are treated like variables.
Python has several useful built-in functions to help you work with different objects and/or your environment. Here is a small sample of them:
- **`type(obj)`** to determine the type of an object
- **`len(container)`** to determine how many items are in a container
- **`callable(obj)`** to determine if an object is callable
- **`sorted(container)`** to return a new list from a container, with the items sorted
- **`sum(container)`** to compute the sum of a container of numbers
- **`min(container)`** to determine the smallest item in a container
- **`max(container)`** to determine the largest item in a container
- **`abs(number)`** to determine the absolute value of a number
- **`repr(obj)`** to return a string representation of an object
> Complete list of built-in functions: https://docs.python.org/3/library/functions.html
There are also different ways of defining your own functions and callable objects that we will explore later.
```
# Use the type() function to determine the type of an object
type(simple_string1)
# Use the len() function to determine how many items are in a container
len(dict1)
# Use the len() function to determine how many items are in a container
len(simple_string2)
# Use the callable() function to determine if an object is callable
callable(len)
# Use the callable() function to determine if an object is callable
callable(dict1)
# Use the sorted() function to return a new list from a container, with the items sorted
sorted([10, 1, 3.6, 7, 5, 2, -3])
# Use the sorted() function to return a new list from a container, with the items sorted
# - notice that capitalized strings come first
sorted(['dogs', 'cats', 'zebras', 'Chicago', 'California', 'ants', 'mice'])
# Use the sum() function to compute the sum of a container of numbers
sum([10, 1, 3.6, 7, 5, 2, -3])
# Use the min() function to determine the smallest item in a container
min([10, 1, 3.6, 7, 5, 2, -3])
# Use the min() function to determine the smallest item in a container
min(['g', 'z', 'a', 'y'])
# Use the max() function to determine the largest item in a container
max([10, 1, 3.6, 7, 5, 2, -3])
# Use the max() function to determine the largest item in a container
max('gibberish')
# Use the abs() function to determine the absolute value of a number
abs(10)
# Use the abs() function to determine the absolute value of a number
abs(-12)
# Use the repr() function to return a string representation of an object
repr(set1)
```
## Python object attributes (methods and properties)
Different types of objects in Python have different **attributes** that can be referred to by name (similar to a variable). To access an attribute of an object, use a dot (`.`) after the object, then specify the attribute (i.e. `obj.attribute`)
When an attribute of an object is a callable, that attribute is called a **method**. It is the same as a function, only this function is bound to a particular object.
When an attribute of an object is not a callable, that attribute is called a **property**. It is just a piece of data about the object, that is itself another object.
The built-in `dir()` function can be used to return a list of an object's attributes.
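For example, here is a quick sketch using a throwaway string (the name `sample` is just for illustration):
```
# List a few of the attributes of a string object
sample = 'hello'
print(dir(sample)[:8])           # first 8 attribute names
print(callable(sample.upper))    # True: .upper is a method (a callable attribute)
print(callable(sample.upper()))  # False: the string returned by .upper() is not callable
```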
<hr>
## Some methods on string objects
- **`.capitalize()`** to return a capitalized version of the string (only first char uppercase)
- **`.upper()`** to return an uppercase version of the string (all chars uppercase)
- **`.lower()`** to return a lowercase version of the string (all chars lowercase)
- **`.count(substring)`** to return the number of occurrences of the substring in the string
- **`.startswith(substring)`** to determine if the string starts with the substring
- **`.endswith(substring)`** to determine if the string ends with the substring
- **`.replace(old, new)`** to return a copy of the string with occurrences of the "old" replaced by "new"
```
# Assign a string to a variable
a_string = 'tHis is a sTriNg'
# Return a capitalized version of the string
a_string.capitalize()
# Return an uppercase version of the string
a_string.upper()
# Return a lowercase version of the string
a_string.lower()
# Notice that the methods called have not actually modified the string
a_string
# Count number of occurrences of a substring in the string
a_string.count('i')
# Count number of occurrences of a substring in the string after a certain position
a_string.count('i', 7)
# Count number of occurrences of a substring in the string
a_string.count('is')
# Does the string start with 'this'?
a_string.startswith('this')
# Does the lowercase string start with 'this'?
a_string.lower().startswith('this')
# Does the string end with 'Ng'?
a_string.endswith('Ng')
# Return a version of the string with a substring replaced with something else
a_string.replace('is', 'XYZ')
# Return a version of the string with a substring replaced with something else
a_string.replace('i', '!')
# Return a version of the string with the first 2 occurrences of a substring replaced with something else
a_string.replace('i', '!', 2)
```
## Some methods on list objects
- **`.append(item)`** to add a single item to the list
- **`.extend([item1, item2, ...])`** to add multiple items to the list
- **`.remove(item)`** to remove a single item from the list
- **`.pop()`** to remove and return the item at the end of the list
- **`.pop(index)`** to remove and return an item at an index
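For example, a short sketch of these list methods in action (the `fruits` list is made up for illustration):
```
fruits = ['apple', 'banana']
fruits.append('cherry')                # add a single item
fruits.extend(['date', 'elderberry'])  # add multiple items
fruits.remove('banana')                # remove a single item
last = fruits.pop()                    # remove and return the last item ('elderberry')
second = fruits.pop(1)                 # remove and return the item at index 1 ('cherry')
print(fruits, last, second)
```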
## Some methods on set objects
- **`.add(item)`** to add a single item to the set
- **`.update([item1, item2, ...])`** to add multiple items to the set
- **`.update(set2, set3, ...)`** to add items from all provided sets to the set
- **`.remove(item)`** to remove a single item from the set
- **`.pop()`** to remove and return an arbitrary item from the set
- **`.difference(set2)`** to return items in the set that are not in another set
- **`.intersection(set2)`** to return items in both sets
- **`.union(set2)`** to return items that are in either set
- **`.symmetric_difference(set2)`** to return items that are only in one set (not both)
- **`.issuperset(set2)`** does the set contain everything in the other set?
- **`.issubset(set2)`** is the set contained in the other set?
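A small sketch of a few of these set methods (the sets here are made up for illustration):
```
set_a = {1, 2, 3}
set_b = {3, 4, 5}
set_a.add(6)                              # add a single item
set_a.update([7, 8])                      # add multiple items
print(set_a.intersection(set_b))          # items in both sets
print(set_a.union(set_b))                 # items in either set
print(set_a.difference(set_b))            # items in set_a that are not in set_b
print(set_a.symmetric_difference(set_b))  # items in exactly one of the sets
print({1, 2}.issubset(set_a))             # is {1, 2} contained in set_a?
```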
## Some methods on dict objects
- **`.update([(key1, val1), (key2, val2), ...])`** to add multiple key-value pairs to the dict
- **`.update(dict2)`** to add all keys and values from another dict to the dict
- **`.pop(key)`** to remove key and return its value from the dict (error if key not found)
- **`.pop(key, default_val)`** to remove key and return its value from the dict (or return default_val if key not found)
- **`.get(key)`** to return the value at a specified key in the dict (or None if key not found)
- **`.get(key, default_val)`** to return the value at a specified key in the dict (or default_val if key not found)
- **`.keys()`** to return a view of the keys in the dict
- **`.values()`** to return a view of the values in the dict
- **`.items()`** to return a view of the key-value pairs (tuples) in the dict
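A small sketch of a few of these dict methods (the `person` dict is made up for illustration):
```
person = {'name': 'Jane', 'age': 23}
person.update({'city': 'Chicago', 'age': 24})  # add new pairs and overwrite 'age'
print(person.get('name'))                      # 'Jane'
print(person.get('height', 'unknown'))         # default value when the key is missing
age = person.pop('age')                        # remove 'age' and return its value
print(age, person)
print(list(person.keys()), list(person.values()), list(person.items()))
```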
## Positional arguments and keyword arguments to callables
You can call a function/method in a number of different ways:
- `func()`: Call `func` with no arguments
- `func(arg)`: Call `func` with one positional argument
- `func(arg1, arg2)`: Call `func` with two positional arguments
- `func(arg1, arg2, ..., argn)`: Call `func` with many positional arguments
- `func(kwarg=value)`: Call `func` with one keyword argument
- `func(kwarg1=value1, kwarg2=value2)`: Call `func` with two keyword arguments
- `func(kwarg1=value1, kwarg2=value2, ..., kwargn=valuen)`: Call `func` with many keyword arguments
- `func(arg1, arg2, kwarg1=value1, kwarg2=value2)`: Call `func` with positional arguments and keyword arguments
- `obj.method()`: Methods are called in all the same ways as the `func` examples above
When using **positional arguments**, you must provide them in the order that the function defined them (the function's **signature**).
When using **keyword arguments**, you can provide the arguments you want, in any order you want, as long as you specify each argument's name.
When using positional and keyword arguments, positional arguments must come first.
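For example, the built-in `sorted()` function accepts a positional argument (the container) as well as keyword arguments:
```
nums = [10, 1, 3.6, 7, 5, 2, -3]
print(sorted(nums))                            # one positional argument
print(sorted(nums, reverse=True))              # positional + keyword argument
print(sorted(['b', 'A', 'c'], key=str.lower))  # keyword argument controlling the sort order
```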
## Formatting strings and using placeholders
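Python provides a few ways to insert values into string placeholders; here is a brief sketch:
```
name = 'Jane'
age = 23
print('My name is {} and I am {} years old'.format(name, age))        # positional placeholders
print('My name is {n} and I am {a} years old'.format(n=name, a=age))  # named placeholders
print(f'My name is {name} and I am {age} years old')                  # f-string (Python 3.6+)
```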
## Python "for loops"
It is easy to **iterate** over a collection of items using a **for loop**. The strings, lists, tuples, sets, and dictionaries we defined are all **iterable** containers.
The for loop will go through the specified container, one item at a time, and provide a temporary variable for the current item. You can use this temporary variable like a normal variable.
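For example:
```
# Iterate over a list
for item in ['dog', 'cat', 'fish']:
    print(item)

# Iterate over the key-value pairs in a dict
for key, value in {'name': 'Jane', 'age': 23}.items():
    print(key, '->', value)
```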
## Python "if statements" and "while loops"
Conditional expressions can be used with these two **conditional statements**.
The **if statement** allows you to test a condition and perform some actions if the condition evaluates to `True`. You can also provide `elif` and/or `else` clauses to an if statement to take alternative actions if the condition evaluates to `False`.
The **while loop** will keep looping until its conditional expression evaluates to `False`.
> Note: It is possible to "loop forever" when using a while loop with a conditional expression that never evaluates to `False`.
>
> Note: Since the **for loop** will iterate over a container of items until there are no more, there is no need to specify a "stop looping" condition.
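For example (the numbers here are made up for illustration):
```
# if / elif / else
number = 7
if number > 10:
    print('big')
elif number > 5:
    print('medium')
else:
    print('small')

# while loop: keep doubling until we pass 100
value = 1
while value <= 100:
    value *= 2
print(value)  # 128
```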
## List, set, and dict comprehensions
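Comprehensions provide a compact way to build lists, sets, and dicts from other iterables; a quick sketch:
```
numbers = [1, 2, 3, 4, 5]
squares = [n ** 2 for n in numbers]                     # list comprehension
even_squares = {n ** 2 for n in numbers if n % 2 == 0}  # set comprehension with a condition
square_map = {n: n ** 2 for n in numbers}               # dict comprehension
print(squares, even_squares, square_map)
```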
## Creating objects from arguments or other objects
The basic types and containers we have used so far all provide **type constructors**:
- `int()`
- `float()`
- `str()`
- `list()`
- `tuple()`
- `set()`
- `dict()`
Up to this point, we have been defining objects of these built-in types using some syntactic shortcuts, since they are so common.
Sometimes, you will have an object of one type that you need to convert to another type. Use the **type constructor** for the type of object you want to have, and pass in the object you currently have.
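For example:
```
print(int('10'))          # string to int
print(float(7))           # int to float
print(str(3.14))          # float to string
print(list('hello'))      # string to list of characters
print(tuple([1, 2, 3]))   # list to tuple
print(set([3, 5, 6, 3]))  # list to set (duplicates removed)
print(dict([('name', 'Jane'), ('age', 23)]))  # list of key-value pairs to dict
```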
## Importing modules
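Modules from the standard library (or from installed packages) are brought into your namespace with `import`; a quick sketch:
```
import math                  # import a whole module
from datetime import date    # import a single name from a module
import os.path as osp        # import with an alias

print(math.sqrt(16))
print(date.today())
print(osp.join('folder', 'file.txt'))
```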
## Exceptions
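Errors raised at runtime are **exceptions**, and they can be caught with `try`/`except`; a quick sketch:
```
try:
    result = 10 / 0
except ZeroDivisionError as e:
    print('Caught an exception:', e)
finally:
    print('This runs whether or not an exception occurred')
```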
## Classes: Creating your own objects
```
# Define a new class called `Thing` that is derived from the base Python object
class Thing(object):
my_property = 'I am a "Thing"'
# Define a new class called `DictThing` that is derived from the `dict` type
class DictThing(dict):
my_property = 'I am a "DictThing"'
print(Thing)
print(type(Thing))
print(DictThing)
print(type(DictThing))
print(issubclass(DictThing, dict))
print(issubclass(DictThing, object))
# Create "instances" of our new classes
t = Thing()
d = DictThing()
print(t)
print(type(t))
print(d)
print(type(d))
# Interact with a DictThing instance just as you would a normal dictionary
d['name'] = 'Sally'
print(d)
d.update({
'age': 13,
'fav_foods': ['pizza', 'sushi', 'pad thai', 'waffles'],
'fav_color': 'green',
})
print(d)
print(d.my_property)
```
## Defining functions and methods
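You can define your own functions with the `def` keyword; a quick sketch:
```
def add_numbers(a, b=0):
    """Return the sum of a and b (b defaults to 0)."""
    return a + b

print(add_numbers(3, 4))
print(add_numbers(10))
```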
## Creating an initializer method for your classes
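The `__init__` method runs when a new instance of a class is created; a quick sketch (the `Person` class is made up for illustration):
```
class Person:
    def __init__(self, name, age):
        # store the arguments as properties of the new instance
        self.name = name
        self.age = age

p = Person('Jane', 23)
print(p.name, p.age)
```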
## Other "magic methods"
## Context managers and the "with statement"
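The **with statement** uses a context manager to make sure cleanup happens even if an error occurs; a quick sketch (the filename is made up for illustration):
```
# The file is automatically closed when the with-block ends.
with open('example.txt', 'w') as f:
    f.write('hello from a context manager\n')

with open('example.txt') as f:
    print(f.read())
```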
# Overview
### `clean_us_data.ipynb`: Fix data inconsistencies in the raw time series data from [`etl_us_data.ipynb`](./etl_us_data.ipynb).
Inputs:
* `outputs/us_counties.csv`: Raw county-level time series data for the United States, produced by running [etl_us_data.ipynb](./etl_us_data.ipynb)
* `outputs/us_counties_meta.json`: Column type metadata for reading `data/us_counties.csv` with `pd.read_csv()`
Outputs:
* `outputs/us_counties_clean.csv`: The contents of `outputs/us_counties.csv` after data cleaning
* `outputs/us_counties_clean_meta.json`: Column type metadata for reading `data/us_counties_clean.csv` with `pd.read_csv()`
* `outputs/us_counties_clean.feather`: Binary version of `us_counties_clean.csv`, in [Feather](https://arrow.apache.org/docs/python/feather.html) format.
* `outputs/dates.feather`: Dates associated with points in time series, in [Feather](https://arrow.apache.org/docs/python/feather.html) format.
**Note:** You can redirect these input and output files by setting the environment variables `COVID_INPUTS_DIR` and `COVID_OUTPUTS_DIR` to replacement values for the prefixes `inputs` and `outputs`, respectively, in the above paths.
# Read and reformat the raw data
```
# Initialization boilerplate
import os
import json
import pandas as pd
import numpy as np
import scipy.optimize
import sklearn.metrics
import matplotlib.pyplot as plt
from typing import *
import text_extensions_for_pandas as tp
# Local file of utility functions
import util
# Allow environment variables to override data file locations.
_INPUTS_DIR = os.getenv("COVID_INPUTS_DIR", "inputs")
_OUTPUTS_DIR = os.getenv("COVID_OUTPUTS_DIR", "outputs")
util.ensure_dir_exists(_OUTPUTS_DIR) # create if necessary
```
## Read the CSV file from `etl_us_data.ipynb` and apply the saved type information
```
csv_file = os.path.join(_OUTPUTS_DIR, "us_counties.csv")
meta_file = os.path.join(_OUTPUTS_DIR, "us_counties_meta.json")
# Read column type metadata
with open(meta_file) as f:
cases_meta = json.load(f)
# Pandas does not currently support parsing datetime64 from CSV files.
# As a workaround, read the "Date" column as objects and manually
# convert after.
cases_meta["Date"] = "object"
cases_raw = pd.read_csv(csv_file, dtype=cases_meta, parse_dates=["Date"])
# Restore the Pandas index
cases_vertical = cases_raw.set_index(["FIPS", "Date"], verify_integrity=True)
cases_vertical
```
## Replace missing values in the secondary datasets with zeros
```
for colname in ("Confirmed_NYT", "Deaths_NYT", "Confirmed_USAFacts", "Deaths_USAFacts"):
cases_vertical[colname].fillna(0, inplace=True)
cases_vertical[colname] = cases_vertical[colname].astype("int64")
cases_vertical
```
## Collapse each time series down to a single cell
This kind of time series data is easier to manipulate at the macroscopic level if each time series occupies a
single cell of the DataFrame. We use the [TensorArray](https://text-extensions-for-pandas.readthedocs.io/en/latest/#text_extensions_for_pandas.TensorArray) Pandas extension type from [Text Extensions for Pandas](https://github.com/CODAIT/text-extensions-for-pandas).
```
cases, dates = util.collapse_time_series(cases_vertical, ["Confirmed", "Deaths", "Recovered",
"Confirmed_NYT", "Deaths_NYT",
"Confirmed_USAFacts", "Deaths_USAFacts"])
cases
# Note that the previous cell also saved the values from the "Date"
# column of `cases_vertical` into the Python variable `dates`:
dates[:10], dates.shape
# Print out the time series for the Bronx as a sanity check
bronx_fips = 36005
cases.loc[bronx_fips]["Confirmed"]
```
# Correct for missing data for today in USAFacts data
The USAFacts database only receives the previous day's updates late in the day,
so it's often missing the last value. Substitute the previous day's value if
that is the case.
```
# Last 10 days of the time series for the Bronx before this change
cases.loc[bronx_fips]["Deaths_USAFacts"].to_numpy()[-10:]
# last element <-- max(last element, second to last)
new_confirmed = cases["Confirmed_USAFacts"].to_numpy().copy()
new_confirmed[:, -1] = np.maximum(new_confirmed[:, -1], new_confirmed[:, -2])
cases["Confirmed_USAFacts"] = tp.TensorArray(new_confirmed)
new_deaths = cases["Deaths_USAFacts"].to_numpy().copy()
new_deaths[:, -1] = np.maximum(new_deaths[:, -1], new_deaths[:, -2])
cases["Deaths_USAFacts"] = tp.TensorArray(new_deaths)
# Last 10 days of the time series for the Bronx after this change
cases.loc[bronx_fips]["Deaths_USAFacts"].to_numpy()[-10:]
```
# Validate the New York City confirmed cases data
Older versions of the Johns Hopkins data coded all of New York city as being
in New York County. Each borough is actually in a different county
with a different FIPS code.
Verify that this problem hasn't recurred.
```
max_bronx_confirmed = np.max(cases.loc[36005]["Confirmed"])
if max_bronx_confirmed == 0:
raise ValueError(f"Time series for the Bronx is all zeros again:\n{cases.loc[36005]['Confirmed']}")
max_bronx_confirmed
```
Also plot the New York City confirmed cases time series to allow for manual validation.
```
new_york_county_fips = 36061
nyc_fips = [
36005, # Bronx County
36047, # Kings County
new_york_county_fips, # New York County
36081, # Queens County
36085, # Richmond County
]
util.graph_examples(cases.loc[nyc_fips], "Confirmed", {}, num_to_pick=5)
```
## Adjust New York City deaths data
Plot deaths for New York City in the Johns Hopkins data set. The jump in June is due to a change in reporting.
```
util.graph_examples(cases.loc[nyc_fips], "Deaths", {}, num_to_pick=5)
```
New York Times version of the time series for deaths in New York city:
```
util.graph_examples(cases.loc[nyc_fips], "Deaths_NYT", {}, num_to_pick=5)
```
USAFacts version of the time series for deaths in New York city:
```
util.graph_examples(cases.loc[nyc_fips], "Deaths_USAFacts", {}, num_to_pick=5)
```
Currently the USAFacts version is cleanest, so we use that one.
```
new_deaths = cases["Deaths"].copy(deep=True)
for fips in nyc_fips:
new_deaths.loc[fips] = cases["Deaths_USAFacts"].loc[fips]
cases["Deaths"] = new_deaths
print("After:")
util.graph_examples(cases.loc[nyc_fips], "Deaths", {}, num_to_pick=5)
```
# Clean up the Rhode Island data
The Johns Hopkins data reports zero deaths in most of Rhode Island. Use
the secondary data set from the New York Times for Rhode Island.
```
print("Before:")
util.graph_examples(cases, "Deaths", {}, num_to_pick=8,
mask=(cases["State"] == "Rhode Island"))
# Use our secondary data set for all Rhode Island data.
ri_fips = cases[cases["State"] == "Rhode Island"].index.values.tolist()
for colname in ["Confirmed", "Deaths"]:
new_series = cases[colname].copy(deep=True)
for fips in ri_fips:
new_series.loc[fips] = cases[colname + "_NYT"].loc[fips]
cases[colname] = new_series
# Note that the secondary data set has no "Recovered" time series, so
# we leave those numbers alone for now.
print("After:")
util.graph_examples(cases, "Deaths", {}, num_to_pick=8,
mask=(cases["State"] == "Rhode Island"))
```
# Clean up the Utah data
The Johns Hopkins data for Utah is missing quite a few data points.
Use the New York Times data for Utah.
```
print("Before:")
util.graph_examples(cases, "Confirmed", {}, num_to_pick=8,
mask=(cases["State"] == "Utah"))
# The Utah time series from the New York Times' data set are more
# complete, so we use those numbers.
ut_fips = cases[cases["State"] == "Utah"].index.values
for colname in ["Confirmed", "Deaths"]:
new_series = cases[colname].copy(deep=True)
for fips in ut_fips:
new_series.loc[fips] = cases[colname + "_NYT"].loc[fips]
cases[colname] = new_series
# Note that the secondary data set has no "Recovered" time series, so
# we leave those numbers alone for now.
print("After:")
util.graph_examples(cases, "Confirmed", {}, num_to_pick=8,
mask=(cases["State"] == "Utah"))
```
# Flag additional problematic and missing data points
Use heuristics to identify and flag problematic data points across all
the time series. Generate Boolean masks that show the locations of these
outliers.
```
# Now we're done with the secondary data set, so drop its columns.
cases = cases.drop(columns=["Confirmed_NYT", "Deaths_NYT", "Confirmed_USAFacts", "Deaths_USAFacts"])
cases
# Now we need to find and flag obvious data-entry errors.
# We'll start by creating columns of "is outlier" masks.
# We use integers instead of Boolean values as a workaround for
# https://github.com/pandas-dev/pandas/issues/33770
# Start out with everything initialized to "not an outlier"
cases["Confirmed_Outlier"] = tp.TensorArray(np.zeros_like(cases["Confirmed"].values))
cases["Deaths_Outlier"] = tp.TensorArray(np.zeros_like(cases["Deaths"].values))
cases["Recovered_Outlier"] = tp.TensorArray(np.zeros_like(cases["Recovered"].values))
cases
```
## Flag time series that go from zero to nonzero and back again
One type of anomaly that occurs fairly often involves a time series
jumping from zero to a nonzero value, then back to zero again.
Locate all instances of that pattern and mark the nonzero values
as outliers.
```
def nonzero_then_zero(series: np.array):
empty_mask = np.zeros_like(series, dtype=np.int8)
if series[0] > 0:
# Special case: first value is nonzero
return empty_mask
first_nonzero_offset = 0
while first_nonzero_offset < len(series):
if series[first_nonzero_offset] > 0:
# Found the first nonzero.
# Find the distance to the next zero value.
next_zero_offset = first_nonzero_offset + 1
while (next_zero_offset < len(series)
and series[next_zero_offset] > 0):
next_zero_offset += 1
# Check the length of the run of zeros after
# dropping back to zero.
second_nonzero_offset = next_zero_offset + 1
while (second_nonzero_offset < len(series)
and series[second_nonzero_offset] == 0):
second_nonzero_offset += 1
nonzero_run_len = next_zero_offset - first_nonzero_offset
second_zero_run_len = second_nonzero_offset - next_zero_offset
# print(f"{first_nonzero_offset} -> {next_zero_offset} -> {second_nonzero_offset}; series len {len(series)}")
if next_zero_offset >= len(series):
# Everything after the first nonzero was a nonzero
return empty_mask
elif second_zero_run_len <= nonzero_run_len:
# Series dropped back to zero, but the second zero
# part was shorter than the nonzero section.
# In this case, it's more likely that the second run
# of zero values are actually missing values.
return empty_mask
else:
# Series went zero -> nonzero -> zero -> nonzero
# or zero -> nonzero -> zero -> [end]
nonzero_run_mask = empty_mask.copy()
nonzero_run_mask[first_nonzero_offset:next_zero_offset] = 1
return nonzero_run_mask
first_nonzero_offset += 1
# If we get here, the series was all zeros
return empty_mask
for colname in ["Confirmed", "Deaths", "Recovered"]:
addl_outliers = np.stack([nonzero_then_zero(s.to_numpy()) for s in cases[colname]])
outliers_colname = colname + "_Outlier"
new_outliers = cases[outliers_colname].values.astype(np.bool) | addl_outliers
cases[outliers_colname] = tp.TensorArray(new_outliers.astype(np.int8))
# fips = 13297
# print(cases.loc[fips]["Confirmed"])
# print(nonzero_then_zero(cases.loc[fips]["Confirmed"]))
# Let's have a look at which time series acquired the most outliers as
# a result of the code in the previous cell.
df = cases[["State", "County"]].copy()
df["Confirmed_Num_Outliers"] = np.count_nonzero(cases["Confirmed_Outlier"], axis=1)
counties_with_outliers = df.sort_values("Confirmed_Num_Outliers", ascending=False).head(10)
counties_with_outliers
# Plot the counties in the table above, with outliers highlighted.
# The graph_examples() function is defined in util.py.
util.graph_examples(cases, "Confirmed", {}, num_to_pick=10, mask=(cases.index.isin(counties_with_outliers.index)))
```
## Flag time series that drop to zero, then go back up
Another type of anomaly involves the time series dropping down to
zero, then going up again. Since all three time series are supposed
to be cumulative counts, this pattern most likely indicates missing
data.
To correct for this problem, we mark any zero values after the
first nonzero, non-outlier values as outliers, across all time series.
```
def zeros_after_first_nonzero(series: np.array, outliers: np.array):
nonzero_mask = (series != 0)
nonzero_and_not_outlier = nonzero_mask & (~outliers)
first_nonzero = np.argmax(nonzero_and_not_outlier)
if 0 == first_nonzero and series[0] == 0:
# np.argmax(nonzero_mask) will return 0 if there are no nonzeros
return np.zeros_like(series)
after_nonzero_mask = np.zeros_like(series)
after_nonzero_mask[first_nonzero:] = True
return (~nonzero_mask) & after_nonzero_mask
for colname in ["Confirmed", "Deaths", "Recovered"]:
outliers_colname = colname + "_Outlier"
addl_outliers = np.stack([zeros_after_first_nonzero(s.to_numpy(), o.to_numpy())
for s, o in zip(cases[colname], cases[outliers_colname])])
new_outliers = cases[outliers_colname].values.astype(np.bool) | addl_outliers
cases[outliers_colname] = tp.TensorArray(new_outliers.astype(np.int8))
# fips = 47039
# print(cases.loc[fips]["Confirmed"])
# print(cases.loc[fips]["Confirmed_Outlier"])
# print(zeros_after_first_nonzero(cases.loc[fips]["Confirmed"], cases.loc[fips]["Confirmed_Outlier"]))
# Redo our "top 10 by number of outliers" analysis with the additional outliers
df = cases[["State", "County"]].copy()
df["Confirmed_Num_Outliers"] = np.count_nonzero(cases["Confirmed_Outlier"], axis=1)
counties_with_outliers = df.sort_values("Confirmed_Num_Outliers", ascending=False).head(10)
counties_with_outliers
util.graph_examples(cases, "Confirmed", {}, num_to_pick=10, mask=(cases.index.isin(counties_with_outliers.index)))
# The steps we've just done have removed quite a few questionable
# data points, but you will definitely want to flag additional
# outliers by hand before trusting descriptive statistics about
# any county.
# TODO: Incorporate manual whitelists and blacklists of outliers
# into this notebook.
```
# Precompute totals for the last 7 days
Several of the notebooks downstream of this one need the number of cases and deaths
for the last 7 days, so we compute those values here for convenience.
```
def last_week_results(s: pd.Series):
arr = s.to_numpy()
today = arr[:,-1]
week_ago = arr[:,-8]
return today - week_ago
cases["Confirmed_7_Days"] = last_week_results(cases["Confirmed"])
cases["Deaths_7_Days"] = last_week_results(cases["Deaths"])
cases.head()
```
# Write out cleaned time series data
By default, output files go to the `outputs` directory. You can use the `COVID_OUTPUTS_DIR` environment variable to override that location.
## CSV output
Comma-separated value (CSV) files are a portable text-based format supported by a wide variety
of different tools. The CSV format does not include type information, so we write a second
file of schema data in JSON format.
```
# Break out our time series into multiple rows again for writing to disk.
cleaned_cases_vertical = util.explode_time_series(cases, dates)
cleaned_cases_vertical
# The outlier masks are stored as integers as a workaround for a Pandas
# bug. Convert them to Boolean values for writing to disk.
cleaned_cases_vertical["Confirmed_Outlier"] = cleaned_cases_vertical["Confirmed_Outlier"].astype(np.bool)
cleaned_cases_vertical["Deaths_Outlier"] = cleaned_cases_vertical["Deaths_Outlier"].astype(np.bool)
cleaned_cases_vertical["Recovered_Outlier"] = cleaned_cases_vertical["Recovered_Outlier"].astype(np.bool)
cleaned_cases_vertical
# Write out the results to a CSV file plus a JSON file of type metadata.
cleaned_cases_vertical_csv_data_file = os.path.join(_OUTPUTS_DIR,"us_counties_clean.csv")
print(f"Writing cleaned data to {cleaned_cases_vertical_csv_data_file}")
cleaned_cases_vertical.to_csv(cleaned_cases_vertical_csv_data_file, index=True)
col_type_mapping = {
key: str(value) for key, value in cleaned_cases_vertical.dtypes.iteritems()
}
cleaned_cases_vertical_json_data_file = os.path.join(_OUTPUTS_DIR,"us_counties_clean_meta.json")
print(f"Writing metadata to {cleaned_cases_vertical_json_data_file}")
with open(cleaned_cases_vertical_json_data_file, "w") as f:
json.dump(col_type_mapping, f)
```
## Feather output
The [Feather](https://arrow.apache.org/docs/python/feather.html) file format supports
fast binary I/O over any data that can be represented using [Apache Arrow](https://arrow.apache.org/).
Feather files also include schema and type information.
```
# Also write out the nested data in Feather format so that downstream
# notebooks don't have to re-nest it.
# No Feather serialization support for Pandas indices currently, so convert
# the index on FIPS code to a normal column
cases_for_feather = cases.reset_index()
cases_for_feather.head()
# Write to Feather and make sure that reading back works too.
# Also write dates that go with the time series
dates_file = os.path.join(_OUTPUTS_DIR, "dates.feather")
cases_file = os.path.join(_OUTPUTS_DIR, "us_counties_clean.feather")
pd.DataFrame({"date": dates}).to_feather(dates_file)
cases_for_feather.to_feather(cases_file)
pd.read_feather(cases_file).head()
# Also make sure the dates can be read back in from a binary file
pd.read_feather(dates_file).head()
```